I've seen this on the release notes for 1.9.1:
- (bug 8673) Minor fix for web service API content-type header
What is the web service API? I couldn't find information about it on Google and on MediaWiki.
On 1/24/07, Fernando Correia fernandoacorreia@gmail.com wrote:
I've seen this on the release notes for 1.9.1:
- (bug 8673) Minor fix for web service API content-type header
What is the web service API? I couldn't find information about it on Google and on MediaWiki
http://en.wikipedia.org/w/api.php
HTH, Mathias
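(A quick way to see what that API offers: requesting api.php with no parameters returns its self-generated help page, listing the available actions and output formats. A minimal illustration, assuming PHP with allow_url_fopen enabled; the query parameters below are examples to verify against that help output for your MediaWiki version:)

    <?php
    // Fetch the API's self-documentation; api.php with no parameters prints help.
    echo file_get_contents( 'http://en.wikipedia.org/w/api.php' );

    // Illustrative query (assumed parameters -- check the help page):
    // ask for general site information as XML.
    echo file_get_contents(
        'http://en.wikipedia.org/w/api.php?action=query&meta=siteinfo&format=xml'
    );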
Thanks!
2007/1/24, Mathias Schindler mathias.schindler@gmail.com:
On 1/24/07, Fernando Correia fernandoacorreia@gmail.com wrote:
I've seen this on the release notes for 1.9.1:
- (bug 8673) Minor fix for web service API content-type header
What is the web service API? I couldn't find information about it on Google and on MediaWiki.
http://en.wikipedia.org/w/api.php
HTH, Mathias
Where is the information about the details of the MediaWiki web service API? Quoting Fernando Correia fernandoacorreia@gmail.com:
Thanks!
2007/1/24, Mathias Schindler mathias.schindler@gmail.com:
On 1/24/07, Fernando Correia fernandoacorreia@gmail.com wrote:
I've seen this on the release notes for 1.9.1:
- (bug 8673) Minor fix for web service API content-type header
What is the web service API? I couldn't find information about it on Google and on MediaWiki.
http://en.wikipedia.org/w/api.php
HTH, Mathias
On 24/01/07, assafkat@post.tau.ac.il assafkat@post.tau.ac.il wrote:
Where is the information about the details of the MediaWiki web service API?
Same API, same calling procedure, same documentation.
Rob Church
I hadn't heard about the web service before this thread...
Quoting Rob Church robchur@gmail.com:
On 24/01/07, assafkat@post.tau.ac.il assafkat@post.tau.ac.il wrote:
Where is the information about the details of the MediaWiki web service API?
Same API, same calling procedure, same documentation.
Rob Church
Have you considered a REST API? It would be cleaner, I think, and REST APIs take caching of pages into consideration (important for performance).
Mathias Schindler wrote:
On 1/24/07, Fernando Correia fernandoacorreia@gmail.com wrote:
I've seen this on the release notes for 1.9.1:
- (bug 8673) Minor fix for web service API content-type header
What is the web service API? I couldn't find information about it on Google and on MediaWiki
http://en.wikipedia.org/w/api.php
HTH, Mathias
Ittay Dror wrote:
Have you considered a REST API? It would be cleaner, I think, and REST APIs take caching of pages into consideration (important for performance).
Can you explain for the gallery what's non-RESTy about the current draft api?
-- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
Well, it's RPC-oriented: 'action' is used to denote what is being done, and server-side changes will be done via GET. This means there's no clear partitioning of data, and no clear separation between actions that change the server-side state and those that just retrieve it (which can then be cached). Also, in REST, the returned information can contain URLs to other pieces of information that can be navigated naturally to retrieve that information.
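(To make the contrast concrete with purely hypothetical URLs, not the actual draft API: an RPC-style call puts the verb in a parameter, e.g. GET /w/api.php?action=delete&title=Some_page, so a state change can ride on a GET; a REST-style design would expose the page as a resource, e.g. /api/pages/Some_page, read with GET and changed with POST/PUT/DELETE, so caches and proxies can tell safe reads from state-changing writes.)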
Brion Vibber wrote:
Ittay Dror wrote:
Have you considered a REST API? It would be cleaner, I think, and REST APIs take caching of pages into consideration (important for performance).
Can you explain for the gallery what's non-RESTy about the current draft api?
-- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
Is there a way that I can run one wiki site on multiple servers for load balancing?
I have searched everywhere I could think of but I am unable to find any documentation on how this would be accomplished.
Thanks,
Russ
Russ Lavoie wrote:
Is there a way that I can run one wiki site on multiple servers for load balancing?
I have searched everywhere I could think of but I am unable to find any documentation on how this would be accomplished.
Have all of your apache configuration, document root and MediaWiki script files in a directory on the local hard drives of the application servers, synchronised on demand with a master copy using rsync. Put the image directory on a network filesystem such as NFS. You can use any HTTP load balancer in front: squid, pound, LVS, etc.
It's the same as setting up multiple servers for any LAMP application, so maybe you're being a bit too specific in your search for documentation.
-- Tim Starling
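(For illustration, the image-directory part of that setup maps onto LocalSettings.php roughly as below; the mount point and URL path are placeholder assumptions:)

    <?php
    // LocalSettings.php fragment -- adjust paths to your environment.
    // The upload directory sits on an NFS mount shared by every application server,
    // while the MediaWiki code itself lives on each server's local disk (rsynced).
    $wgUploadDirectory = '/mnt/nfs/wiki/images';   // hypothetical NFS mount point
    $wgUploadPath      = '/images';                // URL path each Apache serves it under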
Russ Lavoie wrote:
Is there a way that I can run one wiki site on multiple servers for load balancing?
I have searched everywhere I could think of but I am unable to find any documentation on how this would be accomplished.
Have all of your apache configuration, document root and MediaWiki script files in a directory on the local hard drives of the application
servers, synchronised on demand with a master copy using rsync. Put the image directory on a network filesystem such as NFS. You can use any HTTP load balancer in front: squid, pound, LVS, etc.
It's the same as setting up multiple servers for any LAMP application, so maybe you're being a bit too specific in your search for
documentation.
-- Tim Starling
How do you synchronize the MySQL? Or are you using a single remote MySQL server that all servers use? -Jim
Sullivan, James (NIH/CIT) [C] wrote:
How do you synchronize the MySQL? Or are you using a single remote MySQL server that all servers use?
Use a single MySQL server -- that's what servers are for. ;)
If you do actually need to spread load among multiple *database backend* servers due to overload *on the backend*, then set up one or more slave servers using MySQL's replication.
See the comments in includes/DefaultSettings.php for how to configure $wgDBservers to list additional database servers to read from. (Writes will go only to the master. Do not attempt multi-master replication! MySQL ain't meant for it.)
-- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
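(A minimal sketch of such a configuration; hostnames and credentials are placeholders, and the authoritative list of array keys is in the includes/DefaultSettings.php comments mentioned above:)

    <?php
    // LocalSettings.php fragment -- one master plus two read slaves.
    // The first entry is the master; 'load' weights how often a server is
    // picked for read queries (0 keeps ordinary reads off the master).
    $wgDBservers = array(
        array( 'host' => 'db-master.example.com', 'dbname' => 'wikidb',
               'user' => 'wikiuser', 'password' => 'secret',
               'type' => 'mysql', 'load' => 0 ),
        array( 'host' => 'db-slave1.example.com', 'dbname' => 'wikidb',
               'user' => 'wikiuser', 'password' => 'secret',
               'type' => 'mysql', 'load' => 1 ),
        array( 'host' => 'db-slave2.example.com', 'dbname' => 'wikidb',
               'user' => 'wikiuser', 'password' => 'secret',
               'type' => 'mysql', 'load' => 1 ),
    );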
Hello Brion,
Do not attempt multi-master replication! MySQL ain't meant for it.
Can you explain why it will not work? I know there is a counter issue that can be tweaked.
I have good experience with it on an internal LAMP application and thought about giving it a try on MediaWiki.
Addady
Brion Vibber wrote:
Sullivan, James (NIH/CIT) [C] wrote:
How do you synchronize the MySQL? Or are you using a single remote MySQL server that all servers use?
Use a single MySQL server -- that's what servers are for. ;)
If you do actually need to spread load among multiple *database backend* servers due to overload *on the backend*, then set up one or more slave servers using MySQL's replication.
See the comments in includes/DefaultSettings.php for how to configure $wgDBservers to list additional database servers to read from. (Writes will go only to the master. Do not attempt multi-master replication! MySQL ain't meant for it.)
-- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
On 24/01/07, Sullivan, James (NIH/CIT) [C] sullivan@mail.nih.gov wrote:
How do you synchronize the MySQL? Or are you using a single remote MySQL server that all servers use?
Wikimedia uses a single master and multiple slaves in a replication environment. Our biggest wiki, the English Wikipedia, has a separate master to that of the other wikis, and a separate slave cluster, I believe.
Rob Church
So what I am getting from all of these posts is to use rsync for the frontend and MySQL clustering for the backend... Is there any way to get this working with a better solution than rsync?
Thanks
Russ
-----Original Message----- From: mediawiki-l-bounces@lists.wikimedia.org [mailto:mediawiki-l-bounces@lists.wikimedia.org] On Behalf Of Rob Church Sent: Wednesday, January 24, 2007 12:06 PM To: MediaWiki announcements and site admin list Subject: Re: [Mediawiki-l] Load Balancing?
On 24/01/07, Sullivan, James (NIH/CIT) [C] sullivan@mail.nih.gov wrote:
How do you synchronize the MySQL? Or are you using a single remote MySQL server that all servers use?
Wikimedia uses a single master and multiple slaves in a replication environment. Our biggest wiki, the English Wikipedia, has a separate master to that of the other wikis, and a separate slave cluster, I believe.
Rob Church
Russ Lavoie wrote:
So what I am getting from all of these posts is to use rsync for the frontend and MySQL clustering for the backend... Is there any way to get this working with a better solution than rsync?
It might help if we knew what "better" meant.
What requirements do you have for deploying updated software that are not met by something like rsync?
-- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
Let me explain a little more: rsync has to tally up all the files in the directories you will be sharing between the two servers, which puts an unnecessary load on the system (plus it isn't always up to date on both servers). I would like the updates to be real-time. Any suggestions?
Thanks,
Russ
-----Original Message----- From: mediawiki-l-bounces@lists.wikimedia.org [mailto:mediawiki-l-bounces@lists.wikimedia.org] On Behalf Of Brion Vibber Sent: Wednesday, January 24, 2007 1:16 PM To: MediaWiki announcements and site admin list Subject: Re: [Mediawiki-l] Load Balancing?
Russ Lavoie wrote:
So what I am getting from all of these posts is to use rsync for the frontend and MySQL clustering for the backend... Is there any way to get this working with a better solution than rsync?
It might help if we knew what "better" meant.
What requirements do you have for deploying updated software that are not met by something like rsync?
-- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
Russ Lavoie wrote:
Let me explain a little more: rsync has to tally up all the files in the directories you will be sharing between the two servers, which puts an unnecessary load on the system (plus it isn't always up to date on both servers). I would like the updates to be real-time.
Why?
Are you really going to be *constantly editing the source code*?
In our experience this is relatively rare; even with our very active code development and hundreds of sites which occasionally need configuration changes we may go days between code pushes.
-- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
On 1/24/07, Brion Vibber brion@pobox.com wrote:
Russ Lavoie wrote:
Let me explain a little more: rsync has to tally up all the files in the directories you will be sharing between the two servers, which puts an unnecessary load on the system (plus it isn't always up to date on both servers). I would like the updates to be real-time.
Why?
Are you really going to be *constantly editing the source code*?
In our experience this is relatively rare; even with our very active code development and hundreds of sites which occasionally need configuration changes we may go days between code pushes.
Just to clarify in case it's unclear to Russ:
The MediaWiki CONTENT (article pages, etc.) is not hosted on the webservers; it's in the database server(s). The exception is the images, and handling those via an NFS server works fine (or worked fine for me in my testing; I don't run it in production).
The webservers only need to be synchronized if you change something in the MediaWiki configuration. Rsync or the equivalent will work fine (or CacheFS, or...).
Hi,
why don't you use an NFS share for your wiki DocumentRoot?
I've tried load balancing & MySQL clustering for use with MediaWiki...
No problem at all with MySQL 5 cluster (except for some indexes & keys that are not yet supported by the NDB type), but I don't use these tables in my wiki, so it works fine,
but I've experienced many problems with Apache load balancing (using Heartbeat + LDirectord)...
Sometimes you have to re-authenticate because the cookies used belong to one server and become invalid on the other :(
I guess we have to adapt the MediaWiki code for that...
Best regards,
Arnaud.
I haven't really got time to do many more tests right now...
On 24 Jan 07, at 23:08, Russ Lavoie wrote:
Let me explain a little more: rsync has to tally up all the files in the directories you will be sharing between the two servers, which puts an unnecessary load on the system (plus it isn't always up to date on both servers). I would like the updates to be real-time. Any suggestions?
Thanks,
Russ
-----Original Message----- From: mediawiki-l-bounces@lists.wikimedia.org [mailto:mediawiki-l-bounces@lists.wikimedia.org] On Behalf Of Brion Vibber Sent: Wednesday, January 24, 2007 1:16 PM To: MediaWiki announcements and site admin list Subject: Re: [Mediawiki-l] Load Balancing?
Russ Lavoie wrote:
So what I am getting from all of these posts is to use rsync for the frontend and MySQL clustering for the backend... Is there any way to get this working with a better solution than rsync?
It might help if we knew what "better" meant.
What requirements do you have for deploying updated software that are not met by something like rsync?
-- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
Hi!
Wikimedia uses a single master and multiple slaves in a replication environment. Our biggest wiki, the English Wikipedia, has a separate master to that of the other wikis, and a separate slave cluster, I believe.
Indeed, there's no better example of a scaled-out MediaWiki than Wikipedia. Even for a single wiki, say the English Wikipedia, we use the following distribution:
- Multiple tiers of Squids
- Multiple Apache/PHP servers, running MediaWiki
- Multiple memcached servers for object cache and various not-very-persistent storage
- Multiple Lucene hosts serving search
- Multiple clusters (groups) of external storage nodes, each consisting of a small master-slave replication system carrying a subset of texts
- Master database
- Slave databases for general DB use (can be promoted to master, if needed)
- Slave databases allocated for specific DB use (as Ariel serves watchlists)
- Slave databases extended/built for specific DB use (extended indexing on db6 to allow faster contributions access)
- Multiple hosts running the job queue (delayed tasks)
- Multiple load balancers between various components
Once you need to scale to more than the English Wikipedia, we can discuss additional possibilities for splitting some of the tasks. All the code for the above load balancing is inside MediaWiki.
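(For the pieces of that list a smaller installation is most likely to reuse, the object cache and the job queue, the MediaWiki-side configuration looks roughly like the sketch below; the addresses and paths are placeholders, and the cron approach is one common choice among several:)

    <?php
    // LocalSettings.php fragment -- illustrative only.
    // A shared memcached pool used by all Apache/PHP servers for the object cache.
    $wgMainCacheType    = CACHE_MEMCACHED;
    $wgMemCachedServers = array( '10.0.0.10:11211', '10.0.0.11:11211' );

    // Push deferred work onto the job queue and drain it from dedicated hosts
    // instead of during page views.
    $wgJobRunRate = 0;   // web requests never run jobs themselves

    # crontab entry on a job-runner host (hypothetical path):
    # * * * * *  php /var/www/wiki/maintenance/runJobs.php > /dev/null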
Cheers,
On 24/01/07, Domas Mituzas midom.lists@gmail.com wrote:
All the code for above load balancing is inside mediawiki.
Under the hood of that there tank beats a heart of...pixie dust, in some cases. ;)
Rob Church
Tim Starling wrote:
Russ Lavoie wrote:
Is there a way that I can run one wiki site on multiple servers for load balancing?
I have searched everywhere I could think of but I am unable to find any documentation on how this would be accomplished.
Have all of your apache configuration, document root and MediaWiki script files in a directory on the local hard drives of the application servers, synchronised on demand with a master copy using rsync. Put the image directory on a network filesystem such as NFS. You can use any HTTP load balancer in front: squid, pound, LVS, etc.
Two additional notes:
1) Be sure to keep the servers' clocks in sync! A standard NTP system will do.
2) PHP's session storage is used to handle login sessions. Session data must be stored in a common location unless your load balancing system is "sticky" (sending a given client always to the same server). A networked filesystem will usually do fine, or we have the 'sessions in memcached' sample implementation of storing session data in a shared memcached cloud.
-- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
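(A sketch of the 'sessions in memcached' option; the server addresses are placeholders:)

    <?php
    // LocalSettings.php fragment -- illustrative only.
    // Keep PHP session data in a memcached pool shared by every web server,
    // so a non-sticky load balancer can send each request to any backend.
    $wgMainCacheType       = CACHE_MEMCACHED;
    $wgMemCachedServers    = array( '10.0.0.10:11211', '10.0.0.11:11211' );
    $wgSessionsInMemcached = true;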