I understand how to make a wikifarm on a single server, but how does one go about setting up multiple servers for a single wiki? Do I just copy LocalSettings.php to two instances and set up some sort of recurring rsync cron job for the /image directory?
Hi all, is there a way to use the graphviz extension in templates? (MW 1.5.x)
Thanks, Marco
On 19/06/06, Marco Rota marcor@sorint.it wrote:
Is there a way to use the graphviz extension in templates? (MW 1.5.x)
Install the extension plus dependencies, then insert appropriate markup into the desired template page.
Rob Church
Thank you for the answer. The problem is that if I use parameters in a template page, it renders them as plain text.
For example, this is the template "graph": <graphviz> digraph G { {{{1}}} -> {{{2}}}; } </graphviz>
I invoke it like this: {{graph|ciao|bye}}
It shows:
1 ---> 2
not:
ciao ---> bye
Any suggestions? Thanks!
Marco
On 19/06/06, Marco Rota marcor@sorint.it wrote:
Thank you for the answer. The problem is that if I use parameters in a template page, it renders them as plain text.
Contact the writer of the extension and ask them to support wiki markup as part of the extension input?
Rob Church
Hello,
On Monday 19 June 2006 18:03, Rob Church wrote:
On 19/06/06, Marco Rota marcor@sorint.it wrote:
Thank you for the answer. The problem is that if I use parameters in a template page, it renders them as plain text.
Contact the writer of the extension and ask them to support wiki markup as part of the extension input?
That would be complicated, and probably not work for something like:
<graph output="{{{1}}}"> [ A ] -> [ B ] </graph>
anyway. So, is there (or will/can there be) a way for template parameters to be replaced properly inside extension tags or their attributes?
Best wishes,
Tels
On 19/06/06, Tels nospam-abuse@bloodgate.com wrote:
That would be complicated, and probably not work for something like:
<graph output="{{{1}}}"> [ A ] -> [ B ] </graph>
anyway. So, is there (or will/can there be) a way for template parameters to be replaced properly inside extension tags or their attributes?
Then have a selective parse done; call the individual functions for brace substitution.
Rob Church
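As a rough illustration of that "selective parse" idea, a 1.6-era tag-hook extension might try something like the sketch below. The hook wiring is standard; the call to Parser::replaceVariables() and the helper wfRenderDotGraph() are assumptions rather than a tested recipe (whether the template argument array is still in scope when the hook runs is exactly the open question in this thread):

<?php
$wgExtensionFunctions[] = 'wfGraphvizSetup';

function wfGraphvizSetup() {
    global $wgParser;
    # Register <graphviz> ... </graphviz> as an extension tag.
    $wgParser->setHook( 'graphviz', 'renderGraphviz' );
}

function renderGraphviz( $input, $argv, &$parser ) {
    # Assumption: run the parser's brace substitution over the tag body
    # first, so parameters such as {{{1}}} arrive as their actual values.
    $expanded = $parser->replaceVariables( $input );
    # Hypothetical helper: shell out to dot and return the <img> HTML.
    return wfRenderDotGraph( $expanded );
}
?>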
So, is there (or will/can there be) a way for template parameters to be replaced properly inside extension tags or their attributes?
Handling the content passed to an extension is the job of the extension itself. It might be possible for the extension handler to call the necessary parser functions to get parameters parsed properly, but I have never tried this. The answers can be found in the *scary* Parser.php file.
As the author of one of the Graphviz extensions, I am interested to hear what the MediaWiki parser experts have to say about this. Would you support extensions invoking the parser to provide variable substitution?
Greg
Hello,
On Monday 19 June 2006 18:39, Gregory Szorc wrote:
So, is there (or will/can there be) a way for template parameters to be replaced properly inside extension tags or their attributes?
Handling the content passed to an extension is the job of the extension itself. It might be possible for the extension handler to call the necessary parser functions to get parameters parsed properly, but I have never tried this. The answers can be found in the *scary* Parser.php file.
As the author of one of the Graphviz extensions, I am interested to hear what the MediaWiki parser experts have to say about this. Would you support extensions invoking the parser to provide variable substitution?
I don't think this is the right way. The parser should handle this, rather than every extension re-inventing parameter parsing and replacement; that only leads to code duplication, with all its negative side effects.
Best wishes,
Tels
Daniel J McDonald wrote:
I understand how to make a wikifarm on a single server, but how does one go about setting up multiple servers for a single wiki? Do I just copy LocalSettings.php to two instances and set up some sort of recurring rsync cron job for the /image directory?
Multiple web servers?
The upload directory needs to be shared with a network filesystem, such as NFS. Periodic rsync is insufficient, as requests may be served by different servers at different times.
Other things to keep in mind:
* Time synchronization is important. Set up NTP and ensure that all your web servers' clocks are synchronized, or you may get exciting out-of-order edits.
* PHP session storage must be shared. Either set this up on a networked filesystem or set up memcached and enable the memcached session storage option.
-- brion vibber (brion @ pobox.com)
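For concreteness, the LocalSettings.php side of those two points might look like the sketch below; the mount point and memcached host are placeholders, and this assumes memcached is already running and the NFS share is mounted on every web server:

<?php
# Uploads live on a shared NFS mount (placeholder path).
$wgUploadDirectory = '/mnt/wiki-images';

# Use memcached for the object cache and for PHP sessions.
$wgMainCacheType       = CACHE_MEMCACHED;
$wgMemCachedServers    = array( 'memcached-box:11211' );  # placeholder host
$wgSessionsInMemcached = true;
?>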
- PHP session storage must be shared. Either set this up on a networked filesystem or set up memcached and enable the memcached session storage option.
It is also possible to use PHP's session_set_save_handler function to store sessions elsewhere, say in a database table. See http://php.net/session_set_save_handler for more info.
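A skeleton of such a handler is sketched below; the sessions table (columns id, data, touched) and the already-open mysql_* connection are placeholders, and real code would need locking and error handling:

<?php
function sess_open( $path, $name ) { return true; }
function sess_close() { return true; }
function sess_read( $id ) {
    $res = mysql_query( "SELECT data FROM sessions WHERE id = '"
        . mysql_real_escape_string( $id ) . "'" );
    $row = $res ? mysql_fetch_assoc( $res ) : false;
    return $row ? $row['data'] : '';
}
function sess_write( $id, $data ) {
    mysql_query( "REPLACE INTO sessions (id, data, touched) VALUES ('"
        . mysql_real_escape_string( $id ) . "', '"
        . mysql_real_escape_string( $data ) . "', NOW())" );
    return true;
}
function sess_destroy( $id ) {
    mysql_query( "DELETE FROM sessions WHERE id = '"
        . mysql_real_escape_string( $id ) . "'" );
    return true;
}
function sess_gc( $maxlifetime ) {
    mysql_query( "DELETE FROM sessions WHERE touched < NOW() - INTERVAL "
        . (int)$maxlifetime . " SECOND" );
    return true;
}
session_set_save_handler( 'sess_open', 'sess_close', 'sess_read',
    'sess_write', 'sess_destroy', 'sess_gc' );
?>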
In response to the original question, you are probably also curious about database clustering. There has been some talk of this on the list in the past. However, I don't recall anybody ever saying they have set up a MySQL cluster with MediaWiki. That said, it should be possible to set up MySQL with multi-master replication. Hopefully MySQL 5.1 will address the current shortcomings, but we will have to wait and see...
Gregory Szorc gregory.szorc@gmail.com
On 19/06/06, Gregory Szorc gregory.szorc@gmail.com wrote:
In response to the original question, you are probably also curious about database clustering. There has been some talk of this on the list in the past. However, I don't recall anybody ever saying they have set up a MySQL cluster with MediaWiki.
Wikimedia have been telling porkies since 2004, then?
Rob Church
On 6/19/06, Rob Church robchur@gmail.com wrote:
On 19/06/06, Gregory Szorc gregory.szorc@gmail.com wrote:
In response to the original question, you are probably also curious about database clustering. There has been some talk of this on the list in the past. However, I don't recall anybody ever saying they have set up a MySQL cluster with MediaWiki.
Wikimedia have been telling porkies since 2004, then?
I thought Wikimedia uses MySQL replication, not clustering. MySQL clustering uses main memory to store tables. Somehow I don't think the Wikimedia servers have that much memory. BTW, MySQL 5.1 will finally support disk-based clustering. Even then, there is still the issue of FULLTEXT indexes in cluster mode. Does the cluster storage engine support FULLTEXT indexes?
Greg
On 19/06/06, Gregory Szorc gregory.szorc@gmail.com wrote:
I thought Wikimedia uses MySQL replication, not clustering. MySQL clustering uses main memory to store tables.
Sorry, I keep getting the two confused. Side effect of actually having done something productive this week.
Rob Church
Hello,
I thought Wikimedia uses MySQL replication, not clustering. MySQL clustering uses main memory to store tables. Somehow I don't think the Wikimedia servers have that much memory.
You're slightly wrong. We have enough memory to store our MySQL core database in memory.
BTW, MySQL 5.1 will finally support disk-based clustering.
Yes, for non-indexed data.
Even then, there is still the issue of FULLTEXT indexes in cluster mode. Does the cluster storage engine support FULLTEXT indexes?
We don't use FULLTEXT indexes on the live site anyway. It's pure InnoDB, and search is offloaded to Lucene.
Anyway, Cluster is designed to handle zillions of small transactions per second without data loss, so it could work as a solution for session storage (and it is used that way on several biggish sites).
On the other hand, a distributed database would not suit our access pattern that well, as there are quite a lot of batch reads. We already solve the 'clustering' problem by simply having replicated sets of database nodes working with different workloads/patterns as needed. We're quite happy with the current setup, I guess ;-)
Domas
I thought Wikimedia uses MySQL replication, not clustering. MySQL clustering uses main memory to store tables. Somehow I don't think the Wikimedia servers have that much memory.
[...]
On the other hand, a distributed database would not suit our access pattern that well, as there are quite a lot of batch reads. We already solve the 'clustering' problem by simply having replicated sets of database nodes working with different workloads/patterns as needed. We're quite happy with the current setup, I guess ;-)
With replication, there is only one writeable node. How do you force MediaWiki to do reads from replica #n and writes to the master?
Unless it's multi-master replication, in which case they are all writable, aren't they?
Arthur arthur@astarsolutions.co.uk
Daniel J McDonald wrote:
With replication, there is only one writeable node. How do you force MediaWiki to do reads from replica #n and writes to the master?
Code that works with the database asks the load balancer for a database connection object that's either the master (for write or time-sensitive read work) or a randomly selected not-too-lagged slave (for non-sensitive read work).
See LoadBalancer.php.
-- brion vibber (brion @ pobox.com)
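In code, that looks roughly like the sketch below (DB_SLAVE, DB_MASTER and wfGetDB() are the real entry points of that era; $id and the particular queries are just illustrative):

<?php
# Non-sensitive read: any not-too-lagged slave will do.
$dbr =& wfGetDB( DB_SLAVE );
$count = $dbr->selectField( 'page', 'COUNT(*)', '' );

# Write, or a read that must see the very latest data: go to the master.
$dbw =& wfGetDB( DB_MASTER );
$dbw->update( 'page',
    array( 'page_touched' => $dbw->timestamp() ),
    array( 'page_id' => $id ) );  # $id: placeholder page ID
?>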
Brion Vibber wrote:
Daniel J McDonald wrote:
With replication, there is only one writeable node. How do you force MediaWiki to do reads from replica #n and writes to the master?
Code that works with the database asks the load balancer for a database connection object that's either the master (for write or time-sensitive read work) or a randomly selected not-too-lagged slave (for non-sensitive read work).
See LoadBalancer.php.
Ok, I get it. The master is listed in the $wgDBserver variable, and the slaves in the $wgDBservers array of arrays. Upon failure of the master, it's a manual process to get writes working again, but reads should function fine.
Since I already have my wiki on a replicated 4.1 database, it's a whole lot less work than building a new MySQL 5.0 cluster from scratch with that large learning curve.
Y'all have been most helpful!
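For reference, a minimal $wgDBservers sketch for that kind of setup; hostnames and credentials are placeholders. (Note that when $wgDBservers is set it takes precedence, and its first entry acts as the master.)

<?php
$wgDBservers = array(
    # First entry is the master; load 0 keeps bulk reads off it.
    array( 'host' => 'db-master.example.com', 'dbname' => 'wikidb',
        'user' => 'wikiuser', 'password' => 'secret',
        'type' => 'mysql', 'load' => 0 ),
    # Replication slave: takes the read traffic.
    array( 'host' => 'db-slave.example.com', 'dbname' => 'wikidb',
        'user' => 'wikiuser', 'password' => 'secret',
        'type' => 'mysql', 'load' => 1 ),
);
?>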
Daniel J McDonald wrote:
Ok, I get it. The master is listed in the $wgDBserver variable, and the slaves in the $wgDBservers array of arrays. Upon failure of the master, it's a manual process to get writes working again,
Yes; we're not losing sales if the site's partially broken for a few minutes, so it hasn't yet been worth it to invest time and money in setting up automatic transparent failover for something that's very rare.
but reads should function fine.
Likely not; because reads need to function properly even when slaves are slightly lagged, some items will check the master. For instance, on a page view the tiny 'page' record is read from the master, then revision metadata, text, and link-coloring information are pulled from a slave. Otherwise you can edit a page and have it serve you back the previous version -- which we don't want then stuck in a cache!
-- brion vibber (brion @ pobox.com)
Hi!
Ok, I get it. The master is listed in the $wgDBserver variable, and the slaves in the $wgDBservers array of arrays. Upon failure of the master, it's a manual process to get writes working again, but reads should function fine.
The Wikimedia database setup has already evolved a bit beyond that. We have multiple sets of databases (enwiki/others), and some slaves work only with particular datasets (like holbach, our 4 GB box, which holds only dewiki). Additionally, we employ application servers in our external storage role for text. So we have our own partitioning already.
Since I already have my wiki on a replicated 4.1 database, it's a whole lot less work than building a new MySQL 5.0 cluster from scratch with that large learning curve.
You can always ask MySQL to assist - they've got consultants who have lots of experience with MediaWiki!!!! ;-)
Domas