Another question: if this is a single wiki, why not rsync it across the
four servers? That way, when a new release of MW comes out, you update one
server and the rest are updated easily via rsync.
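As a rough sketch of that idea (hostnames and paths here are illustrative assumptions, not details from this thread), the push after an upgrade might look like:

```shell
#!/bin/sh
# Sketch only: hostnames and paths are assumptions.
# After upgrading MediaWiki on this server, push the code tree to the others.
# Uploads are excluded because they live on the shared NFS mount;
# LocalSettings.php is excluded in case it differs per host.
for host in web2 web3 web4; do
  rsync -az --delete \
    --exclude='images/' --exclude='cache/' --exclude='LocalSettings.php' \
    /var/www/mediawiki/ "${host}:/var/www/mediawiki/"
done
```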
I think it's best I step back on this, as I am no wiki expert at all. I can
provide solutions to certain issues in terms of server layout and whatnot,
but that is about it :(
On Tue, Oct 28, 2014 at 7:14 AM, Justin Lloyd <jclbugz(a)gmail.com> wrote:
I think my explanation was not the clearest it could have been. Let's say
for the moment that I have one wiki. That wiki is served by a load balancer
in front of a server farm consisting of four Apache vhosts, one per
physical server, each with its own copy of MediaWiki, LocalSettings.php,
etc. Thus, a request for, say, http://wiki.domain.com/wiki/Main_Page (and
thus all of its included images, CSS, JS, etc.) is actually distributed by
the load balancer across the four vhosts. Each of the four physical hosts
NFS-mounts the same shared directory from the single NFS server, so that
all four Apache vhosts have simultaneous read-write access to the same
uploaded multimedia content.
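To make that setup concrete, here is a minimal sketch of the mount-plus-symlink arrangement described above; the NFS server name, mount options, and paths are assumptions for illustration:

```shell
# Sketch only: server names and paths are assumptions.
# On each web server, an /etc/fstab entry mounts the shared directory, e.g.:
#   nfs1:/wiki  /var/www/images  nfs  rw,hard,intr  0 0
mount /var/www/images

# Each wiki's images directory is then a symlink into the shared mount:
ln -s /var/www/images/wiki2 /var/www/wiki2/images
```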
In case that description is missing your point, I'll add that I do indeed
rsync the NFS server's shared directory to another server nightly (I could
easily shorten that interval), which in turn gets rsynced to an offsite
server. So the NFS server is a single point of failure, but I do have both
local and remote copies of the uploaded content. My goal is increased
reliability for that backend file server.
Does that answer your question or am I still missing your point? :)
Justin
On Mon, Oct 27, 2014 at 10:41 PM, Jonathan Aquilina <eagles051387(a)gmail.com> wrote:
You are mentioning NFS; why not use rsync to replicate to a secondary NFS
server, and set it to run, let's say, every 5 to 10 minutes, or however
often you want to keep the secondary server updated?
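A hedged sketch of that schedule (the standby hostname, paths, and log file are assumptions, not from the thread) could be a crontab entry on the primary NFS server:

```shell
# Hypothetical crontab entry: replicate the shared upload tree to a
# standby NFS server every 10 minutes. Names and paths are assumptions.
*/10 * * * * rsync -az --delete /wiki/ nfs-standby:/wiki/ >> /var/log/wiki-rsync.log 2>&1
```

Note that `--delete` keeps the standby an exact mirror, so a deletion on the primary propagates within one interval.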
On Tue, Oct 28, 2014 at 1:13 AM, Justin Lloyd <jclbugz(a)gmail.com> wrote:
> Hi all,
>
> Currently I have five wikis, with the largest one being about 35k articles
> (109k pages) and pretty heavily trafficked. My basic server architecture is
> four web servers behind a load balancer, with a single NFS server that
> shares out a directory containing the upload directory content for each of
> the five wikis, e.g. /wiki/wiki1, /wiki/wiki2, etc. (There are also MySQL
> and Memcached servers, but they are not relevant to this discussion.) Each
> web server mounts /wiki in one location, say /var/www/images, and each of
> the five MediaWiki instances on the server has its images subdirectory as
> a symlink to its corresponding subdirectory under the mount, e.g.
> /var/www/images/wiki2.
>
> Obviously the NFS server is a single point of failure, but I've yet to
> come up with a good alternative shared-filesystem architecture that
> doesn't require an expensive license like SNFS.
>
> Finally, I'm considering moving the whole shebang to AWS but using S3
> directly on the web servers doesn't seem viable in this architecture.
>
> So I'm wondering how others are approaching the design of load balancing
> (multiple instances of) MediaWiki across multiple web servers while
> maintaining a single source for each wiki's upload directory content. I'm
> willing to COMPLETELY reevaluate my wiki server architecture as long as
> it's fast and highly available, so all suggestions are welcome!
>
> Justin
_______________________________________________
MediaWiki-l mailing list
To unsubscribe, go to:
https://lists.wikimedia.org/mailman/listinfo/mediawiki-l
--
Jonathan Aquilina