Since I have open sourced the wikix program, anyone wanting the images can now download them directly from Wikipedia. I am in the process of restructuring the WikiGadugi site, so anyone wanting the BitTorrent downloads needs to finish up this week; I will discontinue them shortly, since folks now have the ability to download the images directly. The wikix program is not very intensive on the main Wikimedia servers. The program is set up to behave as several workstations, and it really does not take that long to get the images.
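For anyone curious how the parallel fetch works, here is a rough sketch of the idea in Python. This is not the actual wikix code (wikix is a C program); the URL list and output directory are just placeholders for illustration.

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen
import os

# Placeholder list; a real run would build this from the image links in the XML dump.
IMAGE_URLS = [
    "https://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png",
]

def fetch(url, out_dir="images"):
    # Download one image and write it to disk under its original file name.
    os.makedirs(out_dir, exist_ok=True)
    name = url.rsplit("/", 1)[-1]
    with urlopen(url) as resp, open(os.path.join(out_dir, name), "wb") as f:
        f.write(resp.read())
    return name

if __name__ == "__main__":
    # A fixed pool of 16 workers, matching the "several workstations" behaviour described above.
    with ThreadPoolExecutor(max_workers=16) as pool:
        for name in pool.map(fetch, IMAGE_URLS):
            print("fetched", name)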
To date, under ten folks have downloaded the entire image archive, so it does not appear that there is that much interest.
Jeff
On Tue, 24 Apr 2007 17:22:57 -0600, Jeff V. Merkey wrote:
> Since I have open sourced the wikix program, anyone wanting the images can now download them directly from Wikipedia. I am in the process of restructuring the WikiGadugi site, so anyone wanting the BitTorrent downloads needs to finish up this week; I will discontinue them shortly, since folks now have the ability to download the images directly. The wikix program is not very intensive on the main Wikimedia servers. The program is set up to behave as several workstations, and it really does not take that long to get the images.
I was under the impression that bulk downloads needed to be throttled, and that it would take a lot longer than that to download everything. Does this just grab the images as fast as it can get them? Is that allowed?
Steve Sanbeg wrote:
> On Tue, 24 Apr 2007 17:22:57 -0600, Jeff V. Merkey wrote:
>> Since I have open sourced the wikix program, anyone wanting the images can now download them directly from Wikipedia. I am in the process of restructuring the WikiGadugi site, so anyone wanting the BitTorrent downloads needs to finish up this week; I will discontinue them shortly, since folks now have the ability to download the images directly. The wikix program is not very intensive on the main Wikimedia servers. The program is set up to behave as several workstations, and it really does not take that long to get the images.
> I was under the impression that bulk downloads needed to be throttled, and that it would take a lot longer than that to download everything. Does this just grab the images as fast as it can get them? Is that allowed?
It's faster to get them from Wikipedia. The BitTorrent downloads take about 1 1/2 weeks for the full archive; using wikix directly takes only about 1 1/2 days given the current size of the Commons image set.
Getting them from Wikipedia is faster due to the squid caching, both locally and Internet-wide. My analysis of the data sets from Wikipedia indicates that 60% of the images are cached either locally on squid or at other remote cache servers.
Since they are cached in a distributed manner, the program only accesses Wikipedia intermittently. Copyvio is a bigger issue than performance. My image mirroring with wikix has had almost no noticeable impact on Wikipedia. The program behaves like 16 workstations, and Wikipedia seems able to handle it with little additional overhead. Given the number of squid servers Brion has active, I think the impact is minimal compared to the massive amount of access the site gets daily.
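To put rough numbers on that, here is a back-of-the-envelope calculation. The image count and per-image fetch time below are assumptions for illustration only; the 16-way figure is the one that matters.

# All figures except the worker count are assumed for illustration.
num_images  = 1_500_000   # assumed size of the Commons image set
avg_seconds = 1.5         # assumed average serial fetch time per image
workers     = 16          # wikix behaves like 16 workstations

serial_days   = num_images * avg_seconds / 86_400
parallel_days = serial_days / workers
print(f"serial: ~{serial_days:.1f} days, 16-way parallel: ~{parallel_days:.1f} days")

With those assumed inputs, a serial crawl would run close to a month, while the 16-way run lands in the neighborhood of the 1 1/2 days quoted above.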
Jeff
Jeff V. Merkey wrote:
> Steve Sanbeg wrote:
>> On Tue, 24 Apr 2007 17:22:57 -0600, Jeff V. Merkey wrote:
>>> Since I have open sourced the wikix program, anyone wanting the images can now download them directly from Wikipedia. I am in the process of restructuring the WikiGadugi site, so anyone wanting the BitTorrent downloads needs to finish up this week; I will discontinue them shortly, since folks now have the ability to download the images directly. The wikix program is not very intensive on the main Wikimedia servers. The program is set up to behave as several workstations, and it really does not take that long to get the images.
>> I was under the impression that bulk downloads needed to be throttled, and that it would take a lot longer than that to download everything. Does this just grab the images as fast as it can get them? Is that allowed?
> It's faster to get them from Wikipedia. The BitTorrent downloads take about 1 1/2 weeks for the full archive; using wikix directly takes only about 1 1/2 days given the current size of the Commons image set.
> Getting them from Wikipedia is faster due to the squid caching, both locally and Internet-wide. My analysis of the data sets from Wikipedia indicates that 60% of the images are cached either locally on squid or at other remote cache servers.
> Since they are cached in a distributed manner, the program only accesses Wikipedia intermittently. Copyvio is a bigger issue than performance. My image mirroring with wikix has had almost no noticeable impact on Wikipedia. The program behaves like 16 workstations, and Wikipedia seems able to handle it with little additional overhead. Given the number of squid servers Brion has active, I think the impact is minimal compared to the massive amount of access the site gets daily.
> Jeff
Also, the number of people actually needing to do this seems small. To date, only 9 folks have downloaded the image archive over a one-month period. That's a small number. I will leave the BitTorrent downloads active at WikiGadugi, and throttled, for the next month or so if folks still want to get at them, since they have little impact on my bandwidth at present. Wikix is a better method, and given all the "gloom and doom" talk about creating backup sites for Wikimedia (which I think is probably not as big a concern as people think), the wikix tool's time has come, and folks should have access to it if they feel a need to mirror Wikipedia sites elsewhere. At least it gives the community the tools to do this.
Jeff
On Wed, Apr 25, 2007 at 10:36:38AM -0600, Jeff V. Merkey wrote:
> Also, the number of people actually needing to do this seems small. To date, only 9 folks have downloaded the image archive over a one-month period. That's a small number. I will leave the BitTorrent downloads active at WikiGadugi, and throttled, for the next month or so if folks still want to get at them, since they have little impact on my bandwidth at present.
If I've drawn all the proper inferences from that paragraph, you mean that you've only seen 9 clients hit *your* copies of the torrent files in question.
You *do* know how BitTorrent works, right? :-)
Cheers, -- jra
Jay R. Ashworth wrote:
> On Wed, Apr 25, 2007 at 10:36:38AM -0600, Jeff V. Merkey wrote:
>> Also, the number of people actually needing to do this seems small. To date, only 9 folks have downloaded the image archive over a one-month period. That's a small number. I will leave the BitTorrent downloads active at WikiGadugi, and throttled, for the next month or so if folks still want to get at them, since they have little impact on my bandwidth at present.
> If I've drawn all the proper inferences from that paragraph, you mean that you've only seen 9 clients hit *your* copies of the torrent files in question.
> You *do* know how BitTorrent works, right? :-)
I have only seen 9 clients from the tracker for the torrents. I created the tracker, the torrent files, and the archives, and since the torrents are configured to use that tracker, all of the downloads are monitored from my site.
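For reference, counting downloads from the tracker side is straightforward: the standard BitTorrent "scrape" convention reports seeders, leechers, and completed downloads per torrent. A minimal sketch follows; the tracker URL and info hash are placeholders, and the bencoded response is returned raw rather than decoded.

from urllib.request import urlopen
from urllib.parse import quote

TRACKER_SCRAPE_URL = "http://tracker.example.org:6969/scrape"  # placeholder tracker
INFO_HASH = bytes.fromhex("00" * 20)                           # placeholder 20-byte info hash

def scrape(url, info_hash):
    # Returns the tracker's bencoded scrape response, which includes
    # "complete" (seeders), "incomplete" (leechers), and "downloaded"
    # (completed downloads) for the given torrent.
    with urlopen(f"{url}?info_hash={quote(info_hash)}") as resp:
        return resp.read()

print(scrape(TRACKER_SCRAPE_URL, INFO_HASH))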
Jeff
> Cheers, -- jra
On Thu, Apr 26, 2007 at 10:34:52AM -0600, Jeffrey V. Merkey wrote:
>> You *do* know how BitTorrent works, right? :-)
> I have only seen 9 clients from the tracker for the torrents. I created the tracker, the torrent files, and the archives, and since the torrents are configured to use that tracker, all of the downloads are monitored from my site.
Oh yeah: the tracker. Clearly, *I'm* the one who forgets how BitTorrent works. :-)
Cheers, -- jr 'spelling flame rule' a