In response to the email correspondence received, I should state the
following information:
1) Windows version information (I am not providing the full 'winver'
output, because it's probably not necessary; all you would likely need
to know is the approximate Windows version on which I am attempting
the download).
Microsoft (R) Windows
Version 5.1 (Build 2600.xpsp_sp3_gdr.080814-1236 : Service Pack 3)
Copyright (C) 2007 Microsoft Corporation
BLA BLA
Physical memory available to Windows : 1,038,404 KB
2) As to whether the OS is 32-bit or 64-bit: after using the dxdiag,
sysdm and winmsd methods, I believe it is a 32-bit OS. I don't know
whether this helps explain why the download might cut out before
reaching the full file size of 14GB. ALSO, I should have mentioned in
my original message that I can sometimes download more than 4GB, but
that (for some reason or other) the download still cuts out; I don't
know why.
3) As a separate point, it occurs to me that one possible reason the
download cuts out is that there is a sequence of servers (according to
tracert) on which I rely for the download to proceed. I could be
wrong, but it may take only one server (for whatever reason) deciding
that the download is problematic for the whole file download to fail.
4) Also, the version of Mozilla Firefox used is 3.0.7 (though I am not
sure whether this would explain why the download usually cuts out at
arbitrary points).
5) Is there some kind of timeout setting somewhere that might instruct
the Wikipedia server to abandon a particular attempt to download a
large file if it is taking too long?
I should also state that I use a system for which I do not have
administrative rights (though why this would cause the download to cut
out is anyone's guess).
It also seems like a good idea to split large files up using a file
splitter (whichever one takes your fancy; see the sketch after the
next paragraph), as larger file downloads would seem to be problematic
for most people who have access only to networks with limited
connection speeds.
It occurs to me that, given the randomness of this problem, this
response might be correspondingly scattered. Still, how long might it
take to organise something in the way of a (perhaps
Unix-script-automated?) file split for the larger Wikipedia database
download files?
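To make the splitting side concrete, here is a minimal Python sketch
of the kind of script I have in mind; the 1GB part size is an
arbitrary illustrative choice, not anything the servers currently
offer.

import sys

PART_SIZE = 1 << 30  # 1 GiB per part: an arbitrary, illustrative choice
READ_SIZE = 1 << 20  # stream 1 MiB at a time to keep memory use small

def split_file(path):
    """Write path.part000, path.part001, ... each at most PART_SIZE bytes."""
    with open(path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(min(READ_SIZE, PART_SIZE))
            if not chunk:
                break  # end of input: no empty trailing part is created
            with open("%s.part%03d" % (path, index), "wb") as part:
                written = 0
                while chunk:
                    part.write(chunk)
                    written += len(chunk)
                    if written >= PART_SIZE:
                        break
                    chunk = src.read(min(READ_SIZE, PART_SIZE - written))
            index += 1

if __name__ == "__main__":
    # Usage: python split_file.py <large-file>
    split_file(sys.argv[1])

The receiving side would download the parts one at a time and recombine
them byte-for-byte, e.g. 'cat file.part* > file' on Unix or 'copy /b'
on Windows.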
Many thanks to everyone for their responses so far to this query.
PS – If BitTorrent were ever used for the dissemination of large files
(there has been some mention of this on the Wikipedia database
download talk page), I can still imagine there being problems with
trying to propagate the WHOLE of such a large file (~14GB), though
this assertion might run contrary to other people's experiences.
Anyhow, it occurs to me that, in the interests of redundancy, it would
be worthwhile to figure out whether there is a way of restructuring
the Wikipedia database download so that, even if only the first 1GB of
the database were downloaded, it would still be possible to read the
information in it (perhaps this is already the case, but from what I
gather, once an incomplete database dump is downloaded it is pretty
useless, unless someone can correct me).
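On that last point, I may partly answer my own question: if the file
in question is one of the bzip2-compressed XML database dumps (rather
than the static HTML archives linked below), an incomplete download is
not entirely useless, because bzip2 compresses in independent blocks,
so the intact prefix can still be decompressed. A minimal Python
sketch of the idea, with placeholder file names (I have not tried this
against a real partial dump):

import bz2
import sys

def recover_prefix(partial_path, out_path):
    """Decompress the complete bzip2 blocks in a truncated .bz2 file.

    bzip2 decodes whole blocks at a time, so the truncated final block
    simply yields nothing; everything before it comes out intact.
    Returns the number of decompressed bytes recovered.
    """
    decomp = bz2.BZ2Decompressor()
    recovered = 0
    with open(partial_path, "rb") as src, open(out_path, "wb") as dst:
        while not decomp.eof:
            chunk = src.read(1 << 20)  # feed 1 MiB at a time
            if not chunk:
                break
            try:
                data = decomp.decompress(chunk)
            except OSError:
                break  # actual corruption: keep what we already have
            dst.write(data)
            recovered += len(data)
    return recovered

if __name__ == "__main__":
    # Usage: python recover_prefix.py partial-dump.xml.bz2 recovered.xml
    print("recovered %d bytes" % recover_prefix(sys.argv[1], sys.argv[2]))

The recovered XML would itself break off mid-page, but the pages before
that point would be readable.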
On 4/11/09, Brian <Brian.Mingus(a)colorado.edu> wrote:
I'm pretty sure it's impossible to encourage people to include
relevant information in their OPs. You don't suppose you could have at
least told us your operating system and whether you are running 32 or
64 bits? Are you on Linux with no large file support?
On Fri, Apr 10, 2009 at 12:21 PM, Jameson Scanlon
<jameson.scanlon(a)googlemail.com> wrote:
Does anyone on the wikitech mailing list happen to know whether it
would be possible for some of the larger Wikipedia database downloads
(which are, say, 16GB or so in size) to be split into parts so that
they can be downloaded piecewise? For whatever reason, whenever I have
attempted to download the ~14GB files (say, from
http://static.wikipedia.org/downloads/2008-06/en/ ), I have found that
only 2GB (presumably the first 2GB) of what I sought to download has
actually been downloaded. Is there any way around this? Could anyone
suggest what possible reasons there might be for this difficulty in
downloading the material?
Thanks.
_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l