[WikiEN-l] Long-term searchability of the internet

Carcharoth carcharothwp at googlemail.com
Mon Jan 17 13:31:29 UTC 2011


On Mon, Jan 17, 2011 at 11:17 AM, Charles Matthews
<charles.r.matthews at ntlworld.com> wrote:
> On 16/01/2011 23:46, Tony Sidaway wrote:
>
>  > We
>> don't need to be able to find every single thing on the internet, only
>> the useful stuff.  A huge amount of the useful stuff is on Wikipedia.
>
> This is true, but not particularly "objective". The OP's question itself
> has merit. The long-term view surely must depend on whether [[Moore's
> law]] is with us or against us on this issue, for example. The "content"
> of the Web in particular is limited only by the number of hard drives
> that can be lashed onto it. The idea that a search engine company could
> download "all" webpages so as to have a local copy to work on may some
> day seem laughably naive (and I believe is already obsolescent).

Yes. My point was not that we would want to find every single thing on
the internet (I was careful to exclude large parts of it), but that
the useful parts of the internet (which Wikipedia would want to draw
upon when constructing new pages or updating existing ones) might be,
or may always have been, expanding faster than it is possible to keep
up with. Traditional encyclopedias, with a print deadline, didn't have
this problem. My question is whether it is possible to attain the "sum
of all human knowledge". Aristotle was said to be the "last person to
know everything there was to be known in his own time". Retaining the
caveat of "useful", can Wikipedia become a pseudo-Aristotle, or is
that an unattainable goal?

Speaking of the true size of the internet, I'm reminded of the
concept of the Deep Web:

http://en.wikipedia.org/wiki/Deep_Web

Carcharoth