On Fri, 2003-12-19 at 06:30, Arvind Narayanan wrote:
On Thu, Dec 18, 2003 at 09:58:13AM -0800, Daniel Mayer
wrote:
Erik wrote:
I feel that it is extremely tedious to have to click around
many times and load many pages to get a complete picture of
an issue, a person, etc.
There is little difference between clicking on a TOC link in a huge article
and clicking on a link to another article.
For me there's a huge difference. My latency on Wikipedia
is usually between 5 and 10 seconds. OTOH I have high bandwidth.
This is an important point.
There are people with 28kbps flaky bandwidth in the world. Lots of
them.
I, of course, have a whopping 2 Mbps download line, but that is due
to unfair global economics. Further, my neighbours do not seem to need
Wikipedia as much as the places where there is no internet at all.
So I would greatly prefer to download a huge article at once.
A single page can be saved to a floppy and searched effectively
(Ctrl+F in most browsers), and printing becomes more meaningful.
> >I think an article should have as much information
> >related to its title as possible for that reason, and
> >things should only be split off if a certain maximum
> >size is reached (I tend towards 30-40K), or if they
> >are not really related.
Again, the search utility becomes helpful.
Anything very big will easily attract attention, and then reorganisation
can be thought about.
I use tabbed browsing (a great recent trick to keep the number of open
windows low) and find myself sometimes getting entangled in anything
up to 50 tabs on a decent night.
Monolithic pages should not be treated as a bad thing. Clear structure and
named anchors can help make monolithic pages very useful.
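As a small illustration of what I mean by named anchors (the exact
section names here are made up), a long page can carry in-page
navigation like this:

```html
<!-- Table of contents at the top of one monolithic page -->
<a href="#history">History</a> |
<a href="#criticism">Criticism</a>

<!-- Later in the same page: targets the TOC links jump to -->
<a name="history"></a>
<h2>History</h2>

<a name="criticism"></a>
<h2>Criticism</h2>
```

Following `#history` jumps within the already-loaded page, so on a slow
or flaky link the reader pays the download latency once instead of once
per section.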
To me, 'standards' that specify file size as a 'usability' issue
based on average screen size are a misguided corporate PR feature.
But eventually, I must admit, technology that converts between the
monolithic view and the individual nodes an author wishes for will be a
useful thing.
Ramanan