On 8/22/06, Delirium delirium@hackish.org wrote:
I'd certainly put it in the top 100, even the top 10, although probably not the top 3. When a user looks up an article, the worst possible result would be to find a misleading or otherwise incorrect article. The second-worst result would be to find no article. The third-worst would be to find a somewhat shoddy but not misleading or wrong article. So I'd rank "we don't cover that at all" pretty high on the list of ways our coverage of a topic could suck.
[snip]
I think you need to inject some [[Bayesian]]-style reasoning into the above...
Even if it were REALLY terrible to get no article, would it matter much if it were very rare? And if it were only mildly bad to get inaccurate information, but fairly common, how bad would that be?
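To make the expected-cost point concrete, here's a toy Python sketch; the probabilities and severity weights are invented purely for illustration, not measurements of anything:

    # Toy expected-cost comparison; all numbers below are hypothetical.
    p_no_article, cost_no_article = 0.02, 9    # rare, but very bad
    p_inaccurate, cost_inaccurate = 0.10, 4    # more common, moderately bad

    print(p_no_article * cost_no_article)      # 0.18 expected harm per lookup
    print(p_inaccurate * cost_inaccurate)      # 0.40 -- the "milder" failure can dominate

The point is just that severity alone doesn't settle the question; frequency matters too.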
With 18x the article count of our competition... I *hope* that 'no result found' isn't really due to a significant coverage problem... If it is, then what reason do we have to believe that continuing the same old process of creating articles will *ever* solve that problem?
Although our lack of logging makes it currently impossible to substantiate with data, I strongly believe that at this point we could get substantially more improvement on the 'no article found' front for our *readers* by improving our search ([[Double Metaphone]]-enabled title search, for example), creating redirects, building better navigation tools, improving main page layouts, etc., than by simply writing more articles.
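To illustrate the kind of phonetic title matching I have in mind, here's a rough Python sketch; it assumes the third-party 'metaphone' package, and the titles and query are just stand-ins, not our actual search code:

    # Rough sketch of Double Metaphone title matching; assumes the
    # third-party "metaphone" package (pip install Metaphone).
    from metaphone import doublemetaphone

    titles = ["Nietzsche", "Kernighan", "Chebyshev"]   # stand-in article titles
    index = {}
    for t in titles:
        primary, secondary = doublemetaphone(t)
        index.setdefault(primary, []).append(t)
        if secondary:
            index.setdefault(secondary, []).append(t)

    def phonetic_lookup(query):
        """Return titles sharing a Double Metaphone code with the query."""
        primary, secondary = doublemetaphone(query)
        matches = list(index.get(primary, []))
        if secondary:
            matches += index.get(secondary, [])
        return matches

    # A misspelled query can still surface the intended title if the
    # phonetic codes happen to coincide.
    print(phonetic_lookup("Nietsche"))

A plain title search finds nothing for the misspelling; a phonetic index at least gives the reader a candidate to click.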
Unfortunately I can't even tell people what to fix until we have better data.
Recently, by using some third-party search data, we determined that there were no pages for a large number of fairly common search strings. A couple of coverage gaps were found... but their numbers were dwarfed by the number of alternate representations and typos. Many redirects and a few new pages have been created as a result... I believe many of these will be useful to readers, but we won't be able to tell without page view data for them.
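Here's roughly what that triage looks like as a Python sketch; the title list and failed search strings are hypothetical stand-ins, and the fuzzy matching uses the standard library's difflib rather than whatever tooling was actually used:

    # Sketch of turning failed search strings into redirect candidates;
    # "titles" and "failed_queries" below are hypothetical examples.
    import difflib

    titles = ["Colour", "Encyclopedia", "George Washington"]          # existing articles
    failed_queries = ["color", "encyclopaedia", "gorge washington"]   # searches that found nothing

    lowered = {t.lower(): t for t in titles}
    for q in failed_queries:
        match = difflib.get_close_matches(q.lower(), lowered.keys(), n=1, cutoff=0.75)
        if match:
            print(f"#REDIRECT [[{lowered[match[0]]}]]   <- candidate redirect from '{q}'")
        else:
            print(f"'{q}': no close title, possible genuine coverage gap")

Queries that fuzzy-match an existing title suggest a redirect; the small remainder that match nothing are the ones worth examining as real coverage gaps.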