On 5/7/07, David Gerard dgerard@gmail.com wrote:
On 07/05/07, Jeff V. Merkey jmerkey@wolfmountaingroup.com wrote:
Would it be possible to block Brandt's article from being scraped by the search engines in the main site robots.txt file? It would help alleviate the current conflict and hopefully remove the remaining issues between Daniel and Wikipedia. After this final issue is addressed, I feel we will have done all we can to correct Daniel's bio and address his concerns. That being said, there is a limit to how far good Samaritanism should go, and I think we have done about all we can here. The rest is up to Daniel.
This idea has actually been suggested on wikien-l and met with a mostly positive response: selective noindexing of some living biographies. It would cut down the volume of OTRS complaints tremendously. The response was not unanimous, I must note.
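For reference, the robots.txt half of this is only a couple of lines; a minimal sketch, assuming the article sits at the usual /wiki/Daniel_Brandt path:

    User-agent: *
    Disallow: /wiki/Daniel_Brandt

Worth noting that a Disallow rule only stops crawling: engines can still list the bare URL if other sites link to it, so a per-page noindex directive in the page itself would be the more thorough way to do the selective noindexing described above.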
This would be very useful for another use case: sometimes Google will pick up a cached copy of a vandalized page. To purge the Google cache you need to make the page return a 404 (which deletion doesn't do), add the page to a robots.txt deny, or include a directive in the page itself that stops indexing.
If we provided some directive to do one of the latter two (ideally the last), we could use it temporarily to purge Google's cached copies of vandalism... so it would even be useful for pages that we normally want to keep indexed.
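Concretely, the in-page directive would just be the standard robots meta tag, emitted (hypothetically) by the software while a page is flagged and dropped again once the bad revision is out of the cache; a minimal sketch:

    <!-- emitted in the page <head> only while the cache purge is wanted -->
    <meta name="robots" content="noindex" />

Google honours this the next time it recrawls the page, at which point the cached copy should be dropped; the tag can then be removed so the page goes back to being indexed normally.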