tstarling@svn.wikimedia.org wrote:
+/**
+ * Robot policies per article.
+ * These override the per-namespace robot policies.
+ * Must be in the form of an array where the key part is a properly
+ * canonicalised text form title and the value is a robot policy.
+ * Example:
+ * $wgArticleRobotPolicies = array( 'Main Page' => 'noindex' );
+ */
+$wgArticleRobotPolicies = array();
Hmmmm, this doesn't seem like a big improvement over robots.txt to me.
If we're going to do this at all, shouldn't it be configurable through
the wiki in some way? Otherwise it's exactly the same requirement for
server operations personnel to intervene to change anything.
No, it's not a big improvement. It's a small improvement that took a couple
of minutes to implement. I suggested on the IRC channel that search engine
delisting should be part of the page protection interface. AmiDaniel got
excited about that and told me he was going to go off and implement it.
Personally I have other priorities.
The immediate problem was a request to add a single arbitration case to
robots.txt, presumably due to potential damage to the subject's
reputation. It didn't seem wise to me to turn robots.txt into a list of
potentially libellous articles. $wgArticleRobotPolicies is relatively private.
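For illustration, the whole operator-side change is a single entry in
LocalSettings.php; the title here is a made-up placeholder, not the actual
case:

$wgArticleRobotPolicies = array(
	'Wikipedia:Requests for arbitration/Example case' => 'noindex,nofollow'
);

The key has to be the canonicalised text form of the title, and the value is
used as the robot policy for that page, as described in the comment above.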
-- Tim Starling