No, because then people get used to entering it like that... and they happily go on entering it like that... and then one day, they're interested in looking up info about the Robot Exclusion Policy Protocol and type in http://en.wikipedia.org/robots.txt... and WTF?
I knew someone would make that argument, but I really don't find it a very compelling reason not to do something.
Firstly, how do you know this isn't happening already? There's a meta redirect, so how do you know people aren't just using that? And how do you know that right now someone isn't looking at http://en.wikipedia.org/robots.txt and going "Huh? Where's my Wikipedia article?"
Secondly, if someone wants to read an article about the gory details of the robot exclusion policy, it seems a stretch to suggest that they will have no idea about HTTP redirects or that there's a file called robots.txt. (I.e. if the robot exclusion policy required us to use a file called http://en.wikipedia.org/Cabbage_Patch_Doll then I agree, that would be a real problem, but "robots.txt"? Come on!)
Thirdly, of the one-point-something million article pages, there are only a handful that clash with real content in the web root. It's like 10 files (equivalent to 0.001% of the article space), and some of them, like http://en.wikipedia.org/COPYING, aren't doing anything particularly useful anyway.
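For what it's worth, the rule I have in mind is trivial to sketch. This is purely illustrative (the exact list of root files and the helper name are my own invention, not Wikipedia's actual server configuration): everything in the root redirects to /wiki/, except the handful of real files.

```python
# Hypothetical sketch, not Wikipedia's real config: redirect root-level
# requests to the /wiki/ article space, except for the few real root files.
ROOT_FILES = {"robots.txt", "COPYING", "favicon.ico"}  # illustrative list

def redirect_target(path):
    """Return the /wiki/ URL to redirect to, or None to serve the file as-is."""
    name = path.lstrip("/")
    if name in ROOT_FILES:
        return None  # a real file lives here; don't redirect
    return "/wiki/" + name  # the general case: redirect to the article

print(redirect_target("/Cabbage_Patch_Doll"))  # /wiki/Cabbage_Patch_Doll
print(redirect_target("/robots.txt"))          # None
```

In a real deployment this would just be a rewrite rule in the web server, but the point is the same: the corner cases are a short, fixed list, and everything else gets the useful behaviour.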
Look, I'm not trying to be obtuse, but I sincerely think that a redirect is better, and pointing to corner cases as a reason not to make things better in the general case doesn't make much sense to me.
All the best, Nick.