On Fri, Aug 18, 2006 at 07:33:24PM +0200, Steve Bennett wrote:
> On 8/18/06, Jay R. Ashworth <jra@baylink.com> wrote:
> > I very strongly suspect that no one who hasn't lived intimately with the parser code (that's, what, 4 or 5 people? :-) could predict what those things would do; they all seem implementation-defined to me.
> > Or almost all...
> > They do illustrate why making a late pass to hotlink URLs might not be a safe approach, though.
> > (oops, I should have changed the subject earlier)
> Depends what you mean by a "late pass". Any "early pass" is wrong: basically, a URL should only match if absolutely nothing else does (no normal links, for instance). But what kind of "late pass"? Is there a parse tree you can check to see whether a token has been matched against anything fancier than plain text?
No, my suggestion had been to do a final pass that handled that and several other things (like MAGIC words)... but on reflection, I think you *don't* want that processing applied to things which have already been parser-expanded, so I guess you have to let the parser handle them in-line as well.
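To make the ordering argument concrete, here's a toy sketch (emphatically *not* MediaWiki's actual parser, and these function names are made up): an early URL pass eats the brackets out from under the wiki-link syntax, while a late pass that only touches the text remaining after link expansion stays safe.

```python
import re

# Toy illustration only -- NOT MediaWiki's real code. It shows why
# hotlinking bare URLs *before* handling [[...]] syntax is wrong,
# while doing it as a final pass over leftover plain text is not.
URL = re.compile(r'https?://\S+')
WIKILINK = re.compile(r'\[\[([^\[\]]+)\]\]')

def early_pass(text):
    # Wrong order: the URL regex swallows "http://foo.com]]" inside
    # [[http://foo.com]], so the wiki-link pass then sees garbage.
    text = URL.sub(lambda m: f'<a href="{m.group(0)}">{m.group(0)}</a>', text)
    return WIKILINK.sub(lambda m: f'<a href="/wiki/{m.group(1)}">{m.group(1)}</a>', text)

def late_pass(text):
    # Expand [[...]] first, stashing the generated HTML behind opaque
    # tokens so the URL pass can only touch genuine plain text.
    stash = []
    def hide(m):
        stash.append(f'<a href="/wiki/{m.group(1)}">{m.group(1)}</a>')
        return f'\x00{len(stash) - 1}\x00'
    text = WIKILINK.sub(hide, text)
    text = URL.sub(lambda m: f'<a href="{m.group(0)}">{m.group(0)}</a>', text)
    return re.sub(r'\x00(\d+)\x00', lambda m: stash[int(m.group(1))], text)
```

With that ordering, late_pass('[[http://foo.com]]') yields a wiki link, while early_pass mangles the same input.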
The most interesting revelation of the above tests, for those who missed it, is that it *is* possible to link to a page named after a URL, but [[http://foo.com]] won't do it (that generates a, what was it, "direct link"). However, [[ http://foo.com]] works, although the page ends up being called "Http://foo.com". It's not completely inconceivable to me that one day we might want to write an article about a URL, like if some postmodern band names an album "http://stupid.com" or something.
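The "Http://foo.com" result looks like plain old title normalization at work: the leading space keeps the text from being taken as an external link, and the resulting title then gets its first letter uppercased like any other page name. A rough sketch of that step (hypothetical, not MediaWiki's actual Title code):

```python
def normalize_title(raw):
    # Rough approximation of wiki title normalization (a guess at the
    # behavior, not MediaWiki's real implementation): trim whitespace,
    # turn underscores into spaces, uppercase the first character.
    t = raw.strip().replace('_', ' ')
    return (t[:1].upper() + t[1:]) if t else t
```

So `[[ http://foo.com]]` would come out as the page "Http://foo.com", exactly as observed above.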
Hee.
Cheers,
-- jr 'http://www.washme.com/soap.html'a