Hi there.
I've made some modifications to MediaWiki 1.17.0 that others might be interested in. I'd be flattered if some or all of them made it into the official MediaWiki release.
Firstly, I've added links to the W3C HTML validation service hosted by MIT. These show up as 'W3C HTML 5.0' icons next to the 'Powered by MediaWiki' icon at the bottom of the page, and link the user to the HTML validation service. I've found having this feature invaluable in helping me to get my wiki text that includes HTML valid. Unfortunately I've had to disable some features of MediaWiki in order for the validation service to pass the generated HTML. You can read about how I modified MediaWiki for W3C HTML validation at:
http://www.progclub.org/wiki/Pcwiki#W3C_validation_icon
Secondly, and more importantly I feel, I've added extended links to the 'section edit' links section. In addition to showing an 'edit' link, I include a 'link' link, and a 'top' link. 'top' just links to #top, nothing remarkable there. 'link' provides a link to the canonical URL for that section. My web-site is available via several domain names, www.progclub.org, progclub.org, progclub.info, etc. I have nominated 'www.progclub.org' as the 'canonical' domain name to be used when publishing links, and nominated 'http' (rather than 'https') as the 'canonical' scheme. In order to support canonical section links I had to make a few changes to the settings files, and update the linker, etc., as explained at:
http://www.progclub.org/wiki/Pcwiki#Extended_edit_links
Anyway, that's it. Thought some of you might like to know. If you like the ideas but don't like the implementation, then please feel free to BYO implementation. Mine's a little bit less than perfect owing to no i18n support and me not knowing the best file to put my functions in. So, I'd be happy if someone else gave my code a spruce up.
Thanks!
John.
John Elliot wrote:
Hi there.
I've made some modifications to MediaWiki 1.17.0 that others might be interested in. I'd be flattered if some or all of them made it into the official MediaWiki release.
Firstly, I've added links to the W3C HTML validation service hosted by MIT. These show up as 'W3C HTML 5.0' icons next to the 'Powered by MediaWiki' icon at the bottom of the page, and link the user to the HTML validation service. I've found having this feature invaluable in helping me to get my wiki text that includes HTML valid.
Theoretically, the wiki should never generate invalid HTML, but it's not perfect. You can however install Tidy to have it clean up things for you.
Unfortunately I've had to disable some features of MediaWiki in order for the validation service to pass the generated HTML. You can read about how I modified MediaWiki for W3C HTML validation at:
Interesting. We should probably try to register rel=edit, or drop it out (opinions?). EditURI is already documented at http://microformats.org/wiki/existing-rel-values#POSH_usage

As for the W3C validator: ResourceLoaderDynamicStyles was purposefully added even though it is not conforming. I think that by removing it you disabled some ResourceLoader feature.
Secondly, and more importantly I feel, I've added extended links to the 'section edit' links section. (...)
In order to support canonical section links I had to make a few changes to the settings files, and update the linker, etc.,
You could have also just used the hook.
Anyway, that's it. Thought some of you might like to know. If you like the ideas but don't like the implementation, then please feel free to BYO implementation. Mine's a little bit less than perfect owing to no i18n support and me not knowing the best file to put my functions in. So, I'd be happy if someone else gave my code a spruce up.
Thanks!
John.
Thanks for sharing!
I did a fast review of it below: You have an HTML injection problem in the extended links. What I don't understand is why you worked so hard to make the extended link use the canonical domain. A relative link would have worked fine. Also, all the $wgCanonical* pieces could have been replaced with setting $wgServer to the chosen domain and a few calls to getFullUrl. (if $wgCanonicalSecureHost != $wgCanonicalHost, $wgServer can be set conditionally)
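For reference, a minimal LocalSettings.php sketch of that suggestion, assuming the $wgCanonicalHost / $wgCanonicalSecureHost names from John's patch hold the two host names (the host values here are placeholders, not anything from the thread):

```php
<?php
// LocalSettings.php (sketch, not a definitive implementation).
// $wgCanonicalHost / $wgCanonicalSecureHost are the names from John's
// patch; the host values below are placeholders.
$wgCanonicalHost       = 'www.progclub.org';
$wgCanonicalSecureHost = 'www.progclub.org';

// Set $wgServer conditionally, as suggested, so that getFullUrl()
// and friends emit the chosen canonical host and scheme.
if ( $wgCanonicalSecureHost !== $wgCanonicalHost
	&& isset( $_SERVER['HTTPS'] ) && $_SERVER['HTTPS'] !== 'off'
) {
	$wgServer = 'https://' . $wgCanonicalSecureHost;
} else {
	$wgServer = 'http://' . $wgCanonicalHost;
}
```

With $wgServer pinned this way, the per-function $wgCanonical* plumbing in the patch becomes unnecessary.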
Finally, if you wanted to mark one of them as canonical, IMHO you should have added a <link rel="canonical"> (but be careful with redirects, when MediaWiki adds one itself).
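A hedged sketch of how that <link rel="canonical"> could be emitted from LocalSettings.php on 1.17, using the core BeforePageDisplay hook and OutputPage::addLink(); the function name and canonical host are made up for illustration:

```php
<?php
// LocalSettings.php (sketch). efAddCanonicalLink and the host below
// are hypothetical; BeforePageDisplay is a core hook in 1.17.
function efAddCanonicalLink( $out, $skin ) {
	$title = $out->getTitle();
	if ( $title ) {
		// Be careful on pages where MediaWiki issues its own
		// redirect or canonical handling, per the caveat above.
		$out->addLink( array(
			'rel'  => 'canonical',
			'href' => 'http://www.progclub.org' . $title->getLocalURL(),
		) );
	}
	return true; // let other hooks run
}
$wgHooks['BeforePageDisplay'][] = 'efAddCanonicalLink';
```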
On Tue, Aug 9, 2011 at 10:43 PM, Platonides Platonides@gmail.com wrote:
ResourceLoaderDynamicStyles was purposefully added even though it is not conforming.
AFAIK <meta> tags with made-up name="" attributes are perfectly legal, at least in HTML5.
I think that by removing it you disabled some ResourceLoader feature.
Yes, you will have broken CSS precedence order for site/user CSS.
Roan Kattouw (Catrope)
On 10/08/2011 6:50 AM, Roan Kattouw wrote:
On Tue, Aug 9, 2011 at 10:43 PM, Platonides Platonides@gmail.com wrote:
ResourceLoaderDynamicStyles was purposefully added even though it is not conforming.
AFAIK <meta> tags with made-up name="" attributes are perfectly legal, at least in HTML5.
There's a list: http://wiki.whatwg.org/wiki/MetaExtensions
There's a bug filed in the validator.w3.org bug-tracker about the fact that some items that are on "the list" aren't being accepted:
On 10/08/2011 6:43 AM, Platonides wrote:
Theoretically, the wiki should never generate invalid HTML, but it's not perfect.
It was pretty close to perfect. I only had to comment out two features to get conformance with the (experimental) HTML5 validator. I assumed HTML5 output in my code, although AIUI MediaWiki also supports other varieties of HTML.
You could have also just used the hook.
Ah, but I don't know how to use the hook (yet), do I? :P I'll figure it out and fix up my code.
Thanks for sharing!
No worries.
I did a fast review of it below: You have an HTML injection problem in the extended links.
OK, I'll look into that.
What I don't understand is why you worked so hard to make the extended link use the canonical domain. A relative link would have worked fine.
Yeah, I have relative links if the canonical domain features aren't specified. (I added that as an after-thought, and you might have reviewed my code before I'd posted the update.)
Also, all the $wgCanonical* pieces could have been replaced with setting $wgServer to the chosen domain and a few calls to getFullUrl. (if $wgCanonicalSecureHost != $wgCanonicalHost, $wgServer can be set conditionally)
Ah, I didn't know about $wgServer.
Finally, if you wanted to mark one of them as canonical, IMHO you should have added a <link rel="canonical"> (but be careful with redirects, when MediaWiki adds one itself).
I don't know anything about <link rel="canonical">, nor do I really know how/where to go about adding it. Maybe I'll look into that.
Thanks.
John.
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
On 11-08-09 12:40 PM, John Elliot wrote:
Hi there.
I've made some modifications to MediaWiki 1.17.0 that others might be interested in. I'd be flattered if some or all of them made it into the official MediaWiki release.
Firstly, I've added links to the W3C HTML validation service hosted by MIT. These show up as 'W3C HTML 5.0' icons next to the 'Powered by MediaWiki' icon at the bottom of the page, and link the user to the HTML validation service. I've found having this feature invaluable in helping me to get my wiki text that includes HTML valid. Unfortunately I've had to disable some features of MediaWiki in order for the validation service to pass the generated HTML. You can read about how I modified MediaWiki for W3C HTML validation at:
Please don't edit DefaultSettings.php; Your $wgFooterIcons change could have been done in LocalSettings.php without causing trouble for yourself when you upgrade.
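For instance, the badge could be registered from LocalSettings.php roughly like this ($wgFooterIcons exists as of 1.17; the group name, icon path, and alt text here are placeholders, not John's actual values):

```php
<?php
// LocalSettings.php (sketch). 'validate' is an arbitrary group name;
// 'src' is a placeholder for wherever the badge image actually lives.
$wgFooterIcons['validate']['html5'] = array(
	'src' => "$wgScriptPath/images/valid-html5.png",
	'url' => 'http://validator.w3.org/check?uri=referer',
	'alt' => 'Valid HTML5',
);
```

The uri=referer form asks the validator to check whatever page the user clicked through from, which is what the footer badge wants.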
Secondly, and more importantly I feel, I've added extended links to the 'section edit' links section. In addition to showing an 'edit' link, I include a 'link' link, and a 'top' link. 'top' just links to #top, nothing remarkable there. 'link' provides a link to the canonical URL for that section. My web-site is available via several domain names, www.progclub.org, progclub.org, progclub.info, etc. I have nominated 'www.progclub.org' as the 'canonical' domain name to be used when publishing links, and nominated 'http' (rather than 'https') as the 'canonical' scheme. In order to support canonical section links I had to make a few changes to the settings files, and update the linker, etc., as explained at:
We have a DoEditSectionLink hook, adding those links should be possible without any modifications to core. Which is probably a good idea, because I reorganized the Linker in 1.18 so you're going to run into a conflict when you upgrade and end up undoing or having to re-do your code changes.
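A hedged sketch of what using that hook could look like (parameter names follow docs/hooks.txt for 1.17 from memory, so verify against your version; the htmlspecialchars() calls address the injection problem raised elsewhere in the thread):

```php
<?php
// LocalSettings.php (sketch). Appends 'link' and 'top' anchors after
// the section [edit] link. efExtendEditSectionLink is a made-up name;
// $section is the section's anchor per 1.17's DoEditSectionLink hook.
function efExtendEditSectionLink( $skin, $title, $section, $tooltip, &$result ) {
	$href = htmlspecialchars( $title->getFullURL() ) . '#'
	      . htmlspecialchars( $section );
	$result .= ' | <a href="' . $href . '">link</a>'
	         . ' | <a href="#top">top</a>';
	return true;
}
$wgHooks['DoEditSectionLink'][] = 'efExtendEditSectionLink';
```

Because the whole thing lives in LocalSettings.php, it survives upgrades and the 1.18 Linker reorganization mentioned above.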
Anyway, that's it. Thought some of you might like to know. If you like the ideas but don't like the implementation, then please feel free to BYO implementation. Mine's a little bit less than perfect owing to no i18n support and me not knowing the best file to put my functions in. So, I'd be happy if someone else gave my code a spruce up.
Thanks!
John.
Since Platonides made a reply I won't say too much more. Though I don't really see anything we could add to MediaWiki. Most can already be done with hooks and config and fit more as extensions than default core features. And removing functionality of MediaWiki doesn't really count as adding something to core. I can't comprehend the obsession with killing de-facto standardized patterns just to shove a button on a site to a picky validator that only takes into account a single relevant standard.
On 10/08/2011 6:53 AM, Daniel Friesen wrote:
Please don't edit DefaultSettings.php; Your $wgFooterIcons change could have been done in LocalSettings.php without causing trouble for yourself when you upgrade.
Ah, didn't realise LocalSettings.php was the right place to do my edits.
I guess I was coming from the perspective that I was contributing to core.
We have a DoEditSectionLink hook, adding those links should be possible without any modifications to core.
Yep, got that now. I'll fix that up. Thanks.
Though I don't really see anything we could add to MediaWiki. Most can already be done with hooks and config and fit more as extensions than default core features.
Fair enough.
And removing functionality of MediaWiki doesn't really count as adding something to core.
I kinda felt like I was adding more than I was taking away.
I can't comprehend the obsession with killing de-facto standardized patterns just to shove a button on a site to a picky validator that only takes into account a single relevant standard.
I'd rather not confuse my users who are trying to validate the HTML in their wiki text with noise from the platform. I don't control the validator, but I think it's a useful tool to incorporate on my web-site.
Maybe there should be a setting for 'strict html' which leaves out features that services like validator.w3.org don't accept.
John.
John Elliot wrote:
On 10/08/2011 6:53 AM, Daniel Friesen wrote:
Please don't edit DefaultSettings.php; Your $wgFooterIcons change could have been done in LocalSettings.php without causing trouble for yourself when you upgrade.
Ah, didn't realise LocalSettings.php was the right place to do my edits.
I guess I was coming from the perspective that I was contributing to core.
I assumed so, that's why I didn't mention it.
You could have also just used the hook.
Ah, but I don't know how to use the hook (yet), do I? :P I'll figure it out and fix up my code.
See docs/hooks.txt from your mediawiki folder.
On 10/08/2011 6:53 AM, Daniel Friesen wrote:
I can't comprehend the obsession with killing de-facto standardized patterns just to shove a button on a site to a picky validator that only takes into account a single relevant standard.
One reason would be to help avoid embarrassments like this:
On Tue, Aug 9, 2011 at 6:37 PM, John Elliot jj5@jj5.net wrote:
On 10/08/2011 6:53 AM, Daniel Friesen wrote:
I can't comprehend the obsession with killing de-facto standardized patterns just to shove a button on a site to a picky validator that only takes into account a single relevant standard.
One reason would be to help avoid embarrassments like this:
I guess that's only embarrassing if those sorts of things bother you. I'm with Dan on this one, I've never thought those validator buttons are very nice additions to a website. Remind me a little too much of the "Optimized for NN4" buttons of old.
-Chad
On 10/08/2011 8:47 AM, Chad wrote:
I guess that's only embarrassing if those sorts of things bother you.
I think it would be fair to have it both ways. There are a class of people who are bothered by things of that sort. I'm sure I'm not alone, at least on this issue. :P
I'm with Dan on this one, I've never thought those validator buttons are very nice additions to a website. Remind me a little too much of the "Optimized for NN4" buttons of old.
I think there's a pretty big difference between providing a feature whereby a user can validate the HTML they've just authored and proclaiming browser patronage.
On Tue, Aug 9, 2011 at 6:52 PM, John Elliot jj5@jj5.net wrote:
On 10/08/2011 8:47 AM, Chad wrote:
I guess that's only embarrassing if those sorts of things bother you.
I think it would be fair to have it both ways. There are a class of people who are bothered by things of that sort. I'm sure I'm not alone, at least on this issue. :P
I'm with Dan on this one, I've never thought those validator buttons are very nice additions to a website. Remind me a little too much of the "Optimized for NN4" buttons of old.
I think there's a pretty big difference between providing a feature whereby a user can validate the HTML they've just authored and proclaiming browser patronage.
Most of our users won't know or care whether their pages validate. Those that do presumably know how to use a validator already.
-Chad
On 10/08/2011 8:55 AM, Chad wrote:
Most of our users won't know or care whether their pages validate. Those that do presumably know how to use a validator already.
Isn't an unstated goal of MediaWiki/Wikipedia to decrease ignorance?
If you already know how to use a validator you might still be pleased to have it easily integrated for you.
On 11-08-09 04:01 PM, John Elliot wrote:
On 10/08/2011 8:55 AM, Chad wrote:
Most of our users won't know or care whether their pages validate. Those that do presumably know how to use a validator already.
Isn't an unstated goal of MediaWiki/Wikipedia to decrease ignorance?
If you already know how to use a validator you might still be pleased to have it easily integrated for you.
WikiText != (X)HTML
<br>              ->  <br />
<ul>              ->  nothing (user-generated empty lists are killed)
<script></script> ->  &lt;script&gt;&lt;/script&gt;
<b>a<i>b</b>c</i> ->  <b>a<i>b</i></b><i>c</i>
In the interest of letting you output things like <div>s, and outputting things like lists with requirements that can't be fulfilled with the built-in syntaxes, WikiText includes a SUBSET of HTML, since it's impractical to re-invent the syntax for all of the stuff you may want to output into the content of your page. However we DO NOT just output this. Everything a user inputs is preprocessed and run through an entire parser. That <br>, the handling that decides that the end result of that should be a br tag in the right syntax for the XHTML or HTML5 output, is the WikiText parser, not a html parser.

WikiText is a loose syntax, and by the time (X)HTML is output that entire subset of HTML you have in your page has been parsed by MW's Preprocessor, which has decided what (SG|HT|X)ML-like structures should semantically mean and pieced together the markup. Incorrectly formed html will already have been re-formed to be correct and only interpretable one way.

WikiText is loose so instead of errors, if the parser doesn't like something you inputted it's not going to pass that through raw and let a html validator say it's wrong, it's going to decide it doesn't like it and treat it as plaintext. By the time the parser is done with your input MediaWiki will have already re-formed or rejected any of the errors in your page, and the last thing you'll care about is a html validator; it'll be the way your page looks after MediaWiki has rejected tags and outputted some tags as plaintext.
Frankly... Fixing our abuse of : (definition lists) for talk page indentation is probably a higher priority than getting the few things like use of obsolete valign attributes that will cause a validator to revolt to validate.
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
On 10/08/2011 9:49 AM, Daniel Friesen wrote:
WikiText is loose so instead of errors, if the parser doesn't like something you inputted it's not going to pass that through raw and let a html validator say it's wrong, it's going to decide it doesn't like it and treat it as plaintext.
Well, the validation feature that I added to my web-site helped me catch a bug for you.
If you are outputting WikiText that includes the HTML-like <h1>, <h2>, etc., tags, then make sure you're not outputting them in the context of table content, because that is invalid. In order to turn such WikiText into compliant HTML, the <h1> WikiText should be converted to a <span class="h1"> HTML element, and so forth. The various skins should be updated to do something sensible with the h* classes.
I'll let you know if my HTML validator helps me to easily catch any other bugs like this for you.
We've already established that MediaWiki is broken because it's outputting empty <ul> elements, so maybe you can have a look at fixing that up too.
Thanks.
John.
On 11-08-09 05:03 PM, John Elliot wrote:
On 10/08/2011 9:49 AM, Daniel Friesen wrote:
WikiText is loose so instead of errors, if the parser doesn't like something you inputted it's not going to pass that through raw and let a html validator say it's wrong, it's going to decide it doesn't like it and treat it as plaintext.
Well, the validation feature that I added to my web-site helped me catch a bug for you.
If you are outputting WikiText that includes the HTML-like <h1>, <h2>, etc., tags, then make sure you're not outputting them in the context of table content, because that is invalid. In order to turn such WikiText into compliant HTML, the <h1> WikiText should be converted to a <span class="h1"> HTML element, and so forth. The various skins should be updated to do something sensible with the h* classes.
<h#> tags are not invalid inside of table contents. <tr>'s contents are flow content, and <h#> tags are flow content.
<h#> tags are however invalid inside of <th> tags which are phrasing content. However in that context the correct thing would not necessarily be to turn the h# into a span, but fold it into the header that's already there. Which may or may not be what the user wants. Both of those changes can break a user's site styles.
Would you like to argue for a $wgStricterParsing bool that will sacrifice parser output consistency for things like folding == headers into parent th's (perhaps turn into a span if they explicitly use a <h#> instead of ==), and other things we haven't been able to do to the parser for compat reasons?
I'll let you know if my HTML validator helps me to easily catch any other bugs like this for you.
We've already established that MediaWiki is broken because it's outputting empty <ul> elements, so maybe you can have a look at fixing that up too.
That was a HTML4/XHTML1 rule that's been removed. An empty <ul></ul> is valid HTML5. Wikipedia is just currently set to output an XHTML DOCTYPE and well-formed XML output because of some bots that still screen-scrape content, whose developers were given a second chance to fix them to use the api before HTML5 is turned on permanently.
Thanks.
John.
On Tue, Aug 9, 2011 at 9:19 PM, Daniel Friesen lists@nadir-seen-fire.com wrote:
<h#> tags are not invalid inside of table contents. <tr>'s contents are flow content, and <h#> tags are flow content.
You mean <td>, not <tr>.
<h#> tags are however invalid inside of <th> tags which are phrasing content.
There's a bug open asking to change this, and allow flow content inside <th>:
On 10/08/2011 11:19 AM, Daniel Friesen wrote:
Would you like to argue for a $wgStricterParsing bool that will sacrifice parser output consistency for things like folding == headers into parent th's (perhaps turn into a span if they explicitly use a <h#> instead of ==), and other things we haven't been able to do to the parser for compat reasons?
Sure. I'll argue for that. :)
That was a HTML4/XHTML1 rule that's been removed. An empty <ul></ul> is valid HTML5.
I wasn't sure if the empty <ul> was valid HTML5, or if the validator wasn't strict enough about it yet. In any event, I'm happy to take your word for it. If XHTML support is being deprecated, I won't argue for fixing the empty <ul> elements. As I mentioned, the only two features that were a concern for me in providing valid HTML(5) were the two meta elements, and it's been suggested that each of these can be removed.
I still think that having 'link' and 'top' anchors on section headings next to edit links would be a good thing to have on by default in core. There is provision for disabling them, and having them available by default is a sensible and productive course of action in my opinion. It's handy to be able to right-click on the 'link' anchor and copy the canonical URL of the section you're looking at, and it's handy to be able to jump back to the top of the page (indeed, I discovered that the #top functionality was already half implemented, so presumably there is a plan for something like this).
On 11-08-09 06:50 PM, John Elliot wrote:
On 10/08/2011 11:19 AM, Daniel Friesen wrote:
Would you like to argue for a $wgStricterParsing bool that will sacrifice parser output consistency for things like folding == headers into parent th's (perhaps turn into a span if they explicitly use a <h#> instead of ==), and other things we haven't been able to do to the parser for compat reasons?
Sure. I'll argue for that. :)
Oh right, one of the 'other' things I had in mind but couldn't remember while writing was converting things like valign, width/height, etc... into css styles.
Though given the fact that block content in a th in XHTML was valid and is essentially a regression in the unfinished html5 and there is an open bug report on it I'll opt for the stance that folding headers there should wait till we see what the decision on that is.
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
On 10/08/2011 12:19 PM, Daniel Friesen wrote:
Oh right, one of the 'other' things I had in mind but couldn't remember while writing was converting things like valign, width/height, etc... into css styles.
That'd be nice.
Though given the fact that block content in a th in XHTML was valid and is essentially a regression in the unfinished html5 and there is an open bug report on it I'll opt for the stance that folding headers there should wait till we see what the decision on that is.
I'm happy with that. As I tried to point out in my first post, the whole HTML validation icon was just a bit of fun (it pleased at least one person [1] :P), and the feature I actually care about is the extended section edit links.
On Tue, Aug 9, 2011 at 9:50 PM, John Elliot jj5@jj5.net wrote:
I wasn't sure if the empty <ul> was valid HTML5, or if the validator wasn't strict enough about it yet. In any event, I'm happy to take your word for it.
You don't have to:
"Content model: Zero or more li elements." http://www.whatwg.org/specs/web-apps/current-work/multipage/grouping-content...
The HTML5 validator is generally very strict, much more so than previous validators.
I still think that having 'link' and 'top' anchors on section headings next to edit links would be a good thing to have on by default in core.
I don't -- it's a lot of extra clutter for not much gain. For "top", users can hit Home or such. For "link", you can go to the top and then click in the ToC in almost all cases. But I'm not involved in UI decision-making at all at this point.
John Elliot wrote:
That was a HTML4/XHTML1 rule that's been removed. An empty <ul></ul> is valid HTML5.
I wasn't sure if the empty <ul> was valid HTML5, or if the validator wasn't strict enough about it yet. In any event, I'm happy to take your word for it. If XHTML support is being deprecated, I won't argue for fixing the empty <ul> elements.
Those <ul> are an old discussion https://bugzilla.wikimedia.org/show_bug.cgi?id=24500
On 11-08-09 03:37 PM, John Elliot wrote:
On 10/08/2011 6:53 AM, Daniel Friesen wrote:
I can't comprehend the obsession with killing de-facto standardized patterns just to shove a button on a site to a picky validator that only takes into account a single relevant standard.
One reason would be to help avoid embarrassments like this:
Embarrassing? You do realize that the validator is complaining that it sees `<ul></ul>`, right? I can't even find the spot in the HTML4 or XHTML1 spec where it says that a perfectly fine marked up list is invalid if it doesn't contain any items.
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
* Daniel Friesen wrote:
Embarrassing? You do realize that the validator is complaining that it sees `<ul></ul>`, right? I can't even find the spot in the HTML4 or XHTML1 spec where it says that a perfectly fine marked up list is invalid if it doesn't contain any items.
http://www.w3.org/TR/html4/struct/lists.html#edef-UL defines it as taking `(LI)+` as content, where `+` means "one or more".
On 10/08/2011 9:08 AM, Daniel Friesen wrote:
On 11-08-09 03:37 PM, John Elliot wrote:
Embarrassing? You do realize that the validator is complaining that it sees `<ul></ul>`, right?
I hadn't realised that.
I can't even find the spot in the HTML4 or XHTML1 spec where it says that a perfectly fine marked up list is invalid if it doesn't contain any items.
Well you could save yourself some trouble and let validator.w3.org guide you in that matter.
On 11-08-09 04:14 PM, John Elliot wrote:
On 10/08/2011 9:08 AM, Daniel Friesen wrote:
I can't even find the spot in the HTML4 or XHTML1 spec where it says that a perfectly fine marked up list is invalid if it doesn't contain any items.
Well you could save yourself some trouble and let validator.w3.org guide you in that matter.
Guide? You mean link to that information so I have the actual words of the spec that browsers are supposed to implement? It doesn't give any link. You mean take the validator's interpretation as the absolute and unequivocal authority that it's wrong, without seeing the information in the actual spec saying it's wrong? Like hell... If I accepted a validator's narrow (potentially incorrectly implemented) interpretation of something I wouldn't be using CSS vendor prefixes like -moz-border-radius, since last time I checked the css validator says they're wrong, even though vendor prefixes are a valid part of the spec.
That's actually a pretty good direction to take explaining the fallacy of validators and badges for them. Our css includes output like:

.foo {
    background-image: url(data:...);
    background-image: url(...) !ie;
}

Strict interpretation wise, that's invalid css because !ie is not valid inside the background-image. Are we going to remove that? Hell no; if we did, versions of IE people are still using would stop displaying background images because they can't handle data uris.
Likewise with HTML what matters is NOT that a strict validator says it's ok, but that it's well-formed so that all browsers have the same interpretation of it, and conforms to understandable patterns either de-facto or detailed in external specs (like the RSD spec, Universal Edit Button, etc...). eg: EditURI is part of RSD [Really Simple Discovery], it's a standard way of letting software discover the api endpoint(s) for a page. In the future not having that could mean that a tool one of your users may use could break because it can't find the api.
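For context, the head link in question looks roughly like this on a stock install (the host and script path vary per wiki and are placeholders here):

```html
<link rel="EditURI" type="application/rsd+xml"
      href="http://example.org/w/api.php?action=rsd" />
```

Tools that implement RSD fetch the href to discover the wiki's api endpoint, which is the breakage Daniel is describing.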
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
On 10/08/2011 10:16 AM, Daniel Friesen wrote:
On 11-08-09 04:14 PM, John Elliot wrote:
On 10/08/2011 9:08 AM, Daniel Friesen wrote:
I can't even find the spot in the HTML4 or XHTML1 spec where it says that a perfectly fine marked up list is invalid if it doesn't contain any items.
Well you could save yourself some trouble and let validator.w3.org guide you in that matter.
Guide? You mean link to that information so I have the actual words of the spec that browsers are supposed to implement? It doesn't give any link.
Why are you telling me?
http://validator.w3.org/feedback.html
You mean take the validator's interpretation as the absolute and unequivocal authority that it's wrong, without seeing the information in the actual spec saying it's wrong? Like hell... If I accepted a validator's narrow (potentially incorrectly implemented) interpretation of something I wouldn't be using CSS vendor prefixes like -moz-border-radius, since last time I checked the css validator says they're wrong, even though vendor prefixes are a valid part of the spec.
If you'd seen Pirates of the Caribbean, you'd understand that there's a difference between rules and guidelines.
Of course some of us take the guidelines rather seriously.
That's actually a pretty good direction to take explaining the fallacy of validators and badges for them. Our css includes output like: .foo { background-image: url(data:...); background-image: url(...) !ie;} Strict interpretation wise, that's invalid css because !ie is not valid inside the background-image. Are we going to remove that? Hell no, if we did versions of IE people are still using would stop displaying background images because they can't handle data uris.
Again, I think it's reasonable to engineer a system where policy decisions such as this are at the option of the user.
I'm a user of MediaWiki, and I care more about strict compliance with open-standards than I do about supporting antiquated browsers, so I should be able to take the decision to not support that.
Likewise with HTML what matters is NOT that a strict validator says it's ok, but that it's well-formed so that all browsers have the same interpretation of it, and conforms to understandable patterns either de-facto or detailed in external specs (like the RSD spec, Universal Edit Button, etc...).
Again, there will be a class of MediaWiki users who have a different opinion.
eg: EditURI is part of RSD [Really Simple Discovery], it's a standard way of letting software discover the api endpoint(s) for a page. In the future not having that could mean that a tool one of your users may use could break because it can't find the api.
There is a process for this de-facto standard to be integrated with web standards, and by not supporting it until it has gone through that process you provide incentives for its proponents to ensure they do things in an amicable and collaborative fashion.
On Tue, Aug 9, 2011 at 3:40 PM, John Elliot jj5@jj5.net wrote:
Unfortunately I've had to disable some features of MediaWiki in order for the validation service to pass the generated HTML. You can read about how I modified MediaWiki for W3C HTML validation at:
You can skip one of the code edits by setting $wgUniversalEditButton = false; in LocalSettings.php. We could register rel=edit, but frankly I think we should just drop the functionality. It's useless <head> clutter; there are no plausible real-world use-cases that are important enough to justify it. Same for the RSD link, unless that's used for something I don't know about.
On Tue, Aug 9, 2011 at 4:50 PM, Roan Kattouw roan.kattouw@gmail.com wrote:
AFAIK <meta> tags with made-up name="" attributes are perfectly legal, at least in HTML5.
In HTML4, yes. In HTML5, no.
""" Conformance checkers must use the information given on the WHATWG Wiki MetaExtensions page to establish if a value is allowed or not: values defined in this specification or marked as "proposed" or "ratified" must be accepted, whereas values marked as "discontinued" or not listed in either this specification or on the aforementioned page must be rejected as invalid. Conformance checkers may cache this information (e.g. for performance reasons or to avoid the use of unreliable network connectivity). """ http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#t...
However, all you have to do is register them on the wiki and then ask the guy who runs the validator (Henri Sivonen) to update the validator's list.
On Tue, Aug 9, 2011 at 6:25 PM, John Elliot jj5@jj5.net wrote:
There's a bug filed in the validator.w3.org bug-tracker about the fact that some items that are on "the list" aren't being accepted:
There's not much point in filing bugs in the HTML5 validator against the W3C's validator component, I don't think. The HTML5 option on validator.w3.org just secretly asks validator.nu, I think just sending an HTTP request to it. Probably the easiest way to get the new values added is to ask hsivonen on freenode.
wikitech-l@lists.wikimedia.org