Why are so few community-developed mediawiki extensions used by the Foundation?
Why do developers have such priviledged access to the source code, and the community such little input?
Why must the community 'vote' on extensions such as Semantic MediaWiki, and yet the developers can implement any feature they like, any way they like it?
Why does the Foundation need 1 million for usability when amazing tools continue to be ignored and untested?
Why has the Foundation gone ahead and approved the hire of several employees for usability design, when the community has had almost zero input into what that design should be?
Why is this tool not being tested on Wikipedia, right now? http://wiki.ontoprise.com/ontoprisewiki/index.php/Image:Advanced_ontology_br...
2009/1/9 Brian Brian.Mingus@colorado.edu:
Why are so few community-developed mediawiki extensions used by the Foundation?
Why do developers have such priviledged access to the source code, and the community such little input?
Why must the community 'vote' on extensions such as Semantic MediaWiki, and yet the developers can implement any feature they like, any way they like it?
Why does the Foundation need 1 million for usability when amazing tools continue to be ignored and untested?
Why has the Foundation gone ahead and approved the hire of several employees for usability design, when the community has had almost zero input into what that design should be?
Why is this tool not being tested on Wikipedia, right now? http://wiki.ontoprise.com/ontoprisewiki/index.php/Image:Advanced_ontology_br...
Well... Maybe just because software development requires at least some basic knowledge of programming, and cannot be performed by voting only? I guess some feedback from Wikipedia community is welcome - but quite obviously programmers cannot work in a manner of discussing and voting every line of code they are assumed to produce...
On Fri, Jan 9, 2009 at 5:00 PM, Brian Brian.Mingus@colorado.edu wrote:
Why are so few community-developed mediawiki extensions used by the Foundation?
It's an issue of scale. Do you have any idea how big the foundation projects are? Inefficient code could cripple our donation-supported infrastructure. It's not that people don't want to use the newest and coolest toys, it's that in order to keep the sites running at all the foundation really needs to aim for a functional level of minimalism.
Why do developers have such priviledged access to the source code, and the community such little input?
In my experience, this is the way that most open source projects operate. You can download and play with the source code to your heart's content, but typically only a handful of "committers" have access to modify the code. Average joe user like you and me can submit patches if we see fit. Through patches we could build trust among the developers and eventually become committers. I would be very interested to hear about other successful open source projects that didn't use any kinds of safeguards like this.
Why must the community 'vote' on extensions such as Semantic MediaWiki, and yet the developers can implement any feature they like, any way they like it?
well, the core software does improve and grow through normal development effort. We wouldn't want a situation where improvements could not be implemented without community approval. Foundation projects run on MediaWiki software, and updates to the software are reflected in the projects. It's not like they're installing things as big and pervasive as Semantic MediaWiki without community approval.
Why does the Foundation need 1 million for usability when amazing tools continue to be ignored and untested?
And who says that money isn't going to be used to test existing tools? Without money, our developers are all volunteers, and they will do the testing they want to do when they have time to do it. Let me ask, are you doing any testing of potentially useful MediaWiki extensions yourself?
Why has the Foundation gone ahead and approved the hire of several employees for usability design, when the community has had almost zero input into what that design should be?
Whatever the design turns out to be, I'm sure we're going to need developers to implement it. Plus, there are tons of existing usability requests at bugzilla, and not enough development hands to even implement the things the community has already asked for. Plus, there are all those cool pre-existing community-developed extensions that need to be tested by developers.
Why is this tool not being tested on Wikipedia, right now? http://wiki.ontoprise.com/ontoprisewiki/index.php/Image:Advanced_ontology_br...
Why would it be, has the community requested it? Again, it's economy of scale: Wikipedia is too huge to serve as a beta test for all sorts of random extensions. A smaller website like Wikibooks would be a much better place to do extension testing, and in fact has been used in the past as a beta test site for new extensions. You can't load just any software onto Wikipedia and expect the servers to handle it well. Wikipedia is simply too huge for that kind of avant garde management.
--Andrew Whitworth
Let me ask, are you doing any testing of potentially useful MediaWiki
extensions yourself?
I run a dozen wikis, a few of them quite large. And I am a software developer. I have put to use pretty much every significant extension to MediaWiki, and I have pushed SMW to its limits.
That it currently has limits is not, to me, a reason to ignore it.
Here is one of my public wikis, and the software I work on: http://grey.colorado.edu/emergent
I believe that if a core developer were to become excited about an extension such as SMW, they could have already scaled it to Wikipedia size. Why should we have to wait for them to become excited about something before it gets implemented? The community deserves a larger voice than that. There needs to be more rational oversight.
Why do developers have such priviledged access to the source code, and the community such little input?
In my experience, this is the way that most open source projects operate. You can download and play with the source code to your heart's content, but typically only a handful of "committers" have access to modify the code. Average joe user like you and me can submit patches if we see fit. Through patches we could build trust among the developers and eventually become committers. I would be very interested to hear about other successful open source projects that didn't use any kinds of safeguards like this.
I'm an average joe and I have commit access. I rarely use it, but after I'd committed a couple of minor bugfixes I asked Brion what was required to get commit access and in response he pretty much just gave it to me. All patches are reviewed before being implemented on the live site, of course.
In order to solicit community feedback on this very important issue, I suggest the Foundation put up a multi-language banner on all Wikipedia's soliciting input via a survey.
*How can Wikipedia be more usable?*
I also suggest the Foundation put up a We're Hiring banner. In tough global economic conditions, and for the amount of money the Foundation has been given, they could afford to hire 20 best in class developers who are otherwise out of work.
800,000 / 30,000 = 26. Is that not a fair wage? If the Foundation only plans to hire three developers to work on this project then it must be spending the money on something else entirely.
The community also deserves a usability lab, and a full assessment of how Semantic MediaWiki, Semantic Forms, and Project Halo could contribute to usability. I predict they will find that, while they do not cover every problem, the main issue that needs to be worked on is scaling them. This is something that the core developers are experts at. They are not experts on usability.
I would like to make clear that I believe the usability issue has largely been solved, and the community is just waiting for the core developers, who have kept a tight lock and key on the source code, to recognize that.
On Fri, Jan 9, 2009 at 3:00 PM, Brian Brian.Mingus@colorado.edu wrote:
Why are so few community-developed mediawiki extensions used by the Foundation?
Why do developers have such priviledged access to the source code, and the community such little input?
Why must the community 'vote' on extensions such as Semantic MediaWiki, and yet the developers can implement any feature they like, any way they like it?
Why does the Foundation need 1 million for usability when amazing tools continue to be ignored and untested?
Why has the Foundation gone ahead and approved the hire of several employees for usability design, when the community has had almost zero input into what that design should be?
Why is this tool not being tested on Wikipedia, right now? http://wiki.ontoprise.com/ontoprisewiki/index.php/Image:Advanced_ontology_br...
-- You have successfully failed!
On Fri, Jan 9, 2009 at 5:33 PM, Brian Brian.Mingus@colorado.edu wrote:
In order to solicit community feedback on this very important issue, I suggest the Foundation put up a multi-language banner on all Wikipedia's soliciting input via a survey.
Are you willing to make the translations and the banner? Are you willing to make the survey, administer it, and interpret results? Most of the "Foundation" are volunteers who can't put multilingual banners all over the place every time somebody would like to know some vague something about the software.
*How can Wikipedia be more usable?*
I also suggest the Foundation put up a We're Hiring banner. In tough global economic conditions, and for the amount of money the Foundation has been given, they could afford to hire 20 best in class developers who are otherwise out of work.
800,000 / 30,000 = 26. Is that not a fair wage? If the Foundation only plans to hire three developers to work on this project then it must be spending the money on something else entirely.
First off, I'm a professional software developer and I would not work for $30K. For 800K/year, you're looking at more like 10-15 developers at the most, and that's under the assumption that you're only hiring them for a single year. You're going to spend a lot of up-front time training them, so the better investment by far is 3-5 developers for several years. This is not to mention cost increases for hardware and hosting that will come from adding more software to the backend and a "prettier" frontend.
The community also deserves a usability lab, and a full assessment of how Semantic MediaWiki, Semantic Forms, and Project Halo could contribute to usability. I predict they will find that, while they do not cover every problem, the main issue that needs to be worked on is scaling them. This is something that the core developers are experts at. They are not experts on usability.
If our core developers are not experts in usability (and I wouldn't necessarily agree with that point anyway), then it makes sense to hire people who are good with usability. If you look at the job postings, you'll see that it's exactly what is intended. Setting up some kind of "usability lab" has already been done, see https://en.labs.wikimedia.org. This is the exact clearinghouse where the Collections extension and FlaggedRevs extension were tested.
I would like to make clear that I believe the usability issue has largely been solved, and the community is just waiting for the core developers, who have kept a tight lock and key on the source code, to recognize that.
The issue most certainly hasn't been solved. It's not just about finding pretty tools, but about scaling them to fit Wikipedia (which is no trivial task), and ensuring that they meet the needs of our users. These things don't happen by insulting our developers or making demands on a mailing list alone.
--Andrew Whitworth
2009/1/9 Brian Brian.Mingus@colorado.edu:
800,000 / 30,000 = 26. Is that not a fair wage? If the Foundation only plans to hire three developers to work on this project then it must be spending the money on something else entirely.
First of all, we're hiring three people because we already have two. We've hired Naoko, and we will allocate Trevor full-time to the project.
Secondly, base salaries if we hire locally (which we do, in this case), are obviously much higher. See payscale.com and other sites to get an idea of salaries in various parts of the world. That does not include recruitment, benefits, equipment, office space and supplies, staff development and travel, administrative overhead such as payroll, etc. Plus the other costs we've budgeted, such as research costs for usability tests, allocation of experienced on-staff developers to support the project, etc.
Thirdly, if you were to hire remotely at lower salaries, you'd simply incur much of the cost you'd save in salaries in other ways, especially management, oversight, and travel. This is especially true for a project of this complexity where you're not just handing some set of specs over to an outsourcing firm. (You of all people, advocating for a complex tool like Semantic MediaWiki, should appreciate that.)
There are isolated projects that can be managed well by giving them to experienced remote developers. For a project of this scope, complexity and importance, I believe it's critical to have a local team that can fully focus on the project and collaborate with the core staff in San Francisco on an as-needed basis.
Erik I am glad you are still around and keeping an eye on things.
I believe that, with the audience the Foundation has access to, it could save a lot of money by hiring people who love Wikipedia and want to work for it. I don't think its true that the only way to get seasoned developers is to wave a large carrot (aka $$$) in front of their face. I believe there exist experienced developers who would gladly give a year of their life, working at a lower wage, to work on Wikipedia.
The only way to access these people is to ask them directly - with a We're Hiring banner, for example.
On Fri, Jan 9, 2009 at 4:14 PM, Erik Moeller erik@wikimedia.org wrote:
2009/1/9 Brian Brian.Mingus@colorado.edu:
800,000 / 30,000 = 26. Is that not a fair wage? If the Foundation only
plans
to hire three developers to work on this project then it must be spending the money on something else entirely.
First of all, we're hiring three people because we already have two. We've hired Naoko, and we will allocate Trevor full-time to the project.
Secondly, base salaries if we hire locally (which we do, in this case), are obviously much higher. See payscale.com and other sites to get an idea of salaries in various parts of the world. That does not include recruitment, benefits, equipment, office space and supplies, staff development and travel, administrative overhead such as payroll, etc. Plus the other costs we've budgeted, such as research costs for usability tests, allocation of experienced on-staff developers to support the project, etc.
Thirdly, if you were to hire remotely at lower salaries, you'd simply incur much of the cost you'd save in salaries in other ways, especially management, oversight, and travel. This is especially true for a project of this complexity where you're not just handing some set of specs over to an outsourcing firm. (You of all people, advocating for a complex tool like Semantic MediaWiki, should appreciate that.)
There are isolated projects that can be managed well by giving them to experienced remote developers. For a project of this scope, complexity and importance, I believe it's critical to have a local team that can fully focus on the project and collaborate with the core staff in San Francisco on an as-needed basis. -- Erik Möller Deputy Director, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
2009/1/9 Brian Brian.Mingus@colorado.edu:
Erik I am glad you are still around and keeping an eye on things.
Thank you, I appreciate that. :-)
I believe that, with the audience the Foundation has access to, it could save a lot of money by hiring people who love Wikipedia and want to work for it. I don't think its true that the only way to get seasoned developers is to wave a large carrot (aka $$$) in front of their face. I believe there exist experienced developers who would gladly give a year of their life, working at a lower wage, to work on Wikipedia.
That is evidently true. In fact, everyone we're hiring accepts that they are going to be paid under market rates. We are also working with remote contractors on specific projects. If you are interested in working as a remote contractor, or you know brilliant people who would be, make a pitch to jobs at wikimedia dot org. We have put a general note on the job openings page that we appreciate hearing from people who are passionate and interested throughout the year, regardless of current openings.
As for advertising this extremely broadly, I think that would be doing a disservice to serious candidates as we simply would be drowning in applications. (Sometimes, we already are.) And, having reviewed CVs for almost every position that we've hired for in 2008, I can tell you that arriving at a reasonable shortlist in a fair and accurate fashion is a lot of work - and with the exception of some sanity filtering, it's not a task you can easily give to someone else. We might try it regardless, but only if we have a process in place to deal with the predictable level of interest.
Well, I believe my concerns have been adequately addressed. I have only one last point of input on usability (for now ;-). I believe it my be the case that the often bizarre idiosyncrasies of MediaWiki were implemented because the developers were spread out around the world, in isolation, communicating only over IRC and sometimes e-mail. I know there are yearly developer spurts at Wikimania, but I do not know about the daily development environment at the offices, and whether development continues in a largely isolated fashion. It seems that it would be prudent to accept consulting advice from Ward Cunningham, as he not only invented wiki collaboration, but revolutionized programmer collaboration with Extreme Programming. It is not prudent to allow developers to collaborate in any manner of their choosing, as it will often be far below what is optimal. If you want to spend the money wisely, and avoid the common pitfalls prevalent in MediaWiki's fragile design, you must ensure the developers are working side by side and following certain rules.
Cheers,
On Fri, Jan 9, 2009 at 4:30 PM, Erik Moeller erik@wikimedia.org wrote:
2009/1/9 Brian Brian.Mingus@colorado.edu:
Erik I am glad you are still around and keeping an eye on things.
Thank you, I appreciate that. :-)
I believe that, with the audience the Foundation has access to, it could save a lot of money by hiring people who love Wikipedia and want to work
for
it. I don't think its true that the only way to get seasoned developers
is
to wave a large carrot (aka $$$) in front of their face. I believe there exist experienced developers who would gladly give a year of their life, working at a lower wage, to work on Wikipedia.
That is evidently true. In fact, everyone we're hiring accepts that they are going to be paid under market rates. We are also working with remote contractors on specific projects. If you are interested in working as a remote contractor, or you know brilliant people who would be, make a pitch to jobs at wikimedia dot org. We have put a general note on the job openings page that we appreciate hearing from people who are passionate and interested throughout the year, regardless of current openings.
As for advertising this extremely broadly, I think that would be doing a disservice to serious candidates as we simply would be drowning in applications. (Sometimes, we already are.) And, having reviewed CVs for almost every position that we've hired for in 2008, I can tell you that arriving at a reasonable shortlist in a fair and accurate fashion is a lot of work - and with the exception of some sanity filtering, it's not a task you can easily give to someone else. We might try it regardless, but only if we have a process in place to deal with the predictable level of interest. -- Erik Möller Deputy Director, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Hello, Brian.
On Fri, Jan 9, 2009 at 2:00 PM, Brian Brian.Mingus@colorado.edu wrote:
Why are so few community-developed mediawiki extensions used by the Foundation?
The plan for Usability Initiative includes intensive reviews of MediaWiki extensions which are already available. Then we will enable candidate extensions with some set of test data in the test and lab environment. Community involvement is essential in validating which extensions to adopt.
Usability test is targeted for users with no or little experience in editing Wikipedia and the goal is to identify interactive obstacles. The proposed solution will be tested for feedback similar way as testing existing extensions.
I also believe it is important to iterate the process above so that we can reach out to as many as possible.
The project page is in the plan and once it is up, I hope to exchange and share ideas with the community.
Best,
- Naoko
Thank you Naoko.
How can we be sure the money will be spent wisely?
Obama recently appointed a Chief Performance Officer. Do you have someone providing similar oversight?
On Fri, Jan 9, 2009 at 3:51 PM, Naoko Komura nkomura@gmail.com wrote:
Hello, Brian.
On Fri, Jan 9, 2009 at 2:00 PM, Brian Brian.Mingus@colorado.edu wrote:
Why are so few community-developed mediawiki extensions used by the Foundation?
The plan for Usability Initiative includes intensive reviews of MediaWiki extensions which are already available. Then we will enable candidate extensions with some set of test data in the test and lab environment. Community involvement is essential in validating which extensions to adopt.
Usability test is targeted for users with no or little experience in editing Wikipedia and the goal is to identify interactive obstacles. The proposed solution will be tested for feedback similar way as testing existing extensions.
I also believe it is important to iterate the process above so that we can reach out to as many as possible.
The project page is in the plan and once it is up, I hope to exchange and share ideas with the community.
Best,
- Naoko
-- Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate _______________________________________________ foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
As you surely know, the work of all staff, including 'how they spend money' is continuously assessed by the ED who in turn is evaluated by the board. There is also 3rd party financial audit. What are you hinting at?
Erik/Naoko: does the Stanton grant include a condition for (external) specific program evaluation?
On 1/10/09, Brian Brian.Mingus@colorado.edu wrote:
Thank you Naoko.
How can we be sure the money will be spent wisely?
Obama recently appointed a Chief Performance Officer. Do you have someone providing similar oversight?
On Fri, Jan 9, 2009 at 3:51 PM, Naoko Komura nkomura@gmail.com wrote:
Hello, Brian.
On Fri, Jan 9, 2009 at 2:00 PM, Brian Brian.Mingus@colorado.edu wrote:
Why are so few community-developed mediawiki extensions used by the Foundation?
The plan for Usability Initiative includes intensive reviews of MediaWiki extensions which are already available. Then we will enable candidate extensions with some set of test data in the test and lab environment. Community involvement is essential in validating which extensions to adopt.
Usability test is targeted for users with no or little experience in editing Wikipedia and the goal is to identify interactive obstacles. The proposed solution will be tested for feedback similar way as testing existing extensions.
I also believe it is important to iterate the process above so that we can reach out to as many as possible.
The project page is in the plan and once it is up, I hope to exchange and share ideas with the community.
Best,
- Naoko
-- Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate _______________________________________________ foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
-- You have successfully failed! _______________________________________________ foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
On Fri, Jan 9, 2009 at 3:19 PM, mbimmler@gmail.com wrote:
<snip>
Erik/Naoko: does the Stanton grant include a condition for (external) specific program evaluation?
Yes, we are required to submit a quarterly report to the Stanton Foundation to inform the project progress and status which includes financial report.
Best,
- Naoko
Hi all;
I would like to know how is going to be rated the success of this operation/project. Do you hope a big wave of new users? More edits per day? To improve the visits/edits ratio? What are your wishes and your realistic predictions?
Regards, emijrp
Naoko Komura escribió:
On Fri, Jan 9, 2009 at 3:19 PM, mbimmler@gmail.com wrote:
<snip>
Erik/Naoko: does the Stanton grant include a condition for (external) specific program evaluation?
Yes, we are required to submit a quarterly report to the Stanton Foundation to inform the project progress and status which includes financial report.
Best,
- Naoko
Hoi, Usability research done by UNICEF on MediaWiki, by English language people in Tanzania had 100% of their test subjects failing to create a new article. This research is repeatable, and it is easy to improve on this because UNICEF created extensions that will be part of the initial research. The Uniwiki extensions are being tested elsewhere and bugfixes to make it work with the latest MediaWiki have been supplied.
People like myself positively HATE editing English language Wikipedia because of all the crap that is considered "must have" like citations, info boxes et al. Wikia has worked on software that separates the crap from the text. This means that it will become a lot easier to edit Wikipedia text.
When the usability of MediaWiki is improved, people will be encouraged to contribute to MediaWiki projects. It will be really hard to make the convoluted policies of the different Wikipedias clear. Many policies exist that on the face of it makes sense. However, when you combine them all, you get a mess that prevents people from contributing. I recently declined to write an en.wp article because I am hesitant because of all this.
Realistically, in order to make MediaWiki and Wikipedia more usable, there are two aspects. There are technical aspects that will make it easy to contribute and there are the community aspects. For the Stanton project to do the technical aspects is a no brainer; obviously they will experiment with all the technical bits and bobs and make a difference. To get some traction on the community aspects, it takes a community that acknowledges that cleanup is needed.
When both technical and community issues are addressed, many more people will edit but be realistic, the biggest difference will be in the other Wikipedias because that is where the growth still has to happen. This is in turn dependent on the quality of the Internationalisation that is part of the Stanton project and the Localisation that is done at Betawiki. Thanks, GerardM
2009/1/10 emijrp emijrp@gmail.com
Hi all;
I would like to know how is going to be rated the success of this operation/project. Do you hope a big wave of new users? More edits per day? To improve the visits/edits ratio? What are your wishes and your realistic predictions?
Regards, emijrp
Naoko Komura escribió:
On Fri, Jan 9, 2009 at 3:19 PM, mbimmler@gmail.com wrote:
<snip>
Erik/Naoko: does the Stanton grant include a condition for (external) specific program evaluation?
Yes, we are required to submit a quarterly report to the Stanton
Foundation
to inform the project progress and status which includes financial
report.
Best,
- Naoko
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
On Sat, Jan 10, 2009 at 4:03 AM, Gerard Meijssen gerard.meijssen@gmail.comwrote:
[snip] When the usability of MediaWiki is improved, people will be encouraged to contribute to MediaWiki projects. It will be really hard to make the convoluted policies of the different Wikipedias clear. Many policies exist that on the face of it makes sense. However, when you combine them all, you get a mess that prevents people from contributing. I recently declined to write an en.wp article because I am hesitant because of all this.
Realistically, in order to make MediaWiki and Wikipedia more usable, there are two aspects. There are technical aspects that will make it easy to contribute and there are the community aspects. For the Stanton project to do the technical aspects is a no brainer; obviously they will experiment with all the technical bits and bobs and make a difference. To get some traction on the community aspects, it takes a community that acknowledges that cleanup is needed.
[/snip]
For once Gerard, you and I agree 100%. I got into a similar discussion with Andrew Lih and Liam Wyatt on a recent episode of Wikipedia Weekly. And we came to the same conclusion: usability in Wikimedia projects is hindered by two things. Firstly, the usability issues within MediaWiki itself. The second being community issues.
As you say above and I said in the podcast: the usability grant will hopefully improve the usability of MediaWiki itself, and I look forward to seeing their work, for both the benefit of WMF projects and outside users of Mediawiki. However, no sum of money can solve the underlying community issues. That's for the wikis to identify and fix themselves.
-Chad
2009/1/9 Brian Brian.Mingus@colorado.edu:
Why are so few community-developed mediawiki extensions used by the Foundation?
Most of them aren't applicable (YouTube, Google Maps extensions, etc.) or not tested to the scale of Wikipedia and would therefore require significant investments of resources to be ready for deployment.
Why do developers have such priviledged access to the source code, and the community such little input?
I disagree with the underlying premises. There are more than 150 committers to the MediaWiki SVN. Commit access is granted liberally. Code is routinely updated and deployed in a very open fashion. BugZilla is filled with thousands of community requests. The backlog of requests is now more aggressively processed.
Why must the community 'vote' on extensions such as Semantic MediaWiki, and yet the developers can implement any feature they like, any way they like it?
I disagree with the underlying premises. For example, developers don't deploy any feature we/they like. Features which are likely to be disruptive are only deployed after community consultation. An example of this is the FlaggedRevs extension, for which a clear community process has been defined.
Why does the Foundation need 1 million for usability when amazing tools continue to be ignored and untested?
In part, to stop ignoring and start testing them.
Why has the Foundation gone ahead and approved the hire of several employees for usability design, when the community has had almost zero input into what that design should be?
In part, to be able to accommodate such input.
Why is this tool not being tested on Wikipedia, right now? http://wiki.ontoprise.com/ontoprisewiki/index.php/Image:Advanced_ontology_br...
SMW is a hugely complex tool. Along with other approaches to handle information architecture, it merits examination. Such examination will happen as resources for it become available. The priority for obtaining such resources will compete with other priorities such as usability, internationalization support, rich media support, etc.
Erik,
I am skeptical of the current development process. That is because it has led to the current parser, which is not a proper parser at all, and includes horrifying syntax.
The current usability issue is widespread and goes to MediaWiki's core. Developers should not have that large of a voice in usability, or you get what we have now.
We do not even have a parser. I am sure you know that MediaWiki does not actually parse. It is 5000 lines worth of regexes, for the most part.
In order to solve usability, even for new users, I believe that you must write a new parser from scratch.
Are you prepared to do that?
On Fri, Jan 9, 2009 at 4:05 PM, Erik Moeller erik@wikimedia.org wrote:
2009/1/9 Brian Brian.Mingus@colorado.edu:
Why are so few community-developed mediawiki extensions used by the Foundation?
Most of them aren't applicable (YouTube, Google Maps extensions, etc.) or not tested to the scale of Wikipedia and would therefore require significant investments of resources to be ready for deployment.
Why do developers have such priviledged access to the source code, and
the
community such little input?
I disagree with the underlying premises. There are more than 150 committers to the MediaWiki SVN. Commit access is granted liberally. Code is routinely updated and deployed in a very open fashion. BugZilla is filled with thousands of community requests. The backlog of requests is now more aggressively processed.
Why must the community 'vote' on extensions such as Semantic MediaWiki,
and
yet the developers can implement any feature they like, any way they like it?
I disagree with the underlying premises. For example, developers don't deploy any feature we/they like. Features which are likely to be disruptive are only deployed after community consultation. An example of this is the FlaggedRevs extension, for which a clear community process has been defined.
Why does the Foundation need 1 million for usability when amazing tools continue to be ignored and untested?
In part, to stop ignoring and start testing them.
Why has the Foundation gone ahead and approved the hire of several
employees
for usability design, when the community has had almost zero input into
what
that design should be?
In part, to be able to accommodate such input.
Why is this tool not being tested on Wikipedia, right now?
http://wiki.ontoprise.com/ontoprisewiki/index.php/Image:Advanced_ontology_br...
SMW is a hugely complex tool. Along with other approaches to handle information architecture, it merits examination. Such examination will happen as resources for it become available. The priority for obtaining such resources will compete with other priorities such as usability, internationalization support, rich media support, etc. -- Erik Möller Deputy Director, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
2009/1/9 Brian Brian.Mingus@colorado.edu:
In order to solve usability, even for new users, I believe that you must write a new parser from scratch.
I disagree, though the project team may ultimately agree with you. The biggest barriers to entry for new users aren't likely to be obscure edge cases involving apostrophes; they're likely to be ugly blocks of syntax such as references, templates and magic words interspersed with article text. Those issues can be addressed without necessarily rewriting (or speccing out) the whole parser. It does seem that parser/syntax deficiencies become more relevant if we want to employ a two-way WYSIWYG/wiki-text model like the one that's currently being tested on some Wikia sites (e.g. twilightsaga.wikia.com).
2009/1/9 Brian Brian.Mingus@colorado.edu:
I am skeptical of the current development process. That is because it has led to the current parser, which is not a proper parser at all, and includes horrifying syntax.
Er, that would be a direct descendant of UseModWiki. That this has been a hair-tearing nightmare ever since is largely because of the huge corpus of text that needs to remain parseable - that doesn't support your argument at all, and calls into question that you even have one.
- d.
On Fri, Jan 9, 2009 at 3:30 PM, David Gerard dgerard@gmail.com wrote:
2009/1/9 Brian Brian.Mingus@colorado.edu:
I am skeptical of the current development process. That is because it has led to the current parser, which is not a proper parser at all, and
includes
horrifying syntax.
Er, that would be a direct descendant of UseModWiki. That this has been a hair-tearing nightmare ever since is largely because of the huge corpus of text that needs to remain parseable - that doesn't support your argument at all, and calls into question that you even have one.
It would be a potentially acceptable technical solution to change the parser and markup syntax to make it easier to work with, as long as there was an automated conversion tool to shift from what's in the DB now to what would be there going forwards.
Adding in a new parser in parallel and a bit to flag whether a page was in old or new format would make the conversion easy and prevent the necessity for a flag day. Conversion done in semi-automated manner with user review in real time would be a lot safer than having to autoconvert the whole thing at once and deal with the edge cases all at the same time.
On Fri, Jan 9, 2009 at 5:00 PM, Brian Brian.Mingus@colorado.edu wrote:
Why are so few community-developed mediawiki extensions used by the Foundation?
Because there's approximately one person (Tim Starling) who reviews such extensions in practice, and he has limited time. There's approximately one other person (Brion Vibber) who is qualified review such extensions, although he hasn't for a long time that I can think of, presumably because he's busy with other things.
Why do developers have such priviledged access to the source code
So that they can actually improve it. I don't know what alternative you're suggesting.
and the community such little input?
Because the community can't write code, or won't. Features will not get implemented unless someone writes the code. Therefore, anyone who cannot write the code themselves will not get their features implemented unless they happen to convince someone with commit access. Anyone who writes reasonably good code and shows mild commitment to the project is given commit access and becomes a developer if they so choose.
Why must the community 'vote' on extensions such as Semantic MediaWiki
The community is not being asked to vote on SMW that I've heard. If they don't show enough interest, however, it might not be worth the time of one of the few possible extension reviewers to try reviewing such a huge extension.
and yet the developers can implement any feature they like, any way they like it?
No developer can implement any feature without review by Brion. If a developer were to commit a feature as large as SMW, it would be disabled or reverted, for the same reason as given above: nobody with time to review. This has happened with a number of features at various points, like category and image redirects/moves, and rev_deleted. They were all committed by developers but haven't been enabled (or have by now, but weren't for a pretty long time).
Why does the Foundation need 1 million for usability when amazing tools continue to be ignored and untested?
Part of that million will be spent on looking at existing tools. The resources for that currently don't exist, or anyway aren't allocated.
Why has the Foundation gone ahead and approved the hire of several employees for usability design, when the community has had almost zero input into what that design should be?
We've received multiple assurances that all the usability improvements will be made in a fully transparent fashion with community input, as befits Wikimedia's mission.
Why is this tool not being tested on Wikipedia, right now? http://wiki.ontoprise.com/ontoprisewiki/index.php/Image:Advanced_ontology_br...
Because it hasn't met review for enabling on Wikipedia, and likely won't without major structural changes (for performance reasons). Wikimedia handles 70,000 requests per second peak on 400 servers or so. You cannot do that if you willy-nilly enable code that hasn't been carefully tested in a suitably large production environment. And no non-Wikimedia wiki is anywhere close to the size of Wikipedia.
On Fri, Jan 9, 2009 at 5:33 PM, Brian Brian.Mingus@colorado.edu wrote:
800,000 / 30,000 = 26. Is that not a fair wage?
I find it incredibly unlikely that you'd be able to find 26 developers with enough skill who are willing to work for $30,000 a year average. That's below entry level for programmers, let alone senior programmers. You certainly can't rely on current economic conditions, unless you don't mind if they all jump ship as soon as the economy improves and before they're properly familiar with the software.
Moreover, I'm pretty sure that a large chunk of the money is going to have to go to conducting the actual usability tests. I don't know how expensive those are, but they can't be free.
I would like to make clear that I believe the usability issue has largely been solved, and the community is just waiting for the core developers, who have kept a tight lock and key on the source code, to recognize that.
Can you propose any tenable alternative development model that wouldn't overload the servers or crash the site when people upload code that's buggy or just doesn't scale? We have enough of that checked in with our current procedures. It's only kept at bay because everything is reviewed by one of a few highly trusted people, who have worked on MediaWiki and Wikimedia for several and are intimately familiar with the details of how it works and what's been done before.
You cannot escape that review barrier. Every open-source project has it, and must have it, to avoid their code becoming a complete mess.
On Fri, Jan 9, 2009 at 6:13 PM, Brian Brian.Mingus@colorado.edu wrote:
I am skeptical of the current development process. That is because it has led to the current parser, which is not a proper parser at all, and includes horrifying syntax.
The current parser is inherited from somewhere between 2001 or 2003. It's possibly even inherited from UseModWiki. It was developed before the current development process was in place, and so has nothing to do with it.
On Fri, Jan 9, 2009 at 6:50 PM, George Herbert george.herbert@gmail.com wrote:
It would be a potentially acceptable technical solution to change the parser and markup syntax to make it easier to work with, as long as there was an automated conversion tool to shift from what's in the DB now to what would be there going forwards.
Correct, but it would still be a huge project. And there's only about one person who would possibly be situated to do it right, and he has a ton of other critical things to do.
On Fri, Jan 9, 2009 at 6:59 PM, Brian Brian.Mingus@colorado.edu wrote:
I believe it my be the case that the often bizarre idiosyncrasies of MediaWiki were implemented because the developers were spread out around the world, in isolation, communicating only over IRC and sometimes e-mail. I know there are yearly developer spurts at Wikimania, but I do not know about the daily development environment at the offices, and whether development continues in a largely isolated fashion.
The large majority of new code is written by volunteers in their spare time. These volunteers are not going to be willing or able to move to a centralized place to improve communication, and Wikimedia cannot afford to drop them. In any event, communication over IRC, e-mail, and websites is the universal standard in the open source world, and has resulted in a large number of unquestionably high-quality products, like the Linux kernel and Firefox.
I don't know what you want -- more involvement of the community (which is distributed across the world), or less communication by purely electronic means? You can't have both.
I still believe my questions have been answered adequately. However,
Why do developers have such priviledged access to the source code
So that they can actually improve it. I don't know what alternative you're suggesting.
This question cannot be viewed outside of the context of the rest of the sentence, so this answer is invalid.
and the community such little input?
Because the community can't write code, or won't.
False: Extension Matrix.
Features will notget implemented unless someone writes the code. Therefore, anyone who cannot write the code themselves will not get their features implemented unless they happen to convince someone with commit access. Anyone who writes reasonably good code and shows mild commitment to the project is given commit access and becomes a developer if they so choose.
That the core developers must approve the code with little to no oversight is exactly my point. The development of MediaWiki should not be based on what the core developers believe the flavor of the day is. It leads to monstrosities such as the current parser. If the community funds MediaWiki development, they should have a very strong say in what features get implemented.
Why must the community 'vote' on extensions such as Semantic MediaWiki
The community is not being asked to vote on SMW that I've heard. If they don't show enough interest, however, it might not be worth the time of one of the few possible extension reviewers to try reviewing such a huge extension.
The developers should not be the ones deciding what their time should be spent on. Especially if it leads to irrational choices. And I am under the impression that various language Wikipedia's can enable SMW if they reach a consensus. Is that wrong?
and yet the developers can implement any feature they like, any way they like it?
No developer can implement any feature without review by Brion.
This is a *major* problem with MediaWiki.
If a developer were to commit a feature as large as SMW, it would be disabled or reverted, for the same reason as given above: nobody with time to review. This has happened with a number of features at various points, like category and image redirects/moves, and rev_deleted. They were all committed by developers but haven't been enabled (or have by now, but weren't for a pretty long time).
And yet major features to MW have been implemented on developer whim. If I were to compile a list of such features, I might go to you for input. I feel like I am preaching to the choir! I'm curious: of all the possible "improvements" to MediaWiki, why do you feel the horrifying "parser" functions were chosen? For the increase in usability? Pfffft.
Why is this tool not being tested on Wikipedia, right now?
http://wiki.ontoprise.com/ontoprisewiki/index.php/Image:Advanced_ontology_br...
Because it hasn't met review for enabling on Wikipedia, and likely won't without major structural changes (for performance reasons). Wikimedia handles 70,000 requests per second peak on 400 servers or so. You cannot do that if you willy-nilly enable code that hasn't been carefully tested in a suitably large production environment. And no non-Wikimedia wiki is anywhere close to the size of Wikipedia.
I do not believe the job of the core developers should be choosing what extensions are enabled. If an extension appears to solve the usability issue, and yet it does not scale, their job is to scale it. And when expert PHP developers write extensions, give talks at Wikimania, provide community support, and do what it takes to develop a thorough understanding of MediaWiki, they should be given a larger voice. Much larger than being ignored altogether.
I would like to make clear that I believe the usability issue has largely
been solved, and the community is just waiting for the core developers,
who
have kept a tight lock and key on the source code, to recognize that.
Can you propose any tenable alternative development model that wouldn't overload the servers or crash the site when people upload code that's buggy or just doesn't scale? We have enough of that checked in with our current procedures. It's only kept at bay because everything is reviewed by one of a few highly trusted people, who have worked on MediaWiki and Wikimedia for several and are intimately familiar with the details of how it works and what's been done before.
You cannot escape that review barrier. Every open-source project has it, and must have it, to avoid their code becoming a complete mess.
I do not dispute review. I dispute the fact that the core developers only review code that suits their fancy.
On Fri, Jan 9, 2009 at 6:13 PM, Brian Brian.Mingus@colorado.edu wrote:
I am skeptical of the current development process. That is because it has led to the current parser, which is not a proper parser at all, and
includes
horrifying syntax.
The current parser is inherited from somewhere between 2001 or 2003. It's possibly even inherited from UseModWiki. It was developed before the current development process was in place, and so has nothing to do with it.
This is partly false. There have been several efforts to write a proper parser, and the current parser has undergone major structural changes. I don't believe a computer scientist would have a huge problem writing a proper parser. Are any of the core developers computer scientists?
On Fri, Jan 9, 2009 at 6:59 PM, Brian Brian.Mingus@colorado.edu wrote:
I believe it my be the case that the often bizarre idiosyncrasies of MediaWiki were implemented
because
the developers were spread out around the world, in isolation,
communicating
only over IRC and sometimes e-mail. I know there are yearly developer
spurts
at Wikimania, but I do not know about the daily development environment
at
the offices, and whether development continues in a largely isolated fashion.
The large majority of new code is written by volunteers in their spare time. These volunteers are not going to be willing or able to move to a centralized place to improve communication, and Wikimedia cannot afford to drop them. In any event, communication over IRC, e-mail, and websites is the universal standard in the open source world, and has resulted in a large number of unquestionably high-quality products, like the Linux kernel and Firefox.
I do not believe that is how Firefox is developed. The linux kernel is another story - it has proper oversight, and Torvald's "network of trust" - 15 crack developers whom he knows well and have written exceptional quality code for him for many years.
I don't know what you want -- more involvement of the community (which is distributed across the world), or less communication by purely electronic means? You can't have both.
The EP suggestion was only related to how the developers in the SF offices should work, so that funds would not be wasted.
Simetrical, a general comment on your reply: I do not believe it is fair to reply to parts of sentences. It lead to several replies that were clearly out of context. I want to clarify one of my sentences that you broke into parts:
Why must the community 'vote' on extensions such as Semantic MediaWiki and
yet the developers can implement any feature they like, any way they like it?
I was not disputing that the community should vote: In fact, I believe all code that is written should be a result of a) community vote and b) rational oversight provided by the foundation, but at a higher level than the core developers.
I do have another question: Who approved deploying parser functions on Wikipedia? On Sat, Jan 10, 2009 at 7:08 PM, Brian Brian.Mingus@colorado.edu wrote:
Simetrical, a general comment on your reply: I do not believe it is fair to reply to parts of sentences. It lead to several replies that were clearly out of context. I want to clarify one of my sentences that you broke into parts:
Why must the community 'vote' on extensions such as Semantic MediaWiki
and yet the developers can implement any feature they like, any way they like it?
I was not disputing that the community should vote: In fact, I believe all code that is written should be a result of a) community vote and b) rational oversight provided by the foundation, but at a higher level than the core developers.
On Sat, Jan 10, 2009 at 9:04 PM, Brian Brian.Mingus@colorado.edu wrote:
False: Extension Matrix.
See the rest of that paragraph. Anyone who can write code and wants commit access can get it. The only ones without commit access who want it are those who can't or won't write code. Most of the extension developers are apparently uninterested in getting commit access, since they haven't asked for it. There is only a small barrier between developers, and people who are willing and able to code and want to become developers.
The development of MediaWiki should not be based on what the core developers believe the flavor of the day is. It leads to monstrosities such as the current parser.
I already pointed out that the current parser was not originally written in the current development model. It's not a reasonable example to support your point.
If the community funds MediaWiki development, they should have a very strong say in what features get implemented.
The Wikimedia Foundation funds MediaWiki development. It employs both of the core developers and therefore has total control over what features get implemented. The Board delegates most of these decisions to its CTO, who's one of the core developers in question.
The developers should not be the ones deciding what their time should be spent on.
The majority of developers are volunteers, so no one else can decide what their time is spent on. The few who are employees are told what to do by the Wikimedia Foundation, through its CTO Brion Vibber, as in any organization. You appear to disagree with some of Brion's decisions, but he was appointed CTO and lead developer by the Board of Trustees. How can he not have discretion to do what he thinks is best? Who should, then?
And I am under the impression that various language Wikipedia's can enable SMW if they reach a consensus. Is that wrong?
Yes. All code must pass review regardless of consensus, and SMW has not.
And yet major features to MW have been implemented on developer whim.
Not remotely as large as SMW, without the approval of a senior developer. If you think you can find a counterexample, show me.
I'm curious: of all the possible "improvements" to MediaWiki, why do you feel the horrifying "parser" functions were chosen? For the increase in usability? Pfffft.
ParserFunctions were developed to address the fact that the community was using horrible hacks like {{qif}} to achieve the functionality anyway. The conclusion was that a relatively efficient and clean way of achieving basic logic was preferable to what people were devising. It's been proposed that we ditch these as well and move to embedding a real programming language instead, but there hasn't been much activity on that.
I do not believe the job of the core developers should be choosing what extensions are enabled. If an extension appears to solve the usability issue, and yet it does not scale, their job is to scale it.
Someone must make the decision of what to spend limited development resources on. In practice, that has to be a developer of some kind, because no one else would be informed enough to properly evaluate proposals on their merits. The community can decide whether it wants a given extension, but it is in no position to determine whether the cost of fixing it up and enabling it is worth the benefit. That must be made by some individual or group appointed for this purpose. That would currently be the senior developers.
And when expert PHP developers write extensions, give talks at Wikimania, provide community support, and do what it takes to develop a thorough understanding of MediaWiki, they should be given a larger voice. Much larger than being ignored altogether.
Any such person can ask for commit access and be given it. If they gain the trust of the senior developers, they can potentially become trusted enough to do things like extension review themselves. We've had people other than Tim and Brion who were allowed to enable extensions -- Avar, for instance (although that was a long time ago, when things were different). In practice, nobody I know of meets your description.
I do not dispute review. I dispute the fact that the core developers only review code that suits their fancy.
They only review code that they have time to review. There are two of them, what do you expect? And not only must they review all code, they need to write code too, and fulfill other duties.
I think the fundamental point you're missing here is lack of resources. Brion and Tim do not fail to review Semantic MediaWiki because that's their "whim". They simply don't have the resources to review everything. They need to make decisions. If they spent time reviewing SMW, that would be time they couldn't spend on other things. They've judged that the other things are currently more important. On what basis do you question that judgment, since you evidently don't know what development resources *are* actually being spent on?
This is partly false. There have been several efforts to write a proper parser, and the current parser has undergone major structural changes. I don't believe a computer scientist would have a huge problem writing a proper parser. Are any of the core developers computer scientists?
If you're familiar with the attempts to write a parser, you're also familiar with the fact that they've all failed, because wikitext is unparseable using a real parser. Tim knows a considerable amount of computer science, as well as understanding the requirements for a parser much better than any outsider, and would be the logical one to write a new parser -- but he's needed for a lot of other things as well. Again, limited resources.
I do not believe that is how Firefox is developed.
According to a statistic I recently saw, 80% of Firefox source code is written by people not employed by Mozilla. Moreover, even the people employed by Mozilla live in radically different places. Of the layout/content superreviewers, for instance, Google indicates that Robert O'Callahan lives in New Zealand, David Baron lives in California, Boris Zbarsky lives in Illinois, Jonas Sicking lives in Sweden, etc.
So no, that's exactly how Firefox is developed. Along with every other project that uses an open-source development model. Open development inherently means you're willing to accept any contributors. That means you accept them from anywhere in the world. That means over the Internet.
The linux kernel is another story - it has proper oversight, and Torvald's "network of trust" - 15 crack developers whom he knows well and have written exceptional quality code for him for many years.
What bearing does that have on what I said? It's still perfectly good software developed over the Internet exclusively.
On Sat, Jan 10, 2009 at 9:08 PM, Brian Brian.Mingus@colorado.edu wrote:
I was not disputing that the community should vote: In fact, I believe all code that is written should be a result of a) community vote and b) rational oversight provided by the foundation, but at a higher level than the core developers.
What's a "higher level than the core developers"? Do you mean to imply that development resources should be allocated on a fine-grained level by non-developers? How could anyone who's not a developer of the software make such decisions intelligently? What, moreover, is wrong with the current system, other than the fact that it doesn't agree with you?
On Sat, Jan 10, 2009 at 9:33 PM, Brian Brian.Mingus@colorado.edu wrote:
I do have another question: Who approved deploying parser functions on Wikipedia?
Tim Starling both wrote and deployed ParserFunctions.
Thank for your answers.
ParserFunctions are my specific example of how the current development process is very, very broken, and out of touch with the community. According to Jimbo's user page (his bolded): "*Any changes to the software must be gradual and reversible.* We need to make sure that any changes contribute positively to the community, as ultimately determined by everybody in Wikipedia, in full consultation with the community consensus."
I believe that the introduction of ParserFunctions to MediaWiki was not done with community consensus and has led to an extremely fast devolution in wiki syntax. Further, the usability of Wikipedia has declined at a rate proportional to the adoption of parser functions.
Is there *anybody* on this list that is willing to say that this is more usable than what we had before? http://en.wikipedia.org/w/index.php?title=Template:Infobox&action=raw
I am quite sure that the answer to Wikipedia's usability issues was not properly taken into concern when ParserFunctions were written. This is based on a very simple principle that I am following in this discussion: Improvements in usability in MediaWiki will not happen through the addition of syntax, but rather the removal of syntax, and the improvement of the User Interface Design.
I also believe this new grant will help, but I do not believe it will fix a broken process. I would like to discuss that process further, and how it could be improved.
On Sat, Jan 10, 2009 at 7:53 PM, Aryeh Gregor <Simetrical+wikilist@gmail.comSimetrical%2Bwikilist@gmail.com
wrote:
On Sat, Jan 10, 2009 at 9:04 PM, Brian Brian.Mingus@colorado.edu wrote:
False: Extension Matrix.
See the rest of that paragraph. Anyone who can write code and wants commit access can get it. The only ones without commit access who want it are those who can't or won't write code. Most of the extension developers are apparently uninterested in getting commit access, since they haven't asked for it. There is only a small barrier between developers, and people who are willing and able to code and want to become developers.
The development of MediaWiki should not be based on what the core developers believe the flavor of the day is. It leads to monstrosities such as the current parser.
I already pointed out that the current parser was not originally written in the current development model. It's not a reasonable example to support your point.
If the community funds MediaWiki development, they should have a very strong say in what features get implemented.
The Wikimedia Foundation funds MediaWiki development. It employs both of the core developers and therefore has total control over what features get implemented. The Board delegates most of these decisions to its CTO, who's one of the core developers in question.
The developers should not be the ones deciding what their time should be spent on.
The majority of developers are volunteers, so no one else can decide what their time is spent on. The few who are employees are told what to do by the Wikimedia Foundation, through its CTO Brion Vibber, as in any organization. You appear to disagree with some of Brion's decisions, but he was appointed CTO and lead developer by the Board of Trustees. How can he not have discretion to do what he thinks is best? Who should, then?
And I am under the impression that various language Wikipedia's can enable SMW if they reach
a
consensus. Is that wrong?
Yes. All code must pass review regardless of consensus, and SMW has not.
And yet major features to MW have been implemented on developer whim.
Not remotely as large as SMW, without the approval of a senior developer. If you think you can find a counterexample, show me.
I'm curious: of all the possible "improvements" to MediaWiki, why do you feel the horrifying "parser" functions were chosen? For the increase in usability? Pfffft.
ParserFunctions were developed to address the fact that the community was using horrible hacks like {{qif}} to achieve the functionality anyway. The conclusion was that a relatively efficient and clean way of achieving basic logic was preferable to what people were devising. It's been proposed that we ditch these as well and move to embedding a real programming language instead, but there hasn't been much activity on that.
I do not believe the job of the core developers should be choosing what extensions are enabled. If an extension appears to solve the usability issue, and yet it does not scale, their job is to scale it.
Someone must make the decision of what to spend limited development resources on. In practice, that has to be a developer of some kind, because no one else would be informed enough to properly evaluate proposals on their merits. The community can decide whether it wants a given extension, but it is in no position to determine whether the cost of fixing it up and enabling it is worth the benefit. That must be made by some individual or group appointed for this purpose. That would currently be the senior developers.
And when expert PHP developers write extensions, give talks at Wikimania, provide
community
support, and do what it takes to develop a thorough understanding of MediaWiki, they should be given a larger voice. Much larger than being ignored altogether.
Any such person can ask for commit access and be given it. If they gain the trust of the senior developers, they can potentially become trusted enough to do things like extension review themselves. We've had people other than Tim and Brion who were allowed to enable extensions -- Avar, for instance (although that was a long time ago, when things were different). In practice, nobody I know of meets your description.
I do not dispute review. I dispute the fact that the core developers only review code that suits their fancy.
They only review code that they have time to review. There are two of them, what do you expect? And not only must they review all code, they need to write code too, and fulfill other duties.
I think the fundamental point you're missing here is lack of resources. Brion and Tim do not fail to review Semantic MediaWiki because that's their "whim". They simply don't have the resources to review everything. They need to make decisions. If they spent time reviewing SMW, that would be time they couldn't spend on other things. They've judged that the other things are currently more important. On what basis do you question that judgment, since you evidently don't know what development resources *are* actually being spent on?
This is partly false. There have been several efforts to write a proper parser, and the current parser has undergone major structural changes. I don't believe a computer scientist would have a huge problem writing a proper parser. Are any of the core developers computer scientists?
If you're familiar with the attempts to write a parser, you're also familiar with the fact that they've all failed, because wikitext is unparseable using a real parser. Tim knows a considerable amount of computer science, as well as understanding the requirements for a parser much better than any outsider, and would be the logical one to write a new parser -- but he's needed for a lot of other things as well. Again, limited resources.
I do not believe that is how Firefox is developed.
According to a statistic I recently saw, 80% of Firefox source code is written by people not employed by Mozilla. Moreover, even the people employed by Mozilla live in radically different places. Of the layout/content superreviewers, for instance, Google indicates that Robert O'Callahan lives in New Zealand, David Baron lives in California, Boris Zbarsky lives in Illinois, Jonas Sicking lives in Sweden, etc.
So no, that's exactly how Firefox is developed. Along with every other project that uses an open-source development model. Open development inherently means you're willing to accept any contributors. That means you accept them from anywhere in the world. That means over the Internet.
The linux kernel is another story - it has proper oversight, and Torvald's "network of trust"
15 crack developers whom he knows well and have written exceptional
quality
code for him for many years.
What bearing does that have on what I said? It's still perfectly good software developed over the Internet exclusively.
On Sat, Jan 10, 2009 at 9:08 PM, Brian Brian.Mingus@colorado.edu wrote:
I was not disputing that the community should vote: In fact, I believe
all
code that is written should be a result of a) community vote and b)
rational
oversight provided by the foundation, but at a higher level than the core developers.
What's a "higher level than the core developers"? Do you mean to imply that development resources should be allocated on a fine-grained level by non-developers? How could anyone who's not a developer of the software make such decisions intelligently? What, moreover, is wrong with the current system, other than the fact that it doesn't agree with you?
On Sat, Jan 10, 2009 at 9:33 PM, Brian Brian.Mingus@colorado.edu wrote:
I do have another question: Who approved deploying parser functions on Wikipedia?
Tim Starling both wrote and deployed ParserFunctions.
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Yes, ParserFunctions has been a nightmare for me, a detour into coding that, while challenging, defeats the essence of a wiki, quick.
Fred
Thank for your answers.
ParserFunctions are my specific example of how the current development process is very, very broken, and out of touch with the community. According to Jimbo's user page (his bolded): "*Any changes to the software must be gradual and reversible.* We need to make sure that any changes contribute positively to the community, as ultimately determined by everybody in Wikipedia, in full consultation with the community consensus."
I believe that the introduction of ParserFunctions to MediaWiki was not done with community consensus and has led to an extremely fast devolution in wiki syntax. Further, the usability of Wikipedia has declined at a rate proportional to the adoption of parser functions.
Is there *anybody* on this list that is willing to say that this is more usable than what we had before? http://en.wikipedia.org/w/index.php?title=Template:Infobox&action=raw
I am quite sure that the answer to Wikipedia's usability issues was not properly taken into concern when ParserFunctions were written. This is based on a very simple principle that I am following in this discussion: Improvements in usability in MediaWiki will not happen through the addition of syntax, but rather the removal of syntax, and the improvement of the User Interface Design.
I also believe this new grant will help, but I do not believe it will fix a broken process. I would like to discuss that process further, and how it could be improved.
On Sat, Jan 10, 2009 at 7:53 PM, Aryeh Gregor <Simetrical+wikilist@gmail.comSimetrical%2Bwikilist@gmail.com
wrote:
On Sat, Jan 10, 2009 at 9:04 PM, Brian Brian.Mingus@colorado.edu wrote:
False: Extension Matrix.
See the rest of that paragraph. Anyone who can write code and wants commit access can get it. The only ones without commit access who want it are those who can't or won't write code. Most of the extension developers are apparently uninterested in getting commit access, since they haven't asked for it. There is only a small barrier between developers, and people who are willing and able to code and want to become developers.
The development of MediaWiki should not be based on what the core developers believe the flavor of the day is. It leads
to
monstrosities such as the current parser.
I already pointed out that the current parser was not originally written in the current development model. It's not a reasonable example to support your point.
If the community funds MediaWiki development, they should have a very strong say in what features get implemented.
The Wikimedia Foundation funds MediaWiki development. It employs both of the core developers and therefore has total control over what features get implemented. The Board delegates most of these decisions to its CTO, who's one of the core developers in question.
The developers should not be the ones deciding what their time should
be
spent on.
The majority of developers are volunteers, so no one else can decide what their time is spent on. The few who are employees are told what to do by the Wikimedia Foundation, through its CTO Brion Vibber, as in any organization. You appear to disagree with some of Brion's decisions, but he was appointed CTO and lead developer by the Board of Trustees. How can he not have discretion to do what he thinks is best? Who should, then?
And I am under the impression that various language Wikipedia's can enable SMW if they
reach a
consensus. Is that wrong?
Yes. All code must pass review regardless of consensus, and SMW has not.
And yet major features to MW have been implemented on developer whim.
Not remotely as large as SMW, without the approval of a senior developer. If you think you can find a counterexample, show me.
I'm curious: of all the possible "improvements" to MediaWiki, why do you feel the horrifying "parser" functions were chosen? For the increase in usability? Pfffft.
ParserFunctions were developed to address the fact that the community was using horrible hacks like {{qif}} to achieve the functionality anyway. The conclusion was that a relatively efficient and clean way of achieving basic logic was preferable to what people were devising. It's been proposed that we ditch these as well and move to embedding a real programming language instead, but there hasn't been much activity on that.
I do not believe the job of the core developers should be choosing
what
extensions are enabled. If an extension appears to solve the
usability
issue, and yet it does not scale, their job is to scale it.
Someone must make the decision of what to spend limited development resources on. In practice, that has to be a developer of some kind, because no one else would be informed enough to properly evaluate proposals on their merits. The community can decide whether it wants a given extension, but it is in no position to determine whether the cost of fixing it up and enabling it is worth the benefit. That must be made by some individual or group appointed for this purpose. That would currently be the senior developers.
And when expert PHP developers write extensions, give talks at Wikimania, provide
community
support, and do what it takes to develop a thorough understanding of MediaWiki, they should be given a larger voice. Much larger than
being
ignored altogether.
Any such person can ask for commit access and be given it. If they gain the trust of the senior developers, they can potentially become trusted enough to do things like extension review themselves. We've had people other than Tim and Brion who were allowed to enable extensions -- Avar, for instance (although that was a long time ago, when things were different). In practice, nobody I know of meets your description.
I do not dispute review. I dispute the fact that the core developers
only
review code that suits their fancy.
They only review code that they have time to review. There are two of them, what do you expect? And not only must they review all code, they need to write code too, and fulfill other duties.
I think the fundamental point you're missing here is lack of resources. Brion and Tim do not fail to review Semantic MediaWiki because that's their "whim". They simply don't have the resources to review everything. They need to make decisions. If they spent time reviewing SMW, that would be time they couldn't spend on other things. They've judged that the other things are currently more important. On what basis do you question that judgment, since you evidently don't know what development resources *are* actually being spent on?
This is partly false. There have been several efforts to write a
proper
parser, and the current parser has undergone major structural
changes. I
don't believe a computer scientist would have a huge problem writing
a
proper parser. Are any of the core developers computer scientists?
If you're familiar with the attempts to write a parser, you're also familiar with the fact that they've all failed, because wikitext is unparseable using a real parser. Tim knows a considerable amount of computer science, as well as understanding the requirements for a parser much better than any outsider, and would be the logical one to write a new parser -- but he's needed for a lot of other things as well. Again, limited resources.
I do not believe that is how Firefox is developed.
According to a statistic I recently saw, 80% of Firefox source code is written by people not employed by Mozilla. Moreover, even the people employed by Mozilla live in radically different places. Of the layout/content superreviewers, for instance, Google indicates that Robert O'Callahan lives in New Zealand, David Baron lives in California, Boris Zbarsky lives in Illinois, Jonas Sicking lives in Sweden, etc.
So no, that's exactly how Firefox is developed. Along with every other project that uses an open-source development model. Open development inherently means you're willing to accept any contributors. That means you accept them from anywhere in the world. That means over the Internet.
The linux kernel is another story - it has proper oversight, and Torvald's "network of
trust"
15 crack developers whom he knows well and have written exceptional
quality
code for him for many years.
What bearing does that have on what I said? It's still perfectly good software developed over the Internet exclusively.
On Sat, Jan 10, 2009 at 9:08 PM, Brian Brian.Mingus@colorado.edu wrote:
I was not disputing that the community should vote: In fact, I
believe all
code that is written should be a result of a) community vote and b)
rational
oversight provided by the foundation, but at a higher level than the
core
developers.
What's a "higher level than the core developers"? Do you mean to imply that development resources should be allocated on a fine-grained level by non-developers? How could anyone who's not a developer of the software make such decisions intelligently? What, moreover, is wrong with the current system, other than the fact that it doesn't agree with you?
On Sat, Jan 10, 2009 at 9:33 PM, Brian Brian.Mingus@colorado.edu wrote:
I do have another question: Who approved deploying parser functions
on
Wikipedia?
Tim Starling both wrote and deployed ParserFunctions.
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
-- You have successfully failed! _______________________________________________ foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
2009/1/11 Brian Brian.Mingus@colorado.edu:
I believe that the introduction of ParserFunctions to MediaWiki was not done with community consensus and has led to an extremely fast devolution in wiki syntax. Further, the usability of Wikipedia has declined at a rate proportional to the adoption of parser functions.
Is there *anybody* on this list that is willing to say that this is more usable than what we had before? http://en.wikipedia.org/w/index.php?title=Template:Infobox&action=raw
From the POV of what you find in articles yes (the horrid multi
template infoboxes we used to have). From the POV of not having a template with a secondary function as a doomsday button yes (Template:Qif).
Remember a lot of ParserFunctions was not new functionality per se but providing a better way to do stuff people had already worked out how to do with existing functions.
So yes going by how widely template:qif was deployed there was consensus among those who deal with these things that such functions should exist.
Remember a lot of ParserFunctions was not new functionality per se but providing a better way to do stuff people had already worked out how to do with existing functions.
The proper way to do this is to provide a better user interface, not to add new syntax that takes an already Turing-complete template language to the next level.
2009/1/11 Brian Brian.Mingus@colorado.edu:
Remember a lot of ParserFunctions was not new functionality per se but providing a better way to do stuff people had already worked out how to do with existing functions.
The proper way to do this is to provide a better user interface, not to add new syntax that takes an already Turing-complete template language to the next level.
Err you've just proposed extending a situation where any admin can lock the database for about half an hour.
I don't believe the specific technical details that led to the development of ParserFunctions are all that relevant. It is always possible to implement a simple 'crash guard', so its not even that great of an excuse. No single person should have the power to develop and deploy such a thing on Wikipedia, even with the consensus of a tiny fraction of editors who were being inconvenienced. Proper thought was simply not given to it, and Jimbo's statement does not allow for its deployment.
On Sat, Jan 10, 2009 at 8:44 PM, geni geniice@gmail.com wrote:
2009/1/11 Brian Brian.Mingus@colorado.edu:
Remember a lot of ParserFunctions was not new functionality per se but providing a better way to do stuff people had already worked out how to do with existing functions.
The proper way to do this is to provide a better user interface, not to
add
new syntax that takes an already Turing-complete template language to the next level.
Err you've just proposed extending a situation where any admin can lock the database for about half an hour.
-- geni
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Brian wrote:
I am quite sure that the answer to Wikipedia's usability issues was not properly taken into concern when ParserFunctions were written. This is based on a very simple principle that I am following in this discussion: Improvements in usability in MediaWiki will not happen through the addition of syntax, but rather the removal of syntax, and the improvement of the User Interface Design.
I also believe this new grant will help, but I do not believe it will fix a broken process. I would like to discuss that process further, and how it could be improved.
[Apologies in advance for a fairly long email.]
Although I agree that wiki-syntax in many cases is far from ideal, this is also a rather tricky issue that hasn't been satisfactorily solved despite decades of academic and industrial research. So I don't think it's really the case that best practices exist that MediaWiki developers have egregiously failed to follow. I have a bit of interest in / knowledge of this, since developing end-user-programmable systems that are also easy to use is my research area / day job. ;-)
The fundamental issue is that in complex domains (like "writing an encyclopedia"), you can't fully analyze and understand the domain ahead of time, use that understanding to provide an easy-to-use domain-specific language or interface, and then be done. So for Wikipedia, it's virtually impossible to provide a built-in set of simple formatting features (a person infobox, a map-dot-placement feature, etc.) that are both easy to use and cover all situations. Domains change, understanding of domains changes, new issues come up, and users need some way to develop their structures and representations as they're being used. If you *don't* allow that, what ends up happening is that people keep using the simple features you've provided, but in more and more convoluted ways to try to force the system to do what they want.
MediaWiki's first solution to that was templates, which are supposed to factor out common formatting. But they ought to be edited to handle new cases---i.e. the proper solution to a template not handling a case is to change the template, not to ask article authors to use it in some more complicated way. That often requires if-else type of logic. To implement that required a convoluted use of templates to force if-else logic into a feature not designed for it. That started happening a lot---as always, when faced with something that they want to do that the system has no interface for, users creatively resort to convoluted ways to force it to do what they want anyway. ParserFunctions were a solution to the convoluted template logic, giving MediaWiki some logic that can be used directly.
I don't think it's on the whole a terrible solution. The general problem of making systems both end-user-extensible and easy to use has been rediscovered dozens of times, ranging from the 1980s idea of "open systems" [1], to attempts to make end-user-programming usable [2], to an idea of "meta-design" [3] that argues for the need for design environments like CAD to be built with end-user-customization in mind [3]. But how to do all that is hard, and I think it'd be fair to say it's an open problem.
So of course I'd welcome a usability push and carefully designed solutions, and no doubt there are things that could be done to make templates/ParserFunctions easier to use while retaining their power. But I wouldn't expect too much magic to appear. =]
-Mark
[1] Carl Hewitt (1985). The challenge of open systems. _Byte_ 10(4): 223-242.
[2] Bonnie A. Nardi (1993). _A Small Matter of Programming_. MIT Press.
[3] G. Fischer, E. Giaccardi, Y. Ye, A. G. Sutcliffe, and N. Mehandjiev (2004). Meta-Design: A manifesto for end-user development. _Communications of the ACM_ 47(9): 33-37. http://l3d.cs.colorado.edu/~gerhard/papers/CACM-meta-design.pdf
Mark, Keep in mind regarding my Semantic drum beating that I am not a developer of Semantic Mediawiki or Semantic Forms. I am just a user, and as Erik put it, an advocate.
That said, I believe these two extensions together solve the problem you are talking about. And for whatever reason, the developers of MediaWiki are willing to create new complicated syntax, but not new interfaces.
In your assessment, do these extensions solve the interface extensibility problem you describe?
To the list,
Regarding development process, why weren't a variety of sophisticated solutions, in addition to ParserFunctions, thoroughly considered before they were enabled on the English Wikipedia?
Should ParserFunctions be reverted (a simple procedure by my estimate, which is a good thing) based solely on the fact that they are the most clear violation of Jimbo's principle that I am aware of?
The fundamental issue is that in complex domains (like "writing an
encyclopedia"), you can't fully analyze and understand the domain ahead of time, use that understanding to provide an easy-to-use domain-specific language or interface, and then be done. So for Wikipedia, it's virtually impossible to provide a built-in set of simple formatting features (a person infobox, a map-dot-placement feature, etc.) that are both easy to use and cover all situations. Domains change, understanding of domains changes, new issues come up, and *users need some way to develop their structures and representations as they're being used.* If you *don't* allow that, what ends up happening is that people keep using the simple features you've provided, but in more and more convoluted ways to try to force the system to do what they want.
Not sure why I said "English Wikipedia" - but I mean all Foundation sites of course :)
On Sat, Jan 10, 2009 at 10:43 PM, Brian Brian.Mingus@colorado.edu wrote:
Mark, Keep in mind regarding my Semantic drum beating that I am not a developer of Semantic Mediawiki or Semantic Forms. I am just a user, and as Erik put it, an advocate.
That said, I believe these two extensions together solve the problem you are talking about. And for whatever reason, the developers of MediaWiki are willing to create new complicated syntax, but not new interfaces.
In your assessment, do these extensions solve the interface extensibility problem you describe?
To the list,
Regarding development process, why weren't a variety of sophisticated solutions, in addition to ParserFunctions, thoroughly considered before they were enabled on the English Wikipedia?
Should ParserFunctions be reverted (a simple procedure by my estimate, which is a good thing) based solely on the fact that they are the most clear violation of Jimbo's principle that I am aware of?
The fundamental issue is that in complex domains (like "writing an
encyclopedia"), you can't fully analyze and understand the domain ahead of time, use that understanding to provide an easy-to-use domain-specific language or interface, and then be done. So for Wikipedia, it's virtually impossible to provide a built-in set of simple formatting features (a person infobox, a map-dot-placement feature, etc.) that are both easy to use and cover all situations. Domains change, understanding of domains changes, new issues come up, and *users need some way to develop their structures and representations as they're being used.* If you *don't* allow that, what ends up happening is that people keep using the simple features you've provided, but in more and more convoluted ways to try to force the system to do what they want.
Brian wrote:
Mark, Keep in mind regarding my Semantic drum beating that I am not a developer of Semantic Mediawiki or Semantic Forms. I am just a user, and as Erik put it, an advocate.
That said, I believe these two extensions together solve the problem you are talking about. And for whatever reason, the developers of MediaWiki are willing to create new complicated syntax, but not new interfaces.
In your assessment, do these extensions solve the interface extensibility problem you describe?
To the list,
Regarding development process, why weren't a variety of sophisticated solutions, in addition to ParserFunctions, thoroughly considered before they were enabled on the English Wikipedia?
Should ParserFunctions be reverted (a simple procedure by my estimate, which is a good thing) based solely on the fact that they are the most clear violation of Jimbo's principle that I am aware of?
A simple procedure? Yes, disabling the extension would be rather simple, repairing the thousands of templates that would be broken in the process, not so much.
I believe it is possible to expand the parser functions in place in a non-destructive way. There are always edge cases of course. But if it is not possible, it is a clear violation of a core wiki principle - that all changes be easily revertible.
On Sat, Jan 10, 2009 at 11:42 PM, Alex mrzmanwiki@gmail.com wrote:
Brian wrote:
Mark, Keep in mind regarding my Semantic drum beating that I am not a developer
of
Semantic Mediawiki or Semantic Forms. I am just a user, and as Erik put
it,
an advocate.
That said, I believe these two extensions together solve the problem you
are
talking about. And for whatever reason, the developers of MediaWiki are willing to create new complicated syntax, but not new interfaces.
In your assessment, do these extensions solve the interface extensibility problem you describe?
To the list,
Regarding development process, why weren't a variety of sophisticated solutions, in addition to ParserFunctions, thoroughly considered before
they
were enabled on the English Wikipedia?
Should ParserFunctions be reverted (a simple procedure by my estimate,
which
is a good thing) based solely on the fact that they are the most clear violation of Jimbo's principle that I am aware of?
A simple procedure? Yes, disabling the extension would be rather simple, repairing the thousands of templates that would be broken in the process, not so much.
-- Alex (wikipedia:en:User:Mr.Z-man)
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Brian wrote:
I believe it is possible to expand the parser functions in place in a non-destructive way. There are always edge cases of course. But if it is not possible, it is a clear violation of a core wiki principle
- that all changes be easily revertible.
ParserFunctions was checked into SVN in April 2006, presumably enabled around the same time. Its had nearly 3 years for ParserFunctions to integrate themselves into wikitext. By expanding them in place, you're going to be replacing the infobox syntax in articles with table syntax, hardly an improvement. To use a real example: {{Infobox Mountain | Name = Mount Blackmore | Elevation = 10,154 feet (3,094 m) | Location = [[Montana]], [[United States|USA]] | Range = [[Gallatin Range]] | Coordinates = {{coord|45|26|40|N|111|00|10|W|type:mountain_region:US}} | Topographic map = [[United States Geological Survey|USGS]] Mount Blackmore | Easiest route= Hike }}
would be replaced by something like:
{| cellpadding="5" cellspacing="0" class="infobox geography vcard" style="border:1px solid #999966; float:right; clear:right; margin-left:0.75em; margin-top:0.75em; margin-bottom:0.75em; background:#ffffff; width:305px; font-size:95%;" |- class="fn org" ! style="text-align:center; background:#e7dcc3; font-size:110%;" colspan="2"| Mount Blackmore |- |- class="note" | style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3; width:85px;" | [[Summit (topography)|Elevation]] | style="border-top:1px solid #999966; width:220px;" | 10,154 feet (3,094 m) |- | style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3;" | Location | class="label" style="border-top:1px solid #999966;" | [[Montana]], [[United States|USA]] |- | class="note" style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3;" | [[Mountain range|Range]] | style="border-top:1px solid #999966;" | [[Gallatin Range]] |- | style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3;" | [[Geographic coordinate system|Coordinates]] | style="border-top:1px solid #999966;" | {{coord|45|26|40|N|111|00|10|W|type:mountain_region:US}} |- | style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3;" | [[Topographic map|Topo map]] | style="border-top:1px solid #999966;" | [[United States Geological Survey|USGS]] Mount Blackmore |- | style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3;" | Easiest [[Climbing route|route]] | style="border-top:1px solid #999966;" | Hike |- |}
I'm not making this up. I picked a random, small infobox from an article on Special:Random, and expanded it with Special:ExpandTemplates. Like them or not, ParserFunctions do a pretty good job of hiding complex wikitext from the average user, by putting it all in the templates. Without them, you have to A) put the tables directly into articles, which is a lot worse looking than using an infobox, B) Use the infoboxes and show all the unused fields as blank (which is ugly to the readers as well), C) Go back to using the pre-ParserFunctions template hacks, or D) Replace all the infoboxes with SMW.
Of course, infoboxes aren't the only use of ParserFunctions in templates. They're used in all of the "maintenance templates" like {{cleanup}} on enwiki, I would bet there's at least one template that uses a ParserFunction on 75% or more of all the articles on enwiki. Most could be substituted a lot easier than the infoboxes, but the question is, why? Why make the wikitext in articles harder to edit by forcing templates to be replaced by tables? Why make the job of template-coders harder by making it so templates can't be as useful? Rather than 1 infobox that works for all types of settlements, we'd have to have thousands, an infobox that works for a major Chinese city wouldn't work for a small town in America.
Are you saying that we should be able to revert the software to any given revision and expect it to work fine? If we made sure every single software change was fully backwards compatible, you could bet MediaWiki would have far fewer features and a lot more bugs than it does now.
All changes are easily revertible in the short term. When changes exist for years, causing thousands of other changes as a result, how to revert them gets rather difficult (the same is somewhat true of edits to articles as well). You're proposing we install Semantic MediaWiki and rewrite the Parser in a way that will likely not be fully backwards compatible. Neither of these changes will be easily revertible once deployed, especially after 2+ years.
I believe this example is an even clearer demonstration of the usability disaster that is parser functions. And it is just the kind of thing that can be essentially snuck into MediaWiki without the complete community consensus. Perhaps that's not the case - I would be interested in reading a more complete history of the discussions around ParserFunctions if there are significant details I am missing.
When Guido van Rossum makes a change to Python as seemingly trivial as a switch statement, he writes a formal description of the change. The change is then discussed on community mailing lists for a long period of time, and it is compared directly to other submissions. He then polls the community regarding the change at his keynote address, and rejects it if there is no popular support. http://www.python.org/dev/peps/pep-3103/
There absolutely are people out there who would like to have a larger voice in wiki syntax, WikiCreole comes to mind. These folks have usability in mind. ParserFunctions do not.
"*Any changes to the software must be gradual and reversible.* We need to make sure that any changes contribute positively to the community, as ultimately determined by everybody in Wikipedia, in full consultation with the community consensus." -- Jimmy Wales
On Sun, Jan 11, 2009 at 9:11 AM, Alex mrzmanwiki@gmail.com wrote:
Brian wrote:
I believe it is possible to expand the parser functions in place in a non-destructive way. There are always edge cases of course. But if it is not possible, it is a clear violation of a core wiki
principle
- that all changes be easily revertible.
ParserFunctions was checked into SVN in April 2006, presumably enabled around the same time. Its had nearly 3 years for ParserFunctions to integrate themselves into wikitext. By expanding them in place, you're going to be replacing the infobox syntax in articles with table syntax, hardly an improvement. To use a real example: {{Infobox Mountain | Name = Mount Blackmore | Elevation = 10,154 feet (3,094 m) | Location = [[Montana]], [[United States|USA]] | Range = [[Gallatin Range]] | Coordinates = {{coord|45|26|40|N|111|00|10|W|type:mountain_region:US}} | Topographic map = [[United States Geological Survey|USGS]] Mount Blackmore | Easiest route= Hike }}
would be replaced by something like:
{| cellpadding="5" cellspacing="0" class="infobox geography vcard" style="border:1px solid #999966; float:right; clear:right; margin-left:0.75em; margin-top:0.75em; margin-bottom:0.75em; background:#ffffff; width:305px; font-size:95%;" |- class="fn org" ! style="text-align:center; background:#e7dcc3; font-size:110%;" colspan="2"| Mount Blackmore |- |- class="note" | style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3; width:85px;" | [[Summit (topography)|Elevation]] | style="border-top:1px solid #999966; width:220px;" | 10,154 feet (3,094 m) |- | style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3;" | Location | class="label" style="border-top:1px solid #999966;" | [[Montana]], [[United States|USA]] |- | class="note" style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3;" | [[Mountain range|Range]] | style="border-top:1px solid #999966;" | [[Gallatin Range]] |- | style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3;" | [[Geographic coordinate system|Coordinates]] | style="border-top:1px solid #999966;" | {{coord|45|26|40|N|111|00|10|W|type:mountain_region:US}} |- | style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3;" | [[Topographic map|Topo map]] | style="border-top:1px solid #999966;" | [[United States Geological Survey|USGS]] Mount Blackmore |- | style="border-top:1px solid #999966; border-right:1px solid #999966; background:#e7dcc3;" | Easiest [[Climbing route|route]] | style="border-top:1px solid #999966;" | Hike |- |}
I'm not making this up. I picked a random, small infobox from an article on Special:Random, and expanded it with Special:ExpandTemplates. Like them or not, ParserFunctions do a pretty good job of hiding complex wikitext from the average user, by putting it all in the templates. Without them, you have to A) put the tables directly into articles, which is a lot worse looking than using an infobox, B) Use the infoboxes and show all the unused fields as blank (which is ugly to the readers as well), C) Go back to using the pre-ParserFunctions template hacks, or D) Replace all the infoboxes with SMW.
Of course, infoboxes aren't the only use of ParserFunctions in templates. They're used in all of the "maintenance templates" like {{cleanup}} on enwiki, I would bet there's at least one template that uses a ParserFunction on 75% or more of all the articles on enwiki. Most could be substituted a lot easier than the infoboxes, but the question is, why? Why make the wikitext in articles harder to edit by forcing templates to be replaced by tables? Why make the job of template-coders harder by making it so templates can't be as useful? Rather than 1 infobox that works for all types of settlements, we'd have to have thousands, an infobox that works for a major Chinese city wouldn't work for a small town in America.
Are you saying that we should be able to revert the software to any given revision and expect it to work fine? If we made sure every single software change was fully backwards compatible, you could bet MediaWiki would have far fewer features and a lot more bugs than it does now.
All changes are easily revertible in the short term. When changes exist for years, causing thousands of other changes as a result, how to revert them gets rather difficult (the same is somewhat true of edits to articles as well). You're proposing we install Semantic MediaWiki and rewrite the Parser in a way that will likely not be fully backwards compatible. Neither of these changes will be easily revertible once deployed, especially after 2+ years.
-- Alex (wikipedia:en:User:Mr.Z-man)
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
2009/1/11 Brian Brian.Mingus@colorado.edu:
Keep in mind regarding my Semantic drum beating that I am not a developer of Semantic Mediawiki or Semantic Forms. I am just a user, and as Erik put it, an advocate.
Semantic MediaWiki's syntax is disastrously horrible and intended for ontology geeks, not the mere humans for whom the tag soup nature of wikitext is a *feature*, not a bug. It's really not clear how you can condemn the present parser and consider SMW not awful.
- d.
On Sat, Jan 10, 2009 at 10:20 PM, Brian Brian.Mingus@colorado.edu wrote:
ParserFunctions are my specific example of how the current development process is very, very broken, and out of touch with the community.
However, the community as a whole has not objected to ParserFunctions. They were enabled with the full consent of the community. You seem to be claiming support that you don't have.
I believe that the introduction of ParserFunctions to MediaWiki was not done with community consensus
Source?
Is there *anybody* on this list that is willing to say that this is more usable than what we had before? http://en.wikipedia.org/w/index.php?title=Template:Infobox&action=raw
You cannot blame that on ParserFunctions. Notwithstanding some revert-warring by Netoholic, the template before ParserFunctions used {{qif}}:
http://en.wikipedia.org/w/index.php?title=Template:Infobox&oldid=4907322...
The only change attributable to ParserFunctions was the change from {{qif}} to {{#if}}. The rest of the increase in complexity is due to the community only.
You're attacking the wrong thing. You don't have a problem with ParserFunctions' introduction, in that historical context. Your problem is with the complexity of templates. Yes, that's potentially unusable, if you need to edit templates. Of course, the overwhelming majority of editors, particularly new ones, do not need to edit templates, only use them. ParserFunctions make it easier to use templates, not harder.
But the bigger point is that complex templates serve a purpose. They allow uniformity across the site with much less effort. The answer to any usability problems they cause is not to try abolishing complex templates -- that would make usability even worse. You'd have to subst in the infobox HTML for every article, which would make them even harder to edit; clutter up histories with bot changes (because that *would* happen); etc. The answer is to allow new editors to suppress complicated and confusing templates, or edit them in a more user-friendly manner. This is likely to be something that the usability work will address. Disabling ParserFunctions would solve absolutely nothing, and nor would have skipping it in the first place.
Of course, any individual wiki that wanted to could ask for ParserFunctions to be disabled. No community has even attempted this that I'm aware of, which says to me that your views are pretty idiosyncratic. If you think ParserFunctions should be disabled on enwiki, start a discussion there and get consensus. Good luck at even getting a *lack of* consensus in *favor* of them. I'd be a little surprised if you could get a single other person to agree with you.
On Sat, Jan 10, 2009 at 10:38 PM, Brian Brian.Mingus@colorado.edu wrote:
The proper way to do this is to provide a better user interface, not to add new syntax that takes an already Turing-complete template language to the next level.
There is a need on Wikipedia for complex user-generated logic. It's crystal-clear that this is how the community feels. The interface for *using* templates should be a lot simpler -- in fact, so should the interface for using all wikitext features, including simple stuff like italics. But the interface for *creating* complex templates necessarily cannot be very simple, any more than any interface to something really programmable can be very simple. This isn't a big problem, because the average user does not need to create or maintain complex templates.
On Sat, Jan 10, 2009 at 10:55 PM, Brian Brian.Mingus@colorado.edu wrote:
I don't believe the specific technical details that led to the development of ParserFunctions are all that relevant. It is always possible to implement a simple 'crash guard', so its not even that great of an excuse.
Only possible if you don't mind site functionality becoming unreliable, with some articles updated and some not.
No single person should have the power to develop and deploy such a thing on Wikipedia, even with the consensus of a tiny fraction of editors who were being inconvenienced.
The overwhelming majority of editors supported ParserFunctions' deployment, and the minority who did not have long since fallen silent as far as I know. If Wikipedians had asked for ParserFunctions to be disabled, it would have been. If they did that now, with demonstrated consensus, it still would be.
On Sun, Jan 11, 2009 at 12:43 AM, Brian Brian.Mingus@colorado.edu wrote:
Should ParserFunctions be reverted (a simple procedure by my estimate, which is a good thing) based solely on the fact that they are the most clear violation of Jimbo's principle that I am aware of?
I don't know who you think Jimbo is. He's the (co-)founder of Wikipedia and holds a seat on its board. He is not the director of technical affairs. He cannot give the developers direct orders on his own. What he says is not authoritative except *maybe* on the English Wikipedia, and then only if it's very specific.
And he has never objected to ParserFunctions in the past. You're the only one who's construing his principle (which was written years ago and can't be feasibly applied to the vastly larger communities today anyway) as applying to ParserFunctions.
On Sun, Jan 11, 2009 at 11:42 AM, Brian Brian.Mingus@colorado.edu wrote:
I believe this example is an even clearer demonstration of the usability disaster that is parser functions. And it is just the kind of thing that can be essentially snuck into MediaWiki without the complete community consensus. Perhaps that's not the case - I would be interested in reading a more complete history of the discussions around ParserFunctions if there are significant details I am missing.
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)... http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)... http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)... http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)... http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)... http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...
I found this quote from Tim Starling particularly interesting:
"The main reason I'm calling it a trial is to avoid appearing to have made a unilateral decision to enable it permanently. The critics of this concept now have one final chance to turn community opinion against it, before it becomes ingrained. However the reception has generally been positive. I've received a number of private compliments on it, in addition to what can be seen publically."
Not a single objection was raised on the Village Pump in that time period. See also the mailing list announcement, which received more replies:
http://lists.wikimedia.org/pipermail/wikitech-l/2006-April/022548.html
There were some objections, but the objectors seemed to pretty clearly be outnumbered. (Especially if you ignore people making generic complaints about RTL support that actually had nothing to do with ParserFunctions.) The idea that this was a unilateral decision by Tim against the community's wishes is completely wrong, although he was the one who did it.
Hoi, I mentioned it before, the Neapolitan wikipedia decided to do away with templates because it prevents people from contributing to their project. It works for them not to use templates.
Please do not understand this as a request to do away with templates. Templates are an important impediment for new people to contribute to our projects. I hope that the Stanton project will help us improve the usability of templates because that is sorely needed. We are currently at a point where the parser functionality is its own worst enemy from a usability point of view. Thanks, GerardM
2009/1/11 Aryeh Gregor <Simetrical+wikilist@gmail.comSimetrical%2Bwikilist@gmail.com
On Sat, Jan 10, 2009 at 10:20 PM, Brian Brian.Mingus@colorado.edu wrote:
ParserFunctions are my specific example of how the current development process is very, very broken, and out of touch with the community.
However, the community as a whole has not objected to ParserFunctions. They were enabled with the full consent of the community. You seem to be claiming support that you don't have.
I believe that the introduction of ParserFunctions to MediaWiki was not
done
with community consensus
Source?
Is there *anybody* on this list that is willing to say that this is more usable than what we had before? http://en.wikipedia.org/w/index.php?title=Template:Infobox&action=raw
You cannot blame that on ParserFunctions. Notwithstanding some revert-warring by Netoholic, the template before ParserFunctions used {{qif}}:
http://en.wikipedia.org/w/index.php?title=Template:Infobox&oldid=4907322...
The only change attributable to ParserFunctions was the change from {{qif}} to {{#if}}. The rest of the increase in complexity is due to the community only.
You're attacking the wrong thing. You don't have a problem with ParserFunctions' introduction, in that historical context. Your problem is with the complexity of templates. Yes, that's potentially unusable, if you need to edit templates. Of course, the overwhelming majority of editors, particularly new ones, do not need to edit templates, only use them. ParserFunctions make it easier to use templates, not harder.
But the bigger point is that complex templates serve a purpose. They allow uniformity across the site with much less effort. The answer to any usability problems they cause is not to try abolishing complex templates -- that would make usability even worse. You'd have to subst in the infobox HTML for every article, which would make them even harder to edit; clutter up histories with bot changes (because that *would* happen); etc. The answer is to allow new editors to suppress complicated and confusing templates, or edit them in a more user-friendly manner. This is likely to be something that the usability work will address. Disabling ParserFunctions would solve absolutely nothing, and nor would have skipping it in the first place.
Of course, any individual wiki that wanted to could ask for ParserFunctions to be disabled. No community has even attempted this that I'm aware of, which says to me that your views are pretty idiosyncratic. If you think ParserFunctions should be disabled on enwiki, start a discussion there and get consensus. Good luck at even getting a *lack of* consensus in *favor* of them. I'd be a little surprised if you could get a single other person to agree with you.
On Sat, Jan 10, 2009 at 10:38 PM, Brian Brian.Mingus@colorado.edu wrote:
The proper way to do this is to provide a better user interface, not to
add
new syntax that takes an already Turing-complete template language to the next level.
There is a need on Wikipedia for complex user-generated logic. It's crystal-clear that this is how the community feels. The interface for *using* templates should be a lot simpler -- in fact, so should the interface for using all wikitext features, including simple stuff like italics. But the interface for *creating* complex templates necessarily cannot be very simple, any more than any interface to something really programmable can be very simple. This isn't a big problem, because the average user does not need to create or maintain complex templates.
On Sat, Jan 10, 2009 at 10:55 PM, Brian Brian.Mingus@colorado.edu wrote:
I don't believe the specific technical details that led to the
development
of ParserFunctions are all that relevant. It is always possible to
implement
a simple 'crash guard', so its not even that great of an excuse.
Only possible if you don't mind site functionality becoming unreliable, with some articles updated and some not.
No single person should have the power to develop and deploy such a thing
on
Wikipedia, even with the consensus of a tiny fraction of editors who were being inconvenienced.
The overwhelming majority of editors supported ParserFunctions' deployment, and the minority who did not have long since fallen silent as far as I know. If Wikipedians had asked for ParserFunctions to be disabled, it would have been. If they did that now, with demonstrated consensus, it still would be.
On Sun, Jan 11, 2009 at 12:43 AM, Brian Brian.Mingus@colorado.edu wrote:
Should ParserFunctions be reverted (a simple procedure by my estimate,
which
is a good thing) based solely on the fact that they are the most clear violation of Jimbo's principle that I am aware of?
I don't know who you think Jimbo is. He's the (co-)founder of Wikipedia and holds a seat on its board. He is not the director of technical affairs. He cannot give the developers direct orders on his own. What he says is not authoritative except *maybe* on the English Wikipedia, and then only if it's very specific.
And he has never objected to ParserFunctions in the past. You're the only one who's construing his principle (which was written years ago and can't be feasibly applied to the vastly larger communities today anyway) as applying to ParserFunctions.
On Sun, Jan 11, 2009 at 11:42 AM, Brian Brian.Mingus@colorado.edu wrote:
I believe this example is an even clearer demonstration of the usability disaster that is parser functions. And it is just the kind of thing that
can
be essentially snuck into MediaWiki without the complete community consensus. Perhaps that's not the case - I would be interested
in
reading a more complete history of the discussions around ParserFunctions
if
there are significant details I am missing.
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=49208103
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=50342128
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=51448619
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=52579836
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=53776093
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=54982355
I found this quote from Tim Starling particularly interesting:
"The main reason I'm calling it a trial is to avoid appearing to have made a unilateral decision to enable it permanently. The critics of this concept now have one final chance to turn community opinion against it, before it becomes ingrained. However the reception has generally been positive. I've received a number of private compliments on it, in addition to what can be seen publically."
Not a single objection was raised on the Village Pump in that time period. See also the mailing list announcement, which received more replies:
http://lists.wikimedia.org/pipermail/wikitech-l/2006-April/022548.html
There were some objections, but the objectors seemed to pretty clearly be outnumbered. (Especially if you ignore people making generic complaints about RTL support that actually had nothing to do with ParserFunctions.) The idea that this was a unilateral decision by Tim against the community's wishes is completely wrong, although he was the one who did it.
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Simetrical,
Thanks for your research. I have read the links you sent in full. Here is the motivation for developing developing parser functions:
"*In response to a campaign by users of the English Wikipedia to harrass
developers by introducing increasingly ugly and inefficient meta-templates to popular pages, I've caved in and written a few reasonably efficient parser functions.*" -- Tim Starling 15:25, 5 April 2006 (UTC)
I see on Village Pump (technical) and wikitech-l, in addition to an associated talk page, that there was a vocal group of people who objected to parser functions and that they were ignored and the extension was enabled anyway.
At all Wikimanias I have been to, including ones in 2006 and 2007, there have been discussions on usability where the audience contains 50-100 people. There is always an inspired discussion about how awful wiki syntax is. You are correct that I do not like templates, but I do not blame that on a process that still exists. It certainly would be interesting to know the history of the implementation of templates, if anyone knows that. The point is that I am aware of a much larger community than I see in these links that would have at least liked to have a meeting of the minds on parser functions before they were enabled. The community was apparantly bypassed by the developers. Compare the process here to that which Guido uses in the Python community: "*A quick poll during my keynote presentation at PyCon 2007 shows this proposal has no popular support. I therefore reject it.*" -- Guido
I strongly disagree with your point that a lack of uprise in the community against parser functions is evidence that they should have been implemented. Uprise take a lot of energy - certainly a lot more energy than the "few hours of work" that Tim put into them. As a previous poster pointed out, "I would bet there's at least one template that uses a ParserFunction on 75% or more of all the articles on enwiki." MediaWiki effectively has a programming language in it because of a few hours of developer work and a few minutes of conversation. A programming language that, apparantly, cannot be reverted.
I have been to the developer presentations at all but one Wikimania, and they are dissapointing. I don't really see new ideas for features being presented to the community, and I do not believe that the developers ideas are *a priori* better than the community ideas. The presentations put together by community members are better thought out, more polished, more comprehensive and more inspired than those that come from the developers. And they are often ignored by the developers. I reject the argument they do not have time. Over the course of years, I expect that a CTO would both the time, inspiration and technology vision to recognize something amazing when he saw it, and take it upon himself to review it, rather than implement a new interface for the iPhone.
I maintain my position that the process of adding new features to MediaWiki is broken, and that it happens largely at developer whim. I am discouraged that Erik believes we should maintain the broken status quo, and kludge a WYSIWYG on top of it.
On Sun, Jan 11, 2009 at 10:13 AM, Aryeh Gregor < Simetrical+wikilist@gmail.com Simetrical%2Bwikilist@gmail.com> wrote:
On Sat, Jan 10, 2009 at 10:20 PM, Brian Brian.Mingus@colorado.edu wrote:
ParserFunctions are my specific example of how the current development process is very, very broken, and out of touch with the community.
However, the community as a whole has not objected to ParserFunctions. They were enabled with the full consent of the community. You seem to be claiming support that you don't have.
I believe that the introduction of ParserFunctions to MediaWiki was not
done
with community consensus
Source?
Is there *anybody* on this list that is willing to say that this is more usable than what we had before? http://en.wikipedia.org/w/index.php?title=Template:Infobox&action=raw
You cannot blame that on ParserFunctions. Notwithstanding some revert-warring by Netoholic, the template before ParserFunctions used {{qif}}:
http://en.wikipedia.org/w/index.php?title=Template:Infobox&oldid=4907322...
The only change attributable to ParserFunctions was the change from {{qif}} to {{#if}}. The rest of the increase in complexity is due to the community only.
You're attacking the wrong thing. You don't have a problem with ParserFunctions' introduction, in that historical context. Your problem is with the complexity of templates. Yes, that's potentially unusable, if you need to edit templates. Of course, the overwhelming majority of editors, particularly new ones, do not need to edit templates, only use them. ParserFunctions make it easier to use templates, not harder.
But the bigger point is that complex templates serve a purpose. They allow uniformity across the site with much less effort. The answer to any usability problems they cause is not to try abolishing complex templates -- that would make usability even worse. You'd have to subst in the infobox HTML for every article, which would make them even harder to edit; clutter up histories with bot changes (because that *would* happen); etc. The answer is to allow new editors to suppress complicated and confusing templates, or edit them in a more user-friendly manner. This is likely to be something that the usability work will address. Disabling ParserFunctions would solve absolutely nothing, and nor would have skipping it in the first place.
Of course, any individual wiki that wanted to could ask for ParserFunctions to be disabled. No community has even attempted this that I'm aware of, which says to me that your views are pretty idiosyncratic. If you think ParserFunctions should be disabled on enwiki, start a discussion there and get consensus. Good luck at even getting a *lack of* consensus in *favor* of them. I'd be a little surprised if you could get a single other person to agree with you.
On Sat, Jan 10, 2009 at 10:38 PM, Brian Brian.Mingus@colorado.edu wrote:
The proper way to do this is to provide a better user interface, not to
add
new syntax that takes an already Turing-complete template language to the next level.
There is a need on Wikipedia for complex user-generated logic. It's crystal-clear that this is how the community feels. The interface for *using* templates should be a lot simpler -- in fact, so should the interface for using all wikitext features, including simple stuff like italics. But the interface for *creating* complex templates necessarily cannot be very simple, any more than any interface to something really programmable can be very simple. This isn't a big problem, because the average user does not need to create or maintain complex templates.
On Sat, Jan 10, 2009 at 10:55 PM, Brian Brian.Mingus@colorado.edu wrote:
I don't believe the specific technical details that led to the
development
of ParserFunctions are all that relevant. It is always possible to
implement
a simple 'crash guard', so its not even that great of an excuse.
Only possible if you don't mind site functionality becoming unreliable, with some articles updated and some not.
No single person should have the power to develop and deploy such a thing
on
Wikipedia, even with the consensus of a tiny fraction of editors who were being inconvenienced.
The overwhelming majority of editors supported ParserFunctions' deployment, and the minority who did not have long since fallen silent as far as I know. If Wikipedians had asked for ParserFunctions to be disabled, it would have been. If they did that now, with demonstrated consensus, it still would be.
On Sun, Jan 11, 2009 at 12:43 AM, Brian Brian.Mingus@colorado.edu wrote:
Should ParserFunctions be reverted (a simple procedure by my estimate,
which
is a good thing) based solely on the fact that they are the most clear violation of Jimbo's principle that I am aware of?
I don't know who you think Jimbo is. He's the (co-)founder of Wikipedia and holds a seat on its board. He is not the director of technical affairs. He cannot give the developers direct orders on his own. What he says is not authoritative except *maybe* on the English Wikipedia, and then only if it's very specific.
And he has never objected to ParserFunctions in the past. You're the only one who's construing his principle (which was written years ago and can't be feasibly applied to the vastly larger communities today anyway) as applying to ParserFunctions.
On Sun, Jan 11, 2009 at 11:42 AM, Brian Brian.Mingus@colorado.edu wrote:
I believe this example is an even clearer demonstration of the usability disaster that is parser functions. And it is just the kind of thing that
can
be essentially snuck into MediaWiki without the complete community consensus. Perhaps that's not the case - I would be interested
in
reading a more complete history of the discussions around ParserFunctions
if
there are significant details I am missing.
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=49208103
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=50342128
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=51448619
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=52579836
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=53776093
http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)...http://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_%28technical%29/Archive&oldid=54982355
I found this quote from Tim Starling particularly interesting:
"The main reason I'm calling it a trial is to avoid appearing to have made a unilateral decision to enable it permanently. The critics of this concept now have one final chance to turn community opinion against it, before it becomes ingrained. However the reception has generally been positive. I've received a number of private compliments on it, in addition to what can be seen publically."
Not a single objection was raised on the Village Pump in that time period. See also the mailing list announcement, which received more replies:
http://lists.wikimedia.org/pipermail/wikitech-l/2006-April/022548.html
There were some objections, but the objectors seemed to pretty clearly be outnumbered. (Especially if you ignore people making generic complaints about RTL support that actually had nothing to do with ParserFunctions.) The idea that this was a unilateral decision by Tim against the community's wishes is completely wrong, although he was the one who did it.
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
2009/1/11 Brian Brian.Mingus@colorado.edu:
I see on Village Pump (technical) and wikitech-l, in addition to an associated talk page, that there was a vocal group of people who objected to parser functions and that they were ignored and the extension was enabled anyway.
This is wikipedia. We could find vocal opposition to kittens.
At all Wikimanias I have been to, including ones in 2006 and 2007, there have been discussions on usability where the audience contains 50-100 people. There is always an inspired discussion about how awful wiki syntax is.
Because it's easy to complain about without directly challenging anyone.
You are correct that I do not like templates, but I do not blame that on a process that still exists. It certainly would be interesting to know the history of the implementation of templates, if anyone knows that. The point is that I am aware of a much larger community than I see in these links that would have at least liked to have a meeting of the minds on parser functions before they were enabled. The community was apparantly bypassed by the developers. Compare the process here to that which Guido uses in the Python community: "*A quick poll during my keynote presentation at PyCon 2007 shows this proposal has no popular support. I therefore reject it.*" -- Guido
Python is not comparable. Most wikipedians are not aware of phaser functions. Most PyCon 2007 will probably at least understand what the changes mean.
I strongly disagree with your point that a lack of uprise in the community against parser functions is evidence that they should have been implemented. Uprise take a lot of energy - certainly a lot more energy than the "few hours of work" that Tim put into them.
We have uprisings against stuff all the time. It doesn't appear to be something wikipedians lack the energy to do.
As a previous poster pointed out, "I would bet there's at least one template that uses a ParserFunction on 75% or more of all the articles on enwiki." MediaWiki effectively has a programming language in it because of a few hours of developer work and a few minutes of conversation. A programming language that, apparantly, cannot be reverted.
Again no. It already was. {{Qif}} dates from November 2005. People were working out how to build calculators out of templates.
I have been to the developer presentations at all but one Wikimania, and they are dissapointing. I don't really see new ideas for features being presented to the community, and I do not believe that the developers ideas are *a priori* better than the community ideas.
Most of the community does not go to Wikimania.
The presentations put together by community members are better thought out, more polished, more comprehensive and more inspired than those that come from the developers. And they are often ignored by the developers. I reject the argument they do not have time. Over the course of years, I expect that a CTO would both the time, inspiration and technology vision to recognize something amazing when he saw it, and take it upon himself to review it, rather than implement a new interface for the iPhone.
What amazing thing do you think has been missed?
I maintain my position that the process of adding new features to MediaWiki is broken, and that it happens largely at developer whim. I am discouraged that Erik believes we should maintain the broken status quo, and kludge a WYSIWYG on top of it.
Doing much about the phaser runs into the issue that what in any normal enviroment is some weird unused corner case on wikipedia is likely to have be widely abused and deployed.
On Sunday 11 January 2009 20:08:22 Brian wrote:
pointed out, "I would bet there's at least one template that uses a ParserFunction on 75% or more of all the articles on enwiki." MediaWiki effectively has a programming language in it because of a few hours of developer work and a few minutes of conversation. A programming language that, apparantly, cannot be reverted.
That programming language was introduced solely because Wikipedia editors have already been using another programming language, which had the same capabilities, but used much more resources. If you can get Wikipedians not to use any programming language in their templates, you can effectively revert the change, and I don't see why the extension wouldn't be turned off afterwards.
I have been to the developer presentations at all but one Wikimania, and they are dissapointing. I don't really see new ideas for features being presented to the community, and I do not believe that the developers ideas
Including mine? :)
Perhaps, do you have a link? :)
On Sun, Jan 11, 2009 at 3:12 PM, Nikola Smolenski smolensk@eunet.yu wrote:
On Sunday 11 January 2009 20:08:22 Brian wrote:
pointed out, "I would bet there's at least one template that uses a ParserFunction on 75% or more of all the articles on enwiki." MediaWiki effectively has a programming language in it because of a few hours of developer work and a few minutes of conversation. A programming language that, apparantly, cannot be reverted.
That programming language was introduced solely because Wikipedia editors have already been using another programming language, which had the same capabilities, but used much more resources. If you can get Wikipedians not to use any programming language in their templates, you can effectively revert the change, and I don't see why the extension wouldn't be turned off afterwards.
I have been to the developer presentations at all but one Wikimania, and they are dissapointing. I don't really see new ideas for features being presented to the community, and I do not believe that the developers
ideas
Including mine? :)
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Hoi, The Wikimania presentations of Alexandria are no longer online.. I am trying to find out if a backup exists.. Thanks, GerardM
PS If you have a copy of the Merrick Schaeffer presentation, I would be happy to learn that you do..
2009/1/11 Brian Brian.Mingus@colorado.edu
Perhaps, do you have a link? :)
On Sun, Jan 11, 2009 at 3:12 PM, Nikola Smolenski smolensk@eunet.yu wrote:
On Sunday 11 January 2009 20:08:22 Brian wrote:
pointed out, "I would bet there's at least one template that uses a ParserFunction on 75% or more of all the articles on enwiki." MediaWiki effectively has a programming language in it because of a few hours of developer work and a few minutes of conversation. A programming
language
that, apparantly, cannot be reverted.
That programming language was introduced solely because Wikipedia editors have already been using another programming language, which had the same capabilities, but used much more resources. If you can get Wikipedians
not
to use any programming language in their templates, you can effectively
revert
the change, and I don't see why the extension wouldn't be turned off afterwards.
I have been to the developer presentations at all but one Wikimania,
and
they are dissapointing. I don't really see new ideas for features being presented to the community, and I do not believe that the developers
ideas
Including mine? :)
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Thanks Gerard, could you also inquire about the year before? I remember them being in some obscure ftp directory, unlabeled.
On Sun, Jan 11, 2009 at 3:21 PM, Gerard Meijssen gerard.meijssen@gmail.comwrote:
Hoi, The Wikimania presentations of Alexandria are no longer online.. I am trying to find out if a backup exists.. Thanks, GerardM
PS If you have a copy of the Merrick Schaeffer presentation, I would be happy to learn that you do..
2009/1/11 Brian Brian.Mingus@colorado.edu
Perhaps, do you have a link? :)
On Sun, Jan 11, 2009 at 3:12 PM, Nikola Smolenski smolensk@eunet.yu wrote:
On Sunday 11 January 2009 20:08:22 Brian wrote:
pointed out, "I would bet there's at least one template that uses a ParserFunction on 75% or more of all the articles on enwiki."
MediaWiki
effectively has a programming language in it because of a few hours
of
developer work and a few minutes of conversation. A programming
language
that, apparantly, cannot be reverted.
That programming language was introduced solely because Wikipedia
editors
have already been using another programming language, which had the same capabilities, but used much more resources. If you can get Wikipedians
not
to use any programming language in their templates, you can effectively
revert
the change, and I don't see why the extension wouldn't be turned off afterwards.
I have been to the developer presentations at all but one Wikimania,
and
they are dissapointing. I don't really see new ideas for features
being
presented to the community, and I do not believe that the developers
ideas
Including mine? :)
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Brian wrote:
Thank for your answers.
ParserFunctions are my specific example of how the current development process is very, very broken, and out of touch with the community. According to Jimbo's user page (his bolded): "*Any changes to the software must be gradual and reversible.* We need to make sure that any changes contribute positively to the community, as ultimately determined by everybody in Wikipedia, in full consultation with the community consensus."
I believe that the introduction of ParserFunctions to MediaWiki was not done with community consensus and has led to an extremely fast devolution in wiki syntax. Further, the usability of Wikipedia has declined at a rate proportional to the adoption of parser functions.
The evolution of templates, and then ParserFunctions, was led by community demand and was widely encouraged by the community. I was concerned about the usability implications of ParserFunctions, but the community demonstrated its intent to ignore any usability concerns by implementing complex templates, very similar to the ones seen today, using the parameter default mechanism alone. Resistance to this trend seemed very weak.
The decline of usability in the template namespace has been driven by technically-minded editors who are proud of their ability to make use of an arcane and cryptic syntax to produce ever more complex feats of text processing. This is an editorial issue and I cannot accept responsibility for it.
However, I am aware that I enabled this process, by implementing the few simple features that they needed. I regret my role in it. That's one of the reasons why I've been resisting the constant community pressure to enable StringFunctions, which I believe will lead to compiler-like functionality implemented in the template namespace. Instead, I've been trying to steer development in the direction of a readable embedded programming language.
If you want a wiki with infoboxes (and I suppose I do since I wrote one of them in the pre-template era using an Excel VBA macro), then I suppose we need some form of template feature. The problem with present-day parser functions is that they are terribly ugly, excessively punctuated, dense to the point of unreadability, with very limited commenting and self-documentation.
I believe that the solution to this problem lies in borrowing concepts from software engineering, such as variables, functions, minimally parenthesized programming languages, libraries, objects, etc. I know that many template programmers cannot program in a traditional programming language, but I have a feeling they could if they wanted to. I certainly find PHP programming much easier than template programming, after a few years of familiarity with both.
I'm also aware that most (non-template) Wikipedia editors have no desire to learn how to program, and do not believe that it should be necessary in the course of editing articles. I think that with enough development time, a suitable platform in MediaWiki could connect these two types of editors. For example there could be an easy-to-use form-based template invocation generator, with forms written by the same technically minded editors who write arcane templates today. Citations could be inserted into articles by invoking a popup box and entering text into clearly labelled form fields.
From another post: We do not even have a parser. I am sure you know that MediaWiki does not actually parse. It is 5000 lines worth of regexes, for the most part.
"Parser" is a convenient and short name for it.
I've reviewed all of the regexes, and I stand by the vast majority of them. The PCRE regular expression module is a versatile text scanning language, which is compiled to bytecode and executed in a VM, very much like PHP. It just so happens that for most text processing tasks where there is a choice between PHP or PCRE, PCRE is faster. In certain special cases, it's possible to gain extra performance by using primitive text scanning functions like strpos() which are implemented in C. Where this is possible, I have done so. But if you want to, say, find the first match from a list of strings in a single subject, searching from a given offset, then the fastest way to do it in standard PHP is a regex with the /S modifier.
In two cases, I found the available algorithms accessible from standard PHP to be inconveniently slow, so I wrote the FSS and wikidiff2 extensions in C and C++ respectively.
Perhaps, like so many computer science graduates, you are enamored with the taxonomy of formal grammars and the parsers that go with them. There are a number of problems with these traditional solutions.
Firstly, there are theoretical problems. The concept of a regular grammar is not versatile enough to describe languages such as XML, and not descriptive enough to allow unambiguous parse tree production from a language like wikitext. It's trivial to invent irregular grammars which can be nonetheless processed in linear time. My aims for wikitext, namely that it be easy for humans to write but fast to convert to HTML, do not coincide well with the taxonomy of formal grammars.
Secondly, there are practical problems. Past projects attempting to parse wikitext using flex/bison or similar schemes have failed to achieve the performance of the present parser, which is surprising because I didn't think I was setting the bar very high. You can bet that if I ever rewrote it in C++ myself, it would be much faster. The PHP compiler community is currently migrating away from LALR towards a regex-based parser called re2c, mostly for performance reasons.
Thirdly, there is the fact that certain phases of MediaWiki's parser are already very similar to the textbook parsers and can be analysed in those terms. The main difference is that our parser is better optimised. For example, the preprocessor acts like a recursive descent parser, but with a non-recursive frontend (using an internal stack), a caching phase, and a parse tree expansion phase with special-case recursive to iterative transformations to minimise stack depth.
Yet another post:
I don't believe a computer scientist would have a huge problem writing a proper parser. Are any of the core developers computer scientists?
Frankly, as an ex-physicist, I don't find the field of computer science particularly impressive, either in terms of academic rigour or practical applications. I think my time would be best spent working as a software engineer for a cause that I believe in, rather than going back to university and studying another socially-disconnected field.
-- Tim Starling
2009/1/13 Tim Starling tstarling@wikimedia.org:
I believe that the solution to this problem lies in borrowing concepts from software engineering, such as variables, functions, minimally parenthesized programming languages, libraries, objects, etc. I know that many template programmers cannot program in a traditional programming language, but I have a feeling they could if they wanted to.
How well do those concepts stand up when you have a lot of people copying and pasting code they don't really understand (writing an infobox from scratch is hard modifying an existing one less so)?
On Tue, Jan 13, 2009 at 12:28 PM, geni geniice@gmail.com wrote:
How well do those concepts stand up when you have a lot of people copying and pasting code they don't really understand (writing an infobox from scratch is hard modifying an existing one less so)?
Pretty well, I suspect. Of course, real languages are less tolerant of error than templates (even PHP makes syntax errors fatal), but I don't think the bar to entry would be huge. You might also get more real programmers willing to deal with complex templates instead of avoiding them like the plague because the language is so hideous, as I at least currently do.
Aryeh Gregor wrote:
On Tue, Jan 13, 2009 at 12:28 PM, geni geniice@gmail.com wrote:
How well do those concepts stand up when you have a lot of people copying and pasting code they don't really understand (writing an infobox from scratch is hard modifying an existing one less so)?
Pretty well, I suspect. Of course, real languages are less tolerant of error than templates (even PHP makes syntax errors fatal), but I don't think the bar to entry would be huge. You might also get more real programmers willing to deal with complex templates instead of avoiding them like the plague because the language is so hideous, as I at least currently do.
If we have things like functions and libraries, it may actually do better than the current system. It would be easier to just have one main template that contains most of the more complex code, subtemplates would just include the main template and call the necessary functions and there'd be less need for copy/pasting. This is already done to a certain extent with "meta-templates," but they aren't quite as versatile as they could potentially be with a real programming language.
geni wrote:
2009/1/13 Tim Starling tstarling@wikimedia.org:
I believe that the solution to this problem lies in borrowing concepts from software engineering, such as variables, functions, minimally parenthesized programming languages, libraries, objects, etc. I know that many template programmers cannot program in a traditional programming language, but I have a feeling they could if they wanted to.
How well do those concepts stand up when you have a lot of people copying and pasting code they don't really understand (writing an infobox from scratch is hard modifying an existing one less so)?
Copying is an exceedingly common practice in software engineering. If I replaced "infobox" with "feature" in your comment then you'd sound like a software engineer. The answer is that it will work just fine, the copier only has to have the vaguest familiarity with the language to be able to do it.
However, it's generally discouraged, because the widely copied code becomes difficult to edit. If a bug is found in it, for instance, it will be necessary to find all instances of the code and to change them. The process of merging common code to a library of functions is called refactoring (during maintenance) or abstraction (during design), and is a common task for more experienced programmers. The aim of refactoring is to keep the boilerplate text that is copied as small and elegant as possible, to minimise the number of things that can go wrong with it.
The template equivalent to refactoring is the introduction of meta-templates, such as {{infobox}}.
-- Tim Starling
2009/1/13 Tim Starling tstarling@wikimedia.org:
The template equivalent to refactoring is the introduction of meta-templates, such as {{infobox}}.
The other useful thing that can be done with templates is to standardise the field names in them as much as possible per wiki.
The reason? To enhance machine readability of data in them. People are SERIOUSLY INTERESTED in this.
The user interface for templates is not entirely horrible - {{templatename|field1=value|field2=value}} etc. As long as that stays reasonably assumable, the plumbing behind it can be as esoteric as is needed - no-one really cares as long as it works reliably.
- d.
David Gerard wrote:
2009/1/13 Tim Starling tstarling@wikimedia.org:
The template equivalent to refactoring is the introduction of meta-templates, such as {{infobox}}.
Nothing wrong with that; C++ has abstract classes which aren't intended to be instantiated directly, but form useful models for hiding commonality among derived classes within a common framework.
The other useful thing that can be done with templates is to standardise the field names in them as much as possible per wiki.
Which makes their derivations all the more familiar as editors use a wider range of templates.
The reason? To enhance machine readability of data in them. People are SERIOUSLY INTERESTED in this.
The user interface for templates is not entirely horrible - {{templatename|field1=value|field2=value}} etc. As long as that stays reasonably assumable, the plumbing behind it can be as esoteric as is needed - no-one really cares as long as it works reliably.
And at the page-edit level, can be more easily parsed by relatively simple tools.
David Gerard wrote:
The other useful thing that can be done with templates is to standardise the field names in them as much as possible per wiki.
The reason? To enhance machine readability of data in them. People are SERIOUSLY INTERESTED in this.
Another useful thing: after an article is parsed, write all the templates it uses and their parameters in the database. Even if at first it isn't possible to read this data on Wikipedia, Toolserver could do wonders with it :)
On Wed, Jan 14, 2009 at 9:57 AM, Nikola Smolenski smolensk@eunet.yu wrote:
David Gerard wrote:
The other useful thing that can be done with templates is to standardise the field names in them as much as possible per wiki.
The reason? To enhance machine readability of data in them. People are SERIOUSLY INTERESTED in this.
Another useful thing: after an article is parsed, write all the templates it uses and their parameters in the database. Even if at first it isn't possible to read this data on Wikipedia, Toolserver could do wonders with it :)
People (including yours truly) have been asking for this for years...
Magnus
That's pretty much exactly what Semantic MediaWiki offers.
SMW has developed a lot, since many of you saw it. By now, you may * switch off inline queries if you are afraid they won't work fast enough * get rid of the ugly syntax everyone is scared about (and simply hide it all in templates by using the #declare function) * have all that data sitting there inside the DB and export it in standard data formats like RDF or JSON (ok, well, the last one is *almost* finished)
We would be very much interested in having SMW tested on a labs machine with a copy of a reasonably big Wikipedia (e.g. German).
And, just to take note to the title of this thread -- I never thought and the developers never gave me the feeling that the software is out of reach for the community. Access to SVN was swiftly granted, and both Tim and Brion were always giving encouraging and valuable feedback to us.
Cheers, denny
Magnus Manske wrote:
On Wed, Jan 14, 2009 at 9:57 AM, Nikola Smolenski smolensk@eunet.yu wrote:
David Gerard wrote:
The other useful thing that can be done with templates is to standardise the field names in them as much as possible per wiki.
The reason? To enhance machine readability of data in them. People are SERIOUSLY INTERESTED in this.
Another useful thing: after an article is parsed, write all the templates it uses and their parameters in the database. Even if at first it isn't possible to read this data on Wikipedia, Toolserver could do wonders with it :)
People (including yours truly) have been asking for this for years...
Magnus
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Access to svn does not imply access to MediaWiki. Changes to MediaWiki have been almost entirely up to core developer discretion, and as I have demonstrated, 'consensus' has largely implied that they, and only they, thought the changes made Wikipedia better. The ideas are rarely presented to the community in a formal, well-designed demo format (as SMW has been, time and time again), and they are not evaluated for their usability. When a usability issue arises third party tools are not properly considered. Rather, they reinvent the wheel in an inferior manner.
On Thu, Jan 15, 2009 at 11:07 AM, Denny Vrandečić <dvr@aifb.uni-karlsruhe.de
wrote:
That's pretty much exactly what Semantic MediaWiki offers.
SMW has developed a lot, since many of you saw it. By now, you may
- switch off inline queries if you are afraid they won't work fast enough
- get rid of the ugly syntax everyone is scared about (and simply hide
it all in templates by using the #declare function)
- have all that data sitting there inside the DB and export it in
standard data formats like RDF or JSON (ok, well, the last one is *almost* finished)
We would be very much interested in having SMW tested on a labs machine with a copy of a reasonably big Wikipedia (e.g. German).
And, just to take note to the title of this thread -- I never thought and the developers never gave me the feeling that the software is out of reach for the community. Access to SVN was swiftly granted, and both Tim and Brion were always giving encouraging and valuable feedback to us.
Cheers, denny
Magnus Manske wrote:
On Wed, Jan 14, 2009 at 9:57 AM, Nikola Smolenski smolensk@eunet.yu
wrote:
David Gerard wrote:
The other useful thing that can be done with templates is to standardise the field names in them as much as possible per wiki.
The reason? To enhance machine readability of data in them. People are SERIOUSLY INTERESTED in this.
Another useful thing: after an article is parsed, write all the templates it uses and their parameters in the database. Even if at first it isn't possible to read this data on Wikipedia, Toolserver could do wonders with it :)
People (including yours truly) have been asking for this for years...
Magnus
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
The discussion to add a full-fledged programming language to MediaWiki is yet another example of this. Rather than evaluate existing tools which allow for user-interface extensibility, the developers would rather embed PHP within PHP. This allows you to do a variety of things:
* Simulate the brain * Write MediaWiki within MediaWiki * Compute any function * ... * Write an enyclopedia?
Our neural simulator contains an embedded dynamic language called C^c. It is interpreted C++. I assure you that it does not aid in usability. Our software did not start to become truly usable until we tackled the issue of user-extensible interfaces.
This issue has already been tackled in MediaWiki, and yet the solution to all of our problems is claimed to be a well-designed embedded scripting language. This is the largest possible hammer you could apply to the problem. I can't see how it is a reasonable next step.
2009/1/15 Brian Brian.Mingus@colorado.edu
Access to svn does not imply access to MediaWiki. Changes to MediaWiki have been almost entirely up to core developer discretion, and as I have demonstrated, 'consensus' has largely implied that they, and only they, thought the changes made Wikipedia better. The ideas are rarely presented to the community in a formal, well-designed demo format (as SMW has been, time and time again), and they are not evaluated for their usability. When a usability issue arises third party tools are not properly considered. Rather, they reinvent the wheel in an inferior manner.
On Thu, Jan 15, 2009 at 11:07 AM, Denny Vrandečić < dvr@aifb.uni-karlsruhe.de> wrote:
That's pretty much exactly what Semantic MediaWiki offers.
SMW has developed a lot, since many of you saw it. By now, you may
- switch off inline queries if you are afraid they won't work fast enough
- get rid of the ugly syntax everyone is scared about (and simply hide
it all in templates by using the #declare function)
- have all that data sitting there inside the DB and export it in
standard data formats like RDF or JSON (ok, well, the last one is *almost* finished)
We would be very much interested in having SMW tested on a labs machine with a copy of a reasonably big Wikipedia (e.g. German).
And, just to take note to the title of this thread -- I never thought and the developers never gave me the feeling that the software is out of reach for the community. Access to SVN was swiftly granted, and both Tim and Brion were always giving encouraging and valuable feedback to us.
Cheers, denny
Magnus Manske wrote:
On Wed, Jan 14, 2009 at 9:57 AM, Nikola Smolenski smolensk@eunet.yu
wrote:
David Gerard wrote:
The other useful thing that can be done with templates is to standardise the field names in them as much as possible per wiki.
The reason? To enhance machine readability of data in them. People are SERIOUSLY INTERESTED in this.
Another useful thing: after an article is parsed, write all the templates it uses and their parameters in the database. Even if at
first
it isn't possible to read this data on Wikipedia, Toolserver could do wonders with it :)
People (including yours truly) have been asking for this for years...
Magnus
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
2009/1/15 Brian Brian.Mingus@colorado.edu:
The discussion to add a full-fledged programming language to MediaWiki is yet another example of this. Rather than evaluate existing tools which allow for user-interface extensibility, the developers would rather embed PHP within PHP. This allows you to do a variety of things:
- Simulate the brain
- Write MediaWiki within MediaWiki
- Compute any function
- ...
- Write an enyclopedia?
Our neural simulator contains an embedded dynamic language called C^c. It is interpreted C++. I assure you that it does not aid in usability. Our software did not start to become truly usable until we tackled the issue of user-extensible interfaces.
This issue has already been tackled in MediaWiki, and yet the solution to all of our problems is claimed to be a well-designed embedded scripting language. This is the largest possible hammer you could apply to the problem. I can't see how it is a reasonable next step.
Brian,
You've been advocating Semantic Mediawiki, which would address a certain set of issues. However, I don't see how that would make the template / parser function syntax any less cumbersome (actually, adding semantic tags would probably make template code marginally more complicated). So, it would appear to me that the question of how to make templates more usable is separate from the question of whether to enable Semantic Mediawiki.
Did you have a different solution to the template / parser function usability issues? What existing tools might you suggest for making things like Template:Infobox [1] and Template:Cite_web [2] more accessible?
-Robert Rohde
[1] http://en.wikipedia.org/wiki/Template:Infobox [2] http://en.wikipedia.org/wiki/Template:Cite_web
2009/1/15 Brian Brian.Mingus@colorado.edu
Access to svn does not imply access to MediaWiki. Changes to MediaWiki have been almost entirely up to core developer discretion, and as I have demonstrated, 'consensus' has largely implied that they, and only they, thought the changes made Wikipedia better. The ideas are rarely presented to the community in a formal, well-designed demo format (as SMW has been, time and time again), and they are not evaluated for their usability. When a usability issue arises third party tools are not properly considered. Rather, they reinvent the wheel in an inferior manner.
Maybe I'm the only one thinking this...but if you see problems, why don't you try to get involved fixing them? Saying "we have problems, and you guys won't listen to us" isn't helpful, it's just complaining. If you have such an enlightened opinion as to the state of usability within MediaWiki, why not get involved and share said wisdom?
As many people have said earlier in this thread: the developers create no barriers to helping with the software. Us getting involved with all of the various wiki communities every time a change is proposed would be counter-productive--I for one don't want to get into an enwiki debate over the placement of a button on the preferences page. Just as you don't expect Commons to come and ask every wiki if they think an image should be deleted, don't expect the developers to come and ask the community for their blessing every time something needs changing.
-Chad
Chad,
What more would you like me to do, specifically? I have attended the conferences, I am aware of the MediaWiki development process and I am pointing towards high-quality code that meets every possible standard the community could reasonably ask. The most important of those standards is that the design was very well thought out and presented to the community over a period of years. At the same time many features which have come to be known as mainstays of Wikipedia have been snuck into the source code with far less effort.
In this discussion I have expressed feelings I have had for years, and now that there is money on the table, I believe it is time we got to the heart of the issue. I am pointing to the MediaWiki development process being broken as a core part of that issue.
I reject many of the excuses that have been presented. For example:
- Developers didn't have the time
When one considers the period of years that we are talking about this certainly appears to be false.
- Users were already doing this, so we just made it easier for them
This is patently false - that particular advanced users are doing something *does not imply consensus.* Before ParserFunctions were implemented consensus should have been checked. Specifically, I believe a design should have been presented at Wikimania so that everyone had a chance to evaluate them. My experience has been that the community looks down on templates. That these templates were hurting the servers is a great opportunity to ask the community what the best solution is. Was the best solution to ingrain templates into Wikipedia by making them even easier to use, or to remove them altogether in favor of some alternate technology? That discussion was simply not had. And ParserFunctions is just one such example.
- Show us the code - why don't you just fix the problem?
I do not consider writing code to be an impediment to design and process discussions. Furthermore, it would be suggested that I implement the code as an extension so that it might be ignored by the core developers along with every other extension. Lastly, the code has already been written. If it is not production-ready it is at the very least an excellent demo. This is also related to the 'Developers didn't have the time' issue. I fully believe that the core developers could reimplement various extensions in a scalable manner in relatively short order - they are, after all, crack php coders. The real problem is that they do not have the incentive. They have been given the keys and the community has not been given a voice. When a community member writes code to help MediaWiki, its put into the archives of extensions and quickly made obsolete by changes to core MediaWiki code.
On Thu, Jan 15, 2009 at 11:40 AM, Chad innocentkiller@gmail.com wrote:
2009/1/15 Brian Brian.Mingus@colorado.edu
Access to svn does not imply access to MediaWiki. Changes to MediaWiki
have
been almost entirely up to core developer discretion, and as I have demonstrated, 'consensus' has largely implied that they, and only they, thought the changes made Wikipedia better. The ideas are rarely presented to the community in a formal, well-designed demo format (as SMW has been,
time
and time again), and they are not evaluated for their usability. When a usability issue arises third party tools are not properly considered. Rather, they reinvent the wheel in an inferior manner.
Maybe I'm the only one thinking this...but if you see problems, why don't you try to get involved fixing them? Saying "we have problems, and you guys won't listen to us" isn't helpful, it's just complaining. If you have such an enlightened opinion as to the state of usability within MediaWiki, why not get involved and share said wisdom?
As many people have said earlier in this thread: the developers create no barriers to helping with the software. Us getting involved with all of the various wiki communities every time a change is proposed would be counter-productive--I for one don't want to get into an enwiki debate over the placement of a button on the preferences page. Just as you don't expect Commons to come and ask every wiki if they think an image should be deleted, don't expect the developers to come and ask the community for their blessing every time something needs changing.
-Chad _______________________________________________ foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
I have one more:
- Developers don't have to wait for community consensus before implementing changes. Developers don't have to wait for the community to vote on every line of code.
This is obviously not something I have suggested, so its not a very good argument against the process being broken. My argument applies largely to major changes in MediaWiki - and yes, major changes have been snuck into MediaWiki without consensus.
On Thu, Jan 15, 2009 at 12:19 PM, Brian Brian.Mingus@colorado.edu wrote:
Chad,
What more would you like me to do, specifically? I have attended the conferences, I am aware of the MediaWiki development process and I am pointing towards high-quality code that meets every possible standard the community could reasonably ask. The most important of those standards is that the design was very well thought out and presented to the community over a period of years. At the same time many features which have come to be known as mainstays of Wikipedia have been snuck into the source code with far less effort.
In this discussion I have expressed feelings I have had for years, and now that there is money on the table, I believe it is time we got to the heart of the issue. I am pointing to the MediaWiki development process being broken as a core part of that issue.
I reject many of the excuses that have been presented. For example:
- Developers didn't have the time
When one considers the period of years that we are talking about this certainly appears to be false.
- Users were already doing this, so we just made it easier for them
This is patently false - that particular advanced users are doing something *does not imply consensus.* Before ParserFunctions were implemented consensus should have been checked. Specifically, I believe a design should have been presented at Wikimania so that everyone had a chance to evaluate them. My experience has been that the community looks down on templates. That these templates were hurting the servers is a great opportunity to ask the community what the best solution is. Was the best solution to ingrain templates into Wikipedia by making them even easier to use, or to remove them altogether in favor of some alternate technology? That discussion was simply not had. And ParserFunctions is just one such example.
- Show us the code - why don't you just fix the problem?
I do not consider writing code to be an impediment to design and process discussions. Furthermore, it would be suggested that I implement the code as an extension so that it might be ignored by the core developers along with every other extension. Lastly, the code has already been written. If it is not production-ready it is at the very least an excellent demo. This is also related to the 'Developers didn't have the time' issue. I fully believe that the core developers could reimplement various extensions in a scalable manner in relatively short order - they are, after all, crack php coders. The real problem is that they do not have the incentive. They have been given the keys and the community has not been given a voice. When a community member writes code to help MediaWiki, its put into the archives of extensions and quickly made obsolete by changes to core MediaWiki code.
On Thu, Jan 15, 2009 at 11:40 AM, Chad innocentkiller@gmail.com wrote:
2009/1/15 Brian Brian.Mingus@colorado.edu
Access to svn does not imply access to MediaWiki. Changes to MediaWiki
have
been almost entirely up to core developer discretion, and as I have demonstrated, 'consensus' has largely implied that they, and only they, thought the changes made Wikipedia better. The ideas are rarely
presented
to the community in a formal, well-designed demo format (as SMW has been,
time
and time again), and they are not evaluated for their usability. When a usability issue arises third party tools are not properly considered. Rather, they reinvent the wheel in an inferior manner.
Maybe I'm the only one thinking this...but if you see problems, why don't you try to get involved fixing them? Saying "we have problems, and you guys won't listen to us" isn't helpful, it's just complaining. If you have such an enlightened opinion as to the state of usability within MediaWiki, why not get involved and share said wisdom?
As many people have said earlier in this thread: the developers create no barriers to helping with the software. Us getting involved with all of the various wiki communities every time a change is proposed would be counter-productive--I for one don't want to get into an enwiki debate over the placement of a button on the preferences page. Just as you don't expect Commons to come and ask every wiki if they think an image should be deleted, don't expect the developers to come and ask the community for their blessing every time something needs changing.
-Chad _______________________________________________ foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Brian wrote:
Chad,
What more would you like me to do, specifically? I have attended the conferences, I am aware of the MediaWiki development process and I am pointing towards high-quality code that meets every possible standard the community could reasonably ask. The most important of those standards is that the design was very well thought out and presented to the community over a period of years. At the same time many features which have come to be known as mainstays of Wikipedia have been snuck into the source code with far less effort.
In this discussion I have expressed feelings I have had for years, and now that there is money on the table, I believe it is time we got to the heart of the issue. I am pointing to the MediaWiki development process being broken as a core part of that issue.
I reject many of the excuses that have been presented. For example:
- Developers didn't have the time
When one considers the period of years that we are talking about this certainly appears to be false.
There's currently over 3000 open bugs with dozens being added very week, many have been sitting for years. We currently have 2 developers who are tasked with reviewing basically every change to MediaWiki core and extensions currently used by Wikimedia. They also have to review extensions requested by various projects as well as core features that are disabled pending further review. They also do server admin work and add new committers. Brion also oversees new technical hiring and Tim handles releases of new versions. They also manage to find time to some substanital coding work themselves.
Most of the rest of the developers are volunteers.
- Users were already doing this, so we just made it easier for them
This is patently false - that particular advanced users are doing something *does not imply consensus.* Before ParserFunctions were implemented consensus should have been checked. Specifically, I believe a design should have been presented at Wikimania so that everyone had a chance to evaluate them. My experience has been that the community looks down on templates. That these templates were hurting the servers is a great opportunity to ask the community what the best solution is. Was the best solution to ingrain templates into Wikipedia by making them even easier to use, or to remove them altogether in favor of some alternate technology? That discussion was simply not had. And ParserFunctions is just one such example.
400 people went to Wikimania 2006 (according to the Wikipedia article), I would hardly call that "everyone." If something is hurting the servers, we probably can't spend half a year or so having people submit proposals and vote on things. Template writers were using inefficient conditionals, so efficient conditionals were implemented. I can't say I've witnessed this general disdain for templates that you claim, is there some evidence for this?
- Show us the code - why don't you just fix the problem?
I do not consider writing code to be an impediment to design and process discussions. Furthermore, it would be suggested that I implement the code as an extension so that it might be ignored by the core developers along with every other extension. Lastly, the code has already been written. If it is not production-ready it is at the very least an excellent demo. This is also related to the 'Developers didn't have the time' issue. I fully believe that the core developers could reimplement various extensions in a scalable manner in relatively short order - they are, after all, crack php coders. The real problem is that they do not have the incentive. They have been given the keys and the community has not been given a voice. When a community member writes code to help MediaWiki, its put into the archives of extensions and quickly made obsolete by changes to core MediaWiki code.
Most extensions are ignored because they are written, /possibly/ documented on mediawiki.org or some random external site, then never mentioned or touched. If people want to get their extensions implemented, they should propose them to the community, then if the community wants it, the core developers will have an incentive to review and possibly improve them.
On 1/15/09 11:19 AM, Brian wrote:
Chad,
What more would you like me to do, specifically?
The first things that would help would be:
1) Stop looking to blame someone for past wrongs 2) Think of something that *would* actually help, and do that
When a discussion starts in a negative direction, and continues on and on and on in that direction, it ends up alienating the people you would need to be working with to accomplish your goal -- it all ends up sidetracked as a big ad-hominem debate about who's a bigger jerk and nothing actually productive gets done.
If you'd like to push for more active evaluation of SMW and introduction of either SMW or a refactored, slimmed down data storage/query system to testing and production use, I think that's great!
We've been looking at it for years and hoping we'd have a chance to poke at it some day; it's the beginning of a new year, projects are starting up, and this is a time that we're setting priorities.
But it would probably be better to focus on positives like thinking about what can be accomplished and getting interested parties excited about working together than to repeat over and over that you believe a past decision was wrong -- even if you're absolutely sure that it was.
-- brion
To be clear, I still consider the process to be broken, and I think it would help if there were more transparency there. More transparency means features do not get implemented just because someone with the keys thinks its a good idea, but because they spec'd the feature out formally and there was no doubt in anyones mind that they had given due process to finding consensus. I did not come to this thread attacking anyone, but rather a process. That certain people felt attacked is unfortunate - they have missed my point.
I am willing to put in some work though. What I plan to do is show the grant usability team (and mediawiki-l list) what editing Wikipedia articles might look like with SMW+SF. I haven't been able to find an adequate demo of this.
On Fri, Jan 16, 2009 at 11:31 AM, Brion Vibber brion@wikimedia.org wrote:
On 1/15/09 11:19 AM, Brian wrote:
Chad,
What more would you like me to do, specifically?
The first things that would help would be:
- Stop looking to blame someone for past wrongs
- Think of something that *would* actually help, and do that
When a discussion starts in a negative direction, and continues on and on and on in that direction, it ends up alienating the people you would need to be working with to accomplish your goal -- it all ends up sidetracked as a big ad-hominem debate about who's a bigger jerk and nothing actually productive gets done.
If you'd like to push for more active evaluation of SMW and introduction of either SMW or a refactored, slimmed down data storage/query system to testing and production use, I think that's great!
We've been looking at it for years and hoping we'd have a chance to poke at it some day; it's the beginning of a new year, projects are starting up, and this is a time that we're setting priorities.
But it would probably be better to focus on positives like thinking about what can be accomplished and getting interested parties excited about working together than to repeat over and over that you believe a past decision was wrong -- even if you're absolutely sure that it was.
-- brion
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Hello Brian,
thanks for all your insights, bashing and vocal support of your pet ideas.
I understand, that SMW is academically interesting concept (though there're contradicting ideas in academia too, suggesting natural language processing as an alternative, and this seems where currently research tries to go too), and it provides "usability" in niche cases (academic data crunching).
I fail to see why you associate SMW with general usability we're trying to think about? Is that something we simple mortals cannot understand, or are you simply out of touch from reality?
See, our project is special.
a) We have mass collaboration at large b) We end up having mass collaboration on individual articles and topics c) We have mega-mass readership d) We have massive scope and depth
And, oh well, we have to run software development to facilitate all that. As you may notice, the above list puts quite some huge constraints on what we can do. All our features end up being incremental, and even though in theory they are easy to revert, it is the mass collaboration that picks it up and moves to a stage where it is not that easy (and that happens everywhere, where lots of work is being done).
So, you are attacking templates, which have helped to deal with nearly everything we do (and are tiny, compared to overall content they facilitate), and were part of incremental development of the site and where editing community was going. Of course, there are ways to make some of our template management way better (template catalogues, more visual editing of parameters, less special characters for casual editors), but they generally are how we imagine and do information management.
Now, if you want to come up with academic attitudes, and start telling how ontology is important, and all the semantic meanings have to be highlighted, sure, go on, talk to community, they can do it without software support too - by normalizing templates, using templates for tagging relations, then use various external tools to build information overlays on top of that. Make us believe stuff like that has to be deployed by showing initiative in the communities, not by showing initiative by external parties.
Once it comes to actual software engineering, we have quite limited resources, and quite important mandate and cause. We have to make sure, that readers will be able to read, editors will be able to edit, and foundation will still be able to support the project. We may not always try to be exceptionally perfect (Tim does ;-), but that is because we do not want to be too stressed either.
So, when it comes to reader community, software is doing work for them. Some of readers end up engineering software to make it better. When it comes to editing community, software does the work for them. Some of editors end up engineering software to make it better.
Which community are you talking about?
BR,
Domas, that is an unfair characterization of my e-mails, which I do not believe you have read in full.
I have only advocated SMW + SF as a method of allowing users to extend the user interface. I am not interested in SMW for "academic data crunching." DBPedia is wonderful project for people with those interests. Mine are in modeling the brain, and I have in the past tried to predict wikipedia's quality. I don't know if you were at that talk, but I believe I remember you at that conference.
I don't care all that much about ontologies. I am not a semantic web guru. I have pointed out that these technologies provide a means for users to design the interfaces and that these technologies have been overlooked by developers. They do not provide usability in niche cases, which you would know had you read my e-mails. They potentially improve usability in 75% of articles by providing custom tailored user interfaces.
But had you read my e-mails you would also know that I do not advocate enabling the extensions unmodified, but giving them proper consideration and refactoring the minimalist set of features that would be useful into something that is scalable.
That is, I want to discuss the how the process of adding new features to MediaWiki is broken, and how this has been a specific example.
On Mon, Jan 19, 2009 at 7:31 AM, Domas Mituzas midom.lists@gmail.comwrote:
Hello Brian,
thanks for all your insights, bashing and vocal support of your pet ideas.
I understand, that SMW is academically interesting concept (though there're contradicting ideas in academia too, suggesting natural language processing as an alternative, and this seems where currently research tries to go too), and it provides "usability" in niche cases (academic data crunching).
I fail to see why you associate SMW with general usability we're trying to think about? Is that something we simple mortals cannot understand, or are you simply out of touch from reality?
See, our project is special.
a) We have mass collaboration at large b) We end up having mass collaboration on individual articles and topics c) We have mega-mass readership d) We have massive scope and depth
And, oh well, we have to run software development to facilitate all that. As you may notice, the above list puts quite some huge constraints on what we can do. All our features end up being incremental, and even though in theory they are easy to revert, it is the mass collaboration that picks it up and moves to a stage where it is not that easy (and that happens everywhere, where lots of work is being done).
So, you are attacking templates, which have helped to deal with nearly everything we do (and are tiny, compared to overall content they facilitate), and were part of incremental development of the site and where editing community was going. Of course, there are ways to make some of our template management way better (template catalogues, more visual editing of parameters, less special characters for casual editors), but they generally are how we imagine and do information management.
Now, if you want to come up with academic attitudes, and start telling how ontology is important, and all the semantic meanings have to be highlighted, sure, go on, talk to community, they can do it without software support too - by normalizing templates, using templates for tagging relations, then use various external tools to build information overlays on top of that. Make us believe stuff like that has to be deployed by showing initiative in the communities, not by showing initiative by external parties.
Once it comes to actual software engineering, we have quite limited resources, and quite important mandate and cause. We have to make sure, that readers will be able to read, editors will be able to edit, and foundation will still be able to support the project. We may not always try to be exceptionally perfect (Tim does ;-), but that is because we do not want to be too stressed either.
So, when it comes to reader community, software is doing work for them. Some of readers end up engineering software to make it better. When it comes to editing community, software does the work for them. Some of editors end up engineering software to make it better.
Which community are you talking about?
BR,
Domas Mituzas -- http://dammit.lt/ -- [[user:midom]]
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Hello!
Domas, that is an unfair characterization of my e-mails, which I do not believe you have read in full.
Oh, I did read your emails :-) I think they are unfair characterization of our development work, which you definitely do not understand in full.
But had you read my e-mails you would also know that I do not advocate enabling the extensions unmodified, but giving them proper consideration and refactoring the minimalist set of features that would be useful into something that is scalable.
That was happening, that will happen in future, that is happening now, at one pace or another, depending on various other issues.
That is, I want to discuss the how the process of adding new features to MediaWiki is broken, and how this has been a specific example.
You seem to be living in the idea of process, we are a bit on other side here, more concentrated on productivity. Indeed, in development team if at least one person agrees with you, you usually have green light - we manage to trust people, and we do not want to build stupid obstacles to stop the progress of the project and the platform.
Only very very bored people can be looking for formal processes to define formal specifications to find formal consensus.
This community, which takes quite a bit of effort to communicate with, effort which I have not seen from the development team: [ Jimmy quote included ]
You know, once upon a time, "full community consultation" was writing an email to wikipedia-l (thats where everyone subscribed :), and the three other guys would usually immediately agree with your modification and say "Jimbo, that is a great idea!" :) The general traffic is different, community is way bigger, developers are still same bunch of people, who have to accommodate everyone.
Now it is a bit more complicated, with lots of different communities out there, the communities themselves partitioned in multiple subcommunities, people having way different interests, and different time investment.
By telling you haven't seen any effort you are either blindly insulting people who are doing the work, or just prove the point, that whatever communication you're doing, you won't reach everyone (and certain people will come back later bitching - therefore, ultimate consensus is unachievable).
By starting this discussion in foundation-l, rather than wikitech-l (where don't seem to be participating in too many discussions), indicates you didn't try too much of communications effort yourself (though, heh, finally I managed to match your face to the name ;-)
Cheers,
This community, which takes quite a bit of effort to communicate with, effort which I have not seen from the development team:
Any changes to the software must be gradual and reversible. We need to make sure that any changes contribute positively to the community, as ultimately determined by everybody in Wikipedia, in full consultation with the community consensus. -- Jimbo Waleshttp://en.wikipedia.org/wiki/User:Jimbo_Wales
I've been told by a volunteer developer in that this quote is irrelevant. I wonder how many people believe that is true.
On Mon, Jan 19, 2009 at 7:31 AM, Domas Mituzas midom.lists@gmail.comwrote:
Hello Brian,
thanks for all your insights, bashing and vocal support of your pet ideas.
I understand, that SMW is academically interesting concept (though there're contradicting ideas in academia too, suggesting natural language processing as an alternative, and this seems where currently research tries to go too), and it provides "usability" in niche cases (academic data crunching).
I fail to see why you associate SMW with general usability we're trying to think about? Is that something we simple mortals cannot understand, or are you simply out of touch from reality?
See, our project is special.
a) We have mass collaboration at large b) We end up having mass collaboration on individual articles and topics c) We have mega-mass readership d) We have massive scope and depth
And, oh well, we have to run software development to facilitate all that. As you may notice, the above list puts quite some huge constraints on what we can do. All our features end up being incremental, and even though in theory they are easy to revert, it is the mass collaboration that picks it up and moves to a stage where it is not that easy (and that happens everywhere, where lots of work is being done).
So, you are attacking templates, which have helped to deal with nearly everything we do (and are tiny, compared to overall content they facilitate), and were part of incremental development of the site and where editing community was going. Of course, there are ways to make some of our template management way better (template catalogues, more visual editing of parameters, less special characters for casual editors), but they generally are how we imagine and do information management.
Now, if you want to come up with academic attitudes, and start telling how ontology is important, and all the semantic meanings have to be highlighted, sure, go on, talk to community, they can do it without software support too - by normalizing templates, using templates for tagging relations, then use various external tools to build information overlays on top of that. Make us believe stuff like that has to be deployed by showing initiative in the communities, not by showing initiative by external parties.
Once it comes to actual software engineering, we have quite limited resources, and quite important mandate and cause. We have to make sure, that readers will be able to read, editors will be able to edit, and foundation will still be able to support the project. We may not always try to be exceptionally perfect (Tim does ;-), but that is because we do not want to be too stressed either.
So, when it comes to reader community, software is doing work for them. Some of readers end up engineering software to make it better. When it comes to editing community, software does the work for them. Some of editors end up engineering software to make it better.
Which community are you talking about?
BR,
Domas Mituzas -- http://dammit.lt/ -- [[user:midom]]
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Hoi, I think it is correct. There is also nothing in there stopping Semantic MediaWiki from going live. Thanks, GerardM
2009/1/19 Brian Brian.Mingus@colorado.edu
This community, which takes quite a bit of effort to communicate with, effort which I have not seen from the development team:
Any changes to the software must be gradual and reversible. We need to
make
sure that any changes contribute positively to the community, as
ultimately
determined by everybody in Wikipedia, in full consultation with the community consensus. -- Jimbo Wales<
http://en.wikipedia.org/wiki/User:Jimbo_Wales%3E
I've been told by a volunteer developer in that this quote is irrelevant. I wonder how many people believe that is true.
On Mon, Jan 19, 2009 at 7:31 AM, Domas Mituzas <midom.lists@gmail.com
wrote:
Hello Brian,
thanks for all your insights, bashing and vocal support of your pet ideas.
I understand, that SMW is academically interesting concept (though there're contradicting ideas in academia too, suggesting natural language processing as an alternative, and this seems where currently research tries to go too), and it provides "usability" in niche cases (academic data crunching).
I fail to see why you associate SMW with general usability we're trying to think about? Is that something we simple mortals cannot understand, or are you simply out of touch from reality?
See, our project is special.
a) We have mass collaboration at large b) We end up having mass collaboration on individual articles and topics c) We have mega-mass readership d) We have massive scope and depth
And, oh well, we have to run software development to facilitate all that. As you may notice, the above list puts quite some huge constraints on what we can do. All our features end up being incremental, and even though in theory they are easy to revert, it is the mass collaboration that picks it up and moves to a stage where it is not that easy (and that happens everywhere, where lots of work is being done).
So, you are attacking templates, which have helped to deal with nearly everything we do (and are tiny, compared to overall content they facilitate), and were part of incremental development of the site and where editing community was going. Of course, there are ways to make some of our template management way better (template catalogues, more visual editing of parameters, less special characters for casual editors), but they generally are how we imagine and do information management.
Now, if you want to come up with academic attitudes, and start telling how ontology is important, and all the semantic meanings have to be highlighted, sure, go on, talk to community, they can do it without software support too - by normalizing templates, using templates for tagging relations, then use various external tools to build information overlays on top of that. Make us believe stuff like that has to be deployed by showing initiative in the communities, not by showing initiative by external parties.
Once it comes to actual software engineering, we have quite limited resources, and quite important mandate and cause. We have to make sure, that readers will be able to read, editors will be able to edit, and foundation will still be able to support the project. We may not always try to be exceptionally perfect (Tim does ;-), but that is because we do not want to be too stressed either.
So, when it comes to reader community, software is doing work for them. Some of readers end up engineering software to make it better. When it comes to editing community, software does the work for them. Some of editors end up engineering software to make it better.
Which community are you talking about?
BR,
Domas Mituzas -- http://dammit.lt/ -- [[user:midom]]
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Gerard, I'm not sure I understood the full context of your e-mail. There is only one thing stopping it from going live in my opinion - developer enthusiasm. I don't think thats how things are supposed to work.
On Mon, Jan 19, 2009 at 11:04 AM, Gerard Meijssen <gerard.meijssen@gmail.com
wrote:
Hoi, I think it is correct. There is also nothing in there stopping Semantic MediaWiki from going live. Thanks, GerardM
2009/1/19 Brian Brian.Mingus@colorado.edu
This community, which takes quite a bit of effort to communicate with, effort which I have not seen from the development team:
Any changes to the software must be gradual and reversible. We need to
make
sure that any changes contribute positively to the community, as
ultimately
determined by everybody in Wikipedia, in full consultation with the community consensus. -- Jimbo Wales<
http://en.wikipedia.org/wiki/User:Jimbo_Wales%3E
I've been told by a volunteer developer in that this quote is irrelevant.
I
wonder how many people believe that is true.
On Mon, Jan 19, 2009 at 7:31 AM, Domas Mituzas <midom.lists@gmail.com
wrote:
Hello Brian,
thanks for all your insights, bashing and vocal support of your pet ideas.
I understand, that SMW is academically interesting concept (though there're contradicting ideas in academia too, suggesting natural language processing as an alternative, and this seems where currently research tries to go too), and it provides "usability" in niche cases (academic data crunching).
I fail to see why you associate SMW with general usability we're trying to think about? Is that something we simple mortals cannot understand, or are you simply out of touch from reality?
See, our project is special.
a) We have mass collaboration at large b) We end up having mass collaboration on individual articles and
topics
c) We have mega-mass readership d) We have massive scope and depth
And, oh well, we have to run software development to facilitate all that. As you may notice, the above list puts quite some huge constraints on what we can do. All our features end up being incremental, and even though in theory they are easy to revert, it is the mass collaboration that picks it up and moves to a stage where it is not that easy (and that happens everywhere, where lots of work is being done).
So, you are attacking templates, which have helped to deal with nearly everything we do (and are tiny, compared to overall content they facilitate), and were part of incremental development of the site and where editing community was going. Of course, there are ways to make some of our template management way better (template catalogues, more visual editing of parameters, less special characters for casual editors), but they generally are how we imagine and do information management.
Now, if you want to come up with academic attitudes, and start telling how ontology is important, and all the semantic meanings have to be highlighted, sure, go on, talk to community, they can do it without software support too - by normalizing templates, using templates for tagging relations, then use various external tools to build information overlays on top of that. Make us believe stuff like that has to be deployed by showing initiative in the communities, not by showing initiative by external parties.
Once it comes to actual software engineering, we have quite limited resources, and quite important mandate and cause. We have to make sure, that readers will be able to read, editors will be able to edit, and foundation will still be able to support the project. We may not always try to be exceptionally perfect (Tim does ;-), but that is because we do not want to be too stressed either.
So, when it comes to reader community, software is doing work for them. Some of readers end up engineering software to make it better. When it comes to editing community, software does the work for them. Some of editors end up engineering software to make it better.
Which community are you talking about?
BR,
Domas Mituzas -- http://dammit.lt/ -- [[user:midom]]
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
There cannot be community consensus if the developers are unwilling to seriously consider alternate technological solutions to the ones they come up with. That is a key piece of the broken process -- developers of SMW have presented their ideas to the community, but whether or not there was ever consensus (or could have been consensus) has not mattered since the developers were unwilling to give the software serious look. In other words, there has been a chilling effect. Why bother going to as many people as you can for input if that input will make no difference?
On Mon, Jan 19, 2009 at 1:45 PM, Marcus Buck me@marcusbuck.org wrote:
Brian hett schreven:
There is only one thing stopping it from going live in my opinion -
developer
enthusiasm.
What about community consensus?
Marcus Buck
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Hoi. The Brion is not God. He and the other half gods, have sufficient enthusiasm for all the weird and wonderful stuff we throw at them. They even spend considerable effort on Semantic MediaWiki and Denny et al are the first to acknowledge this and to say that they provided valuable insights. It is not that they are not enthusiastic, it is that they have a life as well. They also have to take care of our first priority and that is to make sure that the show stays on the road.
To get a sense of perspective, the Stanton perspective will take some 890.000 dollar. A large amount indeed but it will not bring all those things that we would like. The WMF is slowly but surely ramping up the professional organisation. This organisation will never bring all the things that I want not what you want. and certainly not all the things that the communit(y./ies) says (it/they) want(s). Thanks, GerardM
2009/1/19 Brian Brian.Mingus@colorado.edu
Gerard, I'm not sure I understood the full context of your e-mail. There is only one thing stopping it from going live in my opinion - developer enthusiasm. I don't think thats how things are supposed to work.
On Mon, Jan 19, 2009 at 11:04 AM, Gerard Meijssen < gerard.meijssen@gmail.com
wrote:
Hoi, I think it is correct. There is also nothing in there stopping Semantic MediaWiki from going live. Thanks, GerardM
2009/1/19 Brian Brian.Mingus@colorado.edu
This community, which takes quite a bit of effort to communicate with, effort which I have not seen from the development team:
Any changes to the software must be gradual and reversible. We need
to
make
sure that any changes contribute positively to the community, as
ultimately
determined by everybody in Wikipedia, in full consultation with the community consensus. -- Jimbo Wales<
http://en.wikipedia.org/wiki/User:Jimbo_Wales%3E
I've been told by a volunteer developer in that this quote is
irrelevant.
I
wonder how many people believe that is true.
On Mon, Jan 19, 2009 at 7:31 AM, Domas Mituzas <midom.lists@gmail.com
wrote:
Hello Brian,
thanks for all your insights, bashing and vocal support of your pet ideas.
I understand, that SMW is academically interesting concept (though there're contradicting ideas in academia too, suggesting natural language processing as an alternative, and this seems where currently research tries to go too), and it provides "usability" in niche cases (academic data crunching).
I fail to see why you associate SMW with general usability we're trying to think about? Is that something we simple mortals cannot understand, or are you simply out of touch from reality?
See, our project is special.
a) We have mass collaboration at large b) We end up having mass collaboration on individual articles and
topics
c) We have mega-mass readership d) We have massive scope and depth
And, oh well, we have to run software development to facilitate all that. As you may notice, the above list puts quite some huge constraints on what we can do. All our features end up being incremental, and even though in theory they are easy to revert, it is the mass collaboration that picks it
up
and moves to a stage where it is not that easy (and that happens everywhere, where lots of work is being done).
So, you are attacking templates, which have helped to deal with
nearly
everything we do (and are tiny, compared to overall content they facilitate), and were part of incremental development of the site and where editing community was going. Of course, there are ways to make some of our template management way better (template catalogues, more visual editing of parameters, less special characters for casual editors), but they generally are how we imagine and do information management.
Now, if you want to come up with academic attitudes, and start
telling
how ontology is important, and all the semantic meanings have to be highlighted, sure, go on, talk to community, they can do it without software support too - by normalizing templates, using templates for tagging relations, then use various external tools to build information overlays on top of that. Make us believe stuff like that has to be deployed by showing initiative in the communities, not by showing initiative by external parties.
Once it comes to actual software engineering, we have quite limited resources, and quite important mandate and cause. We have to make sure, that readers will be able to read, editors will be able to edit, and foundation will still be able to support the project. We may not always try to be exceptionally perfect (Tim does ;-), but that is because we do not want to be too stressed either.
So, when it comes to reader community, software is doing work for them. Some of readers end up engineering software to make it better. When it comes to editing community, software does the work for them. Some of editors end up engineering software to make it better.
Which community are you talking about?
BR,
Domas Mituzas -- http://dammit.lt/ -- [[user:midom]]
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe:
https://lists.wikimedia.org/mailman/listinfo/foundation-l
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
On Wed, Jan 14, 2009 at 4:57 AM, Nikola Smolenski smolensk@eunet.yu wrote:
Another useful thing: after an article is parsed, write all the templates it uses and their parameters in the database. Even if at first it isn't possible to read this data on Wikipedia, Toolserver could do wonders with it :)
This should be fairly easy . . . the table would be absolutely ginormous, probably bigger than any table we currently have, but with zero reads (or near-zero, supposing it's made available through the API and such) that might not be the end of the world.
Tim,
As a qualified software engineer, I'm inclined to agree with you on a number of points. I apologize for my brevity, but I am somewhat restricted by the use of a mobile device.
After downloading, installing and maintaining a low-traffic Mediawiki setup, I think the experience can be improved dramatically. It's clear that a heavy amount of work has been done on improving processing speed and providing additional functionality, but in terms of a polished experience it doesn't match publishing tools like phpbb or Wordpress. Whether this is due to versatility requirements, or a lack of focus on this as a design requirement, I'm not sure.
On the subject of templates, they can be very much a black art. Templates are seldom commented to describe their function, import other templates without making it clear and place dependancies on extensions that may not be clear. Coupled with this, there's no peer review on templates as there is with articles. If a template performs the required function it is accepted and reused regardless of how clear or efficient the underlying code is. While suggesting that every high-use template be subjected to a formal code review is a somewhat silly idea, I think that you have three challenges on your hands.
The first is to make template writing more accessible, through the use of easily digestible starter documentation leading on to more complex examples
The second is to encourage good use of templates, both through code comments and supplementary document subpages. Note that the two are different - one tells you how it works while the other tells you how to use it.
The third is to examine template parsing itself, with a view to revisiting the language and perhaps performing a refresh if appropriate.
Hope all this helps.
Gazimoff
Sent from my iPhone
On 13 Jan 2009, at 16:13, Tim Starling tstarling@wikimedia.org wrote:
Brian wrote:
Thank for your answers.
ParserFunctions are my specific example of how the current development process is very, very broken, and out of touch with the community. According to Jimbo's user page (his bolded): "*Any changes to the software must be gradual and reversible.* We need to make sure that any changes contribute positively to the community, as ultimately determined by everybody in Wikipedia, in full consultation with the community consensus."
I believe that the introduction of ParserFunctions to MediaWiki was not done with community consensus and has led to an extremely fast devolution in wiki syntax. Further, the usability of Wikipedia has declined at a rate proportional to the adoption of parser functions.
The evolution of templates, and then ParserFunctions, was led by community demand and was widely encouraged by the community. I was concerned about the usability implications of ParserFunctions, but the community demonstrated its intent to ignore any usability concerns by implementing complex templates, very similar to the ones seen today, using the parameter default mechanism alone. Resistance to this trend seemed very weak.
The decline of usability in the template namespace has been driven by technically-minded editors who are proud of their ability to make use of an arcane and cryptic syntax to produce ever more complex feats of text processing. This is an editorial issue and I cannot accept responsibility for it.
However, I am aware that I enabled this process, by implementing the few simple features that they needed. I regret my role in it. That's one of the reasons why I've been resisting the constant community pressure to enable StringFunctions, which I believe will lead to compiler-like functionality implemented in the template namespace. Instead, I've been trying to steer development in the direction of a readable embedded programming language.
If you want a wiki with infoboxes (and I suppose I do since I wrote one of them in the pre-template era using an Excel VBA macro), then I suppose we need some form of template feature. The problem with present-day parser functions is that they are terribly ugly, excessively punctuated, dense to the point of unreadability, with very limited commenting and self-documentation.
I believe that the solution to this problem lies in borrowing concepts from software engineering, such as variables, functions, minimally parenthesized programming languages, libraries, objects, etc. I know that many template programmers cannot program in a traditional programming language, but I have a feeling they could if they wanted to. I certainly find PHP programming much easier than template programming, after a few years of familiarity with both.
I'm also aware that most (non-template) Wikipedia editors have no desire to learn how to program, and do not believe that it should be necessary in the course of editing articles. I think that with enough development time, a suitable platform in MediaWiki could connect these two types of editors. For example there could be an easy-to-use form-based template invocation generator, with forms written by the same technically minded editors who write arcane templates today. Citations could be inserted into articles by invoking a popup box and entering text into clearly labelled form fields.
From another post:
We do not even have a parser. I am sure you know that MediaWiki does not actually parse. It is 5000 lines worth of regexes, for the most part.
"Parser" is a convenient and short name for it.
I've reviewed all of the regexes, and I stand by the vast majority of them. The PCRE regular expression module is a versatile text scanning language, which is compiled to bytecode and executed in a VM, very much like PHP. It just so happens that for most text processing tasks where there is a choice between PHP or PCRE, PCRE is faster. In certain special cases, it's possible to gain extra performance by using primitive text scanning functions like strpos() which are implemented in C. Where this is possible, I have done so. But if you want to, say, find the first match from a list of strings in a single subject, searching from a given offset, then the fastest way to do it in standard PHP is a regex with the /S modifier.
In two cases, I found the available algorithms accessible from standard PHP to be inconveniently slow, so I wrote the FSS and wikidiff2 extensions in C and C++ respectively.
Perhaps, like so many computer science graduates, you are enamored with the taxonomy of formal grammars and the parsers that go with them. There are a number of problems with these traditional solutions.
Firstly, there are theoretical problems. The concept of a regular grammar is not versatile enough to describe languages such as XML, and not descriptive enough to allow unambiguous parse tree production from a language like wikitext. It's trivial to invent irregular grammars which can be nonetheless processed in linear time. My aims for wikitext, namely that it be easy for humans to write but fast to convert to HTML, do not coincide well with the taxonomy of formal grammars.
Secondly, there are practical problems. Past projects attempting to parse wikitext using flex/bison or similar schemes have failed to achieve the performance of the present parser, which is surprising because I didn't think I was setting the bar very high. You can bet that if I ever rewrote it in C++ myself, it would be much faster. The PHP compiler community is currently migrating away from LALR towards a regex-based parser called re2c, mostly for performance reasons.
Thirdly, there is the fact that certain phases of MediaWiki's parser are already very similar to the textbook parsers and can be analysed in those terms. The main difference is that our parser is better optimised. For example, the preprocessor acts like a recursive descent parser, but with a non-recursive frontend (using an internal stack), a caching phase, and a parse tree expansion phase with special-case recursive to iterative transformations to minimise stack depth.
Yet another post:
I don't believe a computer scientist would have a huge problem writing a proper parser. Are any of the core developers computer scientists?
Frankly, as an ex-physicist, I don't find the field of computer science particularly impressive, either in terms of academic rigour or practical applications. I think my time would be best spent working as a software engineer for a cause that I believe in, rather than going back to university and studying another socially-disconnected field.
-- Tim Starling
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
On Tue, Jan 13, 2009 at 1:29 PM, Gazimoff gazimoff@o2.co.uk wrote:
After downloading, installing and maintaining a low-traffic Mediawiki setup, I think the experience can be improved dramatically. It's clear that a heavy amount of work has been done on improving processing speed and providing additional functionality, but in terms of a polished experience it doesn't match publishing tools like phpbb or Wordpress. Whether this is due to versatility requirements, or a lack of focus on this as a design requirement, I'm not sure.
It's the latter. MediaWiki development primarily focuses on Wikipedia. Most developers are drawn from the Wikipedia or at least Wikimedia community, and the ones who do the most work tend to be paid by the Wikimedia Foundation (mainly to do things for Wikimedia projects). Things only useful for third parties therefore tend to be given less priority in practice.
Hello Tim,
I definitively like to see things develop in the direction you described. That would make the templates more useful, either for the editors but also for the readers and other developers who can datamine the Wikimedia-project entries. And at some point we must simply ignore the desire of the template-developers and go in a direction that would be beneficial for the majority of the editors and users.
Ting
On Fri, Jan 9, 2009 at 5:00 PM, Brian Brian.Mingus@colorado.edu wrote:
Why are so few community-developed mediawiki extensions used by the Foundation?
Are people asking for them? Are there bugs open asking for review? Are there problems with the current code? Does it scale to WMF level? Things like this need answering. We can't just go enable 360+ extensions in SVN (and heaven knows how many more floating around the internet) site-wide without careful consideration.
Why do developers have such priviledged access to the source code, and the community such little input?
Because allowing the software to be edited on wiki would make a nightmare to review and would delay scap by weeks on end. As pointed out by several other people, Brion and Tim are pretty liberal with giving out commit access. Pretty much just earn the trust of your fellow developers, hang around #mediawiki, submit patches, that sort of thing. Just as enwiki, commons, frwikisource all have their respective communities, MediaWiki has its own :) It's all about getting involved.
Why must the community 'vote' on extensions such as Semantic MediaWiki, and yet the developers can implement any feature they like, any way they like it?
As a general rule of thumb, the "votes" in bugzilla are largely ignored, so "voting" for bugs isn't really helping anyone. However, before things are implemented on wikis, the core developers (those who will enable your extension) want to make sure there's a consensus for it. Just respecting the wiki principles by which we work.
I also take argument with your second statement, that developers can implement anything they like and get away with it. This is most certainly not true. Ask any non-core developer how many times Brion or Tim has reverted their changes, and I'm sure most of us will raise at least one hand.
Why does the Foundation need 1 million for usability when amazing tools continue to be ignored and untested?
I believe Erik (and others?) have said that reviewing existing tools is a goal for this grant. No point in reinventing the wheel if we don't need to (unless the wheel is a square shape, in which case it might need some fixing :)
Why has the Foundation gone ahead and approved the hire of several employees for usability design, when the community has had almost zero input into what that design should be?
Because perhaps they want to start making progress, rather than spending a year debating it and making no solid progress. There are _known_ issues with MediaWiki's usability, and part of this new team's job is to collect all the known issues, identify more, and begin fixing them. I don't see how a debate on foundation-l (or elsewhere) will help them in this goal.
Why is this tool not being tested on Wikipedia, right now?
http://wiki.ontoprise.com/ontoprisewiki/index.php/Image:Advanced_ontology_br...
I go back to my first question: has anyone asked for it?
I think your original post sets forth an incorrect assumption. "Why is the software out of the reach of the community?" It most certainly is not, as anyone is welcome to join in and pitch their $0.02. However, we're not going to go around and advertise a debate on all the wiki village pumps about some new feature. Experience has shown us that if the wikis do not agree with a particular change, they make sure to let us know in no uncertain terms :)
-Chad
wikimedia-l@lists.wikimedia.org