The Metavid project was partially inspired by my SOC participation
as a student for MediaWiki in SOC '06. While Metavid's efforts will likely
not directly benefit Wikipedia until 2009, Metavid pioneered a lot of
the technologies that are making their way into the software: for
example, the sequencer efforts currently under way with Kaltura.
I mentored two students this summer, one for Metavid under Wikimedia's
SOC. While it's true that students coming fresh to the code need a
lot of hand-holding, and it would have been more productive and easier to
just "do it myself", I see it more as an effort at community building
and a future investment in getting people to think about participating
in open source.
The other project, mentored for Xiph, was mostly a failure, but the
project concept has been picked up by a community member and is now
ready for integration testing with the MediaWiki upload system. :)
I think it's fine for SOC projects to "fail" at getting code onto
production servers while succeeding at testing the waters in high-risk
areas and planting seeds for improving the free software ecosystem.
I think Wikimedia should continue to participate, and can perhaps avoid high
risk by not committing core developers as mentors. This should be
more feasible now that the technical staff is expanding beyond a few
extremely thinly stretched people.
peace,
--michael
Chad wrote:
On Thu, Dec 11, 2008 at 11:37 AM, Gerard Meijssen
<gerard.meijssen(a)gmail.com> wrote:
Hoi,
Last year Nikerabbit was enrolled in a Finnish Summer of Code project. He
did a ton of great work for Betawiki as part of that project. The LiquidThreads
project was a GSOC project; it is used by the WikiEducator project,
and as such I would rate it successful.
That's great, but neither Betawiki nor WikiEducator is the WMF, nor
are those functionalities being used (beyond the localizations provided
from BW) within the WMF. That's precisely what this is about: making
use of the GSOC style of development for MediaWiki itself.
When you look at our own bigger projects, SUL took a crazy amount of time to
materialise, and we are still not able to produce predictable data dumps. When
you look at commercial projects, at least 50% of such projects fail to meet
expectations. The notion that classical "in the office" projects do better
is not one I share.
SUL required a massive amount of work to coordinate, not to mention the
task of coming up with a model that not only works, but is scalable to the
WMF's needs and actually makes sense. Good data models are essential.
Dumps are another thing that requires careful work and coordination; poor
execution leads to poor results. It's bad enough having someone e-mail the
list(s) once every month or two saying "new dumps please," but imagine if
we provided dumps that were just inherently bad.
I fail to see where you get this claim that 50% of commercial projects fail.
Without a reliable source, I have to assume you've just made up this number
and have never worked in software development.
If we are to do a proper job with Summer of Code projects, obviously our
existing developers are the most likely to do the better job. Nikerabbit's
project is a case in point. If there are observations as to why such a project
does not work out as well as we hope, we should address those issues. The
most important thing achieved with a Summer of Code project is not only the
software but also the experience given to what we hope will be a developer
who stays with our project afterwards.
Thanks,
GerardM
Tim never said that GSOC is a bad model, nor did he say that it produces bad
results. He simply said that, based on previous experiences, it hasn't worked
_for us_. And it's true. Looking at last summer's projects, I don't see a
lot of results:
1) HTMLDiff - ended up as a highly experimental, still somewhat buggy,
disabled-by-default feature that was an i18n mess (and still might not be
100% fixed)
2) Category Redirects - whatever happened to this? Was this ever merged
from branch to core? The branch hasn't been touched since August, at the
least.
A model failing in one use case doesn't indicate a failed model overall; it
simply
means it doesn't work in this situation. :)
-Chad
_______________________________________________
foundation-l mailing list
foundation-l(a)lists.wikimedia.org
Unsubscribe:
https://lists.wikimedia.org/mailman/listinfo/foundation-l