I whipped together a PHP script this afternoon that lets you make
arbitrary queries for Gerrit changesets, and then perform bulk actions on
the resulting changesets from the command line. Currently it only supports
doing a bulk 'submit' (with approve +2 and verify +1), but it won't take
much to add other actions (abandon, verify, approve, etc.). Take a
look and feel free to comment/make changes:
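For flavor, here is a minimal Python sketch of the same idea (not the
actual script), assuming Gerrit's stock SSH commands 'gerrit query' and
'gerrit review' (older Gerrit releases call the latter 'gerrit approve')
and a hypothetical host:

    import json
    import subprocess
    import sys

    # Hypothetical Gerrit host/port; adjust for your instance.
    GERRIT = ['ssh', '-p', '29418', 'gerrit.wikimedia.org', 'gerrit']

    def query_changes(query):
        # 'gerrit query' emits one JSON object per line, plus a stats record.
        out = subprocess.check_output(
            GERRIT + ['query', '--format=JSON', '--current-patch-set', query],
            text=True)
        for line in out.splitlines():
            change = json.loads(line)
            if 'currentPatchSet' in change:  # skip the trailing stats record
                yield change

    def bulk_submit(query):
        # Approve (+2), verify (+1) and submit every matching change.
        for change in query_changes(query):
            rev = change['currentPatchSet']['revision']
            subprocess.check_call(
                GERRIT + ['review', '--code-review', '+2',
                          '--verified', '+1', '--submit', rev])

    if __name__ == '__main__':
        bulk_submit(sys.argv[1])  # e.g. 'status:open project:test branch:master'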
Software Engineer, Mobile
Whenever an action happens in Gerrit, a notification is sent to some IRC
channels. Previously we had to map each project to an IRC channel,
falling back to #mediawiki.
I have enhanced the python script to support wildcards when filtering on
Gerrit project names. Leslie Carr deployed the changes a few minutes
ago, and now:
- analytics and integrations are sent to #wikimedia-dev
- operations are sent to #wikimedia-operations
- labs to #wikimedia-labs
- mediawiki and mediawiki extensions to #mediawiki
The default is still #mediawiki.
If you need more specific rules, tweak the filenames dictionary in
templates/gerrit/hookconfig.py.erb. As an example, we might want to send
MobileFrontend notifications to #wikimedia-mobile.
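For illustration, the shape of the wildcard mapping as a Python sketch
(the dictionary name and exact patterns here are assumptions; see the real
file for the actual rules):

    import fnmatch

    channels = {
        'analytics/*': '#wikimedia-dev',
        'integration/*': '#wikimedia-dev',
        'operations/*': '#wikimedia-operations',
        'labs/*': '#wikimedia-labs',
        'mediawiki/*': '#mediawiki',
    }

    def channel_for(project):
        # First wildcard pattern that matches the project name wins.
        for pattern, channel in channels.items():
            if fnmatch.fnmatch(project, pattern):
                return channel
        return '#mediawiki'  # default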
Antoine "hashar" Musso
Thought this would be interesting to wikitech-l.
---------- Forwarded message ----------
From: Yuvi Panda <yuvipanda(a)gmail.com>
Date: Tue, Mar 27, 2012 at 1:47 PM
Subject: Chennai Unofficial Wikimedia Hackathon Report
To: "Discussion list on Indian language projects of Wikimedia."
The Chennai Unofficial Wikimedia Hackathon Report
TL;DR: 13 completed hacks, including 2 core MediaWiki patches, 3
tawiki userscript updates and 2 new deployed tools. It was super
awesome and super productive!
The 'Unofficial' Chennai Wikimedia Hackathon
happened on Saturday, March 17, 2012 at the Thoughtworks office in
Chennai. It was a one-day, 8-hour event focused on getting people
together to hack on stuff related to all Wikimedia projects - not just Wikipedia.
The event started with us sailing past security reasonably easily, and
getting set up with internet without a glitch. People trickled in and
soon enough we had 21 people in there. Since this was a pure
hackathon, there were no explicit tutorials or presentations. As
people came in, we asked them what technologies/fields they were
familiar with, and picked out an idea for them to work on from the
Ideas List (http://www.mediawiki.org/wiki/Chennai_Hackathon_March_2012/Ideas).
This took care of the biggest problem at hackathons with newcomers:
half the day gets spent figuring out what to work on, and the idea
eventually picked is often completely outside the hackers' domain of
expertise. Working with each person to quickly pick an idea - within about
5 minutes - that they could complete in the day fixed this problem and
made sure people could concentrate on coding for the rest of the day.
People started hacking, and just before lunch we made people come up
and tell us what they were working on. We then broke for lunch and the
usual socialization happened over McDonalds burgers and Saravana
Bhavan dosas. Hacking resumed soon after, and people were
concentrating on getting their hacks done before the demo time. And we
did have quite a few demos!
Here's a short description of each of the demos, written purely in the
order in which they were presented:
1. Wikiquotes via SMS
By: @MadhuVishy and @YesKarthik
What it does:
Send a person's name to a particular number, and you'll keep getting
back quotes from that person. Works in a semi-automated fashion similar
to the DYKBot. Built on AppEngine + Python.
Deployed live! Send SMS '@wikiquote Gandhi' to 9243342000 to test it
out! Has limited data right now, however.
2. API to Rotate Images (Mediawiki Core Patch)
What it does:
Adds an API method that can arbitrarily rotate images. Think of this
as a first step towards being able to rotate any image on Commons with a
single button instantly, without having to wait for a bot. The patch was
attached to https://bugzilla.wikimedia.org/33186.
It was reviewed that very day (Thanks Reedy!). Vivek is now
figuring out how to modify his patch so that it will be accepted into
MediaWiki core. Vivek is also applying to work on MediaWiki for
GSoC, so we will hopefully get a long-term contributor :)
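(For the curious: MediaWiki today has an action=imagerotate module doing
exactly this; that it descends from this patch is my inference from the
bug. A hedged Python sketch of calling such an API, with the login/token
plumbing omitted:)

    import requests

    API = 'https://commons.wikimedia.org/w/api.php'
    session = requests.Session()
    # ...log in and fetch a CSRF token first (omitted for brevity)...
    token = '...'

    # Rotate a file clockwise; rotation must be 90, 180 or 270 degrees.
    session.post(API, data={
        'action': 'imagerotate',
        'titles': 'File:Example.jpg',
        'rotation': '90',
        'token': token,
        'format': 'json',
    })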
3. Find list of unique Tamil words in tawiki
By: Shrinivasan T
What it does:
It took the entire Tamil Wikipedia dump and extracted all unique words
out of it - about 1.3 million unique Tamil words. This has
multiple applications, including a Tamil spell checker.
Code and the dataset live on github:
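The basic approach can be sketched in a few lines of Python (the real
code is in the repository; the dump filename here is an assumption):

    import re

    # Tamil script occupies the Unicode block U+0B80-U+0BFF.
    TAMIL_WORD = re.compile(r'[\u0B80-\u0BFF]+')

    words = set()
    with open('tawiki-latest-pages-articles.xml', encoding='utf-8') as dump:
        for line in dump:
            words.update(TAMIL_WORD.findall(line))

    with open('tamil-words.txt', 'w', encoding='utf-8') as out:
        out.write('\n'.join(sorted(words)))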
4. Program to help record pronunciations for words in tawikt
What it does:
Simple python program that gives you a word, asks you to pronounce it
and then uploads the recording to Commons for use in Wiktionary. Makes
the process much more streamlined and faster.
Code available at:
Preliminary testing with his friends shows that it's easy to record 500
words in half an hour. He is currently blocked on figuring out a way to
properly upload to Commons.
5. Translation of Gadgets/UserScripts to tawiki
By: SuryaPrakash [[:ta:பயனர்:Surya_Prakash.S.A.]]
What he did:
Surya spent the day translating two gadgets into Tamil, so they can be
used on tawiki. The first is the 'Prove It' reference addition tool
(http://ta.wikipedia.org/wiki/Mediawiki:Gadget-ProveIt.js). The second
is the 'Speed Reader' extension that formats content into
multiple columns for faster scanning
(http://ta.wikipedia.org/wiki/Mediawiki:Gadget-TwoColumn.js). Last I
checked, these are available for anyone with only Tamil knowledge to
use, so yay!
(He also tried to localize Twinkle for Tamil, but couldn't because of
issues with the laptop he was using.)
6. Structured database search over Wikipedia
What it does:
A tool that combines DBpedia and Wikipedia to let you search in a
semantic way. We almost descended into madness with people
searching for movies with Kamal and movies with Rajni (both provided
accurate results, btw). An amazing search tool that made it super easy to
query information in a natural way.
The code is available at
It would be awesome to see this deployed somewhere, so it would be great
if the community could come up with specific ideas on how to turn this
into a polished tool.
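The hack's actual stack isn't described, but the flavor of query it
answers can be sketched as SPARQL against the public DBpedia endpoint
(illustrative only):

    import requests

    QUERY = '''
    SELECT ?film WHERE {
      ?film a dbo:Film ;
            dbo:starring dbr:Kamal_Haasan .
    } LIMIT 20
    '''

    # dbo:/dbr: prefixes are predeclared at the DBpedia endpoint.
    data = requests.get('https://dbpedia.org/sparql', params={
        'query': QUERY,
        'format': 'application/sparql-results+json',
    }).json()
    for row in data['results']['bindings']:
        print(row['film']['value'])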
7. Photo upload to commons by Email
What it does:
He started building a tool that lets you email a particular
address with pictures + metadata in the body of the message, and it will
be uploaded to Commons. This is for the benefit of people with older
outdated phones *cough*Logic*cough* who would like to use their
phone's camera to contribute to Commons, but cannot due to technical
limitations.
He wasn't able to get it to work during the hackathon - too many
technical issues cropped up. However, he's *very* definitely
interested in setting it up, and has made progress towards it. I
hope someone from the community (perhaps people doing WLM?) will get
in touch with him to see if this tool could be developed
further with a specific goal in mind.
8. Lightweight offline Wiki reader
What it does:
There is a project called qvido
(http://projects.qi-hardware.com/index.php/p/qvido/) which was a
'lightweight' offline Wiki reader (compared to Kiwix, which is
heavier). It has been abandoned for a while, however. Feroze took the
time to revive the project, figure out how to build it (and wrote
build instructions!), and also fixed a bug so that it can be used to
demo offline Wiki navigation. He demoed it showing
the Odiya Wikipedia dump offline, with working link navigation.
There exists a git repo (https://github.com/feroze/qvido) with the
code + the build instructions. I hope that people interested in
offline projects check this out and see if it can be made useful, and
take this forward.
9. Patches to AssessmentBar
What it does:
AssessmentBar is a small widget/tool I'm building to make WP India
assessments easier (at the request of User:AshLin; stay tuned for an
announcement in the next few days). Sathya spent time making the
backend for it more scalable, so the same server can support multiple
projects and concurrent users in a better way. Before that, he was
contemplating setting up a hidden Tor node for Wikipedia (he's a Tor
core contributor) and playing with data visualizations of WP India data.
There is a pull request (https://github.com/yuvipanda/MadamHut/pull/2)
that I need to merge :)
10. Parsing Movie data into a database
By: Arunmozhi (Tecoholic) and Lavanya
What it does:
It scrapes the infoboxes of all movies in whatever category you give
it and stores them in a database. This is harder than it sounds,
because parsing wikitext is similar to beating yourself up repeatedly
in the head with a large trout. They managed to figure out a nice way
to extract information from all Indian movie pages, and put it in a
database for easy programmatic access later.
I've asked them to put the code up publicly somewhere, and since I
believe Tecoholic is in this mailing list, he'll reply with the link
:) These kinds of data scraping can be used to build very nice tools
that show off how much information Wikipedia has, and perhaps also
help people contribute back by editing information for their favorite
movies. I hope the community comes up with a nice idea to utilize
this, and takes this project forward to its ultimate destiny: A super
sexy IMDB type site for Indian Movies with data sourced from Wikipedia
(I can dream :D)
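For a taste of what such scraping involves, a sketch using the
mwparserfromhell wikitext parser (an assumption on my part - their hack
may parse things very differently):

    import mwparserfromhell
    import requests

    API = 'https://en.wikipedia.org/w/api.php'

    def infobox_fields(title):
        # Fetch the page's raw wikitext via the API.
        data = requests.get(API, params={
            'action': 'query', 'prop': 'revisions', 'rvprop': 'content',
            'rvslots': 'main', 'titles': title,
            'format': 'json', 'formatversion': '2',
        }).json()
        text = data['query']['pages'][0]['revisions'][0]['slots']['main']['content']
        # Pull name/value pairs out of the first infobox template.
        for tpl in mwparserfromhell.parse(text).filter_templates():
            if tpl.name.strip().lower().startswith('infobox'):
                return {str(p.name).strip(): str(p.value).strip()
                        for p in tpl.params}
        return {}

    print(infobox_fields('Enthiran'))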
11. Random Good WP India article tool
By: Shakti and Sharath
What it does:
It is a simple tool that shows you one B, A, GA or FA article every
time you go there. The idea is to provide a usable service for people
who want to accumulate lots of knowledge by randomly reading stuff,
but only want good stuff (so stubs, etc are filtered out (unlike
Special:Random)). I'll also note that neither of them had worked with
any web service, JSON, or the mediawiki API before the hackathon,
yet they were able to build and deploy this tool within the
day. /me gives a virtual imaginary barnstar to both of them
It is currently deployed at http://srik.me/WPIndia. Every time you hit
that link, you'll get an article about India that the community has
deemed 'good'. The source code is available
(https://github.com/saki92/category-based-search). They are eager to
do more hacks such as these, and I'm hoping that the community will
find enough cool technical things for these enthusiastic volunteers to work on!
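The underlying API call is simple enough to sketch (the category name
here is an assumption):

    import random
    import requests

    API = 'https://en.wikipedia.org/w/api.php'

    def random_good_article(category='Category:GA-Class India articles'):
        members, params = [], {
            'action': 'query', 'list': 'categorymembers',
            'cmtitle': category, 'cmlimit': 'max', 'format': 'json',
        }
        while True:
            data = requests.get(API, params=params).json()
            members += [m['title'] for m in data['query']['categorymembers']]
            if 'continue' not in data:
                return random.choice(members)
            params.update(data['continue'])  # page through large categories

    print(random_good_article())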
12. Fix bugs on tawiki ShortURL gadget
What it does:
The short URL service used on tawiki (tawp.in) is exposed in the wiki
via a gadget. It is not the most user-friendly gadget - you need to
right-click and select copy. Bharath looked for a solution by which
you could click it and it would copy to the clipboard, but did not
find any that would work without Flash. He therefore abandoned that and
started figuring out easier ways of making that happen. He also fixed
several bugs in the implementation of the gadget, and I expect the fixes
to be deployed soonish. Thanks to SrikanthLogic for helping him through this.
Code is available at
He's still fixing things on the script. If the community needs people
to come fix up their user scripts/gadgets, Bharath would be a willing
(and awesome!) candidate!
13. Add 'My Uploads' to top bar along with My Contributions, etc
(Mediawiki Core Patch)
What it does:
Not satisfied with being the organizer of the hackathon, Srikanth
wanted to flex his programming muscles and spent time fixing a bug in
core MediaWiki (https://bugzilla.wikimedia.org/show_bug.cgi?id=30915).
He spent a while digging around for the proper way to do this, and managed
to make a proper patch!
It has been submitted to Gerrit (I'm currently unable to find a link)
and should be merged in soon. Yay!
Near misses:
1. Pronunciation recorder for Android
By: Russel Nickson
What it was supposed to do:
Exactly like Shrini's tool to record word pronunciations and upload them
to Commons, but written for Android so people could add pronunciations
on the go.
Code is available at https://github.com/russelnickson/pronouncer. He
ran into technical issues with the Android setup (it stops working
completely if you look at it the wrong way), and was unable to
complete this. I think this would still be a very useful tool, and
hope someone from the community steps up to work with Russel and get it finished.
2. Wiktionary cross lingual statistics
What it was supposed to do:
A tool that generates statistics about how many
words overlap between all Indic languages in Wiktionary (as measured
by interwiki links).
The code has been written (I've requested the author to put it up
publicly, and will update the list when he does). It takes a long
time to run, however. So validation from the community that such stats
would be useful would, IMO, definitely give Pranav the impetus to finish
it up and show us the pretty graphs :)
So, in all, 13 demos were produced (+ 2 near misses). I think we can
call this one a success, no? :)
Where do we go from here? Random thoughts:
1. Geek retention - this is reasonably easy. If we keep feeding
hackers interesting problems that affect a lot of people, they'll keep
helping us out. Is it possible to have some sort of a 'tools required'
or 'hacks required' or 'gadgets required' page/queue someplace where
we can always direct hackers looking for interesting problems to? IMO
Wikipedia is full of interesting technical problems, so this *should* be doable.
2. Follow ups - this time, I am able to do this personally (small
enough group). Clearly this will not scale. Do we have ideas/methods
for following up with these people so that they stay with us?
3. More of these? This was pretty much a 'zero cost' event - stickers
were the only 'cost'. A lot of places around the country would love to
have their space used for a hackathon of sorts. Should we do more of
these kinds of 'Unofficial' hackathons?
Thanks due (in random order)
1. Thoughtworks/BalajiDamodaran: He graciously hosted us at
Thoughtworks. The biggest challenge for any hackathon is to find a
nice place that understands what hackathons are, and provides what is
considered the lifeblood of a hackathon - working WiFi. Balaji
(@openbala) was incredibly awesome, and this entire thing wouldn't have
been possible at all without him and ThoughtWorks.
2. Dorai Thodla: He helped popularize the hackathon among the Chennai
Geeks crowd, and acted as a sounding board at various important times.
He also connected us with @openbala and enabled us to get the venue.
3. Srikanth Lakshmanan: The hackathon was his idea, and he made sure
it was executed in a nice way. He was the official 'organizer', and
made sure that all logistics were taken care of. Once the event
started, he was very helpful in helping people technically and in
picking up ideas, while also hacking on his own patch. This event was,
in essence, organized and run by him. He took an overnight trip from
Hyderabad straight out of office just for this. Thanks for making this happen!
4. Shrini (aka the relentless forwarder): This event wouldn't have
been as much a success without him either. Evangelism across multiple
lists, adding a lot of ideas that could be done, helping the people
there out technically at all times and writing two really good hacks -
Thank you! I'm glad we get to keep you :)
5. Subhashish Panighrahi: For sending us stickers :D (And everyone else
involved in that logistical process too!)
Most of all, this event was a success because of the quality and
dedication of the people who turned up, giving up their Saturdays.
Hope everyone who turned up had a nice time :) I am personally in
touch with most of them, and I also have their email address, phone
number *and* permission to contact them again. If anyone here thinks
that they liked one of the hacks and want to take it further, please
contact me (User:Yuvipanda on Mediawiki.org or yuvipanda(a)gmail.com)
and I'll put you in touch. If there is a more accepted,
standard way of handling this type of private information, please let
me know as well!
Yuvi Panda
Next week the Wikidata team will be complete and start working at full
speed. Finally! \o/ I will be holding the first round of Wikidata
office hours next week. You're all invited to ask questions and
discuss. If you can't attend there will be logs.
* 4 April, German, in #wikimedia-wikidata on freenode, 4:30pm UTC
(check the conversion for different time zones)
* 5 April, English, in #wikimedia-wikidata on freenode, 4:30pm UTC
(check the conversion for different time zones)
I plan to offer these regularly. My (virtual) door is open outside
these office hours as well of course ;-)
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Wikidata
Wikimedia Deutschland e.V.
Eisenacher Straße 2
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as charitable
by the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.
Good $localtime from translatewiki.net. I have been working with Chad
and Antoine to resume the daily translation updates from
translatewiki.net after the partial git migration of MediaWiki.
We now have scripts that enable twn staff to
* create clones of mediawiki core
(with four branches: master, 1.19, 1.18, 1.17)
* create clones of all mediawiki extensions
(both from svn and git, git taking precedence over svn)
* update those to latest versions
* export translations
* commit exported translations
repo [create|update|export|commit] [mediawiki|mediawiki-extensions|...]
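For example, a full daily run for extensions might look like this
(hypothetical invocation, building on the synopsis above):
repo update mediawiki-extensions
repo export mediawiki-extensions
repo commit mediawiki-extensions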
The commits for git will use the l10n-bot commit account. Commits for
svn will continue to happen with our personal accounts. Since, as you
know, every extension is now a separate git repository, there will be
LOTS of commits per day instead of just one.
There are a few remaining issues:
* automatic merging of translation commits [BLOCKER]
* not spamming your inbox with translation update commits
* pretty formatting of core message files [BLOCKER]
Now the last issue is something I want your opinion on:
Is it an issue if we drop pretty formatting of non-English message files?
If it is an issue, I request your help in making the
maintenance/language/rebuildLanguage.php script support an arbitrary
location instead of just the current wiki installation. If it is not
an issue, the first diffs will be big because of the change, but
remaining ones will be small as usual.
Summary: Chris McMahon is following up on improving Labs as a testing
environment, improving continuous integration, and learning from how
the mobile team gets and uses bug reports. Alolita and I are gathering data
about how our engineering teams currently take in bug reports from lots
of communication media.
A few of us just had a chat about some upcoming efforts to engage our
community in systematic testing (QA) efforts -- see
and http://www.mediawiki.org/wiki/Mobile_QA/Spec for the
conversation-starter. I figured some folks on this list would be
interested in some plans, ideas, and questions coming out of that. A few highlights:
* Chris thinks his biggest priority, to improve MediaWiki's testability
overall, is to ensure Labs is a stable, robust, and consistent
environment so it's reasonable to point a firehose of testing at
the beta deployment cluster <http://labs.wikimedia.beta.wmflabs.org/>.
Unless we ensure a consistent and clean environment, most of the bug
reports will be the result of environment problems instead of the
MediaWiki bugs we want to find. So that's what he's focusing most of
his time on. (This limits the time he has available for manual testing
of individual features, but he also makes time to work on editor
engagement, Timed Media Handler testing, and a limited number of other projects.)
Of course automated testing is also key, so Chris is working with
Antoine on deploying from Jenkins to the beta cluster -- see
* Our mobile team has a pretty stable, if time-intensive, system set up
to get its apps tested. They release a new version of their app every
2 weeks. Several days beforehand, they release a beta. They email
mobile-l, tweet, etc. to raise awareness, but the best way to actually
get testers is to personalize the boilerplate email and send it
personally to each of about 20 people who've shown interest in testing.
It takes a few minutes to send those emails, and then 12-15 hours to
respond to the feedback and dig into problems. Feedback and
conversation usually comes from about 5 people out of that 20, and
happens in IRC (so, it's not as easy to delegate, do asynchronously,
point other people to, etc.). The mobile team also tries to cover some
other feedback channels:
Yuvi works on this 100% every release
cycle, so it doesn't scale -- they couldn't really handle more testers
if they came.
Chris likes that the mobile team is clear and specific in directing its
testers on what to test, and has reasonable constraints on time, number
of testers, and test environment. For the next mobile release cycle
(starting on April 6th) Chris will shadow Yuvi to see what he does and
to start helping out.
* The internationalisation/localisation team gets feedback (including
bug reports and feature requests) through various channels (IRC, Village
Pumps, Bugzilla, private email, Twitter, mailing lists, etc.), and it's
time-intensive to gather, aggregate, triage, curate, and respond to it.
Alolita is following up with the i18n team to get more details on that
process -- how do they source feedback? Where do they get it, how do
they aggregate it, and how much time does it take? I'll be following up
with product managers to get similar step-by-step guides from other
projects. That way we can figure out how much time it's taking, how to
split up that workload among product managers and QA, and whether we can
get some quick wins in systematizing this process and doing it smarter.
(Separately from the feedback *aggregation* effort, various folks are
investigating feedback *mechanisms* integrated into user-facing things.
See https://www.mediawiki.org/wiki/Extension:MoodBar/Feedback for some
thoughts on this.)
* We really need help curating Bugzilla. Mark's Bug Squads can't come
soon enough! :-) And we aim to grow testing leaders in the community.
So if you're interested in stepping up and going from nitpicker to
LEADER of nitpickers, we have some tasks ready for you. :D
* Mozilla has a regular testing event
https://quality.mozilla.org/category/events/month and lists bugs in an
etherpad as they're reported:
https://etherpad.mozilla.org/testday-20120329 . Time-limited test
efforts like that are nice; if you can't constrain the number of
channels people use to report things, you can at least constrain *time*
and thus the amount of flow that comes through, so you can actually
follow up more effectively! Chris doesn't think we quite have the
social and technical infrastructure in place to properly support a
testing event like this yet, but that's a goal.
Thanks to Tomasz, Yuvi, Alolita, and Chris for contributing to this writeup.
Volunteer Development Coordinator
I'm putting together the list of existing extensions whose maintainers
want them migrated with the next batch. Everyone who has an existing
extension and wishes it to be migrated in this batch, please add
your extension and maintainer(s) to the page on mw.org.
I believe there was some confusion and people were signing up
on the "new repositories" page. All of those entries were moved
to the new page.
The deadline to get on the list is Wednesday at 23:59 UTC.
The planned migration window is going to be on Friday.
Again, this is only for extensions that are already in SVN.
cross-posting to wikitech-l
On Thu, Mar 29, 2012 at 8:34 AM, Jon Robson <jrobson(a)wikimedia.org> wrote:
> I've been experimenting with the api that Max Semenik has been working
> on for the MobileFrontend extension so that any searches, links to other articles
> and section expansions go via the api. It's something I'm very
> interested in doing, both from a usability and a site performance
> perspective. The best example of what I'm trying to do can be seen on
> http://github.com. It's worth pointing out that this is not currently
> on the roadmap but it is something I'm very interested in us doing.
> The resulting prototype has a few quirks (and a few things I
> haven't paid attention to e.g. page titles do not change) but I think
> is interesting as it is noticeably faster (please bear in mind this is
> a slow server) and reduces the load on the server in that any page
> loads after the first load will go via the api. Note that devices which do
> not support the history api or jQuery are not affected by this code.
> It throws up some interesting challenges.
> 1) It enables sub section expansion for which the existing design
> doesn't really seem to work. It also means pages like History of China
> look different when viewed normally than when retrieved via the api
> (the normal version only has expansion on the top-level sections).
> 2) Also the back button (at the moment) has no memory of which
> sections were opened/closed and instead goes back to the last loaded page.
> 3) The first page is still wastefully loaded. It would be good if we
> could load all sections dynamically on the first page whilst not giving a worse experience.
> 4) Some sections just contain sub sections. So if I click a section
> with heading 'Section 1' that only contains 'Section 1.1' and 'Section
> 1.2' there is a brief flash whilst it tries to load any text for
> section 1 ('loading content' -> empty string). To see what I mean load the
> Japan page, load History of China via the link 'Chinese history
> texts' and click 'Prehistory' (observe 'loading content'). I guess it
> would be useful if the api let me know beforehand if a section was
> empty of any text so I didn't try to retrieve it.
> 5) The api returns the heading in the text. For example the
> line Prehistory includes the heading in the text. This is unnecessary
> as I can determine this from toclevel and line. In my code I'm
> currently scrubbing this out every time I load.
> Feel free to have a play locally (the mobile-geo server is very,
> very slow) and improve on it.
> Let me know if this interests anyone other than me :-)
>  http://www.w3.org/TR/html5/history.html#the-history-interface
>  http://mobile-geo.wmflabs.org/w/index.php/Millet
>  http://mobile-geo.wmflabs.org/w/index.php/History_of_China
>  http://mobile-geo.wmflabs.org/w/index.php/Japan
>  https://gerrit.wikimedia.org/r/#change,3894
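For reference, the section-by-section loading Jon describes can be
sketched in Python against the standard action=parse API (whether the
prototype uses this module or MobileFrontend's own module is my
assumption):

    import requests

    API = 'http://mobile-geo.wmflabs.org/w/api.php'

    def fetch_section(page, section):
        # Ask the wiki to render just one numbered section of the page.
        data = requests.get(API, params={
            'action': 'parse', 'page': page, 'section': section,
            'prop': 'text', 'format': 'json',
        }).json()
        return data['parse']['text']['*']

    html = fetch_section('History_of_China', 1)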
Software Engineer, Mobile
This email is going to briefly describe the old SVN workflow, and then
use that as a baseline to describe what we should do for Git. I
haven't had a chance to coordinate this mail with Chad (or anyone
else), so I'll reserve the right for him to completely contradict me
here. This is meant to provoke a discussion about how we're really
going to use Git, and to establish a plan for taking advantage of the
new workflow to move to much more frequent deployments.
In the old world, we had this:
trunk
├── REL1_17
│   └── 1.17wmf1 (branched from REL1_17)
├── REL1_18
│   └── 1.18wmf1 (branched from REL1_18)
└── REL1_19
    └── 1.19wmf1 (branched from REL1_19)
Tarball releases would come out of the respective REL1_xx branches,
and deployments would come out of the 1.xxwmf1 branches. REL1_xx
branches have all extensions, and 1.xxwmf1 branches have only
Wikimedia production code. Each would be a relatively long lived
branch (6-18 months) into which critical fixes and priority features
would be merged from trunk.
Looking ahead to deployments, there are a couple of different ways to go.
One plan would be to have a "wmf" branch that does not trail far
behind the master. The extensions we deploy to the cluster can be
included as submodules for that given branch. The process for
deployment at that point will be "merge from master" or "update
submodule reference" on the wmf branch. Then on fenari, you will git
pull and git submodule update before scapping like you're currently
used to. The downside of this approach is that there's not an obvious
way to have multiple production branches in play (heterogeneous
deploy). Seems solvable (e.g. wmf1, wmf2, etc.), but that also seems clunky.
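On fenari, a deploy under this plan would then be roughly (a sketch,
using the commands named above):
git pull
git submodule update --init
scap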
Another possible plan would be to have something *somewhat* closer to
what we have today, with new branches off of trunk for each
deployment, and deployments happening as frequently as weekly.
This is how I was envisioning the process working, and just didn't get
a chance to sync up with Chad to find out what the issues of this
approach would be.
Since we don't have an imminent deployment coming from Git, we have a
little time to figure this situation out.
Regardless of the branching strategy, the goal would be to start as
early as April with much more frequent deployments to production. The
deployment plan would look something like this:
* Deploy 1.20wmf01 to test2 real soon now (say, no later than April 16).
* Deploy 1.20wmf01 to mediawiki.org a couple deploy days after that
("deploy day" meaning Monday through Thursday)
* Let simmer for some short-ish amount of time (TBD)
* Roll out 1.20wmf01 to more wikis, eventually making it to all of them
Given the way APC caches and other caching works, I suspect we can't
get away with having more than two simultaneous versions out on the
production cluster, but we could conceivably have a situation where,
for example, a deploy day or two after rolling out 1.20wmf01 out to
the last of the wikis, we then roll out 1.20wmf02 out to test2.
This topic is partially covered here:
...but I imagine we'll probably need to revise that based on this
conversation and perhaps break this out into a separate page.
There's a few of us that plan to meet in a couple of weeks to
formalize something here, but perhaps we can get this all hammered out
on-list prior to that.
Thoughts on this process?
I'm currently working on a major in Computer Science and Technology and I
want to get involved in the open source community (I am a newbie). From
the listed ideas, 'Integrate upload from Flickr' and 'Taxobox' caught my
attention and I am working on a proposal, but I didn't want to wait any
longer to introduce myself to you and the community.
I know PHP, and I have also used MySQL on school projects (I have more
experience with SQL Server because of an internship I did). The past 3
days I've been getting familiar with the resources available for
developers (MediaWiki architecture, Manual:code and Coding conventions).
I'll let you know when I post it on my userpage, hoping I can receive some feedback.