Hi,
I wonder if the way MediaWiki handles sections (delimited by
==titles==) could be improved. We have a lot of information about a
page, such as its history, the templates used, and the pages that link
to it, and we can rename or delete it.
Currently none of this can be applied to a section. If we edit a
section, we do not know which templates are used in it unless we hit
the preview button (bug 878 - when you edit a page, you are shown the
list of templates used immediately). We cannot check the history of a
particular section and have to dig through all the edits of the page.
The most cumbersome situation is when you want to watch only one
section, especially on talk pages. You have no choice but to watch the
entire page and that can be really painful.
Is there an "easy" way to improve this situation?
--misdre
Guys,
we are creating an open source, nonprofit, e-learning social network and
looking for people to help.
Anyone interested, send an email to infinitunm(a)infinitunm.com or to me
eros.phill(a)gmail.com
regards
eros phillipe
On Tue, May 31, 2011 at 10:35 AM, Neil Kandalgaonkar
<neilk(a)wikimedia.org> wrote:
> Are we all in deadlock or something? Are the users who can push waiting
> for some proposals/work from the rest of the community?
We had a hallway conversation about this just now (Neil, Trevor, Brion
and I, and then just Brion and I), which I think was pretty useful.
Here's where we went with it:
1. We rehashed the pre-commit review proposal that Neil suggested a
few months ago, and agreed that pre-commit review would be helpful in
keeping the backlog down.
2. Given our current tools/process, we agreed that insisting on
pre-commit review would be a pain in the butt.
3. Brion and I further discussed the review process, trying to come up
with a system that gives us the benefits of pre-commit review without
actually switching to pre-commit review.
Here's where I think things got interesting. Brion pointed out that
in ye olden days, he was much more aggressive about reverting things
he didn't understand. I pointed out that, as we broaden the pool of
committers, "I don't understand"-based reversions lead to a lot of
ugliness, since very few people can claim a broad enough understanding
of the system to expect to understand every change.
Most reviewers, faced with a commit they don't understand, will leave
it for others to comment on. There's been a lot of unnecessary drama
and churn over reversions because of misunderstandings about what a
reversion means.
So, there are a number of possible solutions to this problem. These are
independent suggestions, but any of these might help:
1. We say that a commit has some fixed window (e.g. 72 hours) to get
reviewed, or else it is subject to automatic reversion. This will
motivate committers to make sure they have a reviewer lined up, and
make it clear that, if their code gets reverted, it's nothing
personal...it's just our process.
2. We encourage committers to identify who will be reviewing their
code as part of their commit comment. That way, we have an identified
person who has license to revert if they don't understand the code.
I could've sworn there were other ideas that came out of that
conversation, but alas, I wasn't taking notes. Anyway, I'm sure
they'll come up in this thread.
Rob
Neil spent some time yesterday hacking with the Hackpad guys (
https://hackpad.com/ -- an Etherpad fork) experimenting with embedding the
collaborative editor via an <iframe> into MediaWiki's edit page in order to
use it to do multiuser editing.
Apparently it's pretty cool so far and I'm looking forward to seeing it
stabilized! But looking further ahead, we're tossing around the office
the idea of a common protocol for such editor embedding, which we could
then use for other things:
* the contentEditable mode for WikiEditor
* the Ace syntax-highlighter in CodeEditor gadget
* experimental rich-text editors, etc.
Some of Neil & my initial notes:
http://www.mediawiki.org/wiki/Future/Hackpad/Spec
The API between the host window and the (potentially offsite) iframe would
need to handle loading initial text, saving it, and some state checks at a
minimum; depending on how much we want to integrate the WikiEditor's toolbar
portion as a standard we may need to be able to let the toolbar control
things like selection and text insertion, or each editor variant could
manage its own toolbar and that could be a WikiEditor-specific protocol
part.
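To make the discussion concrete, here's a minimal sketch (in TypeScript) of
what host-to-iframe messaging via window.postMessage could look like. The
message names ('loadText', 'saveText', etc.) are hypothetical placeholders
for illustration, not anything agreed on the spec page above:

```typescript
// Hypothetical message shapes for host <-> editor-iframe communication.
// 'loadText', 'saveText', 'textSaved', 'stateCheck', and 'state' are
// illustrative names, not part of any agreed protocol.
type EditorMessage =
  | { type: 'loadText'; text: string }   // host -> editor: initial wikitext
  | { type: 'saveText' }                 // host -> editor: request current text
  | { type: 'textSaved'; text: string }  // editor -> host: reply with text
  | { type: 'stateCheck' }               // host -> editor: ready/dirty query
  | { type: 'state'; ready: boolean; dirty: boolean };

// Host side: push a message into the (potentially offsite) iframe.
// Serializing to JSON explicitly keeps the same protocol usable
// across origins and across postMessage implementations.
function sendToEditor(
  frame: { postMessage(msg: string, origin: string): void },
  msg: EditorMessage,
  targetOrigin: string
): void {
  frame.postMessage(JSON.stringify(msg), targetOrigin);
}

// Either side: decode an incoming message, ignoring anything unknown
// or malformed rather than throwing.
function decodeMessage(raw: string): EditorMessage | null {
  try {
    const msg = JSON.parse(raw);
    return typeof msg === 'object' && msg !== null && 'type' in msg
      ? (msg as EditorMessage)
      : null;
  } catch {
    return null;
  }
}
```

A real version would also need to pin the expected origin on the receiving
side before trusting any message, which is the main security question for
offsite editors like Hackpad.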
Thoughts?
-- brion
Are there any statistics about skin usage on Wikimedia projects?
For example:
* How many people use each skin on each project?
* How many accounts that edited before and after the Vector switch
have Monobook?
* How many accounts that were created after the Vector switch use Monobook?
--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
"We're living in pieces,
I want to live in peace." - T. Moore
I've been looking to experiment with node.js lately and created a
little toy webapp that displays updates from the major language
wikipedias in real time:
http://wikistream.inkdroid.org
Perhaps like you, I've often tried to convey to folks in the GLAM
sector (Galleries, Libraries, Archives and Museums) just how much
Wikipedia is actively edited. GLAM institutions are increasingly
interested in "digital curation" and I've sometimes displayed the IRC
activity at workshops to demonstrate the sheer number of people (and
bots) that are actively engaged in improving the content there...with
the hopes of making the Wikipedia platform part of their curation
strategy.
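For anyone curious how an app like this can get the updates: Wikimedia wikis
broadcast recent changes on irc.wikimedia.org, one formatted line per edit,
wrapped in mIRC colour codes. Here's a rough TypeScript sketch of parsing
such a line; the line format assumed below is an approximation for
illustration, not the feed's actual spec:

```typescript
// Approximate shape of one recent-changes line after colour stripping:
//   [[Title]] flags url * user * (+123) edit summary
interface RecentChange {
  title: string;
  url: string;
  user: string;
  delta: number;   // byte change, e.g. +123 or -45
  comment: string;
}

// Strip mIRC formatting: \x03 colour codes (optionally followed by
// fg[,bg] digits) plus bold/reset/reverse/underline control chars.
function stripColours(line: string): string {
  return line
    .replace(/\x03\d{0,2}(,\d{1,2})?/g, '')
    .replace(/[\x02\x0f\x16\x1f]/g, '');
}

// Parse one feed line, returning null for anything that doesn't match.
function parseRcLine(line: string): RecentChange | null {
  const m = stripColours(line).match(
    /^\[\[(.+?)\]\]\s+(?:\S+\s+)?(https?:\/\/\S+)\s+\*\s+(.+?)\s+\*\s+\(([+-]?\d+)\)\s*(.*)$/
  );
  if (!m) return null;
  return { title: m[1], url: m[2], user: m[3], delta: parseInt(m[4], 10), comment: m[5] };
}
```

Counting parsed lines per wiki channel is then enough to drive the kind of
live activity display wikistream shows.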
Anyhow, I'd be interested in any feedback you might have about wikistream.
//Ed
I know this might be offtopic, but people on this list seem likely to be
interested or know people who are.
The Wikimedia Foundation has an open call for independent contractors
for the position "Networking Contractor Amsterdam" and just added two
more calls: "Internationalization and Localization Outreach" and
"Internationalization and Localization Feature Development".
The tech team is also currently looking to hire for the following
positions (and the following list will of course change):
* Operations Engineer
* Software Developer (Features)
* Systems Engineer - Data Analytics
* Product Manager (Features)
* Software Developer Rich Text Editing (Features)
* Software Developer Frontend
* Software Developer Backend
* Quality Assurance Lead
* Product Manager (Analytics)
All the job descriptions, required and desired qualifications, and
application instructions are on the Job Openings page:
http://wikimediafoundation.org/wiki/Job_openings (and there are also job
openings there for other Foundation departments, like Global Development).
As the job openings page says: "The positions listed here are based in
our San Francisco headquarters, but in some cases we may be open to the
possibility of people working remotely, unless otherwise noted in the
job posting itself. If you are not currently living in the San Francisco
Bay Area, and are not willing to relocate there, please make that clear
in your cover letter."
-Sumana Harihareswara
Volunteer Development Coordinator
Wikimedia Foundation
The recent elections showed us that language issues and translation
are something we have to take very seriously from now on. As a first
step towards improving communication, it seems like we should get an
idea of which users speak which languages.
We could directly ask them to tell us, but on reflection, the
information is already hidden in our database: a multilingual user is
one who actively edits two projects in different languages.
In devising a comprehensive translation strategy, we need to know how
interconnected any two given projects are. We also need to know how
connected any given project is to English, since it's our working
language.
We need to pay special attention to languages that are very 'distant'
from English -- distant in the sense of having few members who are
fluent in both English and the language in question.
Could someone help me get this data, or explain why I don't need it,
or why we already have it, etc.?
Specifically, I'm looking for:
# For each non-English-language project, how many of their active
users are ALSO active on an English-language project? (The answer
should be a single whole number for each project.)
# For any two projects, how many users are there who are active on
both? (The answer is a square matrix, roughly 750x750.)
# For any two languages, how many users appear to speak both
languages? (The answer is a square matrix, roughly 750x750.)
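Setting aside how the active-user sets get extracted from the database
(which is the hard part), the pairwise matrix itself is simple set
intersection. A sketch in TypeScript, with made-up project names, assuming
the same username identifies the same person on both projects (which
unified login makes only approximately true):

```typescript
// Given per-project sets of active usernames, count, for every pair of
// projects, the users active on both. The diagonal is each project's
// own active-user count.
function activeOverlap(
  projects: Map<string, Set<string>>
): Map<string, Map<string, number>> {
  const matrix = new Map<string, Map<string, number>>();
  for (const [p1, users1] of projects) {
    const row = new Map<string, number>();
    for (const [p2, users2] of projects) {
      let shared = 0;
      for (const u of users1) {
        if (users2.has(u)) shared++;
      }
      row.set(p2, shared);
    }
    matrix.set(p1, row);
  }
  return matrix;
}
```

The language-by-language matrix (question 3) would then just merge each
language's projects into one user set before running the same pairwise count.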
Does anyone know how to pull this out of the database? It's an
important question for recruiting translators and really just for
assessing "where we are" in terms of inter-project language capabilities.
Alec
We've been fighting some weird regressions in the test cases for the Block
class which handles IP & user blocks, some of which took us a while to even
reproduce consistently. After Chad & others cleaned up some sqlite-related
issues, we still had a remaining stubborn one which failed in the full test
suite, but not when run standalone.
Between me, Mark H, and Chad, we seem to have finally worked that one out
today, and made some fixes to both Block and BlockTest which have resolved
it.
Full story if you like. :D -> http://etherpad.wikimedia.org/Block-test-bug
Short story:
http://ci.tesla.usability.wikimedia.org/cruisecontrol/buildresults/mw
"Unit Tests: (1628) All Tests Passed "
*break out the virtual champagne*
-- brion