Hi,
GNUnify 2013 (v11) has been announced. The CfP is open (there does not seem to
be a closing date, but it is better to register talks early).
Date: 15th, 16th and 17th Feb 2013
Venue:
Symbiosis Institute of Computer Studies and Research (SICSR), Pune.
Tracks include:
System Admin (networking, security, etc.)
Web Technologies
Mobile Technologies
Cloud Computing
Scientific Computing
System Programming
FOSS - Generic Topics
Current Trends
You have to register as a user and then submit a talk/workshop.
http://www.gnunify.in/user/register
(try "forgot password" with your mail ID if you registered last year).
--
-------------------------------------------------------------
Sheel Sindhu Manohar (शील सिंधु मनोहर) <http://ssmanohar.in>
www.jmilug.org
-------------------------------------------------------------
Hey,
I just merged changes into the master branch of several extensions, most
notably Maps, Semantic MediaWiki and Validator, that make them depend on
DataValues [0]. They will thus no longer work without DataValues. Please
let me know if you run into any issues :)
[0] https://www.mediawiki.org/wiki/Extension:DataValues
Cheers
--
Jeroen De Dauw
http://www.bn2vs.com
Don't panic. Don't be evil.
--
Hi,
Someone once suggested we create a control panel for bots. I think the
first step would be to create a page giving an overview of all the bots
we are running on the projects. If we created some protocol for querying
bot status, we could build a central monitoring server which would
either:
* actively query each bot for its status (at some known address), or
* have each bot contact the server and deliver the information itself.
I would support the second option, as it is easier to manage: in the first
case we would need to configure the "master server" with a list of bots to query.
The system could simply be a daemon written in any language plus a PHP
script. Bots would contact the server through the PHP script (just POSTing
whether they are running or having trouble), and the daemon would
periodically flag every bot that hadn't reported within a certain period
as having trouble / needing repair.
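As a rough sketch of that push approach, condensed here into a single
Node.js process instead of the PHP script plus daemon (the port, field
names and timeout are all illustrative):

// Heartbeat endpoint: bots report in by POSTing e.g. "bot=InterwikiBot&status=ok".
var http = require('http');
var querystring = require('querystring');

var STALE_AFTER_MS = 15 * 60 * 1000; // flag bots silent for over 15 minutes
var bots = {}; // bot name -> { status: 'ok' | 'trouble', lastSeen: timestamp }

http.createServer(function (req, res) {
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
        var post = querystring.parse(body);
        if (post.bot) {
            bots[post.bot] = { status: post.status || 'ok', lastSeen: Date.now() };
        }
        res.end('ok');
    });
}).listen(8080);

// The "daemon" half: every minute, flag bots that stopped reporting.
setInterval(function () {
    Object.keys(bots).forEach(function (name) {
        if (Date.now() - bots[name].lastSeen > STALE_AFTER_MS) {
            bots[name].status = 'needs repair';
        }
    });
}, 60 * 1000);

A status page would then just render the bots map.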
This would give us an overview of all active bots on all projects and
their status. What do you think? Is anyone interested in working on that?
The Editor Engagement team will be holding special Echo IRC office
hours, specifically for developers, next Tuesday. We would like to let the
other developers know what we're up to and allow them to ask any
questions they may have about the Echo Notifications system. Since Echo
is designed to be utilized by other extensions, we also hope to provide
some guidance on how to accomplish this. Hope to see you there!
The meeting will be in #wikimedia-tech on Tuesday, January 8th at 11am
PST (19:00 UTC).
Ryan Kaldari
How many lines does a script need, and how many parameters, to get
anything from anywhere (in the MediaWiki world) and to do anything with
the result?
Well, I'm surprised that the answer to both questions is: one.
// Run any jQuery Ajax request, then hand the result to a caller-supplied callback.
function getHtmlNew(parametri) {
    $.ajax(parametri.ajax).done(function (data) {
        parametri.callback(parametri, data);
    });
}
Yes, parametri is a pretty complex object, and callback() could be very simple
or extremely complex; still, the script itself is one line, needing only
one parameter.
It works with the Wikidata API too.
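For example, here is a hypothetical parametri object that fetches item
Q42 from the Wikidata API and logs its labels (the entity ID and the
callback body are just illustrations):

getHtmlNew({
    ajax: {
        url: '//www.wikidata.org/w/api.php',
        data: { action: 'wbgetentities', ids: 'Q42', format: 'json' },
        // JSONP, so the request also works cross-domain.
        dataType: 'jsonp'
    },
    callback: function (parametri, data) {
        // Do anything with the result; here, just log the item's labels.
        console.log(data.entities.Q42.labels);
    }
});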
Alex Brollo
LevelUp is a mentorship program, starting in January 2013, that replaces
the "20% time" policy
https://www.mediawiki.org/wiki/Wikimedia_engineering_20%25_policy for
Wikimedia Foundation engineers. Technical contributors, volunteer or
staff, have the opportunity to participate; see
https://www.mediawiki.org/wiki/Mentorship_programs/LevelUp for more details.
We started 20% time to ensure that Wikimedia Foundation engineers would
spend at least 20% of each week on tasks that directly serve the
Wikimedia developer and user community, including bug triage, code
review, extension review, documentation, urgent bugfixes, and so on. It
had various flaws: one day every week, I made people task-switch, which
got in the way of their deadlines, and it was perceived as a chore that
always needed doing.
It felt like enforcing a rota to do the dishes. So instead, let's build
a dishwasher. :-) We can cross-train each other and fill in the empty
rows on the maintainership table
https://www.mediawiki.org/wiki/Developers/Maintainers so our whole
community gains the capacity to get stuff done faster.
If you've been frustrated because of code review delays, I want you to
sign up for LevelUp -- by March 2013 you could be a co-maintainer of a
codebase and be merging and improving other people's patchsets, which
will give them more time and incentive to merge yours. :-)
When I asked what people wanted to learn, I got a variety of responses
-- including "MediaWiki in general", "puppet", "networking", and "JS,
PHP, HTML, CSS, SQL" -- all of which you can learn through LevelUp.
When I asked how you wanted to learn, all of you said you wanted
real-life, hands-on work with mentors who could answer your questions.
Here you go. :-)
I won't be starting the matchmaking process in earnest till I come back
from the Thanksgiving break on Monday, but I will reply to talk page
messages and emails then. :-)
--
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation
Hi folks,
One item that comes up pretty frequently in our regular conversations
with the Wikidata folks is the question of how change propagation
should work. This email is largely directed at the relevant folks in
WMF's Ops and Platform Eng groups (and obviously, also the Wikidata
team), but I'm erring on the side of distributing too widely rather
than too narrowly. I originally asked Daniel to send this (earlier
today my time, which was late in his day), but decided that even
though I'm not going to be as good at describing the technical details
(and I'm hoping he chimes in), I know a lot better what I was asking
for, so I should just write it.
The spec is here:
https://meta.wikimedia.org/wiki/Wikidata/Notes/Change_propagation#Dispatchi…
The thing that isn't covered here is how it works today, which I'll
try to quickly sum up. Basically, it's a single cron job, running on
hume[1]. So, that means that when a change is made on wikidata.org,
one has to wait for this job to get around to running before the item
updates on the client wikis.
It'd be good for someone from the Wikidata team to fill in the specifics here.
We've declared that Good Enough(tm) for now, where "now" is the period
of time where we'll be running the Wikidata client on a small number
of wikis (currently test2, soon Hungarian Wikipedia).
The problem is that we don't have a good plan for a permanent solution
nailed down. It feels like we should make this work with the job
queue, but the worry is that once Wikidata clients are on every single
wiki, we're going to basically generate hundreds of jobs (one per
wiki) for every change made on the central wikidata.org wiki.
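To make that worry concrete, here is one rough shape a batched dispatcher
could take (a language-agnostic sketch written as JavaScript; every name
is illustrative, none of this is an existing MediaWiki API). The idea is
to keep a per-client-wiki pointer into the shared change log and enqueue
one batch job per wiki per dispatch run, instead of one job per change
per wiki:

var changeLog = []; // appended to on every wikidata.org edit
var dispatchPosition = { huwiki: 0, test2wiki: 0 }; // per-wiki pointer into the log

function dispatchBatches(enqueueJob) {
    Object.keys(dispatchPosition).forEach(function (wiki) {
        var batch = changeLog.slice(dispatchPosition[wiki]);
        if (batch.length > 0) {
            // One job per wiki per run, however many changes have accumulated.
            enqueueJob(wiki, batch);
            dispatchPosition[wiki] = changeLog.length;
        }
    });
}

That caps job volume at one job per client wiki per dispatch interval
rather than one per wiki per change, but it's only a sketch of the shape,
not a proposal for the actual implementation.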
Any guidance on what a permanent solution should look like? If you'd like
to wait for Daniel to clarify some of the tech details before
answering, that's fine.
Rob
[1] http://wikitech.wikimedia.org/view/Hume
Hi All,
Due to some security concerns, ganglia.wikimedia.org is currently behind an
htaccess login.
We're actively working to resolve the security issues, and will restore
public access once they are resolved.
Sorry for the inconvenience!
Cheers,
Peter
Can I change my full name in Gerrit somehow? I don't like the fact that
when I merge something in Gerrit, my commits have "JGonera" as the author
instead of "Juliusz Gonera".
Juliusz