When users are renamed, it is up to the user to find and replace all old
instances of their signature to make sure that they redirect to the
appropriate account, if they want to put that much work into it. The other
option is to re-register the old username and redirect it to the new one.
This is not always feasible, for instance if someone who owns the SUL
account for the old username comes over and takes it before it gets
re-registered.
I was wondering if anyone had the interest to build a tool on Labs that
would let a renamed user find and replace old signatures in an automated
fashion. The tool would check against the rename log and the old timestamp
to make sure it's the appropriate user, then substitute the old name part
with ~~~ to insert the new username while preserving the timestamp.
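A minimal sketch of the substitution step (assuming signatures are standard [[User:...]] wikilinks; the rename-log and timestamp checks are left out, and the usernames are placeholders, not real accounts):

```python
import re

# Placeholder names for illustration only.
OLD, NEW = "OldName", "NewName"

# Match user-page and user-talk links that point at the old name.
SIG_LINK = re.compile(
    r"\[\[User(?P<talk> talk|_talk)?:%s(?P<rest>[|\]])" % re.escape(OLD)
)

def rewrite_signature(wikitext):
    """Swap the old username for the new one inside signature links,
    leaving the rest of the line (including the timestamp) untouched."""
    return SIG_LINK.sub(
        lambda m: "[[User%s:%s%s" % (m.group("talk") or "", NEW, m.group("rest")),
        wikitext,
    )

print(rewrite_signature(
    "[[User:OldName|Old]] ([[User talk:OldName|talk]]) "
    "12:34, 5 February 2015 (UTC)"
))
# → [[User:NewName|Old]] ([[User talk:NewName|talk]]) 12:34, 5 February 2015 (UTC)
```

A real tool would of course have to handle customized signatures and confirm against the rename log before editing anything.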
I'm not a developer myself, so thoughts on feasibility and implementation
are welcome. I think such a tool would be highly useful, and I'm slightly
surprised it hasn't been made yet.
--
Keegan Peterzell
Community Liaison, Product
Wikimedia Foundation
Hello and welcome to the latest edition of the WMF Engineering Roadmap
and Deployment update.
The full log of planned deployments next week can be found at:
<https://wikitech.wikimedia.org/wiki/Deployments#Week_of_February_9th>
A quick list of notable items... (not much)
== Tuesday ==
* MediaWiki deploy
** group1 to 1.25wmf16: All non-Wikipedia sites (Wiktionary, Wikisource,
Wikinews, Wikibooks, Wikiquote, Wikiversity, and a few other sites)
** <https://www.mediawiki.org/wiki/MediaWiki_1.25/wmf16>
== Wednesday ==
* MediaWiki deploy
** group2 to 1.25wmf16 (all Wikipedias)
** group0 to 1.25wmf17 (test/test2/testwikidata/mediawiki)
Thanks and as always, questions and comments welcome,
Greg
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
On Thu, Feb 5, 2015 at 8:50 PM, MZMcBride <z(a)mzmcbride.com> wrote:
> Hi.
>
> Phabricator search is currently pretty bad:
> <https://phabricator.wikimedia.org/T75854>.
>
> We should already be wary of putting all our eggs in the Phabricator
> basket. Search is hugely important to an issue tracker and code
> repository, so if we're going to continue growing Phabricator, we need to
> figure out short-term and long-term solutions for Phabricator search.
>
> Perhaps if Titan/Wikidata Query Service development is on hold, Nik could
> investigate this? Or perhaps dropping Elasticsearch for now in favor of
> the built-in search would be better? Thoughts, suggestions, etc. welcome.
^d was looking into this a bit today based on some reports that came
in on IRC. He has already been working actively with upstream to improve
their Elasticsearch integration, and he is now assigned to the Release
Engineering team that helped bootstrap the Phabricator install, so I'd
expect he will continue to be interested in improvements.
Bryan
--
Bryan Davis Wikimedia Foundation <bd808(a)wikimedia.org>
[[m:User:BDavis_(WMF)]] Sr Software Engineer Boise, ID USA
irc: bd808 v:415.839.6885 x6855
I couldn't find a tool to convert my videos from whatever format into .ogv
outside of my own PC before pushing them to Commons. I guess something
like that might exist, but I couldn't find it, so I made one for myself.
Maybe it will be useful to others too.
I call it CommonsConvert. Upload a video in any format, enter a username
and password, and the video is converted to .ogv and pushed to Commons,
all in one shot.
There are some rough edges currently, such as no available means to append
a license to the uploaded video, among other issues. Working on the ones I
can ASAP.
It is a Django application written in Python that uses the mwclient
module: Django collects the user info, Python calls avconv in a
subprocess, and the converted file is relayed to Commons through mwclient
via the MediaWiki API.
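The pipeline described above could be sketched roughly like this (illustrative only, not the tool's actual code; the avconv flags and the description text are my assumptions):

```python
import subprocess

def build_avconv_cmd(src, dst):
    """Command line for transcoding the input file to Ogg Theora/Vorbis."""
    return ["avconv", "-i", src,
            "-codec:v", "libtheora", "-codec:a", "libvorbis", dst]

def convert_and_upload(src, username, password):
    """Transcode src to .ogv, then push the result to Commons."""
    import mwclient  # third-party; imported here so the sketch loads without it

    dst = src.rsplit(".", 1)[0] + ".ogv"
    subprocess.check_call(build_avconv_cmd(src, dst))  # avconv in a subprocess
    site = mwclient.Site("commons.wikimedia.org")      # talks to the MediaWiki API
    site.login(username, password)
    with open(dst, "rb") as f:
        site.upload(f, filename=dst.rsplit("/", 1)[-1],
                    description="Uploaded via CommonsConvert")
```

The key design point is that the transcode runs server-side, so the uploader never needs avconv (or any codec knowledge) locally.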
I think not everyone has the means/technical know-how/interest/time to
convert videos taken on their PC to an Ogg-Vorbis-whatever format before
uploading them to Commons.
Doing the conversion on a server leaves the user free to focus on getting
videos rather than processing them.
I don't know if this is or will be of interest to anyone, but I personally
would rather have a server sitting somewhere convert the videos I want to
save to Commons than have my local computer do that boring task.
In an email to this list a week or so ago, I 'ranted' about how Commons
wants a specific format (one I have never come across anywhere outside of
Commons) but has no provision for converting videos thrown at it into that
format of its choice. Well....
The tool can be found here: <http://khophi.co/commonsconvert>
And this is a sample video uploaded using the tool:
https://commons.m.wikimedia.org/wiki/File:Testing_file_for_upload_last.ogv
(it will likely be deleted soon)
What I have not yet experimented with is whether uploading via the API is
also subject to the 100 MB upload restriction.
I will appreciate feedback.
Gerrit change 181958 [1] was recently merged, which allows (among other
things) sysops to irrecoverably delete change tags. Since irrecoverable
deletion of anything on-wiki is rather unprecedented, I think we should
stop granting this right to all sysops in DefaultSettings.php, so that
wikis have to opt in for this feature to be enabled. Thoughts?
[1] https://gerrit.wikimedia.org/r/#/c/181958/
LDQ 2015 CALL FOR PAPERS
2nd Workshop on Linked Data Quality
co-located with ESWC 2015, Portorož, Slovenia
June 1, 2015
http://ldq.semanticmultimedia.org/
Important Dates
* Submission of research papers: March 6, 2015
* Notification of paper acceptance: April 3, 2015
* Submission of camera-ready papers: April 17, 2015
Since the start of the Linked Open Data (LOD) Cloud, we have seen an
unprecedented volume of structured data published on the web, in most
cases as RDF and Linked (Open) Data. The integration across this LOD
Cloud, however, is hampered by the ‘publish first, refine later’
philosophy. This is due to various quality problems existing in the
published data such as incompleteness, inconsistency,
incomprehensibility, etc. These problems affect every application
domain, be it scientific (e.g., life science, environment),
governmental, or industrial.
We see linked datasets originating from crowdsourced content like
Wikipedia and OpenStreetMap (e.g. DBpedia and LinkedGeoData) and also
from highly curated sources, e.g. from the library domain. Quality is
defined as “fitness for use”: DBpedia may currently be appropriate for a
simple end-user application but could never be used in the medical domain
for treatment decisions. Quality is thus key to the success of the data
web, and poor quality remains a major barrier to further industry
adoption.
Despite data quality being an essential concept in Linked Data, few
efforts are currently available to standardize how data quality tracking
and assurance should be implemented. Particularly in Linked Data,
ensuring data quality is a challenge as it involves a set of
autonomously evolving data sources. Additionally, detecting the quality
of datasets available and making the information explicit is yet another
challenge. This includes the (semi-)automatic identification of
problems. Moreover, none of the current approaches uses the assessment
to ultimately improve the quality of the underlying dataset.
The goal of the Workshop on Linked Data Quality is to raise the
awareness of quality issues in Linked Data and to promote approaches to
assess, monitor, maintain and improve Linked Data quality.
The workshop topics include, but are not limited to:
* Concepts
  - Quality modeling vocabularies
* Quality assessment
  - Methodologies
  - Frameworks for quality testing and evaluation
  - Inconsistency detection
  - Tools/Data validators
* Quality improvement
  - Refinement techniques for Linked Datasets
  - Linked Data cleansing
  - Error correction
  - Tools
* Quality of ontologies
* Reputation and trustworthiness of web resources
* Best practices for Linked Data management
* User experience, empirical studies
Submission guidelines
We seek novel technical research papers in the context of Linked Data
Quality, with a length of up to 8 pages for long papers and 4 pages for
short papers. Papers should be submitted in PDF format. Other
supplementary formats (e.g. HTML) are also accepted, but a PDF version is
required.
Paper submissions must be formatted in the style of the Springer
Publications format for Lecture Notes in Computer Science (LNCS). Please
submit your paper via EasyChair at
https://easychair.org/conferences/?conf=ldq2015. Submissions that do not
comply with the formatting of LNCS or that exceed the page limit will be
rejected without review. We note that the author list does not need to
be anonymized, as we do not have a double-blind review process in place.
Submissions will be peer reviewed by three independent reviewers.
Accepted papers have to be presented at the workshop.
Important Dates
All deadlines are, unless otherwise stated, at 23:59 Hawaii time.
* Submission of research papers: March 6, 2015
* Notification of paper acceptance: April 3, 2015
* Submission of camera-ready papers: April 17, 2015
* Workshop date: May 31 or June 1, 2015 (half-day)
Organizing Committee
* Anisa Rula – University of Milano-Bicocca, IT
* Amrapali Zaveri – AKSW, University of Leipzig, DE
* Magnus Knuth – Hasso Plattner Institute, University of Potsdam, DE
* Dimitris Kontokostas – AKSW, University of Leipzig, DE
Program Committee
* Maribel Acosta – Karlsruhe Institute of Technology, AIFB, DE
* Mathieu d’Aquin – Knowledge Media Institute, The Open University, UK
* Volha Bryl – University of Mannheim, DE
* Ioannis Chrysakis – ICS FORTH, GR
* Jeremy Debattista – University of Bonn, Fraunhofer IAIS, DE
* Stefan Dietze – L3S, DE
* Suzanne Embury – University of Manchester, UK
* Christian Fürber – Information Quality Institute GmbH, DE
* Jose Emilio Labra Gayo – University of Oviedo, ES
* Markus Graube – Technische Universität Dresden, DE
* Maristella Matera – Politecnico di Milano, IT
* John McCrae – CITEC, University of Bielefeld, DE
* Felix Naumann – Hasso Plattner Institute, DE
* Matteo Palmonari – University of Milan-Bicocca, IT
* Heiko Paulheim – University of Mannheim, DE
* Mariano Rico – Universidad Politécnica de Madrid, ES
* Ansgar Scherp – Kiel University, DE
* Jürgen Umbrich – Vienna University of Economics and Business, AT
* Miel Vander Sande – MultimediaLab, Ghent University, iMinds, BE
* Patrick Westphal – AKSW, University of Leipzig, DE
* Jun Zhao – Lancaster University, UK
* Antoine Zimmermann – ISCOD / LSTI, École Nationale Supérieure des
Mines de Saint-Étienne, FR
* Andrea Maurino – University of Milan-Bicocca, IT
More details can be found on the workshop website:
http://ldq.semanticmultimedia.org/
Minutes and slides from four recent meetings have appeared under the
following URLs:
Analytics team:
https://meta.wikimedia.org/wiki/WMF_Metrics_and_activities_meetings/Quarter…
Parsoid and Services teams:
https://meta.wikimedia.org/wiki/WMF_Metrics_and_activities_meetings/Quarter…
Mobile Web and Apps teams:
https://meta.wikimedia.org/wiki/WMF_Metrics_and_activities_meetings/Quarter…
Product Process Improvements (update meeting):
https://meta.wikimedia.org/wiki/WMF_Metrics_and_activities_meetings/Quarter…
On Wed, Dec 19, 2012 at 6:49 PM, Erik Moeller <erik(a)wikimedia.org> wrote:
> Hi folks,
>
> to increase accountability and create more opportunities for course
> corrections and resourcing adjustments as necessary, Sue's asked me
> and Howie Fung to set up a quarterly project evaluation process,
> starting with our highest priority initiatives. These are, according
> to Sue's narrowing focus recommendations which were approved by the
> Board [1]:
>
> - Visual Editor
> - Mobile (mobile contributions + Wikipedia Zero)
> - Editor Engagement (also known as the E2 and E3 teams)
> - Funds Dissemination Committee and expanded grant-making capacity
>
> I'm proposing the following initial schedule:
>
> January:
> - Editor Engagement Experiments
>
> February:
> - Visual Editor
> - Mobile (Contribs + Zero)
>
> March:
> - Editor Engagement Features (Echo, Flow projects)
> - Funds Dissemination Committee
>
> We’ll try doing this on the same day or adjacent to the monthly
> metrics meetings [2], since the team(s) will give a presentation on
> their recent progress, which will help set some context that would
> otherwise need to be covered in the quarterly review itself. This will
> also create open opportunities for feedback and questions.
>
> My goal is to do this in a manner where even though the quarterly
> review meetings themselves are internal, the outcomes are captured as
> meeting minutes and shared publicly, which is why I'm starting this
> discussion on a public list as well. I've created a wiki page here
> which we can use to discuss the concept further:
>
> https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings/Quarterly_r…
>
> The internal review will, at minimum, include:
>
> Sue Gardner
> myself
> Howie Fung
> Team members and relevant director(s)
> Designated minute-taker
>
> So for example, for Visual Editor, the review team would be the Visual
> Editor / Parsoid teams, Sue, me, Howie, Terry, and a minute-taker.
>
> I imagine the structure of the review roughly as follows, with a
> duration of about 2 1/2 hours divided into 25-30 minute blocks:
>
> - Brief team intro and recap of team's activities through the quarter,
> compared with goals
> - Drill into goals and targets: Did we achieve what we said we would?
> - Review of challenges, blockers and successes
> - Discussion of proposed changes (e.g. resourcing, targets) and other
> action items
> - Buffer time, debriefing
>
> Once again, the primary purpose of these reviews is to create improved
> structures for internal accountability, escalation points in cases
> where serious changes are necessary, and transparency to the world.
>
> In addition to these priority initiatives, my recommendation would be
> to conduct quarterly reviews for any activity that requires more than
> a set amount of resources (people/dollars). These additional reviews
> may however be conducted in a more lightweight manner and internally
> to the departments. We’re slowly getting into that habit in
> engineering.
>
> As we pilot this process, the format of the high priority reviews can
> help inform and support reviews across the organization.
>
> Feedback and questions are appreciated.
>
> All best,
> Erik
>
> [1] https://wikimediafoundation.org/wiki/Vote:Narrowing_Focus
> [2] https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings
> --
> Erik Möller
> VP of Engineering and Product Development, Wikimedia Foundation
>
> Support Free Knowledge: https://wikimediafoundation.org/wiki/Donate
>
> _______________________________________________
> Wikimedia-l mailing list
> Wikimedia-l(a)lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
--
Tilman Bayer
Senior Analyst
Wikimedia Foundation
IRC (Freenode): HaeB
Oh what a day! Which began when perforce
a visitor from afar began to exhort
us to look far afield, to travel and visit
learn brand new things, uncover, elicit
stories of users far from our homes!
And so we set out, bravely to roam:
perhaps ten full blocks! We found creatures strange!
They all spoke English! The stories exchanged
recalled those of family: Mom, Dad, and friends --
it's true, then, we _are_ all the same in the end!
Such relief not to grapple
with projects baroque,
languages strange, or
features "they" wrote.
In tune with this sentiment
let's celebrate dominance!
Hush the less pertinent --
let's not mention "those" continents.
"Hurray," we all cheer: "Our wiki is strong!"
Our projects are weak, but shush, sing along:
our rivals are fierce, but yet we prevailed;
it must be because our PHP scaled!
Ignore those naysayers who laugh at our UX
And claims by our editors that it obstructs:
separate pages for talk, no friends and no chat --
no Serious Software has all of that!
Well, enough -- we're not free
to change even fonts
without acres of missives
to agony aunts:
let's move next to strategy,
where with speeches prolonged
new hires will tell us
what we got wrong.
Three commands we were given:
the first, to be punctual.
By fiat we've banished
the correct but eventual;
from now on our code
is timely _and_ functional.
Our prior disasters are
vanished by ritual.
The second was novel:
exhorted to innovate!
Our change-fearing userbase
I'm sure will reciprocate.
Perhaps we can grow
new crops of good editors.
New users, new processes,
throw off our fetters.
Perhaps we need spaces
where we can be bold --
it's hard else to see
how to do what we're told.
The last was to integrate,
engage with community;
never mind our tall silos
and product disunity:
we can have orphaned features
conflicted teams, clashing visions --
"What's key is to synergize!"
says our stratcom tactician.
Community discourse
will fix all that ails us:
except for those times
when instead they've assailed us.
Lift a glass to the mission!
We'll muddle through fine.
We all love each other,
but this day's been a grind.