I made a patch [0] for T39665 [1] about 6 months ago. It has been
rotting in Gerrit since then.
The core bug is related to glibc's iconv implementation and PHP (and
HHVM as well, I think). To work around the iconv bug I wrote a little
helper function that will use mb_convert_encoding() instead if it is
present. In review, PleaseStand pointed out that the libmbfl library
used by mb_convert_encoding() differs from iconv in some of its
supported character sets and in its character set naming [2].
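For illustration, here is a minimal sketch of the shape of that helper
(the function name and the //IGNORE handling are my shorthand, not
necessarily what the Gerrit change does):

/**
 * Sketch: prefer mb_convert_encoding() when the mbstring extension is
 * loaded, falling back to iconv() otherwise.
 */
function convertEncoding( $string, $from, $to ) {
	if ( function_exists( 'mb_convert_encoding' ) ) {
		// libmbfl backend; sidesteps the glibc iconv bug
		return mb_convert_encoding( $string, $to, $from );
	}
	// iconv backend; //IGNORE drops characters the target charset
	// cannot represent
	return iconv( $from, $to . '//IGNORE', $string );
}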
I was hoping that someone on this list could step in and either
convince me to abandon this patch and pretend I never investigated the
problem, or help design a solution that will plaster over these
differences in a reasonable way.
[0]: https://gerrit.wikimedia.org/r/#/c/172101/
[1]: https://phabricator.wikimedia.org/T39665
[2]: https://php.net/manual/en/mbstring.encodings.php
Bryan
--
Bryan Davis Wikimedia Foundation <bd808(a)wikimedia.org>
[[m:User:BDavis_(WMF)]] Sr Software Engineer Boise, ID USA
irc: bd808 v:415.839.6885 x6855
On Wed, May 6, 2015 at 12:13 AM Greg Grossmeier <greg(a)wikimedia.org> wrote:
> Quick general question: are you proposing this for pywikibot only? I
> think the answer is yes, just making sure.
>
No, I'm proposing to do this in general. I never mentioned pywikibot as
the goal; I just said I did a test in pywikibot and it worked well.
> <quote name="Amir Ladsgroup" date="2015-05-05" time="07:05:48 +0000">
> > Hey,
> > Github has a huge community of developers, and collaborating with them
> > can be beneficial for us and for them, but Wikimedia code is in Gerrit
> > (and in future in Phabricator) and our bug tracker is in Phabricator.
> > Sometimes it feels we are on another planet.
> > Wikimedia has a mirror on Github, but we close pull requests
> > immediately and we barely check issues raised there. Also there is a
> > big notice on Github[1]: "if you want to help, do it our way".
> > Suddenly I got the idea that if we could synchronize Github activity
> > with Gerrit and Phabricator, it would help us by letting others help
> > in their own way. It made me so excited that I wrote a bot yesterday
> > that automatically duplicates pull requests as patches in Gerrit and
> > makes a comment in the pull request stating that we made a patch in
> > Gerrit. I did a test in pywikibot and it worked well [2][3].
> >
> > Note that the bot doesn't create a pull request for every Gerrit
> > patch, but it creates a Gerrit patch for every (open) pull request.
> >
> > But before I go on we need to discuss several important aspects of
> > this idea:
> > 1- Is it really necessary to do this? Do you agree we need something
> > like that?
> > 2- I think a bot to duplicate pull requests is not the best idea,
> > since it creates them under the bot account and not under the original
> > user account. We can create a plugin for Phabricator to do that, but
> > issues like privacy would bother us (using OAuth wouldn't be a bad
> > idea). What do you think? What do you suggest?
> > 3- Even if we create a plugin, a bot to synchronize comments and code
> > reviews is still needed. I wrote my original code in a way that I can
> > expand it to do this job too, but do you agree we need to do this?
> > 4- We can also expand this bot to create a Phabricator task for each
> > issue that has been created (except pull requests). Is that okay?
> >
> > I published my code in [4].
> >
> > [1]: https://github.com/wikimedia/pywikibot-core "Github mirror of
> > "pywikibot/core" - our actual code is hosted with Gerrit (please see
> > https://www.mediawiki.org/wiki/Developer_access for contributing"
> > [2]: https://github.com/wikimedia/pywikibot-core/pull/5
> > [3]: https://gerrit.wikimedia.org/r/208906
> > [4]: https://github.com/Ladsgroup/sync_github_bot
> >
> > Best
>
>
> --
> | Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
> | identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
>
Hello,
A quick reminder about Wikimedia Language Engineering team's IRC office
hour later today at 1430 UTC[1] on #wikimedia-office. Please see below for
the original announcement, local time, and agenda. We will post logs on
metawiki[2] after the event.
Thanks
Runa
[1] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150505T1430
[2] https://meta.wikimedia.org/wiki/IRC_office_hours#Office_hour_logs
---------- Forwarded message ----------
From: Runa Bhattacharjee <rbhattacharjee(a)wikimedia.org>
Date: Thu, Apr 30, 2015 at 7:29 PM
Subject: [x-post] Next Language Engineering IRC Office Hour is on 5th May
2015 (Tuesday) at 1430 UTC
To: MediaWiki internationalisation <mediawiki-i18n(a)lists.wikimedia.org>,
Wikimedia developers <wikitech-l(a)lists.wikimedia.org>, Wikimedia Mailing
List <wikimedia-l(a)lists.wikimedia.org>, "Wikimedia & GLAM collaboration
[Public]" <glam(a)lists.wikimedia.org>
[x-posted announcement]
Hello,
The next IRC office hour of the Language Engineering team of the Wikimedia
Foundation will be on May 5, 2015 (Tuesday) at 1430 UTC on
#wikimedia-office. We missed a few of our regular monthly office hours, but
from May onwards we will be back on schedule.
There has been significant progress around Content Translation[1] and it is
now available as a beta feature on several Wikipedias[2]. We’d love to hear
comments, suggestions and any feedback that will help us make this tool
better.
Please see below to check local time and event details. Questions can also
be sent to me ahead of the event.
Thanks
Runa
[1] http://blog.wikimedia.org/2015/04/08/the-new-content-translation-tool/
[2]
https://www.mediawiki.org/wiki/Content_translation/Languages#Available_lang…
Monthly IRC Office Hour:
==================
# Date: May 5, 2015 (Tuesday)
# Time: 1430 UTC (Check local time:
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150505T1430 )
# IRC channel: #wikimedia-office
--
Language Engineering - Outreach and QA Coordinator
Wikimedia Foundation
Hi all --
I wanted to give you an update on the Community Tech team. We've posted
descriptions for two open positions on our jobs page and would like to
bring them to your attention:
Community Tech Developer
<https://boards.greenhouse.io/wikimedia/jobs/62666?t=m5pcy0#.VUgfhjvF_FI>
Community Tech Engineering Manager
<https://boards.greenhouse.io/wikimedia/jobs/62669?t=d51bks#.VUgfhjvF_FI>
Please encourage qualified folks to apply!
I want to say that I'm really excited to be working with Luis to help build
this team. I'm very appreciative that Lila and the other execs have
identified this gap in our community support and have made resources
available to address it.
-Toby
After a lot of work, we're ready to provide a more sensible data layout for
format=json results (and also format=php). The changes are generally
backwards-compatible for API clients, but extension developers might have
some work to do. If your extension is maintained in Gerrit, much of the
necessary conversion has already been done for you (the major exception
being booleans that were violating the old API output conventions).
The general theme is that ApiResult arrays now carry more metadata,
which is used to apply a backwards-compatible transformation for clients
that need it and an optional transformation so that JSON output needn't
be limited by the restrictions of XML. At the same time, improvements
were made to ApiResult and ApiFormatXml to hopefully make them easier
for developers to use.
Relevant changes include:
- Several ApiResult methods were deprecated. If your extension is
maintained in Gerrit, these should have already been taken care of for you
(with the exception of T95168 <https://phabricator.wikimedia.org/T95168>
where work is ongoing), but new code will need to avoid the deprecated
methods.
- All ApiResult methods that operate on a passed-in array (rather than
internal data) are now static, and static versions of all relevant data-
and metadata-manipulation methods are provided. This should reduce the need
for passing ApiResult instances around just to be able to set metadata.
- Properties with names beginning with underscores are reserved for API
metadata (following the lead of existing "_element" and "_subelements"),
and will be stripped from output. Such properties may be marked as
non-metadata using ApiResult::setPreserveKeysList(), if necessary.
- PHP arrays can now be tagged with "array types" to indicate whether
they should be output as arrays or hashes. This is particularly useful to
fix T12887 <https://phabricator.wikimedia.org/T12887>.
- The "*" property is deprecated in favor of a properly-named property
and special metadata to identify it for XML format and for
back-transformation. Use ApiResult::setContentValue() instead of
ApiResult::setContent(), and all the details are handled for you.
- ApiFormatXml will no longer throw an exception if you forget to call
ApiResult::setIndexedTagName()!
- ApiFormatXml will now reversibly mangle tag and attribute names that
are not valid XML, instead of irreversibly mangling spaces and outputting
invalid XML for other stuff.
- ApiResult will now validate data added (e.g. adding resources or
non-finite floats will throw an exception) and auto-convert objects. The
ApiSerializable interface can be used to control object conversion, if
__toString() or cast-to-array is inappropriate.
- Actual booleans should now be added to ApiResult, and will be
automatically converted to the old convention (empty-string for true and
absent for false) when needed for backwards compatibility. Code that was
violating the old convention will need to use the new
ApiResult::META_BC_BOOLS metadata property to prevent this conversion.
- Modules outputting as {"key":{"*":"value"}} to avoid large strings in
XML attributes can now output as {"key":"value"} while still maintaining
<container><key>value</key></container> in XML format, using
ApiResult::META_BC_SUBELEMENTS. New code should use
ApiResult::setSubelementsList() instead.
- Modules outputting hashes as
[{"name":"key1","*":"value1"},{"name":"key2","*":"value2"}] (due to the
keys being invalid for XML) can now output as
{"key1":"value1","key2":"value2"} in JSON while maintaining <container><item
name="key1">value1</item><item name="key2">value2</item></container> in
XML format, using array types "kvp" or "BCkvp" (see the sketch below).
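To make this concrete, here is a rough, hypothetical sketch of how a
module's execute() method might combine several of the features above;
the data and key names ('enabled', 'items', 'content') are invented for
illustration:

public function execute() {
	$data = array(
		// A real PHP boolean. By default it is converted to the old
		// convention (empty-string when true, absent when false) for
		// backwards-compatible output.
		'enabled' => true,
		'items' => array( 'key1' => 'value1', 'key2' => 'value2' ),
	);

	// Opt 'enabled' out of the boolean back-compat conversion, e.g.
	// if the old output was already violating the convention:
	$data[ApiResult::META_BC_BOOLS] = array( 'enabled' );

	// Output 'items' as {"key1":"value1","key2":"value2"} in JSON
	// while producing <item name="key1">value1</item> elements in XML:
	ApiResult::setArrayType( $data['items'], 'kvp', 'name' );
	ApiResult::setIndexedTagName( $data['items'], 'item' );

	// A properly-named content property instead of the old '*':
	ApiResult::setContentValue( $data, 'content', 'some long text' );

	$this->getResult()->addValue( null, $this->getModuleName(), $data );
}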
I apologize for forgetting to announce this sooner. If developers need
assistance with API issues or code review for API modules, please do reach
out to me.
--
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation
The Wikimedia Reading Infrastructure team [0] was formed during the
recent Wikimedia Foundation Engineering reorganization [1]. The team
currently consists of former members of the Wikimedia MediaWiki API
team [2] which was formed (briefly) from the Wikimedia MediaWiki Core
and Multimedia teams [3]. The new team's mission has a slightly
different scope than the API team's did, but the security, stability and
performance of the API remain a top-tier goal in support of the
Reading team's projects and the projects of other WMF and community
developers.
Towards that end, the team would like to put itself (and particularly
Brad "anomie" Jorsch) forward as an available consulting resource for
all other WMF teams and volunteer contributors who are enhancing the
MediaWiki API by adding or updating code in API modules in core or
extensions. Brad has a long history of both consuming and maintaining
API-related code. For several years he has been considered the "go to
guy" by his peers in the former MediaWiki Core team for reviewing API
changes, and his name is likely well known to those of you who regularly
work with the API and other projects like Scribunto [4].
We aren't asking to be the sole arbiters or implementers of API-related
changes. Rather, we would like to have a chance to help ease
implementation pains and provide insight on both the good and bad
patterns that recur in typical API module development. Chances are
good that the Gerrit watches we have will notice patches as they move
through the review process even without explicit inclusion, but we
would appreciate being invited into these conversations when possible.
[0]: https://www.mediawiki.org/wiki/Wikimedia_Reading_Infrastructure_team
[1]: https://lists.wikimedia.org/pipermail/wikimedia-l/2015-April/077619.html
[2]: https://www.mediawiki.org/wiki/Wikimedia_MediaWiki_API_Team
[3]: https://lists.wikimedia.org/pipermail/wikitech-l/2015-March/081357.html
[4]: https://www.mediawiki.org/wiki/Extension:Scribunto
Bryan
--
Bryan Davis Wikimedia Foundation <bd808(a)wikimedia.org>
[[m:User:BDavis_(WMF)]] Sr Software Engineer Boise, ID USA
irc: bd808 v:415.839.6885 x6855
Hi,
We're planning on deploying a global user merge tool to Wikimedia sites
shortly. As the name suggests, it merges multiple users into one.
This means that if your extension is storing user ids or user names, it
will need to listen to one of the UserMerge hooks
(UserMergeAccountFields, MergeAccountFromTo,
UserMergeAccountDeleteTables, or DeleteAccount) to make sure it isn't
referring to non-existent users. Reedy and I audited all deployed
extensions last November, but new ones have been deployed since then.
Please check your extension(s) and, if they need updating, file bugs
that block T49918 [1] and T69758 [2].
[1] https://phabricator.wikimedia.org/T49918
[2] https://phabricator.wikimedia.org/T69758
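For illustration, a minimal hook handler sketch (the table and column
names below are hypothetical; check the UserMerge extension's
documentation for the exact field format):

$wgHooks['UserMergeAccountFields'][] = function ( array &$updateFields ) {
	// Hypothetical extension table 'myext_notes' that stores a user id
	// in note_user and a user name in note_user_text. Registering the
	// fields lets UserMerge update these rows when accounts are merged.
	$updateFields[] = array( 'myext_notes', 'note_user', 'note_user_text' );
	return true;
};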
-- Legoktm
Call for Research & Innovation Papers
SEMANTiCS 2015
Transfer // Engineering // Community
11th International Conference on Semantic Systems
Vienna, Austria September 15-17, 2015
http://www.semantics.cc
Important Dates (Research & Innovation)
------------------------------------------------------
* Abstract Submission Deadline: May 22, 2015
* Paper Submission Deadline: May 29, 2015
* Notification of Acceptance: July 10, 2015
* Camera-Ready Paper: July 24, 2015
SEMANTiCS proceedings will be published by ACM ICPS.
Submissions via Easychair:
https://easychair.org/conferences/?conf=semantics2015research
The calls for “Industry & Use Case Presentations” and “Posters and
Demos” at SEMANTiCS 2015 can be found here: http://www.semantics.cc/
The annual SEMANTiCS conference is the meeting place for professionals
who make semantic computing work, who understand its benefits and
encounter its limitations. Every year, SEMANTiCS attracts information
managers, IT-architects, software engineers and researchers from
organisations ranging from NPOs through public administrations to the
largest companies in the world. Attendees learn from industry experts
and top researchers about emerging trends and topics in the fields of
semantic software, enterprise data, linked data & open data strategies,
methodologies in knowledge modelling and text & data analytics. The
SEMANTiCS community is highly diverse; attendees have responsibilities
in interlinking areas like knowledge management, technical
documentation, e-commerce, big data analytics, enterprise search,
document management, business intelligence and enterprise vocabulary
management.
The success of last year’s conference in Leipzig with more than 230
attendees from 22 countries proves that SEMANTiCS 2015 will continue a
long tradition of bringing together colleagues from around the world.
There will be presentations on industry implementations, use case
prototypes, best practices, panels, papers and posters to discuss
semantic systems in birds-of-a-feather sessions as well as informal
settings. SEMANTiCS addresses problems common among information
managers, software engineers, IT-architects and various specialist
departments working to develop, implement and/or evaluate semantic
software systems.
The SEMANTiCS program is a rich mix of technical talks, panel
discussions of important topics and presentations by people who make
things work - just like you. In addition, attendees can network with
experts in a variety of fields. These relationships provide great value
to organisations as they encounter subtle technical issues in any stage
of implementation. The expertise gained by SEMANTiCS attendees has a
long-term impact on their careers and organisations. These factors make
SEMANTiCS the major industry-related event for our community across Europe.
The following ‘horizontal’ (research) and ‘vertical’ (industry) topics
are of interest:
* Business Models, Governance & Data Strategies
* Knowledge Discovery & Intelligent Search
* Data Integration & Enterprise Linked Data
* Big Data & Text Analytics
* Data Portals & Knowledge Visualization
* Semantic Information Management
* Document Management & Content Management
* Terminology, Thesaurus & Ontology Management
* Industry & Engineering
* Life Sciences & Health Care
* Public Administration
* Galleries, Libraries, Archives & Museums (GLAM)
* Media, Publishing & Advertising
* Financial & Insurance Industry
* Telecommunications
* Energy, Transport & Environment
Research / Innovation Papers
--------------------------------------
The Research & Innovation track at SEMANTiCS welcomes the submission of
papers on novel scientific research and/or innovations relevant to the
topics of the conference. Submissions must be original and must not have
been submitted for publication elsewhere. Papers should follow the ACM
ICPS guidelines for formatting
(http://www.acm.org/sigs/publications/proceedings-templates) and must
not exceed 8 pages in length for full papers and 4 pages for short
papers, including references and optional appendices.
All accepted full papers and short papers will be published in the
digital library of the ACM ICPS under ISBN
978-1-4503-1972-0. Research & Innovation papers should be submitted
through EasyChair at:
https://easychair.org/conferences/?conf=semantics2015research. Papers
must be submitted in PDF (Adobe's Portable Document Format) format.
Other formats will not be accepted. For the camera-ready version, the
source files (LaTeX, WordPerfect, Word) will also be needed.
Important Dates (Research & Innovation)
* Abstract Submission Deadline: May 22, 2015
* Paper Submission Deadline: May 29, 2015
* Notification of Acceptance: June 26, 2015
* Camera-Ready Paper: July 15, 2015
Research and Innovation Chairs:
Sebastian Hellmann, AKSW, Universität Leipzig
Josiane Xavier Parreira, Siemens AG Österreich
Programme Committee:
* Alessandro Adamou, Knowledge Media Institute, The Open University
* Guadalupe Aguado-De-Cea, Universidad Politécnica de Madrid
* Rajendra Akerkar, Senior Researcher/Professor, Western Norway Research
Institute
* Nathalie Aussenac-Gilles, IRIT CNRS
* Ciro Baron, University of Leipzig
* Charalampos Bratsas, Web Science Program, Mathematics Department,
Aristotle University of Thessaloniki, Greece
* Martin Brümmer, Universität Leipzig
* Volha Bryl, University of Mannheim
* Paul Buitelaar, Insight centre for Data Analytics, National University
of Ireland Galway
* Irene Celino, CEFRIEL
* Pierre-Antoine Champin, LIRIS
* Christian Chiarcos,
* Key-Sun Choi, KAIST
* Ioana-Georgiana Ciuciu, Université Joseph Fourier, Grenoble
* Roland Cornelissen, Metamatter
* Gianluca Correndo, University of Southampton
* Roberta Cuel, University of Trento
* Claudia D'Amato, University of Bari
* Mathieu D'Aquin, Knowledge Media Institute, the Open University
* Aba-Sah Dadzie, University of Birmingham
* Enrico Daga, The Open University
* Tommaso Di Noia, Politecnico di Bari
* Stefan Dietze, L3S Research Center
* Marin Dimitrov, Ontotext
* Mauro Dragoni, Fondazione Bruno Kessler
* Samhaa El-Beltagy, Cairo University
* Henrik Eriksson, Linköping University
* Anna Fensel, Semantic Technology Institute (STI) Innsbruck, University
of Innsbruck
* Miriam Fernandez, Knowledge Media Institute
* Agata Filipowska, Department of Information Systems, Poznan University
of Economics
* Marco Fossati, Fondazione Bruno Kessler
* Fabien Gandon, Inria
* Roberto Garcia, Universitat de Lleida
* José María García, STI Innsbruck, University of Innsbruck
* Wolfgang Gassler, University of Innsbruck, Institute of Computer
Science, Research Group Databases and Information Systems
* Alain Giboin, INRIA Sophia Antipolis - Méditerranée
* Jose Manuel Gomez-Perez, Intelligent Software Components (iSOCO) S.A.
* Jorge Gracia, Ontology Engineering Group. Universidad Politécnica de
Madrid
* Michael Granitzer, University of Passau
* Andreas Harth, AIFB, Karlsruhe Institute of Technology
* Bernhard Haslhofer,
* Benjamin Heitmann, Digital Enterprise Research Institute, National
University of Ireland, Galway
* Eelco Herder, L3S Research Center
* Andreas Hotho, University of Wuerzburg
* Sirko Hunnius, IfG.CC - The Potsdam eGovernment Competence Center
* Anja Jentzsch, Hasso Plattner Institut
* Efstratios Kontopoulos, CERTH-ITI
* Christoph Lange, University of Bonn
* Ivo Lašek, Faculty of Mathematics and Physics, Charles University
* Nelia Lasierra Beamonte, STI, University of Innsbruck
* Steffen Lohmann, University of Stuttgart
* Vanessa Lopez, IBM Research
* Sandra Lovrenčić, University of Zagreb, Faculty of organization and
informatics Varazdin, Pavlinska 2, HR-42000 Varazdin, Croatia
* Markus Luczak-Roesch, University of Southampton
* Elisa Marengo, Faculty of Computer Science, Free University of
Bozen-Bolzano
* John P. Mccrae, Cognitive Interaction Technology, Center of Excellence
* Pablo Mendes, IBM Research Almaden
* Uroš Milošević, Institute Mihailo Pupin
* Elena Montiel-Ponsoda, Ontology Engineering Group. Laboratorio de
Inteligencia Artificial. Facultad de Informática. Universidad
Politécnica de Madrid
* Andrea Moro, Sapienza, Universita di Roma
* Lyndon Nixon, MODUL University
* Andrea Giovanni Nuzzolese, STLab, ISTC-CNR
* Leo Obrst, MITRE
* Vito Claudio Ostuni, Politecnico di Bari
* Viviana Patti, Dipartimento di Informatica, Università di Torino
* Heiko Paulheim, University of Mannheim
* Silvio Peroni, University of Bologna and ISTC-CNR
* Axel Polleres, Vienna University of Economics and Business - WU Wien
* Mateusz Radzimski, Universidad Carlos III Madrid
* Achim Rettinger, Karlsruhe Institute of Technology
* Giuseppe Rizzo, EURECOM
* Marco Rospocher, Fondazione Bruno Kessler
* Matthew Rowe, Lancaster University
* Eugen Ruppert, TU Darmstadt - FG Language Technology
* Marta Sabou, MODUL University Vienna
* Muhammad Saleem, AKSW
* Felix Sasaki, W3C
* Bernhard Schandl, mySugr GmbH
* Pavel Shvaiko, Informatica Trentina
* Elena Simperl, University of Southampton
* Ronald Stamper, Measur Ltd
* Nadine Steinmetz, Hasso Plattner Institute for Software Systems
Engineering
* Holger Stenzhorn, Saarland University Hospital
* Mari Carmen Suárez-Figueroa, Universidad Politécnica de Madrid
* Vojtěch Svátek, University of Economics, Prague
* Alexandru Todor, AG Corporate Semantic Web
* Robert Tolksdorf, Freie Universität Berlin, Networked Information Systems
* Ioan Toma, STI Innsbruck
* Jürgen Umbrich, Vienna University of Economy and Business (WU)
* Ricardo Usbeck, University of Leipzig
* Pierre-Yves Vandenbussche, INSERM UMRS 872, éq.20, 15, rue de l’école
de médecine, 75006 Paris, France
* Ruben Verborgh, Ghent University - iMinds
* Maria Esther Vidal, Universidad Simon Bolivar, Dept. Computer Science
* Boris Villazón-Terrazas, iSOCO, Intelligent Software Components
* Krzysztof Wecel, Poznan University of Economics
* Katrin Weller, GESIS Leibniz Institute for the Social Sciences
* Rupert Westenthaler, Salzburg Research
* Patrick Westphal, Universität Leipzig
* Wolfram Wöß, Institute for Application Oriented Knowledge Processing,
Johannes Kepler University Linz, Austria
* Eva Zangerle, Databases and Information Systems, Department of
Computer Science, University of Innsbruck
SEMANTiCS 2015 Organisation Committee:
* Axel Polleres, Conference Chair
* Tassilo Pellegrini, Conference Chair
* Christian Dirschl, Industry Chair
* Sebastian Hellmann, Research & Innovation Chair
* Josiane Xavier Parreira, Research & Innovation Chair
* Agata Filipowska, Poster and Demo Chair
* Ruben Verborgh, Poster and Demo Chair
* Anna Fensel, Workshop Chair
--
Sebastian Hellmann
AKSW/NLP2RDF research group
Institute for Applied Informatics (InfAI) and DBpedia Association
Events:
* *Sept. 1-5, 2014* Conference Week in Leipzig, including
** *Sept 2nd*, MLODE 2014 <http://mlode2014.nlp2rdf.org/>
** *Sept 3rd*, 2nd DBpedia Community Meeting
<http://wiki.dbpedia.org/meetings/Leipzig2014>
** *Sept 4th-5th*, SEMANTiCS (formerly i-SEMANTICS) <http://semantics.cc/>
Come to Germany as a PhD student: http://bis.informatik.uni-leipzig.de/csf
Projects: http://dbpedia.org, http://nlp2rdf.org,
http://linguistics.okfn.org, https://www.w3.org/community/ld4lt
Homepage: http://aksw.org/SebastianHellmann
Research Group: http://aksw.org
Thesis:
http://tinyurl.com/sh-thesis-summary
http://tinyurl.com/sh-thesis
Hi Community Metrics team,
this is your automatic monthly Phabricator statistics mail.
Number of accounts created in (2015-04): 260
Number of active users (any activity) in (2015-04): 766
Number of task authors in (2015-04): 413
Number of users who have closed tasks in (2015-04): 226
Number of projects which had at least one task moved from one column
to another on their workboard in (2015-04): 191
Number of tasks created in (2015-04): 3098
Number of tasks closed in (2015-04): 2478
Number of open and stalled tasks in total: 21503
Median age in days of open tasks by priority:
Unbreak now: 7
Needs Triage: 84
High: 135
Normal: 358
Low: 675
Needs Volunteer: 503
(How long tasks have been open, not how long they have had that priority)
TODO: Numbers which refer to closed tasks might not be correct, as described in T1003.
Yours sincerely,
Fab Rick Aytor
(via community_metrics.sh on iridium at Fri May 1 00:00:06 UTC 2015)