For all Hive users on stat1002/1004: you might have seen a deprecation
warning when you launch the hive client, saying that it is being replaced
by Beeline. The Beeline shell has always been available, but it required
supplying a database connection string every time, which was pretty
annoying. We now have a wrapper set up to make this easier. The old Hive
CLI will continue to exist, but we encourage moving over to Beeline. You
can use it by logging into the stat1002/1004 boxes as usual and launching
`beeline`.
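For illustration, the difference looks roughly like this (the JDBC host and port below are placeholders, not the real analytics Hive endpoint):

```shell
# Without the wrapper, Beeline needs a full JDBC connection string every time
# (host/port here are placeholders):
beeline -u "jdbc:hive2://hive-server.example.wmnet:10000/default" -n "$USER"

# With the new wrapper on stat1002/1004, it's simply:
beeline
```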
There is some documentation on this here:
If you run into any issues using this interface, please ping us on the
Analytics list or #wikimedia-analytics or file a bug on Phabricator
(If you are wondering "stat1004, whaaat?" - there should be an announcement
coming up about it soon!)
Here's a fun simple little idea:
Did anybody ever try to find the most common link trails in various wikis?
In English, for example, the two most common ones will probably be "s" and
"es", in links like [[bottle]]s and [[box]]es; these two possibly appear
millions of times in the English Wikipedia. And there are certainly many
other common trails in English.
In other languages they will be different.
I can easily do it myself some time by running it on a dump. Just wondering
whether anybody already tried it.
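As a rough sketch of the dump-scanning approach (the regex below assumes English-style lowercase trails; MediaWiki's actual per-language linktrail rules are configured per wiki and differ, so treat the character class as an assumption):

```python
import re
from collections import Counter

# Matches [[target]]trail or [[target|label]]trail and captures the letters
# that immediately follow the closing brackets (the "link trail").
LINK_TRAIL = re.compile(r"\[\[[^\[\]]+\]\]([a-z]+)")

def count_trails(wikitext):
    """Count link trails in a chunk of wikitext."""
    return Counter(LINK_TRAIL.findall(wikitext))

sample = "Put the [[bottle]]s in the [[box]]es next to the [[crate]]s."
print(count_trails(sample))  # Counter({'s': 2, 'es': 1})
```

Running this over every page in a dump and merging the Counters would give the per-wiki ranking the question asks about.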
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
“We're living in pieces,
I want to live in peace.” – T. Moore
Sorry, I'm bad at remembering to cross-post
---------- Forwarded message ----------
From: Dan Andreescu <dandreescu(a)wikimedia.org>
Date: Fri, Jul 29, 2016 at 11:22 PM
Subject: [Wikistats 2.0] [Regular Update] First update on Wikistats 2.0
To: Analytics List <analytics(a)lists.wikimedia.org>
Welcome to the first of a series of semi-regular updates on our progress
towards Wikistats 2.0. As you may have seen from the banners on
stats.wikimedia.org, we're working on a replacement for Wikistats. Erik
talked about this in his announcement. To summarize it from our point of view:
* Wikistats has served the community very well so far, and we're looking to
keep every bit of value in the upgrade
* Wikistats depends on the dumps generation process which is getting slower
and slower due to its architecture. Because of this, most editing metrics
are delayed by weeks through no fault of the Wikistats implementation
* Finding data on Wikistats is a bit hard for new users, so we're working
on new ways to organize what's available and present it in a comprehensive
way along with other data sources like dumps
This regular update is meant to keep interested people informed on the
direction and progress of the project.
Of course, Wikistats 2.0 is not a new project. We've already replaced the
data pipeline behind the pageview reports on stats.wikimedia.org.
But the end goal is a new data pipeline for editing, reading, and beyond,
plus a nice UI to help guide people to what they need. Since this is the
first update, I'll lay out the high level milestones along with where we
are, and then I'll give detail about the last few weeks of work.
1. [done] Build pipeline to process and analyze *pageview* data
2. [done] Load pageview data into an *API*
3. [ ] *Sanitize* pageview data with more dimensions for public consumption
4. [ ] Build pipeline to process and analyze *editing* data
5. [ ] Load editing data into an *API*
6. [ ] *Sanitize* editing data for public consumption
7. [ ] *Design* UI to organize dashboards built around new data
8. [ ] Build enough *dashboards* to replace the main functionality
9. [ ] Officially replace stats.wikimedia.org with *(maybe)
***. [ ] Bonus: *replace dumps generation* based on the new data
Our focus last year was pageview data, and that's how we got 1 and 2 done.
3 is mostly done except deploying the logic and making the data
available. So 4, 5, and 6 are what we're working on now. As we work on
these pieces, we'll take vertical slices of different important metrics and
take them from the data processing all the way to the dashboards that
present the results. That means we'll make incremental progress on 8 and 9
as we go. But we won't be able to finish 7 and 9 until we have a cohesive
design to wrap around it all. We don't want to introduce yet more
dashboard hell; we want to save you, the consumers, from all that.
So the focus right now is on the editing data pipeline. What do I mean by
this? Data is already available in Quarry and via the API. That's true,
but here are some problems with that data:
* lack of historical change information. For example, we only have
pageview data by the title of the page. If we want to get all the
pageviews for a page that's now called C, but was called B two months ago
and A three months before that, we have to manually parse PHP-serialized
parameters in the logging table to trace back those page moves
* no easy way to look at data across wikis. If someone asks you to run a
Quarry query to look at data from all Wikipedias, you have to run hundreds
of separate queries, one for each database
* no easy way to look at a lot of data. Quarry and other tools enforce
timeouts to protect themselves. Downloading dumps is
a way to get access to more data but the files are huge and analysis is hard
* querying the API with complex multi-dimensional analytics questions isn't
possible
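To make the first problem concrete, here's a toy sketch of the kind of backwards tracing involved. In reality the move events have to be parsed out of PHP-serialized fields in the logging table; this assumes they have already been extracted as tuples:

```python
def trace_titles(current_title, moves):
    """Given page-move events as (timestamp, old_title, new_title) tuples
    sorted newest-first, walk backwards from the current title to recover
    every name the page has had. A toy sketch of the reconstruction idea,
    not the project's actual algorithm."""
    history = [current_title]
    title = current_title
    for ts, old, new in moves:  # newest first
        if new == title:
            history.append(old)
            title = old
    return history

# The C <- B <- A example from above:
moves = [
    ("2016-06-01", "B", "C"),
    ("2016-03-01", "A", "B"),
]
print(trace_titles("C", moves))  # ['C', 'B', 'A']
```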
These are the kinds of problems we're trying to solve. Our progress so far:
* Retraced history through the logging table to piece together what names
each page has had throughout its life; deleted pages were included in this work.
* Found what names each user has had throughout their life, and what
rights and blocks were applied to or removed from users.
* Wrote event schemas for Event Bus, which will feed data into this
pipeline so that metrics and dashboards can be updated in near real time
* Come up with a single denormalized schema that holds every kind of
event possible in the editing world. This is a join of the Event Bus
schemas mentioned above and can be fed either in batch from our
reconstruction algorithm or in real time. If you're familiar with lambda
architecture, this is the approach we're taking to make our editing data
both complete and up to date
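For intuition, a denormalized row might look something like the following. The field names here are purely illustrative, not the project's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EditingEvent:
    """A hypothetical sketch of one row in a denormalized editing schema:
    every kind of event (edit, page move, user rename, block, ...) fits
    into the same wide record, so a single table can answer questions that
    span pages, users, and revisions."""
    event_entity: str               # "revision", "page", or "user"
    event_type: str                 # "create", "move", "rename", "block", ...
    event_timestamp: str            # UTC timestamp
    wiki: str                       # e.g. "enwiki"
    page_id: Optional[int] = None
    page_title: Optional[str] = None
    user_id: Optional[int] = None
    user_name: Optional[str] = None

# A page-move event expressed in the unified shape:
ev = EditingEvent("page", "move", "2016-06-01T00:00:00Z", "enwiki",
                  page_id=42, page_title="C")
print(ev)
```

The same record shape can be filled by the batch reconstruction job or by near-real-time events, which is what makes the lambda-architecture approach workable.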
Right now we're testing the accuracy of our reconstruction against
Wikistats data. If this works, we'll open up the schema to more people to
play with so they can give feedback on this way of doing analytics. And if
all that looks good, we'll be loading the data into Druid and Hive and
running the most high priority metrics on this new platform. We hope to be
done with this by the end of this quarter. To weigh in on what reports are
important, make sure you visit Erik's page. We'll also do a tech talk
on our algorithm for historical reconstruction and the lessons learned
along the way.
If you're still reading, congratulations, sorry for the wall of text. I
look forward to keeping you all in the loop, and to making steady progress
on this project that's very dear to our hearts. Feel free to ask questions
and if you'd like to be involved, just let me know how. Have a nice day!
I've been trying to match edit activity with pagecounts but I've
encountered a couple of problems. The amazing pagecounts dumps (
https://dumps.wikimedia.org/other/pagecounts-raw/) use the page url to
identify the individual page:
fr.b Special:Recherche/Achille_Baraguey_d%5C%27Hilliers 1 624
while the stub-meta-history uses the "raw" title:
so I need an easy way to map titles to URLs. I imagine there are some rules
on how this "translation" is done? My google-fu has failed to find them.
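As far as I know, the usual convention is: spaces become underscores, then the result is percent-encoded. A rough Python sketch (the exact set of characters the dumps leave unencoded is an assumption here, so compare the output against real pagecounts lines):

```python
from urllib.parse import quote

def title_to_pagecounts_key(title):
    """Approximate the MediaWiki title -> pagecounts URL key mapping:
    spaces turn into underscores, then the rest is percent-encoded.
    The 'safe' character set below is a guess, not a verified spec."""
    return quote(title.replace(" ", "_"), safe="_:/")

print(title_to_pagecounts_key("Achille Baraguey d'Hilliers"))
# -> Achille_Baraguey_d%27Hilliers
```

Note that some dump lines (like the `%5C%27` in the example above) carry extra escaping layers, so a real mapping may need to handle those cases too.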
Also, are the timezones used in the meta-history files
the same as the ones used in the pagecount filenames?
Bruno Miguel Tavares Gonçalves, PhD
Full disclosure: I am the creator of the Project Grant application for
Arc.heolo.gy, located here:
I hope for this to be a general discussion on potential applications,
criticisms, questions, technological recommendations, and community
feedback.
Currently, the project has a live Neo4j Graph database built and parsed
from a download of the English language Wikipedia from April. I have
temporarily hosted the database instance both on my local machine and a
SoftLayer server provided under a temporary entrepreneur credit.
My goal is twofold.
On the backend: refine the parsing algorithm (I am getting some incorrect
relationships in the database), automate the parsing so that it updates the
database frequently, expand language support, and perform semantic parsing
to weight individual relationships, strengthening the ability to filter out
noise.
On the frontend: I have done little to no work here beyond choosing a
framework; the plan is to build both a 2D (d3) and 3D (WebGL) interface for
exploring the database with a high degree of control and ease.
If any of you would like to access the database for exploration, please
contact me privately and I will give you credentials.
Any recommendations on parsing, hosting, visualization, or otherwise are
appreciated. Endorsements and Volunteers are also highly appreciated!
p.s. I am new to directly engaging with the Wiki community, and if I
committed some faux pas in starting this thread please let me know and I
will do my best to correct it.
We’re preparing for the July 2016 research newsletter and looking for contributors. Please take a look at: https://etherpad.wikimedia.org/p/WRN201607 and add your name next to any paper you are interested in covering. Our target publication date is Monday August 1 UTC although actual publication might happen several days later. As usual, short notes and one-paragraph reviews are most welcome.
Highlights from this month:
• An Empirical Evaluation of Property Recommender Systems for Wikidata and Collaborative Knowledge Bases
• Breaking the glass ceiling on Wikipedia
• Centrality and Content Creation in Networks - The Case of Economic Topics on German Wikipedia
• Comparative assessment of three quality frameworks for statistics derived from big data: the cases of Wikipedia page views and Automatic Identification Systems
• Competencias informacionales básicas y uso de Wikipedia en entornos educativos
• Computational Science and Its Applications
• Controversy Detection in Wikipedia Using Collective Classification
• Discovery and efficient reuse of technology pictures using Wikimedia infrastructures.
• Dynamics and Biases of Online Attention: The Case of Aircraft Crashes
• Evaluating and Improving Navigability of Wikipedia: A Comparative Study of Eight Language Editions
• Extracting Scientists from Wikipedia
• From Digital Library Citation Parsing to Wikipedia Reference Analysis
• Monitoring the Gender Gap with Wikidata Human Gender Indicators
• Platform affordances and data practices: The value of dispute on Wikipedia
• Stationarity of the inter-event power-law distributions
• Using Wikipedia to Teach Discipline Specific Writing
• 日本の大学生のWikipediaに対する信憑性認知，学習における利用実態とそれらに影響を与える要因 (translation: Japanese university students' perceived credibility of Wikipedia, their actual use of it in learning, and the factors influencing both)
If you have any question about the format or process feel free to get in touch off-list.
Masssly, Tilman Bayer and Dario Taraborelli