If you use Hive on stat1002/1004, you may have seen a deprecation
warning when launching the Hive client, stating that it is being replaced
by Beeline. The Beeline shell has always been available, but it
required supplying a database connection string every time, which was
pretty annoying. We now have a wrapper
set up to make this easier. The old Hive CLI will continue to exist, but we
encourage moving over to Beeline. To use it, log into the
stat1002/1004 boxes as usual and launch `beeline`.
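For reference, a minimal session might look like the following sketch. The JDBC connection string, hostname, and port shown are illustrative assumptions, not the actual cluster configuration:

```shell
# On stat1002/1004 the wrapper preconfigures the connection,
# so launching the shell is now simply:
beeline

# Without the wrapper, Beeline needs an explicit JDBC connection
# string every time, e.g. (hostname and port here are hypothetical):
#   beeline -u "jdbc:hive2://hive-server.example.wikimedia.org:10000/default"

# Inside the shell, HiveQL works as before:
#   0: jdbc:hive2://...> SHOW DATABASES;
#   0: jdbc:hive2://...> !quit
```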
There is some documentation on this here:
If you run into any issues using this interface, please ping us on the
Analytics list or in #wikimedia-analytics, or file a bug on Phabricator.
(If you are wondering "stat1004, whaaat?" - there should be an announcement
about it coming up soon!)
We are very pleased to announce two distinguished keynote speakers for the 8th
International Conference on Social Media & Society (July 28-30, 2017):
* Lee Rainie - Director, Pew Research Center's Internet & American Life
Project <http://www.pewinternet.org/>, USA
* Ronald Deibert - Professor of Political Science, and Director of the
Citizen Lab <http://www.citizenlab.org/> at the Munk School of Global
Affairs, University of Toronto, Canada.
We would also like to invite scholarly and original submissions that broadly
relate to the 2017 conference theme on "Social Media for Social Good or
Evil." We welcome both quantitative and qualitative work which crosses
interdisciplinary boundaries and expands our understanding of the current
and future trends in social media research. See the call for proposals at
* Workshops/Technical Tutorials - Due December 5, 2016
* Full and Work-in-progress (WIP) Papers - Due January 16, 2017
* Posters - Due March 6, 2017
Full and WIP (short) papers presented will be published in the conference
proceedings by the ACM International Conference Proceeding Series (ICPS)
and will be available in the ACM Digital Library. All
conference presenters will be invited to submit their extended conference
papers to a special issue of the Social Media + Society
<http://sms.sagepub.com/> journal published by SAGE.
2017 #SMSociety Organizing Committee:
* Anatoliy Gruzd, Ryerson University, Canada - Conference Chair
* Jenna Jacobson, University of Toronto, Canada - Conference Chair
* Philip Mai, Ryerson University, Canada - Conference Chair
* Hazel Kwon, Arizona State University, USA - Poster Chair
* Bernie Hogan, Oxford Internet Institute, UK - WIP Chair
* Jeff Hemsley, Syracuse University, USA - WIP Chair
* William H. Dutton, Michigan State University, USA
* Zizi Papacharissi, University of Illinois at Chicago, USA
* Barry Wellman, INSNA Founder, The Netlab Network, Canada
* Visit: http://socialmediaandsociety.org/about/
If you have any questions, please contact us via email at
ask(a)socialmediaandsociety.org or on
Twitter at @SocMediaConf <https://twitter.com/SocMediaConf>
+research-l because this is more of a research than an analytics question.
What do you mean by acronyms in deletion queues here? Are you talking about
policy links used to justify !votes in deletion discussions, or acronyms
used in deletion comments of AfD'd articles? Or something else entirely?
If #1, this paper
examines the use of a single policy (IAR) in AfDs over time.
If #2, I did a similar (quick and dirty) analysis with AfC recently, here:
Others may be aware of additional resources or analyses.
On Tue, Nov 22, 2016 at 7:10 AM, Jane Darnell <jane023(a)gmail.com> wrote:
> Hi all,
> Has anyone tried to find the frequency of acronyms used in AfD queues? Any
> information about the deletion queue in any language is welcome, thanks.
> This came up during a discussion about "encyclopedia worthiness" and how to
> explain this concept to newbies.
Jonathan T. Morgan
Senior Design Researcher
User:Jmorgan (WMF) <https://meta.wikimedia.org/wiki/User:Jmorgan_(WMF)>
The Wikimedia Foundation datasets collection on the Internet Archive
has now surpassed 1 million items (and about 50,000 full database
dumps)! This marks a major milestone in our archiving efforts of
Wikimedia's vast amount of data and ensures that the vital content
submitted by volunteers across the movement is preserved. None of this
would have been possible without the help of many people,
including Nemo, Ariel and Emijrp (thanks!).
We started archiving towards the end of 2011 and reached a milestone
of half a million items back in June 2015.  We have since moved on
from archiving just the main database dumps to saving research-worthy
data such as the pageviews data and even attempting to keep a copy of
Wikimedia Commons. Today, we are working on making the items on the
Internet Archive more accessible for researchers by working on an
interface for searching old dumps.
Despite this feat, we are in constant need of more help. If you are a
researcher, a programmer or someone with a computer, we need your help
in many tasks! Have a look at WikiTeam's project or Emijrp's
Wikipedia Archive page for more information. If you regularly work
on the Wikimedia database dumps, please provide your input on the
Dumps-Rewrite project and the API interface.
As before, here's to the next million!