Re: http://twkozlowski.net/the-pot-and-the-kettle-the-wikimedia-way/
Two questions:
1. Where can I find a response from either the WMF Board or WMF
funding/finance to the criticisms raised in this blog post, namely the
lack of transparency and the project's apparent failure to deliver
value for donors' money?
2. Where can I read an officially recognized report for the outcomes
of this project in terms of value for Wikimedia projects? Obviously we
do not want to rely on second-hand analysis when reports to the WMF
are a requirement for such projects.
Thanks,
Fae
--
faewik(a)gmail.com https://commons.wikimedia.org/wiki/User:Fae
Really. Who thought it was a good idea to MAKE THE BANNER FOLLOW YOU
DOWN THE PAGE?
There must be an identifiable person who actually said "yes, this is a
good decision, I shall make this decision."
- d.
We know the NSA wants Wikipedia data, as Wikipedia is listed in one of
the NSA slides:
https://commons.wikimedia.org/wiki/File:KS8-001.jpg
That slide is about HTTP, and the tech staff are moving the
user/reader base to HTTPS.
As we learn more about the NSA programs, we need to consider vectors
other than HTTP by which the NSA could obtain the data they want, and
the user base needs to be aware of the current risks.
One question from the "Dells are backdored"[sic] thread that is worth
separate consideration is:
Are the Wikimedia transit links encrypted, especially for database replication?
MySQL supports replication over SSL, so I assume the answer is yes.
If not, is encrypting these links necessary, useful, and feasible?
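For reference, one way to verify this from a replica itself; a minimal
Python sketch, assuming the mysql-connector-python package and
placeholder host/credentials (this is not WMF's actual setup):

    # Minimal sketch: check whether a MySQL replica talks to its
    # master over SSL. Host, user and password are placeholders.
    import mysql.connector

    conn = mysql.connector.connect(host="replica.example.org",
                                   user="monitor", password="secret")
    cur = conn.cursor(dictionary=True)
    cur.execute("SHOW SLAVE STATUS")
    row = cur.fetchone()
    if row is None:
        print("Not a replica")
    else:
        # Master_SSL_Allowed is 'Yes' when the replication thread
        # was configured with MASTER_SSL=1.
        print("Replication over SSL:", row["Master_SSL_Allowed"])
    conn.close()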
However, we also need to consider that SSL and other encryption may be
useless against the NSA and similar actors, which means replicating
non-public data should be avoided wherever possible: every copy is
another point of exposure.
Given how public our system is, we don't have a lot of non-public
data, so we might be able to design the architecture so that this
information isn't replicated, and also ensure it isn't accessed over
insecure links. I think the only parts of the dataset that are private
and valuable are:
* passwords/login cookies,
* checkuser info (IPs and user agents),
* WMF analytics, which includes reader data IIRC,
* hidden/deleted edits, and
* private wikis and mailing lists.
Have I missed any?
Are passwords and/or checkuser info replicated?
Is there a data policy on WMF analytics data which prevents it from
flowing over insecure links, limits what is collected, and ensures
destruction of the data within reasonable timeframes? For example, how
about not using cookies to track analytics of readers who are on HTTP
instead of HTTPS?
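One concrete mitigation, independent of policy, is to mark any
analytics cookie Secure (and HttpOnly) so it is never sent over plain
HTTP. A minimal illustration in Python (the cookie name is
hypothetical, and this is not WMF's actual configuration):

    # Illustration: a Secure, HttpOnly cookie is never transmitted
    # over plain HTTP and is not readable by page JavaScript,
    # limiting what a passive network observer can collect.
    from http import cookies

    c = cookies.SimpleCookie()
    c["analytics_id"] = "abc123"          # hypothetical cookie
    c["analytics_id"]["secure"] = True    # HTTPS-only transmission
    c["analytics_id"]["httponly"] = True  # hidden from JavaScript
    print(c.output())  # Set-Cookie header with Secure and HttpOnly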
The private wikis can be restricted to HTTPS, depending on the value
of the data on those wikis in the wrong hands. The private mailing
lists will be harder to secure, and at least the English Wikipedia
arbcom list contains a lot of valuable data about contributors.
Regarding hidden/deleted edits, replication isn't the only source of
this data. All edits are also exposed via Recent Changes
(https/api/etc) as they occur, and the value of these edits lies in
the fact that they are hidden afterwards (e.g. they don't appear in
dumps). Is there any way to control who is effectively capturing all
edits via Recent Changes?
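To make the exposure concrete, here is a minimal Python sketch of how
anyone can trail the full edit stream via the standard public
MediaWiki API (the polling interval and property list are arbitrary
choices):

    # Minimal sketch: continuously capture Recent Changes from the
    # public API. Whoever runs this sees every edit as it happens,
    # including edits that are later hidden or deleted.
    import time
    import requests

    API = "https://en.wikipedia.org/w/api.php"
    newest_seen = None
    while True:
        params = {
            "action": "query",
            "list": "recentchanges",
            "rcprop": "title|ids|timestamp|user",
            "rclimit": "500",
            "format": "json",
        }
        if newest_seen:
            params["rcend"] = newest_seen  # stop at last edit seen
        changes = requests.get(API, params=params).json() \
            ["query"]["recentchanges"]
        for rc in changes:
            print(rc["timestamp"], rc["title"], rc.get("user"))
        if changes:
            newest_seen = changes[0]["timestamp"]  # newest-first
        time.sleep(30)  # arbitrary polling interval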
--
John Vandenberg
Hi,
Julia Reda, an MEP (Member of the European Parliament) for the German
Pirate Party, has released a report [1] regarding copyright in the EU,
demanding "an ambitious reform agenda for the overhaul of EU
copyright" [2].
A shorter version with the most important points is the press
release [2].
This work follows the public consultation on copyright that many
people (and also associations, including some Wikimedia chapters)
answered [3] over the past months.
From [1] you can also see the work of Dimitar Dimitrov and the Free
Knowledge Advocacy Group EU [4], and of other free knowledge
organisations in Europe, such as the Free Software Foundation Europe
(FSFE) and "La Quadrature du Net" from France.
Well done, Free Knowledge Advocacy Group EU!
Cristian
[1] https://juliareda.eu/2015/01/report-eu-copyright-rules-maladapted-to-the-web/
[2] https://juliareda.eu/2015/01/press-release-eu-copyright-report/
[3] https://juliareda.eu/2014/08/the-european-copyright-divide/
[4] https://meta.wikimedia.org/wiki/EU_policy/Statement_of_Intent
For some months, Twitter have been blocking most URLs in their direct
messages (DMs), supposedly as an anti-spam measure.
Do we have someone who has a contact there, who could ask them to
whitelist Wikimedia project URLs in DMs?
--
Andy Mabbett
@pigsonthewing
http://pigsonthewing.org.uk
On Thu, Oct 9, 2014 at 10:26 AM, Pine W <wiki.pine(a)gmail.com> wrote:
> I'm sure a Board member, Lila, or Erik will correct me if I am mistaken,
> but my understanding is that there is internal agreement at Board level
> that the Product side of the org needs some systemic changes, that Lila was
> chosen with the goal of making those changes, and that some changes are
> already happening.
There's agreement at all levels that we want to continue down the path
set by Sue back in 2012 [1] for WMF to truly understand itself as a
technology and grantmaking organization. That path led to where we are
today:
1) As part of the ED transition, Sue recommended (and the Board
accepted the recommendation) that we seek an ED with a strong
technology/product background, and we hired Lila Tretikov, who matches
those requirements, as Sue's successor.
2) In November 2012, I recommended [2] that we prepare to build out
new functions for UX and Analytics, and to establish dedicated
leadership for Engineering and Product. Sue accepted this
recommendation. I hired Directors for UX and Analytics in 2013,
followed by Community Engagement in 2014, and finally we hired a VP
Engineering last week to complete the process.
3) To better account for the need to learn quickly and adjust course
as appropriate, we introduced quarterly reviews in December 2012 [3]
and increasingly reduced the specificity of Annual Plan level
commitments while increasing the focus on metrics and accountability
in the reviews.
4) On the technology and product front, many improvements to process
and support infrastructure have been implemented in the last couple of
years, including but not limited to:
- Development of MediaWiki Vagrant as a standardized dev environment,
to reduce failure cases due to developer environment inconsistencies
- Improvements to continuous integration infrastructure for PHP unit
tests and QUnit JavaScript unit tests, and increased focus (but not
nearly enough yet) on automated tests, especially for newly developed
features
- Introduction and continued improvement of BetaLabs as a staging
environment for all commits, increased use of automated end-to-end
browser tests and QA testing by humans to catch bugs and regressions
prior to production rollouts
- Introduction and use of various tools for measuring the impact of
features, including EventLogging as a standard instrumentation
framework for measuring feature usage, dashboards for visualizing
usage, WikiMetrics for analyzing editor cohort behavior, Editor
Engagement Vital Signs for understanding system-wide user behavior,
analysis of pageview data using Hadoop (just rolled out), etc.
- Highly specialized automated testing frameworks for specific
projects, e.g. Parsoid round-trip testing and visual diffing (!) to
detect dirty diffs or output problems (a conceptual sketch of the
round-trip idea follows this list)
- Introduction of design research as a discipline in the UX team
(through hiring of Abbey Ripstra as User Research Lead) and
incorporation of user studies in a much more systematic way across
products
- Community liaisons dedicated to key products, responding to user
feedback and helping Product Managers understand more complex
community needs
- Continued shortening of release/deployment cycles; significant
improvements to deployment tooling, rewriting our legacy "scap" tools
to increase the ability to monitor and reason about deployments;
introduction of daily "SWAT" deploys to quickly release fixes, etc.
- Introduction of various infrastructure tools that help us better
analyze/profile issues, including logstash for log analysis, increased
use of graphite for performance metrics collection and various
front-ends for visualizing those metrics
- Shift towards loosely coupled services, addressing the difficulty of
maintaining and improving our highly monolithic codebase (examples
include Parsoid, Citoid, Mathoid, and the new Content API in
development)
- Introduction of Beta Features framework to stage features for early adopters
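To illustrate the round-trip idea from the Parsoid item above (purely
conceptual; parse() and serialize() are hypothetical stand-ins for
Parsoid's actual wikitext-to-HTML and HTML-to-wikitext passes):

    # Conceptual sketch of round-trip testing: convert wikitext to
    # HTML and back, then diff against the original. Any difference
    # is a "dirty diff" the converter would introduce even for a
    # null edit.
    import difflib

    def round_trip_clean(wikitext, parse, serialize):
        """Return (is_clean, diff_lines) for one round trip."""
        html = parse(wikitext)
        back = serialize(html)
        diff = list(difflib.unified_diff(wikitext.splitlines(),
                                         back.splitlines(),
                                         lineterm=""))
        return (not diff, diff)

    # Toy usage with identity functions: a trivially clean round trip.
    ok, diff = round_trip_clean("''Hello'' [[world]]",
                                parse=lambda wt: wt,
                                serialize=lambda html: html)
    print("clean" if ok else "\n".join(diff))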
5) The changes Lila has pushed for since we started include:
- Greater focus on quarterly prioritization and a "rolling roadmap"
rather than a fiscal year view of the world
- Increased emphasis on understanding the needs of different user
personas at all cycles of software development, including through use
of qualitative and quantitative methods
- Reducing velocity of user-facing changes (esp. on desktop) to
increase focus on foundations (platform/process improvements) that
ultimately will enable us to move faster and more effectively
- Documenting product development methodology on-wiki and establishing
a clearer social contract (to reduce the reliance on RFCs/votes
regarding feature configurations)
- Surveying the needs of current users to more systematically balance
projects that serve future/new users vs. projects that serve the users
we have today
- Improved communication channels for community engagement to make it
easier to understand what major projects are currently in development
and how to provide feedback
This already means, effectively, that the commitments in the Annual
Plan developed during Sue's time should be taken with a big block of
salt at this point in time -- we're slowing down the deployment (not
development) of big user-facing features like Flow and VE as much as
needed to ensure that we incorporate user feedback, data and
qualitative research into the product development process
appropriately and spend sufficient time on the technical foundations
for these projects.
The quarterly prioritization alone has been, IMO, a huge improvement
that's already paying off. In the "Annual Plan" view of the world,
it's unlikely that we would have prioritized a project like HHVM the
way we did, because we were generally stuck on the priorities set for
the whole year. But it was very clear that this project would provide
huge benefits to our users, and I'm glad we were able to call it out
as _the_ top priority for Q1 and give the team the space to really
focus on getting it done (almost there now, starting to serve reader
traffic [4]).
Our draft Q2 top priorities (not yet posted on-wiki, but discussed in
the metrics meeting last week) are consistent with the above, with the
main user-facing push being on mobile web/apps and editing
performance, while the other priorities are more
platform/process-related. Once again, we're continuing to work on VE /
Flow, but focusing more on fundamentals (performance, architecture,
testing, use case analysis, etc.) than accelerating deployments.
My focus over the coming days is to flesh out the details for the Q2
priorities, and then shift to putting more effort in documenting and
refining product development methodologies and processes on-wiki. On
the engineering side, there's plenty of process/infrastructure
improvement to do as well. From my point of view, continued
improvement to test coverage and CI/testing infrastructure, developer
tools, profiling/instrumentation, staged roll-out support and
strengthening of architectural leadership are the big pieces for
coming months, but I'll let Damon speak to his focus areas as he gets
the lay of the land.
Erik
[1] https://meta.wikimedia.org/wiki/User:Sue_Gardner/Narrowing_focus
[2] https://lists.wikimedia.org/pipermail/wikimedia-l/2012-November/122663.html
[3] https://lists.wikimedia.org/pipermail/wikimedia-l/2012-December/123088.html
[4] https://gerrit.wikimedia.org/r/#/c/165004/
--
Erik Möller
VP of Product & Strategy, Wikimedia Foundation
Hi folks,
to increase accountability and create more opportunities for course
corrections and resourcing adjustments as necessary, Sue's asked me
and Howie Fung to set up a quarterly project evaluation process,
starting with our highest priority initiatives. These are, according
to Sue's narrowing focus recommendations which were approved by the
Board [1]:
- Visual Editor
- Mobile (mobile contributions + Wikipedia Zero)
- Editor Engagement (also known as the E2 and E3 teams)
- Funds Dissemination Committee and expanded grant-making capacity
I'm proposing the following initial schedule:
January:
- Editor Engagement Experiments
February:
- Visual Editor
- Mobile (Contribs + Zero)
March:
- Editor Engagement Features (Echo, Flow projects)
- Funds Dissemination Committee
We’ll try doing this on the same day or adjacent to the monthly
metrics meetings [2], since the team(s) will give a presentation on
their recent progress, which will help set some context that would
otherwise need to be covered in the quarterly review itself. This will
also create open opportunities for feedback and questions.
My goal is to do this in a manner where even though the quarterly
review meetings themselves are internal, the outcomes are captured as
meeting minutes and shared publicly, which is why I'm starting this
discussion on a public list as well. I've created a wiki page here
which we can use to discuss the concept further:
https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings/Quarterly_r…
The internal review will, at minimum, include:
- Sue Gardner
- myself
- Howie Fung
- Team members and relevant director(s)
- Designated minute-taker
So for example, for Visual Editor, the review team would be the Visual
Editor / Parsoid teams, Sue, me, Howie, Terry, and a minute-taker.
I imagine the structure of the review roughly as follows, with a
duration of about 2 1/2 hours divided into 25-30 minute blocks:
- Brief team intro and recap of team's activities through the quarter,
compared with goals
- Drill into goals and targets: Did we achieve what we said we would?
- Review of challenges, blockers and successes
- Discussion of proposed changes (e.g. resourcing, targets) and other
action items
- Buffer time, debriefing
Once again, the primary purpose of these reviews is to create improved
structures for internal accountability, escalation points in cases
where serious changes are necessary, and transparency to the world.
In addition to these priority initiatives, my recommendation would be
to conduct quarterly reviews for any activity that requires more than
a set amount of resources (people/dollars). These additional reviews
may, however, be conducted in a more lightweight manner, within the
departments. We’re slowly getting into that habit in engineering.
As we pilot this process, the format of the high priority reviews can
help inform and support reviews across the organization.
Feedback and questions are appreciated.
All best,
Erik
[1] https://wikimediafoundation.org/wiki/Vote:Narrowing_Focus
[2] https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings
--
Erik Möller
VP of Engineering and Product Development, Wikimedia Foundation
Support Free Knowledge: https://wikimediafoundation.org/wiki/Donate
Hi all,
There are several online accounting software packages for small
businesses. Do any thematic orgs have experience with them? Any
recommendations? I am thinking about proposing QuickBooks Online for
the Cascadia user group, but as this Forbes article notes, there are
competitors:
http://www.forbes.com/sites/quickerbettertech/2014/01/06/why-your-company-m…
Thanks,
Pine