I am working with the Tamil Wikipedia team to get the details requested.
I will share them within a week.
On 8 Jun 2017 01:42, "zppix e" <megadev44s.mail(a)gmail.com> wrote:
Hello,
Wikimedia's AI team (side note: I'm unaffiliated with the WMF, but was given
permission to send this email) needs help setting up some
wordlists for the tawiki ORES system (see T166052). If you have any
knowledge of the Tamil language, please come join us at chat.freenode.net,
channel #wikimedia-ai (webchat.freenode.net). Feel free to cross post this
or ask users on-wiki.
Thanks,
Zppix
Volunteer developer for WMF
enwp.org/User:Zppix
_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Hi all,
the Wikimedia technical community has recently adopted a Code of Conduct.
You have probably heard more about it than you wanted to, but if you have
missed it somehow, you can read the related blog post [1].
We started adding a CODE_OF_CONDUCT file with a link to all repos (this is
a new convention, promoted by GitHub, for declaring what a project's code
of conduct is), which resulted in a debate about whether that is the right
thing to do. If you are interested, please join the discussion on the
Phabricator task [2].
[1] https://blog.wikimedia.org/2017/06/08/wikimedia-code-of-conduct/
[2] https://phabricator.wikimedia.org/T165540
I'm trying to setup two Parsoid servers to play nicely with two MediaWiki
application servers and am having some issues. I have no problem getting
things working with Parsoid on a single app server, or multiple Parsoid
servers being used by a single app server, but ran into issues when I
increased to multiple app servers. To try to get this working, I started
making the app and Parsoid servers communicate through my load balancer. An
overview of my config:
Load balancer = 192.168.56.63
App1 = 192.168.56.80
App2 = 192.168.56.60
Parsoid1 = 192.168.56.80
Parsoid2 = 192.168.56.60
Note, App1 and Parsoid1 are the same server, and App2 and Parsoid2 are the
same server. I can only spin up so many VMs on my laptop.
The load balancer (HAProxy) is configured as follows:
* 80 forwards to 443
* 443 forwards to App1 and App2 port 8080
* 8081 forwards to App1 and App2 port 8080 (this will be a private network
connection later)
* 8001 forwards to Parsoid1 and Parsoid2 port 8000 (also will be private)
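In HAProxy terms, that port map corresponds to something like the following
haproxy.cfg fragment (a sketch only: the frontend/backend names are
placeholders, and only the IPs and ports come from the list above):

```
# Hypothetical fragment; section names invented, IPs/ports from the list above.
frontend mw_private
    bind *:8081
    default_backend mw_apps

backend mw_apps
    balance roundrobin
    server app1 192.168.56.80:8080 check
    server app2 192.168.56.60:8080 check

frontend parsoid_private
    bind *:8001
    default_backend parsoid_pool

backend parsoid_pool
    balance roundrobin
    server parsoid1 192.168.56.80:8000 check
    server parsoid2 192.168.56.60:8000 check
```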
On App1/Parsoid1 I can run `curl 192.168.56.63:8001` and get the
appropriate response from Parsoid. I can run `curl 192.168.56.63:8081` and
get the appropriate response from MediaWiki. The same is true for both on
App2/Parsoid2. So the servers can get the info they need from the services.
Currently I'm getting the error "Error loading data from server: 500:
docserver-http: HTTP 500. Would you like to retry?" when attempting to use
Visual Editor. I've tried various settings and have not always gotten that
specific error, but I am getting it with the settings I currently have in
localsettings.js and LocalSettings.php (shown below in this email).
Removing the proxy config lines from these settings gave slightly better
results: I did not get the 500 error, but instead it would sometimes work
after a very long time. It also may have been throwing errors in the
Parsoid log (with debug on). I have those logs saved if they help. I'm
hoping someone can just point out some misconfiguration, though.
Here are snippets of my config files:
On App1/Parsoid1, relevant localsettings.js:
parsoidConfig.setMwApi( {
    uri: 'http://192.168.56.80:8081/demo/api.php',
    proxy: { uri: 'http://192.168.56.80:8081/' },
    domain: 'demo',
    prefix: 'demo'
} );
parsoidConfig.serverInterface = '192.168.56.80';
On App2/Parsoid2, relevant localsettings.js:
parsoidConfig.setMwApi( {
    uri: 'http://192.168.56.80:8081/demo/api.php',
    proxy: { uri: 'http://192.168.56.80:8081/' },
    domain: 'demo',
    prefix: 'demo'
} );
parsoidConfig.serverInterface = '192.168.56.60';
On App1/Parsoid1, relevant LocalSettings.php:
$wgVirtualRestConfig['modules']['parsoid'] = array(
    'url' => '192.168.56.80:8001',
    'HTTPProxy' => 'http://192.168.56.80:8001',
    'domain' => $wikiId,
    'prefix' => $wikiId
);
On App2/Parsoid2, relevant LocalSettings.php:
$wgVirtualRestConfig['modules']['parsoid'] = array(
    'url' => '192.168.56.80:8001',
    'HTTPProxy' => 'http://192.168.56.80:8001',
    'domain' => $wikiId,
    'prefix' => $wikiId
);
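One detail worth double-checking in the snippet above (an observation, not
a confirmed fix): the 'url' value has no http:// scheme. A variant with an
explicit scheme, and with 'HTTPProxy' left out in case the proxy turns out
to be unnecessary, would look like:

```
// Hypothetical variant: explicit scheme on 'url', 'HTTPProxy' omitted.
$wgVirtualRestConfig['modules']['parsoid'] = array(
    'url' => 'http://192.168.56.80:8001',
    'domain' => $wikiId,
    'prefix' => $wikiId
);
```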
Thanks!
--James
Hi all,
We're excited to announce that we've rolled out several new features and
bug fixes for the Commons Android app[1] over the past few months. Some of
the major ones include:
- A map of nearby places that need pictures (in addition to the existing
list). Selecting a nearby place allows users to see the associated Wikidata
item, get directions to the place, or view the associated Wikipedia article
- A new and improved UI, including a light and dark theme, a navigation
drawer, and a logout option
- Fixed memory issues preventing users with older phones from accessing the
app
- Licenses now include CC-BY 4.0 and CC-BY-SA 4.0, and licenses can be
selected individually when uploading a picture
- The total number of pictures uploaded from an account is now shown, and
the image details pane now displays the upload date and image coordinates
Thank you for your support and encouragement throughout all this time!
Feedback, bug reports, and suggestions are always welcome on our GitHub
page[2]. :)
[1]: https://play.google.com/store/apps/details?id=fr.free.nrw.commons
[2]: https://github.com/commons-app/apps-android-commons/issues/
--
Regards,
Josephine
I've read through the documentation I think you're talking about. It's hard
to determine where to start, since the docs are spread across multiple VE,
Parsoid, and RESTBase pages. Installing RESTBase is, as you say,
straightforward (git clone, npm install, basically). Configuring it is not
clear to me, and without clear docs it's the kind of thing that takes hours
of trial and error. Also, some parts mention I need Cassandra, but it's not
clear whether that's a hard requirement.
If I want a highly available setup with multiple app servers and multiple
Parsoid servers, would I install RESTBase alongside each Parsoid? How does
communication between the multiple app and RB/Parsoid servers get
configured? I feel like I'll be back in the same load balancing situation.
--James
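For reference, pointing VisualEditor at a RESTBase instance (rather than
directly at Parsoid) goes through the same $wgVirtualRestConfig mechanism.
A sketch only, assuming RESTBase's conventional port 7231, the load
balancer address from this thread, and the 'demo' domain used elsewhere:

```
// Sketch: host, port, and domain values are assumptions, not a tested config.
$wgVirtualRestConfig['modules']['restbase'] = array(
    'url' => 'http://192.168.56.63:7231',
    'domain' => 'demo',
    'parsoidCompat' => false
);
```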
On Jun 8, 2017 7:43 AM, "C. Scott Ananian" <cananian(a)wikimedia.org> wrote:
RESTBase actually adds a lot of immediate performance, since it lets VE
load the editable representation directly from cache, instead of requiring
the editor to wait for Parsoid to parse the page before it can be edited.
I documented the RESTBase install; it shouldn't actually be any more
difficult than Parsoid. They both use the same service runner framework
now.
At any rate: in your configurations you have URL and HTTPProxy set to the
exact same string. This is almost certainly not right. I believe if you
just omit the proxy lines entirely from the configuration you'll find
things work as you expect.
--scott
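Applied to the snippets earlier in the thread, that suggestion would amount
to a localsettings.js like the following (same uri/domain/prefix as before,
with the proxy line simply dropped):

```
parsoidConfig.setMwApi( {
    uri: 'http://192.168.56.80:8081/demo/api.php',
    // proxy line omitted, per the advice above
    domain: 'demo',
    prefix: 'demo'
} );
```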
On Wed, Jun 7, 2017 at 11:30 PM, James Montalvo <jamesmontalvo3(a)gmail.com>
wrote:
> Setting up RESTBase is very involved. I'd really prefer not to add that
> complexity at this time. Also I'm not sure at my scale RESTBase would
> provide much performance benefit (though I don't know much about it so
> that's just a hunch). The parsoid and VE configs have fields for proxy (as
> shown in my snippets), so it seems like running them this way is intended.
> Am I wrong?
>
> Thanks,
> James
>
> On Jun 7, 2017 8:12 PM, "C. Scott Ananian" <cananian(a)wikimedia.org> wrote:
>
> > I think in general the first thing you should do for performance is set
> > up restbase in front of parsoid? Caching the parsoid results will be
> > faster than running multiple parsoids in parallel. That would also match
> > the wmf configuration more closely, which would probably help us help
> > you. I wrote up instructions for configuring restbase on the VE and
> > Parsoid wiki pages. As it turns out I updated these today to use VRS
> > configuration. Let me know if you run into trouble, perhaps some further
> > minor updates are necessary.
> > --scott
--
(http://cscott.net)
Hi everybody,
We have made some changes to our Product and Technology departments which
we are excited to tell you about. When Wes Moran, former Vice President of
Product, left the Wikimedia Foundation in May, we took the opportunity to
review the organization and operating principles that were guiding Product
and Technology. Our objectives were to improve our engagement with the
community during product development, develop a more audience-based
approach to building products, and create as efficient a pipeline as
possible between an idea and its deployment. We also wanted an approach
that would better prepare our engineering teams to plan around the upcoming
movement strategic direction. We have finished this process and have some
results to share with you.
Product is now known as Audiences, and other changes in that department
In order to more intentionally commit to a focus on the needs of users, we
are making changes to the names of teams and department (and will be using
these names throughout the rest of this update):
- The Product department will be renamed the Audiences department;
- The Editing team will now be called the Contributors team;
- The Reading team will be renamed the Readers team.
You might be asking: what does “audience” mean in this context? We define
it as a specific group of people who will use the products we build. For
example, “readers” is one audience. “Contributors” is another. Designing
products around who will be utilizing them most, rather than what we would
like those products to do, is a best practice in product development. We
want our organizational structure to support that approach.
We are making five notable changes to the Audiences department structure.
The first is that we are migrating folks working on search and discovery
from the stand-alone Discovery team into the Readers team and Technology
department, respectively. Specifically, the team working on our search
backend infrastructure will move to Technology, where they will report to
Victoria. The team working on maps, the search experience, and the project
entry portals (such as Wikipedia.org) will join the Readers team. This
realignment will allow us to build more integrated experiences and
knowledge-sharing for the end user.
The second is that the Fundraising Tech team will also move to the
Technology department. This move recognizes that their core work is
primarily platform development and integration, and brings them into closer
cooperation with their peers in critical functions including MediaWiki
Platform, Security, Analytics, and Operations.
The Team Practices group (TPG) will also be undergoing some changes.
Currently, TPG supports both specific teams in Product, as well as
supporting broader organizational development. Going forward, those TPG
members directly supporting feature teams will be embedded in their
respective teams in the Audiences or Technology departments. The TPG
members who were primarily focused on organizational health and development
will move to the Talent & Culture department, where they will report to
Anna Stillwell.
These three changes lead to the fourth, which is the move from four
“audience” verticals in the department (Reading, Editing, Discovery, and
Fundraising Tech, plus Team Practices) to three: Readers, Contributors, and
Community Tech. This structure is meant to streamline our focus on the
people we serve with our feature and product development, increase team
accountability and ownership over their work, allow Community Tech to
maintain its unique, effective, and multi-audiences workflow, and better
integrate support directly where teams need it most.
One final change: in the past we have had a design director. We recognize
that design is critical to creating exceptional experiences as a
contributor or a reader, so we’re bringing that role back. The director for
design will report to the interim Vice President of Product. The Design
Research function, currently under the Research team in the Technology
department, will report to the new director once the role is filled.
Technology is increasingly “programmatic”
The Technology department is also making a series of improvements in the
way we operate so that we can better serve the movement.
The biggest change is that all of our work in fiscal year 2017-2018 will be
structured and reported in programs instead of teams (you can see how this
works in our proposed 2017-2018 Annual Plan).[2] This will help us focus on
the collective impact we want to make, rather than limiting ourselves to
the way our organization is structured. These programs will be enabled by
the platforms (MediaWiki, Fundraising Tech, Search, Wikimedia Cloud
Services, APIs, ORES, and Analytics) that the Technology department builds
and maintains, and they will be delivered by teams that provide critical
services (Operations, Performance, Security, Release Engineering, and
Research). Distinguishing the work of the Technology department into
platforms and services will also allow us to treat platforms as products,
with accountable product managers and defined roadmaps.
In addition to moving the Search subteam into Technology, we are creating a
separate ORES team. These changes mark the start of something big -
investing in building machine learning, machine translation, natural
language processing, and related competencies. This is the first step
towards supporting intelligent, humanized user interfaces for our
communities - something we’re thinking of as “human tech”. Not because we
think that machines will replace our humans, but because these tools can
help our humans be much more productive.
Why these changes, why now?
When the Product and Technology departments were reorganized in 2015,[1]
the stated goal was establishing verticals to focus on specific groups of
users and to speed execution by reducing dependencies among teams. These
smaller changes are meant to “tune up” that structure by addressing some
of its weaknesses and making additional improvements to the structure of
our engineering work.
The process that brought us to these changes began informally shortly after
Victoria arrived, and took on a more formal tone once Wes announced his
departure in May. Katherine asked Anna Stillwell, the Foundation's
newly-appointed Chargée d’Affaires in the Talent & Culture department, to
facilitate a consultation with both departments to identify their pain
points, and better understand their cultural and structural needs. After
collecting feedback from 93 people across the two departments, as well as
stakeholders around the organization, she offered a draft proposal for open
comment within the Foundation. After making some changes to reflect staff
feedback, the Foundation’s leadership team decided to proceed with the
changes described above.
The leaders of some of the teams involved will be following up in the next
few days with the specifics of these organizational moves and what they
mean to our communities. If you still have questions, please ask here or on
the talk page of this announcement:
https://www.mediawiki.org/wiki/Talk:Wikimedia_Engineering/June_2017_changes.
Best regards,
Toby Negrin, Interim Vice President of Product
Victoria Coleman, Chief Technology Officer
PS. An on-wiki version of this message is available for translation:
https://www.mediawiki.org/wiki/Wikimedia_Engineering/June_2017_changes
[1]
https://meta.wikimedia.org/wiki/Wikimedia_Foundation_Engineering_reorganiza…
[2]
https://meta.wikimedia.org/wiki/Wikimedia_Foundation_Annual_Plan/2017-2018/…
Hi Toby,
Thanks for sharing the reorg information. From my perspective as an
outsider, this sounds good.
I have a question about the sentences "The biggest change is that all of
our work in fiscal year 2017-2018 will be structured and reported in
programs instead of teams (you can see how this works in our proposed
2017-2018 Annual Plan). This will help us focus on the collective impact we
want to make, rather than limiting ourselves to the way our organization is
structured."
I would like to see WMF move fully to project-based budgeting (there are a
variety of names for similar approaches), and the change that you describe
here sounds like a step in that direction. Will WMF move fully to
project-based budgeting by the time of the 2018-2019 Annual Plan? That
would involve each project (such as "redesign of www.wikimediafoundation.org")
having a project budget, and the collection of chosen projects with their
budgets would constitute the Annual Plan. (The methodology for choosing
projects varies among organizations that do this kind of budgeting; I would
imagine that WMF could use its values, the outcomes of the strategy
process, and the annual Board guidance about the budget as major factors in
selecting projects.)
Thanks,
Pine
https://www.mediawiki.org/wiki/Scrum_of_scrums/2017-06-07
= 2017-06-07 =
contact: https://www.mediawiki.org/wiki/Wikimedia_Engineering
== callouts ==
* RelEng: MW 1.29 Release blocked on the tasks in:
https://phabricator.wikimedia.org/project/view/2400/
* TechOps: codfw Row D switch upgrade on Tuesday 20th 1500UTC:
https://phabricator.wikimedia.org/T167274
* RelEng: '''WARNING''': Ops will be removing Salt near the end of next
quarter, that means no more Trebuchet as well. See
https://phabricator.wikimedia.org/T129290#3245438 for a list of things
still needing migration to scap3. (may see some strange updates from us :))
=== Technology ===
==== Analytics ====
- Productionizing code to count project-wide unique devices (unique
devices on *.wikipedia.org)
- Still working on eventlogging purging so data is in compliance with 90
day retention, should be done by end of quarter.
- Still working on prep work to replace kafka cluster and add TLS
support, some kafka clients (like mediawiki) do not support this natively
(this work will expand into next quarter)
- Added data for ops to druid: requests sampled 1/128, can be used for
troubleshooting: https://tinyurl.com/y7ogd7n5
- Pivot no longer open source: confirmed that the last open source clone
of Pivot doesn’t have any of the bugfixes we need (due to litigation,
Pivot is closed source now). We will try to migrate users to Superset next
quarter (a similar but less optimal tool open-sourced by Airbnb); that
will eat time from our next quarter plans.
==== Research ====
* Working on deploying Recommendation API based on ServiceTemplateNode
** https://phabricator.wikimedia.org/T165760
* Building Spark job with MLlib for finding translation recommendations
** https://github.com/schana/recommendation-translation
** https://phabricator.wikimedia.org/T162912
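The core idea behind translation recommendations is finding articles that exist in a source wiki but are missing from a target wiki, then ranking the candidates. The Spark/MLlib job does this at scale; this toy version (all titles and view counts are made up for illustration) just computes the gap:

```python
# Hypothetical data: articles in a source wiki with pageview counts,
# and the set of articles already present in the target wiki.
source_articles = {"Chennai": 90_000, "Kabaddi": 40_000, "Pongal": 55_000}
target_articles = {"Chennai"}

# Candidates = articles missing from the target, ranked by popularity.
candidates = sorted(
    ((title, views) for title, views in source_articles.items()
     if title not in target_articles),
    key=lambda item: item[1],
    reverse=True,
)
print([title for title, _ in candidates])  # ['Pongal', 'Kabaddi']
```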
==== Services ====
* Blockers: none
* Updates:
** Summary endpoint now contains HTML extract along with plain text
** Working on an experiment with services kubernetes development setup
==== Discovery/Search ====
* Tuna reorg is in effect
* No blockers
* Chinese/Hebrew analyzers ready, deploying. Next is Japanese.
* Upgraded Kibana to v5.3.3
* Continuing work on ML-assisted ranking
* Special:Undelete search deployed for admin testing (
https://phabricator.wikimedia.org/T163235)
* Sister wiki search is being deployed (
https://phabricator.wikimedia.org/T162626)
==== RelEng ====
* '''Blocking''': none?
* '''Blockers''': none
* Updates:
** MW 1.29 Release blocked on the tasks in:
https://phabricator.wikimedia.org/project/view/2400/
** '''WARNING''': Ops will be removing Salt near the end of next quarter,
which means no more Trebuchet either. See
https://phabricator.wikimedia.org/T129290#3245438 for a list of things
still needing migration to scap3. (You may see some strange updates from us :))
==== Security ====
* Reviews:
** TemplateStyles is almost complete
** psy/psysh use on WMF servers
** Verification of whitelisted.yaml / graylisted.yaml
** Auto-approval of low-risk OAuth applications
** Ex:JsonConfig/Ex:Kartographer
==== Tech Ops ====
* '''Blocking''': none?
* '''Blockers''': none
* Updates:
** codfw Row D switch upgrade on Tuesday 20th 1500UTC:
https://phabricator.wikimedia.org/T167274
** enwiki API overload
https://wikitech.wikimedia.org/wiki/Incident_documentation/20170607-WikiScr…
** New ops person joining the team: Keith Herron
=== Reading ===
==== web ====
* Warnings in place for PDF generation; talks continue around the backend.
* Updating page previews to consume and render HTML previews
* Page previews on Wikidata
==== iOS ====
* Finishing up 5.5 (Places, Explore feed updates) -
https://phabricator.wikimedia.org/project/view/2602/
** Regression testing & fixing remaining issues
** Public beta this week
==== Android ====
* Current release work complete; beta release soon, perhaps today or
Wednesday, assuming no trouble in QA:
https://phabricator.wikimedia.org/project/view/2352/
* On deck: https://phabricator.wikimedia.org/project/view/2763/
* New engineer candidate interviews underway.
==== Reading Infrastructure ====
* EL problem https://phabricator.wikimedia.org/T67508
* OCG Vagrant role
* TemplateStyles test server
* MCS: Fixed featured article titles for French WP in aggregated feed
endpoint.
==== Multimedia ====
* No blockers, not blocking
* Work on 3D progressing, though still waiting on proper reviews on design
and usability
=== Community Tech ===
* Deployed LoginNotify to Test Wikipedia (
https://www.mediawiki.org/wiki/Extension:LoginNotify)
* Polishing up CodeMirror extension for deployment as a Beta Feature
* Still working on XTools rewrite
* No blockers
=== Editing ===
==== UI Standardization ====
* This week:
** Clean-up/patches of style guide workboard
https://phabricator.wikimedia.org/tag/wikimediaui_style_guide/
* Updates:
** WikimediaUI Style Guide
https://wikimedia.github.io/WikimediaUI-Style-Guide/
*** Semi-automated SVG export of widgets overview and widgets/components to
provide open format for designers' use
** OOjs UI
https://phabricator.wikimedia.org/diffusion/GOJU/browse/master/History.md
*** v0.22.1 released (James D. Forrester)
**** Continued work on icons: Drop the core icon pack – please check your
extensions for needed icon packs
==== Language ====
* No blockers/blocking
* New feature in ContentTranslation: CX will allow publishing to User (or
Draft if available) namespace easily.
==== Collaboration ====
* Enabling saved filters in production. This has been gated off for a
while; it is now being enabled along with the train (and some related
bug fixes)
* RC Filters fixes
** Variety of UI fixes
** A couple backend-of-the-frontend fixes
* Echo
** Fixed an exception in the Echo blacklist functionality (still
dark-launched and only available on test wikis)
** Another Echo bug fix
* A few other small or not user-visible fixes
==== Parsing ====
* Linter is being re-enabled on large wikis next week - the core patch that
blocked this has now been merged and will be deployed this week
* Now that wmf2 is deployed on the cluster, we are ready to do final
reviews of red link support in Parsoid, merge and test it.
* Parsoid side patch to parse language variants is now going through final
reviews.
=== Fundraising Tech ===
* No blockers/blocking AFAIK
* Offsite this week
* Consolidation and improvements in config for SmashPig and
DonationInterface
* Usual onslaught of minor fixes for payment processor integrations
* Deploy of CentralNotice feature coming up