Hello all,
This message is for those of you who do deployments to the WMF cluster.
On the [[How to deploy code]] wikitech page, there is a section on
Testing your live code:
https://wikitech.wikimedia.org/wiki/How_to_deploy_code#Test_your_code_live
That's a pretty basic overview, and it could be greatly improved
with information like:
* How to monitor specific parts of the cluster that are relevant to what
you deployed
* What general monitoring should be looked at after you deploy
I know many of you already do much of this after you deploy, but the
lack of documentation on *how* to do it was a recurring theme in the
initial interviews I did with engineering teams when I first started.
https://wikitech.wikimedia.org/wiki/Deployments/Features_Process/General_Fe…
== "The Ask" ==
I'm asking you ("you" being those of you who have experience doing
post-deploy monitoring) to please add more documentation to this section
of the How to deploy code page:
https://wikitech.wikimedia.org/wiki/How_to_deploy_code#Test_your_code_live
I expect people from both engineering and ops will have feedback here.
Also, those of you who don't know how to monitor/log things post-deploy
but have specific questions: please ask here so that someone who does
know can answer on the wiki.
Thanks,
Greg
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
Hi all,
the GLAM Wiki Toolset code is ready for its initial review.
It can be found at: https://gerrit.wikimedia.org/r/#/c/59405/
Please help us by taking a look at it.
Cheers,
Geer Oskam
Europeana
GSoC / OPW mentors are facing 10 intense days selecting our next group
of interns.
Some suggestions:
ALWAYS
* Be nice and welcoming, especially in your first messages exchanged.
There will be time for wikitech-style blunt straightforwardness. ;)
* Rely as much as possible on public channels to discuss with the
candidates. Any proposal is a community project, not your pet project.
BEFORE THE SUBMISSION DEADLINE
* Get a second co-mentor for the proposals you want to see accepted.
It's not easy, but the success rate is remarkably higher, and the
workload for each mentor remarkably lower. It could be a profile
complementary to yours: technical vs community, professional vs
volunteer, maintainer vs power user, East vs West... The candidate and
the project will benefit a lot.
* You are supposed to be very responsive these days. Say 24h max for an
answer. Failing to do so will diminish the chances of your candidates /
proposals. As said, a second co-mentor always helps.
* Help your candidates, within certain limits. Candidates must have the
skills to prepare a proper plan for their own proposals.
* Assess the capacity of the candidate to complete the project. A
nicely written proposal is important, but don't rely on this alone.
* Assess the availability of the candidates. This is like a full time
job, with certain margin of flexibility for regular studies (but still).
SELECTING CANDIDATES
After the deadline we will meet to prioritize GSoC and OPW candidates.
* If you have more than one candidate be ready to prioritize them. One
mentor can take only one project, unless there is a good justification
for taking two (e.g. strong co-mentors in both).
* Also read the rest of the proposals and pencil in your own ranking
with a Wikimedia / MediaWiki wide agenda in mind.
* Be ready to negotiate the place of your candidates in the general
ranking. In other words, don't push blindly for "your" proposals.
Needless to say, you must read the official GSoC manual for mentors:
http://en.flossmanuals.net/GSoCMentoring/
There is more good reading at
https://www.mediawiki.org/wiki/Mentorship_programs/Possible_mentors
If you are a good mentor you know that 20 minutes reading docs can save
you a lot more time and energy. ;)
--
Quim Gil
Technical Contributor Coordinator @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
Hello,
I am summarizing here my proposal idea for OPW MediaWiki projects.
Objective: To test MediaWiki/Wikipedia websites across different browsers, making sure various website features are supported on different browser platforms. This is mainly functionality testing of website features. When a new feature is implemented, regression testing will be done to make sure existing features still work. Once the tests pass, it can be confirmed that the tested feature supports the particular browsers.
Scope: To do functionality testing of selected features (e.g. the Print/Export PDF option) of the website on selected browsers (e.g. IE, FF, Chrome). This has nothing to do with performance testing or load testing of the website.
Plan:
* To select the features to be tested, and then select the browsers on
which testing will be done
* To prepare test cases covering all functionality of the selected features
* To prepare multiple test scripts for each of these test cases, covering different scenarios (negative/positive etc.)
* To prepare test data covering various sets of data, e.g. edge conditions
* To execute the test scripts and analyse the results
* For failed results, bugs will be logged; the others will be marked as passed
* Bug lifecycle will be tracked and once the fix is available it will be retested.
* Test scripts will be reused for regression testing.
* Test scripts will be updated now and then to include new changes.
* New test scripts can be included to cover different scenarios of a bug fix.
* Test report to be presented summarizing the findings, highlighting the open defects.
Tools: A test automation tool will be used - the Selenium suite or Cucumber. Bugzilla will be used to log defects. Test cases will be maintained in a spreadsheet or any version-controlled tool. The test report will be prepared in graphical format; any open source tool can be used.
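To make the plan above concrete, here is a minimal sketch (in Python) of how such a cross-browser suite could be organized. The Selenium calls are only indicated in comments; the browser check is a stub, and the feature/browser names are illustrative examples, not real test cases:

```python
# A stubbed harness for running one functional check across several
# browsers. With Selenium, check_pdf_export() would drive a real
# WebDriver session; the stub below just pretends the feature works
# everywhere except IE so the pass/fail bookkeeping is clear.

BROWSERS = ["firefox", "chrome", "ie"]

def check_pdf_export(browser):
    """Return True if the (hypothetical) PDF-export check passes.

    With Selenium this would be roughly:
        driver = selenium.webdriver.Firefox()   # etc. per browser
        driver.get("https://en.wikipedia.org/wiki/Main_Page")
        ...click "Download as PDF" and assert on the result...
    """
    return browser != "ie"  # stub: pretend only IE fails

def run_suite():
    """Execute the check on every browser; failures become bugs to log."""
    results = {b: check_pdf_export(b) for b in BROWSERS}
    failures = [b for b, ok in results.items() if not ok]
    return results, failures
```

The same harness shape works for regression runs: keep the scripts, re-run them after each change, and log a bug for every entry in the failure list.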
Thanks and Regards,
Indrani Sen
Msc Software Engineering
Queen Mary, University of London
London, United Kingdom
Hi folks,
Short version: This mail is fishing for feedback on proposed work on
Gerrit-Bugzilla integration to replace code review tags.
Long version:
One feature of our old code review system was a tagging system that
made it quick and easy to assign a keyword to a revision at any time.
We had a number of uses for the system, which are documented here:
http://www.mediawiki.org/wiki/Subversion/Code_review/tags
Examples of tags that we miss:
scaptrap - this change requires special care at deploy time. It may
be that it requires two components to be in sync that aren't generally
deployed in sync, or it requires a configuration change, or something
else.
fixme - this change introduces a bug (weirdly, not in our old
documentation - go figure). In Gerrit, this would be a -1 or -2 if
the code hasn't been merged yet, but post-merge, there's no uniform
way to attach that metadata.
backcompat - backwards compatibility breakers.
This has been a frequently requested feature of the upstream Gerrit
developers. However, they've been reluctant to implement such a
feature until after some unscheduled major architectural work is
completed, so we shouldn't hold our breath waiting for that.
With that in mind, we have bug 38239, assigned to Chad:
https://bugzilla.wikimedia.org/38239
Chad worked up a hacky version of tagging as a MediaWiki extension
earlier this week, which would have kept the tags in MediaWiki instead
of Gerrit. That's only one half of what might be an acceptable
solution, since the other half would need to be some JavaScript in the
Gerrit user interface to allow for insertion into the MediaWiki tag
database. I discouraged Chad from continuing on this because it
seemed to me that it would have been a little *too* hacky. I
preferred that if we were going to have our own hacky solution, it
should at least be implemented as a Gerrit plugin, so that it would at
least stand a chance of becoming a well-integrated solution.
The problem with tagging generally, though, is that it ended up being
this weird parallel workflow to other systems. "fixme" was frequently
used as a substitute for filing a bug report. "scaptrap" was a
substitute for proper deployment notes, and "backcompat" was a
solution for proper developer notes. That said, it was lightweight,
which meant that it actually got used, as opposed to many "proper"
solutions, which are frequently enough work that it's difficult to
expect uniform followthrough.
The solution that Chad and I discussed is an addition to the
Bugzilla-Gerrit plugin that Christian is already working on. The idea
would be that, for any given revision, there would be a "file bug
about this revision" link. Following that link would take you to the
standard Bugzilla bug-filing page, with as many fields prepopulated
based on Gerrit context as could sensibly be filled (including, at the
very minimum, a link back to the Gerrit rev, but probably also the
assignee set to the developer who introduced the issue and the
component set based on the repo).
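As a rough illustration of the prepopulation idea: Bugzilla's enter_bug.cgi accepts CGI parameters such as short_desc, comment, component and assigned_to, so the plugin could simply construct a URL. The repo-to-component mapping below is a hypothetical example, not an agreed scheme:

```python
from urllib.parse import urlencode

BUGZILLA = "https://bugzilla.wikimedia.org/enter_bug.cgi"

def file_bug_url(change_url, repo, owner):
    """Build a prepopulated 'enter bug' URL for a Gerrit change.

    short_desc/comment/component/assigned_to are standard enter_bug.cgi
    parameters; deriving the component from the last path segment of
    the repo name is just a placeholder rule for illustration.
    """
    params = {
        "product": "MediaWiki",
        "component": repo.split("/")[-1],  # hypothetical repo->component rule
        "short_desc": "Follow-up to " + change_url,
        "comment": "Filed about Gerrit change " + change_url,
        "assigned_to": owner,              # the developer who introduced it
    }
    return BUGZILLA + "?" + urlencode(params)

url = file_bug_url("https://gerrit.wikimedia.org/r/59405",
                   "mediawiki/extensions/GWToolset",
                   "dev@example.org")
```

The real plugin would compute these values from the Gerrit change metadata rather than take them as arguments.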
A Bugzilla-based solution would be an ideal replacement for "fixme",
since fixmes are basically bugs anyway. It would work reasonably well
for "scaptrap", since they generally imply something that needs to be
done prior to deployment. It would be an awkward replacement for
"backcompat" and others.
Still, the nice thing is that a Bugzilla-based solution is
general-purpose enough that it may very well find use
outside of Wikimedia-land. The BZ-Gerrit work is actually being done
as part of a larger issue tracker plugin that the GerritForge folks
have written to support Jira. Filing issues based on revisions is
likely a common request for people integrating Gerrit with their issue
tracking system.
Is a Bugzilla-based solution worthwhile enough for our purposes for a
modest (but probably not insignificant) investment in this area, or
should we prioritize other Gerrit work higher (say, for example, a
native Gerrit tagging plugin)? Assuming we move forward with
development, anything we need to consider?
I've put the bulk of this email on mediawiki.org here:
http://www.mediawiki.org/wiki/Git/Tagging
We should evolve that page into a spec for the work that Chad and
Christian will be doing.
Rob
Hello,
I'm new to this mailing list and would like some relevant links so I
can get a global overview of how you organize and how to make useful
contributions.
Kind regards,
mathieu
Hi,
I am a 3rd year undergraduate student of computer science, pursuing my
B.Tech degree at RCC Institute of Information Technology. I am proficient
in Java, PHP and C#.
Among the project ideas on the GSoC 2013 ideas page, the one particular
idea that seemed really interesting to me is developing an Entity
Suggester for Wikidata. I want to work on it.
I am passionate about data mining, big data and recommendation engines,
therefore this idea naturally appeals to me a lot. I have experience with
building music and people recommendation systems, and have worked with
Myrrix and Apache Mahout. I recently designed and implemented such a
recommendation system and deployed it on a live production site where
I'm interning, to recommend Facebook users to each other based on
their interests.
The problem is, the documentation for Wikidata and the Wikibase extension
seems pretty daunting to me, since I have never configured a MediaWiki
instance or actually used it. (I am on my way to trying it out following the
instructions at
http://www.mediawiki.org/wiki/Summer_of_Code_2013#Where_to_start.) I can
easily build a recommendation system and create a web-service or
REST-based API through which the engine can be trained with existing
data and queried. This seems to be a collaborative filtering problem
(people who bought x also bought y). It would be easier if I could get
some help with where/how I need to integrate it with Wikidata. Also, some sample
datasets (csv files?) or schemas (just the column names and data types?)
would help a lot, for me to figure this out.
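A toy sketch of the "people who bought x also bought y" co-occurrence idea I have in mind (the Wikidata-style property IDs here are illustrative placeholders, not a real dataset):

```python
from collections import Counter
from itertools import combinations

def recommend(histories, item, top_n=3):
    """'People who used x also used y' via pairwise co-occurrence counts.

    histories: one list of items per user. A real Entity Suggester
    would be trained on entity dumps, not toy lists like these.
    """
    counts = Counter()
    for items in histories:
        for a, b in combinations(set(items), 2):
            counts[(a, b)] += 1  # count the pair in both directions
            counts[(b, a)] += 1
    scored = [(other, n) for (x, other), n in counts.items() if x == item]
    scored.sort(key=lambda t: (-t[1], t[0]))  # most co-occurring first
    return [other for other, _ in scored[:top_n]]

histories = [
    ["P31", "P279", "P18"],
    ["P31", "P18"],
    ["P31", "P279"],
]
```

Myrrix/Mahout would of course replace this with a proper scalable model; the sketch is only to show the shape of the training data I am asking about.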
I have added this email as a comment on the bug report at
https://bugzilla.wikimedia.org/show_bug.cgi?id=46555#c1.
Please ask me if you have any questions. :-)
Thanks,
Nilesh
--
A quest eternal, a life so small! So don't just play the guitar, build one.
You can also email me at contact(a)nileshc.com or visit my website:
http://www.nileshc.com/