First things first: the 1.21 release is rapidly approaching. Please
look over https://www.mediawiki.org/wiki/Release_notes/1.21 and help
update the documentation so that when release time comes, we will know
about the work you've done on MediaWiki.
Next, I've started using the MW_release_status template on the
documentation for MediaWiki releases, but I've run into a problem.
There are two different pages for newer releases. For example, 1.20 has
https://www.mediawiki.org/wiki/Release_notes/1.20 and
https://www.mediawiki.org/wiki/MediaWiki_1.20. Since the MediaWiki_1.20
page links to the RELEASE-NOTES-1.20 file, I didn't discover the
Release_notes/1.20 page until later.
Older releases redirect from the MediaWiki_X.XX page to the
Release_notes/X.XX page (see MediaWiki_1.15, for example). This makes
it possible to use {{SUBPAGENAME}} to get the release number and display
a proper message with the MW_release_status template.
For 1.21, I've made a redirect from MediaWiki_1.21 to
Release_notes/1.21. I started to do something similar for 1.20, 1.19,
etc., but stopped because I couldn't merge the pages quickly.
So, the question: is there a reason to keep two separate pages for each
release going forward?
--
http://hexmode.com/
[We are] immortal ... because [we have] a soul, a spirit capable of
compassion and sacrifice and endurance.
-- William Faulkner, Nobel Prize acceptance speech
Hi,
I am Tejas Nikumbh, an undergrad at Indian Institute of Technology Bombay.
I will be participating in GSoC this year [GSoC 2013]. In order to improve
my chances of getting selected this year, I would like to start
contributing early to open source development via Wikimedia.
Here's a little background info:
*Languages proficient in:* Java, Python, JavaScript [+jQuery],
HTML [+HTML5 Canvas], CSS [+CSS3]. C++ should be on the list soon.
*CS Theory:* Discrete Math, Probability and Random Processes, Data
Structures and Algorithms.
*Version control:* Git [GitHub], Mercurial.
*Basic machine learning:* regression algorithms and some classification
algorithms.
Based on the above info, are there any projects which might suit me?
Also, picking up something new reasonably quickly shouldn't be a problem.
Thanks,
--
Tejas Nikumbh,
Third Year Undergraduate,
Electrical Engineering Department,
IIT Bombay.
I was wondering what the latest on this was (I can't seem to find any
recent updates in my mailing list). The MobileFrontend team was
reassured to see a GitHub user commenting on our commits on GitHub.
It's made me more excited about a universe where pull requests made on
GitHub show up in Gerrit and can be merged. How close to this dream
are we?
Hello!
WebRequest::getPathInfo() still works around PHP bug 31892, which was
fixed 6 years ago. That is, WebRequest uses REQUEST_URI instead of the
"mangled" PATH_INFO, even though PATH_INFO has not been mangled since
PHP 5.2.4. Yes, Apache still collapses multiple slashes (///) into a
single /, but as far as I know it does that for REQUEST_URI as well as
for PATH_INFO.
Maybe that part of the code should be removed?
Also, I don't understand the need for PathRouter; IMHO it's just
unnecessary sophistication. As I understand it, everything worked
without it, and there is no feature in MediaWiki which depends on a
router. Am I correct?
Hi,
unfortunately, en.planet updates are still getting stuck.
https://bugzilla.wikimedia.org/show_bug.cgi?id=45806
The issue is in feedparser.py
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 32:
ordinal not in range(128)
I would really appreciate help on this from somebody more familiar
with Python, Django templates, feed parsing, or Unicode problems.
Thanks
---
INFO:planet.runner:Loading cached data
Traceback (most recent call last):
  File "/usr/bin/planet", line 138, in <module>
    splice.apply(doc.toxml('utf-8'))
  File "/usr/lib/pymodules/python2.7/planet/splice.py", line 118, in apply
    output_file = shell.run(template_file, doc)
  File "/usr/lib/pymodules/python2.7/planet/shell/__init__.py", line 66, in run
    module.run(template_resolved, doc, output_file, options)
  File "/usr/lib/pymodules/python2.7/planet/shell/tmpl.py", line 254, in run
    for key,value in template_info(doc).items():
  File "/usr/lib/pymodules/python2.7/planet/shell/tmpl.py", line 193, in template_info
    data=feedparser.parse(source)
  File "/usr/lib/pymodules/python2.7/planet/vendor/feedparser.py", line 3525, in parse
    feedparser.feed(data)
  File "/usr/lib/pymodules/python2.7/planet/vendor/feedparser.py", line 1662, in feed
    sgmllib.SGMLParser.feed(self, data)
  File "/usr/lib/python2.7/sgmllib.py", line 104, in feed
    self.goahead(0)
  File "/usr/lib/python2.7/sgmllib.py", line 143, in goahead
    k = self.parse_endtag(i)
  File "/usr/lib/python2.7/sgmllib.py", line 320, in parse_endtag
    self.finish_endtag(tag)
  File "/usr/lib/python2.7/sgmllib.py", line 360, in finish_endtag
    self.unknown_endtag(tag)
  File "/usr/lib/pymodules/python2.7/planet/vendor/feedparser.py", line 569, in unknown_endtag
    method()
  File "/usr/lib/pymodules/python2.7/planet/vendor/feedparser.py", line 1512, in _end_content
    value = self.popContent('content')
  File "/usr/lib/pymodules/python2.7/planet/vendor/feedparser.py", line 849, in popContent
    value = self.pop(tag)
  File "/usr/lib/pymodules/python2.7/planet/vendor/feedparser.py", line 764, in pop
    mfresults = _parseMicroformats(output, self.baseuri, self.encoding)
  File "/usr/lib/pymodules/python2.7/planet/vendor/feedparser.py", line 2219, in _parseMicroformats
    p.vcard = p.findVCards(p.document)
  File "/usr/lib/pymodules/python2.7/planet/vendor/feedparser.py", line 2161, in findVCards
    sVCards += '\n'.join(arLines) + '\n'
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 32: ordinal not in range(128)
----
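For what it's worth, here is a minimal, illustrative sketch (Python 2) of the
general failure mode behind that traceback: concatenating a unicode string
with a byte string containing non-ASCII bytes makes Python 2 implicitly decode
the bytes with the 'ascii' codec, which fails. The variable names below are
made up for illustration and this is not the actual fix for bug 45806, just a
demonstration of the error class.

# -*- coding: utf-8 -*-
# Illustrative only: reproduce the UnicodeDecodeError class seen above
# by mixing a unicode string with UTF-8 encoded bytes under Python 2.
unicode_text = u'BEGIN:VCARD\n'    # unicode string
utf8_bytes = 'caf\xc3\xa9\n'       # str (bytes): "cafe" with an accent, UTF-8 encoded

try:
    combined = unicode_text + utf8_bytes   # implicit ascii decode of the bytes fails
except UnicodeDecodeError as err:
    print(err)   # 'ascii' codec can't decode byte 0xc3 ...

# Decoding the byte string explicitly before mixing the two types works:
combined = unicode_text + utf8_bytes.decode('utf-8')
print(repr(combined))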
--
Daniel Zahn <dzahn(a)wikimedia.org>
Operations Engineer
Hello all,
This is related to the ongoing tagging in Gerrit/Bugzilla thread, but I
wanted to make it a separate one so it is more easily seen.
Request: For anything in Gerrit that you feel is scaptrap-worthy, please
add me to the reviewers list. That way I can keep a record of them and
you don't have to do anything like edit a wiki page somewhere.
I won't be reviewing your code (yet) so don't worry about that.
Thanks!
Greg
PS: For those who don't know, "scaptraps" are the loving name we give
to changes that should be noted when doing a deployment because they
have a chance of breaking something or involve a migration of some
sort. "scap" is one of the tools we use in deployments.
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
As proposed here some time ago, the focus of the next QA weekly goal
will be old LiquidThreads bugs.
Starting on March 18, 2013, and continuing all week, with a dedicated
session on March 19, we will be cleaning up the bug reports about
Extension:LiquidThreads.
http://www.mediawiki.org/wiki/Bug_management/Triage/20130318
Join us in improving the quality of these reports!
--
Quim Gil
Technical Contributor Coordinator @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
I've done some testing of the performance of Lua-based templates as
deployed on enwiki. This analysis is summarized at:
https://en.wikipedia.org/wiki/User:Dragons_flight/Lua_performance
The bottom line is that Lua is fast, and often much faster than the
template coding it replaces.
For the important case of citation templates, one can anticipate
seeing about an 80% reduction in render time once Module:Citation/CS1
is deployed. This will have the effect that 300 citations can be
processed in about 3.5 seconds rather than 18 seconds. Such an
improvement should make a meaningful difference for many of
Wikipedia's complex pages.
One unexpected detail that came out of my testing is that the overhead
per #invoke call is about 4.5 milliseconds, which is actually fairly
large once one starts talking about having several hundred calls on a
single page. For the citation module, this overhead is about 40% of
the run time. For some of the simpler number formatting and string
manipulation Lua modules, the overhead can be 75-90% of the run time.
I don't know if it is possible, but it may be worth looking to see if
there are ways to use caching or other techniques to reduce the
overhead associated with launching each #invoke instance.
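As a rough sanity check on those numbers (using only the figures quoted
above, nothing newly measured):

# Back-of-the-envelope check of the figures quoted above.
invokes = 300                   # citation templates on the example page
overhead_per_invoke = 0.0045    # seconds, i.e. ~4.5 ms per #invoke
total_render = 3.5              # seconds, anticipated with Module:Citation/CS1

overhead = invokes * overhead_per_invoke                             # 1.35 s
print("overhead share: %.0f%%" % (100 * overhead / total_render))   # ~39%, i.e. roughly 40%

# And the headline speedup: 18 s -> 3.5 s is about an 81% reduction,
# consistent with the ~80% figure above.
print("reduction: %.0f%%" % (100 * (18.0 - 3.5) / 18.0))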
-Robert Rohde
We just finished deploying a new SSL certificate to the sites. Now all *.m
and *. certificates are included in a single certificate, except
mediawiki.org. Unfortunately we somehow forgot mediawiki.org when we
ordered the updated cert. We'll be replacing it soon with another cert
that has mediawiki.org included.
This should fix any certificate errors that folks have been seeing on
non-wikipedia m. domains.
- Ryan
http://www.modern.ie/en-us/virtualization-tools is offering VMs for
testing various versions of IE.
Unlike before, they now even offer VirtualBox and VMWare images, so you
don't have to convert the Virtual PC ones.
Matt Flaschen