Here's a crazy question.
Non-profit organizations are famous for having terrible web
sites. Generally they get a fixed budget, and after they spend it, they
have a party and announce that they succeeded. Nobody ever tells the
users, or rather, the people who might have been the users had they
found out about it.
For a long time I thought "non-profit" was a cause of failure, or
rather, that profit was a cause of success. Nobody at a library
benefits from making a digital library 5% easier to use, but if a
company like AMZN improves its site by 5%, that translates into happy
customers plus a pile of money that can go into bonuses, dividends, etc.
That continuous improvement is missing in most non-profits. At
best they get a series of grants to do things and set goals for major
upgrades. Sometimes these upgrades fail, sometimes they really help,
often they spend three years and a lot of money to end up with something
that's about the same as what they had before.
How does the Wikimedia Foundation escape this trap?
I've searched the MediaWiki documentation endlessly, but I can't find an
answer as to how to selectively add HTML/wikitext to the top of articles
while they are being displayed to the user.
So, given a custom namespace, I want every page within that namespace to
have a piece of HTML attached to the top of the page while it is being
displayed to a user.
Could anyone clue me in to a hook or way to do this?
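For reference, here's roughly the sort of thing I have in mind (untested
guesswork on my part; NS_EXAMPLE is a placeholder for the real namespace
constant, and I'm not even sure BeforePageDisplay is the right hook):

    $wgHooks['BeforePageDisplay'][] = 'efNamespaceHeader';

    function efNamespaceHeader( &$out, &$skin ) {
        // Only touch pages in the target namespace.
        if ( $out->getTitle()->getNamespace() === NS_EXAMPLE ) {
            // Prepend raw HTML above the rendered article content.
            $out->prependHTML( '<div class="ns-header">My notice here</div>' );
        }
        return true; // Let other hooks run.
    }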
- Hunter F.
Howdy folks,
I put together some code to fetch all the non-obsolete patches from
this page and see if they apply against trunk.
https://bugzilla.wikimedia.org/buglist.cgi?keywords=patch%2C%20need-review&…
Basically, it tries --strip values of 0 through 3, and then does an
exhaustive search of the tree for places where the patch might apply.
Here are my results:
http://pastebin.com/tgNTfXzx
For ease of viewing, here is a list filtered to only the successfully
applied patches.
http://pastebin.com/EGA3cF90
The "Strip" and "Path" information is what you need to apply the
patch (e.g. patch --directory=includes/ --strip 3 < mypatchfile).
Eventually I may update these scripts to run all the automated tests
on each patch that applies cleanly. However, I won't be getting to it
in the near future. ;-)
In the meantime, hopefully this makes it easier to catch up on the backlog.
If anyone wants to mess with the code (it is in Ruby), please let me
know and I will get it up on GitHub.
~Rusty
I've noticed a problem with overzealous deletionists on Commons. While
this may be something of a legal and political issue, it's also
operational and affects multiple *[m,p]edias at the same time.
I've spent some time over the years convincing public figures that we
need official pictures released for articles, rather than relying on
fan (or publicity or staff) produced pictures. Because of my own
experience in the academic, computing, political, and music industries,
I've had a modicum of success.
I also ask them to create an official user identity for posting the pictures.
Since Single User Login (SUL), this has the added benefit that nobody
else can pretend to be them. From their point of view, it's the same
reason they also ensure they have an existing Facebook, LinkedIn, or
Twitter account.
This week, one of the Commons administrators (Yann) ran a script of
some sort that flagged hundreds of pictures for deletion, apparently
based on the presence of the word "facebook" in the description. At a
rate of more than one per minute, there was no time for actual legal
analysis. The only rationale given was: "From Facebook. No permission."
https://commons.wikimedia.org/wiki/Commons:Deletion_requests/File:Sharon_Ag…
In this case, timestamps indicate the Commons photo was posted before
the Facebook photo, and the Facebook version is somewhat smaller, so
there's not even a hint that it was copied "From Facebook." Besides,
many public figures also have Facebook accounts, so it shouldn't matter
that a photo appears in both places.
A bot posted a link to the notice on the talk page of the en.wiki
article that used the photo, which is how it turned up in my watchlist.
Then, despite my protest noting that the correct copyright release was
included, the administrator (Yann) argued that "The EXIF data says that
the author is John Taylor. The uploader has another name, so I don't
think he is allowed to decide a license."
That appears to be a post-hoc explanation, as the Facebook rationale
obviously wasn't applicable: a self-justifying strawman argument.
In this case, as is usual in most industries, the *camera* owner
appears in the EXIF data. A public figure who pays a studio for
headshots owns the picture itself. The photographer would need the
public figure's permission to distribute the photo!
After I pointed out that the nomination didn't remotely meet the
deletion policy's nomination requirements (which I cited and quoted),
this administrator wrote: "I see that discussion with you is quite useless."
Then, minutes later, another administrator, Béria Lima, deleted the
photo without waiting for the official 7-day comment period to expire.
That indicates collusion, not independent review.
There are a number of obvious technical issues here. YouTube and others
have had to handle this; it's time we did too.
1) The DMCA doesn't require a takedown until there's been a complaint. We
really shouldn't allow deletion until there's been an actual complaint.
We need technical means for recording official notices and appeals.
Informal opinions of ill-informed volunteers aren't helpful.
2) Fast scripting and insufficient notice lead to flapping of images
and confusion among the owners of the documents (and the editors of
articles; 2 days is much *much* too short for most of us). We need
something to enforce review times.
3) Folks in other industries aren't monitoring Talk pages and get no
effective notice that their photos are being deleted. The Talk
mechanism is really not a good method for anybody other than very
active Wikipedians. We need better email and other social notices.
4) We really don't have a method to "prove" that a username is actually
under control of the public figure. Hard to do. Needs discussion.
5) We probably could use some kind of comparison utility to help
confirm or deny that a photo or article is derived from another source;
a rough sketch follows below.
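On point 5, here is a minimal sketch of one cheap approach: an 8x8
"average hash" using PHP's GD extension. This is only an illustration
(the function names are mine, and real duplicate detection would need to
be more robust against crops and recompression):

    <?php
    // Compute a 64-bit "average hash": shrink the image to 8x8, then
    // set one bit per pixel depending on whether that pixel is
    // brighter than the mean.
    function averageHash( $file ) {
        $src = imagecreatefromstring( file_get_contents( $file ) );
        $img = imagecreatetruecolor( 8, 8 );
        imagecopyresampled( $img, $src, 0, 0, 0, 0, 8, 8,
            imagesx( $src ), imagesy( $src ) );
        $pixels = array();
        for ( $y = 0; $y < 8; $y++ ) {
            for ( $x = 0; $x < 8; $x++ ) {
                $rgb = imagecolorat( $img, $x, $y );
                // Average the RGB channels to approximate grayscale.
                $pixels[] = ( ( ( $rgb >> 16 ) & 0xFF )
                    + ( ( $rgb >> 8 ) & 0xFF )
                    + ( $rgb & 0xFF ) ) / 3;
            }
        }
        $mean = array_sum( $pixels ) / 64;
        $bits = '';
        foreach ( $pixels as $p ) {
            $bits .= ( $p > $mean ) ? '1' : '0';
        }
        return $bits;
    }

    // Hamming distance between two hashes; a small distance (say,
    // under 10 bits) flags the pair as a likely derivative.
    function hashDistance( $a, $b ) {
        return count( array_diff_assoc( str_split( $a ), str_split( $b ) ) );
    }

Timestamps and resolution, as in the case above, remain better evidence;
a utility like this should only flag candidates for human review, never
decide.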
If there's a better place to discuss this, please indicate.
The WMF folks organizing education programs around the world (where
students improve Wikipedia articles as an assignment) are looking for
better tools for professors to review student contribs.
One of the needs that's come up is a more user-friendly, consolidated
view of all changes made by a user -- either for a given timeframe or
for a given page.
That is:
* allow student/page-level filtering of contribs
* render a sequence of diffs, as opposed to a sequence of page titles
* collapse a sequence of edits into a single diff
Brion suggested this could be done through a gadget/user script that
uses the API. You'd fetch the diff for each chunk one by one, loading
them asynchronously onto the same page, which lets the reviewer start
on the latest or earliest edits and keep going even while things load.
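For example, a single request along these lines (parameters from memory,
so double-check them against the API docs) returns one user's revisions
to a page along with a diff of each against the previous revision:

    api.php?action=query&prop=revisions&titles=Example_page&rvuser=StudentName&rvdiffto=prev&rvlimit=50&format=json

The gadget would then just iterate over the pages or users of interest
and render the returned diff HTML in sequence.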
This shouldn't cause any more load than viewing the same diffs
manually, but it will be a lot nicer for the person reviewing them.
Any takers? This could make a big difference for getting hundreds more
students to work on educational content -- it's a Good Thing. And it's
probably useful in and of itself.
Erik
--
Erik Möller
VP of Engineering and Product Development, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
---------- Forwarded message ----------
From: Polaris <xxx(a)northerndragons.ca>
Date: 2011/11/14
Subject: Trying to report a problem, not sure the right place
To: wikitech-l-owner(a)lists.wikimedia.org
Subject: Missing pages after upgrade from 1.5 to 1.17
Hi there,
I have a MediaWiki 1.5 site that I had to move from my previous host to
a new hosting provider. I stood up the old version here:
http://in4k.northerndragons.ca/.
It's running on PHP: 5.2.17 (cgi-fcgi), and MySQL 5.1.39.
I have tried to upgrade this site using the following process:
1) Create a replica of the site by creating a new database, exporting
the old one via phpMyAdmin, and importing it.
2) Copy all the data over.
3) Modify LocalSettings.php to have the right paths.
4) Validate that the site is working as expected.
This works.
Then I drop in the files for MediaWiki 1.17 and run the maintenance
script update.php (on the command line), which completes without errors.
The problem is that I'm missing pages after the upgrade. You can see it
by comparing my old site (just migrated) with the 1.17 site:
most of the user pages are gone, and the user accounts, for that matter, too.
Examples:
Working old version:
http://in4k.northerndragons.ca/index.php?title=User:alrj
Missing in the new version:
http://beta.in4k.northerndragons.ca/index.php?title=User:Alrj
Says User account "Alrj" is not registered.
Another:
http://in4k.northerndragons.ca/index.php?title=User:auld
Missing in the new version:
http://beta.in4k.northerndragons.ca/index.php?title=User:Auld
User account "Auld" is not registered.
What's really odd is that this isn't true for all users. For example:
http://in4k.northerndragons.ca/index.php?title=User:Blueberry
and http://beta.in4k.northerndragons.ca/index.php?title=User:Blueberry
work in both.
I'd greatly appreciate some help here, as I'm totally stumped.
Thanks!
--
Kind regards,
Huib Laurens
WickedWay.nl
Webhosting the wicked way.
In the search box I get suggestions on the fly as I type, and I'm
often impressed by how good the suggestions are. However, right now at
Wiktionary I get suggestions that aren't the best ones for the given prefix.
For example, at en.wiktionary.org if I type "lagru" it doesn't
suggest "lagrum", but instead a bunch of inflected and derived
forms:
lagrumshänvisning
lagrums
lagrumshänvisnings
lagrummets
lagrummen
lagrummet
lagrumshänvisningars
lagrumshänvisningar
lagrumshänvisningarnas
lagrumshänvisningarna
Since these are Swedish entries in the English Wiktionary,
none of these pages get much traffic. Are the completion
suggestions based on traffic stats? In this case, link
count might be a better predictor of the best suggestion,
since all derived forms link back to the basic form.
Not much traffic: 5 page views in 30 days,
http://stats.grok.se/en.d/latest/lagrum
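To illustrate the link-count idea, a query along these lines against the
pagelinks table (hypothetical; I'm not claiming this is how the suggester
works today) would rank "lagrum" first, since all the derived forms link
back to it:

    $dbr = wfGetDB( DB_SLAVE );
    // Count incoming links for every main-namespace title matching
    // the typed prefix, best-linked titles first.
    $res = $dbr->select(
        'pagelinks',
        array( 'pl_title', 'COUNT(*) AS links' ),
        array(
            'pl_namespace' => NS_MAIN,
            'pl_title' . $dbr->buildLike( 'lagru', $dbr->anyString() )
        ),
        __METHOD__,
        array(
            'GROUP BY' => 'pl_title',
            'ORDER BY' => 'links DESC',
            'LIMIT' => 10
        )
    );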
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
Hello everybody,
I am trying to write a SpecialPage with some kind of wizard functionality. On normal access, an HTMLForm with an HTMLSelectField is presented. In the submit callback method of this HTMLForm object, I create another HTMLForm with more HTMLFormFields. So far everything works fine. But when I submit this second HTMLForm, the validation process breaks: instead of running all the validation and filter callbacks of the HTMLFormFields in the second HTMLForm, the framework renders the first HTMLForm with an error.
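My current guess is that I need to carry the wizard step along explicitly, so that on submit the SpecialPage rebuilds the form the data actually belongs to before validation runs. Roughly like this (untested and simplified; assumes the 1.18 context API, and getDescriptorForStep()/onSubmit() are placeholders of mine):

    class SpecialMyWizard extends SpecialPage {
        public function execute( $par ) {
            $this->setHeaders();
            // Which step does this request belong to? Default to step 1.
            $step = $this->getRequest()->getInt( 'wizardstep', 1 );
            $form = new HTMLForm( $this->getDescriptorForStep( $step ),
                $this->getContext() );
            // Carry the step along so that, on submit, validation runs
            // against the fields of the form that was actually shown.
            $form->addHiddenField( 'wizardstep', $step );
            $form->setSubmitCallback( array( $this, 'onSubmit' ) );
            $form->show();
        }
        // getDescriptorForStep() returns the field descriptor array for
        // a given step; onSubmit() would bump 'wizardstep' and redisplay
        // the next form until the final step completes.
    }

Is something like that the intended pattern, or is there a built-in way?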
Does anybody know how to build a wizard with MediaWiki's HTMLForm class? I'm a little stuck here. Help is much appreciated.
Best regards,
Robert Vogel
Social Web Technologies
Software Development
Hallo Welt! - Medienwerkstatt GmbH
__________________________________
Untere Bachgasse 15
93047 Regensburg
Tel. +49 (0) 941 - 66 0 80 - 198
Fax +49 (0) 941 - 66 0 80 - 189
www.hallowelt.biz
vogel(a)hallowelt.biz
Registered office: Regensburg
Registry court: Regensburg
Commercial register: HRB 10467
EU VAT No.: DE 253050833
Managing directors: Anja Ebersbach, Markus Glaser, Dr. Richard Heigl, Radovan Kubani
For those who'd like to look at the submissions that came in through
the October 2011 Coding Challenge, they're now all posted here:
https://www.mediawiki.org/wiki/October_2011_Coding_Challenge/Submissions
There's also an archive with the version of the code at the time of
the submission.
Thanks to everyone who participated! While I would have loved to see
more submissions, there are definitely a few that are worth a closer
look. In the coming weeks we'll be sending another email to participants
with a survey to help us improve future outreach efforts, and of
course we'll be selecting the winners and communicating with
individual developers.
Comments on the submissions are very much welcome on the talk page.
Greg will post more details soon on the judging process.
All the best,
Erik
--
Erik Möller
VP of Engineering and Product Development, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate