tools.wmflabs.org is supposed to be the replacement for the Toolserver,
which the WMF is essentially forcibly shutting down. I started the
migration several months ago but got fed up with the difficulties and
stopped. In the last month I have moved most of my tools to Labs, and I
have discovered that there are some serious issues that need to be addressed.
The Toolserver was a fairly stable environment: I checked the primary host I
connect to, and it has been up for four months of continuous operation.
Tools, however, is being treated like the red-headed stepchild. According to
the people in charge of Labs, they don't care about ensuring stability, and
if stuff breaks, oh well, they'll get to it when they can. As they put it:
Tools is not a production service, so "we really don't give a <>; if it
breaks, it breaks, we will fix it when we can, but since it's not production
it's not a priority."
One good example of this is that a tool cannot connect to
tools.wmflabs.org due to a host configuration issue. This is a known
bug, and we have a way of fixing it, but it's still not implemented.
Given that Tools is replacing the Toolserver, I would expect Labs to be,
at worst, just as good. However, what I am seeing and hearing is that the
WMF is throwing away one of its best assets and driving away a lot of
developers through its management of Tools.
I do want to give Coren credit as he is doing what he can to support the
migration.
My question is: why has the WMF decided to degrade the environment where
tool developers design and host their tools (quite a few of which are
long-term, stable projects)? And what can we do to remedy this?
John
Hello,
would it be possible to add the Google PageSpeed module to Wikipedia's
Apache servers, for performance reasons?
I'd like to discuss this possibility with you.
Thanks
Luke
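For context, on a stock Apache installation the PageSpeed module is enabled with a LoadModule line plus a handful of directives. The sketch below is a generic example of such a configuration, not anything derived from Wikimedia's actual setup; the module path, cache path, and filter choices are assumptions:

```apache
# Generic mod_pagespeed configuration sketch (not Wikimedia's config).
# Requires Google's mod_pagespeed package to be installed first.
LoadModule pagespeed_module /usr/lib/apache2/modules/mod_pagespeed.so

<IfModule pagespeed_module>
    ModPagespeed on
    ModPagespeedFileCachePath "/var/cache/mod_pagespeed/"
    # Enable a conservative set of rewriters.
    ModPagespeedEnableFilters combine_css,extend_cache
</IfModule>
```

Whether this makes sense at Wikipedia's scale, behind its caching layers, is a separate question from whether the module itself works.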
Heya folks :)
Quite a while ago Ubuntu did a paper cuts initiative
(https://wiki.ubuntu.com/OneHundredPaperCuts). Basically it was about
collecting and fixing small bugs/annoyances that were easy to fix but
had a large impact on how pleasant it is to use the product. We're
going to do something similar for Wikidata now.
I'd like to get a bugzilla keyword for this so we can easily tag those
bugs. For that I need at least one other team however who'd like to
join such an initiative. Is there anyone who's interested in this?
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Technical Projects
Wikimedia Deutschland e.V.
Obentrautstr. 72
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 Nz. Recognized as charitable by
the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.
If we are creating an AI app that needs to get information, would we be
allowed to crawl Wikipedia for it? The app would probably be a search
query of some kind that gives information back to the user, and one of
the sites used is Wikipedia. The app would use parts of Wikipedia's
articles, send that info back to the user, and give them a link to click
if they want to visit the full article. Each user can only query/search
once per second; however, the collective user base might query Wikipedia
more often, so the crawler may hit Wikipedia more than once per second in
aggregate. Would this be allowed?
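(On the aggregate-rate point above: a crawler usually enforces one global throttle shared by all users rather than relying on per-user limits. Below is a minimal illustrative sketch in Python; the one-second figure comes from the question itself, and nothing here reflects official Wikimedia policy.)

```python
import threading
import time

class GlobalRateLimiter:
    """Serialize outbound requests so the whole user base stays under a
    single global rate, no matter how many users are active at once."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval  # seconds between any two requests
        self.lock = threading.Lock()
        self.last_request = 0.0

    def wait_turn(self):
        """Block until at least min_interval has passed since the
        previous request made by *any* user, then claim the slot."""
        with self.lock:
            now = time.monotonic()
            delay = self.last_request + self.min_interval - now
            if delay > 0:
                time.sleep(delay)
            self.last_request = time.monotonic()

# Every worker thread calls limiter.wait_turn() before fetching a page.
limiter = GlobalRateLimiter(min_interval=1.0)
```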
Hello!
I recently closed a huge [[w:fa:ویکیپدیا:نظرخواهی برای استفاده از اساسال
برای ویکیپدیای فارسی|RfC]] about using SSL on Persian Wikipedia, which
is mainly run by Iranian users.
Iran is the number one target of the PRISM surveillance program (further
information: https://bit.ly/17N57rx), and a long history of arresting,
torturing, and murdering internet activists (case in point: [[w:en:Sattar
Beheshti|Sattar Beheshti]]), or even family members of internet activists
(case in point: Yashar Khameneh), leaves no doubt about the Iranian
government's intention to surveil and control the Iranian people. You can
find a very long list at human rights organizations (breaching the privacy
of Iranian people is one of the very few things both the Iranian and US
governments agree on), so we are sure we need to switch to SSL. But using
SSL in Iran has its own problems. Iranian authorities block the SSL IPs of
some sites that they have blocked, completely or partially, in non-SSL
mode; these sites include Facebook, Twitter, and, until recently,
Wikipedia. Wikipedia is not blocked in Iran, but about 400 articles of
Persian Wikipedia (and some other sites, like the whole of Hebrew
Wikipedia) are blocked; for the complete list of these articles, which are
mainly about politics, religion, or sexology, see [[w:fa:رده:صفحههای فیلترشده در
ایران]]. Access to Wikipedia over SSL has been open since August 25.
Internet speed in Iran is among the slowest in the world. By itself that
is not a big problem for loading Wikipedia pages, but the variance of
internet speed is very high, and we will fail in our main goal of
providing free knowledge to people who don't have easy access to it,
people like middle or elementary school students living in the
countryside. The problem of internet access becomes even worse when the
government throttles SSL so heavily that opening a simple page takes about
four times longer over SSL. This is mainly done to discourage people from
using SSL, or we can even suspect an intention to decrypt SSL traffic. The
forged SSL certificate attack that happened two years ago (further
information: https://bit.ly/1dXl5Ub) shows how much the government desires
to control people. Another problem: sometimes, especially when there is a
political or general crisis in the country (which happens three or four
times every year), access to any site outside of plain HTTP becomes
impossible, and all other protocols, even IRC, get blocked out of nowhere.
The community of Persian Wikipedia (readers and writers) is strongly
against enforced SSL because of the issues I described above; on the
other hand, they worry about privacy and about not letting governments
breach it.
Here are my suggestions and requests, based on what Persian Wikipedia and
Iranian Wikimedians in general agree on:
*It's very important to let people choose their protocol. There is
consensus in the community on SSL as the default for logged-in users, but
they really insist on making the protocol an optional choice, and it
seems this is not enabled on WMF projects except mediawiki.org (in
[[m:HTTPS]] you can find documentation about disabling SSL, but as far as
I checked it's not possible, and I couldn't find the option in my
preferences; maybe it's a bug).
*In order to encourage people to use SSL and increase their safety when
editing Wikipedia, we need to speed up the loading of wiki pages. I
suggest that web designers and other experts come and help optimize
Wikipedia, especially the Persian-language projects. We warmly welcome
any ideas about increasing safety.
*Based on past experience, the community thinks it very probable that SSL
access to Wikipedia in Iran will be blocked several times; maybe no
single block will last more than a week, but it will happen. So we need
to be very flexible and fast in such cases, and I hereby ask the people
in charge of SSL at the WMF to be prepared to switch from SSL to non-SSL
and back again easily and rapidly whenever SSL is blocked in Iran.
*The lack of documentation on safety issues puts Iranian lives in danger.
To give an example: insisting on SSL is good, but because of the speed
and other problems of SSL, some people use a proxy even while using SSL,
since that is what they do to bypass HTTP-layer blocking and speed up
loading. This is very dangerous, because the data is only encrypted as
far as the proxy machine: the proxy operator, and thus the government,
can read everything, while the user has an illusion of safety. SSL in
this case becomes harmful rather than useful. We need to complete the
documentation and make people aware of these safety issues.
I'm sending this mail to wikitech-l because I think Iranian people need
the help of technical people who can do something about the SSL issue.
Best
--
Amir
Hi,
It has been announced several times that the bots project is deprecated,
and now it is finally time to get rid of the most resource-expensive
instances: the database instances.
There are two SQL servers, with the hostnames:
bots-sql2
bots-bsql01
Both of these servers are active and currently being used by a number
of services (bots, including ClueBot).
Please migrate the databases to tools project ASAP. If you need any
assistance with this, please let me or Coren know.
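For a typical small database, the migration amounts to a dump from the old host and a restore into the tools host. A generic sketch follows; the destination hostname, database name, and credentials are placeholders, not the actual tools-project values:

```shell
#!/bin/sh
# Generic dump-and-restore sketch. "tools-db", the database name, and
# the credentials below are placeholders for your actual values.
SRC_HOST="bots-sql2"      # or bots-bsql01
DST_HOST="tools-db"       # assumed hostname of the tools DB server
DB="mybot"                # your database

# Dump from the old bots-project server, then load into the tools server.
mysqldump --host="$SRC_HOST" --user="$DB_USER" -p "$DB" > "$DB.sql"
mysql     --host="$DST_HOST" --user="$DB_USER" -p "$DB" < "$DB.sql"
```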
I would like to schedule the removal of these servers soon, preferably
in two weeks (Tuesday, Sep 24).
If there is anyone who can't migrate their database by then, please
let me know so that we can postpone the removal.
So as of now, both servers are going to be shut down on Monday, Sep 16,
and permanently deleted (the contents will be backed up) on Tuesday,
Sep 24.
If you need more time, no problem, but please let me know :-)
Thank you
Hey Guys,
I am having trouble using the ArticleDeleteComplete hook. I wish to get
the content of the deleted revision.
I called $article->getContent(), but it returns null. How can I get the
content of the deleted revision?
Cheers,
Anubhav
Anubhav Agarwal | 4th Year | Computer Science & Engineering | IIT Roorkee
Hi all,
I've been working on an api module/extension to extract metadata from
commons image description pages, and display it in the API. I know
this is an area that various people have thought about from time to
time, so I thought it would be of interest to this list.
The specific goals I have:
*Should be usable for a light box type feature ("MediaViewer") that
needs to display information like Author and license. [1] (This is
primary use case)
*Should be generic where possible, so that better metadata access can
be had by all wikis, even if they don't follow commons conventions.
For example, should generically support exif data from files where
possible/appropriate, overriding the exif data when more reliable
sources of information are available.
*Should be compatible with a future wikidata on commons thing. [2]
**In particular, I want to read existing description page formatting,
not try to force people to use new parser functions or formatting
conventions, since those may become outdated in the near future when
Wikidata arrives
**Hopefully Wikidata would be able to hook into my system (while at the
same time providing its own native interface)
*Since descriptions on Commons are formatted data (wikilinks are
especially common), it needs to be able to output formatted data. I
think HTML is the easiest format to use, much easier than, say, wikitext
(though this is perhaps debatable)
What I've come up with is a new API metadata property (currently
pending review in Gerrit) called extmetadata that extensions can hook
into. [3] [4] [5] Additionally, I developed an extension for reading
information from Commons description pages. [6] It combines information
from the file's embedded metadata and from any extensions. For example,
if the Exif data has an author specified ("Artist" in Exif speak) and
the Commons description page also has one specified, the description
page takes precedence, under the assumption that it's more reliable. The
module outputs HTML, since that's the type of data stored in the image
description page (except that it uses full URLs instead of local ones).
The downside is that, in order to effectively get metadata out of
Commons given current practises, one essentially has to screen-scrape
and do slightly ugly things (look ahead for a brighter tomorrow with
Wikidata!).
As an example, given a query like
api.php?action=query&prop=imageinfo&iiprop=extmetadata&titles=File:Schwedenfeuer_Detail_04.JPG&format=xmlfm&iiextmetadatalanguage=en
it would produce something like [7]
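A query like the one above could be built from a script along these lines. This is a sketch against the proposed, still pending-review extmetadata property, so the parameter names are taken from the example URL rather than from released MediaWiki:

```python
# Sketch of a client for the proposed iiprop=extmetadata property.
# Parameter names mirror the example query; the property itself is
# still pending review, so treat this as illustrative only.
from urllib.parse import urlencode

def build_extmetadata_url(title, lang="en",
                          api="https://commons.wikimedia.org/w/api.php"):
    """Build a query URL asking for the extmetadata of one file."""
    params = {
        "action": "query",
        "prop": "imageinfo",
        "iiprop": "extmetadata",
        "titles": title,
        "format": "json",
        "iiextmetadatalanguage": lang,
    }
    return api + "?" + urlencode(params)

print(build_extmetadata_url("File:Schwedenfeuer_Detail_04.JPG"))
```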
So thoughts? /me eagerly awaits mail tearing my plans apart :)
[1] https://www.mediawiki.org/wiki/Multimedia/Media_Viewer
[2] https://commons.wikimedia.org/wiki/Commons:Wikidata_for_media_info
[3] https://gerrit.wikimedia.org/r/#/c/81598/
[4] https://gerrit.wikimedia.org/r/#/c/78162/
[5] https://gerrit.wikimedia.org/r/#/c/78926/
[6] https://gerrit.wikimedia.org/r/#/c/80403/
[7] http://pastebin.com/yh5286iR
--
Bawolff
Hi, I'd like to present a new RFC for your consideration. You can find it
at https://www.mediawiki.org/wiki/Requests_for_comment/DataStore
Briefly, it proposes a new key-value storage for MediaWiki intended to get
rid of some small tables and create a bridge to NoSQL (where it really
makes sense, not where it's trendy).
--
Best regards,
Max Semenik ([[User:MaxSem]])
When I try to install MediaWiki on my local server (XAMPP) I get this
error. Please help me fix this ASAP! Thanks.
Here's the error:
""" Connected to mysql 5.1.40; You are using MySQL 4.1 server, but
PHP is linked to old client libraries; if you have trouble with
authentication, see http://dev.mysql.com/doc/mysql/en/old-client.html
for help """
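The MySQL manual page cited in the error describes the standard fixes: relink PHP against a newer MySQL client library, or reset the wiki account's password using the old pre-4.1 hash so the old library can still authenticate. A sketch of the latter, with a placeholder account name and password:

```sql
-- Placeholder account and password; run as a MySQL admin user.
-- Re-hash the password in the pre-4.1 format understood by the old
-- client library that PHP is linked against.
SET PASSWORD FOR 'wikiuser'@'localhost' = OLD_PASSWORD('secret');
```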