Hi,
I was involved in an open source project that was usurped by one of the main developers for the sole purpose of making money, and that project continues now to take advantage of the community to increase that developer's profit. I never would have thought such a thing was possible until I saw it happen. If that developer hadn't acted greedily, there would now be open source hardware for radio transceivers of all types, but instead there is only open source software for radio of all types. I find it a shame, and when I was working on that project I could *feel* it being usurped! I unfortunately may be paranoid, as I feel the same thing here with the wikimedia foundation usurping wikipedia. If you don't believe me, just consider that it is a very gradual process, like getting people used to not being able to download image dumps anymore, and ignoring ALL requests to restore this functionality. Also failing to provide full history backups of the flagship wiki. These two facts allow the wikimedia foundation to maintain control of intellectual property that they did not create. If you want the wikimedia foundation to respect you as volunteers, you will have to DEMAND respect by making sure that they never usurp the project. I think the best way to do this is to make sure we can all download an up-to-date full history of the Wikipedias, with images, so that a fork is possible at any time. Sure, it may be paranoid, but trust me, it is worth being paranoid about a project as important as wikipedia. I have been in situations like this before, and I wish I had acted sooner even if I was wrong! I wouldn't even be speaking now except for reading the heartfelt words of volunteers in this thread who are unhappy with how the wikimedia foundation is run. We need to organize to get the wikimedia foundation to release image tarballs; so far they are simply ignoring multiple requests to do so.
cheers, Jamie
On 9/8/10 2:26 PM, Domas Mituzas wrote:
Hi!!!!!
... there would now be open source hardware ....
Do you need open source "Enter" key?
Open source hardware isn't an inherent absurdity... it usually means that the hardware designs or other precursors (such as code that can generate circuit designs) are freely available.
http://en.wikipedia.org/wiki/Open-source_hardware
On Wed, Sep 8, 2010 at 8:32 PM, Neil Kandalgaonkar neilk@wikimedia.org wrote:
Open source hardware isn't an inherent absurdity... it usually means that the hardware designs or other precursors (such as code that can generate circuit designs) are freely available.
http://en.wikipedia.org/wiki/Open-source_hardware
-- Neil Kandalgaonkar neilk@wikimedia.org
I think the point was not about hardware, but the OP's inability to include a single linebreak in the e-mail.
-Chad
On 9/8/10 5:35 PM, Chad wrote:
I think the point was not about hardware, but the OP's inability to include a single linebreak in the e-mail.
I need an open source irony detector.
2010/9/9 Neil Kandalgaonkar neilk@wikimedia.org:
I need an open source irony detector.
Hint: it starts with this code:
if ( $name === 'domas' ) return true;
Roan Kattouw (Catrope)
On Thu, Sep 9, 2010 at 1:58 PM, Roan Kattouw roan.kattouw@gmail.com wrote:
2010/9/9 Neil Kandalgaonkar neilk@wikimedia.org:
I need an open source irony detector.
Hint: it starts with this code:
if ( $name === 'domas' ) return true;
fixme: needs curly braces.
-Chad
On 8 September 2010 22:15, Jamie Morken jmorken@shaw.ca wrote:
I was involved in an open source project that was usurped by one of the main developers for the sole purpose of making money, and that project continues now to take advantage of the community to increase that developer's profit. [...] These two facts allow the wikimedia foundation to maintain control of intellectual property that they did not create.
This is something that's been a problem for years now.
I do not think there is any sort of deliberate intent. However, keeping the data close is a way to proprietise a wiki even if it's free content, so making it easy to fork is an important attitude to maintain.
I realise this is difficult when the devs have to work as hard as possible just to keep everything from falling over ...
- d.
2010/9/8 David Gerard dgerard@gmail.com:
This is something that's been a problem for years now.
I do not think there is any sort of deliberate intent. However, keeping the data close is a way to proprietise a wiki even if it's free content, so making it easy to fork is an important attitude to maintain.
I realise this is difficult when the devs have to work as hard as possible just to keep everything from falling over ...
That's right, there is no deliberate intent and it's really a lack of people on the ops side (dumps are an ops thing, not a dev thing, and devs generally can't do much to help here). WMF is also not "ignoring" requests to provide image dumps, it just hasn't gotten around to setting them up yet; presumably, this is because text dumps aren't running smoothly yet (I'd appreciate a reply from Ariel Glenn to get the facts here, but since Ariel is out of the country I may or may not get my wish).
It's true that the dumps situation is still a problem, but you (OP) should assume some good faith here rather than accusing the WMF of ignoring you, not earning the community's trust or even trying to usurp Wikipedia. You're right, you are being paranoid.
Roan Kattouw (Catrope)
On Thu, 09-09-2010 at 20:08 +0200, Roan Kattouw wrote:
That's right, there is no deliberate intent and it's really a lack of people on the ops side (dumps are an ops thing, not a dev thing, and devs generally can't do much to help here). WMF is also not "ignoring" requests to provide image dumps, it just hasn't gotten around to setting them up yet; presumably, this is because text dumps aren't running smoothly yet (I'd appreciate a reply from Ariel Glenn to get the facts here, but since Ariel is out of the country I may or may not get my wish).
It's true that the dumps situation is still a problem, but you (OP) should assume some good faith here rather than accusing the WMF of ignoring you, not earning the community's trust or even trying to usurp Wikipedia. You're right, you are being paranoid.
I am not thinking about image dumps at all. I am concentrating on the regular XML dumps, which have been in sorry shape for various reasons ever since I started as a volunteer in the community adding content. (Note that I am not laying blame for the sorry state; that's not the point.)
For the rest of September I'll be fooling with these parallel runs until I get something that seems to perform well. For the next 5-6 days I'm out of action on them, but after that it's back to the grind. Today, though I should have been working on something else, I spent it crunching some numbers and trying to figure out what more optimal chunk sizes ought to be. Since the earlier articles have by far the bulk of the revisions, it turns out I need to write some code to implement that. Anyways, either I'm (mostly) hard at work on this problem or I'm secretly plotting to run off to Bermuda with all the old copies of wikipedia and retire.... :-P
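For the curious, the balancing idea above can be sketched in a few lines. This is only an illustration of the approach, not the actual dump code; the per-page revision counts and the chunk count in the example are made up:

```python
def balanced_chunks(rev_counts, n_chunks):
    """Split an ordered list of per-page revision counts into n_chunks
    contiguous page ranges whose revision totals are roughly equal, so
    revision-heavy early pages don't all land in the first chunk."""
    target = sum(rev_counts) / n_chunks
    bounds, start, acc = [], 0, 0
    for i, count in enumerate(rev_counts):
        acc += count
        if acc >= target and len(bounds) < n_chunks - 1:
            bounds.append((start, i + 1))  # half-open page-index range
            start, acc = i + 1, 0
    bounds.append((start, len(rev_counts)))
    return bounds

# e.g. one very busy page followed by quiet ones:
# balanced_chunks([100, 50, 10, 10, 10, 10], 2) -> [(0, 1), (1, 6)]
```

The real job would read revision counts from the database rather than a list, but the balancing pass would look much the same.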
Off of the dumps page on wikitech http://wikitech.wikimedia.org/view/Dumps there's a link to a page where I'm starting to keep updates, now that there is an actual run going. I may shoot this run and restart this piece in a few days, but what the heck, at least there's some information there. Also there's a link to a wish list for the XML dumps; if the image dumps aren't listed there please add them. I'm not going to try to think about how feasible or not they might be right now though, brain too full.
Happy trails,
Ariel
I had no idea that usurping an open source project was as easy as not providing full history back-ups and image dumps. And here I was trying to replace all the board members with proxies from Wikia! What a waste of time ;)
Ryan Kaldari
On 9/8/10 2:15 PM, Jamie Morken wrote:
Hi,
I was involved in an open source project that was usurped by one of the main developers for the sole purpose of making money [...] We need to organize to get the wikimedia foundation to release image tarballs; so far they are simply ignoring multiple requests to do so.
cheers, Jamie
Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
On 8 September 2010 23:00, Ryan Kaldari rkaldari@wikimedia.org wrote:
I had no idea that usurping an open source project was as easy as not providing full history back-ups and image dumps. And here I was trying to replace all the board members with proxies from Wikia! What a waste of time ;)
Neglecting to make it possible to get the data out is an effective way to proprietise a wiki. Making the backups work is important.
- d.
Hi all;
I think that Jamie has started an important topic. I don't think that the WMF is going to usurp Wikipedia and the sister projects now or in the future, but it is statistically possible. If we want to protect ourselves, the human knowledge, and our work from this hypothetical scenario, we need complete full dumps frequently. But this scenario is a malicious one, and I think that there are many more dangerous possibilities, and unfortunately, they are common.
For example, small or massive loss of data due to natural disasters, cracker attacks, stolen passwords, hardware and software bugs, suddenly crazy sysops, and _human errors_. Is the WMF ready for that?
A long time ago I searched for info about that, but I only found these links[1][2]. Recently, I have been concerned about this again. Most of the Wiki[mp]edia projects are small, and their full backups are updated every week[3] and can be stored anywhere, but the largest ones, like English Wikipedia, get outdated soon[4] (right now, the dump is 200+ days old).
I don't know much about the infrastructure or how WMF servers are distributed around the world, so I want to ask a simple question:
In the case of a complete disaster in the "main" servers, will the WMF be able to restore all the Wiki[mp]edia content using backups?
We got a terrible fright when 3000 images were accidentally deleted in 2008[5], and I think that not all of them were recovered.
When people ask about image dumps the most common reply is: "Are you going to store 7 TB (Commons)?" I can't store that at home, of course, but I'm sure that a few universities or other entities around the world can, not only for backup purposes but for research too (in full resolution or as thumbs).
Also, I think that we need to start mirroring Wiki[mp]edia dumps to other servers around the globe, as the common GNU/Linux ISO mirrors do. The Library of Congress said some time ago that they are going to save a copy of all the tweets sent to Twitter.[6] When are they going to save a copy of Wiki[mp]edia? I hope we have learnt a bit since the Library of Alexandria was destroyed.
I don't want an error to move us back to January 15, 2001.
Regards, emijrp
[1] http://wikitech.wikimedia.org/view/Disaster_Recovery
[2] http://wikitech.wikimedia.org/view/Offsite_Backups
[3] http://download.wikimedia.org/
[4] http://en.wikipedia.org/wiki/User:Emijrp/Wikipedia_Archive
[5] http://lists.wikimedia.org/pipermail/wikitech-l/2008-September/039265.html
[6] http://www.wired.com/epicenter/2010/04/loc-google-twitter/
2010/9/8 Jamie Morken jmorken@shaw.ca
Hi,
I was involved in an open source project that was usurped by one of the main developers for the sole purpose of making money [...] We need to organize to get the wikimedia foundation to release image tarballs; so far they are simply ignoring multiple requests to do so.
cheers, Jamie
On 13 September 2010 21:14, emijrp emijrp@gmail.com wrote:
I think that Jamie has started an important topic. [...] For example, small or massive loss of data due to natural disasters, cracker attacks, stolen passwords, hardware and software bugs, suddenly crazy sysops, and _human errors_. Is the WMF ready for that?
Shit happening is by far more likely than malice. Denise considers job #1 the elimination of existing single points of failure, so that's something. And she knows her stuff. And has the proper sysadminly horror at the notion of systems susceptible to such.
- d.
Dear All,
May I ask how I can get the list of fully-protected articles monthly? I can get the current one by searching for the lock link. Are there some tools for this?
thanks,
Zeyi
2010/9/14 zh509@york.ac.uk:
Dear All,
May I ask how I can get the list of fully-protected articles monthly? I can get the current one by searching for the lock link. Are there some tools for this?
http://en.wikipedia.org/w/api.php?action=query&list=allpages&apprtyp... returns the first 500 (or 5,000 if you're a bot or sysop) pages in the main namespace that only sysops can edit (i.e. that are fully protected). If you're not a privileged user and only get 500 entries, you can use the information in the query-continue tag to get the next 500.
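To illustrate, here is a rough sketch of following that continuation from a script. The parameter names follow the allpages module as described above; the injected fetch function exists only so the paging logic can be exercised without the network, so treat the details as assumptions rather than a reference implementation:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "http://en.wikipedia.org/w/api.php"

def fetch_live(params):
    """Perform one API request and decode the JSON response."""
    with urlopen(API + "?" + urlencode(params)) as resp:
        return json.load(resp)

def fully_protected_titles(fetch=fetch_live, limit=500):
    """Page through allpages, collecting fully protected main-namespace
    titles, until no query-continue element is returned."""
    params = {
        "action": "query", "list": "allpages", "apnamespace": "0",
        "apprtype": "edit", "apprlevel": "sysop",
        "aplimit": str(limit), "format": "json",
    }
    titles = []
    while True:
        data = fetch(dict(params))
        titles += [p["title"] for p in data["query"]["allpages"]]
        cont = data.get("query-continue", {}).get("allpages")
        if not cont:
            return titles
        params.update(cont)  # carries apfrom into the next request
```

Against the live API the same loop would just use fetch_live, and a privileged account could raise aplimit to 5000.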
Roan Kattouw (Catrope)
On Sep 14 2010, Roan Kattouw wrote:
http://en.wikipedia.org/w/api.php?action=query&list=allpages&apprtyp... returns the first 500 (or 5,000 if you're a bot or sysop) pages in the main namespace that only sysops can edit (i.e. that are fully protected). If you're not a privileged user and only get 500 entries, you can use the information in the query-continue tag to get the next 500. Roan Kattouw (Catrope)
Thanks for this!
I am looking for changes in the fully-protected articles over time. Can I get the data with a time or date attached? Is that possible? Thanks,
Zeyi
If you want, we have a toolserver database query service; generating such data should be easy. If you file a request at https://jira.toolserver.org/browse/DBQ you should be able to get the data you need.
Δ
On Tue, Sep 14, 2010 at 8:59 AM, zh509@york.ac.uk wrote:
I am looking for changes in the fully-protected articles over time. Can I get the data with a time or date attached? Is that possible?
Better yet, http://toolserver.org/~betacommand/reports/sysopprotecton.txt which is updated daily.
Δ
On Tue, Sep 14, 2010 at 9:08 AM, John Doe phoenixoverride@gmail.com wrote:
If you want, we have a toolserver database query service; generating such data should be easy. If you file a request at https://jira.toolserver.org/browse/DBQ you should be able to get the data you need.
Hi, the link doesn't seem to work.
best,
Zeyi
On Sep 14 2010, John Doe wrote:
Better yet, http://toolserver.org/~betacommand/reports/sysopprotecton.txt which is updated daily.
http://toolserver.org/~betacommand/reports/sysopprotecton.txt
try that
On Tue, Sep 14, 2010 at 1:35 PM, zh509@york.ac.uk wrote:
Hi, the link doesn't seem to work.
May I know the collection date of this data?
Can I have the number of fully-protected articles by month? Not all pages, but only article pages? Is that possible? Thanks!
Zeyi
On Sep 14 2010, John Doe wrote:
http://toolserver.org/~betacommand/reports/sysopprotecton.txt
try that
http://toolserver.org/~betacommand/reports/protection/en/ is a list of all such articles that are non-redirects. There should be one file a day from now on.
On Thu, Sep 16, 2010 at 8:57 AM, zh509@york.ac.uk wrote:
May I know the collection date of this data? Can I have the number of fully-protected articles by month? Not all pages, but only article pages? Is that possible?
Also, I think that we need to start mirroring Wiki[mp]edia dumps to other servers around the globe, as the common GNU/Linux ISO mirrors do. The Library of Congress said some time ago that they are going to save a copy of all the tweets sent to Twitter.[6] When are they going to save a copy of Wiki[mp]edia? I hope we have learnt a bit since the Library of Alexandria was destroyed.
They've actually just reached out to us to discuss archiving all of the Wikimedia projects :)
Discussions are in their early stages but I'll happily update as I know more.
--tomasz
Hi Tomasz,
That is great news, congrats! I am happy they are spending time to archive wikis; I am also praying that my post ends up in the correct thread.
cheers, Jamie
----- Original Message -----
From: Tomasz Finc tfinc@wikimedia.org
Date: Monday, September 13, 2010 3:09 pm
Subject: Re: [Wikitech-l] Community vs. centralized development
To: Wikimedia developers wikitech-l@lists.wikimedia.org
Cc: Wikimedia Foundation Mailing List foundation-l@lists.wikimedia.org
Also, I think that we need to start mirroring Wiki[mp]edia dumps to other servers around the globe, as the common GNU/Linux ISO mirrors do. The Library of Congress said some time ago that they are going to save a copy of all the tweets sent to Twitter.[6] When are they going to save a copy of Wiki[mp]edia? I hope we have learnt a bit since the Library of Alexandria was destroyed.
They've actually just reached out to us to discuss archiving all of the Wikimedia projects :)
Discussions are in their early stages but I'll happily update as I know more.
--tomasz