So ... how hard would it be for links to images on Commons to be
listed on the Commons image page?
(Presumably the other wiki would have to tell Commons a link had been
put in place or removed.)
- d.
---------- Forwarded message ----------
From: Mark Clements <gmane(a)kennel17.co.uk>
Date: 30-Apr-2007 16:43
Subject: [Wikipedia-l] Commons deletion issue
To: wikipedia-l(a)lists.wikimedia.org
Cc: wikitech-l(a)lists.wikimedia.org
MediaWiki.org has just been a victim of the 'Commons deletion issue'. An
image from the Crystal icon set that was prominently used in many
navigational pages, including the main page, was deleted from Commons and
suddenly MediaWiki.org is full of broken images. The deletion was valid (it
was a duplicate file), but the fact that actions on SiteA can affect
SiteB so radically is really rather worrying...
A technical solution would be for people deleting files on Commons (for this
kind of reason) to replace the page with a redirect, and for these
redirects to be resolved by MediaWiki when linking to a shared file. This
won't solve all problems, of course, but it would help in this kind of
situation (or e.g. when a GIF is replaced by an SVG). Of course, I'm sure
there are many ways this could be abused as well...
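To illustrate the idea (purely a sketch; the function and the redirect lookup are made up, not existing MediaWiki code): when a wiki asks the shared repository for a file, the repository would follow any redirect left behind by the deletion before reporting the file as missing, so a file deleted in favour of a duplicate would keep working on every wiki that still embeds it under the old name.

// Sketch only: resolve file redirects on the shared repo before
// declaring the file missing. $redirects stands in for whatever lookup
// the shared repository would really provide.
function resolveSharedFile( $name, array $redirects, array $existingFiles ) {
    $seen = array();
    // Follow redirects, guarding against loops.
    while ( isset( $redirects[$name] ) && !isset( $seen[$name] ) ) {
        $seen[$name] = true;
        $name = $redirects[$name];
    }
    // Return the final target if it exists on the shared repo, else false.
    return in_array( $name, $existingFiles ) ? $name : false;
}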
I don't know if anyone has any suggestions to deal with this kind of
problem, or even if it has already been recognised as an issue. The
non-technical answer is, of course, to copy the Commons images to the local
wiki, but that kind of defeats the point of Commons, doesn't it?
- Mark Clements (HappyDog)
_______________________________________________
Wikipedia-l mailing list
Wikipedia-l(a)lists.wikimedia.org
http://lists.wikimedia.org/mailman/listinfo/wikipedia-l
On 30/04/07, greg(a)svn.wikimedia.org <greg(a)svn.wikimedia.org> wrote:
> + if ( function_exists ( 'wgWaitForSlaves' ) )
> + wfWaitForSlaves( 5 );
*cough*
That should be if( function_exists( 'wfWaitForSlaves' ) ), no?
Rob Church
Hi,
Why not explain to them during the signup process and in the welcome email that they
are in fact publishing their emails to the general public, and that they can
choose to remain anonymous?
My philosophy about search engines is the more the merrier: let end users decide
how best to access the content.
Thanks,
George / GChriss
On 4/26/07, Sanjay Sodhi <sanjay.sodhi(a)gmail.com> wrote:
> Might I ask why Google was forbidden from indexing the mailing lists in the
> first place?
People complained about their names turning up in Google searches when
they (gasp) posted to a public mailing list. Brion got tired of
dealing with them, so he deindexed the archive.
_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)lists.wikimedia.org
http://lists.wikimedia.org/mailman/listinfo/wikitech-l
Hi,
I've written an XML-style wiki-markup extension.
Each time some text is passed to my function, it contains one namespace
abbreviation. What I want to do is collect all namespace abbreviations
which appear on a page and put them as namespace definitions at the top
of the page.
To get that working, I thought I'd define a global array variable in
LocalSettings.php and put the namespace abbreviations together with the
page ID into that array, so that all abbreviations occurring on the same
page are aggregated together. After parsing is done I would use the
ParserAfterTidy hook to call another function which puts the used
namespace definitions at the top of the page.
Do you think this is a proper way to do it? Or is there a better solution?
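Roughly, I imagine something like this (just a sketch; the variable and function names are placeholders I made up, and I haven't tested it):

// In LocalSettings.php:
$wgMyNamespaceAbbrevs = array();   // page ID => set of abbreviations
$wgHooks['ParserAfterTidy'][] = 'myPrependNamespaceDefs';

// Called from my tag function whenever it encounters an abbreviation:
function myCollectAbbrev( $pageId, $abbrev ) {
    global $wgMyNamespaceAbbrevs;
    // Aggregate abbreviations per page, ignoring duplicates.
    $wgMyNamespaceAbbrevs[$pageId][$abbrev] = true;
}

// ParserAfterTidy handler: prepend the collected definitions to the output.
function myPrependNamespaceDefs( &$parser, &$text ) {
    global $wgMyNamespaceAbbrevs;
    $pageId = $parser->getTitle()->getArticleID();
    if ( !empty( $wgMyNamespaceAbbrevs[$pageId] ) ) {
        $defs = implode( ', ', array_keys( $wgMyNamespaceAbbrevs[$pageId] ) );
        $text = '<div class="ns-definitions">' . htmlspecialchars( $defs ) . '</div>' . $text;
    }
    return true;   // allow other hooks to run
}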
If you think it could work I've got another question.
Is the variable I defined in LocalSettings.php the same for all processes?
So if two (parsing) processes are running, are they both accessing
the same variable, or does each process have its own variable which is
just initialised the same way?
Thank you very much for your answer.
Greets
Christoph
(I already tried to reach the list an hour ago with similar text, so I
hope this doesn't appear twice. :-)
This is a very nice script, but little is known about how it works,
especially for people like me...
I have some questions:
1 - How does it log in to the wiki? I didn't give it any credentials and didn't
set up AdminSettings.php - does it take the info from LocalSettings.php?
2 - How does it add the pages to the wiki? I mean, does it check every
page, compare it, and then update it?
3 - Is it normal that it works for some number of pages, then ends
without any errors but also without finishing the dump? When it starts
again, it goes quickly through to the number it ended at, then becomes
slow, works for a bit, and then ends again...
4 - What happens if you have a database loaded from an old dump and then
start it with a new dump?
Thanks!
Dear ladies and gentlemen,
[Note: Mr. Christoph Hager asked us to place our apologies and questions here:]
We have found out that we have been listed as a live mirror and have therefore been blocked.
We truly apologize for doing it the wrong way - we were unaware of the issue concerning "live mirrors" in 2005, when we
implemented our solution.
We really are sorry for the problems we may have caused you, as we were unaware of the "live mirror" issue when we first
implemented our encyclopedia (autumn 2005), which also used Wikipedia content. We really, really thought that we were fully
compliant.
To our knowledge the GFDL compliance was perfect (now it is not, as we have quickly turned off all links) and we were doing
everything right, but we did not know that we were not allowed to "parse" live those articles that we did not deem appropriate
for our audience but that were linked from other articles we had saved on our server.
We would love to resolve this issue with you and would ask you for a contact with whom we may sort out the best possible solution
for you and us. Of course, we would also delete the entire thing and start building our own solution if you ask us to.
At the moment, we think a Wikimedia-compliant solution might be the following:
- we would select the articles relevant to our project (koordinaten.de/.net) and save them into a database of our own
- we assume at the moment that this will be fewer than 1000 articles in total
- we would fetch changed or new articles once a month to avoid bringing unnecessary traffic to your servers, and would clearly
state the details required by the GFDL rules (as we did) and that our versions may be outdated in comparison to Wikipedia
- we have already configured robots not to follow or index these pages
- we could fetch the articles at times you propose (if there are any less busy times on Wikipedia...)
- we are not sure how to handle the image issue - at the moment we think it would be best if we also cached the images on our
server, but we are not sure about the legal aspects here
- if possible, we would also disable editing of articles in our version, as this function is best left to the original
Wikipedia version
- any articles that we did not cache but that are linked from the cached articles would be handled by a link to Wikipedia
itself, though we would include a page that makes the user aware of the transfer
This way we would avoid having to download the full Wikipedia dumps, as proposed on some pages about live mirrors, and at the
same time reduce any traffic for Wikipedia, while still being able to offer the articles to our audience as additional
information on our site.
Also, through the GFDL compliance we have followed since 2005, all users would be aware that the source of the articles is
Wikipedia and the respective authors.
In this way we would hope to reduce or avoid any harm to Wikipedia, its servers, and the people who make this great portal what
it is today.
As we were sure in 2005 that we understood how to do this in compliance with your information, we would now like to ask for
support or confirmation on how best to do it.
As mentioned, we would also accept your refusal of any use of Wikipedia content, although we would greatly appreciate your
acceptance of our apologies for the problems we may have caused.
We really were unaware, and the last thing we had in mind was to harm Wikipedia!
Best regards,
__
Samater Liban - Consulting - Heret Informatik Service
www.Koordinaten.net / www.Koordinaten.de
Am Kühlen Grund 5 - 65835 Liederbach - Germany
T: +49-(0)700 - 56673462 - M: +49-(0)175 - 544 71 67
An automated run of parserTests.php showed the following failures:
This is MediaWiki version 1.10alpha (r21709).
Reading tests from "maintenance/parserTests.txt"...
Reading tests from "extensions/Cite/citeParserTests.txt"...
Reading tests from "extensions/Poem/poemParserTests.txt"...
18 still FAILING test(s) :(
* URL-encoding in URL functions (single parameter) [Has never passed]
* URL-encoding in URL functions (multiple parameters) [Has never passed]
* Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html) [Has never passed]
* Link containing double-single-quotes '' (bug 4598) [Has never passed]
* message transform: <noinclude> in transcluded template (bug 4926) [Has never passed]
* message transform: <onlyinclude> in transcluded template (bug 4926) [Has never passed]
* BUG 1887, part 2: A <math> with a thumbnail- math enabled [Has never passed]
* HTML bullet list, unclosed tags (bug 5497) [Has never passed]
* HTML ordered list, unclosed tags (bug 5497) [Has never passed]
* HTML nested bullet list, open tags (bug 5497) [Has never passed]
* HTML nested ordered list, open tags (bug 5497) [Has never passed]
* Fuzz testing: image with bogus manual thumbnail [Introduced between 08-Apr-2007 07:15:22, 1.10alpha (r21099) and 25-Apr-2007 07:15:46, 1.10alpha (r21547)]
* Inline HTML vs wiki block nesting [Has never passed]
* Mixing markup for italics and bold [Has never passed]
* dt/dd/dl test [Has never passed]
* Images with the "|" character in the comment [Has never passed]
* Parents of subpages, two levels up, without trailing slash or name. [Has never passed]
* Parents of subpages, two levels up, with lots of extra trailing slashes. [Has never passed]
Passed 493 of 511 tests (96.48%)... 18 tests failed!
Hi all,
I hope that this isn't too out of left field, especially coming from a
relative unknown. I do however think that this is a conceptual problem in
the Parser class which ought to receive some attention eventually.
You see, it seems to me that the Parser is in fact more Renderer than it is
parser, and most of the options in ParserOptions are in fact Rendering
options.
I noticed this while working on a tag extension for events. My goal was to
parse some wikitext to obtain an array of PHP objects of a class Event,
which corresponds to an "event" tag I've defined. Well, I can sort of
pull this off by creating a special parser and adding a parser hook which
builds an object for each tag and sticks it into a global. Of course,
to do this I need to initialize the parser with a ParserOptions object, and
of course the parser goes on to render HTML which I don't actually want,
since the goal is just to get the intermediate stage.
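For concreteness, my workaround looks roughly like this (a simplified sketch; the Event class, the global, and the function names are mine, not anything in core):

// Simplified sketch of the workaround: abuse a tag hook to collect objects.
class Event {
    public $title, $date;
    function __construct( $title, $date ) {
        $this->title = $title;
        $this->date = $date;
    }
}

$wgCollectedEvents = array();   // the parsed objects end up here

$wgExtensionFunctions[] = 'setupEventTag';
function setupEventTag() {
    global $wgParser;
    $wgParser->setHook( 'event', 'renderEventTag' );
}

function renderEventTag( $input, $args, $parser ) {
    global $wgCollectedEvents;
    // The side effect (collecting the object) is what I actually want...
    $title = isset( $args['title'] ) ? $args['title'] : '';
    $date  = isset( $args['date'] )  ? $args['date']  : '';
    $wgCollectedEvents[] = new Event( $title, $date );
    // ...but the hook still has to return HTML, which I then throw away.
    return '';
}

// To get at the objects I still have to drive a full render:
//   $parser->parse( $wikitext, $pageTitle, new ParserOptions() );
// and only afterwards read $wgCollectedEvents.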
It occurs to me that separating the Parse stage from the Render stage could
have some other useful effects, like making it easier to add different
renderers, and making it a bit easier to start on the Parser
rationalizations that people have been talking about.
Of course, any such separation will mean having to specify at least some of
the parser and renderer behaviour, though maybe not all of it. At any rate,
an explicit separation should make doing so easier.
So my question is this: does separating the two functions seem like a
worthwhile task to anybody else here?
Thanks for your time,
-mark
--
=================================================================
-- mark at geekhive dot net --
An automated run of parserTests.php showed the following failures:
This is MediaWiki version 1.10alpha (r21681).
Reading tests from "maintenance/parserTests.txt"...
Reading tests from "extensions/Cite/citeParserTests.txt"...
Reading tests from "extensions/Poem/poemParserTests.txt"...
18 still FAILING test(s) :(
* URL-encoding in URL functions (single parameter) [Has never passed]
* URL-encoding in URL functions (multiple parameters) [Has never passed]
* Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html) [Has never passed]
* Link containing double-single-quotes '' (bug 4598) [Has never passed]
* message transform: <noinclude> in transcluded template (bug 4926) [Has never passed]
* message transform: <onlyinclude> in transcluded template (bug 4926) [Has never passed]
* BUG 1887, part 2: A <math> with a thumbnail- math enabled [Has never passed]
* HTML bullet list, unclosed tags (bug 5497) [Has never passed]
* HTML ordered list, unclosed tags (bug 5497) [Has never passed]
* HTML nested bullet list, open tags (bug 5497) [Has never passed]
* HTML nested ordered list, open tags (bug 5497) [Has never passed]
* Fuzz testing: image with bogus manual thumbnail [Introduced between 08-Apr-2007 07:15:22, 1.10alpha (r21099) and 25-Apr-2007 07:15:46, 1.10alpha (r21547)]
* Inline HTML vs wiki block nesting [Has never passed]
* Mixing markup for italics and bold [Has never passed]
* dt/dd/dl test [Has never passed]
* Images with the "|" character in the comment [Has never passed]
* Parents of subpages, two levels up, without trailing slash or name. [Has never passed]
* Parents of subpages, two levels up, with lots of extra trailing slashes. [Has never passed]
Passed 493 of 511 tests (96.48%)... 18 tests failed!
Hi,
I am doing some analysis and need to convert article URLs to article
titles. For example,
http://en.wikipedia.org/wiki/Question:_Are_We_Not_Men%3F_Answer:_We_Are_Dev…
converts to
Question: Are We Not Men? Answer: We Are Devo!
I've been doing some searching around but haven't found a specific
procedure documented anywhere. It looks to me like standard URL
unescaping followed by replacing underscores with spaces, but I wonder
if there is more.
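For what it's worth, this is the conversion I have in mind so far (just a
sketch of my guess; the function name is mine):

// Guess at the procedure: strip the /wiki/ prefix, percent-decode the
// path, then turn underscores back into spaces.
function urlToTitle( $url ) {
    $path = parse_url( $url, PHP_URL_PATH );      // e.g. "/wiki/Some_Title%3F"
    $name = preg_replace( '!^/wiki/!', '', $path );
    $name = rawurldecode( $name );                // %3F -> "?", %26 -> "&", ...
    return str_replace( '_', ' ', $name );
}

// urlToTitle( 'http://en.wikipedia.org/wiki/Question:_Are_We_Not_Men%3F_Answer:_We_Are_Devo!' )
// should give "Question: Are We Not Men? Answer: We Are Devo!"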
Pointers to documentation or an explanation would be most appreciated.
Thanks,
Reid