An automated run of parserTests.php showed the following failures:
Running test Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test Link containing double-single-quotes '' (bug 4598)... FAILED!
Running test Magic Word: {{CURRENTMONTHNAMEGEN}}... FAILED!
Running test Template with thumb image (with link in description)... FAILED!
Running test message transform: <noinclude> in transcluded template (bug 4926)... FAILED!
Running test message transform: <onlyinclude> in transcluded template (bug 4926)... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Running test HTML bullet list, unclosed tags (bug 5497)... FAILED!
Running test HTML ordered list, unclosed tags (bug 5497)... FAILED!
Running test HTML nested bullet list, open tags (bug 5497)... FAILED!
Running test HTML nested ordered list, open tags (bug 5497)... FAILED!
Running test Parsing optional HTML elements (Bug 6171)... FAILED!
Running test Inline HTML vs wiki block nesting... FAILED!
Running test Mixing markup for italics and bold... FAILED!
Running test 5 quotes, code coverage +1 line... FAILED!
Running test HTML Hex character encoding.... FAILED!
Running test dt/dd/dl test... FAILED!
Passed 412 of 429 tests (96.04%) FAILED!
> It's probably worth mentioning that there's a new project to create a
> MediaWiki-workalike engine that runs in a Java Servlets container. It's
> called JAMWiki, and it's here:
>
> http://jamwiki.org/
>
> I've played around with it a little and, while it's definitely Not There
> Yet, it's way ahead of 80% of existing Wiki engines. I'm going to be
> keeping an eye on it.
>
> ~Evan
And amusingly, it also has at least one of the exact same XSS
vulnerabilities that used to be in MediaWiki ;-)
And yes, I have reported this to them, together with PoC, at
http://jamwiki.org/wiki/en/Bug_Reports#XSS
Their implementation does look interesting though.
All the best,
Nick.
Hi,
Currently, when linking to a category that "does not exist", a
redlink is shown. However, "does not exist" here means only "has no
text describing the category" — the category itself actually
functions perfectly well. Since many categories are self-explanatory
(especially on small wikis), I suspect this redlinking acts as an
unnecessary brake on the use of categories. That is, people are
discouraged from pre-emptively linking to a category that doesn't
"exist" yet. Redlinks in the category bar look like mistakes; they
don't look like you've done something that is actually highly
encouraged.
Would it be possible to change the definition of "does not exist" to
be "has no text, and no articles"? Or maybe something else entirely?
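A sketch of the proposed "has no text, and no articles" check, using an in-memory SQLite stand-in for MediaWiki's page and categorylinks tables. The table and column names below are modeled on the real schema, but treat the specifics as assumptions for illustration — this is not a patch:

```python
import sqlite3

# Toy SQLite schema loosely modeled on MediaWiki's `page` and
# `categorylinks` tables (names simplified; assumptions, not the
# exact production schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE page (
        page_id INTEGER PRIMARY KEY,
        page_namespace INTEGER,   -- 14 = Category namespace
        page_title TEXT
    );
    CREATE TABLE categorylinks (
        cl_from INTEGER,          -- page id of the member article
        cl_to TEXT                -- category name
    );
""")

# A category with two member articles but no description page.
conn.executemany(
    "INSERT INTO categorylinks (cl_from, cl_to) VALUES (?, ?)",
    [(1, "Hardware"), (2, "Hardware")],
)

def category_exists(conn, name):
    """Proposed definition: a category "exists" if it has a
    description page OR at least one member article."""
    has_page = conn.execute(
        "SELECT 1 FROM page WHERE page_namespace = 14 AND page_title = ?",
        (name,),
    ).fetchone()
    has_members = conn.execute(
        "SELECT 1 FROM categorylinks WHERE cl_to = ? LIMIT 1",
        (name,),
    ).fetchone()
    return bool(has_page or has_members)

print(category_exists(conn, "Hardware"))  # True: populated, no text
print(category_exists(conn, "Empty"))     # False: no text, no members
```

Under this definition "Hardware" would render as a normal link rather than a redlink, because it has members even though nobody has written its description page yet.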
Steve
> But other
> page count systems have been removed in that past by Brion because of
> privacy reasons.
Well, it *is* a pretty good reason. If you store any logs and are a
high-profile source of public information and those logs can in any
way be linked back to a specific user, then you must assume that
sooner or later someone may take you to court to get access to those
logs. Suppose a "person of interest" has been reading Wikipedia
information on the chemistry of explosives, or reading up on
biological pathogens, or military installations, etc etc etc. That is
exactly the kind of thing that certain areas of law enforcement would
like to know, and be able to use against people in court. Before you
say "conspiracy theory!", remember that Google has this problem (for
searches that people have conducted, which it does record), and
libraries have this problem (for books that people have borrowed,
which libraries also record). I'm actually surprised that the
Wikipedia has not had this problem yet, and I can only presume that
it's because there are no logs. The single easiest way to avoid the
problem is to not keep any logs (besides those which are already
public, such as the edit histories). There's a counterargument that
some of these people may really be evil, but the reality is that the
databases are located in the US, and the current US government has
repeatedly demonstrated a thorough contempt for civil liberties
(fingerprinting foreign nationals entering the US as though they were
criminals, arresting people wearing T-shirts with protest slogans,
illegal wire taps, indefinite imprisonment without due process at
Guantanamo, the practice of "rendition", arresting people
photographing bridges, and the list of abuses goes on and on and on).
For my 2 cents, concern over legal problems and potential abuse of the
data far outweighs my desire to know how many people have viewed, say,
the "Mickey Mouse" page.
All the best,
Nick.
I've given Daniel Kinzler (Duesentrieb) subversion write access, for work
relating to an AJAX category tree extension which he has written. Daniel has
been a known and respected member of the community for some time.
-- Tim Starling
> > Well, about the edit preview, don't worry about. Our Parser is
> > complicated enough that anything like that isn't happening soon.
>
> It already has:
> http://wikiwyg.org/wysi/
Yes, but for corner-case stuff its output differs from MediaWiki's. In
other words, what you see is not always what you're going to get. For
example:
* "JavaScript"
renders as that on Wikiwyg, but renders as "JavaScript" on MediaWiki.
* "&60;" renders on the Parser as "&60;", but on Wikiwyg it doesn't show up.
* You don't get the ISBN / RFC / PubMed autolinking (e.g. "ISBN 1903").
* Nested links render differently - e.g. "[[a link[[b]]]]" renders
like "[[a link<href>b</href>]]" on MediaWiki, but like "<href>a
link</href>" on Wikiwyg (it drops the "b").
* A line with just "'''a" renders as <bold>a</bold> on MediaWiki, but
as "a" (no bold) on Wikiwyg.
* "{|{|\nx" renders as blank on Wikiwyg, but shows an "x" on MediaWiki.
* "<pre>aaa</pre><nowiki>xxxx</nowiki>" renders as that string on
Wikiwyg, but shows as "aaa\nxxxxx" on MediaWiki.
But I do think Wikiwyg is impressive. Getting something client-side
that matches the server-side parser for core functionality is very
hard. Getting something that's "quirk-compatible" for the less common
stuff is harder still: it would probably require literally porting
Parser.php from PHP to JavaScript, and then keeping the port updated
as Parser.php changes, which is a great deal of ongoing work. It might
even be easier to write a custom PHP-to-JavaScript "compiler" to do it
automatically.
All the best,
Nick.
Hi,
is it possible to update an article via an SQL statement? What I want
to do:
I have a script which collects texts from other sources,
e.g. /proc/cpuinfo or texts from external websites.
And I have an article named "computername".
Now I want to update the article "computername" with the text from a
local file via SQL in a bash script (or a PHP script...).
Is it possible? Or are there existing solutions for this problem?
Thanks for an answer.
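For context on why a plain SQL UPDATE is risky here: since MediaWiki 1.5 the current text of a page is reached through a chain (page.page_latest -> revision.rev_text_id -> text.old_text), so a script has to append a new text row and a new revision row and then repoint the page. The sketch below models that chain with an in-memory SQLite toy; the column names follow the real schema but are simplified, and the specifics should be treated as assumptions rather than a supported update path:

```python
import sqlite3
import time

# Toy SQLite stand-in for MediaWiki's page/revision/text storage chain
# (columns modeled on the 1.5+ schema, heavily simplified; this shows
# the bookkeeping involved, it is not a supported way to edit a wiki).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE text (old_id INTEGER PRIMARY KEY, old_text TEXT);
    CREATE TABLE revision (
        rev_id INTEGER PRIMARY KEY,
        rev_page INTEGER,
        rev_text_id INTEGER,
        rev_timestamp TEXT
    );
    CREATE TABLE page (
        page_id INTEGER PRIMARY KEY,
        page_title TEXT,
        page_latest INTEGER
    );
""")

def save_new_revision(conn, page_id, new_text):
    """Append a revision: new text row, new revision row,
    then repoint the page at the new revision."""
    cur = conn.execute("INSERT INTO text (old_text) VALUES (?)",
                       (new_text,))
    text_id = cur.lastrowid
    cur = conn.execute(
        "INSERT INTO revision (rev_page, rev_text_id, rev_timestamp) "
        "VALUES (?, ?, ?)",
        (page_id, text_id, time.strftime("%Y%m%d%H%M%S")),
    )
    conn.execute("UPDATE page SET page_latest = ? WHERE page_id = ?",
                 (cur.lastrowid, page_id))

def current_text(conn, page_id):
    """Follow page_latest -> rev_text_id -> old_text."""
    row = conn.execute(
        "SELECT old_text FROM text "
        "JOIN revision ON rev_text_id = old_id "
        "JOIN page ON page_latest = rev_id "
        "WHERE page_id = ?",
        (page_id,),
    ).fetchone()
    return row[0] if row else None

conn.execute("INSERT INTO page (page_id, page_title, page_latest) "
             "VALUES (1, 'computername', 0)")
save_new_revision(conn, 1, "cpu: Example CPU @ 2.0GHz")
print(current_text(conn, 1))
```

Even getting this chain right still leaves the links tables, search index, and caches stale, which is why feeding the text through MediaWiki itself (the edit form, or a maintenance script run from the command line) is the safer route for a bash/PHP automation job.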
greetings mario