I tried to install the MediaWiki package on a server using Romanian as the
default language and I got the error message below. Does anyone know what the
problem might be?
S Matei smatei(a)yahoo.com
Checking environment...
* PHP 5.0.4: ok
* PHP server API is cgi-fcgi; using ugly URLs (index.php?title=Page_Title)
* Have XML / Latin1-UTF-8 conversion support.
* PHP's memory_limit is 8M. If this is too low, installation may fail! Attempting to raise limit to 20M... ok.
* Have zlib support; enabling output compression.
* Turck MMCache not installed, can't use object caching functions
* Found ImageMagick: /usr/local/bin/convert; image thumbnailing will be enabled if you enable uploads.
* Found GD graphics library built-in.
* Installation directory: /home/champion/u3/smatei/www/cultura
* Script URI path: /~smatei/cultura
Notice: Undefined index: linktrail in /home/champion/u3/smatei/www/cultura/languages/LanguageRo.php on line 1347
Notice: Undefined index: linktrail in /home/champion/u3/smatei/www/cultura/languages/LanguageRo.php on line 1347
Notice: Undefined index: linktrail in /home/champion/u3/smatei/www/cultura/languages/LanguageRo.php on line 1347
* MySQL error 1045: Access denied for user 'root'@'web.ics.purdue.edu' (using password: NO)
* Trying regular user... ok.
* Connected to database... 4.1.13-log; enabling MySQL 4 enhancements
* Database smatei exists
* Creating tables... done.
* Initializing data...
* Created sysop account CulturaModernaAdmin.
*
Notice: Undefined index: 1movedto2_redir in /home/champion/u3/smatei/www/cult
Notice: Undefined index: aboutsite in /home/champion/u3/smatei/www/cultura/la
Hello,
As I understand it, stewards are currently unable to act because the steward
interface has been disabled, but I cannot really find any information beyond
the brief "it may break replication" (whatever "it" might be), so if someone
would be so kind as to share what happened, or the details, that would be great.
More importantly, I have to ask: is it possible for a developer to set
rights manually, or would that break replication (however that may happen)
just as the steward interface did? Should I keep trying to find someone to
set the flag, or should I wait, and for how long? (A good guesstimate is fine.)
Thanks,
Peter
Sorry if this has been asked before, but I have created a skin that uses a
form enabling users to add the necessary classification markup. The issue is
that I need to make this skin the site-wide default and prevent users from
changing it through the preferences section. Is this possible?
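One approach that may get close to this is a couple of lines in
LocalSettings.php (a sketch only: $wgDefaultSkin is a standard setting, while
$wgSkipSkins may or may not be available depending on the MediaWiki version,
and 'classificationskin' is a placeholder for whatever key the custom skin
registers itself under):

  <?php
  # LocalSettings.php (sketch)

  # Make the custom skin the site-wide default.
  $wgDefaultSkin = 'classificationskin';

  # Hide the bundled skins from the preferences listing so users cannot
  # select them there (setting name/availability depends on the version).
  $wgSkipSkins = array( 'monobook', 'standard', 'nostalgia', 'cologneblue' );

Note that hiding skins from the preferences listing does not stop a user from
requesting another skin explicitly with the useskin URL parameter.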
Ryan
Hi All,
You used to be able to use GET requests to preview edits to the Wikipedia.
For example, this link worked in January 2005:
http://en.wikipedia.org/w/wiki.phtml?title=East_Balmain&wpTextbox1=%23REDIR…
Clicking this link used to preview adding a redirect at the "East Balmain"
article, with the article text being "#REDIRECT [[Balmain East, New South
Wales]]" and the edit summary being "Redirect to [[Balmain East, New South
Wales]]". All the user had to do then was click "Save". It was incredibly
useful, because it meant software could generate possible new articles and
redirects, and a human could then approve or veto them. It also meant that
these possible items could be listed on the Wikipedia itself, because the
Wikipedia allows GET arguments but not POST arguments in links; this in turn
allowed the Wikipedia to be used to extend the Wikipedia. It was, in short,
extremely useful.
( Of course this functionality could be abused, in the same way that
_any_ technology can be used for both positive and negative purposes.
An example of a project that used this in a positive way can be seen
here: http://en.wikipedia.org/wiki/User:Nickj/Redirects )
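For reference, this is roughly how such a link can be generated (a sketch:
wpTextbox1 comes from the example URL above, while action=edit and wpSummary
are my guesses at the rest of the truncated URL):

  <?php
  # Sketch: build a GET link that pre-fills the edit form for previewing.
  # wpTextbox1 is taken from the example URL above; action=edit and
  # wpSummary are assumptions about the remaining (truncated) parameters.
  $target  = 'East Balmain';
  $article = 'Balmain East, New South Wales';

  $query = http_build_query( array(
      'title'      => $target,
      'action'     => 'edit',
      'wpTextbox1' => "#REDIRECT [[$article]]",
      'wpSummary'  => "Redirect to [[$article]]",
  ) );

  echo "http://en.wikipedia.org/w/wiki.phtml?$query\n";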
Now, however, it seems these GET requests no longer work. Have I missed
something obvious (i.e. is it just a dumb-user problem with my GET request)?
Or is it something to do with the Wikipedia itself (i.e. a server-side
change)? If so, is it intentional? If it's not intentional, can we please
have this functionality back? (At least for new articles: I can understand
why it's a bad idea for edits to existing articles, because it allows
thoughtless overwriting of previous content, but that argument doesn't apply
if there isn't something there already.)
All the best,
Nick.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
MediaWiki 1.5.0 is the new stable release branch of MediaWiki, and is
recommended for all new installations.
See the release notes (link below) for details of new features and
requirements.
Any wikis running a 1.5 beta or release candidate are strongly
recommended to upgrade to the final release, which includes a number of
bug fixes and a security fix for CSS bugs in Microsoft Internet Explorer.
IMPORTANT: Running a 1.3 or 1.4 wiki and don't want to jump to 1.5 yet?
Be sure to upgrade to 1.3.17 or 1.4.11, also released today. Versions
prior to 1.3.16 and 1.4.10 have a serious data corruption bug which is
triggered by a spambot known to operate in the wild.
Release notes:
http://sourceforge.net/project/shownotes.php?release_id=361506
Download:
http://prdownloads.sourceforge.net/wikipedia/mediawiki-1.5.0.tar.gz?download
MD5 checksum:
b431e82ee5fd0d619d17cb2d417387c3 mediawiki-1.5.0.tar.gz
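To verify the download against the checksum above, one option is a quick
PHP check run in the directory containing the tarball (a sketch):

  <?php
  # Compare the downloaded tarball against the published MD5 checksum.
  $expected = 'b431e82ee5fd0d619d17cb2d417387c3';
  $actual   = md5_file( 'mediawiki-1.5.0.tar.gz' );
  echo ( $actual === $expected ) ? "checksum OK\n" : "MISMATCH: $actual\n";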
Before asking for help, try the FAQ:
http://meta.wikimedia.org/wiki/MediaWiki_FAQ
Low-traffic release announcements mailing list:
http://mail.wikipedia.org/mailman/listinfo/mediawiki-announce
Wiki admin help mailing list:
http://mail.wikipedia.org/mailman/listinfo/mediawiki-l
Bug report system:
http://bugzilla.wikimedia.org/
Play "stump the developers" live on IRC:
#mediawiki on irc.freenode.net
- -- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (Darwin)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org
iD8DBQFDRKyrwRnhpk1wk44RAjBhAKCwMKBNBviPZL/8h5/qgAm1WPKJUgCfcbBj
aTFHYRsaDB04lX5zs9R9CGI=
=M4I7
-----END PGP SIGNATURE-----
Various people have discussed the desirability of single-sign-on for
MediaWiki installations.
I'm interested in the same thing and have been working on that for a
little bit.
So far: "It's working!" for 1.5rc4 ;-)
My questions:
1) Is there any place where people interested in this subject "hang
out"? (like a wiki page somewhere, ...?)
2) Is this the right mailing list to discuss this?
Thanks,
Johannes.
Johannes Ernst
http://netmesh.info/jernst
Ray Saintonge wrote:
> Neil Harris wrote:
>
>> Phroziac wrote:
>>
>>> No way would we fit in the 30 volumes of Britannica for this
>>> hypothetical print release! Anyway, what if we had a feature in the
>>> Wikipedia 1.0 idea, where we could rate how useful the inclusion of an
>>> article in a print version would be. This would allow anyone making a
>>> print version, be it the foundation, or someone else, to trim wikipedia
>>> easier. Certainly you could do it by hand, but eek. that's huge. With
>>> our current database dumps, it would already not be unreasonable to
>>> make
>>> a script to automatically remove articles with stub tags in them.
>>> Obviously these would be worthless in a print version.
>>
>>
>> In my opinion, an article ranking system would be an ideal way to
>> start collecting data for trying to place articles in rank order for
>> inclusion in a fixed amount of space.
>>
>> One interesting possibility is, in addition to user rankings, using
>> the number of times the article's title is mentioned on the web --
>> the Google test -- as an extra input to any hypothetical ranking system.
>
>
> The thing to remember if a ranking system is used is that it is a tool
> rather than a solution. It can point to problem articles that need
> work. We don't need to be limited to a single algorithm for
> evaluating an article. The Google test can be added, but so can
> others too.
>
> Ec
>
That's right. The _gold standard_ for article assessment is peer review;
the next best is based on manual ranking by a sufficiently large and
well-distributed group of users; the next best is based on
carefully-chosen algorithms which blend together machine-generated
statistics and human-generated statistics.
Given that we have 750,000+ articles in the English-language Wikipedia
alone, it is likely to take some time for a reasonable number of votes to
accumulate for all articles. According to my earlier calculation, if we
wanted to trim the en: Wikipedia into 32 volumes, we would need to leave out
five of every six articles. (We could keep wikilinks in: thin underlines
with page/volume references in the margin for articles in the print version,
and, say, dotted underlines for articles which exist online but are not in
the print version, to let people know there is an online article on that
topic.)
This raises the possibility of using machine-generated statistics to act
as a proxy for manual review where it is not yet available. Given a
sufficient number of human-rated articles, and a sufficient number of
machine-generated statistics for articles, we could use machine learning
(a.k.a function approximation) algorithms to attempt to predict the
scores of as-yet-unranked articles. This could then be used as a "force
multiplier" for human-based ranking, to rank articles which have not yet
received sufficient human rankings to be statistically significant.
This approach could easily be sanity-checked by taking one random sample
of articles as a training set, and another disjoint random sample as a
testing set: the predictive power of a machine-learning algorithm trained on
the training set could be determined by measuring the quality of its
predictions of the true user rankings of the testing set. As the number of
articles with statistically significant human rankings increases, the
algorithm can be re-trained repeatedly; this would also help resist attempts
to "game" the ranking algorithm.
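To make the sanity check concrete, here is a minimal sketch with made-up
metric values, using a one-variable least-squares fit as a stand-in for the
support vector machine suggested further down; the real inputs would be the
per-article statistics listed below.

  <?php
  # Sketch of the train/test sanity check: made-up rows of
  # [log10(total edits), log10(distinct editors), human rating 0..10].
  $articles = array(
      array( 2.1, 1.7, 8 ), array( 0.3, 0.0, 2 ), array( 1.5, 1.2, 6 ),
      array( 2.8, 2.0, 9 ), array( 0.7, 0.3, 3 ), array( 1.1, 0.9, 5 ),
      array( 2.4, 1.8, 7 ), array( 0.1, 0.0, 1 ),
  );

  shuffle( $articles );
  $train = array_slice( $articles, 0, 4 );  # training set
  $test  = array_slice( $articles, 4 );     # disjoint testing set

  # Fit rating ~ a * log10(edits) + b by least squares on the training set.
  $n = count( $train ); $sx = $sy = $sxx = $sxy = 0.0;
  foreach ( $train as $row ) {
      $sx  += $row[0];           $sy  += $row[2];
      $sxx += $row[0] * $row[0]; $sxy += $row[0] * $row[2];
  }
  $a = ( $n * $sxy - $sx * $sy ) / ( $n * $sxx - $sx * $sx );
  $b = ( $sy - $a * $sx ) / $n;

  # Predictive power = mean absolute error on the held-out testing set.
  $err = 0.0;
  foreach ( $test as $row ) {
      $err += abs( ( $a * $row[0] + $b ) - $row[2] );
  }
  printf( "mean absolute error on testing set: %.2f\n", $err / count( $test ) );

With enough human-rated articles and richer metrics, the same split-and-score
procedure applies unchanged to a real learning algorithm.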
What statistics could be used as input to this kind of approach? It's
not hard to think of possible measures:
0. any available user rankings, by value and number of rankings
0a. stub notices
0b. "disputed" notices, "cleanup" notices, "merge" notices, now,
0c. ...or in the past
0d. has it survived an AfD process? by what margin?
0e. what fraction of edits are reverts?
0f. is it currently a featured article?
0g. has it ever been a featured article?
0h. log10(page view rate via logwood)
...and so on...
1. log10(total Google hits for exact phrase searches for title and
redirects)
1a. same as above, but limited to .gov or .edu sites
1b. same as above, but using matches _within en: Wikipedia itself_
1c. same as above, but using _the non-en: Wikipedias_
1d. same as above, but using matches _within the 1911 EB_
1e. same as above, but using matches _within the Brown corpus_
1f. ditto, but within the _NIMA placename databases_
1g, h. _Brewer's Dictionary of Phrase and Fable_, _Gray's Anatomy_
1i, j, k, l... the Bible, the Qur'an, the Torah, the Rig Veda...
1m, n... the collected works of Dickens, Shakespeare...
... and so on, for various other corpora...
2. log10(number of distinct editors for an article)
3. log10(total number of edits for this article, conflating sequential
edits by same user)
4. log10(age of the article)
5. size of the article text in words
6. size of the article source in bytes
7. approx. "reading age/ease" of the article, using...
7a. Flesch-Kincaid Grade Level
7b. Gunning-Fog Index
7c. SMOG Index
8. number of inter-language links from/to this article
9. inwards wikilinks, including via redirects, perhaps weighted by
referring article scores (although we should be careful not to infringe
the Google patents)
10. # of outwards wikilinks
11. # of categories assigned
12. # of redirects created for this article
13. Bayesian scoring for individual words, using the "bag of words" model
13a. as above, but using assigned categories as tokens
13b. as above, but for words in the article title
13c. as above, but for words in edit comments
13d. as above, but for text in talk pages
13e. as above, but for names of templates
13f. as above, but for _usernames of contributing authors_, ho ho ho!
14. shortest category distance from the "fundamental" category
15. shortest wikilink distance from various "seed" pages
16. length of article title, in characters (shorter is "more fundamental"?)
17. length of article title, in words
18. what fraction of the article text uses letters from other scripts, and
which scripts?
19. does it contain images? how many?
19a. what is the images-to-words ratio?
20. what is the average paragraph length?
21. how many subheadings does it have?
22. how many external links does it have in its "external links" section?
23. how many inline links does it contain in the main article body?
23. how many "see also"s does it have?
24. what is the ratio of list items to overall words?
25. what is the ratio of proper nouns (crudely measured) to overall words?
..and so on, and so forth. Some of these are easy to calculate, some are
hard. Can anyone think of better ones?
Individually, I doubt whether any of these is a really good predictor
of article quality. However, learning algorithms are surprisingly good
at pattern recognition from very noisy multi-dimensional data. It's
quite possible that this approach would work with only a limited number
of reasonably statistically independent input metrics (ten?); the huge
list above is only to give an idea of the large number of possible
choices of article metrics, ranging from the simple to the complex.
The corpus-based measures are particularly interesting; they mean we
don't need to bug Google for a million search keys.
The machine learning algorithm of choice is probably a support vector
machine: SVMs are powerful, simple to use, capable of learning highly
non-linear functions (for example, recognising handwritten Han
characters from preprocessed bitmaps), and there are numerous
pre-packaged GPL'd implementations available as tools.
No doubt there will be lots of academics who might be willing to assign
this as a project or PhD topic to one of their research students. ;-)
Before any of this could be possible, we would in any case need the
article ranking system to be up and running for some time, which we need
anyway for the manual approach.
-- Neil
I have written an extension page that enables a user to add reference
information. Now I want to call this page in such a way that, while a user
is writing a new article, they can click a button like the ones located
above the existing text editor, which will then call that page. I am trying
to do this on my local wiki. Does anybody know how this can be done, or what
other way there might be to do it?
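To make the question concrete, I was picturing something roughly like this
(a sketch only: the EditPage::showEditForm:initial hook does exist in
MediaWiki, but its availability and exact signature depend on the version,
and addReferenceButton / Special:AddReference are placeholder names):

  <?php
  # Sketch of an extension hooking into the edit form to add a link/button.
  # Hook availability and signature vary by MediaWiki version; the function
  # and special-page names below are placeholders.
  $wgHooks['EditPage::showEditForm:initial'][] = 'addReferenceButton';

  function addReferenceButton( &$editPage ) {
      global $wgOut;
      $wgOut->addHTML(
          '<div class="add-reference">' .
          '<a href="index.php?title=Special:AddReference">Add reference information</a>' .
          '</div>'
      );
      return true;  # allow other hooks to run
  }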
Thanks,
--
Amruta
I am interested in making a map of Wikipedia in order to streamline the content,
provide an overview of different areas, and connect Wikipedia to digital archives
maintained by museums and laboratories all around the world. For more information
please see http://meta.wikimedia.org/wiki/CDT_proposal .
If you would like to collaborate, or if you already have similar efforts underway,
please contact me.
Thank you,
Deborah MacPherson
*************************************************
Deborah MacPherson, Projects Director
Accuracy&Aesthetics, A Nonprofit Organization for the
Advancement of Education, Cultural Heritage, and Science
www.accuracyandaesthetics.com
www.deborahmacpherson.com
mailing address: PO Box 52, Vienna VA 22183 USA
phones: 703 585 8924 and 703 242 9411
mailto:debmacp@gmail.com
**************************************************