Recently on es.wiktionary we voted to change the project name
from Wiktionary to Wikcionario.
You can see the voting results in
http://es.wiktionary.org/wiki/Wiktionary:Votaciones/2004/Nombre_local_del_p…
Would any of you be so kind as to change the project name, and consequently
the project namespace, on es.wiktionary to Wikcionario?
Thanks a lot.
Hi folks,
I sent this message to textbook-l but there has been no response. It
isn't even in the archives, so either the post was removed by the
moderator or no moderator has checked it.
There are also no November messages in the textbook-l archive, which
suggests that mailing list has traffic as low as
[[Wikibooks:Staff lounge]] (the equivalent of the Village pump), or lower.
So I've decided to send this to this list instead. Sorry if this is
not appropriate.
I have been forced to bring to the notice of the Wikimedia community
that a person (http://en.wikibooks.org/wiki/User:Panic2k4) is
intent on forking a work on Wikibooks within Wikibooks itself. Not
only is he forking the "article"/"module", he has forked the talk page
as well, which makes it even more difficult to discuss ideas for
improvement with others.
He has now blatantly stated that since Wikibooks is GFDL he can do
what he wants without discussing it with others, and he expects me to do the
same: (http://en.wikibooks.org/w/wiki.phtml?title=User_talk:Panic2k4&oldid=70652
and probably a few later versions of the page).
A more easily readable version of my original post to textbook-l with
working wikilinks is at
http://en.wikibooks.org/wiki/Wikibooks:Staff_lounge#.5B.5BProgramming:C_plu…
Sorry, I didn't have enough time to convert the wikilinks into proper
HTTP URLs. Thanks once again for your consideration!
It's wikitech-l(a)wikimedia.org. I'm CCing this discussion there in this
email; hopefully they can assist you.
John Lee
([[en:User:Johnleemk]])
a a wrote:
>ok, sure.
>
>Could you please direct me to that list.
>
>Thank you
>
>Andrew
>
>
>On Tue, 23 Nov 2004 20:14:14 -0700, Mark Williamson <node.ue(a)gmail.com> wrote:
>
>
>>Hi a a,
>>
>>I think perhaps your mail would be better directed at the technical mailing list.
>>
>>Mark
>>
>>
>>
>>
>>On Tue, 23 Nov 2004 17:51:53 -0800, a a <mr.computer.geek(a)gmail.com> wrote:
>>
>>
>>>Hi all, I am new and have some questions. I want to use some of the
>>>tools provided in wikimedia but am a little lost when looking at the
>>>source code.
>>>
>>>I really like the diff tool and would like to use it on some text
>>>files I have. Could you suggest a way to run your diff tool on some
>>>plain text files? I'm not sure what functions to call first.
>>>
>>>I would like to do this from a unix/linux command line. I would also
>>>like to keep the highlighting and font colors of Wikipedia but
>>>would like a basic format (no navigation, no wallpaper, etc.)
>>>
>>>Any help you could provide would be great!
>>>
>>>
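For a rough approximation of what the poster is asking for: Python's standard-library difflib can render a colored HTML side-by-side diff of two plain-text inputs. This is not MediaWiki's own PHP diff engine, just a stdlib sketch producing a similar effect:

```python
import difflib

def html_word_diff(old_lines, new_lines):
    # difflib.HtmlDiff renders a side-by-side HTML table with colored
    # insert/delete/change highlighting (Python stdlib, not MediaWiki's
    # PHP DifferenceEngine).
    return difflib.HtmlDiff().make_table(old_lines, new_lines)

old = ["the quick brown fox", "jumps over the dog"]
new = ["the quick brown fox", "jumps over the lazy dog"]
html = html_word_diff(old, new)
```

Calling make_file() instead of make_table() yields a complete standalone HTML page; feeding it the result of open(...).readlines() for two files gives a command-line workflow.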
When I'm logged in and use the "E-mail this user" link, the resulting
e-mail doesn't seem to contain any information about my originating IP
address. Hotmail adds an "X-Originating-IP: [123.123.123.123]" header
to all outgoing mail, and I think this would be a good idea for
Wikipedia as well.
If an IP address is blocked from editing articles, can that address
still be used for sending Wikipedia e-mail? Can it be used for
registering new users? Is that good or bad?
I can only send Wikipedia e-mail if I'm logged in, but the outgoing
mail has a "From:" header that contains the address I entered in my
personal preferences and that address hasn't been verified, has it?
Is that good or bad?
Theoretically, if my IP is blocked from editing, I think I could sign
up as a new user "Jimbooo", set my e-mail address to that of Jimbo
Wales, and start sending insulting Wikipedia e-mails to users I want
to turn against Jimbo. The recipients would get a
message "From: Jimbooo <...>" with no information about my IP.
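For illustration, stamping such a header is a one-liner wherever the outgoing mail is assembled. MediaWiki itself is PHP; the sketch below uses Python's stdlib email API only to show the shape of the message (the header name is the one Hotmail uses; the function and its arguments are hypothetical):

```python
from email.message import EmailMessage

def build_user_mail(sender_pref_addr, recipient, body, originating_ip):
    # Mimic Hotmail's practice: record the logged-in user's IP in a
    # header so recipients and admins can trace abusive
    # "E-mail this user" messages back to an address that can be blocked.
    msg = EmailMessage()
    msg["From"] = sender_pref_addr  # the unverified preference address
    msg["To"] = recipient
    msg["Subject"] = "Wikipedia e-mail"
    msg["X-Originating-IP"] = f"[{originating_ip}]"
    msg.set_content(body)
    return msg
```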
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
I think this should be on Wikitech-l, and as such am CCing it there.
John Lee
([[en:User:Johnleemk]])
Gerrit wrote:
>[resent, wrong address for wikipedia-l]
>[FUP: wikipedia-l, CC: pywikipediabot-users]
>
>Hello,
>
>I have recently been thinking again about how wonderfully my Bayesian
>spam filter, implemented with Spambayes[1], works at filtering my e-mail.
>For an explanation of Bayesian spam filtering, see the Spambayes homepage.
>I was thinking whether it would be possible to do something like that
>for Newpages. It could reduce human work and might prove a very
>interesting experiment as well.
>
>The bot I am thinking of would follow Newpages live. It fetches each
>page and checks it against its database. If it's classified as ham, then
>continue. If it's classified as unsure, ask the user whether it is
>{{delete}}-material: if yes, train as spam and prepend {{delete}} to the
>article. If no, train as ham. It could add a comment to the article or a
>message to the talk page: <!-- classified by ... as ... with score ... -->
>If it's classified as spam, show the user (part of) the content to
>confirm that it's really true (if not, treat as unsure-ham).
>If it already contains '{{delete}}', train as spam and continue.
>When no user is using the program, create a stack of articles to work
>through when a user starts with the program again.
>
>This would be implemented using an enhanced Pywikipediabot and the
>library that comes with Spambayes. I foresee some problems. For example,
>each user would have their own 'hammy.db'. As we are all working on the
>same thing, we would want a central hammy.db, probably one per
>language. This would live on a central server (it need not be Wikipedia's: I
>volunteer my server for this task). Initially, it would be a
>command-line tool, although a web interface might prove very useful as
>well.
>
>In addition to the contents of the page, clues can also be drawn from the
>contributing user: whether the user is logged in or anonymous, the range
>of the IP, the name of the page, and, why not, the time of day, although the
>latter might carry less weight than the others.
>
>Perhaps it could also be done for RecentChanges. It would then be fed
>the diffs. This would require a lot more work, because there is a major
>difference between removing a line and adding a line (in fact, where one
>is a spam hint, the inverse is a ham hint with clue 1 - other).
>This is much more difficult and I do not have the knowledge to write
>such a thing. It does not seem impossible, though.
>
>What do you think?
>
>kind regards,
>Gerrit Holl.
>
>[1] http://www.spambayes.org/
>
>
>
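As a rough illustration of the kind of classifier Gerrit describes, here is a toy token-level naive Bayes filter. Spambayes' real algorithm (chi-squared score combining, an explicit "unsure" zone) is considerably more sophisticated; this sketch only shows the train/score loop the bot would drive, with invented class and method names:

```python
import math
from collections import Counter

class NaiveBayesPageFilter:
    """Toy token-level Bayesian filter in the spirit of the proposal;
    Spambayes' actual algorithm is more sophisticated."""

    def __init__(self):
        self.ham = Counter()
        self.spam = Counter()
        self.n_ham = 0
        self.n_spam = 0

    def train(self, text, is_spam):
        # Count tokens per class; the bot would call this after a human
        # confirms whether a new page is {{delete}}-material.
        bag = self.spam if is_spam else self.ham
        bag.update(text.lower().split())
        if is_spam:
            self.n_spam += 1
        else:
            self.n_ham += 1

    def spam_score(self, text):
        # Log-odds with add-one smoothing: > 0 leans spam, < 0 leans ham,
        # and values near 0 correspond to the "unsure" bucket that gets
        # shown to a human for confirmation.
        score = math.log((self.n_spam + 1) / (self.n_ham + 1))
        total_ham = sum(self.ham.values())
        total_spam = sum(self.spam.values())
        vocab = len(set(self.ham) | set(self.spam)) or 1
        for tok in text.lower().split():
            score += math.log((self.spam[tok] + 1) / (total_spam + vocab))
            score -= math.log((self.ham[tok] + 1) / (total_ham + vocab))
        return score

f = NaiveBayesPageFilter()
f.train("good article about physics history", is_spam=False)
f.train("buy cheap pills cheap pills now", is_spam=True)
```

The extra clues Gerrit mentions (anonymous vs. logged-in contributor, IP range, page title) could be folded in simply by appending synthetic tokens such as "user:anon" to the text before training and scoring.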
On 2 Nov 2004, at 12:36, Ashar Voultoiz wrote:
> Thomas Gries wrote:
>> Could the developer who recently posted that he/she is
>> finalising an LDAP interface
>> please contact me?
>> Thanks in advance
>> Tom Gries
>
> Hello,
>
> Ryan Lane posted a patch at:
> http://bugzilla.wikipedia.org/show_bug.cgi?id=814
Great work, Ryan! This patch supplies the "missing link" whose absence has
kept me from adopting MediaWiki as our Program's wiki.
Brion, in comment #5 on this patch, you mention adding an external
authentication hook. Do you expect to implement this Real Soon Now? I
can live with or without it--I only wanted to know if I should pursue
this patch as-is, or wait for the plugin.
Jim Vanderveen, IT Manager
Office of Water Programs, Calif State University Sacramento
jim.vanderveen(a)owp.csus.edu -- PGP key 0x4F449230
PGP info at http://www.owp.csus.edu/~vanderveenj/pgp/
Hello all :)
We are running a software catalogue with MediaWiki and the wiki is sponsored
by a company. Now the company wants us to place a picture with the company's
logo under the wiki logo.
I added
<div class="portlet" id="p-advertisement">
<h5>Sponsored by</h5>
<div class="pBody">
<a href="http://www.nct.de"><img
src="/en/MediaWiki/stylesheets/images/Logo-nct.png></a>
</div>
</div>
to xhtml_slim.pt, but it results in an error. For now I am using a text link,
but that's not what the company wants.
Any ideas how to fix it?
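One likely culprit is visible in the snippet itself: the img element's src attribute is never closed with a quote, and the tag itself is never closed, which breaks the XHTML. A corrected version (alt text added as a guess) might look like:

```html
<div class="portlet" id="p-advertisement">
  <h5>Sponsored by</h5>
  <div class="pBody">
    <a href="http://www.nct.de"><img
      src="/en/MediaWiki/stylesheets/images/Logo-nct.png" alt="nct logo" /></a>
  </div>
</div>
```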
The URL of the wiki is http://www.tuxfutter.de/en/wiki/Main_Page
Btw: MediaWiki 1.3.3 :)
Thanks and kind regards,
Hauke
--
> Whenever I sit in front of a Windows box, I have a breakdown.
Yes, Windows is like a sack race without legs.
----
diskless and valencia in the Heise forum
I did some quickie benchmarks of a couple of types of page view,
comparing 1.3 with 1.4 under a small variety of option variations.
See the fun:
http://meta.wikimedia.org/wiki/MediaWiki_1.4_benchmarks
Since the speed benefit of a parser cache hit is so great on
non-trivial pages, I've also tweaked the parser cache to work without
requiring memcached. It's now enabled by default.
When running without memcached, parser cache entries will be kept in the
objectcache table, where until now we've only been keeping the message
cache. Zlib will be used, if available, to compress the stored
entries. It's possible that on a relatively active site the objectcache
garbage collection will conflict too much; if anyone notices any such
problems it can be adjusted.
I've also been working on squeezing extra milliseconds out of the fixed
per-page costs (initialisation, skin rendering, etc). These do add
up...
-- brion vibber (brion @ pobox.com)
Hi everyone,
(Hi! First post here)
for a site I'm building I'm trying to use one generalised login system
for everything on the site (forum / wiki / ftp / ...). I was wondering
if anyone has done any work on joining phpBB and MediaWiki.
In a perfect world, people would log into phpBB and an account with the
same login/password would be created in MediaWiki. Whenever a user logs into
the site, he should automatically be logged into MediaWiki.
So - has it been done? Google didn't really help me this time.
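Any such bridge would at minimum have to verify MediaWiki logins against phpBB's user table. phpBB 2.x stores plain MD5 hashes of passwords in phpbb_users.user_password (an assumption worth verifying against your phpBB version); the check itself is trivial, sketched here in Python even though a real integration would live in PHP:

```python
import hashlib

def phpbb2_password_ok(stored_hash, password):
    # phpBB 2.x keeps an unsalted MD5 hex digest of the password in
    # phpbb_users.user_password -- verify this for your phpBB version.
    return hashlib.md5(password.encode("utf-8")).hexdigest() == stored_hash
```

The harder half, automatic MediaWiki login after a phpBB login, would need a hook into MediaWiki's session/authentication code, such as the external authentication hook Brion mentioned in the LDAP thread above.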
thanks a lot,
- bram
--
ICMC 2005.......... http://www.icmc2005.org
MTG................ http://www.iua.upf.es/mtg
Smartelectronix.... http://www.smartelectronix.com
Musicdsp........... http://www.musicdsp.org
Telephone.......... +34 628 765 127
I received about 10 inquiries so far about how to enable EasyTimeline in
MediaWiki.
I have to answer that I don't really know.
I always point them to the link below, but no one takes the trouble to report
success or failure, either out of laziness or frustration.
(A spam filter might be a partial explanation; my provider is known for this.)
Can someone give me foolproof instructions on how to enable ET?
I will add those to my support page. Thanks.
Erik Zachte
http://cvs.sourceforge.net/viewcvs.py/*checkout*/wikipedia/extensions/timeli
ne/Timeline.php?rev=1.6
Comment at the top says:
# Timeline extension
# To use, include this file from your LocalSettings.php
# To configure, set members of $wgTimelineSettings after the inclusion point
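For what it's worth, the usual pattern implied by that comment is a LocalSettings.php fragment like the one below. The ploticusCommand member name is an assumption (EasyTimeline renders via Ploticus); the authoritative member list is in the comments of Timeline.php itself:

```php
// In LocalSettings.php, after the default settings have been loaded:
include_once( "extensions/timeline/Timeline.php" );

// Configure after the inclusion point. EasyTimeline renders with Ploticus,
// so point the extension at your Ploticus binary (member name assumed;
// check the comments in Timeline.php for your revision):
$wgTimelineSettings->ploticusCommand = "/usr/bin/ploticus";
```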