I am posting this to foundation-l and wikitech-l, because there is both
a technical and a policy aspect here.
A small but growing group of Wikinews contributors is actively making
audio recordings of Wikinews stories. This effort is coordinated here:
http://en.wikinews.org/wiki/Wikinews:Audio_Wikinews
The files are uploaded to the Commons in Ogg Vorbis format. Now, two
members of this group have started to also produce a streamed version of
this, which is broadcast at specific times -- effectively Internet
radio. This could grow into a real independent wikiradio project which
perhaps could encompass more than just news. The usefulness of realtime
broadcasting for news should be obvious.
The current Wikinews page is at:
http://en.wikinews.org/wiki/Wikinews:WikiNews_Network
Now, here is our dilemma:
Streaming audio in realtime requires special software. WNN currently
uses SHOUTcast, which is not free/open-source software. It is also not
hosted on Wikimedia's servers, but on the server of one of our
contributors. (We have the same issue with the print edition, but I'll
try to resolve this separately.)
Do we want to run this on our own servers? If so, there is a free
software implementation called Icecast. Would it be possible to securely
set up an Icecast server for this purpose on our hardware?
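For a sense of scale, an Icecast setup is quite small. The following is only
an illustrative icecast.xml sketch -- hostname, ports, passwords and limits
are placeholders, not a proposal for the actual configuration:

```xml
<!-- Minimal illustrative icecast.xml sketch; every value is a placeholder. -->
<icecast>
    <hostname>stream.example.org</hostname>
    <limits>
        <clients>100</clients>  <!-- cap listeners to protect the server -->
        <sources>2</sources>    <!-- only the stream encoders may connect -->
    </limits>
    <authentication>
        <source-password>CHANGE_ME</source-password>
        <admin-password>CHANGE_ME</admin-password>
    </authentication>
    <listen-socket>
        <port>8000</port>
    </listen-socket>
</icecast>
```

The main security questions would then be firewalling the source port and
keeping the passwords out of the wiki.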
If we don't want to run it on our servers, should we allow it to be
called "Wikinews Network", or should it use a different name?
Should we set any specific limits for the project's scope beyond NPOV,
or should we let it experiment freely with the format for the time
being? (There was some talk about ads on the program, but I made it
clear that this was absolutely impossible.) As for NPOV, a wiki-radio
project might want to broadcast news *about* the wiki community; do
these have to strictly adhere to NPOV (the German Wikipedia:Kurier, for
example, does not)?
Best,
Erik
I am working on a patch for bug #235 (Auto unit conversion):
http://bugzilla.wikipedia.org/show_bug.cgi?id=235
You can see it in action at:
http://67.190.41.14:880/wiki/index.php/Main_Page
(if it gets defaced,
http://67.190.41.14:880/wiki/index.php?title=Main_Page&oldid=54)
This will enable user preferences in displaying units of measurement as well
as provide an easier way for editors to insert units with conversions into
the articles. For instance, you would be able to write:

    A U.S. football field is (((100. yards))) in length.

It would then display in one of five different ways depending on the user
preferences. If you wanted to link that to the 1 E1 m article, you would
just do: ((([[100. yards]])))
I currently have imperial to metric conversions for common lengths, volumes,
weights and speeds. Your input at the above demo would be greatly
appreciated if you are interested in this feature. Feel free to sign up for
an account and test out the preferences as well. There are plenty of
pre-made examples if you just want to see the potential. I am very new to
Wikipedia, so I would appreciate input from anyone knowledgeable on the
'Issues to be Solved' section.
Other features:
* can accommodate written-out units such as (((three hundred metres)))
* inserts commas into numbers only if the editor does
* automatically determines approximate significant figures
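As a rough illustration of what the (((...))) markup could expand to -- the
conversion table, function name and display style below are my own sketch,
not the actual patch:

```javascript
// Hypothetical sketch of expanding "(((100. yards)))"-style unit markup.
// Conversion factors and the chosen display style are illustrative only.
const TO_METRIC = {
  yards:  { factor: 0.9144,     unit: "metres" },
  miles:  { factor: 1.609344,   unit: "kilometres" },
  pounds: { factor: 0.45359237, unit: "kilograms" },
};

function convertUnitMarkup(text) {
  // Match e.g. "(((100. yards)))": a number followed by a unit word.
  return text.replace(/\(\(\(\s*([\d.,]+)\s+(\w+)\s*\)\)\)/g, (m, num, unit) => {
    const entry = TO_METRIC[unit];
    if (!entry) return m; // unknown unit: leave the markup untouched
    const value = parseFloat(num.replace(/,/g, ""));
    const metric = value * entry.factor;
    // One of the possible display preferences: imperial with metric in parentheses.
    return `${num} ${unit} (${metric.toFixed(2)} ${entry.unit})`;
  });
}
```

So "A U.S. football field is (((100. yards))) in length." could render as
"A U.S. football field is 100. yards (91.44 metres) in length." for one of
the preference settings.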
Thanks,
Matt Wright
Everyone's seen the "sticky" newtalk notification problem -- that ugly
orange box hangs around for longer than it's meant to and can be hard to
get rid of. I fixed one cause of this: the client cache. We've always
had a system to invalidate the client cache when the newtalk status
changes, but it was broken shortly before 1.4 was branched and nobody
noticed. I'm pretty sure it's now fixed.
The changes were inapplicable to HEAD, since newtalk handling has
changed completely. It will need to be tested separately.
-- Tim Starling
The sysadmin with whom I have been corresponding recommended that I post my
situation and query here. I should say that while I am not quite illiterate
technically, neither am I an expert.
I established, quite some months ago, a web site that has been operating as
a combination of Wikipedia articles and the corresponding dmoz links lists.
I have been punctilious in observing the terms of use of both
organizations. I have, however, now--after all these months--discovered
that I am apparently not using the resource itself in a preferred manner.
I have been fetching individual articles from the wikipedia site as
visitors request them (once fetched, they are given some php-based
processing, and the rest of the page built around them). Apparently that
is a no-no, owing, I am told, to the server load, especially from
searchbots that may follow out the pages.
I was, after some months of operation, suddenly hit with a 403 block; on
inquiry, I discovered the facts above. I then asked whether using instead
the Special:Export XML access would be an acceptable way of fetching
articles individually on demand. The sysadmin wrote that he felt it would,
but that I would be best to post here to see if others agree or disagree.
I realize that there is no easy way to convert the marked-up text to HTML,
but I am prepared to cobble up some php to essay the task--but, before
going to that nontrivial effort, I would like to be sure that I will not
again be blocked even if I am accessing individual articles via
Special:Export XML. (At present, I seem to be getting perhaps 20,000
visitors a day.)
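For what it's worth, the Special:Export envelope is small enough that pulling
the wikitext out of it is simple. A minimal sketch, assuming the usual
/wiki/Special:Export/PAGENAME URL shape (the function names are mine, and any
acceptable request rate is something the sysadmins would have to confirm):

```javascript
// Hypothetical sketch: build a Special:Export URL for one article and
// extract the raw wikitext from the XML it returns.
function exportUrl(title) {
  // Special:Export/<title> returns the page wrapped in a small XML envelope.
  return "http://en.wikipedia.org/wiki/Special:Export/" + encodeURIComponent(title);
}

function extractWikitext(exportXml) {
  // The article body lives in the <text> element of the export XML.
  const m = exportXml.match(/<text[^>]*>([\s\S]*?)<\/text>/);
  if (!m) return null;
  // Undo the XML escaping applied to the wikitext (&amp; last, so we
  // never double-unescape).
  return m[1]
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/&quot;/g, '"')
    .replace(/&amp;/g, "&");
}
```

Caching each fetched article locally, so that repeat visitors and searchbots
never trigger a second fetch, would go a long way toward the server-load
concern that caused the block.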
This site is very important to me, and I need to act extremely quickly to
keep it all from going down the drain, so please help me out here.
Hoi,
Elly wrote a fine message about the increasing problems with interwiki
links. Several people have answered on this thread indicating problems.
I think I have an answer to the issues raised, and I have written it up here:
http://meta.wikimedia.org/wiki/A_new_look_at_the_interwiki_link
I believe that we can dispense with running the many client-side bots
for the interwikis and have a more economical server-side bot instead.
I welcome all comments and I hope that there will be no arguments that
prevent an implementation of this scheme.
Thanks,
GerardM
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Yann Forget <yann(a)forget-me.net> wrote:
> As you mention Wikisource, there is a need for a syntax for texts in verse.
>
> Something like
>
> <verse>
> foo
> bar
> </verse>
>
> which would replace
>
> foo<br>
> bar<br>
The HTML 4.01 recommendation (
http://www.w3.org/TR/REC-html40/struct/text.html#h-9.3.4 ) seems to hint
that the PRE tag should be used for poetry (see their example). There are
still a few problems with it (like the monospace font), but nothing a
little stylesheet magic couldn't fix up.
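For instance, the "stylesheet magic" could be as small as this -- the class
name and rules are just a sketch:

```css
/* Sketch only: keep PRE's preservation of line breaks but drop its
   default monospace rendering, so verse reads like body text. */
pre.verse {
  font-family: inherit;  /* undo the monospace default */
  white-space: pre;      /* line breaks exactly as typed */
}
```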
- --
Edward Z. Yang Personal: edwardzyang(a)thewritingpot.com
SN:Ambush Commander Website: http://www.thewritingpot.com/
GPGKey:0x869C48DA http://www.thewritingpot.com/gpgpubkey.asc
3FA8 E9A9 7385 B691 A6FC B3CB A933 BE7D 869C 48DA
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.1 (MingW32)
iD8DBQFCrH+zqTO+fYacSNoRAgmyAJwMX2pjPrxHkFNmWCJiGnR9JpDIIACePz5g
izgDbgLTuSD01IN/d6d4TnY=
=zRDq
-----END PGP SIGNATURE-----
Hi there,
I want to set up a MediaWiki site on my own hosting and domain. The
technical info seems a bit complex -- could you explain it to me, please?
I can get MySQL and PHP as required, but is a lot of PHP knowledge required
to make my wiki site secure and to make some small edits?
thanks,
hope you can help,
--
MatthewFelgate
@Hotmail.com for MSN
@GMail.com for Email
I am trying to set up an RSS parser on a private wiki that uses the
following for onsubmit:
onsubmit="if(window.myInterval){window.clearInterval(window.myInterval);}if(
importXML('rssLocal.php?feedURL='+escape(this.feedURL.value),'parseRSS',fals
e,2000)){if(parseFloat(this.interv.value)){window.myInterval=window.setInter
val('importXML(\'rssLocal.php?feedURL='+escape(this.feedURL.value)+'\',\'par
seRSS\',false,2000);',parseFloat(this.interv.value)*60000);}}else{alert('You
r browser cannot import XML, so it cannot view RSS feeds using this
script');}return false;"
The problem is that when I submit the page, every ' is replaced by a \'
giving:
onsubmit="if(window.myInterval){window.clearInterval(window.myInterval);}if(
importXML(\'rssLocal.php?feedURL=\'+escape(this.feedURL.value),\'parseRSS\',
false,2000)){if(parseFloat(this.interv.value)){window.myInterval=window.setI
nterval(\'importXML(\\'rssLocal.php?feedURL=\'+escape(this.feedURL.value)+\'
\\',\\'parseRSS\\',false,2000);\',parseFloat(this.interv.value)*60000);}}els
e{alert(\'Your browser cannot import XML, so it cannot view RSS feeds using
this script\');}return false;"
Is there a way to stop this from happening? Or does it break something else
if this is stopped?
I have tried calling the code in a non-wiki .js page, but this doesn't work
and I would like the code to be in the wiki so it can be edited.
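One possible workaround sketch: the escaping only bites because setInterval
is handed a *string* of code full of quote characters. Passing a real
function instead removes nearly all of the quotes from the handler (importXML
and the form field names here are taken from your snippet; the function name
is my own):

```javascript
// Sketch: same logic as the original onsubmit handler, but with the
// repeating call given to setInterval as a function, not a quoted string.
// importXML(url, callbackName, async, timeout) is assumed to exist as in
// the original snippet.
function startFeed(form) {
  if (window.myInterval) {
    window.clearInterval(window.myInterval);
  }
  var url = "rssLocal.php?feedURL=" + escape(form.feedURL.value);
  if (importXML(url, "parseRSS", false, 2000)) {
    var minutes = parseFloat(form.interv.value);
    if (minutes) {
      // A function reference needs no nested quoting at all.
      window.myInterval = window.setInterval(function () {
        importXML(url, "parseRSS", false, 2000);
      }, minutes * 60000);
    }
  } else {
    alert("Your browser cannot import XML, so it cannot view RSS feeds using this script");
  }
  return false;
}
```

The form attribute then shrinks to onsubmit="return startFeed(this);", which
survives the wiki's quote-escaping untouched.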
Hope you can help, many thanks.