To all Wikimedia developers (and any other interested parties):
Alterego and I, soren9580, are working on creating a new footnotes
feature for Wikipedia, because we believe in the power of footnotes and
want to make it easier for writers to cite their sources in articles
and include relevant material in an easily readable format. We've
developed some examples for people to take a look at. First, the
description of what we are trying to do is available at
http://en.wikipedia.org/wiki/Wikipedia:Footnote2
A good example site using the current footnote implementation we have
developed, combined with a bibliography, is available at:
http://en.wikipedia.org/wiki/Myers-Briggs_Type_Indicator
Our goal is to develop an inline, dynamic footnoting system. Basically,
we want the ability to do footnotes the same way we currently do Tables
of Contents, by putting the text for the footnote right in with the
text it refers to, and having mediawiki autogenerate the footnotes
section at the bottom of the page. Sample code might look something
like this.
{{F|Auto-generated inline footnotes allow the user to write footnotes
as he or she is writing his or her article. The user can simply spell
out the text of the footnote as so, and allow the software to place it
where it needs to go. For more details see The Skeptic's Dictionary
(2004). Myers-Briggs Type indicator. Retrieved December 20, 2004.}}
This tag would, with the proper coding, automatically create a
footnotes section at whatever point in the page the writer of the
article placed the {{Footnotes}} tag.
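The inline approach described above can be sketched quite simply. Here is a rough illustration in Python (purely illustrative; an actual implementation would live in MediaWiki's PHP parser, and the template syntax handled here is deliberately simplified):

```python
import re

def render_footnotes(wikitext):
    """Replace each {{F|...}} marker with a numbered reference and
    expand {{Footnotes}} into the collected, auto-numbered list.
    Illustrative only: real template syntax (nesting, extra pipes)
    is richer than this regex handles."""
    notes = []

    def collect(match):
        notes.append(match.group(1))
        return '[%d]' % len(notes)

    # Non-greedy match so adjacent footnotes don't merge into one.
    body = re.sub(r'\{\{F\|(.*?)\}\}', collect, wikitext, flags=re.DOTALL)

    section = '\n'.join('%d. %s' % (i + 1, text)
                        for i, text in enumerate(notes))
    return body.replace('{{Footnotes}}', section)
```

With this, a writer never renumbers anything by hand: inserting a new {{F|...}} in the middle of the text shifts all later numbers automatically on the next render.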
As of right now, our implementation uses two templates, an {{f1}}
template and an {{f1b}} template (with the numbers changed for each
footnote). The f1 is put at the site of the footnote, and the f1b is
put in the footnotes list with the text of the footnote. Each creates a
link back to the other, so the reader can quickly skip down to a
footnote and back up to where he was reading. It works and is easy to
use, but it is not as fluid as inline footnoting: the writer still has
to make sure everything is numbered properly, and must renumber the
footnotes when adding one in the middle of the document. There is a
better way to do this, and we believe we have a good idea of how to
implement it.
Please give us feedback (alterego is also subscribed to this list so
he'll see what you post here in response), and if any developers are
interested in writing this feature into the next version, please let us
know.
Thanks!
-Chris Nicholson
Hi,
I hereby announce the 2nd version of my LilyPond extension, which has
these new features (but still using LilyPond's safe mode):
If run with <lilybook>...</lilybook>, all pages are displayed (as PNGs),
not just one line of notes. Also, if your code requests a MIDI file,
clicking on one of the PNGs lets you download it.
If you want MIDI output for a fragment, just use <lilymidi> instead of
<lilypond>.
You can find this extension at:
http://lily4jedit.sf.net/mediawiki-LilyPond-extension-v2.tar.gz
Ciao,
Dscho
It's quite evident that Wikimedia's current network is a mess. We have
three rather dumb (but cheap) Netgear gigabit switches that offer some
manageability features. However, no one seems to know what's on which
ports. The interfaces to manage them seem rather limited (I don't know
for sure; I don't have access to them), and the features they have lack
those needed to build a large network. It's quite clear that nobody
really likes these switches, and everyone would like to buy other,
better ones now that we've run out of switch ports again.
As the projected server count for Wikimedia in December 2005 is about
500 servers, it's time to start properly planning the design of the network.
While the complexity of the network increases, remote manageability becomes
more important. Most admin duties happen remotely; only Jimbo has
physical access to the actual hardware. This means admins are restricted
to telling him what to do, whenever he has time for it and is on
location. This really delays and complicates things, so I think it would
be good to make sure we build a network that makes REMOTE management as easy
and flexible as possible, and keep required physical changes to a minimum.
This does require switches that are more expensive than the current
ones, and it is rather hard to justify the cost for them. Technically,
wikimedia projects CAN run on cheap, unmanageable switches, since they
DO the most important part of the job: switching, at gigabit speeds.
However, with every new server and extra switch, remote manageability
gets harder, and consequently the network is becoming a mess. Admins
don't know exactly what's on a port. It's barely documented, and they
can't find out through the switch's management interfaces either. In
case of network problems, there are hardly any graphs, logs or other
sources of information to find out what's going on. The current setup is
feasible when one has < 24 ports, but it gets really messy when the
network grows...
I think we need at least layer 2 switches with basic manageability
features. Basic as in what's basic in any medium to large company
network these days. Some features we really could use are:
* serial ports, so we can manage them out of band through a console
server even if the network is down
* SNMP, so we can properly graph statistics of the switch itself, and
the individual ports. VERY helpful in case of problems...
* spanning tree (STP), especially helpful in large networks with remote
management
* VLANs and 802.1Q support. Allows one set of switches to be used for
multiple virtual LANs, and allows for more flexible and cost effective
use of resources, remotely WITHOUT changes to the physical network setup
* Diagnostic information from the switch's console - port descriptions,
port statistics, port status, mac address information, vlan assignments,
error rates, etc
* Syslog logging, so we notice what's going on
* centralized administration, so we don't have to manually copy
everything to each and every switch
* upgradeable firmware with long term support
* Port trunking/aggregation, for high bandwidth or redundancy needs
* IGMP/multicast support, could be helpful on a large network too
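To illustrate the SNMP point above: graphing mostly boils down to sampling a counter such as IF-MIB's ifInOctets at a fixed interval and plotting the delta, which is what tools like MRTG do for real. A tiny sketch of the arithmetic (Python, names illustrative):

```python
def octets_to_bps(prev_octets, curr_octets, interval_seconds):
    """Convert two samples of an SNMP octet counter (e.g. ifInOctets)
    into an average bits-per-second figure over the sampling interval."""
    delta = curr_octets - prev_octets
    if delta < 0:  # a 32-bit counter wrapped around between samples
        delta += 2 ** 32
    return delta * 8 / interval_seconds

# e.g. two samples taken 300 seconds apart:
rate = octets_to_bps(1_000_000, 38_500_000, 300)  # -> 1000000.0 bps
```

The counter-wrap handling is exactly the sort of detail you want a standard tool to take care of, which is another argument for switches that speak standard SNMP rather than a proprietary interface.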
While the current Netgear switches do have a few of the features
mentioned above, it's all too limited, too restricted, and too
nonstandard to be useful in a large network.
Wikimedia also needs SOME layer 3 and layer 4 features, but these are
less important, and generally MUCH more expensive, so I don't think we
can really justify doing this with switch hardware:
Layer 3 routing. While we (intend to) have at least two different
vlans/networks, an external and an internal, some traffic needs to be
routed/NATed between them. This does NOT involve actual wikimedia client
traffic, but it does involve some traffic needed for management of the
servers, like retrieving software updates, sending mails etc. This won't
be a lot of traffic, and we could do this using NAT on some server, or
for example on an LVS loadbalancing box (more on this later).
Layer 4 load balancing. Currently, load balancing between Squid boxes
happens through multiple DNS A records, and this clearly isn't optimal.
A true load balancer would be a lot better. There are layer 4 switches
that support load balancing to some extent, but these are generally VERY
expensive, $10,000 and up. A cheaper, and probably more flexible,
alternative is a setup using multiple redundant LVS (Linux Virtual
Server) boxes. Hashar and a friend of his who has experience using LVS
for large clusters are preparing a presentation on this.
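For a rough idea of what LVS does at layer 4: the director picks a real server for each incoming connection according to a scheduling policy. A toy sketch of weighted least-connections, which is what LVS's "wlc" scheduler implements in the kernel (Python, names illustrative):

```python
def pick_server(servers):
    """Weighted least-connections: choose the real server with the
    lowest active-connections-to-weight ratio, as LVS's 'wlc'
    scheduler does. `servers` maps name -> (active_conns, weight)."""
    return min(servers, key=lambda s: servers[s][0] / servers[s][1])

# Hypothetical Squid pool; squid2 has double weight, so its 12
# connections count as a ratio of 6 and it gets the next connection.
squids = {
    'squid1': (10, 1),
    'squid2': (12, 2),
    'squid3': (8, 1),
}
```

The weights let us mix stronger and weaker Squid boxes in one pool, something DNS round-robin cannot express at all.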
Firewalling. I personally don't think we really need this, especially
not once all fundamentally internal servers are on an internal vlan, but
some think it could be useful to have a central firewall, and a layer
3/4 switch could do this.
Personally, I think it would be good to build the wikimedia network on
proper layer 2 switches that can support switching in a large network
with decent manageability. Layer 3 and up (routing/NAT, load balancing,
firewalling if needed) we can do using a redundant LVS cluster, which
hashar is working on. This has the benefit that we don't have to spend
excessive amounts of money on proprietary hardware, and still get a very
flexible and cost-effective solution, with as much free software
involved as reasonably possible.
Also important I think, is that we choose a vendor that can offer us a
full range of products from low end to as high end as we will ever need.
When you have a large network, it's not feasible to work with many
different interfaces, command sets, features and terminology on each
switch; you'd rather have them reasonably consistent among different
switches and product ranges.
We could build the network out of a nice, decent core switch (possibly
two for redundancy), and multiple, relatively cheap access switches to
connect servers (for example, Cisco 2948G-GE-TX). Alternatively, we
could build a large virtual switch by stacking multiple smaller ones
(for example, out of Cisco 3750s), but we might run against a stacking
limit there, and these switches are generally quite a bit more expensive
than non-stackable ones. We could even build the entire network out of
just one very big, and very expensive, modular switch with hundreds of
ports, but this would be very hard to make redundant (although these
switches are pretty redundant in themselves...), and it also involves a
big initial investment.
Redundancy is something we need to think about. Of course we can buy one
big and expensive switch, but what if it breaks? With multiple cheaper
switches, it's more feasible to keep one or two as spares.
Some servers depend more heavily on each other, in terms of low latency
and high bandwidth, than others do. This obviously needs to be taken
into account while designing the network.
This needs more discussion...
I mentioned Cisco examples here, but that's only because I personally
have experience with them, they have a whole line of product ranges, and
prices are readily available. Of course, many other good switch vendors
exist (Foundry, HP, Extreme, Nortel, etc.), and many could provide us
with equivalent products. We have to look at alternatives as well...
It would be especially helpful if someone could get one of the major
network hardware vendors to donate network hardware to us, but I think
if that would happen, it would have to be a donor/partner for the long
term, and not just for a single donation. We can't build a large and
consistent network out of single, uncoordinated donations.
Comments, please!
(we could transfer this to a wiki if that's helpful...)
--
Mark
mark(a)nedworks.org
On Tue, Dec 21, 2004 at 05:50:43AM +0000, Angela wrote:
> There isn't really any policy on this yet. It might be best to discuss
> creating one on the English Wikipedia mailing list (wikien-l) or
> making a page about it on the wiki.
>
> The most relevant policy so far is
> http://meta.wikimedia.org/wiki/Privacy_policy but that is moving away
> from giving specific details on sockpuppet policy, so that each
> language wiki can decide this for themselves, rather than being a
> global policy.
Actually, that page seems reasonably strict and leaves fairly little
wiggle room for the purposes of checking whether the 3RR has been
violated or whether someone is voting twice. Still, it is an answer to
my question, so thanks!
--
Frank v Waveren Fingerprint: BDD7 D61E
fvw(a)[var.cx|stack.nl] ICQ#10074100 5D39 CF05 4BFC F57A
Public key: hkp://wwwkeys.pgp.net/468D62C8 FA00 7D51 468D 62C8
---- Forwarded message from Victor Porton <porton(a)narod.ru> ----
From: Victor Porton <porton(a)narod.ru>
To: board(a)wikimedia.org
Subject: [Ticket#: 107107-FW] I will load your XML import
> Hi,
>
> I have launched a Web site which is expected to have heavy traffic.
>
> It will load information from your XML import (from en.wiktionary.org),
> thus placing a load on your server.
>
> Sadly, no local proxy is implemented yet. (I'm working on this, but
> there are some problems setting up a local HTTP proxy.)
>
> As a temporary solution, maybe you could point me to an HTTP proxy
> between me and the XML import of en.wiktionary.org? My server is
> 198.63.211.208.
>
> Anyway, I may donate an adequate amount afterwards as compensation for
> the traffic I place on your server (plus I'm interested in the
> development of Wiktionary etc. in general, so I may donate more than
> the cost I incur). (Also note that I help increase your Google PR.)
>
> --
> == Victor Porton (porton(a)narod.ru) ==
>
>
---- End forwarded message ----
Thanks!
I did not set error_reporting on my system, so I was unaware of the
issue, even though I had tested it with keys that were not in the array.
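In Python terms (the extension itself is PHP, and this function name is invented), Brion's key-check suggestion amounts to testing membership before access and falling back on something sensible:

```python
def localized_label(sort_key, labels, fallback=None):
    """Look up a sort key in a localization table, falling back to the
    raw key (or a caller-supplied default) instead of triggering a
    warning on unknown keys -- the E_NOTICE case described below."""
    if sort_key in labels:
        return labels[sort_key]
    return fallback if fallback is not None else sort_key

labels = {'french': 'franzoesisch', 'german': 'deutsch'}
```

The complementary fix is on the database side: restricting the query with something like "cl_sortkey IN ('french','german','english')" means unknown keys never reach the lookup in the first place.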
Initially I wanted to write two separate extensions:
One that does what enumcat does, without localization,
and another that does the localization in links.
example: <localize_german>[[blah|english]]
[[blah2|german]]</localize_german>
would have returned '[[blah|englisch]] [[blah2|deutsch]]'
and the idea was to combine both:
<localize_french> <enumcat>blah</enumcat> </localize_french>
unfortunately the parser does not allow combining two extension tags:
only the outer one is evaluated, with the inner string passed as an
argument.
So maybe there is something to improve in the parser itself?
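A rough illustration of what evaluating extension tags inside-out could look like (Python, not the actual MediaWiki parser; the handler names are hypothetical):

```python
import re

def expand_tags(text, handlers):
    """Repeatedly expand the innermost <tag>...</tag> whose body
    contains no further tag, so nested extension tags are evaluated
    inside-out. `handlers` maps tag name -> function(body) -> str."""
    names = '|'.join(map(re.escape, handlers))
    # A body containing no '<' cannot hold another tag, so it is innermost.
    pattern = re.compile(r'<(%s)>([^<]*)</\1>' % names)

    def apply(match):
        return handlers[match.group(1)](match.group(2))

    prev = None
    while prev != text:  # keep substituting until nothing changes
        prev, text = text, pattern.sub(apply, text)
    return text
```

With handlers for, say, a localization tag and an enumcat tag, the inner tag would be expanded first and its output passed to the outer one, which is the behavior the current single-pass parser lacks.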
cheers
Thomas
Brion Vibber wrote:
>
> Thanks! Committed. A couple issues:
>
> If used on a category which contains items that don't exactly match,
> you'll get E_NOTICE warnings about uninitialized array indices if
> error_reporting is turned up to E_ALL. You should check that the sort
> key is represented in the array before trying to access it, and if it
> doesn't find a sensible fallback behavior. (eg using the sortkey
> 'raw', or the title, or discarding the entry entirely)
>
> Also, if you explicitly only want to show entries with certain sort
> keys, you may want to add this to the WHERE clause in your query, such
> as "cl_sortkey IN('french','german','english')" etc.
>
> -- brion vibber (brion @ pobox.com)
Hello,
> Hello Victor,
>
> Your host will certainly be blocked if you keep requesting Special:Export.
> You should use the database dumps to get all the data at once :o)
>
> http://download.wikimedia.org/index.php?thingumy=wiktionary
Right now it is hard for me to set up such big databases quickly enough,
so let us agree on a compromise:
- I will keep a cache (maybe currently around 20-30 Mb of Bzip2 compressed
data, increasing the cache size in months to follow) of Wiktionary articles.
- I will use XML export to download several (e.g. 10) articles at once,
updating the cache only when articles are very old (a week? two weeks? a
month?) or when a user of my site explicitly requests a cache update.
(Well, when somebody requests an article to be downloaded for the _first
time_, I do need to run an XML export for just that one article, not ten.)
- I will send an "Accept-Encoding: gzip" HTTP header with my XML import
requests so the batches of imported articles are compressed with gzip.
OK?
Additionally I may now donate you a little to pay for the traffic.
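For what it's worth, honoring gzip on the client side is straightforward; a small sketch of decoding such a response body (Python, purely illustrative):

```python
import gzip

def decode_body(raw_bytes, content_encoding=''):
    """Decompress an HTTP response body if the server honored our
    'Accept-Encoding: gzip' request header; pass it through otherwise."""
    if content_encoding.lower() == 'gzip':
        return gzip.decompress(raw_bytes)
    return raw_bytes

# Round-trip example with locally compressed data:
payload = gzip.compress(b'<mediawiki>...</mediawiki>')
assert decode_body(payload, 'gzip') == b'<mediawiki>...</mediawiki>'
```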
--
== Victor Porton (porton(a)ex-code.com) ==
# http://ex-code.com - software company, custom software for low price #
# http://ex-code.com/~porton/ - Christian revelations, math discoveries #
# http://ex-code.com/articles/ - original programming/XML articles #
I was thinking something along the lines of an admin configurable option to
add specific templates for specific namespaces and categories. One way to
avoid pages that only contain default text would be to only add the template
if the page is not saved blank, or to not include the template as part of
the text, instead rendering it separate underneath or above the text.
The problem I'm having is that many people do not read guidelines before
they start using the wiki, which leads to inconsistent formatting, and
discussion pages that are out of control. For certain categories and
namespaces it would be nice to have formatting information there so people
know what to follow.
I'm not sure if a "template on request" button would work, because the
user would still have to add it manually, and 99% of the time they
probably don't know the template even exists. Even if there were a list
of all templates to choose from, users likely wouldn't know which
template to use. Either way, most pages wouldn't get the templates they
need.
One specific template that would be very nice for every page is a help bar
at the top or bottom. Other than hacking at the code, I don't see a way to
add this to every page by default.
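The admin-configurable option described above could amount to something as simple as a namespace-to-template mapping consulted when a new page is created. A sketch (Python for illustration; the real feature would hook into MediaWiki's PHP edit path, and all names here are invented):

```python
# Hypothetical admin-configurable mapping of namespace -> preload template.
PRELOAD_TEMPLATES = {
    'Talk': '{{TalkPageGuidelines}}',
    'Help': '{{HelpPageLayout}}',
}

def preload_text(namespace, user_text=''):
    """Return the initial edit-box text for a brand-new page in the
    given namespace. To avoid saving pages that contain only the
    default text, the caller should compare the saved text against the
    bare preload before writing."""
    template = PRELOAD_TEMPLATES.get(namespace, '')
    return (template + '\n' + user_text).strip() if template else user_text
```

Rendering the template separately above or below the text, as suggested earlier, would sidestep the only-default-text problem entirely, since the template would never be part of the saved page at all.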
Ryan Lane
NAVOCEANO
> -----Original Message-----
> From: wikitech-l-bounces(a)wikimedia.org
> [SMTP:wikitech-l-bounces@wikimedia.org] On Behalf Of b schewek
> Sent: Monday, December 20, 2004 2:56 PM
> To: Wikimedia developers
> Subject: Re: [Wikitech-l] Auto-template on Page Creation
>
> Ryan Lane:
>
> >
> > Is there any way to make a template automatically appear on newly
> created
> > pages? If so, is there any way to do it by specific namespaces? I have a
> > template that tells users how they should use talk pages, but it would
> be
> > much more useful if it was added by default. It would also be nice to be
> > able to do something like this when categories are used (like formatting
> > rules for topics under certain categories).
> >
> > If this is not currently possible, is this something that the wikimedia
> > community would be interested in? If so, what are some ideas on a good
> way
> > to implement this, and which objects would need to be modified to add
> this
> > functionality?
>
> A long time ago, empty pages had a standard text inside.
> It led to the creation of many pages with exactly that text.
> Thus, the practice was discontinued.
>
> Maybe a "template on request" button, as part of the edit menu?
>
> Schewek
>
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l(a)wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l