Take a look at:
http://en.wikipedia.org/wiki/Headway
Note that when the HTML renderer has to make a fraction, it leaves way
too much whitespace between the numerator and denominator. I realize
why this is happening, but can't this be adjusted with CSS?
Maury
Hi everyone,
Here's a breakdown of the revisions left to review:
http://www.mediawiki.org/wiki/MediaWiki_roadmap/1.17/Revision_report
Current count of branches plus extensions: 283
There's a script to generate this (publishing source later; requires
toolserver), so we should be able to maintain this list up until
release. The main goal right now is to give us better visibility into
the revisions that don't have reviewer tags on them, since that's hard
to see otherwise.
Rob
Hello,
I made a mistake on Saturday evening (around 18:30 UTC) which broke
some SUL-related functions. The issue was fixed by Apergos about an hour
later while I was away from home.
Here is the report:
I tried to create the Esperanto wikisource (bug 26136 [1]) by following
our guide on wikitech [[add a wiki]] [2]. I ran addwiki.php with the
following invocation:
$ php addwiki.php eo wikisource eowikisource eo.wikisource.org
This complained about eowikisource not existing in wgLanguageName, most
probably because the first two arguments are eaten by the Maintenance
class or something like that.
Re-checking our guide, it says:
You need to put in --wiki=aawiki after the addwiki.php and before the
langcode for now. Script is wonky.
I thought aawiki was a place-holder for the wiki database name and ran:
$ php addwiki.php --wiki eowikisource eo wikisource \
eowikisource eo.wikisource.org
This triggered a database error saying the database "eowikisource" does
not exist, which is the intended result.
I then edited the all.dblist and pmtpa.dblist to manually add
eowikisource and ran sync-dblist. At this point, SUL was broken but I
did not know about it :-( Please note those additions are normally
handled by the addwiki.php script.
Root cause:
Addition of a non-existent database name to the dblists.
Impact:
Any script or function that queries all databases and expects them to
exist, without error checking.
How it got fixed:
Apergos removed eowikisource from the dblists and synced the list.
How it could have been avoided:
I should have reverted my change in the dblist; that would have fixed
the issue. I should have stayed idling in IRC for some time after my change.
I was not reachable since my contact details were a bit old. I have
updated our local file with my mobile and home phone numbers.
Real solution:
Run, as instructed in the guide: addwiki.php --wiki=aawiki [...]
TODO:
- Really create eowikisource
- Investigate potential damage (e.g. bug 26877 [3])
- Properly handle non-existent databases in functions / scripts (see the
  sketch below)
- Fix addwiki.php in 1.16wmf4
- Review the [[Add a wiki]] article
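To make the third item concrete, here is a minimal sketch of the kind of
guard a script iterating over all.dblist could use. This is not code from
our tree; the dblist path and log group are illustrative:

$databases = array_filter( array_map( 'trim', file( '/path/to/all.dblist' ) ) );
foreach ( $databases as $dbname ) {
    try {
        // Skip entries whose database cannot actually be reached,
        // instead of letting the whole run abort.
        $db = wfGetDB( DB_SLAVE, array(), $dbname );
        $db->query( 'SELECT 1', __METHOD__ );
    } catch ( DBConnectionError $e ) {
        wfDebugLog( 'dblist', "Skipping missing database $dbname\n" );
        continue;
    }
    // ... per-wiki work goes here ...
}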
I hope this report clarifies the issue. I would like to thank Apergos,
JeLuF and RobH for their kind words on IRC on Sunday morning; it is
always appreciated when you are ashamed of such a stupid mistake.
[1] https://bugzilla.wikimedia.org/26136
[2] http://wikitech.wikimedia.org/view/Add_a_wiki
[3] https://bugzilla.wikimedia.org/26877
--
Ashar "hashar" Voultoiz
A quick update on WYSIFTW, my "augmented wikitext" editor. (Please see
http://meta.wikimedia.org/wiki/WYSIFTW for details.)
Wikitext support is nearing completion. I added bold/italics a few
days ago, and yesterday it got some buttons to apply/remove such
markup from a selection. Just a few minutes ago, I finished wikitable
support - you can now edit text in table cells, in the same table
layout and style you see in the real article (though you cannot alter
the table or cell markup itself, add/remove rows, etc., which can
later be achieved through buttons in the sidebar or similar).
As of this moment, lists, indentations, <nowiki>, <pre>, and "----"
(<hr>) are not supported. These shouldn't be too difficult, compared
to the things already done.
I have taken great care to avoid unnecessary changes in the wikitext
being introduced through the parsing/unparsing process. I am not 100%
successful, but after test-loading dozens of random pages, as well as
a few of my standard tests (including [[Paris]] and [[Berlin]]), such
changes seem to be rare, and do not appear to break valid wiki syntax.
If you find a page where this happens (it will warn you in the sidebar
after parsing), please add it to
http://meta.wikimedia.org/wiki/Talk:WYSIFTW#Pages_with_inherent_differences
.
The editing components have improved as well, but are far from the
usual Word-like capabilities. No cut/copy/paste, and no undo. The
former should be easy to do, at least for plain text; the latter will
require "recording" of all editing actions, which sounds like work to
me :-(
Speed has become an issue. I work with Chrome 10 on a not-too-old
iMac, so even behemoths like [[Paris]] are parsed in <20sec. However,
I have heard reports about times of >200sec, which is clearly too much
(20 sec is as well, IMO). A large chunk of the time seems to come from
bold/italics parsing, which can include up to four separate parsing
steps in my implementation. There is clearly room for improvement, but
I hesitate to optimize until all major features (e.g. lists) are
implemented, and I have some standard test pages available. I am also
thinking about using Selenium once WYSIFTW is feature-complete (as far
as wikitext goes).
There is the question of what browsers/versions to test for. Should I
invest large amounts of time optimising performance in Firefox 3, when
FF4 will probably be released before WYSIFTW, and everyone and their
cousin upgrades? As a one-man-show, I have to think about these
things.
Finally, there are, undoubtedly, a large number of bugs hidden in the
code. I assume they will be weeded out, given enough eyeballs (testers
and developers).
That wasn't as quick as I said in the first line of this mail. OTOH,
it's past midnight here (again!), and I'm getting too old for this...
Cheers,
Magnus
Reply to message 7
It seems it can be used anywhere, but it does not support Chinese, at
least: it does not work with
http://en.wikipedia.org/wiki/Chinese_input_methods_for_computers .
Can this be fixed?
HW
________________________________
From: "wikitech-l-request(a)lists.wikimedia.org"
<wikitech-l-request(a)lists.wikimedia.org>
To: wikitech-l(a)lists.wikimedia.org
Date: 2011/1/23 (Sun) 9:40:33 AM
Subject: Wikitech-l Digest, Vol 90, Issue 48
Today's Topics:
1. Re: File licensing information support (Bryan Tong Minh)
2. Re: File licensing information support (Platonides)
3. Re: Announcing OpenStackManager extension (Platonides)
4. Re: File licensing information support (Krinkle)
5. Re: File licensing information support (Platonides)
6. Re: File licensing information support (Magnus Manske)
7. Re: WYSIFTW status (Magnus Manske)
8. Re: Farewell JSMin, Hello JavaScriptDistiller! (Maciej Jaros)
----------------------------------------------------------------------
Message: 1
Date: Sat, 22 Jan 2011 20:15:10 +0100
From: Bryan Tong Minh <bryan.tongminh(a)gmail.com>
Subject: Re: [Wikitech-l] File licensing information support
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Message-ID:
<AANLkTinN_n3kM0s_yCymh5jySu47h8mtdAqZX9bxPCnB(a)mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Fri, Jan 21, 2011 at 3:36 AM, Michael Dale <mdale(a)wikimedia.org> wrote:
> On 01/20/2011 05:00 PM, Platonides wrote:
>> I would have probably gone by the page_props route, passing the metadata
>> from the wikitext to the tables via a parser function.
>
> I would also say it's probably best to pass metadata from the wikitext to
> the tables via a parser function. Similar to categories, and all other
> "user edited" metadata. This has the disadvantage that it's not as easy
> to edit via a structured api entry point, but has the advantage of
> working well with all the existing tools, templates and versioning.
>
This is actually the biggest decision that has been made; the rest is
mostly implementation details. (Please note that I'm not presenting
you with a fait accompli; it is of course still possible to change
this.)
Handling metadata separately from wikitext provides two main
advantages: it is much more user friendly, and it allows us to
properly validate and parse data.
Having a clear, separate input text field "Author: ____" is much more
user friendly than {{#fileauthor:}}, which is, so to say, a type of
obscure MediaWiki jargon. I know that we could probably hide it behind a
template, but that is still not as friendly as a separate field. I
keep hearing that, especially for newbies, a big blob of wikitext is
plain scary. We regulars may be able to quickly parse the structure in
{{Information}}, but for newbies this is certainly not so clear.
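For readers who haven't seen the parser-function route in code, it
would look roughly like this; a sketch only, with illustrative function
names, and the magic-word registration omitted:

$wgHooks['ParserFirstCallInit'][] = 'wfFileAuthorSetup';

function wfFileAuthorSetup( $parser ) {
    $parser->setFunctionHook( 'fileauthor', 'wfFileAuthorRender' );
    return true;
}

function wfFileAuthorRender( $parser, $author = '' ) {
    // Record the author in page_props, the same way categories and
    // other parser-collected metadata end up queryable.
    $parser->getOutput()->setProperty( 'fileauthor', $author );
    return ''; // nothing is rendered in the page itself
}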
We actually see that from the community there is a demand for
separating the metadata from the wikitext -- this is, after all, why
they implemented the uselang= hacked upload form with a separate text
box for every metadata field.
Also, a separate field allows MediaWiki to understand what a certain
input really means. {{#fileauthor:[[User:Bryan]]}} means nothing to
MediaWiki or re-users, but "Author: Bryan___ [checkbox] This is a
Commons username" can be parsed by MediaWiki to mean something. It
also allows us to mass-change, for example, the author. If I want to
change my attribution from "Bryan" to "Bryan Tong Minh", I would need
to edit the wikitext of every single upload, whereas in the new system
I go to Special:AuthorManager and change the attribution.
> Similar to categories, and all other "user edited" metadata.
Categories are a good example of why metadata does not belong in the
wikitext. If you have ever tried renaming a category... you need to
edit every page in the category and rename it in the wikitext. Commons
is running multiple bots to handle category rename requests.
All these advantages outweigh the pain of migration (which could
presumably be handled by bots) in my opinion.
Best regards,
Bryan
------------------------------
Message: 2
Date: Sat, 22 Jan 2011 21:04:05 +0100
From: Platonides <Platonides(a)gmail.com>
Subject: Re: [Wikitech-l] File licensing information support
To: wikitech-l(a)lists.wikimedia.org
Message-ID: <ihfd31$buv$1(a)dough.gmane.org>
Content-Type: text/plain; charset=ISO-8859-1
An internally handled parser function doesn't conflict with showing it
as a textbox.
We could for instance store it as a hidden page prefix.
Data stored in the text blob:
"Author: [[Author:Bryan]]
License: GPL
---
{{Information| This is a nice picture I took }}
{{Deletion request|Copyvio from http://www.example.org}}
"
Data shown when clicking edit:
Author: <input type="text" value="Bryan" />
License: <select>GPL</select>
<textarea name="textbox1">
{{Information| This is a nice picture I took }}
{{Deletion request|Copyvio from http://www.example.org}}
</textarea>
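A sketch of the split this implies (purely illustrative, no such code
exists yet): the edit page would divide the stored blob at the first
"---" line into the metadata header and the ordinary wikitext:

// Hypothetical: split the stored blob into the metadata header and
// the wikitext shown in the textarea.
list( $header, $wikitext ) = explode( "\n---\n", $blob, 2 );
$metadata = array();
foreach ( explode( "\n", trim( $header ) ) as $line ) {
    list( $key, $value ) = explode( ':', $line, 2 );
    $metadata[ trim( $key ) ] = trim( $value );
}
// $metadata is now array( 'Author' => '[[Author:Bryan]]', 'License' => 'GPL' )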
Why do I like such an approach?
* You don't need to create a new way for storing the history of such
metadata.
* Old versions are equally viewable.
* Things like edit conflicts are already handled.
* Diffing could be done directly with the blobs.
* Import/export automatically works.
* Extendable for more metadata.
* Readable for tools/wikis unaware of the new format.
On the other hand:
* It breaks the concept of "everything is in the source".
* Parsing is different based on the namespace. A naive rendering as
"License: GPL", instead of showing an image and a GPL excerpt, would be
acceptable, but if incomplete markup is stored there, the renderings
would be completely different. This could be avoided by placing the
metadata inside a tag. But what happens if the tag is inserted elsewhere
in the page? MediaWiki doesn't have run-once tags.
PS: The author field would be just a pointer to the author page, so you
wouldn't need to edit everything in any case.
------------------------------
Message: 3
Date: Sat, 22 Jan 2011 21:05:57 +0100
From: Platonides <Platonides(a)gmail.com>
Subject: Re: [Wikitech-l] Announcing OpenStackManager extension
To: wikitech-l(a)lists.wikimedia.org
Cc: mediawiki-l(a)lists.wikimedia.org
Message-ID: <ihfd6h$buv$2(a)dough.gmane.org>
Content-Type: text/plain; charset=ISO-8859-1
Ryan Lane wrote:
> For the past month or so I've been working on an extension to manage
> OpenStack (Nova), for use on the Wikimedia Foundation's upcoming
> virtualization cluster:
>
>http://ryandlane.com/blog/2011/01/02/building-a-test-and-development-infras…
>/
>
> I've gotten to a point where I believe the extension is ready for an
> initial release.
Congratulations, Ryan!
------------------------------
Message: 4
Date: Sat, 22 Jan 2011 21:47:17 +0100
From: Krinkle <krinklemail(a)gmail.com>
Subject: Re: [Wikitech-l] File licensing information support
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Message-ID: <37ECD06F-DF37-4F62-9CA2-1DC62EC267BA(a)gmail.com>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Jan 22, 2011 at 21:04 Platonides wrote:
> An internally handled parser function doesn't conflict with showing it
> as a textbox.
> [...]
So PHP would extract {{#author:4}} and {{#license:12}} from the
text blob when showing the edit page, show the remaining wikitext in
the <textarea> and the author/license as separate form elements, and,
upon saving, generate "{{#author:4}} {{#license:12}}\n" again and
prepend it to the text blob.
Double instances of these would be ignored (i.e. stripped
automatically, since they're not re-inserted into the text blob upon
saving).
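A minimal sketch of that extract-and-reassemble flow (hypothetical
helper names, nothing from core):

function extractFileMetadata( $text ) {
    $fields = array();
    // Pull every {{#author:...}} / {{#license:...}} out of the blob.
    // Later duplicates overwrite earlier ones, which is what strips
    // double instances on the next save.
    if ( preg_match_all( '/\{\{#(author|license):([^}]*)\}\}\n?/',
        $text, $m, PREG_SET_ORDER )
    ) {
        foreach ( $m as $match ) {
            $fields[ $match[1] ] = $match[2];
            $text = str_replace( $match[0], '', $text );
        }
    }
    return array( $fields, $text ); // form fields + textarea content
}

function prependFileMetadata( $fields, $text ) {
    $prefix = '';
    foreach ( $fields as $name => $value ) {
        $prefix .= '{{#' . $name . ':' . $value . '}}';
    }
    return $prefix . "\n" . $text;
}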
One small downside would be that if someone edited the textarea
manually to do stuff with author and license, the next edit would
re-arrange them, since they're extracted and re-inserted, thus showing
messy diffs. (Not a major point, as long as it's done independently of
JavaScript, which it can be if done from core / PHP.)
If that's what you meant, I think it is an interesting concept that
should not be ignored; personally, however, I am not yet convinced this
is the way to go. But looking at the complete picture of upsides and
downsides, this could be something to consider.
--
Krinkle
------------------------------
Message: 5
Date: Sat, 22 Jan 2011 23:09:59 +0100
From: Platonides <Platonides(a)gmail.com>
Subject: Re: [Wikitech-l] File licensing information support
To: wikitech-l(a)lists.wikimedia.org
Message-ID: <ihfkf2$fef$1(a)dough.gmane.org>
Content-Type: text/plain; charset=ISO-8859-1
Krinkle wrote:
> So PHP would extract {{#author:4}} and {{#license:12}} from the
> text blob when showing the edit page, [...]
That's an alternative approach. I was thinking of accepting them only
at the beginning of the page, but extracting them from everywhere is
also an alternative.
------------------------------
Message: 6
Date: Sun, 23 Jan 2011 00:38:53 +0000
From: Magnus Manske <magnusmanske(a)googlemail.com>
Subject: Re: [Wikitech-l] File licensing information support
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Message-ID:
<AANLkTimFfJs98FEAmJ6mhBbFJ25Z7jXo+sgvFLU4r-XF(a)mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
On Sat, Jan 22, 2011 at 10:09 PM, Platonides <Platonides(a)gmail.com> wrote:
> Krinkle wrote:
>> So PHP would extract {{#author:4}} and {{#license:12}} from the
>> text blob when showing the edit page, [...]
>
> That's an alternative approach. I was thinking of accepting them only
> at the beginning of the page, but extracting them from everywhere is
> also an alternative.
OK, my 2 cents:
I would be in favour of extracting data from the {{Information}}
template via the parser, but we talked about this over a year ago at
the Paris meeting, and it was deemed too complicated (black caching
magick etc.), and no one has stepped forward to do anything along those
lines, so I guess it's dead and buried.
Things like {{#author:4}} seem to be a nice hack to Get Things Done
(TM). As was mentioned before, the temptation is great to expand it
into a generic triplet storage a la Semantic MediaWiki, but that would
probably complicate things to an extent where nothing gets done,
again.
But one thing comes to mind: If someone implements an abstraction
layer ("4" to a specific author) anyway, it should be dead simple to
use it for tags as well. Just allow multiple {{#tag}}s per page (as
opposed to {{#author}}), done. The same code that will allow for
editing author and license information centrally should make it
possible to edit tag information as well, i18n for example, so the tag
display could be in the current user language (with "en" fallback).
Searching for tags i18n-style could be possible too, if the translation
information is encoded machine-readably, e.g. as language links
([[de:Pferd]] on the [[Tag:Horse]] page).
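A sketch of the one difference that matters here, in the same
illustrative parser-function style as earlier in this thread. (Core
already uses {{#tag:}} to invoke tag extensions, so a real
implementation would need another name; "filetag" below is made up.)
Unlike a single-valued author property, tags accumulate:

function wfFileTagRender( $parser, $tag = '' ) {
    $out = $parser->getOutput();
    // Accumulate multiple tags per page instead of keeping one value.
    $existing = $out->getProperty( 'filetags' );
    $tags = $existing === false ? array() : explode( '|', $existing );
    $tags[] = $tag;
    $out->setProperty( 'filetags', implode( '|', array_unique( $tags ) ) );
    return '';
}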
It might be too much to try to activate all of that in the first
round, but IMHO the code should keep the use as tags in mind; it would
be dreadful to waste such an opportunity.
Cheers,
Magnus
------------------------------
Message: 7
Date: Sun, 23 Jan 2011 00:45:29 +0000
From: Magnus Manske <magnusmanske(a)googlemail.com>
Subject: Re: [Wikitech-l] WYSIFTW status
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Message-ID:
<AANLkTikw+SCv3cUScc--1SvZE=54OgS=zydJUdwQpy=i(a)mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
On Wed, Jan 19, 2011 at 10:57 PM, Magnus Manske
<magnusmanske(a)googlemail.com> wrote:
> On Wed, Jan 19, 2011 at 8:25 PM, Platonides <Platonides(a)gmail.com> wrote:
>> Magnus Manske wrote:
>>> On my usual test article [[Paris]], the slowest section ("History")
>>> parses in ~5 sec (Firefox 3.6.13, MacBook Pro). Chrome 10 takes 2
>>> seconds. I believe these will already be acceptable to average users;
>>> optimisation should improve that further.
>>>
>>> Cheers,
>>> Magnus
>>
>> What about long tables?
>
> Worst-case-scenario I could find:
>http://en.wikipedia.org/wiki/Table_of_nuclides_(sorted_by_half-life)#Nuclid…
>s
>
> 4.7 sec in Chrome 10 on my iMac.
> 6.2 sec in Firefox 4 beta 9.
> 10.7 sec in Firefox 3.6.
>
> Could be worse, I guess...
>
Another update that might be of interest (if not, tell me :-)
I just went through my first round of code optimisation. Parsing speed
has improved considerably, especially for "older" browsers: Firefox
3.6 now parses [[Paris]] in 10 sec instead of 32 sec (YMMV).
Also, it is now loading the wikitext and the image information from
the API in parallel, which reduces pre-parsing time.
For small and medium-size articles, editing in WYSIFTW mode now often
loads (and parses) faster than the normal edit page takes to load
(using Chrome 10).
Cheers,
Magnus
------------------------------
Message: 8
Date: Sun, 23 Jan 2011 02:40:30 +0100
From: Maciej Jaros <egil(a)wp.pl>
Subject: Re: [Wikitech-l] Farewell JSMin, Hello JavaScriptDistiller!
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Message-ID: <4D3B870E.2010407(a)wp.pl>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Michael Dale (2011-01-21 16:04):
> On 01/21/2011 08:21 AM, Chad wrote:
>> While I happen to think the licensing issue is rather bogus and
>> doesn't really affect us, I'm glad to see it resolved. It outperforms
>> our current solution and keeps the same behavior. Plus as a bonus,
>> the vertical line smushing is configurable so if we want to argue
>> about \n a year from now, we can :)
> Ideally we will be using Closure by then, and since it rewrites
> functions and variable names and sometimes collapses multi-line
> functionality, newline preservation will be a moot point. Furthermore,
> Google even has a nice add-on to Firebug [1] for source code mapping,
> making the dead horse even more dead.
>
> I feel like we are stuck back in time, arguing about optimising code
> that came out eons ago in net time (more than 7 years ago). There are
> more modern solutions that take these concerns into consideration and
> do a better job at it (i.e. not just a readable line, but a pointer
> back to the line of source code that is of concern).
>
> [1] http://code.google.com/closure/compiler/docs/inspector.html
Great. Now I only need to tell the user to install Firefox, install
Firebug and some other addon, open the page in Firefox... Oh, wait. This
error does not occur in Firefox...
Please, I can live with folding new lines (though I don't believe those
few bytes are worth it), but actually compiling the code (or packing, as
some say) would be just evil for MediaWiki, or Wikimedia to be more
exact. Just remember that people all over the world are hacking on
MediaWiki all the time. Making it harder won't help a bit.
Regards,
Nux.
------------------------------
End of Wikitech-l Digest, Vol 90, Issue 48
******************************************
For the past month or so I've been working on an extension to manage
OpenStack (Nova), for use on the Wikimedia Foundation's upcoming
virtualization cluster:
http://ryandlane.com/blog/2011/01/02/building-a-test-and-development-infras…
I've gotten to a point where I believe the extension is ready for an
initial release.
In brief, OpenStack works a lot like EC2, and in fact implements the
EC2 API. This extension interacts with the EC2 API and LDAP, to manage
a virtual machine infrastructure. It has the following features:
* Integrates with the LdapAuthentication extension, and creates user
accounts in LDAP upon user creation
** Users are created with a POSIX username, uid, and gid; home directory;
OpenStack credentials; and wiki credentials
* Manages most features of OpenStack
** Handles project creation/deletion, and membership
** Handles project and global role membership
** Handles instance creation/deletion
** Handles security group creation/deletion and rule addition/removal
** Handles floating IP address allocation and association with instances
** Handles public SSH key addition/removal from user accounts
* Manages DNS via PowerDNS with an LDAP backend
** Handles private DNS for private IP address ranges upon instance
creation and deletion
** Handles public DNS for floating IP addresses
* Manages Puppet configuration for instances via Puppet with an LDAP
backend for nodes
The extension was written to handle the case explained in my blog
post. It is likely not written in a generic enough way to fit into
most existing infrastructures currently. If you'd like to help make
the extension more useful for a wider audience, please contact me,
send patches, or if you have commit access, make modifications. I have
a test/dev environment for this project configured on tesla, if you'd
like to work in a pre-configured environment.
The extension page is here:
http://www.mediawiki.org/wiki/Extension:OpenStackManager
Respectfully,
Ryan Lane
Hi,
At the Greek Research and Education Network (GRNET) we are looking at the possibility of contributing to the development of WYSIWYG editor support in Wikipedia. We understand that considerable work has already taken place in this area, e.g.:
* http://www.mediawiki.org/wiki/WYSIFTW
* https://svn.wikia-code.com/wikia/trunk/extensions/wikia/RTE/
* http://www.mediawiki.org/wiki/User:JanPaul123/Sentence-level_editing
We therefore think it would not be productive to reinvent the wheel here.
Our contribution can take the form of providing developers who will devote part (or all) of their time for some months in 2011. We welcome any comments and suggestions on how we could push this forward, and in particular:
* Specific tasks / components that need to be designed, developed, optimized, etc., and estimates of effort and timeframe.
Best Regards,
Panos Louridas
GRNET
Hello,
The squid statistics reports show us that some sites are leeching our
bandwidth. How can we tell? They show a huge number of image referrals
and barely any for pages.
One example:
In December, channelsurfing.net was seen as a referrer for:
- roughly 1 000 pages
- 1 740 000 images
whatchnewfilms.com is at 14 000 / 581 000.
Looking at their pages, they use upload.wikimedia.org directly and glue
some advertisements around it.
Given the cost in bandwidth, hard drives, CPU, architecture ... I do
think we should find a solution to block those sites as much as
possible. Would it be possible at the squid level?
http://stats.wikimedia.org/archive/squid_reports/2010-12/SquidReportOrigins…
--
Ashar Voultoiz
> Hello,
>
>
> As you may have noticed, Roan, Krinkle and I have started to more
> tightly integrate image licensing within MediaWiki. Our aim is to
> create a system where it should be easy to obtain the basic copyright
> information of an image in a machine readable format, as well as
> querying images with a certain copyright state (all images copyrighted
> by User:XY, all images licensed CC-BY-SA, etc)
>
> At this moment we only intend to store author and license information,
> but nothing stops us from expanding this in the future.
>
> We have put some information in a not so structured way at mw.org [1].
> There are some issues open on the talk page [2]. Input is of course
> welcome, both here or preferably at the talk page.
>
>
> Bryan
>
>
> [1] http://www.mediawiki.org/wiki/Files_and_licenses_concept
> [2] http://www.mediawiki.org/wiki/Talk:Files_and_licenses_concept
>
>
Has there been consideration given to translating author names into
different languages?
Relative to other types of metadata, having the author in different
languages is not as important, since most
people just use whatever the name in the author's native language is
(or at least, that is what experience suggests to me). However, we
might want to have translations of
the author's name in some circumstances:
*If the author is 'Unknown' or 'Anonymous', we'd definitely want to be
able to translate that.
*If the author is a company, government or a group with a proper name,
people tend to translate the name.
*If the author's native language is in a different script than the
current language, then the author's name is usually translated, in my
experience. (To the average English reader, a name in a script like
Arabic or Tamil that doesn't use the Latin alphabet generally looks
like any other name in that language; I imagine people who only speak
Arabic would have the same trouble differentiating between the written
forms of different English names.)
(Of course, the above is just a guess at when you'd want to translate
author names; I don't know what happens in actual practice.)
So I do think allowing such author properties to have multiple
translations is something to consider.
If there were support for translations of the values of these
properties, then ideally, when querying this information from the api,
we'd want to be able to do things like get the author's name in
language X, falling back to the original language if unavailable, or
get the author's name in all available languages, etc.
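A sketch of the fallback behaviour I mean, with a made-up data shape
rather than any proposed schema:

// Hypothetical: $translations maps language codes to the author's name.
function getAuthorName( $translations, $originalLang, $requestedLang ) {
    if ( isset( $translations[$requestedLang] ) ) {
        return $translations[$requestedLang];
    }
    // Fall back to the name in the author's original language.
    return $translations[$originalLang];
}

$names = array( 'en' => 'Unknown', 'de' => 'Unbekannt' );
echo getAuthorName( $names, 'en', 'de' ); // "Unbekannt"
echo getAuthorName( $names, 'en', 'fr' ); // falls back to "Unknown"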
-bawolff