Hi everyone,
I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing apparent turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
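For what it's worth, here is a minimal regex-based sketch of the kind of
script I have in mind (this is purely illustrative and assumes the page
wikitext has already been fetched, e.g. via action=raw; a real converter
would need a proper parser for templates, tables and links):

```python
import re

def wikitext_to_latex(text):
    """Convert a small subset of MediaWiki markup to LaTeX.

    Handles level-2 headings, bold, and italics only; this is a
    sketch, not a complete converter.
    """
    # == Heading == -> \section{Heading}
    text = re.sub(r"^==\s*(.+?)\s*==\s*$", r"\\section{\1}",
                  text, flags=re.MULTILINE)
    # '''bold''' -> \textbf{bold} (must run before the italics rule,
    # since ''' also contains '')
    text = re.sub(r"'''(.+?)'''", r"\\textbf{\1}", text)
    # ''italic'' -> \textit{italic}
    text = re.sub(r"''(.+?)''", r"\\textit{\1}", text)
    return text

print(wikitext_to_latex("== Intro ==\nThis is '''bold''' and ''italic''."))
```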
Kind Regards,
Hugo Vincent,
Bluewater Systems.
Hi!
I've read on the techblog that the new UI goes live in April. I have
some questions:
1) What version? Acai, Babaco or Citron?
2) How/where can a wiki customize the special character insert menu
and the inserted strings? Also, the embed file (picture) button inserts
"[[Example.jpg]]" without any "File:" or "Image:" prefix!
3) The search and replace button is available in Firefox, but does not
appear at all in Opera. Why?
4) Currently the new navigable TOC does not work at all in Firefox or
Opera (I've tried both).
Isn't it too early for live deployment?
Regards,
Akos Szabo (Glanthor Reviol)
Sorry to bug the list about this, but can anyone please explain
the reason for not enabling the Interlanguage extension?
See bug 15607 -
https://bugzilla.wikimedia.org/show_bug.cgi?id=15607
I believe that enabling it would be very beneficial for many projects,
and many people have expressed their support for it. I am not saying
that there are no reasons not to enable it; maybe there is a good
reason, but I don't understand it. I also understand that there are
many other unsolved bugs, but this one seems to have a ready and rather
simple solution.
I am only writing to raise the issue. If you know the answer, please
comment on the bug page.
Thanks in advance.
--
Amir Elisha Aharoni
heb: http://haharoni.wordpress.com | eng: http://aharoni.wordpress.com
cat: http://aprenent.wordpress.com | rus: http://amire80.livejournal.com
"We're living in pieces,
I want to live in peace." - T. Moore
Hello to all!
I'm a French student and I am participating in the Google Summer of
Code this year, working on MediaWiki!
My mentor is Roan Kattouw (Catrope) and my subject is "Reasonably
efficient interwiki transclusion". You can see my application page
here: [1].
I have already discussed the project with my mentor, and together we
have prepared a draft: [2]. It sums up the current situation and
includes some proposals.
It is now open for comments, so, could you please read it and let me
know about your remarks and suggestions, on this list and/or on the
talk page?
Thanks in advance.
[1] http://www.mediawiki.org/wiki/User:Peter17/GSoc_2010
[2] http://www.mediawiki.org/wiki/User:Peter17/Reasonably_efficient_interwiki_t…
--
Peter Potrowl
http://www.mediawiki.org/wiki/User:Peter17
Dear all,
I've started to develop a simple WYSIWYG editor that could be useful to
Wikipedia. Basically, the editor fetches the wiki code from Wikipedia
and builds the HTML on the client side. Then you can edit the HTML as
you would expect, and when you are done, another script converts the
HTML back to wiki code.
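To illustrate the round-trip idea (the actual editor runs client-side in
JavaScript; this is just a hypothetical Python sketch of the principle,
covering only bold and italics):

```python
import re

def wiki_to_html(text):
    # '''bold''' -> <b>bold</b> (must run before the italics rule)
    text = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)
    # ''italic'' -> <i>italic</i>
    text = re.sub(r"''(.+?)''", r"<i>\1</i>", text)
    return text

def html_to_wiki(html):
    # The inverse mapping, so an edited fragment can be saved back
    html = re.sub(r"<b>(.+?)</b>", r"'''\1'''", html)
    html = re.sub(r"<i>(.+?)</i>", r"''\1''", html)
    return html

# The round trip should be lossless for the markup we support
sample = "Open innovation is '''collaborative''' and ''open''."
assert html_to_wiki(wiki_to_html(sample)) == sample
```

The real challenge, of course, is making the inverse conversion lossless
for the full syntax (templates, tables, references), not just for simple
inline markup.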
There is a simple demo here:
http://www.corefarm.com:8080/wysiwyg?article=Open_innovation . You can try
other pages from http://www.corefarm.com:8080/ (type the article name).
It's far from really usable now, but do you think such a tool would be
useful? The global structure is OK and most of the buttons are working
(even if there are no proper icons yet to show what they actually do);
it's just a matter of filling the gaps and supporting all of the
Wikipedia syntax.
Your comments are welcome!
All the best,
William
Hi all,
For those who don't know me, I'm one of the GSOC students this year.
My mentor is ^demon, and my project is to enhance support for metadata
in uploaded files. Similar to the recent thread on interwiki
transclusion, I thought I'd ask for comments on what I propose
to do.
Currently, metadata is stored in the img_metadata field of the image
table as a serialized PHP array. While this works fine for the primary
use case - listing the metadata in a little box on the image
description page - it's not very flexible. It's impossible to run
queries such as getting a list of images with some specific metadata
property equal to some specific value, or getting a list of images
ordered by which software edited them.
So as part of my project I would like to move the metadata to its own
table. However, I think the structure of the table will need to be a
little more complicated than just <page id>, <name>, <value> triples,
since ideally it would be able to store XMP metadata, which can
contain nested structures. XMP is pretty much the most complex
metadata format currently popular (for metadata stored inside images,
anyway), and can represent pretty much all other types of metadata.
It's also the only format that can store multilingual content, which
is a definite plus, as those Commons folks love their languages. Thus
I think it would be wise to make the table store information in a
manner that is rather close to the XMP data model.
So basically my proposed metadata table looks like:
*meta_id - primary key, auto-incrementing integer
*meta_page - foreign key for page_id - what image is this for
*meta_type - type of entry - simple value or some sort of compound
structure. XMP supports ordered/unordered lists, associative-array-like
structures, and alternate arrays (things like arrays listing the
value of the property in different languages).
*meta_schema - XMP uses different namespaces to prevent name
collisions: EXIF properties have their own namespace, IPTC properties
have their own namespace, etc.
*meta_name - The name of the property
*meta_value - the value of the property (or null for some compound
things, see below)
*meta_ref - a reference to a meta_id of a different row for nested
structures, or null if not applicable (or 0 perhaps)
*meta_qualifies - boolean to denote if this property is a qualifier
(in XMP there are normal properties and qualifiers)
(see http://www.mediawiki.org/wiki/User:Bawolff/metadata_table for a
longer explanation of the table structure)
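To make the proposal concrete, here is a sketch of the table populated
with a simple property and a nested multi-language title. SQLite and the
exact column types are for demonstration only (production would be
MySQL), and the sample values are made up:

```python
import sqlite3

# Sketch of the proposed metadata table. Column names are from the
# proposal above; types are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE image_metadata (
        meta_id        INTEGER PRIMARY KEY AUTOINCREMENT,
        meta_page      INTEGER NOT NULL,  -- page_id of the image
        meta_type      TEXT    NOT NULL,  -- 'simple', 'seq', 'bag', 'alt', ...
        meta_schema    TEXT    NOT NULL,  -- XMP namespace (exif, dc, ...)
        meta_name      TEXT    NOT NULL,  -- property name
        meta_value     TEXT,              -- NULL for compound entries
        meta_ref       INTEGER,           -- parent meta_id for nested values
        meta_qualifies INTEGER NOT NULL DEFAULT 0
    )
""")

# A simple EXIF property at the root of the tree...
conn.execute("INSERT INTO image_metadata VALUES "
             "(1, 42, 'simple', 'exif', 'DateTimeOriginal', "
             "'2010:05:01 12:00:00', NULL, 0)")
# ...and a compound 'alt' entry (multi-language title) whose child row
# points back at it via meta_ref.
conn.execute("INSERT INTO image_metadata VALUES "
             "(2, 42, 'alt', 'dc', 'title', NULL, NULL, 0)")
conn.execute("INSERT INTO image_metadata VALUES "
             "(3, 42, 'simple', 'dc', 'x-default', 'A title', 2, 0)")
conn.commit()
```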
Now, before everyone says "eww, nested structures in a db are
inefficient" and whatnot, I don't think it's that bad (however, I'm new
to the whole scalability thing, so hopefully someone more
knowledgeable than me will confirm or deny that).
The XMP specification explicitly says that there is no artificial
limit on nesting depth; however, in general practice it's not nested
very deeply. Furthermore, in most cases the tree structure can be
safely ignored. Consider:
*Use case 1 (primary use case): displaying a metadata info box on an
image page. Most of the time that'd be translating specific names and
values into HTML table cells. The tree structure is totally
unnecessary. For example, the EXIF property DateTimeOriginal can only
appear once per image (it can also only appear at the root of the tree
structure, but that's beside the point). There is no need to
reconstruct the tree; just look through all the props for the one you
need. If the tree structure is important, it can be reconstructed on
the PHP side, and it would typically be only the part of the tree that
is relevant, not the entire nested structure.
*Use case 2 (secondary use case): get a list of images ordered by some
property starting at foo, or get a list of images where property bar =
baz. In this case it's a simple select. It does not matter where in the
tree structure the property is.
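Both use cases reduce to flat selects with no self-joins. A sketch (a
cut-down, hypothetical version of the table with made-up sample data;
SQLite for illustration only):

```python
import sqlite3

# Minimal stand-in for the proposed table: two images, each with a
# DateTimeOriginal and a Software property.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE image_metadata (
    meta_page INTEGER, meta_schema TEXT,
    meta_name TEXT, meta_value TEXT)""")
rows = [
    (1, 'exif', 'DateTimeOriginal', '2009:01:01 09:00:00'),
    (1, 'exif', 'Software', 'GIMP'),
    (2, 'exif', 'DateTimeOriginal', '2010:05:01 12:00:00'),
    (2, 'exif', 'Software', 'Photoshop'),
]
conn.executemany("INSERT INTO image_metadata VALUES (?,?,?,?)", rows)

# Use case 1: all properties for one image's info box - one flat scan,
# no tree reconstruction needed.
props = conn.execute(
    "SELECT meta_name, meta_value FROM image_metadata "
    "WHERE meta_page = ?", (1,)).fetchall()

# Use case 2: images ordered by a property, or filtered by a value -
# again a single flat select.
ordered = conn.execute(
    "SELECT meta_page FROM image_metadata "
    "WHERE meta_name = 'DateTimeOriginal' ORDER BY meta_value").fetchall()
edited_by_gimp = conn.execute(
    "SELECT meta_page FROM image_metadata "
    "WHERE meta_name = 'Software' AND meta_value = 'GIMP'").fetchall()
```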
Thus, all the nestedness of XMP is preserved (so we could re-output it
in XMP form if we so desired), and there is no evil joining of the
metadata table with itself over and over again (or at all) - from what
I understand, self-joining to reconstruct nested structures is what
makes them inefficient in databases.
I also think this schema would be future-proof because it can store
pretty much all metadata we can think of. We can also extend it with
custom properties we make up that are guaranteed not to conflict with
anything (the X in XMP is for extensible).
As a side note, based on my rather informal survey of Commons (aka the
couple of people who happened to be on #wikimedia-commons at that
moment), another use case people think would be cool and useful is
metadata intersections and metadata-category intersections. I'm not planning
to do this as part of my project, as I believe that would have
performance issues. However doing a metadata table like this does
leave the possibility open for people to do such intersection things
on the toolserver or in a DPL-like extension.
I'd love to get some feedback on this. Is this a reasonable approach
for me to take?
Thanks for reading.
--
-bawolff
The Nuke extension doesn't work with Postgres
(https://bugzilla.wikimedia.org/show_bug.cgi?id=23600). Is there a
revision that contains a version that does? Right now (for 1.13.2), the
snapshot returned by the Nuke extension page is r37906. This produces
the error given in the bug ticket.
Regards,
--
-- Dan Nessett
Hi everyone,
One thing we're struggling with right now is getting a chunk of the
Flagged Revs UI to look right. None of us working on Flagged Revs right
now are CSS gurus, and the people at the Wikimedia Foundation who are
really good with CSS are buried in other work, so we could really use
some help.
Specifically, we're trying to get the "[review pending revisions]"
link, with the little lock icon beside it, to look right in a
cross-browser and cross-skin fashion. A couple of the problems we're
seeing:
1. In Vector, the placement of the text can be too high or too low,
depending on the browser in use
2. In Monobook, the problem is even worse. For example, in Chrome on
Linux, the text hovers way up above the article, covering up the "My
contributions" link.
You can see all of this in action here:
http://flaggedrevs.labs.wikimedia.org/wiki/Backmasking
...and there are screenshots of the problem here:
http://www.pivotaltracker.com/story/show/2937207
Is there anyone here who can look at the CSS and offer up a better version
of what's there?
Rob