From looking at the DB schema, I cannot find an efficient way of getting the
list of null revisions, or the opposite (the list of non-null revisions), with
LIMIT paging (for a custom API). When I GROUP, then ORDER and LIMIT, it behaves
inefficiently.
It seems that I would have to use a very inefficient GROUP BY rev_text_id
(MySQL also does not offer FIRST / LAST aggregate functions), and there
is no index on rev_text_id by default :-( I wish there was a field like
rev_minor_edit but for detecting null revisions, such as those
generated by XML import / export. They confuse the logic of my wiki
synchronization script. However, even if I were able to persuade someone to
include such a feature in the schema, 1.15, which customers use, was
already released some time ago anyway :-( So is a core patch probably
the only efficient way to solve my problem?
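Roughly, the query I have in mind is the following (an untested sketch against
the 1.15/1.16 schema; it approximates a null revision as a revision that reuses
the rev_text_id of an earlier revision of the same page, and without an index
on rev_text_id it will of course be slow, which is exactly the problem):

// Untested sketch: paged list of candidate null revisions via a self-join
// on (rev_page, rev_text_id).
$dbr = wfGetDB( DB_SLAVE );
$rev = $dbr->tableName( 'revision' );
$sql = "SELECT DISTINCT r1.rev_id, r1.rev_page
        FROM $rev AS r1
        JOIN $rev AS r2
          ON r1.rev_page = r2.rev_page
         AND r1.rev_text_id = r2.rev_text_id
         AND r1.rev_id > r2.rev_id
        ORDER BY r1.rev_id
        LIMIT 50 OFFSET 0";
$res = $dbr->query( $sql, __METHOD__ );
while ( $row = $dbr->fetchObject( $res ) ) {
    // $row->rev_id is a likely null revision
}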
It's been a while since I've updated the notes from our test framework
meetings, so I just did so today:
The meeting earlier today is here:
Not a lot of context in those, so I'll provide a summary. Markus
Glaser has been doing a lot of work over the past month getting the
Selenium framework in shape for adding new tests. He also documented
what parameters are necessary to run Selenium tests on our test grid,
which gives everyone access to many different browsers to test against.
Concurrently with that, Nadeesha and Jinseh at Calcey Technologies
have been ramping up on the framework, and have a number of tests that
Nadeesha will be committing to trunk soon.
Our conversation today was brief, and mainly a mundane run-through of
action items. One topic we did drift into was installer testing, after
realizing that it is a weak spot in our coverage right now (many people
refreshing their installs from trunk don't run the installer every time
they refresh, and the installer is one of the big features for the next
release of MediaWiki). The framework currently isn't well suited to testing
before the database and everything else are set up, so the folks at Calcey
are going to spend some time thinking about that.
Since we don't have a manual testing plan for the installer, I've put
a stub here:
...and I've asked Calcey to flesh it out. The idea is that once we
all agree on what makes sense to test at all (manually, automatically, or
otherwise), then we can talk about what makes sense to automate.
The installer testing plan is something we cooked up just today, so we haven't
even run it past Chad yet, for example (/me waves at Chad).
If you'd like to participate in the meetings, let me know. Our IRC
meetings obviously require no RSVP (next one is next week, December 9
at 8am PST on #mediawiki), but our voice meetings we'd like you to
RSVP for, since they're still kind of a pain to get going (next one is
the week after next, December 16 at 8am PST).
Nope, committers can continue doing what they do.
You should only notice a change if you try to commit
broken code :)
On Nov 30, 2010 10:34 AM, "Krinkle" <krinklemail(a)gmail.com> wrote:
Does this require anything from the committers if/when this is enabled?
Do I need to install something or run a command in addition to, or
instead of, 'svn commit -m "" '?
Sounds nice as an additional check :)
I am interested in developing an extension for handling molecular files
(files containing information about chemical molecules: atoms, bonds, etc.).
If I understand correctly, it will enable me to display specific information
on the File:... page, like what MediaWiki does for simple images.
Something like existing extensions (FlvHandler, OggHandler,
PagedTiffHandler, PNGHandler, TimedMediaHandler in SVN trunk for example).
I have read several pages on MediaWiki about writing extensions, but they
are not very detailed for media handler extensions.
I have also written an extension to display and interact with molecules,
but I still have several questions on how I can create a handler for
molecular files in MediaWiki.
Any help or links to some explanations will be appreciated.
Molecular files exist in several formats: pdb, cif, mol, xyz, cml, ...
Usually they are detected by MediaWiki as generic MIME types (either text/plain
or application/xml) and not as more precise types, even though such
types exist: chemical/x-pdb, chemical/x-xyz, ...
It seems that to register a media handler, I have to add an entry to
$wgMediaHandlers: $wgMediaHandlers['text/plain'] = 'MolecularHandler';
Will it be a problem to use such a general MIME type to register the handler,
especially for files of the same MIME type that are not molecular files?
Are there any precautions to take into account (like letting another
handler deal with the file if it's not a molecular file, ...)?
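For what it's worth, here is the kind of registration I am imagining (untested
LocalSettings.php sketch; the class name, file layout and chemical/* mapping
are just placeholders, and registering the chemical types instead of text/plain
assumes MediaWiki's MIME detection can be taught about them, e.g. via custom
$wgMimeTypeFile / $wgMimeInfoFile files):

// Untested sketch; 'MolecularHandler' and the extension path are made up.
$wgAutoloadClasses['MolecularHandler'] = "$IP/extensions/Molecular/MolecularHandler.php";

// Allow the raw molecular file extensions to be uploaded at all.
$wgFileExtensions = array_merge( $wgFileExtensions,
    array( 'pdb', 'cif', 'mol', 'xyz', 'cml' ) );

// Map the more precise chemical MIME types to the handler, rather than
// claiming all of text/plain.
foreach ( array( 'chemical/x-pdb', 'chemical/x-xyz', 'chemical/x-cml' ) as $type ) {
    $wgMediaHandlers[$type] = 'MolecularHandler';
}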
I want to use the Jmol <http://www.jmol.org/> applet for displaying the
molecule in 3D and to allow the user to manipulate it.
But the applet is about 1 MB in size, so it takes time to download the first
time, then to start and load the molecular file.
I would like to start by showing a still image (generated on the server) and a
button so that interested users can decide when to load the applet.
Several questions for doing this with MediaWiki (a rough sketch of what I have
in mind follows the list):
- What hook / event should I use to be able to add this content to the
File:... page?
- Is there a way to start displaying the File:... page, compute the still
image in the background, and add it to the File:... page afterwards?
- Are there any good practices for doing this kind of thing?
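To make the still-image-plus-button idea concrete, here is a rough, untested
sketch of the File:... page part (the ImagePageAfterImageLinks hook name is
from my reading of docs/hooks.txt, so please double-check it against your
MediaWiki version; efMolecularAddAppletButton is a made-up function name):

// Untested sketch: add a button under the links section of the File:... page.
// Actually loading the Jmol applet would be done by some JavaScript bound to
// the button (not shown here).
$wgHooks['ImagePageAfterImageLinks'][] = 'efMolecularAddAppletButton';

function efMolecularAddAppletButton( $imagePage, &$html ) {
    $file = $imagePage->getDisplayedFile();
    if ( $file && $file->getHandler() instanceof MolecularHandler ) {
        $html .= Xml::element( 'button',
            array( 'class' => 'molecular-load-applet' ),
            'Load interactive 3D view (Jmol)' );
    }
    return true;
}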
Is it also possible to create thumbnails in articles if they include links
to a molecular file (like [[File:example.pdb]])?
What hook should I use?
Is it possible to compute the thumbnail in the background?
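As far as I understand the media handler architecture, no extra parser hook
should be needed for the inline [[File:...]] case: once the handler is
registered, thumbnailing goes through its doTransform() method. Something
along these lines, maybe (untested and incomplete; the other required
MediaHandler methods such as getParamMap() and getImageSize() are omitted,
renderStillImage() is a hypothetical helper that would call an external
renderer on the server, and the ThumbnailImage constructor arguments are as I
read them in 1.16-era core):

class MolecularHandler extends MediaHandler {
    // Untested, partial sketch; several abstract MediaHandler methods still
    // need to be implemented for this to work at all.
    function canRender( $file ) {
        return true; // could also sniff the file contents here
    }

    function doTransform( $file, $dstPath, $dstUrl, $params, $flags = 0 ) {
        $width = isset( $params['width'] ) ? $params['width'] : 180;
        $height = isset( $params['height'] ) ? $params['height'] : 180;
        // renderStillImage() is hypothetical: it would shell out to Jmol,
        // Open Babel or similar to produce a PNG at $dstPath.
        if ( !$this->renderStillImage( $file, $dstPath, $width, $height ) ) {
            return new MediaTransformError( 'thumbnail_error',
                $width, $height, 'could not render still image' );
        }
        return new ThumbnailImage( $file, $dstUrl, $width, $height, $dstPath );
    }
}

As far as I can tell this does not give you background / deferred rendering for
free: the transform runs when the page is parsed, so the "compute it later"
part would probably need its own job-queue-style approach.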
Any other advice for writing a media handler extension?
Or other possibilities that could enhance the extension?
Among the few handler extensions in SVN, which is the best example?
Thanks for any help.
Guy Chapman requested that I post to the mailing list to ask how we can proceed with getting a copy of Wikipedia so that we can offer it as a database in our free search service, in response to the request in the following paragraph. He made me aware of its size, but that is not an issue. I would like to obtain a copy and then establish a routine for automated, synced downloads, as we do for the other databases in our system.
I have had several requests to add Wikipedia to our eTBLAST text similarity search engine. This is to improve reference finding as well as novelty assessment. Our search tool is widely used, widely published and is free. Please see etblast.org or http://en.wikipedia.org/wiki/ETBLAST. I would like to create a searchable copy of Wikipedia locally with links back to Wikipedia for hits, and of course acknowledge Wikimedia. We do this for several open text datasets and are prepared to keep a local, synced copy of Wikipedia, if you are interested. I am certain that our mutual users would like and benefit from our working together.
Cheers, and thank you,
----- Original Message -----
From: "Wikipedia information team" <info-en(a)wikimedia.org>
To: "Skip Garner" <garner(a)vbi.vt.edu>
Cc: "Dominik L. Borkowski" <dom(a)vbi.vt.edu>, "Johnny Sun" <szhaohui(a)vbi.vt.edu>
Sent: Wednesday, December 1, 2010 9:43:25 AM
Subject: Re: [Ticket#2010112810016598] I would like to provide a different search engine for Wikimedia
Dear Skip Garner,
Thank you for your email. Our response follows your message.
11/29/2010 16:23 - Skip Garner wrote:
> Thank you for the information. I would like to move forward on this, for I
> think it will be of mutual value. The size of the database is not an issue,
> and we are always expanding our storage and serving capabilities. We
> regularly work with data in the hundreds of terabytes. One issue would be
> getting the first copy, but we could probably handle that by FedEx.
> Can you tell me how we can proceed?
The best bet is probably to email the wikitech mailing list, which is where the
devs hang out.
They will have the best idea of the practicalities.
Wikipedia - http://en.wikipedia.org
Disclaimer: all mail to this address is answered by volunteers, and responses are
not to be considered an official statement of the Wikimedia Foundation. For
official correspondence, please contact the Wikimedia Foundation by certified mail
at the address listed on http://www.wikimediafoundation.org
Harold "Skip" Garner
Virginia Bioinformatics Institute
Washington Street (0477)
Blacksburg, VA 24061
Assistant: Renee Nester
> Date: Wed, 24 Nov 2010 15:46:24 -0800
> From: Erik Moeller <erik(a)wikimedia.org>
> Subject: Re: [Wikitech-l] Commons ZIP file upload for admins
> To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
> [Kicking this thread back to life, full-quoting below only for quick reference.]
> I've collected some additional notes on this here:
> Would appreciate feedback & will circulate further in the Commons community.
Personally I think it would be nicer if you could associate source
files with the final files.
* User uploads a JPEG of a 3D image (or whatever).
* On the image description page for the JPEG, there is an "upload source
file" link.
* Users (who have the appropriate permissions) can upload the associated
source files with this link.
* These source files might appear as a subpage of the primary
image/document/media, or they might just appear in list form at the
bottom of the image description page of the main image/media. Either
way, the source files would be associated with a single "main" file.
Doing it this way would limit the feature to source files of actually
uploaded files (so less random cruft lying around, no orphaned source
files, and less chance of people abusing the feature to get around file
type restrictions). I also personally don't like the idea of uploading
archives; instead I think it would be better just to upload all the
source files individually (although that might fall apart if you're
uploading source files for something very complex that has many
source files in a specific directory structure). There could also be a
"download all" option where all the source files get tarred together on
the server side for an easy download.
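Just to illustrate the "download all" idea (none of this exists in MediaWiki,
and the function name is made up), the server-side bundling could be as simple
as something like this, using PHP's Phar extension:

// Purely illustrative, untested: bundle a main file's associated source files
// into a single tar for download. $sourcePaths maps the name to use inside
// the archive to the path of the stored file on disk.
function bundleSourceFiles( array $sourcePaths, $tarPath ) {
    $tar = new PharData( $tarPath ); // e.g. '/tmp/Example.jpg-sources.tar'
    foreach ( $sourcePaths as $localName => $path ) {
        $tar->addFile( $path, $localName );
    }
    return $tarPath;
}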