Hello,
While working on the xmldumps-backup test suite, I came across ExternalStorageHttp.
While it's easy to set up (read-only), testing it is rather a burden.
Is ExternalStorageHttp actually used for any MediaWiki installation at Wikimedia?
Kind regards,
Christian
--
---- quelltextlich e.U. ---- \\ ---- Christian Aistleitner ----
Companies' registry: 360296y in Linz
Christian Aistleitner
Gruendbergstrasze 65a Email: christian(a)quelltextlich.at
4040 Linz, Austria Phone: +43 732 / 26 95 63
Fax: +43 732 / 26 95 63
Homepage: http://quelltextlich.at/
---------------------------------------------------------------
Hello,
I am currently developing a test suite for the XML dumps, and I am curious about the specification of text.old_flags in MediaWiki's maintenance/tables.sql.
The file describes the 'object' flag as
text field contained a serialized PHP object.
object either contains multiple versions compressed
to achieve a better compression ratio, or it refers
to another row where the text can be found.
Is the "multiple versions" part still used in any project?
If so, how should this be set up [1]?
Kind regards,
Christian
P.S.: In #wikimedia-dev I was told to bring this question up on this list. If there are other lists where I should ask, please let me know.
[1] Before r6138 (back then still in Article.php, not Revision.php), the text seems to have been obtained by

    $object = unserialize( $text );
    $text = $object->getItem( $hash );

There it is fairly obvious how a single object can return different texts. However, beginning with r6138, the text seems to be fetched simply by

    $obj = unserialize( $text );
    [...]
    $text = $obj->getText();

If a single object is supposed to return different texts, how does it determine which text to return?
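For what it's worth, here is my current mental model as a Python sketch (the class and method names are hypothetical stand-ins, not the real ones from MediaWiki's source): the multi-version blob keys its texts by hash, and each revision's text row stores a small stub object that remembers which hash it points at, so getText() no longer needs an argument.

```python
import hashlib

class ConcatBlob:
    """Hypothetical stand-in for a multi-revision history blob."""
    def __init__(self):
        self.items = {}

    def add_item(self, text):
        h = hashlib.md5(text.encode("utf-8")).hexdigest()
        self.items[h] = text
        return h

    def get_item(self, h):
        # pre-r6138 style: the caller supplies the hash
        return self.items[h]

class Stub:
    """Hypothetical per-revision pointer into a shared blob."""
    def __init__(self, blob, h):
        self.blob = blob
        self.hash = h

    def get_text(self):
        # post-r6138 style: getText() takes no argument, because the
        # stub itself remembers which item it points at
        return self.blob.get_item(self.hash)

blob = ConcatBlob()
h1 = blob.add_item("first revision")
h2 = blob.add_item("second revision")
print(Stub(blob, h1).get_text())  # first revision
print(Stub(blob, h2).get_text())  # second revision
```

If that model is right, the "single object" in the new code is the per-revision stub rather than the shared blob, which would resolve my confusion; corrections welcome.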
Good morning campers! :-)
At the POTY collection link http://dumps.wikimedia.org/other/poty/
you'll notice that the 2009 files have been added.
For people who have wanted older (2002 through 2006) dumps, one or two
dumps of the projects for most of that period are now online at our new
archive link: http://dumps.wikimedia.org/archive/
I'd love to get a couple dumps of each of the projects for the years
2005, 2007 and 2008. If you have an old mirror on a hard drive
gathering dust in your closet, give me a shout. Thanks!
Hmm, and one other thing while I'm at it, I've been cleaning up our
dumps documentation on wikitech, and there's now a rough outline of the
dumps history here: http://wikitech.wikimedia.org/view/Dumps/History
If people remember any milestones that should be added there, or if you
see any glaring errors, please edit, or send me mail with the
corrections.
Ariel
On Fri, Nov 11, 2011 at 11:18 PM, emijrp <emijrp(a)gmail.com> wrote:
> Forwarding...
>
> ---------- Forwarded message ----------
> From: emijrp <emijrp(a)gmail.com>
> Date: 2011/11/11
> Subject: Old English Wikipedia image dump from 2005
> To: wikiteam-discuss(a)googlegroups.com
>
>
> Hi all;
>
> I want to share with you this Archive Team link[1]. It is an old English
> Wikipedia image dump from 2005. One of the last ones, probably, before
> Wikimedia Foundation stopped publishing image dumps. Enjoy.
>
> Regards,
> emijrp
>
> [1] http://www.archive.org/details/wikimedia-image-dump-2005-11
People interested in image dumps may also be interested in my post about
the GFDL requirements, which I think mean that images need to be
included in the dumps.
https://meta.wikimedia.org/w/index.php?title=Talk:Terms_of_use&diff=prev&ol…
excerpt:
"..the [GFDL] license requires that someone can download a
''complete'' Transparent copy for one year after the last Opaque copy
is distributed. As a result, I believe the BoT needs to ensure that
the dumps are available ''and'' that they can be available for one
year after WMF turns off the lights on the core servers (it allows
'agents' to provide this service). As Wikipedia contains images, the
images are required to be included. .."
discussion continues ..
https://meta.wikimedia.org/wiki/Talk:Terms_of_use#Right_to_Fork
--
John Vandenberg
Hello!
Taking "Дом" from the Russian Wiktionary as an example, the templates are not
rendered (they are not found by MediaWiki either, as the red links show):
Морфологические и синтаксические свойства (Morphological and syntactic properties)
Шаблон:сущ ru m ina
1c(1)<http://localhost/mediawiki-1.18.0/index.php?title=%D0%A8%D0%B0%D0%B1%D0%BB%…>
Шаблон:морфо<http://localhost/mediawiki-1.18.0/index.php?title=%D0%A8%D0%B0%D0%B1%D0%BB%…>
Let's take the template "сущ ru m ina
1c(1)" <http://localhost/mediawiki-1.18.0/index.php?title=%D0%A8%D0%B0%D0%B1%D0%BB%…>
as an example.
I have some questions:
- in the SQL generated by mwdumper, the following command does not return
any result:
grep "'сущ ru m ina 1c(1)" ruwiktionary.sql
- as a consequence, the string is not to be found in the generated DB either.
More generally:
- Is there some special option for mwdumper, or a special dump to import, to
also get the templates?
- Is there any way to log the requests sent by MediaWiki to the DB?
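For the last question, if I remember correctly MediaWiki can log its SQL queries itself via LocalSettings.php; roughly (a sketch from memory, the log file path is just an example):

```php
# in LocalSettings.php
$wgDebugLogFile = "/tmp/mw-debug.log"; # path is just an example
$wgDebugDumpSql = true;                # include every SQL query in the log
```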
Thanks a lot in advance
Sebastien
On 9 January 2012 22:05, Platonides <platonides(a)gmail.com> wrote:
> On 08/01/12 23:03, Sébastien Druon wrote:
>
>> It seems that mwdumper did not import the templates (select * from page
>> where page_title like 'Шаблон:%' does not return anything), though they
>> are present in the xml dump.
>> Is there some special option to use?
>>
>
> The pages aren't stored with the namespaces name as a literal, but with
> the namespace number.
> Try SELECT page_title FROM page WHERE page_namespace=10 LIMIT 5;
>
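To sketch what that means in practice (Python, with a hypothetical helper function): pages are stored with the namespace as a number (10 is the built-in number of the template namespace) and with the title in DB-key form, i.e. spaces replaced by underscores, so the prefixed title has to be split and normalized before searching.

```python
def to_db_key(title, ns_prefix="Шаблон:", ns_number=10):
    """Hypothetical helper: split a prefixed title into
    (namespace number, DB key).  MediaWiki stores page_namespace
    as a number and page_title with underscores for spaces."""
    assert title.startswith(ns_prefix)
    db_key = title[len(ns_prefix):].replace(" ", "_")
    return ns_number, db_key

ns, key = to_db_key("Шаблон:сущ ru m ina 1c(1)")
# so the lookup would be something like:
#   SELECT * FROM page
#   WHERE page_namespace=10 AND page_title='сущ_ru_m_ina_1c(1)';
```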
by WOHLFAHRT Roland (Gemeindeinformatikzentrum Kärnten GI Z-K GmbH)
Hi everybody
I have a migration issue and tried your maintenance script rebuildImages.php.
See http://meta.wikimedia.org/wiki/Talk:Data_dumps#rebuildImages.php_fails
Environment:
- Windows 2008 r2 with xampp
- Mediawiki 1.18.0
I did the migration this way because I wanted a "fresh" mediawiki installation.
I found no good place to ask for support, because the script has no article on mediawiki.org.
Perhaps you can give me a small hint or a workaround to get my wiki up and running. Thanks a lot.
Cheers,
Roland
Hello,
How is it possible to get the list of all the entries (words) of a
Wiktionary?
For example, for the Russian Wiktionary, I want to get the list of all the
Russian entries (no other languages).
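To show the direction I was considering (a sketch that only builds the request URL for the MediaWiki API's list=allpages; the actual fetching, looping over continuation, and filtering to Russian-only entries is exactly the part I am unsure about):

```python
from urllib.parse import urlencode

def allpages_url(api="https://ru.wiktionary.org/w/api.php", apcontinue=None):
    # list=allpages walks every page in the main namespace; repeat the
    # request with the apcontinue value from the previous response
    # until the response no longer contains one.
    params = {
        "action": "query",
        "list": "allpages",
        "apnamespace": 0,   # main namespace, where the entries live
        "aplimit": 500,
        "format": "json",
    }
    if apcontinue:
        params["apcontinue"] = apcontinue
    return api + "?" + urlencode(params)

print(allpages_url())
```

Would that give me all entries, and is there then a way to keep only the Russian ones?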
Thanks a lot in advance,
Sebastien
Hi!
What is the best/easiest way to get a parsed version (including template
expansion) of all entries of a Wiktionary (for example, a separate HTML
file for each entry)?
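To make concrete what I mean by "parsed version", here is a sketch that builds a request URL for the MediaWiki API's action=parse, which (as far as I understand) returns the HTML of one page with templates expanded by the live wiki; doing this for every entry is what I would like advice on:

```python
from urllib.parse import urlencode

def parse_url(title, api="https://ru.wiktionary.org/w/api.php"):
    # action=parse renders a single page to HTML, with templates
    # expanded server-side; prop=text asks for just the HTML body
    params = {
        "action": "parse",
        "page": title,
        "prop": "text",
        "format": "json",
    }
    return api + "?" + urlencode(params)

print(parse_url("Дом"))
```

Is there a better route than one API request per entry, e.g. some pre-rendered dump?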
Thanks in advance
Sebastien