That's how it always starts.
"Oh I'll just use this one piece." Then "Ohhh this looks cool..."
Before you know it, you're using the whole framework and kicking yourself.
Slippery slope? Sure, but I don't wanna go sliding down.
On Dec 21, 2010 11:18 AM, "Soxred93" <soxred93(a)gmail.com> wrote:
I'm not looking to integrate these frameworks entirely into MediaWiki; I'm
just talking about this one single file with one single class (to be fair,
it's three classes, but they're all in that one file).
On Dec 21, 2010, at 9:09 AM, Chad wrote:
> I hate these frameworks, so a big -1 from me.
I've been editing http://www.mediawiki.org/wiki/Unit_Testing (and am
happy for feedback and suggestions).
While editing, I took a look at the support files in tests/phpunit
and have some questions (along with a patch to fix a few problems I found):
* Do we need the install target? Given that the installation script
seems slightly broken, I'd guess that it is not used often. I've
removed the target. We should also remove the supporting file
* The path for the coverage target was broken by r78383. It's fixed in the patch.
* In targets noparser, safe and databaseless, do we need to exclude
group Broken? It is already excluded in suite.xml. I can see an
argument for always excluding Broken, so that people don't accidentally
run broken tests when they supply their own XML configuration file
with the CONFIG_FILE option. However, if you're supplying your own
config file, then you should know what you're doing.
* Also, I've done some light copy editing.
I've done a light edit on tests/phpunit/README. In particular, I've
removed the recommendation to use the system packaging tools to
install PHPUnit. PEAR works quite well in my experience – following
the installation instructions in the PHPUnit manual will help ensure
that people are running current versions of PHPUnit.
I also fixed the path in docs/code-coverage (which was broken by r78383).
Zak Greant (Wikimedia Foundation Contractor)
MediaWiki Activity Log at http://twitter.com/#!/zwmf
Plans, reports + more at http://mediawiki.org/wiki/User:Zakgreant
Want to talk about the MediaWiki developer docs?
Catch me on irc://irc.freenode.net#mediawiki
I would like to initiate a discussion about how to reduce the time required to generate dump files. A while ago Emmanuel Engelhart opened a bug report suggesting that we parallelize this feature, and I would like to go through the available options and hopefully determine a course of action.
The current process is straightforward and sequential (as far as I know): it reads table by table and row by row and stores the output. The drawbacks of this process are that it takes more and more time to generate a dump as the different projects continue to grow, and that when the process halts or is interrupted it needs to start all over again.
I believe that there are two approaches to parallelizing the export dump:
1) Launch multiple PHP processes that each take care of a particular range of IDs (a rough sketch of this follows below). This might not be called true parallelization, but it achieves the same goal. The reason for this approach is that PHP has very limited (maybe no) support for parallelization / multiprocessing; the only thing PHP can do is fork a process (I might be incorrect about this).
2) Use a different language with built-in support for multiprocessing, like Java or Python. I am not intending to start a heated debate, but I think this is an option that should at least be on the table and be discussed. Obviously, an important reason not to do it is that it's a different language. I am not sure how integral the export functionality is to MediaWiki, and if it is, then this is a dead end.
However, if the export functionality is primarily used by Wikimedia and nobody else, then we might consider a different language. Or we could make a standalone app that is not part of MediaWiki and is used only internally by Wikimedia.
If I am missing other approaches or solutions, please chime in.
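To make this more concrete, here is a rough, untested sketch (in Python, since it was mentioned above) of what a driver for approach 1 could look like: split the page-id space into ranges and run one dumpBackup.php process per range. The script path and the --start/--end/--output arguments are assumptions about the maintenance script, and the partial files would still need to be stitched together afterwards.

#!/usr/bin/env python
# Rough sketch only: shard the page-id space across N worker processes,
# each running MediaWiki's dumpBackup.php for its own range.
# Assumes dumpBackup.php accepts --start/--end page-id bounds and an
# --output sink; adjust to whatever the script actually supports.
import subprocess
from multiprocessing import Pool

PHP = "php"
DUMP_SCRIPT = "maintenance/dumpBackup.php"  # path inside the MediaWiki checkout
MAX_PAGE_ID = 1000000                       # look this up in the database first
WORKERS = 4

def dump_range(bounds):
    start, end = bounds
    out = "pages-%08d-%08d.xml.gz" % (start, end)
    subprocess.check_call([PHP, DUMP_SCRIPT, "--current",
                           "--start=%d" % start, "--end=%d" % end,
                           "--output=gzip:%s" % out])
    return out

if __name__ == "__main__":
    step = MAX_PAGE_ID // WORKERS + 1
    ranges = [(i, min(i + step, MAX_PAGE_ID + 1))
              for i in range(1, MAX_PAGE_ID + 1, step)]
    for part in Pool(WORKERS).map(dump_range, ranges):
        print("finished %s" % part)

A scheme like this would also help with the restart problem: if one range fails, only that range has to be re-run, not the whole dump.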
I downloaded a dump several months ago.
I accidentally lost the version info for this dump, so I don't know when
it was generated.
Is there any place that lists info about past dumps (such as size, ...)?
I've long been interested in offline tools that make use of Wikimedia
information, particularly the English Wiktionary.
I've recently come across a tool which can provide random access to a
bzip2 archive without decompressing the whole file, and I would like to make use of
it in my tools, but I can't get it to compile and/or function with any
free Windows compiler I have access to. It works fine on the *nix
boxes I have tried but my personal machine is a Windows XP netbook.
The tool is "seek-bzip2" by James Taylor and is available here:
* The free Borland compiler won't compile it due to missing (Unix?) header files
* lcc compiles it but it always fails with error "unexpected EOF"
* mingw compiles it if the -m64 option is removed from the Makefile
but it then has the same behaviour as the lcc build.
My C experience is now quite stale, and my 64-bit programming experience is even more limited.
(I'm also interested in hearing from other people working on offline
tools for dump files, wikitext parsing, or Wiktionary)
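(For reference, Python's standard library can do part of what seek-bzip2 does: seek to a known byte offset where a new bzip2 stream starts and decompress just that stream with bz2.BZ2Decompressor. This only works at stream boundaries, i.e. an archive built as a concatenation of independent bzip2 streams for which you build an index in advance, not at the bit-level block boundaries seek-bzip2 handles. A minimal, untested sketch; the filename and offset are made up for illustration:

# Minimal sketch: decompress one bzip2 stream out of a larger file,
# given the byte offset where that stream begins.
import bz2

def read_stream(path, offset, chunk_size=64 * 1024):
    """Yield decompressed data from the bzip2 stream starting at offset."""
    decomp = bz2.BZ2Decompressor()
    with open(path, "rb") as f:
        f.seek(offset)
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield decomp.decompress(chunk)
            if decomp.unused_data:  # hit the start of the next stream
                break

# Hypothetical usage; the offset would come from a pre-built index.
for piece in read_stream("enwiktionary-pages-articles.xml.bz2", 123456789):
    pass  # feed each piece to the wikitext parser
)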
Andrew Dunbar (hippietrail)
I'd like to make it easier for novice users to create Sign Language
definition pages with videos for en.wiktionary's new "Sign gloss:"
namespace. It's already possible to create such pages, but it requires a
large number of steps, which can deter potential contributors.
I'd like to make a command-line tool or web form. The user would provide:
1. Their name and password*
2. The name of the page
3. The text contents of the page (definition, etymology, etc. as plain text)
4. A video of the sign (and maybe also a video of it in use)
The tool would then automatically:
0a. Check if the page already exists (and stop if it does)
0b. Convert the video to a format appropriate for Commons, if needed
1. Log in as the user
2. Upload the video to Commons
3. Create the page with the desired contents. (A rough sketch of this flow is at the end of this message.)
Is this a good idea?
Is there something like this already that I could use as a basis?
If this were a web form, how would I handle username+password securely?
*: Ideally I'd like to be able to help users who don't yet have accounts
to make accounts, and also somehow automatically handle the
account-linking business between Commons, Wiktionary, etc.
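To give a feel for how little glue code this needs, here is a very rough sketch in Python using the mwclient library (any MediaWiki API client would do). Everything here is an assumption rather than a design: the hostnames, file-naming scheme, and page layout are placeholders, the video-conversion step is skipped, and a real web form would need a safer way of handling credentials than collecting passwords directly.

# Rough sketch only: upload a sign video to Commons and create the
# corresponding "Sign gloss:" page on en.wiktionary with mwclient.
import mwclient

def create_sign_gloss(username, password, title, wikitext, video_path):
    commons = mwclient.Site("commons.wikimedia.org")
    wikt = mwclient.Site("en.wiktionary.org")
    commons.login(username, password)
    wikt.login(username, password)

    page = wikt.pages["Sign gloss:" + title]
    if page.exists:                  # step 0a: stop if the page already exists
        raise ValueError("page already exists: " + title)

    # step 0b (not shown): convert the video to a Commons-friendly format

    # step 2: upload the video to Commons
    video_name = "Sign gloss %s.ogv" % title   # placeholder naming scheme
    with open(video_path, "rb") as f:
        commons.upload(f, video_name,
                       description="Video of the sign for %s" % title)

    # step 3: create the page, embedding the uploaded video
    text = "[[File:%s|thumb]]\n\n%s" % (video_name, wikitext)
    page.save(text, summary="Creating sign gloss page (semi-automated)")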
I am Pravin Satpute. I am working on language technology, and to build a
word-frequency list I require some web pages in Indic languages.
Can I get the most recent dump without en.wiki?
I filed bug 26259, "MediaWiki bloated with test suites":
RobLa asked me to send an e-mail to wikitech-l about this bug. It's my view
that checking out MediaWiki from SVN should not include files that most
users do not need or want. These test suites seem to fit perfectly within that category.
Here you have: http://www.megaupload.com/?f=WRDUHD3E
If it says "temporarily disabled", wait a few minutes and retry.
2010/12/11 Andrew Dunbar <hippytrail(a)gmail.com>
> Thanks, that would be awesome! I don't know megaupload, so give me a URL
> or whatever I need when it's there.
> Andrew Dunbar (hippietrail)
> On 11 December 2010 10:34, emijrp <emijrp(a)gmail.com> wrote:
> > I have this one: mediawikiwiki-20100808-pages-meta-history.xml.7z (37 MB). I
> > can upload it to MegaUpload if needed.
> > 2010/12/6 Andrew Dunbar <hippytrail(a)gmail.com>
> >> Could anybody help me locate a dump of mediawiki.org while the dump
> >> server is broken please? I only need current revisions.
> >> Thanks in advance.
> >> Andrew Dunbar (hippietrail)