Hi everyone,
I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing obvious turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
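For concreteness, a very rough sketch of the sort of conversion I'm after,
assuming pages can be fetched over plain HTTP with action=raw (the page list
and the handful of translation rules below are illustrative only, not a
finished converter):

<?php
// Rough sketch: fetch raw wikitext via action=raw and translate a few
// common constructs to LaTeX. Requires allow_url_fopen.
$base  = 'http://server.bluewatersys.com/w90n740/index.php';
$pages = array( 'Main_Page' ); // placeholder page names

foreach ( $pages as $page ) {
    $wikitext = file_get_contents( "$base?title=$page&action=raw" );

    // Backslashes are doubled once for PHP quoting and once more for
    // preg_replace's replacement syntax.
    $latex = $wikitext;
    // Headings: match the longer "===" form before the shorter "==" form.
    $latex = preg_replace( '/^=== *(.+?) *===$/m', '\\\\subsection{$1}', $latex );
    $latex = preg_replace( '/^== *(.+?) *==$/m',   '\\\\section{$1}',    $latex );
    // Bold/italic: handle ''' before ''.
    $latex = preg_replace( "/'''(.+?)'''/", '\\\\textbf{$1}', $latex );
    $latex = preg_replace( "/''(.+?)''/",   '\\\\textit{$1}', $latex );
    // Internal links: keep only the displayed text.
    $latex = preg_replace( '/\[\[(?:[^|\]]*\|)?([^\]]+)\]\]/', '$1', $latex );

    file_put_contents( "$page.tex", $latex );
}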
Kind Regards,
Hugo Vincent,
Bluewater Systems.
I've been putting placeholder images on a lot of articles on en:wp.
e.g. [[Image:Replace this image male.svg]], which goes to
[[Wikipedia:Fromowner]], which asks people to upload an image if they
own one.
I know it's inspired people to add free content images to articles in
several cases. What I'm interested in is numbers. So what I'd need is
a list of edits where one of the SVGs that redirects to
[[Wikipedia:Fromowner]] is replaced with an image. (Checking which of
those are actually free images can come next.)
Is there a tolerably easy way to get this info from a dump? Any
Wikipedia statistics fans who think this'd be easy?
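Something along these lines might be a workable starting point, assuming a
pages-meta-history dump and checking one placeholder name at a time (the file
name and placeholder are illustrative, and it only flags revisions where the
placeholder disappears; checking what replaced it would be the next pass):

<?php
// Sketch: stream a full-history dump and report edits in which a
// placeholder image name stops appearing in the page text.
$placeholder = 'Replace this image male.svg';       // illustrative
$reader = new XMLReader();
$reader->open( 'enwiki-pages-meta-history.xml' );   // decompressed dump

$title   = '';
$prevHad = false;

while ( $reader->read() ) {
    if ( $reader->nodeType != XMLReader::ELEMENT ) {
        continue;
    }
    if ( $reader->name == 'title' ) {
        $title   = $reader->readString();
        $prevHad = false; // new page: reset per-page state
    } elseif ( $reader->name == 'text' ) {
        $has = strpos( $reader->readString(), $placeholder ) !== false;
        if ( $prevHad && !$has ) {
            echo "Placeholder removed in a revision of $title\n";
        }
        $prevHad = $has;
    }
}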
(If the placeholders do work, then it'd also be useful for convincing some
wikiprojects to encourage the things. Not that there's ownership of
articles on en:wp, of *course* ...)
- d.
Hi,
I'm getting complaints from users of my Widgets and HeaderTabs
extensions' parser functions that their output is mangled with <p> tags.
Looking into the issues, I identified two separate problems, and I'd be very
happy if you could confirm that I'm correct and help me resolve them.
First issue: the output of all parser functions is preceded with "\n\n" in
Parser.php line 2975 (on the current stable 1.12 branch), which purposefully
forces a closing </p><p> combination. This contradicts users' expectation
that the output will actually be inline if they put the parser function
inline. Here's the code:
# Replace raw HTML by a placeholder
# Add a blank line preceding, to prevent it from mucking up
# immediately preceding headings
if ( $isHTML ) {
    $text = "\n\n" . $this->insertStripItem( $text );
}
This is quite distracting since there is no way to work around this in
extensions or page text.
Second issue: if the output of the function has line breaks in it, it gets
peppered with <p> tags, which might not be desirable if the extension is
supposed to preserve HTML structure (e.g. the Widgets extension). I found
instructions on how to avoid this by using unique markers and the
'ParserAfterTidy' hook:
http://www.mediawiki.org/wiki/Manual:Tag_extensions#How_can_I_avoid_modific…
Please let me know whether this is still going to work correctly with the
new parser implementation.
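For reference, the technique described there boils down to something like the
following (a sketch with made-up function and marker names; whether it
survives the new parser work is exactly what I'd like to confirm):

$wgHooks['ParserAfterTidy'][] = 'efMyExtParserAfterTidy';

$egMyExtMarkers = array();

function efMyExtRenderWidget( &$parser, $name = '' ) {
    global $egMyExtMarkers;
    // Return a unique placeholder instead of raw HTML, so the parser
    // has nothing to wrap in <p> tags or reformat.
    $marker = 'START_MYEXT_' . count( $egMyExtMarkers ) . '_END';
    $egMyExtMarkers[$marker] = '<div class="widget">...real HTML...</div>';
    return $marker;
}

function efMyExtParserAfterTidy( &$parser, &$text ) {
    global $egMyExtMarkers;
    // Swap the placeholders back for the real HTML after tidying.
    $text = strtr( $text, $egMyExtMarkers );
    return true;
}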
I'll greatly appreciate your help with the matter.
Thank you,
Sergey
--
Sergey Chernyshev
http://www.sergeychernyshev.com/
This option made sense when blocking registered users was an
experimental feature, but nowadays it's set to true everywhere, and
there's hardly any third-party wiki that actually uses it. So, does it
make sense to remove this option completely?
--
Max Semenik ([[User:MaxSem]])
On Fri, Jun 27, 2008 at 11:06 AM, <demon(a)svn.wikimedia.org> wrote:
> Log Message:
> -----------
> Add no-ops for the (un)lock functions.
>
> Modified Paths:
> --------------
> trunk/phase3/includes/db/DatabaseMssql.php
> trunk/phase3/includes/db/DatabaseOracle.php
> trunk/phase3/includes/db/DatabaseSqlite.php
You know, maybe it would be an interesting idea to actually use real
polymorphism in the Database class rather than making Database ==
DatabaseMySQL and having everything else override that? How about having the
no-op lock() and unlock() go in Database, and get overridden in
DatabaseMySQL (which is the only one where they're different)?
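Roughly what I have in mind (just a sketch of the proposed layering; the
MySQL bodies are illustrative, based on GET_LOCK()/RELEASE_LOCK()):

class Database {
    // Default: named locks are not supported, so these are harmless no-ops.
    public function lock( $lockName, $method ) {
        return true;
    }
    public function unlock( $lockName, $method ) {
        return true;
    }
}

class DatabaseMySQL extends Database {
    // MySQL is the one backend with real named locks.
    public function lock( $lockName, $method ) {
        $lockName = $this->addQuotes( $lockName );
        $result = $this->query( "SELECT GET_LOCK($lockName, 5) AS lockstatus", $method );
        $row = $this->fetchObject( $result );
        return $row->lockstatus == 1;
    }
    public function unlock( $lockName, $method ) {
        $lockName = $this->addQuotes( $lockName );
        $result = $this->query( "SELECT RELEASE_LOCK($lockName) AS lockstatus", $method );
        $row = $this->fetchObject( $result );
        return $row->lockstatus == 1;
    }
}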
Hi List,
How is it possible to get the (RSS/Atom) feed icon in the address bar of
the browser? At the moment I use the WikiFeeds extension, which creates
its own feed that is shown in the address bar.
But I want MediaWiki's internal feed (which displays the changes
much better), the one you reach from the Recent Changes page, to be used
instead - so how do I show this feed in the address bar of the browser so
that the feed can be subscribed to on every page of the wiki?
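One approach that might work, assuming OutputPage::addLink() and the
BeforePageDisplay hook behave on your version the way I'd expect (the title
and URL details are illustrative):

// LocalSettings.php: advertise the built-in Recent Changes Atom feed on
// every page so the browser shows the feed icon in the address bar.
$wgHooks['BeforePageDisplay'][] = 'efAddRecentChangesFeedLink';

function efAddRecentChangesFeedLink( &$out ) {
    global $wgServer, $wgScriptPath;
    $out->addLink( array(
        'rel'   => 'alternate',
        'type'  => 'application/atom+xml',
        'title' => 'Recent changes',
        'href'  => "$wgServer$wgScriptPath/index.php?title=Special:RecentChanges&feed=atom",
    ) );
    return true;
}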
PS: Is there a planned feature for configuring the feed?
Thx in advance,
Regards, Alex
--
*Game based eVideo*
An ESF-funded project of FHTW Berlin.
eLearning | eVideo | Web 2.0 | Serious Games
office: +49 (0)30 50 19 26 47
http://evideo.fhtw-berlin.de
Alexander Kluge
http://www.alexkluge.de
Student assistant
mobile: +49 (0)163 60 51 036
alexander.kluge(a)fhtw-berlin.de
FHTW - Fachhochschule für Technik und Wirtschaft
Treskowallee 8, 10318 Berlin
Hello, wikitech.
I have applied to Google Summer of Code with a project to enable
category moving without using bots. After some correspondence with
Catrope, the following text describes my project idea. Any feedback would be
welcome.
Synopsis
I will provide the capability of moving categories, achieving an effect
for the end user similar to that of moving other pages. Currently,
contributors must apply for a bot run that recreates the category page
and changes the category link on all relevant articles.
Project
The project can be divided into three parts. First, the category page
is moved, along with its history, just as renaming of articles works.
A redirect is optionally placed on the old category page, and the
category discussion is moved as well.
Second, all articles in the relevant category must have their category
links changed. There are several obstacles involved in this task:
1. Finding all alternative ways of categorizing articles. It is simple
to match the simple category links and category lists, but more
difficult to find e.g. categories included from a template. Roan
Kattouw (Catrope) suggested category redirects for this, such that all
articles categorised as [[Category:A]] would also be listed at
[[Category:B]] if the former has been redirected to the latter.
2. Articles might be in the process of being edited while the move is
performed. This, however, can be solved in the same manner as edit
collisions are currently solved.
3. The algorithm would likely have high complexity and would thus not
scale well with very large categories.
This is likely to constitute a significant and challenging part of the project.
As the last step, the relevant entries in the categorylinks table
would need to be changed. This is accomplished by a simple SQL query.
This could be avoided if bug #13579 [1] ("Category table should use
category ID rather than category name") is fixed, which it could be as
part of this project.
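For illustration, the update in question expressed through the core database
abstraction (cl_to currently stores the category name, which is exactly what
bug #13579 would change; variable names are illustrative):

// Sketch: repoint all membership rows from the old category name to the
// new one. $oldName and $newName are the DB-key forms of the titles.
$dbw = wfGetDB( DB_MASTER );
$dbw->update(
    'categorylinks',
    array( 'cl_to' => $newName ),  // SET
    array( 'cl_to' => $oldName ),  // WHERE
    __METHOD__
);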
The project would preferably be written as a patch to the core.
Catrope suggested setting up a separate SVN branch for the project,
such that everyone can see my progress.
Benefits for MediaWiki
Developing a means of moving categories would decrease dependency on
bots, saving administrative time. Additionally, the solution would
be faster than any bot-based solution could be because, among other
things, pages would no longer need to be loaded.
Category moving would also increase the consistency of layout across the
different page types. The only real reason for a "move" tab not to
reside on category pages is that the feature is not yet implemented.
Roadmap
Publishing this document to the MediaWiki development community
(wikitech-l) and awaiting comments on the planned procedure would be
the first step.
After the community bonding period specified by the timeline, a week
should be enough to get comfortable with the relevant MediaWiki code
and implement the first part: moving the category page along with
its discussion and history. Much old code should be reusable here,
such as the Title::moveTo() method for moving pages.
By mid-July, most of the second part of the project should be
finished. Within a week from there, the last part would be completed, too.
A month is then reserved for bug-testing, tweaking and as a buffer for
unexpected obstacles. The MediaWiki community is very important in
this step for testing and feedback.
Regards
--
Tim Johansson
http://timjoh.com/
Hi,
I'm still working on upgrading us to 1.12, and after upgrading the
extensions I'm noticing that the 1.12 installation still takes a lot longer
to render a page. Measuring ( $elapsed = $now - $wgRequestTime; ), for 1.9 I
usually see about 0.15 seconds on average to serve a page, while with 1.12
I'm seeing more like 0.50 seconds, with both installations running on the
same server and connected to the same DB server, etc.
Here are some of the functions I'm seeing take a while in the profiler.
462.884 MediaWiki::initialize
395.097 MediaWiki::performAction
328.284 Article::view
233.001 Parser::parse
232.899 Parser::parse-Article::outputWikiText
134.212 Parser::internalParse
103.388 Parser::replaceVariables
252.385 MediaWiki::finalCleanup
251.857 OutputPage::output
251.402 Output-skin
251.332 SkinTemplate::outputPage
209.752 SkinTemplate::outputPage-execute
855.008 -total
The full profile is here:
http://69.20.102.10/x/profile_deep.txt
Any ideas? I've tried drilling down to find out why some of these are
taking so long. replaceInternalLinks sometimes seems to take a long time
because the LocalFile::loadFromDB DB select statement sometimes takes over
50ms, but it looks like our indices in the image table are fine.
mysql> check table image;
+------------------+-------+----------+----------+
| Table            | Op    | Msg_type | Msg_text |
+------------------+-------+----------+----------+
| wikidb_112.image | check | status   | OK       |
+------------------+-------+----------+----------+
1 row in set (1.73 sec)
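Note that CHECK TABLE only verifies table integrity; it doesn't say whether
the select actually uses an index. Something like the following (the column
list and value are made up; substitute the exact statement from the slow log)
would show which index, if any, gets chosen:
mysql> EXPLAIN SELECT * FROM image WHERE img_name = 'Example.jpg';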
Thanks,
Travis
On 21/06/2008, rotem(a)svn.wikimedia.org <rotem(a)svn.wikimedia.org> wrote:
> Revision: 36523
> Author: rotem
> Date: 2008-06-21 10:51:02 +0000 (Sat, 21 Jun 2008)
>
> Log Message:
> -----------
> Removing the recent changes fieldset, per complaints from some users.
Which kind of complaints, and from where? I kind of like fieldsets, and
it doesn't look very good now without one, in my opinion. Can you
provide more details? Can I have it back if I complain as a user? :D
Many special pages already have fieldsets, so it would be consistent
with that. Can we reach some kind of consensus on this?
--
Niklas Laxström
Obviously, somewhere in the MediaWiki software there is a list of
non-whitespace characters which can follow the end of a link and still
be part of it. For example, "[[fish]]es" links to "[[fish]]" but looks like
"[[fishes]]", which is good because one is the plural form, which
redirects to the singular form anyway.
I don't know when or why the behavior was changed, but I recently
noticed that apostrophes are now being sucked into the link, so
"[[Arby]]'s" links to "[[Arby]]" but looks like [[Arby's]], which is
bad because one is a Swedish housing project and the other is an
American restaurant chain.
In short, a feature intended for easy plural links now also creates
confusing possessive links. I do hope this is an accidental side
effect which can easily be corrected.
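For reference, the trailing-text behaviour is driven by the per-language
$linkTrail regular expression (MessagesEn.php for English); it is roughly of
this shape, though I haven't checked which revision touched it:

// The first capture group is appended to the link text; the rest is left
// alone. If an apostrophe ever ends up inside the character class,
// [[Arby]]'s gets swallowed exactly the way [[fish]]es does.
$linkTrail = '/^([a-z]+)(.*)$/sD';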
—C.W.