Hello all.
In the German Wikipedia, we have massive problems devising a consistent
and useful pattern for categorisation. After some trying and discussing,
it seems that the software lacks some features that would make
categories a truly powerful tool for structuring the wiki. So my
question is how hard it would be to implement those features. Here are
the ones I personally (and quite a few others, too) feel are the most
important:
* Search over cross-sections of categories, and a sensible syntax for
linking to such sections (see the sketch after this list).
* A concept of implicit membership in categories, such that members of
a subcategory are automatically members of the parent categories
(transitive closure). For category display, this should be optional;
for cross-sections, it should be the default behaviour.
* A distinction between a category being a subcategory of another
category and merely belonging to it, just as articles do.
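For illustration, here is a minimal sketch of what a cross-section query
could look like in SQL, assuming the 1.3 categorylinks table (cl_from
holding the cur_id, cl_to the category name); the category names are
just placeholders:

-- Hypothetical sketch: all articles that are in both Category:Physik
-- and Category:Biographie (a cross-section of two categories).
SELECT cur_title
FROM cur
JOIN categorylinks AS c1 ON c1.cl_from = cur_id AND c1.cl_to = 'Physik'
JOIN categorylinks AS c2 ON c2.cl_from = cur_id AND c2.cl_to = 'Biographie'
WHERE cur_namespace = 0;

Implicit membership would additionally require expanding 'Physik' to the
set of all its subcategories before such a query runs, which is exactly
why the transitive behaviour needs support in the software rather than
in a single SQL statement.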
Would it be feasible to put these features into the software? What would
be the problems? How soon could we have such features?
For more information about the troubles we are having, see
<http://de.wikipedia.org/wiki/Wikipedia_Diskussion:Kategorien> (German).
For some of my thoughts and ideas about these problems, see
<http://de.wikipedia.org/wiki/Benutzer:Duesentrieb/Semantic_Wiki_Web>
(English).
Thank you very much,
Daniel <http://de.wikipedia.org/wiki/Benutzer:Duesentrieb>
I'd like to hack SpecialRecentChanges so that one could
retrieve per-page RecentChanges RSS/Atom feeds, or alternatively
hack PageHistory to offer RSS/Atom feeds.
Can anyone direct me as to which approach might be more
useful for MediaWiki users in general?
If I could get such a feed, and augment it slightly to
show the save-page "Summary" comment as part of the
title for each feed item, then I think I could use
MediaWiki to host a blog with a nice RSS feed.
Hacking SpecialRecentChanges seems the easier approach,
but I thought I'd check first whether one of the two was preferred.
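For what it's worth, the data such a feed needs looks easy to get at; a
minimal SQL sketch against the 1.3 recentchanges table ('Some_page' is a
placeholder):

-- Hypothetical sketch: the last 20 changes to one page; rc_comment is
-- the edit summary that would become each feed item's title.
SELECT rc_timestamp, rc_user_text, rc_comment
FROM recentchanges
WHERE rc_namespace = 0 AND rc_title = 'Some_page'
ORDER BY rc_timestamp DESC
LIMIT 20;

One caveat: recentchanges only keeps entries for a limited time, which
may be an argument for the PageHistory approach if complete per-page
feeds matter.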
thanks,
jr
--
------------------------------------------------------------
Joel W. Reed 412-257-3881
---------- http://home.comcast.net/~joelwreed/ ----------
When you are on page X and click "watch this page" or "unwatch this
page", MediaWiki 1.3.1 returns you to the Main_Page.
However, as already implemented in e.g. the User_Login procedure, I
would prefer to be returned to the last visited page, i.e. page X.
In module Article.php, in function watch(), change the last call
$wgOut->returnToMain(false);
to
$wgOut->returnToMain(true,$link);
Tom
Timwi wrote:
> Thomas Gries wrote:
>
>> to show and link who edited last the article (or revision)
>> (1) on every article's footer
>
> This is not sufficiently important to warrant putting on every article.
> People wanting to know this only need to click "history"; people only
> wanting to read the article shouldn't be presented with this kind of
> secondary information.
And what about also putting the size of the article in the footer? (This
information cannot easily be found anywhere in the interface.)
The footer would then look like:
"This $2-byte page was last modified $1"
Any ideas for a workaround to get this into the footer message?
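The raw size at least seems easy to get from the database; a minimal SQL
sketch (assuming the 1.3 cur table; 'Some_page' is a placeholder):

-- Hypothetical sketch: the current article text length in bytes.
SELECT cur_title, LENGTH(cur_text) AS size_in_bytes
FROM cur
WHERE cur_namespace = 0 AND cur_title = 'Some_page';

The open question is how to plumb that value into the footer message.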
Xmlizer
I've added a memory usage check to the profiling output (requires that
PHP be compiled with --enable-memory-limit). It's pretty approximate and
probably wrong for whole functions (are local variables freed at the end
of the function?) but certainly is useful for the overall view and for
measuring chunks of code.
The last column of output is the sum of the increase in memory usage
over each invocation of that profiling point. (If it's all 0s, your PHP
is either older than 4.3.2 or compiled without --enable-memory-limit.)
The default limit is a mere 8 megabytes, and this includes both data and
loaded & parsed code. On some hosts you're not allowed to increase the
limit, so we really want to be able to run within that; right now we
sometimes fail, for instance in rebuildMessages and Special:Allmessages.
The largest use of memory on common page views, though, is just _loading
the code_. The Setup.php include chain currently pulls in about four and
a half megabytes, well over half the default limit; trimming unnecessary
includes is a great way to save space and allow more headroom for big
operations. It also saves time: on a system without an opcode cache,
about half the runtime for a short page view ends up being spent loading
include files.
-- brion vibber (brion @ pobox.com)
On 22 Aug 2004, at 22:58, wikitech-l-request(a)wikimedia.org wrote:
> Date: Sun, 22 Aug 2004 22:13:46 +0200
> From: Jakob Voss <gmane-user(a)nichtich.de>
> Subject: [Wikitech-l] How to get a list of authors for each article
> To: Wikimedia developers <wikitech-l(a)wikimedia.org>
>
> <snip />
>
> All users should be listed - but not the IP-numbers of anonymous posts!
>
But it might be a good idea to include a statement like:
"This article has also been edited X times by anonymous contributors."
(NB: I don't think we should say "X anonymous users", because what with
dialup and dynamically assigned IPs, the relation between unique IPs
and unique users is flaky at best.)
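Counting those should be cheap; a minimal SQL sketch, assuming the
numeric user id columns are 0 for anonymous edits:

-- Hypothetical sketch: anonymous edit counts per article, from past
-- revisions in the old table (the current revision in cur would need
-- the same check on cur_user).
SELECT old_title, COUNT(*) AS anon_edits
FROM old
WHERE old_namespace = 0 AND old_user = 0
GROUP BY old_title;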
-Jens
Hi there!
(my first mail seems to have gone into nirvana)
For the German Wikipedia CD
(see http://meta.wikimedia.org/wiki/Wikipedia_auf_CD)
we need a list of all authors for each article (as the GFDL requires).
I am not that good at SQL, so can anybody help?
I thought of a statement like:
"This article has been edited X times by logged-in users and Y times by
anonymous users. The authors are: Userxy, Foouser, Userbar..."
My untested SQL statements so far:
CREATE TABLE edit_count (
article VARCHAR(255),
edited_by_users INTEGER,
edited_by_IP INTEGER
);
CREATE TABLE has_edited (
user VARCHAR(255),
article VARCHAR(255)
);
-- collect every (user, article) pair from both past revisions (old)
-- and current revisions (cur)
INSERT INTO has_edited
SELECT DISTINCT old_user_text AS user, old_title AS article
FROM old
WHERE old_namespace=0
UNION
SELECT DISTINCT cur_user_text AS user, cur_title AS article
FROM cur
WHERE cur_namespace=0;
How do I avoid getting all the IP numbers?
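Maybe filtering on the numeric user id columns would work? A minimal
sketch, assuming old_user / cur_user are 0 for anonymous edits:

-- Hypothetical sketch: collect only logged-in editors by excluding
-- rows whose numeric user id marks an anonymous edit.
INSERT INTO has_edited
SELECT DISTINCT old_user_text AS user, old_title AS article
FROM old
WHERE old_namespace = 0 AND old_user != 0
UNION
SELECT DISTINCT cur_user_text AS user, cur_title AS article
FROM cur
WHERE cur_namespace = 0 AND cur_user != 0;

The inverse filter (old_user = 0) would then give the anonymous edit
counts for the X and Y above.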
What is the best method to get such a list of authors, and how long
would it take to determine it for every article? By the way, my little
SQL statement to count the number of links to each article has now been
running for hours.
Thanks,
Jakob
On 22 Aug 2004, at 10:51, wikitech-l-request(a)wikimedia.org wrote:
> Date: Sat, 21 Aug 2004 20:43:07 +0200
> From: Elisabeth Bauer <elian(a)djini.de>
>>
>>> Hello,
>>>
>>> is there a possibility to get
>
>>> c) the current version number of an article?
>>
>> No idea
>
> see http://bugzilla.wikipedia.org/show_bug.cgi?id=181
> Currently not.
>
> greetings,
> elian
You are kindly and most cordially invited to vote for this bug!
:-)
- Jens