Query - Is there a secure (https) URL / access path into Wikimedia Commons?
If not, is there a reason not to, and can we fix that?
Thanks!
--
-george william herbert
george.herbert(a)gmail.com
brion(a)svn.wikimedia.org wrote:
> Revision: 40736
> Author: brion
> Date: 2008-09-12 00:16:41 +0000 (Fri, 12 Sep 2008)
>
> Log Message:
> -----------
> some first steps towards my horrifically evil code review tool to integrate in the wiki (doesn't work yet :)
>
Sounds like a very interesting project.
> <snip>
>
> + -- Update type: Modify, Add, Delete, Rename
> + cp_action enum ('M','A','D','R'),
>
Does rename even exist in SVN? Correct me if I'm wrong, but I always
thought SVN implemented renaming as a combination of copy and delete.
Speaking of copies, do you have a way to store and display something like
(Copied from foo) for added files?
> +-- And for our commenting system...
> +-- To specify follow-up relationships...
> +CREATE TABLE /*$wgDBprefix*/code_relations (
> + cf_repo_id int not null,
> + -- -> cr_id
> + cf_from int not null,
> + -- -> cr_id
> + cf_to int not null,
> +
> + primary key (cf_repo_id, cf_from, cf_to)
> +) /*$wgDBTableOptions*/;
>
Why didn't you just add cr_prev and cr_next to code_revisions? Also, the
cf_ prefix is kind of a weird choice here, since there's no 'f' in
code_relations.
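Roughly what I had in mind (just a sketch; I'm guessing the revisions table
is code_rev and that these columns would point at cr_id):

  ALTER TABLE /*$wgDBprefix*/code_rev
    -- cr_id of the revision this one follows up, if any (guessed names)
    ADD cr_prev int default null,
    -- cr_id of the follow-up revision, if any
    ADD cr_next int default null;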
> +
> +-- Freetext tagging for revisions
> +CREATE TABLE /*$wgDBprefix*/code_tags (
> + ct_repo_id int not null,
> + ct_rev_id int not null,
> + ct_tag varbinary(255) not null,
> +
> + primary key (ct_repo_id,ct_rev_id,ct_tag),
> + key (ct_repo_id,ct_tag,ct_rev_id)
> +) /*$wgDBTableOptions*/;
> +
>
What exactly are these tags going to do? Native support for marking as
reviewed or reverted would be nice. Also, the only indices on this table
are (ct_repo_id, ct_rev_id, ct_tag) and (ct_repo_id, ct_tag, ct_rev_id).
Wouldn't (ct_tag, ct_repo_id, ct_rev_id) be useful too (e.g. for finding
all revisions in all repos with a certain tag)?
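Concretely, something along these lines (just a sketch, reusing the column
names from the quoted schema):

  -- Possible extra index for "all revisions in all repos with tag X"
  ALTER TABLE /*$wgDBprefix*/code_tags
    ADD KEY (ct_tag, ct_repo_id, ct_rev_id);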
> +CREATE TABLE /*$wgDBprefix*/code_comment (
> + -- Unique ID of the comment within the system.
> + cc_id int auto_increment not null,
> +
> + -- Repo and code revision this comment is attached to
> + cc_repo_id int not null,
> + cc_rev_id int not null,
> +
> + -- Wikitext blob of the comment.
> + -- FIXME: Consider using standard text store?
> + cc_text blob,
>
Does 'standard' text store mean the text table or the external disk
thingy? I would recommend using the same class that stores revision
text, so this extension doesn't have to worry about whether external
storage is enabled or not.
> + -- Timestamps of threaded parent and self to present a
> + -- convenient threaded sort order:
> + -- "20080130123456"
> + -- "20080130123456,20080230123456"
> + -- "20080430123456"
> + --
> + -- Allows 17 levels of nesting before we hit the length limit.
> + -- Could redo more compactly to get 31 or 63 levels.
> + cc_sortkey varbinary(255),
>
Nice one. What will happen when the limit is hit, though?
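For what it's worth, I assume the threaded listing then becomes a single
range scan on that key; my guess at the query, with placeholder values:

  -- Comments for one revision in threaded display order
  -- (uses the (cc_repo_id, cc_rev_id, cc_sortkey) index)
  SELECT cc_id, cc_text, cc_review, cc_sortkey
  FROM /*$wgDBprefix*/code_comment
  WHERE cc_repo_id = 1 AND cc_rev_id = 40736
  ORDER BY cc_sortkey;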
> +
> + -- Does this comment confer a review sum?
> + -- 0, +1, -1
> + cc_review int,
> +
> + primary key (cc_id),
> + key (cc_repo_id,cc_rev_id,cc_sortkey)
> +) /*$wgDBTableOptions*/;
>
You might want to add a cr_review field holding the sum of all these
cc_review values to the code_rev table, along with an index. That way the
review sum will always be correct even if not all comments are displayed,
and it will be possible to search for e.g. all revisions with a negative
review sum.
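A rough sketch of what I mean (assuming the revisions table is code_rev and
is keyed on cr_repo_id/cr_id; the names here are just my guesses):

  -- Hypothetical aggregate column, kept equal to SUM(cc_review)
  -- over the revision's comments.
  ALTER TABLE /*$wgDBprefix*/code_rev
    ADD cr_review int not null default 0,
    -- e.g. to find all revisions in a repo with a negative review sum
    ADD KEY (cr_repo_id, cr_review);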
About the extension as a whole: are new revisions added to the system
the second they're made, or are they imported on a regular basis? Also,
it would be nice to have some kind of e-mail notification feature
(either through the wiki's e-mail system or through a mailing list).
Roan Kattouw (Catrope)
On Thu, Sep 11, 2008 at 6:30 PM, <soxred93(a)svn.wikimedia.org> wrote:
> Revision: 40734
> Author: soxred93
> Date: 2008-09-11 22:30:27 +0000 (Thu, 11 Sep 2008)
>
> Log Message:
> -----------
> Add new special page Wantedfiles.
>
> [snip a whole lot]
>
> + function getSQL() {
> + $dbr = wfGetDB( DB_SLAVE );
> + list( $imagelinks, $page ) = $dbr->tableNamesN( 'imagelinks', 'page' );
> + $name = $dbr->addQuotes( $this->getName() );
> + return
> + "
> + SELECT
> + $name as type,
> + " . NS_IMAGE . " as namespace,
> + il_to as title,
> + COUNT(*) as value
> + FROM $imagelinks
> + LEFT JOIN $page ON il_to = page_title AND page_namespace = ". NS_IMAGE ."
> + WHERE page_title IS NULL
> + GROUP BY il_to
> + ";
> + }
This is pretty much the same query I committed in r35934. It was later
removed because the query is slow and makes the page useless, as is the case
with WantedPages, which is disabled on WMF sites for the same reason. Has
something changed that makes this better now?
--Chad
Hi all, I'm new to this list, but I'm sure this topic has come up many times
before. In short, wiki syntax is inadequate for structuring discussion
pages, and MediaWiki needs a forum system. LiquidThreads is the most
significant effort in this direction, and it's a big step forward, but the
project is apparently dead and it's not being adopted by WMF. Other wikis
have been quite successful with forum systems, like the highly active YKTTW
forum on TVTropes (http://tvtropes.org/pmwiki/ykttw.php). I believe that the
current system substantially impedes usability of WMF projects, particularly
for new users, in some of the following ways:
* New users may fail to get attention because they post in the wrong place
on the page, forget to sign their post, or format their replies incorrectly.
Users new to wiki syntax may be so intimidated that they don't post at all.
They may also repeat questions already asked many times because the relevant
threads have vanished into the archives and there's no effective search
functionality for threads.
* Discussions may falter because interested contributors can't watch
individual discussions, sorting by creation time encourages short threads,
and archiving edits mask real edits on watchlists.
* Inconsistent formatting causes confusion in threads (e.g. indentation
resetting, arbitrary section breaks, comments running together), which
occasionally leads to avoidable conflict.
For many years we've dealt with these problems on En through kludges like
archiving bots, signature bots, and so on.
A common objection I hear is that wiki talk pages are better for discussing
drafts of portions of articles. But as long as the contents of posts still
use the same wiki syntax, this remains straightforward.
There's still significant disagreement about a few specific issues like
whether users, just admins, or no one should be permitted to edit comments
of others, or move threads from one discussion page to another; a good
permission system would allow this to be left up to the individual wiki.
There's a significant migration problem: what to do with the massive reams
of content existing on discussion pages and in discussion archives? There
may also be scalability issues with creating a system that can handle the
kind of load Wikipedia currently sees.
In short, I'm looking to revive the much-delayed effort to get real forum
support implemented and deployed to major WMF projects, and offering to
contribute and head up this effort myself. What will it take, and what's the
best answer to the hard design questions? What have we learned from
LiquidThreads? Considering all the schema changes since LiquidThreads was
written, is it better to use it as a starting point or to do something new?
Any feedback
is appreciated.
-Vivian
P.S. I'm posting under an alias due to my work's open source contribution
policy - I'm an undisclosed administrator on the English Wikipedia.
Hi there
I understand Wikimedia uses Lucene as its search back-end. Is it possible to
index other sources with Lucene and have the results show up in a MediaWiki
search result?
The reason is that I'd like to not only reference files and online content
in my wiki, but also have their content searchable - all through one
interface.
Regards
Jaf
Hello,
Apologies if this has been asked before, or if there's a more appropriate forum. I've been hunting but haven't found anything yet on MediaWiki, Wikipedia, or the discussion list archives. I tried wikitext-l first, but I'm not sure anybody's on that list anymore.
Is it possible to parse or call the data from template A on page X, and
spit parts of it back out using another template (say, template B) on
page Y, in a different namespace? For the sake of argument, assume there
is only one instance of template A on page X.
For example,
Template A
{{bookdata
| image = no
| title = A big long title
| maplink = Image:Title_p82.gif
}}
Template B
{{mapdata
| title = [calls 'title' value from bookdata]
}}
It seems like there should be a way to do this, so if it means reading
manuals on something, I'd be happy to. At this point I'm just not sure
which direction I should go to read up on this.
Thanks for any advice.
Craig
On Tue, Sep 9, 2008 at 4:42 PM, <catrope(a)svn.wikimedia.org> wrote:
> Log Message:
> -----------
> (bug 15535) prop=info&inprop=protection doesn't list pre-1.10 protections if the page is also protected otherwise (1.10+ style or cascading)
Can we migrate all of the old restrictions over already and forget
about this obsolete nonsense? Or is there some reason we haven't?