Thanks for your replies. I realize this plugin is currently faulty for
the following reasons:
* 25% probability
* securing the questions and answers
I will see if I can resolve them. Again, this is my first plugin and I am
trying to learn, so my apologies :)
--
With Regards
Nischay Nahata
B.tech 3rd year
Department of Information Technology
NITK, Surathkal
John Erling Blad pointed us to this thread. I was not subscribed to the list, so I'm sorry that this response probably creates a new thread.
We, UNINETT, operate Feide, the Norwegian identity federation for students from lower and higher education and research institutions in Norway. Feide would allow services like Wikipedia to verify end users (with some additional user data, such as user ID, email, and name) using the SAML 2.0 protocol. End users would then log in on their institution's login page using their institutional credentials, and they would also have single sign-on to other sites.
We also maintain the software package SimpleSAMLphp, which implements the various roles in the SAML 2.0 protocol architecture, including support for acting as a Service Provider, which would be the relevant role for a service like Wikipedia. SimpleSAMLphp is implemented in PHP, and while we do not ourselves maintain MediaWiki extensions that integrate with it, I believe others have made some efforts:
http://www.mediawiki.org/wiki/Extension:MultiAuthPlugin
http://www.mediawiki.org/wiki/Extension:SAMLAuth
SimpleSAMLphp is one of many open source products implementing SAML.
We have a good contact network of other educational Identity Federations across the world, in particular in Europe and the US. We have been part of two initiatives for allowing service providers to connect to a wide range of Identity Federations at once: GÉANT eduGAIN and Kalmar2.
http://www.geant.net/service/edugain/pages/home.aspx
https://www.kalmar2.org
Identity Federations, like Feide, can provide:
* verified accounts, something that may help control trolling.
* the user convenience of not having to register or maintain another set of credentials, plus the convenience of SSO.
If you are interested in doing a pilot with connecting wikipedia to Feide, we may provide you with further details to proceed with that.
The user-centric identity federation paradigm, represented by protocols like OpenID (and others), will usually not provide you with verified accounts, but will still give you the user convenience of SSO and reuse of an existing account.
OpenID has gone through a few versions, 1.0 and 2.0, and currently OpenID Connect is being sorted out. OpenID Connect differs significantly from earlier versions since it is built upon OAuth (a good thing). We're also somewhat involved with the OpenID Connect standardization. As part of the GÉANT Identity Federation project, in collaboration with the Kantara Initiative, we will be responsible for implementing an automated interoperability test facility for OpenID Connect, like this: http://www.youtube.com/watch?v=3mGA79T0hPg
OAuth "alone" cannot provide authentication of users to Wikipedia from external sites. It can, however, be used to grant a user authorization to Wikipedia content through a back-channel REST API (without exposing credentials through this API). I believe that was the idea this thread started with, which seems like a very good idea, but a very different one from offering federated login. OAuth also exists in multiple versions, and I think it would be recommended to go for OAuth 2.0 for any new projects that have not supported earlier versions of OAuth.
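A back-channel call of that kind can be sketched as follows. The endpoint URL and token here are placeholders, and this shows only the generic OAuth 2.0 bearer-token pattern (RFC 6750), not any actual Wikipedia API:

```python
import urllib.request

def bearer_request(url, access_token):
    # Attach the grant in an Authorization header (RFC 6750 bearer scheme),
    # so the user's own credentials are never sent to the API.
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Bearer " + access_token)
    return req

# Performing the call (requires network access):
# with urllib.request.urlopen(bearer_request("https://example.org/api", token)) as resp:
#     body = resp.read()
```

The point is that the API only ever sees a revocable token scoped to what the user granted, never a password.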
Andreas Åkre Solberg
UNINETT AS - http://rnd.feide.no
If I am writing to a wrong address, please let us know.
I used Etherpad for a long time without any problem.
Now, if I open an Etherpad window (etherpad.wikimedia.org), I get this
message after a few seconds:
"Disconnected.
*Lost connection with the EtherPad synchronization server.* This may be due
to a loss of network connectivity.
If this continues to happen, please let us know
<http://etherpad.wikimedia.org/ep/support> (opens in new window)."
If I reconnect, this happens again after a few seconds.
My internet connection works properly.
If I click "let us know", I get another error message:
"Oops! A server error occured. It's been logged.
Please email <support(a)etherpad.com> if this persists."
Does this problem come from my computer (which is a new one; I have
never used Etherpad on it)?
Should I write a message to support(a)etherpad.com, or how else could I
use Etherpad?
Thank you for your help.
Best regards,
Samat
-----Original Message-----
From: John McClure [mailto:jmcclure@hypergrove.com]
Sent: Wednesday, March 21, 2012 4:48 AM
To: 'Yury Katkov'; 'Wikimedia developers'
Subject: RE: [Wikitech-l] Topic Maps
Yury,
Sure, SMW could certainly be part of a solution, especially using subobjects
from v1.7, but this suggests WP isn't intended to have something as basic as
a subject index itself, because SMW is apparently not on WP's roadmap. If WP
doesn't record topics for its articles, then WP cannot fully leverage its
library of data in the semantic web. IMHO the semantic web is more
fruitfully about merging and contrasting topic maps than resource
descriptions.
On implementation, I see two parts: first, hierarchical subject indexes
(such as the LCSH) based on SKOS [3]; and second, topic maps that are
round-tripped as XTM v2.0 [2] within the scope of <page> elements. "All the
other" plumbing is significant enough, though, to make this work. For
instance, I'd consider requiring "type" designators in the XTM stream to be
names of (aliased + actual) namespaces. This suggests a more dynamic
namespace manager, which I know has been kicking around for a while.
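To illustrate the idea (all identifiers below are invented; this is an in-memory sketch, not the XTM serialization itself), the feature topic maps add over plain RDF triples is typed topics joined by typed associations with named roles:

```python
# Invented identifiers; a real implementation would round-trip XTM 2.0 XML.
topics = {
    "lcsh:Semantic_Web": {"type": "subject"},
    "page:Topic_Maps": {"type": "article"},
    "page:RDF": {"type": "article"},
}

# Each association has its own type, and each member plays a named role --
# the structure that plain triples flatten away.
associations = [
    {"type": "has-subject",
     "roles": {"subject": "lcsh:Semantic_Web", "work": "page:Topic_Maps"}},
    {"type": "has-subject",
     "roles": {"subject": "lcsh:Semantic_Web", "work": "page:RDF"}},
]

def pages_about(subject):
    # A subject index falls out directly: every work playing the "work"
    # role in a has-subject association with the given subject.
    return sorted(a["roles"]["work"] for a in associations
                  if a["type"] == "has-subject"
                  and a["roles"]["subject"] == subject)
```

Layering an LCSH-style hierarchy on top would just mean adding broader/narrower associations between subject topics.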
Bottom line if SMW were to be incorporated into WP then it's a fine idea to
use SMW to hold topic data. If not, I am concerned that platforms without
semantics don't seem sustainable over the long haul.
[1] http://en.wikipedia.org/wiki/Topic_Maps
[2] http://www.isotopicmaps.org/sam/sam-xtm/
[3] http://www.w3.org/TR/skos-referenc
Thanks - John
>-----Original Message-----
>From: ganqturgon(a)gmail.com [mailto:ganqturgon@gmail.com]On Behalf Of
>Yury Katkov
>Sent: Tuesday, March 20, 2012 7:50 PM
>To: jmcclure(a)hypergrove.com; Wikimedia developers
>Subject: Re: [Wikitech-l] Topic Maps
>
>
>Hi John! Could you provide some links on how Topic Maps are used
>in modern wikis and information systems?
>There is a big family of Semantic Extensions [1] that allow exporting
>wiki pages to RDF; isn't this enough?
>
>[1] http://semantic-mediawiki.org/wiki/Help:SMW_extensions
>-----
>Yury Katkov
>
>
>
>
>On Wed, Mar 21, 2012 at 6:53 AM, John McClure
><jmcclure(a)hypergrove.com> wrote:
>> Adding Topic Maps to MW base software could be a winner -- it can
>> generate a wiki-site map (some think WP needs one!); it can be used to
>> correlate the contents of documents loaded into a wiki (like conference
>> proceedings) with a wiki's topic map; and it would make a cool tool for
>> any page in a wiki, most clearly on a user page. It's perhaps a smart
>> strategic move - ISO 13250 Topic Maps are the fruit of SGML/HyTime n-ary
>> models that 'lost' to RDF triples back when. Being a superset of RDF,
>> TMs can type associations between articles while capturing all infobox
>> data.
>>
>> Topic maps may be a compelling FUNCTIONAL upgrade for MW as they capture
>> the subjects of an article for the first time. Given topic-map to RDF
>> transforms amid continuing W3C research, this could be enough for the
>> semantic world. By adopting, say, the Library of Congress Subject
>> Headings, a wiki like Wikipedia could play an important role in the
>> semantic web. The current situation with Wikipedia is that it's hard to
>> have a large library of information without a subject catalogue... right
>> now, wikis have an author catalogue of sorts, fine for smaller
>> handcrafted wikis, but that doesn't scale well.
>>
>> Since other platforms now have maturing topic map extensions, I'm
>> worried about the impact on wikis of not having that technology.
>>
>> John McClure
>>
>>
>> _______________________________________________
>> Wikitech-l mailing list
>> Wikitech-l(a)lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
On Mon, Mar 19, 2012 at 12:23 AM, Bawolff Bawolff <bawolff(a)gmail.com> wrote:
>>Message: 1
>>Date: Sun, 18 Mar 2012 13:02:53 +0800
>>From: Liangent <liangent(a)gmail.com>
>>To: "A list for announcements and discussion related to the Wikimedia
>> Labs project." <labs-l(a)lists.wikimedia.org>, Wikimedia developers
>> <wikitech-l(a)lists.wikimedia.org>
>>Subject: Re: [Wikitech-l] [Labs-l] New project request
>>Message-ID:
>> <CAJ23o9ijb2TrfW6Gk+XTaVZQwHfoST0M4EE2UD4A5xcmdXVLdQ(a)mail.gmail.com>
>>Content-Type: text/plain; charset=UTF-8
> [..]
>>What do you think is a better approach to implement this? In version
>>1.16 I had to create another DB table, but modern MediaWiki has better
>>sorting support for categories, so another choice (which seems more
>>natural) is to patch MediaWiki and change $wgCategoryCollation to an
>>array to support multiple collations at the same time. Maybe another
>>choice is to reuse the existing categorylinks table in an extension,
>>but this requires MediaWiki to have enough hooks.
>
> Hmm interesting. I could definitely see how it could feel "natural"
> that this be in core. I could also see fairly good arguments for it
> being an extension. Either way I think it should (ideally)
> integrate with the categorylinks table. As it stands though, the
> unique index on (cl_from,cl_to) would probably prevent that [change to
> a unique index on cl_from,cl_to,cl_collation?]. Otherwise I don't see
> any major hurdles. If going the extension route, hooks can always be
> added if there are not enough. Some maintenance scripts would have to
> be changed, and some changes to the Collation class would have to be
> made, but I don't think they would be super-complicated changes by any
> means.
>
> On a related note, there's also a bug that wants sort collations to be
> specified per category (but still only 1 collation to a category) -
> https://bugzilla.wikimedia.org/show_bug.cgi?id=28397
>
> -bawolff
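The index change discussed above can be sketched in miniature. The column names follow the real categorylinks table, but the three-column unique key is only the change speculated about, not anything implemented:

```python
# Rows keyed on (cl_from, cl_to) alone -- the current unique index -- cannot
# hold one sortkey per collation; adding cl_collation to the key can.
def add_row(rows, cl_from, cl_to, cl_collation, cl_sortkey):
    key = (cl_from, cl_to, cl_collation)
    if key in rows:
        raise ValueError("duplicate categorylinks row")
    rows[key] = cl_sortkey
    return rows

rows = {}
add_row(rows, 1, "Cities", "uca-default", "AVILA")
add_row(rows, 1, "Cities", "uppercase", "ÁVILA")  # accepted: collation differs
```

With the two-column key, the second call would be a duplicate; with the collation in the key, one page/category pair can carry a sortkey per collation.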
This is my quick work and I'll push it for review after Git migration.
Comments are welcome.
-Liangent
Hi,
I created a search engine for IRC logs; it works much faster than the
current engine, and it's written in PHP. I will send the source code
soon.
http://bots.wmflabs.org/~wm-bot/searchlog/
I don't understand HTML, so the output is ugly. If someone wants to help
improve it, let me know.
Hello,
My name is Aaron Pramana and I am a prospective student for GSoC with
MediaWiki. A week ago, I posted a brief explanation of my idea on a bug
report page, but I haven't had time to elaborate on the wikitech-l list, as
Sumana suggested I do. This proposal is an amalgamation of several small
improvements that will make the watchlist more accessible. My goal for
this project is to retrofit the watchlist with an emphasis on improving the
workflow for current and potential power users.
Objectives:
* add a way to group similar pages into subdirectories
(e.g. Special:Watchlist/GroupName)
* allow bulk watchlist modifications (rollback/unwatch) with checkboxes
* unify the "watched changes" and "view/edit" sections
* incorporate AJAX for actions that don't require a page reload (if time
allows)
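A rough sketch of the first two objectives (group and page names here are invented; a real implementation would live in MediaWiki's watchlist backend, with groups surfaced as Special:Watchlist/GroupName):

```python
# Invented data: group names mapping to sets of watched page titles.
watchlist = {
    "GroupName": {"Main Page", "Sandbox"},
    "Physics": {"Quantum mechanics", "Entropy"},
}

def bulk_unwatch(watchlist, group, titles):
    # The checkbox workflow: drop several pages from one group at once.
    watchlist[group] -= set(titles)
    return sorted(watchlist[group])
```

The same shape would support bulk rollback, with the per-group sets driving which checkboxes appear on the page.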
Does anyone have additional features/bugs that they'd like to see
added/fixed on the watchlist? Please provide links to bug requests if they
exist.
Over the next week or so, I should be more responsive to replies as my
school will be on Spring Break. I am about halfway finished with a
proposal, so it should be available for review then. Please let me know if
my ideas have potential for a successful proposal and if there is anything
I can add. Thanks!
>So, how could we check whether "Portuguese" wikis would break by doing
>this change?
According to Wikipedia, Portuguese sorting is as follows: "In addition
[to the letters used in English], the following characters with
diacritics are used: Áá, Ââ, Ãã, Àà, Çç, Éé, Êê, Íí, Óó, Ôô, Õõ, Úú.
These are not, however, treated as independent letters in collation,
nor do they have entries of their own in Portuguese dictionaries. When
two words differ only in the presence or absence of a diacritic, the
one without it is collated first"
I just tested on my personal wiki, and can confirm that the ordering
when using this setting is as Wikipedia describes. I didn't test
exhaustively, but I feel very confident that this setting would work
fine for Portuguese without any further tailoring needed.
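The quoted rule (identical words differing only by a diacritic, with the unmarked one first) can be sketched with a two-level sort key. This only mimics the behaviour; it is not MediaWiki's Collation code:

```python
import unicodedata

def pt_sort_key(word):
    # Primary key: the word with diacritics stripped; secondary key: the
    # original word, so "pa" sorts before "pá" but both sort before "pe".
    stripped = "".join(c for c in unicodedata.normalize("NFD", word)
                       if not unicodedata.combining(c))
    return (stripped, word)

print(sorted(["pé", "pa", "pá", "pe"], key=pt_sort_key))
# → ['pa', 'pá', 'pe', 'pé']
```

This is essentially what a multi-level UCA collation does: diacritics are ignored at the primary level and only break ties at a lower level.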
Here's a screenshot of how uca-default sorts various letters used in
Portuguese: http://imgbin.org/images/7280.png .
The sort order used is consistent with
http://www.evertype.com/alphabets/portuguese.pdf (as far as I can
tell, assuming I'm reading that PDF correctly).
[from a different email]
>> So, how could we check whether "Portuguese" wikis would break by doing
>> this change?
>
>As Tim said:
>* Set one of the test wikis (Testwiki, Testwiki2) to Portuguese
>* change to said collation
>* do some editing
>* see if it breaks
Note: setting the language to Portuguese is unnecessary, as we
currently do not support per-language tailoring of the collation. All
languages get sorted the same at the moment. (There are bugs in
Bugzilla to change this, and it really should be changed, but such
per-language support has yet to be implemented; even if it were, it's
unclear whether such a setting would be based on the wiki's content
language or not.)
-bawolff