Send MediaWiki-l mailing list submissions to
    mediawiki-l@lists.wikimedia.org
To subscribe or unsubscribe via the World Wide Web, visit
    https://lists.wikimedia.org/mailman/listinfo/mediawiki-l
or, via email, send a message with subject or body 'help' to
    mediawiki-l-request@lists.wikimedia.org
You can reach the person managing the list at
    mediawiki-l-owner@lists.wikimedia.org
When replying, please edit your Subject line so it is more specific than "Re: Contents of MediaWiki-l digest..."
Today's Topics:
- PluggableAuth and OAuth and login user experience (Scott Koranda)
- Re: Sitenotice for all pages except one (Jean Valjean)
- Re: Sitenotice for all pages except one (Sébastien Santoro)
- Load from a wget crawl (Rick Leir)
- Re: Load from a wget crawl (Brian Wolff)
- Re: Load from a wget crawl (G Rundlett)
- Re: Load from a wget crawl (Rick Leir)
Message: 1
Date: Wed, 3 May 2017 15:10:03 -0500
From: Scott Koranda skoranda@gmail.com
To: mediawiki-l@lists.wikimedia.org
Subject: [MediaWiki-l] PluggableAuth and OAuth and login user experience
Message-ID: 20170503201003.GA123363@paprika.localdomain
Content-Type: text/plain; charset=us-ascii
Hi,
I am using MediaWiki 1.28.
I want users to authenticate using OpenID Connect, so I have deployed the PluggableAuth and OpenID Connect extensions, and they are working well.
I also want to provision accounts in my wiki from another system of record using the API, so I have deployed the OAuth extension, created an owner-only OAuth consumer, and written a PHP client against the API. It too is working well.
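For reference, the relevant parts of my LocalSettings.php look roughly like this (a sketch; the issuer URL and credentials are placeholders, and the OpenID Connect keys are as I recall them from the extension's documentation, so double-check for your versions):

wfLoadExtension( 'PluggableAuth' );
wfLoadExtension( 'OpenIDConnect' );
wfLoadExtension( 'OAuth' );

# Placeholder issuer and client credentials for the OpenID Connect extension
$wgOpenIDConnect_Config['https://idp.example.org/'] = [
    'clientID' => '...',
    'clientsecret' => '...',
];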
The issue is that, in order for the PHP client to leverage the API and authenticate using OAuth AND for users to authenticate using OpenID Connect, I need to set
$wgPluggableAuth_EnableLocalLogin = true;
If I do not set that then the PHP client cannot authenticate using OAuth.
Have I missed something that would let the PHP client authenticate with OAuth and use the API to provision accounts, without my having to enable local login?
If not, then I am satisfied with the solution I have, except for the user login experience. I want users to click "Log in" without then having to see the Special:UserLogin page.
My thought is to replace that special page with one I create in a custom extension, extending the SpecialUserLogin class, as suggested here:
http://stackoverflow.com/questions/42776926/how-do-you-edit-the-html-for-med...
Is that the simplest and most elegant approach, or is there a cleaner way to "hide" the Username/Password form from users and avoid them having to click twice to start the OpenID Connect flow?
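For example, I imagine something like this in a small extension, where SpecialAutoLogin is a hypothetical subclass of SpecialUserLogin that I would still have to write (untested sketch):

// Swap the built-in login page for a custom one.
// SpecialAutoLogin is hypothetical, not an existing class.
$wgHooks['SpecialPage_initList'][] = function ( array &$list ) {
    $list['Userlogin'] = 'SpecialAutoLogin';
    return true;
};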
Thanks,
Scott K
Message: 2
Date: Wed, 3 May 2017 17:22:06 -0400
From: Jean Valjean jeanvaljean2718@gmail.com
To: MediaWiki announcements and site admin list mediawiki-l@lists.wikimedia.org
Subject: Re: [MediaWiki-l] Sitenotice for all pages except one
Message-ID: CAGLDgJN7=wuZw31QVT8J5Ju47bQ34fLS9h5rSXEfYwMwO5DQ=w@mail.gmail.com
Content-Type: text/plain; charset=UTF-8
Okay, this is the code I came up with:
$wgAlreadyParsed = false;

function onParserBeforeStripSiteNotice( &$parser, &$text, &$strip_state ) {
    global $wgAlreadyParsed;
    // Only prepend the notice once per request.
    if ( $wgAlreadyParsed ) {
        return true;
    }
    $title = $parser->getTitle();
    // Only touch pages in the main namespace.
    if ( $title->getNamespace() !== 0 ) {
        return true;
    }
    $wikiPage = WikiPage::factory( $title );
    $revision = $wikiPage->getRevision();
    if ( is_null( $revision ) ) {
        return true;
    }
    $content = $revision->getContent( Revision::RAW );
    $revisionText = ContentHandler::getContentText( $content );
    // Skip parses of anything other than the page's own text
    // (e.g. system messages or transcluded templates).
    if ( $text !== $revisionText ) {
        return true;
    }
    $wgAlreadyParsed = true;
    $titleText = $title->getPrefixedText();
    $text = "{{siteNotice|$titleText}}" . $text;
    return true;
}

$wgHooks['ParserBeforeStrip'][] = 'onParserBeforeStripSiteNotice';
Then in Template:SiteNotice I put:
{{#ifeq: {{FULLPAGENAME}}|Main Page||<div class="plainlinks" style="border:1px solid #a7d7f9; width:100%; font-size: 110%; text-align: center; padding: 0.5ex; "><p>This page is an archive.</p></div>}}
On Wed, May 3, 2017 at 3:51 AM, Jean Valjean jeanvaljean2718@gmail.com wrote:
How do I put up a sitenotice for all pages except one (e.g. Main Page)? I want the main page to have my current content, and all other pages to have a notice saying their content is just an archive.
I notice that when I put {{FULLPAGENAME}} in MediaWiki:Sitenotice, it always says "MediaWiki:Sitenotice".
Otherwise, I would use an #ifeq.
Message: 3
Date: Wed, 3 May 2017 23:40:57 +0200
From: Sébastien Santoro dereckson@espace-win.org
To: MediaWiki announcements and site admin list mediawiki-l@lists.wikimedia.org
Subject: Re: [MediaWiki-l] Sitenotice for all pages except one
Message-ID: CAKg6iAETp60h=wWj4-wK8uT3jrtob-WoeMZ998Fzm465V1g+EQ@mail.gmail.com
Content-Type: text/plain; charset=UTF-8
Create an extension; it will be easier to maintain.
Divide the code into several methods too: https://jeroendedauw.github.io/slides/craftmanship/functions/#/1
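For example, roughly like this (an untested sketch; the class and helper-method names are mine):

class SiteNoticeHooks {

    private static $alreadyParsed = false;

    public static function onParserBeforeStrip( &$parser, &$text, &$strip_state ) {
        if ( self::$alreadyParsed || !self::isMainNamespacePage( $parser ) ) {
            return true;
        }
        if ( !self::isCurrentRevisionText( $parser->getTitle(), $text ) ) {
            return true;
        }
        self::$alreadyParsed = true;
        $titleText = $parser->getTitle()->getPrefixedText();
        $text = "{{siteNotice|$titleText}}" . $text;
        return true;
    }

    private static function isMainNamespacePage( $parser ) {
        return $parser->getTitle()->getNamespace() === 0;
    }

    private static function isCurrentRevisionText( $title, $text ) {
        $revision = WikiPage::factory( $title )->getRevision();
        if ( $revision === null ) {
            return false;
        }
        $content = $revision->getContent( Revision::RAW );
        return $text === ContentHandler::getContentText( $content );
    }
}

$wgHooks['ParserBeforeStrip'][] = 'SiteNoticeHooks::onParserBeforeStrip';

With a proper extension you can also register the hook from extension.json instead of LocalSettings.php.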
-- Sébastien Santoro aka Dereckson http://www.dereckson.be/
Message: 4
Date: Wed, 03 May 2017 17:44:20 -0400
From: Rick Leir rleir@leirtech.com
To: mediawiki-l@lists.wikimedia.org
Subject: [MediaWiki-l] Load from a wget crawl
Message-ID: C7AE2349-082D-44D9-8E5F-3EDD7ACCEF53@leirtech.com
Content-Type: text/plain; charset=utf-8
Hi all,

Oops. I have a wget crawl of an old MediaWiki site, and would like to load it into a newly installed MediaWiki. No easy way, I am almost sure. Please tell me if I missed something.

Cheers -- Rick

--
Sorry for being brief. Alternate email is rickleir at yahoo dot com
Message: 5
Date: Wed, 3 May 2017 19:16:32 -0400
From: Brian Wolff bawolff@gmail.com
To: MediaWiki announcements and site admin list mediawiki-l@lists.wikimedia.org
Subject: Re: [MediaWiki-l] Load from a wget crawl
Message-ID: CA+oo+DV_pGpB_VDmCt0+nj3pq0DBeohgN+UXb39Oc7PA-ZRwTQ@mail.gmail.com
Content-Type: text/plain; charset=UTF-8
Yeah, that's basically impossible.
Maybe you could try to use Parsoid to convert the HTML into wikitext-ish stuff (but it would be lossy, since it is not Parsoid-generated HTML that is being converted), and it would still involve some programming to connect all the parts.
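Something along these lines, assuming a local Parsoid service and its v3 transform API (an untested sketch; the "localhost" domain segment and the port depend on your Parsoid configuration):

<?php
// POST one crawled HTML page to Parsoid and print the wikitext it returns.
$html = file_get_contents( 'page.html' );
$ch = curl_init( 'http://localhost:8000/localhost/v3/transform/html/to/wikitext' );
curl_setopt( $ch, CURLOPT_POST, true );
curl_setopt( $ch, CURLOPT_POSTFIELDS, [ 'html' => $html ] );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
echo curl_exec( $ch );
curl_close( $ch );

You would still have to strip the skin chrome from each crawled page first, and then push the resulting wikitext into the wiki (e.g. with the edit API or the importTextFiles.php maintenance script).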
-- Brian
Message: 6
Date: Wed, 3 May 2017 19:57:10 -0400
From: G Rundlett greg.rundlett@gmail.com
To: MediaWiki announcements and site admin list mediawiki-l@lists.wikimedia.org
Subject: Re: [MediaWiki-l] Load from a wget crawl
Message-ID: CANaytcebYQXx8Zrvxzi-FKQCtkWn_FMd9=efzYuK8WwS8FG_QQ@mail.gmail.com
Content-Type: text/plain; charset=UTF-8
You could try my Html2Wiki extension.
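It imports HTML files through a special page. Roughly (from memory; check the extension's page on mediawiki.org for current install steps):

wfLoadExtension( 'Html2Wiki' );
# then visit Special:Html2Wiki to upload the crawled HTML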
Message: 7
Date: Wed, 3 May 2017 21:33:17 -0400
From: Rick Leir rleir@leirtech.com
To: mediawiki-l@lists.wikimedia.org
Subject: Re: [MediaWiki-l] Load from a wget crawl
Message-ID: ab6d8d7d-a640-b3d2-1dff-1b1574d949f5@leirtech.com
Content-Type: text/plain; charset=utf-8; format=flowed
Thanks, that looks promising!
Subject: Digest Footer
MediaWiki-l mailing list
MediaWiki-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-l
End of MediaWiki-l Digest, Vol 164, Issue 3