Hi,
I am a technical writer and I have a lot of questions.
Our company has been using MediaWiki for its documentation.
We have a new client that recently bought our products and now needs help with its documentation. Our product documentation is posted on our wiki site. The client is now requesting copies of our documentation.
I am also proposing that they build their wiki site on MediaWiki to make the transition of the documentation easy. The process of transferring the documentation would then be exporting and importing the wiki pages.
I have found the following articles regarding importing/exporting wikis:
* Help:Export: http://en.wikipedia.org/wiki/Help:Export
* Help:Import: http://en.wikipedia.org/wiki/Help:Import
Can you confirm that these are the articles I need regarding importing/exporting wikis?
Is it possible to export our wiki content and import it into a new MediaWiki installation for the client?
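For reference, here is how I imagine the export step could be scripted (a minimal sketch only; the wiki host and page title below are placeholders):

    import requests

    # Pull one page (with full history) as XML from Special:Export.
    resp = requests.post(
        "https://wiki.example.com/index.php?title=Special:Export",
        data={"pages": "Product_Manual", "history": "1"},
    )
    resp.raise_for_status()
    with open("Product_Manual.xml", "wb") as f:
        f.write(resp.content)

The resulting XML could then be loaded on the client's wiki via Special:Import (which needs import rights) or the importDump.php maintenance script, if I understand the Help pages correctly.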
I hope you can help me with this; I would really appreciate it.
If you need anything else from me, please let me know.
Thanks!
PS. I am unsure if I should directly name our company and the client, because of a confidentiality clause.
The Disambiguator extension
(http://www.mediawiki.org/wiki/Extension:Disambiguator) is now deployed
to all WMF wikis. This will enable us to:
1. Remove disambiguation code from core, including
Special:Disambiguations (bug 35981)
2. Stop requiring wikis to maintain template lists at
MediaWiki:Disambiguationspage
3. Add features like warning users when they are linking to
disambiguation pages (https://gerrit.wikimedia.org/r/#/c/70564)
4. Remove disambiguation pages from things like Special:Random and
Special:LonelyPages
5. Enable the development of more powerful third-party tools for dealing
with disambiguation pages
There is, however, one action required of each wiki that wants to make
use of the Disambiguator extension: Every disambiguation page on the
wiki needs to include the __DISAMBIG__ magic word (or an equivalent
alias). Typically, this only requires adding the magic word to a single
template that is included on all the disambiguation pages. For example,
on Commons, this was accomplished with the following change:
https://commons.wikimedia.org/w/index.php?title=Template%3ADisambig&diff=99…
On English Wikipedia, it was a bit more complicated:
https://en.wikipedia.org/w/index.php?title=Template%3ADmbox&diff=560507118&…
Once you've made this change, you should start seeing pages appear on
Special:DisambiguationPages within 3 days. If you have any questions or
problems, let me know.
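If you'd like to check progress programmatically rather than wait for
Special:DisambiguationPages, here is a minimal sketch using the
pageswithprop API module (this assumes the extension records its pages
under the "disambiguation" page property; the host below is just an
example):

    import requests

    # List pages carrying the "disambiguation" page property.
    params = {
        "action": "query",
        "list": "pageswithprop",
        "pwppropname": "disambiguation",
        "pwplimit": "50",
        "format": "json",
    }
    resp = requests.get("https://en.wikipedia.org/w/api.php", params=params)
    for page in resp.json()["query"]["pageswithprop"]:
        print(page["title"])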
Ryan Kaldari
Wikimedia Foundation
I understand there is an issue that needs solving where various pages
link to disambiguation pages. Those links need fixing to point at the
appropriate target article.
I had a thought on how this might be done using a variant of EventLogging...
When a user follows a link to a disambiguation page and then clicks on a
link on that page, we log an event that contains:
* the page the user was on before
* the page the user is on now
If we were to collect this data, it would allow us to statistically
suggest the correct target for a given disambiguation link.
To take a more concrete, theoretical example:
* If I am on the Wikipedia article for William Blake and click on London, I
am taken to https://en.wikipedia.org/wiki/London_(disambiguation)
* I look through, see London (poem), and click on it
* An event is fired that links London (poem) to William Blake.
This won't always be accurate, but I'd expect it to work in general (we
would obviously need to filter out bots).
Then, say that when editing William Blake, its disambiguation links are
surfaced. If I go to fix one, it might prompt me that 80% of visitors
go from William Blake to London (poem).
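As a rough sketch of the aggregation step (with hypothetical event tuples,
not an actual EventLogging schema), in Python:

    from collections import Counter, defaultdict

    # (page the user was on before, page clicked on the disambiguation page)
    events = [
        ("William Blake", "London (poem)"),
        ("William Blake", "London (poem)"),
        ("William Blake", "London (poem)"),
        ("William Blake", "London (poem)"),
        ("William Blake", "London"),
    ]

    by_source = defaultdict(Counter)
    for source, target in events:
        by_source[source][target] += 1

    for source, targets in by_source.items():
        target, count = targets.most_common(1)[0]
        share = 100 * count / sum(targets.values())
        print(f"{source}: suggest {target} ({share:.0f}% of visitors)")

On the data above this prints "William Blake: suggest London (poem) (80% of
visitors)", which is exactly the prompt an editor could be shown.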
Have we done anything like this in the past? (That is, collecting data
from readers to inform editors.)
I can imagine applying this sort of pattern could have various other uses...
--
Jon Robson
http://jonrobson.me.uk
@rakugojon
Dear gadgets / bots maintainers,
The Engineering Community Team has this 2013-14 goal related to your work:
Jan - Mar 2014
Volunteers use the gadgets/bots analysis to write automated browser
tests; by March 2014, ten percent of all Wikimedia projects have
automated tests checking for problems with important bots and gadgets
(reducing breakage and reducing time-to-fix).
http://www.mediawiki.org/wiki/Wikimedia_Engineering/2013-14_Goals#Wikimedia…
Who wants to step in now? We want to hear from you, especially if you
are maintaining a popular gadget or bot. Please reply here and/or join
the QA list:
https://lists.wikimedia.org/mailman/listinfo/qa
We are running automated browser testing workshops and other QA
activities with volunteers. Get involved and we will help you test your
software project.
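To give a flavor of what such a test can look like, here is a minimal
sketch using Python and Selenium (the page and assertion are placeholders,
and our workshops use their own stack, so treat this as illustrative only):

    from selenium import webdriver

    # Open a page and check that it rendered; a real test would assert on
    # the elements your gadget injects or the pages your bot maintains.
    driver = webdriver.Firefox()
    try:
        driver.get("https://en.wikipedia.org/wiki/Special:BlankPage")
        assert "Blank page" in driver.title, "unexpected page title"
    finally:
        driver.quit()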
--
Quim Gil
Technical Contributor Coordinator @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
Good Day,
I'm thinking about creating another MediaWiki extension here at my work (well,
actually, more likely starting over on one I haven't touched in several months)
and I have a few questions about ContentHandler and whether it is an
appropriate mechanism to make use of in this extension.
The extension I am planning on (re)making handles some data entry type stuff
about some of our computer systems here. I was thinking of creating a new
namespace called MP for the extension and then having articles in it named after
our various systems. There would then be a Special Page that keeps track of all
of these articles and provides some summary information on them in a table.
The articles would contain several pages of structured information about our
computer systems. I feel like I read that ContentHandler lets me define my
own editor for that type of content. [0] Would I be able to define an editor
that has multiple pages? The workflow that I would like to create splits the
editing of the information into several pages (one for intake questions,
another for documentation, yet another for regulatory-related questions,
etc.).
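To make that concrete, the structured content for one system might look
something like this (the field names are entirely hypothetical; just a
sketch of what a custom ContentHandler could serialize, shown via Python):

    import json

    # One article's structured content; each top-level key could map to one
    # page of the multi-step editor (intake, documentation, regulatory).
    system = {
        "intake": {"owner": "IT", "purpose": "payroll database"},
        "documentation": {"os": "Windows Server 2008", "backups": "nightly"},
        "regulatory": {"validated": True, "last_audit": "2013-05-01"},
    }
    print(json.dumps(system, indent=2))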
I could make this all as a standalone program, but I don't want to re-invent
the wheel if I don't have to when it comes to keeping track of revisions and
displaying diffs and the like. Plus, keeping it in MediaWiki gives my users an
interface that they are already familiar with.
I feel as though I was not at all clear in this email, so if I didn't give
enough information, or if you have questions that might clarify (both in my
mind and yours) what I am trying to accomplish, feel free to ask.
Thank you,
Derric Atzrott
Computer Specialist
Alizee Pathology
[0]: https://www.mediawiki.org/wiki/Manual:ContentHandler/Doc At least, this is
alluded to in "action=edit will fail for pages with non-text content, unless the
respective ContentHandler implementation has provided a specialized handler for
the edit action."
On Mon, Jul 15, 2013 at 7:57 PM, Ilya Grigorik <igrigorik(a)google.com> wrote:
> +asher (woops, forgot to cc :))
>
> On Mon, Jul 15, 2013 at 7:54 PM, Ilya Grigorik <igrigorik(a)google.com>wrote:
>
>
>> Anyway, I've already started working on something I noticed in
>> mod_pagespeed - a much better JS minification, expect updates soon:)
>>
>
> Not to discourage you from doing so... but JS minification is not the
> problem. In fact, if you look at the side-by-side content breakdown of the
> original (http://www.webpagetest.org/breakdown.php?test=130715_82_3c03a9eb9339dcf8d3e…)
> and MPS-optimized (http://www.webpagetest.org/breakdown.php?test=130715_VZ_7748042f6f940ec663a…)
> sites, you'll notice that MPS is loading 3kb more of JS (because we add
> some of our own logic).
>
> We're not talking about applying missing gzip or minification... To make
> the site mobile friendly, we're talking about structural changes to the
> page: eliminating blocking javascript code, inlining critical CSS to
> unblock first render, deferring other assets to after the above-the-fold is
> loaded, and so on. Those are the parts that MPS automates - the filmstrip
> (http://www.webpagetest.org/video/compare.php?tests=130715_82_3c03a9eb9339dc…)
> should speak for itself. (Note that the filmstrip shows first render at 2s,
> instead of 1.6s, due to how the frames are captured on mobile devices in
> WPT.)
>
These are the points I was trying to highlight from your presentation :)
While there's room for further optimization afterwards, inlining
above-the-fold CSS and deferring everything else, including additional
content, seem like immediate gains we could start working on. The mobile dev
team has already put work into being able to serve the above-the-fold
content plus section headers in an initial request - we just need to make
some changes to how we assemble pages to support inlining the required
CSS/JS for this view, and separating out the rest. I think this can and
should be delivered by MediaWiki / ResourceLoader / MobileFrontend by
design, however, instead of via MPS.
As Max noted, this would require an additional Varnish cache split, varying
between devices that support this and those that don't. But the
performance gain for supported devices should fully justify it, and we just
invested in additional frontend cache capacity for mobile.
-Asher
Thanks for your comments, Ilya!
On 16.07.2013, 2:19 Ilya wrote:
> Re, Opera: Checked with the team, this is an oversight on our part.
> Easy to fix, we just didn't whitelist the Opera UA.
> Re, non-JS handling and purging: can we discuss this a bit more? Is
> the concern that the "noscript" variable adds another variant to the cache?
My concern wasn't about Opera support specifically - it's fairly
reasonable for systems designers to support some browsers and not
others. My concern was that it used User-Agent and did not tell the
frontend cache to vary on it - which will result in cache pollution.
And even if it did, we rely heavily on the edge cache hit rate, so
varying the cache on User-Agent would be pretty much like not caching
at all for us.
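To illustrate the point with a toy sketch (illustrative Python only,
nothing to do with our actual Varnish setup): every header named in Vary
becomes part of the cache key, and real User-Agent strings are nearly
unique per browser/OS/version combination:

    # With "Vary: User-Agent", each distinct UA string gets its own entry.
    def cache_key(url, headers, vary=("User-Agent",)):
        return (url,) + tuple(headers.get(h, "") for h in vary)

    # Two slightly different browser builds miss each other's cache entry:
    k1 = cache_key("/wiki/London", {"User-Agent": "Chrome/28.0.1500.71"})
    k2 = cache_key("/wiki/London", {"User-Agent": "Chrome/28.0.1500.95"})
    assert k1 != k2  # same page, two separate cache entries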
> Finally, speaking of caching... Historically, PageSpeed did the
> following: grab the HTML, rewrite resource links, cache-extend the
> optimized resources, mark the HTML as CC:private. The CC:private
> part is what allows pagespeed to regenerate the cache-extended
> resource filenames the moment they are changed. However, this means
> the HTML is not cacheable, which can (will) drive a lot more traffic
> to the origin servers. However, good news...
> Our last week's release added support for HTML caching + purge
> functionality with upstream caches:
> https://developers.google.com/speed/pagespeed/module/downstream-caching
> ^ allows caching of partially rewritten content, as well as
> automatic purge + caching of optimized assets. The docs have an
> example for nginx and Varnish, but this should work out of the box with Squid and other caches too.
We have recently moved away from doing a lot of device detection in the
frontend caches, which gave us a performance boost; it would be a pity to
discard that.
Anyway, I've already started working on something I noticed in
mod_pagespeed - a much better JS minification; expect updates soon :)
--
Best regards,
Max Semenik ([[User:MaxSem]])
Hey all,
I'm happy to announce the immediate availability of Diff 0.7.
This release focuses on fixing bugs, adding tests and removing deprecated
methods. The notable fixes are:
* MapPatcher will no longer stop patching after the first remove
operation it encounters
* MapPatcher now always treats its top level input diff as a map diff
* MapDiffer will now treat equivalent sub-maps with different element order
correctly in recursive mode
No new features were added in this release.
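To illustrate the MapDiffer fix - recursive diffing that treats equivalent
sub-maps with different element order as equal - here is a toy sketch of
the intended semantics (plain Python, not this library's PHP API):

    # Equivalent sub-maps in different key order should produce an empty diff.
    def diff_maps(old, new):
        diff = {}
        for key in set(old) | set(new):
            if key not in new:
                diff[key] = ("remove", old[key])
            elif key not in old:
                diff[key] = ("add", new[key])
            elif isinstance(old[key], dict) and isinstance(new[key], dict):
                sub = diff_maps(old[key], new[key])  # recurse, order-insensitive
                if sub:
                    diff[key] = ("map-diff", sub)
            elif old[key] != new[key]:
                diff[key] = ("change", old[key], new[key])
        return diff

    assert diff_maps({"a": {"x": 1, "y": 2}}, {"a": {"y": 2, "x": 1}}) == {}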
Documentation can be found at https://www.mediawiki.org/wiki/Extension:Diff
Cheers
--
Jeroen De Dauw
http://www.bn2vs.com
Don't panic. Don't be evil. ~=[,,_,,]:3
"Chromebooks have in just the past eight months snagged 20 percent to 25
percent of the U.S. market for laptops that cost less than $300..."[1]
I have no idea how many pageviews we get coming from ChromeOS devices, and
I suspect it's hard to differentiate them from regular Chrome visits on
other systems. Anyway, sales trends clearly suggest they are becoming a
niche worth paying attention to.
It might be nice to have an official Wikipedia Chrome app. There are a few
in the Web Store now,[2] but they're not great. For Chrome OS users, the
main advantage of having an app, even if all it does is redirect to the
website, is the ability to add it to your Chrome home screen and the dock.
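For what it's worth, a "hosted app" of that sort is little more than a
manifest pointing at the site. A minimal, hypothetical sketch (names and
values are placeholders; see Google's Chrome app docs for the real format),
generated here with Python:

    import json

    # Minimal hosted-app manifest that simply launches the website.
    manifest = {
        "name": "Wikipedia",
        "version": "0.1",
        "manifest_version": 2,
        "app": {"launch": {"web_url": "https://en.wikipedia.org/"}},
    }
    with open("manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)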
This is probably not such a big market that the Foundation would spend any
money developing for it, but I think it's not hard to do, and anyway
Google building an OS with the Web as its backbone is kind of cool.
1.
http://www.bloomberg.com/news/2013-07-10/google-chromebook-under-300-defies…
Steven
Sending to wikitech-l instead; mediawiki-api is just for the MediaWiki API
itself, not Wikimedia server stuff.
Alex Monk
On Mon, Jul 15, 2013 at 6:38 PM, Robert Crowe <robert(a)ourwebhome.com> wrote:
> It looks to me like I was blacklisted for EN Wikipedia API requests. My
> website has been using the API for a while now, but suddenly I'm getting 403
> errors coming back. I've tried to follow all the rules, but if I've missed
> something I'm happy to make changes. What disturbs me most is that there
> was no attempt to contact me before being blacklisted to let me know there
> was a problem. Where should I go to find out what the problem is?
>
> Thanks,
>
> Robert