This thread at LWN seems to have some information that would
be interesting to those people who might be charged, down the road,
with the SSLizing of Wikimedia:
http://lwn.net/Articles/428594/
In particular, it discusses SSL session-caching across a cluster, which I
hadn't realized was possible.
Cheers,
-- jra
Hey,
I have an extension that transfers content, including images, from one wiki
to another via the API. Currently it's using the upload-via-URL option [0],
which works great, but obviously not for source wikis that cannot be
accessed from where the target wiki is. For those I need to use regular
file upload, but I can't find out what to do here.
The documentation very helpfully states "When uploading files directly, the
request must use multipart/form-data as Content-Type or enctype,
application/x-www-form-urlencoded will not work.", but then provides no clue
as to how to actually set the content type when making a request via the
MWHttpRequest class. I tried several things after looking at the source, but
could not get it working. I'm also not finding any existing code that does this.
Any pointers here would be greatly appreciated :)
[0]
https://secure.wikimedia.org/wikipedia/mediawiki/wiki/API:Upload#Uploading_…
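For what it's worth, here is a rough sketch (in plain Python, purely to
illustrate the wire format, not how MWHttpRequest exposes it) of what a
multipart/form-data body for an api.php action=upload request looks like.
The field names follow the API:Upload docs; the boundary value is arbitrary,
and the helper name is made up for the example:

```python
import uuid

def build_multipart_body(fields, file_field, filename, file_bytes):
    """Encode plain form fields plus one file part as multipart/form-data.

    Returns (content_type_header_value, body_bytes). This is why
    application/x-www-form-urlencoded can't work: the file bytes need
    their own part with a Content-Disposition header.
    """
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            (f'--{boundary}\r\n'
             f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
             f'{value}\r\n').encode()
        )
    # The file part carries a filename and its own Content-Type.
    parts.append(
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="{file_field}"; '
         f'filename="{filename}"\r\n'
         f'Content-Type: application/octet-stream\r\n\r\n').encode()
    )
    parts.append(file_bytes + b'\r\n')
    parts.append(f'--{boundary}--\r\n'.encode())
    return f'multipart/form-data; boundary={boundary}', b''.join(parts)

# Example: the kind of fields an upload request carries.
ctype, body = build_multipart_body(
    {'action': 'upload', 'filename': 'Example.png'},
    'file', 'Example.png', b'\x89PNG...')
```

Whatever HTTP layer is used then has to send `body` with the Content-Type
header set to `ctype`, boundary included.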
Cheers
--
Jeroen De Dauw
http://www.bn2vs.com
Don't panic. Don't be evil.
--
jQuery's ajax method provides a better way to load a JavaScript file, and it
can detect when the script has been loaded and execute a callback function. I
think we can use it in our mw.loader.load. jQuery.ajax provides two
ways (XHR or script-tag injection) to load a script; you should set cache=true
to use the injection one.
Philip Tzou
Robla has asked me to put together a page documenting how I plan to
use Bugzilla. For now, I've decided to mostly use the “Priority” field
(since it is largely unused) and write up my understanding of how
“Severity” is currently used.
You can see the result at
http://www.mediawiki.org/wiki/Bugmeister/Bugzilla and I welcome your
comments.
Mark.
While exploring features of the Labeled Section Transclusion extension, mainly
used in source projects as a tool for the proofreading procedure, I found that
the extension could solve many issues completely unrelated to such a
procedure, but I also found some limitations coming, IMHO, from the way it is
implemented in the parsing path of the raw text of the page.
1. I can't transclude raw text wrapped in a template call: this doesn't
work:
{{center|<section begin=1 />Text<section end=1 />}}
2. I can't transclude raw text wrapped in an HTML comment like this:
<!--<section begin=1 />Text<section end=1 />-->
3. I can't transclude raw text wrapped in a noinclude tag:
<noinclude><section begin=1 />Text<section end=1 /></noinclude>
4. I can't transclude text coming from the same page that calls the transclusion.
I can't read the PHP parsing scripts, but I imagine that would
change if labeled transclusion were simply built as a regex search over
the raw code of the page, ignoring any other content of that raw code; and I
imagine too that a great improvement in performance could be gained if
the page being searched were kept in memory or in a fast cache,
allowing multiple labeled-section searches in the same page without the
need to reload it.
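As a toy illustration of that idea (plain Python, not PHP, and purely a
sketch of the proposed raw-text approach, not of how the extension actually
works today), a naive regex search over the raw wikitext would happily pull
a labeled section out of a template call, comment, or noinclude wrapper:

```python
import re

def extract_section(raw_wikitext, label):
    """Naively extract a labeled section from raw page text, ignoring
    any surrounding templates, comments, or noinclude tags."""
    pattern = (r'<section begin={0}\s*/>(.*?)<section end={0}\s*/>'
               .format(re.escape(label)))
    m = re.search(pattern, raw_wikitext, re.DOTALL)
    return m.group(1) if m else None

# The section is found even though it sits inside a template call,
# which is exactly case 1 above.
page = '{{center|<section begin=1 />Text<section end=1 />}}'
extract_section(page, '1')  # -> 'Text'
```

Of course a real implementation would also need to handle quoted labels,
nesting, and caching of the fetched page text, as suggested above.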
If #lst were converted into a fast, effective, simple text-inclusion tool,
lots of interesting results could, IMHO, be gained: templates could be
converted into "libraries"; mono- and multidimensional arrays could be easily
used; an arbitrary number of "variables" could be loaded and used easily;
less redundancy and better consistency of data and metadata could be gained.
Most interesting, none of the usual #lst syntax would need to change.
Is there any major drawback in such an idea?
Alex brollo
Here's a Wikipedia article. There is an error, and there is the
edit link. Now, five seconds after the page has been displayed,
I click the edit link. At that moment some Javascript kicks in,
running 5 seconds after the page first appeared,
and it moves the edit link sideways, so instead I click
the history link. What!?
Can this stop, please? I have already disabled all
personal Javascript and all gadgets.
I run Firefox 3.6.13 on Ubuntu Linux 10.10 on a
netbook/laptop with only moderate background load
and plenty of RAM. Many users would have less
powerful setups.
If Javascript needs to run very late in the page
loading process (why?), could it please refrain from
rearranging layout elements of the page? It's
like a little flea circus doing its tricks
for 7-8 seconds after the page loads. I have
to sit on my hands, not touching anything
until the fleas get tired and stop. All this
"usability", and the steward election banner that
needs to pop up before it hides, is soooo very
tiring.
I have Firebug installed to study the timing
of a loading page. When I reload a page, so all
elements are in caches, the first parts including
geoip lookup take 60 ms. Then the first images
load at +2 seconds. I have no idea what makes
it take 2 seconds before the first image is loaded.
All images load really fast. Later, after a gap,
ext.vector.collapsibleNav, -Tabs, editWarning,
and simpleSearch start loading at +4.5 seconds.
Without other running programs, these times
shrink to +1 and +2.5 seconds. But there are
still inexplicably long gaps between elements.
If all images are loaded after 1.4 seconds,
why does collapsibleNav start loading at +2.5?
This Javascript is going to rearrange the
left margin menu, so I'm likely to hit the
wrong link when I want to click there.
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
Hi all!
Wikimedia Germany is again organizing a hackathon in Berlin. It will be in May,
but we have not decided on the weekend yet. So, if you are interested in
attending the event, please let us know on which dates you *can't* come. We are
especially interested to know when people around the globe are tied up by exams
and such.
So, if you want to come, but might not be able to on some specific dates, drop a
note:
<http://www.mediawiki.org/wiki/Berlin_Hackathon_2011#Straw_Poll>
Thanks,
and see you there!
-- daniel
( repost from http://techblog.wikimedia.org )
Continuing with the work started last week[1], we plan to deploy 1.17
to more wikis in a couple hours. We had hoped we would be able to
figure out the performance issues in the past week, but unfortunately,
the only practical way we have to see the load problems we witnessed
last week is to put the software into production. We have put a lot
of instrumentation in place to help us diagnose our load issues. We
plan to start the upcoming deployment by rolling out to
nl.wikipedia.org, and do some debugging (rolling back if necessary).
If we’re able to diagnose and fix the problems quickly, we then plan
to roll out 1.17 more widely. If we’re still stumped, we may still
roll out to a few more low-traffic wikis, but leave the high-traffic
sites until we figure this out.
We plan to have more updates and detailed information on the
deployment page on mediawiki.org[2]. Thanks for your patience!
Rob
[1] http://techblog.wikimedia.org/2011/02/1-17deployment-attempt2/
[2] http://www.mediawiki.org/wiki/MediaWiki_1.17/Wikimedia_deployment