Hi,
I am Akshay Agarwal, a third-year CSE student and a freelance web
developer. Please have a look at my GSoC proposal at
http://www.mediawiki.org/wiki/User:Akshay.agarwal and suggest any
necessary changes.
Looking forward to your feedback,
-- Thanks & Regards,
Akshay Agarwal
Hi, I'm Yuvi, a student looking forward to working with MediaWiki via
this year's GSoC.
I want to work on something dump-related, and have been bugging
apergos (Ariel) for a while now. One of the ideas that popped into
my head is moving the dump process to another language (say, C# or
Java, or, to be very macho, C++ or C). This could give the dump
process a substantial speed boost (the profiling I did[1] seems to
indicate that the DB is not the bottleneck, though I might be wrong),
and it could also be done in a way that makes running distributed
dumps easier and more elegant.
So, thoughts on this? Is 'Move Dumping Process to another language' a
good idea at all?
P.S. I'm just looking for ideas, so if you have specific
improvements to the dumping process in mind, please respond with
those too. I already have DistributedBZip2 and Incremental Dumps in
mind :)
[1]: https://bugzilla.wikimedia.org/show_bug.cgi?id=5303
Thanks :)
--
Yuvi Panda T
http://yuvi.in/
2011/3/25 Luca de Alfaro <luca(a)dealfaro.org>:
> Dear Wilfredor,
> I think a list of page_ids, one id per line, would be fine.
> You could also give us a list
> page_id,title
An example line:
45063840,"Simón Bolívar"
> where the title is enclosed in double quotes and any embedded quotes
> are escaped (doubled), following one of the standard CSV formats...
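To make that concrete, here is a minimal PHP sketch of how I could
generate such a file; the array contents and the file name are just
illustrative, not part of any agreed interface:

  <?php
  // Write a page_id,title list in the CSV dialect described above:
  // titles always double-quoted, embedded quotes doubled.
  // $pages and 'pages.csv' are made up for this example.
  $pages = array(
      array( 45063840, 'Simón Bolívar' ),
  );
  $fh = fopen( 'pages.csv', 'w' );
  foreach ( $pages as $row ) {
      $title = str_replace( '"', '""', $row[1] ); // escape embedded quotes
      fwrite( $fh, $row[0] . ',"' . $title . '"' . "\n" );
  }
  fclose( $fh );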
> The answer we give back to you can contain page titles, so you can then
> check if any title has changed.
> If this does not work for you let us know; we are flexible.
Nice!
> Ok, we will bring up WikiTrust on the Spanish Wikipedia. It's quite easy
> for us to do so.
Excellent!
I'd like that; please let me know when this is done. I don't know
the logistics of the process, but I have followed it closely on
wp:en 1.0.
> Thanks for your interest!
The appreciation is mutual.
> Luca
User:Wilfredor
Hi guys,
My name is Sagie, I'm a first-year CS and linguistics student at Tel-Aviv University. I'm applying for GSoC this summer and wanted to introduce myself. I actually talked with some of you on #mediawiki last night, under the handle "n0nick".
I've been a professional web developer for 6 years, mostly done PHP work. I'm fairly familiar with web & wiki frameworks in general, and have had some (short) experience with MediaWiki.
I'd like to read your thoughts on the following projects I had my eyes on:
* Inline Editing extension for MW/SMW
Sounds like a very interesting and useful project to work on, and it seems to me it fits the timeframe.
I saw that a lot of work has already been done on this task by user janpaul123, and I was wondering what the status of the project is and how I can help make it a viable GSoC project.
* Sidebar/toolbar customization GUI
I understand there's overhaul work being done on the skin system, but I talked with Dantman about the sidebar customization, and from what I understand it's possible to take this project on as long as my work is abstract (and good) enough.
Does anyone have other thoughts on this? Should I perhaps avoid working on something that might interfere with current developers' work?
* Email notifications
Again, I see in the wiki page that some work has been done on this, and testing and bugfixing is required.
Do you think it could be a suitable summer project?
Would love to hear your thoughts.
Thanks,
--
Your friend in time,
Sagie Maoz
sagie(a)maoz.info // +1 (347) 556.5044 // +972 (52) 834-3339
http://sagie.maoz.info/ // http://n0nick.net/
/* simba says roar! */
Our parser cache hit ratio is very low, around 30%.
http://tstarling.com/stuff/hit-rate-2011-03-25.png
This seems to be mostly due to insufficient parser cache size. My
theory is that if we increased the parser cache size by a factor of
10-100, then most of the yellow area on that graph should go away.
This would reduce our apache CPU usage substantially.
The parser cache does not have particularly stringent latency
requirements, since most requests only do a single parser cache fetch.
So I researched the available options for disk-backed object caches.
Ehcache stood out, since it has a suitable feature set out of the
box and was easy to use from PHP. I whipped up a MediaWiki client
for it and committed it in r83208.
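Roughly, wiring it up in LocalSettings.php would look something like
the sketch below; treat it as illustrative, since the exact cache
type name, class name and option keys may differ from what r83208
actually registers:

  // Assumed configuration sketch -- check r83208 for the real
  // class and option names.
  $wgObjectCaches['ehcache'] = array(
      'class'   => 'EhcacheBagOStuff',        // assumed class name
      'servers' => array( 'localhost:8080' ), // Ehcache REST server (assumed)
  );
  // Point the parser cache at the new backend.
  $wgParserCacheType = 'ehcache';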
My plan is to do a test deployment of it, starting on Monday my time
(i.e. Sunday night US time), and continuing until the cache fills up
somewhat, say 2 weeks. This deployment should have no user-visible
consequences, except perhaps for an improvement in speed.
-- Tim Starling
I work on an extension that used to call parse() directly. Then after
some advice from mw developers this was changed to a call to
recursiveTagParse because "parse should not be called directly".
The only problem is that the method which used to call parse() is
used to populate a special page, so there is no outer parse() running
in the first place, right? This means everything parse() does in
addition to recursiveTagParse() has to be copied over. So, what
exactly makes it so inadvisable to call parse()?
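To make the question concrete, here is roughly the shape of the two
call sites (the function and variable names are illustrative):

  // Inside a tag hook, an outer parse() is already running, so
  // nesting into the same Parser is the recommended pattern:
  function renderMyTag( $input, array $args, Parser $parser, PPFrame $frame ) {
      return $parser->recursiveTagParse( $input, $frame );
  }

  // On a special page there is no outer parse(), so the code has
  // to start one itself, e.g.:
  $out = $wgParser->parse( $wikitext, $title,
      ParserOptions::newFromUser( $wgUser ) );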
Stephan
Trying to edit is failing:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /wikipedia/en/w/index.php.
Reason: Error reading from remote server
Reading articles still works ok.
--
-george william herbert
george.herbert(a)gmail.com
I think I understand the desire for better, easier tools (git does make
branching and merging super-easy), but could I sidetrack the git
discussion for a bit and ask that people review the InterWiki
Transclusion branch (http://hexm.de/0y)?
Roan has already reviewed a big part of the code, but his time is very
limited right now. People have been asking for this for quite a while:
the bug number is 9890 (https://bugzilla.wikimedia.org/9890). I'd
like to get it reviewed before too much bit-rot sets in.
Mark.
> As to Toolserver, this environment and its functionality is deeply flawed.
> As the tools are open source, there is no reason why relevant tools cannot
> be brought into Git and upgraded to a level where they are of production
> quality. Either Git is able to cope or its distributed character adds no
> real value.
>
> The notion that it has to be MediaWiki core and or its extensions first is
> absurd when you consider that it is what we use to run one of the biggest
> websites of the world. We rely on the continued support for our production
> process. The daily process provided by LocalisationUpdate is such a
> production process. When the continuity of production processes is not a
> prime priority, something is fundamentally wrong.
You are misunderstanding. This thread isn't about the toolserver, so
you are muddying up a perfectly valid thread with something totally
unrelated.
Yes, toolserver has a problem, and it should be addressed. It isn't a
problem with the MediaWiki developer community though, it's a problem
with the toolserver community, and they need to fix it. But again,
let's focus on one issue at a time.
- Ryan Lane