Thanks a lot Quim Gil :-) I want to fix bugs and am looking for what I can do.
I will check the links and start fixing bugs.
On 22 Feb 2013 14:01, <wikitech-l-request(a)lists.wikimedia.org> wrote:
I run a fairly big, bleeding-edge MediaWiki via SSL on a Raspberry Pi.
And it's not too slow, because I use APC.
My quick tip of the day
# add APC (Alternative PHP Cache)
# see https://www.mediawiki.org/wiki/Extension:APC
# stop your web server
service apache2 stop
# Install APC
apt-get install php-apc
# download the APC extension (a "dashboard for APC"; it adds a Special page)
cd $IP/extensions
git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/APC.git
# in LocalSettings.php make sure to have the following lines
## Shared memory settings
$wgMainCacheType = CACHE_ACCEL;
$wgMemCachedServers = array();
require_once("$IP/extensions/APC/APC.php");
## either create a dedicated 'apc' group:
# $wgGroupPermissions['apc']['apc'] = true;
# or simply give the right to an existing trusted group, like bureaucrat:
$wgGroupPermissions['bureaucrat']['apc'] = true;
# and restart your server
service apache2 start
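To confirm the accelerator cache is actually being used, a quick round trip through the main object cache can help. This is a sketch run through MediaWiki's maintenance/eval.php shell; wfGetMainCache() returns the cache object configured by $wgMainCacheType, and the key name here is just a made-up example:

```php
// Run inside: php maintenance/eval.php
// With CACHE_ACCEL this set/get should go through APC,
// not the database.
$cache = wfGetMainCache();
$cache->set( 'apc-smoke-test', 'hello', 60 );
var_dump( $cache->get( 'apc-smoke-test' ) );
```

If the var_dump() shows the stored string rather than false, the accelerator cache is wired up.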
This is your weekly preview of higher-risk or general "you should be
aware of" items for the slew of deployments coming in the near term.
During the week of March 11th:
* Scribunto (Lua) to all wikis
* upgrades to DNS software/config for better reliability
See the full Deployments page for regularly scheduled events and less
high-risk items:
https://wikitech.wikimedia.org/view/Deployments
Best,
Greg
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| [[User:Greg G (WMF)]] A18D 1138 8E47 FAC8 1C7D |
We held our second bug day Tuesday Feb. 19th.
*How it Went*
We looked at open bugs in Wikimedia's Git/Gerrit component. We
focused on upstream issues that may have been fixed by the recent upgrade
and on issues that need status updates. Andre, ^demon, MatmaRex and I
triaged bugs, with help from developers in #wikimedia-dev.
*What we Achieved*
We triaged about 25 bugs [1]. This included retesting old reports to
see whether the problems still exist after the Gerrit software upgrade on
the Wikimedia server. Many of these bugs were still valid, so we checked
the status of the upstream reports to see whether any progress had taken
place in the meantime.
*Improvements Implemented*
-Better Landing Page
[1] started as a landing page listing 'Who,' 'What,' 'When,' and
'Where.' Since we focused on upstream issues, I was able to link to the
Release Notes and Gerrit's bug tracker on that page without cluttering up
[2]. After the event we recorded the bugs that were acted upon, along with
comments from the event's etherpad, onto the new page. We will likely
continue this process.
*Question for Prospective Participants*
I would like to know whether the timing may be a barrier for prospective
attendees. Is there a better time to hold the bug days? So far we've run
both events on a Tuesday, 17:00-23:00 UTC. We could alternate the times we
hold bug days to allow more people to participate.
Again, thank you for your participation and support!
-Valerie Juarez
[1] http://www.mediawiki.org/wiki/Bug_management/Triage/20130219
[2] http://www.mediawiki.org/wiki/Bug_management/Triage
On 02/22/2013 03:42 AM, Thorsten Glaser wrote:
> I am a bit unhappy that instead
> of a database, MySQL is used/preferred, but (after the last
> few bugfixes), PostgreSQL works, so I’m set.
Please do not hesitate to file any bugs for things that don't work for
you in PG. And if they aren't getting resolved quickly enough, please
ping me.
> I expect us (as in, my employer) to not follow every single
> MW release quickly, and Debian probably won’t either (mostly
> for lack of manpower, I guess).
And this is the exact reason that I initiated LTS support for 1.19.
We'll make releases every 6 months, but you can be assured that we'll
support 1.19 for a while.
> With my Debian Developer hat on, I don’t sense much in that
> area of complaints either.
I installed the package last night on http://home.nichework.com/ -- dns
may not be propagated yet -- and was disappointed that you didn't use
the CLI installer to set up a wiki using debconf.
There were a couple of other nits, but I think that overall it is a
great thing.
Thanks,
Mark
--
http://hexmode.com/
There is no path to peace. Peace is the path.
-- Mahatma Gandhi, "Non-Violence in Peace and War"
On 19/02/13 21:11, MZMcBride wrote:
> Hi.
>
> In the context of <https://bugzilla.wikimedia.org/show_bug.cgi?id=10621>,
> the concept of using wiki pages as databases has come up. We're already
> beginning to see this:
>
> * https://en.wiktionary.org/wiki/Module:languages (over 30,000 lines)
> * https://en.wikipedia.org/wiki/Module:Convertdata (over 7,400 lines)
>
> At large enough sizes, the in-browser syntax highlighting is currently
> problematic.
We can disable syntax highlighting over some size.
> But it's also becoming clear that the larger underlying
> problem is that using a single request wiki page as a database isn't
> really scalable or sane.
The performance of #invoke should be OK for modules up to
$wgMaxArticleSize (2MB). Whether the edit interface is usable at such
a size is another question.
> (ParserFunction #switch's performance used to prohibit most ideas of using
> a wiki page as a database, as I understand it.)
Both Lua and #switch have O(N) time order in this use case, but the
constant you multiply by N is hundreds of times smaller for Lua.
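To illustrate the pattern being discussed: a Scribunto data module (hypothetical name and content below) is loaded and parsed once per page render, which is where the O(N) cost lives; each lookup afterwards is a single hash-table index. A #switch, by contrast, re-scans its branches linearly on every invocation.

```lua
-- Hypothetical data module, e.g. Module:LanguageData.
-- Loading this table is the O(N) step.
local data = {
    en = "English",
    de = "German",
    fr = "French",
}

local p = {}

-- Usage on a wiki page: {{#invoke:LanguageData|name|de}}
function p.name( frame )
    -- A table index, effectively constant time per call.
    return data[ frame.args[1] ] or "unknown"
end

return p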
> Has any thought been given to what to do about this? Will it require
> manually paginating the data over collections of wiki pages? Will this be
> something to use Wikidata for?
Ultimately, I would like it to be addressed in Wikidata. In the
meantime, multi-megabyte datasets will have to be split up, for
$wgMaxArticleSize if nothing else.
-- Tim Starling
Hi!
wfMsg and wfMsgForContent have been deprecated since 1.18, but the comment
doesn't say which functions are recommended instead. Does anyone know?
-----
Yury Katkov, WikiVote
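For what it's worth, as I understand the deprecation notes, the replacement is the Message class reached via wfMessage(); a sketch of the mapping:

```php
// Deprecated since 1.18:
//   wfMsg( 'some-message-key' );
//   wfMsgForContent( 'some-message-key' );

// Replacement via the Message class:
$text = wfMessage( 'some-message-key' )->text();

// wfMsgForContent's behavior, i.e. forcing the wiki's
// content language rather than the user language:
$contentText = wfMessage( 'some-message-key' )->inContentLanguage()->text();
```

The Message object also offers other output methods (parse(), escaped(), and so on) depending on how the message is used.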