Earlier this month, the PHP developer team moved to start soft-deprecating
the mysql.so extension via documentation, with the intent to raise
E_DEPRECATED notices in a later release. [1] I therefore thought it would be
appropriate to start a small discussion about how this should be handled.
At the moment, MediaWiki is still using these functions in its
DatabaseMysql class. As a new contributor to MW, it struck me as odd that no
MySQLi implementation was available, so I went ahead and created one [2]
(patch [3]). Right now it's just another $wgDBtype; how it should really be
integrated is what I want to discuss. Should any
implementation (not necessarily mine) using MySQLi just be another DBType in
the installer, perhaps? (Most software I've seen goes this route.) Also, at
what point (in time, or after what event) do we do away with mysql function
support? And are there any performance regressions with MySQLi that we should
be aware of?
[1]: http://marc.info/?l=php-internals&m=131031747409271&w=2
[2]:
https://github.com/johnduhart/mediawiki-trunk-phase3/commit/552a90f5142bb10…
[3]: https://gist.github.com/1115789
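For anyone weighing the migration itself, the procedural mysqli API maps fairly directly onto the old ext/mysql calls. A minimal, hypothetical sketch (connection parameters are placeholders, and this is not the actual code from the patch above):

```php
<?php
// Hypothetical comparison; host/user/pass/db names are placeholders.

// Old ext/mysql style (soft-deprecated):
//   $conn = mysql_connect( 'localhost', 'wikiuser', 'secret' );
//   mysql_select_db( 'wikidb', $conn );
//   $res  = mysql_query( 'SELECT page_title FROM page LIMIT 1', $conn );
//   $row  = mysql_fetch_assoc( $res );

// MySQLi equivalent:
$conn = mysqli_connect( 'localhost', 'wikiuser', 'secret', 'wikidb' );
if ( !$conn ) {
	die( 'Connection failed: ' . mysqli_connect_error() );
}
$res = mysqli_query( $conn, 'SELECT page_title FROM page LIMIT 1' );
$row = mysqli_fetch_assoc( $res );
mysqli_free_result( $res );
mysqli_close( $conn );
```

The main behavioral differences (connection passed explicitly, no implicit default link) are what a DatabaseMysqli class has to paper over.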
--
John
All,
While spending the past few days/weeks in CodeReview, it has become
abundantly clear to me that we absolutely must get away from this idea
of doing huge refactorings in our working copies and landing them in trunk
without any warning. The examples I'm going to use here are the
RequestContext, Action and Blocking refactors.
We've gotten into a very bad habit recently of doing a whole lot of work in
secret in our respective working copies, and then pushing to trunk without
first talking to the community to discuss our plans. This is a bad idea for a
bunch of reasons.
Firstly, it skips the community feedback process until after your code is
already in trunk. By skipping this process--whether it's a formal RfC, or just
chatting with your peers on IRC--you miss out on the chance to get valuable
feedback on your architectural decisions before they land in trunk. Once code
has landed in trunk, the path of least resistance is to keep following up and
"fixing" code that should've been fully spec'd out before checkin.
Also, the community *must* have the chance to call you crazy and say "don't
check that in, please." Going back to my examples of Actions, had the community
been consulted first I would've raised objections about the decisions made with
Actions (I think they should be moved to special pages and the old action urls
made to redirect for back-compat...rather than solidifying the old and crappy
action interface with a new coat of paint). Looking at RequestContexts, had we
talked about this in an RfC first...we probably could've skipped the whole
member variable vs. accessor debate and the several months of minor cleanups
it has entailed (__get() magic is evil, IMHO).
Secondly, this increases the load on reviewers, myself included. When you land
a huge commit in trunk (like the Block rewrite), it takes *forever* to review
the original commit + half a dozen or more followups. This drains reviewer time
and leads to longer release cycles. I think I speak for everyone when I say this
is bad. Small incremental changes are infinitely easier to review than large
batch changes.
If you need to make huge changes: do them in a branch. It's what I did with the
installer and maintenance rewrites, what Roan and Trevor did with ResourceLoader
and what Brian Wolff did with his metadata improvements. Of course after landing
your branches in trunk there will inevitably be some cleanup required, but it
keeps trunk more stable until the branch merge and makes it easier to back out
if we decide to scrap the feature/rewrite.
I know SVN branches suck. But the alternative is having a constantly unstable
trunk due to alpha code that was committed haphazardly. Nobody wins in that
scenario.
So please...I beg everyone. Discuss your changes first. It doesn't have to be
formal (although formal spec'ing is always useful too!), but even having a
second set of eyes to glance over your ideas before committing never hurts
anyone.
-Chad
Hi all --
Please don't commit broken code to trunk; if you think your code may be
broken please consider asking about it first. This is especially true if
you're committing a fix for a bug that's gone back and forth over the years
about how it should be solved.
And it's even more true if the particular thing you're committing has been
previously committed and reverted several times due to ongoing issues.
Folks who have a history of having commits reverted for problems: please
start considering this. It's easier to fix your code before it goes in than
after.
-- brion vibber (brion @ wikimedia.org)
I found one page http://en.wikipedia.org/wiki/Subroutine which has a
pageid 40988 in the 05-26 snapshot, but now has a pageid 32177451 in
the newer snapshot. I just wanted to know under what circumstances this
happens. Is there a way in the API to find out which other pages might have
had the same treatment?
Thanks
Priyank
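One way to check a page's *current* ID against a snapshot is the API's prop=info. A small sketch that just builds the request URL (the endpoint and parameters are the standard api.php ones; fetching and diffing against your snapshot is left to the caller):

```php
<?php
// Build an API request for page info (including pageid) for one or more titles.
function buildPageInfoUrl( array $titles ) {
	$params = array(
		'action' => 'query',
		'prop'   => 'info',
		'titles' => implode( '|', $titles ),
		'format' => 'json',
	);
	return 'http://en.wikipedia.org/w/api.php?' . http_build_query( $params );
}

$url = buildPageInfoUrl( array( 'Subroutine' ) );
// Fetch with e.g. file_get_contents( $url ) and compare the returned
// pageid against the one recorded in your snapshot.
echo $url, "\n";
```

As far as I know there is no single API call that lists every page whose ID has changed, so comparing snapshot IDs against live prop=info results page by page is the pragmatic route.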
Hey,
(You'd better send such mails to the MW list, which I've cc'd now:
wikitech-l(a)lists.wikimedia.org)
There indeed is a better way. MW provides a hook that gets fired on page
creation. In this hook you can write the current timestamp to the page props
table. Then just obtain that value when you need it. This will be cheaper
than doing the query on the revision table.
Very easy to write up, but it might already be provided by some extension.
(Which is why I'm cc'ing the MW list.)
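A rough sketch of that idea, assuming the ArticleInsertComplete hook and the page_props schema as they stood around MW 1.17 (hook name, columns, and function names here are unverified against your version -- treat this as pseudocode rather than a finished extension):

```php
<?php
// Hypothetical extension fragment: record a page's creation timestamp
// in page_props when the page is first saved.
$wgHooks['ArticleInsertComplete'][] = 'efRecordCreationTimestamp';

function efRecordCreationTimestamp( $article ) {
	$dbw = wfGetDB( DB_MASTER );
	// replace() keys on the (pp_page, pp_propname) unique index.
	$dbw->replace(
		'page_props',
		array( array( 'pp_page', 'pp_propname' ) ),
		array(
			'pp_page'     => $article->getTitle()->getArticleID(),
			'pp_propname' => 'created',
			'pp_value'    => wfTimestampNow(),
		),
		__METHOD__
	);
	return true;
}
```

A magic word handler can then read the 'created' property back with a single indexed page_props lookup instead of scanning the revision table.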
Cheers
--
Jeroen De Dauw
http://www.bn2vs.com
Don't panic. Don't be evil.
--
On 30 July 2011 00:28, James Hong Kong <jamesin.hongkong.1(a)gmail.com> wrote:
> Dear all,
>
> Standard MW allows using {{REVISIONTIMESTAMP}} for a property
> that stores the last edit. We were wondering about the creation date
> of a document, but since there is no standard magic word available, we
> found some PHP code [1] that gives us the creation timestamp of
> the page via our own magic word, {{CREATIONTIMESTAMP}}.
>
> Our question now is: is the code [1] below the best possible way
> to do that, or is there a more efficient way to get the creation
> timestamp via a magic word?
>
>
> [1] case MAG_CREATIONTIMESTAMP:
>          $parser->disableCache();
>          $title = $parser->getTitle();
>          $dbr =& wfGetDB( DB_SLAVE );
>          $res = $dbr->query( "SELECT rev_timestamp FROM revision" .
>              " WHERE rev_page=" . $title->getArticleID() .
>              " ORDER BY rev_timestamp ASC LIMIT 1" );
>          $row = $dbr->fetchRow( $res );
>          $dbr->freeResult( $res );
>          $ret = $row[0];
>          return true;
>
> Cheers,
>
> MWJames
>
>
> _______________________________________________
> Semediawiki-user mailing list
> Semediawiki-user(a)lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/semediawiki-user
>
On Jul 28, 2011 6:01 PM, "MZMcBride" <z(a)mzmcbride.com> wrote:
>
> Brion Vibber wrote:
> > On Thu, Jul 28, 2011 at 3:09 P
> > If you're not sure, those are *all* better places to try something out than
> > committing directly to trunk without talking to anybody or getting any
> > feedback.
>
> It's a bit difficult to get comments/review in the CodeReview comments area
> if you haven't made the commit yet. ;-)
If you made the commit and received feedback about it in CR, continuing
discussion there before making an amended commit -- especially when fixing
something that was reverted -- seems a good fit.
That can save committers and reviewers alike from the pain of a second or
third breakage+revert/rushed fix cycle.
In many cases these are patches for a bug, in which case bugzilla is a good
place to stash an updated patch version to be looked over. (In a DVCS
workflow this might be a branch on a fork rather than a flat patch, but the
principle is the same and we *ought* to be able to do ok with patches -- 20+
years of free/open source devs have reviewed, iterated, and landed or
clearly rejected changes this way, including us!)
> Sometimes the only way people can get their code reviewed is to commit it.
> This is an old practice. Not to beat a dead horse, but this is all related
> to the same "patches sit unreviewed" issue, etc.
I find that reverting and yelling at people for broken commits is *not* a
sustainable practice, even if it's old.
We need to show that we can hold up our end of the bargain: give timely
feedback, keep up with both commits already done and with patches coming in
through other channels.
It's a process, and we're all still feeling our way along.
Part of that is improving reviewer responsiveness to commits after they come
in. Part is improving responsiveness to uncommitted patches.
And part of it is making sure that we send people and code through the
review paths that will best fit them.
-- brion
>
> MZMcBride
>
>
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Hi folks,
today is sysadmin appreciation day! ( http://www.sysadminday.com/ )
Many thanks to the Wikimedia operations team for being there at all
times of day and night to keep our sites up and our users happy -- and
for always working hard to make things faster, more stable, and more
secure.
For the Wikimedia Foundation, that goes for the office IT team as well --
thanks for all your help with networks, printers, laptops, monitors, projectors,
smartphones, dumb phones, and the myriad pieces of open and not-open
software that we run. Special thanks to all the volunteers who
participate in Wikimedia operations, help with outages, close shell
bugs, document things, and do so much more.
Please join me in thanking our ops and IT teams. You all rock - thanks
for being here. :-)
Erik
--
Erik Möller
Deputy Director, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
(CC'd inez @ wikia)
In the process of getting a better feel for the current state of Wikia,
Wikihow, & a few others' rich text editor tools, I'm going through Wikia's
CKEditor-based RTE extension and seeing if I can get it working on MediaWiki
trunk.
I've got a fork from Wikia's SVN in this gitorious project:
http://www.gitorious.org/mediawiki-wikia-rte
The 'master' branch is a straight git-svn clone of the subtree; 'tweaks'
branch has some extra doc comments and some initial tweaks to get it loading
(if not actually working right yet ;) on stock MediaWiki 1.18-SVN.
Current state:
* most stuff won't work yet!
* the editor can be loaded if forced with &useeditor=wysiwyg
* load/save results in some corruption, probably mostly due to the parser
annotations not all being present (need to customize a few bits)
* the editor is loaded through ResourceLoader, using a quick stub to work
around the lack of removal of certain lines
* it's almost certain that the CSS and some JS is broken :D
* there are various Wikia-specific PHP-side and JS-side extensions, many of
which still need to be switched to the stock MW equivalent or copied over.
Note that definitions for such things can usually be found in the modified
MediaWiki core in Wikia's SVN tree --
https://svn.wikia-code.com/wikia/trunk/
At a minimum I'd like to end up with something that works on stock MediaWiki
1.18 (and if it can be made to work on stock or lightly-patched 1.17, even
better!). It should be a more stable option for 1.17 users than the old
FCKEditor extension.
I'm still a bit leery of the internal annotations & edge-case checks for the
round-tripping and whether this structure would work for us in the long
term, but there's some good stuff in here that's going to be useful to learn
from whatever we do, and it's a useful tool for many cases in the short
term.
If anybody feels like trying it out / pitching in on the fixes, do feel free
to give a shout. I can set a few folks up with commit access on the git repo
or take some pulls for now, and will merge it into our SVN extensions when
it's a bit more stable.
-- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
Hi Roan,
thanks for the quick reply. This is (one of) the statement(s) we've got a problem with.
------------------
DROP PROCEDURE IF EXISTS `insertfile_getFilePosition`;
DELIMITER $$
CREATE PROCEDURE `insertfile_getFilePosition`( filename VARCHAR(255) )
BEGIN
  SELECT tmp.rank FROM
    ( SELECT @row := @row + 1 rank, i.img_name
      FROM /*$wgDBprefix*/image i, ( SELECT @row := 0 ) r
      WHERE ( i.img_major_mime != 'image' AND i.img_minor_mime != 'tiff' )
      ORDER BY i.img_name ASC ) tmp
  WHERE tmp.img_name = filename;
END
$$
DELIMITER ;
------------------
If provided to the database directly, it works, but update.php throws a
syntax error. In general, is it possible to provide stored procedures this
way? Could there be a problem with the way the SQL file is read?
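If the updater's SQL file reader splits the file into statements on ";" (as MediaWiki's reader has historically done) and doesn't understand DELIMITER -- which is a mysql *client* command, not SQL -- then a procedure body gets chopped into fragments that really are syntax errors on their own. A small illustration of that assumption:

```php
<?php
// Naive statement splitting, as a stand-in for what a simple SQL file
// reader does. A reader that only splits on ';' cannot keep a procedure
// body (which contains semicolons) together as one statement.
function naiveSplit( $sql ) {
	return array_values(
		array_filter( array_map( 'trim', explode( ';', $sql ) ) )
	);
}

$proc = "CREATE PROCEDURE p() BEGIN SELECT 1; SELECT 2; END";
$fragments = naiveSplit( $proc );
// The definition is split mid-body into three separate "statements":
//   "CREATE PROCEDURE p() BEGIN SELECT 1"
//   "SELECT 2"
//   "END"
```

Feeding the file to the mysql client directly works precisely because the client interprets DELIMITER and sends the whole CREATE PROCEDURE block as one statement.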
Greetings,
Robert Vogel
Social Web Technologien
Softwareentwicklung
Hallo Welt! - Medienwerkstatt GmbH
__________________________________
Untere Bachgasse 15
93047 Regensburg
Tel. +49 (0) 941 - 56 95 94 98
Fax +49 (0) 941 - 50 27 58 13
www.hallowelt.biz
vogel(a)hallowelt.biz
Sitz: Regensburg
Amtsgericht: Regensburg
Handelsregister: HRB 10467
E.USt.Nr.: DE 253050833
Geschäftsführer: Anja Ebersbach, Markus Glaser, Dr. Richard Heigl, Radovan Kubani
On Wed, 27 Jul 2011 at 22:16, Roan Kattouw <roan.kattouw(a)gmail.com> wrote:
>On Wed, Jul 27, 2011 at 12:47 PM, Robert Vogel <vogel(a)hallowelt.biz> wrote:
>> Hello everybody!
>>
>> At my company we develop extensions for MediaWiki. We use the "LoadExtensionSchemaUpdates" hook to create tables with the "maintenance/update.php" script.
>> Recently we faced the question of whether it is possible to have stored procedures/functions in an extension's SQL file. We tried it out and it didn't work for us. The update.php says we've got an error in the SQL syntax, but there isn't one.
>>
>> Can anybody help us? Is it possible to provide stored procedures to the database using the update.php? Is there an example anywhere? Thx.
>>
>The SQL syntax error message comes from the database engine, not from
>MediaWiki. So if it tells you there's an SQL syntax error, there's a syntax
>error for sure. What you should look at:
>1. Does the DB backend you connect to support the syntax you're using?
>Infamously, MySQL 4.0 will reject anything containing subqueries as a syntax
>error, because subquery support wasn't introduced until 4.1, if memory serves.
>2. Is MediaWiki connecting to the DB that you think it's connecting to? There
>might be a version-triggered error like #1 above, but you might not notice if
>you're connecting to a different version than MediaWiki is.
>3. Are you using magic phrases like /*_*/, /*$wgDBTablePrefix*/, /*i*/ or
>/*$wgDBTableOptions*/? MediaWiki substitutes these before sending the SQL to
>the DB backend, so make sure you test your queries with these substitutions
>applied.
>
>Roan Kattouw (Catrope)
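Roan's third point can be illustrated with a simplified stand-in for MediaWiki's magic-phrase substitution (the real logic lives in Database::replaceVars and handles more cases; this just shows why a query you only ever tested without the substitutions applied can surprise you):

```php
<?php
// Simplified sketch: /*_*/ and /*$wgDBprefix*/ are replaced with the
// configured table prefix before the SQL reaches the backend.
function substitutePrefix( $sql, $prefix ) {
	return str_replace(
		array( '/*_*/', '/*$wgDBprefix*/' ),
		$prefix,
		$sql
	);
}

// Single quotes: we want the literal magic phrase, not PHP interpolation.
$sql = 'SELECT img_name FROM /*$wgDBprefix*/image';
// With a table prefix of 'mw_', the query actually executed is:
//   SELECT img_name FROM mw_image
echo substitutePrefix( $sql, 'mw_' ), "\n";
```

So a procedure file that works when piped straight into mysql may still be sent different text by update.php; always test with the substitutions applied.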