Hey all --
I've got a stack of issues discussed during the last couple months' conferences and hackathons to raise, so there may be a few more manifestos on the way. But don't worry, this one will be short!
I had a great chat at the New Orleans hackathon with D J Bauch, who's been working on maintaining MediaWiki's MS SQL Server support, and also had a very useful email chat a couple weeks back with Freakolowsky, who's put a lot of work into MediaWiki's Oracle support.
Long story short: though traditionally MediaWiki developers have put very little work of our own into maintaining non-MySQL compatibility, there are still lots of folks interested in running on those databases... AND THEY ACTUALLY MOSTLY WORK!
At this point I think it's a bit crazy of us to keep on marginalizing that code; some folks running their own instances will need or want or prefer (or be forced by office IT) to run on some "funny" database, and we shouldn't stand in their way. More importantly, keeping things working with multiple DB backends helps us to keep our code cleaner, and reduces our own hard dependencies on a particular product line.
There are two main impediments to keeping code working on non-MySQL platforms:
* lazy code breakages -- us MySQL-natives accidentally use MySQL-isms that break queries on other platforms
* lazy schema updates -- us MySQL-natives add new schema updates into the system but only implement them for MySQL
The first could often be helped simply by having automated testing run to make sure that relevant code paths get exercised. Often it's just a MySQL-specific function being used, or lazy use of LIMIT/OFFSET, or a GROUP BY in a form that other databases are pickier about. Just flagging these in testing is often enough to make them easy to fix.
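As an illustration (a rough sketch only, not tested; the table and field names are just for show), going through the wrapper's options array lets each backend emit its own pagination syntax instead of hard-coding MySQL's comma form:

  $dbr = wfGetDB( DB_SLAVE );

  // Portable: the Database layer turns LIMIT/OFFSET into whatever the
  // backend wants (plain LIMIT/OFFSET, ROWNUM wrapping, etc.)
  $res = $dbr->select(
      'recentchanges',
      array( 'rc_title', 'rc_timestamp' ),
      array( 'rc_namespace' => 0 ),
      __METHOD__,
      array( 'ORDER BY' => 'rc_timestamp DESC', 'LIMIT' => 10, 'OFFSET' => 20 )
  );

  // Not portable: hand-built SQL using MySQL's "LIMIT 20, 10" shortcut,
  // which breaks on pickier backends.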
The second is a bit harder, but again testing can help flag that something needs to be updated before it's too late for the next release.
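To make that concrete: a schema change added for MySQL also needs a matching entry (and usually a patch file) in each backend's updater. Very roughly, assuming the 1.17-style updater classes (the entry below is only illustrative):

  // includes/installer/MysqlUpdater.php (sketch)
  protected function getCoreUpdateList() {
      return array(
          // ...
          array( 'addField', 'user', 'user_editcount', 'patch-user_editcount.sql' ),
      );
  }

  // The same change then has to be mirrored in OracleUpdater, SqliteUpdater,
  // PostgresUpdater, etc., each with a patch written in that backend's dialect;
  // otherwise the schemas silently drift apart.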
I'd like for everybody who's working on MediaWiki on non-MySQL databases to make sure that the phpunit test suite can run on your favorite platform, so we can see about getting them set up in our regular reports on http://integration.mediawiki.org/ci/
Green bars are your friend! :)
-- brion
On Mon, Oct 24, 2011 at 3:04 PM, Brion Vibber brion@wikimedia.org wrote:
I'd like for everybody who's working on MediaWiki on non-MySQL databases to make sure that the phpunit test suite can run on your favorite platform, so we can see about getting them set up in our regular reports on http://integration.mediawiki.org/ci/
Jenkins already runs the tests using Sqlite. Adding Mysql would be pretty trivial. Postgres too, probably.
I'm not so sure about getting Oracle, DB2 and MSSQL up and running on gallium ;-)
-Chad
Is there a way to tell Sqlite to be pickier about what it will accept? If it bombs out on anything that falls outside the union of (all supported database SQL), then just passing our tests should be sufficient, right?
On Mon, Oct 24, 2011 at 3:15 PM, Chad innocentkiller@gmail.com wrote:
On Mon, Oct 24, 2011 at 3:04 PM, Brion Vibber brion@wikimedia.org wrote:
I'd like for everybody who's working on MediaWiki on non-MySQL databases to make sure that the phpunit test suite can run on your favorite platform, so we can see about getting them set up in our regular reports on http://integration.mediawiki.org/ci/
Jenkins already runs the tests using Sqlite. Adding Mysql would be pretty trivial. Postgres too, probably.
I'm not so sure about getting Oracle, DB2 and MSSQL up and running on gallium ;-)
-Chad
Russell Nelson wrote:
Is there a way to tell Sqlite to be pickier about what it will accept? If it bombs out on anything that falls outside the union of (all supported database SQL), then just passing our tests should be sufficient, right?
No. And even if it didn't have its own laxity (e.g. accepting text in a numeric field), the abstraction layer would still be different, so you would also need to test that.
Brion Vibber brion@wikimedia.org writes:
I'd like for everybody who's working on MediaWiki on non-MySQL databases to make sure that the phpunit test suite can run on your favorite platform, so we can see about getting them set up in our regular reports on http://integration.mediawiki.org/ci/
It'd be great if you can also review and update http://www.mediawiki.org/wiki/Database_testing as needed.
Mark.
I will work on that (getting the phpunit test suite up and running on my box) next. Historically, I have had the good fortune of watching spectacular failures without even having had to resort to phpunit, but I'm working toward resolving that.
On Mon, Oct 24, 2011 at 3:50 PM, Mark A. Hershberger mhershberger@wikimedia.org wrote:
Brion Vibber brion@wikimedia.org writes:
I'd like for everybody who's working on MediaWiki on non-MySQL databases to make sure that the phpunit test suite can run on your favorite platform, so we can see about getting them set up in our regular reports on http://integration.mediawiki.org/ci/
It'd be great if you can also review and update http://www.mediawiki.org/wiki/Database_testing as needed.
Mark.
While we're on the subject, a question about MySQL: is it possible to make MW independent of MySQL InnoDB tables? InnoDB tables have a number of attributes that can make setup a pain (e.g. file and log sizes).
On Mon, Oct 24, 2011 at 5:08 PM, Fred Zimmerman zimzaz.wfz@gmail.com wrote:
while we're on the subject, a question about MySQL: is it possible to make MW independent of MySQL innodb tables? Innodb tables have a number of attributes that can make set up a pain (e.g. file and log sizes).
$wgDBTableOptions.
-Chad
I smell another abstraction layer coming on...
-----Original Message----- From: wikitech-l-bounces@lists.wikimedia.org [mailto:wikitech-l-bounces@lists.wikimedia.org] On Behalf Of Chad Sent: Monday, October 24, 2011 2:19 PM To: Wikimedia developers Subject: Re: [Wikitech-l] MediaWiki unit testing and non-MySQL databases
On Mon, Oct 24, 2011 at 5:08 PM, Fred Zimmerman zimzaz.wfz@gmail.com wrote:
while we're on the subject, a question about MySQL: is it possible to make MW independent of MySQL innodb tables? Innodb tables have a number of attributes that can make set up a pain (e.g. file and log sizes).
$wgDBTableOptions.
-Chad
On Mon, Oct 24, 2011 at 2:08 PM, Fred Zimmerman zimzaz.wfz@gmail.com wrote:
while we're on the subject, a question about MySQL: is it possible to make MW independent of MySQL innodb tables? Innodb tables have a number of attributes that can make set up a pain (e.g. file and log sizes).
There should be nothing relying specifically on InnoDB that I know of; we default to InnoDB for everything possible because it's the only sane table type widely available on standard MySQL installs (and it should fall back to whatever the server default is if you don't have InnoDB).
If you do choose to use non-InnoDB tables, I recommend using something else that is designed to work with transactions and survive server crashes (e.g., DO NOT USE MyISAM).
As noted in other replies, you can tweak $wgDBTableOptions to change the default table type used when creating new tables; you can change existing ones with an ALTER TABLE at any time.
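Something like this in LocalSettings.php (a rough sketch; the engine and charset values are just examples):

  # Storage options applied to newly created tables
  $wgDBTableOptions = 'ENGINE=InnoDB, DEFAULT CHARSET=binary';

  # Existing tables keep whatever engine they were created with; convert
  # them one at a time with SQL along the lines of:
  #   ALTER TABLE page ENGINE=InnoDB;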
-- brion
I'll have a look (haven't poked the phpunit code for a while now) to make sure it's operational on the Oracle backend.
@Chad: if the issue is the DB itself (I understand that running it can be somewhat of a pain) I could try to get you access to one of our sandbox instances for the purposes of CI. Might not be the fastest option, but still ...
LP, Jure
On 24. 10. 2011 21:04, Brion Vibber wrote:
I'd like for everybody who's working on MediaWiki on non-MySQL databases to make sure that the phpunit test suite can run on your favorite platform, so we can see about getting them set up in our regular reports on http://integration.mediawiki.org/ci/
Green bars are your friend! :)
-- brion
On 25/10/11 16:24, Freako F. Freakolowsky wrote:
I'll have a look (haven't poked the phpunit code for a while now) to make sure it's operational on the Oracle backend.
@Chad: if the issue is the DB itself (I understand that running it can be somewhat of a pain) I could try to get you access to one of our sandbox instances for the purposes of CI. Might not be the fastest option, but still ...
Hello,
Looks like there is a free Oracle version named 'Express' which we might use [1]. They are providing a Debian package [2], so we can get it installed on the Jenkins host easily; it just needs some configuration, which can probably be handled with puppet [3].
[1] http://www.oracle.com/technetwork/database/express-edition/overview/index.ht... [2] https://help.ubuntu.com/community/Oracle [3] https://help.ubuntu.com/community/Oracle10g
On Tue, Oct 25, 2011 at 8:17 AM, Antoine Musso hashar+wmf@free.fr wrote:
Looks like there is a free Oracle version named 'Express' which we might use [1]. They are providing a Debian package [2], so we can get it installed on the Jenkins host easily; it just needs some configuration, which can probably be handled with puppet [3].
[1] http://www.oracle.com/technetwork/database/express-edition/overview/index.ht... [2] https://help.ubuntu.com/community/Oracle
That debian repo only has the previous version (10g) though; the current 11g only seems to be available for x86_64 as an RPM package, which allegedly works on RHEL and OpenSuSE. Sigh...
I added a couple links to the current dl & documentation yesterday, but haven't had a chance to test anything yet: https://www.mediawiki.org/wiki/Manual:PHP_unit_testing/Oracle
-- brion
On Tue, Oct 25, 2011 at 11:59 AM, Brion Vibber brion@pobox.com wrote:
On Tue, Oct 25, 2011 at 8:17 AM, Antoine Musso hashar+wmf@free.fr wrote:
Looks like there is a free Oracle version named 'Express' which we might use [1]. They are providing a Debian package [2], so we can get it installed on the Jenkins host easily; it just needs some configuration, which can probably be handled with puppet [3].
[1] http://www.oracle.com/technetwork/database/express-edition/overview/index.ht... [2] https://help.ubuntu.com/community/Oracle
That debian repo only has the previous version (10g) though; the current 11g only seems to be available for x86_64 as an RPM package, which allegedly works on RHEL and OpenSuSE. Sigh...
I added a couple links to the current dl & documentation yesterday, but haven't had a chance to test anything yet: https://www.mediawiki.org/wiki/Manual:PHP_unit_testing/Oracle
Hmm, maybe we should set up a second box in addition to gallium for this to act as the "database host" that has all the various DBMSes we want to test.
-Chad
Hmm, maybe we should set up a second box in addition to gallium for this to act as the "database host" that has all the various DBMSes we want to test.
https://labsconsole.wikimedia.org/wiki/Main_Page
Just a suggestion ;)
- Ryan
Ryan Lane wrote:
https://labsconsole.wikimedia.org/wiki/Main_Page
Just a suggestion ;)
- Ryan
1) This is the first time it is mentioned in this mailing list.
1b) Not even mentioned in the Server Admin Log.
2) It has a funny concept of "you have an account"
3) Public IPs are private
4) Instances don't seem to be on wmf dns, despite statements of "ssh <nameofinstance>" and "adding wmflabs domain in DNS"
5) Reading the git instructions make me feel sick
6) The RSA key (dc:e9:68:7b:99:1b:27:d0:f9:fd:ce:6a:2e:bf:92:e1?) is not listed
7) Why is there a unicorn ?
On Tue, Oct 25, 2011 at 10:43 PM, Platonides Platonides@gmail.com wrote:
- Reading the git instructions make me feel sick
Git instructions make people feel sick, well known fact.
On Tue, 25 Oct 2011 13:43:51 -0700, Bryan Tong Minh bryan.tongminh@gmail.com wrote:
On Tue, Oct 25, 2011 at 10:43 PM, Platonides Platonides@gmail.com wrote:
- Reading the git instructions make me feel sick
Git instructions make people feel sick, well known fact.
Most git instructions don't include lines like:
  git push ssh://<username>@gerrit.wikimedia.org:29418/operations/puppet HEAD:refs/for/test
This looks screwed up too:
  git config --global user.name "<wiki-username>"
Last I checked, user.name was intended to be a real name, not some name linking to your wiki account. My user.name is set to "Daniel Friesen" and there's no way in hell I'm changing it to "Dantman", which I DO expect to be my username on labs and whatnot. There is likewise no way in hell I'm going to use `ssh://Daniel%20Friesen@gerrit.wikimedia.org`.
In fact this seems to be at odds with our own git migration notes. There seem to be notes on taking the USERINFO from svn usernames and converting them in the git conversion to real name and e-mail.
On Tue, Oct 25, 2011 at 4:43 PM, Platonides Platonides@gmail.com wrote:
- Why is there a unicorn ?
Because I suggested it for the Labs mascot, and nobody had a better idea. True story.
-Chad
Hi! Speaking of non-MySQL databases: recently I decided to try converting manually crafted SQL strings (mostly MySQL selects) to DBMS-independent Database::select() calls with arrays. However, I've stumbled into serious problems with table aliases. I am not a DBMS guru, so it might be my fault, of course, but it would be nice if someone could answer my bug report here: https://bugzilla.wikimedia.org/show_bug.cgi?id=31534 Dmitriy
Hi,
On Tue, Oct 25, 2011 at 22:55, Dmitriy Sintsov questpc@rambler.ru wrote:
Hi! Speaking of non-MySQL databases: recently I decided to try converting manually crafted SQL strings (mostly MySQL selects) to DBMS-independent Database::select() calls with arrays. However I've stumbled with serious problems with table aliases. I am not a DBMS guru, so there might be my fault, of course, however it would be nice if someone could answer to my bugreport there: https://bugzilla.wikimedia.org/show_bug.cgi?id=31534 Dmitriy
Did you test the code offered in comment 1 on that bug?
I assume
array( 'qp_users_polls', 'qu' => 'qp_users' ),
should instead be
array( 'qup' => 'qp_users_polls', 'qu' => 'qp_users' ),
(from comment 1)
Seems cleaner and saner than yours, why not give it a try? (although yours is also cleaner and saner than the original way you found it! (that you quoted))
-Jeremy
On 26.10.2011 7:16, Jeremy Baron wrote:
Hi,
On Tue, Oct 25, 2011 at 22:55, Dmitriy Sintsov questpc@rambler.ru wrote:
Hi! Speaking of non-MySQL databases: recently I decided to try converting manually crafted SQL strings (mostly MySQL selects) to DBMS-independent Database::select() calls with arrays. However I've stumbled with serious problems with table aliases. I am not a DBMS guru, so there might be my fault, of course, however it would be nice if someone could answer to my bugreport there: https://bugzilla.wikimedia.org/show_bug.cgi?id=31534 Dmitriy
Did you test the code offered in comment 1 on that bug?
I assume
array( 'qp_users_polls', 'qu' => 'qp_users' ),
should instead be
array( 'qup' => 'qp_users_polls', 'qu' => 'qp_users' ),
(from comment 1)
Seems cleaner and saner than yours, why not give it a try? (although yours is also cleaner and saner than the original way you found it! (that you quoted))
How would Sam's code guess the alias for qp_users_polls? I thought he just forgot to put the alias key 'qup' there, writing in a hurry, probably.
'qu.uid=qup.uid' would probably fail in his code.
Dmitriy
On Tue, Oct 25, 2011 at 23:23, Dmitriy Sintsov questpc@rambler.ru wrote:
How would Sam's code guess alias for qp_users_polls ? I thought he just forgot to put the alias key 'qup' there, writing in hurry, probably.
'qu.uid=qup.uid' probably should fail in his code.
Did you miss half of what I said? I'll quote myself so you can be sure to see it this time:
I assume
array( 'qp_users_polls', 'qu' => 'qp_users' ),
should instead be
array( 'qup' => 'qp_users_polls', 'qu' => 'qp_users' ),
(from comment 1)
-Jeremy
On 26.10.2011 7:27, Jeremy Baron wrote:
On Tue, Oct 25, 2011 at 23:23, Dmitriy Sintsov questpc@rambler.ru wrote:
How would Sam's code guess alias for qp_users_polls ? I thought he just forgot to put the alias key 'qup' there, writing in hurry, probably.
'qu.uid=qup.uid' probably should fail in his code.
Did you miss half of what I said? I'll quote myself so you can be sure to see it this time:
I assume
array( 'qp_users_polls', 'qu' => 'qp_users' ),
should instead be
array( 'qup' => 'qp_users_polls', 'qu' => 'qp_users' ),
(from comment 1)
I am not a native English speaker, sorry. However, in that bug report there is a second (different) query converted from a MySQL string to Database::select() with arrays. It has an INNER JOIN. The source query works fine; the converted one fails because the table aliases are 'quoted'. Should I not use table aliases at all? Dmitriy
Hi,
On Wed, Oct 26, 2011 at 00:31, Dmitriy Sintsov questpc@rambler.ru wrote:
I am not native English speaker, sorry.
That's fine but maybe helpful to know so I can be extra careful that I don't confuse you. As a last resort I think there are some Russian speakers on this list that may be able to help.
However, in that bugreport there is second (different) query converted from MySQL string to Database::select() with arrays. It has INNER JOIN. The source query works fine, the converted one fails because table aliases are 'quoted'. Should I do not use table aliases at all?
Aliases should be fine. (at least per the docs I found and reedy's example)
[I'm guessing] The problem is that the code assumes that a lone string is a table name for just one table, and that any other semantics (multiple tables or tables with aliases) will be encoded in data structures rather than needing to be parsed out of strings.
Anyway, could you just try this code and tell us what happens?
function getIntervalResults( $offset, $limit ) {
	$result = array();
	$db = wfGetDB( DB_SLAVE );
	$res = $db->select(
		array( 'qup' => 'qp_users_polls', 'qu' => 'qp_users' ),
		array( 'qu.uid as uid', 'name as username', 'count(pid) as pidcount' ),
		'qu.uid=qup.uid',
		__METHOD__,
		array(
			'GROUP BY' => 'qup.uid',
			'ORDER BY' => $this->order_by,
			'OFFSET' => $offset,
			'LIMIT' => $limit
		)
	);
	foreach ( $res as $row ) {
		$result[] = $row;
	}
	return $result;
}
Thanks, Jeremy
* Jeremy Baron jeremy@tuxmachine.com [Wed, 26 Oct 2011 01:09:07 -0400]:
Hi,
On Wed, Oct 26, 2011 at 00:31, Dmitriy Sintsov questpc@rambler.ru wrote:
I am not native English speaker, sorry.
That's fine but maybe helpful to know so I can be extra careful that I don't confuse you. As a last resort I think there are some Russian speakers on this list that may be able to help.
From my experience of many years of communicating with western people and Russians on the Internet, western people are much more helpful and willing to cooperate. That is not politically correct, but it is true. The open source and freeware community in western countries is much more advanced than here in Russia.
Should I do not use table aliases at all?
Aliases should be fine. (at least per the docs I found and reedy's example)
If they are fine, why does my second converted query (the last post in bug 31534) https://bugzilla.wikimedia.org/show_bug.cgi?id=31534 fail to execute on MySQL?
[i'm guessing] The problem is that the code assumes that a lone string is a table name for just 1 table and that any other semantics (multiple tables or tables with aliases) will be encoded in data structures rather than need to be parsed out of strings.
In my second query I placed all table aliases into PHP array keys, as Sam suggested.
$res = self::$db->select(
	array( 'qu' => 'qp_users', 'qup' => 'qp_users_polls' ),
	array(
		'qup.uid AS uid',
		'name AS username',
		'short_interpretation',
		'long_interpretation',
		'structured_interpretation'
	),
	/* WHERE */ 'pid = ' . intval( $this->pid ),
	__METHOD__,
	array( 'OFFSET' => $offset, 'LIMIT' => $limit ),
	/* JOIN */ array( 'qu' => array( 'INNER JOIN', 'qup.uid = qu.uid' ) )
);
Why does it fail?
Anyway, could you just try this code and tell us what happens?
function getIntervalResults( $offset, $limit ) {
	$result = array();
	$db = wfGetDB( DB_SLAVE );
	$res = $db->select(
		array( 'qup' => 'qp_users_polls', 'qu' => 'qp_users' ),
		array( 'qu.uid as uid', 'name as username', 'count(pid) as pidcount' ),
		'qu.uid=qup.uid',
		__METHOD__,
		array(
			'GROUP BY' => 'qup.uid',
			'ORDER BY' => $this->order_by,
			'OFFSET' => $offset,
			'LIMIT' => $limit
		)
	);
	foreach ( $res as $row ) {
		$result[] = $row;
	}
	return $result;
}
Thanks, Jeremy
Surely I can. However, my longer-term goal is to make the whole extension compatible with non-MySQL DBMSes. I guess this query will probably work, but I have other queries, like that second one with the INNER JOIN. Maybe I should get rid of table aliases; that's the simplest way. Dmitriy
On Wed, Oct 26, 2011 at 01:33, Dmitriy Sintsov questpc@rambler.ru wrote:
- Jeremy Baron jeremy@tuxmachine.com [Wed, 26 Oct 2011 01:09:07 -0400]:
Should I do not use table aliases at all?
Aliases should be fine. (at least per the docs I found and reedy's example)
If they are fine, why my second converted query (last post in bug 31534) https://bugzilla.wikimedia.org/show_bug.cgi?id=31534 fails to execute on MySQL ?
I think you found the problem already in your comment 2 but I missed it the first time around:
originated from the function «qp_PollStore::pollVotersPager». The database returned the error «1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''qup' INNER JOIN `wiki_qp_users` 'qu' ON ((qup.uid = qu.uid)) WHERE pid = 7 LI' at line 1 (127.0.0.1)».
The query built by the Database class is identical to the manually built query, except that the Database class wraps table aliases in single quotes. The original query works.
Seems pretty clearly broken; we should fix the way we quote. (and maybe not break anyone because anyone using the syntax that triggers this would already be broken because of this) Maybe we need backticks instead of quotes? I can test when I'm less sleepy but maybe someone else knows. Looks like it per http://dev.mysql.com/doc/refman/5.0/en/identifiers.html
However my far goal is to make the whole extension compatible to non-MySQL DBMS, I guess this query will probably work, but I have another queries, like that second one with INNER JOIN. Maybe I should get rid of table aliases, the simpliest way.
That may be a solution but either way the bug shouldn't be ignored. Hopefully (if someone (e.g. domas) agrees that it's safe) this can be backported to 1.17 and you could use it. I'm not too familiar with the standard backporting criteria for point releases.
-Jeremy
Jeremy Baron wrote:
Seems pretty clearly broken; we should fix the way we quote. (and maybe not break anyone because anyone using the syntax that triggers this would already be broken because of this) Maybe we need backticks instead of quotes? I can test when I'm less sleepy but maybe someone else knows. Looks like it per http://dev.mysql.com/doc/refman/5.0/en/identifiers.html
Yes, the problem is that it is using parameter quoting instead of identifier quoting. Was quite obvious once I was looking at it on mysql-cli.
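Roughly the difference, as a sketch (not checked against the actual Database code; the wiki_ prefix is taken from the error message above):

  // Value quoting: produces `wiki_qp_users` 'qu', which MySQL rejects,
  // since a string literal is not a valid alias
  $from = $db->tableName( 'qp_users' ) . ' ' . $db->addQuotes( 'qu' );

  // Identifier quoting: produces `wiki_qp_users` `qu`, which works
  $from = $db->tableName( 'qp_users' ) . ' ' . $db->addIdentifierQuotes( 'qu' );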
Not particularly useful in this case, but when dealing with wrapper selects you may enjoy http://toolserver.org/~platonides/mwtools/ExpandSelect.php </spam>
Platonides wrote:
Jeremy Baron wrote:
Seems pretty clearly broken; we should fix the way we quote. (and maybe not break anyone because anyone using the syntax that triggers this would already be broken because of this) Maybe we need backticks instead of quotes? I can test when I'm less sleepy but maybe someone else knows. Looks like it per http://dev.mysql.com/doc/refman/5.0/en/identifiers.html
Yes, the problem is that it is using parameter quoting instead of identifier quoting. Was quite obvious once I was looking at it on mysql-cli.
However, conversion is done correctly for me both in trunk and REL1_18.
* Platonides Platonides@gmail.com [Wed, 26 Oct 2011 10:05:42 +0200]:
Platonides wrote:
Jeremy Baron wrote:
Seems pretty clearly broken; we should fix the way we quote. (and maybe not break anyone because anyone using the syntax that triggers this would already be broken because of this) Maybe we need backticks instead of quotes? I can test when I'm less sleepy but maybe someone else knows. Looks like it per http://dev.mysql.com/doc/refman/5.0/en/identifiers.html
Yes, the problem is that it is using parameter quoting instead of identifier quoting. Was quite obvious once I was looking at it on mysql-cli.
However, conversion is done correctly for me both in trunk and REL1_18.
I just compared the 1.17 release and my farm code and there are a few differences in /includes/db/Database.php:
<       return $this->tableName( $name ) . ' ' . $this->addIdentifierQuotes( $alias );
---
>       return $this->tableName( $name ) . ' ' . $this->addQuotes( $alias );
It seems that I am not running a release but some earlier version. I'll try to update later to see whether it works. Sorry if my previous bug report was false; I'll check and close it. A bit strange, though, because I am sure I am not running an early alpha of 1.17, but something quite close to the release. Dmitriy
Mark and I are interested in running a one-hour bug triage to discuss, validate & prioritize MediaWiki bugs that affect Postgres, SQL Server, Oracle, SQLite, MariaDB, and other non-MySQL databases. Ideally we'd also come out with a first plan on improving database support. If you can make it tomorrow, the 28th, or Nov. 2nd or Nov. 3rd, please indicate that on this poll. If you can make it tomorrow, please additionally feel free to ping Mark and me directly and we'll try to make up for the short notice! :-)
http://www.doodle.com/wrwrixnreh3w8vby
Sumana Harihareswara Volunteer Development Coordinator Wikimedia Foundation
OK, I've scheduled a bug triage for next week to discuss, validate & prioritize MediaWiki bugs that affect Postgres, SQL Server, Oracle, SQLite, MariaDB, and other non-MySQL databases. It would be nice to come out with a first plan on improving support. Mark and I will get a list of relevant bugs together at http://etherpad.wikimedia.org/database-bug-triage .
Datetime: Wednesday, 2 November, 19:00 UTC. Timezone conversion: http://www.worldtimebuddy.com/meeting?lid=5128581,2147714,1850147,0,2879139,... (Shortlink: http://ur1.ca/5iztk )
Hi! Have you been following this thread? Maybe you'd like to comment.
Sumana Harihareswara Volunteer Development Coordinator Wikimedia Foundation
---------- Forwarded message ---------- From: Brion Vibber brion@wikimedia.org Date: Mon, Oct 24, 2011 at 3:04 PM Subject: [Wikitech-l] MediaWiki unit testing and non-MySQL databases To: Wikimedia developers wikitech-l@lists.wikimedia.org, MediaWiki announcements and site admin list mediawiki-l@lists.wikimedia.org
On Wed, Oct 26, 2011 at 11:43 PM, Sumana Harihareswara sumanah@wikimedia.org wrote:
Hi! Have you been following this thread? Maybe you'd like to comment.
I think you may have meant to send that to a specific person by private e-mail, rather than to the list?
Roan