Hi!
> Whenever you do a join with a limit, MySQL gets the query plan wrong.
"Whenever you do an unconstrained join with a limit, MySQL may get the query plan wrong." Anyway, this exact case looks like a bug that was fixed later.
> It scans the small table and filesorts the large table.
It actually makes sense when the small table refers to a subset of the large table. I've discussed this with the optimizer team quite a few times; it is one of the gray areas where having better statistics within the engine would help.
> Does anyone know a DBMS where joining with limits actually works? Because I'm sick of this crap.
Every database will, in the end, want you to provide hints in one way or another. Indexing itself is already a hint :)
Anyway, as you may note, we have been up and running for quite a while; it's just that developers end up thinking "oh, the database is too stupid to do X or to do Y".
Every static decision can have its own problems, and this is where you end up having humans do the work. We don't employ software to write all of our code for us, do we? Maybe, in a similar way, we cannot always be sure that software which does all the data management for us (how many developers actually think about indexing and data-fetch efficiency?) won't sometimes make the wrong decision.
(And yes, maybe it is fixed in later versions, along with other problems fixed and other problems introduced.)
> Does anyone know a DBMS where joining with limits actually works?
*shrug*, maybe PG does it properly, maybe MySQL 5.x does it properly, maybe SQLite does it properly.
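For SQLite at least, it's easy enough to check what you get: EXPLAIN QUERY PLAN shows whether the engine walks an index on the ORDER BY column and stops at the LIMIT, or joins everything and then sorts. A minimal sketch using Python's stdlib sqlite3 module, with made-up `small`/`large` tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE small (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE large (id INTEGER PRIMARY KEY, small_id INTEGER, val INTEGER)")
cur.executemany("INSERT INTO small VALUES (?, ?)", [(i, f"s{i}") for i in range(100)])
cur.executemany(
    "INSERT INTO large (small_id, val) VALUES (?, ?)",
    [(i % 100, i * 7 % 10007) for i in range(10000)],
)
# Index on the ORDER BY column: with it, the engine can walk the index
# in order and stop after LIMIT rows instead of sorting the whole join.
cur.execute("CREATE INDEX large_val ON large (val)")
conn.commit()

query = """
    SELECT small.name, large.val
    FROM large JOIN small ON small.id = large.small_id
    ORDER BY large.val LIMIT 5
"""
# Each plan row says which table is scanned and which index (if any) is used.
for step in cur.execute("EXPLAIN QUERY PLAN " + query):
    print(step)
rows = cur.execute(query).fetchall()
print(rows)
```

The interesting part is whether the plan says it scans `large` via `large_val` (good: it can stop early) or does a full scan plus a temporary sort (the filesort-style plan complained about above).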
> Because I'm sick of this crap.
Well, the FORCE INDEX was where it was for a reason, whatever that reason was. Removing it was a problem too ;-)
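For reference, the hint in question is MySQL's FORCE INDEX clause; SQLite has an analogous INDEXED BY clause. A minimal sketch with a made-up table `t` and indexes, run against SQLite via Python since that is easy to demonstrate standalone (the MySQL spelling is shown only in a comment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [(i, i % 10) for i in range(1000)])
cur.execute("CREATE INDEX t_a ON t (a)")
cur.execute("CREATE INDEX t_b ON t (b)")
conn.commit()

# MySQL spelling of the hint (not run here):
#   SELECT a, b FROM t FORCE INDEX (t_b) WHERE b = 3 ORDER BY a;
# SQLite's equivalent is INDEXED BY; it errors out if the named index
# cannot be used for the query, so the hint is enforced, not advisory.
hinted = cur.execute(
    "SELECT a, b FROM t INDEXED BY t_b WHERE b = 3 ORDER BY a"
).fetchall()
print(len(hinted))
```

The point of the thread stands either way: the hint pins one decision, and when the data distribution shifts, the pinned plan can become the wrong one.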
Anyway, one can be sick of lots of crap. I'm sick of the fact that the 1% of our requests that reach the backend (which is 0.1% of the requests made to the site) generate 50% of the site's CPU load. The simple fact is that we had forced indexes where needed, yet the site still has 0.1% of user requests causing 50% of the CPU load.
Cheers,