Yuri,
> Brion, I agree that the API should not duplicate DB access, but
> unfortunately most of the core code was targeted towards a single
> page request.
It is not just the core code, it is the whole architecture. Our main
business is presenting people with rendered wiki pages.
> Only some special pages return data for multiple items, and from
> what I understood, they are not easy to refactor to just get the
> data for the API (I might be wrong).
Many special pages are a joke (due to the 1000-result limit), and
they continue to die.
> Hence most normal wiki operations seem to be a special subset of the
> theoretical internal API (e.g. we just need the content of a single
> page, whereas the API may provide the content of multiple pages) -
> which validates the separate biz-logic tier idea.
We do more than just retrieving data - we end up crunching it,
rerunning related queries (e.g. linkbatches), etc.
The distance between UI and data-access code is a religious debate,
but I still feel that frontend developers should understand what is
needed at the backend to fulfill the task.
Probably that is not that common in the enterprise-y world, where
abstractions are the norm, but on the other hand, we're not running
on an enterprise-y budget.
The API seems to be work aimed at solving many hypothetical problems,
whereas MediaWiki has a very specific task to handle.
Now, throwing away the code and putting anything generic in between
would not help with efficiency.
Many of the tasks we currently do efficiently would suffer quite a
bit if, say, ActiveRecord were used.
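To make the efficiency point concrete, here is a toy Python
comparison (not actual MediaWiki or ActiveRecord code; the fake
database and helpers are made up for illustration) counting query
round trips: a generic ORM-style loader fetches each row with its own
query, while the hand-tuned path fetches everything in one batch:

```python
class CountingDB:
    """Fake database that just counts how many queries it receives."""

    def __init__(self, rows):
        self.rows = rows          # {page_id: title}
        self.queries = 0

    def fetch_one(self, page_id):
        self.queries += 1
        return self.rows[page_id]

    def fetch_many(self, page_ids):
        self.queries += 1
        return {pid: self.rows[pid] for pid in page_ids}


def titles_orm_style(db, page_ids):
    # One query per object, the way a generic ORM tends to hydrate rows.
    return [db.fetch_one(pid) for pid in page_ids]


def titles_batched(db, page_ids):
    # One query total, the way hand-tuned wiki code does it.
    rows = db.fetch_many(page_ids)
    return [rows[pid] for pid in page_ids]
```

For 50 pages the ORM-style path issues 50 queries and the batched path
issues one, which is the gap that shows up under real load.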
Generic code usually works as long as nobody is actively using it
(which applies to all API code: whenever someone starts using any
function, we have to either disable or adapt bits of it...)
*shrug*, I feel that anyone saying "we should have a separate
biz-logic layer" doesn't really know the biz-logic of MediaWiki - but
probably I'm wrong.
BR,
--
Domas Mituzas --
http://dammit.lt/ -- [[user:midom]]