Gregory,
this is great fun and all, but you know, there's really no point in discussing the fine points of InstantCommons now with everyone who has an opinion on it. The specs were announced and put through an RfC, the project was approved, and it has funding. Now that it's politically convenient to criticize the project, you can try to find fault with it as much as you like. The point is that it has already undergone the necessary processes and met the requirements.
My whole argument is that the bureaucracy of Wikimedia cannot even handle small projects (and you argue that it's even smaller than we say). If we have the same process -- including the free-for-all now -- for larger projects, then Wikimedia is an entirely dysfunctional organization when it comes to managing such projects. When "the power of collaboration and openness" becomes "the power of many people to prevent things from happening", nothing will ever happen.
I will just reply briefly below:
> What is the purpose of having conversion services?
From my point of view: Offering such services to free content wikis -- and holding shared fundraising drives -- would be a great thing. Again, this is for Wikimedia to figure out, however.
> What was the point of including technical detail in the proposal when it was so poorly considered that we're left saying "Well, we can turn it off until it's improved"?
Read the response above -- I disagree that it was technically ill-considered. But if Brion wanted some failure-caching mechanism in place before turning it on, then of course we would implement that. Having some buffer in the development funding helps here.
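To make that concrete, here is a rough sketch of what such a failure-caching mechanism could look like -- this is not the actual InstantCommons code (which would be PHP inside MediaWiki); the names and the one-hour window below are just illustrative assumptions. The idea is simply to remember failed remote fetches for a while, so a wiki neither hammers Commons nor stalls page rendering every time the same missing or unreachable file is requested:

```python
import time
import urllib.request

# Hypothetical negative cache: remote URL -> time the fetch last failed.
# In MediaWiki this would live in a database table or memcached, not a dict.
FAILURE_TTL = 3600  # seconds to wait before retrying a failed fetch
_failed_fetches = {}

def fetch_remote_file(url):
    """Fetch a file from the remote repository (e.g. Commons), skipping
    the request entirely if the same fetch failed recently."""
    failed_at = _failed_fetches.get(url)
    if failed_at is not None and time.time() - failed_at < FAILURE_TTL:
        return None  # still inside the failure-cache window; don't retry yet

    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            data = response.read()
    except OSError:
        # Repository unreachable or file missing: remember the failure.
        _failed_fetches[url] = time.time()
        return None

    _failed_fetches.pop(url, None)  # a success clears any old failure entry
    return data
```

Whether the window is an hour or a day is a tuning question; the point is that this is a small, self-contained addition, not a redesign.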
> What happens when the Commons image is deleted but the remote wikis retain the image?
The same thing that happens when the Commons images about Orthodox churches I pointed out to you are deleted and the remote wikis retain the image: it's their responsibility, not ours. As for the licensing information, it is cached locally.
> Fortunately this is fairly easy to resolve: carry a copy of the image page with it. A system without such a feature could probably not be accepted for use with wikis outside of the Foundation.
See specs: "The description page will use the existing functionality to load metadata from Commons using interwiki transclusion; however, a new caching table will be created and used in order to store and retrieve the returned HTML description once it has been first downloaded."
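That is essentially the "carry a copy of the image page" behaviour you ask for. As a rough, language-agnostic sketch of that part of the spec (the real implementation would be PHP against MediaWiki's database layer; the table and function names below are invented for illustration):

```python
import sqlite3
import time
import urllib.request

# Hypothetical local cache table for remote description pages.
db = sqlite3.connect("instantcommons_cache.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS remote_description_cache (
        file_name  TEXT PRIMARY KEY,  -- e.g. 'Example.jpg'
        html       TEXT NOT NULL,     -- rendered description/license HTML
        fetched_at REAL NOT NULL      -- Unix timestamp of the download
    )
""")

def get_description_html(file_name, fetch_url):
    """Return the description HTML for a remote file, downloading it
    once and serving it from the local cache table afterwards."""
    row = db.execute(
        "SELECT html FROM remote_description_cache WHERE file_name = ?",
        (file_name,),
    ).fetchone()
    if row is not None:
        return row[0]  # already cached locally

    # First request for this file: download the rendered description once.
    with urllib.request.urlopen(fetch_url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")

    db.execute(
        "INSERT OR REPLACE INTO remote_description_cache VALUES (?, ?, ?)",
        (file_name, html, time.time()),
    )
    db.commit()
    return html
```

Once that row exists, the local wiki serves the attribution and license text from its own table rather than fetching it from Commons on every view.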
> We know Commons frequently has content which we cannot legally distribute, but we're able to address it almost completely. InstantCommons as proposed will create a situation where we frequently cannot address it.
This situation already exists. You just fail to recognize it as such.
> I think it's a little disingenuous to call it a 'manual process'. Downloading an image from Commons and uploading it to the local wiki is a manual process. It gives an opportunity for someone to evaluate the sanity of the copyright claims.
And as my cited example shows, that makes the situation much worse.
> 5K EUR is almost two man-months at the rates we pay our developers. If this feature as outlined will take more than a week's time, complete with debugging and the creation of a test suite, then I suspect we have over-designed it, that we are paying too much, or both.
You think? I may have a few 1.25K projects for you then.
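For what it's worth, here is the arithmetic behind that remark, using your own figures rather than any official rate card (the per-week number just divides out your "5K is almost two man-months" claim):

```python
# Back-of-the-envelope check based on Gregory's figures, not official rates.
budget_eur = 5000
man_months = 2                              # "almost two man months"
rate_per_month = budget_eur / man_months    # ~2500 EUR per developer-month
rate_per_week = rate_per_month / 4.33       # ~575 EUR per developer-week

print(round(rate_per_month), round(rate_per_week))
```

The 1.25K figure above works out to roughly two developer-weeks at that rate.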
Erik