On 6/8/06, Erik Moeller eloquence@gmail.com wrote:
On 6/8/06, Gregory Maxwell gmaxwell@gmail.com wrote:
I have some questions from a technical perspective.
Before I answer these, let me ask if you agree with this being a reasonable idea in general. If it is, then for me it means that technical details should be worked out by technical people as the project moves along. Kennisnet actually told us that the proposal was more technically detailed than it needs to be in order to be funded.
Fundamentally I think it's a good idea. However, our current lack of due diligence in verifying the copyright status of works uploaded to Commons, and the total lack of a plan for achieving acceptable oversight, may make the entire proposal a non-starter even ignoring any technical factors.
- What is the purpose of the XML-RPC subsystem? Since the images are fetched via HTTP in any case, why not simply perform the existence test via simple HTTP as well? If that were done, no modifications would be needed on the Commons side.
There are future queries where the subsystem would come in handy, such as: show me the available pre-rendered thumbnails, list the conversion services, etc. Some of these are mentioned in the specs.
What is the purpose of having conversion services? Why not go a step further and simply offer to host everyone's wikis? Not that I'd suggest that, but I don't see where the dividing line is that would suggest we perform a lot of high-touch services but not simply host wikis for people.
It appears to me that this aspect of the proposal is asking for a vast increase in complexity in the name of some ill-defined future features, for which the XML-RPC interface may turn out to be ill-suited should such features ever be implemented.
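Just to illustrate the "simple HTTP" existence test I asked about above, something like the following rough sketch would do; the URL pattern and the assumption that a missing file answers with a 404 are mine, not anything taken from the proposal:

import urllib.error
import urllib.parse
import urllib.request

COMMONS_FILE_URL = "http://commons.wikimedia.org/wiki/Special:FilePath/"  # assumed URL

def commons_file_exists(filename: str) -> bool:
    """HEAD the file URL; treat 200 as "exists" and 404 as "does not exist"."""
    url = COMMONS_FILE_URL + urllib.parse.quote(filename)
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False
        raise

No XML-RPC endpoint is needed for that, and nothing has to change on the Commons side.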
- It would appear from the specification that there is no facility for caching failures.
Sure, the system should cache non-existence, but given that we anticipate fairly low usage to begin with, and given that the process of adding and reviewing an image is interactive, I do not anticipate any major load even without such caching; that feature can go into a future version of MW. MW has several caching mechanisms that would kick in anyway for pages that have already been viewed before. If it does cause problems, we can deactivate the Commons service entirely until the feature is improved.
What was the point of including technical detail in the proposal when it was so poorly considered that we're left saying "Well, we can turn it off until it's improved"?
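For what it's worth, the negative caching I'm asking about is only a few lines; a rough sketch, where the TTL value and the in-memory structure are purely illustrative:

import time

NEGATIVE_TTL = 3600               # remember a failed lookup for an hour (arbitrary)
_missing: dict[str, float] = {}   # filename -> expiry timestamp

def known_missing(filename: str) -> bool:
    """True if we recently confirmed this file does not exist on Commons."""
    expiry = _missing.get(filename)
    return expiry is not None and expiry > time.time()

def remember_missing(filename: str) -> None:
    _missing[filename] = time.time() + NEGATIVE_TTL

The point is simply that a failed lookup gets remembered for a while instead of hitting Commons again on every parse.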
- The proposed method of interwiki transclusion doesn't appear fully formed enough to determine whether it will be sufficiently strong to prevent InstantCommons from accidentally becoming an automated system for license violations. In particular, there doesn't appear to be any strong assurance that attribution and license data will *always* be available when the image is available.
We don't have the same assurance for Commons usage within the Wikimedia projects either, I think.
Can you name a time when the bulk of our wikis have been accessible and serving Commons images, but the Commons pages themselves were inaccessible? Under common administration it's not likely to be a real problem, but the same is not true when we spread the content all over the world.
What happens when the Commons image is deleted but the remote wikis retain the image? It's also not at all clear to me that remote wikis would be in conformance with various copyleft licenses if they distribute the content without attribution or license data and then refer users to a site operated by a third party.
Fortunately, this is fairly easy to resolve: carry a copy of the image page along with the image. A system without such a feature could probably not be accepted for use with wikis outside of the Foundation.
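To sketch what "carry a copy of the image page" could look like in practice: the remote wiki fetches the description wikitext at the same time as the file. The action=raw request is standard MediaWiki; the base URL here is only an example:

import urllib.parse
import urllib.request

COMMONS_INDEX = "http://commons.wikimedia.org/w/index.php"  # example base URL

def fetch_description_wikitext(filename: str) -> str:
    """Return the wikitext of the Commons description page for a file."""
    query = urllib.parse.urlencode({
        "title": "Image:" + filename,
        "action": "raw",      # standard MediaWiki raw page output
    })
    with urllib.request.urlopen(COMMONS_INDEX + "?" + query, timeout=10) as resp:
        return resp.read().decode("utf-8")

Stored locally at transfer time, the attribution and license text stay visible even if Commons is unreachable or the original is later deleted.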
- Although copyright concerns are mentioned, they don't seem to be explored in depth. Commons has a huge number of copyright violations on it today.
Then they should be deleted. InstantCommons is a manual, user-initiated process. Commons has no legal responsibility to stop users of other wikis from copying images from Commons, whether by manually downloading and re-uploading them or by setting their wiki up for IC and initiating an IC transfer. The copyright cleanup script is a convenient thing we can provide, and by no means a legal necessity.
Copyright violations are deleted from Commons, but only very slowly, and only after they are discovered by someone who understands the process or after an official complaint arrives by email.
We know Commons frequently has content which we cannot legally distribute, but we are able to address it almost completely. InstantCommons as proposed will create a situation where we frequently cannot address it.
I think it's a little disingenuous to call it a 'manual process'. Downloading an image from Commons and uploading it to the local wiki is a manual process: it gives someone an opportunity to evaluate the sanity of the copyright claims. InstantCommons image insertions can be made blindly. And unlike uploads, which are easy to limit, InstantCommons insertion can be done by anyone who can edit. This opens exciting new vandalism opportunities. Free penis images on every wiki in the land!
- If the remote wiki will download the full image in all cases, what is the purpose of burdening Commons with the additional transfer and storage costs of generating thumbnails on their behalf?
It's a service which we can choose to provide - for some wikis, for some file formats, etc. This is a policy choice for Wikimedia to make.
What evidence do we have that there is any interest in such a feature? It's also true that with the right code the XML-RPC interface could be used to have Commons generate large prime numbers or spider the web looking for evidence of Elvis. Why would we want to offer thumbnailing and not Elvis searching?
The first iteration of IC could support SVGs the same way they are supported today: if your local wiki doesn't have a backend rendering library, you can still upload them (locally or through IC), but you can't view them as rendered PNGs and scaled thumbnails. Future iterations could support SVGs through some XML-RPC-based query/response mechanism for the PNGs, if Wikimedia wants to provide that service.
I don't see why XML-RPC is required. You can ask Commons to create a basic rasterized version today with a simple HTTP request. In any case, I doubt we should be in the business of performing resource-intensive transformations for remote wikis.
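For example, MediaWiki's thumb.php entry point will render a scaled raster in response to a plain GET. Whether Commons exposes it to outside callers, and the exact parameters it honours, are assumptions on my part in this sketch:

import urllib.parse
import urllib.request

COMMONS_THUMB = "http://commons.wikimedia.org/w/thumb.php"  # assumed URL

def fetch_rendered_png(filename: str, width: int = 200) -> bytes:
    """Ask the server for a rendered raster of the named file at the given width."""
    query = urllib.parse.urlencode({"f": filename, "width": width})
    with urllib.request.urlopen(COMMONS_THUMB + "?" + query, timeout=30) as resp:
        return resp.read()

Whether we want to spend Commons CPU doing that for third parties is the real question, not the transport mechanism.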
There are also some more complex issues, like where the 5,000 EUR fee comes from for what is, overall, such a simple feature set. But I don't want to create a flood of comments initially.
The feature set is reasonably simple. Do keep in mind that developing a feature and getting it "Brion-ready" generally takes some additional time for testing, debugging, security review, etc. The developer is ready to use any surplus time for developing Wikipedia content in one of the native languages of Ghana; Kennisnet was happy with that. Also keep in mind that this is his first MediaWiki project, so he'll need some tutoring from a skilled developer, which should be paid for.
5,000 EUR is almost two man-months at the rates we pay our developers. If this feature as outlined will take more than a week's time, complete with debugging and the creation of a test suite, then I suspect we have over-designed it, we are paying too much, or both.
Just because the funding will be donated does not excuse us from fiscal responsibility if the Foundation's name will be attached. Nothing prevents this work from being done independently of the Foundation, funded by whatever means are available, and thus free from oversight or delay by the Foundation.
If you're asking for the Foundation's name to be attached, then it would be reasonable to explain the fees to the Foundation's satisfaction, so that it can ensure that all donations taken in its name are used effectively.