Let's see...
* all these tools will be needed for flattening sequences anyway. In that case CPU costs are really high (encoding runs at something like 1/5 real-time or slower) and the amount of computation needed explodes much faster, since every "stable" edit necessitates a new flattening of some portion of the sequence (see the rough arithmetic sketch after this list).
* I don't think it's possible to scale the foundation's current donation model to traditional free net video distribution.
* We are not Google. Google lost something like ~$470 million last year on youtube (and that's with $240 million in advertising), so a total cost of $711 million [1]. Say we manage to do 1/100th of youtube (not unreasonable considering we are a top-4 site; just imagine a world where you watch one wikipedia video for every 100 you watch on youtube) ... then we would be at something like 7x the total budget? (and they are not supporting video editing with flattening of sequences) ... The pirate bay, on the other hand, operates at a technology cost comparable to wikimedia's (~$3K a month in bandwidth) and is distributing something like half of the net's torrents? [2] ... (Obviously these numbers are a bit of tea-leaf reading, but give or take an order of magnitude it should still be clear which model we should be moving towards; the sketch after this list walks through the arithmetic.)
... I think it's good to start thinking about p2p distribution and computation ... even if we are not using it today ...
* I must say I don't quite agree with your proposed tactic of preserving network neutrality by avoiding bandwidth distribution via peer-to-peer technology. I am aware the net "is not built" for p2p, nor is it very efficient vs CDNs ... but the whole micropayment system never panned out ... Perhaps you're right that p2p will just give companies an excuse to restructure the net in a non-network-neutral way ... but I think they already have plenty of excuse with the existing popular bittorrent systems, and I don't see any other way for not-for-profit net communities to distribute massive amounts of video to each other.
* I think you may be blowing this a bit out of proportion by calling foundation priorities into question in the context of this hack. If this were a big initiative over the course of a year, or an initiative taking more than part-time work over a week ... then it would make more sense to worry about this. But in its present state it's just a quick hack and the starting point of a conversation, not foundation policy or initiative.
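Since I called this tea-leaf reading, here is the back-of-envelope arithmetic behind the first and third points above as a small Python sketch. The sequence length, edit rate, and the budget figure implied at the end are illustration-only assumptions; the youtube dollar figures are the ones cited above.

# Back-of-envelope numbers behind the points above. All figures are either
# from the cited article [1] or made-up illustration values.

# --- flattening cost (first point) ---
encode_speed = 1 / 5        # assumed: encoding runs at 1/5 real-time or slower
sequence_minutes = 10       # hypothetical 10-minute flattened sequence
stable_edits_per_day = 20   # hypothetical number of "stable" edits per day

cpu_minutes_per_flatten = sequence_minutes / encode_speed
cpu_hours_per_day = stable_edits_per_day * cpu_minutes_per_flatten / 60
print(f"~{cpu_minutes_per_flatten:.0f} CPU-minutes per flatten, "
      f"~{cpu_hours_per_day:.0f} CPU-hours/day for one active sequence")

# --- distribution cost (third point) ---
youtube_total_cost = 711e6  # ~$470M loss + $240M ad revenue, per [1]
our_share = 1 / 100         # the "1/100th of youtube" scenario
our_cost = youtube_total_cost * our_share
print(f"1/100th of youtube's cost is ~${our_cost / 1e6:.1f}M/year")
# The "7x the total budget" above implies a budget on the order of
# our_cost / 7, i.e. roughly $1M -- backed out of the claim itself,
# not taken from an independent source.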
peace, michael
[1] http://www.ibtimes.com/articles/20090413/alleged-470-million-youtube-loss-wi...
[2] http://newteevee.com/2009/07/19/the-pirate-bay-distributing-the-worlds-enter...
Gregory Maxwell wrote:
On Sun, Aug 2, 2009 at 6:29 PM, Michael Dale <mdale@wikimedia.org> wrote:
[snip]
> two quick points.
> - you don't have to re-upload the whole video, just the sha1 or some
> sort of hash of the assigned chunk.
But each re-encoder must download the source material.
I agree that uploads aren't much of an issue.
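For concreteness, here is a minimal sketch of the hash-report idea being discussed; the chunk file name and the reporting step are assumptions for illustration, not anything specified in this thread.

import hashlib

def chunk_digest(path, algo="sha1"):
    """Hash an encoded chunk on the client so that only the digest is
    reported back, rather than re-uploading the chunk itself."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

# Hypothetical usage: after encoding its assigned chunk the client reports
# the digest to the job queue instead of uploading the encoded chunk again,
# e.g. report_result(job_id=1234, digest=chunk_digest("chunk_0007.ogv"))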
[snip]
> other random clients that are encoding other pieces would make abuse very difficult... at the cost of a few small http requests after the encode is done, and at a cost of slightly more CPU cycles from the computing pool.
Is >2x slightly? (Greater because some clients will abort/fail.)
Even that leaves open the risk that a single trouble maker will register a few accounts and confirm their own blocks. You can fight that too— but it's an arms race with no end. I have no doubt that the problem can be made tolerably rare— but at what cost?
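To put a rough number on the ">2x" above, here is a sketch under assumed values; the confirmation count and failure rate are illustration-only, not a proposed design.

# Every chunk is encoded once, then re-encoded by at least one other client
# so the result hashes can be compared; aborted/failed encodes add waste.
confirmations = 1     # assumed: one independent confirming re-encode per chunk
failure_rate = 0.15   # assumed: fraction of encodes that abort or fail

multiplier = (1 + confirmations) / (1 - failure_rate)
print(f"total CPU is ~{multiplier:.1f}x a single trusted encode")  # ~2.4x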
I don't think it's all that acceptable to significantly increase the resources used for the operation of the site just for the sake of pushing the capital and energy costs onto third parties, especially when it appears that the cost to Wikimedia will not decrease (but instead be shifted from equipment cost to bandwidth and developer time).
[snip]
> We need to start exploring the bittorrent integration anyway to distribute the bandwidth cost on the distribution side. So this work would lead us in a good direction as well.
> http://lists.wikimedia.org/pipermail/wikitech-l/2009-April/042656.html
I'm troubled that Wikimedia is suddenly so interested in all these cost externalizations which will dramatically increase the total cost but push those costs off onto (sometimes unwilling) third parties.
Tech spending by the Wikimedia Foundation is a fairly small portion of the budget, small enough that it has drawn some criticism. Behaving in the most efficient manner is laudable and the WMF has done excellently on this front in the past. Behaving in an inefficient manner in order to externalize costs is, in my view, deplorable and something which should be avoided.
Has some organizational problem arisen within Wikimedia which has made it unreasonably difficult to obtain computing resources, but easy to burn bandwidth and development time? I'm struggling to understand why development-intensive externalization measures are being regarded as first choice solutions, and invented ahead of the production deployment of basic functionality.