On 14 October 2010 20:04, Michael Dale <mdale@wikimedia.org> wrote:
> My quick thought would be to think about how the system could work over vanilla HTTP, i.e. clients can't really be expected to install things. If the files are split into chunks per keyframe set, could a JavaScript system drive reassembly? And/or could this work in conjunction with efforts to build an adaptive streaming protocol (http://blog.gingertech.net/2010/10/09/adaptive-http-streaming-for-open-codec...), where each chunk could potentially be pulled from different servers?
That is a challenge.
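That said, here is a rough sketch of the reassembly loop such a JavaScript client might run (written as TypeScript; the manifest format and chunk URLs are made up for illustration, and a real player would feed chunks to the decoder as they arrive rather than buffering the whole file first):

// Hypothetical manifest: an ordered list of keyframe-aligned chunk URLs.
interface Manifest {
  chunks: string[]; // e.g. ["http://mirror-a.example/video/0", ...]
}

// Fetch each chunk in order and stitch the bytes back together.
async function reassemble(manifest: Manifest): Promise<Uint8Array> {
  const parts: Uint8Array[] = [];
  let total = 0;
  for (const url of manifest.chunks) {
    const resp = await fetch(url);
    if (!resp.ok) throw new Error("chunk fetch failed: " + url);
    const buf = new Uint8Array(await resp.arrayBuffer());
    parts.push(buf);
    total += buf.byteLength;
  }
  // Concatenate the chunks into one contiguous buffer.
  const out = new Uint8Array(total);
  let off = 0;
  for (const p of parts) {
    out.set(p, off);
    off += p.byteLength;
  }
  return out;
}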
> The p2p architecture shifts to a relatively small number of medium-sized HTTP support servers rather than a large number of p2p clients, i.e. something medium-sized hosts like universities and small ISPs could donate capacity to, and have it work over normal HTTP.
Yep, economies of scale made server bandwidth way more attractive.
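To make that concrete, here is one way a client could spread chunk requests over a pool of donated support servers, with failover when one is down (TypeScript again; the mirror names and URL layout are invented for illustration):

// Hypothetical pool of donated mirrors, all serving the same chunk layout.
const mirrors = [
  "http://mirror.example-university.edu/wikivideo",
  "http://cache.example-isp.net/wikivideo",
];

// Fetch chunk n of a file, picking the first mirror by round-robin on the
// chunk number and falling over to the others on error.
async function fetchChunk(path: string, n: number): Promise<Uint8Array> {
  for (let attempt = 0; attempt < mirrors.length; attempt++) {
    const base = mirrors[(n + attempt) % mirrors.length];
    try {
      const resp = await fetch(base + "/" + path + "/" + n);
      if (resp.ok) return new Uint8Array(await resp.arrayBuffer());
    } catch {
      // network error; try the next mirror
    }
  }
  throw new Error("all mirrors failed for chunk " + n);
}

The round-robin on chunk number spreads load across donors even for a single popular file, which is the point of the support-server idea.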
> I have also cc'ed people who worked on the p2p-next project: Arno Bakker, Riccardo, Diego and Jan Gerber. (You guys may want to join the Wikivideo-l list.) The p2p-next group is working on a new architecture for p2p distribution that works over UDP (so pretty different from the above proposal); they presented their work at OVC. Arno, is there a paper up somewhere that outlines the architecture of the new system you're working on?
Well, Arno is on vacation somewhere in the US. There is an IETF draft outlining the transport protocol (though not the entire architecture): https://datatracker.ietf.org/doc/draft-grishchenko-ppsp-swift/ There is also a mailing list for the project: https://listserv.tudelft.nl/mailman/listinfo/swift-ewi The general architecture itself may be described as "practical content-centric networking" or a "federated CDN". While it is very much P2P-like inside (server-to-server), it speaks HTTP to regular users.
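For list readers who don't want to dig through the draft: the key idea is that content is identified by the root of a Merkle hash tree computed over its chunks, so any server or peer can prove it is sending you the right bytes. A toy version of the naming step (TypeScript on Node; the real protocol uses SHA-1 over a tree padded out with empty hashes and addressed by "bin" numbers, which this sketch skips):

import { createHash } from "crypto";

// Hash one chunk (leaf) or the concatenation of two child hashes.
function sha1(...parts: Buffer[]): Buffer {
  const h = createHash("sha1");
  for (const p of parts) h.update(p);
  return h.digest();
}

// Compute the Merkle root over fixed-size chunks; this root hash serves
// as the self-certifying identifier of the whole file.
function merkleRoot(data: Buffer, chunkSize = 1024): Buffer {
  if (data.length === 0) throw new Error("empty file");
  let level: Buffer[] = [];
  for (let off = 0; off < data.length; off += chunkSize) {
    level.push(sha1(data.subarray(off, off + chunkSize)));
  }
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i + 1 < level.length; i += 2) {
      next.push(sha1(level[i], level[i + 1]));
    }
    // Carry an unpaired last node up a level (the draft pads instead).
    if (level.length % 2 === 1) next.push(level[level.length - 1]);
    level = next;
  }
  return level[0];
}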