On 04/23/2012 02:45 PM, Daniel Kinzler wrote:
> - In case an update is missed, we need a mechanism to allow requesting a full
> purge and re-fetch of all data from the client side, rather than just waiting
> for the next push, which might very well take a very long time to happen.
Once the data set becomes large and the change rate drops, this would be a very expensive way to catch up. You could use sequence numbers for changes to allow clients to detect missed changes and selectively retrieve all changes since the last contact.
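A rough sketch of what that could look like on the client side; the Change shape, ChangeClient and the /changes?since= endpoint are made up for illustration, not an existing API:

// Sketch only: sequence-number tracking with pull-based catch-up.
interface Change {
  seq: number;      // monotonically increasing number assigned by the repo
  payload: unknown; // the actual change data
}

class ChangeClient {
  private lastSeq = 0; // highest sequence number applied so far

  // Handle a pushed change, falling back to catch-up if a gap is detected.
  async onPush(change: Change): Promise<void> {
    if (change.seq === this.lastSeq + 1) {
      this.apply(change);          // in order: apply directly
    } else if (change.seq > this.lastSeq + 1) {
      await this.catchUp();        // missed changes: pull everything since lastSeq
    }
    // change.seq <= lastSeq: already seen, ignore
  }

  // Selectively retrieve all changes since the last contact (assumed endpoint).
  async catchUp(): Promise<void> {
    const res = await fetch(`/changes?since=${this.lastSeq}`);
    const missed: Change[] = await res.json();
    for (const c of missed) {
      this.apply(c);
    }
  }

  private apply(change: Change): void {
    this.lastSeq = change.seq;
    // ... apply change.payload to the local copy ...
  }
}

This keeps a full purge as a last resort; in the normal case a client only transfers the changes it actually missed.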
In general, best-effort push (change notifications) with bounded waiting for slow clients, combined with an efficient way to catch up, should be more reliable than push alone. You don't really want to do a lot of buffering for slow clients while pushing.
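On the push side, bounded buffering could be as simple as a per-subscriber queue with a hard cap; a client that overflows it is dropped from push and relies on the catch-up path above instead. Again only a sketch, with Subscriber, send() and the cap as placeholders:

// Sketch only: best-effort push with a bounded per-client buffer.
interface Change { seq: number; payload: unknown; }

interface Subscriber {
  send(change: Change): boolean; // false if the client can't accept more right now
}

const MAX_BUFFERED = 1000; // arbitrary bound per slow client

class Pusher {
  private buffers = new Map<Subscriber, Change[]>();

  subscribe(sub: Subscriber): void {
    this.buffers.set(sub, []);
  }

  push(change: Change): void {
    for (const [sub, buf] of this.buffers) {
      buf.push(change);
      if (buf.length > MAX_BUFFERED) {
        // Stop buffering for this client; it will notice the gap in
        // sequence numbers and catch up via pull.
        this.buffers.delete(sub);
        continue;
      }
      // Drain as much as the client accepts; keep the rest buffered.
      while (buf.length > 0 && sub.send(buf[0])) {
        buf.shift();
      }
    }
  }
}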
If you are planning to render large, standardized page fragments with little to no input from the wiki page, then it might also become interesting to load fragments directly using JS in the browser, or through a proxy with ESI-like capabilities for clients without JS.
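For the JS case, that could boil down to fetching pre-rendered fragments into placeholders in the page; the data-fragment-url attribute and /fragment/ URLs below are assumptions for illustration, not an existing interface:

// Browser-side sketch: fill in fragment placeholders after page load.
async function loadFragment(el: HTMLElement): Promise<void> {
  const url = el.dataset.fragmentUrl; // e.g. <div data-fragment-url="/fragment/Q42/infobox">
  if (!url) return;
  const res = await fetch(url);
  el.innerHTML = await res.text();    // insert the pre-rendered fragment
}

document.querySelectorAll<HTMLElement>('[data-fragment-url]').forEach(el => {
  void loadFragment(el);
});

// For clients without JS, a proxy with ESI-like capabilities could expand a
// marker such as <esi:include src="/fragment/Q42/infobox"/> into the same
// fragment before the page is delivered.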
Gabriel