George William Herbert wrote:
... It would also not be much more effort or customer impact to pad to the next larger 1k size for a random large fraction of transmissions.
Padding each transmission with a random number of bytes, up to say 50 or 100, might provide a greater defense against fingerprinting while saving massive amounts of bandwidth.
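To make the comparison concrete, here is a minimal sketch of the two schemes being discussed: padding up to the next 1 KiB boundary versus adding a small uniform random pad of up to 100 bytes. The function names and parameters are illustrative, not anything Wikimedia actually runs.

```python
import secrets

def pad_to_bucket(size: int, bucket: int = 1024) -> int:
    """Pad to the next multiple of `bucket` bytes (the 1k-bucket scheme)."""
    return ((size + bucket - 1) // bucket) * bucket

def pad_random(size: int, max_pad: int = 100) -> int:
    """Add a uniform random 0..max_pad bytes (the scheme proposed above)."""
    return size + secrets.randbelow(max_pad + 1)

# Average overhead: bucket padding adds roughly bucket/2 bytes per response,
# random padding roughly max_pad/2 -- much less bandwidth, but it perturbs
# each observable size by at most max_pad bytes.
```

The bandwidth saving is real; whether the small perturbation actually defeats a fingerprinting adversary is the point contested below.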
... At some point the ops team would need a security team, an IDS team, and a counterintelligence team to watch the other teams, and I don't know if the Foundation cares that much or would find operating that way to be a more comfortable moral and practical stance...
I'm absolutely sure that they do care enough to get it right, but I think that approach might be overkill. Just one or two cryptology experts to manage the transition to HTTPS, PFS (perfect forward secrecy), and whatever padding is prudent would really help. I also hope that, if there is an effort to spread disinformation about the value of such techniques, the Foundation will consider joining with e.g. the EFF to fight it. A single cryptology consultant could probably make great progress on both fronts. Getting cryptography right isn't so much a time-intensive task as one that is sensitive to experience and training.
Setting up and monitoring with ongoing auditing can often be automated, but does require the continued attention of at least one highly skilled expert, and preferably more than one in case the first one gets hit by a bus.
On Fri, Aug 2, 2013 at 1:32 PM, James Salsman jsalsman@gmail.com wrote:
George William Herbert wrote:
... It would also not be much more effort or customer impact to pad to the next larger 1k size for a random large fraction of transmissions.
Padding each transmission with a random number of bytes, up to say 50 or 100, might provide a greater defense against fingerprinting while saving massive amounts of bandwidth.
Or it might provide virtually no defense and not save any bandwidth.
On 08/02/2013 01:32 PM, James Salsman wrote:
Padding each transmission with a random number of bytes, up to say 50 or 100, might provide a greater defense against fingerprinting while saving massive amounts of bandwidth.
It would slightly change the algorithm used to compute the fingerprint, not make fingerprinting significantly harder, and you'd want to have some fuzz in the matching process anyway, since you wouldn't want to have to fiddle with your database at every edit.
The combination of "at least this size" with "at least that many secondary documents of at least those sizes in that order" is probably sufficient to narrow the match to a very tiny minority of articles. You'd also need to randomize delays, shuffle load order, load blinds (decoy requests), etc. A minor random increase in document size wouldn't even slow down the matching process.
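Marc's argument can be sketched in a few lines: a matcher that tolerates a small fuzz in each size absorbs random padding entirely, while the ordered sequence of secondary-resource sizes still pins down the article. All article names and byte counts here are hypothetical.

```python
def matches(observed, candidate, tol=100):
    """Fuzzy match: each observed transfer size must be within `tol` bytes
    of the candidate page's known resource sizes, in order. A tolerance at
    least as large as the maximum random pad absorbs the padding."""
    return (len(observed) == len(candidate)
            and all(abs(o - c) <= tol for o, c in zip(observed, candidate)))

# Hypothetical corpus: article -> sizes of its page and secondary resources.
corpus = {
    "ArticleA": [48120, 3021, 17502, 940],
    "ArticleB": [48110, 2050, 9900],
    "ArticleC": [12000, 3021, 17502, 940],
}

# Observed transfers: ArticleA with up to 100 bytes of random pad on each.
observed = [48160, 3090, 17555, 1012]
hits = [name for name, sizes in corpus.items() if matches(observed, sizes)]
# hits narrows to ["ArticleA"] despite the padding.
```

Note that the countermeasures Marc lists (randomized delays, shuffled load order, decoy loads) attack the sequence structure itself, which is what this matcher actually relies on.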
-- Marc
How much padding is already inherent in HTTPS? Does the protocol pad to the size of the blocks in the block cipher?
Seems to me that any amount of padding is going to give little bang for the buck, at least without using some sort of pipelining. You could probably do quite a bit if you redesigned MediaWiki from scratch using all those newfangled asynchronous JavaScript techniques, but that's not exactly an easy task. :)
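On the question of padding inherent in HTTPS: with a TLS 1.2 CBC ciphersuite (e.g. AES-CBC, RFC 5246), the plaintext plus MAC plus a padding-length byte is padded up to the cipher's 16-byte block size, so the implicit padding is normally at most 15 bytes (the spec permits up to 255 bytes of padding, but implementations rarely use more than the minimum; AEAD suites like AES-GCM add no length-hiding padding at all). A sketch of the record-body length, assuming HMAC-SHA1 and ignoring the IV and record header:

```python
def cbc_record_len(plaintext_len: int, mac_len: int = 20, block: int = 16) -> int:
    """Body length of a TLS 1.2 CBC record: plaintext + MAC + one
    padding-length byte, rounded up to the cipher block size.
    mac_len=20 assumes HMAC-SHA1; IV and record header are omitted."""
    unpadded = plaintext_len + mac_len + 1  # +1 for the padding-length byte
    return ((unpadded + block - 1) // block) * block
```

So the protocol's built-in padding quantizes sizes only to 16-byte granularity, far too fine to hide which article was fetched.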
Wikimedia-l mailing list Wikimedia-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe