-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
If I am not mistaken (and I may very well be), MediaWiki still uses MD5s to encrypt (well, technically hash, but it's named wfEncryptPassword(), heh heh) user passwords.
function wfEncryptPassword( $userid, $password ) {
    global $wgPasswordSalt;
    $p = md5( $password );

    if ( $wgPasswordSalt )
        return md5( "{$userid}-{$p}" );
    else
        return $p;
}
If this is indeed the case, we should be considering migrating away from MD5 to a more secure algorithm like SHA256. The attacks against this hashing scheme have grown incredibly sophisticated, and where I consult we generally discourage new developers from using MD5 for any security-related purpose (it still makes a fine checksum, though).
Migrating the hashes would probably prove tricky, but if we implement appropriate hooks, then with the addition of only one new field we could easily "magically" update the hashes once a user logs in and the system is (for one short request) in possession of the plaintext password. The old algorithm could be supported indefinitely, but only for old user accounts that haven't been upgraded yet; all new accounts would use the new hashing scheme. We could even rename the function to something more accurate!
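Roughly, the upgrade-on-login I have in mind would look something like the sketch below. This is only a sketch: the function names, the stored-type column, and the use of PHP's hash() function (only built in from 5.1.2) are illustrative assumptions, not existing MediaWiki code.

    // Sketch only: wfCheckAndUpgradePassword(), wfUpdateStoredPassword() and the
    // $storedType column are invented for illustration, not actual MediaWiki APIs.
    function wfCheckAndUpgradePassword( $userid, $password, $storedHash, $storedType ) {
        global $wgPasswordSalt;

        if ( $storedType === 'sha256' ) {
            // Account already upgraded: verify against the new salted SHA-256 scheme.
            $p = hash( 'sha256', $password );
            return hash( 'sha256', "{$userid}-{$p}" ) === $storedHash;
        }

        // Legacy account: verify against the current MD5 scheme.
        $p = md5( $password );
        $legacy = $wgPasswordSalt ? md5( "{$userid}-{$p}" ) : $p;
        if ( $legacy !== $storedHash ) {
            return false;
        }

        // The password checked out and we briefly hold the plaintext, so
        // silently re-hash it with the new scheme and store that instead.
        $p = hash( 'sha256', $password );
        wfUpdateStoredPassword( $userid, hash( 'sha256', "{$userid}-{$p}" ), 'sha256' );
        return true;
    }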
What say the developers?
On 1/22/07, Edward Z. Yang edwardzyang@thewritingpot.com wrote:
If this is indeed the case, we should be considering migrating away from MD5 to a more secure algorithm like SHA256. The attacks against this hashing scheme have grown incredibly sophisticated, and where I consult we generally discourage new developers from using MD5 for any security-related purpose (it still makes a fine checksum, though).
Aren't the vulnerabilities limited to the attacker creating a collision of two strings *that the attacker created* sharing a common prefix? Are they relevant to a password hash? There's no preimage attack against MD5, and that strikes me as the only thing relevant to passwords. Things like certificates can be a problem, of course, depending on exact implementation.
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
Edward Z. Yang wrote:
If I am not mistaken (and I may very well be), MediaWiki still uses MD5s to encrypt (well, technically hash, but it's named wfEncryptPassword(), heh heh) user passwords.
[snip]
If this is indeed the case, we should be considering migrating away from MD5 to a more secure algorithm like SHA256.
As a note; AFAIK versions of PHP prior to 5.1.2 include only MD5 and SHA-1 digest functions built-in, and the rumor is SHA-1 isn't safe enough either.
There is an 'mhash' module with more algos including SHA256, but it appears not to be enabled by default: http://www.php.net/manual/en/ref.mhash.php
The more featureful 'hash' module is available by default from 5.1.2 on: http://www.php.net/manual/en/ref.hash.php
Currently MediaWiki supports PHP 5.0.4(?) and up, but 5.0 is mildly annoying (and has some nasty breakage with arrays causing it to fail on 64-bit systems.)
With appropriate hash functions present, we could indeed auto-upgrade hashes on login. (A new field is not necessarily required; the existing hash field can be upgraded to indicate the hash algo along with the hash value. And in a happy case of coincidence, the password hash fields are tinyblobs, so anything that fits in 255 bytes is cool...)
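Something along these lines, say. This is just a sketch: the function name and the "sha256:" tag are only one possible convention, and the hash()/hash_algos() calls require the hash extension mentioned above.

    // Sketch: use the strongest digest available and tag the stored value with the
    // algorithm name, so old and new hashes can coexist in the existing tinyblob field.
    function wfBestPasswordHash( $userid, $password ) {
        global $wgPasswordSalt;

        if ( function_exists( 'hash' ) && in_array( 'sha256', hash_algos() ) ) {
            $p = hash( 'sha256', $password );
            $h = $wgPasswordSalt ? hash( 'sha256', "{$userid}-{$p}" ) : $p;
            return "sha256:$h";   // 7-byte tag + 64 hex chars, well under 255 bytes
        }

        // Older PHP without the hash extension: fall back to the current MD5 scheme.
        $p = md5( $password );
        return $wgPasswordSalt ? md5( "{$userid}-{$p}" ) : $p;
    }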
- -- brion vibber (brion @ pobox.com)
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
Brion Vibber wrote:
As a note; AFAIK versions of PHP prior to 5.1.2 include only MD5 and SHA-1 digest functions built-in, and the rumor is SHA-1 isn't safe enough either. [snip]
I would recommend rolling a pure-PHP implementation of SHA-256 and switching to the hash extension's implementation when it is present. The hash isn't computed very often: only during login and password setting, so any performance penalty incurred wouldn't be that bad. Plus, there are already a number of quite fast SHA-256 implementations out there for PHP. I personally recommend: http://code.tatzu.net/sha256/
With appropriate hash functions present, we could indeed auto-upgrade hashes on login. (A new field is not necessarily required; the existing hash field can be upgraded to indicate the hash algo along with the hash value. And in a happy case of coincidence, the password hash fields are tinyblobs, so anything that fits in 255 bytes is cool...)
Works then, since raw binary SHA-256 output is only 256 bits (32 bytes raw, or 64 characters as hex). We can easily spare another 7 bytes to prepend it with something along the lines of "sha256:".
Simetrical wrote:
Aren't the vulnerabilities limited to the attacker creating a collision of two strings *that the attacker created* sharing a common prefix? Are they relevant to a password hash? There's no preimage attack against MD5, and that strikes me as the only thing relevant to passwords. Things like certificates can be a problem, of course, depending on exact implementation.
Well, in spite of these extremely devastating attacks in the collision area, the keyspace of MD5 is extremely small: 128 bits is small enough that a birthday attack is extremely feasible. MD5 also has many comprehensive rainbow tables (including one that's 4.9 TB in size!). I think it's worth migrating, even if the security increase is comparatively small. It's not difficult to do.
Edward Z. Yang wrote:
I would recommend rolling a pure-PHP implementation of SHA-256 and switching to the hash extension's implementation when it is present.
New crypto implementations often have far more security issues than the primitives they're implementing. Despite the known attacks on SHA-1, it's perfectly fine for password hashing, and it doesn't require external libraries. Use it, be merry.
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
Ivan Krstić wrote:
New crypto implementations often have far more security issues than the primitives they're implementing. Despite the known attacks on SHA-1, it's perfectly fine for password hashing, and it doesn't require external libraries. Use it, be merry.
Actual encryption (both design and implementation) is indeed rocket science, but implementing cryptographic hashes is not difficult at all as long as you understand the algorithm and have a good battery of unit tests to make sure your implementation is working properly.
Yes, actually *designing* a hash function is difficult. And yes, SHA-1 *probably* is still good enough. But if we're going to go to the trouble of migration (small trouble, but trouble that requires DB schema changes nonetheless, be they formal or informal), we might as well do it right. I remember one security expert saying that there is no smoke yet, but the alarm bells have gone off for SHA-1 and it's time to walk (not run) for the exits.
Edward Z. Yang wrote:
implementing cryptographic hashes is not difficult at all as long as you understand the algorithm and have a good battery of unit tests to make sure your implementation is working properly.
Yes, and jet turbine maintenance is not difficult at all as long as you understand their construction and have a good battery of pre-flight tests to make sure they're working properly. Yet you wouldn't feel very comfortable grabbing a random person off the street, handing them the manual, and having them go to town on the engines of your 747.
It's trivial to introduce all sorts of subtle bugs when creating new crypto implementations -- we've seen this happen time after time -- so unless there's a particularly compelling reason to reimplement from scratch, sticking with something that's already out and has had many eyeballs looking at it is almost always the correct choice.
But if we're going to go to the trouble of migration (small trouble, but trouble that requires DB schema changes nonetheless, be they formal or informal), we might as well do it right. I remember one security expert saying that there is no smoke yet, but the alarm bells have gone off for SHA-1 and it's time to walk (not run) for the exits.
FUD. Talk of "doing it right" and "alarm bells having gone off" is meaningless without a threat model. New uses of SHA-1 are rightly discouraged in protocol design, where collisions directly translate to potential problems, but password hashing?
Generally, the only password hashing scenario in which the choice of algorithm makes a difference at all is an offline attack once the password table has been compromised, at which point the difference between one algorithm and the next is nothing more than how long you can hold off a brute-forcing attacker. And for that, without preimage attacks, the known MD5 and SHA-1 flaws make about zero difference for any practical purpose.
On 1/22/07, Ivan Krstić krstic@solarsail.hcs.harvard.edu wrote: [snip]
Generally, the only password hashing scenario in which the choice of algorithm makes a difference at all is an offline attack once the password table has been compromised, at which point the difference between one algorithm and the next is nothing more than how long you can hold off a brute-forcing attacker. And for that, without preimage attacks, the known MD5 and SHA-1 flaws make about zero difference for any practical purpose.
Ivan is right on in his statements here.
There might be good reasons to move.. for example, as part of an overall effort to stop using legacy hash functions throughout the software... or better, to change to a system which supports client side hashing.
None of the pre-existing MD5 rainbow tables will do a lick of good against MediaWiki because of the weird legacy-inspired H(s+'-'+H(P)) that we use.
Frankly, we could probably be using classic Unix crypt() for this application. Hashing our server-side passwords is useful to make it less attractive for someone to steal the database, and perhaps to keep a curious developer from doing something actually useful like publishing a list of accounts with the same password.
Once we've succeeded in making sure that attacking the database of passwords is no longer the lowest hanging fruit, we're pretty much done... Taking the extra effort to add lots of additional security will only risk adding additional vulnerabilities.
(/me waits for someone to notice the H(s+'-'+H(P)) above and cry about the minor precomputation a smart attacker can do to reduce the workload from 2*users*passwords MD5s to passwords + passwords*users MD5s)
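Since /me asked for it, that precomputation looks roughly like the sketch below from the attacker's side. The $candidatePasswords and $users arrays are placeholders standing in for a dictionary and a stolen user table; the point is just that the inner md5() is computed once per candidate rather than once per (user, candidate) pair.

    // Attacker's-eye illustration of the precomputation mentioned above.
    // $candidatePasswords and $users are placeholder inputs for the example.
    $innerHashes = array();
    foreach ( $candidatePasswords as $pw ) {
        $innerHashes[$pw] = md5( $pw );                 // "passwords" MD5s, computed once
    }
    foreach ( $users as $userid => $storedHash ) {
        foreach ( $innerHashes as $pw => $inner ) {
            if ( md5( "{$userid}-{$inner}" ) === $storedHash ) {   // "users * passwords" MD5s
                echo "user {$userid} uses password {$pw}\n";
            }
        }
    }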
If someone wants to add something that will substantially improve security which requires no substantial changes, ... build something like Google's password strength indicator for the password change page. :) (bonus points for pure js rather than ajax based)
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
Moin,
On Tuesday 23 January 2007 05:10, Gregory Maxwell wrote:
On 1/22/07, Ivan Krstić krstic@solarsail.hcs.harvard.edu wrote: [snip]
Generally, the only password hashing scenario in which the choice of algorithm makes a difference at all is an offline attack once the password table has been compromised, at which point the difference between one algorithm and the next is nothing more than how long you can hold off a brute-forcing attacker. And for that, without preimage attacks, the known MD5 and SHA-1 flaws make about zero difference for any practical purpose.
Ivan is right on in his statements here.
[snip]
I agree that changing the hashing algorithm is unnecessary here.
But:
(/me waits for someone to notice the H(s+'-'+H(P)) above and cry about the minor precomputation a smart attacker can do to reduce the workload from 2*users*passwords MD5s to passwords + passwords*users MD5s)
Actually, if you want to strengthen the password-hash table against some offline brute-force/dictionary attacks, you should hash the passwords with a function that takes a long time per guess, but still not enough time to slow down the login servers.
Something like
hash = H(password); for (0..100) { hash = H(hash); }
What function you actually use for H(), be it MD5 or SHA1, is practically irrelevant here, though; but when you migrate to such a scheme, you might as well use SHA256 instead of MD5 (even if it is just to quiet all the "MD5 is insecure" criers :)
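In PHP that might look roughly like this minimal sketch; the function name, the salt handling and the round count are arbitrary illustrative choices, and hash() assumes PHP 5.1.2+ or an equivalent fallback.

    // Sketch of iterated ("stretched") hashing: make each password guess cost
    // many hash computations instead of one.
    function wfStretchedHash( $userid, $password, $rounds = 1000 ) {
        $hash = hash( 'sha256', "{$userid}-{$password}" );
        for ( $i = 0; $i < $rounds; $i++ ) {
            $hash = hash( 'sha256', $hash );
        }
        return $hash;
    }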
Best wishes,
Tels
- -- Signed on Tue Jan 23 18:45:25 2007 with key 0x93B84C15. View my photo gallery: http://bloodgate.com/photos PGP key on http://bloodgate.com/tels.asc or per email.
"Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats." -- Howard Aiken
"Tels" nospam-abuse@bloodgate.com wrote in message news:200701231852.29592@bloodgate.com...
What function you actually use for H(), be it MD5 or SHA1, is practically irrelevant here, though; but when you migrate to such a scheme, you might as well use SHA256 instead of MD5 (even if it is just to quiet all the "MD5 is insecure" criers :)
In security, doing things because "you might as well" is an incredibly bad idea! A security system should only be changed to be a _better_ security system (and even then after it has been proven to be better). _Never_ because it's 'probably not worse'!
- Mark Clements (HappyDog)
Mark Clements schreef:
"Tels" nospam-abuse@bloodgate.com wrote in message news:200701231852.29592@bloodgate.com...
What function you actually use for H(), be it MD5 or SHA1, is practically irrelevant here, though; but when you migrate to such a scheme, you might as well use SHA256 instead of MD5 (even if it is just to quiet all the "MD5 is insecure" criers :)
In security, doing things because "you might as well" is an incredibly bad idea! A security system should only be changed to be a _better_ security system (and even then after it has been proven to be better). _Never_ because it's 'probably not worse'!
- Mark Clements (HappyDog)
Hoi, this discussion /is/ about changing the security system: changing it because the need is felt for the current system to be improved. So when you have the option between several choices, where one is theoretically substantially better, it is worth the consideration. As many people have mentioned, it pays to use well-tested, well-known algorithms. As many people have mentioned, it pays to double-check that the implementation is done perfectly. Public perception about security is important: when people think that something is not secure, they are a step closer to proving that it is not secure. Thanks, GerardM
Gerard Meijssen wrote:
This discussion /is/ about changing the security system: changing it because the need is felt for the current system to be improved.
No, the discussion is about changing the present system because the original poster didn't understand what he was talking about.
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
Aw, come on, why is there so much contention about such a simple issue? And yes, I did slip when I started talking about birthday attacks. Sorry about that.
From the way I see it, then, the primary objections to migrating are that:
* All this hashing and salting tomfoolery makes no difference, because the only way an attacker could possibly get the hashes is if they hacked into the server or a developer with full read access pulled them.
* The current scheme is not broken, because no preimage attacks have been developed against MD5.
* Effort spent on migration would be better spent on HTTPS.
* The security benefit is minuscule compared to other possible strategies, such as encouraging strong passwords.
* And that I have no clue what I'm talking about (I like to think that I at least have some clue).
There are some peripheral issues surrounding SHA256:
* SHA-256 is not as widely deployed, so it might have some fatal weaknesses (this is unlikely, though, because its design is closely related to SHA-1's).
* SHA-256 would require a pure-PHP implementation, which might have bugs. Due to the avalanche effect, however, any bug should be glaringly obvious; furthermore, the algorithm is small enough that a full code audit would not be difficult (think <15 kB). A quick test-vector check like the one sketched below would also catch gross breakage.
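To make that last point concrete, any implementation we ship should at minimum reproduce the published FIPS 180-2 test vectors, something like the following (shown here with PHP's built-in hash(); the same check applies to whatever pure-PHP fallback is used):

    // Sanity check against the well-known FIPS 180-2 test vector for "abc".
    $expected = 'ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad';
    $got = hash( 'sha256', 'abc' );   // swap in the pure-PHP implementation here
    if ( $got !== $expected ) {
        die( "SHA-256 implementation is broken: got {$got}\n" );
    }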
My arguments for migrating are:
* Defence in depth: it is true that there are a number of elements that already potentially "solve" the problem, but this argument taken to the extreme would mean there would be no need to hash or salt the password at all: after all, anyone who has access to the database could just change it themselves! But I'm sure most reasonable people here agree that hashing the password is better than no protection at all, even if it doesn't seem to make a difference.
* It saves effort later: by adding support for multiple hashing techniques, and building the framework necessary to migrate these hashes, we reduce the cost of any other hash migrations we have to do in the future. If done properly, a future migration could be done by specifying a new hash function, and the code from the previous migration would do all the work.
* It is forward thinking: attacks against hashing schemes never get worse; they only get better, as processing power, memory capacity and knowledge of attacks increase.
It's no problem if we don't, of course. The strangeness of MediaWiki's current hash function should, for now, prevent people from using rainbow tables to find the password (even though you reduce entropy by double-hashing). But, as I said, defense in depth!
Now, about that password security indicator...
Edward Z. Yang wrote:
Aw, come on, why is there so much contention about such a simple issue? And yes, I did slip when I started talking about birthday attacks. Sorry about that.
I find it interesting that you're advocating moving away from MD5 in a situation where the known collision weaknesses aren't relevant, yet you personally are still using SHA1 (which was broken about two years ago) in a situation which *is* susceptible to collision - and your signature didn't verify on that message (ep65i0$496$1@sea.gmane.org).
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
Alphax (Wikipedia email) wrote:
I find it interesting that you're advocating moving away from MD5 in a situation where the known collision weaknesses aren't relevant, yet you personally are still using SHA1 (which was broken about two years ago) in a situation which *is* susceptible to collision -
First of all, SHA1 is not *broken*: although cryptographers have discovered ways to force collisions with less work than brute force, the attack is still not practical. Furthermore, in a message-signing context, you would need to trick me into signing a doctored message, which would be pretty much impossible as I almost always use GPG only to sign plaintext.
Furthermore, I'm currently using a DSA key, which is limited to 160-bit hashes and thus does not support SHA-256. I could use RSA, but then encryption would be out of the question.
What you SHOULD be asking is why I'm using an old version of GnuPG (the current version is 1.4.6).
and your signature didn't verify on that message (ep65i0$496$1@sea.gmane.org).
Don't know why, my archived copy gives similar results. Maybe Thunderbird did something to it.
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
Moin,
On Tuesday 23 January 2007 19:20, Mark Clements wrote:
"Tels" nospam-abuse@bloodgate.com wrote in message news:200701231852.29592@bloodgate.com...
What function you actually use for H(), be it MD5 or SHA1, is practically irrelevant here, though; but when you migrate to such a scheme, you might as well use SHA256 instead of MD5 (even if it is just to quiet all the "MD5 is insecure" criers :)
In security, doing things because "you might as well" is an incredibly bad idea! A security system should only be changed to be a _better_ security system (and even then after it has been proven to be better). _Never_ because it's 'probably not worse'!
I agree with you in principle, but please note that I advocated the switch "as well" because:
* SHA256 is generally considered a more secure hash (or it would be silly to switch)
* after changing the core algorithm, you must evaluate the security of the new system anyway, so you might as well *consider* switching the hash function, because then you need to do the switch only once and the evaluation only once, too. You know, so as not to have to redo this in a few months when the next attack on MD5 comes along (this time affecting your system).
Of course, your evaluation might also result in "there is no need to switch anything".
Best wishes,
Tels
- -- Signed on Tue Jan 23 21:24:46 2007 with key 0x93B84C15. View my photo gallery: http://bloodgate.com/photos PGP key on http://bloodgate.com/tels.asc or per email.
"In 1988, Jack Thompson ran against Janet Reno for DA of Dade County: Thompson's unique campaign message was that Reno was unfit for the job because, as a closeted lesbian with a drinking problem, she was great candidate for blackmail by the criminal element. Jack never explained why this remained a threat even after he exposed her 'secret'. Reno cruised at the polls."
On 1/22/07, Edward Z. Yang edwardzyang@thewritingpot.com wrote:
Well, in spite of these extremely devastating attacks in the collision area, the keyspace of MD5 is extremely small: 128 bits is small enough that a birthday attack is extremely feasible. MD5 also has many comprehensive rainbow tables (including one that's 4.9 TB in size!). I think it's worth migrating, even if the security increase is comparatively small. It's not difficult to do.
A birthday attack is not relevant to a password hashing scheme, and rainbow tables are useless since Mediawiki uses salts.
The fact that the keyspace of MD5 is only 128 bits does limit the password strength, but who's using a password more than 13 characters for their Wikipedia password? Does Mediawiki even allow more than 13 character passwords?
Anthony
On 1/22/07, Edward Z. Yang edwardzyang@thewritingpot.com wrote:
Well, in spite of these extremely devastating attacks in the collision area, the keyspace of MD5 is extremely small: 128 bits is small enough that a birthday attack is extremely feasible.
Birthday attack, maybe, but that's useless for cracking a password. It's still far too large to brute-force a preimage. Maybe not for too many years to come . . . I don't disagree with the idea of moving to a new hash function just to be safe. It seems like a good idea.
(While we're on the topic of hashes, by the way, vBulletin has JS-enabled browsers hash and salt their passwords before they even send them. Thus man-in-the-middle attacks are impossible. Seems like a nifty idea to consider, anyway.)
On 1/22/07, Anthony wikitech@inbox.org wrote:
The fact that the keyspace of MD5 is only 128 bits does limit the password strength, but who's using a password more than 13 characters for their Wikipedia password? Does Mediawiki even allow more than 13 character passwords?
I think the limiting factor in password length in MediaWiki is how large a POST the server is willing to accept. ;) I once tried a password on my local install thousands of pages long, just for the heck of it, and it worked fine.
"Simetrical" Simetrical+wikitech@gmail.com wrote in message news:7c2a12e20701221939h45f35021u6e37e067ba078df2@mail.gmail.com...
On 1/22/07, Anthony wikitech@inbox.org
wrote:
The fact that the keyspace of MD5 is only 128 bits does limit the password strength, but who's using a password more than 13 characters for their Wikipedia password? Does Mediawiki even allow more than 13 character passwords?
I think the limiting factor in password length in MediaWiki is how large a POST the server is willing to accept. ;) I once tried a password on my local install thousands of pages long, just for the heck of it, and it worked fine.
And could you remember it the next time you tried to log in? :-)
- Mark Clements (HappyDog)
On 1/22/07, Mark Clements gmane@kennel17.co.uk wrote:
And could you remember it the next time you tried to log in? :-)
Well, I believe it consisted entirely of the letter 'A', so possibly I could have if I wanted to. :P
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
Simetrical wrote:
(While we're on the topic of hashes, by the way, vBulletin has JS-enabled browsers hash and salt their passwords before they even send them. Thus man-in-the-middle attacks are impossible. Seems like a nifty idea to consider, anyway.)
I did a demo implementation of that a couple years ago (might be in SVN somewhere, or might be lost) on this model:
- server sends a challenge string C with the login form
- JavaScript takes over on form submission, asking the server for the salt (user id) for the given name
- client calculates the salted hash H
- client calculates a combined hash, something like MD5(C + H), and submits that with the form instead of the plaintext
- server confirms that the submitted combined hash matches what it can calculate with the challenge string and its copy of H
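(Purely as a sketch, the server-side check at the last step could be as simple as the following; the function name is invented here and '+' is taken to mean string concatenation.)

    // Sketch: verify a challenge-response login without ever seeing the plaintext.
    // $challenge was generated when the form was served; $storedSaltedHash is the
    // server's copy of H; $submittedCombined is what the JavaScript sent back.
    function wfCheckChallengeLogin( $challenge, $storedSaltedHash, $submittedCombined ) {
        return md5( $challenge . $storedSaltedHash ) === $submittedCombined;
    }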
Is it more secure than sending plaintext passwords? A bit. But even if the challenge can armor against replay attacks, anyone sniffing can just hijack the session cookie and do all manner of nasty things right then and there.
There was some muttering at the time that just using HTTPS is safer and it's not worth the bother. Agreement? Disagreement?
- -- brion vibber (brion @ pobox.com)
On 23/01/07, Brion Vibber brion@pobox.com wrote:
There was some muttering at the time that just using HTTPS is safer and it's not worth the bother. Agreement? Disagreement?
What infrastructure changes would it require for us to start migrating to HTTPS, at least during the login process?
Rob Church
Rob Church schreef:
On 23/01/07, Brion Vibber brion@pobox.com wrote:
There was some muttering at the time that just using HTTPS is safer and it's not worth the bother. Agreement? Disagreement?
What infrastructure changes would it require for us to start migrating to HTTPS, at least during the login process?
Rob Church
Hoi, I would say first finish the Single User Login. It makes little sense to consider it before this implementation as it is part of the login process. Thanks, GerardM
On 23/01/07, Gerard Meijssen gerard.meijssen@gmail.com wrote:
I would say first finish the Single User Login. It makes little sense to consider it before this implementation as it is part of the login process.
*shrug*
All I did was ask what kind of changes we would need to plan for. You know, if we require a few hundred new servers, we might as well whinge about that at the next budget allocation now, so we can plan ahead.
I don't see you rushing to help with single-user login. Nor do I see any of those people who're whinging about it taking ages, and how they could do better, coming up with any better implementations.
This next bit isn't aimed at you, Gerard... CAN PEOPLE PLEASE LAY OFF ON SINGLE USER LOGIN? It's driving us all mad every time someone asks "when and how", and if I were Brion, I'd be getting absolutely bloody *sick* of it. Furthermore, "SUL" is *not* some magical sword which will fix all our problems in one fell swoop, so please remember that.
Rob Church
Rob Church schreef:
On 23/01/07, Gerard Meijssen gerard.meijssen@gmail.com wrote:
I would say first finish the Single User Login. It makes little sense to consider it before this implementation as it is part of the login process.
*shrug*
All I did was ask what kind of changes we would need to plan for. You know, if we require a few hundred new servers, we might as well whinge about that at the next budget allocation now, so we can plan ahead.
I don't see you rushing to help with single-user login. Nor do I see any of those people who're whinging about it taking ages, and how they could do better, coming up with any better implementations.
This next bit isn't aimed at you, Gerard... CAN PEOPLE PLEASE LAY OFF ON SINGLE USER LOGIN? It's driving us all mad every time someone asks "when and how", and if I were Brion, I'd be getting absolutely bloody *sick* of it. Furthermore, "SUL" is *not* some magical sword which will fix all our problems in one fell swoop, so please remember that.
Rob Church
Hoi, Actually I will not lay off on Single User Login. The reason is quite simple. Some things that are important to me will not happen until AFTER Single User Login. It is all well and good that everyone comes up with the most nifty things and all kinds of functionality that HAS to be implemented because you are volunteers too. In the mean time some rather basic stuff does not get done. If this drives you mad, fine. Fix the issue - help Brion implement SUL !
The promised date for SUL was end of January 2005. It is now a year late.
Thanks, GerardM
On 1/23/07, Gerard Meijssen gerard.meijssen@gmail.com wrote:
Hoi, Actually I will not lay off on Single User Login. The reason is quite simple. Some things that are important to me will not happen until AFTER Single User Login. It is all well and good that everyone comes up with the most nifty things and all kinds of functionality that HAS to be implemented because you are volunteers too. In the mean time some rather basic stuff does not get done. If this drives you mad, fine. Fix the issue - help Brion implement SUL !
The promised date for SUL was end of January 2005. It is now a year late.
Thanks, GerardM
I couldn't agree with this more. Single User Login is a pretty useless feature. It was a neat dream back in 2001 or whenever passport.com came out, but it really is rather useless. But...there are so many other useful features that are being ignored because they aren't scheduled to be implemented until *after* SUL.
I really wish the board would fix the issue and tell Brion to just drop the whole idea of Single User Login. You know, Firefox (like probably all the other browsers) has this nifty feature that stores your username and password for each site, so you only have to type in your password once.
By the way, January 2005 was *two* years ago, not one.
Anthony
Anthony wrote:
Single User Login is a pretty useless feature. It was a neat dream back in 2001 or whenever passport.com came out, but it really is rather useless. [...] You know, Firefox (like probably all the other browsers) has this nifty feature that stores your username and password for each site, so you only have to type in your password once.
SUL isn't generic SSO. Clearly, you have little or no understanding of the kind of problems that SUL is trying to fix or the benefits it would bring, so I'm not sure why you thought it was a good idea to embarrass yourself on a public list by backing ignorance with invective.
You aren't entitled to getting your desired MediaWiki features implemented by other people. If the current development model or pace bother you, either focus on your power animal and find the patience to wait some more, or start posting patches.
On 1/23/07, Ivan Krstić krstic@solarsail.hcs.harvard.edu wrote:
Anthony wrote:
Single User Login is a pretty useless feature. It was a neat dream back in 2001 or whenever passport.com came out, but it really is rather useless. [...] You know, Firefox (like probably all the other browsers) has this nifty feature that stores your username and password for each site, so you only have to type in your password once.
SUL isn't generic SSO. Clearly, you have little or no understanding of the kind of problems that SUL is trying to fix or the benefits it would bring, so I'm not sure why you thought it was a good idea to embarrass yourself on a public list by backing ignorance with invective.
Spoken like a true crackpot. I don't think your little dream idea is useful or manageable so I must not understand it.
You aren't entitled to getting your desired MediaWiki features implemented by other people. If the current development model or pace bother you, either focus on your power animal and find the patience to wait some more, or start posting patches.
I never claimed to be entitled to anything. I guess I implied that I'm entitled to make fun of the poor implementation of a dumb idea on this list, though. And hey, I am.
Anthony
On 24/01/07, Anthony wikitech@inbox.org wrote:
Spoken like a true crackpot. I don't think your little dream idea is useful or manageable so I must not understand it.
That's enough lame attacks, please, kthx.
Rob Church
On 1/24/07, Rob Church robchur@gmail.com wrote:
On 24/01/07, Anthony wikitech@inbox.org wrote:
Spoken like a true crackpot. I don't think your little dream idea is useful or manageable so I must not understand it.
That's enough lame attacks, please, kthx.
You forgot to put the lame attack in your post:
"SUL isn't generic SSO. Clearly, you have little or no understanding of the kind of problems that SUL is trying to fix or the benefits it would bring, so I'm not sure why you thought it was a good idea to embarrass yourself on a public list by backing ignorance with invective."
Anthony
On 24/01/07, Anthony wikitech@inbox.org wrote:
You forgot to put the lame attack in your post:
It's *all* lame, and it's *all* pointless bickering. We're supposed to be techies; we're supposed to discuss things and be normal (to our standards).
Let's maintain civilised discussion on the mailing lists, please.
Rob Church
Anthony schreef:
On 1/23/07, Gerard Meijssen gerard.meijssen@gmail.com wrote:
Hoi, Actually I will not lay off on Single User Login. The reason is quite simple. Some things that are important to me will not happen until AFTER Single User Login. It is all well and good that everyone comes up with the most nifty things and all kinds of functionality that HAS to be implemented because you are volunteers too. In the mean time some rather basic stuff does not get done. If this drives you mad, fine. Fix the issue - help Brion implement SUL !
The promised date for SUL was end of January 2005. It is now a year late.
Thanks, GerardM
I couldn't agree with this more. Single User Login is a pretty useless feature. It was a neat dream back in 2001 or whenever passport.com came out, but it really is rather useless. But...there are so many other useful features that are being ignored because they aren't scheduled to be implemented until *after* SUL.
I really wish the board would fix the issue and tell Brion to just drop the whole idea of Single User Login. You know, Firefox (like probably all the other browsers) has this nifty feature that stores your username and password for each site, so you only have to type in your password once.
By the way, January 2005 was *two* years ago, not one.
Anthony
Hoi, I have my dates wrong then; January 2006 it is.
I do disagree that SUL is a useless feature... I operate with over 100 user profiles within the Wikimedia Foundation and I hate it. It leads to practices that are horrible from a security point of view. I am quite happy that things wait until after SUL, but I am not happy that it takes so long. I also think it bad that there is this feature creep that prevents relevant things from getting done.
I do agree that it would be good if the board helped with some prioritisation. I am afraid that many things will get put on the back burner as a result, and THIS is something that people will not like. Yet again a situation where people do not realise what they are asking for? Thanks, GerardM
Anthony wrote:
On 1/23/07, Gerard Meijssen gerard.meijssen@gmail.com wrote:
Hoi, Actually I will not lay off on Single User Login. The reason is quite simple. Some things that are important to me will not happen until AFTER Single User Login. It is all well and good that everyone comes up with the most nifty things and all kinds of functionality that HAS to be implemented because you are volunteers too. In the mean time some rather basic stuff does not get done. If this drives you mad, fine. Fix the issue - help Brion implement SUL !
The promised date for SUL was end of January 2005. It is now a year late.
Thanks, GerardM
I couldn't agree with this more. Single User Login is a pretty useless feature. It was a neat dream back in 2001 or whenever passport.com came out, but it really is rather useless. But...there are so many other useful features that are being ignored because they aren't scheduled to be implemented until *after* SUL.
I really wish the board would fix the issue and tell Brion to just drop the whole idea of Single User Login. You know, Firefox (like probably all the other browsers) has this nifty feature that stores your username and password for each site, so you only have to type in your password once.
By the way, January 2005 was *two* years ago, not one.
Anthony
Actually, digital identity management is more important than ever, and this time it looks like it's going to be done right. SUL is an important part of helping make that happen for Wikipedia and all of its attendant projects.
-- Neil
On 23/01/07, Gerard Meijssen gerard.meijssen@gmail.com wrote:
simple. Some things that are important to me will not happen until AFTER Single User Login. It is all well and good that everyone comes up with
Like what?
basic stuff does not get done. If this drives you mad, fine. Fix the issue - help Brion implement SUL !
Is Wikimedia going to pay me to do it?
The promised date for SUL was end of January 2005. It is now a year late.
Well, thank god we're not operating on the Microsoft release cycle then.
Rob Church
Hoi, Basic stuff that does not get done? To me, the fact that all the Marathi projects are doing the localisation for EVERY one of their projects is extremely basic. This is something that can be fixed. And if I can help it, I will try to get it fixed in such a way that such situations will not happen in the future. I am not a developer, and I am too busy as it is doing other things to become one.
It is clear that the Wikimedia Foundation wants to extend the number of developers. If I were doing the hiring, I would be particularly interested in people who make sure that the procedures are efficient. This would ensure that as little as possible of the developers' valuable time is spent on this type of thing. It would also mean that stupid anomalies, such as requiring people to learn developer procedures in order to get their language supported, would not exist. It is stupid because supporting projects and their languages well is the second-highest priority after the continued service of our infrastructure. Only when both these things work well is there room for new functionality.
The next thing to consider is what has priority: the continued, unabated rush to get new functions in, or whether it is wise to stop including such functionality until some of the projects that /are/ prioritised by the WMF become a reality. If this means that Brion does nothing on new functionality but SUL for a month until SUL gets done, it would be in your interest to help him in order to get your stuff into production.
You make a comparison to Microsoft; in many ways MediaWiki is on the other end of the spectrum when it comes to release management. Luckily on both ends of the spectrum great things happen. Both the Windows and the MediaWiki fan-boys have plenty to cheer about. Both have different parts of the puzzle to make for an optimal product and both have problems getting major new inventions integrated.
Thanks, GerardM
Rob Church schreef:
On 23/01/07, Gerard Meijssen gerard.meijssen@gmail.com wrote:
simple. Some things that are important to me will not happen until AFTER Single User Login. It is all well and good that everyone comes up with
Like what?
basic stuff does not get done. If this drives you mad, fine. Fix the issue - help Brion implement SUL !
Is Wikimedia going to pay me to do it?
The promised date for SUL was end of January 2005. It is now a year late.
Well, thank god we're not operating on the Microsoft release cycle then.
Rob Church
The next thing to consider is what has priority, the continued unabated rush to get new functions in, or is it wise to stop including such functionality until some of the projects that /are /prioritised by the WMF become a reality.
But a significant number of features are written by people who are volunteers. Try telling volunteers that "the WMF has not prioritized that feature", and see what happens!
If this means that Brion does not do anything but SUL for a month on new functionality until SUL gets done, it would be in your interest to help him in order to get your stuff in production.
If you want SUL done, and you take a "means-justify-the-ends" approach, then the best approach is probably to kidnap him, fly him to Antarctica, with a laptop, various test boxen, a supply of tea, and dump him by himself in an isolated hut with adequate power & food, but no internet connection whatsoever, and no phones, no post (i.e. no way of communicating with the outside world) and no other responsibilities _whatsoever_ besides doing SUL, and don't release him until it's done.
Failing that, stop asking so few people to do so much stuff, and then acting surprised when the non-urgent items get pushed back. For me, the remarkable thing is not that SUL isn't done, but rather that anything has been done about SUL at *all*.
All the best, Nick.
Nick Jenkins schreef:
The next thing to consider is what has priority, the continued unabated rush to get new functions in, or is it wise to stop including such functionality until some of the projects that /are /prioritised by the WMF become a reality.
But a significant number of features are written by people who are volunteers. Try telling volunteers that "the WMF has not prioritized that feature", and see what happens!
Neither Brion nor Tim is a volunteer, and it is perfectly reasonable for the WMF to indicate that specific things have priority. This may mean that people do not get attention for what they volunteered to develop at the moment they are done with it, and that would be perfectly reasonable, because developers are not the only volunteers. You make it sound as if it is you developers who dictate what happens. As for Brion's time, you can only get so many pork chops out of a pig. :)
If this means that Brion does not do anything but SUL for a month on new functionality until SUL gets done, it would be in your interest to help him in order to get your stuff in production.
If you want SUL done, and you take a "means-justify-the-ends" approach, then the best approach is probably to kidnap him, fly him to Antarctica, with a laptop, various test boxen, a supply of tea, and dump him by himself in an isolated hut with adequate power & food, but no internet connection whatsoever, and no phones, no post (i.e. no way of communicating with the outside world) and no other responsibilities _whatsoever_ besides doing SUL, and don't release him until it's done.
This would be completely and utterly insane. As I indicated before, the first priority is the continued running of the services. With Brion and/or Tim flown to Antarctica, these services are likely to suffer a lot. Then and only then comes new functionality, including SUL.
Failing that, stop asking so few people to do so much stuff, and then acting surprised when the non-urgent items get pushed back. For me, the remarkable thing is not that SUL isn't done, but rather that anything has been done about SUL at *all*.
We surely disagree on what has urgency. To me, the first thing after what has already been given priority is the needed improvement of the basic running of the localisation for languages like Marathi. It is ridiculous that things that are "nice to have" are given priority when things that are manifestly broken are not fixed. Given that the Wikimedia Foundation claims that it is important that people are able to edit in their language, the consequence is that this is given priority. Failing to do so is what is considered to be discrimination.
Given that Brion is the release manager, he is the one who has to look at the code that Nikerabbit has produced. It has to be accepted and implemented; the code is there, it just needs his stamp of approval. If you think that everything you produce in code should have priority over what will help operationally at the most basic level, please explain why.
Thanks, GerardM
On 23/01/07, Gerard Meijssen gerard.meijssen@gmail.com wrote:
Basic stuff that does not get done? To me, the fact that all the Marathi projects are doing the localisation for EVERY one of their projects is extremely basic. This is something that can be fixed. And if I can help
Yes, I agree; in fact, if you'd been in the #mediawiki IRC channel within the first few days of this month, you would have noticed I mentioned something about, "it would be great if we had regular updates for *every* language by the end of the year."
The next thing to consider is what has priority, the continued unabated rush to get new functions in, or is it wise to stop including such functionality until some of the projects that /are /prioritised by the
People work on what interests them. Would you reject new functionality and piss off your volunteer coders, or would you rather say, "hey, cool...now, do you reckon you could...?"
WMF become a reality. If this means that Brion does not do anything but SUL for a month on new functionality until SUL gets done, it would be in your interest to help him in order to get your stuff in production.
Actually, if I'm blunt; I have little or no interest in single user login. And it's not in my interest to do anything I don't want to do, which is why I asked the question, "Is Wikimedia going to pay me for it?" Unless there's a fat cheque coming my way, I'm not going to even consider it, because if I were to put a foot out of line and make a mistake while doing it, I'd probably be ripped to shreds.
Rob Church
Rob Church schreef:
On 23/01/07, Gerard Meijssen gerard.meijssen@gmail.com wrote:
Basic stuff that does not get done? To me, the fact that all the Marathi projects are doing the localisation for EVERY one of their projects is extremely basic. This is something that can be fixed. And if I can help
Yes, I agree; in fact, if you'd been in the #mediawiki IRC channel within the first few days of this month, you would have noticed I mentioned something about, "it would be great if we had regular updates for *every* language by the end of the year."
Great to know that more people feel strongly about this.. :)
The next thing to consider is what has priority, the continued unabated rush to get new functions in, or is it wise to stop including such functionality until some of the projects that /are /prioritised by the
People work on what interests them. Would you reject new functionality and piss off your volunteer coders, or would you rather say, "hey, cool...now, do you reckon you could...?"
Why reject new functionality? That is NOT what I want to see happen. Not at all. What I want to see happen is that certain things come to a conclusion. It is awful when, because some developer made something that is pretty, things that are basic functionality have to wait. I would not reject new functionality; I would give it its priority, and that means it will get assessed and implemented at the appropriate time. The problem is that Brion does not scale, and neither does Tim.
WMF become a reality. If this means that Brion does not do anything but SUL for a month on new functionality until SUL gets done, it would be in your interest to help him in order to get your stuff in production.
Actually, if I'm blunt; I have little or no interest in single user login. And it's not in my interest to do anything I don't want to do, which is why I asked the question, "Is Wikimedia going to pay me for it?" Unless there's a fat cheque coming my way, I'm not going to even consider it, because if I were to put a foot out of line and make a mistake while doing it, I'd probably be ripped to shreds.
There are plenty of other BASIC things that need doing. Getting the Marathi localisation from the Marathi Wikipedia is just one. As you agree that the localisation for all our languages should get regular updates, some work is needed to make sure that ALL languages have a language file, and that the procedure for new languages includes the creation of a language file. It makes the ground fertile for when Brion has the time to check out the BetaWiki software.
Thanks, GerardM
On 24/01/07, Gerard Meijssen gerard.meijssen@gmail.com wrote:
There are plenty of other BASIC things that need doing. Getting the Marathi localisation from the Marathi Wikipedia is just one. As you
You're missing the core issue, which is kind of fundamental to your original complaint.
Rob Church
Rob Church wrote:
On 23/01/07, Brion Vibber brion@pobox.com wrote:
There was some muttering at the time that just using HTTPS is safer and it's not worth the bother. Agreement? Disagreement?
What infrastructure changes would it require for us to start migrating to HTTPS, at least during the login process?
Rob Church
You will need one IP address per HTTPS server name, since you cannot virtual-host HTTPS: however, there's nothing to stop a single machine from having as many IP addresses as desired, and thus as many HTTPS servers as desired. This also means that the load balancer cannot "look into" the encrypted HTTPS connection to see where to send the traffic: it will have to go by the destination IP address. However, HTTPS can still be load-balanced; you just need a different external IP address for each visible service, and one internal IP address for each HTTPS service on each server within the load-balancing cloud.
A lot of this could, of course, be made much simpler if you only had a single HTTPS server URL for login in each project, such as https://login.wikipedia.org/ -- this would fit rather well with the unified login idea.
The servers would need to have their certificates signed by a well-known CA, or browsers will complain (as, for example, occurs with https://wikitech.leuksman.com). Images, style sheets and other media within the login pages should also be served over HTTPS, ideally from the same server, or some browsers will complain.
HTTPS has a small but significant per-session overhead to perform the initial authentication and key exchange (but not per-hit, since the authentication information can be cached between multiple connections within the same session). There is also a small extra overhead for all data passed over the encrypted connection, which has to be compressed/decompressed and encrypted/decrypted on the fly as necessary; however, this last point is going to be pretty unimportant if all you use it for is to serve login pages.
-- Neil
Neil Harris wrote:
You will need one IP address per HTTPS server name, since you cannot virtual-host HTTPS: however, there's nothing to stop a single machine from having as many IP addresses as desired, and thus as many HTTPS servers as desired. This also means that the load balancer cannot "look into" the encrypted HTTPS connection to see where to send the traffic: it will have to go by the destination IP address. However, HTTPS can still be load-balanced; you just need a different external IP address for each visible service, and one internal IP address for each HTTPS service on each server within the load-balancing cloud.
HTTPS does support virtual hosting. You can have certificates with wildcards, e.g. *.wikipedia.org, and you can even have certificates that list multiple second-level domains. In theory we could even support https://en.wikipedia.org/, by having LVS pass the traffic off to an SSL proxy cluster, which forwards to the Florida squids via a secure tunnel.
-- Tim Starling
HTTPS does support virtual hosting. You can have certificates with wildcards, e.g. *.wikipedia.org, and you can even have certificates that list multiple second-level domains. In theory we could even support https://en.wikipedia.org/, by having LVS pass the traffic off to an SSL proxy cluster, which forwards to the Florida squids via a secure tunnel.
From what I've read, HTTPS does not support name-based virtual hosting, only IP-based virtual hosting, as the SSL connection happens before the request is made. So, if you have two virtual hosts such as:
<Virtualhost en.wikipedia.org> ... </Virtualhost>
<Virtualhost de.wikipedia.org> ... </Virtualhost>
The first virtual host will always be used as the server has absolutely no clue, when the connection is made, which virtual host it needs to use.
I believe this is the point Neil was trying to make.
V/r,
Ryan Lane
Lane, Ryan wrote:
HTTPS does support virtual hosting. You can have certificates with wildcards, e.g. *.wikipedia.org, and you can even have certificates that list multiple second-level domains. In theory we could even support https://en.wikipedia.org/, by having LVS pass the traffic off to an SSL proxy cluster, which forwards to the Florida squids via a secure tunnel.
From what I've read HTTPS does not support name-based virtual hosting, only IP based virtual hosting, as the SSL connection happens before the request is made. So, if you have two virtual hosts such as:
<Virtualhost en.wikipedia.org> ... </Virtualhost>
<Virtualhost de.wikipedia.org> ... </Virtualhost>
The first virtual host will always be used as the server has absolutely no clue, when the connection is made, which virtual host it needs to use.
I believe this is the point Neil was trying to make.
The same is true of unencrypted HTTP. The server has no idea, when the connection is made, what virtual host will be required. That only comes when the client sends the Host header.
The issue is that with HTTPS, the server has to send a certificate before the client sends the Host header. The certificate has to match the hostname.
See http://wiki.cacert.org/wiki/VhostTaskForce for a discussion of the various ways of achieving this.
-- Tim Starling
Tim Starling wrote:
Lane, Ryan wrote:
HTTPS does support virtual hosting. You can have certificates with wildcards, e.g. *.wikipedia.org, and you can even have certificates that list multiple second-level domains. In theory we could even support https://en.wikipedia.org/, by having LVS pass the traffic off to an SSL proxy cluster, which forwards to the Florida squids via a secure tunnel.
From what I've read HTTPS does not support name-based virtual hosting, only IP based virtual hosting, as the SSL connection happens before the request is made. So, if you have two virtual hosts such as:
<Virtualhost en.wikipedia.org> ... </Virtualhost>
<Virtualhost de.wikipedia.org> ... </Virtualhost>
The first virtual host will always be used as the server has absolutely no clue, when the connection is made, which virtual host it needs to use.
I believe this is the point Neil was trying to make.
The same is true of unencrypted HTTP. The server has no idea, when the connection is made, what virtual host will be required. That only comes when the client sends the Host header.
The issue is that with HTTPS, the server has to send a certificate before the client sends the Host header. The certificate has to match the hostname.
See http://wiki.cacert.org/wiki/VhostTaskForce for a discussion of the various ways of achieving this.
-- Tim Starling
And, once the various browsers and CAs have all sorted out the different ways of generating and handling these wildcard and multi-name certs in a fully interoperable way, the problem will have been solved. But for the moment, I believe that this currently isn't the case: although, if there is a way to do this across all the currently-deployed browsers, I'd be interested to hear about it.
-- Neil
Neil Harris wrote:
Tim Starling wrote:
[...]
See http://wiki.cacert.org/wiki/VhostTaskForce for a discussion of the
[...]
And, once the various browsers and CAs have all sorted out the different ways of generating and handling these wildcard and multi-name certs in a fully interoperable way, the problem will have been solved. But for the moment, I believe that this currently isn't the case: although, if there is a way to do this across all the currently-deployed browsers, I'd be interested to hear about it.
The interoperability table in the article I linked to has three methods that work in the current versions of the major browsers, two of which work in the older versions as well, and one of which that works with every client they tested. What's the problem?
-- Tim Starling
Tim Starling wrote:
Neil Harris wrote:
Tim Starling wrote:
[...]
See http://wiki.cacert.org/wiki/VhostTaskForce for a discussion of the
[...]
And, once the various browsers and CAs have all sorted out the different ways of generating and handling these wildcard and multi-name certs in a fully interoperable way, the problem will have been solved. But for the moment, I believe that this currently isn't the case: although, if there is a way to do this across all the currently-deployed browsers, I'd be interested to hear about it.
The interoperability table in the article I linked to has three methods that work in the current versions of the major browsers, two of which work in the older versions as well, and one of which that works with every client they tested. What's the problem?
-- Tim Starling
Both methods have their own drawbacks and advantages.
The problem with multi-name certs is all the other platforms: mobile phones and embedded web browsers, HTTP client libraries for Java/Python/Perl etc., bizarre things like man-in-the-middle HTTPS proxies (yes, they're out there)... until you've tried it, you don't know if it's going to work, unlike using IP addresses to select the cert to be sent, which is known to work for all HTTPS-speaking clients.
Of course, this doesn't mean you shouldn't do the multi-name cert thing; the wiki page suggests it should just work for all the major browsers, which implies that it should work for something like 99% of all web users. It's just going to need a lot of testing before it goes live, and then a fallback to plain HTTP login for all clients which cannot handle it, most probably based on user-agent strings.
-- Neil
Brion Vibber wrote:
There was some muttering at the time that just using HTTPS is safer and it's not worth the bother. Agreement? Disagreement?
Absolutely agreed. Not being able to deal with the computational cost of SSL is the only convincing reason to try and use JavaScript hackery to do a more secure login, and I don't think that's a valid concern at Wikipedia these days. If you find a designated SSL login machine becomes CPU-bound, I can recommend PCI SSL accelerator cards that'll be happy to take over the work.
Brion Vibber wrote:
There was some muttering at the time that just using HTTPS is safer and it's not worth the bother. Agreement? Disagreement?
Probably HTTPS is safer, though JS challenges are easier to implement. Still, the HTTPS server needs to send the user back to the 'normal' page with some token, as it can't set the logged-in cookie itself. The HTTP protocol was enhanced with some response codes for 'changing to secure mode', so it might be feasible to produce the login over HTTPS with the same server, but I don't know the state of current implementations (both client and server); it could be tricky.
As SUL will change the authentication schema, close accounts, etc., it IS the appropriate moment to change hashes on the 'joined' accounts, set up a login HTTPS server, etc.