Hi all,
I wanted to get some input from you all about any ideas or plans you have for identifying OAuth users in your applications. tl;dr: since lots of people want to do authentication with OAuth, I'm thinking we'll implement a custom way to get identity information from the wiki in the near term, and then probably try to implement the OpenID Connect extension to OAuth 2 sometime next year.

Since we started rolling out OAuth, we've consistently had requests to use OAuth to authenticate users in other applications, based on their wiki identity. OAuth does not support this, since the results of an api call using OAuth signatures aren't signed (only the request from the OAuth consumer is signed), so it's possible that an attacker could forge a response back to the application, and the application would think a different user was logged in. This is less likely if you're using https for your api calls, but it's surprisingly hard to get https right [1], even if you trust all your CAs.

We were planning to roll out OpenID this fall to authenticate users, while OAuth is used for authorizing access into the wiki. Wikinaut has been working hard on the OpenID extension, but it's not quite ready to deploy yet. Additionally, with this setup, applications like Snuggle and UTRS that want to both authenticate users and use OAuth for authorized api requests have to implement both OpenID and OAuth libraries, which is a pain. This common issue is being addressed by the OpenID Connect extension to OAuth 2, which allows the application to request information about the person doing the authorization, and the result is signed by the server to prevent tampering. The standard is still a draft, but it seems like the correct solution to our pain point, so I think that's the direction we'll head once the standard is finalized and we have some time to throw at it.
In the meantime, I'm also thinking we may implement something similar, where an OAuth consumer can request a signed assertion about the authorizing user. It would probably use the same format that OpenID Connect uses (JWT [2]), and use the consumer secret for the HMAC signature. This would let the application check that a user has a specific set of permissions at the time of the call, and would require a lot less development effort. Does this seem like a reasonable tradeoff? Assuming we go this direction, what attributes about the wiki user account should be provided? I was planning on username, whether the account is autoconfirmed, maybe number of edits, and the list of groups to which the user belongs. Anything else?

[1] http://blog.astrumfutura.com/2010/10/do-cryptographic-signatures-beat-ssltls...
[2] http://tools.ietf.org/html/draft-ietf-oauth-json-web-token-12
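To make the proposal concrete, here's a minimal sketch of what issuing and checking such an assertion could look like, using only HMAC-SHA256 over the JWT wire format. The claim names (`username`, `autoconfirmed`, `editcount`, `groups`) and the helper functions are illustrative — the exact claim set is what's being asked about, not something already decided:

```python
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT draft requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def make_assertion(claims: dict, consumer_secret: str) -> str:
    """Build a JWT signed with HMAC-SHA256 using the consumer secret."""
    header = b64url(json.dumps({"typ": "JWT", "alg": "HS256"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = (header + "." + payload).encode("ascii")
    sig = hmac.new(consumer_secret.encode(), signing_input, hashlib.sha256).digest()
    return header + "." + payload + "." + b64url(sig)


def verify_assertion(token: str, consumer_secret: str) -> dict:
    """Recompute the HMAC and return the claims; raise if the signature is bad."""
    header, payload, sig = token.split(".")
    signing_input = (header + "." + payload).encode("ascii")
    expected = b64url(hmac.new(consumer_secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

The point of the pre-established consumer secret is that the application can verify the assertion offline, with a few lines like the above, instead of trusting whatever came back over the transport.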
Hi Chris,
On 22 October 2013 05:45, Chris Steipp csteipp@wikimedia.org wrote:
OAuth does not support this, since the results of an api call using OAuth signatures aren't signed (only the request from the OAuth consumer is signed), so it's possible that an attacker could forge a response back to the application, and the application would think a different user was logged in. This is less likely if you're using https for your api calls, but it's surprisingly hard to get https right [1], even if you trust all your CAs.
(...)
This common issue is being addressed by the OpenID Connect extension to OAuth 2, which allows the application to request information about the person doing the authorization, and the result is signed by the server to prevent tampering.
(...)
I'm a bit confused by this -- I was under the impression https would be enough to confirm I'm actually talking to the WMF's servers. The main argument in [1] against just using https seems to be it's easy to ignore invalid certificates. Is there another reason why it's dangerous to assume you're talking to mw.o if the certificates check out?
Basically, I'm not quite sure whether using OIDC will help alleviate this problem - you get a response back, but you still have to check the signature! And with the ease of not checking the signature, you're basically back to the same problem with not checking the ssl certificate.
Nonetheless, I think it's useful to add an authentication mechanism that follows a standard - which is clearly not the case with the current 'api.php?meta=userinfo' calls.
Merlijn
[1] http://blog.astrumfutura.com/2010/10/do-cryptographic-signatures-beat-ssltls...
On Tue, Oct 22, 2013 at 1:57 AM, Merlijn van Deen valhallasw@arctus.nl wrote:
Hi Chris,
On 22 October 2013 05:45, Chris Steipp csteipp@wikimedia.org wrote:
OAuth does not support this, since the results of an api call using OAuth signatures aren't signed (only the request from the OAuth consumer is signed), so it's possible that an attacker could forge a response back to the application, and the application would think a different user was logged in. This is less likely if you're using https for your api calls, but it's surprisingly hard to get https right [1], even if you trust all your CAs.
(...)
This common issue is being addressed by the OpenID Connect extension to OAuth 2, which allows the application to request information about the person doing the authorization, and the result is signed by the server to prevent tampering.
(...)
I'm a bit confused by this -- I was under the impression https would be enough to confirm I'm actually talking to the WMF's servers. The main argument in [1] against just using https seems to be it's easy to ignore invalid certificates. Is there another reason why it's dangerous to assume you're talking to mw.o if the certificates check out?
That's correct. The issue is more that we (the security community) keep finding code out there that doesn't correctly handle the verification (http://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf was one of the popular surveys of the subject). It's often the underlying libraries at fault (errors parsing the certificates, or the revocation lists, that fail open), or common programming mistakes (like how MediaWiki set CURLOPT_SSL_VERIFYHOST to true, instead of 2, for a very long time). But if you accept that your current libraries are probably flawed, and so you keep your libraries up to date, and you're careful about how you're doing the verification at the application layer, you *should* be fine.
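As a Python analogue of the curl mistake above (this is an illustration of the class of bug, not code from any of the applications discussed): the safe path is to use a context that enforces *both* chain validation and hostname checking, since the common failure mode is code that silently disables one of the two.

```python
import ssl

# ssl.create_default_context() (Python 3.4+) turns on both certificate-chain
# validation and hostname verification. The CURLOPT_SSL_VERIFYHOST=true bug
# was exactly this split: the chain was validated, but the hostname check
# was weakened, so a valid certificate for *any* site would be accepted.
context = ssl.create_default_context()

# Both of these must hold for https to actually authenticate the peer:
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# A request made through this context fails loudly on a bad certificate
# instead of failing open, e.g. (not executed here):
# urllib.request.urlopen("https://meta.wikimedia.org/w/api.php", context=context)
```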
Basically, I'm not quite sure whether using OIDC will help alleviate this problem - you get a response back, but you still have to check the signature! And with the ease of not checking the signature, you're basically back to the same problem with not checking the ssl certificate.
Correct. Hopefully, applications that really need to know the identity of a user (like UTRS) will go through the bother of checking the signature (in both OpenID Connect, and the intermediate solution I'm proposing, this is an HMAC signature using a pre-established secret, so it should be easy enough that the effort is worth the security).
Nonetheless, I think it's useful to add an authentication mechanism that follows a standard - which is clearly not the case with the current 'api.php?meta=userinfo' calls.
Merlijn
[1]
http://blog.astrumfutura.com/2010/10/do-cryptographic-signatures-beat-ssltls...

_______________________________________________
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
I am basically interested only in oauth that can be used by remote applications / processes running on user's PC, which isn't available yet
On Tue, Oct 22, 2013 at 7:18 PM, Chris Steipp csteipp@wikimedia.org wrote:
(...)
On Tue, Oct 22, 2013 at 10:33 AM, Petr Bena benapetr@gmail.com wrote:
I am basically interested only in oauth that can be used by remote applications / processes running on user's PC, which isn't available yet
This is the second most requested feature that we don't support yet.
We've been looking at options for it. All solutions would basically require we make a second class of OAuth Consumers. This has precedent: OAuth 2 makes the distinction between "confidential" and "public" consumers, and Twitter's xAuth has to be specifically enabled on your OAuth Consumer. We're debating making a similar distinction for our OAuth Consumers, but we don't want to get into the situation where we need to give lots of caviots to our users that, "Yes, this OAuth thing is secure, as long as Consumers of this type are doing these things, but these other ones also need to do these other things...".
On Tue, Oct 22, 2013 at 7:18 PM, Chris Steipp csteipp@wikimedia.org wrote:
(...)
Do you realize that these applications are asking users for their password at this moment? That seems to me even worse than oauth with these "caviots"
On Tue, Oct 22, 2013 at 7:54 PM, Chris Steipp csteipp@wikimedia.org wrote:
(...)
Petr Bena wrote:
Do you realize that these applications are asking users for their password at this moment? That seems to me even worse than oauth with these "caviots"
Which applications are asking users for their password?
The only partial example I can come up with off-hand is AutoWikiBrowser, though I believe that passed the credentials through Internet Explorer or something and then only used the cookies/session? I'm not sure it stores the username and password directly, though it's been a long time since I've used it and I certainly can't say for sure.
I'd be interested to know which applications you're referring to.
MZMcBride
P.S. s/caviot/caveat/ :-)
On 10/21/2013 08:45 PM, Chris Steipp wrote:
Hi all,
I wanted to get some input from you all about any ideas or plans you have for identifying OAuth users in your applications. tl;dr: since lots of people want to do authentication with OAuth, I'm thinking we'll implement a custom way to get identity information from the wiki in the near term, and then probably try to implement the OpenID Connect extension to OAuth 2 sometime next year.
+1. I especially like the part about being able to verify signed assertions and identity without hitting the DB, which is very useful for high-volume APIs.
Does this seem like a reasonable tradeoff? Assuming we go this direction, what attributes about the wiki user account should be provided? I was planning on username, whether the account is autoconfirmed, maybe number of edits, and the list of groups to which the user belongs. Anything else?
It would be great if the JWT could carry the same authorization info as is retrieved from the DB in OAuth 1. I'm especially interested in knowing whether a user can read, edit, etc. a given wiki page, whether it is a bot account, etc. We could perhaps derive this information from a list of groups by exposing the per-group rights through an internal JSON API, and then expanding groups to rights in an API endpoint using that information.
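The group-to-rights expansion could be sketched like this; the rights table here is a made-up stand-in for whatever the hypothetical internal JSON API would expose (the group and right names are illustrative, not a complete MediaWiki configuration):

```python
# Hypothetical per-group rights table, standing in for data fetched from
# the internal JSON API suggested above. Group and right names are
# illustrative only.
GROUP_RIGHTS = {
    "*": {"read"},                      # implicit group: applies to everyone
    "user": {"read", "edit", "createpage"},
    "autoconfirmed": {"move", "editsemiprotected"},
    "bot": {"bot", "noratelimit"},
    "sysop": {"delete", "block", "protect", "editprotected"},
}


def rights_for(groups):
    """Expand a JWT 'groups' claim into the union of those groups' rights."""
    rights = set(GROUP_RIGHTS["*"])     # everyone gets the '*' rights
    for group in groups:
        rights |= GROUP_RIGHTS.get(group, set())
    return rights
```

This keeps the per-request check local to the consumer: the signed JWT carries only the small `groups` list, and the (cacheable) rights table turns that into concrete permissions without a DB hit per call.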
Gabriel