Wednesday, February 27, 2013

URL detection with location.hash and a history timing attack. I know your Facebook username.

Meanwhile I'm working hard on Pagebox. The XHR proxy is done; looking forward to your feedback

TL;DR there is a way to detect the current URL of an iframe or window by assigning it TRY_URL# - if our guess was right, the page won't be reloaded. Not very practical and pretty slow, but it's still a (minor?) vulnerability in the standard.

UPDATE I found a better way: a hash assignment changes the history object much faster than a normal, "reloading" assignment, so we use a timing attack to figure out whether TRY_URL == REAL_URL. Works for all websites quite reliably.

1) set up a data: URL checker page, which can postMessage some data to its opener
2) redirect the window to a /redirector path which will redirect to /real_path
3) try to assign /try_path# - if it equals /real_path, the history object is changed right away. So execute history.go(-1) and see what happens
4) if it redirects to the data: page, then real_path != try_path. Otherwise, set a setTimeout for 100 ms and you'll realize you found real_path: history.back() just removes the new #, it doesn't go back to the data: URL
5) works for any website, https too. I demonstrate it on Facebook

I can't say this vulnerability is very major or severe. To some extent it's not a vulnerability at all - it's standard behavior that will not be changed in the near future. Severity depends on what critical information is exposed in your URLs.

Yes, the trick is perfectly legal per the standard, but here I prove that it's a vulnerability!

  • location.hash can be used as a data transport, and this is quite well known. If you assign location=SAME_URL#new_hash to an iframe/window location, where SAME_URL is the current path, only the hash changes: the page itself is not reloaded.
  • onload is not fired if you change the hash this way
  • a redirect from path#hash -> new_path#hash carries #hash over to the new path automatically.
  • when you assign location=SAME_URL#SAME_HASH nothing new is pushed onto the history object
  • blocked popup windows in Chrome are full-featured windows that simply run in the background. They are not "blocked" at all.

The idea is simple: we can check whether the URL equals our TRY_URL by assigning location="TRY_URL#". If the onload event fires, the answer is "No". Otherwise a setTimeout can let us know that TRY_URL was the right guess!
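The basic probe can be sketched like this (browser-only sketch; the helper name and the 100 ms timeout are my assumptions, and the target must be frameable):

```javascript
// Sketch of the basic onload probe.
// Assigning "TRY_URL#" to a frame already showing TRY_URL only changes the
// fragment, so no reload happens and onload never fires. A wrong guess
// triggers a real navigation and fires onload.
function probeUrl(frame, tryUrl, callback) {
  var reloaded = false;
  frame.onload = function () {        // fires only on a real navigation
    reloaded = true;
    callback(false);                  // TRY_URL was a wrong guess
  };
  frame.contentWindow.location = tryUrl + '#';
  setTimeout(function () {
    if (!reloaded) callback(true);    // no reload within 100 ms: URL matched
  }, 100);
}
```

Each probe costs up to one reload plus the timeout, which is why the technique is slow when brute-forcing many candidate URLs.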

Theoretical showcase: a Sinatra app and detection of the ?id=95 path

Real-world showcase... Facebook again, maybe?
I will detect your current username. Wait, I cannot use frames - FB has X-Frame-Options! And when we use popup windows we cannot set onload for cross-origin windows. Too bad. But I didn't give up and used a timing attack!

Here is a very simple PoC that tells whether your real handle equals TRY_HANDLE.

Saturday, February 23, 2013

Pagebox — sandboxing XSS attacks.

View the FAQ and proof of concept (Sinatra app) on Github. Here I explain the idea and the problems I stumbled upon.

Demo online

Pagebox is a technique for building bullet-proof web apps; it can dramatically improve XSS protection for complex, multi-layer websites with a huge attack surface.
The web is not super robust in general (cookies, clickjacking, frame navigation, CSRF etc.), but XSS is its Achilles' heel: it is a shellcode for the entire domain.

If /some_path is proven vulnerable to XSS, we can interact with the whole website. This might lead to bad things happening - for example, malicious XSS on /some_path may attempt to, damn it, /withdraw our money. Pagebox presents a solution to this problem.

Sandboxed pages

The idea I want to implement is to make every page independent and secure from other vulnerable pages located on the same domain.

Developers can limit a particular page's interactions with others, either by blacklisting things that clearly shouldn't be done, or by taking a more restrictive approach and whitelisting only the allowed things.

Pagebox splits the entire website into many sandboxed pages with unique origins. No page is accessible from other pages unless the developer explicitly allowed it: you simply cannot <iframe> other pages on the same domain and extract document.body.innerHTML, because of the CSP sandbox directive.
Every page contains a signed, serialized object in a <meta> tag and sends it along with every XMLHttpRequest and <form> submission. The meta tag contains signed information about what is permitted for this URL.
Boxed pages are assigned their pagebox scope, which limits them to particular kinds of requests (say you're viewing the messages page: you only have the :messages scope, which means only message querying/creation is allowed; an attempt to access any other part would be denied). The backend checks whether the permissions were tampered with; if they're intact and the action is allowed, it processes the request. To some extent, it's Cross Page Request Forgery protection.

A pagebox can look like: ["follow", "write_message", "read_messages", "edit_account", "delete_account"]. Or it can be more high-level: ["default", "basic", "edit", "show_secret"], along with the permitted URLs.


Now a page can only submit forms, but XHR with CORS doesn't work properly - nobody anticipated it would be used this way. I'm stuck with XHR-with-credentials and I need your help and ideas.
1) Every page is sandboxed and we cannot add 'allow-same-origin', because that would re-enable DOM interactions
2) When we sandbox a page it gets the unique origin 'null'. When we make requests from 'null' we cannot attach credentials (cookies), because the wildcard is not allowed in Access-Control-Allow-Origin for with-credentials requests: when responding to a credentialed request, the server must specify an exact domain and cannot use wildcarding.
3) Neither * nor null works for Access-Control-Allow-Origin here. So XHR with cookies is not possible from sandboxed pages.
4) I tried a non-sandboxed /pageboxproxy iframe, which would make the request from a non-sandboxed page and return the result with postMessage, but framing a non-sandboxed page under a sandboxed one doesn't work either.
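The CORS dead end above, spelled out as the response headers a credentialed XHR from a sandboxed page would need (illustrative only, not an actual Pagebox response):

```
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *       # forbidden for credentialed requests
Access-Control-Allow-Origin: null    # useless: every sandboxed page on the
                                     # web (including an attacker's) is "null"
```

So there is no header value that authorizes only our own sandboxed pages.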
I don't know how to fix this, but I really want to make the pagebox technique work. It fixes the Internet.

Feel free to troll ask me questions in the comments - I'm happy to help! Please share your ideas on how to make it real.

Tuesday, February 19, 2013

How we hacked Facebook with OAuth2 and Chrome bugs

TL;DR We (me and @isciurus) chained several different bugs in Facebook, OAuth2 and Google Chrome to craft an interesting exploit. MalloryPage can obtain your signed_request, code and access_token for any client_id you previously authorized on Facebook. The flow is quite complicated, so let me explain the bugs we used.

1. A bug in the Google Chrome XSS Auditor, leaking document.referrer.
Three weeks ago I wrote a disclosure post on referrer leakage for pages served with X-XSS-Protection: 1; mode=block. Please read the original post to understand how it works. When the Auditor blocks a page, it redirects to about:blank (about:blank always inherits the parent's origin), and we can access document.referrer containing the previous URL the Auditor just blocked. Facebook had the '1; mode=block' header. Now it's 0; because of us (the Auditor is dangerous; new vulns will be posted soon). Sadly, this bug report was marked sev=low by the Chrome security team and no bounty was granted.
It's not patched yet.

2. OAuth2 is... quite an unsafe auth framework. Gigantic attack surface, all parameters passed in the URL. I will write a separate post about OAuth1 vs OAuth2 in a few weeks. The threat model is bigger than in the official docs.
In August 2012 I wrote a lot about common vulnerabilities-by-design and even proposed a fix: OAuth2.a.

We used 2 bugs: a dynamic redirect_uri and a dynamic response_type parameter.
response_type=code is the most secure authentication flow, because the end user never sees his access_token. But response_type is just another parameter in the authorize URL. By replacing response_type=code with response_type=token,signed_request we receive both the token and the code on our redirect_uri.

redirect_uri can be not only the app's own domain; a path on Facebook's domain is also allowed.
In our exploit we used response_type=token,signed_request&redirect_uri=FB_PATH where FB_PATH was a specially crafted URL to disclose these values...

3. location.hash disclosure on
For response_type=token the provider sends the access token in the location fragment (aka location.hash) to avoid leaking data via referrers (location.hash is never sent in the Referer header).
@isciurus found a "bouncing" hashbang in September 2012. The trick was: Facebook removes '#' from URLs containing "#!" (the AJAX Google-indexation trick), which boils down to copying location.hash into the URL and discloses the access token in document.referrer.
Later, in January, he found another bypass of the "fixed" vulnerability, using %23 instead of #.

Here we go - the PoC; look at the source code.

cut_me - the Custom Payload we used to make the Auditor block the final page. We put it in the 'state' parameter (used to prevent CSRF, as you must know!)
target_app_id - the client_id we want to steal the access_token and code from. In a "real world" exploit we would use the 100-200 most popular Facebook applications and just gather all the available tokens. It would be awesome.
sensitive_info - the tampered response_type parameter: signed_request and token are the Private Info we are going to leak through document.referrer
Now the final URL:
url = "" + target_app_id + "&response_type="+sensitive_info+"&display=none&"+cut_me+"&sdk=joey";

Value will look like:'BigPipe'))(%7B%22lid%22%3A0%2C%22forceFinish%22%3Atrue%7D)%3B%3C%2Fscript%3E&sdk=joey


1) We open 25 windows (the maximum number of allowed windows in Chrome) with different target_app_ids. Gotcha: Chrome DOES load the URL even when it blocks a popup. This makes the exploit even cooler: we open 25 windows, all of them are blocked but loaded, the Auditor blocks the Custom Payload, we grab document.referrer, and the user is not scared at all.

2) If the user previously authorized a given app_id, he will be automatically redirected to

3) Here Facebook's javascript removes '#' from the URL and redirects the user to FB_PATH/...?signed_request=SR&access_token=TOKEN&state=CUSTOM_PAYLOAD

4) Now the server responds with an HTML page and
X-XSS-Protection: 1; mode=block
The Chrome XSS Auditor detects state=CUSTOM_PAYLOAD in the HTML of the response:
<script>var bigPipe = new (require('BigPipe'))({"lid":0,"forceFinish":true});</script>
then blocks the page and redirects to about:blank

5) On MalloryPage we have a setInterval which waits for location.href == 'about:blank'.
about:blank inherits MalloryPage's origin, so we have access to document.referrer. The final routine:

var ref = playground.document.referrer;
window.token = ref.match(/token=([^&]+)/);
window.token = window.token[1];
document.write('<script src="' + window.token + '"><' + '/script>');
var hello = function(data){
  alert('Whats up ' + + ', your token is ' + window.token);
};

Voila! Using this exploit we can obtain the code, signed_request and access_token for any client.

After party.
We are splitting a $2500 + $2500 bounty from Facebook and working on new attacks.

You really must check out the coming-soon article I promised to write in a few weeks, explaining how broken OAuth2 is.
For example, if you authenticate users with Facebook, any XSS on your website can steal a user's account. Currently I'm discussing with the Facebook security team and proposing new ways to handle it and make response_type=code more secure, because they are the biggest provider and their decisions matter. If we don't fix it, it's The Road To Hell!

By the way, there is another sev=medium vulnerability in the Chrome Auditor; it will be published as soon as it is patched :)


Friday, February 15, 2013

Are you sure you use JSONP properly?

This is a friendly reminder about an old problem with JSONP.

We need JSONP to receive some data on domain1 from domain2 despite cross-origin restrictions. We put <script src="domain2?callback=CALLBACK"></script> where CALLBACK is the name of the Javascript function that handles the received data. Here comes the rule: JSONP Should Never Be Used For Personal Data (unless you have a special token, different for every user).

Knowing "who I am" is a secret. When I visit a new website it's not supposed to know who I am: my email, name or sometimes my private messages. With JSONP a common vector is:
<script src=data_provider/me.json?callback=leak></script>
By inserting this script, any website can automatically receive information about me from data_provider.

Some showcases:
Disqus (the api_key is the same for all users):

Disqus refused to fix it, explaining that "the exposed data is not private", lol, but it's still personal.

Why does this happen? Sometimes JSONP for private data is applied on purpose - this means the developers are bad at web security. A more interesting case is when, for example, a team adds Rack::JSONP middleware for Feature1, but the middleware is designed to add a callback to all JSON responses. You should know your stack well.
An API must not detect the user by his cookies (see the post about cookies' "broken" nature). If you "proxy" your API through your website, make sure to add an additional CSRF token in the headers. Yes, for GET too, because the CSRF token's main goal is to protect your cookies. This is not about POST vs GET; this is about your cookies == credentials.

By the way, people often forget to filter and sanitize the "callback" parameter. It can lead to XSS in IE6-8, because those browsers detect the content type from the file extension:

It's also a handy bypass of Content Security Policy (when it denies 3rd-party script sources): <script src="local.json?callback=payload//">

I recommend allowing only [a-zA-Z0-9] callbacks and using HTML5 cross-domain sharing techniques rather than this insecure trick. Thanks for reading, share your ideas!
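The whitelist recommendation above, as a minimal sketch (the function name and fallback are my choice, not from any library):

```javascript
// Allow only short alphanumeric JSONP callback names;
// anything else falls back to a fixed default.
function sanitizeCallback(name) {
  return /^[a-zA-Z0-9]{1,64}$/.test(name) ? name : 'callback';
}
```

sanitizeCallback('leak') passes through, while sanitizeCallback('payload//') or anything containing tags or slashes is replaced, killing both the IE content-sniffing XSS and the CSP-bypass vector.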

Monday, February 11, 2013

Rails Vulnerabilities: Learning The Lesson

previous: Rails is [Fr]agile. Vulnerabilities Will Keep Coming
JSON exploit:

Today we got 3 new vulns; IMO only the JSON-related one is dangerous, but I want to look at the lesson we should all learn.

attr_protected should be removed entirely because it easily leads to DoS.
I guess this method is extremely unpopular anyway - nobody cares.

use the /m flag if you want . to match newlines too:
"aa(cc\nbb".gsub(/\(.+/, '')
 => "aa\nbb"   # without /m the match stops at the newline
"aa(cc\nbb".gsub(/\(.+/m, '')
 => "aa"

Wasn't it absolutely obvious to keep YAML.load away from user input? It was, so why were rubygems (Gemfile.lock) and 'serialize' waiting for their "personal" exploits? I knew about the potential vuln through SQL injection, but I was on the Rails 4 codebase. Even direct assignment leads to RCE in older Rails. This is bad and silly :(

Don't Put Magic Params Into Magic Methods
just a few examples (you have probably read them if you're following @homakov)

Any parameter in a Rails app can be an Integer, Date, Time, StringIO, Boolean, BigDecimal, Float, Array or Hash, thanks to alternative inputs and the Rack query parser!

1) validates :field, length: 2..10
is not safe, because the param can be an array: field[]=123123123123123123123&field[]
2) redirect_to user_input
can lead to XSS if user_input[status]=200&user_input[protocol]=javascript:...
3) create (and create!) accepts arrays and can create thousands of records with one call! Combined with JSON params, your service will be full of spam.
4) your mailer can be abused, because :to accepts an array, and it can be a long array of other people's addresses.
5) MySQL compares a "string" with an integer by casting the string to 0. Yes, "randomtoken" == 0 for it. Thanks @joernchen for the find, but it's WONTFIX. 0 can easily be passed via JSON and XML.

If you ask me what I trust in, I reply: God JSON. It looks like such a simple format, so reliable and obvious. Until today. Today I learned that the "json_class" attribute makes the JSON parser const_get the value. This is nonsense. JSON::GenericObject is double nonsense. Sad panda.

There are some vulns fixed in Rails 4 that were not backported to old versions. For example escape_html_entities_in_json: if you do JSON.dump(user_input) it can lead to XSS in < 4.
And CSRF protection for the routes.rb match method.
And default_headers to prevent clickjacking.
And a bunch of other stuff.

The Rails codebase contains too many features not used in everyday development: multiparameters, alternative inputs, aliases (an alias in a named route collection leading to RCE) and so on. It needs hardening to reduce this enormous attack surface.

P.S. Don't blame Rails again. You can blame Rails for mass assignment (I'm kidding, blame yourself for that), but for the RCEs, blame the JSON/YAML gems.

So far, Rails itself is pretty safe.

Thursday, February 7, 2013

Rethinking Cookies: originOnly

Hacker news, reddit
TL;DR this post is full of pain, theory and utopia. Don't bother reading it if you don't care about a better-designed web.

What are cookies now? A custom set of data, sent with every request from client to server in HTTP headers, mostly used for authentication. Client-side javascript can set cookies, but this is a rare use case. Usually cookies are set server-side and contain a special string that determines who the user is: a signed session, a session_id, whatever.

All client-side vulnerabilities in web security exist because of cookies.

I repeat:

Cookies. They made web broken.

Problem 0 [solved, unrelated]: tampering. 

You can sign the cookie with a session_secret, or store only a random string associated with a server-side session, to prevent tampering.

Problem 1 [solved]: XSS (Cross Site Scripting)

XSS can steal cookies because Javascript has access to them. This is why we invented the 'httpOnly' flag, which disallows reading an httpOnly cookie from javascript. There is also the similar 'secure' flag, which transfers cookies only over a secure connection (against MITM).
We invented httpOnly to make critical auth information inaccessible to the client side and to XSS. We were too stupid to make everything httpOnly by default, with only the few cookies really needed on the client side marked httpAccessible or something.
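For reference, this is what today's opt-in protection looks like in a response header (illustrative cookie name and value):

```
Set-Cookie: session=abc123; HttpOnly; Secure
```

Both flags must be remembered and added by the developer; nothing is safe by default.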

Problem 2 [worked around]: Clickjacking, framing.

A framed website receives your cookies. There would be nothing bad in framing any website if auth cookies were not sent along.

We invented X-Frame-Options to deny "showing" the actually *received* private information. Cool, huh? A response with the message: "don't read this if you're the wrong person". And, look at some PoC.
Bonus question: why isn't ClearClick (which checks the visibility of the frame on click) built-in? It is so easy, isn't it... But Google has AdSense. AdSense is a multibillion business and they have powerful tools to detect clickjacking: people, bots, data mining. Other companies (newbies in web advertising) are easily clickjackable. It's called competitive advantage.

Problem 3 [worked around]: CSRF 

My old articles: CSRF Is Vulnerability in All Browsers and CSRF in 15 Popular Websites
What is CSRF, in a nutshell? Is it just a request from domain1 to domain2? No: if you wanted just such a request you could use curl. The core 'benefit' of CSRF is the use of the user's cookies for domain2.
This is how cookies have worked since their creation: you set them once and they are sent automatically, from all hosts/origins and for all kinds of requests (<script>, <img>, <iframe>).

Requests are always sent with your cookies. This is the very core, key problem.

We invented the CSRF token to make sure the cookie was sent from the proper origin intentionally, not from an unknown malicious website. This is just so damn stupid: a token to protect another, automatically-sent token.

The auth cookie is a "key"; the CSRF token is "another key", a proof that the "key" was used intentionally. U MAD

Problem 4 [not solved]: Advanced CSRF

Remember the JSON leaks? Discovered 4-5 years ago, and on Hacker News again yesterday.
It all happened because the information was actually received by the malicious website; all it needed was a trick to extract it. Previously people could redefine the Array constructor. Who knows what's next? Stealing plain-text tokens through the ECMAScript Proxy object (an analog of method_missing)? Through styleSheet definitions? Through <audio>/<video> streams or canvas? The about:blank referrer? Yet another Flash vuln?

Jesus, I'm tired of this.

Maybe we are doing it all wrong..?
It's never too late to make something right.
Don't use my cookies until I decide to use them myself.

originOnly flag

With such a flag the cookie will be sent only when the request's origin equals the cookie's domain. If it was set on, it is sent only if the element (<img>, <form>, <script>, style, <iframe> etc.) or XHR was executed on this domain, or if the user navigated to this website directly (opened it in a new tab or clicked a link somewhere else).
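In Set-Cookie terms the proposal would read something like this (hypothetical syntax; no browser implements originOnly):

```
Set-Cookie: session=abc123; Secure; HttpOnly; originOnly
```

One extra attribute, and the cookie stops riding along on cross-origin requests by default.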

Why did I write this?
I want the web to be ideal. I want the web to be like the Mona Lisa: perfect, or close to it (in terms of architecture).

Yes, this post is utopian; I just want more people to think about the problem, not about yet another workaround.
"It will break 'like' buttons and sometimes-useful cross-domain POSTs, whatever..."
If you don't need this flag, don't set it. Just let the other 99% of websites be safe by default, not waiting for yet another 0day "trick".

Would love to discuss it with web security minded people! Please, write your thoughts!

Sunday, February 3, 2013

Hacking With XSS Auditor

on Hacker News and reddit
Previously I wrote about how you can use the XSS Auditor for Great Good (reporting detected XSS exploits to the administrator) and how to destroy framebreakers/other scripts with it (just pass a script's code in a random parameter).
Today's topic is really interesting. We are not hacking the XSS Auditor anymore, we are hacking with it.
I'll tell you how to steal referrers with sensitive information.

First of all, there are three values of the 'X-XSS-Protection' header which control the XSS Auditor: 0, 1, and 1; mode=block.
The first one just switches it off (I recommend it, lol).
1 is the default: it detects XSS and tries to remove the malicious code.
1; mode=block basically means: if anything has been detected, redirect to about:blank. People used to think of it as the most secure option. Actually, no!
Steps for the hack are very simple (TL;DR is point 5):
  1. Choose a URL which redirects automatically (or with some user interaction) to another URL, and which also carries both the Private Info and a Custom Payload.

    For OAuth and Single Sign-On implementations the Private Info is the code/token/signed_request. It can also be some kind of SID if it was appended to the original URL automatically, without removing the Custom Payload.

    The Custom Payload is part of the redirect_uri if it's not static, or some kind of 'state', which is used in OAuth to prevent CSRF and is basically returned back along with the code (and I found the Most Common Vulnerability with it the other day).
  2. Look at the source code of the final page the user is redirected to. Choose some <script src=...></script> or <script>code</script> or <meta http-equiv..> or <a href=javascript:..> etc. - anything that will look like an "injection" to Mr. Auditor. Copy it and encode it.
  3. Now put it into the original URL as your Custom Payload, for example 'state=%3Cscript%3Esetup()%3C%2Fscript%3E'
  4. Create a MalloryPage. You can use an <iframe> if the target has no X-Frame-Options, or if it has.
  5. When the user visits MalloryPage, it opens your crafted URL with the Custom Payload; the website redirects him to the final page carrying both the Private Info and the Custom Payload; the Chrome XSS Auditor "detects XSS" because the Custom Payload was found in the source code and redirects him again to about:blank, which is easily accessible from the opener's domain. Now you've got a document.referrer with the Private Info!
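Step 5 on the MalloryPage side can be sketched like this (browser-only sketch; the function name and the polling interval are my assumptions):

```javascript
// Poll a popup/frame window until the Auditor has bounced it to about:blank,
// then read the blocked URL (carrying the Private Info) from its referrer.
function pollForLeak(win, onLeak) {
  var timer = setInterval(function () {
    try {
      // about:blank inherits our origin, so this stops throwing once blocked
      if (win.location.href === 'about:blank') {
        clearInterval(timer);
        onLeak(win.document.referrer);
      }
    } catch (e) { /* still cross-origin: the block hasn't happened yet */ }
  }, 100);
}
```

The try/catch is essential: before the Auditor fires, the window is cross-origin and every property access throws.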
Demo of a vulnerable SSO implementation, using Sinatra, and the exploit for it:

There are some restrictions, of course! The most obvious: it won't work for https:// pages because they don't send the Referer. But as a new vector it sounds pretty awesome.
The fix is going to be very simple: clear document.referrer on the about:blank redirect.

[this guy who wrote the article... you can hire him for a penetration test or security consulting, btw. Affordable prices, cutting-edge hacks:]