Google's login page accepts a vulnerable GET parameter (aidanwoods.com)
325 points by ivank on Aug 31, 2016 | hide | past | favorite | 128 comments


One important consideration is that the phishing attack described here could be pulled off even if the targeted site did not support redirects - and in general, it would be exploitable without any identifiable fault on the part of the "vulnerable" web app.

This property is an artifact of how browsers work, and it's not something that's likely to change soon. Basically, if you visit evil.com, evil.com can always load accounts.some-trusted-domain.com in a new window, give you enough time to examine the address bar and confirm that it's legit - and then sneakily navigate that window to a phishy location that looks the same as our legit login prompt, but is controlled by the attacker.

(The evil site can also detect certain events, such as navigation, and deliver the payload only at that point.)

For my whimsical demo for Chrome and Firefox (dating back to 2011!), see: http://lcamtuf.coredump.cx/switch/

(Disclaimer: I kinda wrote a book about this stuff. Also, I work for Google.)


I agree that you cannot 100% prevent phishing, but I think that the case presented here is slightly different: the "evil URL" is only contained in a parameter of a request to another URL. I don't think that typical users will look at the things that come after the `?` in the URL, especially since those can be URL-encoded and thus just look like garbage.


Couple notes:

1. You're talking about a pop-up / new tab that somebody clicked while already on an attack site...

2. then loading google.com in this pop-up / new tab, waiting a few seconds, and then changing the location of the pop-up / new tab

The vulnerability in question consists of landing up on google.com, logging in while still on google.com, then google.com sending you directly to an attack site after you finish logging in.

The user could come from an official looking email (a known and largely unavoidable problem), or from a link that somebody pasted (e.g. "click here to view my spreadsheet on Google Sheets") into a comment on a different trusted site. The "trusted site" is important to note, because as you might have deduced, no scripts or malicious intent are required on the part of the other site; just a link. In your example, the other site would have to either A) be compromised itself, to facilitate the script, or B) be malicious itself.

Which of the workflows would you be more likely to fall for, your example or mine?


Steps:

    1. Send email that looks
       like it's from AdWords,
       claiming the user's CC
       needs to be updated

    2. User logs into Google
       after verifying URL is
       in fact google.com

    3. Google literally sends
       trusting user to attack
       site... to enter their
       credit card number

How is this not a serious vulnerability?


First of all, open redirection is not a serious vulnerability. Serious vulnerabilities include remote code execution, SQL injection and cross-site scripting (depending on the context). Serious vulnerabilities can be used to compromise user accounts without their interaction. Vulnerability severity is always a bit of a touchy subject, but calling open redirection serious is either hyperbolic or report-padding for security consultants.

Second, this vulnerability cannot be used to compromise user accounts or data, with or without their interaction. It is a social engineering attack that could work, yes, but the Google Vulnerability Rewards Program is specifically designed to reward researchers for reporting technical security vulnerabilities, not those leveraged through social engineering.

Third, this is a rabbit hole. Google's core functionality is redirecting users to third party websites. You can just as easily send someone a link to a Google redirector that occurs in the browser when you click on a search result. It's not feasible to prevent this, and it's not feasible to prevent social engineering in general.

Bug bounty programs require a hard line somewhere in order to function. In order for an attack to be acceptable, it must have a clearly demonstrable foundation in a software implementation failure. It must be clearly defined as a logic error, not an error in human reasoning. The only cases where open redirection and social engineering are eligible are those where they are simply peripheral - an actual technical vulnerability that allows these attacks to scale far beyond manual means (an example might be a technical failure in Facebook's linkshim service).

I am disappointed in seeing that this story has risen to the top of the HN front page, because I believe there isn't a single security engineer here that would agree this is a technical vulnerability that should be eligible for a reward.


> Serious vulnerabilities include remote code execution...

I just meant "serious" in the general sense of the word, like the way that a bullet wound in the stomach is serious, even though one in the head is more serious.

> Google's core functionality is redirecting users to third party websites

I'm pretty sure that Google's core functionality is ads, which don't require the user to actually be visiting Google's pages.

Pertaining to using Google for social auth, the user is actually expecting to go back to the site they just came from, whereas when they log in to update their credit card they're logging in to stay on Google, which means that a page that looks exactly like Google...

How long this functionality remains on the site will be a better indicator. My guess is not long, esp after the publicity.


>> I just meant "serious" in the general sense of the word, like the way that a bullet wound in the leg is serious, even though one in the head is more serious."

To continue your analogy more accurately, open redirection is not a gunshot wound, it's a cold.


I don't know about a cold. Maybe the flu, since elderly and children are more at risk. ;)


Really, people. Are we seriously doing this...


>I'm pretty sure that Google's core functionality is ads

You have got to be kidding me... So we're completely going to ignore that everyone is still using Google as a search engine, and not ads? You're making things up just to get your point across.


More people see Google ads than any other form of interaction with Google, by a long shot.

If you want to talk about Google search, it's obvious, as a user, that when you click a link in the results you're going to be going to the URL that the link points to (even if Google changes the URL when you click it for tracking purposes, there shouldn't be much room for confusion).


dsacco is correct in my mind. I work as a security researcher and penetration tester, and I would only include this in a client report as an informational or low risk vulnerability at best in a vulnerability assessment. In a penetration testing narrative I likely wouldn't even bring it up. Even if we look at the social engineering side of things, this is likely not something I'd bother with. There are so many social engineering attack vectors I would give priority to over this as an attacker. A hard line really does need to be drawn with regards to bug bounty programs, or we would constantly be chasing the proverbial rabbit.


> First of all, open redirection is not a serious vulnerability.

I 100% disagree.

There was a time when people didn't think XSS was a serious vulnerability too.

Most web developers in general are way too casual about a lot of attack vectors, and this is just one more example of that.


Okay, you can disagree, but the word "serious" only works comparatively.

If every vulnerability is "serious", then none of them are, and the word loses its meaning and people stop responding to it.

If not every vulnerability is "serious" but open redirection is, what do you call something like command injection or SQL injection? "Double-plus serious"?

I understand that developers need to take security seriously, but that effort is undermined by being hyperbolic about vulnerability severity. No one is trying to say open redirection isn't a vulnerability, it's just not all that...well, serious. Fix it, but it's not a hair on fire problem.


I would call any easily corrected behavior of your site that could lead a normal, innocent user to being compromised a serious vulnerability.

Your statement implies there's an unlimited number of these, but there aren't. SQL injection is worse than open redirection, but storing your passwords and CC data in plaintext and leaving your root account SSH-able on port 22 with no password is worse than SQL injection. Just because there's something worse than X doesn't mean X is not bad.

The important thing here is open redirection is a stupid, optional behavior and there's no good reason for it. It shouldn't be there. Allowing it is just as stupid as allowing XSS. It's just lazy and sloppy and frankly inexcusable for a company like Google, which should be held to a much higher standard than a mom and pop web store that sells cat figurines.


>> I would call any easily corrected behavior of your site that could lead a normal innocent user to being compromised is a serious vulnerability.

So your stance is the first option, that basically any security vulnerability is serious. The vast majority of vulnerabilities are much simpler to fix than they are to find; once you find them, fixing them is generally straightforward. Given that we are using fundamentally different and incompatible definitions of the word "serious", we are obviously going to continue to disagree on this point. Perhaps it would help if I reframe this for you - there are vulnerabilities which warrant an on-call engineer being woken up at 4 am, and there are those which do not. Further, there are vulnerabilities which need to be fixed this week and those which can wait until the next feature release. If you don't like my use of the word "serious", hopefully you'll find that more agreeable. I don't think we are arguing over the same point anymore.

You continue on to attack Google for having this vulnerability, but I don't think you have spent much time doing vulnerability research if you react that way about a company of Google's size. The sheer codebase alone basically guarantees every type of vulnerability will be found. Furthermore, being antagonistic about it in the way that you are is not conducive to recognition or remediation. Attempting to shame companies or engineering teams is counterproductive to developer awareness and education. At the end of the day, this is just a specialization and your job is not to be mean to them for making mistakes, it's to try to improve their understanding and help them resolve the issue while avoiding it in the future.


I apologize, I don't have my copy of Garner in front of me. Why are you saying "serious" is only used comparatively?


Thank you. For 5 years now I have kept myself out of Google SSO as much as I could, and I carefully use a bug tracker outside my favorite sites because I don't trust Google on privacy matters.

Sometimes I wonder if I'm too paranoid, but your carelessness just proves me right, and I will keep NOT trusting your company.


Er, who do you think works for Google here?


I feel like the industry has had a long time to consider both XSS and open redirects, and has come to a pretty firm conclusion on the severity ratings for each of those classes.

I'll sum it up as: "Google is not outside of the mainstream in their assessment."



CVSS is a ouija board. You can make it say whatever you want.

I strongly recommend people avoid CVSS. Pentesters with clients that require CVSS in reports spend an extra couple minutes on every vuln in that silly calculator trying to figure out how to make XSS come out to sev:med and logout CSRF to sev:info.


CVSS is a ouija board... but you can't make it say whatever you want. Each item has pretty well-documented constraints.

It's not great, but do you have something better?


Severity rating is a highly contextual process, for which CVSS fails magnificently. Not only is it devoid of context (is this CSRF on private messaging or password reset?), it is not consistent.

If you browse the vulnerabilities listed by CVSS for random Linux utilities, for example, you'll find the same vulnerability listed several times with different ratings despite them apparently having the same exploitation requirements and final impact ("authentication not required, remote attacker, potential user data compromise", etc).

Then it will generally go on to give a vague explanation that is absent any real exploitation details and simply link to five different websites that all copy-pasted the same SecLists disclosure verbatim.

Basically, it's crap, and it exists because the information security industry is bifurcated into two different industries - the risk folks and scanner jockeys who don't have any real technical competency but who love to have things like the CISSP, and those who actually know what they're doing in the trenches of technical penetration testing and security audits.

The industry unfortunately attracts two distinct crowds - people who can and do develop actual software, but who specialize in security, and people who can talk the talk and have a vested interest in the pomp that comes along with crappy guidelines.

If you asked a real vulnerability researcher like Tavis Ormandy to CVSS score any of his findings, he'd probably laugh. His disclosures are usually verbose precisely because this is an industry that should be nuanced.

Just because we have nothing better does not mean we should heed an abysmal system.


Yeah, and WordPress is not outside the mainstream in their views on security either but they're wrong (which is constantly proven by how often WordPress sites get compromised.)

Part of my comment was directed at the fact that the mainstream standards of security for web developers (speaking as one myself) are too low.


This is a non sequitur.

Also: WordPress is far outside the mainstream of security practice; they are, and have been for coming up on a full decade, a running joke in the field.

Nobody says that about Google.


This is a bit off-topic, but can you expand on this with regard to WordPress, or possibly link to someone who has? I work with WordPress and would be eager to learn more about where & how it's deficient, and how to guard against those deficiencies better.



WordPress is the mainstream of web development. It's not a non-sequitur because you brought up "the mainstream".

If you're saying that you meant the mainstream of security professionals specifically, I'd be interested in what a survey of security professionals had to say about open redirect. For me it's clearly a potential attack vector, and Google's specific implementation is obviously sloppy and lazy. I can't see any security professional worth their salt giving it a pass.

As far as Google not being a running joke, that's true and it's all the more reason it's surprising they are so dismissive of concerns about their way too broad whitelist.


I feel like at this point we both understand the point I was making and don't need to litigate it further. You can and should feel free to disagree with it, but there's a reason pentesters file open redirects sev:lo.

(XSS is the canonical sev:medium, for what it's worth).




Man if you are afraid to disagree with someone just because of who they are I feel bad for you. I've let pg have it on here when I disagreed with him. It doesn't matter. Everybody's wrong sometimes.


I'm one of your downvoters. :)


It's not a serious vulnerability, at least in the commonly understood sense of the word in security. It is phishing and spoofing, as Google accurately describes in their response.

Maybe it's functionality for Google to change to prevent social engineering for susceptible users, but it's not something that deserves a bug bounty.


Step 1 and step 3 are "trick the user". That alone should make it clear the real vulnerability is social engineering, and not a software flaw.


"Hey", they don't understand. </sarcasm> After the second "Hey", it started to annoy me, and I was just reading that post. Sounds quite condescending...


Google seems to consider issues like this, where security UX is severely affected because user expectations are involved, to not be vulnerabilities.

This is one of them. "Theoretically", it is not a vulnerability, since the actual integrity of the security model is not compromised. However, the UX changes. A user who checks for the green lock in https urls may not check that a "wrong password" page reached via the google login page is on a different domain, because they don't expect it to be. Given the existence of google oauth login, I do agree that this isn't really a good expectation, but I suspect that many folks have it (and it's easy to add "login successful, redirecting to <site>" for non-whitelisted endpoints). It's certainly something to improve upon, even if you don't give out a bounty or w/e.
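The "login successful, redirecting to <site>" idea could be sketched roughly like this (a minimal illustration only; the whitelist contents, function names, and response strings are all hypothetical, not Google's actual implementation):

```python
from urllib.parse import urlparse

# Hypothetical set of hosts that may be redirected to silently after login.
WHITELIST = {"mail.google.com", "docs.google.com"}

def post_login_response(continue_url: str) -> str:
    """Redirect immediately for whitelisted hosts; otherwise show an
    interstitial page so the user sees where they are being sent."""
    host = urlparse(continue_url).hostname or ""
    if host in WHITELIST:
        return "302 " + continue_url
    return "200 Login successful, redirecting to " + host
```

A whitelisted `continue` goes straight through, while anything else gets a page that names the destination host, which is exactly the cue a phished user is currently missing.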

I reported a vulnerability of a similar kind months ago, which was similarly classified as not a vulnerability. When Chrome receives a certificate that is invalid for multiple reasons, it does not show all the reasons or prioritize among them. This means that if I have a self-signed certificate that also expired half an hour ago, Chrome just tells me that it is an expired certificate. The integrity of the security model hasn't been affected here -- the warning is still shown, the user still has to manually click through. But the user may be much more likely to. Recently-expired certificates are relatively common, and seem like a reasonable thing to bypass. Of course, you should always check the cert details and cert tree before bypassing (unless you don't want to log in or don't care about things being MITMed on that site), but not everyone understands the cert trees, and not everyone will do this.

Of course, it's up to them to not consider security UX failures as vulnerabilities, and UX is pretty hard to get right anyway (Google has some awesome folks working on security UX though!).


> A user who checks for the green lock in https urls may not check that a "wrong password" page reached via the google login page is on a different domain because they don't expect it to be.

I wonder if you could use that newfangled CSS that changes the tab color to red, so that the red unlocked icon looks like part of the error page styling.


Or just use Let's Encrypt to get a certificate for accountsgoogle.com?

That ensures basically no one will notice a difference.


That's a good idea. We'd need to apply it to all http:// pages too for it to work though.


Since Google has decided that https://www.google.com/amp/[any_domain_here] isn't a vulnerability, then I don't see how combining that with a google login is a vulnerability.

OP talks about having the continue page prompt for password, but how is that any different from creating a fake Google password prompt page now? That page would not be on https://accounts.google.com/, and it would not have my personal info displayed, so why would I enter my password? Just because the last thing I did was log into Google? Is that supposed to "put me in the mood"?

If I'm on page A, and I click a link that prompts me for my google credentials, I've either expected that, so I check to make sure it's Google, or I haven't, so I close that tab. If I enter my username and password, or just my password, and then end up at page B, and it prompts me for my password again, it certainly doesn't know my username like the real Google password prompt page did. It looks different, and raises all sorts of flags.

Alternatively, if I'm on page A, and I click on a link that sends me to page B directly, where I'm prompted to enter my username and password, I don't. Why would I? It's not google.com, etc.

There's just no way that this seems like any more of a vulnerability than the open redirector already is.


I'm on the sign-in page for Google.

I check out the URL, it's google.com. The padlock is there.

I sign in. Whoops, must have typed my password in wrong. It happens sometimes. So I type it in correctly.

...I just got phished.

The problem is that the behavior of this exploit mimics almost EXACTLY the expected behavior. The warning flags to even an educated user are not clear at all. It would be so easy to fall for this.


Your profile picture would go away on the second page though, which might be unusual.

I recommend installing this extension, however: https://chrome.google.com/webstore/detail/password-alert/noo...


Which might be unusual, unless you add an error message, as in this example:

https://cdn.kuschku.de/ServiceLogin/video.mp4


I lack the patience to create a video (kudos!), but I'm still not seeing how this is any more of a vulnerability than just sending someone to that page in the first place? It's not on a google.com URL, which seems immediately obvious to me.

It might help that I always enter my password via command-\, never typing it. So even if I was really not paying any attention, and didn't realize the domain had completely changed, and that my username had disappeared (which still seems unlikely), command-\ wouldn't fill in my credentials, because it's not google.com.

That last page is phishing, for sure. And it's going to fool some number of people, for sure. I'm struggling to think of how that's something Google should do something about though.


Google obviously thinks it shouldn’t happen – that’s why they have a whitelist in the first place.

This is just a simple whitelist bypass.

But the issue is: How do you check the page you are on is the correct one?

You might check everything the first time, but after that?


It looks like your domain is flagged for phishing in my Chrome. :(


Yes, that's why I removed the link, too.

Now I just have to wait a few days until it's not flagged anymore.

Did I mention how much I hate Google, and their "automate everything" stance, especially regarding flagging of content?


Woah, thanks for the video. That makes the issue more apparent.

Still, couldn't you do the same thing with the standard OAuth flow for Google/Facebook/Twitter?

Also, cool profile photo. :-P


I'm not sure I agree. There will be a percentage of users who would see the initial google.com page and not realise the subsequent page was an impostor, especially since Google 'took them there' after they logged in. This doesn't work on you because you have been primed to know what to expect - I reckon at least five percent of people phished this way would not be so observant, and Google should block off-site redirection after login.


The simple fix would be for google to show their own page after successfully logging in. On this page they would tell the user they are about to be forwarded to another page external to google.


That wouldn't solve this problem given that the page they're redirecting to _is_ a Google page, the continue check enforces that.


Signing in on this page will download PuTTY, for example: https://accounts.google.com/ServiceLogin?service=mail&contin...


A non-vulnerability like this is a good example of how easy it is to get press for $important_company + security.

Top of hackernews at the moment and fingers crossed there wont be a wave of articles about this in the coming days from tech press who don't fully understand the issue but know clicks when they see them.


A non-vulnerability? I understand how you could call it non-serious if you don't work on user-oriented code, or if you think all users have perfect peripheral vision all the time. But how do you explain the purpose of the check that fails any non-google domain then?


Not only to GET press, but also to PRESS the company for $.

This is a huge problem I see with bug bounties. People running the bug bounties, who are not appsec literate, are basically bullied into thinking that something is a security risk when, quite often, it is not.

I deal with people trying to do this 10-15 times per week. I can totally see how people get pushed into paying thousands for essentially worthless bugs.


I can appreciate their stance on phishing but being able to automatically execute an arbitrary file download that appears to come from Google as part of the login process strikes me as a bad thing, no?


To get that exploit you'd need to send a faulty URL and get the user to follow it. You could just as easily do that with a faulty URL and no Google login.

The only risk is if the redirect can access your login session/credentials. If it can't, no exploit.


The Google login wraps the whole flow in a trojan horse, though. Users are taught to trust the single sign ons from FB/Google, and they're also taught to check the URL. Both of those would be false senses of security in this case. In isolation, this isn't necessarily a security vulnerability. But that it breaks down when users do what they're supposed to do? That's bad.


The logo and the host portion of the URL are ensuring that my credentials are going to Google, and not anyone else. The rest of the URL is telling me I shouldn't trust where I'm going. ;-)


> automatically execute an arbitrary file download

Your phrasing could easily be misread. A downloaded program would not automatically run.


And in modern browsers/OS not even after you click on them, until you clear a "yes, I know this is risky" dialog.


Replace execute with 'initiate'?


It is. It also seems like an easy fix; I'm surprised it hasn't been done yet.


It's not an "easy fix", there could be (and probably are) thousands of different endpoints that can be redirected to after login, whitelisting all of those just doesn't make sense.


That sounds tedious, not necessarily difficult. I wonder if there is a technical challenge that goes beyond the tedious work of going through possible redirects and whitelisting them.


Probably corporate inertia and big testing/deployment pipeline, etc.


It feels like you should phish one of the Google Security Team to really drive the point home.


This has been this way for several years now.

I have notes on a Google attack I discovered ~2 years ago that includes an open redirect as a critical part. It allows running arbitrary JS on authenticated users on the click of a link.

However, there is one small part of the attack (one character!) which prevents it working so I've never reported it. Interesting that there is such a debate here, as I thought open redirects were generally accepted as out of scope for most bug bounties.


Well, maybe there should be social engineering bounties then.

For things like this, or when you can engineer their support to give you access to their systems.


If any URL can be passed, and this is not a vulnerability, why validate the argument at all for google.com.*?


Couldn't they just sign the URL with a hash and call it a day? For example, have a parameter continue be "something" and hash be sha256("something"+someRandomStringThatOnlyGoogleKnows)... And on the server check if sha256(continue+random) == hash.
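The signing idea above might look something like this (a sketch under assumptions: the secret and function names are hypothetical, and HMAC is used rather than the bare sha256(value + secret) concatenation the comment describes, since HMAC is the standard construction for keyed hashing):

```python
import hashlib
import hmac

# Hypothetical server-side secret; never sent to the client.
SECRET = b"someRandomStringThatOnlyGoogleKnows"

def sign(continue_url: str) -> str:
    # HMAC-SHA256 over the continue parameter, appended as &hash=...
    # when the login URL is generated server-side.
    return hmac.new(SECRET, continue_url.encode(), hashlib.sha256).hexdigest()

def verify(continue_url: str, received_hash: str) -> bool:
    # Constant-time comparison on the server before redirecting.
    return hmac.compare_digest(sign(continue_url), received_hash)
```

An attacker who controls only the `continue` parameter can't forge a matching hash without the secret - though this only helps if Google never signs attacker-chosen URLs in the first place.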


I have noticed that a redirect to any site on sites.google.com also works seamlessly on mobile. The amp pages are not working for me and I get an error page, but as someone mentioned they probably don't work on mobile and only on desktop.


Frankly, I don't see any reason why Google isn't right here. This "vulnerability" can only be exploited by appending another one, phishing. It's reasonable to say, "If AB is a vulnerability only if B is a vulnerability, then A is not a vulnerability". That you can't solve phishing [as a service owner, without also owning the browser] is unfortunate but irrelevant.


I disagree. The linked google page ( https://sites.google.com/site/bughunteruniversity/nonvuln/op... ) makes the argument about the mouse hover tooltip, as if that is the only risk of an open redirect.

This is much worse, this is a redirect after login. You just logged into google, you entered your 2FA code, and then the next site you arrive at you damn well expect to be google.

Additionally, the OP is correct in that there is a whitelist bypass happening here. Why bother to whitelist *.google.com/ if you can bypass it with a redirect?


> at you damn well expect to be google.

That's not really true though. Google serves as an identity provider providing login into 3rd party websites. Plus there's also OAuth prompts when 3rd party services want to use your Google data.


> at you damn well expect to be google.

If it's again asking for a password, it must be Google.


No, that's not true. Think of every time you click on a "sign in with Google" link. If you're not signed in, it will take you to this login page. The redirect will only send back a token, and only for that site.


> then the next site you arrive at you damn well expect to be google.

Actually, if it is an untrusted URL, I'm not sure why I'd expect that.


I think if google (verified with ssl cert etc) is asking for my password, there was a certain expectation there, but you're right, I should be less trusting ;)

Here is the demonstration url from the report https://accounts.google.com/ServiceLogin?service=mail&contin...

My point isn't about 'open redirects' not being a security issue, it's that the report should be valid because it's a whitelist bypass.

continue=yahoo.com is blocked but continue=https://www.google.com/amp/yahoo.com is not blocked.

both do the same thing as far as the end user is concerned.
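A guess at why the check passes one and not the other: a host-only whitelist validates the first hop, not the final destination. This is a hypothetical reconstruction of the logic, not Google's actual code:

```python
from urllib.parse import urlparse

def continue_allowed(url: str) -> bool:
    # Validates only the host of the immediate redirect target;
    # it knows nothing about where /amp/<domain> forwards afterwards.
    host = urlparse(url).hostname or ""
    return host == "google.com" or host.endswith(".google.com")

# Blocked: the host is not on the whitelist.
continue_allowed("https://yahoo.com")
# Allowed: the host passes, but /amp/yahoo.com then forwards off-site.
continue_allowed("https://www.google.com/amp/yahoo.com")
```

Any open redirector on a whitelisted host turns a check like this into a no-op, which is the whole complaint.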


continue=https://www.google.com/amp/yahoo.com is taking me to a Google error page that says yahoo.com is invalid amp code.


I am being redirected to a page that will let me change my google account recovery options.

My guess is that posting this thing on HN has caused them to fix it.

OR maybe I'm being phished.


the /amp/ hack only works for desktop and not mobile. I was on mobile and that's why I got the Google error page.


Weird, it worked perfectly for me just now.


I totally agree. I don't even see how the impact is even more than the open-redirects which already exist. You could do this exact same exploit against tons of providers (Facebook, Twitter, etc) via the standard OAuth flow and the 'redirect_url' parameter.


Because I go to a page, check the green padlock and all that noise, it's all legit, enter my credentials, oops I typed them in wrong, try again, right now I'm logged in properly... I would say I am dramatically less likely to exercise as much caution on the "Ooops wrong credentials" page having checked the original one.


I'm the type of person who can't fathom actually falling for a phishing attack. As a tech-savvy software developer, it is extremely unlikely I will ever be successfully attacked via a phishing email, text message, blog/forum link, etc. If the initial link I follow could possibly be questionable, I look at the domain before logging in. I must be part of much less than 10% of the population that is at least that informed and vigilant.

I hadn't considered login redirects. This attack vector is one I can see myself falling for. Sign in on the legitimate google.com domain. Whoops, I just mistyped my Google password - not a rare occurrence. An exact replica of the "incorrect password" page has me type in my password again. Then they have a replica of the two-step authentication screen. I enter in the valid TOTP code, and now I've just given a third-party full access to my Google account.

This should absolutely be considered a serious threat. Hell, the only thing we freaking teach non-tech-savvy people to do is to look at the domain name. This attack vector completely bypasses that common understanding of how to detect phishing. There is absolutely no way I should land on a 3rd party domain after authenticating, without at least an interstitial page informing me I am heading off of Google's properties. Any redirect chain specifically from login should require landing on a Google domain, or have an interstitial. This doesn't need to be done globally for all redirects - just from login.

Oh well, I learned something new today. Re-check the domain you are on every time you type in credentials. You can't rely on your initial entry page being on the right domain, you have to be paranoid to the point of insanity every time you touch your keyboard.
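The fix proposed above (allow post-login redirects only onto Google-owned hosts, otherwise show an interstitial) can be sketched in a few lines. This is a hypothetical `is_safe_continue` helper with an assumed allowlist, not Google's actual logic:

```python
from urllib.parse import urlparse

# Hypothetical allowlist for a post-login "continue" redirect target.
# Anything that fails the check would get an interstitial warning page
# instead of a silent redirect.
ALLOWED_SUFFIXES = (".google.com", ".youtube.com")

def is_safe_continue(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = parsed.hostname or ""
    # Exact match or a true subdomain; the leading dot in the suffix
    # prevents "evilgoogle.com" from slipping through.
    return host == "google.com" or host.endswith(ALLOWED_SUFFIXES)
```

Note that matching on the hostname, not the raw URL string, is what defeats tricks like `https://evil.com/?next=.google.com`.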


And the best part is you don't know it happened because after you've filled in the fake page, they redirect you again to a Google page you did need to be signed in for, and you're now logged in (from the first page).

If people ever send you Google Docs links, you're likely vulnerable unless you are as vigilant on the "Wrong Password" page as you were on the initial one.


It's worse than that. Below (or somewhere in this thread) you'll find an example posted by kuschku: you don't land on someone else's domain, you're still on the Google domain; you'll only find out that you're viewing someone else's page if you read the full URL.


If you're that confident, then you have either already been phished or haven't had someone try. It happens: you make a mistake, you click too fast. A little humility goes a long way.


Perhaps it's just safer to use a password manager for everything.


Somewhere, Tavis Ormandy is cringing.


Does he have a statement regarding password managers?



> "If AB is a vulnerability only if B is a vulnerability, then A is not a vulnerability"

Stringing together multiple (seemingly) insignificant vulnerabilities to pivot through a system or own a user is hacking 101. I'm not sure how you'd fix this, but given Google's prevalence as an authentication provider, it's definitely not worth being glib about.


Try this variant:

(link removed, see video below)

And tell me your parents/grandparents/etc will not mistake that for the real thing.

(you can enter whatever you want into the phishing form, it will just take you back to this post – but someone with less morals could do entirely different things)

EDIT: Video for mobile users (this page doesn’t work on mobile): https://cdn.kuschku.de/ServiceLogin/video.mp4


Perfect example. Everything that I can see on my phone looks right, including the information from the green padlock; only opening (and understanding!) the whole URL would make one NOT enter the password.


And that’s the thing: thanks to Let’s Encrypt, the padlock on the fake site also stays green, so everything looks normal.

If you now also register something like accountsgoogle.com or a similar page, people likely wouldn’t even notice.

This is like it’s made for phishing.


Ironically, Chrome is now blocking that as a phishing site!


Yes, and now I have to deal with Google to unblock it.

One of the reasons why I hate automated systems judging things.

Thanks, Google...


Please tell me this is sarcasm. The system is working...


The system would be working if it was an actual phishing page.

If it is an example, which is non-malicious, and even says "Don’t enter your password", and doesn’t even store the data from the input fields, it’s not.

It could easily check whether any JS is executed, or whether the form fields ever get sent anywhere (neither is the case).


The only bit you're missing here is redirecting the user back to a Google page you do need to be logged in for, so they think everything is ok and don't think twice about what just happened


I was hoping I could avoid getting flagged as phishing site if I'd make the form ignore all content and redirect back here.

Turns out I was wrong.


Temporary: please take the page back, it worked for me... Keep the video for those who can't.


I can’t, it’s hard already to get rid of the phishing flag Google has set for the domain.


This is what I get: "Sorry, this page is not valid AMP HTML"


Do you happen to be on mobile? This particular site only works on desktop, at the moment. I’ll do a screenrecording for you mobile users in a moment.

https://cdn.kuschku.de/ServiceLogin/video.mp4


Yes, I'm on mobile. That must be it.


The sexism is really out of line.


Well, it wasn’t intentional (and I’m female myself anyway), but [long family story omitted] I edited it out.


Agreed. Open redirect aside, this is simply social engineering; someone convinces the victim to navigate to a crafted URL hoping for some eventual exfiltration of credentials. This kind of issue is a good demonstration that it's not (just) the payload of the URL that's important -- in this case the deception may be more difficult to spot -- but also who's asking to go there and why.


That's a weak excuse for this horrible security hole.

Google is facilitating phishing with this terrible behavior. It means that someone trying to phish a Google user can send someone to a totally valid google.com URL and it will still result in their credentials being stolen. That's the #1 thing that you teach non-technical people to avoid phishing; "Make sure the site you're on is the one you think you're logging into."

Because Google has chosen to be sloppy and dumb that simple instruction is not enough to protect people.

Google is 100% wrong here. This is a blatant hole and just a lazy, sloppy, stupid, way-too-broad whitelist.


Agreed, and not trying to be pedantic, but it might be more accurate to say: "If AB is a vulnerability only if B is present, then A is not a vulnerability".


The boilerplate FOAD email they send when they decide to end the conversation is dripping with smarm.

Exclamation points! Bummer!


That is quite an incredible security flaw. I also appreciate the irony of using a Google service to exploit Google's login itself.

It's akin to that scene in "Terminator 2" when the T-800 walks into a gun shop, asks to see all the top shelf guns, then loads up and shoots the shop owner before walking out...


That was Terminator 1 though.


Clerk: "You can't do that"

Arnie: "Wrong"

https://m.youtube.com/watch?v=TsMAhHtHroE


Was it really Terminator 1 before any other films came out, though?


No, but it became Terminator 1 retroactively, much in the same way that Kyle Reese retroactively became John Connor's father.


I guess this should not matter to someone like me who never logs on to google in a web browser? I just use gmail via IMAP.


I guess I am wrong. Looks like I do have to go through some sort of Google login page when I set up my email for the first time https://imgur.com/a/Vlj09


This changed sometime in 2014. You used to be able to use IMAP authtype 'simple', but in April 2014, Google released a rather vague blog post [1] about how they're going to deprecate something; please move to OAuth 2.0.

The Thunderbird team's bug tracker has some context [2] where they had to move quickly to add functionality to Thunderbird to support the SASL XOAUTH2 for IMAP so that Thunderbird would keep working with Gmail.

[1] https://security.googleblog.com/2014/04/new-security-measure...

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1059100
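For reference, SASL XOAUTH2 is just a specially formatted initial client response carrying an OAuth 2.0 bearer token. A minimal sketch with Python's stdlib `imaplib` (token acquisition omitted; `gmail_login` is a hypothetical helper, not anything from the bug above):

```python
import imaplib

def xoauth2_string(user: str, access_token: str) -> bytes:
    # XOAUTH2 initial client response: "user=<u>\x01auth=Bearer <t>\x01\x01".
    # imaplib base64-encodes the bytes returned by the authobject itself.
    return f"user={user}\x01auth=Bearer {access_token}\x01\x01".encode()

def gmail_login(user: str, access_token: str) -> imaplib.IMAP4_SSL:
    # Connect over TLS and authenticate with the XOAUTH2 mechanism
    # instead of a plain password LOGIN.
    imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
    imap.authenticate("XOAUTH2", lambda challenge: xoauth2_string(user, access_token))
    return imap
```

The access token itself still has to come from a separate OAuth 2.0 flow, which is exactly the extra work the Thunderbird developers had to bolt on.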


Woah. Yea I moved off Gmail in 2013. Their broken IMAP implementation is terrible.

Even though a lot of my outgoing e-mail goes straight to spam, I'm glad I did. I hate this crap.


What did you replace Gmail with?



I also use Gnus to read my email every once in a while, and when I tried it last week, it complained:

  Opening connection to imap.gmail.com via tls...
  Opening TLS connection to `imap.gmail.com'...
  Opening TLS connection with `gnutls-cli --insecure -p 993 imap.gmail.com'...done
  Opening TLS connection to `imap.gmail.com'...done
  nnimap (gmail) open error: 'NO (ALERT) Please log in via your web browser:
  https://support.google.com/mail/accounts/answer/78754 (Failure)'.  Continue? (y or n)
This used to work until a few months ago. It is also possible that something is missing on my computer. My computer was stolen recently and I haven't quite had the time to get to the bottom of this and several other similar things :/


See this stackoverflow question [1] by someone having the same problem to see what's going on. Someone else wrote [2] an okay-looking script that implements this custom XOAUTH2 scheme in Emacs; if you trust it you can use that, or use it to write your own.

[1] https://emacs.stackexchange.com/questions/19382/

[2] https://github.com/ggervasio/gnus-gmail-oauth



