Is there a database of these big corporate hacks anywhere?
Furthermore, is there any categorization of the methods that were used? I am curious about the ratio: X% misconfigured cloud permissions, Y% bad defaults, Z% unpatched 9-year-old vulnerability, N% minimal social engineering, etc.
Probably too much egg on everyone's face to get public post-mortems, but I would enjoy knowing more.
There is generally going to be a lag between a breach being reported in the news and the breached data being processed & organized for services like Monitor & HaveIBeenPwned (this also assumes the breached data is posted publicly rather than being sold privately or held for ransom).
> It can sometimes take months or years for credentials exposed in a data breach to appear on the dark web. Breaches get added to our database as soon as they have been discovered and verified.
I've often thought about maintaining a website with a timeline of hacked organizations, large and small, as an illustration of the fact that there is no such thing as computer security and that all data on networked computers should be considered semi-public. Mostly what keeps me from doing it is that too many events happen to stay on top of it, haha.
Is this Sony Interactive Entertainment LLC, or Sony Pictures Entertainment, or Sony Music Entertainment, or Sony Electronics Inc., or SOA, or Sony Group Corporation KK? What can "all of Sony" possibly mean?
I had a hell of a time over the last month trying to log in to PlayStation's website, where it asked me to count dice in 20 pictures. I wonder if that is related, because I gave up.
I’m glad I’m not the only one having captchas cycle a dozen times or more without granting access. But I’m sure we will come up with something even more annoying to replace captchas.
Many times. Before the first high profile hacks (that I remember) in 2011, Sony's CTO made a career of giving high profile talks about, essentially, reducing your IT budget by not doing security. Don't do pentests, don't do audits - they only uncover issues for your teams to fix! Certifications are an industry that sells you problems, he said. Ignore and skimp on the whole thing. IIRC there was even a great talk about how to ignore your engineers when they say something is urgent.
He didn't get fired after the first round of hacks, and he wasn't fired after the 2014 round either. I wonder where he is now?
Noncompliance is a fact of life as the list of security and privacy regulations grows. The key is knowing how to comply just enough so that you don't waste your time or bankrupt your company.
The person this seems to be referring to, according to info in the article posted in a sibling comment and a Time article [0] about the 2014 hack, is Jason Spaltro, executive director of information security.
An interesting piece of info in the Time article is that Sony only had 3 people working on infosec, excluding managers.
Not so inflammatory at the time. Those were wild days. Someone else posted his CIO article about "just enough compliance", but IIRC there were talk summaries and interviews around, too.
And for the comedy factor: those hacks were dictionary password attacks against leaked usernames, plus a plain text file with key credentials left lying on an open network share. Not exactly Ocean's Eleven.
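For anyone unfamiliar with the term, a dictionary attack really is this unsophisticated. A rough sketch (all usernames, passwords, and hashes here are made up for illustration; real attacks use huge wordlists and whatever hash scheme the leak uses):

```python
import hashlib

# Toy leaked table of (username -> SHA-256 password hash).
leaked_hashes = {
    "alice": hashlib.sha256(b"password1").hexdigest(),
    "bob": hashlib.sha256(b"letmein").hexdigest(),
}

# Tiny "dictionary" of common passwords to try against every account.
wordlist = ["123456", "password1", "qwerty", "letmein"]

cracked = {}
for user, digest in leaked_hashes.items():
    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == digest:
            cracked[user] = guess  # this account used a dictionary word
            break

print(cracked)  # both toy accounts fall to the wordlist
```

The same loop, run defensively against your own user database, is a basic password audit; the only countermeasures are strong unique passwords, slow salted hashes, and rate limiting.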
> “We have successfully [compromised] all of [Sony’s] systems,” Ransomed.vc proclaimed. “We won’t ransom them! We will sell the data. Due to Sony not wanting to pay. DATA IS FOR SALE. WE ARE SELLING IT.”
The key detail. If Sony's not willing to pay, these 6,000 files are not important.
Sony paying = Sony telling the hackers: please steal our data, we will give you millions.
There is no doubt the hackers would take Sony's money and then sell the data anyway. You will lose badly by paying them. If you are Sony, you have to consider the breach a total loss; paying the ransom is a foolish decision.
The goal of these hacks is always to get the company to pay, so companies refusing to pay is a very good play.
The ransomware groups hit lots of companies. Their reputation for holding up their side of the bargain is how they get paid. So they tend to keep their side of the deal.
Sorry, but that's dead wrong. Ransomware only works against companies, because individuals can't pay you. Most individuals can't figure out bitcoin, and the overhead of the transaction is too much. But for $10 million, a company can figure out bitcoin and the transaction overhead is minimal.
As for the backups, the groups running ransomware are very aware of the importance of backups, and they have developed best practices for targeting backup systems in a variety of ways. If the backups are online, delete them. If the backups are offline, compromise the backup server so it stops taking backups, leave that for a while, then launch the attack. (You'd be shocked at how many companies don't test backups regularly...) If whole computers are backed up, insert a malware timebomb: as soon as the computer's clock hits the trigger date, it destroys itself, and when it is restored from backup, the restored copy destroys itself too.
Most companies pay too little attention to backups. And therefore highly motivated attackers regularly succeed in attacking them.
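The fix for untested backups is cheap: actually perform a restore on a schedule and compare checksums. A minimal sketch of such a restore test (file names and the copy-based "backup job" here are hypothetical stand-ins for a real backup tool):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum a file so a restored copy can be compared to the original."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_backup(original: Path, backup: Path) -> bool:
    """Restore the backup into a scratch directory and compare checksums."""
    with tempfile.TemporaryDirectory() as tmp:
        restored = Path(tmp) / original.name
        shutil.copy2(backup, restored)  # simulate the restore step
        return sha256_of(restored) == sha256_of(original)

# Toy run: "back up" one file, then confirm the restore matches.
with tempfile.TemporaryDirectory() as work:
    src = Path(work) / "payroll.db"
    src.write_bytes(b"important records")
    bak = Path(work) / "payroll.db.bak"
    shutil.copy2(src, bak)              # the "backup job"
    print(verify_backup(src, bak))      # True when the backup restores cleanly
```

A real version would restore from the actual backup system onto a clean machine, which is also the only way to catch the compromised-backup-server scenario described above.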
While true sometimes, like when the US says they won't negotiate with terrorists, the realpolitik is that if the data is valuable enough normally companies will pay for it.
Most won't take some moral stand that costs them money.
Of course companies would pay for data if they had the choice, and small companies often do in a ransomware attack. But if you're as big as Sony, there's really no upside to paying. Like the other commenter said, if they did pay, the attackers would without a doubt still sell the data to others. Why wouldn't they? It's not like they have any morals.
And to make things worse, this would just signal to other hacker groups that hacking Sony is very profitable, since they apparently are happy to pay. So they would just increase the target on their back. No upside.
I usually feel bad for the victims of ransomware, but in this case I feel a little less bad, since in the past Sony has betrayed trust by installing its own rootkits on unsuspecting users' hardware. When they got called out, they made a removal tool that collected even more data from the users who ran it.
What's funny is that I was working in the online music biz at the time and Sony would keep sending us all these CDs with more and more elaborately fucked-up "copy protection" and ask us to RIP THEM so we could ingest them for online download systems.
I had quite a few calls with their people about the insanity of these CDs at the time.
This sounds perfect lol "Just rip our music from the CD what's the big deal?" said one part of the company while they were whipping another part of the company to make doing that difficult/impossible.
Sony's network has been compromised something like a half dozen times over the last decade or so. They can't seem to secure their systems. Bad behavior aside, it's probably not a good idea to give them any sensitive information.
It seems like security in general is taken lightly in Japan.
Probably because they tend to operate on an honor system and are generally well behaved. I think the subconscious idea is "Why would people try to hack the system? It is wrong"; because they wouldn't do it themselves, the threat feels like an alien concept.
It works in day-to-day life. In fact, it is a very pleasant experience and often one of the top things people mention when you ask them what they like about Japan. But for IT security, where the threat can come from anywhere, it doesn't work.
What?!? You're telling me the weekly full-disk virus scan that ruins 3 hours of work, the fortnightly security announcements from the poor schmuck chosen to be the security representative, and sending the password for a file in a separate email from the file itself... all these don't really do much? The horror.
Not in "another email", but it can make some sense if done via an alternative communication channel. The idea is that an attacker who gains access to one device (say, the computer with the email account containing the file) doesn't have the full set of data needed to open it, since the password was sent over a different channel (say, a text message to a phone). Not the final word in security, but another layer of it.
Can't reply directly to bart because the thread is too deep, but really, they do send it in another e-mail. Don't ask me what it's supposed to achieve. Purportedly, if you sent the attachment to the wrong person, you can remove that recipient from the password e-mail if you happen to notice the mistake. The thing is, (1) they usually send the password e-mail right after the attachment e-mail, which means it is highly likely they won't check the recipient list, and (2) they don't bother to use AES-256 (because then Windows Explorer wouldn't be able to open the archives), so even without the password the weaker default encryption is trivial to crack.
It was the end of Digg, when Digg tried to comply with Sony's request to take down some leaked Sony key. All Digg users were commenting/posting the keys everywhere. Many migrated to Reddit thereafter. About 17 years ago, I think.
The end of Digg was letting advertisers have their way with the front page and consolidating around a couple of power users, who could also be paid to push whatever advertisers wanted. That whole Digg v4 fiasco.
I'm assuming you're talking about those mail order CD things that promised ~10 CD's for a penny, plus shipping and handling. This also confirmed your subscription to a service which sold you two CDs per month at full retail price plus shipping for at least a year unless you explicitly refused each one. I don't think you even got to choose which CDs you got.
Malware wasn't the point. They sent off unsold stock of old CDs, made you "buy" new releases that would count toward Billboard and such, and charged you a ton for the privilege.
That sounds about right; I was a foolish kid :P Don't give this too much stock; it's playful speculation pointing out the irony of what's already an aside.
There was some selection in what was sent, not just a random collection of whatever, at least not entirely; I'm not sure. Maybe only for the original set. It doesn't matter anyway.
There is no "software you install for playing CDs." Your CD-ROM drive streamed audio directly from the disc to your sound card. Some drives actually had a Play button on the front to start it up without even talking to the OS. If you wanted more control, you used the CD player app that came with your OS.
Occasionally you'd get a fancy album with a music video on a data track, or something. But other than that, there was never any legitimate reason for inserting a music CD to install anything.
I might be remembering the situation wrong, but I think you're mistaken. Sony made music discs that would fuck your computer up and install software without your knowledge or consent.
Also, in the early days CD drives were wired directly to the sound card, but I believe that went away with SATA and DVD drives. I must say I haven't seen a laptop with a disc drive that worked as you suggest.
> Sony made music discs that would fuck your computer up and install software without your knowledge or consent.
Yes, that is what we are talking about. OP claimed that the Sony malware was "just a part of the software you installed for playing the CDs." I'm pointing out that there was no such legitimate software, and the malware was just malware.
The software in the screenshot actually has nothing to do with their rootkit and was not present on the CDs in question.
More: the Wikipedia article you're using to try to validate your claim that it wasn't a rootkit says it's a rootkit in the first paragraph. It also says the software would install itself even when the EULA was refused, while you claim the opposite.
My only question is why are you so vehemently defending Sony's actions? Do you somehow believe that it is a company's right to compromise your device's security in an attempt at selling more licenses to infinitely reproducible content? My guess is that you have something to gain from the public perception shifting to "actually this is fine", but I can't put my finger on what.
>The software in the screenshot actually has nothing to do with their rootkit and was not present on the CDs in question.
It was present on the CDs, as it's the way you were supposed to use the CDs.
>this wikipedia article you're using to try to validate your claims that it wasn't a rootkit
I was just using the image there to show you that there was a whole software package included with the CD. It wasn't just DRM, but also a media player and a disc burner. Wikipedia is just copying media propaganda on the subject. That's how the site works.
>It also says the software would install itself even when the EULA was refused, while you claim the opposite.
The article is confusing there: it's describing a different DRM solution than XCP, and it's that other solution the article claims was doing that.
>My only question is why are you so vehemently defending Sony's actions?
I am not defending Sony. I am just not going to join in with the made-up outrage about malware when there was no malicious intent.
>Do you somehow believe that it is a company's right to fuck over your devices security in an attempt at selling more licenses to infinitely-reproducible content?
Insecure software with elevated privileges is not unique to this situation, and it still happens to this day. This is an industry-wide problem. Thankfully, nowadays we have proper DRM built into silicon and the operating system, so companies don't reinvent something worse and buggier.
> It was present on the CDs, as it's the way you were supposed to use the CDs.
I'm getting the impression that you don't understand how CDs worked. You're acting like it was perfectly normal and expected for music CDs to include an installer for a CD player app. But it wasn't; essentially any computer that had a CD-ROM drive had the ability to play music CDs with no software installation needed, the same as any computer with a floppy drive had the ability to load files from disk. Yes, some CDs shipped with branded player software anyway; this was all useless advertising shovelware even when it didn't also contain harmful rootkits.
In the early 2000s, it was quite common for 5.25" CD drives to have a 3.5mm audio jack port on the front, to play music independently of the computer. I can't remember ever making use of that feature.
They internally connected to the sound card with another line level connection rather than over the IDE cable. That cable looked like this: https://www.startech.com/en-us/cables/cdaudio2
Regular dedicated audio CD hardware like hi-fi equipment and car stereos had no problem reading the discs without Sony's software. It was set up specifically to trip up computers trying to play the CD.
It installed itself whether or not you agreed to it, hid its installation from the system, interfered with CD copying without announcing itself, and had no uninstaller. What's a rootkit again?
The XCP software did not install if you didn't agree to the EULA. It interfered with CD copying to prevent unauthorized copies. There was no uninstall because people shouldn't be able to just uninstall it and gain the ability to make unauthorized copies.
>What’s a rootkit again?
Malware that attains root and maintains it for as long as possible.
Sony is one of the coolest companies. Hack them, challenge them, force them to become better, harder, faster, stronger. Fuel their evolution. If there is one brand in the world that can become human again, it's Sony.