Mikina

I’m still quite on the fence about what to think about this. If you have a weak password that you reuse everywhere, and someone logs into your Gmail account and leaks your private data, is it Google’s fault?

If we take it a step further - if someone hacks your computer because you click on every link imaginable, and then steals your session cookies, which they then use to access such data, is it still the company's fault for not being able to detect that kind of attack?

Yes, the company could have done more to prevent such an attack, mostly by forcing MFA (any other defense against credential stuffing is easily bypassed via a botnet, unless it's "always-on CAPTCHA" - and good luck convincing anyone to use that), but the blame is still mostly on users with weak security habits, and in my opinion (as someone who works in cybersecurity), we should focus on them instead of the company.

Not because I want to defend the company or something - they have definitely done some things wrong (even though nowhere near as wrong as the users) - but because of security awareness.

Shifting the blame solely onto the company because it "hasn't done enough" only lets the users - who, through their poor security habits, caused the private data of millions of people to be leaked - get away with it, and lets them live their lives thinking "They've hacked the stupid company, it's not my fault." No. It's their fault. Get a password manager FFS.

Headlines like "A company was breached and leaked 7,000,000 users' worth of private data" will probably go mostly unnoticed. A headline like "14,000 people with weak passwords caused the leak of 7,000,000 users' worth of private data" may at least spread some awareness.

DaleGribble88

As someone else who dabbles in cybersecurity - hard disagree. If developers and alleged IT professionals got their shit together, most data breaches wouldn't be a significant problem. Looking at the OWASP Top Ten, every single item on that list can be boiled down to either 1) negligence, or 2) industry professionals negotiating with terrorist business leaders who prioritize profits over user safety.
Proper engineers have their standards, laws, and ethical principles written in blood. They are much less willing to bend safety requirements than the typical junior developer who sees no problem storing passwords as unsalted MD5 hashes.
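For contrast with the unsalted-MD5 approach, here's a minimal sketch of doing it properly with a per-user salt and a deliberately slow key-derivation function, using only Python's standard library (the scrypt parameters are illustrative, not a tuned policy recommendation):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, salt=None):
    """Derive a slow, salted hash; returns (salt, digest) to store together."""
    if salt is None:
        salt = secrets.token_bytes(16)  # unique random salt per user
    # scrypt is deliberately memory-hard, unlike a bare MD5/SHA digest
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Same password, different users, different digests - so a leaked database can't be cracked with one precomputed table, and the slow KDF makes brute-forcing each entry expensive.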

Mikina

I get what you're getting at, and I agree with it - a world where every product followed security best practices, instead of prioritizing user convenience in places where it's definitely not worth it in the long term, would be awesome (and speaking from the experience of someone who does red team engagements, we're slowly getting there - lately the engagements have been more and more difficult, at least for the larger companies).

But since we're definitely not there yet, and probably never will be for the vast majority of sites and services, I think it's important to educate users and hammer proper security practices into them for use outside their safe environment. Pragmatically speaking, in a case like this, where you can illustrate what kind of impact your personal lack of security practices can have, I think it's better to focus on the users' fault instead of the company's. Just for the sake of security awareness (and that is my main goal), because I still think that a headline about how "14,000 people caused the leak of millions of people's private data", if framed properly, will have a better overall impact than just another "company is dumb, they had a breach".

Also, I don't think that forcing users into an environment that is really annoying to use, by policies alone, is what you want, because the users will only get more and more frustrated that they have to use stupid smart cards, remember a password that's basically a sentence and change it every month, or take the time to sign emails and commits, and they will start taking shortcuts. The ideal outcome would be convincing them that this is what they want to do, so that they really understand the importance of and reasoning behind all of the uncomfortable security annoyances. And this story could be, IMO, a perfect lesson in security awareness, if it hadn't been turned into "company got breached".

But just as with what you were saying the company should be doing but isn't, this point of view unfortunately has the same problem - we'll probably never get there, so you can't rely on other users being as security-aware as you are, and thus you need the company to force it onto them. And vice versa: many companies won't do that, so you also need to rely on your own security practices. But in this case, I think it serves better as a lesson in personal security than in corporate security, because from what I've read the company didn't do that much wrong as far as security is concerned - their only real mistake was not forcing users to use MFA. And tbh, I don't think we even include "users are not forced to use MFA" in pentest reports, although that may have changed - I haven't done a regular pentest in quite some time (but it's actually a great point, and I'll make sure to include it in our findings database if it isn't there).

kylekatarn

The hackers initially got access to around 14,000 accounts using previously compromised login credentials, but they then used a feature of 23andMe to gain access to almost half of the company's user base, or about 7 million accounts

Is there more to the breach than just stolen passwords? What feature did they use and what access did they gain?

trebuchet

I recall from previous coverage of this that there is a social network feature in the site where you can voluntarily share your info with friends and family.

So 14,000 accounts got accessed via reused passwords, and that gave the attackers access to 7 million people's data, because those people had previously chosen to share info with the 14,000 compromised accounts.
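A toy sketch of that amplification (all account names and numbers below are made up, not 23andMe's actual data model): the exposed data is the union of every profile visible to each compromised account, so a small number of reused passwords can expose a much larger sharing network.

```python
def exposed_profiles(sharing: dict, compromised: set) -> set:
    """sharing maps an account to the set of profiles that opted to share with it."""
    exposed = set(compromised)  # the compromised accounts' own data
    for account in compromised:
        exposed |= sharing.get(account, set())  # plus everything shared with them
    return exposed

# Hypothetical sharing graph: 3 accounts, 5 relatives' profiles.
sharing = {
    "acct1": {"relA", "relB", "relC"},
    "acct2": {"relB", "relD"},
    "acct3": {"relE"},
}

# Compromising just 2 of the 3 accounts exposes 6 profiles in total.
print(sorted(exposed_profiles(sharing, {"acct1", "acct2"})))
```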

MrCookieRespect

Bro, the data wasn't breached - someone just took already-leaked passwords and tried them. It's their fault for using the same password everywhere.

And I'm not defending the company here, fuck 'em, but that's definitely not on them.

tiramichu

23andMe are technically correct in that it's customer behaviour that caused the issue. People reused passwords and didn't use MFA.

They can claim the moral high ground if they like and shift the blame, but the truth is that regardless of WHY the breach happened, it was still a breach and it still happened!

As a software engineer, I believe there's a real argument to be made here that 23andMe were negligent in their approach. Given the personal nature of the data stored, they should have enforced MFA from the start, but they did not. They made an explicit decision to put customer convenience above customer security.

The argument that customers should have made better security decisions is evasive bullshit.

As a software engineer, you cannot trust customers to make correct decisions about security. And customers should not be expected to, either - they are not the experts! It's the job of IT professionals to ensure that data has an appropriate level of protection, so that it is safeguarded even against naive user behaviour.
