girlfreddy ,
@girlfreddy@lemmy.ca avatar

A small blurb from The Guardian on why Andres Freund went looking in the first place.

So how was it spotted? A single Microsoft developer was annoyed that a system was running slowly. That’s it. The developer, Andres Freund, was trying to uncover why a system running a beta version of Debian, a Linux distribution, was lagging when making encrypted connections. That lag was all of half a second, for logins. That’s it: before, it took Freund 0.3s to log in, and after, it took 0.8s. That annoyance was enough to cause him to break out the metaphorical spanner and pull his system apart to find the cause of the problem.
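A half-second jump is enormous by benchmarking standards. As a rough illustration (hypothetical code, not Freund's actual setup), catching this kind of regression can be as simple as comparing median timings against a recorded baseline:

```python
import statistics
import time

def time_op(op, runs=5):
    """Wall-clock an operation several times and return the median in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        op()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def regressed(baseline_s, current_s, threshold=1.5):
    """Flag a regression when current timing exceeds baseline by the given ratio."""
    return current_s > baseline_s * threshold

# The article's numbers: logins went from ~0.3s to ~0.8s.
print(regressed(0.3, 0.8))  # True: a >2.5x slowdown easily trips the check
```

The point is that the anomaly wasn't subtle once someone actually measured it; the rare part was someone caring enough to measure.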

EmperorHenry ,
@EmperorHenry@discuss.tchncs.de avatar

At least microsoft is honest enough to admit their software needs protection, unlike apple and unlike most of the people who have made distros of linux. (edit: microsoft is still dishonest about what kind of protection it needs though)

Even though apple lost a class action lawsuit for false advertising over the claim "mac can't get viruses", they still heavily imply that it doesn't need an antivirus.

any OS can get infected, it's just a matter of writing the code and finding a way to deliver it to the system. Now you might be thinking "I'm very careful about what I click on", and that's a good practice to have, but most malware gets delivered through means that don't require the user to click on anything.

You need an antivirus on every computer you have, linux, android, mac, windows, iOS, all of them. There's loads of videos on youtube showing off how well or not so well different antivirus programs work for windows and android.

Pantherina ,
JoeKrogan ,
@JoeKrogan@lemmy.world avatar

I think going forward we need to look at packages with a single or few maintainers as target candidates. Especially if they are as widespread as this one was.

In addition I think security needs to be a higher priority too, no more patching fuzzers to allow that one program to compile. Fix the program.

I'd also love to see systems hardened by default.

suy ,

no more patching fuzzers to allow that one program to compile. Fix the program

Agreed.

Remember Debian's OpenSSL fiasco? The one that affected all the other derivatives as well, including Ubuntu.

It all started because OpenSSL added a bunch of uninitialized memory and the PID to the entropy pool. Who the hell relies on uninitialized memory, ever? The Debian maintainer wanted to fix Valgrind errors and submitted a patch. It wasn't properly reviewed, nor accepted into OpenSSL. The maintainer added it to the Debian package patches, and everything after that is history.
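A toy sketch (my own illustration, not the actual OpenSSL code) of why that patch was catastrophic: once the PID is essentially the only entropy left, the entire keyspace collapses to the number of possible PIDs, so every key ever generated can be enumerated in seconds:

```python
import hashlib
import random

# Toy model: if the only "entropy" is the process ID, there are at most
# pid_max distinct seeds, hence at most pid_max distinct keys.
MAX_PID = 32768  # classic Linux default pid_max

def toy_key_from_pid(pid):
    """Derive a deterministic 'key' from a PID-seeded PRNG."""
    rng = random.Random(pid)  # seeding with the PID is the only input
    return hashlib.sha256(rng.getrandbits(256).to_bytes(32, "big")).hexdigest()

distinct_keys = {toy_key_from_pid(pid) for pid in range(1, MAX_PID + 1)}
print(len(distinct_keys))  # at most 32768: trivially brute-forceable
```

That is essentially what the real-world attack tooling did after the fiasco: precompute every possible Debian-generated key and match against it.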

Everyone blamed Debian "because it only happened there", and mistakes were definitely made on that side, but I put much more blame on the OpenSSL developers.

dan ,
@dan@upvote.au avatar

OpenSSL did add to the entropy pool a bunch uninitialized memory and the PID.

Did they have a comment above the code explaining why it was doing it that way? If not, I'd blame OpenSSL for it.

The OpenSSL codebase has a bunch of issues, which is why somewhat-API-compatible forks like LibreSSL and BoringSSL exist.

umbrella ,
@umbrella@lemmy.ml avatar

did we find out who that guy was and why he was doing that?

fluxion ,

It was Spez trying to collect more user data to make Reddit profitable

refreeze ,
@refreeze@lemmy.world avatar

I have been reading about this since the news broke and still can't fully wrap my head around how it works. What an impressive level of sophistication.

rockSlayer ,

And due to open source, it was still caught within a month. Nothing could convince me more of how secure FOSS can be.

lung ,
@lung@lemmy.world avatar

Idk if that's the right takeaway, more like 'oh shit there's probably many of these long con contributors out there, and we just happened to catch this one because it was a little sloppy due to the 0.5s thing'

This shit got merged. Binary blobs and hex digit replacements. Into low level code that many things use. Just imagine how often there's no oversight at all

rockSlayer ,

Yes, and the moment this broke, other project maintainers started looking for similar exploits. They read the same news we do and have the same concerns.

Quill7513 ,
@Quill7513@slrpnk.net avatar

I was literally compiling this library a few nights ago and didn't catch shit. We caught this one, but I'm sure there's a bunch of "bugs" we've squashed over the years, long after they were introduced, that were working just as intended like this one.

The real scary thing to me is the notion this was state sponsored and how many things like this might be hanging out in proprietary software for years on end.

Aatube ,
@Aatube@kbin.melroy.org avatar

Don't forget all of this was discovered because ssh was running 0.5 seconds slower

Steamymoomilk ,
@Steamymoomilk@sh.itjust.works avatar

It's toooo much bloat.
There must be malware
XD linux users at their peak!

rho50 ,

Tbf 500ms latency on - IIRC - a loopback network connection in a test environment is a lot. It's not hugely surprising that a curious engineer dug into that.

ryannathans ,

Especially given that it took 300ms before and 800ms after

Jolteon ,

Half a second is a really, really long time.

imsodin ,

Technically that wasn't the initial entrypoint, paraphrasing from https://mastodon.social/@AndresFreundTec/112180406142695845 :

It started with ssh using an unreasonable amount of CPU, which interfered with benchmarks. Then profiling showed that the CPU time was being spent in lzma, without being attributable to anything. And he remembered earlier Valgrind issues. Those Valgrind issues only came up because he had set some build flag he doesn't even remember anymore why he set. On top of that, he ran all of this on Debian unstable to catch (unrelated) issues early. Had any of these factors been missing, he wouldn't have caught it. All of this is so nuts.

etchinghillside ,

Any additional information been found on the user?

possiblylinux127 OP ,
@possiblylinux127@lemmy.zip avatar

Probably Chinese?

Potatos_are_not_friends ,

Can't confirm but unlikely.

Via https://boehs.org/node/everything-i-know-about-the-xz-backdoor

They found this particularly interesting as Cheong is new information. I’ve now learned from another source that Cheong isn’t Mandarin, it’s Cantonese. This source theorizes that Cheong is a variant of the 張 surname, as “eong” matches Jyutping (a Cantonese romanisation standard) and “Cheung” is pretty common in Hong Kong as an official surname romanisation. A third source has alerted me that “Jia” is Mandarin (as Cantonese rarely uses J and especially not Ji). The Tan last name is possible in Mandarin, but is most common for the Hokkien Chinese dialect pronunciation of the character 陳 (Cantonese: Chan, Mandarin: Chen). It’s most likely our actor simply mashed plausible sounding Chinese names together.

fluxion ,

That actually suggests not Chinese due to naming inconsistencies

ForgotAboutDre ,

Could be Chinese creating reasonable doubt. Making this sort of mistake makes explanations that this wasn't Chinese sound plausible. Even if evidence other than the name comes out, this rebuttal can be repeated and create confusion amongst the public, reasonable suspicions against accusers and a plausible excuse for other states to not blame China (even if they believe it was China).

Confusion and multiple narratives are a technique often used by the Soviet, Russian and Chinese governments. We are unlikely to be able to answer the question ourselves. It will be up to the intelligence agencies to do that.

If someone wanted to frame China for this, they would have taken the name of a real Chinese person. There are over a billion real people they could take a name from. It's unlikely that a person creating a false identity for this type of espionage would accidentally pick an implausible name.

dan ,
@dan@upvote.au avatar

They're more likely to be based in Eastern Europe based on the times of their commits (during working hours in Eastern European Time) and the fact that while most commits used a UTC+8 time zone, some of them used UTC+2 and UTC+3: https://rheaeve.substack.com/p/xz-backdoor-times-damned-times-and
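A sketch of the kind of commit-metadata analysis the linked post performs (the sample dates below are made up for illustration): tally the UTC offsets embedded in each commit's author date and look for outliers.

```python
from collections import Counter
import re

def utc_offsets(commit_dates):
    """Tally the numeric UTC offsets found in ISO-8601-style commit dates."""
    counts = Counter()
    for d in commit_dates:
        m = re.search(r"([+-]\d{2}):?(\d{2})$", d)
        if m:
            counts[f"{m.group(1)}:{m.group(2)}"] += 1
    return counts

# Hypothetical sample matching the pattern described: mostly UTC+8,
# with a few slips into UTC+2/UTC+3.
sample = [
    "2023-06-27T17:27:45+08:00",
    "2023-06-28T09:14:02+08:00",
    "2022-11-30T20:01:10+03:00",
    "2023-02-14T18:45:33+02:00",
]
print(utc_offsets(sample))  # +08:00 twice, +03:00 and +02:00 once each
```

Git author dates carry whatever offset the committer's machine was configured with, which is why a forgotten timezone setting can leak more than the fake name does.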

luthis ,
@luthis@lemmy.nz avatar

I have heard multiple times from different sources that building from git source instead of using tarballs invalidates this exploit, but I do not understand how. Is anyone able to explain that?

If malicious code is in the source, and therefore in the tarball, what's the difference?

Aatube ,
@Aatube@kbin.melroy.org avatar

Because m4/build-to-host.m4, the entry point, is not in the git repo, but was included by the malicious maintainer into the tarballs.

luthis ,
@luthis@lemmy.nz avatar

Tarballs are not built from source?

Aatube ,
@Aatube@kbin.melroy.org avatar

The tarballs are the official distributions of the source code. The maintainer had git remove the malicious entry point when pushing the newest versions of the source code while retaining it inside these distributions.

All of this would be avoided if Debian downloaded from GitHub's distributions of the source code, albeit unsigned.
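A minimal sketch of the audit this implies (hypothetical file lists, not the real xz manifests): diff the release tarball's contents against the files actually tracked in git, and flag anything that only exists in the tarball, which is exactly how the entry point slipped in.

```python
def tarball_only(tarball_members, git_tracked):
    """Return files present in the release tarball but not tracked in git."""
    return sorted(set(tarball_members) - set(git_tracked))

# Illustrative file lists:
git_files = ["configure.ac", "src/liblzma/check/crc64_fast.c", "m4/ax_pthread.m4"]
release_files = git_files + ["m4/build-to-host.m4"]  # injected at release time

print(tarball_only(release_files, git_files))  # ['m4/build-to-host.m4']
```

In practice release tarballs legitimately contain generated files (configure, Makefile.in, and so on), so a raw diff has benign noise; the point is that the discrepancy is mechanically detectable at all.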

Corngood ,

All of this would be avoided if Debian downloaded from GitHub's distributions of the source code, albeit unsigned.

In that case they would have just put it in the repo, and I'm not convinced anyone would have caught it. They may have obfuscated it slightly more.

It's totally reasonable to trust a tarball signed by the maintainer, but there probably needs to be more scrutiny when a package changes hands like this one did.

barsoap ,

Downloading from github is how NixOS avoided getting hit. On unstable, that is; on stable, a tarball gets downloaded (EDIT: fixed links).

Another reason it didn't get hit is that the exploit is Debian/Red Hat-specific, checking for files and env variables that just aren't present when nix builds it. That doesn't mean that nix couldn't be targeted, though. Also, it's a bit iffy that replacing the package on unstable took on the order of 10 days, which is 99.99% build time because it's a full rebuild. Much better on stable, but it's not like unstable doesn't get regular use by people, especially as you can mix and match when running NixOS.

It's probably a good idea to make a habit of pulling directly from github (generally, VCS). Nix checks hashes all the time so upstream doing a sneak change would break the build, it's more about the version you're using being the one that has its version history published. Also: Why not?
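The pinned-hash idea can be illustrated in a few lines (a Python sketch of the concept, not actual Nix code): the packager records a hash when a version is adopted, and any later "sneak change" to the fetched source fails the build.

```python
import hashlib

def verify_pinned(data: bytes, pinned_sha256: str) -> bool:
    """Check fetched source bytes against a hash pinned at packaging time."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

source = b"pretend tarball contents"
pin = hashlib.sha256(source).hexdigest()  # recorded when the version was packaged

print(verify_pinned(source, pin))                     # True: build proceeds
print(verify_pinned(source + b" sneak change", pin))  # False: build aborts
```

Note this only guarantees you get the same bytes the packager saw; it does nothing if the pinned release was already malicious, which is why pulling from the published VCS history matters too.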

Overall, who knows what else is hidden in that code, though. I've heard that Debian wants to roll back a whole two years, and that's probably a good idea. In general we should be much more careful about the TCB, and actually have a proper TCB in the first place, which means making it small and simple. Compilers are always going to be an issue, as small is not an option there, but the likes of http clients and decompressors? Why can they make coffee?

chameleon ,
@chameleon@kbin.social avatar

You're looking at the wrong line. NixOS pulled the compromised source tarball just like nearly every other distro, and the build ends up running the backdoor injection script.

It's just that, much like Arch, Gentoo and a lot of other distros, it doesn't meet the gigantic list of preconditions for it to inject the sshd-compromising backdoor. But had it gone undetected for longer, it would have met the conditions for the "stage3"/"extension mechanism".

gregorum ,
@gregorum@lemm.ee avatar

Thank you open source for the transparency.

Cornelius_Wangenheim ,

And thank you Microsoft.

Pantherina ,

They just pay some dude who is doing good work
