GitLab discovers widespread NPM supply chain attack
Posted by OuterVale 12 days ago
Comments
Comment by qubex 11 days ago
Comment by sigmoid10 11 days ago
Comment by Glemkloksdjf 11 days ago
Which can't be the right way.
Comment by ndriscoll 11 days ago
Most of your programs are trusted, don't need isolation by default, and are more useful when they have access to your home data. npm is different. It doesn't need your documents, and it runs untrusted code. So add the 1 line you need to your profile to sandbox it.
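A minimal sketch of that "1 line" idea, assuming bubblewrap (`bwrap`) is installed; the bind paths are illustrative and vary by distro (some need `/lib64` as well):

```shell
# Wrap npm so it only sees the current project and its own cache;
# the rest of $HOME is simply never mapped into the sandbox.
npm() {
  bwrap \
    --ro-bind /usr /usr \
    --ro-bind /etc /etc \
    --symlink usr/bin /bin \
    --symlink usr/lib /lib \
    --proc /proc \
    --dev /dev \
    --bind "$PWD" "$PWD" \
    --bind "$HOME/.npm" "$HOME/.npm" \
    --unshare-all --share-net \
    --die-with-parent \
    --chdir "$PWD" \
    /usr/bin/npm "$@"
}
```

A malicious postinstall script inside this sandbox can still trash the project directory, but it can no longer read ~/.ssh, ~/.config/gh, or browser profiles.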
Comment by kernc 11 days ago
Comment by godzillabrennus 11 days ago
Comment by naikrovek 11 days ago
I really wish people had paid more attention to that operating system.
Comment by nyrikki 11 days ago
K8s choices cloud that a little, but for vscode completions, as an example, I have a pod that systemd launches on request.
I have nginx receive the socket from systemd, and it communicates with llama.cpp through a socket on a shared volume. As nginx inherits the socket from systemd, it doesn't have internet access either.
If I need a new model I just download it to a shared volume.
Llama.cpp has no internet access at all, and is usable on an old 7700k + 1080ti.
People thinking that the k8s concept of a pod, with shared UTS, net, and IPC namespaces, is all a pod can be confuses the issue.
The same unshare(2) call that runc uses is very similar to how clone() lets a child drop the parent's IPC and other namespaces…
I should probably spin up a blog on how to do this as I think it is the way forward even for long lived services.
The information is out there but scattered.
If it is something people would find useful please leave a comment.
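Until that blog exists, the socket-activation half of the setup looks roughly like the following sketch; unit names, binary paths, and model paths here are illustrative, not the commenter's actual configuration:

```shell
# systemd owns the listening socket; the service starts on the first
# request and never gets a network namespace of its own.
cat > /etc/systemd/system/inference.socket <<'EOF'
[Socket]
ListenStream=/run/inference/http.sock

[Install]
WantedBy=sockets.target
EOF

cat > /etc/systemd/system/inference.service <<'EOF'
[Service]
ExecStart=/usr/local/bin/inference-server
# No network at all: the process only sees the inherited socket
# and the read-only model volume.
PrivateNetwork=yes
ProtectHome=yes
BindReadOnlyPaths=/srv/models
EOF
```

`PrivateNetwork=yes` is what makes the "download models to a shared volume" step necessary: the service itself can never reach the internet, activated or not.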
Comment by naikrovek 11 days ago
Plan9 had this by default in 1995, no third party tools required. You launch a program, it gets its own namespace, by default it is a child namespace of whatever namespace launched the program.
I should not have to read anything to have this. Operating systems should provide it by default. That is my point. We have settled for shitty operating systems because it’s easier (at first glance) to add stuff on top than it is to have an OS provide these things. It turns out this isn’t easier, and we’re just piling shit on top of shit because it seems like the easiest path forward.
Look how many lines of code are in Plan9, then look at how many lines of code are in Docker or Kubernetes. It is probably easier to write an operating system with the features you desire than it is to write an application-level operating system like Kubernetes that provides those features on top of the OS. And that is likely because application-scope operating systems like Kubernetes need to comply with the existing reality of the operating system they run on, while an actual operating system running on hardware gets to define the reality it provides to the applications atop it.
Comment by nyrikki 11 days ago
As someone who actually ran plan9 over 30 years ago, I assure you that if you go back and look at it, the namespaces were intended to abstract away the hardware limitations of the time, to build distributed execution contexts out of a large assembly of limited resources.
And if you have an issue with Unix sockets you would have hated it, as it didn't even have sockets and everything was about files.
Today we have a different problem, where machines are so large that we have to abstract them into smaller chunks.
Plan9 was exactly the opposite, when your local system CPU is limited you would run the cpu command and use another host, and guess what, it handed your file descriptors to that other machine.
The goals of plan9 are dramatically different than isolation.
But the OSes you seem to hate so much implemented many of the plan9 ideas, like /proc, union file systems, message passing etc.
Also note I am not talking about k8s in the above, I am talking about containers and namespaces.
K8s is an orchestrator; the kernel functionality may be abstracted by it, but K8s is just a user of those plan9-inspired ideas.
Netns, pidns, etc… could be used directly, and you can call unshare(2) directly, or use an OCI runtime like crun, or use podman.
Heck, you could call the ip(8) command and run your app in an isolated network namespace with a single command if you wanted to.
You don’t need an api or K8s at all.
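Concretely, assuming util-linux and iproute2 are installed, either of these gives a process an empty network namespace with no configured interfaces:

```shell
# One-shot, unprivileged on most distros: the same namespace machinery
# runc uses, via unshare(1). The new netns has no interfaces up, so any
# network call fails immediately.
unshare --user --map-root-user --net curl https://example.com

# Or keep a named namespace around for reuse with ip-netns(8):
sudo ip netns add isolated
sudo ip netns exec isolated ping -c1 example.com   # fails: no routes, no DNS
```

Both are thin wrappers over the same kernel namespaces that containers use; no daemon or orchestration layer is involved.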
Comment by naikrovek 11 days ago
The base OS should be providing a lot/all of these features by default.
Plan9 is as you describe out of the box, but what I want is what Plan9 might be if it were designed today, and it could be that with a little work. Isolation would not be terribly difficult to add to it. The namespace a process gets by default could limit it to its own configuration directory, its own data directory, and standard in and out. And imagine every instance of that application getting its own distinct copy of that namespace, with none of them able to talk to each other or scan any disk. They only do work sent to them via stdin, as dictated in the srv configuration for that software.
Everything doesn’t HAVE to be a file, but that is a very elegant abstraction when it all works.
> call the ip() command and run your app in an isolated namespace with a single command if you wanted to.
I should not have to opt in to that. Processes should be isolated by default. Their view of the computer should be heavily restricted; look at all these goofy NPM packages running malware, capturing credentials stored on disk. Why can an NPM package see any of that stuff by default? Why can it see anything else on disk at all? Why is everything wide fucking open all the time?
Why am I the goofy one for wanting isolation?
Comment by bitfilped 10 days ago
Comment by naikrovek 9 days ago
OS-level isolation needs to be a thing. And it needs to be on by default.
Comment by ElectricalUnion 11 days ago
If using software securely was really a priority, everyone would be rustifying everything, and running everything on separate physical machines with restrictive AppArmor, SELinux, TOMOYO and Landlock profiles, with mTLS everywhere.
It turns out that in Security, "availability" is a very important requirement, and "can't run your insecure-by-design system" is a failing grade.
Comment by naikrovek 11 days ago
Only via virtualization in the case of MacOS. Somehow, even windows has native container support these days.
A much more secure system can be made I assure you. Availability is important, but an NPM package being able to scan every attached disk in its post-installation script and capture any clear text credentials it finds is crossing the line. This isn’t going to stop with NPM, either.
One can have availability and sensible isolation by default. Why we haven’t chosen to do this is beyond me. How many people need to get ransomwared because the OS lets some crappy piece of junk encrypt files it should not even be able to see without prompting the user?
Comment by rafterydj 11 days ago
Comment by naikrovek 11 days ago
I would love to know.
Comment by gizmo686 11 days ago
If you excuse me, I have a list of 1000 artifacts I need to audit before importing into our dependency store.
Comment by metachris 11 days ago
Comment by bitfilped 10 days ago
Comment by estimator7292 11 days ago
Comment by wasmainiac 9 days ago
It’s a fair angle you're taking here, but I would only expect to see it on hardened servers.
Comment by rkagerer 11 days ago
Comment by baq 11 days ago
Comment by nektro 11 days ago
Comment by 2OEH8eoCRo0 11 days ago
Comment by tucnak 11 days ago
Comment by philipwhiuk 11 days ago
When you're running NPM tooling you're running libraries primarily built for those problems, hence the torrent of otherwise unnecessary complexity of polyfills, which happen to be running on a JS engine that doesn't have a browser attached to it.
Comment by bakkoting 11 days ago
Comment by jazzypants 11 days ago
[0] - TC39 member who is self-described as "obsessed with backwards compatibility": https://github.com/ljharb
[1] - Here's one of many articles describing the situation: https://marvinh.dev/blog/speeding-up-javascript-ecosystem-pa...
Comment by bakkoting 10 days ago
It's true that there are a few people who publish packages on npm including polyfills, Jordan among them. But these are a very small fraction of all packages on npm, and none of the compromised packages were polyfills. Also, he cares about backwards compatibility _with old versions of node_; the fact that JavaScript was originally a web language, as the grandparent comment says, is completely irrelevant to the inclusion of those specific polyfills.
Polyfills are just completely irrelevant to this discussion.
Comment by jazzypants 10 days ago
Comment by kubafu 11 days ago
Comment by wonderfuly 12 days ago
In addition to concerns about npm, I'm now hesitant to use the GitHub CLI, which stores a highly privileged OAuth token in plain text in the HOME directory. Once an attacker accesses it, they can do almost anything on my behalf; for example, they turned many of my private repos public.
Comment by douglascamata 11 days ago
For example, in my macOS machines the token is safely stored in the OS keyring (yes, I double checked the file where otherwise it would've been stored as plain text).
Comment by naikrovek 11 days ago
Comment by hombre_fatal 11 days ago
It would be better if you could have multiple providers attached (gnome-keyring and keepassxc) and then decide which app uses which provider.
Because only some secrets you want to share across devices, like wifi passwords, and the rest you don’t, like the key chromium uses to encrypt local cookies or the gh cli token.
Comment by kd913 11 days ago
Comment by didntcheck 11 days ago
But protecting specific directories is just whack-a-mole. The real fix is to properly sandbox code - an access whitelist rather than endlessly updating a patchy blacklist
Comment by naikrovek 11 days ago
One could easily allow or restrict visibility of almost anything to any program. There were/are some definite usability concerns with how it is done today (the OS was not designed to be friendly, but to try new things) and those could easily be solved. The core of this existed in the Plan9 kernel and the Plan9 kernel is small enough to be understood by one person.
I’m kinda angry that other operating systems don’t do this today. How much malware would be stopped in its tracks and made impotent if every program launched was inherently and natively walled off from everything else by default?
Comment by brendyn 11 days ago
Not disagreeing with the need for isolation though, I just think it should be designed carefully in a zero-sacrifice way (of use control/pragmatic software freedom)
Comment by GrantMoyer 8 days ago
Comment by mcny 11 days ago
I believe Wayland (don't quote me on this because I know exactly zero technical details) as opposed to x is a big step in this direction. Correct me if I am wrong but I believe this effort alone has been ongoing for a decade. A proper sandbox will take longer and risks being coopted by corporate drones trying to take away our right to use our computers as we see fit.
Comment by rkangel 11 days ago
All programs in X were trusted and had access to the same drawing space, which meant that one program could see what another one was drawing. Effectively, any compromised program could see your whole screen if you were using X.
Wayland has a different architecture where programs only have access to the resources to draw their own stuff, and then a separate compositor joins all the results together.
Wayland does nothing about the REST of the application permission model - ability to access files, send network requests etc. For that you need more sandboxing e.g. Flatpak, Containers, VMs
Comment by akshitgaur2005 11 days ago
Comment by Hendrikto 11 days ago
Comment by ElectricalUnion 11 days ago
They are hooks that latch onto the common GUI application library calls for things such as "open file" dialogs, such that exceptions to the sandbox are implicitly added as you go.
They cannot prevent, for example, direct filesystem access if the application has permission to open() stuff, like if it's not running in a sandbox, or if said sandbox has a "can see and modify entire filesystem" exception (very common on your average flatpak app, btw).
Comment by internet_points 11 days ago
E.g. under X you can use bubblewrap or firejail to restrict access to the web or whatever for some program, but still give that program access to for example an xdg portal that lets you "open url in web browser" (except the locked-down program can't for example see the result of downloading that web page)
Comment by febusravenga 11 days ago
All our tokens should be in a protected keychain, and there are no proper cross-platform solutions for this. gcloud, the AWS SDKs, gh, and other tools just store them in dotfiles.
And the worst thing: AFAIK there is no way to do it correctly in MacOS, for example. I'd like to be corrected though.
Comment by mcny 11 days ago
I feel like we are barking up the wrong tree here. The plain text token thing can't be fixed. We have to protect our computers from malware to begin with. Maybe Microsoft was right to use secure admin workstations (saw) for privileged access but then again it is too much of a hassle.
Comment by sakisv 11 days ago
For a given project, I have a `./creds` directory which is managed with pass and it contains all the access tokens and api keys that are relevant for that project, one per file, for example, `./creds/cloudflare/api_token`. Pass encrypts all these files via gpg, for which I use a key stored on a Yubikey.
Next to the `./creds` directory, I have an `.envrc` which includes some lines that read the encrypted files and store their values in environment variables, like so: `export CLOUDFLARE_API_TOKEN=$(pass creds/cloudflare/api_token)`.
Every time that I `cd` into that project's directory, direnv reads and executes that file (just once) and all these are stored as environment variables, but only for that terminal/session.
This solves the problem of plain-text files, but of course the values remain in ENV and something malicious could look for some well known variable names to extract from there. Personally I try to install things in a new termux tab every time which is less than ideal.
I'd like to see if and how other people solve this problem
[1]: https://direnv.net/ [2]: https://www.passwordstore.org/
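The workflow above, end to end, looks roughly like this (pass, gpg, and direnv assumed installed and configured; the `creds/cloudflare/api_token` entry name is the commenter's layout):

```shell
# Encrypt the token with gpg via pass (prompts for the value):
pass insert creds/cloudflare/api_token

# Wire it into the project's .envrc:
echo 'export CLOUDFLARE_API_TOKEN=$(pass creds/cloudflare/api_token)' > .envrc
direnv allow .   # opt this directory in; direnv refuses to run unapproved .envrc files

# From now on, `cd`-ing into the directory decrypts into env vars for
# that shell session only; `cd`-ing out unloads them.
```

Nothing plain-text touches disk, though as noted the decrypted values still sit in the environment of that session.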
Comment by gerardnico 11 days ago
Example : https://github.com/combostrap/devfiles/blob/main/dev-scripts...
It’s not completely foolproof, but at least gpg asks for my passphrase only when I run the script
Comment by hrimfaxi 11 days ago
Comment by internet_points 11 days ago
Comment by tinodb 9 days ago
Comment by L-four 11 days ago
This does mean entering your keyring password a lot.
Comment by 1718627440 11 days ago
Not when you put that keyring's password into the user keyring. I think it is also cached by default.
Comment by masfuerte 11 days ago
Comment by 1718627440 11 days ago
Comment by mxey 11 days ago
Comment by ElectricalUnion 11 days ago
If all Homebrew "apps" share the same key, then accepting a keyring notification for one app is a lost cause, as it would allow anything vulnerable to RCE to read/write everything?
Comment by flir 11 days ago
otoh I wouldn't do it, because I don't believe I could implement it securely.
Comment by data-ottawa 11 days ago
I had a Borg backup script for example and 1password needed me to authenticate to run it.
Authenticating for ssh and git is great.
Comment by __turbobrew__ 10 days ago
Comment by mxey 11 days ago
https://developer.apple.com/documentation/security/keychain-...
And similar services exist on Linux desktops. There are libraries that will automatically pick the right backend.
Comment by akdev1l 11 days ago
1. Piggyback off your existing auth infra (e.g. ActiveDirectory or whatever you already have going on for user auth)
2. Failing that, use Identity Center to create user auth in AWS itself
Either way means that your machine gets temporary credentials only
Alternatively, we could write an AWS CLI helper to store the stuff into the keychain (maybe someone has)
Not to take away from your more general point
We need flatpak for CLI tools
Comment by queenkjuul 10 days ago
Comment by 1718627440 11 days ago
Also this is a complete non-issue on Unix(-like) systems, because everything is designed around passing small strings between programs. Getting a secret from another program is the same amount of code as reading it from a text file, since everything is a file.
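A sketch of that point: to the calling shell, a secret-printing program and a secret-holding file are the same one-liner (`printf` below stands in for something like `pass show token` or a keyring agent query):

```shell
# Write a throwaway "plain text token" file.
token_file=$(mktemp)
printf 'hunter2' > "$token_file"

# Reading from a file and reading from a program have the same shape:
secret_from_file=$(cat "$token_file")
secret_from_cmd=$(printf 'hunter2')   # stand-in for a secret-manager call
rm -f "$token_file"
```

Swapping the plain-text file for an agent is a one-line change at every call site, which is the argument for tools not hard-coding dotfile storage.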
Comment by naikrovek 11 days ago
What? The MacOS Keychain is designed exactly for this. Every application that wants to access a given keychain entry triggers a prompt from the OS and you must enter your password to grant access.
Comment by sierra1011 11 days ago
Have you wiped your laptop/infected machine? If not I would recommend it; part of it created a ~/.dev-env directory which turned my laptop into a GitHub runner, allowing for remote code execution.
I have a read-only filesystem OS (Bluefin Linux) and I don't know quite how much this has saved me, because so much of the attack happens in the home directory.
Comment by mikkupikku 11 days ago
Pop quiz, hot shot! A terrorist is holding user data hostage, got enough malware strapped to his chest to blow a data center in half. Now what do you do?
Shoot the hostage.
Comment by hsbauauvhabzb 11 days ago
Comment by wiradikusuma 12 days ago
Comment by broeng 12 days ago
1) The availability of the package post-install hook that can run any command after simply resolving and downloading a package[1].
That, combined with:
2) The culture with using version ranges for dependency resolution[2] means that any compromised package can just spread with ridiculous speed (and then use the post-install hook to compromise other packages). You also have version ranges in the Java ecosystem, but it's not the norm to use in my experience, you get new dependencies when you actively bump the dependencies you are directly using because everything depends on specific versions.
I'm no NPM expert, but that's the worst offenders from a technical perspective, in my opinion.
[1]: I'm sure it can be disabled, and it might even be now by default - I don't know. [2]: Yes, I know you can use a lock file, but it's definitely not the norm to actively consider each upgraded version when refreshing the lockfile.
Comment by hiccuphippo 11 days ago
IMO, `ci` should be `install`, `install` should be `update`.
Plus the install command is reused to add dependencies, that should be a separate command.
Comment by bakkoting 11 days ago
`npm install` will always use the versions listed in package-lock.json unless your package.json has been edited to list newer versions than are present in package-lock.json.
The only difference with `npm ci` is that `npm ci` fails if the two are out of sync (and it deletes `node_modules` first).
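In other words (behavior as described above for npm 7+; worth double-checking against your npm version):

```shell
npm ci        # install exactly what package-lock.json records; deletes
              # node_modules first; errors out if the lockfile and
              # package.json disagree
npm install   # resolve the ranges in package.json; only changes the
              # lockfile when package.json asks for something newer
              # than the lockfile already records
```

So `ci` is the reproducible one, and `install` only drifts when package.json itself has been edited.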
Comment by silverwind 11 days ago
Yep, auto-updating dependencies are the main culprit for why malware can spread so fast. I strongly recommend using `save-exact` in npm and only updating your dependencies when you actually need to.
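`save-exact` is a one-line config change; with it set, `npm install some-pkg` writes an exact version like `"1.2.3"` into package.json instead of the default `"^1.2.3"` range:

```shell
# .npmrc (in the project root, or in ~/.npmrc for all projects)
save-exact=true
```

Exact pins mean a compromised patch release can't reach you until you deliberately bump the version.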
Comment by tedivm 11 days ago
The answer is a balance. Use Dependabot to keep dependencies up to date, but configure a dependency cooldown so you don't end up installing anything too new. A seven day cooldown would keep you from being vulnerable to these types of attacks.
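As a sketch, assuming a Dependabot version recent enough to support the `cooldown` option, the seven-day idea in `.github/dependabot.yml` would look something like:

```yaml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    # Don't propose an update until the release is at least 7 days old
    cooldown:
      default-days: 7
```

The window gives registry maintainers and scanners time to yank a compromised release before your bots pull it in.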
Comment by SAI_Peregrinus 11 days ago
Comment by tedivm 10 days ago
Comment by Cthulhu_ 11 days ago
* NPM has a culture of "many small dependencies", so there's a very long tail of small projects that are mostly below the radar that wouldn't stand out initially if they get a patch update. People don't look critically into updated versions because there's so many of them.
* Developers have developed a culture of staying up-to-date as much as possible, so any patch release is applied as soon as possible, often automated. This is mainly sold as a security feature, so that a vulnerability gets patched and released before disclosure is done. But it was (is?) also a thing where if you wait too long to update, updating takes more time and effort because things keep breaking.
Comment by kace91 12 days ago
That means that not only does the average project have a ton of dependencies, but any given dependency will in turn have a ton of dependencies as well. There are multiplicative effects in play.
Comment by louiskottmann 12 days ago
One package for lists, one for sorting, and down the rabbit hole you go.
Comment by sensanaty 11 days ago
Refactoring these also isn't always trivial either, so it's a long journey to fully get rid of something like Lodash from an old project
Comment by silverwind 11 days ago
Comment by rhubarbtree 12 days ago
Comment by palata 12 days ago
To be fair Java has improved a lot over the last few years. I really have the feeling that Java is getting better, while C++ is getting worse.
Comment by rhubarbtree 8 days ago
Comment by PhilipRoman 11 days ago
Comment by rhubarbtree 8 days ago
Comment by yunwal 11 days ago
Comment by parliament32 12 days ago
Comment by dboreham 12 days ago
Comment by Sophira 12 days ago
I've even seen "setup scripts" for projects that will use root (with your permission) to install software. Such scripts are less common now with containers, but unfortunately containers aren't everything.
Comment by 1718627440 11 days ago
I consider this to be a sign that someone is still an amateur, and this is a reason to not use the software and quickly delete it.
If you need a dependency, you can call the OS package manager, or tell me to compile it myself. If you start a network connection, you are malware in my eyes.
Comment by Cthulhu_ 11 days ago
Comment by dtech 12 days ago
Basically any dependency can (or used to be able to?) run any script with the developer's permissions on install. JVM and Python package managers don't do this.
Of course in all ecosystems once you actually run the code it can do whatever with the permissions of the executes program, but this is another hurdle.
Comment by lights0123 12 days ago
Comment by oefrha 12 days ago
Comment by silverwind 11 days ago
What we really need is a system to restrict packages in what they can do (for example, many packages don't need network access).
Comment by duncanbeevers 11 days ago
There has been some promising prior research such as BreakApp attempting to mitigate unusual supply-chain compromises such as denial-of-service attacks targeting the CPU via pathological regexps or other logic-bomb-flavored payloads.
Comment by Balinares 12 days ago
So just installing a package can get you compromised. If the compromised box contains credentials to update your own packages in NPM, then it's an easy vector for a worm to propagate.
Comment by magnetometer 11 days ago
pip install <package> --only-binary :all:
to only install wheels and fail otherwise.
Comment by Balinares 8 days ago
Would source distributions work as a vector for automated propagation, though? If I'm not mistaken, there's no universal standard for building from source distributions.
Comment by nottorp 11 days ago
In other "communities" you upgrade dependencies when you have time to evaluate the impact.
Comment by Ekaros 12 days ago
Comment by Karliss 11 days ago
Last time I did anything with Java, felt like use of multiple package repositories including private ones was a lot more popular.
Although higher branching factor for JavaScript and potential target count are probably very important factors as well.
Comment by sgammon 11 days ago
Comment by DANmode 11 days ago
not chat bots.
Comment by thepasswordapp 12 days ago
The action item for anyone potentially affected: rotate your npm tokens, GitHub PATs, and any API keys that were in environment variables. And if you're like most developers and reused any of those passwords elsewhere... rotate those too.
This is why periodic credential rotation matters - not just after a breach notification, but proactively. It reduces the window where any stolen credential is useful.
Comment by Towaway69 12 days ago
How does one know one is affected?
What's the point of rotating tokens if I'm not sure that I've been affected - the new tokens will just be ex-filtrated as well.
First step would be to identify infection, then clean up and then rotate tokens.
Comment by mcintyre1994 12 days ago
From what I’ve read so far (and this definitely could change), it doesn’t install persistent malware, it relies on a postinstall script. So new tokens wouldn’t be automatically exfiltrated, but if you npm install any of an increasing number of packages then it will happen to you again.
Comment by sierra1011 11 days ago
Comment by Ferret7446 12 days ago
Is this true? God I hope not, if developers don't even follow basic security practices then all hope is lost.
I'd assume this is stating the obvious, but storing credentials in environment variables or files is a big no-no. Use a security key or at the very least an encrypted file, and never reuse any credential for anything.
Comment by TeMPOraL 11 days ago
"Basic security practices" is an ever expanding set of hoops to jump through, that if properly followed, stop all work in its tracks. Few are following them diligently, or at all, if given any choice.
Places that care about this - like actually care, because of contractual or regulatory reasons - don't even let you use the same machine for different projects or customers. I know someone who often has to carry 3+ laptops on them because of this.
Point being, there's a cost to all these "basic security practices", a cost that security practitioners pretend doesn't exist, but it does, and it's quite substantial. Until the security world acknowledges this fact openly, they'll always be surprised by how people "stubbornly" don't follow "basic practices".
Comment by lionkor 11 days ago
Comment by throwawayqqq11 11 days ago
Previously, you had isolated places to clean up after a compromise and you were good to go again. This attack exploits the semi-distributed nature of the ecosystem and attacks it as a whole, and I am afraid this approach will get more sophisticated in the future. It reminds me a little of malicious transactions written into a distributed ledger.
Comment by vedhant 11 days ago
Comment by dawnerd 12 days ago
I hate that high profile services still default to plain text for credential storage.
Comment by internet_points 11 days ago
If I just need to `fly secrets set KEY=hunter2` one time for production I can copy it from a paper pad even but if it's a key I need to use every time I run a program that I'm developing on, it's likely going to end up at least being in my program's shell environment (and thus readable from its /proc/pid/environ). So if I `npm install compromised-package` – even from some other terminal – can't it just `grep -a KEY= /proc/*/environ`?
Or are you saying the programs we hack on should use some kind of locker api to fetch secrets and do away with env vars?
Comment by mcintyre1994 12 days ago
Comment by dawnerd 12 days ago
GitHub has a massive malware problem as it is and it doesn’t get enough attention.
Comment by baobun 11 days ago
Comment by princevegeta89 12 days ago
Imagine the number of things that can go wrong when they try to regulate or introduce restrictions for build workflows for the purpose of making some extra money... lol
The original Java platform is a good example to think about.
Comment by amiga386 11 days ago
The golang modules core to the language are hosted at golang.org
Module authors have always been free to have their own prefix rather than github.com, even if they host their module on Github. If they say their module is example.com/foo and then set their webserver to respond to https://example.com/foo?go-get=1 with <meta name="go-import" content="example.com/foo mod https://github.com/the_real_repository/foo"> then they will leave no hint that it's really hosted at github, and they could host it somewhere else in future (including at https://example.com directly if they want)
Another feature is that go uses a default proxy, https://proxy.golang.org/, if you don't set one yourself. This means that Google, who control that proxy, can choose to make a request for a package like github.com/foo/bar go to some place else, if for whatever reason Microsoft won't honour it any more.
Comment by oefrha 11 days ago
Comment by Cthulhu_ 11 days ago
And (to put on my Go defender hat), the Go ecosystem doesn't like having many dependencies, in part because of supply chain attack vectors and the fact that Node's ecosystem went a bit overboard with libraries.
Comment by hiccuphippo 11 days ago
Comment by benatkin 12 days ago
Comment by testdelacc1 11 days ago
Comment by philipwhiuk 11 days ago
Comment by testdelacc1 11 days ago
Comment by arkh 12 days ago
So I'm surprised to never see something akin to "our AI systems flagged a possible attack" in those posts. Or that GitHub, owned by AI pusher Microsoft, doesn't already use their AI to find these kinds of attacks before they become a problem.
Where is this miracle AI for cybersecurity when you need it?
Comment by michaelt 11 days ago
Comment by firesteelrain 11 days ago
Comment by nottorp 11 days ago
Edit: see the curl posts about them being bombarded with "AI" generated security reports that mean nothing and waste their time.
Comment by efortis 11 days ago
ignore-scripts=true
to your .npmrc
Comment by hiccuphippo 11 days ago
Comment by efortis 11 days ago
Comment by TeMPOraL 11 days ago
- If it's safe to "ignore scripts", why does this option exist in the first place?
- Otherwise, what kind of cascading breakage in dependencies do you risk by suppressing part of their installation process?
Comment by efortis 11 days ago
Why is it allowed by default?
> it’s npm’s belief that the utility of having installation scripts is greater than the risk of worms.
NPM co-founder Laurie Voss
https://blog.npmjs.org/post/141702881055/package-install-scr...
Comment by seanwilson 11 days ago
Comment by efortis 11 days ago
https://nodejs.org/api/permissions.html
Regardless, it’s worth using `--ignore-scripts=true` because that’s the common vector these supply chain attacks target. Consider that when automating the attack, adding it to the application code is more difficult than injecting it into life-cycle scripts, which have well-known config lines.
Comment by MetaWhirledPeas 11 days ago
Comment by philipwhiuk 11 days ago
Comment by jMyles 11 days ago
I'm curious though: how do you avoid being stuck on the _vulnerable_ versions, delaying updates?
Comment by homebrewer 11 days ago
npm should have died long ago, I don't know why it's still being used.
Comment by mrklol 12 days ago
Comment by Cthulhu_ 11 days ago
Comment by Aeolun 12 days ago
Comment by jaggirs 12 days ago
Comment by hu3 12 days ago
There's a reason disclosures are obligatory in academic papers.
Comment by baq 12 days ago
Comment by rockskon 12 days ago
Comment by serial_dev 12 days ago
Call me a conspiracy theorist, but I start to think these people might be affiliated with GitLab.
Comment by TeMPOraL 11 days ago
Comment by hiccuphippo 11 days ago
Comment by Aeolun 11 days ago
Comment by norman784 12 days ago
Comment by ChrisArchitect 12 days ago
Comment by ares623 12 days ago
Comment by gchamonlive 12 days ago
Although it's not entirely new, it's something else.
Comment by prophesi 12 days ago
Comment by Yokohiii 11 days ago
Pretty sad.
Comment by mkesper 11 days ago
Comment by newsoftheday 11 days ago
HTH.
Comment by Yokohiii 11 days ago
Comment by xyzal 12 days ago
Comment by mcintyre1994 12 days ago
Comment by dmitrygr 12 days ago
Comment by john01dav 11 days ago
Comment by 1718627440 11 days ago
What it doesn't have is a hashmap type, but in C types are cheap and are created on an ad-hoc basis. As long as it corresponds to the correct interface, you can declare the type any way you like.
Comment by dmitrygr 11 days ago
Comment by TheTxT 12 days ago
Comment by 1718627440 11 days ago
    char *
    left_pad (const char * string, unsigned int pad)
    {
      char tmp[strlen (string) + pad + 1];
      memset (tmp, ' ', pad);
      strcpy (tmp + pad, string);
      return strdup (tmp);
    }
Doesn't sound too hard in my opinion. This only works for strings that fit on the stack, so if you want to make it robust, you should check the string size. It (like everything in C) can of course fail. Also it is a quite naive implementation, since it calculates the string size three times.
Comment by brabel 11 days ago
Comment by 1718627440 11 days ago
Like the sibling already wrote, that's what strdup does.
> Is it safe to return the duplicate of a stack allocated
Yeah sure, it's a copy.
> wouldn’t the copy be heap allocated anyway?
Yes. I wouldn't commit it like that, it is a naive implementation. But honestly I wouldn't commit leftpad at all, it doesn't sound like a sensible abstraction boundary to me.
> Not to mention it blows the stack and you get segmentation fault?
Yes and I already mentioned that in my comment.
---
> dynamic array right on the stack
Nitpick: it's a variable-length array, and it is auto allocated. "Dynamic allocation" refers to the heap or something similar, not to allocation the compiler already handles.
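A heap-only variant of the earlier left_pad, addressing the stack-size concern discussed above — a hypothetical sketch, not from the original comment: one exact-size malloc, no VLA, so arbitrarily long inputs can't blow the stack.

```c
#include <stdlib.h>
#include <string.h>

/* Heap-only left_pad sketch: returns a freshly allocated padded copy,
   or NULL on allocation failure. */
char *
left_pad_heap (const char *string, unsigned int pad)
{
  size_t len = strlen (string);
  char *out = malloc (len + pad + 1);   /* single exact-size allocation */
  if (out == NULL)
    return NULL;
  memset (out, ' ', pad);
  memcpy (out + pad, string, len + 1);  /* includes the trailing NUL */
  return out;
}
```

This also computes the string length only once, fixing the triple-strlen inefficiency noted earlier.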
Comment by lionkor 11 days ago
Comment by brabel 11 days ago
Comment by 1718627440 10 days ago
Allocating on the stack is pretty cheap: it's only a single instruction to move the stack pointer, and the compiler is likely to optimize it away completely. When doing more complicated things, where you don't build the string linearly, allocating on the stack first is likely cheaper, since the stack memory is probably in cache but a fresh heap allocation isn't. It can also make the code simpler, since you can first do arbitrary work on the stack and then allocate on the heap once the string is complete and you know its final size.
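The stack-first, heap-once pattern described above can be sketched like this (the function name and 256-byte scratch size are invented for illustration):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Assemble the string in a stack buffer, then make exactly one heap
   allocation once the final size is known. */
char *
join_path (const char *dir, const char *file)
{
  char tmp[256];                         /* scratch, likely already in cache */
  int n = snprintf (tmp, sizeof tmp, "%s/%s", dir, file);
  if (n < 0 || (size_t) n >= sizeof tmp)
    return NULL;                         /* result wouldn't fit the scratch */
  char *out = malloc ((size_t) n + 1);   /* single exact-size allocation */
  if (out != NULL)
    memcpy (out, tmp, (size_t) n + 1);   /* copy including the NUL */
  return out;
}
```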
Comment by newsoftheday 11 days ago
Comment by 1718627440 11 days ago
strndup prevents you from overrunning the allocation of a string, given that you pass it the containing allocation's size correctly. But if you get passed something that is not a string, there will be a buffer overrun right there in the first line. Also, what outer allocation?
You use strcpy when you get a string and memcpy when you get an array of char. strncpy is for when you get something that is maybe a string, but also a limited array. There ARE use cases for it, but it isn't for safety.
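One concrete caveat behind the "it isn't for safety" point: strncpy does not NUL-terminate when the source is at least n chars long, so the caller must terminate manually. A hypothetical helper making that explicit:

```c
#include <string.h>

/* strncpy may leave dst without a terminator when src is truncated;
   this wrapper adds the step strncpy omits. */
void
bounded_copy (char *dst, size_t n, const char *src)
{
  strncpy (dst, src, n);   /* copies at most n chars, no NUL on truncation */
  dst[n - 1] = '\0';       /* force termination ourselves */
}
```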
Comment by kidmin 11 days ago
Comment by testdelacc1 11 days ago
Comment by TZubiri 12 days ago
Comment by cyanydeez 12 days ago
Just like in the 90s when viruses primarily went to Windows: it wasn't some magical property of Windows, it was the market of users available.
Also, following this logic, it then becomes survivorship bias, in that the more attacks they get, the more researchers spend time looking & documenting.
Comment by elwebmaster 12 days ago
Comment by KevinMS 12 days ago
no, it really was windows
Comment by foobiekr 12 days ago
Comment by ndsipa_pomu 11 days ago
Also, Windows had the ridiculous default of immediately running things when a user put in a CD or USB stick - that behaviour led to many infections and is obviously a stupid default option.
I'm not even going to mention the old Windows design of everyone running with admin privileges on their desktop.
Comment by cesarb 11 days ago
Playing devil's advocate: absent the obvious security issues, it's a brilliant default option from a user experience point of view, especially if the user is not well-versed in the subtleties of filesystem management. Put the CD into the tray, close the tray, and the software magically starts; no need to go through the file manager and double-click on an obscurely named file.
It made more sense back when most software was distributed as pressed CD-ROMs, and the publisher of the software (which you bought shrink-wrapped at a physical store) could be assumed to be trusted. Once CD-R writers became popular, and anyone could and did write their own data CDs, these assumptions no longer held.
> I'm not even going to mention the old Windows design of everyone running with admin privileges on their desktop.
That design makes sense for a single-user computer where the user is the owner of the computer, and all software on it is assumed to be trusted. Even today, many Linux distributions add the first (and often only) user to a sudoers group by default.
Comment by ndsipa_pomu 11 days ago
It's a stupid default, though. One way round the issue is to present the user with the option to either just open a disc or to run the installer and allow them to change the default if they prefer the less secure option.
> It made more sense back when most software was distributed as pressed CD-ROMs, and the publisher of the software (which you bought shrink-wrapped at a physical store) could be assumed to be trusted
This allowed Sony BMG to infect so many computers with their rootkit (https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootk...).
> That design makes sense for a single-user computer where the user is the owner of the computer, and all software on it is assumed to be trusted. Even today, many Linux distributions add the first (and often only) user to a sudoers group by default.
A sudoers group is different though as it highlights the difference between what files they are expected to change (i.e. that they own) and which ones require elevated permissions (e.g. installing system software). Earlier versions of Windows did not have that distinction which was a huge security issue.
Comment by elwebmaster 12 days ago
Comment by TZubiri 12 days ago
Comment by austin-cheney 11 days ago
Comment by tuzemec 11 days ago
Also the whole ecosystem around oxc looks very promising: https://oxc.rs/
Comment by jackwilsdon 11 days ago
Admittedly you're not normally downloading the dependencies to your machine as you're often using pre-built binaries, but a malicious package could still run if a version was shipped with it.
[0]: https://github.com/biomejs/biome/blob/93182ea8e9d479fd0187ce...
[1]: https://github.com/oxc-project/oxc/blob/65bd5584bfce0c7da90f...
[2]: https://users.rust-lang.org/t/yet-another-npm-supply-chain-a...
Comment by brabel 11 days ago
Comment by rkagerer 11 days ago
With some package managers these days I don't even know how to do that (and I'm not necessarily talking about Node, specifically). How do you figure out what the install process does to your computer, without becoming an expert on the manifest syntax? For those of us who care about what goes on under the hood, it is definitely not easier than the days of following well-formed (or even semi-formed) documentation by hand.
Comment by akdor1154 11 days ago
Comment by csutil-com 11 days ago
Comment by noobcoder 11 days ago
Comment by newsoftheday 11 days ago
Comment by loginatnine 11 days ago
I did that a couple of weeks ago and received an acknowledgment "Another request on Trusted Publishing option. Assigning to Product for review and further action." so this is a bit encouraging.
At least Maven dependencies don't execute scripts on install, but Maven plugins could have a big blast radius.
Comment by jonhohle 11 days ago
At my previous company, I implemented staged dependencies with artifactory so that production could never get packages that had never gone through CR, or staging environments first. They just were never replicated. That eliminated fuzzy dependency matches that showed up for the first time in production (something that did happen). Because dev to production was about 1 week, it also afforded time to identify packages before they had a chance to be deployed. Obviously it was less robust than manually importing.
Maybe self-hosted package caches support these features now, but 6-7 years ago, that was all manual work.
Comment by xomodo 11 days ago
Comment by ksynwa 11 days ago
Comment by hiccuphippo 11 days ago
Comment by hakcermani 11 days ago
Comment by yupyupyups 12 days ago
Comment by gruez 12 days ago
CN = Johannes Schindelin O = Johannes Schindelin S = Nordrhein-Westfalen C = DE
Downside is the cost. Certificates cost hundreds of dollars per year. There's probably some room to reduce cost, but not by much. You also run into issues of paying some homeless person $50 to use their identity for cyber crimes.
Comment by mc32 12 days ago
Comment by veeti 12 days ago
Comment by brabel 11 days ago
Comment by gruez 11 days ago
Comment by brabel 11 days ago
Comment by gruez 11 days ago
Comment by brabel 10 days ago
Comment by morkalork 12 days ago
Comment by hirsin 12 days ago
The inevitable evolution of such a feature is a button on your repo saying "block all contributors from China, Russia, and N other countries". I personally think that's the antithesis of OSS and therefore couldn't find the value in such a thing.
Comment by morkalork 12 days ago
Comment by hirsin 12 days ago
Comment by ozgrakkurt 10 days ago
Comment by berdario 11 days ago
"easily", not so much...
As in, services can still detect if you're connecting through a VPN, and if you ever connect directly (because you forgot to enable the VPN), your real location might be detected. And the consequences there might not be "having to refresh the page with the VPN enabled", but instead: "find the whole organisation/project blocked, because of the connection of one contributor"
This is why Comaps is using codeberg, after its predecessor (before the fork) project got locked by GitHub
https://news.ycombinator.com/item?id=43525395
https://mastodon.social/@organicmaps/114155428924741370
Moreover, this kind of stuff is also the reason I stopped accessing Imgur:
- if I try without VPN, imgur stops me, because of the UK's Online Safety Act
- if I try with my personal VPN, I get a 403 error every single time
I'm sure I could get around it by using a different service (e.g. Mullvad), but imgur is just not important enough for me to bother, so I just stopped accessing it altogether
Comment by 1718627440 11 days ago
Comment by dcrazy 12 days ago
Comment by laserbeam 12 days ago
In principle, what’s stopping the technique from targeting macOS CI runners which improperly store keys used for notarization signing? Or… is it impossible to automate a publishing step for macOS? Does that always require a human to do a manual thing from their account to get a project published?
Comment by zx8080 11 days ago
Enjoy it while saving your cent!
Comment by Flere-Imsaho 11 days ago
Perhaps there is a light at the end of the tunnel: with AI coding assistance, the whole application can be written from scratch (like the old days). All the code is there, not buried deep within someone else's codebase.
Comment by bn-l 11 days ago
Comment by Traubenfuchs 11 days ago
Comment by Barry-Perkins 11 days ago
Comment by hresvelgr 11 days ago
Comment by AmbroseBierce 12 days ago
Comment by dominicrose 11 days ago
Meanwhile I have been using Ruby for 15 years and it has evolved in a stable way without breaking everything and without having to rewrite tons of libraries. It's not as powerful in terms of performance and I/O, it's not as far-reaching as JS is because it doesn't support the browser, it doesn't have a typescript equivalent, but it's mature and stable and its power is that it's human-friendly.
Comment by testdelacc1 11 days ago
And what’s more, people have proposed a standard library through tc39 without success - https://github.com/tc39/proposal-built-in-modules
Of course any large company could create a massive standard library on their own without going through the standards process but it might not be adopted by developers.
Comment by bakkoting 11 days ago
Comment by AmbroseBierce 10 days ago
Comment by h4ck_th3_pl4n3t 11 days ago
Comment by nottorp 11 days ago
The one with 12 competing standards going to 13 competing standards, or something like that.
Comment by AmbroseBierce 11 days ago
Comment by nottorp 11 days ago
Comment by AmbroseBierce 11 days ago
Comment by nottorp 10 days ago
And they're company policy as opposed to honest mistakes like security vulns.
Comment by latexr 11 days ago
Comment by Incipient 12 days ago
Comment by bhouston 12 days ago
This feels like opportunistic cyber criminals, or North Korea (which acts like cyber criminals).
Comment by Towaway69 12 days ago
This kind of large scale attack is perfect advertising for anyone selling protection against such attacks.
Spy agencies have no interest in selling protection.
Comment by Nextgrid 12 days ago
This can of course be resolved, but here’s the kicker: our own governments equally enjoy this ambiguity to do their own bidding; so no government truly has an incentive to actually improve cross-border identity verification and cybercrime enforcement.
Not to mention, even besides government involvement, these malicious actors still “engage” or induce “engagement” which happens to be the de-facto currency of the technology industry, so even businesses don’t actually have any incentive of fighting them.
Comment by mc32 12 days ago
Comment by halJordan 12 days ago
Comment by c0balt 12 days ago
It's just not that effective when the SBOM becomes unmanageable. For example, our JS project at $work has 2.3k dependencies just from npm. I can give you that SBOM (and even include the system deps with nix) but that won't really help you.
They are only really effective when the size is reasonable.
Comment by Ekaros 12 days ago
Comment by csomar 12 days ago
Take the Jaguar hack: the economic loss is estimated at 2.5bn. Given an average house price in the UK of $300k, that’s like destroying ~8,000 homes.
Do you think the public and international response will be the same if Russia or China leveled a small neighborhood even with no human casualties?
Comment by lionkor 11 days ago
Or, in other words; maybe the nature of humans and the inherent pressure of our society to perform, to be rich, to be successful, drives people to do bad things without any state actor behind it?
Comment by epolanski 12 days ago
We should fight this kind of behavior (and protect our privacy) regardless of who's involved, yet our governments in the West have nurtured this narrative of always pointing at big tech and foreign actors as scapegoats for anything privacy- or hacking-related.
Also, any cyber-attack tracker will show you this is a global issue; if you think there aren't millions of attacks carried out from our own countries, you're not looking hard enough.
Comment by kachapopopow 12 days ago