The nx supply chain attack via npm was the bullet many companies did not dodge. I mean, all you needed was to have the VS Code nx plugin installed — which always checked for the latest published nx version on npm. And if you had a local session with GitHub (e.g. logged into your company's account via the GH CLI), or some important creds in a .env file… those were exfiltrated.
This happened even if you had pinned dependencies and were on top of security updates.
For this reason, I avoid anything to do with NPM except the TypeScript compiler, and I'm looking forward to its rewrite in Go so I can remove even that.
As a comparison, in Go you have minimal version selection, and the toolchain takes great pains to never execute anything you download, even during compilation.
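A rough sketch of what that looks like in practice (the module path and dependency below are placeholders, not anything from this thread): a `go.mod` resolves with minimal version selection, `go.sum` pins content hashes, and neither fetching nor building ever runs a package-supplied hook:

    // go.mod -- example module; paths and versions are illustrative
    module example.com/myapp

    go 1.22

    // MVS: the build uses v0.9.1 unless some other module in the graph
    // explicitly requires a newer version; nothing executes at fetch time
    require github.com/pkg/errors v0.9.1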
NPM packages will often have different source than the GitHub repo source. How does anyone even trust the system?
Yeah, editor extensions are both auto-updated and installed in high-risk dev environments. Quite a juicy target, and I am surprised we haven't seen large-scale purchases by bad actors similar to browser extensions yet. However, I remember reading that the VS Code team puts a lot of effort into catching malware. But do all editors with auto-updates, such as Sublime, have such checks?
Maybe we need to unify explicit build toolchains rather than trying to glue them right into package managers?
The culprit here is not the source code, it's all the automatically executed build and installation hooks, which run with total disregard for any isolation or sandboxing.
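For what it's worth, npm does ship a blunt off switch for those hooks (a real config flag, though it breaks packages that genuinely need install-time builds):

    # ~/.npmrc
    ignore-scripts=true

With that set, `npm install` skips preinstall/postinstall scripts entirely, and you opt back in per package only when you trust the build step.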
I find it insane that someone would get access to a package like this, then just push a shitty crypto stealer.
You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit?
You can exfiltrate API keys, add your SSH public key to the server and then exfiltrate the server's IP address so you can snoop in there manually; if you're on a dev's machine, maybe the browser profiles, or session tokens for common shopping websites? My personal desktop has all my cards saved on Amazon. My work laptop, depending on the period of my life, could have given you access to stuff you wouldn't believe either.
You don't even need to do anything with those; there are forums to sell that stuff.
Surely there's an explanation, or is it that all the good cybercriminals have stable high paying jobs in tech, and this is what's left for us?
> You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit?
Because the way this was pulled off, it was going to be found out right away. It wasn't a subtle insertion, it was a complete account takeover. The attacker had only hours before discovery - so the logical thing to do is a hit and run. The question was what the most money is that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time), and crypto is the obvious answer.
Unless the back doors were so good they weren't going to be discovered even though half the world would be dissecting the attack code, there was no point in even trying.
"found out right away"... by people with time to review security bulletins. There's loads of places I could see this slipping through the cracks for months.
I'm assuming they meant the account takeover was likely to be found out right away. You change your password on a major site like that and you're going to get an email about it. Login from a new location also triggers these emails, though I admit I haven't logged onto NPM in quite a long time so I don't know that they do this.
It might get missed, but I sure notice any time account emails come through even if it's not saying "your password was reset."
Yes, but this is an ecosystem large enough to include people who have that time (and inclination and ability); and once they have reported a problem, everyone is on high alert.
If you steal the cookies from dev machines, or steal SSH keys along with a list of recent SSH connections, or do any other credential theft, there are going to be lots of people left impacted. Yes, lots of people reading tech news or security bulletins are going to check if they were compromised and preemptively revoke those credentials. But that's work, meaning even among the informed there will be many who just assume they weren't impacted. Lots of people/organisations are going to be complacent and leave you with valid credentials.
If a dev doesn't happen to run npm install during the period between when the compromised package gets published and when npm yanks it (which for something this high-profile is generally measured in hours, not days), then they aren't going to be impacted. So an attacker's patience won't be rewarded with many valid credentials.
But that is dumb luck. Release an exploit, hope you can then gain further entry into a system at a company that is both high value and doesn't have any basic security practices in place.
That could have netted the attacker something much more valuable, but it is pure hit or miss and it requires more skill and patience for a payoff.
Versus: blast out some crypto-stealing code and grab as many funds as possible before being found out.
> Lots of people/organisations are going to be complacent and leave you with valid credentials
You'd get non-root credentials on lots of dev machines, and likely some non-root credentials on prod machines, and possibly root access to some poorly configured machines.
Two-factor is still in place; you only have whatever creds that npm install was run with. Plenty of the really high-value prod targets may very well be on machines that don't even have publicly routable IPs.
With a large enough blast radius, this may have worked, but it wouldn't be guaranteed.
The window of installation time would be pretty minimal, and the operating window would only be as long as those who deployed while the malicious package was up waited to do another deploy.
Stolen cryptocurrency is a sure thing because fraudulent transactions can't be halted, reversed, or otherwise recovered. Things like a random dev's API and SSH keys are close to worthless unless you get extremely lucky, and even then you have to find some way to sell or otherwise make money from those credentials, the proceeds of which will certainly be denominated in cryptocurrency anyway.
Agreed. I think we're all relieved at the harm that wasn't caused by this, but the attacker was almost certainly more motivated by profit than harm. Having a bunch of credentials stolen en masse would be a pain in the butt for the rest of us, but from the attacker's perspective your SSH key is just more work and opsec risk compared to a clean crypto theft.
Putting it another way: if I'm a random small-time burglar who happens to find himself in Walter White's vault, I'm stuffing as much cash as I can fit into my bag and ignoring the barrel of methylamine.
Ultimately, stolen cryptocurrency doesn't cause real world damage for real people, it just causes a bad day for people who gamble on questionable speculative investments.
The damage from this hack could have been far worse if it was stealing real money people rely on to feed their kids.
Get in, steal a couple hundred grand, get out, do the exact same thing a few months later. Repeat a few times and you can live worry-free until retirement if you know how to evade the cops.
Even if you steal other stuff, you're going to need to turn it all into cryptocurrency anyway, and how much is an AWS key really going to bring in?
There are criminals that focus on extracting passwords and password manager databases as well, though they often also end up going after cryptocurrency websites.
There are probably criminals out there biding their time, waiting for the perfect moment to strike, silently infiltrating companies through carefully picked dependencies, but those don't get caught as easily as the ones draining cryptocurrency wallets.
It is not a one-in-a-million opportunity though. I hate to take this to the next level, but as criminal elements wake up to the fact that a few "geeks" can possibly get them access to millions of dollars, expect much worse to come. As a maintainer of any code that could give bad guys access, I would be seriously considering how well my physical identity is hidden online.
You give an example of an incredibly targeted attack: snooping around manually on someone's machine to exfiltrate yet more sensitive information, like credit card numbers (how, and then what?).
But (1) how do you do that with hundreds or thousands of SSH/API keys and (2) how do you actually make money from it?
So you get a list of SSH or specific API keys and then write a crawler that can hopefully gather more secrets from them, like credit card details (how would that work btw?) and then what, you google "how to sell credentials" and register on some forum to broker a deal like they do in movies?
Sure sounds a hell of a lot more complicated and precarious than swapping out crypto addresses in flight.
The pushed payload didn't generate any new traffic. It merely replaced the recipient of a crypto transaction with a different account. It would have been really hard to detect. Exfiltrating API keys would have been picked up a lot faster.
OTOH, this modus operandi is completely inconsistent with the way they published the injected code: by taking over a developer's account. This was going to be noticed quickly.
If the payload had been injected in a more subtle way, it might have taken a long time to figure out. Especially with all the Levenshtein logic that might convince a victim they'd somehow screwed up.
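For anyone who missed that detail of the write-ups: the payload reportedly picked, from a list of attacker wallets, the one closest in edit distance to the victim's intended address, so a casual glance at the first and last characters still looks right. A rough reconstruction of just that matching step (illustrative only, not the actual payload):

    // classic dynamic-programming edit distance
    function levenshtein(a, b) {
      const d = Array.from({ length: a.length + 1 }, (_, i) => {
        const row = new Array(b.length + 1).fill(0);
        row[0] = i;
        return row;
      });
      for (let j = 0; j <= b.length; j++) d[0][j] = j;
      for (let i = 1; i <= a.length; i++)
        for (let j = 1; j <= b.length; j++)
          d[i][j] = Math.min(
            d[i - 1][j] + 1,                                   // deletion
            d[i][j - 1] + 1,                                   // insertion
            d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
          );
      return d[a.length][b.length];
    }

    // swap in the attacker address that looks most like the victim's
    function pickLookalike(victimAddress, attackerAddresses) {
      return attackerAddresses.reduce((best, addr) =>
        levenshtein(addr, victimAddress) < levenshtein(best, victimAddress) ? addr : best);
    }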
What makes you so sure that the exploit is over? Maybe they wanted their secondary exploit to get caught to give everyone a sense of security? Their primary exploit might still be lurking somewhere in the code?
API/SSH keys can easily be rotated; they're more hassle than they're worth. Be glad they didn't choose to spread the payload of one of the 100 ransomware groups with affiliate programs.
> My work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either.
What gets me is everyone acknowledges this, yet HN is full of comments ripping on IT teams for the restrictions & EDR put in place on dev laptops.
We on the ops side have known these risks for years, and that knowledge is what drives organizational security policies and endpoint configuration.
Security is hard, and it is very inconvenient, but it's increasingly necessary.
I think people rip on EDR and security when 1. they haven't had it explained why it does what it does, or 2. it is process for process's sake.
To wit: I have an open ticket right now from an automated code review tool that flagged a potential vulnerability. I and two other seniors have confirmed that it is a false alarm so I asked for permission to ignore it by clicking the ignore button in a separate security ticket. They asked for more details to be added to the ticket, except I don’t have permissions to view the ticket. I need to submit another ticket to get permission to view the original ticket to confirm that no less than three senior developers have validated this as a false alarm, which is information that is already on another ticket. This non-issue has been going on for months at this point. The ops person who has asked me to provide more info won’t accept a written explanation via Teams, it has to be added to the ticket.
Stakeholders will quickly treat your entire security system like a waste of time and resources when they can plainly see that many parts of it are a waste of time and resources.
The objection isn’t against security. It is against security theater.
It might not be sensible for the organization as a whole, but there’s no way to determine that conclusively, without going over thousands of different possibilities, edge cases, etc.
I have already documented, in writing, in multiple places, that the automated software has raised a false alarm, as well as providing a piece of code demonstrating that the alert was wrong. They are asking me to document it in an additional place that I don't have access to, presumably for perceived security reasons? We already accept that my reasoning around the false alarm is valid; they've just buried a simple resolution beneath completely stupid process. You are going to get false alarms, and if it takes months to deal with a single one, the alarm system is going to get ignored, or bypassed. I have a variety of conflicting demands on my attention.
At the same time, when we came under a coordinated DDoS attack from what was likely a political actor, security didn't notice the millions of requests coming from a country we have never had a single customer in. Our dev team brought it to their attention, where they, again, slowed everything down by insisting on taking part in the mitigation, even though they couldn't figure out how to give themselves permission to access basic things like our logging system. We had to devote one of our on-calls to walking them through submitting access tickets, a process presumably put in place by a security team.
I know what good security looks like, and I respect it. Many people have to deal with bad security on a regular basis, and they should not be shamed for correctly pointing out that it is terrible.
If you're sufficiently confident there can be no negative consequences whatsoever… then just email that person's superiors and cc your superiors to guarantee in writing that you'll take responsibility?
The ops person obviously can’t do that on your behalf, at least not in any kind of organizational setup I’ve heard of.
As the developer in charge of looking at security alerts for this code base, I already am responsible, which is why I submitted the exemption request in the first place. As it is, this alert has been active for months and no one from security has asked about the alert, just my exemption request, so clearly the actual fix (disregarding it or changing the code) is less important than the process and the alert itself.
So the solution to an illogical, kafkaesque security process is to bypass the process entirely via authority?
You are making my argument for me.
This is exactly why people don’t take security processes seriously, and fight efforts to add more security processes.
At least at $employer a good portion of those systems are intended to stop attacks on management and the average office worker. The process is not geared towards securing dev(arbitrary code execution)-ops(infra creds).
They're not even handing out hardware security keys for admin accounts. I use my own, some other devs just use TOTP authenticator apps on their private phones.
All their EDR crud runs on Windows. As a dev I'm allowed to run WSL, but the tools do not reach inside WSL, so if that gets compromised they would be none the wiser.
There is some instrumentation for linux servers and cloud machines, but that too is full of blind spots.
And as a sibling comment says, a lot of the policies are executed without anyone being able to explain their purpose, being able to grant "functionally equivalent security" exceptions or them even making sense in certain contexts.
It feels like dealing with mindless automatons, even though humans are involved. For example, a thing that happened a while ago: we were using scrypt as a KDF, but their scanning flagged it as an unknown password encryption scheme and insisted that we should use SHA2 as a modern, secure hashing function. Weeks of long email threads, escalation, and several managers suggesting "just change it to satisfy them" followed. That's a clear example of mindless rule-following making a system less secure.
Blocking remote desktop forwarding of security keys also is a fun one.
I fell for this malware once. Had the malware on my laptop even with mb in the background. I copy-pasted an address and didn't even check it. My bad indeed. Those guys make a lot of money from these "one shot" moments.
> I find it insane that someone would get access to a package like this, then just push a shitty crypto stealer
Consumer financial fraud is quite big and relatively harmless. Industrial espionage, OTOH, can potentially put you in the crosshairs of powerful and/or rogue elements, and so only the big actors get involved, but in a targeted way, preferring not to leave much if any trace of compromise.
Maybe one in a million is hyperbolic but that’s sorta the game with these attacks isn’t it? Registering thousands upon thousands of domains + tens of thousands of emails until you catch something from the proverbial pond.
Seriously, this is one of my key survival mechanisms. By the time I became system administrator for a small services company, I had learned to let other people beta test things. We ran Microsoft Office 2000 for 12 years, and saved soooo many upgrade headaches. We had a decade without the need to retrain.
That, and like others have said... never clicking links in emails.
This is how I feel about my Honda, and to some extent, Kubernetes. In the former case I kept a 2006 model in good order for so long I skipped at least two (automobile) generations' worth of car-to-phone teething problems, and after years of hearing people complain about their woes I've found the experience of connecting my iPhone to my '23 car pretty hassle-free.
In the latter, I am finally moving a bunch of workloads out of EC2 after years of nudging from my higher-ups and, while it's still far from a simple matter I feel like the managed solutions in EKS and GKE have matured and greatly lessen the pain of migrating to K8S. I can only imagine what I would have gotten bogged down with had I promptly acted on my bosses' suggestion to do this six or seven years ago. (I also feel very lucky that the people I work for let me move on these things in my own due time.)
Sorry, the "npm ecosystem" command has been deprecated. You can instead use npm environment (or npm under-your-keyboard because we helpfully decided it should autocorrect and be an alias)
I wrote the first commit for slice-ansi in 2015 to solve a baby problem for a cli framework I was building, and worked with Qix a little on the chalk org after it. It's wild looking back and seeing how these things creep in influence over time.
I know this isn't really possible for smaller guys, but larger players (like NPM) really should buy up all the TLD versions of "npm" (that is: npm.io, npm.sh, npm.help, etc.). One of the reasons this was so effective is that the attacker managed to snap up "npmjs.help".
Then you have companies like AWS: they were sending invoices from `no-reply-aws@amazon.com`, but last month they changed it to `no-reply@tax-and-invoicing.us-east-1.amazonaws.com`.
That looks like a phishing attempt from someone using a random EC2 instance or something, but apparently it's legit. I think. Even the "heads-up" email they sent beforehand looked like phishing, so I was waiting for the actual invoice to see if they really started using that address, but even now I'm not opening these attached PDFs.
These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.
> These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.
Yep. At every BigCo I've worked at, nearly all of the emails from Corporate have been indistinguishable from phishing. Sometimes, they're actual spam!
Do the executives and directors responsible for sending these messages care? No. They never do, and get super defensive and self-righteous when you show them exactly how their precious emails tick every "This message is phishing!" box in the mandatory annual phishing-detection-and-resistance training.
<Link to document trying its best to look like Google's attachment icon but was actually a hyperlink to a site that asked me to log in with my corporate credentials>
---
So like, obviously this is a stupid phishing email, right? Especially as at this time, I had not used my corporate card.
A few weeks later I got the finance team reaching out threatening to cancel my corporate card because I had charges on it with no corresponding expense report filed.
So on checking the charge history for the corporate card, it was the annual tax payment that all cards are charged in my country every year, which finance should have been well aware of. Of course, the expense system then initially rejected my report because I couldn't provide a receipt, as the card provider automatically deducts this charge with no manual action on the card owner's side...
A few years ago our annual corporate phishing training was initiated by an email sent from a random address asking us to log in with our internal credentials on a random website.
A week later some executive pushing the training emailed the entire company saying that it was unacceptable that nobody from engineering had logged into the training site and spun some story about regulatory requirements. After lots of back and forth they still wouldn't accept that it obviously looked like a phishing email.
Eventually when we actually did the training, it literally told us to check the From address of emails. I sometimes wonder if it was some weird kind of performance art.
“We got pwned but the entire company went through a certified phishing awareness program and we have a DPI firewall. Nothing more we could have done, we’re not liable.”
I agree that larger players especially should be proactive and register their name across similar-sounding TLDs to mitigate such phishing attacks, but the attacks can't be outright prevented this way.
There's like 1,500 TLDs; some of them are restricted or country-code TLDs, but it makes me wonder how much it would actually cost per year to maintain registration of every non-restricted TLD. I'm sure there's some SaaS company that'll do it.
OTOH, doesn't ICANN already sometimes restrict who has access to a given TLD? Would it really be that crazy for them to say "maybe we shouldn't let registrars sell npm.<TLD> regardless of the TLD", and likewise for a couple dozen of the most obvious targets (google., amazon., etc.)? No one needs to pay for these domains if no one is selling them in the first place. I don't love the idea of special treatment for giant companies in terms of domains, but we're already kind of there with the whole process they did when initially allowing companies to compete for exclusive access to TLDs, so we might as well use that process for something actually useful (unlike, say, letting companies apply for exclusive ownership of ".music" and have a whole legal process to determine that maybe that isn't actually beneficial for the internet as a whole: https://en.wikipedia.org/wiki/.music)
This won't work - npm.* npmjs.* npmjs-help.* npm-help.* node.* js.* npmpackage.*. The list is endless.
You can't protect against people clicking links in emails in this way. You might say `npmjs-help.ph` is a phishy domain, but npmjs.help is a phishy domain and people clicked it anyway.
That seems like a bad idea compared to just having a canonical domain - people might become used to seeing "npm.<whatever>" and assuming it is legit. And then all it takes is one new TLD where NPM is a little late registering for someone to do something nefarious with the domain.
Just because you buy them doesn't mean that you have to use them. Squatting on them is no more harmful (except financially) than leaving them available for potentially hostile 3rd parties.
Sure, I guess buying up every npm.* you can find and then having a message "never use this, only use npm.com" could work. I thought OP was saying have every npm.* site be a mirror of the canonical site
Looks like it costs ~$200,000 to get your own TLD. If a bunch of companies started doing the "register every TLD of our brand", I wonder what the breakeven point would be where just registering a TLD is profitable.
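Back-of-envelope, with all numbers approximate: the last ICANN application round charged ~$185k up front plus roughly $25k/year in fixed fees, before you pay a registry back-end to actually operate the TLD. Meanwhile, defensively registering one name across the ~1,200 open gTLDs at an assumed average of $20/year runs about $24k/year:

    ~$185,000 upfront + ~$25,000/yr ICANN fees + back-end operating costs
    vs. ~1,200 TLDs x ~$20/yr = ~$24,000/yr in defensive registrations

On fees alone it basically never breaks even; the value would have to be in the guarantee that no one else can ever register under your TLD.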
> If you were targeted with such a phishing attack, you'd fall for it too and it's a matter of when not if. Anyone who claims they wouldn't is wrong.
I like to think I wouldn't. I don't put credentials into links from emails that I didn't trigger right then (e.g. password reset emails). That's a security skill everyone should be practicing in 2025.
"'such' a phishing attack" makes it sound like a sophisticated, indepth attack, when in reality it's a developer yet again falling for a phishing email that even Sally from finance wouldn't fall for, and although anyone can make mistakes, there is such a thing as negligent, amateur mistakes. It's astonishing to me.
Maven got a lot of things right back in the day. Yes, POM files are in XML and we all know XML sucks, etc., but aside from that, the stodgy focus on robustness and carefully considered change gets more impressive all the time.
Linux distribution packages are also very trust-driven — but you have to earn trust to publish. Then there is a whole system to verify trust. NPM is more like "anything goes".
I created an NPM account today and added a passkey from my laptop, plus a hardware key as secondary. As I have it configured, it asked me for it while publishing my test package.
So the guy either had TOTP or just the pw.
Seems like it should be easy to implement enforcement.
There needs to be a massive push from the larger important packages to eliminate these idiotic transitive dependencies. Core infrastructure shouldn't rely on trivial packages maintained by a single random person from who knows where that can push updates without review. It's absolutely insane.
As the post mentions wallets like MetaMask being the targets, AFAIK MetaMask in particular might be one of the best protected (isolated) applications from this kind of attack due to their use of LavaMoat https://x.com/MetaMask/status/1965147403713196304 -- though I'd love to read a detailed analysis of whether they actually are protected. No affiliation with MetaMask, just curious about effectiveness of seemingly little adopted measures (relative to scariness of attacks).
This is the Way. To minimize attack surface, the senders of authentic messages should straight-up avoid putting links to "do the thing" in the message. Just tell the user to update their credentials via the website.
For most users, that'll just result in them going to Google, searching for the name of your business, and then clicking the first link blindly. At that point you're trusting that there's no malicious actors squatting on your business name's keyword -- and if you're at all an interesting target, there's definitely malvertising targeting you.
The only real solution is to have domain-bound identities like passkeys.
The Microsoft ecosystem certainly makes this challenging. At work, I get links to Sharepoint hosted things with infinitely long hexadecimal addresses. Otherwise finding resources on Sharepoint is impossible.
> I just try to avoid clicking links in emails generally...
I don't just generally try, I _never_ click links in emails from companies, period. It's too dangerous and not actually necessary. If a friend sends me a link, I'll confirm it with them directly before using it.
It seems to me that having an email client that simply disables all the links in the email is probably a good idea. Or maybe, there should be explicit white-listing of domains that are allowed to be hyperlinks.
I've always thought it's insane that anyone on the planet with a connection can drop a clickable link in front of you. Clickable links in email should be considered harmful. Force the user to copy/paste
I'm rather convinced that the next major language-feature wave will be permissions for libraries. It's painfully clear that we're well past the point where it's needed.
I don't think it'll make things perfect, not by a long shot. But it can make the exploits a lot harder to pull off.
Alternatively, I've long been wondering if automatic package management may have been a mistake. Its primary purpose seems to be to enable this kind of proliferation of micro-dependencies by effectively sweeping the management of these sprawling dependency graphs under the carpet. But the upshot of that is, most changes to your dependency graph, and by extension your primary vector for supply chain attacks, becomes something you're no longer really looking at.
Versus, when I've worked at places that eschew automatic dependency management, yes, there is some extra work associated with manually managing them. But it's honestly not that much. And in some ways it becomes a boon for maintainability because it encourages keeping your dependency graph pruned. That, in turn, reduces exposure to third-party software vulnerabilities and toil associated with responding to them.
yea, just look at the state of many C projects. it's rather clearly worse in practice in aggregate.
should it be higher friction than npm? probably yes. a permissions system would inherently add a bit (leftpad includes 27 libraries which require permissions "internet" and "sudo", add? [y/N]) which would help a bit I think.
but I'm personally more optimistic about structured code and review signing, e.g. like cargo-crev: https://web.crev.dev/rust-reviews/ . there could be a market around "X group reviewed it and said it's fine", instead of the absolute chaos we have now outside of conservative linux distro packagers. there's practically no sharing of "lgtm" / "omfg no" knowledge at the moment, everyone has to do it themselves all the time and not miss anything or suffer the pain, and/or hope they can get the package manager hosts' attention fast enough.
C has a lot of characteristics beyond simple lack of a standard automatic package manager that complicate the situation.
The more interesting comparison to me is, for example, my experience on C# projects that do and do not use NuGet. Or even the overall C# ecosystem before and after NuGet got popular. Because then you're getting closer to just comparing life with and without a package manager, without all the extra confounding variables from differing language capabilities, business domains, development cultures, etc.
when I was doing C# pre-nuget we had an utterly absurd amount of libraries that nobody had checked and nobody ever upgraded. so... yeah I think it applies there too, at least from my experience.
I do agree that C is an especially-bad case for additional reasons though, yeah.
Gotcha. When I was, we actively curated our dependencies and maintaining them was a regularly scheduled task that one team member in particular was in charge of making sure got done.
Well, consider that a lot of these functions that were exploited are simple things. We use a library to spare ourselves the drudgery of rewriting them, but now that we have AI, what's it to me if I end up with my own string-colouring functions for output in some file under my own control, vs. bringing in an external dependency that puts me on a permanent upgrade treadmill and opens the risk of supply chain attacks?
Leftpad as a library? Let it all burn down; but then, it's Javascript, it's always been on fire.
Unpopular opinion these days, but: It should be painful to pull in a dependency. It should require work. It should require scrutiny, and deep understanding of the code you're pulling in. Adding a dependency is such an important decision that can have far reaching effects over your code: performance, security, privacy, quality/defects. You shouldn't be able to casually do it with a single command line.
I wouldn't go quite as far as "painful". The main issue is transitive dependencies. The tree can be several layers deep.
In the C world, anything that is not direct is often a very stable library and can be brought in as a peer dep. Breaking changes happen less and you can resolve the tree manually.
In NPM, there are so many little packages that even renowned packages choose to rely on one for no obvious reason. It's a severe lack of discipline.
For better or worse it is often less work to create a dependency than to maintain it over its lifetime. Improvements in maintenance also ease creation of new dependencies.
Java went down that road with the applet sandboxing. They thought that this would go well because the JVM can be a perfect gatekeeper on the code that gets to run and can see and stop all calls to forbidden methods.
It didn't go well. The JVM did its part well, but they couldn't harden the library APIs. They ended up playing whack-a-mole with a steady stream of library bugs in privileged parts of the system libraries that allowed for sandbox escapes.
Yes, but that was with a very ambitious sandbox that included full GUI access. Sandboxing a pure data transformation utility like something that strips ANSI escape codes would have been much easier for it.
This was one of Doug Crockford's big bugaboos since The Good Parts and JSLint and Yahoo days—the idea that lexical scope aka closures give you an unprecedented ability to actually control I/O because you can say
    function main(io) {
      const result = somethingThatRequiresHttp(io.fetch);
      // ...
    }
and as long as you don't put I/O in global scope (i.e. window.fetch) but do an injection into the main entrypoint, that entrypoint gets to control what everyone else can do. I could for example do
    function main(io) {
      const result = something(readonlyFetch(onlyOurAPI(io.fetch)));
    }

    function onlyOurAPI(fetch) {
      return (...args) => {
        // allow only requests aimed at our own API origin
        const test = /^https:\/\/api\.mydomain\.example\//.exec(args[0]);
        if (test == null) {
          throw new TypeError("must only communicate with our API");
        }
        return fetch(...args);
      };
    }

    function readonlyFetch(fetch) { /* similar, but allowlist only GET/HEAD methods */ }
I vaguely remember him being really passionate about "JavaScript lets you do this, we should all program in JavaScript" at the time... these days he's much more likely to say "JavaScript doesn't have any way to force you to do this and close off all the exploits from the now-leaked global scope, we should never program in JavaScript."
Shoutout to Ryan Dahl and Deno, where you write `#!/usr/bin/env -S deno run --allow-net=api.mydomain.example` at the start of your script to accomplish something similar.
In my amateur programming-conlang hobby that will probably never produce anything joyful to anyone other than me, one of those programming languages has a notion of sending messages to "message-spaces" and I shamelessly steal Doug's idea -- message-spaces have handles that you can use to communicate with them, your I/O is a message sent to your main m-space containing a bunch of handles, you can then pattern-match on that message and make a new handle for a new m-space, provisioned with a pattern-matcher that only listens for, say, HTTP GET/HEAD events directed at the API, and forwards only those to the I/O handle. So then when I give this new handle to someone, they have no way of knowing that it's not fully I/O capable, requests they make to the not-API just sit there blackholed until you get an alert "there are too many unread messages in this m-space" and peek in to see why.
Thanks, it's great to see all the issues you raise.
On the other hand, it seems about as hard as I was imagining. I take for granted that it has to be a new language -- you obviously can't add it on top of Python, for example. And obviously it isn't compatible with things like global monkeypatching.
But if a language's built-in functions are built around the idea from the ground up, it seems entirely feasible. Particularly if you make the limits entirely around permissions around data communication -- with disk, sockets, APIs, hardware like webcams and microphones, and "god" permissions like shell or exec commands -- and not about trying to merely constrain resource usage around things like CPU, memory, etc.
If a package is blowing up your memory or CPU, you'll catch it quickly and usually the worst it can do is make your service unavailable. The risk to focus on should be exclusively data access+exfiltration and external data modification, as far as I can tell. A package shouldn't be able to wipe your user folder or post program data to a URL at all unless you give it permission. Which means no filesystem or network calls, no shell access, no linked programs in other languages, etc.
tbh none of that sounds particularly bad, nor do I think capabilities are necessary (but obviously useful).
we could literally just take Go and categorize on "imports risky package" and we'd have a better situation than we have now, and it would encourage library design that isolates those risky accesses so people don't worry about them being used. even that much should have been table stakes over a decade ago.
and like:
>No language has such an object or such interfaces in its standard library, and in fact “god objects” are viewed as violating good object oriented design.
sure they do. that's dependency injection, and you'd probably delegate it to a dependency injector (your god object) that resolves permissions. plus go already has an object for it that's passed almost everywhere: context.
perfect isn't necessary. what we have now very nearly everywhere is the most extreme example of "yolo", almost anything would be an improvement.
Yes, dependency injection can help although injectors don't have any understanding of whether an object really needs a dependency. But that's not a god object in the sense it's normally meant. For one, it's injecting different objects :)
Thanks, this was a good overview of some of the challenges involved with designing a capability language.
I think I need to read up more on how to deal with (avoiding) changes to your public APIs when doing dependency injection, because that seems like basically what you're doing in a capability-based module system. I feel like there has to be some way to make such a system more ergonomic and make the common case of e.g. "I just want to give this thing the ability to make any HTTP request" easy, while still allowing for flexibility if you want to lock that down more.
Yes. It is a bit painful that this is not obvious by now. But I do, every code review, whine about people who just include trivial, outdated, one-function npm packages :(
It wouldn't be a problem if there wasn't a culture of "just upgrade everything all the time" in the javascript ecosystem. We generally don't have this problem with Java libraries, because people pick versions and don't upgrade unless there's good reason.
Working for a bank did make me think much more about all the vulnerabilities that can go into certain tools. The company has a lot of bureaucracy to prevent installing anything or adding external dependencies.
Working for a fintech and being responsible for the software made me very wary of dependencies; I ended up weeding out the deprecated and EOL'd stuff that had somehow already found its way into what was a young project when I joined. Left unrestrained, developers will add anything that resolves their immediate needs. You could probably spread malware very well just by writing a fake blog advocating a malicious module for certain scenarios.
I've nixed javascript in the backend in several places, partly because of the weird culture around dependencies. Having to audit that for compliance, or keeping it actually secure, is a nightmare.
Nixing javascript in the frontend is a harder sell, sadly
What did you switch to instead? I used to be a C# dev, and have done my fair share of Go. Both of those have decent enough standard libraries that I never found myself with a large 3rd party dependency tree.
Ruby, Python, and Clojure, though? They weren’t any better than my npm projects, being roughly the same order of magnitude. Same seems to be true for Rust.
You can get pretty far in python without a lot of dependencies, and the dependencies you do need tend to be more substantial blocks of functionality. Much easier to keep the tree small than npm.
Same with Java, if you avoid springboot and similar everything frameworks, which admittedly is a bit of an uphill battle given the state of java developers.
You can of course also keep dependencies small in JavaScript, but it's a very uphill fight where you'll have just a few options, and most people you hire are used to including a library (that includes 10 libraries) to avoid writing something like `if (x % 2 == 1)`.
Just started with golang... the language is a bit annoying but the dependency culture seems OK
What I'd like to know is why anyone thinks it's a good idea to have this level of granularity in libraries? Seriously? A library that only contains "a utility function that determines if its argument can be used like an array"? That's a lot of overhead in dependency management, which translates into a lot of cognitive load. Sooner or later, something's going to snap...and something did, here.
We need a permission system for packages just like with Android apps. The text coloring package suddenly needs a file access permission for the new version? Seems strange.
I had a minor scare some time ago with npm. Can't remember the exact details; something like, I had a broken symlink in my homedir and nodemon printed an error about the symlink! My first thought was that it was a supply chain attack looking for credentials!
Since then I've done all my dev in an isolated environment like a docker container. I know it's possible to escape the container, but at least that raises the bar to a level I'm comfortable with.
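A minimal sketch of that kind of setup (the image and paths are just an example): give the container only the project directory, so ~/.ssh, ~/.npmrc, browser profiles, and the rest of the home directory simply aren't there to steal:

    # throwaway container for npm; only the project dir is mounted
    docker run --rm -it \
      --user "$(id -u):$(id -g)" \
      -v "$PWD":/app -w /app \
      node:22 npm install

Network access is still open (npm needs it), so anything inside the project can still be exfiltrated; the win is limiting what's reachable in the first place.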
Do you remember a few years ago that browsers used to put a lock icon for all HTTPS connections? That lock icon signified that the connection is encrypted alright. To a tech geek that's a valid use of a lock icon. But browsers still removed it because it's a massive UX fail. You have to consider what the lock icon means to people who are minimally tech literate. I understand and have set up DKIM and SPF, but you cannot condense the intended security feature of DKIM/SPF/DMARC into a single icon and expect that to be good UX.
We are talking about a UX failure regarding what a lock icon or a checkmark icon represents. Popularity is irrelevant. It's entirely about the disconnect between what tech geeks think a lock/checkmark icon represents and normal users think it represents.
Instead of ranting, can you say something constructive?
I can think of 3 paths to improve the situation (assuming that "everyone deploys cryptographic email infrastructure instantly" is not gonna happen).
1. The email client doesn't indicate DKIM at all. This is strictly worse than today, because then the attack could have claimed to be from npmjs.com.
2. You only get a checkmark if you have DKIM et al plus you're a "verified domain". This means only big corporations get the checkmark -- I hate this option. It's EV SSL but even worse. And again, unless npmjs.com was a "big corporation" the attacker could have just faked the sender and the user would not notice anything different, since in that world the authentic npmjs.com emails wouldn't have a checkmark either.
3. The checkmark icon is changed into something else, nothing else happens. But what? "DKIM" isn't the full picture (and would be horribly confusing too). Putting a sunflower there seems a little weird. Do you really apply this much significance to the specific icon?
The path that HTTPS took just hasn't been repeatable in the email space; the upgrade cycles are much slower, the basic architecture is client->server->server not client->server, and so on.
"Batteries included" ecosystems are the ultimate defense against the dark arts. Your F100 first party vendor might get it wrong every now and then, but they have so much more to lose than a random 3rd party asshole who decides to deploy malicious packages.
The worst thing I can recall from the enterprisey ecosystems is the log4j exploit, which was easily one of the most attended to security problems I am aware of. Every single beacon was lit for that one. It seems like when an NPM package goes bad, it can take a really long time before someone starts to smell it.
Log4Shell didn't light up all the beacons because Java is "enterprisey", it was because it was probably the worst security vulnerability in history; not only was the package extremely widely used, the vulnerability existed for nearly a decade and was straightforwardly wormable, so basically everybody running Java code anywhere had to make sure to update and check that they hadn't been compromised. Which is just a big project requiring an all-out response, since it's hard to know where you might have something running. By contrast, this set of backdoors only existed for a few hours, and the scope of the vulnerability is well-understood, so most developers can be pretty sure they weren't impacted and will have quite reasonably forgotten about it by next week. It's getting attention because it's a cautionary tale, not because it's causing a substantial amount of real damage.
I do think it's worth reducing the number of points of failure in an ecosystem, but relying entirely on a single library that's at risk of stagnating due to eternal backcompat obligations is not the way; see the standard complaints about Python's "dead batteries". The Debian or Stackage model seems like it could be a good one to follow, assuming the existence of funding to do it.
Daily reminder that no one can easily impersonate you if you sign your commits and make it easy to discover and verify your authentic key with keyoxide or similar.
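The one-time git side of that is small (the key ID below is a placeholder; modern git can also sign with SSH keys instead of GPG):

    # sign every commit with your published key
    git config --global user.signingkey ABCD1234EF567890
    git config --global commit.gpgsign true

The harder part is making the key discoverable and verifiable by others, which is where Keyoxide or similar comes in.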
An authentication environment which has gotten so complex we expect to be harassed by messages saying "your Plex password might be compromised", "your 2FA is all fucked up", etc.
And the crypto thing. Xe's sanguine about the impact; I mean, it's just the web3 degens [1] that are victimized, good innocent decent people like us aren't hurt. From the viewpoint of the attacker it is all about the Benjamins, and the question is: "does an attack like this make enough money to justify the effort?" If the answer is yes, then we'll see more attacks like this.
There are just all of these things that contribute to the bad environment: the urgent emails from services you barely use, the web3 degens, etc.
> "Warning! This is the first time you have received a message from sender support@npmjs.help. Please be careful with links and attachments, and verify the sender's identity before taking any action."
Is this not a good use case for AI in your email client (local-only to avoid more opportunities for data to leak)?
Have the client-embedded AI view the email to determine if it contains a link to a purported service. Remotely verify if the service URL domain is valid, by comparing to the domains known for that service
If unknown, show the user a suspected phishing message.
This will occasionally give a false positive when a service changes their sending domain, but the remote domain<->service database can then be updated via an API call as a new `(domain, service)` pair for investigation and possible inclusion.
I feel like this would mitigate much of the risk of phishing emails slipping past defenses, and it mainly just needs 2 or 3 API calls to the service once the LLM has extracted the service name from the email.
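The deterministic half of that doesn't even need the LLM; something like this sketch (the domain table and function names are stand-ins, and a real client would need a vetted, updatable database):

    // hypothetical allowlist: service name -> domains known to send its mail
    const knownDomains = { npm: ["npmjs.com", "github.com"] };

    // flag links whose host isn't on the claimed service's allowlist
    function suspiciousLinks(claimedService, linkUrls) {
      const allowed = knownDomains[claimedService] ?? [];
      return linkUrls.filter((url) => {
        const host = new URL(url).hostname;
        return !allowed.some((d) => host === d || host.endsWith("." + d));
      });
    }

    // suspiciousLinks("npm", ["https://npmjs.help/reset"]) flags the npmjs.help link

The LLM's only job is the fuzzy part: deciding which service the email claims to be from.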
`Symbol` wasn't supported when I wrote `is-arrayish`. Neither were spreads. It was meant to be used with DOM lists or the magical `arguments` variable.
Does the Go ecosystem have a similar security screening process as NPM? This was caught because a company was monitoring a centralized packaging distribution platform, but I worry about all those golang modules spread across GitHub without oversight..
> Even then, that wouldn't really stand out to me because I've seen companies use new generic top level domains to separate out things like the blog at .blog or the docs at .guide, not to mention the .new stack.
This is very much a 'can we please not' situation, isn't it? (Obviously it's not something that the email recipients can (usually) control, so it's not a criticism of them.) It also has to meaningfully increase the chance that someone will eventually forget to renew a domain, too.
There's only one thing that would throw me off this email and that is DMARC. But I didn't get the email, so who is to say if I actually would have been caught.
This was a domain "legitimately" owned by the adversary. They controlled that DNS. They could set any SPF or DKIM records they wanted. This email probably passed all DMARC checks. From some screenshots, the email client even has a green check probably because it did pass DMARC.
Sometimes I think I'm a stubborn old curmudgeon for staunchly refusing to use node, npm, and the surrounding ecosystem. Pick and choose specific packages if I really have to.
Yeah, stop those cute domain names. I never got the memo on youtu.be; I just had to "learn" it was okay. Of course people started to let their guard down, because dumbasses started to get cute.
We all did dodge a bullet, because we've been installing stuff from NPM with reckless abandon for a while.
Can anyone give me a reason why this wouldn’t happen in other ecosystems like Python, because I really don’t feel comfortable if I’m scared to download the most basic of packages. Everything is trust.
Of all people, my mortgage servicer is the worst about this. Your login is valid on like 3 different top-level domains and you get bounced between them when you sign in, eventually going from servicer.com to myservicer.com to servicer.otherthing.com! It's as though they were training you not to care about domain names.
Paying US taxes online is just as bad. The official way to pay tax balances with a debit card online is to use officialpayments[.]com. This is what the IRS advises you to use. Our industry is a clown factory.
What about aka.ms, which is a valid domain for Microsoft? Why didn't they use microsoft.com, or windows.com?
I always wonder if this aka is short for 'also known as'.
Wow! This site uses Anubis with the meta-refresh-based challenge that doesn't require JavaScript. So I can actually read the article in my old browser. It's so rare for Anubis deployments to be set up with any configuration beyond the defaults. What a delight.
This phishing email is full of red flags. Here are example red flags from that email:
- Update your 2FA credentials
What does that even mean? That's not something that can be updated - that's kind of the point of 2FA.
- It's been over 12 months since your last 2FA update
Again - meaningless nonsense. There's no such thing as a 2FA update. Maybe the recipient was thinking "password update" - but updating passwords regularly is also bad practice.
- "Kindly ask ..."
It would be very unusual to write like that in a formal security notification.
- "your credentials will be temporarily locked ..."
What does "temporarily locked" mean? That's not a thing. Also creating a sense of urgency is a classic phishing technique and a red flag.
- A link to change your credentials
A legit security email should never contain a link to change your credentials.
- It comes from a weird domain - .help
Any nonstandard domain is a red flag.
I don't use NPM, and if this actually looks like an email NPM would send, NPM has serious problems. However security ignorant companies do send emails like this. That's why the second layer of defense if you receive an email like this and think it might be real is to just log directly into (in this case) NPM and update your account settings without clicking links in the email.
NEVER EVER EVER click links in any kind of security alert email.
I don't blame the people who fell for this, but it is also concerning that there's such limited security awareness/training among people with publish access to such widely used packages.
Hi, said person who clicked on the link here. I've been wanting to post something akin to this and was going to save it for the post-mortem, but I wanted to address the increase in these sorts of very shout-ey comments directed toward me.
> What does that even mean? That's not something that can be updated - that's kind of the point of 2FA.
I didn't sit and read and parse the whole thing. That was mistake one. I have stated elsewhere, I was stressed and in a rush, and was trying to knock things off my list.
Also, 2FA can of course be updated. npm has had some shifts in how it approaches security over the years, and having worked within that ecosystem for the better part of 10-15 years, this didn't strike me as particularly unheard of on their part. This, especially after the various acquisitions they've had.
It's no excuse, just a contributing factor.
> It would be very unusual to write like that in a formal security notification.
On the contrary, I'd say this is pretty par for the course in corpo-speak. When "kindly" is used incorrectly, that's when it's a red flag for me.
> What does "temporarily locked" mean? That's not a thing. Also creating a sense of urgency is a classic phishing technique and a red flag.
Yes, of course it is. I'm well aware of that. Again, this email reached me at the absolute worst time it could have and I made a very human error.
"Temporarily locked" surprises me that it surprises you. My account was, in fact, temporarily locked while I was trying to regain access to it. Even npm had to manually force a password reset from their end.
> Any nonstandard domain is a red flag.
When I contacted npm, support responded from githubsupport.com. When I pay my TV tax here in Germany (a governmental thing), it goes to a completely bizarre, random third party site that took me ages to vet.
There's no such thing as a "standard" domain anymore with gTLDs, and while I should have vetted this particular one, it didn't stand out as something impossible. In my head, it was their new help support site - just like github.community exists.
Again - and I guess I have to repeat this until I'm blue in the face - this is not an excuse. Just reasons that contributed to my mistake.
> NEVER EVER EVER click links in any kind of security alert email.
I'm aware. I've taught this as the typical security person at my respective companies. I've embodied it, followed it closely for years, etc. I slipped up, and I think I've been more than transparent about that fact.
I didn't ask for my packages to be downloaded 2.6 billion times per week when I wrote most of these 10 years ago, or inherited them more than five years ago. You can argue - rightfully - about my technical failure here of using an outdated form of 2FA. That's on me, and it would have protected against this, but to say this doesn't happen to security-savvy individuals is the wrong message here (see: Troy Hunt getting phished).
Shit happens. It just happened to happen to me, and I happen to have undue control over some stuff that's found its way into most of the javascript world.
The security lessons and advice are all very sound - I'm glad people are talking about them - but the point I'm trying to make is, that I am a security aware/trained person, I am hyper-vigilant, and I am still a human that made a series of small or lazy mistakes that turned into one huge mistake.
Thank you for your input, however. I do appreciate that people continue to talk about the security of it all.
Always use a password manager to automatically fill in your credentials. If the password manager doesn't find your credentials, check the domain. On top of that, you can always go directly to the website to make any needed changes there, without following the link.
Password managers are still too unreliable to auto-fill everywhere all the time, and manually having to copy paste something from the password manager happens regularly so it's not something that feels unusual if it doesn't auto-fill it for some reason.
I put the fault on companies for making their login processes so convoluted. If you take the time to do it, you can usually configure the password manager to work (we shouldn’t have to make the effort). But even if you do, then the company will at some point change something about their login processes and break it.
I don't think this really helps. I use Bitwarden and it constantly fails to autofill legitimate websites and makes me go to the app to copy-paste, because companies do all kinds of crap with subdomains, marketing domains, etc. Any safeguard relying on human attention is ultimately susceptible to this; the only true solutions are things like passkeys where human fuckups are impossible by design and they can't give credentials to the wrong place even if they want to.
Passkeys are disruptive enough that I don't think they need to be mandated for everyone just yet, but I think it might be time for that for people who own critical dependencies.
It's a PITA, but BitWarden has quite a lot of flexibility in filtering what gets autofilled where. I agree the defaults are pretty shit and indeed lead to constant copy-pasting. On the other hand, it will offer all my passwords all the time for all my selfhosted stuff on my one server.
> Formatting text with colors for use in the terminal
...
> These kinds of dependencies are everywhere and nobody would even think that they could be harmful.
The first article I ever read discussing the possibility of npm supply chain attacks actually used colouring text in the terminal as the example package to poison. Ever since then I have associated coloured terminal text with supply chain attacks.
Is there a tool that you can put between your npm client and the npm web servers that serves only package versions that are at least a month old, and possibly also tracks discovered malware and never serves infected versions?
I'm looking at Verdaccio currently, since Artifactory is expensive and I think the CE version still only supports C/C++. Does anyone have any experience with Verdaccio?
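For a sense of how little such a tool needs to do: the registry's public packument for every package includes a `time` field mapping each version to its publish timestamp, so "only serve versions at least a month old" is a small filter over that. A minimal sketch (assuming Node 18+ for fetch and the `semver` package; the 30-day cutoff is arbitrary):

```typescript
// Resolve the newest version of a package that is at least `minAgeDays` old,
// using the npm registry's packument `time` field.
import * as semver from "semver";

async function resolveDelayed(pkg: string, minAgeDays = 30): Promise<string | null> {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(pkg)}`);
  if (!res.ok) throw new Error(`registry returned ${res.status} for ${pkg}`);
  const packument = (await res.json()) as { time: Record<string, string> };
  const cutoff = Date.now() - minAgeDays * 24 * 60 * 60 * 1000;

  const oldEnough = Object.entries(packument.time)
    .filter(([version]) => semver.valid(version)) // skip "created"/"modified" bookkeeping keys
    .filter(([, published]) => Date.parse(published) <= cutoff)
    .map(([version]) => version)
    .sort(semver.rcompare);

  return oldEnough[0] ?? null; // newest version that has had time to be vetted
}

// resolveDelayed("chalk").then(console.log);
```

Verdaccio does support plugins, so a filter along these lines could in principle sit between your client and the real registry, though I haven't verified how its plugin API exposes version filtering.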
This reads like a joke that's missing the punchline.
The post's author's resume section reinforces this feeling:
I am a skilled force multiplier, acclaimed speaker, artist, and prolific blogger. My writing is widely viewed across 15 time zones and is one of the most viewed software blogs in the world.
I specialize in helping people realize their latent abilities and help to unblock them when they get stuck. This creates unique value streams and lets me bring others up to my level to help create more senior engineers. I am looking for roles that allow me to build upon existing company cultures and transmute them into new and innovative ways of talking about a product I believe in. I am prioritizing remote work at companies that align with my values of transparency, honesty, equity, and equality.
If you want someone that is dedicated to their craft, a fearless innovator and a genuine force multiplier, please look no further. I'm more than willing to hear you out.
Most phishing emails are so bad, it’s quite terrifying when you see a convincing one like this.
Email is such an utter shitfest. Even tech-savvy people fall for phishing emails; what hope do normal people have?
I recommend people save URLs in their password managers, and get in the habit of auto-filling. That way, you’ll at least notice if you’re trying to log into a malicious site. Unfortunately, it’s not foolproof, because plenty of sites ask you to randomly sign into different URLs. Sigh…
Isn't it a bit crazy that phishing e-mails still exist? Like, couldn't this be solved by encrypting something in a header and using a public key in the DNS to decrypt it?
I'm not a top-level expert in cybersecurity nor email infra... but the little that I know has taught me that I merely have to create a similar-looking domain name...
Let's say there's a company named Awesome... and I register the domain name AwesomeSupport.com. I could be a total black hat/evil hacker/ne'er-do-well... and this domain may not be infringing on any trademark, etc. And then I can start using all the encryption you noted... which merely means that *my domain name* (the bad one) is "technically sound"... but of course, all that use of encryption fails to convey that I am not the legitimate Awesome company. So, how is the victim supposed to know which of the domains is legit or not? Especially considering that some departments of the real, legit Awesome company might register their own domain name to use for actual, real reasons - like the marketing department might register MyAwesome.com... for managing customer accounts, etc.
Is encryption necessary in digital life? Hellz yeah! Does it solve *all issues*? Hellz no! :-)
True! But the possibility exists that a large enough % of victims do not actually check the OV cert. Also, are we 100% sure that every single legit company that you and I do business with has an OV cert for their websites?
This honestly doesn't feel like it should be the case.
There aren't that many websites. The e-mail provider could have a list of "popular" domains, and the user could have their own list of trusted domains.
There are all sorts of ways to warn the user about it, e.g. "you have never interacted with this domain before." Even simply showing other e-mails from the same domain would be enough to prevent phishing in some cases.
There are practical ways to solve this problem. They aren't perfect but they are very feasible.
My previous comments were merely in response to your original comments...so really only to point out that bare use of encryption by itself is not sufficient protection - that's all.
To your more recent points, I agree that there are several other protections in place... and depending on a number of factors, some folks have more at their disposal and others might have less... but still, there are mechanisms in place to help - without a doubt. Yet with all these mechanisms in place, people still fall prey to phishing attacks... and sometimes those victims are not lay people, but actual technologists. So, I think the solution(s) here are not so simple, and likely are not only tech-based. ;-)
I might be missing the joke, but there are several layers like SPF and DMARC available to only allow your whitelisted servers to send email on the behalf of your domain.
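Both of those are published as plain DNS TXT records, so it's easy to look at what any domain declares. A quick Node sketch (the domain is just an example):

```typescript
// Look up a domain's SPF and DMARC policies from DNS TXT records.
import { resolveTxt } from "node:dns/promises";

async function mailPolicies(domain: string) {
  const flatten = (records: string[][]) => records.map((parts) => parts.join(""));

  const spf = flatten(await resolveTxt(domain).catch(() => []))
    .filter((r) => r.startsWith("v=spf1"));
  const dmarc = flatten(await resolveTxt(`_dmarc.${domain}`).catch(() => []))
    .filter((r) => r.startsWith("v=DMARC1"));

  return { spf, dmarc };
}

// Note: a lookalike domain can publish perfectly valid records of its own.
// mailPolicies("npmjs.com").then(console.log);
```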
Wouldn't help in this case where someone bought a domain that looked a tiny bit like the authentic one for a very casual observer.
100% solved and has been for a very long time. The PGP/GPG trust chain goes CLUNK CLUNK CLUNK. Everyone shuts it off after a week or so of experimentation.
I think it's quite good, there's a sense of urgency, but it's also not "immediately change it!"
They gave more than a day, and stated that it would be a temporary lock. Feels like this one really hit the spot on that aspect.
You should still never click a link in an email like this, but the urgency factor is well done here
the link in the email went to an obviously invalid domain, hovering the mouse cursor over the link in the email would have made this immediately clear, so even clicking that link should have never happened in the first place. red flag 1
but, ok, you click the link, you get a new tab, and you're asked to fill in your auth credentials. but why? you should already be logged in to that service in your default browser, no? red flag 2
ok, maybe there is some browser cache issue, whatever, so you trigger your password manager to provide your auth to the website -- but here, every single password manager would immediately notice that the domain in the browser does not match the domain associated with the auth creds, and either refuse to paste the creds thru, or at an absolute minimum throw up a big honkin' alert that something is amiss, which you'd need to explicitly click an "ignore" button to get past. red flag 3
nobody should be able to publish new versions of widely-used software without some kind of manual review/oversight in the first place, but even ignoring that, if someone does have that power, and they get pwned by an attack like this, with at least 3 clear red flags that they would need to have explicitly ignored/bypassed, then CLEARLY this person cannot keep their current position of authority
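worth noting that the red flag 3 check is entirely mechanical, which is exactly why it beats human vigilance. a toy version of the decision a password manager makes before offering to fill (real managers match against the Public Suffix List rather than naively splitting on dots, so treat this as a sketch):

```typescript
// Naive sketch of a password manager's fill decision: only offer credentials
// whose stored site shares a registrable domain with the current page.
function registrableDomain(hostname: string): string {
  // Assumption: the last two labels approximate the registrable domain.
  // Real managers consult the Public Suffix List (e.g. for co.uk).
  return hostname.split(".").slice(-2).join(".");
}

function shouldOfferFill(pageUrl: string, storedUrl: string): boolean {
  const page = registrableDomain(new URL(pageUrl).hostname);
  const stored = registrableDomain(new URL(storedUrl).hostname);
  return page === stored;
}

// The case at hand:
console.log(shouldOfferFill("https://npmjs.help/login", "https://www.npmjs.com/login")); // false
```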
Besides the ecosystem issues, for the phishing part, I'll repost what I responded somewhere in the other related post, for awareness
---
I figure you aren't about to get fooled by phishing anytime soon, but based on some of your remarks and remarks of others, a PSA:
TRUSTING YOUR OWN SENSES to "check" that a domain is right, or an email is right, or the wording has some urgency or whatever is BOUND TO FAIL often enough.
I don't understand how most of the anti-phishing advice focuses on that, it's useless to borderline counter-productive.
What really helps against phishing:
1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.
2. U2F/Webauthn key as second factor is phishing-proof. TOTP is not. (A sketch of why is below.)
That is all there is. Any other method, any other "indicator" helps but is error-prone, which means someone somewhere will get phished eventually. Particularly if stressed, tired, or in a hurry. It just happened to be you this time.
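For the curious, the reason a U2F/WebAuthn key can't be phished isn't the hardware itself: the browser binds each credential to an RP ID and refuses to exercise it on a non-matching origin. A rough sketch of the client-side call (the rpId here is illustrative):

```typescript
// The browser enforces that rpId matches the page's origin, so a credential
// registered for npmjs.com simply cannot be asserted from npmjs.help.
async function secondFactor(challenge: Uint8Array) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge,                 // random bytes from the real server
      rpId: "npmjs.com",         // on npmjs.help this call throws a SecurityError
      userVerification: "preferred",
    },
  });
  return assertion; // the signature covers the origin and challenge; nothing reusable to steal
}
```

TOTP has no equivalent binding: the six digits are valid wherever you type them, which is exactly what a phishing proxy exploits.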
> 1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.
Sites that replace password login with a flow where you initiate the login process and then click a "magic link" in your email client are awful for developing good habits here, or for giving good general advice.
:c
In that case it's the same as a reset-password flow.
In both cases it's good advice not to click the link unless you initiated the request. But with the auth token in the link, you don't need to log in again, so the advice is still the same: don't log in from a link in your email; clicking links is ok.
Clicking links from an email is still a bad idea in general because of at least two reasons:
1. If a target website (say important.com) sets its session cookies without SameSite protection and doesn't use anti-forgery tokens, a 3rd-party website is able to make your browser send requests to important.com with your cookies attached, if you're logged in there. This depends on important.com having done something wrong, but the result is as powerful as getting a password from the user. (This is called cross-site request forgery, CSRF; see the sketch after this comment.)
2. They might have a browser zero-day and get code execution access to your machine.
If you initiated the process that sent that email and the timing matches, and there's no other way than opening the link, that's that. But clicking links in emails is overall risky.
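On the CSRF point, the usual server-side hardening is cookie attributes plus anti-forgery tokens. A sketch of the cookie side with Express (the route is made up; the cookie options are real):

```typescript
// Session cookies that won't ride along on cross-site requests.
import express from "express";

const app = express();

app.post("/login", (_req, res) => {
  res.cookie("session", "opaque-token-here", {
    httpOnly: true,  // not readable from page JavaScript
    secure: true,    // sent over HTTPS only
    sameSite: "lax", // not attached to cross-site POSTs, which kills classic CSRF
  });
  res.sendStatus(204);
});
```

Chromium-based browsers now treat cookies as Lax by default, which is a big part of why classic link-click CSRF has faded; the remaining email-link risk is mostly phishing, not forged requests.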
1 is true, but this applies to all websites you visit (and their ads, supply chain, etc). Drawing a security boundary here means never executing attacker-controlled Javascript. Good luck!
2 is also true. But also, a zero day like that is a massive deal. That's the kind of exploit you can probably sell to some 3 letter agency for a bag. Worry about this if you're an extremely high-value target, the rest of us can sleep easy.
A browser-integrated password manager is only phishing-proof if it's 100% reliable. If it ever fails to detect a credential field, it trains users that they sometimes need to work around this problem by copy-pasting the credential from the password manager UI, and then phishers can exploit that. AFAIK all existing password manager extensions have this problem, as do all browsers' native password-management features.
It doesn't need to be 100% reliable, just reliable enough.
If certain websites fail to be detected, that's a security issue on those specific websites, as I'll learn which ones tend to fail.
If they rarely fail to detect in general, it's infrequent enough to be diligent about in those specific cases. In my experience with password managers, they rarely fail to detect fields. If anything, they over-detect fields.
I think it's more appropriate to say TOTP /is (nearly)/ phishing-proof if you use a password manager integrated with the browser (not that it /doesn't need to be/ phishing-proof)
I receive Google Doc links periodically via email; fortunately they're almost never important enough for me to actually log in and see what's behind them.
My point, though, is that there's no real alternative when someone sends you a doc link. Either you follow the link or you have to reach out to them and ask for some alternative distribution channel.
(Or, I suppose, leave yourself logged into the platform all the time, but I try to avoid being logged into Google.)
I don't know what to do about that situation in general.
A Firefox plugin/feature (Multi-Account Containers, if memory serves), probably available in other browsers as well. It is useful for siloing cookies, so you can easily be logged into Google in one set of browser tabs and block their cookies in another.
> U2F/Webauthn key as second factor is phishing-proof. TOTP is not.
Last I checked, we're still in a world where the large majority of people with important online accounts (like, say, at their bank, where they might not have the option to disable online banking entirely) wouldn't be able to tell you what any of those things are. They don't have the option to use anything but SMS-based codes for most online services, and maybe app-based TOTP (or, in rare cases, a desktop program) for most of the rest. If they even have 2FA at all.
This is the point of the "passkey" branding. The idea is to get to the point where these alphabet-soup acronyms are no longer exposed to normal users and instead they're just like "oh, I have to set up a passkey to log into this website", the way they currently understand having to set up a password.
Sure. That still doesn't make Yubikey-style physical devices (or desktop keyring systems that work the same way) viable for everyone, everywhere, though.
Yeah, the pressure needs to be put on vendors to accept passkeys everywhere (and to the extent that there are technical obstacles to this, they need to be aggressively remediated); we're not yet at the point where user education is the bottleneck.
Urgency is also either phishing (log in now or we'll lock you out of your account in 24 hours) or marketing (take advantage of this promotion! expires in 24 hours!).
A guy I knew needed a car, found one, I told him to take it to a mechanic first. Later he said he couldn't, the guy had another offer, so he had to buy it right now!!!, or lose the car.
I mean, real deadlines do exist. The better heuristic is that, if a message seems to be deliberately trying to spur you into immediate action through fear of missing a deadline, it's probably some kind of trick. In this respect, the phishing message that was used here was brilliantly executed; it calmly, without using panic-inducing language, explains that action is required and that there's a deadline (that doesn't appear artificially short but in fact is coming up soon), in a way quite similar to what a legitimate action-required email would look like. Even a savvy user is likely to think "oh, I didn't realize the deadline was that soon, I must have just not paid attention to the earlier emails about it".
Yeah, this particular situation's a bit weird because it's asking the user to do something (rotate their 2FA secret) that in real life is not really a thing; I'm not sure what to think of it. But you could imagine something similar like "we want you to set up 2FA for the first time" or "we want you to supply additional personal information that the government has started making us collect", where the site might have to disable some kind of account functionality (though probably not a complete lockout) for users who don't do the thing in time.
Most mail providers have something like plus addressing. Properly used, that already eliminates a lot of phishing attempts: if I get a mail saying I need to reset something for foobar, but it is not addressed to me-foobar (or me+foobar), I already know it is fraudulent. That covers roughly 99% of phishing attempts for me.
The rest is handled by preferring plain text over HTML, and if some moron only sends HTML mails, by carefully dissecting them first. Allowing HTML in mail was one of the biggest mistakes we've ever made - zero benefit with a huge attack surface.
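The rule is simple enough to automate in a mail filter, too. A sketch of the check being described (addresses made up):

```typescript
// Per-service addresses as a phishing tripwire: mail about a given service
// that isn't addressed to the tag I registered there is fraudulent by definition.
function expectedAddress(service: string, user = "me", domain = "example.com"): string {
  return `${user}+${service}@${domain}`;
}

function looksLegit(toHeader: string, service: string): boolean {
  return toHeader.toLowerCase().includes(expectedAddress(service).toLowerCase());
}

console.log(looksLegit("me+npm@example.com", "npm")); // true
console.log(looksLegit("me@example.com", "npm"));     // false: generic address, be suspicious
```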
I agree that #1 is correct, and I try to practice this; and always for anything security related (update your password, update your 2FA, etc).
Still, I don’t understand how npmjs.help doesn’t immediately trigger red flags… it’s the perfect stereotype of an obvious scam domain. Maybe falling just short of npmjshelp.nigerianprince.net.
Is there somewhere you'd recommend that I can read more about the pros/cons of TOTP? These authenticator apps are the most common 2FA second factor that I encounter, so I'd like to have a good source for info to stay safe.
I watched a presentation from Stripe internal eng that was given I forget where.
An internal engineer there who did a bunch of security work phished like half of her own company (testing, obviously). Her conclusion, in a really well-done talk, was that preventing it is impossible: no human measures will reduce it, given her success at a very disciplined, highly security-conscious place.
The only thing that works is yubikeys which prevent this type of credential + 2fa theft phishing attack.
I had someone from a bank call me and ask for my SSN to confirm my identity. The caller ended up being legitimate, but I still didn't give it...like, are you kidding me?
This has happened to me more times than I can count, and it's extremely frustrating because it teaches people the wrong lesson. The worst part is they often get defensive when you refuse to cooperate, which just makes the whole thing unnecessarily more stressful.
1- As a professional, installing free dependencies to save on working time.
There's no such thing as a free lunch; you can't have your cake and eat it too. That is, you can't download dependencies that solve your problems without paying: without ads, without propaganda (for example, to lure you into maintaining such projects for THE CAUSE), without vendor lock-in, or without malware.
It's really silly to want to pile up mountains of super secure technology like webauthn, when the solution is just to stop downloading random code from the internet.
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
The problem here is that a single dev account can make updates to a prod codebase, or in the case of NX a single CI/CD token. Something with 5 Million downloads per week should not be controlled by one token if it takes me 3 approvals to get my $20 lunch reimbursement.
At the very least have an LLM review every PR to prod.
The nx supply chain attack via npm was the bullet many companies did not doge. I mean, all you needed was to have the VS Code nx plugin installed — which always checked for the latest published nx version on npm. And if you had a local session with GitHub (eg logged into your company’s account via the GH CLI), or some important creds in a .env file… that was exfiltrated.
This happened even if you had pinned dependencies and were on top of security updates.
We need some deeper changes in the ecosystem.
https://github.com/nrwl/nx/security/advisories/GHSA-cxm3-wv7...
> We need some deeper changes in the ecosystem.
I avoid anything to do with NPM, except for the typescript compiler, and I'm looking forward to the rewrite in Go where I can remove even that. For this reason.
As a comparison, in Go, you have minimum version spec, and it takes great pains to never execute anything you download, even during compilation stage.
NPM will often have different source then the github repo source. How does anyone even trust the system?
Yeah, Editor extensions are both auto-updated and installed in high risk dev environments. Quite a juicy target and I am surprised we haven’t seen large scale purchases by bad actors similar to browser extensions yet. However, I remember reading that the VsCode team puts a lot of effort in catching malware. But do all editors (with auto-updates) such as Sublime have such checks?
I usually make sure all the packages and the db are local, so my dev machine can run in airplane mode, and only turn on the internet when I use git push.
Maybe we need to unify explicit build toolchains rather than trying to glue them right into package managers?
The culprit here is not the source code, it's all the automatically executed build and installation hooks which lack total regards to any isolation or sandboxing.
Related. Others?
DuckDB NPM packages 1.3.3 and 1.29.2 compromised with malware - https://news.ycombinator.com/item?id=45179939 - Sept 2025 (209 comments)
NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657 - Sept 2025 (719 comments)
Dodged a bullet indeed
I find it insane that someone would get access to a package like this, then just push a shitty crypto stealer.
You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fledged out exploit?
You can exfiltrate API keys, add your SSH public key to the server then exfiltrate the server's IP address so you can snoop in there manually, if you're on a dev's machine maybe the browser's profiles, the session tokens common sales websites? My personal desktop has all my cards saved on Amazon. My work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either.
You don't even need to do anything with those, there's forums to sell that stuff.
Surely there's an explanation, or is it that all the good cybercriminals have stable high paying jobs in tech, and this is what's left for us?
> You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fledged out exploit?
Because the way this was pulled off, it was going to be found out right away. It wasn't a subtle insertion, it was a complete account take over. The attacker had only hours before discovery - so the logical thing to do is a hit and run. They asked what is the most money that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time) and crypto is the obvious answer.
Unless the back doors were so good they weren't going to be discovered even though half the world would be dissecting the attack code, there was no point in even trying.
"found out right away"... by people with time to review security bulletins. There's loads of places I could see this slipping through the cracks for months.
I'm assuming they meant the account takeover was likely to be found out right away. You change your password on a major site like that and you're going to get an email about it. Login from a new location also triggers these emails, though I admit I haven't logged onto NPM in quite a long time so I don't know that they do this.
It might get missed, but I sure notice any time account emails come through even if it's not saying "your password was reset."
There's probably already hundreds of thousands of Jira tickets to fix it with no sprint assigned....
Yes, but this is an ecosystem large enough to include people who have that time (and inclination and ability); and once they have reported a problem, everyone is on high alert.
If you steal the cookies from dev machines, or steal ssh keys along with a list of recent ssh connections, or do any other credential theft, there are going to be lots of people left impacted. Yes, lots of people reading tech news or security bulletins are going to check whether they were compromised and preemptively revoke those credentials. But that's work, meaning even among those informed there will be many who just assume they weren't impacted. Lots of people/organisations are going to be complacent and leave you with valid credentials.
If a dev doesn't happen to run npm install during the period between when the compromised package gets published and when npm yanks it (which for something this high-profile is generally measured in hours, not days), then they aren't going to be impacted. So an attacker's patience won't be rewarded with many valid credentials.
But that is dumb luck. Release an exploit, hope you can then gain further entry into a system at a company that is both high value and doesn't have any basic security practices in place.
That could have netted the attacker something much more valuable, but it is pure hit or miss and it requires more skill and patience for a payoff.
Versus: blast out some crypto-stealing code and grab as many funds as possible before being found out.
> Lots of people/organisations are going to be complacent and leave you with valid credentials
You'd get non-root credentials on lots of dev machines, and likely some non-root credentials on prod machines, and possibly root access to some poorly configured machines.
Two-factor is still in place; you only have whatever creds that npm install was run with. Plenty of the really high-value prod targets may very well be on machines that don't even have publicly routable IPs.
With a large enough blast radius, this may have worked, but it wouldn't be guaranteed.
The window of installation time would be pretty minimal, and the operating window would only be as long as those who deployed while the malicious package was up waited to do another deploy.
If they'd waited a week before using their ill-gotten credentials to update the packages, would they have been detected in that week?
That is what the tj-actions attacker did: https://unit42.paloaltonetworks.com/github-actions-supply-ch...
Stolen cryptocurrency is a sure thing because fraudulent transactions can't be halted, reversed, or otherwise recovered. Things like a random dev's API and SSH keys are close to worthless unless you get extremely lucky, and even then you have to find some way to sell or otherwise make money from those credentials, the proceeds of which will certainly be denominated in cryptocurrency anyway.
Agreed. I think we're all relieved at the harm that wasn't caused by this, but the attacker was almost certainly more motivated by profit than harm. Having a bunch of credentials stolen en masse would be a pain in the butt for the rest of us, but from the attacker's perspective your SSH key is just more work and opsec risk compared to a clean crypto theft.
Putting it another way: if I'm a random small-time burglar who happens to find himself in Walter White's vault, I'm stuffing as much cash as I can fit into my bag and ignoring the barrel of methylamine.
And it's probably the lowest risk way to profit from this attack
Ultimately, stolen cryptocurrency doesn't cause real-world damage for real people; it just causes a bad day for people who gamble on questionable speculative investments.
The damage from this hack could have been far worse if it was stealing real money people rely on to feed their kids.
Get in, steal a couple hundred grand, get out, and do the exact same thing a few months later. Repeat a few times and you can live worry-free until retirement, if you know how to evade the cops.
Even if you steal other stuff, you're going to need to turn it all into cryptocurrency anyway, and how much is an AWS key really going to bring in.
There are criminals that focus on extracting passwords and password manager databases as well, though they often also end up going after cryptocurrency websites.
There are probably criminals out there biding their time, waiting for the perfect moment to strike, silently infiltrating companies through carefully picked dependencies, but those don't get caught as easily as the ones draining cryptocurrency wallets.
> if you know to evade the cops.
step 1: live in a place where the cops do not police this type of activity
step 2: $$$$
> do the exact same thing a few months later
> one-in-a-million opportunity
This guy made $66 - https://x.com/SolanaFloor/status/1965144417565900868
It is not a one-in-a-million opportunity, though. I hate to take this to the next level, but as criminal elements wake up to the fact that a few "geeks" can possibly get them access to millions of dollars, expect much worse to come. As a maintainer of any code that could gain bad guys access, I would be seriously considering how well my physical identity is hidden online.
You give an example of an incredibly targeted attack of snooping around manually on someone's machine so you can exfiltrate yet more sensitive information like credit card numbers (how, and then what?)
But (1) how do you do that with hundreds or thousands of SSH/API keys and (2) how do you actually make money from it?
So you get a list of SSH or specific API keys and then write a crawler that can hopefully gather more secrets from them, like credit card details (how would that work btw?) and then what, you google "how to sell credentials" and register on some forum to broker a deal like they do in movies?
Sure sounds a hell of a lot more complicated and precarious than swapping out crypto addresses in flight.
The pushed payload didn't generate any new traffic. It merely swapped the recipient of a crypto transaction for a different address. It would have been really hard to detect. Exfiltrating API keys would have been picked up a lot faster.
OTOH, this modus operandi is completely inconsistent with the way they published the injected code: by taking over a developer's account. This was going to be noticed quickly.
If the payload had been injected in a more subtle way, it might have taken a long time to figure out. Especially with all the levenshtein logic that might convince a victim they'd somehow screwed up.
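For context, the write-ups of this payload describe exactly that: it carried a list of attacker wallets and substituted whichever one was closest to the legitimate address, so a victim eyeballing the first and last characters sees nothing amiss. The core of the trick is plain edit distance (a sketch; any real addresses would be hardcoded by the attacker):

```typescript
// Classic dynamic-programming Levenshtein distance.
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Pick the lookalike closest to the real address.
function closestLookalike(real: string, candidates: string[]): string {
  return candidates.reduce((best, c) =>
    levenshtein(real, c) < levenshtein(real, best) ? c : best
  );
}
```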
> You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fledged out exploit?
The plot of Office Space might offer clues.
Also isn't it crime 101 that greedy criminals are the ones who are more likely to get caught?
What makes you so sure that the exploit is over? Maybe they wanted their secondary exploit to get caught to give everyone a sense of security? Their primary exploit might still be lurking somewhere in the code?
API/SSH keys can easily be swapped, it's more hassle than it's worth. Be glad they didn't choose to spread the payload of one of the 100 ransomware groups with affiliate programs.
> My work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either.
What gets me is everyone acknowledges this, yet HN is full of comments ripping on IT teams for the restrictions & EDR put in place on dev laptops.
We on the ops side have known these risks for years, and that knowledge of those risks is what drives organizational security policies and endpoint configuration.
Security is hard, and it is very inconvenient, but it's increasingly necessary.
I think people rip on EDR and security when (1) they haven't had it explained why it does what it does, or (2) it is process for process's sake.
To wit: I have an open ticket right now from an automated code review tool that flagged a potential vulnerability. I and two other seniors have confirmed that it is a false alarm so I asked for permission to ignore it by clicking the ignore button in a separate security ticket. They asked for more details to be added to the ticket, except I don’t have permissions to view the ticket. I need to submit another ticket to get permission to view the original ticket to confirm that no less than three senior developers have validated this as a false alarm, which is information that is already on another ticket. This non-issue has been going on for months at this point. The ops person who has asked me to provide more info won’t accept a written explanation via Teams, it has to be added to the ticket.
Stakeholders will quickly treat your entire security system like a waste of time and resources when they can plainly see that many parts of it are a waste of time and resources.
The objection isn’t against security. It is against security theater.
This sounds sensible for the “ops person”?
It might not be sensible for the organization as a whole, but there’s no way to determine that conclusively, without going over thousands of different possibilities, edge cases, etc.
What about this sounds sensible?
I have already documented, in writing, in multiple places, that the automated software has raised a false alarm, as well as providing a piece of code demonstrating that the alert was wrong. They are asking me to document it in an additional place that I don't have access to, presumably for perceived security reasons? We already accept that my reasoning around the false alarm is valid, they just have buried a simple resolution beneath completely stupid process. You are going to get false alarms, if it takes months to deal with a single one, the alarm system is going to get ignored, or bypassed. I have a variety of conflicting demands on my attention.
At the same time, when we came under a coordinated DDOS attack from what was likely a political actor, security didn't notice the millions of requests coming from a country that we have never had a single customer in. Our dev team brought it to their attention where they, again, slowed everything down by insisting on taking part in the mitigation, even though they couldn't figure out how to give themselves permission to access basic things like our logging system. We had to devote one of our on calls to walking them through submitting access tickets, a process presumably put in place by a security team.
I know what good security looks like, and I respect it. Many people have to deal with bad security on a regular basis, and they should not be shamed for correctly pointing out that it is terrible.
If you're sufficiently confident there can be no negative consequences whatsoever… then just email that person's superiors and cc your superiors to guarantee in writing you'll take responsibility?
The ops person obviously can’t do that on your behalf, at least not in any kind of organizational setup I’ve heard of.
As the developer in charge of looking at security alerts for this code base, I already am responsible, which is why I submitted the exemption request in the first place. As it is, this alert has been active for months and no one from security has asked about the alert, just my exemption request, so clearly the actual fix (disregarding or code changes) are less important than the process and alert itself.
So the solution to an illogical, kafkaesque security process is to bypass the process entirely via authority?
You are making my argument for me.
This is exactly why people don’t take security processes seriously, and fight efforts to add more security processes.
So you agree with me the ops person is behaving sensibly given real life constraints?
Edit: I didn’t comment on all those other points, so it seems irrelevant to the one question I asked.
At least at $employer a good portion of those systems are intended to stop attacks on management and the average office worker. The process is not geared towards securing dev(arbitrary code execution)-ops(infra creds). They're not even handing out hardware security keys for admin accounts. I use my own, some other devs just use TOTP authenticator apps on their private phones.
All their EDR crud runs on Windows. As a dev I'm allowed to run WSL, but the tools do not reach inside WSL, so if that gets compromised they would be none the wiser.
There is some instrumentation for linux servers and cloud machines, but that too is full of blind spots.
And as a sibling comment says, a lot of the policies are executed without anyone being able to explain their purpose, being able to grant "functionally equivalent security" exceptions or them even making sense in certain contexts. It feels like dealing with mindless automatons, even though humans are involved. For example a thing that happened a while ago: We were using scrypt as KDF, but their scanning flagged it as unknown password encryption and insisted that we should use SHA2 as a modern, secure hashing function. Weeks of long email threads, escalation and several managers suggesting "just change it to satisfy them" followed. That's a clear example of mindless rule-following making a system less secure.
Blocking remote desktop forwarding of security keys also is a fun one.
Funny, I read that quote, and assumed it meant something unsavory, and not say, root access to an AWS account.
There's nothing wrong with staying focused (on grabbing the money).
Your ideas are potentially more lucrative over time, but first they create more work and risk for the attacker.
Because it's North Korea, and cryptocurrency is the best asset they can get, for pragmatic reasons.
For anything else you need a fiat market, which is hard to deal with remotely.
Seems possible to me that someone has done an attack exactly like you describe and just was never caught.
I fell for this malware once. Had the malware on my laptop even with mb in the background. I copy-pasted an address and didn't even check it. My bad indeed. Those guys make a lot of money from these "one shot" moments.
As long as we get lucky nothing is going to change.
> find it insane that someone would get access to a package like this, then just push a shitty crypto stealer
Consumer financial fraud is quite big and relatively harmless. Industrial espionage, otoh, can potentially put you in the crosshairs of powerful and/or rogue elements, and so only the big actors get involved, but in a targeted way, preferring to not leave much if any trace of compromise.
yeah a shitty crypto stealer is more lucrative, more quickly monetized, has less OPSEC issues for the thief if done right, easier to launder
nobody cares about your trade secrets, or some nation's nuclear program, just take the crypto
one in a million opportunity? the guy registered a domain and sent some emails, dude. it's cheap as hell
Maybe one in a million is hyperbolic but that’s sorta the game with these attacks isn’t it? Registering thousands upon thousands of domains + tens of thousands of emails until you catch something from the proverbial pond.
>Saved by procrastination!
Seriously, this is one of my key survival mechanisms. By the time I became system administrator for a small services company, I had learned to let other people beta test things. We ran Microsoft Office 2000 for 12 years, and saved soooo many upgrade headaches. We had a decade without the need to retrain.
That, and like others have said... never clicking links in emails.
This is how I feel about my Honda, and to some extent, Kubernetes. In the former case I kept a 2006 model in good order for so long I skipped at least two (automobile) generation's worth of car-to-phone teething problems, and after years of hearing people complain about their woes I've found the experience of connecting my iphone to my '23 car pretty hassle-free. In the latter, I am finally moving a bunch of workloads out of EC2 after years of nudging from my higher-ups and, while it's still far from a simple matter I feel like the managed solutions in EKS and GKE have matured and greatly lessen the pain of migrating to K8S. I can only imagine what I would have gotten bogged down with had I promptly acted on my bosses' suggestion to do this six or seven years ago. (I also feel very lucky that the people I work for let me move on these things in my own due time.)
Not in the "npm ecosystem". You're hopelessly behind there if you haven't updated in the last 54 seconds.
Well, in this case it makes sense to update fast, doesn't it?
Sorry, the "npm ecosystem" command has been deprecated. You can instead use npm environment (or npm under-your-keyboard because we helpfully decided it should autocorrect and be an alias)
this seems to be a clever joke. sad to see it dead
"Just wait 2 weeks to use new versions by default" is an amazing defense method against supply chain attacks.
It's also a really ineffective defense against 0-days!
If I put my risk management hat on: 0-days in the npm ecosystem are not that much of a problem.
They stop working before you can use them.
Sadly we don't have any defense against 0 days if an emergency patch is indistinguishable from an attack itself.
A better defense would be to delete or quarantine the compromised versions, fail the build, and escalate to a human; that covers the zero-day case.
I'll reply to you tomorrow
...by then it might be working again anyway, or the user figured out what they were doing wrong.
"Hey, is it still broken? No? Great!"
I wrote the first commit for slice-ansi in 2015 to solve a baby problem for a cli framework I was building, and worked with Qix a little on the chalk org after it. It's wild looking back and seeing how these things creep in influence over time.
I know this isn't really possible for smaller guys but larger players (like NPM) really should buy up all the TLD versions of "npm" (that is: npm.io, npm.sh, npm.help, etc). One of the reasons this was so effective is that the attacker managed to snap up "npm.help"
Then you have companies like AWS: they were sending invoices from `no-reply-aws@amazon.com`, but last month they changed it to `no-reply@tax-and-invoicing.us-east-1.amazonaws.com`.
That looks like a phishing attempt from someone using a random EC2 instance or something, but apparently it's legit. I think. Even the "heads-up" email they sent beforehand looked like phishing, so I was waiting for the actual invoice to see if they really started using that address, but even now I'm not opening these attached PDFs.
These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.
> These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.
Yep. At every BigCo I've worked at, nearly all of the emails from Corporate have been indistinguishable from phishing. Sometimes, they're actual spam!
Do the executives and directors responsible for sending these messages care? No. They never do, and get super defensive and self-righteous when you show them exactly how their precious emails tick every "This message is phishing!" box in the mandatory annual phishing-detection-and-resistance training.
I remember an email I once got.
Title: "Expense report overdue - Please fill now"
Subject:
<empty body>
<Link to document trying its best to look like Google's attachment icon but was actually a hyperlink to a site that asked me to log in with my corporate credentials>
---
So like, obviously this is a stupid phishing email, right? Especially as at this time, I had not used my corporate card.
A few weeks later I got the finance team reaching out threatening to cancel my corporate card because I had charges on it with no corresponding expense report filed.
So on checking the charge history for the corporate card, it was the annual tax payment that all cards are charged in my country every year, and finance should have been well aware of. Of course, then the expense system initially rejected my report because I couldn't provide a receipt, as the card provider automatically deducts this charge with no manual action on the card owner's side...
A few years ago our annual corporate phishing training was initiated by an email sent from a random address asking us to log in with our internal credentials on a random website.
A week later some executive pushing the training emailed the entire company saying that it was unacceptable that nobody from engineering had logged into the training site and spun some story about regulatory requirements. After lots of back and forth they still wouldn't accept that it obviously looked like a phishing email.
Eventually when we actually did the training, it literally told us to check the From address of emails. I sometimes wonder if it was some weird kind of performance art.
It’s all just box ticking and CYA compliance.
“We got pwned but the entire company went through a certified phishing awareness program and we have a DPI firewall. Nothing more we could have done, we’re not liable.”
If Kevin Mitnick shows up or is referenced, then I'm pretty sure it's performance art
I thought facebookmail.com was fake. No, it is actually legit
There are way too many TLDs for this to be even practical: https://data.iana.org/TLD/tlds-alpha-by-domain.txt
I agree that especially larger players should be proactive and register all similar-sounding TLDs to mitigate such phishing attacks, but they can't be outright prevented this way.
There are like 1500 TLDs; some of them are restricted or country-code TLDs, but it makes me wonder how much it would actually cost per year to maintain registration of every non-restricted TLD. I'm sure there's some SaaS company that'll do it.
OTOH, doesn't ICANN already sometimes restrict who has access to a given TLD? Would it really be that crazy for them to say "maybe we shouldn't let registrars sell npm.<TLD> regardless of the TLD", and likewise for a couple dozen of the most obvious targets (google., amazon., etc.)? No one needs to pay for these domains if no one is selling them in the first place. I don't love the idea of special treatment for giant companies in terms of domains, but we're already kind of there with the whole process they did when initially allowing companies to compete for exclusive access to TLDs, so we might as well use that process for something actually useful (unlike, say, letting companies apply for exclusive ownership of ".music" and have a whole legal process to determine that maybe that isn't actually beneficial for the internet as whole: https://en.wikipedia.org/wiki/.music)
The TLDs run the whole gamut from completely open to almost impossible to get.
This won't work - npm.* npmjs.* npmjs-help.* npm-help.* node.* js.* npmpackage.*. The list is endless.
You can't protect against people clicking links in emails in this way. You might say `npmjs-help.ph` is a phishy domain, but npmjs.help is a phishy domain and people clicked it anyway.
there is also the more recent style of phishing domains that look like healthcare.gov-profile.co/user
That seems like a bad idea compared to just having a canonical domain - people might become used to seeing "npm.<whatever>" and assuming it is legit. And then all it takes is one new TLD where NPM is a little late registering for someone to do something nefarious with the domain.
Just because you buy them doesn't mean that you have to use them. Squatting on them is no more harmful (except financially) than leaving them available for potentially hostile 3rd parties.
Sure, I guess buying up every npm.* you can find and then having a message "never use this, only use npm.com" could work. I thought OP was saying have every npm.* site be a mirror of the canonical site
Looks like it costs ~$200,000 to get your own TLD. If a bunch of companies started doing the "register every TLD of our brand", I wonder what the breakeven point would be where just registering a TLD is profitable.
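Back-of-the-envelope, with hedged numbers: ICANN's last application round charged a $185,000 evaluation fee, plus fixed fees on the order of $25,000/year to keep a gTLD running. Defensively registering one brand across the roughly 1,200 open gTLDs at, say, an average of $30/year comes to about $36,000/year. So your own TLD costs several times more than blanket defensive registration, before counting operational overhead; breakeven would only appear if premium-priced renewals pushed the defensive bill far above that average.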
That's an insane amount of money.
npmjs.help not npm.help - the typo is also in the article.
> If you were targeted with such a phishing attack, you'd fall for it too and it's a matter of when not if. Anyone who claims they wouldn't is wrong.
I like to think I wouldn't. I don't put credentials into links from emails that I didn't trigger right then (e.g. password reset emails). That's a security skill everyone should be practicing in 2025.
"'such' a phishing attack" makes it sound like a sophisticated, indepth attack, when in reality it's a developer yet again falling for a phishing email that even Sally from finance wouldn't fall for, and although anyone can make mistakes, there is such a thing as negligent, amateur mistakes. It's astonishing to me.
Really feels like these big open package repos need a better security solution. Or at least a core subset of carefully vetted packages.
Same issue with Python, Rust, etc. It's all very trust-driven.
Is the fundamental problem with npm still a lack of enforced namespacing?
In the Java world, I know there’s been griping from mostly juniors re “why isn’t Maven easy like npm?” (I work with some of these people). I point them to this article: https://www.sonatype.com/blog/why-namespacing-matters-in-pub...
Maven got a lot of things right back in the day. Yes POM files are in xml and we all know xml sucks etc, but aside from that the stodgy focus on robustness and carefully considered change gets more impressive all the time.
Nothing about this attack would be solved by namespacing, but it might have been solved by maven's use of GPG keys.
Isn't it time NPM started using that? Why has this taken so long?
Linux distribution packages are also very trust-driven — but you have to earn trust to publish. Then there is a whole system to verify trust. NPM is more like "everything goes".
In a case like this, the package maintainer's account itself has been hacked, so I'm not sure if that would be meaningful.
The only solution would be to prevent all releases from being applied immediately.
A solution could be enforcing hardware keys for 2FA for all maintainers if a package has more than XX thousand weekly downloads.
No hardware keys, no new releases.
Passkeys - no need for hardware key.
They have it implemented.
I created an NPM account today and added a passkey from my laptop and a hardware key as secondary. As I have it configured, it asked me for it while publishing my test package.
So the guy either had TOTP or just the pw.
Seems like should be easy to implement enforcement.
There needs to be a massive push from the larger important packages to eliminate these idiotic transitive dependencies. Core infrastructure shouldn't rely on trivial packages maintained by a single random person from who knows where that can push updates without review. It's absolutely insane.
As the post mentions wallets like MetaMask being the targets, AFAIK MetaMask in particular might be one of the best protected (isolated) applications from this kind of attack due to their use of LavaMoat https://x.com/MetaMask/status/1965147403713196304 -- though I'd love to read a detailed analysis of whether they actually are protected. No affiliation with MetaMask, just curious about effectiveness of seemingly little adopted measures (relative to scariness of attacks).
Added: story dedicated to this topic more or less https://news.ycombinator.com/item?id=45179889
"there is no way to prevent this", says the only ecosystem where this regularly happens
> With that in mind, at a glance the idea of changing your two-factor auth credentials "for security reasons" isn't completely unreasonable.
No?
How do you change your 2FA? Buy a new phone? A new Yubikey?
For TOTP it's as simple as scanning a new QR code.
I agree that rotating 2FA should ring alarm bells as an unusual request. But that requires thinking.
Is it possible to do the thing proposed in the email without clicking the link?
I just try to avoid clicking links in emails generally...
Should be - open another browser window and manually log into npm whatever, and update your 2fa there.
Definitely good practice.
This is the Way. To minimize attack surface, the senders of authentic messages should straight-up avoid putting links to "do the thing" in the message. Just tell the user to update their credentials via the website.
That's what the Australian Tax Office does. Just a plaintext message that's effectively "you've got a new message. Go to the website to read it."
All my medical places I use do that, with the note that you can also use their app. Good system.
My doctor's office does the same thing. So do some financial services companies.
For most users, that'll just result in them going to Google, searching for the name of your business, and then clicking the first link blindly. At that point you're trusting that there's no malicious actors squatting on your business name's keyword -- and if you're at all an interesting target, there's definitely malvertising targeting you.
The only real solution is to have domain-bound identities like passkeys.
That's what I always do. Never click these kinds of links in e-mail.
Always manually open the website.
This week Oracle Cloud started enforcing 2FA. And surely I didn't click their e-mail link to do that.
The Microsoft ecosystem certainly makes this challenging. At work, I get links to Sharepoint hosted things with infinitely long hexadecimal addresses. Otherwise finding resources on Sharepoint is impossible.
> I just try to avoid clicking links in emails generally...
I don't just generally try, I _never_ click links in emails from companies, period. It's too dangerous and not actually necessary. If a friend sends me a link, I'll confirm it with them directly before using it.
It seems to me that having an email client that simply disables all the links in the email is probably a good idea. Or maybe, there should be explicit white-listing of domains that are allowed to be hyperlinks.
And who would control that whitelist? How would it be any different than the domain system or PKI CA system we have now?
Do you think there would be the time to properly review applications to get on the whitelist?
I've always thought it's insane that anyone on the planet with a connection can drop a clickable link in front of you. Clickable links in email should be considered harmful. Force the user to copy/paste
URLs are also getting too damn long
How would copy-pasting help in this scenario?
> These kinds of dependencies are everywhere and nobody would even think that they could be harmful.
Tons of people think these kind of micro dependencies are harmful and many of them have been saying it for years.
I'm rather convinced that the next major language-feature wave will be permissions for libraries. It's painfully clear that we're well past the point where it's needed.
I don't think it'll make things perfect, not by a long shot. But it can make the exploits a lot harder to pull off.
Alternatively, I've long been wondering if automatic package management may have been a mistake. Its primary purpose seems to be to enable this kind of proliferation of micro-dependencies by effectively sweeping the management of these sprawling dependency graphs under the carpet. But the upshot of that is, most changes to your dependency graph, and by extension your primary vector for supply chain attacks, becomes something you're no longer really looking at.
Versus, when I've worked at places that eschew automatic dependency management, yes, there is some extra work associated with manually managing them. But it's honestly not that much. And in some ways it becomes a boon for maintainability because it encourages keeping your dependency graph pruned. That, in turn, reduces exposure to third-party software vulnerabilities and toil associated with responding to them.
Manual dependency management without a package manager does not lead people to do more auditing.
And at least with a standardized package manager, the packages are in a standard format that makes them easier to analyze, audit, etc.
yea, just look at the state of many C projects. it's rather clearly worse in practice in aggregate.
should it be higher friction than npm? probably yes. a permissions system would inherently add a bit (leftpad includes 27 libraries which require permissions "internet" and "sudo", add? [y/N]) which would help a bit I think.
but I'm personally more optimistic about structured code and review signing, e.g. like cargo-crev: https://web.crev.dev/rust-reviews/ . there could be a market around "X group reviewed it and said it's fine", instead of the absolute chaos we have now outside of conservative linux distro packagers. there's practically no sharing of "lgtm" / "omfg no" knowledge at the moment, everyone has to do it themselves all the time and not miss anything or suffer the pain, and/or hope they can get the package manager hosts' attention fast enough.
C has a lot of characteristics beyond simple lack of a standard automatic package manager that complicate the situation.
The more interesting comparison to me is, for example, my experience on C# projects that do and do not use NuGet. Or even the overall C# ecosystem before and after NuGet got popular. Because then you're getting closer to just comparing life with and without a package manager, without all the extra confounding variables from differing language capabilities, business domains, development cultures, etc.
when I was doing C# pre-nuget we had an utterly absurd amount of libraries that nobody had checked and nobody ever upgraded. so... yeah I think it applies there too, at least from my experience.
I do agree that C is an especially-bad case for additional reasons though, yeah.
Gotcha. When I was, we actively curated our dependencies and maintaining them was a regularly scheduled task that one team member in particular was in charge of making sure got done.
Well, consider that a lot of these functions that were exploited are simple things. We use a library to spare ourselves the drudgery of rewriting them, but now that we have AI, what's it to me if I end up with my own string-colouring functions for output in some file under my own control, vs. bringing in an external dependency that puts me on a permanent upgrade treadmill and opens up the risk of supply chain attacks?
Leftpad as a library? Let it all burn down; but then, it's Javascript, it's always been on fire.
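And the function in question really is a few lines. A dependency-free sketch using the standard ANSI SGR codes:

```typescript
// Minimal ANSI colour helpers: wrap text in an SGR code and reset afterwards.
const sgr = (code: number) => (text: string) => `\x1b[${code}m${text}\x1b[0m`;

export const red = sgr(31);
export const green = sgr(32);
export const yellow = sgr(33);
export const bold = sgr(1);

console.log(red("error:") + " something " + bold("important") + " broke");
```

Not an argument for hand-rolling everything, but at this size the permanent upgrade treadmill and the supply chain exposure are a steep price for ten lines you could own outright.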
Unpopular opinion these days, but: It should be painful to pull in a dependency. It should require work. It should require scrutiny, and deep understanding of the code you're pulling in. Adding a dependency is such an important decision that can have far reaching effects over your code: performance, security, privacy, quality/defects. You shouldn't be able to casually do it with a single command line.
I wouldn't go quite that far on painful. The main issue is transitive dependencies; the tree can be several layers deep.
In the C world, anything that is not direct is often a very stable library and can be brought in as a peer dep. Breaking changes happen less, and you can resolve the tree manually.
In NPM, there are so many little packages that even renowned packages choose to rely on them for no obvious reason. It's a severe lack of discipline.
For better or worse it is often less work to create a dependency than to maintain it over its lifetime. Improvements in maintenance also ease creation of new dependencies.
Java went down that road with the applet sandboxing. They thought that this would go well because the JVM can be a perfect gatekeeper on the code that gets to run and can see and stop all calls to forbidden methods.
It didn't go well. The JVM did its part well, but they couldn't harden the library APIs. They ended up playing whack-a-mole with a steady stream of bugs in privileged parts of the system libraries that allowed for sandbox escapes.
Yes, but that was with a very ambitious sandbox that included full GUI access. Sandboxing a pure data transformation utility like something that strips ANSI escape codes would have been much easier for it.
It was too complex. Just requiring libraries to be whitelisted before they can make system calls would go a long way toward preventing a whole class of exploits.
There’s no reason a color parser, or a date library should require network or file system access.
Totally agreed, and I'm surprised this idea hasn't become more mainstream yet.
If a package wants to access the filesystem, shell, OS API's, sockets, etc., those should be permissions you have to explicitly grant in your code.
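As a sketch of what those explicit grants could look like (every name here is invented; nothing like this exists in npm today):

```ts
// Hypothetical per-dependency grants, enforced by the package manager or a
// loader shim. All names are invented for illustration.
type Permission = "fs:read" | "fs:write" | "net" | "exec" | "env";

// The host application declares up front what each dependency may touch.
const grants: Record<string, Permission[]> = {
  "chalk": [],              // text coloring: no I/O at all
  "node-fetch": ["net"],    // explicitly allowed to open sockets
  "dotenv": ["fs:read"],    // may read files, nothing else
};

// A loader could refuse to hand a package any capability outside its grant,
// failing loudly at install or require time instead of silently at runtime.
function assertGranted(pkg: string, needed: Permission): void {
  if (!(grants[pkg] ?? []).includes(needed)) {
    throw new Error(`${pkg} attempted "${needed}" without a grant`);
  }
}
```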
This was one of Doug Crockford's big bugaboos since The Good Parts and JSLint and Yahoo days: the idea that lexical scope, a.k.a. closures, gives you an unprecedented ability to actually control I/O, because you can pass the I/O functions in as arguments. As long as you don't put I/O in global scope (i.e. window.fetch) but instead inject it into the main entrypoint, that entrypoint gets to control what everyone else can do; it could, for example, hand a dependency a fetch that only reaches approved hosts (see the sketch below). I vaguely remember him being really passionate about "JavaScript lets you do this, we should all program in JavaScript" at the time... these days he's much more likely to say "JavaScript doesn't have any way to force you to do this and close off all the exploits from the now-leaked global scope, we should never program in JavaScript."
Shoutout to Ryan Dahl and Deno, where you write `#!/usr/bin/env deno --allow-net=api.mydomain.example` at the start of your shell script to accomplish something similar.
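A rough reconstruction of the injection idea in modern syntax (my sketch, not Crockford's actual code; `untrustedLibrary` is a stand-in for any dependency):

```ts
// I/O lives only in the entrypoint's arguments, never in globals, so the
// entrypoint decides what everything downstream is allowed to do.
type IO = { fetch: typeof fetch };

declare function untrustedLibrary(io: IO): void; // some third-party dependency

function main(io: IO): void {
  // Hand the dependency a fetch that can only reach one approved origin.
  const guarded: IO = {
    fetch: (input, init) => {
      const url = new URL(input instanceof Request ? input.url : String(input));
      if (url.origin !== "https://api.mydomain.example") {
        return Promise.reject(new Error("fetch blocked by entrypoint policy"));
      }
      return io.fetch(input, init);
    },
  };
  untrustedLibrary(guarded);
}

main({ fetch: globalThis.fetch.bind(globalThis) });
```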
In my amateur programming-conlang hobby that will probably never produce anything joyful to anyone other than me, one of those programming languages has a notion of sending messages to "message-spaces" and I shamelessly steal Doug's idea -- message-spaces have handles that you can use to communicate with them, your I/O is a message sent to your main m-space containing a bunch of handles, you can then pattern-match on that message and make a new handle for a new m-space, provisioned with a pattern-matcher that only listens for, say, HTTP GET/HEAD events directed at the API, and forwards only those to the I/O handle. So then when I give this new handle to someone, they have no way of knowing that it's not fully I/O capable, requests they make to the not-API just sit there blackholed until you get an alert "there are too many unread messages in this m-space" and peek in to see why.
It's harder than it looks. I wrote an essay exploring why here:
https://blog.plan99.net/why-not-capability-languages-a8e6cbd...
Thanks, it's great to see all the issues you raise.
On the other hand, it seems about as hard as I was imagining. I take for granted that it has to be a new language -- you obviously can't add it on top of Python, for example. And obviously it isn't compatible with things like global monkeypatching.
But if a language's built-in functions are built around the idea from the ground up, it seems entirely feasible. Particularly if you make the limits entirely around permissions around data communication -- with disk, sockets, APIs, hardware like webcams and microphones, and "god" permissions like shell or exec commands -- and not about trying to merely constrain resource usage around things like CPU, memory, etc.
If a package is blowing up your memory or CPU, you'll catch it quickly and usually the worst it can do is make your service unavailable. The risk to focus on should be exclusively data access+exfiltration and external data modification, as far as I can tell. A package shouldn't be able to wipe your user folder or post program data to a URL at all unless you give it permission. Which means no filesystem or network calls, no shell access, no linked programs in other languages, etc.
tbh none of that sounds particularly bad, nor do I think capabilities are necessary (but obviously useful).
we could literally just take Go and categorize on "imports risky package" and we'd have a better situation than we have now, and it would encourage library design that isolates those risky accesses so people don't worry about them being used. even that much should have been table stakes over a decade ago.
and like:
>No language has such an object or such interfaces in its standard library, and in fact “god objects” are viewed as violating good object oriented design.
sure they do. that's dependency injection, and you'd probably delegate it to a dependency injector (your god object) that resolves permissions. plus go already has an object for it that's passed almost everywhere: context.
perfect isn't necessary. what we have now very nearly everywhere is the most extreme example of "yolo", almost anything would be an improvement.
Yes, dependency injection can help although injectors don't have any understanding of whether an object really needs a dependency. But that's not a god object in the sense it's normally meant. For one, it's injecting different objects :)
Thanks, this was a good overview of some of the challenges involved with designing a capability language.
I think I need to read up more on how to deal with (avoiding) changes to your public APIs when doing dependency injection, because that seems like basically what you're doing in a capability-based module system. I feel like there has to be some way to make such a system more ergonomic and make the common case of e.g. "I just want to give this thing the ability to make any HTTP request" easy, while still allowing for flexibility if you want to lock that down more.
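One pattern that seems to keep this ergonomic (a sketch, not any particular language's module system): make the broad capability easy to construct, and make locking it down a wrapper rather than an API change.

```ts
// The capability is just a function type, so module signatures stay stable
// no matter how narrow a version the entrypoint decides to hand out.
type HttpGet = (url: string) => Promise<string>;

// The easy common case: full HTTP access, built once at the entrypoint.
const fullHttp: HttpGet = async (url) => (await fetch(url)).text();

// Locking down later is a wrapper, not a signature change anywhere else.
function restrictToHost(get: HttpGet, host: string): HttpGet {
  return (url) =>
    new URL(url).host === host
      ? get(url)
      : Promise.reject(new Error(`blocked: ${url}`));
}

// A consumer never knows or cares which variant it received.
async function fetchReleases(get: HttpGet): Promise<string> {
  return get("https://api.github.com/repos/nodejs/node/releases");
}

// fetchReleases(fullHttp)                                    // permissive
// fetchReleases(restrictToHost(fullHttp, "api.github.com"))  // locked down
```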
yup, here is node's docs for it (WIP): https://nodejs.org/api/permissions.html
Yeah, there's an entire community dedicated to cleaning up the js ecosystem.
https://e18e.dev/
Micro-dependencies are not the only thing that went wrong here, but hopefully this is a wakeup call to do some cleaning.
Discord server? Is it that much work to create a forum or a mailing list with anonymous access, especially with a community you can vet that easily?
Yes. It is a bit painful that this is not obvious by now. But I do, in every code review, whine about people who just include trivial, outdated, one-function npm packages :(
It wouldn't be a problem if there wasn't a culture of "just upgrade everything all the time" in the javascript ecosystem. We generally don't have this problem with Java libraries, because people pick versions and don't upgrade unless there's good reason.
From a maintenance perspective, both never and always seem like extremes though.
Upgrading only when you've fallen off the train is a serious drawback to moving fast, though.
and then you get Log4Shell
Working for a bank did make me think much more about all the vulnerabilities that can go into certain tools. The company has a lot of bureaucracy to prevent installing anything or adding external dependencies.
Working for a fintech and being responsible for the software made me very wary of dependencies and set me weeding out the deprecated and EOL'd stuff that had somehow already found its way into what was a young project when I joined. Left unrestrained, developers will add anything that resolves their immediate needs; you could probably spread malware very effectively just by writing a fake blog advocating a malicious module for certain scenarios.
> Left unrestrained, developers will add anything if it resolves their immediate needs
Absolutely. A lot of developers work on a large Enterprise app for years and then scoot off to a different project or company.
What's not fun is being the poor Ops staff that have to deal with supporting the library dependencies, JVM upgrades, etc for decades after.
I've nixed javascript in the backend in several places, partly because of the weird culture around dependencies. Having to audit that for compliance, or keeping it actually secure, is a nightmare.
Nixing javascript in the frontend is a harder sell, sadly
What did you switch to instead? I used to be a C# dev, and have done my fair share of Go. Both of those have decent enough standard libraries that I never found myself with a large 3rd party dependency tree.
Ruby, Python, and Clojure, though? They weren’t any better than my npm projects, being roughly the same order of magnitude. Same seems to be true for Rust.
You can get pretty far in python without a lot of dependencies, and the dependencies you do need tend to be more substantial blocks of functionality. Much easier to keep the tree small than npm.
Same with Java, if you avoid springboot and similar everything frameworks, which admittedly is a bit of an uphill battle given the state of java developers.
You can of course also keep dependencies small in javascript, but it's a very uphill fight: you'll have just a few options, and most people you hire are used to including a library (that itself includes 10 libraries) so they don't have to do something like `if (x % 2 == 1)`
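(For what it's worth, the one real edge case such a micro-library papers over is negative numbers, and a hand-rolled version is still one line:)

```ts
// In JavaScript, (-3) % 2 === -1, so `x % 2 == 1` misses negative odd
// numbers; comparing the absolute remainder handles both signs.
const isOdd = (x: number): boolean => Math.abs(x % 2) === 1;
```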
Just started with golang... the language is a bit annoying but the dependency culture seems OK
Throwback to leftpad!
Hey that was also on NPM iirc!
What I'd like to know is why anyone thinks it's a good idea to have this level of granularity in libraries? Seriously? A library that only contains "a utility function that determines if its argument can be used like an array"? That's a lot of overhead in dependency management, which translates into a lot of cognitive load. Sooner or later, something's going to snap...and something did, here.
“We all dodged a massive bullet”
I don’t think we did. I think it is entirely plausible that more sophisticated attacks ARE getting into the npm ecosystem.
We need a permission system for packages just like with Android apps. The text coloring package suddenly needs a file access permission for the new version? Seems strange.
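Mechanically that alert is trivial to produce; a sketch, assuming packages declared permissions in their manifests (they don't today, so the field and values below are invented):

```ts
// Diff the declared permissions between the installed and proposed versions
// and flag anything newly requested, e.g. a color library wanting fs access.
function permissionEscalations(oldPerms: string[], newPerms: string[]): string[] {
  return newPerms.filter((p) => !oldPerms.includes(p));
}

const escalations = permissionEscalations([], ["fs:read"]);
if (escalations.length > 0) {
  // e.g. "package newly requests: fs:read. Hold the upgrade and review."
  console.warn(`package newly requests: ${escalations.join(", ")}. Hold the upgrade.`);
}
```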
I had a minor scare some time ago with npm. Can't remember the exact details, something like I had a broken symlink in my homedir and nodemon printed an error about the symlink! My first thought was it's a supply chain attack looking for credentials!
Since then I've done all my dev in an isolated environment like a docker container. I know it's possible to escape the container, but at least that raises the bar to a level I'm comfortable with.
His email client even puts a green check mark next to the fake NPM email. UX fail.
The claim is valid -- it is legit from npm.help
If you think npm.help is something it isn't, that's not something DKIM et al can help with.
Do you remember a few years ago that browsers used to put a lock icon for all HTTPS connections? That lock icon signified that the connection is encrypted alright. To a tech geek that's a valid use of a lock icon. But browsers still removed it because it's a massive UX fail. You have to consider what the lock icon means to people who are minimally tech literate. I understand and have set up DKIM and SPF, but you cannot condense the intended security feature of DKIM/SPF/DMARC into a single icon and expect that to be good UX.
> Do you remember a few years ago that browsers used to put a lock icon for all HTTPS connections?
Few years ago? I have lock icon right now in my address bar
Browsers moved away from the https lock icon after https become very very common. Email hasn't reached a comparable state.
We are talking about a UX failure regarding what a lock icon or a checkmark icon represents. Popularity is irrelevant. It's entirely about the disconnect between what tech geeks think a lock/checkmark icon represents and normal users think it represents.
Instead of ranting, can you say something constructive?
I can think of 3 paths to improve the situation (assuming that "everyone deploys cryptographic email infrastructure instantly" is not gonna happen).
1. The email client doesn't indicate DKIM at all. This is strictly worse than today, because then the attack could have claimed to be from npmjs.com.
2. You only get a checkmark if you have DKIM et al plus you're a "verified domain". This means only big corporations get the checkmark -- I hate this option. It's EV SSL but even worse. And again, unless npmjs.com was a "big corporation" the attacker could have just faked the sender and the user would not notice anything different, since in that world the authentic npmjs.com emails wouldn't have a checkmark either.
3. The checkmark icon is changed into something else, nothing else happens. But what? "DKIM" isn't the full picture (and would be horribly confusing too). Putting a sunflower there seems a little weird. Do you really apply this much significance to the specific icon?
The path that HTTPS took just hasn't been repeatable in the email space; the upgrade cycles are much slower, the basic architecture is client->server->server not client->server, and so on.
"Batteries included" ecosystems are the ultimate defense against the dark arts. Your F100 first party vendor might get it wrong every now and then, but they have so much more to lose than a random 3rd party asshole who decides to deploy malicious packages.
The worst thing I can recall from the enterprisey ecosystems is the log4j exploit, which was easily one of the most attended to security problems I am aware of. Every single beacon was lit for that one. It seems like when an NPM package goes bad, it can take a really long time before someone starts to smell it.
Heartbleed? Solarwinds? Spectre/Meltdown? Stuxnet? Eternal Blue? CVE-2008-0166 (debian predictable private keys)?
Log4Shell didn't light up all the beacons because Java is "enterprisey", it was because it was probably the worst security vulnerability in history; not only was the package extremely widely used, the vulnerability existed for nearly a decade and was straightforwardly wormable, so basically everybody running Java code anywhere had to make sure to update and check that they hadn't been compromised. Which is just a big project requiring an all-out response, since it's hard to know where you might have something running. By contrast, this set of backdoors only existed for a few hours, and the scope of the vulnerability is well-understood, so most developers can be pretty sure they weren't impacted and will have quite reasonably forgotten about it by next week. It's getting attention because it's a cautionary tale, not because it's causing a substantial amount of real damage.
I do think it's worth reducing the number of points of failure in an ecosystem, but relying entirely on a single library that's at risk of stagnating due to eternal backcompat obligations is not the way; see the standard complaints about Python's "dead batteries". The Debian or Stackage model seems like it could be a good one to follow, assuming the existence of funding to do it.
Solarwinds?
“A utility function that determines if its argument can be used like an array”
I see the JavaScript ecosystem hasn’t changed since leftpad then.
My man, it has...in the worse direction...
Daily reminder that no one can easily impersonate you if you sign your commits and make it easy to discover and verify your authentic key with keyoxide or similar.
Great write up. I can understand the indignation at the exploit, but I believe it’s an A+ exploit for the chosen attack vector.
Not only is it “proof of concept” but it’s a low risk high reward play. It’s brilliant really. Dangerously so.
This has so many dimensions.
An authentication environment which has gotten so complex that we expect to be harassed by messages saying "your Plex password might be compromised", "your 2FA is all fucked up", etc.
And the crypto thing. Xe's sanguine about the impact; I mean, it's just the web3 degens [1] that are victimized, good innocent decent people like us aren't hurt. From the viewpoint of the attacker it is all about the Benjamins, and the question is: "does an attack like this make enough money to justify the effort?" If the answer is yes then we'll see more attacks like this.
There are just all of these things that contribute to the bad environment: the urgent emails from services you barely use, the web3 degens, etc.
[1] if it's an insult it is one the web3 community slings https://www.webopedia.com/crypto/learn/degen-meaning/
Agree with most that this could have been way way worse. No doubt next time it will be.
I keep expecting some new company to bring out this revolutionary idea of "On prem: your machine, your libraries, your business."
Gmail could have easily placed a red banner like
> "Warning! This is the first time you have received a message from sender support@npmjs.help. Please be careful with links and attachments, and verify the sender's identity before taking any action."
At this super-wide level of near-miss, you must assume Jia Tan 3.0 will be coming for your supply chains.
In case you missed it:
NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657 - Sept 2025 (697 comments, including exemplary comments from the project maintainer)
Is this not a good use case for AI in your email client (local-only to avoid more opportunities for data to leak)?
Have the client-embedded AI view the email to determine if it contains a link to a purported service, then remotely verify whether the service URL domain is valid by comparing it to the domains known for that service.
If unknown, show the user a suspected phishing message.
This will occasionally give a false positive when a service changes their sending domain, but the remote domain<->service database can then be updated via an API call as a new `(domain, service)` pair for investigation and possible inclusion.
I feel like this would mitigate much of the risk of phishing emails slipping past defenses, and mainly just needs 2 or 3 API calls to a service once the LLM has extracted the service name from the email.
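The non-LLM half of that is simple to sketch; here `KNOWN_DOMAINS` stands in for the remote domain<->service database, and the `service` argument is whatever the local model extracted from the email (both are assumptions, not an existing API):

```ts
// Stand-in for the remote (domain, service) database described above.
const KNOWN_DOMAINS: Record<string, string[]> = {
  npm: ["npmjs.com", "github.com"],
};

// Naive href extraction; a real client would walk its own parsed DOM.
function linkDomains(emailHtml: string): string[] {
  return [...emailHtml.matchAll(/href="https?:\/\/([^/"]+)/g)].map((m) => m[1]);
}

// `service` is the service name the local LLM says the email claims to be from.
function suspectedPhish(service: string, emailHtml: string): boolean {
  const known = KNOWN_DOMAINS[service.toLowerCase()] ?? [];
  return linkDomains(emailHtml).some(
    (domain) => !known.some((k) => domain === k || domain.endsWith("." + k))
  );
}

// suspectedPhish("npm", '<a href="https://npmjs.help/reset">') === true
```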
No, the solution to a security problem is not to radically increase the vulnerable attack surface.
`Object.getPrototypeOf(obj)[Symbol.iterator] !== undefined`
There I fixed it. Now I don't even need the is-arrayish package!
`Symbol` wasn't supported when I wrote `is-arrayish`. Neither were spreads. It was meant to be used with DOM lists or the magical `arguments` variable.
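(For anyone curious, a rough ES5-flavored sketch of that kind of check, without Symbol or spread; this is not the actual package source:)

```ts
// Accepts arrays and array-likes (DOM lists, the `arguments` object) while
// rejecting strings and null-ish values.
function isArrayish(obj: any): boolean {
  if (obj == null || typeof obj === "string") return false;
  return Array.isArray(obj) ||
    (typeof obj.length === "number" && obj.length >= 0);
}
```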
Does the Go ecosystem have a security screening process similar to NPM's? This was caught because a company was monitoring a centralized package distribution platform, but I worry about all those golang modules spread across GitHub without oversight...
This page has a short explanation of the default way in which Go downloads modules, with links for more details: https://sum.golang.org/
WebAuthN/fido/passkey should be mandatory to publish a package with >N downloads. Email and TOTP codes can be MITMd.
> Even then, that wouldn't really stand out to me because I've seen companies use new generic top level domains to separate out things like the blog at .blog or the docs at .guide, not to mention the .new stack.
This is very much a 'can we please not' situation, isn't it? (Obviously it's not something that the email recipients can (usually) control, so it's not a criticism of them.) It also has to meaningfully increase the chance that someone will eventually forget to renew a domain, too.
Facebook sends legit account security emails from facebookmail.com. Horrible.
For a company that is otherwise quite serious about security nowadays, MS seems to be the champion of this. Say hello to live.com and its friends …
There's only one thing that would throw me off this email and that is DMARC. But I didn't get the email, so who is to say if I actually would have been caught.
This was a domain "legitimately" owned by the adversary. They controlled that DNS. They could set any SPF or DKIM records they wanted. This email probably passed all DMARC checks. From some screenshots, the email client even has a green check probably because it did pass DMARC.
Sometimes I think I'm a stubborn old curmudgeon for staunchly refusing to use node, npm, and the surrounding ecosystem. Pick and choose specific packages if I really have to.
Then there's days like this.
Dat domain name.
Yeah, stop those cute domain names. I never got the memo on Youtu.be, I just had to "learn" it was okay. Of course people started to let their guard down, because dumbasses started to get cute.
We all did dodge a bullet, because we've been installing stuff from NPM with reckless abandon for a while.
Can anyone give me a reason why this wouldn't happen in other ecosystems like Python? Because I really don't feel comfortable if I'm scared to download the most basic of packages. Everything is trust.
Of all people, my mortgage servicer is the worst about this. Your login is valid on like 3 different top level domains and you get bounced between them when you sign in, eventually going from servicer.com to myservicer.com to servicer.otherthing.com! It's as though they were training you to not care about domain names.
Paying US taxes online is just as bad. The official way to pay tax balances with a debit card online is to use officialpayments[.]com. This is what the IRS advises you to use. Our industry is a clown factory.
Wells Fargo apparently emails from epay@onlinemyaccounts[.]com.
What about aka.ms, which is a valid domain for Microsoft? Why didn't they use microsoft.com, or windows.com? I always wonder if this aka is short for 'also known as'.
They use that domain name because it’s used for short links
How much money did the attackers make?
I'm not sure whether the compromised packages were the source of Kiln's API compromise, but it's plausible. It led to the theft of $41M worth of SOL. https://cointelegraph.com/news/swissborg-hacked-41m-sol-api-...
According to a crypto tracking site linked indirectly via the other popular submission, about $500 worth of crypto.
5 cents of eth and $20 of a meme coin.
Wow! This site uses Anubis with the meta-refresh-based challenge that doesn't require javascript, so I can actually read the article in my old browser. It's so rare for Anubis deployments to be set up with any configuration beyond the defaults. What a delight.
The blog author is also the creator of Anubis
This phishing email is full of red flags. Here are example red flags from that email:
- Update your 2FA credentials
What does that even mean? That's not something that can be updated - that's kind of the point of 2FA.
- It's been over 12 months since your last 2FA update
Again - meaningless nonsense. There's no such thing as a 2FA update. Maybe the recipient was thinking "password update" - but updating passwords regularly is also bad practice.
- "Kindly ask ..."
It would be very unusual to write like that in a formal security notification.
- "your credentials will be temporarily locked ..."
What does "temporarily locked" mean? That's not a thing. Also creating a sense of urgency is a classic phishing technique and a red flag.
- A link to change your credentials
A legit security email should never contain a link to change your credentials.
- It comes from a weird domain - .help
Any nonstandard domain is a red flag.
I don't use NPM, and if this actually looks like an email NPM would send, NPM has serious problems. However, security-ignorant companies do send emails like this. That's why the second layer of defense, if you receive an email like this and think it might be real, is to log directly into (in this case) NPM and update your account settings without clicking links in the email.
NEVER EVER EVER click links in any kind of security alert email.
I don't blame the people who fell for this, but it is also concerning that there's such limited security awareness/training among people with publish access to such widely used packages.
Hi, said person who clicked on the link here. I've been wanting to post something akin to this and was going to save it for the post mortem, but I wanted to address the increase in these sorts of very shout-y comments directed toward me.
> What does that even mean? That's not something that can be updated - that's kind of the point of 2FA.
I didn't sit and read and parse the whole thing. That was mistake one. I have stated elsewhere, I was stressed and in a rush, and was trying to knock things off my list.
Also, 2FA can of course be updated. npm has had some shifts in how it approaches security over the years, and having worked within that ecosystem for the better part of 10-15 years, this didn't strike me as particularly unheard of on their part. This, especially after the various acquisitions they've had.
It's no excuse, just a contributing factor.
> It would be very unusual to write like that in a formal security notification.
On the contrary, I'd say this is pretty par for the course in corpo-speak. When "kindly" is used incorrectly, that's when it's a red flag for me.
> What does "temporarily locked" mean? That's not a thing. Also creating a sense of urgency is a classic phishing technique and a red flag.
Yes, of course it is. I'm well aware of that. Again, this email reached me at the absolute worst time it could have and I made a very human error.
"Temporarily locked" surprises me that it surprises you. My account was, in fact, temporarily locked while I was trying to regain access to it. Even npm had to manually force a password reset from their end.
> Any nonstandard domain is a red flag.
When I contacted npm, support responded from githubsupport.com. When I pay my TV tax here in Germany (a governmental thing), it goes to a completely bizarre, random third party site that took me ages to vet.
There's no such thing as a "standard" domain anymore with gTLDs, and while I should have vetted this particular one, it didn't stand out as something impossible. In my head, it was their new help support site - just like github.community exists.
Again - and I guess I have to repeat this until I'm blue in the face - this is not an excuse. Just reasons that contributed to my mistake.
> NEVER EVER EVER click links in any kind of security alert email.
I'm aware. I've taught this as the typical security person at my respective companies. I've embodied it, followed it closely for years, etc. I slipped up, and I think I've been more than transparent about that fact.
I didn't ask for my packages to be downloaded 2.6 billion times per week when I wrote most of these 10 years ago, or inherited them more than five years ago. You can argue - rightfully - about my technical failure here of using an outdated form of 2FA. That's on me, and it would have protected against this, but to say this doesn't happen to security-savvy individuals is the wrong message here (see: Troy Hunt getting phished).
Shit happens. It just happened to happen to me, and I happen to have undue control over some stuff that's found its way into most of the javascript world.
The security lessons and advice are all very sound - I'm glad people are talking about them - but the point I'm trying to make is, that I am a security aware/trained person, I am hyper-vigilant, and I am still a human that made a series of small or lazy mistakes that turned into one huge mistake.
Thank you for your input, however. I do appreciate that people continue to talk about the security of it all.
full of red flags present in many non phishing emails
> However security ignorant companies do send emails like this
exactly
Allowing just anybody to rent npmjs.help feels like aiding and abetting.
Who should have stopped this from happening and how should they have gone about doing so?
This is the main reason why, if you ever get a password reset email, you ALWAYS go to the site directly and NEVER through the link provided in the email.
Always use a password manager to automatically fill in your credentials. If the password manager doesn't find your credentials, check the domain. On top of that, you can always go directly to the website to make any needed changes there, without following the link.
Password managers are still too unreliable to auto-fill everywhere all the time, and manually having to copy-paste something from the password manager happens regularly, so it doesn't feel unusual when auto-fill fails for some reason.
I put the fault on companies for making their login processes so convoluted. If you take the time to do it, you can usually configure the password manager to work (we shouldn’t have to make the effort). But even if you do, then the company will at some point change something about their login processes and break it.
Better yet, use the password manager as the store of the valid domain, and click through from there to get to the resource.
I don't think this really helps. I use Bitwarden and it constantly fails to autofill legitimate websites and makes me go to the app to copy-paste, because companies do all kinds of crap with subdomains, marketing domains, etc. Any safeguard relying on human attention is ultimately susceptible to this; the only true solutions are things like passkeys where human fuckups are impossible by design and they can't give credentials to the wrong place even if they want to.
Passkeys are disruptive enough that I don't think they need to be mandated for everyone just yet, but I think it might be time for that for people who own critical dependencies.
It's a PITA, but Bitwarden has quite some flexibility in controlling what gets autofilled where. I agree the defaults are pretty shit and indeed lead to constant copy-pasting. On the other hand, it will offer all my passwords all the time for all my selfhosted stuff on my one server.
what do you mean bankofamericaabuse.com isn't a real website!? It's in the email and everything! The nice guy on the phone said it was legit...
> Formatting text with colors for use in the terminal ...
> These kinds of dependencies are everywhere and nobody would even think that they could be harmful.
The first article I ever read discussing the possibility of npm supply chain attacks actually used coloured text in the terminal as the example package to poison. Ever since then I have associated coloured terminal text with supply chain attacks.
I used to share this article with students https://david-gilbertson.medium.com/im-harvesting-credit-car...
I might need bloggers to not use "web 3" as a term.
With the way things are going, I can't tell at a glance whether they mean crypto, VR, or AI when they say "web 3."
Is there a tool that you can put between your npm client and the npm web servers that serves package versions that are a month old, and possibly also tracks discovered malware and never serves infected versions?
Artifactory. Nexus. I believe AWS/GCP/Azure have offerings.
No bank, and almost no large corporation, goes directly to artifact/package repos. They all host them internally.
I'm looking at Verdaccio currently, since Artifactory is expensive and I think the CE version still only supports C++. Does anyone have any experience with Verdaccio?
Something like this? https://jfrog.com/artifactory/
the company that first found this vulnerability also has a tool for this https://www.npmjs.com/package/@aikidosec/safe-chain
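The cooldown half is also easy to prototype yourself. A minimal sketch (Node 18+, TypeScript), assuming the public registry's packument format where the `time` field maps each version to its publish date; it is not hardened (no caching, no auth, and the malware-denylist part of the question is left out):

```ts
import http from "node:http";

const UPSTREAM = "https://registry.npmjs.org";
const COOLDOWN_MS = 30 * 24 * 60 * 60 * 1000; // only serve versions >30 days old

http.createServer(async (req, res) => {
  // Request the full packument; the abbreviated metadata format omits "time".
  const up = await fetch(UPSTREAM + req.url, {
    headers: { accept: "application/json" },
  });
  const doc: any = await up.json();
  if (doc.versions && doc.time) {
    const cutoff = Date.now() - COOLDOWN_MS;
    for (const [version, published] of Object.entries(doc.time)) {
      if (version === "created" || version === "modified") continue;
      if (new Date(published as string).getTime() > cutoff) {
        delete doc.versions[version]; // too new: hide it from clients
        delete doc.time[version];
      }
    }
    // Re-point "latest" at the newest surviving version; the registry lists
    // versions in publish order, so the last remaining key is the newest.
    const left = Object.keys(doc.versions);
    if (left.length && doc["dist-tags"]) {
      doc["dist-tags"].latest = left[left.length - 1];
    }
  }
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify(doc));
}).listen(4873);
```

Point npm at it with `npm config set registry http://localhost:4873/`; tarball URLs in the filtered metadata still point at the upstream registry, so only metadata flows through the proxy.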
This reads like a joke that's missing the punchline.
The post's author's resume section reinforces this feeling:
I am a skilled force multiplier, acclaimed speaker, artist, and prolific blogger. My writing is widely viewed across 15 time zones and is one of the most viewed software blogs in the world.
I specialize in helping people realize their latent abilities and help to unblock them when they get stuck. This creates unique value streams and lets me bring others up to my level to help create more senior engineers. I am looking for roles that allow me to build upon existing company cultures and transmute them into new and innovative ways of talking about a product I believe in. I am prioritizing remote work at companies that align with my values of transparency, honesty, equity, and equality.
If you want someone that is dedicated to their craft, a fearless innovator and a genuine force multiplier, please look no further. I'm more than willing to hear you out.
Most phishing emails are so bad, it’s quite terrifying when you see a convincing one like this.
Email is such an utter shitfest. Even tech-savvy people fall for phishing emails; what hope do normal people have?
I recommend people save URLs in their password managers, and get in the habit of auto-filling. That way, you’ll at least notice if you’re trying to log into a malicious site. Unfortunately, it’s not foolproof, because plenty of sites ask you to randomly sign into different URLs. Sigh…
Related:
NPM debug and chalk packages compromised
https://news.ycombinator.com/item?id=45169657
Isn't it a bit crazy that phishing e-mails still exist? Like, couldn't this be solved by encrypting something in a header and using a public key in the DNS to decrypt it?
I'm not a top-level expert in cybersecurity nor email infra... but the little that I know has taught me that I merely have to create a similar-looking domain name...
Let's say there's a company named Awesome... and I register the domain name AwesomeSupport.com. I could be a total black hat/evil hacker/ne'er-do-well... and this domain may not be infringing on any trademark, etc. And then I can start using all the encryption you noted... which merely means that *my domain name* (the bad one) is "technically sound"... but of course, all that use of encryption fails to convey that I am not the legitimate Awesome company. So, how is the victim supposed to know which of the domains is legit or not? Especially considering that some departments of the real, legit Awesome company might register their own domain name to use for actual, real reasons; the marketing department might register MyAwesome.com... for managing customer accounts, etc.
Is encryption necessary in digital life? Hellz yeah! Does it solve *all issues*? Hellz no! :-)
an OV cert "solves" this, but you'd still have to bother to check it
True! But the possibility exists that a large enough percentage of victims don't actually check the OV cert. Also, are we 100% sure that every single legit company that you and I do business with has an OV cert for their websites?
This honestly doesn't feel like it should be the case.
There aren't that many websites. The e-mail provider could have a list of "popular" domains, and the user could have their own list of trusted domains.
There are all sorts of ways to warn the user about it, e.g. "you have never interacted with this domain before." Even simply showing other e-mails from the same domain would be enough to prevent phishing in some cases.
There are practical ways to solve this problem. They aren't perfect but they are very feasible.
My previous comments were merely in response to your original comments... so really only to point out that bare use of encryption by itself is not sufficient protection - that's all.
To your more recent points, I agree that there are several other protections in place... and depending on a number of factors, some folks have more at their disposal and others might have less... but still, there are mechanisms in place to help, without a doubt. Yet with all these mechanisms in place, people still fall prey to phishing attacks... and sometimes those victims are not lay people, but actual technologists. So, I think the solution(s) here are not so simple, and likely are not only tech-based. ;-)
I might be missing the joke, but there are several layers like SPF and DMARC available to only allow your whitelisted servers to send email on the behalf of your domain.
Wouldn't help in this case where someone bought a domain that looked a tiny bit like the authentic one for a very casual observer.
100% solved and has been for a very long time. The PGP/GPG trust chain goes CLUNK CLUNK CLUNK. Everyone shuts it off after a week or so of experimentation.
It's a typical phishing email... and if the author went through any type of cybersecurity training, they would see that the email wasn't that great.
The sense of urgency is always the red flag.
I think it's quite good. There's a sense of urgency, but it's also not "immediately change it!": they gave more than a day, and stated that it would be a temporary lock. Feels like this one really hit the spot on that aspect.
You should still never click a link in an email like this, but the urgency factor is well done here
I go through those trainings several times a year. That email is as close to perfect for a phishing email as I've ever seen.
the link in the email went to an obviously invalid domain, hovering the mouse cursor over the link in the email would have made this immediately clear, so even clicking that link should have never happened in the first place. red flag 1
but, ok, you click the link, you get a new tab, and you're asked to fill in your auth credentials. but why? you should already be logged in to that service in your default browser, no? red flag 2
ok, maybe there is some browser cache issue, whatever, so you trigger your password manager to provide your auth to the website -- but here, every single password manager would immediately notice that the domain in the browser does not match the domain associated with the auth creds, and either refuse to paste the creds thru, or at an absolute minimum throw up a big honkin' alert that something is amiss, which you'd need to explicitly click an "ignore" button to get past. red flag 3
nobody should be able to publish new versions of widely-used software without some kind of manual review/oversight in the first place, but even ignoring that, if someone does have that power, and they get pwned by an attack like this, with at least 3 clear red flags that they would need to have explicitly ignored/bypassed, then CLEARLY this person cannot keep their current position of authority
> the email wasn't that great
It was obviously good enough.
Snark aside, you only need to trick one person once and you've won.
Besides the ecosystem issues, for the phishing part, I'll repost what I responded somewhere in the other related post, for awareness
---
I figure you aren't about to get fooled by phishing anytime soon, but based on some of your remarks and remarks of others, a PSA:
TRUSTING YOUR OWN SENSES to "check" that a domain is right, or an email is right, or the wording has some urgency or whatever is BOUND TO FAIL often enough.
I don't understand how most of the anti-phishing advice focuses on that, it's useless to borderline counter-productive.
What really helps against phishing :
1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.
2. U2F/Webauthn key as second factor is phishing-proof. TOTP is not.
That is all there is. Any other method, any other "indicator" helps but is error-prone, which means someone somewhere will get phished eventually. Particularly if stressed, tired, or in a hurry. It just happened to be you this time.
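To expand on point 2, since it's the part people trust the least: a WebAuthn credential is scoped to a relying-party domain, and the browser enforces that scoping, which is what makes it phishing-proof. An illustrative browser-side call (values made up):

```ts
// The browser only honors this request when the page's origin matches the
// rpId, so a page on npmjs.help can never exercise a npmjs.com credential.
const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-sent in real flows
    rpId: "npmjs.com",
  },
});
// A TOTP code, by contrast, is six digits the user can be tricked into
// typing anywhere, and the attacker can replay it within its validity window.
```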
Please don't copy-paste comments on HN. It strictly lowers the signal/noise ratio.
> 1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.
Sites choosing to replace password login with initiating the login process and then clicking a "magic link" in your email client is awful for developing good habits here, or for giving good general advice. :c
In that case it's the same as a reset-password flow.
In both cases it's good advice not to click the link unless you initiated the request. But with the auth token in the link, you don't need to login again, so the advice is still the same: don't login from a link in your email; clicking links is ok.
Clicking links from an email is still a bad idea in general because of at least two reasons:
1. If a target website (say important.com) has poorly configured cookies (e.g. SameSite=None) and no CSRF protection on its endpoints, a 3rd-party website is able to send requests to important.com with the cookies of the user, if they're logged in there. This depends on important.com having done something wrong, but the result is as powerful as getting a password from the user. (This is called cross-site request forgery, CSRF.)
2. They might have a browser zero-day and get code execution access to your machine.
If you initiated the process that sent that email and the timing matches, and there's no other way than opening the link, that's that. But clicking links in emails is overall risky.
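Concretely, for point 1, the classic shape looks like this (a sketch; it only works if important.com's cookies are SameSite=None and the endpoint lacks CSRF tokens):

```ts
// Script running on evil.example. The browser attaches important.com's
// cookies to the request; the attacker can't read the response without
// CORS approval, but the state change has already happened server-side.
fetch("https://important.com/api/transfer", {
  method: "POST",
  credentials: "include",
  body: "to=attacker&amount=1000", // a "simple" request: no CORS preflight
});
```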
1 is true, but this applies to all websites you visit (and their ads, supply chain, etc). Drawing a security boundary here means never executing attacker-controlled Javascript. Good luck!
2 is also true. But also, a zero day like that is a massive deal. That's the kind of exploit you can probably sell to some 3 letter agency for a bag. Worry about this if you're an extremely high-value target, the rest of us can sleep easy.
how is this any worse than a spear phishing email that gives a login link to a malicious domain that looks the same as the official domain?
> 2. U2F/Webauthn key as second factor is phishing-proof. TOTP is not.
TOTP doesn't need to be phishing-proof if you use a password manager integrated with the browser, though.
A browser-integrated password manager is only phishing-proof if it's 100% reliable. If it ever fails to detect a credential field, it trains users that they sometimes need to work around this problem by copy-pasting the credential from the password manager UI, and then phishers can exploit that. AFAIK all existing password manager extensions have this problem, as do all browsers' native password-management features.
It doesn't need to be 100% reliable, just reliable enough.
If certain websites fail to be detected, that's a security issue on those specific websites, and I'll learn which ones tend to fail.
If they rarely fail to detect in general, it's infrequent enough that I can be diligent in those specific cases. In my experience with password managers, they rarely fail to detect fields. If anything, they over-detect fields.
I think it's more appropriate to say TOTP /is (nearly)/ phishing-proof if you use a password manager integrated with the browser (not that it /doesn't need to be/ phishing-proof)
> 1. NEVER EVER login from an email link.
I receive Google Doc links periodically via email; fortunately they're almost never important enough for me to actually log in and see what's behind them.
My point, though, is that there's no real alternative when someone sends you a doc link. Either you follow the link or you have to reach out to them and ask for some alternative distribution channel.
(Or, I suppose, leave yourself logged into the platform all the time, but I try to avoid being logged into Google.)
I don't know what to do about that situation in general.
> leave yourself logged into the platform all the time
Or only log in when you need to open a google link. Or better yet, use a multi-account container for google.
Yeah, this should have occurred to me. I guess for me it's alien to think about logging into Google.
> Or better yet, use a multi-account container for google.
Pardon; a what? Got any reference links?
A Firefox plugin/feature, probably also available on other browsers as well. It is useful for siloing cookies, so you can easily be logged into Google on one set of browser tabs and block their cookies on another.
https://addons.mozilla.org/en-US/firefox/addon/multi-account...
Log into Google, then click the link. If you get prompted to log in again, don't.
Good point, I guess this is the obvious answer.
> U2F/Webauthn key as second factor is phishing-proof. TOTP is not.
Last I checked, we're still in a world where the large majority of people with important online accounts (like, say, at their bank, where they might not have the option to disable online banking entirely) wouldn't be able to tell you what any of those things are, and don't have the option to use anything but SMS-based codes for most online services and maybe "app"-based TOTP (maybe even a desktop program in rare cases!) for most of the rest. If they even have 2FA at all.
This is the point of the "passkey" branding. The idea is to get to the point where these alphabet-soup acronyms are no longer exposed to normal users and instead they're just like "oh, I have to set up a passkey to log into this website", the way they currently understand having to set up a password.
Sure. That still doesn't make Yubikey-style physical devices (or desktop keyring systems that work the same way) viable for everyone, everywhere, though.
Yeah, the pressure needs to be put on vendors to accept passkeys everywhere (and to the extent that there are technical obstacles to this, they need to be aggressively remediated); we're not yet at the point where user education is the bottleneck.
Urgency is also either phishing (log in now or we'll lock you out of your account in 24 hours) or marketing (take advantage of this promotion! expires in 24 hours!).
Just ... don't.
It's funny how it's never "don't" too.
A guy I knew needed a car, found one, I told him to take it to a mechanic first. Later he said he couldn't, the guy had another offer, so he had to buy it right now!!!, or lose the car.
He bought it; it had a bad cylinder.
False urgency = scam
I mean, real deadlines do exist. The better heuristic is that, if a message seems to be deliberately trying to spur you into immediate action through fear of missing a deadline, it's probably some kind of trick. In this respect, the phishing message that was used here was brilliantly executed; it calmly, without using panic-inducing language, explains that action is required and that there's a deadline (that doesn't appear artificially short but in fact is coming up soon), in a way quite similar to what a legitimate action-required email would look like. Even a savvy user is likely to think "oh, I didn't realize the deadline was that soon, I must have just not paid attention to the earlier emails about it".
With credentials? Aren’t you always forced to refresh them right after a login?
As in right then, without being given a deadline…
Yeah, this particular situation's a bit weird because it's asking the user to do something (rotate their 2FA secret) that in real life is not really a thing; I'm not sure what to think of it. But you could imagine something similar like "we want you to set up 2FA for the first time" or "we want you to supply additional personal information that the government has started making us collect", where the site might have to disable some kind of account functionality (though probably not a complete lockout) for users who don't do the thing in time.
Most mail providers have something like plus addressing. Properly used, that already eliminates a lot of phishing attempts: if I get a mail telling me I need to reset something for foobar, but it is not addressed to me-foobar (or me+foobar), I already know it is fraudulent. That covers roughly 99% of phishing attempts for me.
The rest is handled by preferring plain text over HTML, and if some moron only sends HTML mails, by carefully dissecting those first. Allowing HTML in mail was one of the biggest mistakes we've ever made: zero benefit with a huge attack surface.
I agree that #1 is correct, and I try to practice this; and always for anything security related (update your password, update your 2FA, etc).
Still, I don’t understand how npmjs.help doesn’t immediately trigger red flags… it’s the perfect stereotype of an obvious scam domain. Maybe falling just short of npmjshelp.nigerianprince.net.
Is there somewhere you'd recommend that I can read more about the pros/cons of TOTP? These authenticator apps are the most common 2FA second factor that I encounter, so I'd like to have a good source for info to stay safe.
I watched a presentation from Stripe internal eng; I forget where it was given.
An internal engineer there who did a bunch of security work phished like half of her own company (testing, obviously). Her conclusion, in a really well-done talk, was that stopping this is impossible: no human measures will prevent it, given her success at a very disciplined, highly security-conscious place.
The only thing that works is yubikeys which prevent this type of credential + 2fa theft phishing attack.
edit:
karla burnette / talk https://www.youtube.com/watch?v=Z20XNp-luNA
#1 is the real deal. Just like you don't give private info to any caller you aren't expecting. You call them back at a number you know.
I had someone from a bank call me and ask for my SSN to confirm my identity. The caller ended up being legitimate, but I still didn't give it...like, are you kidding me?
This has happened to me more times than I can count, and it's extremely frustrating because it teaches people the wrong lesson. The worst part is they often get defensive when you refuse to cooperate, which just makes the whole thing unnecessarily more stressful.
I would be surprised if a database with the SSNs of all adult Americans wasn't already out there on the usual data-dump websites, available for 5 dollars.
Here's the actual root cause of the issue:
1- As a professional, installing free dependencies to save on working time.
There's no such thing as a free lunch; you can't have your cake and eat it too. That is, you can't download dependencies that solve your problems without paying, without ads, without propaganda (for example, being lured into maintaining such projects for THE CAUSE), without vendor lock-in, or without malware.
It's really silly to want to pile up mountains of super secure technology like webauthn, when the solution is just to stop downloading random code from the internet.
[flagged]
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
The problem here is that a single dev account can make updates to a prod codebase, or in the case of NX, a single CI/CD token. Something with 5 million downloads per week should not be controlled by one token when it takes me 3 approvals to get my $20 lunch reimbursement. At the very least, have an LLM review every PR to prod.