Intentionally corrupting popular libraries by maintainer
In an ideal decentralised world, anyone would be able to publish anything, but new versions of packages wouldn't get automatically downloaded until multiple trusted third parties had independently verified there were no regressions or security issues introduced.
Right now we have a collapsed version of that system, where the code hosting, package repo, and "auditing" are all done by one company, which gives them a lot of power and not much accountability. Also, to be clear, the "auditing" they do is only after the fact, i.e. they wait for other people to find a problem with a package and then just prevent it from causing further harm.
Ultimately, though, you either have to read every line of code yourself, or trust some person or group of people to make security decisions on your behalf. In this case it only took one malicious person to cause trouble, and this can be mitigated by requiring more (independent) reviewers. The cost of adding more reviewers scales linearly with the number of reviewers, whereas the chance of them all independently but simultaneously being malicious decreases exponentially, so economics suggests this might be solvable if the value could be captured.
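To make that trade-off concrete, here is a minimal sketch with made-up numbers (the per-reviewer cost and the per-reviewer compromise probability below are illustrative assumptions, not real data):

```typescript
// Toy model of the reviewer economics described above.
// Assumption: each reviewer charges a flat fee per release and has a small,
// independent probability of being malicious or compromised.
const costPerReviewer = 100; // hypothetical cost per reviewer per release
const pCompromised = 0.01;   // hypothetical chance a single reviewer is malicious

for (const reviewers of [1, 2, 3, 5]) {
  const totalCost = costPerReviewer * reviewers;             // grows linearly
  const pAllCompromised = Math.pow(pCompromised, reviewers); // shrinks exponentially
  console.log(
    `${reviewers} reviewer(s): cost ${totalCost}, ` +
      `chance all are simultaneously malicious ${pAllCompromised.toExponential(2)}`
  );
}
```

With these numbers, going from one reviewer to three triples the cost but drops the collusion probability from 1 in 100 to 1 in 1,000,000, which is the sense in which the economics might work out if that extra assurance could actually be sold.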
I think this is an interesting story on many fronts, both as an ethical question and as a prompt to build systems that communicate more clearly exactly which packages are being used.

Right now, many packages depend deeply on other packages. A bad update shipped at the wrong time can break things, and worse, malicious code timed correctly could land at some random interval and go uncaught.

What is the right direction for us to head as developers, and what tooling do we need? This is not just from the maintainers' perspective; the incident makes the point that breaches and unauthorized pushes can happen.
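One small piece of that tooling is plain visibility: knowing exactly which package versions are actually installed. Here is a minimal sketch, assuming a Node.js project with dependencies under ./node_modules (package managers already expose similar reports, e.g. `npm ls`):

```typescript
// Walk node_modules and print every installed package's exact name and version.
import { existsSync, readFileSync, readdirSync } from "fs";
import { join } from "path";

function listInstalled(dir: string): void {
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    if (!entry.isDirectory()) continue;
    const pkgDir = join(dir, entry.name);
    if (entry.name.startsWith("@")) {
      // Scoped packages live one level deeper, e.g. node_modules/@scope/name.
      listInstalled(pkgDir);
      continue;
    }
    const manifest = join(pkgDir, "package.json");
    if (existsSync(manifest)) {
      const pkg = JSON.parse(readFileSync(manifest, "utf8"));
      console.log(`${pkg.name}@${pkg.version}`);
    }
  }
}

listInstalled("node_modules");
```

Pinning exact versions in a lockfile and reviewing a report like this before deploying is a low-effort way to avoid silently pulling in a corrupted release.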
```
In response to the corrupted libraries, Microsoft quickly suspended his GitHub access and reverted the projects on npm....
```
Microsoft embraces open source