captn3m0 2 days ago

> The M4 Max MacBook I'm using to write this would've ranked among the 50 fastest supercomputers on Earth in 2009.

I attempted to validate this: you'd need >75 TFlop/s to get into the top 50 of the TOP500[0] rankings in 2009. An M4 Max review says 18.4 TFlop/s at FP32, but TOP500 uses LINPACK, which uses FP64 precision.

An M2 benchmark gives a 1:4 ratio for double precision, so you'd get maybe 9 TFlop/s at FP64? That wouldn't make it to TOP500 in 2009.

[0]: https://top500.org/lists/top500/list/2009/06/
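
Spelling out the arithmetic (back-of-the-envelope only; the FP32 figure and the FP32:FP64 ratios are assumptions, not LINPACK measurements):

    // rough estimate, not a real LINPACK run
    const fp32TFlops = 18.4;            // reviewed M4 Max FP32 figure
    const top50Cutoff2009 = 75;         // ~TFlop/s needed for top 50 in June 2009
    console.log(fp32TFlops / 4);        // ≈ 4.6 TFlop/s at a 1:4 FP64 ratio
    console.log(fp32TFlops / 2);        // ≈ 9.2 TFlop/s even at a generous 1:2 ratio
    console.log(fp32TFlops / 2 < top50Cutoff2009); // true: well short either way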

  • nine_k 2 days ago

    > Now multiply that by thousands of concurrent connections each doing multiple I/O operations. Servers spent ~95% of their time waiting for I/O operations.

    Well, no. The particular thread of execution might have been spending 95% of time waiting for I/O, but a server (the machine serving the thousands of connections) would easily run at 70%-80% of CPU utilization (because above that, tail latency starts to suffer badly). If your server had 5% CPU utilization under full load, you were not running enough parallel processes, or did not install enough RAM to do so.

    Well, it's a technicality, but the post is devoted to technicalities, and such small blunders erode trust in the rest of the post. (I'm saying this as a fan of Bun.)

    • ukblewis 20 hours ago

      If you’ve never seen a machine stuck waiting for disk I/O, I don’t know what to tell you… but it is common; even with SSDs it can happen (as they point out, the handoff between the OS and the user-level process takes time)

  • fleebee 2 days ago

    I'm guessing that's an LLM hallucination. The conclusion section especially has some hints it was pulled out of an LLM:

    > The package managers we benchmarked weren't built wrong, they were solutions designed for the constraints of their time.

    > Buns approach wasn't revolutionary, it was just willing to look at what actually slows things down today.

    > Installing packages 25x faster isn't "magic": it's what happens when tools are built for the hardware we actually have.

    • notpushkin 2 days ago

      Sorry, what is the telltale sign here?

      • pests 2 days ago

        It’s not _____, it’s ______.

        Some more conversation a week or so ago I had:

        https://news.ycombinator.com/item?id=44786962#44788567

        • philsnow 21 hours ago

          That just seems like a rhetorical technique, which shows up in LLMspeak because people use it

          • pests 3 hours ago

            I’ve talked to hundreds of humans online and only see that pattern once in a blue moon. In fact I just went thru the last 10 pages of my comments (and their replies, etc) and grepped for the phrase and the only time it’s been uttered has been in AI response examples like above.

  • LollipopYakuza a day ago

    > even low-end smartphones have more RAM than high-end servers had in 2009

    That's even less accurate. By two orders of magnitude. High-end servers in 2009 had way more than 4GB. The (not even high-end) HP Proliant I installed for a small business in 2008, that was already bought used at the time, had 128GB of RAM.

    I understand why one would want to make an article entertaining, but that seriously makes me doubt the rest of the article when it dives into a topic I don't know as much about.

    • ukblewis 20 hours ago

      I guess that they may have meant workstations / servers for horizontally scaled software, but yes, I agree that that sentence does seem misleading

robinhood 2 days ago

Complex subject, beautifully simple to read. Congrats to the author.

Also: I love that super passionate people still exist, and are willing to challenge the status quo by attacking really hard things - things I don't have the brain to even think about. It's not normal that we have better computers each month and slower software. If only everyone (myself included) were better at writing more efficient code.

  • ljm 2 days ago

    I didn’t know it was written in Zig. That’s a fascinating choice to me given how young the language is.

    Amazing to see it being used in a practical way in production.

    • robinhood 2 days ago

      Zig was created in 2016 though - almost 10 years at this point. Perhaps the surprise here is that we are not as exposed to this language on well-known and established projects as other languages like Rust, Go and C.

      • pdpi 2 days ago

        Zig is still at the 0.x stage, and there's still a bunch of churn going on on really basic stuff like IO and memory allocation. I really enjoy writing it, but it's by no means stable to the point many people would write production software in it.

      • dwattttt 2 days ago

        Rust hit 1.0 in 2015, it started as a project by Graydon Hoare in 2006; those dates line up pretty well with Zig's timeline.

      • ivanjermakov 2 days ago

        To be fair, Zig 10 years ago is drastically different language from Zig today.

        • ojosilva a day ago

          Which is unfortunately a problem for AI trained on Zig: it makes some AI-assisted Zig coding more challenging, like Q&A and code completion. It's sad that this glass ceiling exists for new languages and frameworks - not a deal-breaker at all, just that suddenly there's this penalty on time-to-deliver for anything Zig. But then... the same issue exists when hiring good programmers for lesser-known tech.

          There'll probably be a strategy (AEO?) for this in the future for newcomers and the underrepresented, like endless examples posted by a sane AI to their docs and github for instance so it gets picked up by training sets or live, tool calling, web-searches.

          • rererereferred a day ago

            Yes, I wouldn't train AI on Zig code just yet. But here's a radical idea: rename the language the moment it hits 1.0, so all documentation, blog posts, discussions, SO answers and LLMs for older versions get automatically voided.

            For future languages, maybe it's better to already have a dev name and a release name from the get go.

    • epolanski 2 days ago

      The language is very much in development, but its ecosystem and tooling are absolutely mature.

      • ivanjermakov 2 days ago

        I would not say the ecosystem is mature, outside of the superb C interop and popular C/C++ lib wrappers.

        • epolanski a day ago

          What would you say that the ecosystem lacks that C has?

          • ivanjermakov a day ago

            Various native Zig libraries. While C interop is great, converting between C types and calling conventions is rather inconvenient.

blizdiddy 2 days ago

I used bun for the first time last week. It was awesome! The built-in server and SQLite meant I didn't need any dependencies besides bun itself, which is certainly my favorite way to develop.
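
For anyone curious, the zero-dependency setup looks roughly like this (a sketch from memory of the bun:sqlite and Bun.serve APIs; names like `app.sqlite` are just placeholders):

    // server.ts - minimal sketch using only Bun built-ins (bun:sqlite + Bun.serve)
    import { Database } from "bun:sqlite";

    const db = new Database("app.sqlite");
    db.run("CREATE TABLE IF NOT EXISTS hits (ts TEXT)");

    Bun.serve({
      port: 3000,
      fetch(req) {
        db.query("INSERT INTO hits (ts) VALUES (?)").run(new Date().toISOString());
        const row = db.query("SELECT count(*) AS count FROM hits").get();
        return new Response(`hits so far: ${row.count}`);
      },
    });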

I do almost all of my development in vanilla JS despite loathing the node ecosystem, so I really should have checked it out sooner.

  • k__ 2 days ago

    I tried using Bun a few times, and I really liked working with it.

    Much better than Node.

    However...!

    I always managed to hit a road block with Bun and had to go back to Node.

    First it was the crypto module that wasn't compatible with Node.js signatures (now fixed), then Playwright refused to work with Bun (via Crawlee).

    • Jarred 2 days ago

      Playwright support will improve soon. We are rewriting node:http’s client implementation to pass node’s test suite. Expecting that to land next week.

      • ukblewis 20 hours ago

        That may just be the best way to casually drop that you're introducing a rewrite which fixes a set of bugs that affect users

    • koakuma-chan 2 days ago

      You can use Bun as package manager only. You don't have to use Bun as runtime.

      • iansinnott 2 days ago

        Indeed! Also as a test runner/lib if you're not doing browser automation. Bun definitely has benefits even if not used as a runtime.

        • koakuma-chan a day ago

          I believe Playwright worked for me with the latest Bun though

      • winterrdog 2 days ago

        Sure?

        Does it work if I have packages that have nodejs c++ addons?

        • abejfehr 2 days ago

          Why wouldn’t it? The end result of an npm install or a bun install is that the node_modules folder is structured in the way it needs to be, and I think it can run node-gyp for the packages that need it.

    • Cthulhu_ 2 days ago

      I think this is the big one that slows adoption of "better" / "faster" tooling down, that is, backwards compatibility and drop-in-replacement-ability. Probably a lot of Hyrum's Law.

    • drewbitt 2 days ago

      Deno doesn't work with crawlee either unfortunately

    • epolanski 2 days ago

      Playwright has been fixed one year ago I think.

    • petralithic 2 days ago

      You should try Deno, they have good Node compatibility

      • erikpukinskis 2 days ago

        Does it? Last I tried, several years ago, coverage of the Node APIs was not good. I wanted to send data over UDP and a lot of Node basics there were missing.

    • jherdman 2 days ago

      Storybook is another for me.

  • simantel 2 days ago

    Node also has a built-in server and SQLite these days though? Or if you want a lot more functionality with just one dependency, Hono is great.

    • blizdiddy 2 days ago

      And how many dependencies does Hono have? Looks like about 26. And how many dependencies do those have?

      A single static Zig executable isn't the same as a pipeline of package management dependencies susceptible to supply chain attacks and the worst bitrot we've had since the DOS era.

      • bakkoting 2 days ago

        > And how many dependencies does Hono have?

        Zero.

        I'm guessing you're looking at the `devDependencies` in its package.json, but those are only used by the people building the project, not by people merely consuming it.

        • PxldLtd a day ago

          That doesn't prevent supply chain attacks. Dev dependencies are still software dependencies and add a certain level of risk.

          • arcfour a day ago

            This is needlessly pedantic unless you are writing from an OS, browser, etc. that you wrote entirely by yourself, without using an editor or linter or compiler not written by you, in which case I tip my cap to you.

          • bakkoting a day ago

            Only in the sense that any other software on the developers' machines adds a certain level of risk.

Jarred 2 days ago

I work on Bun and also spent a lot of time optimizing bun install. Happy to answer any questions

  • norman784 a day ago

    AFAIK `bun install` is similar to `npm install` in the sense that it installs everything in a flat structure inside node_modules. Why didn't you choose something like pnpm's approach, which I believe is better because you cannot import a transitive dependency by mistake? Maybe that's a non-issue for most, but I care about those things.

    • Jarred a day ago

      You can use `bun install --linker=isolated`, which we might make the default in Bun v1.3. The main downside is it makes your app load slightly slower since now every file path is a symlink.

      https://bun.sh/docs/install/isolated
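
      If I remember the config key right, the same setting can also go in bunfig.toml instead of passing the flag every time (worth double-checking against the docs page above):

          [install]
          linker = "isolated"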

      • norman784 a day ago

        I didn't know about it, thanks!

  • hu3 a day ago

    Kinda offtopic, but how do you manage to stay so productive for so long?

    Vitamins/supplements? Sleep? Exercise? Vacations?

    I have sprints of great productivity but it's hard to keep it for long.

  • nzoschke 2 days ago

    I just want to say thanks to you and the team and community. Bun is a treat to use.

manuhabitela 2 days ago

I'm impressed how pleasant and easy to read this pretty technical explanation was. Good job on the writing.

  • winterrdog 2 days ago

    Truth!

    Lydia is very good at presenting complex ideas simply and well. I've read and watched most of her work or videos. She really goes to great lengths in her work to make it come to life. Highly recommend her articles and YouTube videos.

    Though she's been writing less I think due to her current job

thornewolf 2 days ago

I think they forgot to include the benchmark time for "npm (cached)" inside the Binary Manifest Caching section. We have bun, bun (cached), npm. I think the summary statistics are also incorrect.

  • sapper123 a day ago

      They don’t seem to clear the cache in between fresh runs. This is evident from the lower bound of the range being the same as the cached mean.

    This leads them to the incorrect conclusion that bun fresh runs are faster than npm cached, which doesn’t seem to be the case.

aleyan 2 days ago

I have been excited about bun for about a year, and I thought that 2025 was going to be its breakout year. It is really surprising to me that it is not more popular. I scanned the top 100k repos on GitHub, and for new repos in 2025, npm is 35 times more popular and pnpm is 11 times more popular than bun [0][1]. The other up-and-coming JavaScript runtime, Deno, is not so popular either.

I wonder why that is? Is it because it is a runtime, and getting compatibility there is harder than just for a straight package manager?

Can someone who tried bun and didn't adopt it personally or at work chime in and say why?

[0] https://aleyan.com/blog/2025-task-runners-census/#javascript...

[1] https://news.ycombinator.com/item?id=44559375

  • phpnode 2 days ago

    It’s a newer, VC-funded competitor to the open source, battle-tested dominant player. It has incentives to lock you in and ultimately is just not that different from node. There’s basically no strategic advantage to using bun, it doesn’t really enable anything you can’t do with node. I have not seen anyone serious choose it yet, but I’ve seen plenty of unserious people use it.

    • marcjschmidt 2 days ago

      I think that summarizes it well. It's not 10x better, which is what would make the risky bet of vendor lock-in with a VC-backed company worth it. Same issue with Prisma and Next for me.

    • sam_goody a day ago

      Tailwind uses it.

      Considering how many people rely on a Tailwind watcher running on all of their CSS updates, you may find that bun is used daily by millions.

      We use Bun for one of our servers. We are small, but we are not goofing around. I would not recommend them yet for anything but where they have a clear advantage - but there are areas where it is noticeably faster or easier to setup.

  • dsissitka 2 days ago

    I really want to like Bun and Deno. I've tried using both several times and so far I've never made it more than a few thousand lines of code before hitting a deal breaker.

    Last big issue I had with Bun was streams closing early:

    https://github.com/oven-sh/bun/issues/16037

    Last big issue I had with Deno was a memory leak:

    https://github.com/denoland/deno/issues/24674

    At this point I feel like the Node ecosystem will probably adopt the good parts of Bun/Deno before Bun/Deno really take off.

    • hoten 2 days ago

      uh... looks like an AI user saw this comment and fixed your bun issue? Or maybe it just deleted code in a random manner idk.

      https://github.com/oven-sh/bun/commit/b474e3a1f63972979845a6...

      • drewbitt 2 days ago

        The bun team uses Discord to kick off the Claude bot, so someone probably saw the comment and told it to do it. That edit doesn't look particularly good though.

  • williamstein 2 days ago

    I am also very curious what people think about this. To me, as a project, Node gives off a vibe of being mature, democratic and community driven, especially after successfully navigating the io.js fork drama etc. a few years ago. Clearly neither bun nor deno are community-driven democratic projects, since they are both VC funded.

  • johnfn 2 days ago

    I am Bun's biggest fan. I use it in every project I can, and I write all my one-off scripts with Bun/TS. That being said, I've run into a handful of issues that make me a little anxious to introduce it into production environments. For instance, I had an issue a bit ago where something simple like an Express webserver inside Docker would just hang, but switching bun for node worked fine. A year ago I had another issue where a Bun + Prisma webserver would slowly leak memory until it crashed. (It's been a year, I'm sure they fixed that one).

    I actually think Bun is so good that it will still net save you time, even with these annoyances. The headaches it resolves around transpilation, modules, workspaces etc, are just amazing. But I can understand why it hasn't gotten closer to npm yet.

  • silverwind 2 days ago

    Take a look at their issue tracker, it's full of crashes because apparently this Zig language is highly unsafe. I'm staying on Node.

    • petralithic 2 days ago

      That's why, if I had to choose a Node competitor out of Bun and Deno, I'd choose Deno.

    • audunw 2 days ago

      Zig isn’t inherently highly unsafe. A bit less safe than Rust in some regards, but arguably more safe in a few others.

      But the language hasn’t even reached 1.0 yet. A lot of the strategies for writing safe Zig aren’t fully developed.

      Yet, TigerBeetle is written in Zig and is an extremely robust piece of software.

      I think the focus of Bun is probably more on feature parity in the short term.

    • mk12 2 days ago

      Good thing libuv is written in a "safe" language.

      • otikik 2 days ago

        npm is a minefield that thousands of people traverse every day. So you are unlikely to hit a mine.

        bun is a bumpy road that sees very low traffic. So you are likely to hit some bumps.

    • keybored 2 days ago

      There’s a `crash` label. 758 open issues.

    • actionfromafar 2 days ago

      Well, Node is C++, which isn’t exactly safe either. But it’s more tested.

  • veber-alex 2 days ago

    Neither Bun nor Deno have any killer features.

    Sure, they have some nice stuff that should also be added in Node, but nothing compelling enough to deal with ecosystem change and breakage.

    • gkiely 2 days ago

      bun test is a killer feature
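
      For anyone who hasn't tried it: it's a mostly Jest-style runner built into the runtime, so a test file is just something like this (minimal sketch), run with `bun test`:

          // math.test.ts
          import { test, expect } from "bun:test";

          test("adds numbers", () => {
            expect(1 + 2).toBe(3);
          });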

  • MrJohz 2 days ago

    I think part of the issue is that a lot of the changes have been fairly incremental, and therefore fairly easy to include back into NodeJS. Or they've been things that make getting started with Bun easier, but don't really add much long-term value. For example, someone else in the comments talked about the sqlite module and the http server, but now NodeJS also natively supports sqlite, and if I'm working in web dev and writing servers, I'd rather use an existing, battle-tested framework like Express or Fastify with a larger ecosystem.

    It's a cool project, and I like that they're not using V8 and trying something different, but I think it's very difficult to sell a change on such incremental improvements.
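
    (For reference, the built-in driver in recent Node looks roughly like this; depending on your Node version the node:sqlite module may still be flagged experimental:)

        // rough sketch of Node's built-in SQLite driver
        import { DatabaseSync } from "node:sqlite";

        const db = new DatabaseSync(":memory:");
        db.exec("CREATE TABLE todos (id INTEGER PRIMARY KEY, title TEXT)");
        db.prepare("INSERT INTO todos (title) VALUES (?)").run("ship it");
        console.log(db.prepare("SELECT * FROM todos").all());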

    • ifwinterco a day ago

      This is a long term pattern in the JS ecosystem, same thing happened with Yarn.

      It was better than npm with useful features, but then npm just added all of those features after a few years and now nobody uses it.

      You can spend hours every few years migrating to the latest and greatest, or you can just stick with npm/node and you will get the same benefits eventually

      • rererereferred a day ago

        If Node becomes much better thanks to the existence of Bun, then I think Bun accomplished its goals. Same for C and Zig.

      • sam_goody a day ago

        I have been using pnpm as my daily driver for several years, and am still waiting for npm to add a symlink option. (Bun does support symlinks).

        In the interim, I am very glad we haven't waited.

        Also, we switched to Postgres early, when my friends were telling me that eventually MySQL would catch up. Which in many ways it did, but I still appreciate that we moved.

        I can think of other choices we made - we try to assess the options and choose the best tool for the job, even if it is young.

        Sometimes it pays off in spades. Sometimes it causes double the work and five times the headache.

  • tracker1 2 days ago

    There are still a few compatibility sticking points... I'm far more familiar with Deno and have been using it a lot the past few years; it's pretty much my default shell scripting tool now.

    That said, for many work projects I need to access MS-SQL, and the way its driver does socket connections isn't supported by the Deno runtime, or some such. Which limits what I can do at work. I suspect there are a few similar sticking points with Bun for other modules/tools people use.

    It's also very hard to break away from entropy. Node+npm had over a decade and a lot of effort to build that ecosystem that people aren't willing to just abandon wholesale.

    I really like Deno for shell scripting because I can use a shebang, reference dependencies and the runtime just handles them. I don't have the "npm install" step I need to run separately, it doesn't pollute my ~/bin/ directory with a bunch of potentially conflicting node_modules/ either, they're used from a shared (configurable) location. I suspect bun works in a similar fashion.
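
    E.g. something along these lines (just an illustration; the package and URL are arbitrary examples, and the permission flags may need adjusting):

        #!/usr/bin/env -S deno run --allow-net
        // no install step: Deno resolves and caches the npm: import on first run
        import { z } from "npm:zod";

        const Todo = z.object({ id: z.number(), title: z.string() });
        const res = await fetch("https://jsonplaceholder.typicode.com/todos/1");
        console.log(Todo.parse(await res.json()));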

    That said, with work I have systems I need to work with that are already in place or otherwise chosen for me. You can't always just replace technology on a whim.

  • davidkunz 2 days ago

    I tried to run my project with bun - it didn't work so I gave up. Also, there needs to be a compelling reason to switch to a different ecosystem.

  • fleebee 2 days ago

    There are some rough edges to Bun (see sibling comments), so there's an apparent cost to switching, namely wasted developer time in dealing with Node incompatibility. Being able to install packages 7x faster doesn't matter much to me, so I don't see an upside to making the switch.

  • oefrha 2 days ago

    To beat an incumbent you need to be 2x better. Right now it seems to be a 1.1x better (for any reasonably sized projects) work in progress with kinks you’d expect from a work in progress and questionable ecosystem buy-in. That may be okay for hobby projects or tiny green field projects, but I’m absolutely not gonna risk serious company projects with it.

  • turtlebits 2 days ago

    Tried it last year - I spent a few hours fighting the built-in SQLite driver and found it buggy (silent errors), and the docs were very lacking.

  • fkyoureadthedoc 2 days ago

    Bun is much newer than pnpm, looking at 1.0 releases pnpm has about a 6 year head start.

    I write a lot of one off scripts for stuff in node/ts and I tried to use Bun pretty early on when it was gaining some hype. There were too many incompatibilities with the ecosystem though, and I haven't tried since.

  • madeofpalk 2 days ago

    Honestly, it doesn't really solve a big problem I have, and introduces all the problems with being "new" and less used.

  • koakuma-chan 2 days ago

    > I wonder why that is?

    LLMs default to npm

    • fkyoureadthedoc 2 days ago

      You sure it's not just because npm has been around for 15 years as the default package manager for node?

      • koakuma-chan 2 days ago

        Didn't prevent me from switching to Bun as the cost is 0.

rs_rs_rs_rs_rs 2 days ago

Python has uv, JS has bun, what does Ruby or PHP have? Are the devs using those languages happy with how fast the current popular dependency managers are?

  • JamesSwift 2 days ago

    You're looking at it wrong. Python has nix, JS has nix, Ruby and PHP have nix : D

    That's closer to how pnpm achieves its speedup though. I know there is 'rv' recently, but I haven't tried it.

    • koakuma-chan 2 days ago

      You mean nix the package manager? I used to use NixOS and I had to switch away because of the endless mess with environment variables.

      • JamesSwift 2 days ago

        Yes, nix package manager. Or devenv for a more streamlined version of what I'm describing, similar to mise but powered by nix.

  • tommasoamici 2 days ago

    It's pretty new, but in Ruby there's `rv`, which is clearly inspired by `uv`: https://github.com/spinel-coop/rv.

    >Brought to you by Spinel

    >Spinel.coop is a collective of Ruby open source maintainers building next-generation developer tooling, like rv, and offering flat-rate, unlimited access to maintainers who come from the core teams of Rails, Hotwire, Bundler, RubyGems, rbenv, and more.

  • weaksauce 2 days ago

    Bundler is generally pretty fast on the Ruby side. It also reuses dependencies for a given Ruby version, so you don't have the stupid node_modules folder in every project with every dependency re-downloaded and stored. If you have 90% of the dependencies for a project, you only have to download and install/compile 10% of them. Night and day difference.

  • aarondf 2 days ago

    PHP has Composer, and it's extremely good!

    • kijin 2 days ago

      PHP is much closer to raw C and doesn't do any threading by default, so I suppose composer doesn't suffer from the thread synchronization and event loop related issues that differentiate bun from npm.

      • gertop 2 days ago

        But node doesn't do threading by default either? Are you saying that npm is somehow multithreaded?

mrcarrot a day ago

The "Optimized Tarball Extraction" confuses me a bit. It begins by illustrating how other package managers have to repeatedly copy the received, compressed data into larger and larger buffers (not mentioning anything about the buffer where the decompressed data goes), and then says that:

> Bun takes a different approach by buffering the entire tarball before decompressing.

But seems to sidestep _how_ it does this any differently than the "bad" snippet the section opened with (presumably it checks the Content-Length header when it's fetching the tarball or something, and can assume the size it gets from there is correct). All it says about this is:

> Once Bun has the complete tarball in memory it can read the last 4 bytes of the gzip format.

Then it explains how it can pre-allocate a buffer for the decompressed data, but we never saw how this buffer allocation happens in the "bad" example!

> These bytes are special since store the uncompressed size of the file! Instead of having to guess how large the uncompressed file will be, Bun can pre-allocate memory to eliminate buffer resizing entirely

Presumably the saving is in the slow package managers having to expand _both_ of the buffers involved, while bun preallocates at least one of them?

  • Jarred a day ago

    Here is the code:

    https://github.com/oven-sh/bun/blob/7d5f5ad7728b4ede521906a4...

    We trust the self-reported size by gzip up to 64 MB, try to allocate enough space for all the output, then run it through libdeflate.

    This is instead of a loop that decompresses it chunk-by-chunk and then extracts it chunk-by-chunk and resizing a big tarball many times over.
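
    For anyone who wants the gist without reading the Zig, the trailer trick looks roughly like this (illustrative JS, not the actual implementation; as far as I know Node's zlib can't decompress into a caller-supplied buffer, so the up-front allocation is what libdeflate gives us):

        import { gzipSync, gunzipSync } from "node:zlib";

        // stand-in for a fully-downloaded tarball sitting in memory
        const tgz = gzipSync(Buffer.alloc(1024 * 1024, "a"));

        // last 4 bytes of a gzip stream = ISIZE: uncompressed size mod 2^32, little-endian
        const expected = tgz.readUInt32LE(tgz.length - 4);
        console.log(expected); // 1048576, known before decompressing anything

        // trust that size (Bun caps this at 64 MB), allocate the output once,
        // and decompress in a single pass instead of a chunk-by-chunk resize loop
        const out = gunzipSync(tgz);
        console.log(out.length === expected); // true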

    • mrcarrot a day ago

      Thanks - this does make sense in isolation.

      I think my actual issue is that the "most package managers do something like this" example code snippet at the start of [1] doesn't seem to quite make sense - or doesn't match what I guess would actually happen in the decompress-in-a-loop scenario?

      As in, it appears to illustrate building up a buffer holding the compressed data that's being received (since the "// ... decompress from buffer ..." comment at the end suggests what we're receiving in `chunk` is compressed), but I guess the problem with the decompress-as-the-data-arrives approach in reality is having to re-allocate the buffer for the decompressed data?

      [1] https://bun.com/blog/behind-the-scenes-of-bun-install#optimi...

alberth 2 days ago

I really enjoyed the writing style of this post.

A few things:

- I feel like this post, repurposed, could be a great explanation of why io_uring is so important.

- I wonder if Zig's recent IO updates in v0.15 make any perf improvement to Bun beyond its current fast perf.

atonse 2 days ago

I absolutely loved reading this. It's such an excellent example of a situation where Computer Science principles are very important in day to day software development.

So many of these concepts (Big O, temporal and spatial locality, algorithmic complexity, lower level user space/kernel space concepts, filesystems, copy on write), are ALL the kinds of things you cover in a good CS program. And in this and similar lower level packages, you use all of them to great effect.

  • epolanski 2 days ago

    This is about software engineering not computer science.

    CS is the study of computations and their theory (programming languages, algorithms, cryptography, machine learning, etc).

    SE is the application of engineering principles to building scalable and reliable software.

    • atonse 17 hours ago

      Without getting bogged down in rigid definitions of phrases, do we both agree that this is about the application of deeper technical concepts and algorithms (usually taught as part of a computer science curriculum) towards real world problems, rather than the normal “build this login form” or “write these 5 queries to generate this report that shows up in an html table” that 75% of devs do daily?

RestartKernel 2 days ago

This is very nicely written, but I don't quite get how Linux's hardlinks are equivalent to MacOS's clonefile. If I understand correctly, wouldn't the former unexpectedly update files across all your projects if you modify just one "copy"?
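
For concreteness, this is the behaviour I'd expect from the two primitives, going by Node's fs API (just an illustration; cloning only works on filesystems that support it, e.g. APFS or btrfs, and COPYFILE_FICLONE falls back to a plain copy elsewhere):

    import fs from "node:fs";

    fs.writeFileSync("original.txt", "v1");

    // hard link: both names point at the same inode, so an in-place write
    // through one name shows up through the other
    fs.linkSync("original.txt", "hardlink.txt");

    // clone: a copy-on-write copy with its own inode
    fs.copyFileSync("original.txt", "clone.txt", fs.constants.COPYFILE_FICLONE);

    fs.writeFileSync("original.txt", "v2"); // truncates in place, same inode
    console.log(fs.readFileSync("hardlink.txt", "utf8")); // "v2"
    console.log(fs.readFileSync("clone.txt", "utf8"));    // still "v1"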

wink 2 days ago

> Node.js uses libuv, a C library that abstracts platform differences and manages async I/O through a thread pool.

> Bun does it differently. Bun is written in Zig, a programming language that compiles to native code with direct system call access:

Guess what, C/C++ also compiles to native code.

I mean, I get what they're saying and it's good, and nodejs could have probably done that as well, but didn't.

But don't phrase it like it's inherently not capable. No one forced npm to use this abstraction, and npm probably should have been a Node.js addon in C/C++ in the first place.

(If anything of this sounds like a defense of npm or node, it is not.)

  • k__ 2 days ago

    To me, the reasoning seems to be:

    Npm, pnpm, and yarn are written in JS, so they have to use Node.js facilities, which are based on libuv, which isn't optimal in this case.

    Bun is written in Zig, so it doesn't need libuv and can do its own thing.

    Obviously, someone could write a Node.js package manager in C/C++ as a native module to do the same, but that's not what npm, pnpm, and yarn did.

  • lkbm 2 days ago

    Isn't the issue not that libuv is C, but that the thing calling it (Node.js) is Javascript, so you have to switch modes each time you have libuv make a system call?

azangru a day ago

I am probably being stupid, but aren't install commands run relatively rarely by developers (less than once a day perhaps)? Is it such an important issue how long it takes for `x install` to finish?

Or is the concern about the time spent in CI/CD?

  • tuetuopay 10 hours ago

    CI/CD is a major usage. But dependency version bumps are also a big part of it. In the Python ecosystem I’ve had Poetry take minutes to resolve the Ansible dependencies after bumping the version. And then you see uv take milliseconds to do a full install from scratch.

valtism 2 days ago

I had no idea Lydia was working for Bun now. Her technical writing is absolutely top notch

markasoftware 2 days ago

I'm pretty confused about why it's beneficial to wait to read the whole compressed file before decompressing. Surely the benefit of beginning decompression before the download is complete outweighs having to copy the memory around a few extra times as the vector is resized?

  • Jarred 2 days ago

    Streaming prevents many optimizations because the code can’t assume it’s done when run once, so it has to suspend / resume, clone extra data for longer, and handle boundary cases more carefully.

    It’s usually only worth it after ~tens of megabytes, but vast majority of npm packages are much smaller than that. So if you can skip it, it’s better.

    • yencabulator a day ago

      Streaming decompression with a large buffer size handles everything in a single batch for small files.

tracker1 2 days ago

I'm somewhat curious how Deno stands up with this... also, not sure what packages are being installed. I'd probably start a vite template project for react+ts+mui as a baseline, since that's a relatively typical application combo for tooling. Maybe hono+zod+openapi as well.

  • tracker1 2 days ago

    For my own curiosity, on a React app on my work desktop:

        - Clean `bun install`, 48s - converted package-lock.json
        - With bun.lock, no node_modules, 19s
        - Clean with `deno install --allow-scripts`, 1m20s
        - with deno.lock, no node_modules, 20s
        - Clean `npm i`, 26s
        - `npm ci` (package-lock.json), no node_modules, 1m2s (wild)
    
    So, looks like if Deno added a package-lock.json conversion similar to bun's, the installs would be very similar all around. I have no control over the security software used on this machine; it was just convenient as I was in front of it.

    Hopefully someone can put eyes on this issue: https://github.com/denoland/deno/issues/25815

  • steve_adams_86 2 days ago

    I think Deno isn't included in the benchmark because it's a harder comparison to make than it might seem.

    Deno's dependency architecture isn't built around npm; that compatibility layer is a retrofit on top of the core (which is evident in the source code, if you ever want to see). Deno's core architecture around dependency management uses a different, URL-based paradigm. It's not as fast, but... It's different. It also allows for improved security and cool features like the ability to easily host your own secure registry. You don't have to use npm or jsr. It's very cool, but different from what is being benchmarked here.

    • tracker1 2 days ago

      All the same, you can run deno install in a directory with a package.json file and it will resolve and install to node_modules. The process is also written in compiled code, like bun... so I was just curious.

      edit: replied to my own post... looks like `deno install --allow-scripts` is about 1s slower than bun once deno.lock exists.

randomsofr 2 days ago

Wow, crazy to see Yarn being so slow when it used to beat npm by a lot. At a company I was at, we went from npm, to Yarn, to pnpm, back to npm. Nowadays I try to use Bun as much as possible, but Vercel still does not use it natively for Next.

rtpg a day ago

bun installs are fast, but I think they might be so fast and concurrent they cause npm to really get confused sometimes.

I end up hitting 500s from npm from time to time when installing with bun and I just don't know why.

Really wish the norm was that companies hosted their own registries for their own usage, so I could justify the expense and effort instead of dealing with registries being half busted kinda randomly.

  • mnahkies a day ago

    > Really wish the norm was that companies hosted their own registries for their own usage

    Is this not the norm? I've never worked anywhere that didn't use/host their own registry - both for hosting private packages, but also as a caching proxy to the public registry (and therefore more control over availability, security policy)

    https://verdaccio.org/ is my go-to self-hosted solution, but the cloud providers have managed solutions and there's also JFrog Artifactory.

    One corollary of this is that many commercial usages of packages don't contribute much to download stats, as often they download each version at most once.

k__ 2 days ago

"... the last 4 bytes of the gzip format. These bytes are special since store the uncompressed size of the file!"

What's the reason for this?

I could imagine, many tools could profit from knowing the decompressed file size in advance.

  • philipwhiuk 2 days ago

    It's straight from the GZIP spec if you assume there's a single GZIP "member": https://www.ietf.org/rfc/rfc1952.txt

    > ISIZE (Input SIZE)

    > This contains the size of the original (uncompressed) input data modulo 2^32.

    So there's two big caveats:

    1. Your data is a single GZIP member (I guess this means everything in a folder)

    2. Your data is < 2^32 bytes.

    • jerf a day ago

      A GZIP "member" is whatever the creating program wants it to be. I have not carefully verified this but I see no reason for the command line program "gzip" to ever generate more than one member (at least for smaller inputs), after a quick scan through the command line options. I'm sure it's the modal case by far. Since this is specifically about reading .tar.gz files as hosted on npm, this is probably reasonably safe.

      However, because of the scale of what bun deals with it's on the edge of what I would consider safe and I hope in the real code there's a fallback for what happens if the file has multiple members in it, because sooner or later it'll happen.

      It's not necessarily terribly well known that you can just slam gzip members (or files) together and it's still a legal gzip stream, but it's something I've made use of in real code, so I know it's happened. You can do some simple things with having indices into a compressed file so you can skip over portions of the compressed stream safely, without other programs having to "know" that's a feature of the file format.

      Although the whole thing is weird in general because you can stream gzip'd tars without ever having to allocate space for the whole thing anyhow. gzip can be streamed without having seen the footer yet, and the tar format can be streamed out pretty easily. I've written code for this in Go a couple of times, where I can be quite sure there's no stream rewinding occurring by the nature of the io.Reader system. Reading the whole file into memory to unpack it was never necessary in the first place, not sure if they've got some other reason to do that.
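
      To make the concatenation point concrete (I believe recent Node's zlib handles multi-member streams, so this should run as-is):

          import { gzipSync, gunzipSync } from "node:zlib";

          // two gzip members slammed together are still one legal gzip stream,
          // but the trailer's ISIZE only describes the last member
          const a = gzipSync(Buffer.from("hello "));
          const b = gzipSync(Buffer.from("world"));
          const both = Buffer.concat([a, b]);

          console.log(gunzipSync(both).toString());        // "hello world"
          console.log(both.readUInt32LE(both.length - 4));  // 5, not 11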

    • k__ 2 days ago

      Yeah, I understood that.

      I was just wondering why GZIP specified it that way.

      • ncruces 2 days ago

        Because it allows streaming compression.

        • k__ 2 days ago

          Ah, makes sense.

          Thanks!

  • lkbm 2 days ago

    I believe it's because you get to stream-compress efficiently, at the cost of stream-decompress efficiency.

  • 8cvor6j844qw_d6 2 days ago

    From gzip.py [1]:

        def _read_eof(self):
            # We've read to the end of the file, so we have to rewind in order
            # to reread the 8 bytes containing the CRC and the file size.
            # We check that the computed CRC and size of the
            # uncompressed data matches the stored values.  Note that the size
            # stored is the true file size mod 2**32.

    [1]: https://stackoverflow.com/a/1704576

djfobbz 2 days ago

I really like Bun too, but I had a hard time getting it to play nicely with WSL1 on Windows 10 (which I prefer over WSL2). For example:

  ~/: bun install
  error: An unknown error occurred (Unexpected)

  • lfx 2 days ago

    Why do you prefer WSL1 over WSL2?

    • tracker1 2 days ago

      FS calls across the OS boundary are significantly faster in WSL1, as the biggest example from the top of my head. I prefer WSL2 myself, but I avoid using the /mnt/c/ paths as much as possible, and never, ever run a database (like sqlite) across that boundary, you will regret it.

    • djfobbz 2 days ago

      WSL1's just faster, no weird networking issues, and I can edit the Linux files from both Windows and Linux without headaches.

LeicaLatte 2 days ago

Liking the treatment of package management from first principles as a systems-level optimization problem rather than file scripting. It resembles a database engine - dependency-aware task scheduling, cache locality, syscall overhead - they're all there.

wojtek1942 2 days ago

> However, this mode switching is expensive! Just this switch alone costs 1000-1500 CPU cycles in pure overhead, before any actual work happens.

...

> On a 3GHz processor, 1000-1500 cycles is about 500 nanoseconds. This might sound negligibly fast, but modern SSDs can handle over 1 million operations per second. If each operation requires a system call, you're burning 1.5 billion cycles per second just on mode switching.

> Package installation makes thousands of these system calls. Installing React and its dependencies might trigger 50,000+ system calls: that's seconds of CPU time lost to mode switching alone! Not even reading files or installing packages, just switching between user and kernel mode.

Am I missing something, or is this incorrect? They claim 500ns per syscall with 50k syscalls. 500ns * 50000 = 25 milliseconds. So that is very far from "seconds of CPU time lost to mode switching alone!" right?

  • Bolwin 2 days ago

    Read further. In one of the later benchmarks, yarn makes 4 million syscalls.

    Still only about 2 secs, but still.

swyx 2 days ago

I'm curious why Yarn is that much slower than npm? What's the opposite of this article?

yahoozoo 2 days ago

Good article but it sounds a lot like AI wrote it.

paularmstrong 2 days ago

This is all well and good, but the time it takes to install node modules is not a critical blocker for any project that I've ever been a part of. It's a drop in the bucket compared to human (ability and time to complete changes) and infrastructure (CI/deploy/costs). Cutting 20 seconds off the dependency install time is just not a make or break issue.

  • tracker1 2 days ago

    It's more than enough to lose your focus. If you can make a process take a couple seconds or less vs over 15, you should do that.

    • paularmstrong 2 days ago

      How often are you doing a full install of dependencies? Re-runs for me using npm/pnpm/yarn are 1-2 seconds at worst in very large monorepos. I can't imagine needing to do full installs with any sort of frequency.

      • tracker1 2 days ago

        I find that it's heavily dependent on the drive speed, so I've leaned into getting current-generation, very fast drives as much as possible when I put together new computers, and sometimes a mid-generation upgrade. Considering I often do consulting work across random projects, I pretty often am having to figure out and install things in one monorepo managed with pnpm, another with yarn, etc... so the pain is relatively real. That said, the fastest drive matters as much or more, especially with build steps.

        When handling merge/pull requests, I'll often do a clean step (removing node_modules and temp files) before a full install and build to test everything works. I know not everyone else is this diligent, but this can happen several times a day... Automation (usually via docker) can help a lot, with many things tested through a CI/CD environment; that said, I'm also not a fan of having to wait too long for that process... it's too easy to get side-tracked and off-task. I tend to set alarms/timers throughout the day just so I don't miss meetings. I don't want to take a moment to look at HN, and the next thing I know it's a few hours later. Yeah, that's my problem... but others share it.

        So, again, if you can make something take less than 15s that typically takes much more, I'm in favor... I went from eslint to Rome/Biome for similar reasons... I will switch to faster tooling to reduce the risk of going off-task and not getting back.

      • paularmstrong 2 days ago

        I also just tried bun install in my main work monorepo vs yarn. bun took 12s and yarn took 15s. This is hardly a difference worth noting.

        • tracker1 2 days ago

          Yeah, I find HDD speed can make more of a difference too. Gen 5 PCIe is amazing.. the difference in Rust builds was pretty phenomenal over even a good gen 4 drive.

  • sgarland 2 days ago

    I am thrilled that anyone in the web dev community is giving a shit about performance, and clearly knows something about how a computer actually works.

    I am so, so tired of the “who cares if this library eats 100 MiB of RAM; it’s easier” attitude.