I appreciate that Aaron is focusing on the practical algorithm/design improvements that could be made to Bundler, vs. prematurely going all in on "rewrite in Rust".
What exactly is your issue? I've been using rvm for a decade(?) without any major pain. Cross-language tools such as mise or asdf also seem to work OK.
I can relate to the "I wish we didn't need a second tool", but it doesn't seem like much of a mess.
I think fibers (or, rather, the Async library) in Ruby tend to be fetishized by junior Rails engineers who don't realize that higher-level thread coordination issues (connection pools, etc.) apply equally to fibers. That said, this could be a pretty good use case for fibers -- the code base I use every day has ~230 gems, and if you can peel off the actual IO-bound installation of all those into non-blocking calls, you would see a meaningful performance difference vs spinning up threads and context switching between them.
What I would do to really squeeze the rest out in pure Ruby (bear in mind I’ve been away about a decade so there _might be_ new bits but nothing meaningful as far as I know):
- Use a cheaper-to-parse index format (the gists I wrote years ago cover this: https://gist.github.com/raggi/4957402)
- Use threads for the initial archive downloads (this is just IO, and you want to reuse some caches like the index); see the sketch after this list
- Use a few forks for the unpacking and post-install steps (because these have unpredictable concurrency behaviors)
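To make the threads-for-downloads idea concrete, here's a minimal sketch (the gem URLs are placeholders, and real code would reuse connections, verify checksums, and handle errors):

```ruby
# Hedged sketch, not Bundler's actual code: fetch gem archives with a
# small thread pool. This phase is pure blocking I/O, and MRI releases
# the GVL during network waits, so threads overlap nicely here.
require "net/http"
require "uri"

urls = %w[
  https://rubygems.org/gems/rack-3.1.8.gem
  https://rubygems.org/gems/rake-13.2.1.gem
]  # placeholder URLs for illustration

queue = Queue.new
urls.each { |u| queue << u }
queue.close  # pop returns nil once the queue is drained

threads = Array.new(4) do
  Thread.new do
    while (url = queue.pop)
      uri = URI(url)
      File.binwrite(File.basename(uri.path), Net::HTTP.get(uri))
    end
  end
end
threads.each(&:join)
```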
> there _might be_ new bits but nothing meaningful as far as I know
If you didn't need backwards compatibility with older Rubies, you could use Ractors in lieu of forks, skip the IPC between processes, and have cleaner communication channels. I can peg all the cores on my machine with a simple Ractor pool doing simple computation, which feels like a miracle as a Ruby old head. Bundler could get away with creating its own Ractor-safe installer pool, which would be cool as it'd be the first large-scale use of Ractors that I know of.
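For anyone who hasn't played with them yet, a bare-bones Ractor pool looks something like this (Ruby 3.x, still officially experimental; the string sort is just a stand-in for the CPU-bound unpack/post-install work that would have to be made Ractor-safe):

```ruby
# Minimal Ractor worker pool: a "pipe" Ractor distributes jobs,
# N workers pull from it and yield results back to the main Ractor.
pipe = Ractor.new do
  loop { Ractor.yield Ractor.receive }
end

workers = 4.times.map do
  Ractor.new(pipe) do |pipe|
    while (job = pipe.take)
      result = job.chars.sort.join  # stand-in for real CPU-bound work
      Ractor.yield [job, result]
    end
  end
end

%w[rack rake puma nokogiri].each { |name| pipe << name }

4.times do
  _ractor, (job, result) = Ractor.select(*workers)
  puts "#{job} -> #{result}"
end
```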
Really interesting post, but this part from the beginning stuck out to me:
> Ruby Gems are tar files, and one of the files in the tar file is a YAML representation of the GemSpec. This YAML file declares all dependencies for the Gem, so RubyGems can know, without evaling anything, what dependencies it needs to install before it can install any particular Gem. Additionally, RubyGems.org provides an API for asking about dependency information, which is actually the normal way of getting dependency info (again, no eval required).
It would be interesting to compare and contrast the parsing speed for a large representative set of Python dependencies compared to a large representative set of Ruby dependencies. YAML is famously not the most efficient format to parse. We might have been better than `pip`, but I would be surprised if there isn't performance left on the table by not using a more efficient format for dependency information (JSON, protobufs, whatever).
That said, the points at the end about not needing to parse gemspecs to install "most" dependencies would make this pretty moot (if the information is already returned from the gemserver).
Although YAML is a dreadful thing, given the context and the size of a normal gemspec, I would be very surprised if it showed up in any significant capacity, even with Psych's throughput in the low single digits of MB/s.
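If anyone wants to sanity-check that, a quick and decidedly non-rigorous comparison of Psych vs JSON on synthetic dependency metadata only takes a few lines:

```ruby
# Rough benchmark sketch: parse the same synthetic dependency list
# serialized as YAML (via Psych) and as JSON, 100 times each.
require "benchmark"
require "yaml"
require "json"

deps = 500.times.map { |i| { "name" => "gem#{i}", "version" => "~> #{i}.0" } }
yaml_doc = deps.to_yaml
json_doc = deps.to_json

Benchmark.bm(6) do |x|
  x.report("yaml") { 100.times { YAML.safe_load(yaml_doc) } }
  x.report("json") { 100.times { JSON.parse(json_doc) } }
end
```

JSON typically wins per byte by a wide margin, but as noted above, a normal gemspec is small enough that parsing shouldn't dominate an install either way.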
I’ve been squinting at the “global cache for all bundler instances” issue [1] and I’m trying to figure out if it’s a minefield of hidden complication or if it’s actually relatively straightforward.
It’s interesting as a target because it pays off more the longer it has been implemented, since the cache would only be shared by versions going forward.
[1] https://github.com/ruby/rubygems/issues/7249
I've been doing software of all kinds for a long long time.
I've never, ever, been in a position where I was concerned about the speed of my package manager.
When you're in a big company, the problem really starts showing. A service can have 100+ dependencies, many in private repos, and once you start adding and modifying dependencies, the resolver has to figure out a consistent set of versions across all of them to create the lock file, and that can be really slow.
Cloud dev environments can also take several minutes to set up.
Many of these package managers get invoked countless times per day (e.g., in CI to prepare an environment and run tests, while spinning up new dev/AI agent environments, etc).
Is the package manager a significant amount of time compared to setting up containers, running tests etc? (Genuine question, I’m on holiday and can’t look up real stats for myself right now)
Anecdotally, unless I'm doing something really dumb in my Dockerfile (recently I found a recursive `chown` that was taking 20m+ to finish, grr), installing dependencies is the longest step of the build. It's also the most failure-prone (due to transient network issues).
Yes, but if your CI isn't terrible, you have the dependencies cached, so that subsequent runs are almost instant, and more importantly, you don't have a hard dependency on a third-party service.
The reason for speeding up bundler isn't CI, it's newcomer experience. `bundle install` is the overwhelming majority of the duration of `rails new`.
> Yes, but if your CI isn't terrible, you have the dependencies cached, so that subsequent runs are almost instant, and more importantly, you don't have a hard dependency on a third-party service.
I’d wager the majority of CI usage fits your bill of “terrible”. No provider offers OOTB caching in my experience, and I’ve worked with multiple in-house providers, Jenkins, TeamCity, GHA, and Buildkite.
Buildkite can be used in tons of different ways, but it's common to use it with docker and build a docker image with a layer dedicated to the gems (e.g. COPY Gemfile Gemfile.lock; RUN bundle install), effectively caching dependencies.
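Spelled out, the trick is just layer ordering (a generic sketch, not any particular project's Dockerfile):

```dockerfile
FROM ruby:3.3
WORKDIR /app
# Copy only the dependency manifests first...
COPY Gemfile Gemfile.lock ./
# ...so this layer is rebuilt only when the manifests change.
RUN bundle install
# App code changes land below and don't bust the gem layer above.
COPY . .
```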
> GHA with the `setup-ruby` action will cache gems.
Caching is a great word - it only means what we want it to mean. My experience with GHA default caches is that it’s absolutely dog slow.
> Buildkite can be used in tons of different ways, but it's common to use it with docker and build a docker image with a layer dedicated to the gems (e.g. COPY Gemfile Gemfile.lock; RUN bundle install), effectively caching dependencies.
The only way docker caching works is if you have a persistent host. That’s certainly not most setups. It can be done, but if you have a persistent host, running in docker doesn’t gain you much at all: you’d see the same caching speedup if you just ran it on the host machine directly.
> The only way docker caching works is if you have a persistent host.
You can pull the cache when the build host spawns, but yes, if you want to build efficiently, you can't use ephemeral builders.
But overall that discussion isn't very interesting because Buildkite is more a kit to build a CI than a CI, so it's on you to figure out caching.
So I'll just reiterate my main point: a CI system must provide a workable caching mechanism if it wants to be both snappy and reliable.
I've worked for over a decade on one of the biggest Rails applications in existence, and restoring the 800ish gems from cache was a matter of a handful of seconds. And when rubygems.org had to yank a critical gem for copyright reasons [0], we continued building and shipping without disruption while other companies with bad CIs were all sitting ducks for multiple days.
[0] https://github.com/rails/marcel/issues/23
This is what I came to say. We pre-cache dependencies into an approved baseline image, and we cache approved and scanned dependencies locally with Nexus and Lifecycle.
Obviously effort vs reward comes in here, but if you have 20 devs and you save 5 seconds per run, you possibly save a context switch on every tool invocation.
Well, now my opinion of uv has been damaged. It...
> Ignoring requires-python upper bounds. When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically since upper bounds are almost always wrong. Packages declare python<4.0 because they haven’t tested on Python 4, not because they’ll actually break. The constraint is defensive, not predictive
Man, it's easy to be fast when you're wrong. But of course it is fast "because Rust", not because it just skips the hard parts of dependency constraint solving and hopes people don't notice.
> When multiple package indexes are configured, pip checks all of them. uv picks from the first index that has the package, stopping there. This prevents dependency confusion attacks and avoids extra network requests.
Ambiguity detection is important.
> uv ignores pip’s configuration files entirely. No parsing, no environment variable lookups, no inheritance from system-wide and per-user locations.
Stuff like this seems unlikely to contribute much to overall runtime, but it does decrease flexibility.
> No bytecode compilation by default. pip compiles .py files to .pyc during installation. uv skips this step, shaving time off every install.
... thus shifting the bytecode compilation burden to first startup after install. You're still paying for the bytecode compilation (and it's serialized, so you're actually spending more time), but you don't associate the time with your package manager.
I mean, sure, avoiding tons of Python subprocesses helps, but in our bold new free-threaded world, we don't have to spawn so many subprocesses.
>> Ignoring requires-python upper bounds. When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically since upper bounds are almost always wrong. Packages declare python<4.0 because they haven’t tested on Python 4, not because they’ll actually break. The constraint is defensive, not predictive
>
> Man, it's easy to be fast when you're wrong. But of course it is fast "because Rust", not because it just skips the hard parts of dependency constraint solving and hopes people don't notice.
Version bound checking is NP-complete but becomes tractable by dropping the upper-bound constraint. Russ Cox researched version selection in 2016 and described the problem in his "Version SAT" blog post (https://research.swtch.com/version-sat). That research is what informed Go's Minimal Version Selection (https://research.swtch.com/vgo-mvs) for modules.
It appears to me that uv is walking the same path. If most developers don't care about upper bounds and we can avoid expensive algorithms that may never converge, then dropping upper bound support is reasonable. And if uv becomes popular, then it'll be a sign that perhaps Python's ecosystem as a whole will drop package version upper bounds.
Perhaps so, although I'm more algorithmically optimistic. If ignoring upper bounds makes the problem more tractable, you can (see the sketch after this list):
1. solve dependency constraints as if upper bounds were absent,
2. check that your solution actually satisfies constraints (O(N), quick and passes almost all the time), and then
3. only if the upper-bound constraint check fails, fall back to the slower but reliable full solver.
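In code, that strategy might look like the following (entirely hypothetical: solve, satisfies_all?, and solve_with_upper_bounds are stand-ins for a real resolver, not uv's or pip's API):

```ruby
# Hypothetical sketch of the three-step fallback strategy above.
# `requirements` maps package names to arrays of constraint strings.
def resolve(requirements)
  # 1. Fast path: solve as if upper bounds ("<x.y") were absent.
  relaxed = requirements.transform_values do |constraints|
    constraints.reject { |c| c.start_with?("<") }
  end
  candidate = solve(relaxed)

  # 2. O(n) verification against the *full* constraint set.
  return candidate if satisfies_all?(candidate, requirements)

  # 3. Rare slow path: full solver with upper bounds honored.
  solve_with_upper_bounds(requirements)
end
```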
This approach would be clever, efficient, and correct. What you don't get to do is just ignore the fucking rules to which another system studiously adheres, then claim you're faster than that system.
> uv ignores pip’s configuration files entirely. No parsing, no environment variable lookups, no inheritance from system-wide and per-user locations.
> Stuff like this seems unlikely to contribute much to overall runtime, but it does decrease flexibility.
Astral have been very clear that they have no intention of replicating all of pip. `uv pip install` was a way to smooth the transition from using pip to using uv. The point of uv wasn't to rewrite pip in Rust - and thankfully so. For all the good that pip did, it has shortcomings that only a new package manager turned out to be capable of solving.
> No bytecode compilation by default. pip compiles .py files to .pyc during installation. uv skips this step, shaving time off every install.
> ... thus shifting the bytecode compilation burden to first startup after install. You're still paying for the bytecode compilation (and it's serialized, so you're actually spending more time), but you don't associate the time with your package manager.
In most cases this will have no noticeable impact (so it's a sane default), but when it does count you simply turn on `--compile-bytecode`.
I agree that bytecode compilation (and caching to .pyc files) seldom has a meaningful impact, but it's nevertheless unfair to tout it as an advantage of uv over pip, because by doing so you've constructed an apples-to-oranges comparison.
You could argue that uv has a better default behavior than pip, but that's not an engineering advantage: it's just a different choice of default setting. If you turned off eager bytecode compilation in pip you'd get the same result.
There's never going to be a Python 4, so I don't think they are wrong. Even if lightning strikes thrice, there's no way they could migrate people to Python 4 before uv could be updated to "fix" that.
> Ambiguity detection is important.
I'm not sure what you mean here. Pip doesn't detect any ambiguities. In fact Pip's behaviour is a gaping security hole that they've refused to fix, and as far as I know the only way to avoid it is to use `uv` (or register all of your internal company package names on PyPI which nobody wants to do).
> Pip doesn't detect any ambiguities. In fact Pip's behaviour is a gaping security hole that they've refused to fix, and as far as I know the only way to avoid it is to use `uv`
Agreed the current behavior is stupid, FWIW. I hope PEP 708 and 752 get implemented soon. I'm just pointing out that there's an important qualitative difference between
1. we do the same job, but much faster; and
2. we decided your job is stupid and so don't do it, realizing speedups.
uv presents itself as #1 but is actually #2, and that's a shame.
How uv got so fast - https://news.ycombinator.com/item?id=46393992 - Dec 2025 (457 comments)
> I've never, ever, been in a position where I was concerned about the speed of my package manager.
Compiler, yes. Linker, sure. Package downloader? No.
> My experience with GHA default caches is that it’s absolutely dog slow.
GHA is definitely far from the best, but it works, e.g. 1.4 seconds to restore 27 dependencies: https://github.com/redis-rb/redis-client/actions/runs/205191...
> Obviously effort vs reward comes in here, but if you have 20 devs and you save 5 seconds per run, you possibly save a context switch on every tool invocation.
But in public tooling, where the benefit is across tens of thousands or more? It's basically always worth it.
> This approach would be clever, efficient, and correct. What you don't get to do is just ignore the fucking rules to which another system studiously adheres, then claim you're faster than that system.
That's called cheating.
> thus shifting the bytecode compilation burden to first startup after install
Which is a much better option.
> There's never going to be a Python 4
There will be, but it'll be called Pyku ...