The Conversation I Didn’t Want to Have with Myself
Using uv with Docker multi-stage builds wasn’t something I planned to do.
I need to admit something.
For years, my Python Docker images were a mess. They worked, but every time I ran docker images and saw the file sizes, I felt a little sick.
Gigabyte-sized images.
Slow CI pipelines that took just long enough to break my flow.
Dependency issues that appeared only in production because my local environment never quite matched CI.
I kept telling myself: “This is fine. Everyone’s Python Docker images are heavy.”
That lie worked—until container registry costs started climbing and deployments stretched past ten minutes.
This is the story of how using uv with Docker multi-stage builds completely changed how I build Python containers—and why I finally stopped dreading docker build.
The “Good Enough” Trap I Fell Into
I wasn’t careless. I was pragmatic.
My setup looked like what most Python teams were doing a few years ago:
- pip for dependency installation
- Single-stage Docker builds
- Virtual environments inside containers
- Layers piling up over time
It was good enough when the project was small and CI ran once a day.
But at scale, “good enough” became fragile.
I wasn’t building containers anymore.
I was maintaining a house of cards.
The Accidental Discovery of uv
I didn’t find uv while searching for Docker tools.
I found it because I was frustrated.
Frustrated with pip resolving the same dependency graph again and again.
Frustrated with CI jobs timing out because one mirror was slow.
Frustrated with dependency conflicts being discovered far too late.
Then I saw a simple claim:
“A fast Python package manager written in Rust.”
I didn’t trust it. But I tried it locally.
The speed wasn’t incremental—it was obvious.
That experiment eventually led me to using uv with Docker multi-stage builds, and that’s where everything clicked.
The One Feature That Changed Everything
Speed is nice. Rust is impressive.
But the real game-changing feature was this:
uv installs dependencies cleanly into the system environment—perfect for containers.
That single detail unlocked multi-stage builds for Python in a way I hadn’t experienced before.
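Concretely, it boils down to one flag. A minimal illustration, with a stand-in package name:
# Install straight into the interpreter's global site-packages,
# no virtualenv required ("requests" is just an example package)
uv pip install --system requests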
Before this, Python multi-stage builds felt awkward (see the sketch after this list):
- Copying virtual environments
- Fixing broken paths
- Hoping symlinks survived the transition
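For contrast, here is a sketch of that older venv-copying pattern. The paths and commands are illustrative, not my actual file:
FROM python:3.12 AS builder
WORKDIR /app
# Build inside a virtualenv so the whole tree can be copied out later
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install -r requirements.txt

FROM python:3.12-slim
WORKDIR /app
# Copy the venv wholesale and hope its interpreter symlinks still resolve
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY . .
CMD ["python", "main.py"]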
With uv, I realized I could treat dependencies as pure build artifacts.
No virtualenv hacks.
No leftover tooling.
No ambiguity.
Docker multi-stage builds finally felt native.
Why Multi-Stage Builds Finally Made Sense to Me
I had known about Docker multi-stage builds long before this.
I avoided them.
They felt like an optimization for people with too much time on their hands. My thinking was simple: if the container runs, why complicate the Dockerfile?
But pairing multi-stage builds with uv changed how I thought about containers entirely.
Multi-stage builds are not about clever Docker tricks.
They are about boundaries.
One stage exists to build.
The other exists to run.
Before this change, my containers were doing both jobs at once—and doing neither particularly well. The moment I separated those responsibilities, the Dockerfile became easier to reason about, not harder.
That was the turning point.
My Old Dockerfile (The Problem)
This is roughly what I used to run:
FROM python:3.12
WORKDIR /app
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
What was wrong here?
- pip runs in the final image
- Build tools stay forever
- No separation between build and runtime
- Bloated layers
It worked—but it was sloppy engineering.
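If you want to see where that weight lives, Docker will show you the per-layer cost (the image name here is a placeholder):
# List every layer in the image with its size
docker history myapp:latest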
The New Approach: uv + Multi-Stage Builds
Here’s the exact Dockerfile I use now.
Stage 1: Builder
FROM python:3.12-slim AS builder
WORKDIR /app
# Copy uv as a single static binary
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
# Copy dependency definitions
COPY pyproject.toml uv.lock ./
# Install only the declared dependencies into system site-packages
# (the project source isn't copied yet, so we don't install the app itself)
RUN uv pip install --system --no-cache -r pyproject.toml
# Copy application code
COPY . .
Stage 2: Runtime (Slim & Clean)
FROM python:3.12-slim
WORKDIR /app
# Copy only runtime artifacts
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
COPY --from=builder /app /app
CMD ["python", "main.py"]
Why This Works (And Why It’s Not Obvious)
This setup gets three things right:
- uv runs only in the builder stage
- System-wide installs simplify everything
- Zero build tooling in production
The runtime container doesn’t care how dependencies were installed.
It only knows they’re there.
Another subtle benefit is debuggability.
With my old images, debugging production issues felt like archaeology. There were too many layers, too many side effects, and too many unknowns.
With this setup, the mental model is simple:
- Builder stage creates artifacts
- Runtime stage consumes artifacts
If something breaks, I know exactly where to look.
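That boundary is directly actionable, too. When a dependency looks wrong, I can build and enter just the builder stage (names illustrative):
# Build only the first stage and tag it separately
docker build --target builder -t myapp-builder .
# Open a shell where the artifacts were produced
docker run --rm -it myapp-builder /bin/bash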
The Migration Journey
I didn’t flip the switch overnight.
Phase 1: Local Validation
I replaced pip with uv locally and generated a uv.lock. It immediately surfaced a dependency conflict pip had ignored. Fixing that alone improved stability.
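The local switch was roughly this; the installer one-liner is from uv's documentation, and the project is assumed to already have a pyproject.toml:
# Install uv via the documented standalone installer
curl -LsSf https://astral.sh/uv/install.sh | sh
# Resolve the full dependency graph and write uv.lock
uv lock
# Install the declared dependencies to compare behavior against pip
# (inside a virtualenv, or add --system)
uv pip install -r pyproject.toml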
Phase 2: Parallel Builds
I ran the old and new builds side by side in CI. The uv-based build consistently finished before pip had completed downloads.
Phase 3: Production Cutover
I monitored logs closely for missing modules. There were none.
Real Results (No Marketing Numbers)
| Metric | Old Setup | uv + Multi-Stage |
|---|---|---|
| Image Size | ~1.2 GB | ~240 MB |
| Dependency Install | ~3–4 min | ~25 sec |
| CI Build Time | ~7 min | ~2 min |
| Build Reliability | Inconsistent | Predictable |
This wasn’t an optimization.
It was a reset.
When This Setup Might Not Be Worth It
If you’re building:
- Throwaway images
- Infrequent batch jobs
- One-off experiments
This might feel like overkill.
What This Changed in My Day-to-Day Work
The biggest improvement wasn’t image size or CI speed.
It was confidence.
I stopped worrying about whether my local environment matched production.
I stopped second-guessing dependency upgrades.
I stopped treating Dockerfiles as fragile artifacts no one wanted to touch.
Rebuilding images became cheap, predictable, and safe.
That changed how often I refactor and how confidently I ship.
Final Thoughts
uv didn’t save my Docker images by itself.
The combination did:
- uv for fast, deterministic installs
- Docker multi-stage builds for clean separation
Together, they forced me to treat containers like production artifacts—not temporary shells.
I stopped fighting Docker.
I stopped babysitting CI.
And for the first time in years, my Python containers felt boring again.
That’s the highest compliment I can give.
