My migration from Heroku to Azure Container Apps was not a sudden decision; it was a slow, expensive drift apart. Consider this a conversation with myself about the moment I finally broke up with Heroku.
For years, Heroku was my “easy button.” git push heroku main was the most satisfying command in my terminal. It felt like magic. I didn’t care about infrastructure; I just cared about shipping code.
But then the free tier ended. Then the “Hobby” tier costs started adding up. And then, as my application scaled, the “Performance-M” and “Performance-L” Dyno pricing started looking less like a cloud bill and more like a mortgage payment.

I was stuck in a dilemma common to many developers in 2025:

  1. Stay on Heroku: Pay a premium for simplicity while accepting restrictive limits.
  2. Move to Kubernetes (AKS): Save money on compute, but spend my weekends writing YAML manifests and debugging IngressController failures.

I wanted the simplicity of Heroku but the power of Kubernetes. I thought it was a pipe dream.
Then I found Azure Container Apps (ACA).
I migrated everything over a long weekend. And while the cost savings were nice, there was one specific feature that convinced me I had made the right choice.


The “Dyno” Trap

Before I explain the solution, let me explain the friction.
Heroku charges you based on provisioned capacity. If you buy a “Standard-2x” Dyno, you pay for it 24/7, whether users are hitting your site or not.
For my staging environments, review apps, and background workers, this was burning money. My background worker only needed to run when a job was in the queue. My staging site only needed to run when I was testing it. But on Heroku, the meter was always running.
I needed Serverless, but I didn’t want to rewrite my Express.js and Python apps into Azure Functions or AWS Lambda. I wanted to just deploy my container.


Heroku to Azure Container Apps: The Game-Changing KEDA “Scale to Zero” Feature

The feature that changed everything is Serverless Containers with KEDA Auto-Scaling.
Azure Container Apps is built on top of Kubernetes, but Microsoft hides the cluster management from you. You just give it a container image.
But the secret sauce is KEDA (Kubernetes Event-Driven Autoscaling).

Why KEDA is the Magic Bullet

On Heroku, autoscaling is reactive and rudimentary (mostly based on response time). In Azure Container Apps, scaling is event-driven and granular.
Here is what I could suddenly do that was impossible on Heroku:

  1. Scale to Zero: If no one is using my staging environment, the replica count drops to 0. My cost drops to $0.00. The moment a request comes in, it spins up in seconds.
  2. Queue-Based Scaling: My background worker doesn’t need to look at CPU usage. It needs to look at the RabbitMQ or Azure Storage Queue depth. If there are 1,000 messages, spin up 10 containers. If there are 0 messages, spin up 0 containers.

I was no longer paying for “idle.” I was paying for work.

The Code: Defining the Rules

In Heroku, scaling is a slider in the UI. In Azure Container Apps, it’s a declarative rule in my YAML configuration.
Here is the configuration for my background worker. It tells Azure: “Look at this Azure Queue. If there are more than 10 messages per instance, scale up.”

resources:
  cpu: 0.5
  memory: 1.0Gi

scale:
  minReplicas: 0          # scale to zero when there is no work
  maxReplicas: 10
  rules:
  - name: my-queue-rule
    custom:
      type: azure-queue   # KEDA Azure Storage Queue scaler
      metadata:
        queueName: orders-to-process
        queueLength: '10'  # target messages per replica
      auth:
      - secretRef: queue-connection-string   # Container App secret holding the storage connection string
        triggerParameter: connection

When the queue is empty, this app stops existing. I pay nothing. The moment an order comes in, the app wakes up, processes it, and goes back to sleep.

The Epiphany: I realized I was saving 50% on my cloud bill not because the compute was cheaper (though it is), but because I stopped paying for resources that were doing nothing.

My Migration Journey

Moving from a PaaS like Heroku to a Container-Native platform isn’t just “copy-paste.” It requires a mindset shift. Here is how I tackled the migration.

Step 1: Containerizing the Buildpacks

Heroku uses “Buildpacks” to magically turn code into a running app. Azure Container Apps expects a container image.

I had to write Dockerfiles for my services.

  • Good News: This removed the “it works on Heroku but not locally” bugs.
  • Bad News: I had to learn how to optimize Docker layers to keep image sizes down.

Once I had the Dockerfile, I pushed the images to Azure Container Registry (ACR) using GitHub Actions.
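
For context, here is roughly what that pipeline boils down to. This is just a sketch; the registry name, image tag, and service name are placeholders, and my GitHub Actions workflow wraps these same commands.

# Build the image from the new Dockerfile
docker build -t myregistry.azurecr.io/worker:v1 .

# Log in to Azure Container Registry (assumes an earlier `az login`)
az acr login --name myregistry

# Push the image so Azure Container Apps can pull it
docker push myregistry.azurecr.io/worker:v1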

Step 2: Secrets & Environment Variables

Heroku has heroku config:set.
Azure has the “Secrets” and “Environment Variables” blade.

I scripted the migration. I exported my Heroku config to a .env file, and then used the Azure CLI to bulk-upload them into the Container App revision.
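
The script itself was trivial. A minimal sketch, assuming placeholder app, resource group, and secret names:

# Export Heroku config vars in KEY=value shell format
heroku config -s -a my-heroku-app > .env

# Bulk-load them as environment variables on the Container App
az containerapp update \
  --name my-worker \
  --resource-group my-rg \
  --set-env-vars $(cat .env | xargs)

# Anything sensitive becomes a proper secret instead
# (env vars can then reference it with the secretref: prefix)
az containerapp secret set \
  --name my-worker \
  --resource-group my-rg \
  --secrets queue-connection-string="<storage-connection-string>"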

Pro Tip: Azure Container Apps allows you to reference secrets directly from Azure Key Vault. This is a massive security upgrade over Heroku’s environment variables, which are visible to anyone with dashboard access.

Step 3: Goodbye Heroku Postgres

I moved my data from Heroku Postgres to Azure Database for PostgreSQL (Flexible Server).

The migration was standard:

  1. pg_dump from Heroku.
  2. pg_restore to Azure.
  3. Update connection strings.
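
In command form, it looked roughly like this (connection strings and names are placeholders):

# 1. Dump the Heroku database in custom format
pg_dump "$HEROKU_DATABASE_URL" -Fc --no-owner -f heroku.dump

# 2. Restore into Azure Database for PostgreSQL Flexible Server
pg_restore -d "$AZURE_DATABASE_URL" --no-owner --clean --if-exists heroku.dump

# 3. Point the app at the new database via a Container App secret
az containerapp secret set --name my-api --resource-group my-rg \
  --secrets database-url="$AZURE_DATABASE_URL"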

Azure’s Flexible Server offers burstable compute tiers, which fit my “scale to zero” philosophy perfectly for non-production environments.


The “Dapr” Bonus

I didn’t move for Dapr (Distributed Application Runtime), but I stayed for it.
Azure Container Apps has native integration with Dapr. It acts as a “sidecar” that handles service-to-service communication, state management, and pub/sub.
Instead of hardcoding “RabbitMQ” logic into my Node.js app, I just call the Dapr sidecar via HTTP. If I want to switch from RabbitMQ to Azure Service Bus later, I don’t change my code; I just change a Dapr configuration file.
It made my microservices loosely coupled, which is something Heroku never really helped me solve.
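
To make that concrete, here is the kind of call my worker makes to publish an event through the sidecar. The component name and topic are just examples from my setup; 3500 is Dapr’s default HTTP port.

# Publish an order event via the Dapr pub/sub HTTP API
# Swapping RabbitMQ for Azure Service Bus only changes the Dapr
# component definition, not this call
curl -X POST http://localhost:3500/v1.0/publish/orders-pubsub/new-orders \
  -H "Content-Type: application/json" \
  -d '{"orderId": "1234", "status": "received"}'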


Real Results: The Numbers

After three months of running my SaaS on Azure Container Apps, here is the breakdown.

Metric | Heroku Era | Azure Container Apps Era
------ | ---------- | ------------------------
Staging Cost | $250/mo (Always On) | $12/mo (Scale to Zero)
Production Scaling | Manual / Reactive | Event-Driven (KEDA)
Deployment Speed | Fast (Git Push) | Fast (GitHub Actions)
Lock-in | High (Heroku Dynos) | Low (Standard Docker)

1. 50% Total Cost Reduction

The savings on staging and dev environments alone paid for the migration effort. By enabling minReplicas: 0 on everything except the production frontend, my bill plummeted.

2. Cold Starts are Manageable

I was worried that “Scaling from Zero” would be slow.
In reality, a new container spins up in about 3-5 seconds. For a background worker, this is irrelevant. For a staging web app, the first user waits 4 seconds, and then it’s instant. That is a trade-off I will happily make to save hundreds of dollars.

3. Production-Grade without the DevOps

I have a Load Balancer, SSL termination (automatic certificates), Blue/Green deployments (via “Revisions”), and traffic splitting.
I get all of this without writing a single Kubernetes Ingress manifest.
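
For example, shifting traffic between two revisions during a blue/green rollout is a single CLI call. A sketch with placeholder app and revision names:

# Send 80% of traffic to the current (blue) revision and 20% to the new (green) one
az containerapp ingress traffic set \
  --name my-api \
  --resource-group my-rg \
  --revision-weight my-api--blue=80 my-api--green=20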


Final Thoughts

I’m talking to myself here, and to anyone who is holding onto their Heroku account because they are afraid of the complexity of the “Big Cloud.”
Heroku was great for its time. It taught a generation of developers how to deploy. But the cloud has evolved.
Azure Container Apps occupies that perfect middle ground. It gives you the portability of Docker, the power of Kubernetes, and the simplicity of PaaS.
The ability to scale to zero and scale based on actual events (not just CPU usage) is the modern way to build cloud-native apps.
I don’t miss git push heroku main anymore. Watching my GitHub Action turn green and seeing my container spin up only when needed… that feels like the future.
