Datadog to Azure Monitor was not a migration I planned. It was one I was forced into after seeing our monthly monitoring bill.
It started the moment my finance director Slacked me a screenshot of our credit card bill.
“Is this Datadog charge correct?” she asked.
I stared at the number. It was higher than our production database costs. It was higher than our Kubernetes cluster costs. In fact, we were spending almost as much on monitoring our infrastructure as we were on the infrastructure itself.
I felt sick.
I had been a Datadog evangelist for years. I loved the UI. I loved the cute dog logo. I loved how easy it was to drag and drop widgets. But as our Azure footprint grew, the “Datadog Tax” became unsustainable.
We were paying twice: once for data egress (sending logs out of Azure) and again for Datadog ingestion. And honestly? I was tired of managing API keys, updating agents, and debugging why a specific log stream wasn’t showing up.
That afternoon, I made a decision that scared me. I decided to rip out the industry’s favorite monitoring tool and go “native” with Azure Monitor.
Here is why I did it, and the one feature that made the switch not just cheaper, but better.
The “Tax” on Observability
Before I explain the solution, let me explain the pain.
When you use a third-party observability tool like Datadog, New Relic, or Splunk with Azure, you are fighting against gravity.
- The Egress Cost: Every gigabyte of logs you send out of an Azure data center costs money.
- The Agent Tax: You have to install, patch, and manage a sidecar or agent on every VM and AKS node.
- The Context Switch: When an alert fires in Datadog, you still have to log in to the Azure Portal to fix the resource. The context switching breaks your flow.
I found myself spending 20% of my week just “keeping the lights on” for the monitoring system.
Datadog to Azure Monitor: The Game-Changing Feature (KQL & Native Data Plane)
The feature that convinced me to leave Datadog wasn’t a dashboard widget. It was Log Analytics powered by KQL (Kusto Query Language).
In Datadog, searching logs is easy, but analyzing them means learning Datadog’s own query syntax or clicking through menus.
In Azure Monitor, the logs aren’t just text files. They are rows in a massive, queryable database.
Why KQL is a Superpower
Imagine you want to find the top 5 slowest API requests, grouped by user country, but only for requests that resulted in a 500 error, over the last hour.
In many tools, that’s a complex report. In Azure Monitor, it’s a standard query.
```kusto
requests
| where timestamp > ago(1h)
| where success == false and resultCode == "500"   // resultCode is a string in this table
| summarize avgDuration = avg(duration) by name, client_CountryOrRegion
| top 5 by avgDuration desc
| render barchart
```
This “code-first” approach to observability changed how I debugged. I wasn’t just looking at charts; I was interrogating my infrastructure like a detective.
Zero-Configuration Observability
The other half of this “Native Integration” feature is Application Insights Auto-Instrumentation.
With Datadog, I had to bake the agent into my Docker images. If I forgot, I was blind.
With Azure Monitor, I went to my App Service, clicked “Turn on Application Insights,” and… that was it.
- The connection string was injected automatically.
- The SDK was attached to the process.
- Dependency tracking (SQL, HTTP calls) started flowing instantly.
I didn’t have to touch a single line of code to get distributed tracing.
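To sanity-check that data was actually flowing, I could query the dependencies table a few minutes later. A minimal sketch (the table and column names are the standard Application Insights schema):
```kusto
// Dependency calls captured by auto-instrumentation in the last hour
dependencies
| where timestamp > ago(1h)
| summarize calls = count(), avgDuration = avg(duration) by type, target
| order by calls desc
```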
My Migration Journey
Moving off Datadog is intimidating. We had years of accumulated knowledge trapped in those dashboards. Here is how I broke down the migration.
Phase 1: The Audit & Mapping
First, I inventoried every dashboard and monitor we had in Datadog and dropped the ones nobody was actually using. For the rest, I mapped them:
- Datadog Dashboard → Azure Workbook
- Datadog Monitor → Azure Monitor Alert Rule
- Datadog Log Pipeline → Azure Log Analytics Workspace
Phase 2: Translating the Queries
Every Datadog metric query needed a KQL equivalent. Take a basic CPU monitor.
Datadog:
```
avg:system.cpu.idle{host:my-host} by {host}
```
Azure KQL:
```kusto
Perf
| where ObjectName == "Processor" and CounterName == "% Idle Time"
| summarize avg(CounterValue) by Computer
```
Once I built a “Cheat Sheet” for my team, the transition became faster.
Phase 3: Azure Workbooks
Workbooks aren’t just static dashboards; they are interactive reports. I built a “Morning Coffee” workbook that mixed text, KQL queries, and metrics. I could write a paragraph explaining what a metric meant right next to the chart. It turned our dashboards into documentation.
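As a sketch of the kind of query that pairs well with a paragraph of explanation, here is a per-endpoint success rate over the last day:
```kusto
// Success rate per endpoint over the last 24 hours
requests
| where timestamp > ago(24h)
| summarize total = count(), failed = countif(success == false) by name
| extend successRate = round(100.0 * (total - failed) / total, 2)
| order by successRate asc
```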
Custom Metrics: The Code Transition
One thing I loved about Datadog was how easy it was to send a custom metric, like “items_processed.”
I feared Azure would require complex XML configuration. I was wrong.
Here is the C# code I used to replace the Datadog SDK.
Old Datadog Code:
```csharp
Statsd.Increment("orders.processed");
```
New Azure Monitor Code:
```csharp
// TelemetryClient from the Microsoft.ApplicationInsights package, usually injected via DI
private readonly TelemetryClient _telemetry;

// In your method
_telemetry.GetMetric("OrdersProcessed").TrackValue(1);
```
It flows into the customMetrics table in Log Analytics, ready to be queried immediately.
Real Results: The Numbers Don’t Lie
After three months of running purely on Azure Monitor, I sat down with my finance director again. The results were staggering.
| Metric | Datadog Era | Azure Monitor Era |
|---|---|---|
| Monthly Cost | $4,500 + Egress Fees | $1,200 (bundled) |
| Data Egress Cost | ~$800/mo | $0 (internal traffic) |
| User Access | Shared “Admin” Login | Azure AD (RBAC) |
| Troubleshooting | Context switching tabs | Single Portal Experience |
1. 70% Cost Reduction
This isn’t an exaggeration. By keeping the data inside the Azure network, we eliminated egress fees entirely. Plus, Azure offers generous free grants for Log Analytics (5GB/month free per billing account usually).
2. Security & RBAC Integration
In Datadog, I had to manually manage user accounts. If someone left the company, I had to remember to revoke their Datadog access.
With Azure Monitor, access is controlled by Azure Active Directory. If I remove a user from the AD group, they lose access to the logs immediately.
- Developers get “Reader” access.
- Ops gets “Contributor” access.
- Auditors get limited scope.
I didn’t have to configure this; it just inherited the permissions we already set up on the Resource Groups.
3. Faster Troubleshooting
Logs used to take 2-3 minutes to appear in Datadog. With Azure Monitor’s Live Metrics stream, telemetry shows up in under a second.
But the real speed came from the correlation.
When I look at a failed request in Application Insights, I can click “View Remote Dependencies” and jump straight to the SQL Database performance metrics for that exact second. It’s all linked.
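The same correlation works in raw KQL too. Here is a sketch that joins failed requests to their downstream dependency calls via the shared operation_Id:
```kusto
requests
| where timestamp > ago(1h) and success == false
| project operation_Id, requestName = name, resultCode, requestDuration = duration
| join kind=inner (
    dependencies
    | project operation_Id, dependencyName = name, type, target, dependencyDuration = duration
) on operation_Id
| order by requestDuration desc
```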
Is Azure Monitor Perfect?
No. I promised to be honest in this guide.
The UI can be slow. The Azure Portal is a heavy web application. Sometimes, loading a complex blade takes a few seconds, whereas Datadog’s UI is snappy and optimized.
Alerting is verbose. Setting up an alert in Datadog is often just a slider. In Azure, you have to create an Action Group, then an Alert Rule, then define the condition logic. It’s more powerful, but it’s more clicks.
Visualization limits. Datadog is beautiful. Azure Workbooks are functional. If you need “Executive Eye Candy” to put on a TV screen in the lobby, Datadog wins. If you need “Engineering Precision,” Azure Monitor wins.
Final Thoughts
I’m talking to myself here, and to anyone who is staring at a renewal contract for a third-party observability tool.
There was a time when Azure Monitor (and App Insights) was just a “basic” tool. That time has passed. It is now a full-scale observability platform that rivals the giants.
The friction of moving data out of your cloud provider is real—both in cost and complexity. By moving to Azure Monitor, I didn’t just save money. I simplified my architecture.
I stopped being a “Datadog Administrator” and went back to being a Cloud Architect. And for that alone, the migration was worth every painful KQL query I had to learn.
The grass isn’t greener on the other side. The grass is greener where you don’t have to pay a toll to water it.
