Overview

When configuring escalation paths, you can branch based on priority and working hours. If you want to vary how alerts are handled based on other attributes — such as the service, environment, team, or severity label from your monitoring tool — the right place to do that is in your alert sources, not in the escalation path itself. This article explains how to configure your alert sources so that the right priority is assigned to each alert before it reaches your escalation path.

Why set priority at the alert source level?

Escalation paths are intentionally kept simple: they branch on priority and working hours only. This is because escalation paths aren't just used for alerts; they're also used whenever someone manually escalates to that team. If you want incoming alerts to be routed differently based on attributes like:
  • The service or component that triggered the alert
  • The environment (production vs staging)
  • A severity label from your monitoring tool
  • Any other payload field
…the way to achieve this is by mapping those attributes to a priority in your alert source config. Your escalation path then branches on the resulting priority.
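To make the division of responsibilities concrete, here is a minimal sketch of the model in Python. The field names (`environment`, `severity`, `service`) and the P1–P3 labels are illustrative assumptions, not a fixed incident.io schema:

```python
# Sketch of the routing model: the alert source config maps payload
# attributes to a priority, and the escalation path only ever sees
# that priority (plus working hours).

def priority_for_alert(payload: dict) -> str:
    """Map assumed payload attributes to an assumed P1-P3 priority."""
    if payload.get("environment") != "production":
        return "P3"  # non-production alerts are low urgency
    if payload.get("severity") == "critical":
        return "P1"
    return "P2"

alert = {"environment": "production", "severity": "critical", "service": "api"}
print(priority_for_alert(alert))  # → P1
```

The escalation path never inspects `service` or `environment` directly; by the time an alert reaches it, those attributes have already been collapsed into a single priority.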

How to set priority in an alert source

  1. Navigate to Settings > Alerts > Routing
  2. Click on the alert source you want to configure
  3. In the alert source configuration page, find the Priority section
  4. Click Edit to open the priority configuration drawer

Option 1: Set a static priority

If all alerts from this source should have the same priority:
  1. Select A static value
  2. Choose the priority level (e.g. P1, P2, P3)
  3. Click Apply
This is useful when you know that every alert from a particular source is always the same urgency — for example, a source that only fires for critical production outages.

Option 2: Set a dynamic priority based on the alert payload

If alerts from this source should have different priorities depending on what’s in the payload:
  1. Select A dynamic value
  2. Use an expression to derive the priority from the incoming alert payload
For example, you might:
  • Map a Datadog monitor’s priority tag to an incident.io priority
  • Use the severity field from a Prometheus Alertmanager payload
  • Check whether the alert’s environment label is production (P1) or staging (P3)
Expressions allow you to inspect any field in the incoming alert payload and map it to one of your configured alert priorities. You can use the alert preview on the right-hand side of the configuration page to see real payloads and test your expressions.
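As a rough illustration of the logic such an expression encodes, here is a Python sketch of the first two examples above. The tag and label names are assumptions based on typical Datadog and Alertmanager payloads; check the alert preview to see what your source actually sends:

```python
# Illustrative Python equivalent of a dynamic-priority expression.
# Payload shapes are assumed, not a fixed incident.io schema.

DATADOG_TAG_TO_PRIORITY = {"p1": "P1", "p2": "P2", "p3": "P3"}
ALERTMANAGER_SEVERITY_TO_PRIORITY = {"critical": "P1", "warning": "P2", "info": "P3"}

def datadog_priority(payload: dict) -> str:
    # Assumes tags arrive as a list like ["priority:p2", "team:payments"]
    for tag in payload.get("tags", []):
        key, _, value = tag.partition(":")
        if key == "priority":
            return DATADOG_TAG_TO_PRIORITY.get(value, "P3")
    return "P3"  # fall back to the lowest urgency if no tag is present

def alertmanager_priority(payload: dict) -> str:
    # Assumes an Alertmanager-style "labels" object with a "severity" key
    severity = payload.get("labels", {}).get("severity", "")
    return ALERTMANAGER_SEVERITY_TO_PRIORITY.get(severity, "P3")

print(datadog_priority({"tags": ["priority:p1", "env:production"]}))  # → P1
print(alertmanager_priority({"labels": {"severity": "warning"}}))     # → P2
```

Falling back to the lowest priority when a field is missing (rather than failing) is a deliberate choice here: an unrecognised payload still gets routed somewhere, just without urgency.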

Managing alert priorities

You can manage which priorities are available across your organisation from the alert source configuration page:
  1. In the priority configuration drawer, click Manage priorities
  2. Add, rename, re-order, or remove priorities as needed
Priorities are shared across all alert sources and escalation paths, so changes here will be reflected everywhere.

Putting it all together

Here is a typical setup:
  1. Alert source: Incoming Datadog alerts have their priority set dynamically based on the monitor’s priority tag
  2. Alert route: Routes alerts to the appropriate escalation path based on the affected service
  3. Escalation path: Branches on priority — P1 alerts page the on-call engineer immediately, P2 alerts wait 5 minutes, P3 alerts wait until working hours
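The three steps above can be sketched end to end as follows. The service names, wait times, and payload fields are illustrative assumptions used to show how each stage hands off to the next:

```python
# End-to-end sketch of the typical setup: source sets priority,
# route picks a path by service, path branches on priority only.
# All names and timings here are assumptions for illustration.

PRIORITY_DELAY_MINUTES = {"P1": 0, "P2": 5, "P3": None}  # None = wait for working hours

def handle_alert(payload: dict) -> str:
    # 1. Alert source: derive priority from the monitor's priority tag
    tag = payload.get("monitor_priority", "p3")
    priority = {"p1": "P1", "p2": "P2"}.get(tag, "P3")

    # 2. Alert route: choose an escalation path based on the service
    path = f"escalation-path/{payload.get('service', 'default')}"

    # 3. Escalation path: branch on the resulting priority alone
    delay = PRIORITY_DELAY_MINUTES[priority]
    if delay is None:
        return f"{path}: hold {priority} alert until working hours"
    return f"{path}: page on-call for {priority} after {delay} min"

print(handle_alert({"monitor_priority": "p1", "service": "payments"}))
# → escalation-path/payments: page on-call for P1 after 0 min
```

Note that only step 1 ever reads the raw payload; steps 2 and 3 work entirely from the attributes it produced, which is what keeps the escalation path simple.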