Alert routes are the set of filters and rules determining what happens to alerts coming from alert sources such as observability platforms, error tracking tools, and ticketing platforms. Alerts can be routed to page people, create incidents, and notify Slack or Microsoft Teams channels. Configure alert routes to manage alerts at scale across your organization. To get started, navigate in the sidebar to Settings → Alerts → Routes.

How alert routes work

Alert routes ingest incoming alerts from sources and process them through four configurable stages:

1. Filter alerts (optional) - Exclude irrelevant alerts using alert attributes (e.g., filter out staging environments)
2. Configure escalations - Route alerts to escalation paths or specific users
3. Create incidents - Define when alerts trigger incidents, configure grouping, set incident details and triage mode
4. Send to Slack or Microsoft Teams - Notify channels for visibility and manual actions
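The four stages above can be sketched as a small pipeline. This is a simplified, hypothetical model for illustration; all names (`process_alert`, the `route` keys) are invented here, not the incident.io API:

```python
# Simplified sketch of an alert route's four stages (illustrative names only).

def process_alert(alert, route):
    # Stage 1: filter, dropping alerts the route should not process.
    if not route["filter"](alert):
        return {"dropped": True}
    return {
        "dropped": False,
        # Stage 2: pick an escalation path (static or dynamic).
        "escalation": route["escalate"](alert),
        # Stage 3: decide whether this alert creates an incident.
        "creates_incident": route["creates_incident"](alert),
        # Stage 4: notify chat channels for visibility.
        "channels": route["notify"](alert),
    }

# Example route: drop staging alerts, page one fixed path, notify one channel.
route = {
    "filter": lambda a: a.get("environment") != "staging",
    "escalate": lambda a: "infrastructure-escalation",
    "creates_incident": lambda a: a.get("priority") == "P1",
    "notify": lambda a: ["#alerts"],
}
```

Each stage is configured independently in the dashboard; the sections below walk through them one at a time.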

1. Filter alerts

Filter out alerts that should not be processed by this route. Use alert attributes like environment, priority, or team to exclude irrelevant alerts. Example use cases include:
  • Filtering out Staging environment alerts to focus only on production issues
  • Excluding low-priority alerts that don’t require immediate attention
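Both example use cases boil down to a predicate over alert attributes. A minimal sketch, assuming alerts carry `environment` and `priority` attributes (hypothetical names):

```python
# Hypothetical filter predicate mirroring the two example use cases above.
def keep_alert(alert):
    # Drop staging alerts to focus only on production issues.
    if alert.get("environment") == "staging":
        return False
    # Drop low-priority alerts that don't require immediate attention.
    if alert.get("priority") in ("P4", "P5"):
        return False
    return True
```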

2. Configure escalations

Choose between static routing for simple, predictable escalations or dynamic routing to automatically route alerts based on their context using the catalog.
| Routing type | When to use | Example scenario |
| --- | --- | --- |
| Dynamic routing (recommended) | Best when one alert route needs to handle multiple teams or services. Configure once, using alert attributes and catalog relationships to choose the right escalation path. | Pick the escalation path of the team owning the affected service, e.g. Alert > Service > Owner > Escalation paths |
| Static routing | Choose a specific escalation path or user directly. Best when one alert source should always notify the same team or person. | Escalate all P1 alerts from Grafana to the Infrastructure team's escalation path |

How dynamic routing works

Dynamic routing uses alert attributes to read context from the alert payload and traverse your catalog to find the correct escalation path. For example, a dynamic alert route can:
  1. Read the impacted Service from the alert attributes (e.g., “Billing API”)
  2. Find which Team owns that service in your catalog (e.g., “Payments”)
  3. Page that team’s escalation path automatically
This approach allows one alert route configuration to work across all services and teams without manual updates. For example, when team ownership changes in your catalog, routing updates automatically without reconfiguring alert routes.

Set up fallback expressions to ensure alerts always reach someone when metadata is incomplete. For example, when escalation paths aren't available for the Payments team, always alert the Infrastructure team's escalation path:
Alert > Service > Owner > Escalation paths || Infrastructure escalation path
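The traversal and fallback can be sketched as a catalog lookup. This is an illustrative model only, with an invented catalog shape and function name; the real behavior is configured with expressions in the dashboard:

```python
# Hypothetical catalog mirroring:
#   Alert > Service > Owner > Escalation paths || Infrastructure escalation path
CATALOG = {
    "services": {
        "Billing API": {"owner": "Payments"},
        "Search": {"owner": "Discovery"},
    },
    "teams": {
        "Payments": {"escalation_path": "payments-escalation"},
        "Discovery": {},  # no escalation path configured: fallback applies
        "Infrastructure": {"escalation_path": "infra-escalation"},
    },
}

def resolve_escalation(alert, fallback="infra-escalation"):
    # Alert > Service: read the impacted service from the alert attributes.
    service = CATALOG["services"].get(alert.get("service"), {})
    # Service > Owner: find the owning team in the catalog.
    team = CATALOG["teams"].get(service.get("owner"), {})
    # Owner > Escalation paths, with the `||` fallback when metadata is missing.
    return team.get("escalation_path") or fallback
```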
Enable auto-cancel escalations to automatically cancel pages when alerts resolve.

3. Create incidents from alerts

Create incidents directly from alerts to group related alerts together and provide a central place to triage whether issues require action. This approach enables tracking alert patterns and tuning quality over time. Choose when alerts create incidents using alert attributes. Create incidents automatically with alert routes to:
  • Track and investigate specific high priority alerts (e.g., P1 and P2 alerts, or alerts from a specific Service)
  • Group related alerts together for centralized triage and response
  • Monitor and tune alert workload and quality over time
Skip incident creation from alert routes when:
  • Attributes suggest quick routine fixes which don’t need a full incident workflow
  • Alerts are for game days or testing scenarios
  • Alerts come from support ticketing systems that don’t require incident workflow

Grouping alerts

Reduce alert noise by grouping related alerts into a single incident using shared alert attributes such as Team, Service, or Customer. Choose between suggested and automatic grouping:
  • Suggested grouping - On-call responders can confirm or reject suggestions for grouped alerts, and decide to attach alerts to the same incident or create a new incident.
  • Automatic grouping - Immediately attach related alerts to existing incidents without manual confirmation. Responders can unlink alerts via the incident homepage if needed.
Alerts can be grouped together for up to 48 hours in a rolling window.
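The rolling window behaves roughly as follows. A minimal sketch, assuming alerts carry a timestamp (`at`) and a shared grouping attribute; the names here are hypothetical:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=48)

def group_alerts(alerts, key="service"):
    # Hypothetical sketch: group alerts sharing `key` into incidents; the
    # 48-hour rolling window resets each time a new alert joins a group.
    open_groups = {}   # attribute value -> most recent group for that value
    incidents = []
    for alert in sorted(alerts, key=lambda a: a["at"]):
        value = alert[key]
        group = open_groups.get(value)
        if group and alert["at"] <= group["window_ends"]:
            group["alerts"].append(alert)            # attach to existing incident
        else:
            group = {"alerts": [alert]}              # window expired: new incident
            open_groups[value] = group
            incidents.append(group)
        group["window_ends"] = alert["at"] + WINDOW  # window resets on each alert
    return incidents
```

For example, alerts at 9:00, 10:00, and 11:00 for the same service land in one incident, and the window then extends 48 hours from the 11:00 alert.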
Incidents can be created in Triage status, for on-call responders to accept, decline, or merge once they have investigated the potential issue. Alternatively, choose to start incidents in Active status.

4. Send to Slack or Microsoft Teams

Route alerts to Slack or Microsoft Teams channels to create a shared surface for teams to see, triage, and act on alerts. Use channels for:
  • Passive visibility - Keep teams aware of what’s happening without paging anyone
  • Triage before escalating - Let teams assess alerts in a channel first, with the option to escalate or page if needed
  • Declare incidents - Allow teams to directly declare incidents from alerts
Route alerts to public or private channels, and add further filter conditions to send only a subset of alerts. For example, send only P1 alerts to the #critical-alerts channel. Use expressions and catalog relationships to send alerts to different channels based on alert attributes. For example, Alert > Team > Slack channel sends alerts to the Slack channel associated with the impacted team, as configured in your catalog.

Customize the details shown in incidents appearing in Slack or Microsoft Teams channels. Configure the following fields with custom expressions:
| Configurable field | Description |
| --- | --- |
| Name | Defaults to alert title. Customizable using alert attributes, and can be set with AI |
| Summary | Defaults to alert description. Customizable using alert attributes, and can be set with AI |
| Incident mode | Defaults to real incidents. Optionally create test, retrospective, or tutorial incidents |
| Type | Set incident type based on alert attributes (e.g., Production incident, Security incident) |
| Severity | Set incident severity based on alert attributes (e.g., Major, Minor, Critical) |

You can also add further details as fields shown in incidents appearing in channels (e.g., the impacted Service).
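The channel routing described above (a priority-based filter condition plus an Alert > Team > Slack channel lookup) can be sketched as follows. All names are illustrative; the real configuration uses expressions in the dashboard:

```python
# Hypothetical Alert > Team > Slack channel lookup from the catalog,
# plus a filter condition sending only P1 alerts to #critical-alerts.
TEAM_CHANNELS = {"Payments": "#payments-alerts", "Discovery": "#discovery-alerts"}

def channels_for(alert):
    targets = []
    if alert.get("priority") == "P1":
        targets.append("#critical-alerts")   # filter condition on the route
    team_channel = TEAM_CHANNELS.get(alert.get("team"))
    if team_channel:
        targets.append(team_channel)         # Alert > Team > Slack channel
    return targets
```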
From alerts posted in Slack or Microsoft Teams, responders can:
  • Declare an incident or join one if it already exists
  • View and resolve the alert
  • View full alert details by clicking into the incident.io dashboard
If your alert route creates private incidents, incidents declared from Slack or Microsoft Teams alerts will also be private. The person declaring the incident is automatically invited.

FAQs

How do I disable an alert route during maintenance?
Temporarily disable the alert route by navigating to Settings → Alerts → Routes and toggling the route off. Re-enable it when maintenance is complete. Alternatively, add a filter condition to exclude alerts during specific time windows using custom alert attributes.
Can I manually attach an alert to an existing incident?
Yes. Navigate to the incident homepage, scroll to the Alerts section, and click Attach alert. Search for the alert you want to attach. You can also attach alerts directly from the On-call → Alerts page by clicking on an alert and selecting Attach to incident.
How do I make incidents created from alerts private?
Configure privacy in the incident details section of your alert route. Select Private for incident visibility. Only invited responders will have access to private incident channels and data.
What is the difference between deduplication and grouping?
Deduplication prevents duplicate alerts from the same source: alerts with the same deduplication key update the existing alert rather than creating a new one. This happens at the alert level, before routing. Grouping combines multiple distinct alerts into a single incident to reduce noise. This happens at the incident level, during alert route processing. For example, grouping alerts by Service means all alerts affecting the same service attach to one incident.
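The deduplication behavior can be sketched as a keyed upsert. This is a hypothetical illustration of the described behavior, not the actual implementation:

```python
# Hypothetical sketch of deduplication: alerts with the same deduplication
# key update the existing alert rather than creating a new one.
def ingest(alerts_by_key, incoming):
    key = incoming["dedup_key"]
    existing = alerts_by_key.get(key)
    if existing:
        count = existing.get("count", 1)
        existing.update(incoming)        # refresh the existing alert...
        existing["count"] = count + 1    # ...instead of creating a new one
        return existing
    incoming["count"] = 1
    alerts_by_key[key] = incoming        # first occurrence: store a new alert
    return incoming
```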
How long can alerts be grouped together?
Alerts can be grouped together for up to 48 hours in a rolling window. Each time a new alert in the group arrives, the 48-hour window resets. For example, if alerts arrive at 9:00 AM, 10:00 AM, and 11:00 AM, all three can be grouped, and the window extends 48 hours from 11:00 AM (the most recent alert). If a new alert arrives after the 48-hour window expires, a new incident is created.