
How do I set up Sentry alerts for a spike in 500s or a new error right after a release?
If you’re seeing a spike in 500s, or a brand‑new error right after a deploy, you don’t want to find out from Twitter. You want Sentry to tell you, with enough context to know which release caused it and who should fix it.
Quick Answer: You set up Sentry alerts for 500 spikes and post-release errors by combining Issue Alerts and Metric Alerts with release filters. Configure your SDK to send releases and environments, then create alerts that watch error rate, HTTP 500 counts, and “first seen” issues immediately after a new release goes live.
The Quick Overview
- What It Is: A Sentry alerting setup that notifies you when 500 errors spike or when a new error appears right after a release, with links back to the exact code change and owner.
- Who It Is For: Engineering teams, SREs, and on‑call developers who deploy frequently and can’t afford to guess whether a release is safe.
- Core Problem Solved: You get fast, precise signal when a deploy introduces a regression, instead of sifting through logs hours later.
How It Works
Sentry’s SDKs capture events (errors/exceptions, transactions, spans) and tag them with release, environment, and other context. Sentry groups events into issues and calculates metrics like error rate and “crash‑free sessions.” Alerts then watch those metrics or issue patterns and notify you via Slack, email, PagerDuty, or your incident tool of choice.
At a high level:
- Instrument releases & environments: Configure your Sentry SDK so every event includes `release` and `environment`, and make sure your deploys are reported to Sentry.
- Create post‑release error alerts: Use Issue Alerts to notify when new issues appear or when a specific error type (like HTTP 500s) starts firing right after a release.
- Create spike / regression alerts: Use Metric Alerts to watch error rate or 500 counts over time and alert when they spike relative to a baseline, scoped to production.
Let’s walk through it the way I’d do it with a team on a real project.
Step 1: Send Releases and Environments to Sentry
Before alerts can be “right after a release,” Sentry needs to know what a release is.
- Set the release in your SDK.

  Example (JavaScript/Node):

  ```javascript
  Sentry.init({
    dsn: process.env.SENTRY_DSN,
    release: process.env.SENTRY_RELEASE, // e.g. 'my-app@1.4.3'
    environment: process.env.NODE_ENV || 'production',
  });
  ```

  Example (Python):

  ```python
  sentry_sdk.init(
      dsn=os.environ["SENTRY_DSN"],
      release=os.environ.get("SENTRY_RELEASE"),
      environment=os.environ.get("SENTRY_ENV", "production"),
  )
  ```

- Report deploys, so Sentry knows when a release went live. You can:
- Use a CI/CD integration (GitHub Actions, GitLab, CircleCI, etc.).
- Or call Sentry’s releases API / CLI from your pipeline:

  ```shell
  sentry-cli releases new "my-app@1.4.3"
  sentry-cli releases set-commits "my-app@1.4.3" --auto
  sentry-cli releases finalize "my-app@1.4.3"
  sentry-cli releases deploys "my-app@1.4.3" new -e production
  ```
- Verify in the Sentry UI: open your project → Releases → confirm you see releases and deploy times.
Once this is in place, every error/500 is tied back to a release and deploy.
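If you build the release string in your deploy pipeline, a common pattern is “app name @ git SHA.” A minimal sketch of that idea — the `resolve_release` helper and the `my-app` name are illustrative, not part of Sentry’s API:

```python
import os
import subprocess

def resolve_release(app_name: str = "my-app") -> str:
    """Derive a Sentry release string like 'my-app@<short-sha>'.

    Prefers an explicit SENTRY_RELEASE env var (set by CI), falling
    back to the short git SHA of the current checkout.
    """
    explicit = os.environ.get("SENTRY_RELEASE")
    if explicit:
        return explicit
    sha = subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()
    return f"{app_name}@{sha}"
```

You would pass the result as `release=` in `sentry_sdk.init(...)` and as the version argument to `sentry-cli releases new`, so the SDK and your deploy reports agree on the same identifier.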
Step 2: Create an Alert for a New Error Right After a Release
This is where Issue Alerts shine. You want: “Tell me when a new issue appears in production, in the latest release, with a meaningful impact.”
- Go to Issue Alerts: Project → Alerts → Issue Alerts → Create Alert Rule.
- Set conditions. Configure something like:
  - If: `The event is first seen in this project`
  - Event frequency → `More than 10 events in 5 minutes` (tune to your traffic)
  - Environment → `production`
  - Filter by release (optional but helpful): add a condition for `release` is in `last deploys`, and/or limit to events in the last 1 hour (options vary slightly by UI version).

  The idea: only noisy new problems trigger, not every one‑off edge case.
- Set actions:
  - Send a notification to a Slack channel (`#prod-incidents`), PagerDuty, Opsgenie, email, or multiple channels.
  - Include useful context in the notification: issue title, environment, release, and assignee/Code Owner if you use Ownership Rules.
- Name & save the rule. Example: `New high-volume error in production after release`.
Workflow: When your latest release introduces a brand‑new error, Sentry will open an issue, evaluate the “first seen + volume” rule, and ping the right team with a link that already shows the stack trace, release, suspect commits, and Session Replay (if enabled).
Step 3: Create a Metric Alert for a Spike in 500s
For HTTP 500 spikes, you want to watch error rate or 500 count over time, not just individual issues.
Assuming your Sentry SDK (or framework integration) captures HTTP status codes as tags (most modern web frameworks do, or you can manually add them), you can:
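If you do need to add the status tag yourself, a `before_send` hook is one place to do it; in the Python SDK, events reach that hook as plain dicts, so the mutation itself is small. A stdlib-only sketch — the `http.status_code` tag name matches the filters used below, but check what your SDK actually sends:

```python
def tag_status_code(event: dict, status_code: int) -> dict:
    """Attach an http.status_code tag to a Sentry event dict so
    Metric Alerts can filter on it (tags live under event["tags"])."""
    tags = event.setdefault("tags", {})
    tags["http.status_code"] = str(status_code)
    return event
```

You would call this from a `before_send` hook or error-handling middleware at the point where the response status is known, e.g. `sentry_sdk.init(before_send=lambda event, hint: tag_status_code(event, current_status()))`, where `current_status()` stands in for whatever your framework exposes.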
- Go to Metric Alerts: Project → Alerts → Metric Alerts → Create Alert Rule.
- Choose what to monitor.

  Option A: error rate by transactions/sessions
  - Metric: `percentage(error events, all events)` or a project‑level “error rate” preset.
  - Filter: `environment:production`.
  - Optional: filter for a specific service or endpoint using tags like `transaction:/api/orders/*`.

  Option B: raw 500 event count
  - Metric: `count()`
  - Filter: `environment:production AND http.status_code:500`. Adjust for your tags, e.g. `status:500`, `response.status:500`, etc.
- Define thresholds. You can do:
  - Static threshold: e.g., alert when `error_rate > 2% for 5 minutes`, or `count() of 500s > 100 in 1 minute`.
  - Or relative to baseline (if enabled in your org): alert when the error rate is `> 3x` the previous hour or the previous 7 days.

  For “we just deployed” scenarios, a short window (1–5 minutes) after the deploy is usually what you want, so you catch regressions fast.
- Scope to production. Always add `environment:production`; optionally exclude maintenance windows or known noisy endpoints with extra filters.
- Set actions:
  - Notify Slack/PagerDuty/email.
  - Optionally create an issue in Sentry directly when the metric alert fires, so you have an incident trail.
  - Some teams also configure auto‑created Linear/Jira tickets from that incident issue.
- Name & save. Example: `Spike in HTTP 500s in production`.
Outcome: When 500s spike (even if they’re spread across multiple issues), you get a single, clear alert showing the graph, the time window, and easy pivot links into Discover to see which endpoints, users, or releases are impacted.
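To make the “relative to baseline” threshold concrete, the comparison boils down to a simple ratio check — an illustrative sketch, not Sentry’s actual implementation:

```python
def spiked(current_rate: float, baseline_rate: float, factor: float = 3.0) -> bool:
    """True when the current error rate exceeds the baseline by the given
    factor, e.g. a 6% rate against a 1.5% baseline is a 4x jump and alerts."""
    return current_rate > factor * baseline_rate
```

Rate-based comparisons like this are why percentage metrics tolerate traffic fluctuations better than raw counts: the baseline scales with volume.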
Step 4: Tie Alerts to Releases and Ownership
Getting alerted is half the job. Getting the alert to someone who can actually fix the issue is the other half.
- Use Ownership Rules / Code Owners
  - In Sentry: Project → Settings → Ownership Rules.
  - Map file paths or tags to teams:

    ```
    src/payments/** #team-payments
    src/auth/**     #team-auth
    ```

  - Or connect your repository’s CODEOWNERS file so Sentry infers ownership automatically.
When your alert fires, Sentry will suggest or auto‑assign the issue to the right team, and the alert can target that team’s channel.
- Use release context
  - From the alert, click into the issue to see:
    - Release: which version.
    - Suspect Commits: who likely introduced the change.
    - Changes in this Release: a diff of the files touched.
  - This is where Seer (Sentry’s AI debugging add‑on) can help: it uses stack traces, spans, logs, and commits to propose a root cause and even draft a fix or PR.
Now the workflow is: deploy → spike/new error → alert → owner + commit + context → fix.
Features & Benefits Breakdown
| Core Feature | What It Does | Primary Benefit |
|---|---|---|
| Issue Alerts | Trigger on patterns like “first seen” issues or specific tags (e.g., 500). | Catch brand‑new errors right after a release. |
| Metric Alerts | Monitor error rate, 500 counts, or custom metrics over time. | Detect spikes and regressions before they flood support. |
| Release & Ownership Context | Ties events to releases, commits, and Code Owners/Ownership Rules. | Routes alerts to the right team with code‑level context. |
Ideal Use Cases
- Best for high‑velocity deployments: Because it lets you ship often but still catch “oops” releases within minutes using release-aware alerts and error rate thresholds.
- Best for teams with on‑call rotations: Because alerts tie directly to Ownership Rules and incident channels, so whoever’s on‑call can see exactly what broke, in which service, and who to loop in.
Limitations & Considerations
- Bad or missing tags (status codes, environment, release): If your SDK isn’t sending HTTP status codes or release/environment, your 500 filters and post-release alerts will be weaker. Fix instrumentation first: add tags in your middleware or use framework integrations that capture them.
- Too-aggressive thresholds = alert fatigue: If you set thresholds too low (e.g., every single 500), you’ll get spammed. Start with higher thresholds and short windows, then tune down as you see how noisy your app really is.
Pricing & Plans
Alerting is included across Sentry’s plans; the main differences are around overall event volume, retention, and integration limits, not whether you can create alerts.
- Developer Plan: Best for small teams or new projects needing core error monitoring, a few dashboards, and basic alerting to keep production mostly glitch‑free.
- Team & Business Plans: Best for growing orgs that need more volume, more dashboards, advanced workflow features (Ownership Rules, SCIM on Business+), and the integrations/on‑call plumbing to wire alerts into Slack, PagerDuty, and ticketing tools.
Seer (AI‑assisted debugging) is available as an add‑on priced per active contributor, if you want Sentry to help analyze root cause right from the alert.
Frequently Asked Questions
Can I alert only on 500s that affect a specific service or endpoint?
Short Answer: Yes, if you tag your events or use Sentry’s transaction names, you can scope alerts to a specific service, endpoint, or route.
Details:
When your SDK captures performance data (transactions and spans) or you add custom tags, you can use those in alert filters:
- For an API route: `environment:production AND http.status_code:500 AND transaction:/api/orders/*`
- For a microservice: tag events with `service:payments` and filter on that: `service:payments AND http.status_code:500`.
Create a Metric Alert with count() or error rate, add the filter, and route that alert to the owning team’s Slack channel using Ownership Rules.
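The filter strings above all follow one pattern, so a tiny helper can keep them consistent across alerts — the `service` and `http.status_code` tag names are assumptions; match whatever your SDK actually sends:

```python
def service_500_filter(service: str, env: str = "production") -> str:
    """Compose a Sentry search filter for HTTP 500s scoped to one service."""
    return f"environment:{env} AND service:{service} AND http.status_code:500"

print(service_500_filter("payments"))
# environment:production AND service:payments AND http.status_code:500
```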
How do I avoid getting an alert every time we deploy?
Short Answer: Use “first seen + volume” conditions, reasonable thresholds, and, where possible, baseline or per-release filters.
Details:
Deploys naturally create some noise—new code paths, migrations, etc. To avoid alert fatigue:
- Require a minimum volume: e.g.,
More than 50 events in 5 minutesfor a “new issue” alert. - Use “first seen” to catch genuinely new problems, not every re‑surfaced edge case.
- For spike alerts, use error rate (percentage) rather than raw counts, so traffic fluctuations don’t trigger false positives.
- If available, use “compare to previous time window” so Sentry only alerts when behavior changes meaningfully after a release, not every time traffic goes up a bit.
Tune, watch for a week, then refine. You want “we broke something real,” not “we deployed again.”
Summary
Setting up Sentry alerts for a spike in 500s or a new error right after a release comes down to three things: send good context (release, environment, status codes), use Issue Alerts for “new and loud” problems, and use Metric Alerts for behavior changes over time. Layer in Ownership Rules so the right team gets pinged, and you’ve got a tight loop: deploy, detect, debug, fix—without scrolling through logs at 2 a.m.