During the maintenance window, we will perform planned minor version upgrades on Grafana databases. Users may experience brief service interruptions lasting up to one minute. During this period, Grafana instances may become inaccessible. Other services are unaffected. Posted on
Jan 07, 2026 - 10:02 UTC
Resolved -
We continue to observe sustained recovery. At this time, we are considering this issue resolved. No further updates.
Jan 12, 18:21 UTC
Monitoring -
Engineering has released a fix and as of 17:01 UTC, customers should no longer experience connectivity issues. We will continue to monitor for recurrence and provide updates accordingly.
Jan 12, 17:01 UTC
Identified -
Engineering has identified the issue and will be deploying a fix shortly. At this time, users will continue to experience disruptions for queries routed via PDC.
We will continue to provide updates as more information becomes available.
Jan 12, 16:50 UTC
Investigating -
We are investigating an issue in prod-eu-west-3 where PDC agents are failing to maintain or re-establish connectivity. Affected agents are struggling to reconnect, which may cause disruptions or degraded performance for customer queries routed over PDC. We’ll share updates as we learn more.
Jan 12, 15:44 UTC
Resolved -
Engineering has released a fix and we continue to observe recovery. As of 15:12 UTC, we are considering this issue resolved.
Jan 12, 15:26 UTC
Update -
The write service was fully degraded between 09:13 and 09:35 UTC. The cell is operational, but there is still degradation in the write path. Our Engineering team is actively working on this.
Jan 12, 11:41 UTC
Update -
We are continuing to investigate this issue.
Jan 12, 09:09 UTC
Investigating -
We have been alerted to Tempo write degradation in prod-eu-west-3 (tempo-prod-08). The cell is operational, but there is degradation in the write path: write requests are taking longer than normal. This started at 07:00 UTC. Our Engineering team is actively investigating.
Jan 12, 09:03 UTC
Resolved -
Between 20:23 UTC and 20:53 UTC, Grafana Cloud Logs in prod-us-east-3 experienced a write degradation, which may have resulted in delayed or failed log ingestion for some customers.
The issue has been fully resolved, and the cell is currently operating normally. We are continuing to investigate the root cause and will provide additional details if relevant.
Jan 9, 20:30 UTC
Resolved -
There was a ~15-minute partial write outage for some customers in prod-us-central-0, from 15:43 to 15:57 UTC.
Jan 7, 17:41 UTC
Resolved -
This incident has been resolved.
Jan 6, 20:26 UTC
Monitoring -
We are seeing some recovery in affected products. We are continuing to monitor the progress.
Jan 6, 17:50 UTC
Investigating -
We are currently investigating an issue causing degraded Mimir and Tempo read performance in the prod-us-central-7 region.
Jan 6, 17:41 UTC
Resolved -
From 20:32 to 20:37 UTC, a DNS record misconfiguration resulted in temporary Cloudflare 1016 DNS errors on many Grafana Cloud stacks.
The misconfiguration was mitigated within 5 minutes, and we are working with Cloudflare to better understand why this particular misconfiguration resulted in the outage.
Jan 6, 15:09 UTC