AWS North Virginia data center outage – resolved
https://www.theregister.com/off-prem/2026/05/08/aws-warns-of...
https://www.reuters.com/business/retail-consumer/amazon-clou...
- cmiles8 - 46298 seconds ago
AWS's US-East 1 continues to be the Achilles' heel of the Internet.
And while yes building across multiple regions and AZs is a thing, AWS has had a string of issues where US-East 1 has broader impacts, which makes things far less redundant and resilient than AWS implies.
- aurareturn - 45921 seconds ago
These things are dangerous. Someone who can take AWS down, such as an employee, can place a bet.
These bets aren’t as innocent as they seem because the bettors can often influence or change the outcome.
- fabian2k - 50665 seconds ago
I thought cooling was pretty much pre-planned in any data center, and you simply don't install more stuff than you can cool?
So did some cooling equipment fail here or was there an external reason for the overheating? Or does Amazon overbook the cooling in their data centers?
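[Editor's note: a back-of-envelope sketch of the capacity question the comment raises. Whether a hall is "overbooked" on cooling comes down to provisioned IT load versus installed cooling capacity, with spare units held back for redundancy. All figures below are made-up illustrations, not AWS numbers.]

```python
# Illustrative only: cooling headroom = usable cooling capacity minus IT heat
# load, where "usable" holds back spare CRAC units for redundancy (N+1, N+2).
# All numbers are assumptions for the sake of the arithmetic, not real data.

def cooling_margin_kw(it_load_kw, crac_units, kw_per_unit, spare_units=1):
    """Usable cooling (reserving spare_units for redundancy) minus IT heat load."""
    usable = (crac_units - spare_units) * kw_per_unit
    return usable - it_load_kw

# A hall drawing 4,800 kW of IT power, cooled by 14 CRAC units of 400 kW each:
print(cooling_margin_kw(4800, crac_units=14, kw_per_unit=400))   # 400 kW headroom
# Lose two units instead of one and the same hall has zero margin:
print(cooling_margin_kw(4800, crac_units=14, kw_per_unit=400, spare_units=2))  # 0
```

The point: a hall can be correctly sized on paper and still overheat if more units fail than the redundancy budget assumed.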
- tornikeo - 21755 seconds ago
I wonder if Hetzner had better uptime in the EU than AWS this year.
- merek - 115993 seconds ago
Related:
AWS EC2 outage in use1-az4 (us-east-1)
- corvad - 43283 seconds ago
It's always East 1... Jokes aside, I don't understand why us-east-1 goes down so often compared to other regions. It should be pretty similar to the other regions architecture-wise.
- fastest963 - 41775 seconds ago
Coinbase claimed multiple AZs were down, but the AWS statement was that only a single AZ was affected. Does anyone have more details?
- whatever1 - 22084 seconds ago
2 of the last 365 days down. My Ubuntu NAS is 0 of the last 365 days down.
Come and give me your cash if you want resilience.
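[Editor's note: the comparison in concrete numbers, taking "down" to mean whole days, which is an assumption the comment doesn't spell out.]

```python
# 2 down days out of 365 vs. 0 out of 365, expressed as availability percentages.
# Assumes "down" means a full day, which the comment doesn't actually specify.
aws_availability = (365 - 2) / 365
nas_availability = 365 / 365
print(f"{aws_availability:.4%}")  # 99.4521%
print(f"{nas_availability:.4%}")  # 100.0000%
```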
- Havoc - 50574 seconds ago
Could someone explain to me why they don't build these things near oceans? Like nuclear plants, which need plenty of cooling capacity too.
A two-loop cycle with a heat exchanger to get rid of the heat.
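[Editor's note: a rough sense of scale for the two-loop idea, using the standard relation Q = ṁ·c·ΔT. The heat load and temperature rise below are illustrative assumptions, not any real facility's figures.]

```python
# Back-of-envelope for a seawater loop: how much coolant flow does a hall need?
# Q = m_dot * c_p * delta_T, solved for m_dot. Numbers are illustrative only.

C_P_WATER = 4186.0  # J/(kg*K), specific heat capacity of water

def required_flow_kg_s(heat_kw, delta_t_k):
    """Mass flow of coolant needed to carry away heat_kw at a delta_t_k temperature rise."""
    return heat_kw * 1000.0 / (C_P_WATER * delta_t_k)

# Rejecting 5 MW of heat with a 10 K temperature rise in the outer loop:
print(round(required_flow_kg_s(5000, 10), 1))  # 119.4 kg/s, roughly 120 L/s of water
```

So the flows involved are substantial but not absurd, which is presumably why siting and salt-water corrosion, rather than raw physics, dominate the decision.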
- - 47436 seconds ago
- sitzkrieg - 35073 seconds ago
using aws since s3 came out and i've yet to see any major company do multi-AZ failover in any capacity whatsoever. default region ftw
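[Editor's note: a minimal sketch of the client-side failover the comment says almost nobody wires up. The endpoint names and `fetch` function are hypothetical stand-ins, not a real AWS API.]

```python
# Minimal client-side failover: try each endpoint in order, return the first
# success. Endpoints and fetch() are hypothetical stand-ins for illustration.

def call_with_failover(endpoints, fetch):
    """Try each endpoint in order; return the first successful response."""
    last_err = None
    for ep in endpoints:
        try:
            return fetch(ep)
        except Exception as err:  # real code would catch specific network errors
            last_err = err
    raise RuntimeError(f"all endpoints failed: {last_err}")

# Simulate us-east-1 being down and us-west-2 healthy:
def fake_fetch(endpoint):
    if "us-east-1" in endpoint:
        raise ConnectionError("AZ overheating")
    return f"200 OK from {endpoint}"

print(call_with_failover(["api.us-east-1.example", "api.us-west-2.example"], fake_fetch))
# 200 OK from api.us-west-2.example
```

The retry loop is the easy part; the hard part the comment alludes to is keeping data replicated so the second region can actually serve the request.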
- yomismoaqui - 42469 seconds ago
How many nines are we at this year?
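[Editor's note: "nines" in concrete terms. The standard conversion from an availability percentage to an annual downtime budget:]

```python
# Allowed downtime per year at each "nines" level of availability.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(nines):
    """Allowed downtime per year (minutes) at availability 1 - 10**-nines."""
    availability = 1 - 10 ** -nines
    return MINUTES_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines: {downtime_minutes(n):.1f} min/year")
# 2 nines: 5256.0 min/year  (~3.7 days)
# 3 nines: 525.6 min/year   (~8.8 hours)
# 4 nines: 52.6 min/year
# 5 nines: 5.3 min/year
```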
- matt3210 - 33930 seconds ago
Right, cooling.
- nikcub - 46236 seconds ago
Both real-time markets where multi-AZ is hard?
- jeffbee - 39667 seconds ago
I don't see anything on Downdetector suggesting this was particularly disruptive.
- aussieguy1234 - 43908 seconds ago
Once known for super-reliable services, this company is now, I've heard, scrambling to re-hire some of the engineers it overconfidently "replaced" with AI.
When customers pay for cloud services, they expect them to be maintained by competent engineers.
edit: Not sure why the downvotes. If you fire the engineers that have been keeping your systems running reliably for years, what do you expect to happen?
- ElenaDaibunny - 12011 seconds ago
[flagged]
- tcp_handshaker - 50804 seconds ago
I bet the post-mortem will say vibe coding confused Fahrenheit and Celsius, and we ran too hot...
- fukinstupid - 25327 seconds ago
[flagged]
- OhMeadhbh - 46114 seconds ago
[flagged]
- BugsJustFindMe - 44472 seconds ago
[flagged]
- tailscaler2026 - 48421 seconds ago
us-east-1 is down? Shocking! Stop putting SPOF services there. This location has had frequent issues for the past 15 years.
- rswail - 11182 seconds ago
So in the comments here we have the usual about us-east-1: it's centralized, it's a SPOF for AWS, they should fix it, don't put your stuff there, etc.
This was one data centre in one zone of a multi-zone region.
Yes, IAM/R53 and others are centralized there, and yes, reworking those services to be decentralized and cross-region would be a Good Thing. But us-east-1 is already multi-zone (6 AZs, with a seventh marked as "coming in 2026") with multiple data centers within each zone. From memory, when a global service like IAM is out, it's more likely to be a bug in the implementation or a dependency than an "if this were cross-region it wouldn't have died" issue.
But this wasn't an outage of any AWS global service this time. The only service that seemed to have broader impact was MSK, which is likely more of a Kafka issue than anything AWS-related.