AWS continued: “The disruption was an extremely limited event last December affecting a single service (AWS Cost Explorer—which helps customers visualize, understand, and manage AWS costs and usage over time) in one of our 39 Geographic Regions around the world. It did not impact compute, storage, database, AI technologies, or any other of the hundreds of services that we run.”
That much seems true. It’s also classic misdirection. The company conveniently avoided confirming the central point of the story: that the system decided to delete and recreate an environment.
AWS also said: “The issue stemmed from a misconfigured role — the same issue that could occur with any developer tool (AI powered or not) or manual action.” That’s an impressively narrow interpretation of what happened.
AWS then promised it won’t do it again. “We implemented numerous safeguards to prevent this from happening again — not because the event had a big impact (it…
https://www.computerworld.com/article/4136512/what-really-caused-that-aws-outage-in-december.html