In the midst of a controversial week following Google Cloud’s accidental deletion of Australian pension fund UniSuper’s entire account, the tech giant faced another stumble. A maintenance automation task that ran at 3:22 p.m. PT last Thursday, intended to shut down an unused network control component at a single location, instead affected around 40 locations. The result was roughly three hours of downtime, disrupting 33 Google Cloud services for affected users, including Compute Engine and Kubernetes Engine.
The bug caused a range of problems: new VM instances were provisioned without network connectivity, migrated and rebooted VMs lost their connections, and configuration updates for firewalls and network load balancers failed to propagate. Operations relying on Google Compute Engine virtual machines were heavily impacted until normal service was restored at 6:10 p.m. PT. Google attributed the outage to a bug in the maintenance automation tool.
The incident followed an earlier outage in which Google BigQuery and Google Compute Engine went offline after an unplanned power event triggered by a utility failure, as well as the accidental deletion of UniSuper’s account, a fund with approximately $124 billion under management.
In response to the latest outage, Google apologized for the service interruption and assured users that it is working to avoid such incidents in the future. The string of setbacks has raised concerns about the reliability of Google Cloud services, and businesses and customers are now looking for reassurance that measures will be put in place to prevent further disruptions.
Article Source
https://www.techradar.com/pro/google-cloud-has-just-knocked-a-load-of-customers-offline-for-the-second-time-this-month