
Resolved | Minor | Increased API Error Rates and Latency
8:36 AM PDT We are investigating increased API error rates and latency in the US-EAST-1 Region.
9:29 AM PDT We can confirm increased API error rates and latencies for the Systems Manager Parameter Store in the US-EAST-1 Region. This impacts customers' ability to make calls to Parameter Store; customers using Parameter Store to store data may be affected by the event, as they are unable to retrieve the data. We have identified the subsystem where the errors are occurring and are working to resolve the issue.
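While the elevated error rates persist, client-side retries with backoff are the usual way to ride out an event like this. The following is a minimal sketch, assuming boto3; the parameter name and retry settings are illustrative placeholders, not AWS guidance for this incident.

    import boto3
    from botocore.config import Config

    # "adaptive" retry mode adds client-side backoff and rate limiting, so
    # transient throttling and 5xx errors are retried before the call fails.
    ssm = boto3.client(
        "ssm",
        region_name="us-east-1",
        config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
    )

    def get_parameter(name: str) -> str:
        # The SDK retries up to max_attempts before raising an exception.
        response = ssm.get_parameter(Name=name, WithDecryption=True)
        return response["Parameter"]["Value"]

    if __name__ == "__main__":
        # "/app/config/db_url" is a hypothetical parameter name.
        print(get_parameter("/app/config/db_url"))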

4:01 AM PDT We are investigating increased API error rates in the AF-SOUTH-1 Region.
4:21 AM PDT We are experiencing elevated API errors and latency with the IAM authentication service in the AF-SOUTH-1 Region. This is causing impact on services that require IAM authentication. Existing instances are unaffected.
5:13 AM PDT We continue to see elevated API error rates and latency for IAM authentication and authorization requests in the AF-SOUTH-1 Region. The IAM errors affect any AWS API call that requires authentication, though some calls may still be succeeding due to API credential caching. We are actively investigating the root cause to identify a path to mitigation.
5:28 AM PDT We are starting to see recovery of IAM authentication and authorization requests, and customers should be seeing improvements in API error rates in the AF-SOUTH-1 Region. We have all teams engaged in investigating the root cause and mitigation, but we do not have an ETA for full recovery at this time.
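Because the IAM errors affect any authenticated API call, and credential caching can mask the problem for some callers, a lightweight way to check whether authentication itself is degraded in a Region is an STS GetCallerIdentity call. A minimal sketch, assuming boto3; the default Region below simply matches this event and is otherwise arbitrary.

    import boto3
    from botocore.exceptions import ClientError

    def iam_auth_healthy(region: str = "af-south-1") -> bool:
        # GetCallerIdentity is a cheap authenticated call: if it fails with an
        # authentication error, the problem is the auth layer itself rather
        # than any one downstream service.
        sts = boto3.client("sts", region_name=region)
        try:
            sts.get_caller_identity()
            return True
        except ClientError as err:
            print(f"Auth check failed in {region}: {err.response['Error']['Code']}")
            return False

    if __name__ == "__main__":
        print("IAM auth OK" if iam_auth_healthy() else "IAM auth degraded")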

Resolved | Minor | Increased Route 53 Health Check API latency
3:57 AM PDT We are experiencing delays creating, deleting, and updating Route 53 health checks. Existing Route 53 health checks are operating normally and are correctly reflecting the health status of their configured resources. We have identified the issue and have a path towards recovery, but we expect that to take at least a couple of hours.
4:32 AM PDT We continue to work towards recovery of the Route 53 Health Check API, and expect health check create, update, and delete delays to be resolved within about 90 minutes. This is also affecting provisioning and scaling workflows for services such as ELB, EKS, and RDS that use Route 53 health checks.
5:28 AM PDT The Route 53 health check API has recovered, and all operations are now being processed normally. However, it will take a bit longer for other impacted services to fully recover their own provisioning and scaling workflows, as they have their own backlogs to work through.
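Since existing health checks kept reporting status correctly throughout the event, their current state could still be read even while create, update, and delete calls were delayed. A minimal sketch, assuming boto3; the health check ID is a placeholder.

    import boto3

    # Route 53 is a global service, so no Region is required for the client.
    route53 = boto3.client("route53")

    def health_check_observations(health_check_id: str):
        # Returns (checker region, reported status) pairs for an existing health check.
        resp = route53.get_health_check_status(HealthCheckId=health_check_id)
        return [
            (obs["Region"], obs["StatusReport"]["Status"])
            for obs in resp["HealthCheckObservations"]
        ]

    if __name__ == "__main__":
        # Placeholder health check ID; substitute one from your own account.
        for region, status in health_check_observations("11111111-2222-3333-4444-555555555555"):
            print(region, status)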
