Issues affecting multiple clusters in AWS US-EAST-1
Incident Report for Instaclustr
Most affected nodes have been recovered. A couple of nodes are still in the process of being restarted, and the impacted customers have been updated via support ticket.

Please contact support if there are any further questions or concerns about this incident.
Posted Sep 01, 2019 - 03:18 UTC
We are currently monitoring our services to confirm that all nodes in the managed service fleet are working.
Posted Aug 31, 2019 - 22:45 UTC
We are seeing services return to normal and most nodes are now reachable.
Posted Aug 31, 2019 - 22:38 UTC
AWS has provided the following update regarding the ongoing issue at 10:47 AM PDT:

We want to give you more information on progress at this point, and what we know about the event. At 4:33 AM PDT one of 10 datacenters in one of the 6 Availability Zones in the US-EAST-1 Region saw a failure of utility power. Backup generators came online immediately, but for reasons we are still investigating, began quickly failing at around 6:00 AM PDT. This resulted in 7.5% of all instances in that Availability Zone failing by 6:10 AM PDT. Over the last few hours we have recovered most instances but still have 1.5% of the instances in that Availability Zone remaining to be recovered. Similar impact existed to EBS and we continue to recover volumes within EBS. New instance launches in this zone continue to work without issue.

Instaclustr apologises for the ongoing issue and will provide a further update when we hear back from AWS.
Posted Aug 31, 2019 - 17:58 UTC
AWS continues to work on the recovery of the instances and EBS volumes within a single Availability Zone in the US-EAST-1 Region.
Posted Aug 31, 2019 - 16:11 UTC
AWS has provided the following update regarding the ongoing issue at 8:06 AM PDT:

We are starting to see recovery for instance impairments and degraded EBS volume performance within a single Availability Zone in the US-EAST-1 Region. We are also starting to see recovery of EC2 APIs. We continue to work towards recovery for all affected EC2 instances and EBS volumes.
Posted Aug 31, 2019 - 15:16 UTC
The issue with AWS is still ongoing and their latest update mentions they are continuing to investigate the connectivity and EBS volume issues within a single Availability Zone in the US-EAST-1 Region. They are also investigating increased error rates for new launches within the same Availability Zone.
Posted Aug 31, 2019 - 14:54 UTC
AWS has confirmed connectivity issues and degraded performance with EBS volumes within a single Availability Zone in the US-EAST-1 Region.
Posted Aug 31, 2019 - 14:07 UTC
We are investigating connectivity issues across many nodes in our managed service fleet in the AWS us-east-1 region.
Posted Aug 31, 2019 - 13:24 UTC
This incident affected: AWS ec2-us-east-1.