
Jobs Running for Too Long

Dec 04 at 01:46pm CST
Affected services
Schedule Triggers
Webhook Triggers
API Triggers

Dec 05 at 01:58pm CST

We have implemented additional hotfixes to the application, putting further measures in place to prevent long-running jobs from occurring. These changes were deployed at 1:58 PM CST.

Dec 05 at 05:05am CST

Infrastructure issues have been resolved as of this morning, and all long-running jobs appear to be isolated to the timeframe of 12/4 10:18 AM CT through 12/5 5:05 AM CT.

Dec 04 at 01:46pm CST

We are aware of an issue that causes a small percentage of jobs to appear to run indefinitely, potentially never completing. We are actively investigating the issue with AWS support and hope to have a permanent solution soon.

As a temporary workaround, manually re-running your Fleet should resolve the issue. If it does not, please reach out to us via Intercom.

Once we have a resolution in place, we will clear the recorded runtime for any of these long-running jobs so that customers are not charged. We recommend subscribing to this status page for updates.