Degraded service availability

Components

Website, API, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, Background Processing, Support Services, packages.gitlab.com

Locations

Google Compute Engine, Digital Ocean, Zendesk, AWS



April 24, 2019 16:21 UTC
[Resolved] Our cloud provider resolved the underlying inconsistency within their infrastructure 3 hours ago, and we started our remaining job processor about 30 minutes ago. We are not seeing any further issues. Details: gitlab.com/gitlab-com/gl-infra/production/issues/802

April 24, 2019 11:32 UTC
[Monitoring] The jobs that had been stuck have all been processed and are now caught up. We are monitoring the issue on our end while we wait for a further update from our cloud provider. For details: gitlab.com/gitlab-com/gl-infra/production/issues/802

April 24, 2019 08:36 UTC
[Identified] We believe we have a good lead on what might be happening and are waiting to hear back from our provider with an update. Error rates have dropped drastically, and users should be seeing improvements. Details: gitlab.com/gitlab-com/gl-infra/production/issues/802

April 24, 2019 06:40 UTC
[Investigating] We are investigating an issue within our infrastructure that is causing degraded service availability. Known symptoms include intermittent 500 errors when performing certain operations that involve DB writes. Follow gitlab.com/gitlab-com/gl-infra/production/issues/802 for details.
