
Status History

Filter: packages.gitlab.com



September 2021

Increased error rates across all GitLab services

September 12, 2021 21:48 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




September 12, 2021 21:48 UTC
[Resolved] After investigation we determined that the elevated error rates were caused by database connection saturation; further details will be in this issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/5514

September 12, 2021 21:23 UTC
[Monitoring] GitLab performance and error rates have returned to normal; we are investigating the root cause to understand the likelihood that it will recur.

September 12, 2021 21:04 UTC
[Investigating] We are investigating slow response times and increased error rates across all GitLab services. Incident: gitlab.com/gitlab-com/gl-infra/production/-/issues/5514

August 2021

Elevated error rates on GitLab.com

August 11, 2021 17:51 UTC


Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 11, 2021 17:51 UTC
[Resolved] As GitLab.com is operating normally, we'll now be marking this incident as resolved. Further investigation efforts and details surrounding the issue can be found in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

August 11, 2021 17:44 UTC
[Investigating] No material updates to report - our investigation is still ongoing. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

August 11, 2021 17:27 UTC
[Investigating] All GitLab.com services are operating normally, but our investigation into the cause of the errors continues. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

August 11, 2021 17:18 UTC
[Investigating] At 17:04 UTC, there was a 5 minute elevated error ratio event for all GitLab.com services. It has recovered, but we are investigating. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

Increased error rates across all services

August 10, 2021 10:39 UTC

Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 10, 2021 10:39 UTC
[Resolved] The issue has been resolved, and we can confirm that all services on GitLab.com are fully operational. A full timeline of this incident and more details are available in gitlab.com/gitlab-com/gl-infra/production/-/issues/5324

August 10, 2021 10:05 UTC
[Monitoring] We are back to normal error rates across all services and continue to monitor the situation. Refer to gitlab.com/gitlab-com/gl-infra/production/-/issues/5324 for further updates.

August 10, 2021 09:52 UTC
[Investigating] We are investigating increased error rates across all services; see gitlab.com/gitlab-com/gl-infra/production/-/issues/5324 for more details.

GitLab performance degraded

August 6, 2021 12:48 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 6, 2021 12:48 UTC
[Resolved] From 12:05 to 12:15 UTC, we saw a system degradation across all of our components, which appears to have recovered. Our engineers are investigating. Further details are available in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5306

July 2021

Issue Uploading Artifacts from CI

July 21, 2021 01:39 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




July 21, 2021 01:39 UTC
[Resolved] This incident is now resolved and we are no longer seeing any errors. We will continue to investigate the cause internally. Full timeline of this incident and more details available in gitlab.com/gitlab-com/gl-infra/production/-/issues/5194

July 21, 2021 01:12 UTC
[Monitoring] The rollout has completed on all of our servers, and we are no longer seeing upload errors. We are continuing to monitor for any further issues.

July 21, 2021 01:04 UTC
[Identified] The rollout has completed on two of our clusters and will continue on the remaining clusters; we will post further updates once it has completed. More details at gitlab.com/gitlab-com/gl-infra/production/-/issues/5194

July 21, 2021 00:44 UTC
[Identified] We are currently working on rolling back a recent configuration change that affected uploading CI artifacts. More details at gitlab.com/gitlab-com/gl-infra/production/-/issues/5194

July 21, 2021 00:28 UTC
[Investigating] We have identified a failure when uploading artifacts from CI, which currently returns a 400 error. We are investigating the issue in gitlab.com/gitlab-com/gl-infra/production/-/issues/5194 and will post further updates.

June 2021

Short service disruption on GitLab.com

June 21, 2021 21:34 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




June 21, 2021 21:34 UTC
[Resolved] Our team has confirmed that the disruption is now gone and that services are back to normal. Investigation will continue in gitlab.com/gitlab-com/gl-infra/production/-/issues/4945 for additional details and root cause analysis.

June 21, 2021 21:14 UTC
[Investigating] GitLab.com is up and operational. From 20:53 to 20:58 UTC, we saw an increase in errors on GitLab.com web, API, and Git services; the disruption was brief and has now passed. Investigation is being tracked in gitlab.com/gitlab-com/gl-infra/production/-/issues/4945

May 2021

Unscheduled maintenance - Switchover of our primary database

May 8, 2021 14:59 UTC

Description

We will be conducting a switchover of our primary database at 14:45 UTC to mitigate a potential instability. Users will be unable to access GitLab.com for up to 2 minutes during this switchover. Please check gitlab.com/gitlab-com/gl-infra/production/-/issues/4528


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS


Schedule

May 7, 2021 14:45 - May 7, 2021 14:50 UTC



May 8, 2021 14:59 UTC
[Update] The primary database switchover is done and affected GitLab.com for only 4 seconds. All systems are up and operational. For more details, see gitlab.com/gitlab-com/gl-infra/production/-/issues/4528

May 8, 2021 14:53 UTC
[Update] The primary database switchover is complete. For more details, please see gitlab.com/gitlab-com/gl-infra/production/-/issues/4528

March 2021

Possible Database & Redis degradation

March 18, 2021 15:06 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 18, 2021 15:06 UTC
[Resolved] Our services have fully recovered and the incident is now mitigated. For the full root cause analysis follow our investigation at gitlab.com/gitlab-com/gl-infra/production/-/issues/4011

March 18, 2021 14:57 UTC
[Investigating] No material updates; we continue to see recovery across our services. Our engineers are still investigating the root cause. More details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/4011

March 18, 2021 14:40 UTC
[Investigating] We're seeing a recovery across our different services and continuing to investigate to confirm the root cause of the degradation. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/4011

March 18, 2021 14:23 UTC
[Investigating] We're investigating a possible service degradation across our Redis and DB components. Users might be experiencing delays in CI/CD pipelines. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/4011

Database latency resulting in 500 errors

March 15, 2021 14:19 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 15, 2021 14:19 UTC
[Resolved] While the root cause is still being investigated, we have seen no further alerts and GitLab.com is fully operational. We are therefore marking this incident as resolved. For further details please check: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 12:51 UTC
[Monitoring] GitLab.com appears to be stable, with no additional updates to share. Our engineers are still monitoring our application and investigating the root cause of this problem. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 12:35 UTC
[Investigating] No material updates; our engineers are investigating the root cause. We're currently waiting for more database replicas to come online to help with the load. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 12:15 UTC
[Investigating] We're still experiencing 500 errors on GitLab.com; investigation is underway. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 11:51 UTC
[Investigating] We're still investigating the overall cause of the issue and are currently checking if this could be related to database re-indexing of a particular table. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 11:29 UTC
[Investigating] We are experiencing database latency resulting in customers seeing 500 errors when trying to reach GitLab.com. We are currently investigating. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

Intermittent issue on GitLab.com

March 1, 2021 05:19 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 1, 2021 05:19 UTC
[Resolved] It seems the error rate has not gone back up now that the monthly CI minutes reset job has completed. All of the affected components are back to operating normally. We will post further updates and the fix in the incident issue gitlab.com/gitlab-com/gl-infra/production/-/issues/3823

March 1, 2021 03:13 UTC
[Monitoring] It seems the CI minutes reset jobs have finished. We're continuing to monitor performance.

March 1, 2021 02:34 UTC
[Identified] We expect the error rate to gradually recover on its own over the next 30 minutes. We're exploring a permanent solution so that this doesn't come up during next month's CI minutes reset jobs.

March 1, 2021 02:01 UTC
[Identified] We've noticed degraded performance on GitLab.com, which seems to be related to our monthly CI minutes reset jobs. While performance is recovering, we're exploring possible solutions. You can read more about our investigation at gitlab.com/gitlab-com/gl-infra/production/-/issues/3823

October 2020

SSO warnings

October 28, 2020 01:45 UTC


Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 28, 2020 01:45 UTC
[Resolved] Closing incident per updates on gitlab.com/gitlab-com/gl-infra/production/-/issues/2916

October 28, 2020 00:35 UTC
[Investigating] We’ve received reports that on SSH pushes users were getting an unintended message. This has been remediated.

Degraded performance related to GCS issues

October 24, 2020 21:56 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 24, 2020 21:56 UTC
[Resolved] GitLab.com Registry is back to normal.

October 24, 2020 18:19 UTC
[Monitoring] GitLab.com Registry appears to be recovering. We are continuing to monitor its status along with GCS.

October 24, 2020 17:15 UTC
[Identified] We are continuing to monitor issues with the Registry on GitLab.com and looking for updates from our provider.

October 24, 2020 16:28 UTC
[Identified] We are continuing to monitor the issues with Registry on GitLab.com and are working with our provider on those issues.

October 24, 2020 16:04 UTC
[Identified] We are continuing to investigate issues with GitLab Pages and Registry on GitLab.com which appear to be related to underlying problems with GCS.

October 24, 2020 15:43 UTC
[Investigating] It appears GitLab Pages may be affected. Other GitLab.com services appear to be operating correctly.

October 24, 2020 15:40 UTC
[Investigating] We are experiencing degraded performance, probably related to GCS issues. We are investigating in gitlab.com/gitlab-com/gl-infra/production/-/issues/2887

GitLab.com is experiencing degraded availability because of high database load.

October 24, 2020 11:32 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 24, 2020 11:32 UTC
[Resolved] GitLab.com is back to operating normally.

October 24, 2020 11:19 UTC
[Monitoring] GitLab.com is back to operating normally. We are continuing to monitor on the incident issue.

October 24, 2020 10:57 UTC
[Identified] We are continuing to investigate the origin of some slow database queries. We are tracking this in gitlab.com/gitlab-com/gl-infra/production/-/issues/2885.

October 24, 2020 10:27 UTC
[Identified] We have identified a particular endpoint that was related to the issues and have disabled it temporarily on GitLab.com. The application appears to be recovering, but we are continuing to investigate.

October 24, 2020 10:02 UTC
[Investigating] We are continuing to investigate issues related to high load on the DB cluster for GitLab.com.

October 24, 2020 09:36 UTC
[Investigating] GitLab.com is experiencing degraded availability because of high database load. We are investigating.

Potential DDoS on GitLab.com

October 23, 2020 14:53 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 23, 2020 14:53 UTC
[Resolved] Thank you all for your patience. We can confirm that the DDoS incident is now fully mitigated; GitLab.com is fully operational and secure.

October 23, 2020 13:58 UTC
[Monitoring] GitLab.com is continuing to operate normally. We've identified the source of an apparent attack and have mitigated its impact. We will continue investigating this incident and how we can further lessen the impact of attacks like these.

October 23, 2020 13:23 UTC
[Investigating] GitLab.com experienced a brief disruption due to a potential DDoS attempt. GitLab.com is now operating as normal and we are continuing to look into the source of the disruption.

August 2020

Routing failures from some locations impacting GitLab.com

August 30, 2020 16:59 UTC

Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 30, 2020 16:59 UTC
[Resolved] This incident is now resolved.

August 30, 2020 14:51 UTC
[Monitoring] Cloudflare is reporting this issue to be mitigated. We will monitor the situation.

August 30, 2020 13:26 UTC
[Identified] There is an ongoing issue with the CenturyLink ISP that is impacting some traffic to GitLab.com. We are tracking this incident here: cloudflarestatus.com/incidents/hptvkprkvp23.

Maven Artifact Uploads Limited to Default 50MB

August 24, 2020 16:08 UTC

Incident Status

Operational


Components

packages.gitlab.com


Locations

AWS




August 24, 2020 16:08 UTC
[Resolved] Upload limits for artifacts have been restored, and previously succeeding uploads should be succeeding again! We apologize for any inconvenience this change caused.

August 24, 2020 15:45 UTC
[Monitoring] We've identified the other areas where our setting needed to be updated and are working quickly to confirm the change has its full, originally intended effect.

August 24, 2020 15:25 UTC
[Identified] We're still trying to identify why the upload limit remains stuck at 50 MB. Our suspicion remains that an application cache is retaining the old value.

August 24, 2020 15:12 UTC
[Identified] We suspect the application setting for the Maven artifacts upload limit is cached in the application. We're working to confirm this, so that we can put the new threshold into effect immediately. Artifacts larger than 50 MB remain blocked.

August 24, 2020 15:01 UTC
[Monitoring] We've raised the default maximum upload size to a much higher value. We're monitoring users' pipelines to determine whether uploads for affected customers are succeeding again.

August 24, 2020 14:43 UTC
[Identified] A recent GitLab release introduced a maximum upload size for Maven artifacts. If users' build pipelines or other workflows attempt to upload artifacts larger than 50 MB, they will fail. We're working to raise the limit shortly to unblock affected users.

July 2020

Cloudflare DNS issues

July 17, 2020 22:34 UTC


Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




July 17, 2020 22:34 UTC
[Resolved] Thanks, everyone, for your patience. We have seen traffic resume normally. Any further review will continue in the incident issue gitlab.com/gitlab-com/gl-infra/production/-/issues/2433

July 17, 2020 22:00 UTC
[Monitoring] Some users are still experiencing issues accessing GitLab services, especially if they're using Cloudflare's 1.1.1.1 DNS resolver. Cloudflare's status page also suggests some locations are still affected. We recommend temporarily using a different DNS resolver. More details: gitlab.com/gitlab-com/gl-infra/production/-/issues/2433#note_381587434

July 17, 2020 21:39 UTC
[Monitoring] Cloudflare appears to have resolved their DNS issues, and all services are operational now. We are continuing to monitor. An incident issue has been created and can be reviewed at gitlab.com/gitlab-com/gl-infra/production/-/issues/2433

July 17, 2020 21:33 UTC
[Identified] We have confirmed that the Cloudflare issue has affected all of our services, including Support's Zendesk instances and our status page. We will continue to provide updates as we learn more.

July 17, 2020 21:22 UTC
[Investigating] GitLab.com is experiencing issues related to DNS with Cloudflare. We are investigating.

May 2020

Project File Browser Inaccessible on Canary

May 30, 2020 02:58 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




May 30, 2020 02:58 UTC
[Resolved] The issue has been resolved and the canary environment is now operational. Full details at gitlab.com/gitlab-org/gitlab/-/issues/219478

May 30, 2020 02:09 UTC
[Monitoring] The fix has been fully deployed and the affected canary environment is now fully operational. We are monitoring at this time to ensure the issue doesn't recur.

May 30, 2020 00:08 UTC
[Identified] The fix is being applied and we have updated gitlab.com/gitlab-org/gitlab/-/issues/219478 with the details. The canary environment should no longer return an error once this is fully deployed.

May 29, 2020 22:24 UTC
[Identified] We've identified the cause of the issue and have a plan to deploy a fix to ensure that production remains unaffected. Details in gitlab.com/gitlab-org/gitlab/-/issues/219478.

May 29, 2020 17:05 UTC
[Investigating] To disable canary on GitLab.com, head to next.gitlab.com and toggle the switch to Current. This should mitigate the issue for affected users while we investigate a fix. More details are available in gitlab.com/gitlab-org/gitlab/-/issues/219478.

May 29, 2020 17:02 UTC
[Investigating] We're investigating an issue on our canary environment causing the file browser of internal and private projects to not load. See status.gitlab.com for steps to disable canary if you're affected.

May 29, 2020 17:00 UTC
[Investigating] GitLab.com is operational, but we're investigating an issue on our canary environment causing the file browser of internal and private projects to not load. Disabling canary mitigates this. See status.gitlab.com for steps to disable it if you're affected.

April 2020

High volume credential stuffing and password spraying event

April 28, 2020 20:37 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




April 28, 2020 20:37 UTC
[Resolved] After a period of monitoring the mitigation we put in place, we can confirm that the attack has subsided and the issue appears resolved.

April 28, 2020 19:23 UTC
[Identified] We're beginning to implement some countermeasures to mitigate the attack and are continuously monitoring the impact on GitLab.com. Stand by for further updates.

April 28, 2020 18:54 UTC
[Investigating] GitLab.com is seeing high-volume credential stuffing and password spraying attempts. We're working to limit the impact, but the volume of unique and regularly rotating IPs is making it tough. Stay tuned.

We're observing an increased error rate across the fleet

April 8, 2020 15:40 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




April 8, 2020 15:40 UTC
[Resolved] GCP has given the all clear that the incident has been resolved. GitLab.com is fully available. An incident review issue is attached to our status page and details are in gitlab.com/gitlab-com/gl-infra/production/-/issues/1919.

April 8, 2020 15:12 UTC
[Monitoring] Although Google Cloud Platform (GCP) hasn't announced the "all clear", we have observed most of our systems recovering. We're also hearing from our account manager that other GCP customers are observing similar recoveries. We'll continue to monitor, and we'll only resolve the incident once we have firm confirmation from GCP that they've recovered.

April 8, 2020 14:57 UTC
[Monitoring] Although we were primarily impacted by Google Cloud Storage failures, we've confirmed that all API requests to Google Cloud Platform from our systems have been failing since the start of this incident. We've received confirmation from our TAM and will be monitoring status.cloud.google.com and open issues for recovery confirmation.

April 8, 2020 14:45 UTC
[Identified] We're confident that the issue is related to Google Cloud Storage. We're working with our TAM to confirm the issue and are searching for any other means available to us to ensure performance doesn't decrease further.

April 8, 2020 14:30 UTC
[Investigating] This update provides status adjustments for each individual component in our stack.

April 8, 2020 14:27 UTC
[Investigating] We're observing multiple systems with object storage backends returning errors indicating the service is unavailable, which is contributing to GitLab.com's increased error rates. We're continuing to investigate the underlying cause.

April 8, 2020 14:11 UTC
[Investigating] Initial investigations are underway. We're observing an increased error rate on GitLab.com, possibly due to issues with object storage buckets. We'll be updating in gitlab.com/gitlab-com/gl-infra/production/-/issues/1919.




