
Status History

Filter: Support Services



March 2021

Possible Database & Redis degradation

March 18, 2021 15:06 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 18, 2021 15:06 UTC
[Resolved] Our services have fully recovered and the incident is now mitigated. For the full root cause analysis follow our investigation at gitlab.com/gitlab-com/gl-infra/production/-/issues/4011

March 18, 2021 14:57 UTC
[Investigating] No material updates; we continue to see recovery across our services. Our engineers are still investigating the root cause. More details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/4011

March 18, 2021 14:40 UTC
[Investigating] We're seeing a recovery across our different services and continuing to investigate to confirm the root cause of the degradation. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/4011

March 18, 2021 14:23 UTC
[Investigating] We're investigating a possible service degradation across our Redis and DB components. Users might be experiencing delays in CI/CD pipelines. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/4011

Database latency resulting in 500 errors

March 15, 2021 14:19 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 15, 2021 14:19 UTC
[Resolved] While the root cause is still being investigated, we have seen no further alerts and GitLab.com is fully operational. We are therefore marking this incident as resolved. For further details please check: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 12:51 UTC
[Monitoring] GitLab.com appears to be stable, with no additional updates to share. Our engineers are still monitoring our application and investigating the root cause of this problem. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 12:35 UTC
[Investigating] No material updates, our engineers are investigating the root cause. We're currently waiting for more database replicas to come online to help with the load. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 12:15 UTC
[Investigating] We're still experiencing 500 errors on GitLab.com, investigation is underway. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 11:51 UTC
[Investigating] We're still investigating the overall cause of the issue and are currently checking if this could be related to database re-indexing of a particular table. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 11:29 UTC
[Investigating] We are experiencing database latency resulting in customers seeing 500 errors when trying to reach GitLab.com. We are currently investigating. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

Intermittent issue on GitLab.com

March 1, 2021 05:19 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 1, 2021 05:19 UTC
[Resolved] It seems the error rate has not gone back up now that the monthly CI minutes reset job has completed. All the affected components are back to operating normally. We will post further updates and the fix in the incident issue gitlab.com/gitlab-com/gl-infra/production/-/issues/3823

March 1, 2021 03:13 UTC
[Monitoring] It seems the CI minutes reset jobs have finished. We're continuing to monitor performance.

March 1, 2021 02:34 UTC
[Identified] We expect the error rate to gradually recover on its own over the next 30 minutes. We're exploring a permanent solution so that this doesn't recur during next month's CI minutes reset jobs.

March 1, 2021 02:01 UTC
[Identified] We've noticed degraded performance on GitLab.com. It appears to be related to our monthly CI minutes reset jobs. While performance is recovering, we're exploring possible solutions. You can read more about our investigation at gitlab.com/gitlab-com/gl-infra/production/-/issues/3823

February 2021

Submitting a support ticket is currently unavailable

February 8, 2021 11:46 UTC

Incident Status

Degraded Performance


Components

Support Services


Locations

Zendesk




February 8, 2021 11:46 UTC
[Resolved] The ticketing system at support.gitlab.com is up and running.

February 8, 2021 10:16 UTC
[Monitoring] Submitting tickets at support.gitlab.com is now available.

February 8, 2021 09:41 UTC
[Identified] Submitting a support ticket is currently unavailable due to a server error from our ticket service provider. We are in contact with them and will keep you updated accordingly.

October 2020

SSO warnings

October 28, 2020 01:45 UTC


Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 28, 2020 01:45 UTC
[Resolved] Closing incident per updates on gitlab.com/gitlab-com/gl-infra/production/-/issues/2916

October 28, 2020 00:35 UTC
[Investigating] We've received reports that users were seeing an unintended message on SSH pushes. This has been remediated.

Degraded performance related to GCS issues

October 24, 2020 21:56 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 24, 2020 21:56 UTC
[Resolved] GitLab.com Registry is back to normal.

October 24, 2020 18:19 UTC
[Monitoring] GitLab.com Registry appears to be recovering. We are continuing to monitor its status along with GCS.

October 24, 2020 17:15 UTC
[Identified] We are continuing to monitor issues with the Registry on GitLab.com and looking for updates from our provider.

October 24, 2020 16:28 UTC
[Identified] We are continuing to monitor the issues with Registry on GitLab.com and are working with our provider on those issues.

October 24, 2020 16:04 UTC
[Identified] We are continuing to investigate issues with GitLab Pages and Registry on GitLab.com which appear to be related to underlying problems with GCS.

October 24, 2020 15:43 UTC
[Investigating] It appears GitLab Pages may be affected. Other GitLab.com services appear to be operating correctly.

October 24, 2020 15:40 UTC
[Investigating] We are experiencing degraded performance probably related to GCS issues. We are investigating in gitlab.com/gitlab-com/gl-infra/production/-/issues/2887

GitLab.com is experiencing degraded availability because of high database load.

October 24, 2020 11:32 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 24, 2020 11:32 UTC
[Resolved] GitLab.com is back to operating normally.

October 24, 2020 11:19 UTC
[Monitoring] GitLab.com is back to operating normally. We are continuing to monitor on the incident issue.

October 24, 2020 10:57 UTC
[Identified] We are continuing to investigate the origin of some slow database queries. We are tracking on gitlab.com/gitlab-com/gl-infra/production/-/issues/2885.

October 24, 2020 10:27 UTC
[Identified] We have identified a particular endpoint that was related to the issues and have disabled it temporarily on GitLab.com. The application appears to be recovering, but we are continuing to investigate.

October 24, 2020 10:02 UTC
[Investigating] We are continuing to investigate issues related to high load on the DB cluster for GitLab.com.

October 24, 2020 09:36 UTC
[Investigating] GitLab.com is experiencing degraded availability because of high database load. We are investigating.

Potential DDoS on GitLab.com

October 23, 2020 14:53 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 23, 2020 14:53 UTC
[Resolved] Thank you all for your patience. We can confirm that the DDoS incident is now fully mitigated; GitLab.com is fully operational and secure.

October 23, 2020 13:58 UTC
[Monitoring] GitLab.com is continuing to operate normally. We've identified the source of an apparent attack and have mitigated its impact. We will continue investigating this incident and how we can further lessen the impact of attacks like these.

October 23, 2020 13:23 UTC
[Investigating] GitLab.com experienced a brief disruption due to a potential DDoS attempt. GitLab.com is now operating as normal and we are continuing to look into the source of the disruption.

September 2020

Zendesk Support Portal is currently down

September 9, 2020 19:24 UTC

Incident Status

Service Disruption


Components

Support Services


Locations

Zendesk




September 9, 2020 19:24 UTC
[Resolved] Zendesk resolved the issue and has declared the all clear on this incident. Support services are now once again fully operational.

September 9, 2020 18:13 UTC
[Monitoring] Zendesk has reported that our customers should no longer experience issues accessing, creating, and updating support tickets, though they have yet to announce the all clear. See status.zendesk.com for details.

September 9, 2020 13:25 UTC
[Monitoring] We're seeing some improvements accessing the support portal though Zendesk is still working to fully resolve the issue. You may encounter latency during this time.

September 9, 2020 11:53 UTC
[Investigating] Zendesk is continuing to investigate the outage impacting Pod 18, which hosts our support portal. We are awaiting further updates; see status.zendesk.com.

September 9, 2020 11:33 UTC
[Investigating] We are aware of errors when accessing our support portal (support.gitlab.com) and are in touch with Zendesk. This currently impacts submitting, viewing and responding to support requests. Discussion in gitlab.com/gitlab-com/gl-infra/production/-/issues/2683.

August 2020

Routing failures from some locations impacting GitLab.com

August 30, 2020 16:59 UTC

Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 30, 2020 16:59 UTC
[Resolved] This incident is now resolved.

August 30, 2020 14:51 UTC
[Monitoring] Cloudflare is reporting this issue to be mitigated. We will monitor the situation.

August 30, 2020 13:26 UTC
[Identified] There is an ongoing issue with the CenturyLink ISP that is impacting some traffic to GitLab.com. We are tracking this incident here: cloudflarestatus.com/incidents/hptvkprkvp23.

July 2020

Cloudflare DNS issues

July 17, 2020 22:34 UTC


Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




July 17, 2020 22:34 UTC
[Resolved] Thanks everyone for their patience. We have seen traffic resume normally. Any review will continue through the incident issue gitlab.com/gitlab-com/gl-infra/production/-/issues/2433

July 17, 2020 22:00 UTC
[Monitoring] Some users are still experiencing issues accessing GitLab services, especially if they're using Cloudflare's 1.1.1.1 DNS resolver. Cloudflare's status page also suggests some locations are still affected. We recommend using a different DNS resolver, at least temporarily. More details: gitlab.com/gitlab-com/gl-infra/production/-/issues/2433#note_381587434

July 17, 2020 21:39 UTC
[Monitoring] Cloudflare seems to have resolved their DNS issues, and all services are operational now. We are monitoring for now. An incident issue has been created and will be reviewed in gitlab.com/gitlab-com/gl-infra/production/-/issues/2433

July 17, 2020 21:33 UTC
[Identified] We have confirmed that the Cloudflare issue has affected all our services, including Support's Zendesk instances and our status page. We will continue to provide updates as we learn more.

July 17, 2020 21:22 UTC
[Investigating] GitLab.com is experiencing issues related to DNS with Cloudflare. We are investigating.
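As a rough illustration of the workaround recommended in the 22:00 UTC update above, the sketch below compares how gitlab.com resolves through the system-configured resolver versus an explicitly chosen alternative resolver (Google's 8.8.8.8, used here purely as an example, not a GitLab recommendation). It assumes the third-party dnspython package is available; a failure from the system resolver alongside a successful answer from the alternative would point to a resolver-side issue like the one described in this incident.

# Hedged sketch: compare DNS resolution of gitlab.com via the system resolver
# and via an explicitly chosen alternative resolver (8.8.8.8 is an example).
# Requires the third-party dnspython package (pip install dnspython).
import socket

import dns.resolver  # dnspython


def system_resolver_ips(host):
    # Uses whatever resolver the OS is configured with (e.g. 1.1.1.1).
    return sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})


def alternative_resolver_ips(host, nameserver="8.8.8.8"):
    # Bypasses the system configuration and queries the given nameserver directly.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    return sorted(rr.to_text() for rr in resolver.resolve(host, "A"))


if __name__ == "__main__":
    try:
        print("system resolver:", system_resolver_ips("gitlab.com"))
    except socket.gaierror as err:
        print("system resolver failed:", err)
    print("8.8.8.8:", alternative_resolver_ips("gitlab.com"))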

May 2020

Project File Browser Inaccessible on Canary

May 30, 2020 02:58 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




May 30, 2020 02:58 UTC
[Resolved] The issue has been resolved and the canary environment is now operational. Full details at gitlab.com/gitlab-org/gitlab/-/issues/219478

May 30, 2020 02:09 UTC
[Monitoring] The fix has been fully deployed and the affected canary environment is now fully operational. We are monitoring at this time to ensure the issue doesn't recur.

May 30, 2020 00:08 UTC
[Identified] The fix is being applied and we have updated gitlab.com/gitlab-org/gitlab/-/issues/219478 with the details. The canary environment should no longer return an error once this is fully deployed.

May 29, 2020 22:24 UTC
[Identified] We've identified the cause of the issue and have a plan to deploy a fix to ensure that production remains unaffected. Details in gitlab.com/gitlab-org/gitlab/-/issues/219478.

May 29, 2020 17:05 UTC
[Investigating] To disable canary on GitLab.com head to next.gitlab.com and toggle the switch to Current. This should mitigate this issue if you're affected while we investigate a fix. More details are available in gitlab.com/gitlab-org/gitlab/-/issues/219478.

May 29, 2020 17:02 UTC
[Investigating] We're investigating an issue on our canary environment causing the file browser of internal and private projects to not load. See status.gitlab.com for steps to disable canary if you're affected.

May 29, 2020 17:00 UTC
[Investigating] GitLab.com is operational, but we're investigating an issue on our canary environment causing the file browser of internal and private projects to not load. Disabling canary mitigates this. See status.gitlab.com for steps to disable it if you're affected.
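For background on the next.gitlab.com toggle mentioned in the 17:05 UTC update above: the toggle works by setting a routing cookie in the browser. The sketch below is a hypothetical illustration of opting a scripted request out of canary; it assumes the cookie is named gitlab_canary and that a value of false routes the request to the main fleet, which should be verified against current GitLab documentation before relying on it.

# Hypothetical sketch: opting a scripted request out of GitLab.com's canary
# fleet. Assumption: canary routing is controlled by a cookie (assumed here to
# be named "gitlab_canary", the one the next.gitlab.com toggle sets); confirm
# the cookie name and semantics against current GitLab documentation.
import urllib.request

request = urllib.request.Request(
    "https://gitlab.com/explore",
    headers={"Cookie": "gitlab_canary=false"},  # ask for the non-canary fleet
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.url)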

April 2020

High volume credential stuffing and password spraying event

April 28, 2020 20:37 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




April 28, 2020 20:37 UTC
[Resolved] After a period of monitoring the implementation of the mitigation we put in place, the attack has subsided and the issue appears resolved.

April 28, 2020 19:23 UTC
[Identified] We're beginning to implement some countermeasures to mitigate the attack and are continuously monitoring the impact on GitLab.com. Stand by for further updates.

April 28, 2020 18:54 UTC
[Investigating] GitLab.com is seeing high-volume credential stuffing and password spraying attempts. We're working to limit the impact, but the volume of unique and regularly rotating IPs is making it tough. Stay tuned.

We're observing an increased error rate across the fleet

April 8, 2020 15:40 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




April 8, 2020 15:40 UTC
[Resolved] GCP has given the all clear that the incident has been resolved. GitLab.com is fully available. An incident review issue is attached to our status page and details are in gitlab.com/gitlab-com/gl-infra/production/-/issues/1919.

April 8, 2020 15:12 UTC
[Monitoring] Although Google Cloud Platform (GCP) hasn't announced the "all clear", we have observed most of our systems recovering. We're also hearing from our account manager that other GCP customers are observing similar recoveries. We'll continue to monitor, and we'll only resolve the incident once we have firm confirmation from GCP that they've recovered.

April 8, 2020 14:57 UTC
[Monitoring] Although we were primarily impacted by Google Cloud Storage failures, we've confirmed that all API requests to Google Cloud Platform from our systems have been failing since the start of this incident. We've received confirmation from our TAM and will be monitoring status.cloud.google.com and open issues for recovery confirmation.

April 8, 2020 14:45 UTC
[Identified] We're confident that the issue is related to Google Cloud Storage. We're working with our TAM to confirm the issue and are searching for any other means available to us to ensure performance doesn't decrease further.

April 8, 2020 14:30 UTC
[Investigating] With this status update we are adjusting the status of each individual component in our stack.

April 8, 2020 14:27 UTC
[Investigating] We're observing multiple systems with object storage backends returning errors indicating the service is unavailable, which is contributing to increased error rates on GitLab.com. We're continuing to investigate the underlying cause.

April 8, 2020 14:11 UTC
[Investigating] Initial investigations are underway. We're observing an increased error rate on GitLab.com, possibly due to issues with object storage buckets. We'll be updating in gitlab.com/gitlab-com/gl-infra/production/-/issues/1919.

March 2020

GitLab.com disruption due to database issues related to failover

March 30, 2020 07:14 UTC

Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 30, 2020 07:14 UTC
[Resolved] GitLab.com is operating normally and we have not seen a recurrence of the issue during our latest monitoring period. Our apologies for any inconvenience. We will be doing a root cause analysis (RCA) here: gitlab.com/gitlab-com/gl-infra/production/-/issues/1865

March 30, 2020 06:37 UTC
[Monitoring] The affected database replicas are now back to normal operation. We're monitoring the issue now and investigating to ensure it doesn't recur. Details can be found at gitlab.com/gitlab-com/gl-infra/production/-/issues/1865

March 30, 2020 06:29 UTC
[Identified] Our db replicas are continuing to recover and performance is now improving on GitLab.com.

March 30, 2020 06:13 UTC
[Identified] GitLab.com is operational, but our engineers are still working on bringing all DB replicas fully online.

March 30, 2020 06:02 UTC
[Identified] The recovery is still in process while we work to bring our other database replicas online.

March 30, 2020 05:53 UTC
[Identified] A few more of the database replicas have recovered and we are continuing to bring the remaining replicas back online.

March 30, 2020 05:43 UTC
[Identified] We are now getting replicas back online and GitLab.com should be starting to recover.

March 30, 2020 05:36 UTC
[Identified] We are still working to bring our db replicas online, so you may still experience slowdowns on GitLab.com at this point.

March 30, 2020 05:24 UTC
[Identified] We are continuing to work on bringing our db replicas back online.

March 30, 2020 05:08 UTC
[Identified] We are still facing slowdowns in some requests as we work on resolving the issue with our databases.

March 30, 2020 04:53 UTC
[Identified] We are tracking the incident on gitlab.com/gitlab-com/gl-infra/production/-/issues/1865.

March 30, 2020 04:43 UTC
[Identified] GitLab.com is experiencing a service disruption related to an automatic DB failover which did not fully succeed. We are currently working on bringing replicas back online.

High latencies for some repos on a fileserver with high load

March 28, 2020 15:31 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 28, 2020 15:31 UTC
[Resolved] We resolved the load issues on the affected fileserver. All repositories are responding with normal latencies again.

March 28, 2020 14:45 UTC
[Investigating] We are seeing high latencies for some repositories located on a fileserver with high load. We are taking measures to reduce the load on that server.

Potential password spraying activity

March 24, 2020 18:24 UTC

Incident Status

Security Issue


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 24, 2020 18:24 UTC
[Resolved] Our team noticed some potential password spraying activity that we suspect is taking advantage of vulnerable users on GitLab and likely other services. Here's a doc about 2FA on GitLab to keep you safe: docs.gitlab.com/ee/security/two_factor_authentication.html

February 2020

GitLab.com web UI is currently unavailable

February 22, 2020 09:18 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




February 22, 2020 09:18 UTC
[Resolved] We're fully operational and have now disabled archive downloads across GitLab.com. We've opened a security incident and are working quickly to re-enable this feature.

February 22, 2020 09:04 UTC
[Monitoring] We're fully operational and will be opening up a security issue. Continue to monitor our status page for updates.

February 22, 2020 08:54 UTC
[Monitoring] We're observing continued attacks and are continuing to monitor. Our services are currently online.

February 22, 2020 08:20 UTC
[Monitoring] No change in status. We're continuing to monitor. Our services are currently online.

February 22, 2020 08:08 UTC
[Monitoring] We're observing continued attacks. Our current mitigation strategy is continuing to be effective and all services are currently online. We'll continue to monitor and update.

February 22, 2020 08:03 UTC
[Monitoring] We're continuing to monitor for abuse. All systems are currently online and fully operational. We're leaving the incident in "Monitoring" state until we're confident the attack has ceased.

February 22, 2020 08:02 UTC
[Monitoring] Operational status

February 22, 2020 08:00 UTC
[Monitoring] We're continuing to monitor for abuse. All systems are currently online and fully operational. We're leaving the incident in "Monitoring" state until we're confident the attack has ceased.

February 22, 2020 07:48 UTC
[Monitoring] We've blocked the latest attack target and error rates are beginning to decline.

February 22, 2020 07:36 UTC
[Monitoring] We're still in a degraded state as we work to mitigate an incoming attack.

February 22, 2020 07:20 UTC
[Monitoring] Continuing to see some errors as we work to mitigate incoming attacks.

February 22, 2020 07:06 UTC
[Monitoring] We're seeing an increased error rate and are taking steps to mitigate an incoming attack.

February 22, 2020 06:46 UTC
[Monitoring] We're observing continued attacks. Our mitigation strategy is continuing to be effective and all services are currently online. We'll continue to monitor and update.

February 22, 2020 06:30 UTC
[Monitoring] We’re observing another wave of attacks, but appear to have mitigated the attempt to disrupt service. We’ll continue to monitor and update.

February 22, 2020 05:57 UTC
[Monitoring] Out of an abundance of caution we are temporarily disabling archive downloads.

February 22, 2020 05:26 UTC
[Monitoring] We're continuing to monitor for abuse. All systems are currently online and fully operational. We're leaving the incident in "Monitoring" state until we're confident the attack has ceased.

February 22, 2020 05:21 UTC
[Monitoring] We're continuing to monitor the situation.

February 22, 2020 05:02 UTC
[Monitoring] Operational status.

February 22, 2020 04:59 UTC
[Monitoring] Infrastructure is fully operational. We are continuing to monitor for abusers.

February 22, 2020 04:54 UTC
[Monitoring] The mitigation has been deployed and error rates are returning to normal. We are continuing to monitor the situation.

February 22, 2020 04:42 UTC
[Identified] We have identified and tested a mitigation to the increase of traffic and are working to deploy that change system wide.

February 22, 2020 04:26 UTC
[Identified] We're getting a higher than usual number of requests to download the gitlab-foss project. We are disabling downloads on this project temporarily.

February 22, 2020 04:12 UTC
[Investigating] We've observed a significant increase in traffic to GitLab.com and are prepping steps to mitigate suspected abuse.

February 22, 2020 03:58 UTC
[Investigating] Attempts to load web pages return a 502. We're investigating an issue with the HTTP load balancers.

Increased web latencies

February 11, 2020 09:13 UTC


Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




February 11, 2020 09:13 UTC
[Resolved] We identified and fixed the root cause for the high db insert rate. All systems are back to normal. Details can be found in the incident issue: gitlab.com/gitlab-com/gl-infra/production/issues/1651

February 11, 2020 08:56 UTC
[Investigating] The high db insert rate is still affecting our site, causing latencies and increased error rates. Details can be followed in the incident issue: gitlab.com/gitlab-com/gl-infra/production/issues/1651

February 11, 2020 08:26 UTC
[Investigating] We are investigating increased web latencies caused by an increased db insert rate. Details can be followed in the incident issue: gitlab.com/gitlab-com/gl-infra/production/issues/1651

November 2019

GitLab.com is Down

November 28, 2019 13:18 UTC


Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




November 28, 2019 13:18 UTC
[Resolved] GitLab.com is back to operating normally. We have taken our working notes and added them to gitlab.com/gitlab-com/gl-infra/production/issues/1421.

November 28, 2019 12:43 UTC
[Monitoring] GitLab.com is now recovering. We found the last 2 DB nodes that had not reverted the change. Apologies for the disruption.

November 28, 2019 12:28 UTC
[Identified] We rolled back the firewall change, but along the way the application encountered issues reconnecting to the database. We’re force restarting the application and hope to be back online soon.

November 28, 2019 11:49 UTC
[Identified] We have identified that a firewall misconfiguration was applied which is preventing applications from connecting to the database. We're rolling back that change and expect to be operational again shortly.

November 28, 2019 11:35 UTC
[Identified] We've identified an issue with database connectivity and are working to restore service.

November 28, 2019 11:18 UTC
[Investigating] We're investigating an outage. Currently GitLab.com is down.




