
Status History




July 2020

Cloudflare DNS issues

July 17, 2020 22:34 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




July 17, 2020 22:34 UTC
[Resolved] Thanks, everyone, for your patience. We have seen traffic resume normally. Further review will continue through the incident issue gitlab.com/gitlab-com/gl-infra/production/-/issues/2433

July 17, 2020 22:00 UTC
[Monitoring] Some users are still experiencing issues accessing GitLab services, especially if they're using Cloudflare's 1.1.1.1 DNS resolver. Cloudflare's status page also suggests some locations are still affected. We recommend using a different DNS resolver, at least temporarily. More details: gitlab.com/gitlab-com/gl-infra/production/-/issues/2433#note_381587434
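
As a quick sanity check, a short script like the sketch below can compare how Cloudflare's 1.1.1.1 and another public resolver answer for gitlab.com. This is only a sketch, assuming the third-party dnspython package is installed; 8.8.8.8 is used purely as an example alternate resolver, not a recommendation from the incident update.

```python
# Sketch: compare DNS answers for gitlab.com from two resolvers.
# Assumes the third-party "dnspython" package (pip install dnspython);
# 8.8.8.8 is an example alternate resolver only.
import dns.resolver
import dns.exception

RESOLVERS = {"Cloudflare": "1.1.1.1", "example alternate": "8.8.8.8"}

def lookup(nameserver, name="gitlab.com"):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    resolver.lifetime = 5  # give up after 5 seconds
    answer = resolver.resolve(name, "A")
    return [rr.to_text() for rr in answer]

for label, ip in RESOLVERS.items():
    try:
        print(f"{label} ({ip}): {lookup(ip)}")
    except dns.exception.DNSException as exc:
        print(f"{label} ({ip}): lookup failed ({exc!r})")
```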

July 17, 2020 21:39 UTC
[Monitoring] Cloudflare appears to have resolved their DNS issues, and all services are operational now. We are continuing to monitor. An incident issue has been created and will be reviewed in gitlab.com/gitlab-com/gl-infra/production/-/issues/2433

July 17, 2020 21:33 UTC
[Identified] We have confirmed that the Cloudflare issue has affected all our services, including Support's Zendesk instances and our status page. We will continue to provide updates as we learn more.

July 17, 2020 21:22 UTC
[Investigating] GitLab.com is experiencing issues related to DNS with Cloudflare. We are investigating.

May 2020

Project File Browser Inaccessible on Canary

May 30, 2020 02:58 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




May 30, 2020 02:58 UTC
[Resolved] The issue has been resolved and the canary environment is now operational. Full details at gitlab.com/gitlab-org/gitlab/-/issues/219478

May 30, 2020 02:09 UTC
[Monitoring] The fix has been fully deployed and the affected canary environment is now fully operational. We are monitoring at this time to ensure the issue doesn't recur.

May 30, 2020 00:08 UTC
[Identified] The fix is being applied and we have updated gitlab.com/gitlab-org/gitlab/-/issues/219478 with the details. The canary environment should no longer return an error once this is fully deployed.

May 29, 2020 22:24 UTC
[Identified] We've identified the cause of the issue and have a plan to deploy a fix to ensure that production remains unaffected. Details in gitlab.com/gitlab-org/gitlab/-/issues/219478.

May 29, 2020 17:05 UTC
[Investigating] To disable canary on GitLab.com, head to next.gitlab.com and toggle the switch to Current. If you're affected, this should mitigate the issue while we investigate a fix. More details are available in gitlab.com/gitlab-org/gitlab/-/issues/219478.
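
For scripted clients, the same opt-out may be achievable programmatically. The sketch below assumes the next.gitlab.com toggle is backed by a gitlab_canary cookie that the load balancers route on; that cookie name is an assumption here and should be verified against GitLab's current documentation before relying on it.

```python
# Sketch only: opt a scripted client out of the canary environment,
# assuming the next.gitlab.com toggle sets a "gitlab_canary" cookie
# (an assumption; verify against GitLab's docs before relying on it).
import requests

session = requests.Session()
session.cookies.set("gitlab_canary", "false", domain="gitlab.com")

# Subsequent requests through this session should be routed to the
# non-canary ("Current") environment if cookie-based routing holds.
resp = session.get("https://gitlab.com/gitlab-org/gitlab/-/tree/master")
print(resp.status_code)
```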

May 29, 2020 17:02 UTC
[Investigating] We're investigating an issue on our canary environment causing the file browser of internal and private projects to not load. See status.gitlab.com for steps to disable canary if you're affected.

May 29, 2020 17:00 UTC
[Investigating] GitLab.com is operational, but we're investigating an issue on our canary environment causing the file browser of internal and private projects to not load. Disabling canary mitigates this. See status.gitlab.com for steps to disable it if you're affected.

April 2020

High volume credential stuffing and password spraying event

April 28, 2020 20:37 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




April 28, 2020 20:37 UTC
[Resolved] After a period of monitoring the mitigation we put in place, the attack has subsided and the issue appears to be resolved.

April 28, 2020 19:23 UTC
[Identified] We're beginning to implement some countermeasures to mitigate the attack and are continuously monitoring the impact on GitLab.com. Stand by for further updates.

April 28, 2020 18:54 UTC
[Investigating] GitLab.com is seeing high-volume credential stuffing and password spraying attempts. We're working to limit the impact, but the volume of unique and regularly rotating IPs is making it tough. Stay tuned.

We're observing an increased error rate across the fleet

April 8, 2020 15:40 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




April 8, 2020 15:40 UTC
[Resolved] GCP has given the all clear that the incident has been resolved. GitLab.com is fully available. An incident review issue is attached to our status page and details are in gitlab.com/gitlab-com/gl-infra/production/-/issues/1919.

April 8, 2020 15:12 UTC
[Monitoring] Although Google Cloud Platform (GCP) hasn't announced the "all clear", we have observed most of our systems recovering. We're also hearing from our account manager that other GCP customers are observing similar recoveries. We'll continue to monitor, and we'll only resolve the incident once we have firm confirmation from GCP that they've recovered.

April 8, 2020 14:57 UTC
[Monitoring] Although we were primarily impacted by Google Cloud Storage failures, we've confirmed that all API requests to Google Cloud Platform from our systems have been failing since the start of this incident. We've received confirmation from our TAM and will be monitoring status.cloud.google.com and open issues for recovery confirmation.

April 8, 2020 14:45 UTC
[Identified] We're confident that the issue is related to Google Cloud Storage. We're working with our TAM to confirm the issue and are searching for any other means available to us to ensure performance doesn't decrease further.

April 8, 2020 14:30 UTC
[Investigating] This status update adjusts the status of each individual component in our stack.

April 8, 2020 14:27 UTC
[Investigating] We're observing multiple systems backed by object storage returning errors indicating the service is unavailable, which is contributing to increased error rates on GitLab.com. We're continuing to investigate the underlying cause.

April 8, 2020 14:11 UTC
[Investigating] Initial investigations are underway. We're observing an increased error rate on GitLab.com, possibly due to issues with object storage buckets. We'll be updating in gitlab.com/gitlab-com/gl-infra/production/-/issues/1919.

March 2020

GitLab.com disruption due to database issues related to failover

March 30, 2020 07:14 UTC

Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 30, 2020 07:14 UTC
[Resolved] GitLab.com is operating normally and we have not seen a recurrence of the issue during our latest monitoring period. Our apologies for any inconvenience. We will be doing a root cause analysis (RCA) here: gitlab.com/gitlab-com/gl-infra/production/-/issues/1865

March 30, 2020 06:37 UTC
[Monitoring] The affected database replicas are now back to normal operation. We're monitoring the issue and investigating to ensure this doesn't recur. Details can be found at gitlab.com/gitlab-com/gl-infra/production/-/issues/1865

March 30, 2020 06:29 UTC
[Identified] Our db replicas are continuing to recover and performance is now improving on GitLab.com.

March 30, 2020 06:13 UTC
[Identified] GitLab.com is operational, but our engineers are still working on bringing all DB replicas fully online.

March 30, 2020 06:02 UTC
[Identified] The recovery is still in process while we work to bring our other database replicas online.

March 30, 2020 05:53 UTC
[Identified] A few more of our database replicas have recovered, and we are continuing to bring the others back online.

March 30, 2020 05:43 UTC
[Identified] We are now getting replicas back online and GitLab.com should be starting to recover.

March 30, 2020 05:36 UTC
[Identified] We are still working to bring our DB replicas online, so you may still experience slowdowns on GitLab.com at this point.

March 30, 2020 05:24 UTC
[Identified] We are continuing to work on bringing our db replicas back online.

March 30, 2020 05:08 UTC
[Identified] We are still facing slowdowns in some requests as we work on resolving the issue with our databases.

March 30, 2020 04:53 UTC
[Identified] We are tracking the incident on gitlab.com/gitlab-com/gl-infra/production/-/issues/1865.

March 30, 2020 04:43 UTC
[Identified] GitLab.com is experiencing a service disruption related to an automatic DB failover which did not fully succeed. We are currently working on bringing replicas back online.

High latencies for some repos on a fileserver with high load

March 28, 2020 15:31 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 28, 2020 15:31 UTC
[Resolved] We resolved the load issues on the affected fileserver. All repositories are responding with normal latencies again.

March 28, 2020 14:45 UTC
[Investigating] We are seeing high latencies for some repositories located on a fileserver with high load. We are taking measures to reduce the load on that server.

Potential password spraying activity

March 24, 2020 18:24 UTC

Incident Status

Security Issue


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 24, 2020 18:24 UTC
[Resolved] Our team noticed some potential password spraying activity that we suspect is taking advantage of vulnerable users on GitLab and likely other services. Here's a doc about 2FA on GitLab to keep you safe: docs.gitlab.com/ee/security/two_factor_authentication.html

GitLab Subscription Manager (customer portal) is down

March 12, 2020 12:31 UTC

Incident Status

Service Disruption


Components

GitLab Customers Portal


Locations

Azure




March 12, 2020 12:31 UTC
[Resolved] Customers portal operations have returned to normal.

March 12, 2020 12:00 UTC
[Monitoring] We have identified the issue and implemented a fix. We will continue to monitor.

March 12, 2020 11:46 UTC
[Investigating] We are investigating failures on customers.gitlab.com. More details in gitlab.com/gitlab-com/gl-infra/production/-/issues/1763.

February 2020

GitLab.com web UI is currently unavailable

February 22, 2020 09:18 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




February 22, 2020 09:18 UTC
[Resolved] We're fully operational and have now disabled archive downloads across GitLab.com. We've opened a security incident and are working quickly to re-enable this feature.

February 22, 2020 09:04 UTC
[Monitoring] We're fully operational and will be opening up a security issue. Continue to monitor our status page for updates.

February 22, 2020 08:54 UTC
[Monitoring] We're observing continued attacks and are continuing to monitor. Our services are currently online.

February 22, 2020 08:20 UTC
[Monitoring] No change in status. We're continuing to monitor. Our services are currently online.

February 22, 2020 08:08 UTC
[Monitoring] We're observing continued attacks. Our current mitigation strategy is continuing to be effective and all services are currently online. We'll continue to monitor and update.

February 22, 2020 08:03 UTC
[Monitoring] We're continuing to monitor for abuse. All systems are currently online and fully operational. We're leaving the incident in "Monitoring" state until we're confident the attack has ceased.

February 22, 2020 08:02 UTC
[Monitoring] Operational status

February 22, 2020 08:00 UTC
[Monitoring] We're continuing to monitor for abuse. All systems are currently online and fully operational. We're leaving the incident in "Monitoring" state until we're confident the attack has ceased.

February 22, 2020 07:48 UTC
[Monitoring] We've blocked the latest attack target and error rates are beginning to decline.

February 22, 2020 07:36 UTC
[Monitoring] We're still in a degraded state as we work to mitigate an incoming attack.

February 22, 2020 07:20 UTC
[Monitoring] Continuing to see some errors as we work to mitigate incoming attacks.

February 22, 2020 07:06 UTC
[Monitoring] We're seeing an increased error rate and are taking steps to mitigate an incoming attack.

February 22, 2020 06:46 UTC
[Monitoring] We're observing continued attacks. Our mitigation strategy is continuing to be effective and all services are currently online. We'll continue to monitor and update.

February 22, 2020 06:30 UTC
[Monitoring] We’re observing another wave of attacks, but appear to have mitigated the attempt to disrupt service. We’ll continue to monitor and update.

February 22, 2020 05:57 UTC
[Monitoring] Out of an abundance of caution we are temporarily disabling archive downloads.

February 22, 2020 05:26 UTC
[Monitoring] We're continuing to monitor for abuse. All systems are currently online and fully operational. We're leaving the incident in "Monitoring" state until we're confident the attack has ceased.

February 22, 2020 05:21 UTC
[Monitoring] We're continuing to monitor the situation.

February 22, 2020 05:02 UTC
[Monitoring] Operational status.

February 22, 2020 04:59 UTC
[Monitoring] Infrastructure is fully operational. We are continuing to monitor for abusers.

February 22, 2020 04:54 UTC
[Monitoring] The mitigation has been deployed and error rates are returning to normal. We are continuing to monitor the situation.

February 22, 2020 04:42 UTC
[Identified] We have identified and tested a mitigation to the increase of traffic and are working to deploy that change system wide.

February 22, 2020 04:26 UTC
[Identified] We're getting a higher than usual number of requests to download the gitlab-foss project. We are disabling downloads on this project temporarily.

February 22, 2020 04:12 UTC
[Investigating] We've observed a significant increase in traffic to GitLab.com and are preparing steps to mitigate suspected abuse.

February 22, 2020 03:58 UTC
[Investigating] Attempts to load web pages return a 502. We're investigating an issue with the HTTP load balancers.

Increased web latencies

February 11, 2020 09:13 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




February 11, 2020 09:13 UTC
[Resolved] We identified and fixed the root cause for the high db insert rate. All systems are back to normal. Details can be found in the incident issue: gitlab.com/gitlab-com/gl-infra/production/issues/1651

February 11, 2020 08:56 UTC
[Investigating] The high db insert rate is still affecting our site, causing latencies and increased error rates. Details can be followed in the incident issue: gitlab.com/gitlab-com/gl-infra/production/issues/1651

February 11, 2020 08:26 UTC
[Investigating] We are investigating increased web latencies caused by an increased db insert rate. Details can be followed in the incident issue: gitlab.com/gitlab-com/gl-infra/production/issues/1651

November 2019

GitLab.com is Down

November 28, 2019 13:18 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




November 28, 2019 13:18 UTC
[Resolved] GitLab.com is back to operating normally. We have taken our working notes and added them to gitlab.com/gitlab-com/gl-infra/production/issues/1421.

November 28, 2019 12:43 UTC
[Monitoring] GitLab.com is now recovering. We found the last 2 DB nodes that had not reverted the change. Apologies for the disruption.

November 28, 2019 12:28 UTC
[Identified] We rolled back the firewall change, but along the way the application encountered issues reconnecting to the database. We’re force restarting the application and hope to be back online soon.

November 28, 2019 11:49 UTC
[Identified] We have identified that a firewall misconfiguration was applied that is preventing applications from connecting to the database. We're rolling back that change and expect to be operational again shortly.

November 28, 2019 11:35 UTC
[Identified] We've identified an issue with database connectivity and are working to restore service.

November 28, 2019 11:18 UTC
[Investigating] We're investigating an outage. Currently GitLab.com is down.

Increased latencies on gitlab.com

November 18, 2019 11:51 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




November 18, 2019 11:51 UTC
[Resolved] We've alleviated the congestion around web database connections. See gitlab.com/gitlab-com/gl-infra/production/issues/1373 for more details.

November 18, 2019 10:59 UTC
[Monitoring] We've rebalanced a database configuration to favor web and API connections, which are the most latency sensitive to our users. We've seen an immediate improvement and we're monitoring closely.

November 18, 2019 09:53 UTC
[Investigating] We are seeing increased latencies on GitLab.com. Investigation of the issues is taking place in gitlab.com/gitlab-com/gl-infra/production/issues/1373.

Higher response times and error rates

November 6, 2019 16:04 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




November 6, 2019 16:04 UTC
[Resolved] We've resolved the issue and will conduct a full review in gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8368.

November 6, 2019 15:20 UTC
[Monitoring] GitLab.com is currently operational, but we're monitoring the environment closely. A project import with a large number of LFS objects in its repository created slow database queries, which led to high latencies in the import queue. We hope to resolve the issue shortly.

November 6, 2019 13:51 UTC
[Identified] We are continuing to investigate disruptions on GitLab.com. We are tracking problems on gitlab.com/gitlab-com/gl-infra/production/issues/1327. GitLab.com is currently up, but we are continuing to monitor its health.

November 6, 2019 13:33 UTC
[Investigating] We are intermittently unavailable and we are investigating the cause. We are tracking on gitlab.com/gitlab-com/gl-infra/production/issues/1327

November 6, 2019 13:29 UTC
[Investigating] We are intermittently unavailable and we are investigating the cause. We are tracking on gitlab.com/gitlab-com/gl-infra/production/issues/1327

November 6, 2019 11:37 UTC
[Monitoring] We have identified and disabled a feature flag that was possibly related to the slower requests. We are tracking on gitlab.com/gitlab-com/gl-infra/production/issues/1327.

November 6, 2019 11:03 UTC
[Investigating] We are experiencing higher response times and error rates at the moment and are investigating the root cause in gitlab.com/gitlab-com/gl-infra/production/issues/1327.

October 2019

Database Failover

October 29, 2019 11:05 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 29, 2019 11:05 UTC
[Resolved] We confirmed all services operating normally.

October 29, 2019 09:54 UTC
[Monitoring] We experienced a database failover leading to a short spike of errors on GitLab.com. The situation is back to normal and we are further investigating in gitlab.com/gitlab-com/gl-infra/production/issues/1285.

Delays in job processing

October 25, 2019 14:04 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 25, 2019 14:04 UTC
[Resolved] A patch was pushed yesterday evening to fix the root cause of the issue. See gitlab.com/gitlab-org/gitlab/commit/b4037524908171800e92d72a4f12eca5ce5e7972. CI shared runners are operational.

October 24, 2019 23:16 UTC
[Monitoring] We've cleared out another problematic build that caused a resurgence in the issue and are applying a patch to fix the underlying problem. Details in: gitlab.com/gitlab-org/gitlab/issues/34860 and gitlab.com/gitlab-org/gitlab/merge_requests/19124

October 24, 2019 20:41 UTC
[Monitoring] We're seeing vast improvements in job queue times for Shared Runners on GitLab.com. Service levels are nearing normal operation and we're now monitoring to ensure the issue does not recur.

October 24, 2019 19:18 UTC
[Identified] We are still seeing issues with job queue processing and are continuing to work towards getting the matter fully resolved. Tracking in gitlab.com/gitlab-com/gl-infra/production/issues/1275.

October 24, 2019 17:52 UTC
[Resolved] CI jobs on shared runners are fully operational again. We apologize for any delays you may have experienced.

October 24, 2019 15:34 UTC
[Monitoring] Shared runner CI jobs are starting and our queues are slowly coming down. We expect to achieve normal levels within 90 minutes. We'll continue to monitor and will update once we're fully operational again.

October 24, 2019 14:46 UTC
[Identified] We've identified an issue where malformed data from a project import began throwing errors, preventing some CI pipelines from starting. We've canceled the pipelines in question and are monitoring metrics.

October 24, 2019 12:59 UTC
[Investigating] The job durations are still higher than usual. We are continuing to investigate the situation.

October 24, 2019 12:39 UTC
[Monitoring] Job duration times are looking good again. We are still monitoring and investigating the root cause of the elevated durations in gitlab.com/gitlab-com/gl-infra/production/issues/1275.

October 24, 2019 11:31 UTC
[Investigating] We are currently seeing delays in CI job processing and are investigating.

gitlab.com outage

October 23, 2019 17:05 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 23, 2019 17:05 UTC
[Resolved] The incident is resolved. We are conducting our review in gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8247.

October 23, 2019 16:02 UTC
[Monitoring] We've alleviated the memory pressure on our Redis cluster and we'll be monitoring for the next hour before sounding the all clear. All systems are operating normally.

October 23, 2019 13:30 UTC
[Identified] We confirmed the issues were caused by failures with our Redis cluster. We observed unusual activity that contributed to OOM errors on Redis. We'll be continuing to report our findings in an incident review issue: gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8247.

October 23, 2019 12:17 UTC
[Investigating] While the site is up again, we are investigating problems with our Redis cluster as the root cause.

October 23, 2019 11:56 UTC
[Resolved] The site is flapping again. We are investigating the root cause in gitlab.com/gitlab-com/gl-infra/production/issues/1272.

October 23, 2019 11:39 UTC
[Investigating] The site is up again. We are still checking for the root cause of the short outage.

October 23, 2019 11:35 UTC
[Investigating] We are experiencing an outage of gitlab.com and are investigating the root cause.

Customers.gitlab.com is down

October 7, 2019 13:38 UTC

Incident Status

Service Disruption


Components

GitLab Customers Portal


Locations

Azure




October 7, 2019 13:38 UTC
[Resolved] All clear.

October 7, 2019 13:29 UTC
[Monitoring] Customers.gitlab.com is restored to working order and we are monitoring.

October 7, 2019 13:08 UTC
[Identified] Recent updates have caused customers.gitlab.com to stop working correctly. We are reverting the changes.

September 2019

Elevated Error Rates on GitLab.com

September 20, 2019 11:23 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com


Locations

Google Compute Engine, Azure, Zendesk, AWS




September 20, 2019 11:23 UTC
[Resolved] A fix to Gitaly was made in gitlab.com/gitlab-org/gitaly/merge_requests/1492

September 18, 2019 09:07 UTC
[Monitoring] A single Gitaly file-server on GitLab.com went down briefly, leading to a momentary spike in errors. Service has been restored, but we are investigating the cause. gitlab.com/gitlab-com/gl-infra/production/issues/1165

August 2019

Partial degradation on the performance of git repositories

August 29, 2019 13:58 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com


Locations

Google Compute Engine, Azure, Zendesk, AWS




August 29, 2019 13:58 UTC
[Resolved] The partial performance degradation has been resolved. Thank you for your patience!

August 29, 2019 13:45 UTC
[Identified] We have identified the abuse pattern, and we are executing the corrective actions. We are tracking on gitlab.com/gitlab-com/gl-infra/production/issues/1099

August 29, 2019 13:00 UTC
[Investigating] We are observing some performance degradation on one storage node, due to possible abuse activity.

Degraded Performance of Web requests on GitLab.com

August 20, 2019 16:24 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com


Locations

Google Compute Engine, Azure, Zendesk, AWS




August 20, 2019 16:24 UTC
[Resolved] GitLab.com web and API request latencies are back at normal levels.

August 20, 2019 15:16 UTC
[Monitoring] GitLab.com web and API request latencies are back at normal levels. We'll continue to monitor the health of the requests as the day continues.

August 20, 2019 14:58 UTC
[Identified] GitLab.com requests are still slightly degraded. We are continuing to validate ideas on the one affected read-only database and its PgBouncer CPU usage.

August 20, 2019 14:39 UTC
[Identified] We are continuing to test ideas to improve the performance of the one read-only DB node that is slowing web and API requests on GitLab.com.

August 20, 2019 14:19 UTC
[Identified] GitLab.com is still slightly degraded and we continue to investigate with notes on gitlab.com/gitlab-com/gl-infra/production/issues/1073

August 20, 2019 13:59 UTC
[Identified] We are investigating increased queued connections on one of our read-only databases.

August 20, 2019 13:40 UTC
[Investigating] We are continuing to investigate degraded web performance on GitLab.com. Tracking on gitlab.com/gitlab-com/gl-infra/production/issues/1073

August 20, 2019 13:25 UTC
[Investigating] We are investigating degraded performance on web requests to GitLab.com.




