
Status History



February 2025

Pipelines fail with "Unable to create pipeline" error

February 14, 2025 22:13 UTC

Incident Status

Partial Service Disruption


Components

CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners on macOS, CI/CD - Hosted runners for GitLab community contributions, CI/CD - Self-managed runners


Locations

Google Compute Engine, AWS, Self-Managed Runner Connectivity




February 14, 2025 22:13 UTC
[Resolved] The fix has been successfully deployed and we have confirmed the issue is no longer occurring. We appreciate your patience while we worked to address this, and we are now marking this incident as resolved. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19310

February 14, 2025 19:43 UTC
[Identified] We expect the revert MR to land in production within the next 3 hours. Impacted users can find a workaround within the incident issue. We will continue to provide updates here as needed. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19310

February 14, 2025 17:06 UTC
[Identified] We expect the revert MR to land in production within the next 6 hours. Additional updates will be posted here as needed. If you are in the subset of users impacted by this change, please see this issue for a workaround and other details: gitlab.com/gitlab-com/gl-infra/production/-/issues/19310

February 14, 2025 16:11 UTC
[Identified] The revert MR to address the issue has been merged. We are still waiting for this to land in production. These changes will be re-introduced to the Dependency-Scanning.latest.gitlab-ci.yml template, where breaking changes can be expected. More details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19310

February 14, 2025 15:25 UTC
[Identified] We have created a Merge Request to revert recent changes to the Dependency-Scanning.gitlab-ci.yml template. Continue following this incident issue for further updates: gitlab.com/gitlab-com/gl-infra/production/-/issues/19310

February 14, 2025 14:52 UTC
[Identified] We are currently experiencing issues with failing GitLab.com pipelines that include the Dependency-Scanning.gitlab-ci.yml template. We have identified the root cause and are preparing to revert the MR. For a workaround and more details, please see: gitlab.com/gitlab-com/gl-infra/production/-/issues/19310
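For context, the failing pipelines were those pulling the template in through an include in .gitlab-ci.yml. A minimal sketch of the two include styles, assuming the conventional Security/ template path (per the updates above, the .latest variant is where breaking changes are introduced first):

    # .gitlab-ci.yml (sketch)
    include:
      # Stable template - affected by this incident until the revert landed
      - template: Security/Dependency-Scanning.gitlab-ci.yml
      # Latest template - where the reverted changes will be re-introduced;
      # breaking changes can be expected here
      # - template: Security/Dependency-Scanning.latest.gitlab-ci.yml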

Errors managing Kubernetes agents

February 11, 2025 15:23 UTC


Incident Status

Partial Service Disruption


Components

Website, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners on macOS, CI/CD - Hosted runners for GitLab community contributions, CI/CD - Self-managed runners


Locations

Google Compute Engine, AWS, Self-Managed Runner Connectivity




February 11, 2025 15:23 UTC
[Resolved] The fix has been successfully deployed. Our monitoring confirms that the service has been restored to normal operation with no further errors observed. Thank you for your patience during this disruption. If you experience any further issues, please contact our Support team.

February 11, 2025 14:39 UTC
[Monitoring] We are currently rolling out a fix to production. Our team is actively monitoring the deployment and its effectiveness. Please continue to follow our status page for the latest updates.

February 11, 2025 10:24 UTC
[Identified] The ongoing incident investigation has been marked as confidential. While the original incident report will no longer be publicly visible, we continue to actively work on the resolution. Service impact remains unchanged. Further updates will be available on the status page - status.gitlab.com

February 11, 2025 09:35 UTC
[Identified] We identified the root cause of the incident, and we are reverting the relevant MR. The estimated time for the fix: around 5 hours from now. More details: gitlab.com/gitlab-com/gl-infra/production/-/issues/19278

February 11, 2025 09:00 UTC
[Investigating] Some users are experiencing errors managing Kubernetes agents. Details can be found in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19278

Degraded performance of the WebIDE

February 6, 2025 10:34 UTC

Incident Status

Degraded Performance


Components

Website


Locations

Google Compute Engine




February 6, 2025 10:34 UTC
[Resolved] We have fully recovered so we are marking this as resolved now. For details see gitlab.com/gitlab-com/gl-infra/production/-/issues/19244

February 6, 2025 09:29 UTC
[Monitoring] We are seeing recovery, and are going to continue monitoring. For details see gitlab.com/gitlab-com/gl-infra/production/-/issues/19244

February 6, 2025 09:14 UTC
[Investigating] We have noticed degraded performance of the WebIDE and are investigating. See gitlab.com/gitlab-com/gl-infra/production/-/issues/19244

January 2025

Customers Portal is Down (customers.gitlab.com)

January 23, 2025 20:26 UTC

Incident Status

Partial Service Disruption


Components

GitLab Customers Portal


Locations

Google Compute Engine




January 23, 2025 20:26 UTC
[Resolved] After a period of monitoring, we have observed no further issues with the Customers Portal (customers.gitlab.com). This incident is now considered resolved. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19163

January 23, 2025 19:43 UTC
[Monitoring] The login issue with the Customers Portal (customers.gitlab.com) has been resolved. We will continue to monitor the situation closely to ensure stability. More details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19163

January 23, 2025 19:20 UTC
[Investigating] We are still investigating intermittent login issues with the Customers Portal (customers.gitlab.com). While we work on resolving this, please continue using the legacy login: customers.gitlab.com/customers/sign_in?legacy=true. More details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19163

January 23, 2025 19:00 UTC
[Investigating] We are currently investigating an issue with logging into the Customers Portal (customers.gitlab.com). In the meantime, the legacy login remains functional: customers.gitlab.com/customers/sign_in?legacy=true. More details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19163

Merge request diffs for comments unavailable for parts of December / January

January 20, 2025 12:09 UTC

Incident Status

Partial Service Disruption


Components

Website


Locations

Google Compute Engine




January 20, 2025 12:09 UTC
[Resolved] After a monitoring period, we confirmed that the issue is now resolved and the diffs are showing again as expected. Please reach out to our Support team if you encounter issues.

January 20, 2025 03:02 UTC
[Monitoring] No material updates to report. Our team continues to monitor the fix. Another update will be provided in 24 hours.

January 19, 2025 01:09 UTC
[Monitoring] No material updates to report. We're continuing to assess the scope of impact and monitoring the fix. Another update will be provided in 24 hours.

January 18, 2025 02:34 UTC
[Monitoring] No material updates to report. We are still monitoring the fix and we will provide an update in 24 hours.

January 17, 2025 09:26 UTC
[Monitoring] The deploy to production has finished and image diff comments are rendered correctly again. Our engineers are monitoring the situation. The next update can be expected in 12 hours.

January 16, 2025 17:15 UTC
[Identified] The fix for Merge Request diff comments has been deployed to production. We are still working on a fix for image diff comments. MR: gitlab.com/gitlab-org/gitlab/-/merge_requests/178143. We will update once this has been deployed to production.

January 16, 2025 08:29 UTC
[Identified] A fix has been merged and is being deployed to production. We will update the status page again once the fix has been deployed successfully.

January 16, 2025 00:08 UTC
[Identified] We have been analyzing impacted merge request diff comments; we have noticed that some diff notes do not have a Reply form. We are working on a fix in this Merge Request: gitlab.com/gitlab-org/gitlab/-/merge_requests/178057

January 15, 2025 16:54 UTC
[Identified] We have verified the fix in production and it has been rolled out to all GitLab.com projects. The majority of comment display issues have been fixed, and we are working on a follow-up to resolve any remaining issues.

January 15, 2025 08:07 UTC
[Identified] A fix for impacted diff comments has been merged and is being deployed to production. We will post another update once the fix is in production.

January 15, 2025 00:27 UTC
[Identified] The issue where diff comments are sometimes not showing in Merge Requests appears to be a display issue. An initial fix was deployed on January 8 to prevent further occurrences. A fix for impacted diff comments is in progress.

January 14, 2025 20:29 UTC
[Identified] We are aware of an issue where diffs are not showing in some cases on merge requests created between December 9 and January 8. We are continuing to investigate and will post further updates as they become available.

January 14, 2025 16:24 UTC
[Identified] We're aware of an issue where merge requests created between December 9 and January 8 may contain comments that are no longer associated with a diff.

Runner 17.8.0 incorrectly tagged

January 16, 2025 20:46 UTC


Incident Status

Partial Service Disruption


Components

Website, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners on macOS, CI/CD - Hosted runners for GitLab community contributions, CI/CD - Self-managed runners


Locations

Google Compute Engine, Self-Managed Runner Connectivity




January 16, 2025 20:46 UTC
[Resolved] GitLab has published a corrected image for GitLab Runner 17.8; the image can be used normally. Further investigations will be documented in gitlab.com/gitlab-com/gl-infra/production/-/issues/19129.

January 16, 2025 20:04 UTC
[Investigating] GitLab is working on uploading a corrected version of the 17.8.0 Runner image. We will provide an update once we have confirmed the corrected image has been deployed.

January 16, 2025 19:38 UTC
[Investigating] GitLab is investigating an issue where the image for Runner 17.8.0 was not tagged correctly when uploaded to the Container Registry. We recommend continuing to use the Runner 17.7 image until the Runner 17.8 image can be tagged correctly.
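As an illustration of that recommendation, a containerized Runner deployment could stay pinned to a 17.7 release rather than a floating tag. A minimal sketch, assuming the standard gitlab/gitlab-runner image and the documented container invocation (the exact tag and host paths are placeholders):

    # Pull a pinned 17.7 release instead of a floating tag such as "latest"
    docker pull gitlab/gitlab-runner:v17.7.0

    # Run the Runner container pinned to the same known-good tag
    docker run -d --name gitlab-runner --restart always \
      -v /srv/gitlab-runner/config:/etc/gitlab-runner \
      -v /var/run/docker.sock:/var/run/docker.sock \
      gitlab/gitlab-runner:v17.7.0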

Subscription plan disruptions on GitLab.com

January 15, 2025 22:51 UTC

Incident Status

Partial Service Disruption


Components

Website


Locations

Google Compute Engine




January 15, 2025 22:51 UTC
[Resolved] Our monitoring is now complete and we haven't seen additional reports of subscription plans on GitLab.com being impacted. We'll now be resolving this incident. Please reach out to our Support team if you encounter issues.

January 15, 2025 08:18 UTC
[Monitoring] No material updates to report. We are still monitoring the fix and we will provide an update in 12 hours.

January 14, 2025 21:56 UTC
[Monitoring] A fix has been implemented and impacted subscription plans should now be restored. Our team will be monitoring this fix over the next few hours.

January 14, 2025 20:20 UTC
[Identified] There are no material updates at this time. A fix has been deployed and investigations are ongoing. We will post again when we have a substantive update.

January 14, 2025 14:38 UTC
[Identified] No material updates at this point. A fix has been deployed, but investigations are still ongoing. The next update can be expected in 2 hours.

January 14, 2025 12:31 UTC
[Identified] We have deployed a fix, but investigations are still ongoing. The next update can be expected in 2 hours.

January 14, 2025 10:24 UTC
[Identified] No updates at this time. The next status update can be expected in 2 hours.

January 14, 2025 08:25 UTC
[Identified] No material updates at this point. The next update can be expected in 2 hours.

January 14, 2025 06:11 UTC
[Identified] No material updates at this time. We are working to reinstate the subscriptions as quickly as possible.

January 14, 2025 04:57 UTC
[Identified] A small number of GitLab.com customers are experiencing subscription plan disruptions as a result of a routine audit of our subscription records. Our team is currently working on reinstating these.

Packages on GitLab.com (packages.gitlab.com) become unavailable sporadically with 502 errors

January 10, 2025 03:54 UTC

Incident Status

Degraded Performance


Components

packages.gitlab.com


Locations

AWS




January 10, 2025 03:54 UTC
[Resolved] The fix for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com) has completed successfully. We are now marking this incident as resolved. More about the incident and its progress can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 10, 2025 02:32 UTC
[Monitoring] The fix for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com) is ongoing. It appears to be progressing as expected. More about the incident and its progress can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 21:55 UTC
[Monitoring] The fix for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com) is ongoing. We expect it to take about 20 hours, and will post updates every 4 hours. More about the incident and its progress can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 19:53 UTC
[Monitoring] No updates at this time. The fix for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com) is still processing. More about the incident and its progress can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 18:13 UTC
[Monitoring] The fix for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com) has been implemented, and is processing now. More about the incident and its progress can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 16:30 UTC
[Identified] We're still working on implementing the fix for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com). More about the incident and its progress can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 15:54 UTC
[Identified] We're still working on implementing the fix for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com). More about the incident and its progress can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 15:22 UTC
[Identified] We're still working on implementing the fix for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com). More about the incident and its updates can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 14:49 UTC
[Identified] We're still working on implementing the fix for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com). More about the incident and its updates can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 14:17 UTC
[Identified] We're still working on implementing the fix for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com). More about the incident and its updates can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 13:43 UTC
[Identified] We're working on implementing the fix for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com). More about the incident and its updates can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 13:09 UTC
[Identified] We've identified the root cause of the sporadic unavailability of packages on GitLab.com (packages.gitlab.com) and are working on a fix. More about the incident and its updates can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 12:48 UTC
[Investigating] We're still investigating the root cause for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com). More about the incident and its updates can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 12:33 UTC
[Investigating] We're still investigating the root cause for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com). More about the incident and its updates can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 12:17 UTC
[Investigating] We're still investigating the root cause for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com). More about the incident and its updates can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 12:02 UTC
[Investigating] We're still investigating the root cause for the sporadic unavailability of packages on GitLab.com (packages.gitlab.com). More about the incident and its updates can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

January 9, 2025 11:45 UTC
[Investigating] We are currently investigating an incident that is causing packages on GitLab.com (packages.gitlab.com) to become unavailable sporadically with 502 errors. More about the incident can be read here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19091

Accessing gitlab.com results in 500 errors

January 6, 2025 16:42 UTC

Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners on macOS, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, docs.gitlab.com, Canary, Product Analytics - Configurator (Beta), Product Analytics - Collector (Beta), Product Analytics - Background processing (Beta), Product Analytics - Queries (Beta), GitLab Duo


Locations

Google Compute Engine, Digital Ocean, Zendesk, AWS




January 6, 2025 16:42 UTC
[Resolved] We've identified the cause of the errors, and error rates have returned to normal. The incident is now resolved. More information on the incident can be found in this issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/19079

January 6, 2025 15:57 UTC
[Monitoring] We've identified the cause of the errors, and the errors have returned to normal. The incident is now mitigated, but we're continuing to monitor for any further impact. More information on the incident can be found in this issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/19079

January 6, 2025 15:27 UTC
[Monitoring] We're continuing to monitor for errors related to the incident and are seeing further decreases in error rates.

January 6, 2025 15:17 UTC
[Monitoring] We saw elevated 500 errors on GitLab.com, which have since decreased. We're continuing to monitor for further errors.

Experiencing some slowness accessing GitLab.com

January 3, 2025 22:55 UTC

Incident Status

Operational


Components

Website


Locations

Google Compute Engine




January 3, 2025 22:55 UTC
[Resolved] Mitigation has been implemented, GitLab.com has remained stable throughout our monitoring period, and we are now marking this incident as resolved.

January 3, 2025 22:09 UTC
[Monitoring] We experienced roughly 10 minutes of slowness starting around 9:38 PM UTC. That has since been resolved, but we are still investigating the root cause.

January 3, 2025 21:50 UTC
[Investigating] We experienced roughly 10 minutes of slowness starting around 9:38 PM UTC. That has since been resolved, but we are still investigating the root cause.

December 2024

Intermittent errors across GitLab.com

December 20, 2024 19:47 UTC


Incident Status

Partial Service Disruption


Components

Website, GitLab Pages, Background Processing


Locations

Google Compute Engine




December 20, 2024 19:47 UTC
[Resolved] GitLab.com has remained stable throughout our monitoring period and we are now marking this incident as resolved. Please see the following issue for further information related to this incident: gitlab.com/gitlab-com/gl-infra/production/-/issues/19033

December 20, 2024 00:08 UTC
[Monitoring] We are continuing to monitor the issue at this point, with further recovery being observed. Follow gitlab.com/gitlab-com/gl-infra/production/-/issues/19033 for more details.

December 19, 2024 23:32 UTC
[Monitoring] Mitigation steps have been completed. You should see delayed tasks start to recover now. Follow gitlab.com/gitlab-com/gl-infra/production/-/issues/19033 for more details.

December 19, 2024 23:09 UTC
[Identified] Mitigation steps are progressing and GitLab.com responsiveness is looking good. Follow gitlab.com/gitlab-com/gl-infra/production/-/issues/19033 for more details.

December 19, 2024 21:58 UTC
[Identified] Users may still see delays when interacting with GitLab.com. We are still actively working to mitigate the issue. Details will be posted in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19033

December 19, 2024 21:11 UTC
[Identified] Our work to fully mitigate the underlying issue is still ongoing. Details will be posted in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19033

December 19, 2024 20:31 UTC
[Identified] No material updates at this time. We have seen an overall improvement but are still working to fully mitigate the issue. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19033

December 19, 2024 19:51 UTC
[Identified] Our initial changes have helped alleviate the issue. Our efforts are still ongoing as this is not fully mitigated. Details will be posted in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19033

December 19, 2024 19:14 UTC
[Identified] We have implemented mitigating changes. Services should begin to recover. We are continuing to adjust and monitor. Details will be posted in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19033

December 19, 2024 18:37 UTC
[Identified] No material updates at this time. We are still working to mitigate the issue. Details will be posted in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19033

December 19, 2024 18:09 UTC
[Identified] We are still working to mitigate the issue. Users may still encounter errors or delays on GitLab.com. Details will be posted in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19033

December 19, 2024 17:46 UTC
[Identified] We believe we have identified the cause of this incident and are working on a mitigation strategy. Users may see errors or delays when interacting with GitLab.com. Details will be posted in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19033

December 19, 2024 17:31 UTC
[Investigating] We are investigating reports of intermittent errors across GitLab.com and GitLab Pages. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19033

Intermittent timeouts for requests from some Utah IP addresses

December 20, 2024 02:51 UTC

Incident Status

Partial Service Disruption


Components

Git Operations, Container Registry


Locations

Google Compute Engine




December 20, 2024 02:51 UTC
[Resolved] We are now marking this incident as resolved. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19017

December 19, 2024 01:48 UTC
[Monitoring] Google's network engineers have identified the issue affecting the "us-west3" region and deployed a fix. We will continue monitoring. For questions and more details, please follow gitlab.com/gitlab-com/gl-infra/production/-/issues/19017.

December 18, 2024 22:15 UTC
[Investigating] No material updates at this time. Cloudflare and Google are still investigating these routing issues. We will provide additional information as it becomes available. Details will be posted to: gitlab.com/gitlab-com/gl-infra/production/-/issues/19017

December 18, 2024 03:57 UTC
[Investigating] Our Infrastructure Engineers have been working with Cloudflare and Google and have identified a routing issue between GCP "us-west3" and Cloudflare. Investigation is continuing. Please follow gitlab.com/gitlab-com/gl-infra/production/-/issues/19017 for more details.

December 17, 2024 22:37 UTC
[Investigating] No material updates at this time. Requests originating from the Salt Lake City, Utah region may still see intermittent timeouts. We are working with our third party vendor to investigate further. We will provide updates as info becomes available. Details: gitlab.com/gitlab-com/gl-infra/production/-/issues/19017

December 17, 2024 13:45 UTC
[Investigating] No material updates at the moment. We continue to work with a third-party vendor and are waiting for feedback from affected customers. Details to follow in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19017

December 16, 2024 23:49 UTC
[Investigating] We have received some reports of requests timing out for IP addresses originating in Salt Lake City, Utah. We are working with a third party vendor to identify the issue. Details to follow in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19017

Redis Cluster Saturation

December 18, 2024 16:30 UTC


Incident Status

Partial Service Disruption


Components

Website


Locations

Google Compute Engine




December 18, 2024 16:30 UTC
[Resolved] We have confirmed our mitigation efforts were successful. We are now marking this incident as resolved. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/19025

December 18, 2024 15:48 UTC
[Monitoring] We have identified the cause of the issue and have taken the necessary measures to mitigate it. We are now monitoring Redis before marking the incident as resolved. More details here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19025

December 18, 2024 15:05 UTC
[Investigating] We're experiencing saturation in our Redis cluster. The investigation is still ongoing. Users might get 500 errors intermittently. More details here: gitlab.com/gitlab-com/gl-infra/production/-/issues/19025

Importing with GitHub, Bitbucket Server, and Gitea importer offline

December 18, 2024 14:27 UTC

Incident Status

Security Issue


Components

Website, API


Locations

Google Compute Engine




December 18, 2024 14:27 UTC
[Resolved] Import with GitHub, Bitbucket Server, and Gitea has now been re-enabled and importers are back online.

November 22, 2024 11:59 UTC
[Investigating] Import with GitHub, Bitbucket Server, and Gitea remains offline and we continue efforts to restore functionality. Migration by Direct Transfer is now re-enabled with improved functionality on GitLab.com. See details at docs.gitlab.com/ee/user/project/import/index.html#user-contribution-and-membership-mapping
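For readers using direct transfer while the importers remain offline, a migration can be started from the API as well as the UI. A rough sketch, assuming the documented bulk_imports endpoint and parameter names (URLs, tokens, and paths are placeholders):

    # Sketch: start a group migration by direct transfer via the API.
    curl --request POST "https://gitlab.com/api/v4/bulk_imports" \
      --header "PRIVATE-TOKEN: <destination-token>" \
      --header "Content-Type: application/json" \
      --data '{
        "configuration": {
          "url": "https://source-gitlab.example.com",
          "access_token": "<source-token>"
        },
        "entities": [{
          "source_type": "group_entity",
          "source_full_path": "my-group",
          "destination_slug": "my-group",
          "destination_namespace": "my-namespace"
        }]
      }'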

October 9, 2024 08:22 UTC
[Investigating] Migration by direct transfer and importing with GitHub, Bitbucket Server, and Gitea importer is currently offline. We are working to restore the import functionality. We do not have an estimated resolution date at this time.

August 13, 2024 08:11 UTC
[Investigating] Migration by direct transfer and importing with GitHub, Bitbucket Server, and Gitea importer is currently offline and we are investigating the cause. We will update the status page when more information is available.

July 11, 2024 06:29 UTC
[Investigating] Migration by direct transfer and importing with GitHub, Bitbucket Server, and Gitea importer is currently offline and we are investigating the cause. We will update the status page when more information becomes available.

July 11, 2024 06:22 UTC
[Investigating] Migration by direct transfer and importing with GitHub, Bitbucket Server, and Gitea importer is currently offline and we are currently investigating the cause. We will update the status page when more information becomes available.

Duo Chat is not working for most cases in VSCode and JetBrains

December 11, 2024 13:37 UTC

Incident Status

Partial Service Disruption


Components

GitLab Duo


Locations

Google Compute Engine




December 11, 2024 13:37 UTC
[Resolved] This incident has been resolved and Duo Chat in IDEs is now fully operational. More information can be found in gitlab.com/gitlab-com/gl-infra/production/-/issues/18980

December 11, 2024 11:51 UTC
[Monitoring] The fix has been deployed to production and we are no longer seeing these errors. We will continue monitoring. More details gitlab.com/gitlab-com/gl-infra/production/-/issues/18980

December 11, 2024 11:05 UTC
[Identified] The deployment to production has started and we are expecting it to complete in approximately 1 hour. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18980

December 11, 2024 02:43 UTC
[Identified] No material updates to report. We are working toward applying the identified fix. ETA: 6-8 hours. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18980

December 10, 2024 22:26 UTC
[Identified] This update is to clarify the scope of affected users. The incident only affects workflows configured using OAuth authentication. Users who authenticate using an access token remain unaffected.

December 10, 2024 21:17 UTC
[Identified] Our team has identified the issue and is working on a fix. Duo Chat may be temporarily unavailable in editor extensions until a fix is merged. We will provide an update once the fix has been applied.

December 10, 2024 19:31 UTC
[Investigating] Our team is still investigating the issue. We'll provide additional updates as more information becomes available.

December 10, 2024 16:52 UTC
[Investigating] We have identified the current impact: Duo Chat is not working in most cases in VSCode and JetBrains. We're still investigating and will provide updates as more information becomes available.

December 10, 2024 16:47 UTC
[Investigating] No material updates at this time. Our team is continuing to investigate the issue.

December 10, 2024 16:17 UTC
[Investigating] No material updates at this time. Our team is still investigating the issue.

December 10, 2024 15:55 UTC
[Investigating] Users are experiencing issues accessing Duo Chat functionality within VSCode. Our engineering team is actively investigating the root cause in gitlab.com/gitlab-com/gl-infra/production/-/issues/18980

Runner registration returning 500 errors

December 11, 2024 13:34 UTC

Incident Status

Degraded Performance


Components

CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners on macOS, CI/CD - Hosted runners for GitLab community contributions


Locations

Google Compute Engine




December 11, 2024 13:34 UTC
[Resolved] This incident has been resolved and runner registration is now fully operational. More information can be found in gitlab.com/gitlab-com/gl-infra/production/-/issues/18984

December 11, 2024 11:47 UTC
[Monitoring] The fix has been deployed to production and we are no longer seeing these errors. We will continue monitoring. More details gitlab.com/gitlab-com/gl-infra/production/-/issues/18984

December 11, 2024 11:06 UTC
[Identified] The deployment to production has started and we are expecting it to complete in approximately 1 hour. More details gitlab.com/gitlab-com/gl-infra/production/-/issues/18984

December 11, 2024 04:56 UTC
[Identified] A fix for this problem has been merged into the codebase - it should become available on GitLab.com in the next 6-7 hours. For more information please see gitlab.com/gitlab-com/gl-infra/production/-/issues/18984

December 11, 2024 04:36 UTC
[Identified] As a workaround is available (using an authentication token to register a new runner), the status of this incident has been downgraded to 'Partial Service Disruption'.

December 11, 2024 04:10 UTC
[Identified] Work to fix the deprecated runner registration method continues. Workaround: register runner with an authentication token. More details gitlab.com/gitlab-com/gl-infra/production/-/issues/18984

December 11, 2024 03:39 UTC
[Identified] Work continues to fix the deprecated runner registration method. More details gitlab.com/gitlab-com/gl-infra/production/-/issues/18984 Workaround: register runner with an authentication token.

December 11, 2024 02:59 UTC
[Identified] A workaround has been identified: register the runner with an authentication token, as described at docs.gitlab.com/runner/register/#register-with-a-runner-authentication-token. This incident only impacts the deprecated runner registration method.
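A concrete sketch of that workaround, following the documented register command (token, executor, and image below are placeholders):

    # Register with a runner authentication token (created in the GitLab UI)
    # instead of the deprecated registration-token flow.
    gitlab-runner register \
      --non-interactive \
      --url "https://gitlab.com" \
      --token "glrt-<runner-authentication-token>" \
      --executor "docker" \
      --docker-image "alpine:latest"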

December 11, 2024 02:50 UTC
[Identified] Work continues for a fix to the identified cause. More details gitlab.com/gitlab-com/gl-infra/production/-/issues/18984

December 11, 2024 02:22 UTC
[Identified] We are seeing 500 errors with runner registrations. We have identified the cause, and are working on a fix. More details gitlab.com/gitlab-com/gl-infra/production/-/issues/18984

Some CI jobs are failing to run due to insufficient permissions

December 5, 2024 22:50 UTC

Incident Status

Partial Service Disruption


Components

CI/CD - Hosted runners on Linux


Locations

Google Compute Engine




December 5, 2024 22:50 UTC
[Resolved] No more reports of this error have been received. We conclude the monitoring period and consider this incident resolved. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18952 for details.

December 5, 2024 22:32 UTC
[Monitoring] We would like to clarify that this problem affected some jobs on GitLab.com groups that used the "Restrict group access by IP address" feature. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18952.

December 5, 2024 21:52 UTC
[Monitoring] Configuration changes have been put in place in our infrastructure to potentially mitigate this issue. We are currently monitoring our logs and user reports for confirmation. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18952

December 5, 2024 21:40 UTC
[Investigating] We believe the cause for CI clone failure is related to Group-level IP restrictions. We're reviewing internal logs to confirm. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18952 for more.

December 5, 2024 21:18 UTC
[Investigating] We see reports of some CI Jobs failing to clone repositories with an "insufficient permissions" error. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18952 for details on the investigation.

GitLab-hosted runners with the gitlab-org-docker tag are offline

December 4, 2024 22:20 UTC

Incident Status

Partial Service Disruption


Components

CI/CD - Hosted runners on Linux


Locations

Google Compute Engine




December 4, 2024 22:20 UTC
[Resolved] This incident is now resolved. Please make sure not to use the "gitlab-org-docker" tag for your workloads, as it is intended for gitlab-org projects only. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18945.

December 4, 2024 22:19 UTC
[Monitoring] Runner performance metrics are back to normal levels and jobs are being properly picked up. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18945 for the full details.

December 4, 2024 21:29 UTC
[Monitoring] We have pushed a potential fix and see signs of recovery from the affected Runners. We will continue to monitor this to ensure jobs are properly picked up. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18945.

December 4, 2024 21:10 UTC
[Investigating] We have potentially identified the commit that caused this disruption in the Runner network. Investigation continues. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18945 for details.

December 4, 2024 20:46 UTC
[Investigating] We have found traces of connectivity issues in our Runner network infrastructure. We continue our investigation. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18945 for details.

December 4, 2024 20:10 UTC
[Investigating] We continue to investigate our Runner infrastructure to determine the cause of the issue. Please review gitlab.com/gitlab-com/gl-infra/production/-/issues/18945 for full details.

December 4, 2024 19:53 UTC
[Investigating] The "gitlab-org-docker" tag is meant for gitlab-org projects only and not for customer workloads. As a preliminary potential fix, please remove the tag from your affected jobs and retry them. See: gitlab.com/gitlab-com/gl-infra/production/-/issues/18945

December 4, 2024 19:38 UTC
[Investigating] Jobs tagged with the "gitlab-org-docker" tag are stuck in a "Pending" status as the runners are currently offline. Please see gitlab.com/gitlab-com/gl-infra/production/-/issues/18945 for further details.

Fargate runner error - file name too long

December 4, 2024 16:50 UTC

Incident Status

Partial Service Disruption


Components

Website, CI/CD - Hosted runners for GitLab community contributions


Locations

Google Compute Engine




December 4, 2024 16:50 UTC
[Resolved] As no new user reports have been received during our monitoring period we consider this incident resolved. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18939 for the full incident history.

December 4, 2024 15:46 UTC
[Monitoring] We have disabled the feature flag and are now monitoring the issue for 1 hour before marking as resolved. More details can be found in gitlab.com/gitlab-com/gl-infra/production/-/issues/18939.

December 4, 2024 15:42 UTC
[Identified] We've identified the cause of the issue and are working on resolving it. More details can be found in gitlab.com/gitlab-com/gl-infra/production/-/issues/18939.

December 4, 2024 15:16 UTC
[Investigating] We are currently investigating issues with GitLab runners with Fargate driver returning "file name too long" errors. More details about this incident can be found in gitlab.com/gitlab-com/gl-infra/production/-/issues/18939

Project mirror disabled due to excessive notifications

December 3, 2024 20:26 UTC

Incident Status

Degraded Performance


Components

Website, Background Processing


Locations

Google Compute Engine




December 3, 2024 20:26 UTC
[Resolved] After seeing no further issues arise during our monitoring period, we are considering this incident resolved. Please review gitlab.com/gitlab-com/gl-infra/production/-/issues/18929 for more details.

December 3, 2024 18:49 UTC
[Monitoring] Project mirroring has been re-enabled on GitLab.com and we are monitoring to make sure no further issues arise. See gitlab.com/gitlab-com/gl-infra/production/-/issues/18929 for further details.

December 3, 2024 15:51 UTC
[Identified] We've turned off the scheduling functionality for project mirroring. Project mirroring will be re-enabled once we resolve this issue. Mirrored projects will not be updated during this time.

December 3, 2024 14:47 UTC
[Identified] We've turned off the functionality that sends out email updates temporarily for project mirrors. We are continuing to investigate this incident.

December 3, 2024 14:04 UTC
[Investigating] We are currently investigating emails being sent for older project mirrors and imports. More details about this incident can be found in gitlab.com/gitlab-com/gl-infra/production/-/issues/18929




