
Status History

Filter: GitLab Pages



June 2022

Network Performance issues in the India region

June 15, 2022 14:06 UTC

Incident Status

Operational


Components

Website, API, Git Operations, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions


Locations

Google Compute Engine




June 15, 2022 14:06 UTC
[Resolved] As we have seen traffic resume normally, this incident is now resolved. Any review will continue through the incident issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/7252

June 15, 2022 12:58 UTC
[Monitoring] Users should no longer see 5xx errors; Cloudflare has confirmed that a fix has been implemented. For more information, please follow gitlab.com/gitlab-com/gl-infra/production/-/issues/7252

June 15, 2022 12:06 UTC
[Identified] Cloudflare has confirmed that the issue has been identified and a fix is being implemented. For more information please follow gitlab.com/gitlab-com/gl-infra/production/-/issues/7252

June 15, 2022 11:28 UTC
[Investigating] GitLab.com is experiencing issues in the India region related to DNS with Cloudflare. We will continue to provide updates as we learn more. For more information, please follow gitlab.com/gitlab-com/gl-infra/production/-/issues/7252

May 2022

GitLab Pages Disruption on GitLab.com

May 2, 2022 15:22 UTC

Incident Status

Operational


Components

GitLab Pages


Locations

Google Compute Engine




May 2, 2022 15:22 UTC
[Resolved] We are declaring this incident resolved. Error rates and Apdex have stayed stable across GitLab Pages. See gitlab.com/gitlab-com/gl-infra/production/-/issues/6961 for a full timeline of the events.

May 2, 2022 14:25 UTC
[Monitoring] Error rates for GitLab Pages have dropped and stabilized, and Apdex is back to normal. We’re continuing to monitor the situation.

May 2, 2022 13:50 UTC
[Identified] Our infrastructure team is currently investigating a disruption of service with GitLab Pages on GitLab.com. Details in gitlab.com/gitlab-com/gl-infra/production/-/issues/6961. We saw elevated traffic on the GitLab Pages service for a short period of time (13:00 - 13:15 UTC).

April 2022

Elevated Error Rate on GitLab.com

April 28, 2022 07:37 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Digital Ocean, Zendesk, AWS




April 28, 2022 07:37 UTC
[Resolved] We transitioned this incident to resolved. Check the issue for more details: gitlab.com/gitlab-com/gl-infra/production/-/issues/6933

April 28, 2022 07:23 UTC
[Monitoring] We have identified the cause of this incident and do not anticipate any additional impact at this time. We are continuing to monitor services and to complete additional investigation. More details here: gitlab.com/gitlab-com/gl-infra/production/-/issues/6933

April 28, 2022 06:34 UTC
[Investigating] We are still investigating the root cause. Error rates on affected systems are recovering, and we are monitoring to ensure that the issue does not recur.

April 28, 2022 06:12 UTC
[Investigating] We're investigating increased error rates across GitLab.com services. Some users might be experiencing intermittent 500 errors. More info can be found here: gitlab.com/gitlab-com/gl-infra/production/-/issues/6933

Elevated Error Rate on GitLab

April 26, 2022 02:26 UTC

Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages


Locations

Google Compute Engine




April 26, 2022 02:26 UTC
[Resolved] After a period of monitoring, error rates have remained stable. This incident has been marked as resolved. A full timeline is available in gitlab.com/gitlab-com/gl-infra/production/-/issues/6910

April 26, 2022 01:51 UTC
[Monitoring] We experienced an elevated rate of errors for gitlab.com for a short period of time. The service has automatically recovered but we are monitoring and investigating the cause. Details in gitlab.com/gitlab-com/gl-infra/production/-/issues/6910

March 2022

GitLab.com Degraded Performance

March 14, 2022 20:43 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, Canary


Locations

Google Compute Engine




March 14, 2022 20:43 UTC
[Resolved] We can confirm that degraded services have recovered to expected levels and we have identified the steps required to mitigate the issue. To follow along or review the full timeline, see: gitlab.com/gitlab-com/gl-infra/production/-/issues/6586

March 14, 2022 18:42 UTC
[Monitoring] We have identified a root cause and mitigating steps have been performed. We are currently monitoring site performance. You can follow along with this issue here: gitlab.com/gitlab-com/gl-infra/production/-/issues/6586

March 14, 2022 18:18 UTC
[Identified] We are continuing to investigate. You can follow along with this issue here: gitlab.com/gitlab-com/gl-infra/production/-/issues/6586

March 14, 2022 17:55 UTC
[Identified] We are seeing site performance return to expected levels, but we are currently investigating some async job issues. You can follow along with this issue here: gitlab.com/gitlab-com/gl-infra/production/-/issues/6586

March 14, 2022 17:25 UTC
[Investigating] We are currently seeing degraded performance on GitLab.com and are investigating. We will provide an update on the impact shortly.

Elevated Error Rate on GitLab SaaS

March 12, 2022 18:07 UTC

Incident Status

Degraded Performance


Components

Website, Git Operations, GitLab Pages


Locations

Google Compute Engine




March 12, 2022 18:07 UTC
[Resolved] After a period of monitoring we've seen a full recovery and consider the issue resolved. Full timeline in gitlab.com/gitlab-com/gl-infra/production/-/issues/6571.

March 12, 2022 17:23 UTC
[Monitoring] We're continuing to see error rates recover but will be monitoring for another hour and will provide an update at that time. Details in gitlab.com/gitlab-com/gl-infra/production/-/issues/6571.

March 12, 2022 16:43 UTC
[Monitoring] We're seeing error rates on affected systems recovering and are monitoring to ensure that the issue does not recur before we close the issue. Details in gitlab.com/gitlab-com/gl-infra/production/-/issues/6571.

March 12, 2022 16:29 UTC
[Investigating] We're investigating an issue with a loadbalancer that may be causing certain users to receive 503 errors on GitLab SaaS. Details in gitlab.com/gitlab-com/gl-infra/production/-/issues/6571.

February 2022

Performance degradation across multiple services

February 19, 2022 22:01 UTC

Incident Status

Degraded Performance


Components

API, Git Operations, Container Registry, GitLab Pages, Background Processing, Canary


Locations

Google Compute Engine




February 19, 2022 22:01 UTC
[Resolved] Mitigating updates have been applied and performance has returned to normal. GitLab.com is fully operational.

February 19, 2022 21:46 UTC
[Monitoring] We have taken some mitigation steps and are seeing improvement in performance. We are continuing to monitor and investigate.

February 19, 2022 21:29 UTC
[Investigating] We’re currently investigating a performance degradation across multiple services. More information as we investigate in gitlab.com/gitlab-com/gl-infra/production/-/issues/6388

Performance degradation across multiple services

February 19, 2022 06:45 UTC

Incident Status

Degraded Performance


Components

API, Git Operations, Container Registry, GitLab Pages, Background Processing, Canary


Locations

Google Compute Engine




February 19, 2022 06:45 UTC
[Resolved] GitLab.com is currently fully operational.

February 19, 2022 04:37 UTC
[Resolved] Mitigation steps have restored services. GitLab.com is currently fully operational.

February 19, 2022 04:14 UTC
[Monitoring] Mitigating updates have been applied and performance has been restored. We are continuing to monitor the situation.

February 19, 2022 04:09 UTC
[Monitoring] We have taken some mitigation steps and are seeing improvement in performance. We are continuing to monitor and investigate.

February 19, 2022 03:59 UTC
[Investigating] We’re currently investigating a performance degradation across multiple services. More information as we investigate in gitlab.com/gitlab-com/gl-infra/production/-/issues/6386

Increased error rates for the GitLab Pages service

February 14, 2022 09:51 UTC

Incident Status

Degraded Performance


Components

Website, GitLab Pages


Locations

Google Compute Engine




February 14, 2022 09:51 UTC
[Resolved] The incident has been mitigated.

February 14, 2022 09:25 UTC
[Investigating] We are investigating increased error rates for the GitLab Pages service, see gitlab.com/gitlab-com/gl-infra/production/-/issues/6347 for details

System Wide Outage

February 1, 2022 06:37 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing


Locations

Google Compute Engine




February 1, 2022 06:37 UTC
[Resolved] We have transitioned this incident to Resolved. All services continue to be fully operational. We have some additional investigation and follow up pending, but no further impact or user action is expected at this time.

January 31, 2022 20:04 UTC
[Monitoring] Our team is currently investigating data integrity with our cloud provider. Further updates will be minimal but more information can be found in the production issue gitlab.com/gitlab-com/gl-infra/production/-/issues/6253

January 31, 2022 18:22 UTC
[Monitoring] We've confirmed the root cause with our cloud provider and expect to call the issue resolved shortly. Details in gitlab.com/gitlab-com/gl-infra/production/-/issues/6253

January 31, 2022 16:43 UTC
[Monitoring] Our team is continuing to investigate the root cause in gitlab.com/gitlab-com/gl-infra/production/-/issues/6253

January 31, 2022 16:12 UTC
[Monitoring] GitLab.com remains available and our team is investigating the root cause. For details see gitlab.com/gitlab-com/gl-infra/production/-/issues/6253

January 31, 2022 15:52 UTC
[Monitoring] GitLab.com is accessible again. The mitigation is in place, and we are continuing to work on restoring full capacity.

January 31, 2022 15:35 UTC
[Investigating] Our team is seeing some recovery on GitLab.com. However, we are still investigating as the issue is ongoing.

January 31, 2022 15:22 UTC
[Investigating] GitLab.com is unavailable and users are getting 500 errors.

Errors detected on gitlab.io pages; automatic recovery

February 1, 2022 01:25 UTC

Incident Status

Operational


Components

GitLab Pages


Locations

Google Compute Engine




February 1, 2022 01:25 UTC
[Resolved] Additional capacity was automatically deployed to cope with a sudden increase in traffic to gitlab.io. This incident is resolved. See the incident issue for details: gitlab.com/gitlab-com/gl-infra/production/-/issues/6255

February 1, 2022 01:07 UTC
[Monitoring] We detected increased errors on gitlab.io pages between 00:30 and 00:45 UTC. The service has recovered automatically. We're monitoring and investigating the cause. Details in gitlab.com/gitlab-com/gl-infra/production/-/issues/6255

December 2021

GitLab.com production database reliability update

December 18, 2021 16:46 UTC

Description

During this maintenance we will be performing a minor-version upgrade of our GitLab.com production database to PostgreSQL 12.9 to improve overall reliability. This maintenance will not require downtime and is not expected to otherwise impact the performance or availability of GitLab.com functionality. We are providing notice of this work due to the critical nature of the database to our operations, even though we anticipate no impact.
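
For operators performing a similar minor-version upgrade on a self-managed PostgreSQL database, a minimal sketch of how one might confirm the running server version before and after the change (the host, database name, and credentials below are placeholders for illustration, not GitLab.com values):

    # Print the running PostgreSQL server version; a minor-version upgrade
    # (for example 12.x to 12.9) changes only the last component.
    import psycopg2  # assumes the psycopg2 driver is installed

    conn = psycopg2.connect(host="db.example.internal",    # placeholder connection
                            dbname="gitlabhq_production",  # details for illustration
                            user="readonly", password="secret")
    with conn.cursor() as cur:
        cur.execute("SHOW server_version;")
        print("PostgreSQL server version:", cur.fetchone()[0])
    conn.close()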


Components

Website, API, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, Canary


Locations

Google Compute Engine


Schedule

December 18, 2021 15:00 - December 18, 2021 17:00 UTC



December 18, 2021 16:46 UTC
[Update] Maintenance work is now complete. Not all of the anticipated work was completed successfully, but no further changes will be executed today.

December 18, 2021 16:02 UTC
[Update] Maintenance started at 15:00 UTC.

September 2021

Increased error rates across all GitLab services

September 12, 2021 21:48 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




September 12, 2021 21:48 UTC
[Resolved] After investigation we determined that the elevated error rates were caused by database connection saturation; further details will be in this issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/5514
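
As background on this failure mode: connection saturation means active client connections approach PostgreSQL's configured max_connections limit, so new connection attempts begin to fail or queue. A minimal sketch of how one might check that headroom on a Postgres instance (hypothetical connection details, not GitLab.com's infrastructure):

    # Compare current connections against max_connections to gauge how close
    # the database is to connection saturation.
    import psycopg2  # assumes the psycopg2 driver is installed

    conn = psycopg2.connect(host="db.example.internal", dbname="postgres",
                            user="readonly", password="secret")  # placeholders
    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM pg_stat_activity;")
        in_use = cur.fetchone()[0]
        cur.execute("SHOW max_connections;")
        limit = int(cur.fetchone()[0])
    conn.close()
    print(f"{in_use}/{limit} backend connections in use ({in_use / limit:.0%})")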

September 12, 2021 21:23 UTC
[Monitoring] GitLab performance and error rates have returned to normal; we are investigating the root cause to understand the likelihood that it will recur.

September 12, 2021 21:04 UTC
[Investigating] We are investigating slow response times and increased error rates across all GitLab services. Incident: gitlab.com/gitlab-com/gl-infra/production/-/issues/5514

August 2021

Elevated error rates on GitLab.com

August 11, 2021 17:51 UTC

Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 11, 2021 17:51 UTC
[Resolved] As GitLab.com is operating normally, we'll now be marking this incident as resolved. Further investigation efforts and details surrounding the issue can be found in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

August 11, 2021 17:44 UTC
[Investigating] No material updates to report - our investigation is still ongoing. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

August 11, 2021 17:27 UTC
[Investigating] All GitLab.com services are operating normally, but our investigation into the cause of the errors continues. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

August 11, 2021 17:18 UTC
[Investigating] At 17:04 UTC, there was a 5 minute elevated error ratio event for all GitLab.com services. It has recovered, but we are investigating. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

Increased error rates across all services

August 10, 2021 10:39 UTC

Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 10, 2021 10:39 UTC
[Resolved] The issue has been resolved, and we can confirm that all services on GitLab.com are fully operational. A full timeline of this incident and more details are available in gitlab.com/gitlab-com/gl-infra/production/-/issues/5324

August 10, 2021 10:05 UTC
[Monitoring] We are back to normal error rates across all services. We are continuing to monitor the situation. Refer to gitlab.com/gitlab-com/gl-infra/production/-/issues/5324 for more updates.

August 10, 2021 09:52 UTC
[Investigating] We are investigating increased error rates across all services, see gitlab.com/gitlab-com/gl-infra/production/-/issues/5324 for more details

GitLab performance degraded

August 6, 2021 12:48 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 6, 2021 12:48 UTC
[Resolved] From 12:05 - 12:15 UTC, we saw a system degradation across all of our components, which appears to have recovered. Our engineers are investigating. Further details are available in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5306

July 2021

GitLab Pages performance degraded

July 24, 2021 22:14 UTC

Incident Status

Partial Service Disruption


Components

GitLab Pages


Locations

Google Compute Engine




July 24, 2021 22:14 UTC
[Resolved] GitLab Pages is back to a fully operational state. For more information, including the post-mortem, please review gitlab.com/gitlab-com/gl-infra/production/-/issues/5221

July 24, 2021 22:07 UTC
[Monitoring] The traffic spike has ceased and the Pages service should be returning to normal. We will continue to monitor the situation to confirm full service availability. More details on gitlab.com/gitlab-com/gl-infra/production/-/issues/5221.

July 24, 2021 21:49 UTC
[Investigating] We are seeing elevated request rates for sites hosted on GitLab Pages. Pages sites may be slow to respond or return errors. Please follow gitlab.com/gitlab-com/gl-infra/production/-/issues/5221 for more information.

GitLab Pages is down

July 21, 2021 09:32 UTC

Incident Status

Operational


Components

Container Registry, GitLab Pages


Locations

Google Compute Engine




July 21, 2021 09:32 UTC
[Resolved] GitLab Pages (as well as docs.gitlab.com) and GitLab Registry are back to an operational state. We’ll continue the root cause analysis over at gitlab.com/gitlab-com/gl-infra/production/-/issues/5196

July 21, 2021 09:01 UTC
[Identified] GitLab Pages and Registry are operational. We’ll continue to investigate the root cause of this incident at gitlab.com/gitlab-com/gl-infra/production/-/issues/5196

July 21, 2021 08:49 UTC
[Identified] We’ve identified a mitigation path and are actively applying this to affected services. GitLab Pages and Registry should now be at least partially available. During mitigation, the affected services (GitLab Pages, docs.gitlab.com and GitLab Registry) will be unstable. Investigation is ongoing at gitlab.com/gitlab-com/gl-infra/production/-/issues/5196

July 21, 2021 08:43 UTC
[Identified] We’ve identified a mitigation path and are actively applying this to affected services. GitLab Registry should now be at least partially available. During mitigation, the affected services (GitLab Pages, docs.gitlab.com and GitLab Registry) will be unstable. Investigation is ongoing at gitlab.com/gitlab-com/gl-infra/production/-/issues/5196

July 21, 2021 08:29 UTC
[Investigating] The investigation of the load balancer issue is still ongoing, and we’ve engaged support from a third-party vendor in the investigation. The affected services (GitLab Pages, docs.gitlab.com and GitLab Registry) remain unavailable for the time being. Investigation is ongoing at gitlab.com/gitlab-com/gl-infra/production/-/issues/5196

July 21, 2021 08:01 UTC
[Investigating] We’re still investigating the load balancer issue. The affected services (GitLab Pages, docs.gitlab.com and GitLab Registry) remain unavailable for the time being. Investigation is ongoing at gitlab.com/gitlab-com/gl-infra/production/-/issues/5196

July 21, 2021 07:34 UTC
[Investigating] Specific load-balanced services (GitLab Pages, GitLab Registry and docs.gitlab.com) are currently unavailable. We are looking into the cause and determining steps to mitigate. Investigation is ongoing at gitlab.com/gitlab-com/gl-infra/production/-/issues/5196

July 21, 2021 07:14 UTC
[Investigating] GitLab Pages is currently unavailable and the cause is currently under investigation at gitlab.com/gitlab-com/gl-infra/production/-/issues/5196 and we will post further updates.

Issue Uploading Artifacts from CI

July 21, 2021 01:39 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




July 21, 2021 01:39 UTC
[Resolved] This incident is now resolved and we are no longer seeing any errors. We will continue to investigate the cause internally. Full timeline of this incident and more details available in gitlab.com/gitlab-com/gl-infra/production/-/issues/5194

July 21, 2021 01:12 UTC
[Monitoring] The rollout has completed on all of our servers, and we are no longer seeing errors with artifact uploads. We are continuing to monitor for any further issues.

July 21, 2021 01:04 UTC
[Identified] The rollout has completed on two of our clusters. This will continue on our last few clusters and we will post further updates once completed. More details at gitlab.com/gitlab-com/gl-infra/production/-/issues/5194

July 21, 2021 00:44 UTC
[Identified] We have identified a recent configuration rollback that affected uploading of CI artifacts and are working to address it. More details at gitlab.com/gitlab-com/gl-infra/production/-/issues/5194

July 21, 2021 00:28 UTC
[Investigating] We have identified a failure in uploading artifacts from CI which is currently showing a 400 error. We are investigating the issue at gitlab.com/gitlab-com/gl-infra/production/-/issues/5194 and will post more updates.

June 2021

Short service disruption on GitLab.com

June 21, 2021 21:34 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




June 21, 2021 21:34 UTC
[Resolved] Our team has confirmed that the disruption has ended and that services are back to normal. Investigation will continue in gitlab.com/gitlab-com/gl-infra/production/-/issues/4945 for additional details and root cause analysis.

June 21, 2021 21:14 UTC
[Investigating] GitLab.com is up and operational, but from 20:53 to 20:58 UTC we saw a brief increase in errors on GitLab.com web, API, and Git services; the disruption has since passed. Investigation is being done in gitlab.com/gitlab-com/gl-infra/production/-/issues/4945




