
Status History




February 2022

500s on gitlab.com

February 17, 2022 14:30 UTC


Incident Status

Partial Service Disruption


Components

Website, Canary


Locations

Google Compute Engine




February 17, 2022 14:30 UTC
[Resolved] GitLab.com is operating normally and we are closing the issue. Please see gitlab.com/gitlab-com/gl-infra/production/-/issues/6372 for details.

February 17, 2022 12:31 UTC
[Monitoring] All services are operational since the last update and we continue to monitor the revert of the merge request identified as the potential cause of the issue. More information can be found in gitlab.com/gitlab-com/gl-infra/production/-/issues/6372.

February 17, 2022 11:02 UTC
[Monitoring] We are working on reverting the merge request identified as the potential cause of this issue. More details can be found in gitlab.com/gitlab-com/gl-infra/production/-/issues/6372

February 17, 2022 10:43 UTC
[Identified] We've identified the root cause and taken measures to mitigate it. The incident appears to have affected up to 5% of traffic to GitLab.com. We're continuing to work on fixing the root cause; more details can be found in gitlab.com/gitlab-com/gl-infra/production/-/issues/6372

February 17, 2022 10:30 UTC
[Investigating] We're investigating increased error rates across GitLab.com services. Some users might be experiencing intermittent 500 errors; more info can be found in gitlab.com/gitlab-com/gl-infra/production/-/issues/6372
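
For illustration only, and not necessarily how this particular revert was performed: a merge commit can be reverted programmatically through the GitLab Commits API (the same action is available from the merge request page in the UI). The project ID, commit SHA, target branch, and token below are placeholders.

    import requests

    GITLAB_API = "https://gitlab.com/api/v4"
    PROJECT_ID = "12345"          # placeholder project ID
    MERGE_COMMIT_SHA = "abc123"   # placeholder SHA of the merge commit to revert
    TOKEN = "glpat-..."           # personal access token with api scope (placeholder)

    # Create a revert commit on the target branch via the Commits API.
    resp = requests.post(
        f"{GITLAB_API}/projects/{PROJECT_ID}/repository/commits/{MERGE_COMMIT_SHA}/revert",
        headers={"PRIVATE-TOKEN": TOKEN},
        data={"branch": "master"},  # branch the revert commit is applied to
        timeout=30,
    )
    resp.raise_for_status()
    print("revert commit:", resp.json()["id"])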

December 2021

GitLab.com production database reliability update

December 18, 2021 16:46 UTC

Description

During this maintenance we will be upgrading our GitLab.com production database to PostgreSQL minor version 12.9 to improve overall reliability. This maintenance will not require downtime and is not expected to otherwise impact performance or availability of GitLab.com functionality. We are providing notice of this work due to the critical nature of the database to our operations, even though we anticipate no impact.


Components

Website, API, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, Canary


Locations

Google Compute Engine


Schedule

December 18, 2021 15:00 - December 18, 2021 17:00 UTC



December 18, 2021 16:46 UTC
[Update] Maintenance work is now complete. Not all of the anticipated work was completed successfully, but no further changes will be executed today.

December 18, 2021 16:02 UTC
[Update] Maintenance started at 15:00 UTC.
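
For illustration only: one quick way to confirm the running PostgreSQL version after a minor upgrade like this one is to query it directly, for example with psycopg2. The connection string below is a placeholder, not GitLab's internal tooling.

    import psycopg2

    # Placeholder connection string; a real check would point at the upgraded primary.
    conn = psycopg2.connect("host=localhost dbname=gitlabhq_production user=postgres")
    with conn, conn.cursor() as cur:
        cur.execute("SHOW server_version;")        # e.g. '12.9'
        print("server_version:", cur.fetchone()[0])
        cur.execute("SHOW server_version_num;")    # numeric form, e.g. '120009'
        print("server_version_num:", cur.fetchone()[0])
    conn.close()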

September 2021

Users with private commit emails cannot create issues or MRs assigned to themselves in Canary

September 16, 2021 05:10 UTC

Incident Status

Operational


Components

Canary


Locations

Google Compute Engine




September 16, 2021 05:10 UTC
[Resolved] We experienced an incident 45 minutes ago where users with private commit emails could not create issues or MRs assigned to themselves in Canary. The root cause has been identified, and the incident has now been mitigated. GitLab issue link: gitlab.com/gitlab-com/gl-infra/production/-/issues/5549

Increased error rates across all GitLab services

September 12, 2021 21:48 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




September 12, 2021 21:48 UTC
[Resolved] After investigation we determined that the elevated error rates were caused by database connection saturation; further details will be in this issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/5514

September 12, 2021 21:23 UTC
[Monitoring] GitLab performance and error rates have returned to normal; we are investigating the root cause to understand the likelihood that it will recur.

September 12, 2021 21:04 UTC
[Investigating] We are investigating slow response times and increased error rates across all GitLab services. Incident: gitlab.com/gitlab-com/gl-infra/production/-/issues/5514
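
For illustration only: connection saturation of the kind described above can be observed on any PostgreSQL instance by comparing pg_stat_activity against max_connections. The connection string below is a placeholder; this is not GitLab's monitoring.

    import psycopg2

    conn = psycopg2.connect("host=localhost dbname=gitlabhq_production user=postgres")
    with conn, conn.cursor() as cur:
        # Current connections grouped by state (active, idle, idle in transaction, ...).
        cur.execute("""
            SELECT state, count(*)
            FROM pg_stat_activity
            GROUP BY state
            ORDER BY count(*) DESC;
        """)
        for state, count in cur.fetchall():
            print(f"{state or 'unknown'}: {count}")

        # Compare total connections against the configured ceiling.
        cur.execute("SELECT count(*) FROM pg_stat_activity;")
        total = cur.fetchone()[0]
        cur.execute("SHOW max_connections;")
        limit = int(cur.fetchone()[0])
        print(f"{total}/{limit} connections in use ({total / limit:.0%})")
    conn.close()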

August 2021

Elevated error rates on GitLab.com

August 11, 2021 17:51 UTC


Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 11, 2021 17:51 UTC
[Resolved] As GitLab.com is operating normally, we'll now be marking this incident as resolved. Further investigation efforts and details surrounding the issue can be found in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

August 11, 2021 17:44 UTC
[Investigating] No material updates to report - our investigation is still ongoing. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

August 11, 2021 17:27 UTC
[Investigating] All GitLab.com services are operating normally, but our investigation into the cause of the errors continues. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

August 11, 2021 17:18 UTC
[Investigating] At 17:04 UTC, there was a 5 minute elevated error ratio event for all GitLab.com services. It has recovered, but we are investigating. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5333

Increased error rates across all services

August 10, 2021 10:39 UTC

Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, SAML SSO - GitLab SaaS, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 10, 2021 10:39 UTC
[Resolved] The issue has been resolved, and we can confirm that all services on GitLab.com are fully operational. A full timeline of this incident and more details are available in gitlab.com/gitlab-com/gl-infra/production/-/issues/5324

August 10, 2021 10:05 UTC
[Monitoring] We are back to normal error rates across all services. We continue to monitor the situation. Refer to gitlab.com/gitlab-com/gl-infra/production/-/issues/5324 for more updates.

August 10, 2021 09:52 UTC
[Investigating] We are investigating increased error rates across all services, see gitlab.com/gitlab-com/gl-infra/production/-/issues/5324 for more details

GitLab performance degraded

August 6, 2021 12:48 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 6, 2021 12:48 UTC
[Resolved] From 12:05 - 12:15 UTC, we saw a system degradation across all of our components, which appears to have recovered. Our engineers are investigating. Further details are available in: gitlab.com/gitlab-com/gl-infra/production/-/issues/5306

July 2021

Issue Uploading Artifacts from CI

July 21, 2021 01:39 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




July 21, 2021 01:39 UTC
[Resolved] This incident is now resolved and we are no longer seeing any errors. We will continue to investigate the cause internally. Full timeline of this incident and more details available in gitlab.com/gitlab-com/gl-infra/production/-/issues/5194

July 21, 2021 01:12 UTC
[Monitoring] The rollout has completed and we are no longer seeing errors with the upload. This has been applied to all our servers. We are continuing to monitor for any further issues.

July 21, 2021 01:04 UTC
[Identified] The rollout has completed on two of our clusters. This will continue on our last few clusters and we will post further updates once completed. More details at gitlab.com/gitlab-com/gl-infra/production/-/issues/5194

July 21, 2021 00:44 UTC
[Identified] We are currently rolling back a recent configuration change that affected uploading CI artifacts. More details at gitlab.com/gitlab-com/gl-infra/production/-/issues/5194

July 21, 2021 00:28 UTC
[Investigating] We have identified a failure in uploading artifacts from CI, which is currently returning a 400 error. We are investigating the issue at gitlab.com/gitlab-com/gl-infra/production/-/issues/5194 and will post more updates.

June 2021

Short service disruption on GitLab.com

June 21, 2021 21:34 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, CI/CD - Hosted runners for GitLab community contributions, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




June 21, 2021 21:34 UTC
[Resolved] Our team has confirmed that the disruption is now gone and that services are back to normal. Investigation will continue in gitlab.com/gitlab-com/gl-infra/production/-/issues/4945 for additional details and root cause analysis.

June 21, 2021 21:14 UTC
[Investigating] GitLab.com is up and operational. From 20:53 to 20:58 UTC we saw a brief increase in errors on GitLab.com web, API, and Git services; the disruption has since passed. Investigation is being done in gitlab.com/gitlab-com/gl-infra/production/-/issues/4945

May 2021

Unscheduled maintenance - Switchover of our primary database

May 8, 2021 14:59 UTC

Description

We will be conducting a switchover of our primary database at 14:45 UTC to mitigate a potential instability. Users will be unable to access GitLab.com for up to 2 minutes during this switchover. Please check gitlab.com/gitlab-com/gl-infra/production/-/issues/4528


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS


Schedule

May 7, 2021 14:45 - May 7, 2021 14:50 UTC



May 8, 2021 14:59 UTC
[Update] The primary database switchover is done and only affected GitLab.com for 4 seconds. All systems are up and operational. For more details, see gitlab.com/gitlab-com/gl-infra/production/-/issues/4528

May 8, 2021 14:53 UTC
[Update] The primary database switchover is complete. For more details, please see gitlab.com/gitlab-com/gl-infra/production/-/issues/4528

GitLab.com PostgreSQL Database Upgrade

May 8, 2021 11:01 UTC

Description

We'll be upgrading the GitLab.com PostgreSQL database to version 12. This maintenance will include a primary transition of the cluster within a 1-hour window, with downtime during part or all of the window. We intend to minimize the length of downtime, but you should plan for all services to be unavailable during the hour of maintenance.

Why are we doing this upgrade?
1. Keeping up with new GitLab.com release support
2. General improvements available in newer versions of PostgreSQL
3. Specific improvements in the PostgreSQL query planner, which have recently led to performance impacts and service disruptions for GitLab.com

Why does it require downtime? The upgrade approach requires a short downtime while we transition all services from the existing PostgreSQL cluster to the new version 12 PostgreSQL cluster and ensure that all runtime and asynchronous functions are operating as expected.

Additional information is available in this GitLab issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/4037


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, Canary


Locations

Google Compute Engine


Schedule

May 8, 2021 09:00 - May 8, 2021 10:00 UTC



May 8, 2021 11:01 UTC
[Update] Our maintenance finished successfully. Thank you for your patience!

May 8, 2021 10:55 UTC
[Update] The PostgreSQL database upgrade is complete and GitLab.com is available again. We are back to regular operations and will continue monitoring all systems.

May 8, 2021 10:39 UTC
[Update] Maintenance update - The PostgreSQL database upgrade is now complete, and we are running some final verification steps. We expect GitLab.com to be available in about 15 minutes. Thank you for your patience.

May 8, 2021 10:08 UTC
[Update] Maintenance update - The PostgreSQL database upgrade is continuing, and it is at around 60% done. We are extending the maintenance window until 11:00 UTC.

May 8, 2021 08:59 UTC
[Update] GitLab.com planned maintenance for PostgreSQL upgrade is starting. See you on the other side!

May 8, 2021 08:48 UTC
[Update] GitLab.com will soon shut down for the planned maintenance to upgrade our PostgreSQL database services. See you on the other side!

May 8, 2021 08:26 UTC
[Update] GitLab.com will begin maintenance at 9:00 UTC. Please note that any CI jobs that start before the maintenance window but complete during the maintenance window will fail and may need to be restarted. Maintenance is scheduled to end at 10:00 UTC.

May 8, 2021 08:10 UTC
[Update] GitLab.com will undergo maintenance in 1 hour at 09:00 UTC. Please note that any CI jobs that start before the maintenance window but complete during the window period will fail and may need to be restarted. Maintenance is scheduled to end at 10:00 UTC.

May 7, 2021 11:11 UTC
[Update] Tomorrow at 09:00 UTC, we will be undergoing some scheduled maintenance to upgrade our GitLab.com PostgreSQL Database services. We expect the maintenance window to be less than 1 hour.

May 5, 2021 16:04 UTC
[Update] The GitLab.com PostgreSQL Database Upgrade maintenance will take place on Saturday, May 8, 2021, 09:00 - 10:00 UTC as previously announced. status.gitlab.com

April 21, 2021 18:14 UTC
[Update] The PostgreSQL 12 update has been confirmed for May 8, 2021 09:00 - 10:00 UTC. Status page subscribers will receive reminders 72/24/1h in advance.

April 16, 2021 17:47 UTC
[Update] In a final review of our plans for the PostgreSQL upgrade we have decided to postpone this maintenance until May 8th. This date is tentative and will be confirmed prior to April 23rd.

April 14, 2021 18:42 UTC
[Update] The GitLab.com PostgreSQL Database Upgrade maintenance will take place on Saturday April 17, 2021 09:00 - 10:00 UTC as previously announced.
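
For anyone whose jobs failed during the maintenance window, one way to restart them afterwards is the pipeline retry endpoint of the GitLab API; the same action is available from the pipeline page in the UI. The project ID, pipeline ID, and token below are placeholders.

    import requests

    GITLAB_API = "https://gitlab.com/api/v4"
    PROJECT_ID = "12345"    # placeholder project ID
    PIPELINE_ID = "67890"   # placeholder ID of the pipeline that failed during the window
    TOKEN = "glpat-..."     # personal access token with api scope (placeholder)

    # Retry the failed or canceled jobs in the given pipeline.
    resp = requests.post(
        f"{GITLAB_API}/projects/{PROJECT_ID}/pipelines/{PIPELINE_ID}/retry",
        headers={"PRIVATE-TOKEN": TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    print("pipeline status after retry:", resp.json()["status"])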

March 2021

Possible Database & Redis degradation

March 18, 2021 15:06 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 18, 2021 15:06 UTC
[Resolved] Our services have fully recovered and the incident is now mitigated. For the full root cause analysis follow our investigation at gitlab.com/gitlab-com/gl-infra/production/-/issues/4011

March 18, 2021 14:57 UTC
[Investigating] No material updates; we continue to see recovery across our services. Our engineers are still investigating the root cause. More details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/4011

March 18, 2021 14:40 UTC
[Investigating] We're seeing a recovery across our different services and continuing to investigate to confirm the root cause of the degradation. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/4011

March 18, 2021 14:23 UTC
[Investigating] We're investigating a possible service degradation across our Redis and DB components. Users might be experiencing delays in CI/CD pipelines. Details in: gitlab.com/gitlab-com/gl-infra/production/-/issues/4011
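
For illustration only: a basic Redis health check of the kind that can surface degradation like this uses PING latency and the INFO command, for example via redis-py. The connection details below are placeholders; this is not GitLab's monitoring.

    import time
    import redis

    # Placeholder connection details.
    r = redis.Redis(host="localhost", port=6379, socket_timeout=2)

    # Round-trip latency of a PING as a rough responsiveness signal.
    start = time.monotonic()
    r.ping()
    print(f"PING round trip: {(time.monotonic() - start) * 1000:.1f} ms")

    # Client and memory sections from INFO give a view of load.
    clients = r.info("clients")
    memory = r.info("memory")
    print("connected_clients:", clients["connected_clients"])
    print("blocked_clients:", clients["blocked_clients"])
    print("used_memory_human:", memory["used_memory_human"])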

Database latency resulting in 500 errors

March 15, 2021 14:19 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 15, 2021 14:19 UTC
[Resolved] While the root cause is still being investigated, we have seen no further alerts and GitLab.com is fully operational. We are therefore marking this incident as resolved. For further details please check: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 12:51 UTC
[Monitoring] GitLab.com appears to be stable, with no additional updates to share. Our engineers are still monitoring our application and investigating the root cause of this problem. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 12:35 UTC
[Investigating] No material updates, our engineers are investigating the root cause. We're currently waiting for more database replicas to come online to help with the load. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 12:15 UTC
[Investigating] We're still experiencing 500 errors on GitLab.com, investigation is underway. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 11:51 UTC
[Investigating] We're still investigating the overall cause of the issue and are currently checking if this could be related to database re-indexing of a particular table. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

March 15, 2021 11:29 UTC
[Investigating] We are experiencing database latency resulting in customers seeing 500 errors when trying to reach GitLab.com. We are currently investigating. Details in issue: gitlab.com/gitlab-com/gl-infra/production/-/issues/3962

Intermittent issue on GitLab.com

March 1, 2021 05:19 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




March 1, 2021 05:19 UTC
[Resolved] It seems the error rate has not gone back up now that the monthly CI minutes reset job has completed. All the affected components are back to operating normally. We will post further updates and the fix in the incident issue gitlab.com/gitlab-com/gl-infra/production/-/issues/3823

March 1, 2021 03:13 UTC
[Monitoring] The CI minutes reset jobs appear to have finished. We're continuing to monitor performance.

March 1, 2021 02:34 UTC
[Identified] We expect the error rate to gradually recover on its own over the next 30 minutes. We're exploring a permanent solution so that this doesn't recur during next month's CI minutes reset jobs.

March 1, 2021 02:01 UTC
[Identified] We have noticed degraded performance on GitLab.com that appears to be related to our monthly CI minutes reset jobs. While performance is recovering, we're exploring possible solutions. You can read more about our investigation at gitlab.com/gitlab-com/gl-infra/production/-/issues/3823

October 2020

SSO warnings

October 28, 2020 01:45 UTC


Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 28, 2020 01:45 UTC
[Resolved] Closing incident per updates on gitlab.com/gitlab-com/gl-infra/production/-/issues/2916

October 28, 2020 00:35 UTC
[Investigating] We've received reports that users were getting an unintended message on SSH pushes. This has been remediated.

Degraded performance related to GCS issues

October 24, 2020 21:56 UTC

Incident Status

Degraded Performance


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 24, 2020 21:56 UTC
[Resolved] GitLab.com Registry is back to normal.

October 24, 2020 18:19 UTC
[Monitoring] GitLab.com Registry appears to be recovering. We are continuing to monitor its status along with GCS.

October 24, 2020 17:15 UTC
[Identified] We are continuing to monitor issues with the Registry on GitLab.com and looking for updates from our provider.

October 24, 2020 16:28 UTC
[Identified] We are continuing to monitor the issues with Registry on GitLab.com and are working with our provider on those issues.

October 24, 2020 16:04 UTC
[Identified] We are continuing to investigate issues with GitLab Pages and Registry on GitLab.com which appear to be related to underlying problems with GCS.

October 24, 2020 15:43 UTC
[Investigating] It appears GitLab Pages may be affected. Other GitLab.com services appear to be operating correctly.

October 24, 2020 15:40 UTC
[Investigating] We are experiencing degraded performance, probably related to GCS issues. We are investigating in gitlab.com/gitlab-com/gl-infra/production/-/issues/2887

GitLab.com is experiencing degraded availability because of high database load.

October 24, 2020 11:32 UTC

Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 24, 2020 11:32 UTC
[Resolved] GitLab.com is back to operating normally.

October 24, 2020 11:19 UTC
[Monitoring] GitLab.com is back to operating normally. We are continuing to monitor on the incident issue.

October 24, 2020 10:57 UTC
[Identified] We are continuing to investigate the origin of some slow database queries. We are tracking on gitlab.com/gitlab-com/gl-infra/production/-/issues/2885.

October 24, 2020 10:27 UTC
[Identified] We have identified a particular endpoint that was related to the issues and have disabled it temporarily on GitLab.com. The application appears to be recovering, but we are continuing to investigate.

October 24, 2020 10:02 UTC
[Investigating] We are continuing to investigate issues related to high load on the DB cluster for GitLab.com.

October 24, 2020 09:36 UTC
[Investigating] GitLab.com is experiencing degraded availability because of high database load. We are investigating.
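
The incident issue, not this note, documents how the problematic endpoint was actually disabled. As an illustrative sketch only: GitLab commonly gates endpoints behind feature flags, and an administrator can turn a flag off instance-wide through the Features API. The flag name and token below are hypothetical.

    import requests

    GITLAB_API = "https://gitlab.com/api/v4"
    FEATURE_NAME = "expensive_endpoint"   # hypothetical feature-flag name
    ADMIN_TOKEN = "glpat-..."             # admin-level personal access token (placeholder)

    # Set the feature gate to false, disabling the gated code path.
    resp = requests.post(
        f"{GITLAB_API}/features/{FEATURE_NAME}",
        headers={"PRIVATE-TOKEN": ADMIN_TOKEN},
        data={"value": "false"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())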

Potential DDoS on GitLab.com

October 23, 2020 14:53 UTC

Incident Status

Operational


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




October 23, 2020 14:53 UTC
[Resolved] Thank you all for your patience. We can confirm that the DDoS incident is now fully mitigated; GitLab.com is fully operational and secure.

October 23, 2020 13:58 UTC
[Monitoring] GitLab.com is continuing to operate normally. We've identified the source of an apparent attack and have mitigated its impact. We will continue investigating this incident and how we can further lessen the impact of attacks like these.

October 23, 2020 13:23 UTC
[Investigating] GitLab.com experienced a brief disruption due to a potential DDoS attempt. GitLab.com is now operating as normal and we are continuing to look into the source of the disruption.

August 2020

Routing failures from some locations impacting GitLab.com

August 30, 2020 16:59 UTC

Incident Status

Partial Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




August 30, 2020 16:59 UTC
[Resolved] This incident is now resolved.

August 30, 2020 14:51 UTC
[Monitoring] Cloudflare is reporting that this issue has been mitigated. We will monitor the situation.

August 30, 2020 13:26 UTC
[Identified] There is an ongoing issue with the CenturyLink ISP that is impacting some traffic to GitLab.com. We are tracking this incident here: cloudflarestatus.com/incidents/hptvkprkvp23.

July 2020

Cloudflare DNS issues

July 17, 2020 22:34 UTC


Incident Status

Service Disruption


Components

Website, API, Git Operations, Container Registry, GitLab Pages, CI/CD - Hosted runners on Linux, CI/CD - Hosted runners on Windows, Background Processing, GitLab Customers Portal, Support Services, packages.gitlab.com, version.gitlab.com, forum.gitlab.com, Canary


Locations

Google Compute Engine, Azure, Digital Ocean, Zendesk, AWS




July 17, 2020 22:34 UTC
[Resolved] Thanks to everyone for their patience. We have seen traffic resume normally. Any further review will continue through the incident issue gitlab.com/gitlab-com/gl-infra/production/-/issues/2433

July 17, 2020 22:00 UTC
[Monitoring] Some users are still experiencing issues accessing GitLab services, especially if they're using Cloudflare's 1.1.1.1 DNS resolver. Cloudflare's status page also suggests some locations are still affected. We recommend using a different DNS resolver, at least temporarily. More details: gitlab.com/gitlab-com/gl-infra/production/-/issues/2433#note_381587434

July 17, 2020 21:39 UTC
[Monitoring] Cloudflare seems to have resolved their DNS issues, and all services are operational now. We are monitoring for now. An incident issue was created and reviewed in gitlab.com/gitlab-com/gl-infra/production/-/issues/2433

July 17, 2020 21:33 UTC
[Identified] We have confirmed that the Cloudflare issue has affected all our services including Support's ZenDesk instances and our status page. We will continue to provide updates as we learn more.

July 17, 2020 21:22 UTC
[Investigating] GitLab.com is experiencing issues related to DNS with Cloudflare. We are investigating.
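
For anyone needing to switch resolvers temporarily, one way to confirm that an alternative resolver can reach GitLab.com is to query it explicitly, for example with the dnspython package (an assumption here, not part of the original advisory). The 8.8.8.8 resolver below is just an example; any working resolver will do.

    import dns.resolver  # pip install dnspython

    # Query an explicit alternative resolver instead of the system default.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["8.8.8.8"]   # e.g. Google Public DNS (example only)
    resolver.lifetime = 5                # overall timeout in seconds

    answer = resolver.resolve("gitlab.com", "A")
    for record in answer:
        print("gitlab.com resolves to:", record.address)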




