2026-05-10
Git repository architecture for the large enterprise: a GitLab field guide
The repository architecture decision is the most load-bearing engineering choice most enterprises don’t realize they’re making. Teams that pick a structure early — even a sub-optimal one — and commit, run circles around teams that re-organize their Git topology every 18 months. The compounding effects of consistent CI patterns, predictable code locations, and shared tooling outweigh the marginal benefit of “the right” structure for most decisions.
This post is the wide introduction: the three patterns that exist (monorepo, polyrepo, federation), GitLab’s hierarchical model that makes federation practical, GitLab’s full capability surface (most teams use ~30% of it), and an opinionated implementation playbook for actually getting value from the platform.
The three repository patterns
- Monorepo. Everything in one repository. Apps, libraries, infrastructure, docs all share a tree.
- Polyrepo. Many small repositories, one per service or library.
- Federation. Hybrid — a small number of shared monorepos plus team-owned polyrepos.
These one-line summaries are misleading. Each pattern is a network of tradeoffs, not a single property, and the depth of those tradeoffs determines whether you’ll regret the choice in 18 months.
Monorepo: pros, cons, and what it really takes
| Pros | Cons |
|---|---|
| Atomic cross-service refactors. Rename a function signature, update every caller, single commit. | Build tooling becomes mandatory. Bazel / Buck2 / Pants / Nx / Turborepo aren’t optional past ~50 engineers; without them CI times balloon. |
| One source of dependency truth. One package.json (or per-language lockfile) prevents the “which version of X are we on?” question. | Storage size grows large. Mature monorepos run 10-100+ GB. Specialized Git tooling (sparse checkout, partial clone, virtual file systems) becomes necessary. |
| Consistent tooling. One linter config, one test runner, one CI template — and they apply everywhere. | Coarse access control. Anyone with read sees everything. Sensitive code (compliance-restricted, exec compensation, etc.) needs separate repos anyway. |
| Easy discoverability. grep across the entire codebase. Refactor confidence comes from being able to see every caller. | IDE performance struggles. Opening a 50GB workspace in VS Code or JetBrains is its own discipline. |
| Atomic library API changes. Update a shared library and all its consumers in one PR. No multi-week migration. | Branch noise. Every team’s commits appear in everyone’s git log. Cognitive overhead. |
| Strong test impact analysis (with the right build tool). Only test what’s affected. | Hard to open-source individual components. Carving out a piece for OSS release is a project. |
| Atomic deployments possible. Deploy multiple services from one commit hash. | Onboarding takes longer. New engineers need orientation across more code than they’ll touch in their first quarter. |
Real adopters and what they use:
- Google. Piper (proprietary VCS), Bazel build system. ~2B lines of code. Test impact analysis is foundational; without it, the model collapses.
- Meta. Sapling (formerly EdenSCM), Buck2 build system. Same architectural approach as Google.
- Microsoft. Windows development moved onto Git via VFS for Git (GVFS), later Scalar — tooling invented to make Git work at that scale.
- Uber. Go monorepo with Bazel. ~75M LoC.
- Twitter / X. Pants build system, then Bazel.
- Stripe. Ruby monorepo with Sorbet for static type checking across millions of lines.
- Many startups. Use Nx (JS/TS), Turborepo, Lerna, or Yarn/PNPM workspaces for smaller-scale monorepos before they hit Google-scale problems.
The non-negotiable infrastructure: a build system that understands the dependency graph and can build/test only what’s affected. Bazel, Buck2, Pants, Nx, Turborepo are all variations on this theme. Without it, a 200-engineer monorepo runs 4-hour CI on a one-line change and gets abandoned.
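Short of a full Bazel-style dependency graph, GitLab CI can approximate affected-only testing with `rules: changes:`. A minimal sketch — the paths (`services/payments`, `libs/auth`) are illustrative, not from any real layout:

```yaml
# .gitlab-ci.yml sketch: run a service's tests only when that service
# or a library it depends on changed. Paths are illustrative.
stages: [test]

payments-tests:
  stage: test
  image: node:20
  rules:
    # Job is created only if a matching path changed in this push/MR.
    - changes:
        - services/payments/**/*
        - libs/auth/**/*
  script:
    - cd services/payments
    - npm ci && npm test
```

Note this is path-based: shared-library edges must be listed by hand, whereas a real build tool derives them from the dependency graph.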
Polyrepo: pros, cons, and where it cracks
| Pros | Cons |
|---|---|
| Team autonomy. Each team owns their repo end-to-end. Hiring, on-call, deployment cadence — all theirs. | Dependency-version hell. Service A uses auth-sdk v1.2; Service B uses v1.0 with a known CVE. Tracking and updating across 200 repos is a quarter-long project. |
| Smaller blast radius per change. A broken commit affects one service, not the whole org. | Cross-repo refactors require coordinated PRs. Renaming a shared API means 30 PRs landing in coordinated order, often impossible to actually achieve. |
| Simple per-repo CI/CD. Each pipeline is small and focused. | Tooling drift between repos. Different linters, test runners, build tools, code styles. New engineers context-switch constantly. |
| Fine-grained access control. Per-repo permissions are easy. Compliance scoping is obvious. | Code duplication. “We need an HTTP client” — five teams write five wrappers. |
| Easier to open-source. Individual repos can become public without surgery. | Hard to enforce platform standards. Mandatory CI scans, logging conventions, security policies — each repo has to opt in (or be forced via configuration management). |
| Faster IDE performance. Workspaces are small. | Multiple library versions in production simultaneously. A CVE in shared-auth-sdk v1.2 doesn’t disappear because v1.3 was released — 80 services still pin v1.2. |
| Independent versioning per service. No “what does v2.4.1 mean” debates. | Slower platform-wide changes. A telemetry standard rollout takes a year as each team upgrades on their own schedule. |
| Easy to retire / replace a service. Delete the repo, point traffic elsewhere. | “Where does this code live?” friction. Onboarding maps become fragmented. |
Real adopters:
- Amazon. ~100,000+ microservice repos. The “two-pizza team” model literalized in source control. Pairs with Brazil (their build system) and Apollo (their deployment platform) to manage the cross-repo coordination.
- Netflix. Heavy polyrepo, Spring-based JVM services.
- Most early-stage startups. Default until they hit the pain points around year 3.
Mitigation tools that make polyrepo scale workable:
- Renovate / Dependabot — centralized dependency updates as automated PRs across all repos.
- Backstage — service catalog so “where does this code live?” has an answer.
- Shared CI templates / GitHub Actions composite actions — to fight tooling drift.
- Policy-as-code enforcement (OPA, Conftest) — to force conventions.
- API governance — versioned API contracts so cross-service refactors at least follow a predictable shape.
Polyrepo without these mitigations works for 30 engineers and breaks at 300.
Federation: the layered compromise
The pattern enterprises converge on, intentionally or by accumulation. Federation explicitly accepts that:
- Platform code (shared by everyone) benefits from monorepo discipline — atomic updates, single source of truth, consistent tooling.
- Product code (independent per team) benefits from polyrepo isolation — team autonomy, smaller blast radius.
- The boundary is a deliberate architectural decision, not an accident.
In practice this resolves into three tiers:
Tier 1: Platform monorepo(s). A small number (1-5) used by everyone:
- `platform/infrastructure/` — terraform modules, Kubernetes manifests, network configs
- `platform/ci-templates/` — shared CI components, security scanning, compliance pipelines
- `platform/design-system/` — UI components, design tokens, theme libraries
- `platform/shared-libs/` — auth SDKs, logging clients, telemetry libraries
Tier 1 has the discipline of a monorepo (atomic library changes, consistent tooling) and a clear platform-team ownership boundary.
Tier 2: Product-area monorepos. Cohesive groups of services that ship together:
- `product-banking/services/` — accounts, payments, cards, fraud — services with high coupling and shared release cadence
- `product-trading/` — trading platform front-end + back-end + analytics
Each Tier 2 monorepo is a single team’s or product’s world. Atomic refactors within it; explicit API contracts at its edges.
Tier 3: Independent service polyrepos. Truly autonomous services with their own teams, deployment cadence, and tech choices:
- `notification-service/` — sidecar service used by many products, owned by one team
- `compliance-archive/` — regulatory data store, independent compliance scope
- Each in its own repo with shared CI templates from Tier 1
Why this is the compromise that works: Tier 1 captures the atomic-refactor and consistent-platform wins of monorepo where they actually matter — in the code everyone depends on. Tier 3 captures the team-autonomy and blast-radius wins of polyrepo where they actually matter — in services that don’t need to ship together. Tier 2 captures the middle ground — cohesive product domains where you want internal atomicity but external isolation.
The discipline is in the boundary decisions:
- What goes in Tier 1? Code consumed by ≥3 teams and with a cross-team release cadence. Not “all utility code.”
- What goes in Tier 2? Services with shared deployment lifecycle, owned by one team or one cohesive group. Not “any group of related services.”
- What goes in Tier 3? Services with independent SLOs, independent release cadence, autonomous teams. Not “everything else by default.”
Most “federation gone wrong” failures are about Tier 2 — teams either pile every related service into a Tier 2 monorepo (and recreate the polyrepo problem internally), or shard too aggressively into Tier 3 (and recreate the dependency-hell problem).
How to decide: the factor table
| Factor | Pushes toward monorepo | Pushes toward polyrepo |
|---|---|---|
| Team size | < 20 engineers | > 50 engineers |
| Service coupling | High (shared release cycle) | Low (independent deploys) |
| Cross-cutting changes | Frequent | Rare |
| Build tool maturity | Have Bazel / Nx / Buck / Pants | Standard per-repo CI is fine |
| Compliance scope | Same requirements across services | Different per service |
| Open-source strategy | All internal | Some open-source |
| Developer experience priority | Consistent tooling | Team autonomy |
| Storage growth tolerance | Comfortable with 10s of GB | Want each repo under 1 GB |
| Cross-language refactors | Common | Rare |
| Build infra investment | Willing to invest | Want default GitLab CI |
Most enterprises score mixed on this table — that’s why federation wins. The platform code answers row-by-row toward monorepo (shared release cycle, frequent cross-cutting changes, common compliance, consistent tooling matters). The service code answers row-by-row toward polyrepo (autonomous teams, independent deploys, different compliance per service, team autonomy matters).
Federation isn’t a third pattern. It’s the structural acknowledgement that the answer is “both, and the boundary is the work.”
The trap
The trap in this decision: picking one pattern globally. Greenfield teams pick monorepo or polyrepo based on whatever the loudest engineer believes, and lock into a model that fits one half of the future codebase. The cost shows up at year 3, when the federation conversation finally happens under duress, with thousands of files to move and dozens of release pipelines to rewire.
The early commitment that pays off isn’t “monorepo” or “polyrepo.” It’s deciding now what your platform layer is and where the boundary sits between platform code and team code. Get that boundary right and federation falls out naturally. Get it wrong and you’ll re-litigate it for years.
GitLab’s hierarchical model
GitLab’s structural primitives map naturally to federation:
- Top-level group (e.g., `acme-corp`) — the organization
- Sub-groups (nested arbitrarily deep) — divisions, products, teams
- Projects — individual repositories
- Members — users assigned at any group or project level, with role inheritance flowing down
A typical large-enterprise layout:
acme-corp/
├── platform/ (group)
│ ├── infrastructure/ (subgroup)
│ │ ├── terraform-modules/ (project — shared monorepo)
│ │ ├── k8s-platform/ (project — shared monorepo)
│ │ └── ci-templates/ (project — shared CI library)
│ ├── shared-libs/ (subgroup)
│ │ ├── design-system/ (project)
│ │ └── auth-sdk/ (project)
│ └── developer-portal/ (project)
├── product-banking/ (group)
│ ├── services/ (subgroup)
│ │ ├── accounts-service/ (project)
│ │ ├── payments-service/ (project)
│ │ └── fraud-detection/ (project)
│ └── apps/ (subgroup)
│ ├── mobile-app/ (project)
│ └── web-banking/ (project)
├── product-insurance/ (group)
│ └── ...
└── shared/ (group)
├── docs/ (project)
└── runbooks/ (project)
Critical mechanics this gives you:
- Permissions cascade. Grant a user `Maintainer` on `platform/` and they get it on every project underneath. Adjust at the leaf project to override.
- Group-level resources. CI/CD variables, runners, container registry credentials, milestones — defined at the group level and inherited by all projects.
- Group epics and roadmaps. Cross-project planning lives in the group, not in any one project.
- Audit events roll up to the group level for compliance reporting.
- Compliance frameworks apply to all projects in a group.
This hierarchy is the enabler of federation in GitLab. It’s also the place most enterprises mis-design — flat group layouts force re-organization in year 2.
The GitLab capability surface
What you’re actually buying when you adopt GitLab — far more than “Git hosting”:
Eight capability areas; ~64 capabilities across them. Most teams use 15-20 of these at any point in time. The flywheel comes from compounding adoption across the surface.
Source control and code review
The Git-hosting basics plus the workflow primitives that make code review work at enterprise scale:
- Branches and tags. Standard Git. Protected branches enforce who can push (typically: nobody directly; all changes via Merge Request) and protect tags from being moved or deleted.
- Merge Requests (MRs). GitLab’s PR equivalent. Discussion threads, suggested changes, file-level comments, resolved discussion tracking. MR templates for consistent review.
- CODEOWNERS file. Per-path ownership; specified reviewers required for changes in their paths. Enables true codebase-wide review enforcement without manual reviewer assignment.
- Approval rules. Configurable per project or per branch. “2 approvals from backend team + 1 from security + 1 from product owner” is expressible declaratively.
- Push rules (Premium+). Server-side enforcement of commit message format, signed commits, file name patterns, prohibited filenames, branch name conventions. Catches at push, not at MR time.
- Mirroring + sync. Bi-directional mirroring with GitHub, Bitbucket, or another GitLab. Useful for open-sourcing internal projects or vendoring external code with controlled sync.
- Reviewer roulette. Auto-assigns reviewers based on CODEOWNERS or load-balancing rules.
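What a CODEOWNERS file looks like in practice — the group names and paths here are hypothetical, and later rules take precedence over earlier ones:

```
# CODEOWNERS — group names and paths are illustrative
# Fallback owner for anything not matched by a later rule
*                      @acme-corp/platform

# Infrastructure and security-sensitive paths
*.tf                   @acme-corp/platform
/auth/                 @acme-corp/security

# Product code
/services/payments/    @acme-corp/product-banking
```

Combined with an approval rule requiring Code Owner approval on protected branches, these entries become enforced reviewers rather than suggestions.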
Planning and tracking entities
GitLab is also a project management tool. Most teams under-use this. The hierarchy:
| Entity | Scope | What it’s for |
|---|---|---|
| Issue | Project (or group) | Atomic unit of work — a bug, a feature, a chore. Templates per project. |
| Epic | Group level (Premium+) | Multi-issue initiative. Links to many issues across many projects. |
| Milestone | Project or group | Time-boxed delivery target (e.g., “Q3 2026 release”). Issues tagged with milestone. |
| Iteration (Premium+) | Group | Sprint primitive. Auto-rotates. Issues assigned to iterations. |
| Board | Project or group | Kanban view of issues with configurable columns, swimlanes, weight aggregation. |
| Roadmap (Premium+) | Group | Timeline visualization of epics. Strategic view across products. |
| Service Desk | Project | Email-to-issue gateway. Customer emails create issues; replies threaded in. |
| To-Do list | User | Personal queue of mentions, assigned items, requested reviews. |
A few patterns that work:
- Epic per feature, milestone per release. Epics group all the work for a feature regardless of how many projects touch it; milestones track delivery to a release date. They’re orthogonal.
- Issue templates per project. `bug.md`, `feature-request.md`, `incident.md` in `.gitlab/issue_templates/`. Forces consistent fields.
- Weighted issues for sprint capacity. Each issue gets a weight of 1/2/3/5/8 (Fibonacci-ish). Iteration boards aggregate weight. Teams calibrate their per-iteration capacity over 3-5 iterations.
- Boards per workflow stage. One project might have multiple boards: “Backlog refinement,” “Active sprint,” “Bugs,” “Security.” Same issues, different views.
- Service Desk for external interfaces. Customer support tickets become issues. Engineering and support work the same queue.
- Cross-project epics are how product management actually works at scale. An epic in the `product-banking/` group can have child issues in `accounts-service`, `mobile-app`, `web-banking`, and `payments-service` — and a single roadmap view shows progress across all of them.
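A hedged sketch of an issue template — the fields are illustrative, while `/label` and `/weight` are standard GitLab quick actions that apply on submission:

```markdown
<!-- .gitlab/issue_templates/bug.md — illustrative fields; adjust to taste -->
## Summary

## Steps to reproduce
1.

## Expected vs. actual behavior

## Environment
- Service and version:
- Browser / OS:

/label ~bug ~needs-triage
/weight 2
```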
CI/CD in GitLab
CI/CD has always been the capability that distinguishes GitLab from “just Git hosting.” The mechanics:
- `.gitlab-ci.yml` at the root of any repo defines the pipeline. Jobs in stages run in parallel within a stage, sequentially across stages.
- Runners execute jobs. Three flavors: shared (GitLab-managed on SaaS, or shared across the GitLab self-managed instance), group (registered to a group, available to all projects within), project-specific. Executor types: Docker (most common), Kubernetes (best for autoscaling), shell, SSH.
- Parent-child pipelines. A pipeline can trigger another pipeline within the same project (e.g., one parent pipeline that fans out to children based on what changed).
- Multi-project pipelines. A pipeline in project A can trigger a pipeline in project B, waiting on its completion. Cross-project release orchestration.
- Environments. Declare deployment targets (`staging`, `production`, `feature-XYZ`). Track deployment history per environment. Rollback by re-running an older job.
- Dynamic environments. A per-MR preview environment created on push, destroyed on merge. The preview URL appears in the MR.
- Schedules. Pipeline schedules (cron-syntax) for nightly builds, security scans, data refreshes.
- Auto DevOps. Template-driven full CI/CD with sensible defaults for many languages. Less popular at large enterprises (every team wants their own variations) but excellent for prototypes.
- Components / Includes. Pull shared CI logic from a central project (`include: project: 'platform/ci-templates'`). The DRY mechanism for federation-style CI.
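Putting the include and multi-project mechanics together, a product repo’s pipeline might look like this sketch — the project paths and template file name are assumptions:

```yaml
# Product repo .gitlab-ci.yml: shared jobs via include, plus a
# multi-project trigger. Project paths and file names are illustrative.
include:
  - project: 'platform/ci-templates'
    ref: main
    file: '/templates/security.gitlab-ci.yml'

stages: [test, deploy]

unit-tests:
  stage: test
  script:
    - make test

# Multi-project pipeline: kick off a downstream project's pipeline and
# mirror its status in this pipeline (strategy: depend).
trigger-deploy:
  stage: deploy
  trigger:
    project: product-banking/deploy-orchestrator
    strategy: depend
```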
Security scanning (Ultimate)
GitLab Ultimate’s distinctive value. Built-in:
- SAST. ~30 languages, including Go, Java, Python, C#, JavaScript/TypeScript, PHP, Ruby. Based on Semgrep-derived rules plus other engines per-language.
- DAST. Browser-based scanner against deployed applications. Also includes API security testing (REST, GraphQL, SOAP).
- Dependency Scanning. CVE detection in transitive dependencies. Supports npm, Maven, pip, Go, etc.
- Secret Detection. Pre-receive hook + pipeline scan for committed API keys, tokens, certs.
- Container Scanning. CVE scanning of built container images. Uses Trivy under the hood.
- License Compliance. Detects OSS licenses in dependencies. Allow / deny list per group.
- Fuzz Testing (Ultimate). Coverage-guided fuzzing for Go, C/C++, Java, Python.
- Infrastructure-as-Code Scanning. Terraform, CloudFormation, Kubernetes manifests checked against policy rules.
- Vulnerability Dashboard. Cross-project aggregation. Filter by severity, tool, status, project. Triage workflow with approver paths.
For the broader security context, see the shift-left/shift-right post.
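Enabling the scanners is mostly a matter of template includes, tier permitting. The template names below match GitLab’s shipped templates, but verify against your instance version:

```yaml
# Scanners are enabled by including GitLab's shipped CI templates;
# behavior is tuned via documented variables, not by rewriting jobs.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml

variables:
  # Keep test fixtures out of SAST results
  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp"
```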
Container and package registries
An often-underused capability set that eliminates separate Artifactory / Nexus deployments for many enterprises:
- Container Registry. Per-project Docker image registry. Tied to GitLab’s auth (any user with project access can pull; tokens are short-lived). Image expiration policies.
- Package Registry. Supports npm, Maven, PyPI, Composer, NuGet, Go modules, Conan, Helm, Debian, RubyGems, and a generic package format. Each project gets its own.
- Dependency Proxy. Caches DockerHub / npm / PyPI public packages within GitLab. Reduces external dependency on registry availability. Saves bandwidth.
- Terraform Module Registry. Per-group module registry. Teams reference modules via `source = "https://gitlab.com/api/v4/projects/.../terraform/modules/..."`.
- Artifact retention. Configurable per project / group. Critical at scale — without retention policies, artifact storage grows unboundedly.
Replacing JFrog Artifactory or Sonatype Nexus with GitLab’s registries is a meaningful cost reduction for many organizations.
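A sketch of a build job pushing to the project’s built-in Container Registry, using GitLab’s predefined CI variables so no credentials are hand-managed (image versions and stage name are illustrative):

```yaml
# Build and push to the project's Container Registry. The CI_REGISTRY*
# variables are predefined by GitLab for every pipeline.
build-image:
  stage: build
  image: docker:27
  services:
    - docker:27-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```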
Compliance and governance
- Audit Events. Every action (login, push, MR creation, merge, project setting change) recorded. Group-level audit log streaming to external SIEM.
- Compliance Frameworks. Define a framework (e.g., “PCI-DSS,” “SOC 2”) and apply to projects. Compliance projects get mandatory CI/CD configurations they can’t bypass.
- Compliance Pipelines. A compliance framework can inject required pipeline jobs into every CI run on affected projects — guaranteeing security scans, license checks, signing.
- License Scanning. GPL-3.0 in your dependency tree? Compliance policy blocks merge.
- SBOM Export. CycloneDX and SPDX formats supported. Generates on every build.
- Push Rules. Server-side enforcement of commit signing, branch name patterns, file size limits, prohibited filenames.
- Group Access Tokens. Short-lived API tokens scoped to a group. Better than long-lived personal access tokens.
- Read-only mode for archived projects. Preserves history while preventing changes.
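A compliance pipeline configuration can be sketched like this. The framework points at this file; it injects required jobs and then includes the project’s own `.gitlab-ci.yml` so team-defined jobs still run — the include-back follows GitLab’s documented pattern, while the job name is illustrative:

```yaml
# Compliance pipeline config (lives in a project the framework references).
# Include-back: pull in the affected project's own CI config so its jobs
# run alongside the mandatory ones.
include:
  - project: '$CI_PROJECT_PATH'
    file: '$CI_CONFIG_PATH'
    ref: '$CI_COMMIT_SHA'

audit-evidence:          # mandatory job teams cannot remove
  stage: .post
  script:
    - echo "Recording pipeline $CI_PIPELINE_ID for $CI_PROJECT_PATH"
```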
DevOps tools
- Feature Flags. Built-in feature flag UI (Unleash-based). Define flags per environment; toggle without redeploying.
- Error Tracking. Sentry-compatible integration. Errors aggregated in the GitLab UI.
- Value Stream Analytics. Cycle time from issue created → MR merged → deployed. Identifies bottlenecks.
- Code Quality reports. Pluggable code quality tools (e.g., RuboCop, ESLint) report into MRs.
- Test Coverage. Coverage badges and per-MR coverage diff.
- Insights Dashboards (Premium+). Customizable charts on issues, MRs, deployments. Engineering metrics without a separate tool.
- GitLab Pages. Static site hosting per project. Useful for project docs, design systems, marketing micro-sites.
- Wiki and Snippets. Markdown wikis per project / group. Snippets for sharing code without creating projects.
AI capabilities (Duo)
GitLab’s AI add-on, available across tiers in 2026:
- Duo Code Suggestions. Inline code completion in supported IDEs (VS Code, JetBrains). Comparable to Copilot.
- Duo Chat. Conversational assistant. Answers questions about your code, explains vulnerabilities, suggests fixes.
- Vulnerability Explanation. For each security finding, an LLM-generated explanation of the vulnerability and remediation.
- MR Summaries. Auto-generated summary of what a merge request changes. Useful for reviewers and audit.
- Refactor Suggestions. Recommend code restructuring within a file or across files.
- Test Generation. Generate unit tests from existing code.
- Issue Summarization. Summarize long issue threads.
- Root Cause Analysis. For failed pipelines, suggest the likely failure cause.
The pattern: AI augmenting existing workflows rather than replacing them. The features feel similar to GitHub Copilot Enterprise’s offering, with GitLab Duo’s competitive differentiator being deeper integration with planning, security findings, and CI/CD context.
Implementation: how to actually get value
Adopting GitLab is easy. Getting value out of GitLab takes deliberate work. The 12-step playbook for enterprises:
- Pick the deployment model. SaaS (gitlab.com) vs Self-Managed vs Dedicated. SaaS for most; Self-Managed if regulatory or air-gapped requirements; Dedicated for “we need single-tenant SaaS.”
- SSO + SCIM from day one. SAML/OIDC to your enterprise IdP. SCIM for automated user provisioning. Manual user management does not scale past 200 people.
- Design the group hierarchy. This is the load-bearing decision. Plan for 3+ years. Reorganization is painful — issue references break, permissions reset, CI variable inheritance shifts.
- Establish shared CI templates in a `platform/ci-templates` project. Every team includes from there. Updates propagate organically.
- Configure runners properly. Group-level Kubernetes runners scale better than shared SaaS runners for most enterprises. Plan capacity per team.
- Enable security scanning at the group level. Compliance frameworks make it mandatory for production projects.
- Set up Container Registry + Package Registry. Deprecate parallel Artifactory / Nexus where possible. Massive cost saving.
- Implement protected branches and approval rules universally. CODEOWNERS files in every repo.
- Define compliance frameworks for regulated projects. Tie them to compliance pipelines that enforce the controls.
- Roll out Duo to a pilot team. Measure productivity impact before broader rollout.
- Establish a platform team that owns shared CI templates, runner capacity, group hierarchy, and the developer portal. Without ownership, drift compounds.
- Measure adoption. Insights dashboards, Value Stream Analytics. The capabilities exist; the discipline of using them is the differentiator.
Common anti-patterns
- Flat group hierarchy. Everyone is in one top-level group with hundreds of projects. Permissions become unmanageable. Refactoring is expensive.
- Per-project CI templates copy-pasted. When the security policy changes, you touch 80 repos. Use group-level includes.
- Long-lived personal access tokens for service accounts. Use Project / Group Access Tokens with explicit expiration.
- No protected branches. Direct pushes to main happen. Audit trail breaks down.
- Ignoring the planning capabilities. Teams keep using Jira “because we’ve always used Jira” while paying for GitLab Premium. The Jira integration works fine, but the in-platform experience is meaningfully better.
- Treating Auto DevOps as production-ready. Auto DevOps is a great prototype; most enterprises customize it heavily before production use.
- Underestimating storage. Container images + packages + artifacts + LFS objects accumulate. Set retention policies early or pay the migration tax later.
- Letting compliance pipelines be optional. Compliance Frameworks exist for a reason — apply them so they can’t be bypassed.
Beyond GitLab: when to look elsewhere
GitLab is excellent. It is not the only choice:
- GitHub Enterprise — larger community, better Copilot integration, weaker built-in CI/CD compared to GitLab’s. Default for open-source-heavy organizations and ML/AI startups.
- Bitbucket Cloud / Data Center — strongest if you’re already deep in the Atlassian (Jira) ecosystem.
- Azure DevOps — Microsoft shops, strong Azure integration. The Repos product is shrinking; Microsoft mostly directs users to GitHub.
- Gitea / Forgejo — self-hosted lightweight alternatives. Good for small teams or air-gapped labs.
- AWS CodeCommit — being sunset. Don’t start new projects here.
The decision is rarely “which is best” and almost always “which fits our existing ecosystem and team skills.” GitLab’s natural lane: organizations that want one platform for source, CI/CD, security, and registries with strong self-managed options and a clear enterprise tier.
The closing pattern
The teams that get the most from GitLab share three habits:
- Treat the platform as a product. A platform team owns the CI templates, the runner fleet, the group hierarchy, and the developer experience. Drift is everyone’s problem; the platform team is paid to prevent it.
- Lean into the capability surface. Every quarter, pick one underused capability (Compliance Pipelines, Iterations, Feature Flags) and roll it out properly. Compounding returns.
- Standardize aggressively at the platform level, vary at the team level. Shared CI templates, shared security scanning, shared registry strategy. Team-specific application code, team-specific test choices, team-specific feature flags. Federation in practice.
The mistake to avoid: buying GitLab for the Git hosting and never adopting the rest. The competitive value of GitLab in 2026 vs GitHub or Bitbucket isn’t Git — it’s the integration of source, planning, CI/CD, security, and registries under one roof. Teams that use 30% of the platform get 30% of the value. Teams that use 70% — same license, vastly different leverage.