Published on: March 12, 2026

7 min read

How to use GitLab Container Virtual Registry with Docker Hardened Images

Learn how to simplify container image management with this step-by-step guide.

If you're a platform engineer, you've probably had this conversation:

"Security says we need to use hardened base images."

"Great, where do I configure credentials for yet another registry?"

"Also, how do we make sure everyone actually uses them?"

Or this one:

"Why are our builds so slow?"

"We're pulling the same 500MB image from Docker Hub in every single job."

"Can't we just cache these somewhere?"

I've been working on Container Virtual Registry at GitLab specifically to solve these problems. It's a pull-through cache that sits in front of your upstream registries — Docker Hub, dhi.io (Docker Hardened Images), MCR, and Quay — and gives your teams a single endpoint to pull from. Images get cached on the first pull. Subsequent pulls come from the cache. Your developers don't need to know or care which upstream a particular image came from.

This article shows you how to set up Container Virtual Registry, specifically with Docker Hardened Images in mind, since that combination makes sense for teams that care about security but don't want to make their developers' lives harder.

What problem are we actually solving?

The platform teams I talk to typically manage container images across three to five registries:

  • Docker Hub for most base images
  • dhi.io for Docker Hardened Images (security-conscious workloads)
  • MCR for .NET and Azure tooling
  • Quay.io for Red Hat ecosystem stuff
  • Internal registries for proprietary images

Each one has its own:

  • Authentication mechanism
  • Network latency characteristics
  • Way of organizing image paths

Your CI/CD configs end up littered with registry-specific logic. Credential management becomes a project unto itself. And every pipeline job pulls the same base images over the network, even though they haven't changed in weeks.

Container Virtual Registry consolidates this. One registry URL. One authentication flow (GitLab's). Cached images are served from GitLab's infrastructure rather than traversing the internet each time.

How it works

The model is straightforward:

Your pipeline pulls:
  gitlab.com/virtual_registries/container/1000016/python:3.13

Virtual registry checks:
  1. Do I have this cached? → Return it
  2. No? → Fetch from upstream, cache it, return it

You configure upstreams in priority order. When a pull arrives, the virtual registry checks each upstream in turn until it finds the image. The result is cached for a configurable period (24 hours by default).

┌─────────────────────────────────────────────────────────┐
│                    CI/CD Pipeline                       │
│                          │                              │
│                          ▼                              │
│   gitlab.com/virtual_registries/container/<id>/image   │
└─────────────────────────────────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────┐
│            Container Virtual Registry                   │
│                                                         │
│  Upstream 1: Docker Hub ────────────────┐               │
│  Upstream 2: dhi.io (Hardened) ────────┐│               │
│  Upstream 3: MCR ─────────────────────┐││               │
│  Upstream 4: Quay.io ────────────────┐│││               │
│                                      ││││               │
│                    ┌─────────────────┴┴┴┴──┐            │
│                    │        Cache          │            │
│                    │  (manifests + layers) │            │
│                    └───────────────────────┘            │
└─────────────────────────────────────────────────────────┘
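The resolution flow in the diagram can be sketched in a few lines of Python. This is a toy model of the logic, not GitLab's implementation — the upstream objects, manifest strings, and in-memory cache are stand-ins:

```python
# Toy model of pull-through resolution: check the cache first, then each
# upstream in priority order, caching whatever is found.
import time

CACHE_VALIDITY_SECONDS = 24 * 3600  # default cache validity

cache = {}  # image ref -> (manifest, cached_at)

def fetch_from_upstream(upstreams, image_ref):
    """Try each upstream in priority order; return the first hit."""
    for upstream in upstreams:
        manifest = upstream.get(image_ref)  # upstreams modeled as dicts here
        if manifest is not None:
            return manifest
    raise LookupError(f"{image_ref} not found in any upstream")

def resolve(upstreams, image_ref):
    entry = cache.get(image_ref)
    if entry is not None:
        manifest, cached_at = entry
        if time.time() - cached_at < CACHE_VALIDITY_SECONDS:
            return manifest  # warm cache: no network hop to the upstream
    manifest = fetch_from_upstream(upstreams, image_ref)
    cache[image_ref] = (manifest, time.time())
    return manifest

# Docker Hub is checked before dhi.io in this example ordering.
upstreams = [
    {"library/alpine:latest": "manifest-from-docker-hub"},
    {"python:3.13": "manifest-from-dhi"},
]
print(resolve(upstreams, "python:3.13"))  # cache miss: fetched from dhi.io
print(resolve(upstreams, "python:3.13"))  # cache hit: served from cache
```

The key property is that the second call never touches an upstream, which is where the latency savings below come from.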

    

Why this matters for Docker Hardened Images

Docker Hardened Images are great because of the minimal attack surface, near-zero CVEs, proper software bills of materials (SBOMs), and SLSA provenance. If you're evaluating base images for security-sensitive workloads, they should be on your list.

But adopting them creates the same operational friction as any new registry:

  • Credential distribution: You need to get Docker credentials to every system that pulls images from dhi.io.
  • CI/CD changes: Every pipeline needs to be updated to authenticate with dhi.io.
  • Developer friction: People need to remember to use the hardened variants.
  • Visibility gap: It's difficult to tell whether teams are actually using hardened images vs. regular ones.

Virtual registry addresses each of these:

Single credential: Teams authenticate to GitLab. The virtual registry handles upstream authentication. You configure Docker credentials once, at the registry level, and they apply to all pulls.

No per-team CI/CD changes: Point pipelines at your virtual registry. Done. The upstream configuration is centralized.

Gradual adoption: Since images get cached with their full path, you can see in the cache what's being pulled. If someone's pulling library/python:3.11 instead of the hardened variant, you'll know.

Audit trail: The cache shows you exactly which images are in active use. Useful for compliance, useful for understanding what your fleet actually depends on.
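As a sketch of that visibility, you could scan cached Docker Hub paths for images that have a hardened counterpart. The cache-entry field name (`relative_path`) and the image set are assumptions for illustration — check the client's actual response shape:

```python
# Flag Docker Hub pulls where a hardened dhi.io variant is expected instead.
# "relative_path" is an assumed field name on cache entries.

HARDENED_IMAGES = {"python", "node", "golang"}  # images you expect teams to pull hardened

def flag_unhardened(cache_entries):
    """Return cached Docker Hub paths whose hardened variant exists on dhi.io."""
    flagged = []
    for entry in cache_entries:
        path = entry["relative_path"]  # e.g. "library/python:3.11"
        if path.startswith("library/"):
            image = path.removeprefix("library/").split(":")[0]
            if image in HARDENED_IMAGES:
                flagged.append(path)
    return flagged

# Example with stubbed cache entries; real ones would come from
# client.list_cache_entries(upstream_id), as shown later in this post:
entries = [
    {"relative_path": "library/python:3.11"},
    {"relative_path": "library/alpine:latest"},
]
print(flag_unhardened(entries))  # ["library/python:3.11"]
```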

Setting it up

Here's a real setup using the Python client from this demo project.

Create the virtual registry

from virtual_registry_client import VirtualRegistryClient

client = VirtualRegistryClient()

registry = client.create_virtual_registry(
    group_id="785414",  # Your top-level group ID
    name="platform-images",
    description="Cached container images for platform teams"
)

print(f"Registry ID: {registry['id']}")
# You'll need this ID for the pull URL

Add Docker Hub as an upstream

For official images like Alpine, Python, etc.:

docker_upstream = client.create_upstream(
    registry_id=registry['id'],
    url="https://registry-1.docker.io",
    name="Docker Hub",
    cache_validity_hours=24
)

Add Docker Hardened Images (dhi.io)

Docker Hardened Images are hosted on dhi.io, a separate registry that requires authentication:

dhi_upstream = client.create_upstream(
    registry_id=registry['id'],
    url="https://dhi.io",
    name="Docker Hardened Images",
    username="your-docker-username",
    password="your-docker-access-token",
    cache_validity_hours=24
)

Add other upstreams

# MCR for .NET teams
client.create_upstream(
    registry_id=registry['id'],
    url="https://mcr.microsoft.com",
    name="Microsoft Container Registry",
    cache_validity_hours=48
)

# Quay for Red Hat stuff
client.create_upstream(
    registry_id=registry['id'],
    url="https://quay.io",
    name="Quay.io",
    cache_validity_hours=24
)

Update your CI/CD

Here's a .gitlab-ci.yml that pulls through the virtual registry:

variables:
  VIRTUAL_REGISTRY_ID: <your_virtual_registry_ID>

build:
  image: docker:24
  services:
    - docker:24-dind
  before_script:
    # Authenticate to GitLab (which handles upstream auth for you)
    - echo "${CI_JOB_TOKEN}" | docker login -u gitlab-ci-token --password-stdin gitlab.com
  script:
    # All of these go through your single virtual registry

    # Official Docker Hub images (use library/ prefix)
    - docker pull gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/library/alpine:latest

    # Docker Hardened Images from dhi.io (no prefix needed)
    - docker pull gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/python:3.13

    # .NET from MCR
    - docker pull gitlab.com/virtual_registries/container/${VIRTUAL_REGISTRY_ID}/dotnet/sdk:8.0

Image path formats

Different registries use different path conventions:

Registry                          Pull URL example
Docker Hub (official)             .../library/python:3.11-slim
Docker Hardened Images (dhi.io)   .../python:3.13
MCR                               .../dotnet/sdk:8.0
Quay.io                           .../prometheus/prometheus:latest
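If you generate pull commands from tooling, a small helper can encode these conventions. This is a hypothetical convenience function, not part of the client — double-check the prefix rules against your own registry before relying on it:

```python
# Build a virtual-registry pull URL following the path conventions above.
# Official Docker Hub images need the library/ prefix; others pass through.

BASE = "gitlab.com/virtual_registries/container"

def pull_url(registry_id, image, official_docker_hub=False):
    """Return the full pull URL for an image through the virtual registry."""
    if official_docker_hub and "/" not in image:
        image = f"library/{image}"
    return f"{BASE}/{registry_id}/{image}"

print(pull_url("1000016", "python:3.11-slim", official_docker_hub=True))
# gitlab.com/virtual_registries/container/1000016/library/python:3.11-slim
print(pull_url("1000016", "dotnet/sdk:8.0"))
# gitlab.com/virtual_registries/container/1000016/dotnet/sdk:8.0
```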

Verify it's working

After some pulls, check your cache:

upstreams = client.list_registry_upstreams(registry['id'])
for upstream in upstreams:
    entries = client.list_cache_entries(upstream['id'])
    print(f"{upstream['name']}: {len(entries)} cached entries")

What the numbers look like

I ran tests pulling images through the virtual registry:

Metric                           Without cache   With warm cache
Pull time (Alpine)               10.3s           4.2s
Pull time (Python 3.13 DHI)      11.6s           ~4s
Network roundtrips to upstream   Every pull      Cache misses only

The first pull is the same speed (it has to fetch from upstream). Every pull after that, for the cache validity period, comes straight from GitLab's storage. No network hop to Docker Hub, dhi.io, MCR, or wherever the image lives.

For a team running hundreds of pipeline jobs per day, that's hours of cumulative build time saved.
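The arithmetic behind that claim, using the Alpine numbers above — the job count and pulls per job are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope estimate of daily build-time savings from a warm cache.

seconds_saved_per_pull = 10.3 - 4.2  # cold pull vs. warm cache (Alpine)
jobs_per_day = 500                   # assumed pipeline volume
pulls_per_job = 3                    # assumed, e.g. build/test/deploy stages

daily_saving_hours = jobs_per_day * pulls_per_job * seconds_saved_per_pull / 3600
print(f"{daily_saving_hours:.1f} hours/day")  # ≈ 2.5 hours/day
```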

Practical considerations

Here are some considerations to keep in mind:

Cache validity

24 hours is the default. For security-sensitive images where you want patches quickly, consider 12 hours or less:

client.create_upstream(
    registry_id=registry['id'],
    url="https://dhi.io",
    name="Docker Hardened Images",
    username="your-username",
    password="your-token",
    cache_validity_hours=12
)

For stable, infrequently updated images (like specific version tags), longer validity is fine.

Upstream priority

Upstreams are checked in order. If you have images with the same name on different registries, the first matching upstream wins.

Limits

  • Maximum of 20 virtual registries per group
  • Maximum of 20 upstreams per virtual registry

Configuration via UI

You can also configure virtual registries and upstreams directly from the GitLab UI—no API calls required. Navigate to your group's Settings > Packages and registries > Virtual Registry to:

  • Create and manage virtual registries
  • Add, edit, and reorder upstream registries
  • View and manage the cache
  • Monitor which images are being pulled

What's next

We're actively developing:

  • Allow/deny lists: Use regex to control which images can be pulled from specific upstreams.

This is beta software. It works, people are using it in production, but we're still iterating based on feedback.

Share your feedback

If you're a platform engineer dealing with container registry sprawl, I'd like to understand your setup:

  • How many upstream registries are you managing?
  • What's your biggest pain point with the current state?
  • Would something like this help, and if not, what's missing?

Please share your experiences in the Container Virtual Registry feedback issue.
