Overview

Our own daemon, called scrd

Our implementation of a container registry is architecturally different from most managed registries on the market. The registry is powered by a binary we maintain ourselves. We call it scrd, short for ServersCamp Container Registry Daemon. It is a single static Go binary that bundles auth, RBAC, the scanning bridge, lifecycle, audit and anomaly detection in one process. We ship updates on our own cadence; when you ask for a feature in a ticket, it goes into the same backlog as the rest of the platform, with no third-party vendor in the path.

Built on CNCF Distribution v3

scrd is built on top of CNCF Distribution v3, the OCI Distribution reference implementation (originally Docker Distribution, now stewarded under CNCF). It is widely used across the OCI ecosystem, with many major registries (Harbor, GitLab, GitHub, Quay) building on its codebase or protocol. We use it as the OCI protocol layer and storage driver, and track upstream so spec fixes land in scrd without us reinventing the wire format. Everything around it (dedicated-VM topology, per-tenant auth, scanner integration, lifecycle, audit, anomaly engine, operator controls) is ours.

Why we did not just deploy upstream Distribution

Every off-the-shelf managed registry we evaluated assumes a multi-tenant deployment model that conflicts with how we run the rest of the platform. Wrapping that in glue scripts would have meant accepting the architectural mismatch. Maintaining our own daemon lets us fit the registry into the same single-tenant invariants as our managed PostgreSQL and object storage, and lets us ship capabilities in the base product that some other registries split out into upsells. There is no scrd-pro SKU.

Topology

1 registry

A single scrd instance with its own catalog and config

= 1 VM

Dedicated CPU, RAM and replicated NVMe in your VPC

= 1 tenant

No shared blob store, HMAC secret or request scheduler

Two consequences fall out of that

Security envelope = the VM

The same isolation that protects your VPC and your other VMs protects your registry. One customer's compromise, mistake or runaway client cannot reach another customer's blobs or metadata. The blast radius of any single incident is one tenant.

Continuous Trivy scanning, no extra cost

Because the Trivy scanner process runs inside the same VM as the daemon, the CPU cycles for it are already paid for in the tier price. Other registries charge per-image scan or split scanning into a separate SKU. We can include it without a separate line item.

What you operate day-to-day

A walk through the cabinet pages you'll actually live in.

Pricing model

Two pricing axes: the tier price and egress overage above the bundled allowance.

Hard disk cap. Period.

A 100 GB tier is exactly 100 GB. Hitting the cap never shows up as a surprise line on your invoice: the registry applies one of the two policies below. There is no soft per-GB overage, by design.

Two policies when the disk fills

read_only (default): at 90% used, docker push returns 503. Pulls keep working throughout; pushes stay blocked until usage drops back below 85% or you upgrade the tier.

auto_upgrade: the instance is promoted to the next tier automatically. Email + cabinet notification, billing flips to the new hourly rate.
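The read_only policy uses the two thresholds above as hysteresis so the registry doesn't flap around a single cutoff. A minimal Python sketch of that logic (the function name and shape are illustrative, not scrd's internals; the 90%/85% figures are the documented ones):

```python
def read_only_transition(currently_read_only: bool, used_pct: float) -> bool:
    """Hysteresis for the read_only policy: pushes are blocked at 90% disk
    usage and re-enabled only once usage drops back below 85%, so the
    registry does not flip-flop around a single threshold."""
    if not currently_read_only and used_pct >= 90.0:
        return True   # flip to read-only: pushes now return 503
    if currently_read_only and used_pct < 85.0:
        return False  # recovered: pushes allowed again
    return currently_read_only  # no change inside the 85-90% band
```

A registry sitting at 87% used stays in whichever state it was already in, which is exactly the point of the gap between the two thresholds.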

Egress gets cheaper as you scale

Each tier ships with included monthly egress. Anything above is metered at a per-GB rate that drops with the tier - €0.005/GB on the smallest tier, €0.0015/GB on the largest. Honest, not punitive.
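The overage math is a single max-and-multiply. A quick Python sketch using the smallest tier's published numbers (200 GB included, €0.005/GB over):

```python
def egress_cost(total_egress_gb: float, included_gb: float,
                overage_rate: float) -> float:
    """Only egress above the bundled monthly allowance is metered."""
    overage_gb = max(0.0, total_egress_gb - included_gb)
    return round(overage_gb * overage_rate, 2)

# scr-10 example: 350 GB of egress in a month, 200 GB included
# -> 150 GB metered at €0.005/GB -> €0.75
```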

Built on the same stack as your VMs

No separate object-storage backend, no proprietary blob format, no special-cased SLA. Whatever runs your VMs runs your registry.

3× replicated NVMe

Every blob lives on the same SDS pool as the rest of your infrastructure: 3-way replication, single-disk failures handled at the storage layer. No "object durability nines" fine print.

Opt-in incremental backups

Schedule daily/weekly snapshots of the registry VM disk. Billing is by actual occupied space, not the full tier size: a 250 GB tier with 40 GB of data and 7 daily snapshots costs roughly 40 GB plus deltas, not 250 GB × 7. Restore rolls the VM back to the snapshot, blobs and audit log restored together.
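To make the "occupied space, not tier size" billing concrete, here is the arithmetic as a Python sketch. The €0.05/GB-month rate is from the pricing notes below; the per-snapshot delta size is a made-up example, since real deltas depend on your churn:

```python
BACKUP_RATE = 0.05  # EUR per GB-month, from the pricing notes

def backup_cost(occupied_gb: float, daily_delta_gb: float,
                snapshots: int) -> float:
    """Billed on occupied space plus snapshot deltas, not tier size."""
    billable_gb = occupied_gb + daily_delta_gb * snapshots
    return round(billable_gb * BACKUP_RATE, 2)

# 250 GB tier holding 40 GB, 7 daily snapshots with ~1 GB of churn each:
# billed on ~47 GB, not on 250 GB x 7.
```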

99.9% uptime target

Built on top of redundant networking, an HA control plane and 3× replicated storage. Planned daemon upgrades and migrations run through maintenance mode, with the registry returning a 503 and a clear reason header so docker clients can back off and retry.
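Clients can handle maintenance windows with plain exponential backoff against that 503. A minimal Python sketch; the retry budget and delays here are arbitrary choices, and the exact reason-header name is deployment-specific, so this is a pattern rather than a prescribed client:

```python
import time

def pull_with_backoff(do_pull, max_attempts: int = 5,
                      base_delay: float = 1.0, sleep=time.sleep) -> int:
    """Retry an operation while the registry is in maintenance mode.

    `do_pull` returns an HTTP status code; anything other than 503 ends
    the loop. Delays double each attempt (1s, 2s, 4s, ...)."""
    for attempt in range(max_attempts):
        status = do_pull()
        if status != 503:
            return status
        sleep(base_delay * (2 ** attempt))
    return 503
```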

TLS terminated by the daemon

No nginx in front, no Caddy, no proxy layer. The scrd daemon serves HTTPS directly with auto-rotated certificates. One process, one log stream, one place to debug.

A note on single-VM architecture and 99.9% uptime

We target 99.9% uptime, and our topology is single-VM per registry.

In practice, that can mean long periods of uninterrupted operation, and in rare failure cases it can mean downtime measured in tens of minutes.

A registry is also not on the same runtime path as a database:

  • CI pushes and pulls depend on it directly,
  • new deployments depend on it,
  • existing running workloads usually do not query the registry per request (images are already on nodes).

If a registry instance is temporarily unavailable:

  • CI jobs may wait or fail and then retry,
  • new deployments may be delayed,
  • existing running pods continue serving traffic.

If your environment requires stronger continuity guarantees, we can help design around it:

  • pull-through cache against a secondary registry,
  • mirroring critical images,
  • a second scrd instance.

For many teams, this is the right cost/reliability balance for a managed registry.

Every feature, every tier

All capabilities below are enabled on the smallest tier. Tiers differ only in disk size and included egress.

Authentication & access

HMAC-JWT authentication

Token-based auth with key rotation. Instant token revocation on user delete or disable. Rate-limit and lockout on brute-force attempts against /auth/token.

Per-user RBAC

N docker users × M repositories with a read/write access matrix. Service accounts for CI, humans get scoped credentials, no shared root token.
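Conceptually, the N×M matrix is a lookup that denies by default. A Python sketch of that shape (the ACL structure and names here are illustrative, not scrd's actual schema):

```python
# Hypothetical cells of an N-users x M-repos access matrix.
ACL = {
    ("ci-bot", "app"):      {"read", "write"},   # service account for CI
    ("alice",  "app"):      {"read"},            # scoped human credential
    ("alice",  "internal"): {"read", "write"},
}

def allowed(user: str, repo: str, action: str) -> bool:
    """Deny by default: a missing cell in the matrix grants nothing."""
    return action in ACL.get((user, repo), set())
```

The important property is the default: a user/repo pair that was never granted anything gets nothing, with no shared root token to fall back on.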

Public read repos

Per-repo flag for anonymous docker pull. Open-source distribution without a separate hosting hop.

Account lockout

Per-user thresholds on auth failures. Configurable cool-down, full audit trail of every lockout event.

Security & scanning

Trivy vulnerability scanning

Scans run on every push and on a recurring schedule for the latest and recently-pushed tags. Manual rescan available from the cabinet. Results queryable per tag with full CVE detail.

Anomaly detection

Alerts on auth-failure bursts, account lockouts, disk-threshold events and other operational signals. Delivered via webhook and the cabinet notification feed.

Operations

Lifecycle policies

Glob-based retention rules - keep v* 365 days, drop pr-* after 7. keep_last_n per-repo. Immutable tags to block overwrite of release artifacts.
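The retention rules above boil down to glob matching plus an age check. A minimal Python sketch using the examples from this section; the rule schema (pattern/max_age_days) is invented for illustration, not scrd's config format:

```python
from fnmatch import fnmatch

RULES = [
    {"pattern": "v*",    "max_age_days": 365},  # keep releases a year
    {"pattern": "pr-*",  "max_age_days": 7},    # drop PR builds after a week
    {"pattern": "main-*", "max_age_days": 30},
]

def should_delete(tag: str, age_days: int, rules: list[dict]) -> bool:
    """First matching glob rule wins; tags matching no rule are kept."""
    for rule in rules:
        if fnmatch(tag, rule["pattern"]):
            return age_days > rule["max_age_days"]
    return False
```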

Automatic garbage collection

Mark-and-sweep on a cron schedule. Reclaims unreferenced blobs and stale uploads automatically. Manual trigger available when you want it now.
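Mark-and-sweep for a registry is simple at its core: mark every blob some manifest still references, sweep the rest. A toy Python version of that idea (digests shortened for readability; the real pass also handles in-flight uploads):

```python
def sweep(manifests: dict[str, list[str]], blobs: set[str]) -> set[str]:
    """Return the unreferenced blobs that are safe to reclaim.

    `manifests` maps a tag to the blob digests it references; `blobs`
    is everything currently on disk."""
    marked = {digest for layers in manifests.values() for digest in layers}
    return blobs - marked
```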

Maintenance mode

Manual read-only flip with a custom message shown to docker clients. Triggered by you for coordinated upgrades, or by ServersCamp ops during planned migrations.

Audit log, 90 days

Action events stored 90 days. Queryable via API, real-time SSE stream, CSV / NDJSON export for compliance. No "audit add-on tier".

Integrations

OCI Distribution v2

The standard v2 registry API. Works with docker, podman, skopeo, BuildKit, and any other OCI client. No proprietary client required.

Webhooks

HTTP callbacks on push, pull, delete, anomaly and vulnerability events. HMAC-signed payload (Slack/GitHub-style). Wire it to your CI, your alerting, your deploy bot.
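On the receiving end, verification is the standard recompute-and-compare dance. A Python sketch with the stdlib only; the digest algorithm (SHA-256 here) and the header carrying the signature are assumptions to check against your webhook settings:

```python
import hmac
import hashlib

def verify_webhook(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC over the raw request body and compare in
    constant time, so signature checks don't leak timing information."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Always verify against the raw body bytes, before any JSON parsing or re-serialization, or the digests won't match.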

Pull-through cache

Proxy and cache upstream registries (Docker Hub, ghcr.io, quay.io). Bypasses Docker Hub anonymous rate limits (100 pulls / 6h), speeds up CI by serving cached layers locally, keeps builds working when upstream is down.

Insights

Customer dashboard

Storage growth over time, top repos and top users by traffic, push/pull bandwidth, vulnerability summary, cleanup suggestions, an algorithmic insights feed.

Layer dedup analytics

Shows the storage savings from shared base layers across repos. Useful when you're deciding whether to consolidate base images or split them out.

Pick a tier

Ten fixed packages. Storage and included egress scale together. Feature set identical across every row.

Tier      Disk     Egress included   Egress overage   Monthly   Best for
scr-10    10 GB    200 GB            €0.005/GB        €3.02     Hobby projects, personal dev
scr-25    25 GB    750 GB            €0.0045/GB       €5.47     Hobby+, side projects
scr-50    50 GB    1.5 TB            €0.004/GB        €7.49     Small teams, CI artifacts
scr-75    75 GB    2.25 TB           €0.0035/GB       €10.01    Between Light and Standard
scr-100   100 GB   3 TB              €0.003/GB        €12.10    Production for small services
scr-200   200 GB   6 TB              €0.0028/GB       €19.94    Production with headroom
scr-250   250 GB   8 TB              €0.0025/GB       €23.90    Multi-service production
scr-500   500 GB   16 TB             €0.002/GB        €43.63    Heavy CI, frequent rebuilds
scr-750   750 GB   24 TB             €0.0017/GB       €63.36    Large catalogs, many tags
scr-1024  1.0 TB   32 TB             €0.0015/GB       €85.03    Enterprise, long retention

Backups are billed separately (€0.05/GB-month, charged on actual occupied space, not full tier size). Auto-upgrade can promote you across rows automatically if you opt in.

What people use it for

Four patterns we see most often.

CI artifact storage

Every commit produces a tagged image: app:pr-123, app:main-abc1234, app:v1.4.0. Lifecycle policies clean up the noisy ones (pr-* after 7 days, main-* after 30, immutable on v*) without you writing a single cron job.

Private team images

Internal services that should never leave your VPC: base images with your secrets, customer-specific builds, prerelease binaries. RBAC keeps each team scoped, audit log shows who pulled what when.

Open-source distribution

Mark a repo public, point your README at it, your users docker pull anonymously. No Docker Hub rate limits to inherit, no separate hosting story for OSS releases.

Pull-through cache for CI

Configure pull-through caching for upstream registries (Docker Hub, ghcr.io, quay.io). CI runners hit your registry, get cached layers at LAN speed, and keep building when upstream is down or rate-limited. Hub's 100-pull-per-6h anonymous limit becomes irrelevant.

Try it

Smallest tier is €3.02/mo. Spin up a registry, push your first images, see the dashboard.