Private OCI registry on a dedicated VM. Same isolation, same SLA model and the same feature set across all tiers.
Smallest tier starts at €3.02/mo: 10 GB disk, 200 GB included egress, all current features included.
Our implementation of a container registry is architecturally different from most managed registries on the market. The registry is powered by a binary we maintain ourselves. We call it scrd, short for ServersCamp Container Registry Daemon. It is a single static Go binary that bundles auth, RBAC, the scanning bridge, lifecycle, audit and anomaly detection in one process. We ship updates on our own cadence; when you ask for a feature in a ticket, it goes into the same backlog as the rest of the platform, with no third-party vendor in the path.
scrd is built on top of CNCF Distribution v3, the OCI Distribution reference implementation (originally Docker Distribution, now stewarded under CNCF). It is widely used across the OCI ecosystem, with many major registries (Harbor, GitLab, GitHub, Quay) building on its codebase or protocol. We use it as the OCI protocol layer and storage driver, and track upstream so spec fixes land in scrd without us reinventing the wire format. Everything around it (dedicated-VM topology, per-tenant auth, scanner integration, lifecycle, audit, anomaly engine, operator controls) is ours.
Every off-the-shelf managed registry we evaluated assumes a multi-tenant deployment model that conflicts with how we run the rest of the platform. Wrapping that in glue scripts would have meant accepting the architectural mismatch. Maintaining our own daemon lets us fit the registry into the same single-tenant invariants as our managed PostgreSQL and object storage, and lets us ship capabilities in the base product that some other registries split out into upsells. There is no scrd-pro SKU.
- A single scrd instance with its own catalog and config
- Dedicated CPU, RAM and replicated NVMe in your VPC
- No shared blob store, HMAC secret or request scheduler
The same isolation that protects your VPC and your other VMs protects your registry. One customer's compromise, mistake or runaway client cannot reach another customer's blobs or metadata. The blast radius of any single incident is one tenant.
Because the Trivy scanner process runs inside the same VM as the daemon, the CPU cycles for it are already paid for in the tier price. Other registries charge per-image scan or split scanning into a separate SKU. We can include it without a separate line item.
A walk through the cabinet pages you'll actually live in.
Two pricing axes: the tier price and egress overage above the bundled allowance.
A 100 GB tier is exactly 100 GB. Going over the cap does not silently appear on your invoice: the registry applies one of the two policies below. There is no soft per-GB overage, by design.
- `read_only` (default): at 90% disk used, `docker push` returns 503. Pulls keep working until you free space or upgrade. Pushes resume once usage drops below 85%.
- `auto_upgrade`: the instance is promoted to the next tier automatically. Email + cabinet notification, billing flips to the new hourly rate.
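The read_only policy described above behaves like a hysteresis switch: pushes are blocked at 90% usage and re-enabled only once usage falls back below 85%, so a registry hovering near the cap doesn't flap between states. A minimal sketch of that logic (the thresholds come from the text; the class and method names are illustrative, not scrd's actual API):

```python
class CapPolicy:
    """Hysteresis around the disk cap: block pushes at 90%, re-allow below 85%."""

    BLOCK_AT = 0.90    # pushes start returning 503 at this usage
    RECOVER_AT = 0.85  # pushes resume only once usage drops below this

    def __init__(self) -> None:
        self.read_only = False

    def update(self, used_gb: float, tier_gb: float) -> bool:
        """Return True when pushes should currently be rejected with 503."""
        usage = used_gb / tier_gb
        if usage >= self.BLOCK_AT:
            self.read_only = True
        elif usage < self.RECOVER_AT:
            self.read_only = False
        # Between 85% and 90%, keep the previous state (hysteresis band).
        return self.read_only
```

On a 100 GB tier: pushing past 90 GB flips the registry read-only; freeing space down to 86 GB is not enough, writes stay blocked until usage drops below 85 GB.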
Each tier ships with included monthly egress. Anything above is metered at a per-GB rate that drops with the tier: €0.005/GB on the smallest tier, €0.0015/GB on the largest. Honest, not punitive.
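As a worked example, using the scr-10 numbers from the pricing table below (200 GB included, €0.005/GB above that), the monthly egress charge comes out like this:

```python
def egress_overage_eur(used_gb: float, included_gb: float, rate_eur_per_gb: float) -> float:
    """Egress above the bundled allowance is metered; everything under it is free."""
    overage_gb = max(0.0, used_gb - included_gb)
    return round(overage_gb * rate_eur_per_gb, 2)

# scr-10 tier: 200 GB included, €0.005/GB above that.
print(egress_overage_eur(150, 200, 0.005))  # under the allowance: 0.0
print(egress_overage_eur(450, 200, 0.005))  # 250 GB over: 1.25
```

So a month with 450 GB of pulls on the smallest tier adds €1.25, not a surprise line item.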
No separate object-storage backend, no proprietary blob format, no special-cased SLA. Whatever runs your VMs runs your registry.
Every blob lives on the same SDS pool as the rest of your infrastructure: 3-way replication, single-disk failures handled at the storage layer. No "object durability nines" fine print.
Schedule daily/weekly snapshots of the registry VM disk. Billing is by actual occupied space, not the full tier size: a 250 GB tier with 40 GB of data and 7 daily snapshots costs roughly 40 GB plus deltas, not 250 GB × 7. Restore rolls the VM back to the snapshot, blobs and audit log restored together.
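The pricing section quotes €0.05/GB-month for backups, charged on occupied space. A worked example of the billing model described above; the per-snapshot delta size is an illustrative assumption, since actual deltas depend on your image churn:

```python
def snapshot_cost_eur(occupied_gb: float, delta_gb_per_snapshot: float,
                      snapshots: int, rate: float = 0.05) -> float:
    """Bill the occupied space once plus an incremental delta per snapshot,
    rather than snapshots * full tier size."""
    billed_gb = occupied_gb + delta_gb_per_snapshot * snapshots
    return round(billed_gb * rate, 2)

# 250 GB tier, 40 GB occupied, 7 daily snapshots with ~1 GB of churn each:
print(snapshot_cost_eur(40, 1.0, 7))  # ~47 GB billed: 2.35
# versus naive full-size billing: 250 GB * 7 * €0.05 = €87.50
```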
Built on top of redundant networking, an HA control plane and 3× replicated storage. Planned daemon upgrades and migrations run through maintenance mode, with the registry returning a 503 and a clear reason header so docker clients can back off and retry.
No nginx in front, no Caddy, no proxy layer. The scrd daemon serves HTTPS directly with auto-rotated certificates. One process, one log stream, one place to debug.
We target 99.9% uptime, and our topology is single-VM per registry.
In practice, that can mean long periods of uninterrupted operation, and in rare failure cases it can mean downtime measured in tens of minutes.
A registry is also not on the same runtime path as a database:
If a registry instance is temporarily unavailable:
If your environment requires stronger continuity guarantees, we can help design around it:
For many teams, this is the right cost/reliability balance for a managed registry.
All capabilities below are enabled on the smallest tier. Tiers differ only in disk size and included egress.
Token-based auth with key rotation. Instant token revocation on user delete or disable. Rate-limit and lockout on brute-force attempts against /auth/token.
N docker users × M repositories with a read/write access matrix. Service accounts for CI, humans get scoped credentials, no shared root token.
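The N × M access matrix can be pictured as a simple lookup from (user, repository) to granted rights. This sketch is illustrative only; the permission names and data model are assumptions, not scrd's internal representation:

```python
from enum import Flag, auto

class Access(Flag):
    NONE = 0
    READ = auto()   # docker pull
    WRITE = auto()  # docker push

# (user, repository) -> granted access; absence means no access at all.
MATRIX: dict[tuple[str, str], Access] = {
    ("ci-bot", "app/backend"): Access.READ | Access.WRITE,  # service account for CI
    ("alice", "app/backend"): Access.READ,                  # scoped human credential
}

def allowed(user: str, repo: str, needed: Access) -> bool:
    """True if the user's grant for this repo covers the requested access."""
    return needed in MATRIX.get((user, repo), Access.NONE)

print(allowed("ci-bot", "app/backend", Access.WRITE))  # True
print(allowed("alice", "app/backend", Access.WRITE))   # False: read-only grant
print(allowed("alice", "infra/base", Access.READ))     # False: no grant at all
```

The point of the model: there is no shared root token anywhere in the matrix, so revoking one credential never touches another.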
Per-repo flag for anonymous docker pull. Open-source distribution without a separate hosting hop.
Per-user thresholds on auth failures. Configurable cool-down, full audit trail of every lockout event.
Scans run on every push and on a recurring schedule for the latest and recently-pushed tags. Manual rescan available from the cabinet. Results queryable per tag with full CVE detail.
Alerts on auth-failure bursts, account lockouts, disk-threshold events and other operational signals. Delivered via webhook and the cabinet notification feed.
Glob-based retention rules: keep `v*` for 365 days, drop `pr-*` after 7. Per-repo `keep_last_n`. Immutable tags to block overwrite of release artifacts.
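Glob retention of this kind can be sketched with standard glob matching; the rule format here is illustrative, not scrd's actual policy syntax:

```python
from fnmatch import fnmatch

# Each rule: (tag glob, max age in days); first match wins. Illustrative format.
RULES = [
    ("v*", 365),     # release tags kept for a year
    ("pr-*", 7),     # PR builds dropped after a week
    ("main-*", 30),  # mainline builds kept a month
]

def should_delete(tag: str, age_days: int) -> bool:
    """True if the first matching rule says this tag is past its retention window."""
    for pattern, max_age in RULES:
        if fnmatch(tag, pattern):
            return age_days > max_age
    return False  # tags matching no rule are kept

print(should_delete("pr-123", 10))    # True: PR tag older than 7 days
print(should_delete("v1.4.0", 10))    # False: releases keep for 365 days
print(should_delete("latest", 1000))  # False: no rule matches, keep
```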
Mark-and-sweep on a cron schedule. Reclaims unreferenced blobs and stale uploads automatically. Manual trigger available when you want it now.
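Registry garbage collection is classically mark-and-sweep: walk every manifest, mark the blobs (layers and configs) it references, then reclaim everything unmarked. A toy sketch of that shape, with the data model simplified to digests; scrd's internals will differ:

```python
def garbage_collect(manifests: dict[str, list[str]], blobs: set[str]) -> set[str]:
    """Mark phase: collect every blob digest referenced by some manifest.
    Sweep phase: return the unreferenced blobs that can be reclaimed."""
    marked: set[str] = set()
    for referenced in manifests.values():
        marked.update(referenced)  # mark
    return blobs - marked          # sweep: everything nothing points at

manifests = {"app:v1": ["sha256:aaa", "sha256:bbb"],
             "app:v2": ["sha256:bbb", "sha256:ccc"]}
blobs = {"sha256:aaa", "sha256:bbb", "sha256:ccc", "sha256:dead"}
print(garbage_collect(manifests, blobs))  # {'sha256:dead'}
```

Note that `sha256:bbb` is shared by both tags but marked once; deleting `app:v1` alone would not make it collectable.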
Manual read-only flip with a custom message shown to docker clients. Triggered by you for coordinated upgrades, or by ServersCamp ops during planned migrations.
Action events stored 90 days. Queryable via API, real-time SSE stream, CSV / NDJSON export for compliance. No "audit add-on tier".
Standard protocol. Works with docker, podman, skopeo, BuildKit, and any OCI client. No proprietary client required.
HTTP callbacks on push, pull, delete, anomaly and vulnerability events. HMAC-signed payload (Slack/GitHub-style). Wire it to your CI, your alerting, your deploy bot.
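On the receiving side, a Slack/GitHub-style HMAC signature is verified by recomputing the digest over the raw request body with the shared secret and comparing in constant time. The secret format and hex encoding here are assumptions for illustration; check the webhook settings for the exact scheme:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Recompute HMAC-SHA256 over the raw body; compare in constant time."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

secret = b"whsec_example"  # shared secret from the cabinet (illustrative value)
body = b'{"event":"push","repo":"app/backend"}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()  # what the sender attaches

print(verify_signature(secret, body, sig))         # True
print(verify_signature(secret, body + b" ", sig))  # False: body was tampered with
```

Always verify before acting on the payload, and always hash the raw bytes, not a re-serialized JSON object.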
Proxy and cache upstream registries (Docker Hub, ghcr.io, quay.io). Bypasses Docker Hub anonymous rate limits (100 pulls / 6h), speeds up CI by serving cached layers locally, keeps builds working when upstream is down.
Storage growth over time, top repos and top users by traffic, push/pull bandwidth, vulnerability summary, cleanup suggestions, an algorithmic insights feed.
Shows the storage savings from shared base layers across repos. Useful when you're deciding whether to consolidate base images or split them out.
Ten fixed packages. Storage and included egress scale together. Feature set identical across every row.
| Tier | Disk | Egress included | Egress overage | Monthly | Best for |
|---|---|---|---|---|---|
| scr-10 | 10 GB | 200 GB | €0.005/GB | €3.02 | Hobby projects, personal dev |
| scr-25 | 25 GB | 750 GB | €0.0045/GB | €5.47 | Hobby+, side projects |
| scr-50 | 50 GB | 1.5 TB | €0.004/GB | €7.49 | Small teams, CI artifacts |
| scr-75 | 75 GB | 2.25 TB | €0.0035/GB | €10.01 | Between Light and Standard |
| scr-100 | 100 GB | 3 TB | €0.003/GB | €12.10 | Production for small services |
| scr-200 | 200 GB | 6 TB | €0.0028/GB | €19.94 | Production with headroom |
| scr-250 | 250 GB | 8 TB | €0.0025/GB | €23.90 | Multi-service production |
| scr-500 | 500 GB | 16 TB | €0.002/GB | €43.63 | Heavy CI, frequent rebuilds |
| scr-750 | 750 GB | 24 TB | €0.0017/GB | €63.36 | Large catalogs, many tags |
| scr-1024 | 1.0 TB | 32 TB | €0.0015/GB | €85.03 | Enterprise, long retention |
Backups are billed separately (€0.05/GB-month, charged on actual occupied space, not full tier size). Auto-upgrade can promote you across rows automatically if you opt in.
Four patterns we see most often.
Every commit produces a tagged image: `app:pr-123`, `app:main-abc1234`, `app:v1.4.0`. Lifecycle policies clean up the noisy ones (`pr-*` after 7 days, `main-*` after 30, immutable on `v*`) without you writing a single cron job.
Internal services that should never leave your VPC: base images with your secrets, customer-specific builds, prerelease binaries. RBAC keeps each team scoped, audit log shows who pulled what when.
Mark a repo public, point your README at it, your users docker pull anonymously. No Docker Hub rate limits to inherit, no separate hosting story for OSS releases.
Configure pull-through caching for upstream registries (Docker Hub, ghcr.io, quay.io). CI runners hit your registry, get cached layers at LAN speed, and keep building when upstream is down or rate-limited. Hub's 100-pull-per-6h anonymous limit becomes irrelevant.
Smallest tier is €3.02/mo. Spin up a registry, push your first images, see the dashboard.