Infrastructure update: our hypervisor fleet (Q3 2025)
We’ve standardized on HPE ProLiant Gen10 Plus nodes with Intel Xeon Scalable Gold CPUs (≈3 GHz base), 768 GB of ECC RAM per host, redundant 25/40 GbE networking with dual-port NICs cabled to separate switches (LACP), and enterprise SSD/NVMe storage tiers. We cap per-host memory utilization at ~60% and keep reserve hypervisors for headroom.
Specs at a glance
- Platform: HPE ProLiant Gen10 Plus
- CPU: Intel Xeon Scalable Gold, base clocks around 3 GHz
- Memory per host: 768 GB ECC
- Network: 25/40 GbE, dual-port NICs cabled to separate ToR switches (active LACP)
- Storage (local): data-center-grade SATA/SAS SSD + PCIe 4.0 NVMe
- Storage (shared): NVMe-backed Ceph for high-IO workloads
- Backups: dedicated backup storage managed by Proxmox Backup Server
- RAID/Controllers: all SATA/SAS/NVMe SSD arrays on hardware RAID controllers with cache
- Capacity policy: hypervisor memory utilization capped at ~60%, plus reserve (idle) hypervisors for failover
What we run
We operate KVM on Proxmox VE with staged updates and continuous monitoring. Each host follows the standardized 2025 hardware class: HPE Gen10 Plus, Intel Xeon Scalable Gold (~3 GHz), 768 GB ECC RAM. This gives us predictable performance under sustained load and simplifies capacity planning.
Where we run
All our hardware is hosted in Tier III–class European data centers, inside dedicated private cages operated exclusively by our team. Romania and Germany are live. France will follow the same standard at launch.
- Private cages only. Locked, video-surveilled enclosures with badge + biometric access. Entry is limited to ServersCamp engineers; visitors are escorted.
- Concurrently maintainable (Tier III). Dual power paths (A+B) per rack, UPS and generator backup, N+1 cooling. Planned maintenance does not require downtime.
- Redundant network fabric. Diverse fiber paths to the building core; each node uplinks to separate top-of-rack switches bonded via LACP.
- 24/7 security & monitoring. On-site guards, CCTV, and facility NOC; our own monitoring covers power, environment, and links inside each cage.
- Fire protection. Early smoke detection and clean-agent fire suppression designed for live equipment.
- Hardware handling. Strict chain-of-custody for deliveries and RMAs; assets are tagged, access-logged, and sanitized (cryptographic wipe) on decommission or replacement.
This setup gives us the physical security and resilience you expect in production: predictable power, cooling, and connectivity—with controlled access at every layer.
Network & availability
Every node has dual-port NICs, each port uplinked to a different top-of-rack switch. Links are bonded via LACP; the access network runs at 25/40 GbE depending on rack. All switches are deployed in redundant pairs to avoid single points of failure.
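On a Proxmox VE node, a bond of this shape is typically declared in `/etc/network/interfaces`. The sketch below is illustrative only — interface names, the bridge name, and the documentation-range addresses are placeholders, not our production configuration:

```
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1    # one port per ToR switch (example NIC names)
    bond-mode 802.3ad                  # LACP
    bond-xmit-hash-policy layer3+4
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10/24            # RFC 5737 documentation address
    gateway 203.0.113.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

With `802.3ad` mode, both switch ports must be members of a matching LACP port group; layer3+4 hashing spreads flows across both links.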
Storage tiers
- Local SSD (SATA/SAS) on hardware RAID controllers with cache for balanced throughput and reliability.
- Local NVMe (PCIe 4.0) for ultra-low latency workloads on a single host.
- Shared storage: NVMe-backed Ceph clusters for distributed workloads and fast failover.
- Backups: Proxmox Backup Server with separate backup storage; retention and schedules according to plan.
Capacity & stability policy
To keep headroom for bursts and maintenance, we intentionally cap memory utilization per hypervisor at ~60% and maintain reserve hypervisors. This policy reduces noisy-neighbor effects and enables rapid rescheduling of guests during hardware maintenance or after a failure.
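The arithmetic behind this policy can be sketched as follows. Only the 768 GB host size and the ~60% cap come from this post; the host counts and helper names are hypothetical:

```python
# Illustrative capacity math for the ~60% memory cap and reserve-host policy.
# Only HOST_RAM_GB and MEM_CAP come from the post; everything else is an example.

HOST_RAM_GB = 768   # per-host ECC RAM (from the spec sheet)
MEM_CAP = 0.60      # target utilization ceiling per hypervisor


def schedulable_ram_gb(active_hosts: int) -> float:
    """RAM (GB) the scheduler may hand to VMs across all active hosts."""
    return active_hosts * HOST_RAM_GB * MEM_CAP


def survives_host_loss(active_hosts: int, reserve_hosts: int = 1) -> bool:
    """After losing one active host, can the remaining hosts plus the idle
    reserves re-host its VMs without any hypervisor exceeding the cap?"""
    used = schedulable_ram_gb(active_hosts)  # worst case: fully booked
    remaining = (active_hosts - 1 + reserve_hosts) * HOST_RAM_GB * MEM_CAP
    return remaining >= used


print(schedulable_ram_gb(10))   # 4608.0 GB schedulable across ten active hosts
print(survives_host_loss(10))   # True with one idle reserve host
```

With no reserve host, a fully booked ten-node cluster could not re-home a failed host's VMs under the cap — which is exactly why the reserves stay idle.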
By region
- RO (Romania): production clusters on the 2025 class, optimized for low-latency I/O.
- DE (Germany): active refresh to 2025 class; same networking and storage policies.
- FR (France, planned): rollout in progress with the same standards from day one.
Methodology & results
We validate each class with repeatable tests (fio for latency/IOPS, iperf for throughput, sustained CPU load). We’ll publish representative numbers for the 2025 class and update them after major refreshes.
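As a sketch of what a repeatable fio latency test can look like, the job file below is illustrative only — it is not our published test profile, and the device path is an example:

```
; randread-latency.fio -- illustrative job, not the exact validation profile
[global]
ioengine=libaio
direct=1          ; bypass page cache so results reflect the device
time_based=1
runtime=60
group_reporting=1

[randread-4k]
filename=/dev/nvme0n1   ; example device; reads only, but verify before running
rw=randread
bs=4k
iodepth=32
numjobs=4
```

Pinning the engine, block size, queue depth, and runtime is what makes runs comparable across hardware classes and refreshes.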
What we don’t publish
For security reasons we don’t share hostnames, IP ranges, serial numbers, exact firmware/BMC details, or photos that reveal identifiers. If you need specifics for compliance, contact us privately.
Ready to proceed?
Explore our Cloud Servers and tell us about your workload — we’ll map you to the right class and outline next steps.