Project Expo

16-Node OpenStack Cluster Build

Designed and delivered a production OpenStack environment using OpenStack-Ansible: 3 controller nodes running a containerized control plane, 13 compute nodes, and a 7-node Ceph backend providing 96 TB of distributed storage, all connected through a 4-node Extreme switch stack.

16 Nodes

3 controller + 13 compute nodes supporting business workloads.

96 TB Ceph

7-node storage backend with replication for resilience and scale.

Automation First

OpenStack-Ansible roles and inventory controls for repeatable operations.
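OpenStack-Ansible drives deployment from a single inventory file, `openstack_user_config.yml`. A minimal sketch of how the 3 controller / 13 compute / 7-node Ceph layout could be modeled is below; the hostnames, IP addresses, and subnets are hypothetical illustrations, not values from this build.

```yaml
# openstack_user_config.yml -- hedged sketch; hostnames and addresses
# are hypothetical examples, not the real cluster's values.
cidr_networks:
  container: 172.29.236.0/22   # management plane
  storage: 172.29.244.0/22     # Ceph/storage plane
  tunnel: 172.29.240.0/22      # tenant overlay plane

# 3-node controller quorum hosting the containerized control plane
shared-infra_hosts:
  ctrl-1: { ip: 172.29.236.11 }
  ctrl-2: { ip: 172.29.236.12 }
  ctrl-3: { ip: 172.29.236.13 }

# 13 hypervisors (comp-3 .. comp-12 elided for brevity)
compute_hosts:
  comp-1: { ip: 172.29.236.21 }
  comp-2: { ip: 172.29.236.22 }
  comp-13: { ip: 172.29.236.33 }

# 7-node Ceph backend (ceph-2 .. ceph-6 elided)
ceph-mon_hosts:
  ceph-1: { ip: 172.29.236.41 }
ceph-osd_hosts:
  ceph-1: { ip: 172.29.236.41 }
  ceph-7: { ip: 172.29.236.47 }
```

Because the inventory is declarative, re-running the playbooks against it converges the cluster to the same state, which is what makes operations repeatable.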

Physical / Network Topology

The physical design centered on segmented traffic planes (management, storage, and tenant/data) across a 4-node Extreme switch stack, keeping the control plane reliable and Ceph throughput stable under business load.

[Diagram: 16-node OpenStack physical topology. A 4-node Extreme switch stack carries segmented VLANs (mgmt | storage | tenant/data) connecting the controller cluster (ctrl-1 to ctrl-3, containerized control-plane services), the compute pool (13 nova/neutron compute nodes), and the 7-node Ceph backend (96 TB distributed storage). Traffic path: users/apps -> API/control plane -> compute -> Ceph block/object storage.]
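The three traffic planes in the topology map onto OpenStack-Ansible's `provider_networks` definitions, which bind each host bridge to the services that use it. A hedged sketch follows; bridge names and the VLAN range are illustrative assumptions in the style of the upstream OSA examples, not this cluster's actual values.

```yaml
# global_overrides snippet of openstack_user_config.yml -- hedged sketch;
# bridge names and VLAN range are assumptions, not the real values.
global_overrides:
  provider_networks:
    - network:                      # management plane
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
    - network:                      # storage plane (Ceph traffic)
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_volume
          - nova_compute
    - network:                      # tenant/data plane
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        type: "vlan"
        range: "101:200"            # hypothetical tenant VLAN range
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
```

Binding the storage bridge only to the storage-consuming service groups is what keeps Ceph replication traffic off the management and tenant planes.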

Deployment and Service Flow

OpenStack-Ansible provided an idempotent deployment workflow from inventory modeling to cluster rollout, with consistent service placement across controller, compute, and Ceph nodes.

[Diagram: OpenStack-Ansible deployment flow. Inventory, roles, and vars feed bootstrap and deploy stages that place containers and services: the controller plane (keystone/nova/neutron/glance) enables business ops and tenants, 13 compute nodes handle VM scheduling and network attachments, and the 7-node Ceph cluster provides a 96 TB block/object storage pool. Operational outcomes: repeatable deployments, resilient services, business-ready capacity. Outcome: OpenStack infrastructure that scaled internal services and supported day-to-day business operations.]
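The rollout follows the standard OpenStack-Ansible playbook sequence, sketched below. Paths and playbook names follow upstream OSA conventions and can vary by release; this is a procedural outline, not a transcript of the actual run.

```shell
# Hedged sketch of the standard OpenStack-Ansible rollout sequence;
# paths/playbook names follow upstream conventions and may differ by release.
cd /opt/openstack-ansible
scripts/bootstrap-ansible.sh                # install Ansible and the OSA roles

cd playbooks
openstack-ansible setup-hosts.yml           # prepare hosts, build LXC containers
openstack-ansible setup-infrastructure.yml  # galera, rabbitmq, haproxy, memcached
openstack-ansible setup-openstack.yml       # keystone, glance, nova, neutron, ...
```

Each playbook is idempotent, so the same sequence serves both initial rollout and later convergence after config changes.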

Execution Notes

Control Plane Design

Built a 3-node controller quorum and distributed containerized control-plane services for API availability, scheduler resilience, and easier operational upgrades.

Compute + Storage Balance

Matched 13 compute nodes with a 7-node Ceph backend to keep virtualization throughput and storage durability aligned as tenant workloads expanded.
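Replication policy is where storage durability and usable capacity trade off. A hedged sketch of the Ceph pool settings such a design typically pins down is below; the exact override mechanism depends on how OSA integrates ceph-ansible in the deployed release, and the 3x replication factor is an assumption (under it, 96 TB raw across 7 nodes yields roughly 32 TB usable).

```yaml
# user_variables.yml -- hedged sketch; the override key depends on the
# ceph-ansible version in use, and 3x replication is an assumption.
# Capacity under that assumption: 96 TB raw / 3 replicas = ~32 TB usable.
ceph_conf_overrides:
  global:
    osd_pool_default_size: 3      # three replicas per object
    osd_pool_default_min_size: 2  # stay writable with one replica down
```

`min_size: 2` lets pools keep serving I/O through a single-node failure while still refusing writes that would leave only one copy.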

Business Enablement

Used OpenStack-Ansible automation to reduce deployment variance, improve recovery speed, and deliver a stable private-cloud foundation for internal teams.