Commit 7e797c9

Committed by: apeirora-service-user[bot]

Release 2.5.0

Co-authored-by: Andreas Schlosser <andreas.schlosser@sap.com>
Co-authored-by: Maximilian Lenkeit <maximilian.lenkeit@sap.com>
Co-authored-by: hyperspace-insights[bot] <106787+hyperspace-insights[bot]@users.noreply.github.tools.sap>
Co-authored-by: Vasu Chandrasekhara <vasu.chandrasekhara@sap.com>
Co-authored-by: Vedran Lerenc <vlerenc@gmail.com>
Co-authored-by: apeirora-service-user[bot] <138167+apeirora-service-user[bot]@users.noreply.github.tools.sap>
Co-authored-by: ospo-renovate[bot] <40221+ospo-renovate[bot]@users.noreply.github.tools.sap>

1 parent: e468190

File tree: 27 files changed, +995 additions, −151 deletions

.github/workflows/gh-pages.yml

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ jobs:
     name: Build VitePress site
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v6

.vitepress/components/Project.vue

Lines changed: 2 additions & 2 deletions

@@ -33,10 +33,10 @@ onMounted(() => {
 
   const project = projects[projectName.trim()] || projects[projectName.toLowerCase().trim()];
   if (!project) {
-    throw new Error("Project '" + project + "' does not exist!");
+    throw new Error("Project '" + projectName + "' does not exist!");
   }
 
-  projectText.value = projectName
+  projectText.value = project.name
   projectUrl.value = project.url
   descriptionText.value = project.description
   iconSrc.value = project.icon ?? undefined

.vitepress/theme/docs-sidebar.ts

Lines changed: 13 additions & 0 deletions

@@ -22,6 +22,12 @@ type SidebarItemAugmented = SidebarItem & {
   __frontmatter?: any
 }
 
+const topLevelFolderSequenceLowerCase = [
+  'showroom',
+  'best-practices',
+  'resources'
+]
+
 const sortByFrontmatterSidebarPosition = (items: SidebarItemAugmented[]) => {
   const augmentedItems = items.map(it => {
     if (it.link) {
@@ -38,6 +44,13 @@ const sortByFrontmatterSidebarPosition = (items: SidebarItemAugmented[]) => {
     if (it.items) {
       it.items = sortByFrontmatterSidebarPosition(it.items)
     }
+
+    // manual sequence corrections
+    if (topLevelFolderSequenceLowerCase.includes(it.text?.toLowerCase() || '')) {
+      it.__frontmatter = it.__frontmatter || {}
+      it.__frontmatter.sidebar_position = topLevelFolderSequenceLowerCase.indexOf(it.text.toLowerCase())
+    }
+
     if (it.text?.toLowerCase() === 'best-practices') {
       it.text = 'Best Practices'
     }

apeiro-projects.json

Lines changed: 36 additions & 0 deletions

@@ -46,5 +46,41 @@
     "url": "https://openmfp.org",
     "description": "The Open Micro Front End Platform (OpenMFP) brings together micro front ends and APIs into a cohesive platform, allowing teams to contribute components while maintaining their independence.",
     "icon": "/img/projects/openmfp.png"
+  },
+  "gardenlinux": {
+    "name": "Garden Linux",
+    "url": "https://github.com/gardenlinux",
+    "description": "Garden Linux is a Debian GNU/Linux derivative that aims to provide small, auditable Linux images for most cloud providers (e.g. AWS, Azure, GCP) and bare-metal machines.",
+    "icon": "/img/projects/gardenlinux.svg"
+  },
+  "platformmesh": {
+    "name": "Platform Mesh",
+    "url": "https://platform-mesh.io",
+    "description": "The Platform Mesh is the main Platform API for users and technical services to order and orchestrate capabilities attached to the environment. Its guiding and design principle is inherited from Kubernetes's declarative API approach with its digital twin manifests, the Kubernetes Resource Model (KRM). It utilizes and refines the upstream project KCP for its purpose.",
+    "icon": "/img/projects/platformmesh.svg"
+  },
+  "ord": {
+    "name": "Open Resource Discovery",
+    "url": "https://open-resource-discovery.github.io/specification/",
+    "description": "ORD is an open protocol for the decentralized publishing and discovery of application and service metadata. It provides a structured schema for metadata such as endpoints, capabilities, documentation links, and ownership details, ensuring that application resources like APIs, events, data products, and AI agents can be discovered, understood, and integrated consistently across different systems and marketplaces.",
+    "icon": "/img/projects/ord.svg"
+  },
+  "konfidence": {
+    "name": "Konfidence",
+    "url": "https://konfidence.cloud",
+    "description": "Konfidence is an open-source software delivery framework. It ensures that only tested and approved versions reach production, addressing a common challenge in complex IT landscapes.",
+    "icon": "/img/projects/konfidence.svg"
+  },
+  "greenhouse": {
+    "name": "Greenhouse",
+    "url": "https://github.com/cloudoperators/greenhouse",
+    "description": "Greenhouse is a Kubernetes-based day 2 operations platform focused on providing a set of opinionated tools and operational processes for managing cloud native infrastructure.",
+    "icon": "/img/projects/greenhouse.svg"
+  },
+  "openmcp": {
+    "name": "Open Managed Control Plane",
+    "url": "https://github.com/openmcp-project",
+    "description": "OpenMCP is a managed Infrastructure-as-Data orchestration layer, designed to streamline and automate the management of cloud resources and corresponding services. It coordinates and manages various Infrastructure-as-Data services to support the automation of workflows such as provisioning and scaling of workloads.",
+    "icon": "/img/projects/omcp.svg"
   }
 }

changelog.md

Lines changed: 10 additions & 0 deletions

@@ -2,6 +2,16 @@
 title: Changelog
 ---
 
+## v2.5.0 (2025-12-22)
+
+**New content**
+
+- [Showroom Hardware Recommendations](./docs/showroom/hardware-recommendations/index.md) - hardware recommendations for setting up an Apeiro-based cloud infrastructure
+
+**Updated content**
+
+- [Architecture Overview](./docs/architecture/index.md#_8ra-and-the-ipcei-cis-reference-architecture) - mapping between IPCEI-CIS Reference Architecture layers and Apeiro components
+
 ## v2.4.0 (2025-11-12)
 
 **New content**

docs/architecture/index.md

Lines changed: 35 additions & 0 deletions

@@ -57,3 +57,38 @@ Apeiro conceptually pursues a _declarative approach_ across its components, just
 
 [^cncf-landscape]: CNCF Cloud Native Landscape, see https://landscape.cncf.io
 [^kubeception]: see [Hosted Control Planes](./../best-practices/multi-cluster-federation/hosted-control-planes.md)
+
+## 8ra and the IPCEI-CIS Reference Architecture
+
+The Apeiro reference architecture is developed as part of the [8ra](https://www.8ra.com) and [IPCEI-CIS](https://www.8ra.com/ipcei-cis/) initiative.
+IPCEI-CIS published an overall [reference architecture](https://www.8ra.com/resources/) that provides the framework for all IPCEI-CIS projects and partners to describe their specific contributions to an overall cloud-edge infrastructure.
+The Apeiro reference architecture and its components fit well into the holistic IPCEI-CIS architecture and the structures, layers, and domains prescribed in this central document.
+<!-- add reference to https://landscape.apeirora.eu -->
+
+<ApeiroFigure src="/architecture/apeiro-icra.png"
+  alt="The Apeiro components mapped to the IPCEI-CIS Reference Architecture"
+  caption="Mapping the Apeiro components to the IPCEI-CIS Reference Architecture"
+  width="100%"/>
+
+These Apeiro components are part of the **Virtualization** layer:
+- <Project name="gardenlinux">Garden Linux</Project>
+- <Project>CobaltCore</Project>
+- <Project>IronCore</Project>
+
+These Apeiro components are part of the **Cloud Edge Platform** layer:
+- <Project>Gardener</Project>
+
+These Apeiro components are part of the **Service Orchestration** layer:
+- <Project name="platformmesh">Platform Mesh</Project>
+
+These Apeiro components are part of the **Data** layer:
+- <Project name="ord">Open Resource Discovery</Project>
+
+These Apeiro components are part of the **Application** layer:
+- <Project>Konfidence</Project>
+
+These Apeiro components are part of the **Management** domain:
+- <Project>Greenhouse</Project>
+- <Project name="openmfp">Open Micro Frontend Platform</Project>
+- <Project name="openmcp">Open Managed Control Plane</Project>
+- <Project name="ocm">Open Component Model</Project>

docs/getting-started/index.md

Lines changed: 1 addition & 1 deletion

@@ -42,6 +42,6 @@ Explore the following Apeiro components for existing workloads:
 
 ## Next Steps
 
-- **Assemble Apeiro** - the [Showroom](./../showroom/index.md) demonstrates how Apeiro assembles the individual components as a working environment.
+- **Assemble Apeiro** - the [Showroom](./../showroom/scenarios.md) demonstrates how Apeiro assembles the individual components as a working environment.
 - **Adapt Apeiro** - most components of Apeiro are extensible and adjustable, you can adapt them to your own infrastructure or environment constraints.
 - **Pick and choose** - Apeiro is a toolkit and you can pick-and-choose the components that provide the most value for your use case.

docs/index.md

Lines changed: 0 additions & 1 deletion

@@ -83,7 +83,6 @@ const steps = [
     step: "5",
     name: "Platform Mesh",
     url: "./best-practices/platform-mesh",
-    main: true,
     technologies: [],
   },
   {
New file

Lines changed: 79 additions & 0 deletions

@@ -0,0 +1,79 @@
---
sidebar_position: 1
title: Control Plane Hardware
---

The minimal control plane footprint is designed for reliability and cost efficiency.
This control plane sizing does not include the capacity required to run other, optional Apeiro services from the COS layer and above.
Depending on the complete target scenario, additional capacity needs to be reserved to run Gardener and other services on the control plane.
This page focuses on the recommended size of the control plane for a plain installation of the BOS layer and the bare metal automation that manages the workload-carrying infrastructure in the data plane.

Additional sizing optimization for minimal-footprint installations might be achieved by merging the control and data plane into a single rack.
Our focus here, however, is a sustainable setup that can also be scaled out during productive operations as resource demand increases.

## Bare Metal Hardware Specifications

The Control Plane of a pure bare metal setup that focuses on managing hardware resources, without the additional IaaS capabilities, requires a single rack for the complete stack.

The minimal setup for a bare metal offering includes:
- Management Nodes: A minimum of three servers to ensure high availability and redundancy for orchestration, monitoring, and API endpoints.
- Network Switch: One management switch interconnecting control and data plane components, supporting both internal and external traffic.
- Compute Nodes: Two or more servers dedicated to workload execution and storage, sized according to anticipated resource demand.
- Storage: A shared storage system accessible by all compute nodes for persistent data and VM images.
- Firewall: At least one firewall for basic network segmentation and security between the control plane, the data plane, and external connections.
- Console/Management Access: One console for out-of-band management and troubleshooting.

This list presents the essential hardware components for a minimal yet scalable single-rack deployment, combining both control plane and data plane functions for the Apeiro cloud infrastructure.
Please refer to the subsequent sections for more detail on the respective components.

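The minimums above can be captured as a quick planning sanity check. This is a hypothetical sketch, assuming a simple component-count model; the component names and counts merely restate the list and are not a fixed schema.

```python
# Sketch: validate a planned single-rack bill of materials against the
# minimal bare metal setup described above. Component names and the
# example rack below are illustrative assumptions, not a fixed schema.

MINIMUMS = {
    "management_nodes": 3,   # HA for orchestration, monitoring, API endpoints
    "management_switch": 1,  # interconnects control and data plane components
    "compute_nodes": 2,      # workload execution and storage
    "shared_storage": 1,     # persistent data and VM images
    "firewall": 1,           # basic segmentation and security
    "console": 1,            # out-of-band management access
}

def check_rack(rack: dict) -> list:
    """Return the components that fall short of the minimal setup."""
    return [
        f"{component}: have {rack.get(component, 0)}, need >= {minimum}"
        for component, minimum in MINIMUMS.items()
        if rack.get(component, 0) < minimum
    ]

planned = {
    "management_nodes": 3, "management_switch": 1, "compute_nodes": 2,
    "shared_storage": 1, "firewall": 1, "console": 1,
}
print(check_rack(planned))                            # no shortfalls: []
print(check_rack({**planned, "management_nodes": 2}))
```

A check like this is mainly useful when the rack layout is generated or reviewed automatically, e.g. as part of an ordering workflow.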
## CobaltCore Hardware Specifications

The Control Plane of CobaltCore, i.e., bare metal management plus the IaaS functionality provided by OpenStack, typically consists of two racks: one hosting the required network management functionality and one providing the necessary compute power.

A typical **network fabric** pod for a CobaltCore deployment in a modern data center environment consists of these key components, designed to ensure robust connectivity, scalability, and high availability:
- Spine Switches: Usually three or more high-capacity switches that serve as the backbone of the network, interconnecting with all leaf switches to provide non-blocking bandwidth and low latency across the pod.
- Leaf Switches: Typically two switches that connect directly to servers, storage devices, and other endpoints. The leaf-spine topology facilitates east-west traffic within the pod and supports scalable expansion.
- Core Switches: Two or more switches that aggregate traffic from the spine layer and connect the pod to external networks or additional data center pods, contributing to redundancy and load balancing.
- Firewalls: At least two firewalls, deployed to ensure traffic inspection, segmentation, and protection against unauthorized access.
- Management Switch: A dedicated switch for out-of-band management, providing secure access to network and server management interfaces.
- Console Server: One or more console servers for centralized access to the serial management ports of network and compute devices, supporting remote troubleshooting and maintenance.

Specifications for each component may vary depending on performance requirements and vendor selection, but common features include support for high-speed interfaces (such as 100G QSFP28), redundant power supplies, and advanced network protocols (e.g., DMTF Redfish, VXLAN, EVPN).
This general architecture is designed to provide scalable, resilient, and secure networking for control plane and data plane operations in bare metal and IaaS environments.

A typical **compute pod** deployment is designed to deliver scalable, efficient, and manageable compute resources.
These pods commonly consist of a set of servers, network switches, and management components that together provide the necessary performance, connectivity, and operational flexibility for a wide variety of workloads.
- Compute Nodes: Usually between 8 and 32 servers per pod, each equipped with single- or dual-socket CPUs from leading manufacturers like Intel or AMD. These nodes often feature high core counts (64 to 144 cores per socket), substantial RAM (256GB to 1TB per node), and fast local storage (such as NVMe SSDs) to support demanding applications. Single-socket configurations are preferred for lower power consumption and easier scaling, while dual-socket options are chosen for memory-intensive workloads.
- Network Connectivity: High-speed network interfaces, such as 25G, 40G, or 100G Ethernet ports, are standard for east-west traffic between compute nodes and for uplinks to the broader network. SmartNICs (like NVIDIA BlueField or Mellanox ConnectX) are often deployed to offload network processing, improve bandwidth, and reduce latency, especially in environments focused on virtualization, high-performance computing (HPC), or large-scale cloud operations.
- Top-of-Rack (ToR) Switches: Each pod typically includes two or more ToR switches. These switches aggregate traffic from the compute nodes and connect to the spine/leaf fabric of the data center for redundancy and high availability.
- Management Switch: A dedicated management switch is used for out-of-band management connections, allowing secure and reliable access to server and network management interfaces.

Support for standardized management protocols, such as DMTF Redfish, is recommended to provide vendor-agnostic, RESTful API-based hardware management.
This ensures seamless integration with automation tools and reduces complexity.
Compute pods are architected to be modular, allowing for easy expansion and maintenance.
Power efficiency, density, and cooling requirements are key factors in hardware selection.
Network topology is optimized for low latency and high bandwidth, supporting both control plane and data plane operations.

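To make the ranges above concrete, a small sketch can compute the aggregate capacity envelope of one pod. The figures simply restate the bounds quoted above (8 to 32 nodes, one or two sockets, 64 to 144 cores per socket, 256GB to 1TB RAM per node); actual sizing depends on the selected SKUs.

```python
# Sketch: aggregate capacity envelope of a compute pod, using the bounds
# quoted in the text above. These numbers restate the prose; real sizing
# depends on the chosen hardware SKUs and workload profile.

def pod_capacity(nodes: int, sockets: int, cores_per_socket: int, ram_gb: int) -> dict:
    """Total core count and RAM for one homogeneous pod configuration."""
    return {
        "total_cores": nodes * sockets * cores_per_socket,
        "total_ram_gb": nodes * ram_gb,
    }

# Smallest configuration mentioned: 8 single-socket nodes, 64 cores, 256GB RAM.
low = pod_capacity(nodes=8, sockets=1, cores_per_socket=64, ram_gb=256)
# Largest configuration mentioned: 32 dual-socket nodes, 144 cores, 1TB RAM.
high = pod_capacity(nodes=32, sockets=2, cores_per_socket=144, ram_gb=1024)

print(low)   # 512 cores, 2048 GB RAM
print(high)  # 9216 cores, 32768 GB RAM
```

The two orders of magnitude between the low and high envelope illustrate why the pod design is kept modular: capacity is scaled by adding nodes, not by redesigning the pod.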
## IronCore Hardware Specifications

The Control Plane of IronCore typically consists of a single rack, holding management functionality for network, compute, and storage.

For **networking** in a modern data center compute pod, a typical stack includes a combination of high-performance switches and dedicated management infrastructure to ensure robust connectivity, redundancy, and operational flexibility:
- Spine Switches: Two or more high-throughput, low-latency switches (commonly 32-port or higher, supporting 100G Ethernet) serve as the backbone of the data center's leaf-spine architecture. These switches aggregate traffic from leaf switches and provide scalable bandwidth for east-west and north-south data flows. Popular models include those from vendors such as Edgecore, Arista, Cisco, or Juniper.
- Leaf Switches: Two or more top-of-rack (ToR) switches (often matching the spine switches in hardware family and supporting 25G or 100G uplinks) connect directly to compute nodes. These switches aggregate server traffic and uplink to the spine for high availability and load balancing.
- Out-of-Band (OOB) Management Stack: Dedicated OOB switches (both spine and leaf) and a console server provide secure, isolated management access to all infrastructure devices. OOB switches typically support a mix of 1G and 10G ports for management traffic, while the console server (from vendors like Perle, Opengear, or Lantronix) offers serial and network-based remote access to device consoles.
- Router Servers: These can share a specification with the servers used for general management services: one or more high-performance x86 servers equipped with modern multi-core CPUs (such as AMD EPYC or Intel Xeon), large memory (e.g., 192GB+), NVMe SSDs, and multiple high-speed network interfaces (such as three or more dual-port 100G Ethernet NICs). These servers are used for routing, network services, or as network function virtualization (NFV) hosts, and often include features like TPM modules and support for Redfish or similar management standards.

This architecture ensures high throughput, redundancy, and a clear separation between production and management networks.
All network hardware should support advanced features such as Layer 2/Layer 3 switching, network automation (via APIs like DMTF Redfish), and hardware-based security modules for trusted operations.
The use of white-box or branded switches is common, with hardware selection driven by performance, compatibility, and support requirements.

For **management services**, it is recommended to deploy a set of dedicated management servers with robust hardware configurations to ensure reliable performance, security, and scalability.
A typical setup includes three or more management servers, each with the following (or equivalent) specifications:
- A multi-core, server-grade processor (such as AMD EPYC or Intel Xeon) with at least 32 cores to handle management workloads and virtualization tasks.
- Large memory capacity, typically 192GB RAM or more, to support concurrent management operations and monitoring tools.
- High-performance NVMe SSD storage, around 3TB or more, for fast boot times, logging, and management software storage.
- Multiple high-speed network interfaces, such as dual-port 100G Ethernet adapters (e.g., the Mellanox ConnectX series), to ensure high availability and rapid handling of management traffic.
- Additional network connectivity through 10G SFP+ and 1G RJ45 ports for versatile management network integration and out-of-band access.
- Hardware-based security features, including a TPM 2.0 module, for secure boot and trusted operations.
- Support for advanced management standards, such as a BMC with a Redfish API, allowing remote and automated management of the server hardware.

These management servers are typically deployed in a redundant configuration to provide high availability and are integrated with the broader out-of-band management stack, ensuring secure, isolated access to critical infrastructure devices.
Hardware selection should be based on compatibility with existing systems, support for automation, and the ability to scale as operational needs grow.
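As an illustration of Redfish-based automation, the sketch below checks a Redfish `ComputerSystem` payload against two of the management-server minimums above. The payload here is a canned, hypothetical example; real data would come from the server's BMC via `GET /redfish/v1/Systems/<id>`, and the exact properties available vary by vendor and schema version.

```python
# Sketch: check a Redfish ComputerSystem payload against the management
# server minimums above (>= 192 GiB RAM, TPM 2.0 present). SAMPLE_SYSTEM
# is a canned, hypothetical payload; real data comes from the BMC via
# GET /redfish/v1/Systems/<id>, and available properties vary by vendor.

SAMPLE_SYSTEM = {
    "MemorySummary": {"TotalSystemMemoryGiB": 192},
    "TrustedModules": [{"InterfaceType": "TPM2_0"}],
}

def meets_management_minimums(system: dict) -> bool:
    """True if the payload reports >= 192 GiB RAM and a TPM 2.0 module."""
    ram_ok = system.get("MemorySummary", {}).get("TotalSystemMemoryGiB", 0) >= 192
    tpm_ok = any(
        module.get("InterfaceType") == "TPM2_0"
        for module in system.get("TrustedModules", [])
    )
    return ram_ok and tpm_ok

print(meets_management_minimums(SAMPLE_SYSTEM))  # True
```

Because Redfish exposes this inventory through a vendor-agnostic REST API, the same check can run unchanged across servers from different manufacturers, which is precisely why the standard is recommended above.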
