diff --git a/docs/installation/installation.md b/docs/installation/installation.md index 9a596866..d35ccd56 100644 --- a/docs/installation/installation.md +++ b/docs/installation/installation.md @@ -17,28 +17,224 @@ However, if you want to test FLUIDOS Node on your cluster already setup, we sugg +--> +## Prerequisites + +Below are the required tools, along with the versions used by the script: + +- [Docker](https://docs.docker.com/get-docker/) v28.1.1 +- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) v1.33.0 +- [KIND](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) v0.27.0 +- [Helm](https://helm.sh/docs/intro/install/) v3.17.3 +- [Liqo CLI tool](https://docs.liqo.io/en/stable/installation/liqoctl.html) v1.0.0 + +> **Note** The installation script will automatically check if these tools are installed and will ask for your confirmation to install them if they are missing. It will install each tool using a fixed version, except for Docker, which will be installed at the stable version. After Docker installation, an additional CLI command will be required to ensure its proper functionality. + +## Common issues with KIND + +This setup leverages KIND (Kubernetes IN Docker) to quickly provision a reliable testbed for evaluating the FLUIDOS Node architecture. + +When running multiple KIND clusters, certain issues may arise—particularly related to swap memory usage and system-level resource limits. To mitigate these problems, it is strongly recommended to execute the following commands prior to starting the installation: + +1. `sudo swapoff -a` +2. `sudo sysctl fs.inotify.max_user_instances=8192` +3. `sudo sysctl fs.inotify.max_user_watches=524288` + ## Testbed installation -To execute the script, use the following command: +### What will be installed -```bash -cd tools/scripts -. ./setup.sh -``` +The script will create two different types of Kubernetes clusters, each consisting of 3 nodes, as defined in the files located in the `quickstart/kind` directory: + +- **fluidos-consumer**: This cluster (also known as a FLUIDOS node) will act as a consumer of FLUIDOS resources. It will use the REAR protocol to communicate with the provider cluster, retrieve available Flavors, and reserve the one that best matches the solver’s request, proceeding to purchase it. + +- **fluidos-provider**: This cluster (also known as a FLUIDOS node) will act as a provider of FLUIDOS resources. It will offer its available Flavors in response to requests from the consumer, managing their reservation and sale accordingly. + +### Installation + +1. Clone the repository + + ```sh + git clone https://github.com/fluidos-project/node.git + ``` + +2. Move into the KIND Example folder + + ```sh + cd node/tools/scripts + ``` + +3. Launch the `setup.sh` script + + ```sh + ./setup.sh + ``` + +4. No command-line arguments are currently supported; instead, the installation mode is selected interactively at the beginning of the script execution. The available options are: + + - `1` Install the FLUIDOS Node using the demo testbed (one consumer and one provider cluster) via KIND. + + - `2` Install the FLUIDOS Node using a custom setup with n consumer clusters and m provider clusters via KIND. + + For both options, you will be prompted to choose: + + - Whether to install from the official remote FLUIDOS repository or use local repositories and build all components locally. + + - Whether to enable resource auto-discovery. + + - Whether to enable LAN node discovery. 
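+
+   Under the hood, these answers are presumably forwarded to the FLUIDOS Helm chart; for instance, the LAN node discovery choice appears to correspond to the `networkManager.config.enableLocalDiscovery` value used later in this guide for the manual installation. A minimal, hypothetical sketch of the equivalent Helm flag (the value name is taken from that later command, everything else is illustrative):
+
+   ```sh
+   # Hypothetical sketch: Helm value corresponding to the LAN discovery answer
+   helm upgrade --install node fluidos/node -n fluidos \
+     --set networkManager.config.enableLocalDiscovery="true"
+   ```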
+ + An example prompt flow is shown below: + + ```sh + 1. Use demo KIND environment (one consumer and one provider) + 2. Use a custom KIND environment with n consumer and m provides + Please enter the number of the option you want to use: + 1 + Do you want to use local repositories? [y/n] y + Do you want to enable resource auto discovery? [y/n] y + Do you want to enable LAN node discovery? [y/n] y + ``` + +5. At the beginning of the script execution, a check is performed to ensure all [required tools](#prerequisites) are installed. If any dependencies are missing, the script will prompt you for confirmation before proceeding with their automatic installation. + > **Note** The tools will be installed assuming Linux as the operating system. If you are not using Linux, you will need to install them manually. + + If Docker is not installed and you choose to install it, the script will terminate after its installation. This is necessary because the Docker group must be reloaded in order to use Docker commands without sudo. You can achieve this by running: + + ```sh + newgrp docker + ``` + + This command opens a new shell with updated group permissions. After executing it, simply restart the installation script. + +6. After executing the script, you can verify the status of the pods in the consumer cluster using the following commands: + + ```sh + export KUBECONFIG=fluidos-consumer-1-config + kubectl get pods -n fluidos + ``` + + To inspect the provider cluster, use its corresponding kubeconfig file: -No options are available through the CLI, but you can choose the installation mode by choosing the right option during the script execution. -The option supported are: + ```sh + export KUBECONFIG=fluidos-provider-1-config + kubectl get pods -n fluidos + ``` -- `1` to install the FLUIDOS Node as the demo testbed through KIND + Alternatively, to avoid switching the KUBECONFIG environment variable each time, you can directly specify the configuration file path and context when using kubectl. The paths to the generated configuration files are displayed at the end of the script execution: -- `2` to install the FLUIDOS Node in n consumer clusters and m provider clusters through KIND + ```sh + kubectl get pods --kubeconfig "$PWD/fluidos-consumer-1-config" --context kind-fluidos-consumer -n fluidos + ``` -For both options, you can choose to install from either the official remote FLUIDOS repository or the local repository, building all the components locally. + This approach enables seamless monitoring of both consumer and provider clusters without needing to re-export environment variables manually. + +7. You should see 4 pods running on the `fluidos-consumer` cluster and 4 pods running on the `fluidos-provider` cluster: + + - `node-local-resource-manager-` + - `node-network-manager-` + - `node-rear-controller-` + - `node-rear-manager-` + +### Usage + +In this section, we will guide you through interacting with the FLUIDOS Node at a high level. If you prefer to interact with the FLUIDOS Node using its Custom Resource Definitions (CRDs), please refer to the [Low-Level Usage](../../docs/usage/usage.md) section. + +Let’s start by deploying an example `solver` Custom Resource (CR) on the `fluidos-consume`r cluster. + +1. Open a new terminal on the repo and move into the `deployments/node/samples` folder + + ```sh + cd deployments/node/samples + ``` + +2. 
Set the `KUBECONFIG` environment variable to the `fluidos-consumer` cluster + + ```sh + export KUBECONFIG=../../../tools/scripts/fluidos-consumer-1-config + ``` + +3. Deploy the `solver` CR + + ```sh + kubectl apply -f solver.yaml + ``` + + > **Note** + > Please review the **architecture** field and change it to **amd64** or **arm64** according to your local machine architecture. + +4. Check the result of the deployment + + ```sh + kubectl get solver -n fluidos + ``` + + The result should be something like this: + + ```sh + NAME INTENT ID FIND CANDIDATE RESERVE AND BUY PEERING STATUS MESSAGE AGE + solver-sample intent-sample true true true Solved Solver has completed all the phases 83s + ``` + +5. Other resources have been created and can be inspected using the following commands: + + ```sh + kubectl get flavors.nodecore.fluidos.eu -n fluidos + kubectl get discoveries.advertisement.fluidos.eu -n fluidos + kubectl get reservations.reservation.fluidos.eu -n fluidos + kubectl get contracts.reservation.fluidos.eu -n fluidos + kubectl get peeringcandidates.advertisement.fluidos.eu -n fluidos + kubectl get transactions.reservation.fluidos.eu -n fluidos + ``` + +6. The infrastructure for resource sharing has been established. A demo namespace should now be created in the fluidos-consumer cluster: + + ```sh + kubectl create namespace demo + ``` + + The namespace can then be offloaded to the fluidos-provider cluster using the following command: + + ```sh + liqoctl offload namespace demo --pod-offloading-strategy Remote + ``` + + Once the namespace has been offloaded, any workload can be deployed within it. As an example, the provided Kubernetes deployment can be used: + + ```sh + kubectl apply -f nginx-deployment.yaml -n demo + ``` + +Another example involves deploying the `solver-service`, which requests a provider with a database service. + +1. Deploy the mock database service in the provider cluster. + + ```sh + export KUBECONFIG=../../../tools/scripts/fluidos-consumer-1-config + kubectl apply -f service-blueprint-db.yaml + ``` + +2. Deploy the `solver` CR + + ```sh + kubectl apply -f solver-service.yaml + ``` + +3. Check the result of the deployment + + ```sh + kubectl get solver -n fluidos + ``` + + The result should be something like this: + + ```sh + NAME INTENT ID FIND CANDIDATE RESERVE AND BUY PEERING STATUS MESSAGE AGE + solver-sample-service intent-sample-service true true true Solved Solver has completed all the phases 83s + ``` ### Clean Development Environment @@ -65,7 +261,8 @@ To ensure you have Liqo, please run the following script: ```bash cd ../../tools/scripts -./install-liqo.sh $KUBECONFIG +chmod +x install_liqo.sh +./install_liqo.sh $KUBECONFIG ``` Please, note that you need to pass a few parameters. @@ -75,11 +272,14 @@ Please, note that you need to pass a few parameters. 1. kubeadm 2. k3s 3. kind + - "cluster-name": this is the name you want to give to your Liqo local cluster (e.g.: `fluidos-turin-1`) - $KUBECONFIG: it is the typical environment variable that points to the path of your Kubernetes cluster configuration. -For more information, check out [Liqo official documentation](https://docs.liqo.io/en/v0.10.3/installation/install.html#install-with-liqoctl) for all supported providers. +- "liqoctl PATH": it is the path to the liqoctl command. If installed via Liqo guide, liqoctl is sufficient. + +For more information, check out [Liqo official documentation](https://docs.liqo.io/en/v1.0.0/installation/install.html#install-with-liqoctl) for all supported providers. 
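+
+As a concrete illustration, and assuming the argument order matches the parameter checks at the top of `install_liqo.sh` (provider, cluster name, kubeconfig, path to liqoctl), a full invocation for a KIND-based cluster might look like the following sketch, where `fluidos-turin-1` is just the example cluster name mentioned above:
+
+```bash
+# Hypothetical invocation sketch; the argument order is assumed from the
+# checks in install_liqo.sh ($1 provider, $2 cluster name, $3 kubeconfig,
+# $4 path to liqoctl).
+./install_liqo.sh kind fluidos-turin-1 "$KUBECONFIG" liqoctl
+```
+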
**DISCLAIMER:** before going ahead, ensure that at least one node is tagged with `node-role.fluidos.eu/worker: "true"` and, if acting as a provider, choose the nodes that exposes their Kubernetes resources with the label `node-role.fluidos.eu/resources: "true"`. @@ -91,7 +291,7 @@ helm repo add fluidos https://fluidos-project.github.io/node/ helm upgrade --install node fluidos/node \ -n fluidos --version "$FLUIDOS_VERSION" \ - --create-namespace -f consumer-values.yaml \ + --create-namespace -f $PWD/node/quickstart/utils/consumer-values.yaml \ --set networkManager.configMaps.nodeIdentity.ip="$NODE_IP" \ --set rearController.service.gateway.nodePort.port="$REAR_PORT" \ --set networkManager.config.enableLocalDiscovery="$ENABLE_LOCAL_DISCOVERY" \ diff --git a/testbed/kind/README.md b/testbed/kind/README.md deleted file mode 100644 index 4519d93d..00000000 --- a/testbed/kind/README.md +++ /dev/null @@ -1,181 +0,0 @@ -# - -

-FLUIDOS Node - Testbed (KIND)

- -## Getting Started - -This guide will help you to install a FLUIDOS Node **Testbed** using KIND (Kubernetes in Docker). This is the easiest way to install the FLUIDOS Node on a local machine. - -This guide has been made only for testing purposes. If you want to install FLUIDOS Node on a production environment, please follow the [official installation guide](/docs/installation/installation.md) - -## What will be installed - -This guide will create two different Kubernetes clusters: - -- **fluidos-consumer**: This cluster (a.k.a., FLUIDOS node) will act as a consumer of FLUIDOS resources. It will be used to deploy a `solver` example CR that will simulate an _Intent resolution_ request. This cluster will use the REAR protocol to communicate with the Provider cluster and to receive available Flavors, reserving the one that best fits the request and purchasing it. - -- **fluidos-provider**: This cluster (a.k.a. FLUIDOS node) will act as a provider of FLUIDOS resources. It will offer its own Flavors on the specific request made by the consumer, reserving and selling it. - -### Prerequisites - -- [Docker](https://docs.docker.com/get-docker/) -- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) -- [KIND](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) -- [Helm](https://helm.sh/docs/intro/install/) -- [Liqo CLI tool](https://docs.liqo.io/en/v0.10.1/installation/liqoctl.html) - -## Common issues with KIND - -This setup relies on KIND (Kubernetes in Docker) to quickly establish a reliable testbed for testing the FLUIDOS Node architecture. - -There are common issues encountered when running multiple clusters with KIND, particularly related to swap space memory usage and resource limits within the operating systems. - -Therefore, we highly recommend running the following commands before proceeding with the installation process: - -1. `sudo swapoff -a` -2. `sudo sysctl fs.inotify.max_user_instances=512` -3. `sudo sysctl fs.inotify.max_user_watches=524288` - -## Installation - -1. Clone the repository - -```sh -git clone https://github.com/fluidos-project/node.git -``` - -2. Move into the KIND Example folder - -```sh -cd testbed/kind -``` - -3. Set the execution permission on the `setup.sh` script - -```sh -chmod +x setup.sh -``` - -4. Launch the `setup.sh` script - -```sh - ./setup.sh -``` - -5. Wait for the script to finish. It will take some minutes. - -6. After running the script, you can check the status of the pods in the consumer cluster using the following commands: - -```sh -export KUBECONFIG=consumer/config -kubectl get pods -n fluidos -``` - -To inspect resources within the provider cluster, use the kube configuration file of the provider cluster: - -```sh -export KUBECONFIG=provider/config -kubectl get pods -n fluidos -``` - -Alternatively, to avoid continuously changing the **KUBECONFIG** environment variable, you can run `kubectl` by explicitly referencing the kube config file: - -```sh -kubectl get pods --kubeconfig "$PWD/consumer/config" --context kind-fluidos-consumer -n fluidos -``` - -This allows for convenient monitoring of both consumer and provider clusters without the need for manual configuration changes. - -6. You should see 3 pods running on the `fluidos-consumer` cluster and 3 pods running on the `fluidos-provider` cluster: - -- `node-local-resource-manager-` -- `node-rear-manager-` -- `node-rear-controller-` - -7. 
You can also check the status of the generated flavors with the following command: - -```sh -kubectl get flavors.nodecore.fluidos.eu -n fluidos -``` - -The result should be something like this: - -``` -NAME PROVIDER ID TYPE CPU MEMORY OWNER NAME OWNER DOMAIN AVAILABLE AGE --k8s-fluidos- kc1pttf3vl k8s-fluidos 4963020133n 26001300Ki kc1pttf3vl fluidos.eu true 168m --k8s-fluidos- kc1pttf3vl k8s-fluidos 4954786678n 25966964Ki kc1pttf3vl fluidos.eu true 168m -``` - -### Usage - -In this section, we will instruct you on how you can interact with the FLUIDOS Node using an high-level approach. In case you want to interact with the FLUIDOS Node using its CRDs, please refer to the [low-level usage](../../docs/usage/usage.md) section. - -Now lets try to deploy a `solver` example CR on the `fluidos-consumer` cluster. - -1. Open a new terminal on the repo and move into the `deployments/node/samples` folder - -```sh -cd deployments/node/samples -``` - -2. Set the `KUBECONFIG` environment variable to the `fluidos-consumer` cluster - -```sh -export KUBECONFIG=../../../testbed/kind/consumer/config -``` - -3. Deploy the `solver` CR - -```sh -kubectl apply -f solver.yaml -``` - -> **Note** -> Please review the **architecture** field and change it to **amd64** or **arm64** according to your local machine architecture. - -4. Check the result of the deployment - -```sh -kubectl get solver -n fluidos -``` - -The result should be something like this: - -``` -NAMESPACE NAME INTENT ID FIND CANDIDATE RESERVE AND BUY PEERING CANDIDATE PHASE RESERVING PHASE PEERING PHASE STATUS MESSAGE AGE -fluidos solver-sample intent-sample true true false Solved Solved Solved No need to enstablish a peering 5s -``` - -5. Other resources have been created, you can check them with the following commands: - -```sh -kubectl get flavors.nodecore.fluidos.eu -n fluidos -kubectl get discoveries.advertisement.fluidos.eu -n fluidos -kubectl get reservations.reservation.fluidos.eu -n fluidos -kubectl get contracts.reservation.fluidos.eu -n fluidos -kubectl get peeringcandidates.advertisement.fluidos.eu -n fluidos -kubectl get transactions.reservation.fluidos.eu -n fluidos -``` - -6. The infrastructure for the resource sharing has been created. - -You can now create a demo namespace on the `fluidos-consumer` cluster: - -```sh -kubectl create namespace demo -``` - -And then offload the namespace to the `fluidos-provider` cluster: - -```sh -liqoctl offload namespace demo --pod-offloading-strategy Remote -``` - -You can now create a workload inside this offloaded namespace through and already provided Kubernetes deployment: - -```sh -kubectl apply -f nginx-deployment.yaml -n demo -``` diff --git a/testbed/kind/consumer/cluster-multi-worker.yaml b/testbed/kind/consumer/cluster-multi-worker.yaml deleted file mode 100644 index 17517cec..00000000 --- a/testbed/kind/consumer/cluster-multi-worker.yaml +++ /dev/null @@ -1,14 +0,0 @@ -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: - - role: control-plane - image: kindest/node:v1.28.0 - - role: worker - labels: - node-role.fluidos.eu/resources: "true" - node-role.fluidos.eu/worker: "true" - image: kindest/node:v1.28.0 - - role: worker - image: kindest/node:v1.28.0 - labels: - node-role.fluidos.eu/resources: "true" \ No newline at end of file diff --git a/testbed/kind/consumer/values.yaml b/testbed/kind/consumer/values.yaml deleted file mode 100644 index f00beece..00000000 --- a/testbed/kind/consumer/values.yaml +++ /dev/null @@ -1,151 +0,0 @@ -# Default values for fluidos-node. 
-# This is a YAML-formatted file. -# Declare variables to be passed into your templates. - -# -- Images' tag to select a development version of fluidos-node instead of a release -tag: "" -# -- The pullPolicy for fluidos-node pods. -pullPolicy: "IfNotPresent" - -common: - # -- NodeSelector for all fluidos-node pods - nodeSelector: { - node-role.fluidos.eu/worker: "true" - } - # -- Tolerations for all fluidos-node pods - tolerations: [] - # -- Affinity for all fluidos-node pods - affinity: {} - # -- Extra arguments for all fluidos-node pods - extraArgs: [] - -localResourceManager: - # -- The number of REAR Controller, which can be increased for active/passive high availability. - replicas: 1 - pod: - # -- Annotations for the local-resource-manager pod. - annotations: {} - # -- Labels for the local-resource-manager pod. - labels: {} - # -- Extra arguments for the local-resource-manager pod. - extraArgs: [] - # -- Resource requests and limits (https://kubernetes.io/docs/user-guide/compute-resources/) for the local-resource-manager pod. - resources: - limits: {} - requests: {} - imageName: "ghcr.io/fluidos-project/local-resource-manager" - config: - # -- Label used to identify the nodes from which resources are collected. - nodeResourceLabel: "node-role.fluidos.eu/resources" - # -- This flag defines the resource type of the generated flavours. - resourceType: "k8s-fluidos" - flavour: - # -- The minimum number of CPUs that can be requested to purchase a flavour. - cpuMin: "0" - # -- The minimum amount of memory that can be requested to purchase a flavour. - memoryMin: "0" - # -- The CPU step that must be respected when requesting a flavour through a Flavour Selector. - cpuStep: "1000m" - # -- The memory step that must be respected when requesting a flavour through a Flavour Selector. - memoryStep: "100Mi" - -rearManager: - # -- The number of REAR Manager, which can be increased for active/passive high availability. - replicas: 1 - pod: - # -- Annotations for the rear-manager pod. - annotations: {} - # -- Labels for the rear-manager pod. - labels: {} - # -- Extra arguments for the rear-manager pod. - extraArgs: [] - # -- Resource requests and limits (https://kubernetes.io/docs/user-guide/compute-resources/) for the rear-manager pod. - resources: - limits: {} - requests: {} - imageName: "ghcr.io/fluidos-project/rear-manager" - -rearController: - # -- The number of REAR Controller, which can be increased for active/passive high availability. - replicas: 1 - pod: - # -- Annotations for the rear-controller pod. - annotations: {} - # -- Labels for the rear-controller pod. - labels: {} - # -- Extra arguments for the rear-controller pod. - extraArgs: [] - # -- Resource requests and limits (https://kubernetes.io/docs/user-guide/compute-resources/) for the rear-controller pod. - resources: - limits: {} - requests: {} - imageName: "ghcr.io/fluidos-project/rear-controller" - service: - grpc: - name: "grpc" - # -- Kubernetes service used to expose the gRPC Server to liqo. - type: "ClusterIP" - # -- Annotations for the gRPC service. - annotations: {} - # -- Labels for the gRPC service. - labels: {} - # -- The gRPC port used by Liqo to connect with the Gateway of the rear-controller to obtain the Contract resources for a given consumer ClusterID. - port: 2710 - # -- The target port used by the gRPC service. - targetPort: 2710 - gateway: - name: "gateway" - # -- Kubernetes service to be used to expose the REAR gateway. - type: "NodePort" - # -- Annotations for the REAR gateway service. 
- annotations: {} - # -- Labels for the REAR gateway service. - labels: {} - # -- Options valid if service type is NodePort. - nodePort: - # -- Force the port used by the NodePort service. - port: 30000 - # -- Options valid if service type is LoadBalancer. - loadBalancer: - # -- Override the IP here if service type is LoadBalancer and you want to use a specific IP address, e.g., because you want a static LB. - ip: "" - # -- The port used by the rear-controller to expose the REAR Gateway. - port: 3004 - # -- The target port used by the REAR Gateway service. - targetPort: 3004 - -networkManager: - # -- The number of Network Manager, which can be increased for active/passive high availability. - replicas: 1 - pod: - # -- Annotations for the network-manager pod. - annotations: {} - # -- Labels for the network-manager pod. - labels: {} - # -- Extra arguments for the network-manager pod. - extraArgs: [] - # -- Resource requests and limits (https://kubernetes.io/docs/user-guide/compute-resources/) for the network-manager pod. - resources: - limits: {} - requests: {} - # -- The resource image to be used by the network-manager pod. - imageName: "ghcr.io/fluidos/network-manager" - configMaps: - providers: - # -- The name of the ConfigMap containing the list of the FLUIDOS Providers and the default FLUIDOS Provider (SuperNode or Catalogue). - name: "fluidos-network-manager-config" - # -- The IP List of Local knwon FLUIDOS Nodes separated by commas. - local: - # -- The IP List of Remote known FLUIDOS Nodes separated by commas. - remote: - # -- The IP List of SuperNodes separated by commas. - default: - nodeIdentity: - # -- The name of the ConfigMap containing the FLUIDOS Node identity info. - name: "fluidos-network-manager-identity" - # -- The domain name of the FLUIDOS closed domani: It represents for instance the Enterprise and it is used to generate the FQDN of the owned FLUIDOS Nodes - domain: "fluidos.eu" - # -- The IP address of the FLUIDOS Node. It can be public or private, depending on the network configuration and it corresponds to the IP address to reach the Network Manager from the outside of the cluster. - ip: - # -- The NodeID is a UUID that identifies the FLUIDOS Node. 
It is used to generate the FQDN of the owned FLUIDOS Nodes and it is unique in the FLUIDOS closed domain - nodeID: diff --git a/testbed/kind/metrics-server.yaml b/testbed/kind/metrics-server.yaml deleted file mode 100644 index d84b7e2e..00000000 --- a/testbed/kind/metrics-server.yaml +++ /dev/null @@ -1,197 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - labels: - k8s-app: metrics-server - name: metrics-server - namespace: kube-system ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - labels: - k8s-app: metrics-server - rbac.authorization.k8s.io/aggregate-to-admin: "true" - rbac.authorization.k8s.io/aggregate-to-edit: "true" - rbac.authorization.k8s.io/aggregate-to-view: "true" - name: system:aggregated-metrics-reader -rules: -- apiGroups: - - metrics.k8s.io - resources: - - pods - - nodes - verbs: - - get - - list - - watch ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - labels: - k8s-app: metrics-server - name: system:metrics-server -rules: -- apiGroups: - - "" - resources: - - nodes/metrics - verbs: - - get -- apiGroups: - - "" - resources: - - pods - - nodes - verbs: - - get - - list - - watch ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - labels: - k8s-app: metrics-server - name: metrics-server-auth-reader - namespace: kube-system -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: extension-apiserver-authentication-reader -subjects: -- kind: ServiceAccount - name: metrics-server - namespace: kube-system ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - labels: - k8s-app: metrics-server - name: metrics-server:system:auth-delegator -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: system:auth-delegator -subjects: -- kind: ServiceAccount - name: metrics-server - namespace: kube-system ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - labels: - k8s-app: metrics-server - name: system:metrics-server -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: system:metrics-server -subjects: -- kind: ServiceAccount - name: metrics-server - namespace: kube-system ---- -apiVersion: v1 -kind: Service -metadata: - labels: - k8s-app: metrics-server - name: metrics-server - namespace: kube-system -spec: - ports: - - name: https - port: 443 - protocol: TCP - targetPort: https - selector: - k8s-app: metrics-server ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - labels: - k8s-app: metrics-server - name: metrics-server - namespace: kube-system -spec: - selector: - matchLabels: - k8s-app: metrics-server - strategy: - rollingUpdate: - maxUnavailable: 0 - template: - metadata: - labels: - k8s-app: metrics-server - spec: - containers: - - args: - - --cert-dir=/tmp - - --secure-port=4443 - - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - - --kubelet-use-node-status-port - - --metric-resolution=15s - - --kubelet-insecure-tls - image: registry.k8s.io/metrics-server/metrics-server:v0.6.4 - imagePullPolicy: IfNotPresent - livenessProbe: - failureThreshold: 3 - httpGet: - path: /livez - port: https - scheme: HTTPS - periodSeconds: 10 - name: metrics-server - ports: - - containerPort: 4443 - name: https - protocol: TCP - readinessProbe: - failureThreshold: 3 - httpGet: - path: /readyz - port: https - scheme: HTTPS - initialDelaySeconds: 20 - periodSeconds: 10 - resources: - requests: - cpu: 100m - memory: 200Mi - securityContext: - 
allowPrivilegeEscalation: false - readOnlyRootFilesystem: true - runAsNonRoot: true - runAsUser: 1000 - volumeMounts: - - mountPath: /tmp - name: tmp-dir - nodeSelector: - kubernetes.io/os: linux - priorityClassName: system-cluster-critical - serviceAccountName: metrics-server - volumes: - - emptyDir: {} - name: tmp-dir ---- -apiVersion: apiregistration.k8s.io/v1 -kind: APIService -metadata: - labels: - k8s-app: metrics-server - name: v1beta1.metrics.k8s.io -spec: - group: metrics.k8s.io - groupPriorityMinimum: 100 - insecureSkipTLSVerify: true - service: - name: metrics-server - namespace: kube-system - version: v1beta1 - versionPriority: 100 diff --git a/testbed/kind/provider/cluster-multi-worker.yaml b/testbed/kind/provider/cluster-multi-worker.yaml deleted file mode 100644 index 43a4d7c8..00000000 --- a/testbed/kind/provider/cluster-multi-worker.yaml +++ /dev/null @@ -1,14 +0,0 @@ -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: - - role: control-plane - image: kindest/node:v1.28.0 - - role: worker - labels: - node-role.fluidos.eu/resources: "true" - node-role.fluidos.eu/worker: "true" - image: kindest/node:v1.28.0 - - role: worker - image: kindest/node:v1.28.0 - labels: - node-role.fluidos.eu/resources: "true" diff --git a/testbed/kind/provider/values.yaml b/testbed/kind/provider/values.yaml deleted file mode 100644 index a8c67943..00000000 --- a/testbed/kind/provider/values.yaml +++ /dev/null @@ -1,151 +0,0 @@ -# Default values for fluidos-node. -# This is a YAML-formatted file. -# Declare variables to be passed into your templates. - -# -- Images' tag to select a development version of fluidos-node instead of a release -tag: "" -# -- The pullPolicy for fluidos-node pods. -pullPolicy: "IfNotPresent" - -common: - # -- NodeSelector for all fluidos-node pods - nodeSelector: { - node-role.fluidos.eu/worker: "true" - } - # -- Tolerations for all fluidos-node pods - tolerations: [] - # -- Affinity for all fluidos-node pods - affinity: {} - # -- Extra arguments for all fluidos-node pods - extraArgs: [] - -localResourceManager: - # -- The number of REAR Controller, which can be increased for active/passive high availability. - replicas: 1 - pod: - # -- Annotations for the local-resource-manager pod. - annotations: {} - # -- Labels for the local-resource-manager pod. - labels: {} - # -- Extra arguments for the local-resource-manager pod. - extraArgs: [] - # -- Resource requests and limits (https://kubernetes.io/docs/user-guide/compute-resources/) for the local-resource-manager pod. - resources: - limits: {} - requests: {} - imageName: "ghcr.io/fluidos-project/local-resource-manager" - config: - # -- Label used to identify the nodes from which resources are collected. - nodeResourceLabel: "node-role.fluidos.eu/resources" - # -- This flag defines the resource type of the generated flavours. - resourceType: "k8s-fluidos" - flavour: - # -- The minimum number of CPUs that can be requested to purchase a flavour. - cpuMin: "0" - # -- The minimum amount of memory that can be requested to purchase a flavour. - memoryMin: "0" - # -- The CPU step that must be respected when requesting a flavour through a Flavour Selector. - cpuStep: "1000m" - # -- The memory step that must be respected when requesting a flavour through a Flavour Selector. - memoryStep: "100Mi" - -rearManager: - # -- The number of REAR Manager, which can be increased for active/passive high availability. - replicas: 1 - pod: - # -- Annotations for the rear-manager pod. - annotations: {} - # -- Labels for the rear-manager pod. 
- labels: {} - # -- Extra arguments for the rear-manager pod. - extraArgs: [] - # -- Resource requests and limits (https://kubernetes.io/docs/user-guide/compute-resources/) for the rear-manager pod. - resources: - limits: {} - requests: {} - imageName: "ghcr.io/fluidos-project/rear-manager" - -rearController: - # -- The number of REAR Controller, which can be increased for active/passive high availability. - replicas: 1 - pod: - # -- Annotations for the rear-controller pod. - annotations: {} - # -- Labels for the rear-controller pod. - labels: {} - # -- Extra arguments for the rear-controller pod. - extraArgs: [] - # -- Resource requests and limits (https://kubernetes.io/docs/user-guide/compute-resources/) for the rear-controller pod. - resources: - limits: {} - requests: {} - imageName: "ghcr.io/fluidos-project/rear-controller" - service: - grpc: - name: "grpc" - # -- Kubernetes service used to expose the gRPC Server to liqo. - type: "ClusterIP" - # -- Annotations for the gRPC service. - annotations: {} - # -- Labels for the gRPC service. - labels: {} - # -- The gRPC port used by Liqo to connect with the Gateway of the rear-controller to obtain the Contract resources for a given consumer ClusterID. - port: 2710 - # -- The target port used by the gRPC service. - targetPort: 2710 - gateway: - name: "gateway" - # -- Kubernetes service to be used to expose the REAR gateway. - type: "NodePort" - # -- Annotations for the REAR gateway service. - annotations: {} - # -- Labels for the REAR gateway service. - labels: {} - # -- Options valid if service type is NodePort. - nodePort: - # -- Force the port used by the NodePort service. - port: 30001 - # -- Options valid if service type is LoadBalancer. - loadBalancer: - # -- Override the IP here if service type is LoadBalancer and you want to use a specific IP address, e.g., because you want a static LB. - ip: "" - # -- The port used by the rear-controller to expose the REAR Gateway. - port: 3004 - # -- The target port used by the REAR Gateway service. - targetPort: 3004 - -networkManager: - # -- The number of Network Manager, which can be increased for active/passive high availability. - replicas: 1 - pod: - # -- Annotations for the network-manager pod. - annotations: {} - # -- Labels for the network-manager pod. - labels: {} - # -- Extra arguments for the network-manager pod. - extraArgs: [] - # -- Resource requests and limits (https://kubernetes.io/docs/user-guide/compute-resources/) for the network-manager pod. - resources: - limits: {} - requests: {} - # -- The resource image to be used by the network-manager pod. - imageName: "ghcr.io/fluidos/network-manager" - configMaps: - providers: - # -- The name of the ConfigMap containing the list of the FLUIDOS Providers and the default FLUIDOS Provider (SuperNode or Catalogue). - name: "fluidos-network-manager-config" - # -- The IP List of Local knwon FLUIDOS Nodes separated by commas. - local: "" - # -- The IP List of Remote known FLUIDOS Nodes separated by commas. - remote: - # -- The IP List of SuperNodes separated by commas. - default: - nodeIdentity: - # -- The name of the ConfigMap containing the FLUIDOS Node identity info. - name: "fluidos-network-manager-identity" - # -- The domain name of the FLUIDOS closed domani: It represents for instance the Enterprise and it is used to generate the FQDN of the owned FLUIDOS Nodes - domain: "fluidos.eu" - # -- The IP address of the FLUIDOS Node. 
It can be public or private, depending on the network configuration and it corresponds to the IP address to reach the Network Manager from the outside of the cluster. - ip: - # -- The NodeID is a UUID that identifies the FLUIDOS Node. It is used to generate the FQDN of the owned FLUIDOS Nodes and it is unique in the FLUIDOS closed domain - nodeID: diff --git a/testbed/kind/setup.sh b/testbed/kind/setup.sh deleted file mode 100755 index 2b50bb3a..00000000 --- a/testbed/kind/setup.sh +++ /dev/null @@ -1,50 +0,0 @@ -#!/usr/bin/bash - -set -xeu - -consumer_node_port=30000 -provider_node_port=30001 - -kind create cluster --config consumer/cluster-multi-worker.yaml --name fluidos-consumer --kubeconfig "$PWD/consumer/config" -kind create cluster --config provider/cluster-multi-worker.yaml --name fluidos-provider --kubeconfig "$PWD/provider/config" - -consumer_controlplane_ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fluidos-consumer-control-plane) -provider_controlplane_ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fluidos-provider-control-plane) - -helm repo add fluidos https://fluidos-project.github.io/node/ - -export KUBECONFIG=$PWD/consumer/config - -kubectl apply -f ../../deployments/node/crds -kubectl apply -f "$PWD/metrics-server.yaml" - -echo "Waiting for metrics-server to be ready" -kubectl wait --for=condition=ready pod -l k8s-app=metrics-server -n kube-system --timeout=300s - -helm install node fluidos/node -n fluidos \ - --create-namespace -f consumer/values.yaml \ - --set networkManager.configMaps.nodeIdentity.ip="$consumer_controlplane_ip:$consumer_node_port" \ - --set networkManager.configMaps.providers.local="$provider_controlplane_ip:$provider_node_port" - -liqoctl install kind --cluster-name fluidos-consumer \ - --set controllerManager.config.resourcePluginAddress=node-rear-controller-grpc.fluidos:2710 \ - --set controllerManager.config.enableResourceEnforcement=true - -export KUBECONFIG=$PWD/provider/config - -kubectl apply -f ../../deployments/node/crds -kubectl apply -f "$PWD/metrics-server.yaml" - -echo "Waiting for metrics-server to be ready" -kubectl wait --for=condition=ready pod -l k8s-app=metrics-server -n kube-system --timeout=300s - -helm install node fluidos/node -n fluidos \ - --create-namespace -f provider/values.yaml \ - --set networkManager.configMaps.nodeIdentity.ip="$provider_controlplane_ip:$provider_node_port" \ - --set networkManager.configMaps.providers.local="$consumer_controlplane_ip:$consumer_node_port" - -liqoctl install kind --cluster-name fluidos-provider \ - --set controllerManager.config.resourcePluginAddress=node-rear-controller-grpc.fluidos:2710 \ - --set controllerManager.config.enableResourceEnforcement=true - - diff --git a/tools/scripts/environment.sh b/tools/scripts/environment.sh index 24808ff5..6ce19d27 100644 --- a/tools/scripts/environment.sh +++ b/tools/scripts/environment.sh @@ -49,6 +49,17 @@ create_kind_clusters() { # Get provider JSON tmp file from parameter provider_json=$2 + # Check AMD64 or ARM64 + ARCH=$(uname -m) + if [ "$ARCH" == "x86_64" ]; then + ARCH="amd64" + elif [ "$ARCH" == "aarch64" ] || [ "$ARCH" == "arm64" ]; then + ARCH="arm64" + else + echo "Unsupported architecture." + exit 1 + fi + print_title "Create KIND clusters..." 
# Map of clusters: @@ -78,8 +89,8 @@ create_kind_clusters() { for j in $(seq 1 "$num_workers"); do ( docker exec --workdir /tmp "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" mkdir -p cni-plugins - docker exec --workdir /tmp/cni-plugins "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" curl -LO https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz - docker exec --workdir /tmp/cni-plugins "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" tar xvfz cni-plugins-linux-amd64-v1.5.1.tgz + docker exec --workdir /tmp/cni-plugins "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" curl -LO https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-$ARCH-v1.5.1.tgz + docker exec --workdir /tmp/cni-plugins "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" tar xvfz cni-plugins-linux-$ARCH-v1.5.1.tgz docker exec --workdir /tmp/cni-plugins "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" cp macvlan /opt/cni/bin docker exec --workdir /tmp "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" rm -r cni-plugins ) @@ -110,8 +121,8 @@ create_kind_clusters() { for j in $(seq 1 "$num_workers"); do ( docker exec --workdir /tmp "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" mkdir -p cni-plugins - docker exec --workdir /tmp/cni-plugins "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" curl -LO https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz - docker exec --workdir /tmp/cni-plugins "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" tar xvfz cni-plugins-linux-amd64-v1.5.1.tgz + docker exec --workdir /tmp/cni-plugins "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" curl -LO https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-$ARCH-v1.5.1.tgz + docker exec --workdir /tmp/cni-plugins "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" tar xvfz cni-plugins-linux-$ARCH-v1.5.1.tgz docker exec --workdir /tmp/cni-plugins "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" cp macvlan /opt/cni/bin docker exec --workdir /tmp "$name"-worker"$([ "$j" = 1 ] && echo "" || echo "$j")" rm -r cni-plugins ) diff --git a/tools/scripts/install_liqo.sh b/tools/scripts/install_liqo.sh old mode 100644 new mode 100755 index 2140cbb6..9f98afe0 --- a/tools/scripts/install_liqo.sh +++ b/tools/scripts/install_liqo.sh @@ -5,7 +5,21 @@ if [ -z "$1" ]; then echo "No provider specified. Please provide a cloud provider (aws, azure, gcp, etc.)." exit 1 fi - +# Check if cluster name parameter is provided +if [ -z "$2" ]; then + echo "No cluster name specified. Please provide a cluster name." + exit 1 +fi +# Check if kubeconfig parameter is provided +if [ -z "$3" ]; then + echo "No kubeconfig specified. Please provide a kubeconfig file." + exit 1 +fi +# Check if liqoctl path is provided +if [ -z "$4" ]; then + echo "No liqoctl path specified. Please provide the path to liqoctl." + exit 1 +fi # Get the provider parameter # Get the provider parameter diff --git a/tools/scripts/requirements.sh b/tools/scripts/requirements.sh index 02645001..1480eec2 100644 --- a/tools/scripts/requirements.sh +++ b/tools/scripts/requirements.sh @@ -6,6 +6,22 @@ SCRIPT_DIR="$(dirname "$SCRIPT_PATH")" # shellcheck disable=SC1091 source "$SCRIPT_DIR"/utils.sh + +function check_kind() { + print_title "Check kind..." + if ! 
kind version; then + # Ask the user if they want to install kind + read -r -p "Do you want to install kind? (y/n): " install_kind + if [ "$install_kind" == "y" ]; then + install_kind + else + echo "Please install kind first. Exiting..." + return 1 + fi + fi +} + + # Install KIND function function install_kind() { print_title "Install kind..." @@ -22,12 +38,12 @@ function install_kind() { # Install kind if AMD64 if [ "$ARCH" == "amd64" ]; then echo "Install kind AMD64..." - [ "$(uname -m)" = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.21.0/kind-linux-amd64 + curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-amd64 chmod +x kind sudo mv kind /usr/local/bin/kind elif [ "$ARCH" == "arm64" ]; then echo "Install kind ARM64..." - [ "$(uname -m)" = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.21.0/kind-linux-arm64 + curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-arm64 chmod +x kind sudo mv kind /usr/local/bin/kind fi @@ -47,20 +63,37 @@ function install_docker() { # shellcheck disable=SC1091 echo \ "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \ - $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \ + $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \ sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt-get update sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin print_title "Docker installed successfully." + # Add current user to docker group + echo "Adding user '$USER' to group 'docker'..." + sudo usermod -aG docker "$USER" + #sudo sysctl fs.inotify.max_user_watches=52428899 + #sudo sysctl fs.inotify.max_user_instances=8192 + # TODO: Check if it's possible to replace all Docker commands with 'sudo docker', since 'newgrp' will block the script + echo "You must run 'newgrp docker' or log out and back in to apply group change." + exit 0 } # Check docker function function check_docker() { print_title "Check docker..." if ! docker -v; then - echo "Please install docker first." - return 1 + # Ask the user if they want to install docker + read -r -p "Do you want to install docker? (y/n): " install_docker + if [ "$install_docker" == "y" ]; then + install_docker + else + echo "Please install docker first. Exiting..." + return 1 + fi fi + #echo "Setting inotify..." + #sudo sysctl fs.inotify.max_user_watches=52428899 + #sudo sysctl fs.inotify.max_user_instances=8192 } # Install Kubectl function @@ -79,12 +112,14 @@ function install_kubectl() { # Install kubectl if AMD64 if [ "$ARCH" == "amd64" ]; then echo "Install kubectl AMD64..." - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" + curl -LO "https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl" sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl + sudo rm kubectl elif [ "$ARCH" == "arm64" ]; then echo "Install kubectl ARM64..." - curl -LO "https://dl.k8s.io/release/v1.21.0/bin/linux/arm64/kubectl" + curl -LO "https://dl.k8s.io/release/v1.33.0/bin/linux/arm64/kubectl" sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl + sudo rm kubectl fi print_title "Kubectl installed successfully." } @@ -111,13 +146,13 @@ function install_helm() { chmod 700 get_helm.sh ./get_helm.sh print_title "Helm installed successfully." + sudo rm get_helm.sh } # Check Helm function function check_helm() { print_title "Check helm..." 
- helm version - if ! helm version; then + if ! command -v helm &> /dev/null; then # Ask the user if they want to install helm read -r -p "Do you want to install helm? (y/n): " install_helm if [ "$install_helm" == "y" ]; then @@ -147,10 +182,14 @@ function install_liqoctl() { echo "Install liqoctl AMD64..." curl --fail -LS "https://github.com/liqotech/liqo/releases/download/v1.0.0/liqoctl-linux-amd64.tar.gz" | tar -xz sudo install -o root -g root -m 0755 liqoctl /usr/local/bin/liqoctl + sudo rm LICENSE + sudo rm liqoctl elif [ "$ARCH" == "arm64" ]; then echo "Install liqoctl ARM64..." curl --fail -LS "https://github.com/liqotech/liqo/releases/download/v1.0.0/liqoctl-linux-arm64.tar.gz" | tar -xz sudo install -o root -g root -m 0755 liqoctl /usr/local/bin/liqoctl + sudo rm LICENSE + sudo rm liqoctl fi print_title "Liqo installed successfully." } @@ -165,8 +204,7 @@ function check_liqoctl() { check_and_install_liqoctl() { if ! command -v liqoctl &> /dev/null; then echo "liqoctl not found. Installing liqoctl..." - # Example installation command for liqoctl, you may need to update this based on the official installation instructions - install_liqo_not_stable_version + install_liqoctl echo "liqoctl installed successfully." else # Check the version of the client version of liqo @@ -176,11 +214,9 @@ check_and_install_liqoctl() { exit 1 else echo "liqoctl client version: $CLIENT_VERSION" - # TODO: Update the version check based on the stable version - # Version currently used is an unstable version, rc.3 if [ "$CLIENT_VERSION" != "v1.0.0" ]; then - echo "liqoctl is not installed at the desired version of v1.0.0-rc.3. Installing liqoctl..." - install_liqo_not_stable_version + echo "liqoctl is not installed at the desired version of v1.0.0. Installing liqoctl..." + install_liqoctl else echo "liqoctl is already installed at the version $CLIENT_VERSION." fi @@ -188,20 +224,6 @@ check_and_install_liqoctl() { fi } -install_liqo_not_stable_version() { - # Delete if exists the temporary liqo folder - rm -rf /tmp/liqo - # Clone Liqo repository to local tmp folder - git clone --depth 1 --branch v1.0.0-rc.3 https://github.com/liqotech/liqo.git /tmp/liqo || { echo "Failed to clone Liqo repository"; exit 1; } - make -C /tmp/liqo ctl || { echo "Failed to install Liqo"; exit 1; } - echo "Liqo compiled successfully in /tmp/liqo." - # Create temporary alias for liqoctl to make it available in the current shell - alias liqoctl=/tmp/liqo/liqoctl - echo "liqoctl alias created to /tmp/liqo/liqoctl for the current shell." - - shopt -s expand_aliases -} - # Install jq function function install_jq() { print_title "Install jq..." @@ -228,6 +250,7 @@ function check_tools() { print_title "Check all the tools..." check_jq check_docker + check_kind check_kubectl check_helm check_liqoctl