diff --git a/DEVELOPMENT.md b/DEVELOPMENT.md
index 3922ec61..43f61910 100644
--- a/DEVELOPMENT.md
+++ b/DEVELOPMENT.md
@@ -40,7 +40,7 @@ bash hack/local-up-karmada.sh
 ```
 The minimal environment consists of one host cluster and three member clusters. The host cluster is responsible for deploying the karmada control plane; once the control plane is up, the member clusters are managed by it: member1 and member2 are managed in `push` mode, while member3 is managed in `pull` mode. After you see the success message for the installation, you can start the `api` project. To start the `api` project locally, you need the kubeconfig for the `karmada-apiserver` and `karmada-host` contexts, which you can find at `$HOME/.kube/karmada.config`. Execute `make karmada-dashboard-api` to build the binary for the `api` project, then start `api` with:
-```shell
+
+```shell
+./_output/bin/linux/amd64/karmada-dashboard-api \
+  --karmada-kubeconfig=/root/.kube/karmada.config \
+  --karmada-context=karmada-apiserver \
+  --kubeconfig=/root/.kube/karmada.config \
+  --context=karmada-host \
+  --insecure-port=8000
 ```
+
 After that, you can start the dashboard frontend project. Install the frontend dependencies with `cd ui && pnpm install` first, and then start the dashboard frontend project by executing:
 ```shell
 cd ui
 pnpm run dashboard:dev
 ```
 Then open your browser at `http://localhost:5173/`. If you can see the overview page of the dashboard, everything is OK; start developing now.
+
+---
+
+# Development
+
+## Architecture
+The Karmada dashboard project consists of a **backend** and a **frontend**. The backend contains two projects, the `api` project and the `web` project. The `web` project mainly serves static assets (page files and i18n translation resources) and forwards API requests from the frontend. The `api` project mainly manages Kubernetes resources (CRUD operations) by talking to the apiservers of `karmada-host` and `karmada-apiserver` with the `client-go` SDK; this part of the implementation lives in the pkg directory.
+
+The frontend is a `pnpm`-based monorepo. All frontend-related projects live in the `ui` directory. The `packages` directory mainly stores reusable frontend components such as `navigations`, `editors`, and even `translation tools`. The `apps` directory contains projects that can be accessed directly from the outside, such as the `dashboard` project. In production, after the projects in the apps directory are built, the compressed static assets are copied into the container by the `cp` command in the Dockerfile and served from there.
+
+## Development Environment
+
+Make sure the following software is installed and added to your path:
+
+- [Docker](https://docs.docker.com/engine/install/)
+- [Go](https://golang.org/dl/) (check the required version in [`go.mod`](go.mod))
+- [Node.js](https://nodejs.org/en/download) (check the required version in [`ui/package.json`](ui/package.json))
+- [Pnpm](https://pnpm.io/installation)
+
+## Getting Started
+
+After cloning the repository, you should first prepare all required images on your local machine.
+You can load all required images online by executing:
+```shell
+cp hack/images/image.list.load.online.example hack/images/image.list
+bash hack/ops/load-images.sh hack/images/image.list
+```
+If you have a private registry, you can also change the images by prefixing them with your private registry address in `image.list`.
+
+You can also load the images in offline mode. Before loading, download all offline image files into the `hack/images/` folder in advance, then execute:
+```shell
+cp hack/images/image.list.load.offline.example hack/images/image.list
+bash hack/ops/load-images.sh hack/images/image.list
+```
+
+After all required images are loaded on your machine, you can start a minimal environment powered by kind for development purposes:
+```shell
+bash hack/local-up-karmada.sh
+```
+
+The minimal environment consists of one host cluster and three member clusters. The host cluster is responsible for deploying the karmada control plane; once the control plane is up, the member clusters are managed by it: member1 and member2 are managed in `push` mode, while member3 is managed in `pull` mode. After you see the success message for the installation, you can start the `api` project. To start the `api` project locally, you need the kubeconfig for the `karmada-apiserver` and `karmada-host` contexts, which you can find at `$HOME/.kube/karmada.config`. Execute `make karmada-dashboard-api` to build the binary for the `api` project, then start `api` with:
+```shell
+./_output/bin/linux/amd64/karmada-dashboard-api \
+  --karmada-kubeconfig=/root/.kube/karmada.config \
+  --karmada-context=karmada-apiserver \
+  --skip-karmada-apiserver-tls-verify \
+  --kubeconfig=/root/.kube/karmada.config \
+  --context=karmada-host \
+  --insecure-port=8000
+```
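+Once the `api` process is up, you can sanity-check it before starting the frontend. A minimal check, assuming the default `--insecure-port=8000` used above (the `/livez` and `/readyz` endpoints are registered directly on the router):
+```shell
+curl http://localhost:8000/livez
+curl http://localhost:8000/readyz
+```
+Both requests should return HTTP 200 with the bodies `livez` and `readyz`.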
+After that, you can start the dashboard frontend project. Install the frontend dependencies with `cd ui && pnpm install` first, and then start the dashboard frontend project by executing:
+```shell
+cd ui
+pnpm run dashboard:dev
+```
+Then open your browser at `http://localhost:5173/`. If you can see the overview page of the dashboard, everything is OK; start developing now.
+
+## Build and Development Script Commands
+
+Backend build commands:
+
+```bash
+cd /root/dashboard && make karmada-dashboard-api && ./_output/bin/linux/amd64/karmada-dashboard-api --karmada-kubeconfig=/root/.kube/karmada.config --karmada-context=karmada-apiserver --kubeconfig=/root/.kube/karmada.config --context=karmada-host
+```
+
+```bash
+make karmada-dashboard-api && ./_output/bin/linux/amd64/karmada-dashboard-api --karmada-kubeconfig=/root/.kube/karmada.config --karmada-context=karmada-apiserver --kubeconfig=/root/.kube/config --context=default
+```
+
+Frontend build command:
+
+```bash
+cd ui && pnpm run dashboard:dev
+```
\ No newline at end of file
diff --git a/README-en.md b/README-en.md
new file mode 100644
index 00000000..aefd8e03
--- /dev/null
+++ b/README-en.md
@@ -0,0 +1,139 @@
+# Karmada-dashboard
+[](https://github.com/kubernetes/dashboard/blob/master/LICENSE)
+
+Karmada Dashboard is a general-purpose, web-based control panel for Karmada, a multi-cluster management project.
+
+## 🚀QuickStart
+
+### Prerequisites
+You need Karmada installed on a Kubernetes cluster (aka the `host cluster`), and the [karmadactl](https://karmada.io/docs/installation/install-cli-tools#install-karmadactl) or
+kubectl command-line tool must be configured to communicate with your host cluster and the Karmada control plane.
+
+If you don't already have Karmada, you can launch one by following this [tutorial](https://karmada.io/docs/installation/#install-karmada-for-development-environment).
+
+---
+### Install Karmada-dashboard
+In the following steps, we are going to install Karmada Dashboard on the `host cluster` where the Karmada
+control plane components are running. We assume that Karmada was installed in the namespace `karmada-system` and that the Karmada config is
+located at `$HOME/.kube/karmada.config`; if this differs from your environment, please modify the following commands
+accordingly.
+
+1. Switch the user-context of your Karmada config to `karmada-host`.
+
+```bash
+export KUBECONFIG="$HOME/.kube/karmada.config"
+kubectl config use-context karmada-host
+```
+
+Now you should be able to see the Karmada control plane components with the following command:
+```
+kubectl get deployments.apps -n karmada-system
+```
+
+If everything works fine, you will get output similar to the following:
+```
+NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
+karmada-aggregated-apiserver      2/2     2            2           3d
+karmada-apiserver                 1/1     1            1           3d
+karmada-controller-manager        1/1     1            1           3d
+karmada-kube-controller-manager   1/1     1            1           3d
+karmada-scheduler                 2/2     2            2           3d
+karmada-webhook                   2/2     2            2           3d
+```
+
+2. Deploy Karmada Dashboard
+
+Clone this repo to your machine:
+```
+git clone https://github.com/karmada-io/dashboard.git
+```
+
+Change to the dashboard directory:
+```
+cd dashboard
+```
+
+Create the secret based on your Karmada config; the Karmada Dashboard will use this config to talk to the Karmada API server:
+```
+kubectl create secret generic kubeconfig --from-file=kubeconfig=$HOME/.kube/karmada.config -n karmada-system
+```
+
+Deploy Karmada Dashboard:
+```
+kubectl apply -k artifacts/overlays/nodeport-mode
+```
+
+This will deploy two components in the `karmada-system` namespace:
+
+```
+kubectl get deployments.apps -n karmada-system
+
+NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
+karmada-dashboard-api   1/1     1            1           2m
+karmada-dashboard-web   1/1     1            1           2m
+...
+```
+
+Then you will be able to access the Karmada Dashboard at `http://your-karmada-host:32000`.
+Note that the Karmada Dashboard service type is `NodePort`; this exposes the dashboard on a specific port on each node
+of your `host cluster`, allowing you to access it via any node's IP address and that port.
+
+You can also use `kubectl port-forward` to forward a local port to the Dashboard's backend pod:
+```
+kubectl port-forward -n karmada-system services/karmada-dashboard-web --address 0.0.0.0 8000:8000
+```
+Then you can access it via `http://localhost:8000`.
+
+You still need a JWT token to log in to the dashboard.
+
+3. Create a Service Account
+
+Switch the user-context to karmada-apiserver:
+```bash
+kubectl config use-context karmada-apiserver
+```
+Create the service account:
+```bash
+kubectl apply -f artifacts/dashboard/karmada-dashboard-sa.yaml
+```
+
+4. Get the JWT token
+
+Execute the following command to get the JWT token:
+```bash
+kubectl -n karmada-system get secret/karmada-dashboard-secret -o go-template="{{.data.token | base64decode}}"
+```
+
+It should print a result like this:
+```bash
+eyJhbGciOiJSUzI1NiIsImtpZCI6InZLdkRNclVZSFB6SUVXczBIRm8zMDBxOHFOanQxbWU4WUk1VVVpUzZwMG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrYXJtYWRhLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrYXJtYWRhLWRhc2hib2FyZC10b2tlbi14NnhzcCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrYXJtYWRhLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE5Y2RkZDc3LTkyOWYtNGM0MS1iZDY4LWIzYWVhY2E0NGJiYiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprYXJtYWRhLXN5c3RlbTprYXJtYWRhLWRhc2hib2FyZCJ9.F0BqSxl0GVGvJZ_WNwcEFtChE7joMdIPGhv8--eN22AFTX34IzJ_2akjZcWQ63mbgr1mVY4WjYdl7KRS6w4fEQpqWkWx2Dfp3pylIcMslYRrUPirHE2YN13JDxvjtYyhBVPlbYHSj7y0rvxtfTr7iFaVRMFFiUbC3kVKNhuZtgk_tBHg4UDCQQKFALGc8xndU5nz-BF1gHgzEfLcf9Zyvxj1xLy9mEkLotZjIcnZhwiHKFYtjvCnGXxGyrTvQ5rgilAxBKv0TcmjQep_TG_Q5M9r0u8wmxhDnYd2a7wsJ3P3OnDw7smk6ikY8UzMxVoEPG7XoRcmNqhhAEutvcJoyw
+```
+
+### Login Dashboard
+Now open the Karmada dashboard at `http://your-karmada-host:32000`.
+
+Copy the token you just generated and paste it into the "Enter token" field on the login page.
+
+Once authentication passes, you can use the Karmada dashboard freely. You can follow the usage guide of karmada-dashboard to get a quick experience of it.
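+You can also verify the token against the dashboard API directly. This is a sketch that assumes the `karmada-dashboard-web` service forwards API requests (as described in the development docs) and that the port-forward above is still running; the exact response fields depend on the `v1.User` type:
+```bash
+TOKEN=$(kubectl -n karmada-system get secret/karmada-dashboard-secret -o go-template="{{.data.token | base64decode}}")
+curl -H "Authorization: Bearer $TOKEN" http://localhost:8000/api/v1/me
+```
+If the token is valid, the response should identify your service account, e.g. `karmada-dashboard`.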
+## Meeting
+
+Regular meeting for the dashboard:
+* Wednesday at 14:30 UTC+8 (Chinese)(biweekly). [Convert to your timezone](https://www.thetimezoneconverter.com/?t=14%3A30&tz=GMT%2B8&).
+* There isn't a dedicated English meeting yet. If you have any topics to discuss, please join [the community meeting](https://github.com/karmada-io/karmada?tab=readme-ov-file#meeting).
+
+Resources:
+- [Meeting Notes and Agenda](https://docs.google.com/document/d/1dX3skCE-QRBWzABq3O9cG7yhIDUWLYWmg7kGq8UHU6s/edit)
+- [Meeting Calendar](https://calendar.google.com/calendar/embed?src=a71aae8a75e3558a90683596c71382b8195bf7c84cb50e6e75d1a3e64e08480b%40group.calendar.google.com&ctz=Asia%2FShanghai) | [Subscribe](https://calendar.google.com/calendar/u/1?cid=YTcxYWFlOGE3NWUzNTU4YTkwNjgzNTk2YzcxMzgyYjgxOTViZjdjODRjYjUwZTZlNzVkMWEzZTY0ZTA4NDgwYkBncm91cC5jYWxlbmRhci5nb29nbGUuY29t)
+- [Meeting Link](https://zoom.us/j/97070047574?pwd=lXha0Sqngw4mwtmArP1sjsLMMXk34z.1)
+
+## 💻Contributing
+Karmada dashboard is still catching up with the features of Karmada; we have only implemented the basic functionality so far.
+If you want to contribute to the development of the Karmada dashboard, you can refer to the development document; we are happy to see more contributors join us.
+Please feel free to submit issues or pull requests to our repository.
+
+## License
+
+Karmada-dashboard is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
diff --git a/README.md b/README.md
index aefd8e03..2dc5aeb2 100644
--- a/README.md
+++ b/README.md
@@ -1,39 +1,49 @@
 # Karmada-dashboard
+
 [](https://github.com/kubernetes/dashboard/blob/master/LICENSE)
+[](https://deepwiki.com/HappyLadySauce/dashboard)
 
-Karmada Dashboard is a general-purpose, web-based control panel for Karmada which is a multi-cluster management project.
+Karmada Dashboard is a general-purpose, web-based control panel for Karmada, a multi-cluster management project.
 
 ## 🚀QuickStart
 
 ### Prerequisites
 
-You need to have a Karmada installed on Kubernetes(aka. `host cluster`) and the [karmadactl](https://karmada.io/docs/installation/install-cli-tools#install-karmadactl) or
-kubectl command-line tool must be configured to communicate with your host cluster and Karmada control plane.
+You need Karmada installed on a Kubernetes cluster (aka the `host cluster`), and the [karmadactl](https://karmada.io/docs/installation/install-cli-tools#install-karmadactl) or
+kubectl command-line tool must be configured to communicate with your host cluster and the Karmada control plane.
 
-If you don't already have the Karmada, you can launch one by following this [tutorial](https://karmada.io/docs/installation/#install-karmada-for-development-environment).
+If you don't already have Karmada, you can launch one by following this [tutorial](https://karmada.io/docs/installation/#install-karmada-for-development-environment).
 
 ---
 
 ### Install Karmada-dashboard
 
-In the following steps, we are going to install Karmada Dashboard on the `host cluster` where running the Karmada
-control plane components. We assume that Karmada was installed in namespace `karmada-system` and Karmada config is
-located at `$HOME/.kube/karmada.config`, if this differs from your environment, please modify the following commands
-accordingly.
+In the following steps the environment is a cluster set up with kind. We are going to install Karmada Dashboard on the `host cluster` where the Karmada
+control plane components are running. We assume that Karmada was installed in the namespace `karmada-system`
+and that the Karmada config is located at `$HOME/.kube/karmada.config`; if this differs from your environment, please modify the following commands accordingly.
 
 1. Switch the user-context of your Karmada config to `karmada-host`.
 
 ```bash
 export KUBECONFIG="$HOME/.kube/karmada.config"
 kubectl config use-context karmada-host
 ```
 
+`karmada-host` is the kubeconfig context for the Karmada host machine; for a production deployment you can use the host's `$HOME/.kube/config` directly:
+
+```bash
+export KUBECONFIG="$HOME/.kube/karmada.config"
+```
+
-Now, you should be able to see Karmada control plane components by following command:
-```
+Now you should be able to see the Karmada control plane components with the following command:
+
+```bash
 kubectl get deployments.apps -n karmada-system
 ```
 
-If everything works fine, you will get similar messages as following:
-```
+If everything works fine, you will get output similar to the following:
+
+```bash
 NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
 karmada-aggregated-apiserver      2/2     2            2           3d
 karmada-apiserver                 1/1     1            1           3d
 karmada-controller-manager        1/1     1            1           3d
 karmada-kube-controller-manager   1/1     1            1           3d
@@ -43,31 +53,35 @@ karmada-scheduler                 2/2     2            2           3d
 karmada-webhook                   2/2     2            2           3d
 ```
 
 2. Deploy Karmada Dashboard
 
-Clone this repo to your machine:
-```
-git clone https://github.com/karmada-io/dashboard.git
-```
+Clone this repo to your machine:
+
+```bash
+git clone https://github.com/HappyLadySauce/dashboard.git
+```
 
-Change to the dashboard directory:
-```
+Change to the dashboard directory:
+
+```bash
 cd dashboard
 ```
-Create the secret based on your Karmada config, the Karmada Dashboard will use this config to talk to the Karmada API server.
-```
+Create the secret based on your Karmada config; the Karmada Dashboard will use this config to talk to the Karmada API server:
+
+```bash
 kubectl create secret generic kubeconfig --from-file=kubeconfig=$HOME/.kube/karmada.config -n karmada-system
 ```
 
-Deploy Karmada Dashboard:
-```
+Deploy Karmada Dashboard:
+
+```bash
 kubectl apply -k artifacts/overlays/nodeport-mode
 ```
 
-This will deploy two components in `karmada-system` namespace:
+This will deploy two components in the `karmada-system` namespace:
 
-```
+```bash
 kubectl get deployments.apps -n karmada-system
 
 NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
 karmada-dashboard-api   1/1     1            1           2m
@@ -76,64 +90,80 @@ karmada-dashboard-web   1/1     1            1           2m
 ...
 ```
 
-Then you will be able to access the Karmada Dashboard by `http://your-karmada-host:32000`.
-Note that, the Karmada Dashboard service type is `NodePort`, this exposes the dashboard on a specific port on each node
-of your `host cluster`, allowing you to access it via any node's IP address and that port.
+Then you will be able to access the Karmada Dashboard at `http://your-karmada-host:32000`.
+Note that the Karmada Dashboard service type is `NodePort`; this exposes the dashboard on a specific port on each node of your `host cluster`,
+allowing you to access it via any node's IP address and that port.
 
-You also can use `kubectl port-forward` to forward a local port to the Dashboard's backend pod:
-```
+You can also use `kubectl port-forward` to forward a local port to the Dashboard's backend pod:
+
+```bash
 kubectl port-forward -n karmada-system services/karmada-dashboard-web --address 0.0.0.0 8000:8000
 ```
-Then you can access it via `http://localhost:8000`.
 
-You still need the jwt token to login to the dashboard.
+Then you can access it via `http://localhost:8000`.
+
+You still need a JWT token to log in to the dashboard.
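+Before fetching a token, you can confirm how the dashboard is exposed. A quick check, assuming the NodePort overlay above was applied:
+
+```bash
+kubectl get service karmada-dashboard-web -n karmada-system
+```
+
+The `PORT(S)` column should show the web service mapped to NodePort `32000`.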
-3. Create Service Account
+3. Create a Service Account
+
+For a production deployment, you need to switch to the Karmada control plane first:
+
+```bash
+export KUBECONFIG=/etc/karmada/karmada-apiserver.config
+```
+
-switch user-context to karmada-apiserver:
+Switch the user-context to karmada-apiserver:
+
 ```bash
 kubectl config use-context karmada-apiserver
 ```
-Create Service Account:
+
+Create the service account:
+
 ```bash
 kubectl apply -f artifacts/dashboard/karmada-dashboard-sa.yaml
 ```
 
-4. Get jwt token
+4. Get the JWT token
+
+Execute the following command to get the JWT token:
 
-Execute the following code to get the jwt token:
 ```bash
 kubectl -n karmada-system get secret/karmada-dashboard-secret -o go-template="{{.data.token | base64decode}}"
 ```
 
-it should print results like this:
+It should print a result like this:
+
 ```bash
 eyJhbGciOiJSUzI1NiIsImtpZCI6InZLdkRNclVZSFB6SUVXczBIRm8zMDBxOHFOanQxbWU4WUk1VVVpUzZwMG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrYXJtYWRhLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrYXJtYWRhLWRhc2hib2FyZC10b2tlbi14NnhzcCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrYXJtYWRhLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE5Y2RkZDc3LTkyOWYtNGM0MS1iZDY4LWIzYWVhY2E0NGJiYiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprYXJtYWRhLXN5c3RlbTprYXJtYWRhLWRhc2hib2FyZCJ9.F0BqSxl0GVGvJZ_WNwcEFtChE7joMdIPGhv8--eN22AFTX34IzJ_2akjZcWQ63mbgr1mVY4WjYdl7KRS6w4fEQpqWkWx2Dfp3pylIcMslYRrUPirHE2YN13JDxvjtYyhBVPlbYHSj7y0rvxtfTr7iFaVRMFFiUbC3kVKNhuZtgk_tBHg4UDCQQKFALGc8xndU5nz-BF1gHgzEfLcf9Zyvxj1xLy9mEkLotZjIcnZhwiHKFYtjvCnGXxGyrTvQ5rgilAxBKv0TcmjQep_TG_Q5M9r0u8wmxhDnYd2a7wsJ3P3OnDw7smk6ikY8UzMxVoEPG7XoRcmNqhhAEutvcJoyw
 ```
 
 ### Login Dashboard
 
-Now open Karmada-dashboard with url [http://your-karmada-host:32000 ]()
+Now open the Karmada dashboard at `http://your-karmada-host:32000`.
+
+Copy the token you just generated and paste it into the "Enter token" field on the login page.
+
+Once authentication passes, you can use the Karmada dashboard freely. You can follow the usage guide of karmada-dashboard to get a quick experience of it.
 
-copy the token you just generated and paste it into the Enter token field on the login page.
-
-Once the process of authentication passed, you can use karmada dashboard freely. You can follow the Usage of karmada-dashboard to have a quick experience of karmada dashboard.
+## Meeting
 
-## Meeting
+Regular meeting for the dashboard:
 
-Regular Meeting For dashboard:
-* Wednesday at 14:30 UTC+8 (Chinese)(biweekly). [Convert to your timezone](https://www.thetimezoneconverter.com/?t=14%3A30&tz=GMT%2B8&).
-* There isn't a dedicated English meeting yet. If you have any topics to discuss, please join [the community meeting](https://github.com/karmada-io/karmada?tab=readme-ov-file#meeting).
+* Wednesday at 14:30 UTC+8 (Chinese)(biweekly). [Convert to your timezone](https://www.thetimezoneconverter.com/?t=14%3A30&tz=GMT%2B8&).
+* There isn't a dedicated English meeting yet. If you have any topics to discuss, please join [the community meeting](https://github.com/karmada-io/karmada?tab=readme-ov-file#meeting).
-Resources:
-- [Meeting Notes and Agenda](https://docs.google.com/document/d/1dX3skCE-QRBWzABq3O9cG7yhIDUWLYWmg7kGq8UHU6s/edit)
-- [Meeting Calendar](https://calendar.google.com/calendar/embed?src=a71aae8a75e3558a90683596c71382b8195bf7c84cb50e6e75d1a3e64e08480b%40group.calendar.google.com&ctz=Asia%2FShanghai) | [Subscribe](https://calendar.google.com/calendar/u/1?cid=YTcxYWFlOGE3NWUzNTU4YTkwNjgzNTk2YzcxMzgyYjgxOTViZjdjODRjYjUwZTZlNzVkMWEzZTY0ZTA4NDgwYkBncm91cC5jYWxlbmRhci5nb29nbGUuY29t)
-- [Meeting Link](https://zoom.us/j/97070047574?pwd=lXha0Sqngw4mwtmArP1sjsLMMXk34z.1)
+Resources:
+- [Meeting Notes and Agenda](https://docs.google.com/document/d/1dX3skCE-QRBWzABq3O9cG7yhIDUWLYWmg7kGq8UHU6s/edit)
+- [Meeting Calendar](https://calendar.google.com/calendar/embed?src=a71aae8a75e3558a90683596c71382b8195bf7c84cb50e6e75d1a3e64e08480b%40group.calendar.google.com&ctz=Asia%2FShanghai) | [Subscribe](https://calendar.google.com/calendar/u/1?cid=YTcxYWFlOGE3NWUzNTU4YTkwNjgzNTk2YzcxMzgyYjgxOTViZjdjODRjYjUwZTZlNzVkMWEzZTY0ZTA4NDgwYkBncm91cC5jYWxlbmRhci5nb29nbGUuY29t)
+- [Meeting Link](https://zoom.us/j/97070047574?pwd=lXha0Sqngw4mwtmArP1sjsLMMXk34z.1)
 
 ## 💻Contributing
 
-Karmada dashboard is still catching up with the features of Karmada, we have only implemented the basic functionalities currently.
-If you want to contribute to the development of the Karmada dashboard, you can refer to the document of development, we are happy to see more contributors join us.
-Please feel free to submit issues or pull requests to our repository.
+Karmada dashboard is still catching up with the features of Karmada; we have only implemented the basic functionality so far.
+If you want to contribute to the development of the Karmada dashboard, you can refer to the development document; we are happy to see more contributors join us.
+Please feel free to submit issues or pull requests to our repository.
 
 ## License
 
-Karmada-dashboard is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
+Karmada-dashboard is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
diff --git a/artifacts/dashboard/karmada-dashboard-api.yaml b/artifacts/dashboard/karmada-dashboard-api.yaml
index d2dea3f0..5fa8f56f 100644
--- a/artifacts/dashboard/karmada-dashboard-api.yaml
+++ b/artifacts/dashboard/karmada-dashboard-api.yaml
@@ -32,7 +32,7 @@ spec:
             - --insecure-bind-address=0.0.0.0
             - --bind-address=0.0.0.0
           name: karmada-dashboard-api
-          image: karmada/karmada-dashboard-api:main
+          image: registry.example.com/karmada/karmada-dashboard-api:latest
          imagePullPolicy: IfNotPresent
           env:
             - name: GIN_MODE
diff --git a/artifacts/dashboard/karmada-dashboard-web.yaml b/artifacts/dashboard/karmada-dashboard-web.yaml
index f724b928..bbad6e9b 100644
--- a/artifacts/dashboard/karmada-dashboard-web.yaml
+++ b/artifacts/dashboard/karmada-dashboard-web.yaml
@@ -29,7 +29,8 @@ spec:
             - --bind-address=0.0.0.0
             - --dashboard-config-path=/config/dashboard-config.yaml
           name: karmada-dashboard-web
-          image: karmada/karmada-dashboard-web:main
+          # Use the image from the private registry
+          image: registry.example.com/karmada/karmada-dashboard-web:latest
          imagePullPolicy: IfNotPresent
           env:
             - name: GIN_MODE
diff --git a/build-images.sh b/build-images.sh
new file mode 100755
index 00000000..b4b688e1
--- /dev/null
+++ b/build-images.sh
@@ -0,0 +1,117 @@
+#!/bin/bash
+# Build the two Karmada Dashboard images: API and Web
+# Prefer the local image build mode
+
+set -e
+
+# Environment variables
+REPO_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+VERSION=${VERSION:-"latest"}
+REGISTRY=${REGISTRY:-"docker.io/karmada"}
+PUSH=${PUSH:-"false"} # whether to push the images to the registry
+
+# Usage help
+function show_help() {
+    echo "Usage: $0 [options]"
+    echo "Options:"
+    echo "  -v, --version VERSION    image version, defaults to 'latest'"
+    echo "  -r, --registry REGISTRY  image registry, defaults to 'docker.io/karmada'"
+    echo "  -p, --push               push the images to the registry after building"
+    echo "  -h, --help               show this help message"
+    exit 0
+}
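+# Example invocation (a sketch; the registry below is a placeholder):
+#   ./build-images.sh --registry registry.example.com/karmada --version v1.0.0 --push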
[ "$1" != "" ]; do + case $1 in + -v | --version ) shift + VERSION=$1 + ;; + -r | --registry ) shift + REGISTRY=$1 + ;; + -p | --push ) PUSH="true" + ;; + -h | --help ) show_help + ;; + * ) show_help + ;; + esac + shift +done + +# 输出构建信息 +echo "==================== Karmada Dashboard 镜像构建 ====================" +echo "版本: $VERSION" +echo "镜像仓库: $REGISTRY" +echo "推送镜像: $PUSH" +echo "==================================================================" + +# 1. 构建API服务镜像 +echo "开始构建 API 服务镜像..." + +# 编译API二进制文件 +echo "1.1 编译 karmada-dashboard-api 二进制文件" +cd $REPO_ROOT +make karmada-dashboard-api GOOS=linux + +# 构建API镜像 +echo "1.2 构建 karmada-dashboard-api 镜像" +if [ "$PUSH" = "true" ]; then + # 如果需要推送,设置OUTPUT_TYPE为registry + DOCKER_FILE=Dockerfile VERSION=$VERSION REGISTRY=$REGISTRY OUTPUT_TYPE=registry $REPO_ROOT/hack/docker.sh karmada-dashboard-api +else + DOCKER_FILE=Dockerfile VERSION=$VERSION REGISTRY=$REGISTRY $REPO_ROOT/hack/docker.sh karmada-dashboard-api +fi + +echo "API 服务镜像构建完成!" + +# 2. 构建Web服务镜像 +echo "开始构建 Web 服务镜像..." + +# 构建前端项目 +echo "2.1 构建前端项目" +cd $REPO_ROOT/ui +# 检查和安装依赖 +if [ ! -d "node_modules" ]; then + echo "安装前端依赖..." + pnpm install +fi +# 构建前端项目 +echo "编译前端项目..." +pnpm run dashboard:build +cd $REPO_ROOT + +# 编译Web二进制文件 +echo "2.2 编译 karmada-dashboard-web 二进制文件" +make karmada-dashboard-web GOOS=linux + +# 确保dist目录存在 +echo "2.3 准备前端构建产物" +mkdir -p $REPO_ROOT/_output/bin/linux/amd64/dist +# 复制前端构建产物 +cp -r $REPO_ROOT/ui/apps/dashboard/dist/* $REPO_ROOT/_output/bin/linux/amd64/dist/ + +# 构建Web镜像 +echo "2.4 构建 karmada-dashboard-web 镜像" +if [ "$PUSH" = "true" ]; then + # 如果需要推送,设置OUTPUT_TYPE为registry + DOCKER_FILE=build-web.Dockerfile VERSION=$VERSION REGISTRY=$REGISTRY OUTPUT_TYPE=registry $REPO_ROOT/hack/docker.sh karmada-dashboard-web +else + DOCKER_FILE=build-web.Dockerfile VERSION=$VERSION REGISTRY=$REGISTRY $REPO_ROOT/hack/docker.sh karmada-dashboard-web +fi + +echo "Web 服务镜像构建完成!" 
+
+# Print the results
+echo ""
+echo "==================== Build complete ===================="
+echo "API image: $REGISTRY/karmada-dashboard-api:$VERSION"
+echo "Web image: $REGISTRY/karmada-dashboard-web:$VERSION"
+if [ "$PUSH" = "true" ]; then
+    echo "Images pushed to $REGISTRY"
+else
+    echo "Run 'docker images' to see the built images"
+    echo "To push the images to a registry, use the -p or --push flag"
+fi
+echo "========================================================="
\ No newline at end of file
diff --git a/cluster/images/build-web.Dockerfile b/cluster/images/build-web.Dockerfile
index e4ded1f2..b9096763 100644
--- a/cluster/images/build-web.Dockerfile
+++ b/cluster/images/build-web.Dockerfile
@@ -16,7 +16,7 @@ FROM alpine:3.21.3
 ARG BINARY
 ARG TARGETPLATFORM
 
-RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g' /etc/apk/repositories
+# RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g' /etc/apk/repositories
 RUN apk add --no-cache ca-certificates
 #tzdata is used to parse the time zone information when using CronFederatedHPA
 RUN apk add --no-cache tzdata
diff --git a/cluster/images/buildx.Dockerfile b/cluster/images/buildx.Dockerfile
index d5eca961..52e26095 100644
--- a/cluster/images/buildx.Dockerfile
+++ b/cluster/images/buildx.Dockerfile
@@ -16,7 +16,7 @@ FROM alpine:3.21.3
 ARG BINARY
 ARG TARGETPLATFORM
 
-RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g' /etc/apk/repositories
+# RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g' /etc/apk/repositories
 RUN apk add --no-cache ca-certificates
 #tzdata is used to parse the time zone information when using CronFederatedHPA
 RUN apk add --no-cache tzdata
diff --git a/cmd/api/app/api.go b/cmd/api/app/api.go
index 51aafb0c..278eb676 100644
--- a/cmd/api/app/api.go
+++ b/cmd/api/app/api.go
@@ -54,11 +54,16 @@ import (
 )
 
 // NewAPICommand creates a *cobra.Command object with default parameters
 func NewAPICommand(ctx context.Context) *cobra.Command {
+	// Create the options object
 	opts := options.NewOptions()
+	// Create the cobra.Command object
 	cmd := &cobra.Command{
+		// Use is the name of the command
 		Use:  "karmada-dashboard-api",
 		Long: `The karmada-dashboard-api provide api for karmada-dashboard web ui. It need to access host cluster apiserver and karmada apiserver internally, it will access host cluster apiserver for creating some resource like configmap in host cluster, meanwhile it will access karmada apiserver for interactiving for purpose of managing karmada-specific resources, like cluster、override policy、propagation policy and so on.`,
+		// RunE runs the command and may return an error
 		RunE: func(_ *cobra.Command, _ []string) error {
 			// validate options
 			//if errs := opts.Validate(); len(errs) != 0 {
@@ -69,7 +74,9 @@ func NewAPICommand(ctx context.Context) *cobra.Command {
 			}
 			return nil
 		},
+		// Args validates the positional arguments
 		Args: func(cmd *cobra.Command, args []string) error {
+			// Reject any positional arguments to prevent stray subcommand input
 			for _, arg := range args {
 				if len(arg) > 0 {
 					return fmt.Errorf("%q does not take any arguments, got %q", cmd.CommandPath(), args)
@@ -78,45 +85,79 @@ func NewAPICommand(ctx context.Context) *cobra.Command {
 			return nil
 		},
 	}
+	// cliflag.NamedFlagSets holds several named flag sets used to group command-line flags
 	fss := cliflag.NamedFlagSets{}
-
+	// Create the generic flag set.
+	// FlagSet returns the flag set with the given name and adds it to the ordered name list if it is not in there yet.
 	genericFlagSet := fss.FlagSet("generic")
+	// Register the generic flags
 	opts.AddFlags(genericFlagSet)
 
 	// Set klog flags
 	logsFlagSet := fss.FlagSet("logs")
 	klogflag.Add(logsFlagSet)
-
+	// Add both the generic and the klog flag sets to the command line
 	cmd.Flags().AddFlagSet(genericFlagSet)
 	cmd.Flags().AddFlagSet(logsFlagSet)
 	return cmd
 }
 
 func run(ctx context.Context, opts *options.Options) error {
+	// klog (k8s.io/klog/v2) is the logging library used throughout:
+	//   klog.Info("plain message")                 // basic logging
+	//   klog.InfoS("Starting...", "version", v)    // structured logging
+	//   klog.V(1).Info("verbose message")          // leveled logging
 	klog.InfoS("Starting Karmada Dashboard API", "version", environment.Version)
+	// The client package (github.com/karmada-io/dashboard/pkg/client) holds the shared client configuration
 	client.InitKarmadaConfig(
+		// Set the user agent
 		client.WithUserAgent(environment.UserAgent()),
+		// Set the Karmada kubeconfig
 		client.WithKubeconfig(opts.KarmadaKubeConfig),
+		// Set the Karmada context
 		client.WithKubeContext(opts.KarmadaContext),
+		// Optionally skip TLS verification for the Karmada apiserver
 		client.WithInsecureTLSSkipVerify(opts.SkipKarmadaApiserverTLSVerify),
 	)
 
+	// Initialize the Kubernetes client configuration
 	client.InitKubeConfig(
+		// Set the user agent
 		client.WithUserAgent(environment.UserAgent()),
+		// Set the Kubernetes kubeconfig
 		client.WithKubeconfig(opts.KubeConfig),
+		// Set the Kubernetes context
 		client.WithKubeContext(opts.KubeContext),
+		// Optionally skip TLS verification for the Kubernetes apiserver
 		client.WithInsecureTLSSkipVerify(opts.SkipKubeApiserverTLSVerify),
 	)
+	// Make sure both apiservers are reachable, or exit
 	ensureAPIServerConnectionOrDie()
+	// Start serving
 	serve(opts)
+	// Initialize the dashboard configuration
 	config.InitDashboardConfig(client.InClusterClient(), ctx.Done())
+	// Wait for the context to be cancelled
 	<-ctx.Done()
+	// Exit the program
 	os.Exit(0)
 	return nil
 }
 
+// ensureAPIServerConnectionOrDie verifies connectivity to both apiservers and exits on failure
 func ensureAPIServerConnectionOrDie() {
+	// Fetch the Kubernetes apiserver version
 	versionInfo, err := client.InClusterClient().Discovery().ServerVersion()
 	if err != nil {
 		klog.Fatalf("Error while initializing connection to Kubernetes apiserver. "+
@@ -125,6 +166,7 @@ func ensureAPIServerConnectionOrDie() {
 	}
 	klog.InfoS("Successful initial request to the Kubernetes apiserver", "version", versionInfo.String())
 
+	// Fetch the Karmada apiserver version
 	karmadaVersionInfo, err := client.InClusterKarmadaClient().Discovery().ServerVersion()
 	if err != nil {
 		klog.Fatalf("Error while initializing connection to Karmada apiserver. "+
@@ -134,10 +176,15 @@ func ensureAPIServerConnectionOrDie() {
 	klog.InfoS("Successful initial request to the Karmada apiserver", "version", karmadaVersionInfo.String())
 }
 
+// serve starts the insecure HTTP endpoint
 func serve(opts *options.Options) {
+	// Compose the insecure listen address
 	insecureAddress := fmt.Sprintf("%s:%d", opts.InsecureBindAddress, opts.InsecurePort)
 	klog.V(1).InfoS("Listening and serving on", "address", insecureAddress)
+	// Run the router in a goroutine
 	go func() {
 		klog.Fatal(router.Router().Run(insecureAddress))
 	}()
 }
diff --git a/cmd/api/app/options/options.go b/cmd/api/app/options/options.go
index 9b617c3c..46bb6a06 100644
--- a/cmd/api/app/options/options.go
+++ b/cmd/api/app/options/options.go
@@ -23,6 +23,7 @@ import (
 )
 
 // Options contains everything necessary to create and run api.
+// It is populated from the command-line flags registered in AddFlags.
 type Options struct {
 	BindAddress net.IP
 	Port        int
@@ -40,11 +41,13 @@ type Options struct {
 }
 
 // NewOptions returns initialized Options.
+// The zero value is subsequently filled in by flag parsing.
 func NewOptions() *Options {
 	return &Options{}
 }
 
 // AddFlags adds flags of api to the specified FlagSet
+// It is a no-op when called on a nil receiver.
 func (o *Options) AddFlags(fs *pflag.FlagSet) {
 	if o == nil {
 		return
diff --git a/cmd/api/app/router/middleware.go b/cmd/api/app/router/middleware.go
index 29559cbd..f95acdca 100644
--- a/cmd/api/app/router/middleware.go
+++ b/cmd/api/app/router/middleware.go
@@ -28,17 +28,39 @@ import (
 )
 
 // EnsureMemberClusterMiddleware ensures that the member cluster exists.
 func EnsureMemberClusterMiddleware() gin.HandlerFunc {
 	return func(c *gin.Context) {
+		// Get the in-cluster Karmada client
 		karmadaClient := client.InClusterKarmadaClient()
+		// Look up the member cluster named in the path parameter
 		_, err := karmadaClient.ClusterV1alpha1().Clusters().Get(context.TODO(), c.Param("clustername"), metav1.GetOptions{})
 		if err != nil {
+			// The member cluster does not exist; abort with an error response
 			c.AbortWithStatusJSON(http.StatusOK, common.BaseResponse{
 				Code: 500,
 				Msg:  err.Error(),
 			})
 			return
 		}
+		// The member cluster exists; continue with the request
 		c.Next()
 	}
}
+
+// CorsMiddleware handles cross-origin requests by setting the CORS headers.
+func CorsMiddleware() gin.HandlerFunc {
+	return func(c *gin.Context) {
+		c.Writer.Header().Set("Access-Control-Allow-Origin", "*")
+		c.Writer.Header().Set("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")
+		c.Writer.Header().Set("Access-Control-Allow-Headers", "Origin, Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization")
+		c.Writer.Header().Set("Access-Control-Max-Age", "600")
+
+		if c.Request.Method == "OPTIONS" {
+			c.AbortWithStatus(http.StatusNoContent)
+			return
+		}
+
+		c.Next()
+	}
+}
\ No newline at end of file
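A quick way to observe the new middleware, assuming the api binary is running locally on its default insecure port 8000 as described in DEVELOPMENT.md (any route registered under `/api/v1` should carry the CORS headers):

```bash
curl -i http://localhost:8000/api/v1/cluster
```

The response headers should include `Access-Control-Allow-Origin: *` together with the other `Access-Control-*` headers set above.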
diff --git a/cmd/api/app/router/setup.go b/cmd/api/app/router/setup.go
index ec4d7dad..7fb25d8f 100644
--- a/cmd/api/app/router/setup.go
+++ b/cmd/api/app/router/setup.go
@@ -22,42 +22,63 @@ import (
 	"github.com/karmada-io/dashboard/pkg/environment"
 )
 
+// router is the Gin engine instance
 var (
 	router *gin.Engine
-	v1     *gin.RouterGroup
+	// v1 is the route group for /api/v1
+	v1 *gin.RouterGroup
+	// member is the route group for /api/v1/member/:clustername
 	member *gin.RouterGroup
 )
 
+// init sets up the engine, the route groups, and the health-check endpoints
 func init() {
+	// Unless running in dev mode (IS_DEV), put Gin into release mode
 	if !environment.IsDev() {
 		gin.SetMode(gin.ReleaseMode)
 	}
+	// Create the Gin engine instance
 	router = gin.Default()
+	// SetTrustedProxies sets a list of network origins (IPv4 addresses, IPv4 CIDRs, IPv6 addresses or IPv6 CIDRs) from which to trust request headers that contain an alternative client IP when `(*gin.Engine).ForwardedByClientIP` is `true`. The `TrustedProxies` feature is enabled by default and trusts all proxies by default. Passing nil disables it, so Context.ClientIP() returns the remote address directly.
 	_ = router.SetTrustedProxies(nil)
+	// Create the /api/v1 route group
 	v1 = router.Group("/api/v1")
+	// Add the CORS middleware to the global API group
 	v1.Use(CorsMiddleware())
+	// Create the /api/v1/member/:clustername route group
 	member = v1.Group("/member/:clustername")
+	// Ensure the target member cluster exists
 	member.Use(EnsureMemberClusterMiddleware())
+	// Add the CORS middleware to the member group as well
 	member.Use(CorsMiddleware())
+	// Register the /livez route
 	router.GET("/livez", func(c *gin.Context) {
 		c.String(200, "livez")
 	})
+	// Register the /readyz route
 	router.GET("/readyz", func(c *gin.Context) {
 		c.String(200, "readyz")
 	})
 }
 
 // V1 returns the router group for /api/v1 which serves resources in the control plane.
 func V1() *gin.RouterGroup {
 	return v1
 }
 
 // Router returns the main Gin engine instance.
 func Router() *gin.Engine {
 	return router
 }
 
 // MemberV1 returns the router group for /api/v1/member/:clustername which serves resources in a specific member cluster.
 func MemberV1() *gin.RouterGroup {
 	return member
 }
diff --git a/cmd/api/app/routes/auth/handler.go b/cmd/api/app/routes/auth/handler.go
index 3d044393..3204b216 100644
--- a/cmd/api/app/routes/auth/handler.go
+++ b/cmd/api/app/routes/auth/handler.go
@@ -25,33 +25,45 @@ import (
 	"github.com/karmada-io/dashboard/cmd/api/app/types/common"
 )
 
+// handleLogin handles the login request
 func handleLogin(c *gin.Context) {
+	// Allocate the LoginRequest
 	loginRequest := new(v1.LoginRequest)
+	// Bind the request body
 	if err := c.Bind(loginRequest); err != nil {
 		klog.ErrorS(err, "Could not read login request")
+		// Return a failure response
 		common.Fail(c, err)
 		return
 	}
+	// Delegate to login
 	response, _, err := login(loginRequest, c.Request)
 	if err != nil {
+		// Return a failure response
 		common.Fail(c, err)
 		return
 	}
 	common.Success(c, response)
 }
 
+// handleMe handles the request for the current user's information
 func handleMe(c *gin.Context) {
+	// Delegate to me
 	response, _, err := me(c.Request)
 	if err != nil {
 		klog.ErrorS(err, "Could not get user")
+		// Return a failure response
 		common.Fail(c, err)
 		return
 	}
-
+	// Return a success response
 	common.Success(c, response)
 }
 
+// init registers the auth routes
 func init() {
+	// Login route
 	router.V1().POST("/login", handleLogin)
+	// Current-user route
 	router.V1().GET("/me", handleMe)
 }
diff --git a/cmd/api/app/routes/auth/login.go b/cmd/api/app/routes/auth/login.go
index 677c2ed5..fb5a0732 100644
--- a/cmd/api/app/routes/auth/login.go
+++ b/cmd/api/app/routes/auth/login.go
@@ -24,21 +24,32 @@ import (
 	"github.com/karmada-io/dashboard/pkg/common/errors"
 )
 
+// login validates the token carried in the login request against the Karmada apiserver.
+// spec is the login request; request is the HTTP request.
+// It returns the login response, an HTTP status code, and an error.
 func login(spec *v1.LoginRequest, request *http.Request) (*v1.LoginResponse, int, error) {
+	// Make sure the request carries the authorization header
 	ensureAuthorizationHeader(spec, request)
+	// Get the Karmada client for this request
 	karmadaClient, err := client.GetKarmadaClientFromRequest(request)
 	if err != nil {
 		return nil, http.StatusInternalServerError, err
 	}
-
+	// Probe the Karmada apiserver version to verify the token
 	if _, err = karmadaClient.Discovery().ServerVersion(); err != nil {
+		// Map the error to an HTTP status code
 		code, err := errors.HandleError(err)
 		return nil, code, err
 	}
-
+	// Return the login response
 	return &v1.LoginResponse{Token: spec.Token}, http.StatusOK, nil
 }
 
+// ensureAuthorizationHeader copies the token from the login request into the request's authorization header
 func ensureAuthorizationHeader(spec *v1.LoginRequest, request *http.Request) {
+	// Set the authorization header
 	client.SetAuthorizationHeader(request, spec.Token)
 }
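The login endpoint can be exercised directly once the api server is running. A sketch against `localhost:8000`; the JSON field name for the token is an assumption based on `v1.LoginRequest` and should be checked against the type definition:

```bash
TOKEN=$(kubectl -n karmada-system get secret/karmada-dashboard-secret -o go-template="{{.data.token | base64decode}}")
# "token" is the assumed JSON field of v1.LoginRequest
curl -X POST http://localhost:8000/api/v1/login \
  -H 'Content-Type: application/json' \
  -d "{\"token\": \"$TOKEN\"}"
```

On success, the response echoes the token back, mirroring `v1.LoginResponse{Token: spec.Token}` above.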
diff --git a/cmd/api/app/routes/auth/me.go b/cmd/api/app/routes/auth/me.go
index bbf720c4..4d1972bb 100644
--- a/cmd/api/app/routes/auth/me.go
+++ b/cmd/api/app/routes/auth/me.go
@@ -29,10 +29,15 @@ import (
 )
 
 const (
+	// tokenServiceAccountKey is the claim key that holds the service account info in the JWT
 	tokenServiceAccountKey = "serviceaccount"
 )
 
+// me validates the request's bearer token against the Karmada apiserver and extracts the user from it.
+// It returns the user, an HTTP status code, and an error.
 func me(request *http.Request) (*v1.User, int, error) {
+	// Get the Karmada client for this request
 	karmadaClient, err := client.GetKarmadaClientFromRequest(request)
 	if err != nil {
 		code, err := errors.HandleError(err)
@@ -40,36 +45,49 @@ func me(request *http.Request) (*v1.User, int, error) {
 	}
 
 	// Make sure that authorization token is valid
 	if _, err = karmadaClient.Discovery().ServerVersion(); err != nil {
 		code, err := errors.HandleError(err)
 		return nil, code, err
 	}
 
+	// Extract the user from the bearer token
 	return getUserFromToken(client.GetBearerToken(request)), http.StatusOK, nil
 }
 
+// getUserFromToken parses the JWT (without verifying its signature) and extracts the service account name
 func getUserFromToken(token string) *v1.User {
 	parsed, _ := jwt.Parse(token, nil)
 	if parsed == nil {
 		return &v1.User{Authenticated: true}
 	}
 
+	// Read the JWT claims
 	claims := parsed.Claims.(jwt.MapClaims)
 
+	// Walk the claims looking for the serviceaccount key
 	found, value := traverse(tokenServiceAccountKey, claims)
 	if !found {
 		return &v1.User{Authenticated: true}
 	}
 
+	// Decode the value into a v1.ServiceAccount
 	var sa v1.ServiceAccount
 	ok := transcode(value, &sa)
 	if !ok {
 		return &v1.User{Authenticated: true}
 	}
 
+	// Return the resolved user
 	return &v1.User{Name: sa.Name, Authenticated: true}
 }
 
+// traverse recursively searches the map for the given key and reports whether it was found, along with its value
 func traverse(key string, m map[string]interface{}) (found bool, value interface{}) {
 	for k, v := range m {
 		if k == key {
@@ -84,6 +102,10 @@ func traverse(key string, m map[string]interface{}) (found bool, value interface
 	return false, ""
 }
 
+// transcode re-encodes in as JSON and decodes it into out, reporting success
 func transcode(in, out interface{}) bool {
 	buf := new(bytes.Buffer)
 	err := json.NewEncoder(buf).Encode(in)
 	if err != nil {
 		return false
 	}
 
+	// Decode the buffer into out
 	err = json.NewDecoder(buf).Decode(out)
 	return err == nil
 }
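For reference, you can inspect the claims that getUserFromToken sees by decoding the token's payload segment yourself. A rough sketch in plain shell (base64url re-padding done by hand; `jq` is optional pretty-printing):

```bash
TOKEN=$(kubectl -n karmada-system get secret/karmada-dashboard-secret -o go-template="{{.data.token | base64decode}}")

jwt_payload() {
  local seg="$1"
  # Re-pad to a multiple of 4 and convert base64url to standard base64
  local pad=$(( (4 - ${#seg} % 4) % 4 ))
  seg="${seg}$(printf '%*s' "$pad" '' | tr ' ' '=')"
  printf '%s' "$seg" | tr '_-' '/+' | base64 -d
}

jwt_payload "$(echo "$TOKEN" | cut -d. -f2)" | jq .
```

The output contains the service account claims that the traversal above searches for.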
%w", err) } + // 创建 karmada-kubeconfig 秘密 kubeConfigSecret := &corev1.Secret{ TypeMeta: metav1.TypeMeta{ APIVersion: "v1", @@ -85,10 +102,12 @@ func (o pullModeOption) createSecretAndRBACInMemberCluster() error { } // create karmada-kubeconfig secret to be used by karmada-agent component. + // 创建karmada-kubeconfig秘密,供karmada-agent组件使用。 if err := cmdutil.CreateOrUpdateSecret(o.memberClusterClient, kubeConfigSecret); err != nil { return fmt.Errorf("create secret %s failed: %v", kubeConfigSecret.Name, err) } + // 创建 karmada-agent ClusterRole clusterRole := &rbacv1.ClusterRole{ ObjectMeta: metav1.ObjectMeta{ Name: KarmadaAgentName, @@ -107,10 +126,12 @@ func (o pullModeOption) createSecretAndRBACInMemberCluster() error { } // create a karmada-agent ClusterRole in member cluster. + // 在成员集群中创建karmada-agent ClusterRole。 if err := cmdutil.CreateOrUpdateClusterRole(o.memberClusterClient, clusterRole); err != nil { return err } + // 创建 karmada-agent ServiceAccount sa := &corev1.ServiceAccount{ ObjectMeta: metav1.ObjectMeta{ Name: KarmadaAgentServiceAccountName, @@ -119,11 +140,13 @@ func (o pullModeOption) createSecretAndRBACInMemberCluster() error { } // create service account for karmada-agent + // 在成员集群中创建karmada-agent ServiceAccount。 _, err = karmadautil.EnsureServiceAccountExist(o.memberClusterClient, sa, false) if err != nil { return err } + // 创建 karmada-agent ClusterRoleBinding clusterRoleBinding := &rbacv1.ClusterRoleBinding{ ObjectMeta: metav1.ObjectMeta{ Name: KarmadaAgentName, @@ -143,6 +166,7 @@ func (o pullModeOption) createSecretAndRBACInMemberCluster() error { } // grant karmada-agent clusterrole to karmada-agent service account + // 授予karmada-agent ClusterRole给karmada-agent ServiceAccount。 if err := cmdutil.CreateOrUpdateClusterRoleBinding(o.memberClusterClient, clusterRoleBinding); err != nil { return err } @@ -150,7 +174,7 @@ func (o pullModeOption) createSecretAndRBACInMemberCluster() error { return nil } -// makeKarmadaAgentDeployment generate karmada-agent Deployment +// makeKarmadaAgentDeployment 生成karmada-agent Deployment func (o pullModeOption) makeKarmadaAgentDeployment() *appsv1.Deployment { karmadaAgent := &appsv1.Deployment{ TypeMeta: metav1.TypeMeta{ @@ -239,6 +263,7 @@ func (o pullModeOption) makeKarmadaAgentDeployment() *appsv1.Deployment { return karmadaAgent } +// accessClusterInPullMode 在拉取模式下访问集群 func accessClusterInPullMode(opts *pullModeOption) error { _, exist, err := karmadautil.GetClusterWithKarmadaClient(opts.karmadaClient, opts.memberClusterName) if err != nil { @@ -271,6 +296,7 @@ func accessClusterInPullMode(opts *pullModeOption) error { return nil } +// pushModeOption 推送模式选项 type pushModeOption struct { karmadaClient karmadaclientset.Interface clusterName string @@ -278,6 +304,7 @@ type pushModeOption struct { memberClusterRestConfig *rest.Config } +// accessClusterInPushMode 在推送模式下访问集群 func accessClusterInPushMode(opts *pushModeOption) error { registerOption := karmadautil.ClusterRegisterOption{ ClusterNamespace: ClusterNamespace, @@ -287,49 +314,64 @@ func accessClusterInPushMode(opts *pushModeOption) error { ClusterConfig: opts.memberClusterRestConfig, } + // 创建控制平面客户端 controlPlaneKubeClient := kubeclient.NewForConfigOrDie(opts.karmadaRestConfig) + // 创建成员集群客户端 memberClusterKubeClient := kubeclient.NewForConfigOrDie(opts.memberClusterRestConfig) + // 获取成员集群ID id, err := karmadautil.ObtainClusterID(memberClusterKubeClient) if err != nil { klog.ErrorS(err, "ObtainClusterID failed") return err } + // 检查集群ID是否唯一 exist, name, err := 
-// makeKarmadaAgentDeployment generate karmada-agent Deployment
+// makeKarmadaAgentDeployment generates the karmada-agent Deployment
 func (o pullModeOption) makeKarmadaAgentDeployment() *appsv1.Deployment {
 	karmadaAgent := &appsv1.Deployment{
 		TypeMeta: metav1.TypeMeta{
@@ -239,6 +263,7 @@ func (o pullModeOption) makeKarmadaAgentDeployment() *appsv1.Deployment {
 	return karmadaAgent
 }
 
+// accessClusterInPullMode registers the member cluster in pull mode
 func accessClusterInPullMode(opts *pullModeOption) error {
 	_, exist, err := karmadautil.GetClusterWithKarmadaClient(opts.karmadaClient, opts.memberClusterName)
 	if err != nil {
@@ -271,6 +296,7 @@ func accessClusterInPullMode(opts *pullModeOption) error {
 	return nil
 }
 
+// pushModeOption holds everything needed to register a member cluster in push mode
 type pushModeOption struct {
 	karmadaClient           karmadaclientset.Interface
 	clusterName             string
@@ -278,6 +304,7 @@ type pushModeOption struct {
 	memberClusterRestConfig *rest.Config
 }
 
+// accessClusterInPushMode registers the member cluster in push mode
 func accessClusterInPushMode(opts *pushModeOption) error {
 	registerOption := karmadautil.ClusterRegisterOption{
 		ClusterNamespace: ClusterNamespace,
@@ -287,49 +314,64 @@ func accessClusterInPushMode(opts *pushModeOption) error {
 		ClusterConfig:    opts.memberClusterRestConfig,
 	}
 
+	// Create the control plane client
 	controlPlaneKubeClient := kubeclient.NewForConfigOrDie(opts.karmadaRestConfig)
+	// Create the member cluster client
 	memberClusterKubeClient := kubeclient.NewForConfigOrDie(opts.memberClusterRestConfig)
+	// Obtain the member cluster ID
 	id, err := karmadautil.ObtainClusterID(memberClusterKubeClient)
 	if err != nil {
 		klog.ErrorS(err, "ObtainClusterID failed")
 		return err
 	}
+	// Check that the cluster ID is unique
 	exist, name, err := karmadautil.IsClusterIdentifyUnique(opts.karmadaClient, id)
 	if err != nil {
 		klog.ErrorS(err, "Check ClusterIdentify failed")
 		return err
 	}
+	// If the same cluster is already registered, return an error
 	if !exist {
 		return fmt.Errorf("the same cluster has been registered with name %s", name)
 	}
+	// Record the cluster ID
 	registerOption.ClusterID = id
-
+	// Obtain credentials from the member cluster
 	clusterSecret, impersonatorSecret, err := karmadautil.ObtainCredentialsFromMemberCluster(memberClusterKubeClient, registerOption)
 	if err != nil {
 		klog.ErrorS(err, "ObtainCredentialsFromMemberCluster failed")
 		return err
 	}
+	// Record the credentials
 	registerOption.Secret = *clusterSecret
 	registerOption.ImpersonatorSecret = *impersonatorSecret
-
+	// Register the cluster in the control plane
 	err = karmadautil.RegisterClusterInControllerPlane(registerOption, controlPlaneKubeClient, generateClusterInControllerPlane)
 	if err != nil {
 		return err
 	}
+	// Log success
 	klog.Infof("cluster(%s) is joined successfully\n", opts.clusterName)
 	return nil
 }
 
+// generateClusterInControllerPlane builds the Cluster object in the control plane
 func generateClusterInControllerPlane(opts karmadautil.ClusterRegisterOption) (*clusterv1alpha1.Cluster, error) {
 	clusterObj := &clusterv1alpha1.Cluster{}
+	// Set the cluster name
 	clusterObj.Name = opts.ClusterName
+	// Set the sync mode
 	clusterObj.Spec.SyncMode = clusterv1alpha1.Push
+	// Set the API endpoint
 	clusterObj.Spec.APIEndpoint = opts.ClusterConfig.Host
+	// Set the cluster ID
 	clusterObj.Spec.ID = opts.ClusterID
+	// Reference the cluster secret
 	clusterObj.Spec.SecretRef = &clusterv1alpha1.LocalSecretReference{
 		Namespace: opts.Secret.Namespace,
 		Name:      opts.Secret.Name,
 	}
+	// Reference the impersonator secret
 	clusterObj.Spec.ImpersonatorSecretRef = &clusterv1alpha1.LocalSecretReference{
 		Namespace: opts.ImpersonatorSecret.Namespace,
 		Name:      opts.ImpersonatorSecret.Name,
@@ -347,6 +389,7 @@ func generateClusterInControllerPlane(opts karmadautil.ClusterRegisterOption) (*
 		clusterObj.Spec.Region = opts.ClusterRegion
 	}
 
+	// Copy the TLS and proxy settings
 	clusterObj.Spec.InsecureSkipTLSVerification = opts.ClusterConfig.TLSClientConfig.Insecure
 
 	if opts.ClusterConfig.Proxy != nil {
@@ -357,11 +400,13 @@ func generateClusterInControllerPlane(opts karmadautil.ClusterRegisterOption) (*
 		clusterObj.Spec.ProxyURL = url.String()
 	}
 
+	// Create the control plane Karmada client
 	controlPlaneKarmadaClient := karmadaclientset.NewForConfigOrDie(opts.ControlPlaneConfig)
+	// Create the Cluster object
 	cluster, err := karmadautil.CreateClusterObject(controlPlaneKarmadaClient, clusterObj)
 	if err != nil {
 		return nil, fmt.Errorf("failed to create cluster(%s) object. error: %v", opts.ClusterName, err)
 	}
-
+	// Return the created cluster
 	return cluster, nil
 }
error: %v", opts.ClusterName, err) } - + // 返回集群对象 return cluster, nil } diff --git a/cmd/api/app/routes/cluster/handler.go b/cmd/api/app/routes/cluster/handler.go index 2cfb4d2f..124bdefc 100644 --- a/cmd/api/app/routes/cluster/handler.go +++ b/cmd/api/app/routes/cluster/handler.go @@ -36,46 +36,69 @@ import ( "github.com/karmada-io/dashboard/pkg/resource/cluster" ) +// 获取集群列表 func handleGetClusterList(c *gin.Context) { + // 获取Karmada客户端 karmadaClient := client.InClusterKarmadaClient() + // 解析数据选择路径参数 dataSelect := common.ParseDataSelectPathParameter(c) + // 获取集群列表 result, err := cluster.GetClusterList(karmadaClient, dataSelect) if err != nil { + // 打印错误信息 klog.ErrorS(err, "GetClusterList failed") + // 返回错误 common.Fail(c, err) return } + // 返回成功 common.Success(c, result) } +// 获取集群详情 func handleGetClusterDetail(c *gin.Context) { + // 获取Karmada客户端 karmadaClient := client.InClusterKarmadaClient() + // 获取集群名称 name := c.Param("name") + // 获取集群详情 result, err := cluster.GetClusterDetail(karmadaClient, name) if err != nil { + // 打印错误信息 klog.ErrorS(err, "GetClusterDetail failed") + // 返回错误 common.Fail(c, err) return } + // 返回成功 common.Success(c, result) } +// 创建集群 func handlePostCluster(c *gin.Context) { + // 获取集群请求 clusterRequest := new(v1.PostClusterRequest) + // 解析集群请求 if err := c.ShouldBind(clusterRequest); err != nil { + // 打印错误信息 klog.ErrorS(err, "Could not read cluster request") + // 返回错误 common.Fail(c, err) return } + // 解析成员集群端点 memberClusterEndpoint, err := parseEndpointFromKubeconfig(clusterRequest.MemberClusterKubeConfig) if err != nil { + // 打印错误信息 klog.ErrorS(err, "Could not parse member cluster endpoint") + // 返回错误 common.Fail(c, err) return } clusterRequest.MemberClusterEndpoint = memberClusterEndpoint + // 获取Karmada客户端 karmadaClient := client.InClusterKarmadaClient() - + // 如果同步模式为拉取模式 if clusterRequest.SyncMode == v1alpha1.Pull { memberClusterClient, err := client.KubeClientSetFromKubeConfig(clusterRequest.MemberClusterKubeConfig) if err != nil { @@ -89,6 +112,7 @@ func handlePostCluster(c *gin.Context) { common.Fail(c, err) return } + // 创建拉取模式选项 opts := &pullModeOption{ karmadaClient: karmadaClient, karmadaAgentCfg: apiConfig, @@ -97,6 +121,7 @@ func handlePostCluster(c *gin.Context) { memberClusterName: clusterRequest.MemberClusterName, memberClusterEndpoint: clusterRequest.MemberClusterEndpoint, } + // 访问集群 if err = accessClusterInPullMode(opts); err != nil { klog.ErrorS(err, "accessClusterInPullMode failed") common.Fail(c, err) @@ -105,15 +130,19 @@ func handlePostCluster(c *gin.Context) { common.Success(c, "ok") } } else if clusterRequest.SyncMode == v1alpha1.Push { + // 获取成员集群REST配置 memberClusterRestConfig, err := client.LoadRestConfigFromKubeConfig(clusterRequest.MemberClusterKubeConfig) if err != nil { klog.ErrorS(err, "Generate rest config from memberClusterKubeconfig failed") + // 返回错误 common.Fail(c, err) return } + // 获取Karmada配置 restConfig, _, err := client.GetKarmadaConfig() if err != nil { klog.ErrorS(err, "Get restConfig failed") + // 返回错误 common.Fail(c, err) return } @@ -123,36 +152,47 @@ func handlePostCluster(c *gin.Context) { karmadaRestConfig: restConfig, memberClusterRestConfig: memberClusterRestConfig, } + // 访问集群 if err := accessClusterInPushMode(opts); err != nil { klog.ErrorS(err, "accessClusterInPushMode failed") + // 返回错误 common.Fail(c, err) return } + // 打印成功信息 klog.Infof("accessClusterInPushMode success") + // 返回成功 common.Success(c, "ok") } else { + // 打印错误信息 klog.Errorf("Unknown sync mode %s", clusterRequest.SyncMode) + // 返回错误 common.Fail(c, 
fmt.Errorf("unknown sync mode %s", clusterRequest.SyncMode)) } } +// 更新集群 func handlePutCluster(c *gin.Context) { clusterRequest := new(v1.PutClusterRequest) name := c.Param("name") if err := c.ShouldBind(clusterRequest); err != nil { + // 打印错误信息 klog.ErrorS(err, "Could not read handlePutCluster request") + // 返回错误 common.Fail(c, err) return } + // 获取Karmada客户端 karmadaClient := client.InClusterKarmadaClient() memberCluster, err := karmadaClient.ClusterV1alpha1().Clusters().Get(context.TODO(), name, metav1.GetOptions{}) if err != nil { + // 打印错误信息 klog.ErrorS(err, "Get cluster failed") + // 返回错误 common.Fail(c, err) return } - - // assume that the frontend can fetch the whole labels and taints + // 假设前端可以获取整个标签和污点 labels := make(map[string]string) if clusterRequest.Labels != nil { for _, labelItem := range *clusterRequest.Labels { @@ -160,7 +200,7 @@ func handlePutCluster(c *gin.Context) { } memberCluster.Labels = labels } - + // 假设前端可以获取整个污点 taints := make([]corev1.Taint, 0) if clusterRequest.Taints != nil { for _, taintItem := range *clusterRequest.Taints { @@ -172,72 +212,103 @@ func handlePutCluster(c *gin.Context) { } memberCluster.Spec.Taints = taints } - + // 更新集群 _, err = karmadaClient.ClusterV1alpha1().Clusters().Update(context.TODO(), memberCluster, metav1.UpdateOptions{}) if err != nil { + // 打印错误信息 klog.ErrorS(err, "Update cluster failed") + // 返回错误 common.Fail(c, err) return } + // 返回成功 common.Success(c, "ok") } +// 删除集群 func handleDeleteCluster(c *gin.Context) { + // 获取上下文 ctx := context.Context(c) + // 获取删除集群请求 clusterRequest := new(v1.DeleteClusterRequest) + // 解析删除集群请求 if err := c.ShouldBindUri(&clusterRequest); err != nil { + // 返回错误 common.Fail(c, err) return } clusterName := clusterRequest.MemberClusterName + // 获取Karmada客户端 karmadaClient := client.InClusterKarmadaClient() + // 等待时间 waitDuration := time.Second * 60 err := karmadaClient.ClusterV1alpha1().Clusters().Delete(ctx, clusterName, metav1.DeleteOptions{}) if apierrors.IsNotFound(err) { + // 返回错误 common.Fail(c, fmt.Errorf("no cluster object %s found in karmada control Plane", clusterName)) return } if err != nil { + // 打印错误信息 klog.Errorf("Failed to delete cluster object. cluster name: %s, error: %v", clusterName, err) + // 返回错误 common.Fail(c, err) return } // make sure the given cluster object has been deleted err = wait.PollUntilContextTimeout(ctx, 1*time.Second, waitDuration, true, func(ctx context.Context) (done bool, err error) { + // 获取集群 _, err = karmadaClient.ClusterV1alpha1().Clusters().Get(ctx, clusterName, metav1.GetOptions{}) if apierrors.IsNotFound(err) { return true, nil } if err != nil { + // 打印错误信息 klog.Errorf("Failed to get cluster %s. err: %v", clusterName, err) + // 返回错误 return false, err } + // 打印信息 klog.Infof("Waiting for the cluster object %s to be deleted", clusterName) + // 返回false return false, nil }) if err != nil { + // 打印错误信息 klog.Errorf("Failed to delete cluster object. 
diff --git a/cmd/api/app/routes/clusteroverridepolicy/handler.go b/cmd/api/app/routes/clusteroverridepolicy/handler.go
index 5054156a..498c2f77 100644
--- a/cmd/api/app/routes/clusteroverridepolicy/handler.go
+++ b/cmd/api/app/routes/clusteroverridepolicy/handler.go
@@ -32,6 +32,7 @@ import (
 	"github.com/karmada-io/dashboard/pkg/resource/clusteroverridepolicy"
 )
 
+// handleGetClusterOverridePolicyList returns the list of cluster override policies
 func handleGetClusterOverridePolicyList(c *gin.Context) {
 	karmadaClient := client.InClusterKarmadaClient()
 	dataSelect := common.ParseDataSelectPathParameter(c)
@@ -44,6 +45,7 @@ func handleGetClusterOverridePolicyList(c *gin.Context) {
 	common.Success(c, clusterOverrideList)
 }
 
+// handleGetClusterOverridePolicyDetail returns the details of a cluster override policy
 func handleGetClusterOverridePolicyDetail(c *gin.Context) {
 	karmadaClient := client.InClusterKarmadaClient()
 	name := c.Param("clusterOverridePolicyName")
@@ -56,6 +58,7 @@ func handleGetClusterOverridePolicyDetail(c *gin.Context) {
 	common.Success(c, result)
 }
 
+// handlePostClusterOverridePolicy creates a cluster override policy
 func handlePostClusterOverridePolicy(c *gin.Context) {
 	ctx := context.Context(c)
 	overridepolicyRequest := new(v1.PostOverridePolicyRequest)
@@ -91,9 +94,13 @@ func handlePostClusterOverridePolicy(c *gin.Context) {
 	common.Success(c, "ok")
 }
 
+// init registers the cluster override policy routes
 func init() {
 	r := router.V1()
+	// List cluster override policies
 	r.GET("/clusteroverridepolicy", handleGetClusterOverridePolicyList)
+	// Get cluster override policy details
 	r.GET("/clusteroverridepolicy/:clusterOverridePolicyName", handleGetClusterOverridePolicyDetail)
+	// Create a cluster override policy
 	r.POST("/clusteroverridepolicy", handlePostClusterOverridePolicy)
 }
diff --git a/cmd/api/app/routes/clusterpropagationpolicy/handler.go b/cmd/api/app/routes/clusterpropagationpolicy/handler.go
index f510f33e..0f922d01 100644
--- a/cmd/api/app/routes/clusterpropagationpolicy/handler.go
+++ b/cmd/api/app/routes/clusterpropagationpolicy/handler.go
@@ -32,6 +32,7 @@ import (
 	"github.com/karmada-io/dashboard/pkg/resource/clusterpropagationpolicy"
 )
 
+// handleGetClusterPropagationPolicyList returns the list of cluster propagation policies
 func handleGetClusterPropagationPolicyList(c *gin.Context) {
 	karmadaClient := client.InClusterKarmadaClient()
 	dataSelect := common.ParseDataSelectPathParameter(c)
@@ -44,6 +45,7 @@ func handleGetClusterPropagationPolicyList(c *gin.Context) {
 	common.Success(c, clusterPropagationList)
 }
 
+// handleGetClusterPropagationPolicyDetail returns the details of a cluster propagation policy
 func handleGetClusterPropagationPolicyDetail(c *gin.Context) {
 	karmadaClient := client.InClusterKarmadaClient()
 	name := c.Param("clusterPropagationPolicyName")
@@ -56,6 +58,7 @@ func handleGetClusterPropagationPolicyDetail(c *gin.Context) {
 	common.Success(c, result)
 }
 
+// handlePostClusterPropagationPolicy creates a cluster propagation policy
 func handlePostClusterPropagationPolicy(c *gin.Context) {
 	ctx := context.Context(c)
 	propagationpolicyRequest := new(v1.PostPropagationPolicyRequest)
@@ -91,9 +94,13 @@ func handlePostClusterPropagationPolicy(c *gin.Context) {
 	common.Success(c, "ok")
 }
 
+// init registers the cluster propagation policy routes
 func init() {
 	r := router.V1()
+	// List cluster propagation policies
 	r.GET("/clusterpropagationpolicy", handleGetClusterPropagationPolicyList)
+	// Get cluster propagation policy details
 	r.GET("/clusterpropagationpolicy/:clusterPropagationPolicyName", handleGetClusterPropagationPolicyDetail)
+	// Create a cluster propagation policy
 	r.POST("/clusterpropagationpolicy", handlePostClusterPropagationPolicy)
 }
diff --git a/cmd/api/app/routes/config/handler.go b/cmd/api/app/routes/config/handler.go
index b4a383f2..8ac27f42 100644
--- a/cmd/api/app/routes/config/handler.go
+++ b/cmd/api/app/routes/config/handler.go
@@ -28,12 +28,14 @@ import (
 )

 // GetDashboardConfig handles the request to retrieve the dashboard configuration.
 func GetDashboardConfig(c *gin.Context) {
 	dashboardConfig := config.GetDashboardConfig()
 	common.Success(c, dashboardConfig)
 }

 // SetDashboardConfig handles the request to update the dashboard configuration.
 func SetDashboardConfig(c *gin.Context) {
 	setDashboardConfigRequest := new(v1.SetDashboardConfigRequest)
 	if err := c.ShouldBind(setDashboardConfigRequest); err != nil {
@@ -62,8 +64,11 @@ func SetDashboardConfig(c *gin.Context) {
 	common.Success(c, "ok")
 }

+// init registers the dashboard config routes.
 func init() {
 	r := router.V1()
+	// Get the dashboard configuration.
 	r.GET("/config", GetDashboardConfig)
+	// Update the dashboard configuration.
 	r.POST("/config", SetDashboardConfig)
 }
diff --git a/cmd/api/app/routes/configmap/handler.go b/cmd/api/app/routes/configmap/handler.go
index cf1d0b24..55ba6ca3 100644
--- a/cmd/api/app/routes/configmap/handler.go
+++ b/cmd/api/app/routes/configmap/handler.go
@@ -25,6 +25,7 @@ import (
 	"github.com/karmada-io/dashboard/pkg/resource/configmap"
 )

+// handleGetConfigMap lists ConfigMaps.
 func handleGetConfigMap(c *gin.Context) {
 	k8sClient := client.InClusterClientForKarmadaAPIServer()
 	dataSelect := common.ParseDataSelectPathParameter(c)
@@ -37,6 +38,7 @@ func handleGetConfigMap(c *gin.Context) {
 	common.Success(c, result)
 }

+// handleGetConfigMapDetail returns the detail of a ConfigMap.
 func handleGetConfigMapDetail(c *gin.Context) {
 	k8sClient := client.InClusterClientForKarmadaAPIServer()
 	namespace := c.Param("namespace")
@@ -49,9 +51,12 @@ func handleGetConfigMapDetail(c *gin.Context) {
 	common.Success(c, result)
 }

+// init registers the ConfigMap routes.
 func init() {
 	r := router.V1()
+	// List ConfigMaps across all namespaces.
 	r.GET("/configmap", handleGetConfigMap)
+	// List ConfigMaps within a namespace; the three-segment route returns a single ConfigMap.
 	r.GET("/configmap/:namespace", handleGetConfigMap)
 	r.GET("/configmap/:namespace/:name", handleGetConfigMapDetail)
 }
diff --git a/cmd/api/app/routes/cronjob/handler.go b/cmd/api/app/routes/cronjob/handler.go
index ee62ac0c..6b27a933 100644
--- a/cmd/api/app/routes/cronjob/handler.go
+++ b/cmd/api/app/routes/cronjob/handler.go
@@ -26,6 +26,7 @@ import (
 	"github.com/karmada-io/dashboard/pkg/resource/event"
 )

+// handleGetCronJob lists cron jobs.
 func handleGetCronJob(c *gin.Context) {
 	namespace := common.ParseNamespacePathParameter(c)
 	dataSelect := common.ParseDataSelectPathParameter(c)
@@ -38,6 +39,7 @@ func handleGetCronJob(c *gin.Context) {
 	common.Success(c, result)
 }

+// handleGetCronJobDetail returns the detail of a cron job.
 func handleGetCronJobDetail(c *gin.Context) {
 	namespace := c.Param("namespace")
 	name := c.Param("statefulset")
@@ -50,6 +52,7 @@ func handleGetCronJobDetail(c *gin.Context) {
 	common.Success(c, result)
 }

+// handleGetCronJobEvents returns the events of a cron job.
 func handleGetCronJobEvents(c *gin.Context) {
 	namespace := c.Param("namespace")
 	name := c.Param("statefulset")
@@ -62,10 +65,15 @@ func handleGetCronJobEvents(c *gin.Context) {
 	}
 	common.Success(c, result)
 }

+// init registers the cron job routes. Note the name segment is declared as
+// :statefulset because the handlers read it via c.Param("statefulset").
 func init() {
 	r := router.V1()
+	// List cron jobs across all namespaces.
 	r.GET("/cronjob", handleGetCronJob)
+	// List cron jobs within a namespace.
 	r.GET("/cronjob/:namespace", handleGetCronJob)
 	r.GET("/cronjob/:namespace/:statefulset", handleGetCronJobDetail)
+	// Get the events of a cron job.
 	r.GET("/cronjob/:namespace/:statefulset/event", handleGetCronJobEvents)
 }
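The `:statefulset` parameter name above looks odd for cron jobs, but gin resolves path parameters strictly by name, so the route declaration and `c.Param` must agree. A standalone sketch of that binding (hypothetical handler and port, not part of this change):

```go
package main

import "github.com/gin-gonic/gin"

func main() {
	r := gin.Default()
	// The second segment holds a CronJob name, but it must be read back with
	// the exact parameter name declared in the route: "statefulset".
	r.GET("/cronjob/:namespace/:statefulset", func(c *gin.Context) {
		c.JSON(200, gin.H{
			"namespace": c.Param("namespace"),
			"name":      c.Param("statefulset"),
		})
	})
	_ = r.Run(":8000") // assumption: any free port works for the demo
}
```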
diff --git a/cmd/api/app/routes/daemonset/handler.go b/cmd/api/app/routes/daemonset/handler.go
index b0b0f88f..ce3a8aca 100644
--- a/cmd/api/app/routes/daemonset/handler.go
+++ b/cmd/api/app/routes/daemonset/handler.go
@@ -26,6 +26,7 @@ import (
 	"github.com/karmada-io/dashboard/pkg/resource/event"
 )

+// handleGetDaemonset lists daemonsets.
 func handleGetDaemonset(c *gin.Context) {
 	namespace := common.ParseNamespacePathParameter(c)
 	dataSelect := common.ParseDataSelectPathParameter(c)
@@ -38,6 +39,7 @@ func handleGetDaemonset(c *gin.Context) {
 	common.Success(c, result)
 }

+// handleGetDaemonsetDetail returns the detail of a daemonset.
 func handleGetDaemonsetDetail(c *gin.Context) {
 	namespace := c.Param("namespace")
 	name := c.Param("statefulset")
@@ -50,6 +52,7 @@ func handleGetDaemonsetDetail(c *gin.Context) {
 	common.Success(c, result)
 }

+// handleGetDaemonsetEvents returns the events of a daemonset.
 func handleGetDaemonsetEvents(c *gin.Context) {
 	namespace := c.Param("namespace")
 	name := c.Param("statefulset")
@@ -62,10 +65,15 @@ func handleGetDaemonsetEvents(c *gin.Context) {
 	}
 	common.Success(c, result)
 }

+// init registers the daemonset routes (the name segment reuses the
+// :statefulset parameter name expected by the handlers).
 func init() {
 	r := router.V1()
+	// List daemonsets across all namespaces.
 	r.GET("/daemonset", handleGetDaemonset)
+	// List daemonsets within a namespace.
 	r.GET("/daemonset/:namespace", handleGetDaemonset)
 	r.GET("/daemonset/:namespace/:statefulset", handleGetDaemonsetDetail)
+	// Get the events of a daemonset.
 	r.GET("/daemonset/:namespace/:statefulset/event", handleGetDaemonsetEvents)
 }
diff --git a/cmd/api/app/routes/deployment/handler.go b/cmd/api/app/routes/deployment/handler.go
index 1457985f..07d92439 100644
--- a/cmd/api/app/routes/deployment/handler.go
+++ b/cmd/api/app/routes/deployment/handler.go
@@ -33,6 +33,7 @@ import (
 	"github.com/karmada-io/dashboard/pkg/resource/event"
 )

+// handlerCreateDeployment creates a deployment.
 func handlerCreateDeployment(c *gin.Context) {
 	ctx := context.Context(c)
 	createDeploymentRequest := new(v1.CreateDeploymentRequest)
@@ -67,6 +68,7 @@ func handlerCreateDeployment(c *gin.Context) {
 	common.Success(c, result)
 }

+// handleGetDeployments lists deployments.
 func handleGetDeployments(c *gin.Context) {
 	namespace := common.ParseNamespacePathParameter(c)
 	dataSelect := common.ParseDataSelectPathParameter(c)
@@ -79,6 +81,7 @@ func handleGetDeployments(c *gin.Context) {
 	common.Success(c, result)
 }

+// handleGetDeploymentDetail returns the detail of a deployment.
 func handleGetDeploymentDetail(c *gin.Context) {
 	namespace := c.Param("namespace")
 	name := c.Param("deployment")
@@ -91,6 +94,7 @@ func handleGetDeploymentDetail(c *gin.Context) {
 	common.Success(c, result)
 }

+// handleGetDeploymentEvents returns the events of a deployment.
 func handleGetDeploymentEvents(c *gin.Context) {
 	namespace := c.Param("namespace")
 	name := c.Param("deployment")
@@ -103,11 +107,18 @@ func handleGetDeploymentEvents(c *gin.Context) {
 	}
 	common.Success(c, result)
 }

+// init registers the deployment routes.
 func init() {
 	r := router.V1()
+	// List deployments across all namespaces.
 	r.GET("/deployment", handleGetDeployments)
+	// List deployments within a namespace.
 	r.GET("/deployment/:namespace", handleGetDeployments)
+	// Get deployment detail.
 	r.GET("/deployment/:namespace/:deployment", handleGetDeploymentDetail)
+	// Get deployment events.
 	r.GET("/deployment/:namespace/:deployment/event", handleGetDeploymentEvents)
+	// Create a deployment.
 	r.POST("/deployment", handlerCreateDeployment)
 }
diff --git a/cmd/api/app/routes/ingress/handler.go b/cmd/api/app/routes/ingress/handler.go
index 658a9ff1..a415dcc2 100644
--- a/cmd/api/app/routes/ingress/handler.go
+++ b/cmd/api/app/routes/ingress/handler.go
@@ -25,6 +25,7 @@ import (
 	"github.com/karmada-io/dashboard/pkg/resource/ingress"
 )

+// handleGetIngress lists ingresses.
 func handleGetIngress(c *gin.Context) {
 	k8sClient := client.InClusterClientForKarmadaAPIServer()
 	dataSelect := common.ParseDataSelectPathParameter(c)
@@ -37,6 +38,7 @@
common.Success(c, result) } +// 获取ingress详情 func handleGetIngressDetail(c *gin.Context) { k8sClient := client.InClusterClientForKarmadaAPIServer() namespace := c.Param("namespace") @@ -49,9 +51,13 @@ func handleGetIngressDetail(c *gin.Context) { common.Success(c, result) } +// 初始化路由 func init() { r := router.V1() + // 获取ingress列表 r.GET("/ingress", handleGetIngress) + // 获取ingress列表 r.GET("/ingress/:namespace", handleGetIngress) + // 获取ingress详情 r.GET("/ingress/:namespace/:service", handleGetIngressDetail) } diff --git a/cmd/api/app/routes/job/handler.go b/cmd/api/app/routes/job/handler.go index eb136d40..2a04a931 100644 --- a/cmd/api/app/routes/job/handler.go +++ b/cmd/api/app/routes/job/handler.go @@ -26,6 +26,7 @@ import ( "github.com/karmada-io/dashboard/pkg/resource/job" ) +// 获取job列表 func handleGetJob(c *gin.Context) { namespace := common.ParseNamespacePathParameter(c) dataSelect := common.ParseDataSelectPathParameter(c) @@ -38,6 +39,7 @@ func handleGetJob(c *gin.Context) { common.Success(c, result) } +// 获取job详情 func handleGetJobDetail(c *gin.Context) { namespace := c.Param("namespace") name := c.Param("statefulset") @@ -50,6 +52,7 @@ func handleGetJobDetail(c *gin.Context) { common.Success(c, result) } +// 获取job事件 func handleGetJobEvents(c *gin.Context) { namespace := c.Param("namespace") name := c.Param("statefulset") @@ -62,10 +65,16 @@ func handleGetJobEvents(c *gin.Context) { } common.Success(c, result) } + +// 初始化路由 func init() { r := router.V1() + // 获取job列表 r.GET("/job", handleGetJob) + // 获取job列表 r.GET("/job/:namespace", handleGetJob) + // 获取job详情 r.GET("/job/:namespace/:statefulset", handleGetJobDetail) + // 获取job事件 r.GET("/job/:namespace/:statefulset/event", handleGetJobEvents) } diff --git a/cmd/api/app/routes/member/deployment/handler.go b/cmd/api/app/routes/member/deployment/handler.go index a70faaaf..e26b3535 100644 --- a/cmd/api/app/routes/member/deployment/handler.go +++ b/cmd/api/app/routes/member/deployment/handler.go @@ -26,6 +26,7 @@ import ( "github.com/karmada-io/dashboard/pkg/resource/event" ) +// 获取成员集群的deployment列表 func handleGetMemberDeployments(c *gin.Context) { memberClient := client.InClusterClientForMemberCluster(c.Param("clustername")) namespace := common.ParseNamespacePathParameter(c) @@ -38,6 +39,7 @@ func handleGetMemberDeployments(c *gin.Context) { common.Success(c, result) } +// 获取成员集群的deployment详情 func handleGetMemberDeploymentDetail(c *gin.Context) { memberClient := client.InClusterClientForMemberCluster(c.Param("clustername")) namespace := c.Param("namespace") @@ -50,6 +52,7 @@ func handleGetMemberDeploymentDetail(c *gin.Context) { common.Success(c, result) } +// 获取成员集群的deployment事件 func handleGetMemberDeploymentEvents(c *gin.Context) { memberClient := client.InClusterClientForMemberCluster(c.Param("clustername")) namespace := c.Param("namespace") @@ -63,10 +66,15 @@ func handleGetMemberDeploymentEvents(c *gin.Context) { common.Success(c, result) } +// 初始化路由 func init() { r := router.MemberV1() + // 获取成员集群的deployment列表 r.GET("/deployment", handleGetMemberDeployments) + // 获取成员集群的deployment列表 r.GET("/deployment/:namespace", handleGetMemberDeployments) + // 获取成员集群的deployment详情 r.GET("/deployment/:namespace/:deployment", handleGetMemberDeploymentDetail) + // 获取成员集群的deployment事件 r.GET("/deployment/:namespace/:deployment/event", handleGetMemberDeploymentEvents) } diff --git a/cmd/api/app/routes/member/member.go b/cmd/api/app/routes/member/member.go index 317521b0..03018a0a 100644 --- a/cmd/api/app/routes/member/member.go +++ 
b/cmd/api/app/routes/member/member.go @@ -17,8 +17,12 @@ limitations under the License. package member import ( + // 导入成员集群的deployment路由 _ "github.com/karmada-io/dashboard/cmd/api/app/routes/member/deployment" // Importing member route packages forces route registration + // 导入成员集群的namespace路由 _ "github.com/karmada-io/dashboard/cmd/api/app/routes/member/namespace" // Importing member route packages forces route registration + // 导入成员集群的node路由 _ "github.com/karmada-io/dashboard/cmd/api/app/routes/member/node" // Importing member route packages forces route registration + // 导入成员集群的pod路由 _ "github.com/karmada-io/dashboard/cmd/api/app/routes/member/pod" // Importing member route packages forces route registration ) diff --git a/cmd/api/app/routes/member/namespace/handler.go b/cmd/api/app/routes/member/namespace/handler.go index 29ec3f7a..fec9c7fe 100644 --- a/cmd/api/app/routes/member/namespace/handler.go +++ b/cmd/api/app/routes/member/namespace/handler.go @@ -26,6 +26,7 @@ import ( ns "github.com/karmada-io/dashboard/pkg/resource/namespace" ) +// 获取成员集群的namespace列表 func handleGetMemberNamespace(c *gin.Context) { memberClient := client.InClusterClientForMemberCluster(c.Param("clustername")) @@ -38,6 +39,7 @@ func handleGetMemberNamespace(c *gin.Context) { common.Success(c, result) } +// 获取成员集群的namespace详情 func handleGetMemberNamespaceDetail(c *gin.Context) { memberClient := client.InClusterClientForMemberCluster(c.Param("clustername")) @@ -50,6 +52,7 @@ func handleGetMemberNamespaceDetail(c *gin.Context) { common.Success(c, result) } +// 获取成员集群的namespace事件 func handleGetMemberNamespaceEvents(c *gin.Context) { memberClient := client.InClusterClientForMemberCluster(c.Param("clustername")) @@ -63,9 +66,13 @@ func handleGetMemberNamespaceEvents(c *gin.Context) { common.Success(c, result) } +// 初始化路由 func init() { r := router.MemberV1() + // 获取成员集群的namespace列表 r.GET("/namespace", handleGetMemberNamespace) + // 获取成员集群的namespace详情 r.GET("/namespace/:name", handleGetMemberNamespaceDetail) + // 获取成员集群的namespace事件 r.GET("/namespace/:name/event", handleGetMemberNamespaceEvents) } diff --git a/cmd/api/app/routes/member/node/handler.go b/cmd/api/app/routes/member/node/handler.go index 7a8c5af0..f966ff1a 100644 --- a/cmd/api/app/routes/member/node/handler.go +++ b/cmd/api/app/routes/member/node/handler.go @@ -25,6 +25,7 @@ import ( "github.com/karmada-io/dashboard/pkg/resource/node" ) +// 获取成员集群的node列表 func handleGetClusterNode(c *gin.Context) { memberClient := client.InClusterClientForMemberCluster(c.Param("clustername")) dataSelect := common.ParseDataSelectPathParameter(c) @@ -36,7 +37,9 @@ func handleGetClusterNode(c *gin.Context) { common.Success(c, result) } +// 初始化路由 func init() { r := router.MemberV1() + // 获取成员集群的node列表 r.GET("/node", handleGetClusterNode) } diff --git a/cmd/api/app/routes/member/pod/handler.go b/cmd/api/app/routes/member/pod/handler.go index f6d23dbc..601a904c 100644 --- a/cmd/api/app/routes/member/pod/handler.go +++ b/cmd/api/app/routes/member/pod/handler.go @@ -26,6 +26,7 @@ import ( ) // return a pods list +// 获取成员集群的pod列表 func handleGetMemberPod(c *gin.Context) { memberClient := client.InClusterClientForMemberCluster(c.Param("clustername")) dataSelect := common.ParseDataSelectPathParameter(c) @@ -39,6 +40,7 @@ func handleGetMemberPod(c *gin.Context) { } // return a pod detail +// 获取成员集群的pod详情 func handleGetMemberPodDetail(c *gin.Context) { memberClient := client.InClusterClientForMemberCluster(c.Param("clustername")) namespace := c.Param("namespace") @@ -51,9 +53,13 @@ func 
handleGetMemberPodDetail(c *gin.Context) { common.Success(c, result) } +// 初始化路由 func init() { r := router.MemberV1() + // 获取成员集群的pod列表 r.GET("/pod", handleGetMemberPod) + // 获取成员集群的pod列表 r.GET("/pod/:namespace", handleGetMemberPod) + // 获取成员集群的pod详情 r.GET("/pod/:namespace/:name", handleGetMemberPodDetail) } diff --git a/cmd/api/app/routes/namespace/handler.go b/cmd/api/app/routes/namespace/handler.go index b728d23f..284d449c 100644 --- a/cmd/api/app/routes/namespace/handler.go +++ b/cmd/api/app/routes/namespace/handler.go @@ -27,6 +27,7 @@ import ( ns "github.com/karmada-io/dashboard/pkg/resource/namespace" ) +// 创建namespace func handleCreateNamespace(c *gin.Context) { k8sClient := client.InClusterClientForKarmadaAPIServer() createNamespaceRequest := new(v1.CreateNamesapceRequest) @@ -44,6 +45,8 @@ func handleCreateNamespace(c *gin.Context) { } common.Success(c, "ok") } + +// 获取namespace列表 func handleGetNamespaces(c *gin.Context) { k8sClient := client.InClusterClientForKarmadaAPIServer() dataSelect := common.ParseDataSelectPathParameter(c) @@ -54,6 +57,8 @@ func handleGetNamespaces(c *gin.Context) { } common.Success(c, result) } + +// 获取namespace详情 func handleGetNamespaceDetail(c *gin.Context) { k8sClient := client.InClusterClientForKarmadaAPIServer() name := c.Param("name") @@ -64,6 +69,8 @@ func handleGetNamespaceDetail(c *gin.Context) { } common.Success(c, result) } + +// 获取namespace事件 func handleGetNamespaceEvents(c *gin.Context) { k8sClient := client.InClusterClientForKarmadaAPIServer() name := c.Param("name") @@ -75,10 +82,16 @@ func handleGetNamespaceEvents(c *gin.Context) { } common.Success(c, result) } + +// 初始化路由 func init() { r := router.V1() + // 创建namespace r.POST("/namespace", handleCreateNamespace) + // 获取namespace列表 r.GET("/namespace", handleGetNamespaces) + // 获取namespace详情 r.GET("/namespace/:name", handleGetNamespaceDetail) + // 获取namespace事件 r.GET("/namespace/:name/event", handleGetNamespaceEvents) } diff --git a/cmd/api/app/routes/overridepolicy/handler.go b/cmd/api/app/routes/overridepolicy/handler.go index d5999733..5e31b420 100644 --- a/cmd/api/app/routes/overridepolicy/handler.go +++ b/cmd/api/app/routes/overridepolicy/handler.go @@ -34,6 +34,7 @@ import ( "github.com/karmada-io/dashboard/pkg/resource/overridepolicy" ) +// 获取覆盖策略列表 func handleGetOverridePolicyList(c *gin.Context) { karmadaClient := client.InClusterKarmadaClient() dataSelect := common.ParseDataSelectPathParameter(c) @@ -47,6 +48,8 @@ func handleGetOverridePolicyList(c *gin.Context) { } common.Success(c, overrideList) } + +// 获取覆盖策略详情 func handleGetOverridePolicyDetail(c *gin.Context) { karmadaClient := client.InClusterKarmadaClient() namespace := c.Param("namespace") @@ -59,6 +62,8 @@ func handleGetOverridePolicyDetail(c *gin.Context) { } common.Success(c, result) } + +// 创建覆盖策略 func handlePostOverridePolicy(c *gin.Context) { // todo precheck existence of namespace, now we tested it under scope of default, it's ok till now. 
ctx := context.Context(c) @@ -97,6 +102,8 @@ func handlePostOverridePolicy(c *gin.Context) { } common.Success(c, "ok") } + +// 更新覆盖策略 func handlePutOverridePolicy(c *gin.Context) { ctx := context.Context(c) overridepolicyRequest := new(v1.PutOverridePolicyRequest) @@ -138,6 +145,8 @@ func handlePutOverridePolicy(c *gin.Context) { } common.Success(c, "ok") } + +// 删除覆盖策略 func handleDeleteOverridePolicy(c *gin.Context) { ctx := context.Context(c) overridepolicyRequest := new(v1.DeleteOverridePolicyRequest) @@ -175,12 +184,19 @@ func handleDeleteOverridePolicy(c *gin.Context) { common.Success(c, "ok") } +// 初始化路由 func init() { r := router.V1() + // 获取覆盖策略列表 r.GET("/overridepolicy", handleGetOverridePolicyList) + // 获取覆盖策略列表 r.GET("/overridepolicy/:namespace", handleGetOverridePolicyList) + // 获取覆盖策略详情 r.GET("/overridepolicy/namespace/:namespace/:overridePolicyName", handleGetOverridePolicyDetail) + // 创建覆盖策略 r.POST("/overridepolicy", handlePostOverridePolicy) + // 更新覆盖策略 r.PUT("/overridepolicy", handlePutOverridePolicy) + // 删除覆盖策略 r.DELETE("/overridepolicy", handleDeleteOverridePolicy) } diff --git a/cmd/api/app/routes/overview/handler.go b/cmd/api/app/routes/overview/handler.go index 15cee65a..e5e07907 100644 --- a/cmd/api/app/routes/overview/handler.go +++ b/cmd/api/app/routes/overview/handler.go @@ -20,10 +20,12 @@ import ( "github.com/gin-gonic/gin" "github.com/karmada-io/dashboard/cmd/api/app/router" + "github.com/karmada-io/dashboard/cmd/api/app/routes/overview/topology" v1 "github.com/karmada-io/dashboard/cmd/api/app/types/api/v1" "github.com/karmada-io/dashboard/cmd/api/app/types/common" ) +// 获取仪表盘概览 func handleGetOverview(c *gin.Context) { dataSelect := common.ParseDataSelectPathParameter(c) karmadaInfo, err := GetControllerManagerInfo() @@ -51,6 +53,7 @@ func handleGetOverview(c *gin.Context) { }) } +// 初始化路由 func init() { /* 创建时间 2024-01-01 @@ -61,4 +64,17 @@ func init() { */ r := router.V1() r.GET("/overview", handleGetOverview) + // 添加资源汇总接口路由 + r.GET("/overview/resources", HandleGetResourcesSummary) + // 添加节点汇总接口路由 + r.GET("/overview/nodes", HandleGetNodeSummary) + // 添加Pod汇总接口路由 + r.GET("/overview/pods", HandleGetPodSummary) + // 添加集群调度预览接口路由 + r.GET("/overview/schedule", HandleGetSchedulePreview) + // 添加所有集群资源预览接口路由 + r.GET("/overview/all-resources", HandleGetAllClusterResourcesPreview) + + // 注册拓扑图相关路由 + topology.RegisterRoutes(r) } diff --git a/cmd/api/app/routes/overview/misc.go b/cmd/api/app/routes/overview/misc.go index 0388af39..a171a6bf 100644 --- a/cmd/api/app/routes/overview/misc.go +++ b/cmd/api/app/routes/overview/misc.go @@ -43,6 +43,7 @@ const ( ) // GetControllerManagerVersionInfo returns the version info of karmada-controller-manager. +// 获取karmada-controller-manager的版本信息 func GetControllerManagerVersionInfo() (*version.Info, error) { kubeClient := client.InClusterClient() restConfig, _, err := client.GetKubeConfig() @@ -96,6 +97,7 @@ func GetControllerManagerVersionInfo() (*version.Info, error) { } // ParseVersion parses the version string to version.Info. +// 解析版本字符串到version.Info func ParseVersion(versionStr string) *version.Info { versionInfo := &version.Info{} leftBraceIdx := strings.IndexByte(versionStr, '{') @@ -134,6 +136,7 @@ func ParseVersion(versionStr string) *version.Info { } // GetControllerManagerInfo returns the version info of karmada-controller-manager. 
 func GetControllerManagerInfo() (*v1.KarmadaInfo, error) {
 	versionInfo, err := GetControllerManagerVersionInfo()
 	if err != nil {
@@ -161,6 +164,7 @@ func GetControllerManagerInfo() (*v1.KarmadaInfo, error) {
 }

 // GetMemberClusterInfo returns the status of member clusters.
 func GetMemberClusterInfo(ds *dataselect.DataSelectQuery) (*v1.MemberClusterStatus, error) {
 	karmadaClient := client.InClusterKarmadaClient()
 	result, err := cluster.GetClusterList(karmadaClient, ds)
@@ -194,6 +198,7 @@ func GetMemberClusterInfo(ds *dataselect.DataSelectQuery) (*v1.MemberClusterStat
 }

 // GetClusterResourceStatus returns the status of cluster resources.
 func GetClusterResourceStatus() (*v1.ClusterResourceStatus, error) {
 	clusterResourceStatus := &v1.ClusterResourceStatus{}
 	ctx := context.TODO()
diff --git a/cmd/api/app/routes/overview/nodes.go b/cmd/api/app/routes/overview/nodes.go
new file mode 100644
index 00000000..18ce5e81
--- /dev/null
+++ b/cmd/api/app/routes/overview/nodes.go
@@ -0,0 +1,331 @@
+/*
+Copyright 2024 The Karmada Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package overview
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"sync"
+
+	"github.com/gin-gonic/gin"
+	v1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/api/resource"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/klog/v2"
+
+	apiV1 "github.com/karmada-io/dashboard/cmd/api/app/types/api/v1"
+	"github.com/karmada-io/dashboard/cmd/api/app/types/common"
+	"github.com/karmada-io/dashboard/pkg/client"
+	"github.com/karmada-io/dashboard/pkg/dataselect"
+)
+
+// GetNodeSummary aggregates node information across all member clusters.
+func GetNodeSummary(dataSelect *dataselect.DataSelectQuery) (*apiV1.NodesResponse, error) {
+	// Initialize the response skeleton.
+	response := &apiV1.NodesResponse{
+		Items: []apiV1.NodeItem{},
+		Summary: apiV1.NodeSummary{
+			TotalNum: 0,
+			ReadyNum: 0,
+		},
+	}
+
+	karmadaClient := client.InClusterKarmadaClient()
+
+	// List the member clusters.
+	clusterList, err := karmadaClient.ClusterV1alpha1().Clusters().List(context.TODO(), metav1.ListOptions{})
+	if err != nil {
+		klog.ErrorS(err, "Failed to get cluster list")
+		return nil, err
+	}
+
+	// Fan out one goroutine per cluster; the mutex guards the shared response.
+	var wg sync.WaitGroup
+	var mu sync.Mutex
+
+	for _, cluster := range clusterList.Items {
+		wg.Add(1)
+		go func(clusterName string) {
+			defer wg.Done()
+
+			// Build a client for the member cluster.
+			memberClient := client.InClusterClientForMemberCluster(clusterName)
+			if memberClient == nil {
+				klog.Warningf("Failed to get client for cluster %s", clusterName)
+				return
+			}
+
+			// Fetch the nodes of this cluster.
+			nodes, err := getNodesForCluster(memberClient, clusterName)
+			if err != nil {
+				klog.ErrorS(err, "Failed to get nodes", "cluster", clusterName)
+				return
+			}
+
+			// Update the shared counters and item list under the lock.
+			mu.Lock()
+			defer mu.Unlock()
+
+			response.Summary.TotalNum += int32(len(nodes))
+			for _, node := range nodes {
+				if node.Ready {
+					response.Summary.ReadyNum++
+				}
+			}
+
+			response.Items = append(response.Items, nodes...)
+		}(cluster.Name)
+	}
+
+	// Wait for every cluster to report back.
+	wg.Wait()
+
+	if dataSelect != nil && len(response.Items) > 0 {
+		// Sorting and pagination of the aggregated nodes could be applied here;
+		// the current implementation deliberately keeps this a no-op.
+	}
+
+	return response, nil
+}
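GetNodeSummary launches one goroutine per member cluster with no ceiling; on a large fleet it can be worth bounding the fan-out and propagating the first error. A hedged sketch of the same aggregation using golang.org/x/sync/errgroup, which is not used by this PR; `collectNodes` is a stand-in for the per-cluster listing:

```go
package main

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// collectNodes stands in for the per-cluster work done by getNodesForCluster.
func collectNodes(ctx context.Context, clusterName string) ([]string, error) {
	return []string{clusterName + "-node-1"}, nil
}

func aggregate(ctx context.Context, clusters []string) ([]string, error) {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(8) // at most 8 clusters queried concurrently

	out := make([][]string, len(clusters))
	for i, name := range clusters {
		i, name := i, name // capture loop variables (pre-Go 1.22 semantics)
		g.Go(func() error {
			nodes, err := collectNodes(ctx, name)
			if err != nil {
				return err // the first error cancels the remaining work
			}
			out[i] = nodes // each goroutine writes its own slot: no mutex needed
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		return nil, err
	}

	var merged []string
	for _, nodes := range out {
		merged = append(merged, nodes...)
	}
	return merged, nil
}

func main() {
	_, _ = aggregate(context.Background(), []string{"member1", "member2"})
}
```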
+// getNodesForCluster returns all nodes of the given member cluster.
+func getNodesForCluster(client kubernetes.Interface, clusterName string) ([]apiV1.NodeItem, error) {
+	// List the nodes of the cluster.
+	nodeList, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
+	if err != nil {
+		return nil, err
+	}
+
+	nodes := make([]apiV1.NodeItem, 0, len(nodeList.Items))
+	for _, node := range nodeList.Items {
+		// Count the pods currently running on this node.
+		podUsage, err := getPodUsageForNode(client, node.Name)
+		if err != nil {
+			klog.Warningf("Failed to get pod usage for node %s in cluster %s: %v", node.Name, clusterName, err)
+		}
+
+		// Collect the node's CPU and memory usage.
+		cpuUsage, memoryUsage, err := getNodeResourceUsage(client, node.Name, clusterName)
+		if err != nil {
+			klog.Warningf("Failed to get resource usage for node %s in cluster %s: %v", node.Name, clusterName, err)
+		}
+
+		nodeItem := apiV1.NodeItem{
+			ClusterName:       clusterName,
+			Name:              node.Name,
+			Ready:             isNodeReady(node),
+			Role:              getNodeRole(node),
+			CPUCapacity:       getNodeCPUCapacity(node),
+			CPUUsage:          cpuUsage, // measured usage where available
+			MemoryCapacity:    getNodeMemoryCapacity(node),
+			MemoryUsage:       memoryUsage, // measured usage where available
+			PodCapacity:       getNodePodCapacity(node),
+			PodUsage:          podUsage, // actual pod count on the node
+			Status:            getNodeStatus(node),
+			Labels:            node.Labels,
+			CreationTimestamp: node.CreationTimestamp,
+		}
+		nodes = append(nodes, nodeItem)
+	}
+
+	return nodes, nil
+}
+
+// getPodUsageForNode counts the pods scheduled onto the given node.
+func getPodUsageForNode(client kubernetes.Interface, nodeName string) (int64, error) {
+	// A field selector restricts the list to pods bound to this node.
+	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
+		FieldSelector: fmt.Sprintf("spec.nodeName=%s", nodeName),
+	})
+	if err != nil {
+		return 0, err
+	}
+
+	return int64(len(pods.Items)), nil
+}
+
+// getNodeResourceUsage returns the node's CPU (cores) and memory (KiB) usage.
+func getNodeResourceUsage(client kubernetes.Interface, nodeName, clusterName string) (int64, int64, error) {
+	var cpuUsage, memoryUsage int64
+
+	// Try the metrics.k8s.io API first.
+	metricsAvailable := false
+	gv := metav1.GroupVersion{Group: "metrics.k8s.io", Version: "v1beta1"}
+	config := client.CoreV1().RESTClient().Get().AbsPath("apis", gv.Group, gv.Version, "nodes", nodeName)
+	result := config.Do(context.TODO())
+	if result.Error() == nil {
+		// The Metrics API is available.
+		metricsAvailable = true
+		var nodeMetrics map[string]interface{}
+		data, err := result.Raw()
+		if err != nil {
+			return 0, 0, err
+		}
+
+		if err := json.Unmarshal(data, &nodeMetrics); err != nil {
+			return 0, 0, err
+		}
+
+		// Parse the CPU and memory usage out of the metrics payload.
+		if usage, ok := nodeMetrics["usage"].(map[string]interface{}); ok {
+			if cpuStr, ok := usage["cpu"].(string); ok {
+				cpuValue, err := parseCPUQuantity(cpuStr)
+				if err != nil {
+					klog.Warningf("Failed to parse CPU usage for node %s: %v", nodeName, err)
+				} else {
+					cpuUsage = cpuValue
+				}
+			}
+
+			if memStr, ok := usage["memory"].(string); ok {
+				memValue, err := parseMemoryQuantity(memStr)
+				if err != nil {
+					klog.Warningf("Failed to parse memory usage for node %s: %v", nodeName, err)
+				} else {
+					memoryUsage = memValue
+				}
+			}
+		}
+	}
+
+	if !metricsAvailable {
+		// Fall back to estimating usage from the resource requests of the
+		// running pods; scraping the node's /metrics endpoint directly often
+		// isn't permitted, so requests are the best approximation here.
+		klog.Warningf("Metrics API not available for node %s in cluster %s, trying to estimate usage from containers", nodeName, clusterName)
+
+		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
+			FieldSelector: fmt.Sprintf("spec.nodeName=%s,status.phase=Running", nodeName),
+		})
+		if err != nil {
+			return 0, 0, err
+		}
+
+		// Accumulate CPU in millicores and convert once at the end, so that
+		// sub-core requests (e.g. 500m) are not truncated away per container.
+		var cpuMilli int64
+		for _, pod := range pods.Items {
+			for _, container := range pod.Spec.Containers {
+				if cpu := container.Resources.Requests.Cpu(); cpu != nil {
+					cpuMilli += cpu.MilliValue()
+				}
+				if mem := container.Resources.Requests.Memory(); mem != nil {
+					memoryUsage += mem.Value() / 1024
+				}
+			}
+		}
+		cpuUsage = cpuMilli / 1000
+	}
+
+	return cpuUsage, memoryUsage, nil
+}
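The conversions above lean on apimachinery's resource.Quantity; a quick illustration of what MilliValue and Value return for typical inputs, and where integer division truncates:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	cpu := resource.MustParse("1500m")
	fmt.Println(cpu.MilliValue())        // 1500 millicores
	fmt.Println(cpu.MilliValue() / 1000) // 1 -- integer division truncates to whole cores

	mem := resource.MustParse("2Gi")
	fmt.Println(mem.Value())        // 2147483648 bytes
	fmt.Println(mem.Value() / 1024) // 2097152 KiB, the unit used by the summary
}
```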
+// parseCPUQuantity parses a CPU quantity string into whole cores.
+func parseCPUQuantity(cpuStr string) (int64, error) {
+	quantity, err := resource.ParseQuantity(cpuStr)
+	if err != nil {
+		return 0, err
+	}
+	// Note: integer division truncates sub-core values (e.g. "500m" -> 0).
+	return quantity.MilliValue() / 1000, nil
+}
+
+// parseMemoryQuantity parses a memory quantity string into KiB.
+func parseMemoryQuantity(memStr string) (int64, error) {
+	quantity, err := resource.ParseQuantity(memStr)
+	if err != nil {
+		return 0, err
+	}
+	return quantity.Value() / 1024, nil
+}
+
+// isNodeReady reports whether the node's Ready condition is true.
+func isNodeReady(node v1.Node) bool {
+	for _, condition := range node.Status.Conditions {
+		if condition.Type == v1.NodeReady && condition.Status == v1.ConditionTrue {
+			return true
+		}
+	}
+	return false
+}
+
+// getNodeRole derives the node role from the well-known role labels.
+func getNodeRole(node v1.Node) string {
+	if _, isMaster := node.Labels["node-role.kubernetes.io/master"]; isMaster {
+		return "master"
+	}
+	if _, isControl := node.Labels["node-role.kubernetes.io/control-plane"]; isControl {
+		return "master"
+	}
+	return "worker"
+}
+
+// getNodeCPUCapacity returns the node's CPU capacity in whole cores.
+func getNodeCPUCapacity(node v1.Node) int64 {
+	if cpu := node.Status.Capacity.Cpu(); cpu != nil {
+		return cpu.MilliValue() / 1000
+	}
+	return 0
+}
+
+// getNodeMemoryCapacity returns the node's memory capacity in KiB.
+func getNodeMemoryCapacity(node v1.Node) int64 {
+	if mem := node.Status.Capacity.Memory(); mem != nil {
+		return mem.Value() / 1024
+	}
+	return 0
+}
+
+// getNodePodCapacity returns the node's pod capacity.
+func getNodePodCapacity(node v1.Node) int64 {
+	if pods := node.Status.Capacity.Pods(); pods != nil {
+		return pods.Value()
+	}
+	return 0
+}
+
+// getNodeStatus renders the Ready condition as a short status string.
+func getNodeStatus(node v1.Node) string {
+	for _, condition := range node.Status.Conditions {
+		if condition.Type == v1.NodeReady {
+			if condition.Status == v1.ConditionTrue {
+				return "Ready"
+			}
+			return string(condition.Reason)
+		}
+	}
+	return "Unknown"
+}
+
+// HandleGetNodeSummary handles the request for the node summary.
+func HandleGetNodeSummary(c *gin.Context) {
+	dataSelect := common.ParseDataSelectPathParameter(c)
+	summary, err := GetNodeSummary(dataSelect)
+	if err != nil {
+		klog.ErrorS(err, "Failed to get node summary")
+		common.Fail(c, err)
+		return
+	}
+
+	common.Success(c, summary)
+}
diff --git a/cmd/api/app/routes/overview/pods.go b/cmd/api/app/routes/overview/pods.go
new file mode 100644
index 00000000..2372f925
--- /dev/null
+++ b/cmd/api/app/routes/overview/pods.go
@@ -0,0 +1,264 @@
+/*
+Copyright 2024 The Karmada Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +package overview + +import ( + "context" + "sync" + + "github.com/gin-gonic/gin" + v1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/kubernetes" + "k8s.io/klog/v2" + + apiV1 "github.com/karmada-io/dashboard/cmd/api/app/types/api/v1" + "github.com/karmada-io/dashboard/cmd/api/app/types/common" + "github.com/karmada-io/dashboard/pkg/client" + "github.com/karmada-io/dashboard/pkg/dataselect" +) + +// GetPodSummary 获取Pod汇总信息 +func GetPodSummary(dataSelect *dataselect.DataSelectQuery) (*apiV1.PodsResponse, error) { + // 初始化汇总结构 + response := &apiV1.PodsResponse{ + Items: []apiV1.PodItem{}, + StatusStats: apiV1.PodSummaryStats{}, + NamespaceStats: []apiV1.NamespacePodsStats{}, + ClusterStats: []apiV1.ClusterPodsStats{}, + } + + // 用于统计命名空间和集群数据的映射 + namespaceMap := make(map[string]int) + clusterMap := make(map[string]int) + + // 获取Karmada客户端 + karmadaClient := client.InClusterKarmadaClient() + + // 获取集群列表 + clusterList, err := karmadaClient.ClusterV1alpha1().Clusters().List(context.TODO(), metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get cluster list") + return nil, err + } + + // 使用WaitGroup来管理并发请求 + var wg sync.WaitGroup + // 使用互斥锁保护共享数据 + var mu sync.Mutex + + // 遍历所有集群 + for _, cluster := range clusterList.Items { + wg.Add(1) + go func(clusterName string) { + defer wg.Done() + + // 获取成员集群的客户端 + memberClient := client.InClusterClientForMemberCluster(clusterName) + if memberClient == nil { + klog.Warningf("Failed to get client for cluster %s", clusterName) + return + } + + // 获取该集群的Pod + pods, err := getPodsForCluster(memberClient, clusterName) + if err != nil { + klog.ErrorS(err, "Failed to get pods", "cluster", clusterName) + return + } + + // 加锁更新共享数据 + mu.Lock() + defer mu.Unlock() + + // 更新状态统计 + for _, pod := range pods { + switch pod.Phase { + case v1.PodRunning: + response.StatusStats.Running++ + case v1.PodPending: + response.StatusStats.Pending++ + case v1.PodSucceeded: + response.StatusStats.Succeeded++ + case v1.PodFailed: + response.StatusStats.Failed++ + case v1.PodUnknown: + response.StatusStats.Unknown++ + } + response.StatusStats.Total++ + + // 更新命名空间统计 + namespaceMap[pod.Namespace]++ + // 更新集群统计 + clusterMap[clusterName]++ + } + + // 追加Pod信息 + response.Items = append(response.Items, pods...) 
+ }(cluster.Name) + } + + // 等待所有请求完成 + wg.Wait() + + // 转换命名空间统计数据 + for ns, count := range namespaceMap { + response.NamespaceStats = append(response.NamespaceStats, apiV1.NamespacePodsStats{ + Namespace: ns, + PodCount: count, + }) + } + + // 转换集群统计数据 + for cluster, count := range clusterMap { + response.ClusterStats = append(response.ClusterStats, apiV1.ClusterPodsStats{ + ClusterName: cluster, + PodCount: count, + }) + } + + // 如果使用了数据选择器,则过滤和排序Pod + if dataSelect != nil && len(response.Items) > 0 { + // 这里可以根据需要实现Pod的排序和分页 + // 目前为简单实现,实际使用时可能需要更复杂的逻辑 + } + + return response, nil +} + +// getPodsForCluster 获取指定集群的所有Pod +func getPodsForCluster(client kubernetes.Interface, clusterName string) ([]apiV1.PodItem, error) { + // 获取Pod列表 + podList, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{}) + if err != nil { + return nil, err + } + + pods := make([]apiV1.PodItem, 0, len(podList.Items)) + for _, pod := range podList.Items { + podItem := apiV1.PodItem{ + ClusterName: clusterName, + Namespace: pod.Namespace, + Name: pod.Name, + Phase: pod.Status.Phase, + Status: getPodStatus(pod), + ReadyContainers: getReadyContainers(pod), + TotalContainers: len(pod.Spec.Containers), + CPURequest: getPodCPURequest(pod), + MemoryRequest: getPodMemoryRequest(pod), + CPULimit: getPodCPULimit(pod), + MemoryLimit: getPodMemoryLimit(pod), + RestartCount: getPodRestartCount(pod), + PodIP: pod.Status.PodIP, + NodeName: pod.Spec.NodeName, + CreationTimestamp: pod.CreationTimestamp, + } + pods = append(pods, podItem) + } + + return pods, nil +} + +// getPodStatus 获取Pod状态描述 +func getPodStatus(pod v1.Pod) string { + // 如果Pod处于Pending状态且正在拉取镜像,返回特殊状态 + if pod.Status.Phase == v1.PodPending { + for _, containerStatus := range pod.Status.ContainerStatuses { + if containerStatus.State.Waiting != nil && containerStatus.State.Waiting.Reason == "ImagePullBackOff" { + return "ImagePullBackOff" + } + } + } + return string(pod.Status.Phase) +} + +// getReadyContainers 计算Pod中就绪的容器数量 +func getReadyContainers(pod v1.Pod) int { + readyCount := 0 + for _, containerStatus := range pod.Status.ContainerStatuses { + if containerStatus.Ready { + readyCount++ + } + } + return readyCount +} + +// getPodCPURequest 计算Pod CPU请求量(核) +func getPodCPURequest(pod v1.Pod) int64 { + var total int64 + for _, container := range pod.Spec.Containers { + if request := container.Resources.Requests.Cpu(); request != nil { + total += request.MilliValue() / 1000 + } + } + return total +} + +// getPodMemoryRequest 计算Pod内存请求量(KB) +func getPodMemoryRequest(pod v1.Pod) int64 { + var total int64 + for _, container := range pod.Spec.Containers { + if request := container.Resources.Requests.Memory(); request != nil { + total += request.Value() / 1024 + } + } + return total +} + +// getPodCPULimit 计算Pod CPU限制(核) +func getPodCPULimit(pod v1.Pod) int64 { + var total int64 + for _, container := range pod.Spec.Containers { + if limit := container.Resources.Limits.Cpu(); limit != nil { + total += limit.MilliValue() / 1000 + } + } + return total +} + +// getPodMemoryLimit 计算Pod内存限制(KB) +func getPodMemoryLimit(pod v1.Pod) int64 { + var total int64 + for _, container := range pod.Spec.Containers { + if limit := container.Resources.Limits.Memory(); limit != nil { + total += limit.Value() / 1024 + } + } + return total +} + +// getPodRestartCount 计算Pod重启次数 +func getPodRestartCount(pod v1.Pod) int32 { + var total int32 + for _, containerStatus := range pod.Status.ContainerStatuses { + total += containerStatus.RestartCount + } + return total +} + +// 
HandleGetPodSummary 处理获取Pod汇总信息的请求 +func HandleGetPodSummary(c *gin.Context) { + dataSelect := common.ParseDataSelectPathParameter(c) + summary, err := GetPodSummary(dataSelect) + if err != nil { + klog.ErrorS(err, "Failed to get pod summary") + common.Fail(c, err) + return + } + + common.Success(c, summary) +} diff --git a/cmd/api/app/routes/overview/resources.go b/cmd/api/app/routes/overview/resources.go new file mode 100644 index 00000000..50b16587 --- /dev/null +++ b/cmd/api/app/routes/overview/resources.go @@ -0,0 +1,102 @@ +/* +Copyright 2024 The Karmada Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package overview + +import ( + "context" + + "github.com/gin-gonic/gin" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/klog/v2" + + v1 "github.com/karmada-io/dashboard/cmd/api/app/types/api/v1" + "github.com/karmada-io/dashboard/cmd/api/app/types/common" + "github.com/karmada-io/dashboard/pkg/client" +) + +// GetClusterResourcesSummary 获取所有集群的资源汇总信息 +func GetClusterResourcesSummary() (*v1.ResourcesSummary, error) { + // 初始化汇总结构 + summary := &v1.ResourcesSummary{} + + // 获取Karmada客户端 + karmadaClient := client.InClusterKarmadaClient() + + // 直接获取集群列表,避免使用dataselect包 + clusterList, err := karmadaClient.ClusterV1alpha1().Clusters().List(context.TODO(), metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get cluster list") + return nil, err + } + + // 遍历所有集群,累加资源数据 + for _, cluster := range clusterList.Items { + // 节点状态统计 + if cluster.Status.NodeSummary != nil { + // 计算节点总数和就绪节点数 + totalNodes := int(cluster.Status.NodeSummary.TotalNum) + readyNodes := int(cluster.Status.NodeSummary.ReadyNum) + + summary.Node.Total += int64(totalNodes) + summary.Node.Ready += int64(readyNodes) + } + + // 资源统计 - 只有当ResourceSummary不为空时才进行统计 + if cluster.Status.ResourceSummary != nil { + // Pod统计 + if podCapacity := cluster.Status.ResourceSummary.Allocatable.Pods(); podCapacity != nil { + summary.Pod.Capacity += podCapacity.Value() + } + + if podAllocated := cluster.Status.ResourceSummary.Allocated.Pods(); podAllocated != nil { + summary.Pod.Allocated += podAllocated.Value() + } + + // CPU统计 - 转换为核心数 + if cpuCapacity := cluster.Status.ResourceSummary.Allocatable.Cpu(); cpuCapacity != nil { + summary.CPU.Capacity += cpuCapacity.MilliValue() / 1000 + } + + if cpuAllocated := cluster.Status.ResourceSummary.Allocated.Cpu(); cpuAllocated != nil { + summary.CPU.Usage += cpuAllocated.MilliValue() / 1000 + } + + // 内存统计 - 转换为KiB + if memCapacity := cluster.Status.ResourceSummary.Allocatable.Memory(); memCapacity != nil { + summary.Memory.Capacity += memCapacity.Value() / 1024 + } + + if memAllocated := cluster.Status.ResourceSummary.Allocated.Memory(); memAllocated != nil { + summary.Memory.Usage += memAllocated.Value() / 1024 + } + } + } + + return summary, nil +} + +// HandleGetResourcesSummary 处理获取资源汇总信息的请求 +func HandleGetResourcesSummary(c *gin.Context) { + summary, err := GetClusterResourcesSummary() + if err != nil { + klog.ErrorS(err, "Failed to get cluster resources summary") + 
common.Fail(c, err)
+		return
+	}
+
+	common.Success(c, summary)
+}
diff --git a/cmd/api/app/routes/overview/schedule.go b/cmd/api/app/routes/overview/schedule.go
new file mode 100644
index 00000000..2154901f
--- /dev/null
+++ b/cmd/api/app/routes/overview/schedule.go
@@ -0,0 +1,1931 @@
+/*
+Copyright 2024 The Karmada Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package overview
+
+import (
+	"context"
+	"fmt"
+	"sort"
+	"strconv"
+	"strings"
+	"sync"
+
+	"github.com/gin-gonic/gin"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+	"k8s.io/apimachinery/pkg/runtime/schema"
+	"k8s.io/client-go/dynamic"
+	"k8s.io/client-go/rest"
+	"k8s.io/klog/v2"
+
+	clusterv1alpha1 "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
+
+	v1 "github.com/karmada-io/dashboard/cmd/api/app/types/api/v1"
+	"github.com/karmada-io/dashboard/cmd/api/app/types/common"
+	"github.com/karmada-io/dashboard/pkg/client"
+	policyv1alpha1 "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"
+	karmadaclientset "github.com/karmada-io/karmada/pkg/generated/clientset/versioned"
+)
+
+// resourceGroupMap maps resource kinds to their display groups.
+var resourceGroupMap = map[string]string{
+	// Workloads
+	"Deployment":            "Workloads",
+	"StatefulSet":           "Workloads",
+	"DaemonSet":             "Workloads",
+	"Job":                   "Workloads",
+	"CronJob":               "Workloads",
+	"Pod":                   "Workloads",
+	"ReplicaSet":            "Workloads",
+	"ReplicationController": "Workloads",
+
+	// Network
+	"Service":       "Network",
+	"Ingress":       "Network",
+	"NetworkPolicy": "Network",
+
+	// Storage
+	"PersistentVolume":      "Storage",
+	"PersistentVolumeClaim": "Storage",
+	"StorageClass":          "Storage",
+
+	// Configuration
+	"ConfigMap": "Configuration",
+	"Secret":    "Configuration",
+
+	// Everything else
+	"CustomResourceDefinition": "CustomResources",
+}
+
+// supportedResources maps common resource kinds to their GVRs.
+var supportedResources = map[string]schema.GroupVersionResource{
+	"Deployment": {
+		Group:    "apps",
+		Version:  "v1",
+		Resource: "deployments",
+	},
+	"Service": {
+		Group:    "",
+		Version:  "v1",
+		Resource: "services",
+	},
+	"Pod": {
+		Group:    "",
+		Version:  "v1",
+		Resource: "pods",
+	},
+	"ConfigMap": {
+		Group:    "",
+		Version:  "v1",
+		Resource: "configmaps",
+	},
+	"Secret": {
+		Group:    "",
+		Version:  "v1",
+		Resource: "secrets",
+	},
+	"StatefulSet": {
+		Group:    "apps",
+		Version:  "v1",
+		Resource: "statefulsets",
+	},
+	"DaemonSet": {
+		Group:    "apps",
+		Version:  "v1",
+		Resource: "daemonsets",
+	},
+	"Ingress": {
+		Group:    "networking.k8s.io",
+		Version:  "v1",
+		Resource: "ingresses",
+	},
+	"Job": {
+		Group:    "batch",
+		Version:  "v1",
+		Resource: "jobs",
+	},
+	"CronJob": {
+		Group:    "batch",
+		Version:  "v1",
+		Resource: "cronjobs",
+	},
+	"PersistentVolumeClaim": {
+		Group:    "",
+		Version:  "v1",
+		Resource: "persistentvolumeclaims",
+	},
+}
+
+// getResourceGroup returns the display group of a resource kind.
+func getResourceGroup(kind string) string {
+	if group, ok := resourceGroupMap[kind]; ok {
+		return group
+	}
+	return "Others"
+}
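supportedResources feeds the dynamic client used later in this file to list arbitrary kinds without compiled-in types. A self-contained sketch of that lookup-and-list flow; the in-cluster config and the printed names are illustrative assumptions only:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

// listKind lists every object of the given GVR across all namespaces.
func listKind(cfg *rest.Config, gvr schema.GroupVersionResource) error {
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// Untyped listing: every item comes back as unstructured JSON.
	list, err := dyn.Resource(gvr).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, item := range list.Items {
		name, _, _ := unstructured.NestedString(item.Object, "metadata", "name")
		fmt.Println(name)
	}
	return nil
}

func main() {
	cfg, err := rest.InClusterConfig() // assumption: running in-cluster for the demo
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	if err := listKind(cfg, gvr); err != nil {
		panic(err)
	}
}
```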
+// findPropagationPolicyForResource finds the propagation policy matching a
+// resource and returns the policy name plus any static cluster weights.
+func findPropagationPolicyForResource(ctx context.Context, karmadaClient karmadaclientset.Interface, namespace, name, kind string) (string, map[string]int32, error) {
+	// List all PropagationPolicies.
+	policyList, err := karmadaClient.PolicyV1alpha1().PropagationPolicies(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
+	if err != nil {
+		klog.ErrorS(err, "Failed to get propagation policies")
+		return "", nil, err
+	}
+
+	// Cluster name -> weight.
+	clusterWeights := make(map[string]int32)
+
+	// Check every policy for a resource selector that matches this resource.
+	for _, policy := range policyList.Items {
+		if policy.Namespace != namespace && namespace != "" {
+			continue
+		}
+
+		for _, rs := range policy.Spec.ResourceSelectors {
+			if rs.Kind == kind && (rs.Name == name || rs.Name == "") {
+				// Matching policy found; collect per-cluster weights when the
+				// policy uses weighted replica scheduling. The nil check on
+				// WeightPreference guards policies that rely on the default.
+				if policy.Spec.Placement.ReplicaScheduling != nil &&
+					policy.Spec.Placement.ReplicaScheduling.ReplicaDivisionPreference == policyv1alpha1.ReplicaDivisionPreferenceWeighted &&
+					policy.Spec.Placement.ReplicaScheduling.WeightPreference != nil {
+					for _, staticWeight := range policy.Spec.Placement.ReplicaScheduling.WeightPreference.StaticWeightList {
+						// TargetCluster is a ClusterAffinity struct; unpack its cluster names.
+						for _, clusterName := range staticWeight.TargetCluster.ClusterNames {
+							clusterWeights[clusterName] = int32(staticWeight.Weight)
+						}
+					}
+				}
+				return policy.Name, clusterWeights, nil
+			}
+		}
+	}
+
+	// Fall back to ClusterPropagationPolicies.
+	clusterPolicyList, err := karmadaClient.PolicyV1alpha1().ClusterPropagationPolicies().List(ctx, metav1.ListOptions{})
+	if err != nil {
+		klog.ErrorS(err, "Failed to get cluster propagation policies")
+		return "", nil, err
+	}
+
+	for _, policy := range clusterPolicyList.Items {
+		for _, rs := range policy.Spec.ResourceSelectors {
+			if rs.Kind == kind && (rs.Name == name || rs.Name == "") {
+				if policy.Spec.Placement.ReplicaScheduling != nil &&
+					policy.Spec.Placement.ReplicaScheduling.ReplicaDivisionPreference == policyv1alpha1.ReplicaDivisionPreferenceWeighted &&
+					policy.Spec.Placement.ReplicaScheduling.WeightPreference != nil {
+					for _, staticWeight := range policy.Spec.Placement.ReplicaScheduling.WeightPreference.StaticWeightList {
+						for _, clusterName := range staticWeight.TargetCluster.ClusterNames {
+							clusterWeights[clusterName] = int32(staticWeight.Weight)
+						}
+					}
+				}
+				return policy.Name, clusterWeights, nil
+			}
+		}
+	}
+
+	return "", clusterWeights, nil
+}
+
+// ResourceSchedulingInfo augments the per-resource distribution with the
+// scheduling policy details shown in the schedule preview.
+type ResourceSchedulingInfo struct {
+	// Resource name.
+	ResourceName string `json:"resourceName"`
+	// Namespace of the resource (empty for cluster-scoped resources).
+	Namespace string `json:"namespace"`
+	// Name of the matching propagation policy.
+	PropagationPolicy string `json:"propagationPolicy"`
+	// Cluster weights (cluster name -> weight).
+	ClusterWeights map[string]int32 `json:"clusterWeights"`
+	// Per-cluster distribution.
+	ClusterDist []v1.ActualClusterDistribution `json:"clusterDist"`
+	// Total actually deployed count.
+	ActualCount int `json:"actualCount"`
+	// Total scheduled count.
+	ScheduledCount int `json:"scheduledCount"`
+}
+
+// GetClusterSchedulePreview builds the cluster scheduling preview.
+func GetClusterSchedulePreview() (*v1.SchedulePreviewResponse, error) {
+	karmadaClient := client.InClusterKarmadaClient()
+	ctx := context.TODO()
+
+	// Seed the response with the control-plane node.
+	response := &v1.SchedulePreviewResponse{
+		Nodes: []v1.ScheduleNode{
+			{
+				ID:   "karmada-control-plane",
+				Name: "Karmada Control Plane",
+				Type: "control-plane",
+			},
+		},
+		Links:
[]v1.ScheduleLink{}, + ResourceDist: []v1.ResourceTypeDistribution{}, + } + + // 获取所有集群 + clusterList, err := karmadaClient.ClusterV1alpha1().Clusters().List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get cluster list") + return nil, err + } + + // 为每个集群创建节点 + for _, cluster := range clusterList.Items { + // 收集集群的调度参数 + schedulingParams := &v1.SchedulingParams{ + Labels: make(map[string]string), + } + + // 从注解中获取集群权重(默认为1) + schedulingParams.Weight = 1 + if weightStr, exists := cluster.Annotations["scheduling.karmada.io/weight"]; exists { + if weight, err := strconv.ParseInt(weightStr, 10, 32); err == nil { + schedulingParams.Weight = int32(weight) + } + } + + // 从集群注解获取污点信息 + // 注:实际实现中可能需要从其他地方获取污点信息 + taints := []v1.Taint{} + for k, v := range cluster.Annotations { + if strings.HasPrefix(k, "taint.karmada.io/") { + // 简单解析,实际环境中可能需要更复杂的逻辑 + key := strings.TrimPrefix(k, "taint.karmada.io/") + parts := strings.Split(v, ":") + effect := "NoSchedule" // 默认 + value := "" + + if len(parts) > 0 { + value = parts[0] + } + if len(parts) > 1 { + effect = parts[1] + } + + taints = append(taints, v1.Taint{ + Key: key, + Value: value, + Effect: effect, + }) + } + } + schedulingParams.Taints = taints + + // 获取集群标签 + if len(cluster.Labels) > 0 { + for k, v := range cluster.Labels { + schedulingParams.Labels[k] = v + } + } + + response.Nodes = append(response.Nodes, v1.ScheduleNode{ + ID: cluster.Name, + Name: cluster.Name, + Type: "member-cluster", + SchedulingParams: schedulingParams, + }) + } + + // 获取资源绑定信息 + resourceBindings, err := karmadaClient.WorkV1alpha2().ResourceBindings(metav1.NamespaceAll).List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get resource bindings") + return nil, err + } + + clusterResourceBindings, err := karmadaClient.WorkV1alpha2().ClusterResourceBindings().List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get cluster resource bindings") + return nil, err + } + + // 资源类型统计 - 从绑定中获取的调度信息 + scheduledResourceMap := make(map[string]map[string]int) + // 存储资源类型对应的资源名称 + resourceTypeToNameMap := make(map[string][]string) + // 实际部署的资源统计 + actualResourceMap := make(map[string]map[string]int) + + // 存储资源详细调度信息 + resourceSchedulingMap := make(map[string]map[string]*ResourceSchedulingInfo) + + // 收集各资源类型对应的资源名称 + for _, binding := range resourceBindings.Items { + resourceType := binding.Spec.Resource.Kind + resourceName := binding.Spec.Resource.Name + if resourceName != "" { + // 如果该类型还没有初始化映射,则先初始化 + if _, exists := resourceTypeToNameMap[resourceType]; !exists { + resourceTypeToNameMap[resourceType] = []string{} + } + // 检查资源名称是否已存在,避免重复 + nameExists := false + for _, existingName := range resourceTypeToNameMap[resourceType] { + if existingName == resourceName { + nameExists = true + break + } + } + if !nameExists { + resourceTypeToNameMap[resourceType] = append(resourceTypeToNameMap[resourceType], resourceName) + } + } + } + + // 处理集群级资源 + for _, binding := range clusterResourceBindings.Items { + resourceType := binding.Spec.Resource.Kind + resourceName := binding.Spec.Resource.Name + if resourceName != "" { + // 如果该类型还没有初始化映射,则先初始化 + if _, exists := resourceTypeToNameMap[resourceType]; !exists { + resourceTypeToNameMap[resourceType] = []string{} + } + // 检查资源名称是否已存在,避免重复 + nameExists := false + for _, existingName := range resourceTypeToNameMap[resourceType] { + if existingName == resourceName { + nameExists = true + break + } + } + if !nameExists { + resourceTypeToNameMap[resourceType] = 
append(resourceTypeToNameMap[resourceType], resourceName) + } + } + } + + // 处理资源绑定 - 获取调度信息 + for _, binding := range resourceBindings.Items { + resourceKind := binding.Spec.Resource.Kind + resourceName := binding.Spec.Resource.Name + resourceNamespace := binding.Spec.Resource.Namespace + + if resourceName == "" { + continue + } + + // 将资源添加到类型统计 + if _, ok := scheduledResourceMap[resourceKind]; !ok { + scheduledResourceMap[resourceKind] = make(map[string]int) + } + + // 查找匹配的传播策略 + policyName, clusterWeights, _ := findPropagationPolicyForResource(ctx, karmadaClient, resourceNamespace, resourceName, resourceKind) + + // 资源唯一标识符 + resourceKey := fmt.Sprintf("%s/%s/%s", resourceNamespace, resourceKind, resourceName) + + // 初始化资源类型映射 + if _, ok := resourceSchedulingMap[resourceKind]; !ok { + resourceSchedulingMap[resourceKind] = make(map[string]*ResourceSchedulingInfo) + } + + // 初始化资源信息 + if _, ok := resourceSchedulingMap[resourceKind][resourceKey]; !ok { + resourceSchedulingMap[resourceKind][resourceKey] = &ResourceSchedulingInfo{ + ResourceName: resourceName, + Namespace: resourceNamespace, + PropagationPolicy: policyName, + ClusterWeights: clusterWeights, + ClusterDist: []v1.ActualClusterDistribution{}, + ActualCount: 0, + ScheduledCount: 0, + } + } + + // 为每个集群绑定记录调度信息 + for _, cluster := range binding.Spec.Clusters { + clusterName := cluster.Name + replicaCount := cluster.Replicas + + // 增加资源类型统计 + scheduledResourceMap[resourceKind][clusterName] += int(replicaCount) + + // 增加调度计数 + found := false + for i, dist := range resourceSchedulingMap[resourceKind][resourceKey].ClusterDist { + if dist.ClusterName == clusterName { + dist.ScheduledCount += int(replicaCount) + resourceSchedulingMap[resourceKind][resourceKey].ClusterDist[i] = dist + found = true + break + } + } + + if !found { + // 添加新的集群分布记录 + resourceSchedulingMap[resourceKind][resourceKey].ClusterDist = append( + resourceSchedulingMap[resourceKind][resourceKey].ClusterDist, + v1.ActualClusterDistribution{ + ClusterName: clusterName, + ScheduledCount: int(replicaCount), + ActualCount: 0, + Status: v1.ResourceDeploymentStatus{ + Scheduled: true, + Actual: false, + ScheduledCount: int(replicaCount), + ActualCount: 0, + }, + }, + ) + } + + // 增加总调度计数 + resourceSchedulingMap[resourceKind][resourceKey].ScheduledCount += int(replicaCount) + } + } + + // 处理集群资源绑定 + for _, binding := range clusterResourceBindings.Items { + resourceKind := binding.Spec.Resource.Kind + resourceName := binding.Spec.Resource.Name + + if resourceName == "" { + continue + } + + // 将资源添加到类型统计 + if _, ok := scheduledResourceMap[resourceKind]; !ok { + scheduledResourceMap[resourceKind] = make(map[string]int) + } + + // 查找匹配的传播策略 + policyName, clusterWeights, _ := findPropagationPolicyForResource(ctx, karmadaClient, "", resourceName, resourceKind) + + // 资源唯一标识符 (集群级资源无命名空间) + resourceKey := fmt.Sprintf("/%s/%s", resourceKind, resourceName) + + // 初始化资源类型映射 + if _, ok := resourceSchedulingMap[resourceKind]; !ok { + resourceSchedulingMap[resourceKind] = make(map[string]*ResourceSchedulingInfo) + } + + // 初始化资源信息 + if _, ok := resourceSchedulingMap[resourceKind][resourceKey]; !ok { + resourceSchedulingMap[resourceKind][resourceKey] = &ResourceSchedulingInfo{ + ResourceName: resourceName, + Namespace: "", + PropagationPolicy: policyName, + ClusterWeights: clusterWeights, + ClusterDist: []v1.ActualClusterDistribution{}, + ActualCount: 0, + ScheduledCount: 0, + } + } + + // 为每个集群绑定记录调度信息 + for _, cluster := range binding.Spec.Clusters { + clusterName := cluster.Name + 
replicaCount := cluster.Replicas + + // 增加资源类型统计 + scheduledResourceMap[resourceKind][clusterName] += int(replicaCount) + + // 增加调度计数 + found := false + for i, dist := range resourceSchedulingMap[resourceKind][resourceKey].ClusterDist { + if dist.ClusterName == clusterName { + dist.ScheduledCount += int(replicaCount) + resourceSchedulingMap[resourceKind][resourceKey].ClusterDist[i] = dist + found = true + break + } + } + + if !found { + // 添加新的集群分布记录 + resourceSchedulingMap[resourceKind][resourceKey].ClusterDist = append( + resourceSchedulingMap[resourceKind][resourceKey].ClusterDist, + v1.ActualClusterDistribution{ + ClusterName: clusterName, + ScheduledCount: int(replicaCount), + ActualCount: 0, + Status: v1.ResourceDeploymentStatus{ + Scheduled: true, + Actual: false, + ScheduledCount: int(replicaCount), + ActualCount: 0, + }, + }, + ) + } + + // 增加总调度计数 + resourceSchedulingMap[resourceKind][resourceKey].ScheduledCount += int(replicaCount) + } + } + + // 收集实际部署的资源信息 + // 并发获取各集群资源 + var wg sync.WaitGroup + var mu sync.Mutex // 保护map的并发访问 + + for i := range clusterList.Items { + cluster := &clusterList.Items[i] + wg.Add(1) + + go func(c *clusterv1alpha1.Cluster) { + defer wg.Done() + + // 使用现有的客户端函数获取成员集群客户端 + kubeClient := client.InClusterClientForMemberCluster(c.Name) + if kubeClient == nil { + klog.ErrorS(fmt.Errorf("failed to get client"), "Could not get client for cluster", "cluster", c.Name) + return + } + + // 创建动态客户端 - 通过设置相同的配置 + config, err := client.GetMemberConfig() + if err != nil { + klog.ErrorS(err, "Failed to get member config", "cluster", c.Name) + return + } + + // 修改配置以指向特定集群 + restConfig := rest.CopyConfig(config) + // 获取karmada配置 + karmadaConfig, _, err := client.GetKarmadaConfig() + if err != nil { + klog.ErrorS(err, "Failed to get karmada config", "cluster", c.Name) + return + } + // 使用固定的代理URL格式 - client包中定义的proxyURL常量为非导出 + proxyURL := "/apis/cluster.karmada.io/v1alpha1/clusters/%s/proxy/" + restConfig.Host = karmadaConfig.Host + fmt.Sprintf(proxyURL, c.Name) + + dynamicClient, err := dynamic.NewForConfig(restConfig) + if err != nil { + klog.ErrorS(err, "Failed to create dynamic client", "cluster", c.Name) + return + } + + // 初始化该集群的资源统计 + clusterResources := make(map[string]int) + + // 查询所有支持的资源类型 + for resourceKind, gvr := range supportedResources { + // 查询资源列表 + list, err := dynamicClient.Resource(gvr).Namespace(metav1.NamespaceAll).List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to list resources", "cluster", c.Name, "resource", resourceKind) + continue + } + + // 记录资源数量 + count := len(list.Items) + + // 对于Deployment类型,需要获取实际的Pod数量而不是Deployment对象数量 + if resourceKind == "Deployment" && count > 0 { + // Pod计数总和 + totalPodCount := 0 + + // 遍历每个Deployment + for _, deployment := range list.Items { + // 提取Deployment名称和命名空间 + deployName, _, _ := unstructured.NestedString(deployment.Object, "metadata", "name") + deployNamespace, _, _ := unstructured.NestedString(deployment.Object, "metadata", "namespace") + + if deployName == "" { + continue + } + + // 检查该Deployment是否由Karmada调度 - 通过检查特定标签或注释 + // Karmada调度的资源通常会有特定标签 + deployLabels, _, _ := unstructured.NestedMap(deployment.Object, "metadata", "labels") + deployAnnotations, _, _ := unstructured.NestedMap(deployment.Object, "metadata", "annotations") + + // 检查是否有Karmada调度相关的标签或注释 + isKarmadaManaged := false + + // 检查特定的Karmada标签 + if deployLabels != nil { + // 检查常见的Karmada标签 + if _, ok := deployLabels["karmada.io/managed"]; ok { + isKarmadaManaged = true + } + if _, ok := 
deployLabels["propagationpolicy.karmada.io/name"]; ok { + isKarmadaManaged = true + } + if _, ok := deployLabels["clusterpropagationpolicy.karmada.io/name"]; ok { + isKarmadaManaged = true + } + } + + // 检查特定的Karmada注释 + if deployAnnotations != nil && !isKarmadaManaged { + if _, ok := deployAnnotations["karmada.io/managed"]; ok { + isKarmadaManaged = true + } + if _, ok := deployAnnotations["propagation.karmada.io/status"]; ok { + isKarmadaManaged = true + } + if _, ok := deployAnnotations["resourcebinding.karmada.io/name"]; ok { + isKarmadaManaged = true + } + if _, ok := deployAnnotations["clusterresourcebinding.karmada.io/name"]; ok { + isKarmadaManaged = true + } + } + + // 如果不是由Karmada管理的资源,则跳过 + if !isKarmadaManaged { + // 额外验证:检查该资源是否在ResourceBinding或ClusterResourceBinding中存在 + // 检查ResourceBinding + foundInResourceBindings := false + for _, binding := range resourceBindings.Items { + if binding.Spec.Resource.Kind == "Deployment" && + binding.Spec.Resource.Name == deployName && + (binding.Namespace == deployNamespace || binding.Spec.Resource.Namespace == deployNamespace) { + foundInResourceBindings = true + break + } + } + + // 检查ClusterResourceBinding + if !foundInResourceBindings { + for _, binding := range clusterResourceBindings.Items { + if binding.Spec.Resource.Kind == "Deployment" && + binding.Spec.Resource.Name == deployName { + foundInResourceBindings = true + break + } + } + } + + // 如果在绑定中也未找到,则确认跳过该资源 + if !foundInResourceBindings { + klog.V(4).Infof("Skipping non-Karmada managed deployment %s/%s in cluster %s", + deployNamespace, deployName, c.Name) + continue + } + } + + // 获取Deployment的Pod selector + var podSelector map[string]string + selectorObj, found, _ := unstructured.NestedMap(deployment.Object, "spec", "selector", "matchLabels") + if found && selectorObj != nil { + podSelector = make(map[string]string) + for k, v := range selectorObj { + if strVal, ok := v.(string); ok { + podSelector[k] = strVal + } + } + } else { + // 如果没有找到matchLabels,使用默认的app标签 + podSelector = map[string]string{"app": deployName} + } + + // 记录原始Deployment的UniqKey,用于后面关联Pod数量 + deploymentUID := fmt.Sprintf("%s/%s/%s", deployNamespace, resourceKind, deployName) + deployPodCount := 0 + + // 构建实际的标签选择器字符串 + labelSelector := "" + for key, value := range podSelector { + if labelSelector != "" { + labelSelector += "," + } + labelSelector += fmt.Sprintf("%s=%s", key, value) + } + + // 只计算匹配标签的Pod + if labelSelector != "" { + podListOptions := metav1.ListOptions{ + LabelSelector: labelSelector, + } + + // 在Deployment所在的命名空间中查找Pod + namespacePodList, err := dynamicClient.Resource(supportedResources["Pod"]).Namespace(deployNamespace).List(ctx, podListOptions) + if err == nil && namespacePodList != nil { + // 获取运行中的Pod数量 + for _, pod := range namespacePodList.Items { + podStatus, found, err := unstructured.NestedString(pod.Object, "status", "phase") + if found && err == nil && podStatus == "Running" { + deployPodCount++ + } + } + klog.V(3).Infof("集群[%s] Deployment[%s/%s] 匹配选择器[%s]的Pod数量: %d", + c.Name, deployNamespace, deployName, labelSelector, deployPodCount) + } else if err != nil { + klog.Warningf("获取集群[%s]命名空间[%s]中Pod失败: %v", c.Name, deployNamespace, err) + } + } + + // 如果通过标签选择器没找到Pod,尝试使用常见标签模式 + if deployPodCount == 0 { + // 记录找到Pod的选择器,便于调试 + foundSelector := "" + // 尝试其他常见的标签格式 + commonLabelSelectors := []string{ + fmt.Sprintf("app=%s", deployName), + fmt.Sprintf("app.kubernetes.io/name=%s", deployName), + fmt.Sprintf("k8s-app=%s", deployName), + } + + for _, commonSelector := range 
commonLabelSelectors { + podListOptions := metav1.ListOptions{ + LabelSelector: commonSelector, + } + + namespacePodList, err := dynamicClient.Resource(supportedResources["Pod"]).Namespace(deployNamespace).List(ctx, podListOptions) + if err != nil { + continue + } + + // 统计运行中的Pod + commonPodCount := 0 + for _, pod := range namespacePodList.Items { + podStatus, found, err := unstructured.NestedString(pod.Object, "status", "phase") + if found && err == nil && podStatus == "Running" { + commonPodCount++ + } + } + + // 如果找到了Pod,使用这个计数并退出循环 + if commonPodCount > 0 { + deployPodCount = commonPodCount + foundSelector = commonSelector + break + } + } + + if deployPodCount > 0 { + klog.V(3).Infof("集群[%s] Deployment[%s/%s] 使用二次尝试选择器[%s]找到Pod数量: %d", + c.Name, deployNamespace, deployName, foundSelector, deployPodCount) + } + } + + // 如果计数仍然为0,可能需要获取Deployment的replicas值作为参考 + if deployPodCount == 0 { + replicas, found, _ := unstructured.NestedInt64(deployment.Object, "spec", "replicas") + if found && replicas > 0 { + deployPodCount = int(replicas) + klog.V(3).Infof("集群[%s] Deployment[%s/%s] 未找到Pod,使用replicas值: %d", + c.Name, deployNamespace, deployName, deployPodCount) + } + } + + // 记录该Deployment的Pod数量 + klog.V(3).Infof("最终统计: 集群[%s], Deployment[%s/%s]的运行Pod数: %d", + c.Name, deployNamespace, deployName, deployPodCount) + + // 保存精确的Pod计数 + mu.Lock() + if _, ok := actualResourceMap[resourceKind]; !ok { + actualResourceMap[resourceKind] = make(map[string]int) + } + // 存储每个具体Deployment的Pod计数,使用包含命名空间和名称的唯一标识符 + actualResourceMap[resourceKind][fmt.Sprintf("%s:%s", c.Name, deploymentUID)] = deployPodCount + mu.Unlock() + + // 累加Pod数量到总数 + totalPodCount += deployPodCount + } + + } else if resourceKind != "Deployment" { + // 对于非Deployment资源,检查是否为Karmada管理的资源 + validatedCount := 0 + + for _, resource := range list.Items { + resourceName, _, _ := unstructured.NestedString(resource.Object, "metadata", "name") + resourceNamespace, _, _ := unstructured.NestedString(resource.Object, "metadata", "namespace") + + if resourceName == "" { + continue + } + + // 检查资源标签和注释是否包含Karmada管理标记 + resourceLabels, _, _ := unstructured.NestedMap(resource.Object, "metadata", "labels") + resourceAnnotations, _, _ := unstructured.NestedMap(resource.Object, "metadata", "annotations") + + isKarmadaManaged := false + + // 检查标签 + if resourceLabels != nil { + if _, ok := resourceLabels["karmada.io/managed"]; ok { + isKarmadaManaged = true + } + if _, ok := resourceLabels["propagationpolicy.karmada.io/name"]; ok { + isKarmadaManaged = true + } + if _, ok := resourceLabels["clusterpropagationpolicy.karmada.io/name"]; ok { + isKarmadaManaged = true + } + } + + // 检查注释 + if resourceAnnotations != nil && !isKarmadaManaged { + if _, ok := resourceAnnotations["karmada.io/managed"]; ok { + isKarmadaManaged = true + } + if _, ok := resourceAnnotations["propagation.karmada.io/status"]; ok { + isKarmadaManaged = true + } + if _, ok := resourceAnnotations["resourcebinding.karmada.io/name"]; ok { + isKarmadaManaged = true + } + if _, ok := resourceAnnotations["clusterresourcebinding.karmada.io/name"]; ok { + isKarmadaManaged = true + } + } + + // 如果不是Karmada管理的资源,检查其是否在绑定中存在 + if !isKarmadaManaged { + foundInResourceBindings := false + + // 检查ResourceBinding + for _, binding := range resourceBindings.Items { + if binding.Spec.Resource.Kind == resourceKind && + binding.Spec.Resource.Name == resourceName && + (binding.Namespace == resourceNamespace || binding.Spec.Resource.Namespace == resourceNamespace) { + foundInResourceBindings = true + break + } + 
} + + // 检查ClusterResourceBinding + if !foundInResourceBindings { + for _, binding := range clusterResourceBindings.Items { + if binding.Spec.Resource.Kind == resourceKind && + binding.Spec.Resource.Name == resourceName { + foundInResourceBindings = true + break + } + } + } + + // 如果在绑定中也未找到,则跳过 + if !foundInResourceBindings { + continue + } + } + + // 到这里,说明资源是由Karmada管理的,或者在绑定中已找到 + validatedCount++ + } + + // 更新为验证后的资源数量 + count = validatedCount + } + + if count > 0 { + clusterResources[resourceKind] = count + + mu.Lock() + // 更新实际资源统计 + if _, ok := actualResourceMap[resourceKind]; !ok { + actualResourceMap[resourceKind] = make(map[string]int) + } + actualResourceMap[resourceKind][c.Name] = count + mu.Unlock() + } + } + + klog.Infof("Cluster %s has resources: %v", c.Name, clusterResources) + }(cluster) + } + + // 等待所有集群资源收集完成 + wg.Wait() + + // 使用wg.Wait()之后,添加实际部署资源到调度信息中 + // 处理actualResourceMap数据,更新到resourceSchedulingMap中 + for resourceKind, clusterMap := range actualResourceMap { + if _, ok := resourceSchedulingMap[resourceKind]; !ok { + continue + } + + // 遍历所有集群上报的实际Pod计数 + for clusterResourceKey, count := range clusterMap { + // 解析clusterResourceKey,判断是否是具体资源的计数(格式为"集群名:命名空间/类型/名称") + parts := strings.Split(clusterResourceKey, ":") + if len(parts) == 2 { + clusterName := parts[0] + resourceKey := parts[1] + + // 从resourceKey中提取资源信息(namespace/kind/name) + keyParts := strings.SplitN(resourceKey, "/", 3) + if len(keyParts) >= 3 { + namespace := keyParts[0] + name := keyParts[2] + + // 查找对应的资源信息 + resourceFound := false + + // 构建查找键 + lookupKey := fmt.Sprintf("%s/%s/%s", namespace, resourceKind, name) + if namespace == "" { + lookupKey = fmt.Sprintf("/%s/%s", resourceKind, name) + } + + if resourceInfo, exists := resourceSchedulingMap[resourceKind][lookupKey]; exists { + // 找到对应资源 + resourceFound = true + + // 更新集群分布中的实际部署计数 + clusterFound := false + for i, dist := range resourceInfo.ClusterDist { + if dist.ClusterName == clusterName { + clusterFound = true + // 更新实际部署数量 + dist.ActualCount = count + dist.Status.Actual = true + dist.Status.ActualCount = count + resourceInfo.ClusterDist[i] = dist + + // 记录日志以便调试 + klog.V(3).Infof("更新资源[%s]在集群[%s]的实际部署数: %d (计划数: %d)", + lookupKey, clusterName, count, dist.ScheduledCount) + break + } + } + + if !clusterFound { + klog.V(3).Infof("资源[%s]没有集群[%s]的分布记录,跳过更新", lookupKey, clusterName) + } + + // 重新计算资源总实际部署数 + resourceInfo.ActualCount = 0 + for _, dist := range resourceInfo.ClusterDist { + resourceInfo.ActualCount += dist.ActualCount + } + } + + if !resourceFound { + klog.V(3).Infof("未找到资源记录[%s],无法更新实际部署数", lookupKey) + } + } else { + klog.V(3).Infof("资源键[%s]格式错误,无法解析", resourceKey) + } + } else { + // 这是集群级别的计数(旧版格式),不再使用这种简化处理 + klog.V(3).Infof("跳过旧格式的集群级计数: %s = %d", clusterResourceKey, count) + } + } + } + + // 添加详细的调度资源信息到响应中 + detailedResources := make([]v1.ResourceDetailInfo, 0) + + for resourceKind, resourcesMap := range resourceSchedulingMap { + for _, info := range resourcesMap { + // 创建集群权重映射 + clusterWeights := make(map[string]int32) + + // 如果有策略设置的集群权重,优先使用策略的权重 + if len(info.ClusterWeights) > 0 { + clusterWeights = info.ClusterWeights + } else { + // 否则,使用集群注解中的权重 + for _, node := range response.Nodes { + if node.Type == "member-cluster" && node.SchedulingParams != nil { + clusterWeights[node.ID] = node.SchedulingParams.Weight + } + } + } + + // 创建详细资源信息 + detailedResource := v1.ResourceDetailInfo{ + ResourceName: info.ResourceName, + ResourceKind: resourceKind, + ResourceGroup: getResourceGroup(resourceKind), + Namespace: 
info.Namespace, + PropagationPolicy: info.PropagationPolicy, + ClusterWeights: clusterWeights, // 添加集群权重映射 + ClusterDist: info.ClusterDist, + TotalScheduledCount: info.ScheduledCount, + TotalActualCount: info.ActualCount, + } + detailedResources = append(detailedResources, detailedResource) + } + } + + // 按资源类型和名称排序 + sort.Slice(detailedResources, func(i, j int) bool { + if detailedResources[i].ResourceKind != detailedResources[j].ResourceKind { + return detailedResources[i].ResourceKind < detailedResources[j].ResourceKind + } + return detailedResources[i].ResourceName < detailedResources[j].ResourceName + }) + + response.DetailedResources = detailedResources + + // 转换资源类型统计为响应格式,并按资源类型排序 - 使用合并后的信息 + var resourceTypes []string + for resourceType := range scheduledResourceMap { + resourceTypes = append(resourceTypes, resourceType) + } + sort.Strings(resourceTypes) + + for _, resourceType := range resourceTypes { + clusterMap := scheduledResourceMap[resourceType] + typeDist := v1.ResourceTypeDistribution{ + ResourceType: resourceType, + ClusterDist: []v1.ClusterDistribution{}, + } + + // 对集群名称进行排序,保证展示顺序一致 + var clusterNames []string + for clusterName := range clusterMap { + clusterNames = append(clusterNames, clusterName) + } + sort.Strings(clusterNames) + + for _, clusterName := range clusterNames { + count := clusterMap[clusterName] + typeDist.ClusterDist = append(typeDist.ClusterDist, v1.ClusterDistribution{ + ClusterName: clusterName, + Count: count, + }) + } + + response.ResourceDist = append(response.ResourceDist, typeDist) + } + + // 修改链接信息以体现资源流向 - 每个具体资源单独显示 + // 清空之前的链接 + response.Links = []v1.ScheduleLink{} + + // 每个资源节点列表 - 使用单独节点而不是资源类型分组 + resourceNodes := []v1.ScheduleNode{} + + // 为每个具体资源创建单独的节点和链接 + for _, resource := range detailedResources { + // 创建资源唯一ID + resourceID := fmt.Sprintf("resource-%s-%s", resource.ResourceKind, resource.ResourceName) + if resource.Namespace != "" { + resourceID = fmt.Sprintf("resource-%s-%s-%s", resource.Namespace, resource.ResourceKind, resource.ResourceName) + } + + // 创建资源节点 + resourceNode := v1.ScheduleNode{ + ID: resourceID, + Name: resource.ResourceName, + Type: "resource", + ResourceInfo: &v1.ResourceNodeInfo{ + ResourceKind: resource.ResourceKind, + ResourceGroup: resource.ResourceGroup, + Namespace: resource.Namespace, + PropagationPolicy: resource.PropagationPolicy, + }, + } + + resourceNodes = append(resourceNodes, resourceNode) + + // 从控制平面到资源的链接 + response.Links = append(response.Links, v1.ScheduleLink{ + Source: "karmada-control-plane", + Target: resourceID, + Value: 1, // 控制平面到资源的值为1 + Type: resource.ResourceKind, + }) + + // 从资源到各集群的链接 + for _, dist := range resource.ClusterDist { + if dist.ScheduledCount > 0 { + response.Links = append(response.Links, v1.ScheduleLink{ + Source: resourceID, + Target: dist.ClusterName, + Value: dist.ScheduledCount, + Type: resource.ResourceKind, + }) + } + } + } + + // 将资源节点添加到响应中 + response.Nodes = append(response.Nodes, resourceNodes...) 
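+
+	// Illustrative example (hypothetical names, not fixed output of this code):
+	// for a Deployment "nginx" in namespace "default" scheduled to member1 and
+	// member2, the graph built above contains a resource node
+	// "resource-default-Deployment-nginx" plus the links
+	//   karmada-control-plane -> resource-default-Deployment-nginx (Value 1)
+	//   resource-default-Deployment-nginx -> member1 (Value = scheduled replicas)
+	//   resource-default-Deployment-nginx -> member2 (Value = scheduled replicas)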
+ + // 获取传播策略 + propagationPolicies, err := karmadaClient.PolicyV1alpha1().PropagationPolicies(metav1.NamespaceAll).List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get propagation policies") + return nil, err + } + + clusterPropagationPolicies, err := karmadaClient.PolicyV1alpha1().ClusterPropagationPolicies().List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get cluster propagation policies") + return nil, err + } + + // 将策略信息添加到响应中 + response.Summary = v1.ScheduleSummary{ + TotalClusters: len(clusterList.Items), + TotalPropagationPolicy: len(propagationPolicies.Items) + len(clusterPropagationPolicies.Items), + TotalResourceBinding: len(resourceBindings.Items) + len(clusterResourceBindings.Items), + } + + return response, nil +} + +// HandleGetSchedulePreview 处理获取集群调度预览的请求 +func HandleGetSchedulePreview(c *gin.Context) { + preview, err := GetClusterSchedulePreview() + if err != nil { + klog.ErrorS(err, "Failed to get cluster schedule preview") + common.Fail(c, err) + return + } + + common.Success(c, preview) +} + +// GetAllClusterResourcesPreview 获取所有集群资源预览信息,不局限于Karmada调度的资源 +func GetAllClusterResourcesPreview() (*v1.SchedulePreviewResponse, error) { + // 获取Karmada客户端 + karmadaClient := client.InClusterKarmadaClient() + ctx := context.TODO() + + // 初始化响应结构 + response := &v1.SchedulePreviewResponse{ + Nodes: []v1.ScheduleNode{ + { + ID: "karmada-control-plane", + Name: "Karmada控制平面", + Type: "control-plane", + }, + }, + Links: []v1.ScheduleLink{}, + ResourceDist: []v1.ResourceTypeDistribution{}, + } + + // 获取所有集群 + clusterList, err := karmadaClient.ClusterV1alpha1().Clusters().List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get cluster list") + return nil, err + } + + // 为每个集群创建节点 + for _, cluster := range clusterList.Items { + // 收集集群的调度参数 + schedulingParams := &v1.SchedulingParams{ + Labels: make(map[string]string), + } + + // 从注解中获取集群权重(默认为1) + schedulingParams.Weight = 1 + if weightStr, exists := cluster.Annotations["scheduling.karmada.io/weight"]; exists { + if weight, err := strconv.ParseInt(weightStr, 10, 32); err == nil { + schedulingParams.Weight = int32(weight) + } + } + + // 从集群注解获取污点信息 + // 注:实际实现中可能需要从其他地方获取污点信息 + taints := []v1.Taint{} + for k, v := range cluster.Annotations { + if strings.HasPrefix(k, "taint.karmada.io/") { + // 简单解析,实际环境中可能需要更复杂的逻辑 + key := strings.TrimPrefix(k, "taint.karmada.io/") + parts := strings.Split(v, ":") + effect := "NoSchedule" // 默认 + value := "" + + if len(parts) > 0 { + value = parts[0] + } + if len(parts) > 1 { + effect = parts[1] + } + + taints = append(taints, v1.Taint{ + Key: key, + Value: value, + Effect: effect, + }) + } + } + schedulingParams.Taints = taints + + // 获取集群标签 + if len(cluster.Labels) > 0 { + for k, v := range cluster.Labels { + schedulingParams.Labels[k] = v + } + } + + response.Nodes = append(response.Nodes, v1.ScheduleNode{ + ID: cluster.Name, + Name: cluster.Name, + Type: "member-cluster", + SchedulingParams: schedulingParams, + }) + } + + // 获取资源绑定信息 + resourceBindings, err := karmadaClient.WorkV1alpha2().ResourceBindings(metav1.NamespaceAll).List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get resource bindings") + return nil, err + } + + clusterResourceBindings, err := karmadaClient.WorkV1alpha2().ClusterResourceBindings().List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get cluster resource bindings") + return nil, err + } + + // 资源类型统计 - 从绑定中获取的调度信息 + 
scheduledResourceMap := make(map[string]map[string]int) + // 资源类型和集群间的链接统计 - 从绑定中获取的调度信息 + scheduledResourceLinks := make(map[string]map[string]int) + + // 实际部署的资源统计 + actualResourceMap := make(map[string]map[string]int) + + // 存储资源类型对应的资源名称 + resourceTypeToNameMap := make(map[string][]string) + + // 收集各资源类型对应的资源名称 + for _, binding := range resourceBindings.Items { + resourceType := binding.Spec.Resource.Kind + resourceName := binding.Spec.Resource.Name + if resourceName != "" { + // 如果该类型还没有初始化映射,则先初始化 + if _, exists := resourceTypeToNameMap[resourceType]; !exists { + resourceTypeToNameMap[resourceType] = []string{} + } + // 检查资源名称是否已存在,避免重复 + nameExists := false + for _, existingName := range resourceTypeToNameMap[resourceType] { + if existingName == resourceName { + nameExists = true + break + } + } + if !nameExists { + resourceTypeToNameMap[resourceType] = append(resourceTypeToNameMap[resourceType], resourceName) + } + } + } + + // 处理集群级资源 + for _, binding := range clusterResourceBindings.Items { + resourceType := binding.Spec.Resource.Kind + resourceName := binding.Spec.Resource.Name + if resourceName != "" { + // 如果该类型还没有初始化映射,则先初始化 + if _, exists := resourceTypeToNameMap[resourceType]; !exists { + resourceTypeToNameMap[resourceType] = []string{} + } + // 检查资源名称是否已存在,避免重复 + nameExists := false + for _, existingName := range resourceTypeToNameMap[resourceType] { + if existingName == resourceName { + nameExists = true + break + } + } + if !nameExists { + resourceTypeToNameMap[resourceType] = append(resourceTypeToNameMap[resourceType], resourceName) + } + } + } + + // 处理资源绑定 - 获取调度信息 + for _, binding := range resourceBindings.Items { + resourceKind := binding.Spec.Resource.Kind + + // 将资源添加到类型统计 + if _, ok := scheduledResourceMap[resourceKind]; !ok { + scheduledResourceMap[resourceKind] = make(map[string]int) + } + + // 如果链接映射不存在此资源类型,则初始化 + if _, ok := scheduledResourceLinks[resourceKind]; !ok { + scheduledResourceLinks[resourceKind] = make(map[string]int) + } + + // 为每个集群绑定创建链接 + for _, cluster := range binding.Spec.Clusters { + clusterName := cluster.Name + + // 增加资源类型到特定集群的链接计数 + scheduledResourceLinks[resourceKind][clusterName]++ + + // 增加资源类型统计 + scheduledResourceMap[resourceKind][clusterName]++ + } + } + + // 处理集群资源绑定 - 获取调度信息 + for _, binding := range clusterResourceBindings.Items { + resourceKind := binding.Spec.Resource.Kind + + // 将资源添加到类型统计 + if _, ok := scheduledResourceMap[resourceKind]; !ok { + scheduledResourceMap[resourceKind] = make(map[string]int) + } + + // 如果链接映射不存在此资源类型,则初始化 + if _, ok := scheduledResourceLinks[resourceKind]; !ok { + scheduledResourceLinks[resourceKind] = make(map[string]int) + } + + // 为每个集群绑定创建链接 + for _, cluster := range binding.Spec.Clusters { + clusterName := cluster.Name + + // 增加资源类型到特定集群的链接计数 + scheduledResourceLinks[resourceKind][clusterName]++ + + // 增加资源类型统计 + scheduledResourceMap[resourceKind][clusterName]++ + } + } + + // 收集实际部署的资源信息 + // 并发获取各集群资源 + var wg sync.WaitGroup + var mu sync.Mutex // 保护map的并发访问 + + for i := range clusterList.Items { + cluster := &clusterList.Items[i] + wg.Add(1) + + go func(c *clusterv1alpha1.Cluster) { + defer wg.Done() + + // 使用现有的客户端函数获取成员集群客户端 + kubeClient := client.InClusterClientForMemberCluster(c.Name) + if kubeClient == nil { + klog.ErrorS(fmt.Errorf("failed to get client"), "Could not get client for cluster", "cluster", c.Name) + return + } + + // 创建动态客户端 - 通过设置相同的配置 + config, err := client.GetMemberConfig() + if err != nil { + klog.ErrorS(err, "Failed to get member config", "cluster", c.Name) + 
return + } + + // 修改配置以指向特定集群 + restConfig := rest.CopyConfig(config) + // 获取karmada配置 + karmadaConfig, _, err := client.GetKarmadaConfig() + if err != nil { + klog.ErrorS(err, "Failed to get karmada config", "cluster", c.Name) + return + } + // 使用固定的代理URL格式 - client包中定义的proxyURL常量为非导出 + proxyURL := "/apis/cluster.karmada.io/v1alpha1/clusters/%s/proxy/" + restConfig.Host = karmadaConfig.Host + fmt.Sprintf(proxyURL, c.Name) + + dynamicClient, err := dynamic.NewForConfig(restConfig) + if err != nil { + klog.ErrorS(err, "Failed to create dynamic client", "cluster", c.Name) + return + } + + // 初始化该集群的资源统计 + clusterResources := make(map[string]int) + + // 查询所有支持的资源类型 + for resourceKind, gvr := range supportedResources { + // 查询资源列表 + list, err := dynamicClient.Resource(gvr).Namespace(metav1.NamespaceAll).List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to list resources", "cluster", c.Name, "resource", resourceKind) + continue + } + + // 记录资源数量 + count := len(list.Items) + + // 对于Deployment类型,需要获取实际的Pod数量而不是Deployment对象数量 + if resourceKind == "Deployment" && count > 0 { + // Pod计数总和 + totalPodCount := 0 + + // 遍历每个Deployment + for _, deployment := range list.Items { + // 提取Deployment名称和命名空间 + deployName, _, _ := unstructured.NestedString(deployment.Object, "metadata", "name") + deployNamespace, _, _ := unstructured.NestedString(deployment.Object, "metadata", "namespace") + + if deployName == "" { + continue + } + + // 检查该Deployment是否由Karmada调度 - 通过检查特定标签或注释 + // Karmada调度的资源通常会有特定标签 + deployLabels, _, _ := unstructured.NestedMap(deployment.Object, "metadata", "labels") + deployAnnotations, _, _ := unstructured.NestedMap(deployment.Object, "metadata", "annotations") + + // 检查是否有Karmada调度相关的标签或注释 + isKarmadaManaged := false + + // 检查特定的Karmada标签 + if deployLabels != nil { + // 检查常见的Karmada标签 + if _, ok := deployLabels["karmada.io/managed"]; ok { + isKarmadaManaged = true + } + if _, ok := deployLabels["propagationpolicy.karmada.io/name"]; ok { + isKarmadaManaged = true + } + if _, ok := deployLabels["clusterpropagationpolicy.karmada.io/name"]; ok { + isKarmadaManaged = true + } + } + + // 检查特定的Karmada注释 + if deployAnnotations != nil && !isKarmadaManaged { + if _, ok := deployAnnotations["karmada.io/managed"]; ok { + isKarmadaManaged = true + } + if _, ok := deployAnnotations["propagation.karmada.io/status"]; ok { + isKarmadaManaged = true + } + if _, ok := deployAnnotations["resourcebinding.karmada.io/name"]; ok { + isKarmadaManaged = true + } + if _, ok := deployAnnotations["clusterresourcebinding.karmada.io/name"]; ok { + isKarmadaManaged = true + } + } + + // 如果不是由Karmada管理的资源,则跳过 + if !isKarmadaManaged { + // 额外验证:检查该资源是否在ResourceBinding或ClusterResourceBinding中存在 + // 检查ResourceBinding + foundInResourceBindings := false + for _, binding := range resourceBindings.Items { + if binding.Spec.Resource.Kind == "Deployment" && + binding.Spec.Resource.Name == deployName && + (binding.Namespace == deployNamespace || binding.Spec.Resource.Namespace == deployNamespace) { + foundInResourceBindings = true + break + } + } + + // 检查ClusterResourceBinding + if !foundInResourceBindings { + for _, binding := range clusterResourceBindings.Items { + if binding.Spec.Resource.Kind == "Deployment" && + binding.Spec.Resource.Name == deployName { + foundInResourceBindings = true + break + } + } + } + + // 如果在绑定中也未找到,则确认跳过该资源 + if !foundInResourceBindings { + klog.V(4).Infof("Skipping non-Karmada managed deployment %s/%s in cluster %s", + deployNamespace, deployName, c.Name) + continue + } 
+ } + + // 获取Deployment的Pod selector + var podSelector map[string]string + selectorObj, found, _ := unstructured.NestedMap(deployment.Object, "spec", "selector", "matchLabels") + if found && selectorObj != nil { + podSelector = make(map[string]string) + for k, v := range selectorObj { + if strVal, ok := v.(string); ok { + podSelector[k] = strVal + } + } + } else { + // 如果没有找到matchLabels,使用默认的app标签 + podSelector = map[string]string{"app": deployName} + } + + // 记录原始Deployment的UniqKey,用于后面关联Pod数量 + deploymentUID := fmt.Sprintf("%s/%s/%s", deployNamespace, resourceKind, deployName) + deployPodCount := 0 + + // 构建实际的标签选择器字符串 + labelSelector := "" + for key, value := range podSelector { + if labelSelector != "" { + labelSelector += "," + } + labelSelector += fmt.Sprintf("%s=%s", key, value) + } + + // 只计算匹配标签的Pod + if labelSelector != "" { + podListOptions := metav1.ListOptions{ + LabelSelector: labelSelector, + } + + // 在Deployment所在的命名空间中查找Pod + namespacePodList, err := dynamicClient.Resource(supportedResources["Pod"]).Namespace(deployNamespace).List(ctx, podListOptions) + if err == nil && namespacePodList != nil { + // 获取运行中的Pod数量 + for _, pod := range namespacePodList.Items { + podStatus, found, err := unstructured.NestedString(pod.Object, "status", "phase") + if found && err == nil && podStatus == "Running" { + deployPodCount++ + } + } + klog.V(3).Infof("集群[%s] Deployment[%s/%s] 匹配选择器[%s]的Pod数量: %d", + c.Name, deployNamespace, deployName, labelSelector, deployPodCount) + } else if err != nil { + klog.Warningf("获取集群[%s]命名空间[%s]中Pod失败: %v", c.Name, deployNamespace, err) + } + } + + // 如果通过标签选择器没找到Pod,尝试使用常见标签模式 + if deployPodCount == 0 { + // 记录找到Pod的选择器,便于调试 + foundSelector := "" + // 尝试其他常见的标签格式 + commonLabelSelectors := []string{ + fmt.Sprintf("app=%s", deployName), + fmt.Sprintf("app.kubernetes.io/name=%s", deployName), + fmt.Sprintf("k8s-app=%s", deployName), + } + + for _, commonSelector := range commonLabelSelectors { + podListOptions := metav1.ListOptions{ + LabelSelector: commonSelector, + } + + namespacePodList, err := dynamicClient.Resource(supportedResources["Pod"]).Namespace(deployNamespace).List(ctx, podListOptions) + if err != nil { + continue + } + + // 统计运行中的Pod + commonPodCount := 0 + for _, pod := range namespacePodList.Items { + podStatus, found, err := unstructured.NestedString(pod.Object, "status", "phase") + if found && err == nil && podStatus == "Running" { + commonPodCount++ + } + } + + // 如果找到了Pod,使用这个计数并退出循环 + if commonPodCount > 0 { + deployPodCount = commonPodCount + foundSelector = commonSelector + break + } + } + + if deployPodCount > 0 { + klog.V(3).Infof("集群[%s] Deployment[%s/%s] 使用二次尝试选择器[%s]找到Pod数量: %d", + c.Name, deployNamespace, deployName, foundSelector, deployPodCount) + } + } + + // 如果计数仍然为0,可能需要获取Deployment的replicas值作为参考 + if deployPodCount == 0 { + replicas, found, _ := unstructured.NestedInt64(deployment.Object, "spec", "replicas") + if found && replicas > 0 { + deployPodCount = int(replicas) + klog.V(3).Infof("集群[%s] Deployment[%s/%s] 未找到Pod,使用replicas值: %d", + c.Name, deployNamespace, deployName, deployPodCount) + } + } + + // 记录该Deployment的Pod数量 + klog.V(3).Infof("最终统计: 集群[%s], Deployment[%s/%s]的运行Pod数: %d", + c.Name, deployNamespace, deployName, deployPodCount) + + // 保存精确的Pod计数 + mu.Lock() + if _, ok := actualResourceMap[resourceKind]; !ok { + actualResourceMap[resourceKind] = make(map[string]int) + } + // 存储每个具体Deployment的Pod计数,使用包含命名空间和名称的唯一标识符 + actualResourceMap[resourceKind][fmt.Sprintf("%s:%s", c.Name, deploymentUID)] = deployPodCount + 
mu.Unlock() + + // 累加Pod数量到总数 + totalPodCount += deployPodCount + } + + } else if resourceKind != "Deployment" { + // 对于非Deployment资源,检查是否为Karmada管理的资源 + validatedCount := 0 + + for _, resource := range list.Items { + resourceName, _, _ := unstructured.NestedString(resource.Object, "metadata", "name") + resourceNamespace, _, _ := unstructured.NestedString(resource.Object, "metadata", "namespace") + + if resourceName == "" { + continue + } + + // 检查资源标签和注释是否包含Karmada管理标记 + resourceLabels, _, _ := unstructured.NestedMap(resource.Object, "metadata", "labels") + resourceAnnotations, _, _ := unstructured.NestedMap(resource.Object, "metadata", "annotations") + + isKarmadaManaged := false + + // 检查标签 + if resourceLabels != nil { + if _, ok := resourceLabels["karmada.io/managed"]; ok { + isKarmadaManaged = true + } + if _, ok := resourceLabels["propagationpolicy.karmada.io/name"]; ok { + isKarmadaManaged = true + } + if _, ok := resourceLabels["clusterpropagationpolicy.karmada.io/name"]; ok { + isKarmadaManaged = true + } + } + + // 检查注释 + if resourceAnnotations != nil && !isKarmadaManaged { + if _, ok := resourceAnnotations["karmada.io/managed"]; ok { + isKarmadaManaged = true + } + if _, ok := resourceAnnotations["propagation.karmada.io/status"]; ok { + isKarmadaManaged = true + } + if _, ok := resourceAnnotations["resourcebinding.karmada.io/name"]; ok { + isKarmadaManaged = true + } + if _, ok := resourceAnnotations["clusterresourcebinding.karmada.io/name"]; ok { + isKarmadaManaged = true + } + } + + // 如果不是Karmada管理的资源,检查其是否在绑定中存在 + if !isKarmadaManaged { + foundInResourceBindings := false + + // 检查ResourceBinding + for _, binding := range resourceBindings.Items { + if binding.Spec.Resource.Kind == resourceKind && + binding.Spec.Resource.Name == resourceName && + (binding.Namespace == resourceNamespace || binding.Spec.Resource.Namespace == resourceNamespace) { + foundInResourceBindings = true + break + } + } + + // 检查ClusterResourceBinding + if !foundInResourceBindings { + for _, binding := range clusterResourceBindings.Items { + if binding.Spec.Resource.Kind == resourceKind && + binding.Spec.Resource.Name == resourceName { + foundInResourceBindings = true + break + } + } + } + + // 如果在绑定中也未找到,则跳过 + if !foundInResourceBindings { + continue + } + } + + // 到这里,说明资源是由Karmada管理的,或者在绑定中已找到 + validatedCount++ + } + + // 更新为验证后的资源数量 + count = validatedCount + } + + if count > 0 { + clusterResources[resourceKind] = count + + mu.Lock() + // 更新实际资源统计 + if _, ok := actualResourceMap[resourceKind]; !ok { + actualResourceMap[resourceKind] = make(map[string]int) + } + actualResourceMap[resourceKind][c.Name] = count + mu.Unlock() + } + } + + klog.Infof("Cluster %s has resources: %v", c.Name, clusterResources) + }(cluster) + } + + // 等待所有集群资源收集完成 + wg.Wait() + + // 合并调度信息和实际部署信息 + // 对于实际资源部署,使用实际发现的数量 + // 如果实际未发现资源,但存在调度记录,则保留调度记录 + mergedResourceMap := make(map[string]map[string]int) + mergedResourceLinks := make(map[string]map[string]int) + + // 先处理所有调度信息 + for resourceKind, clusterMap := range scheduledResourceMap { + if _, ok := mergedResourceMap[resourceKind]; !ok { + mergedResourceMap[resourceKind] = make(map[string]int) + } + + if _, ok := mergedResourceLinks[resourceKind]; !ok { + mergedResourceLinks[resourceKind] = make(map[string]int) + } + + for clusterName, count := range clusterMap { + mergedResourceMap[resourceKind][clusterName] = count + mergedResourceLinks[resourceKind][clusterName] = count + } + } + + // 再处理所有实际部署信息 + for resourceKind, clusterMap := range actualResourceMap { + if _, ok := 
mergedResourceMap[resourceKind]; !ok { + mergedResourceMap[resourceKind] = make(map[string]int) + } + + if _, ok := mergedResourceLinks[resourceKind]; !ok { + mergedResourceLinks[resourceKind] = make(map[string]int) + } + + for clusterName, count := range clusterMap { + mergedResourceMap[resourceKind][clusterName] = count + mergedResourceLinks[resourceKind][clusterName] = count + } + } + + // 创建链接 - 使用合并后的信息 + for resourceKind, clusterMap := range mergedResourceLinks { + for clusterName, count := range clusterMap { + response.Links = append(response.Links, v1.ScheduleLink{ + Source: "karmada-control-plane", + Target: clusterName, + Value: count, + Type: resourceKind, + }) + } + } + + // 转换资源类型统计为响应格式,并按资源类型排序 - 使用合并后的信息 + var resourceTypes []string + for resourceType := range mergedResourceMap { + resourceTypes = append(resourceTypes, resourceType) + } + sort.Strings(resourceTypes) + + for _, resourceType := range resourceTypes { + clusterMap := mergedResourceMap[resourceType] + typeDist := v1.ResourceTypeDistribution{ + ResourceType: resourceType, + ClusterDist: []v1.ClusterDistribution{}, + } + + // 对集群名称进行排序,保证展示顺序一致 + var clusterNames []string + for clusterName := range clusterMap { + clusterNames = append(clusterNames, clusterName) + } + sort.Strings(clusterNames) + + for _, clusterName := range clusterNames { + count := clusterMap[clusterName] + typeDist.ClusterDist = append(typeDist.ClusterDist, v1.ClusterDistribution{ + ClusterName: clusterName, + Count: count, + }) + } + + response.ResourceDist = append(response.ResourceDist, typeDist) + } + + // 添加实际资源分布信息 + // 这一部分将同时显示调度计划和实际部署情况 + actualResourceDist := make([]v1.ActualResourceTypeDistribution, 0) + + // 使用与前面相同的资源类型列表,保持一致性 + for _, resourceType := range resourceTypes { + // 只有当该资源类型有调度信息时才进行处理 + if _, exists := scheduledResourceMap[resourceType]; !exists { + continue + } + + dist := v1.ActualResourceTypeDistribution{ + ResourceType: resourceType, + ResourceGroup: getResourceGroup(resourceType), + ClusterDist: []v1.ActualClusterDistribution{}, + TotalScheduledCount: 0, + TotalActualCount: 0, + ResourceNames: resourceTypeToNameMap[resourceType], + } + + // 排序资源名称列表,使显示更加有序 + sort.Strings(dist.ResourceNames) + + // 合并调度和实际部署信息 + scheduledMap := scheduledResourceMap[resourceType] + actualMap := actualResourceMap[resourceType] + + // 收集所有相关集群 + clustersSet := make(map[string]bool) + for cluster := range scheduledMap { + clustersSet[cluster] = true + } + // 只收集在scheduledMap中有记录的集群 + for cluster := range actualMap { + if _, hasSchedule := scheduledMap[cluster]; hasSchedule { + clustersSet[cluster] = true + } + } + + // 对集群名称进行排序 + var clusters []string + for cluster := range clustersSet { + clusters = append(clusters, cluster) + } + sort.Strings(clusters) + + // 为每个集群创建分布记录 + for _, clusterName := range clusters { + scheduledCount := scheduledMap[clusterName] + actualCount := actualMap[clusterName] + + dist.TotalScheduledCount += scheduledCount + dist.TotalActualCount += actualCount + + clusterDist := v1.ActualClusterDistribution{ + ClusterName: clusterName, + ScheduledCount: scheduledCount, + ActualCount: actualCount, + Status: v1.ResourceDeploymentStatus{ + Scheduled: scheduledCount > 0, + Actual: actualCount > 0, + ScheduledCount: scheduledCount, + ActualCount: actualCount, + }, + } + dist.ClusterDist = append(dist.ClusterDist, clusterDist) + } + + // 只有当至少有一个调度记录或实际部署记录时,才添加到结果中 + if dist.TotalScheduledCount > 0 || dist.TotalActualCount > 0 { + actualResourceDist = append(actualResourceDist, dist) + } + } + + // 将实际资源分布添加到响应中 + 
response.ActualResourceDist = actualResourceDist + + // 获取传播策略 + propagationPolicies, err := karmadaClient.PolicyV1alpha1().PropagationPolicies(metav1.NamespaceAll).List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get propagation policies") + return nil, err + } + + clusterPropagationPolicies, err := karmadaClient.PolicyV1alpha1().ClusterPropagationPolicies().List(ctx, metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get cluster propagation policies") + return nil, err + } + + // 将策略信息添加到响应中 + response.Summary = v1.ScheduleSummary{ + TotalClusters: len(clusterList.Items), + TotalPropagationPolicy: len(propagationPolicies.Items) + len(clusterPropagationPolicies.Items), + TotalResourceBinding: len(resourceBindings.Items) + len(clusterResourceBindings.Items), + } + + return response, nil +} + +// HandleGetAllClusterResourcesPreview 处理获取所有集群资源预览的请求 +func HandleGetAllClusterResourcesPreview(c *gin.Context) { + preview, err := GetAllClusterResourcesPreview() + if err != nil { + klog.ErrorS(err, "Failed to get all cluster resources preview") + common.Fail(c, err) + return + } + + common.Success(c, preview) +} + +// 在init中已注册,此处不需要额外添加路由 diff --git a/cmd/api/app/routes/overview/topology/handler.go b/cmd/api/app/routes/overview/topology/handler.go new file mode 100644 index 00000000..fead776e --- /dev/null +++ b/cmd/api/app/routes/overview/topology/handler.go @@ -0,0 +1,80 @@ +/* +Copyright 2024 The Karmada Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/
+
+package topology
+
+import (
+	"github.com/gin-gonic/gin"
+	"k8s.io/klog/v2"
+
+	v1 "github.com/karmada-io/dashboard/cmd/api/app/types/api/v1"
+	"github.com/karmada-io/dashboard/cmd/api/app/types/common"
+)
+
+// HandleGetTopology handles requests for the full topology graph.
+func HandleGetTopology(c *gin.Context) {
+	// Parse the query parameters.
+	showResources := c.DefaultQuery("showResources", "true") == "true"
+	showNodes := c.DefaultQuery("showNodes", "true") == "true"
+	showPods := c.DefaultQuery("showPods", "false") == "true"
+
+	// Build the topology graph data.
+	topologyData, err := GetTopologyData(showResources, showNodes, showPods)
+	if err != nil {
+		klog.ErrorS(err, "Failed to get topology data")
+		common.Fail(c, err)
+		return
+	}
+
+	// Return the success response.
+	common.Success(c, &v1.TopologyResponse{
+		Data: *topologyData,
+	})
+}
+
+// HandleGetClusterTopology handles requests for the topology graph of a single cluster.
+func HandleGetClusterTopology(c *gin.Context) {
+	// Read the cluster name from the path.
+	clusterName := c.Param("clusterName")
+	if clusterName == "" {
+		common.Fail(c, common.NewBadRequestError("cluster name is required"))
+		return
+	}
+
+	// Parse the query parameters.
+	showResources := c.DefaultQuery("showResources", "true") == "true"
+	showNodes := c.DefaultQuery("showNodes", "true") == "true"
+	showPods := c.DefaultQuery("showPods", "false") == "true"
+
+	// Build the topology graph data for the requested cluster.
+	topologyData, err := GetClusterTopologyData(clusterName, showResources, showNodes, showPods)
+	if err != nil {
+		klog.ErrorS(err, "Failed to get cluster topology data", "cluster", clusterName)
+		common.Fail(c, err)
+		return
+	}
+
+	// Return the success response.
+	common.Success(c, &v1.TopologyResponse{
+		Data: *topologyData,
+	})
+}
+
+// RegisterRoutes registers the topology routes.
+func RegisterRoutes(r gin.IRoutes) {
+	r.GET("/overview/topology", HandleGetTopology)
+	r.GET("/overview/topology/:clusterName", HandleGetClusterTopology)
+}
diff --git a/cmd/api/app/routes/overview/topology/topology.go b/cmd/api/app/routes/overview/topology/topology.go
new file mode 100644
index 00000000..2511685f
--- /dev/null
+++ b/cmd/api/app/routes/overview/topology/topology.go
@@ -0,0 +1,691 @@
+/*
+Copyright 2024 The Karmada Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package topology
+
+import (
+	"context"
+	"fmt"
+	"strconv"
+	"strings"
+	"sync"
+
+	corev1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+	"k8s.io/apimachinery/pkg/runtime/schema"
+	"k8s.io/client-go/dynamic"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/rest"
+	"k8s.io/klog/v2"
+
+	clusterv1alpha1 "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
+
+	v1 "github.com/karmada-io/dashboard/cmd/api/app/types/api/v1"
+	"github.com/karmada-io/dashboard/pkg/client"
+)
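+
+// Sketch of how the GVRs declared below are consumed elsewhere in this file
+// (the "Deployment" key is one entry of supportedResources):
+//
+//	list, err := dynamicClient.Resource(supportedResources["Deployment"]).
+//		Namespace(metav1.NamespaceAll).
+//		List(ctx, metav1.ListOptions{})
+//
+// Every per-cluster query in processCluster follows this pattern.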
+
+// supportedResources maps each resource kind shown in the topology view to its GroupVersionResource.
+var supportedResources = map[string]schema.GroupVersionResource{
+	"Deployment": {
+		Group:    "apps",
+		Version:  "v1",
+		Resource: "deployments",
+	},
+	"Pod": {
+		Group:    "",
+		Version:  "v1",
+		Resource: "pods",
+	},
+	"Service": {
+		Group:    "",
+		Version:  "v1",
+		Resource: "services",
+	},
+	"Node": {
+		Group:    "",
+		Version:  "v1",
+		Resource: "nodes",
+	},
+	"StatefulSet": {
+		Group:    "apps",
+		Version:  "v1",
+		Resource: "statefulsets",
+	},
+	"DaemonSet": {
+		Group:    "apps",
+		Version:  "v1",
+		Resource: "daemonsets",
+	},
+}
+
+// GetTopologyData builds the topology graph for the whole Karmada federation.
+func GetTopologyData(showResources, showNodes, showPods bool) (*v1.TopologyData, error) {
+	// Get the Karmada client.
+	karmadaClient := client.InClusterKarmadaClient()
+	ctx := context.TODO()
+
+	// Initialize the topology graph data.
+	topologyData := &v1.TopologyData{
+		Nodes: []v1.TopologyNode{},
+		Edges: []v1.TopologyEdge{},
+		Summary: &v1.TopologySummary{
+			ResourceDistribution: make(map[string]int),
+		},
+	}
+
+	// Add the Karmada control-plane node.
+	controlPlaneNode := v1.TopologyNode{
+		ID:       "karmada-control-plane",
+		Name:     "Karmada控制平面",
+		Type:     "control-plane",
+		Status:   "ready",
+		ParentID: "",
+	}
+	topologyData.Nodes = append(topologyData.Nodes, controlPlaneNode)
+
+	// List all member clusters.
+	clusterList, err := karmadaClient.ClusterV1alpha1().Clusters().List(ctx, metav1.ListOptions{})
+	if err != nil {
+		klog.ErrorS(err, "Failed to get cluster list")
+		return nil, err
+	}
+
+	// Initialize the summary counters.
+	topologyData.Summary.TotalClusters = len(clusterList.Items)
+	topologyData.Summary.TotalNodes = 0
+	topologyData.Summary.TotalPods = 0
+
+	// Process the clusters concurrently.
+	var wg sync.WaitGroup
+	var mu sync.Mutex
+	clusterNodeMap := make(map[string][]v1.TopologyNode)
+	clusterEdgeMap := make(map[string][]v1.TopologyEdge)
+
+	for i := range clusterList.Items {
+		cluster := &clusterList.Items[i]
+		wg.Add(1)
+
+		go func(c *clusterv1alpha1.Cluster) {
+			defer wg.Done()
+
+			// Process a single cluster.
+			clusterNodes, clusterEdges, clusterStats, err := processCluster(c, showResources, showNodes, showPods)
+			if err != nil {
+				klog.ErrorS(err, "Failed to process cluster", "cluster", c.Name)
+				return
+			}
+
+			// Update the shared maps under the lock.
+			mu.Lock()
+			defer mu.Unlock()
+
+			// Store the nodes and edges of this cluster.
+			clusterNodeMap[c.Name] = clusterNodes
+			clusterEdgeMap[c.Name] = clusterEdges
+
+			// Update the summary counters.
+			topologyData.Summary.TotalNodes += clusterStats.TotalNodes
+			topologyData.Summary.TotalPods += clusterStats.TotalPods
+
+			// Merge the per-cluster resource distribution.
+			for resourceType, count := range clusterStats.ResourceDistribution {
+				topologyData.Summary.ResourceDistribution[resourceType] += count
+			}
+		}(cluster)
+	}
+
+	// Wait until all clusters are processed.
+	wg.Wait()
+
+	// Add the cluster nodes to the graph.
+	for _, cluster := range clusterList.Items {
+		// Create the cluster node.
+		clusterNode := createClusterNode(&cluster)
+		topologyData.Nodes = append(topologyData.Nodes, clusterNode)
+
+		// Add an edge from the control plane to the cluster.
+		controlToClusterEdge := v1.TopologyEdge{
+			ID:     fmt.Sprintf("edge-control-to-%s", cluster.Name),
+			Source: controlPlaneNode.ID,
+			Target: clusterNode.ID,
+			Type:   "control",
+			Value:  1,
+		}
+		topologyData.Edges = append(topologyData.Edges, controlToClusterEdge)
+
+		// Append the in-cluster nodes and edges collected above.
+		if nodes, ok := clusterNodeMap[cluster.Name]; ok {
+			topologyData.Nodes = append(topologyData.Nodes, nodes...)
+		}
+		if edges, ok := clusterEdgeMap[cluster.Name]; ok {
+			topologyData.Edges = append(topologyData.Edges, edges...)
+		}
+	}
+
+	return topologyData, nil
+}
+
+// GetClusterTopologyData builds the topology graph for a specific cluster.
+func GetClusterTopologyData(clusterName string, showResources, showNodes, showPods bool) (*v1.TopologyData, error) {
+	// Get the Karmada client.
+	karmadaClient := client.InClusterKarmadaClient()
+	ctx := context.TODO()
+
+	// Fetch the requested cluster.
+	cluster, err := karmadaClient.ClusterV1alpha1().Clusters().Get(ctx, clusterName, metav1.GetOptions{})
+	if err != nil {
+		klog.ErrorS(err, "Failed to get cluster", "cluster", clusterName)
+		return nil, err
+	}
+
+	// Initialize the topology graph data.
+	topologyData := &v1.TopologyData{
+		Nodes: []v1.TopologyNode{},
+		Edges: []v1.TopologyEdge{},
+		Summary: &v1.TopologySummary{
+			TotalClusters:        1,
+			ResourceDistribution: make(map[string]int),
+		},
+	}
+
+	// Add the Karmada control-plane node.
+	controlPlaneNode := v1.TopologyNode{
+		ID:       "karmada-control-plane",
+		Name:     "Karmada控制平面",
+		Type:     "control-plane",
+		Status:   "ready",
+		ParentID: "",
+	}
+	topologyData.Nodes = append(topologyData.Nodes, controlPlaneNode)
+
+	// Create the cluster node.
+	clusterNode := createClusterNode(cluster)
+	topologyData.Nodes = append(topologyData.Nodes, clusterNode)
+
+	// Add an edge from the control plane to the cluster.
+	controlToClusterEdge := v1.TopologyEdge{
+		ID:     fmt.Sprintf("edge-control-to-%s", cluster.Name),
+		Source: controlPlaneNode.ID,
+		Target: clusterNode.ID,
+		Type:   "control",
+		Value:  1,
+	}
+	topologyData.Edges = append(topologyData.Edges, controlToClusterEdge)
+
+	// Collect the nodes, pods and resources inside the cluster.
+	clusterNodes, clusterEdges, clusterStats, err := processCluster(cluster, showResources, showNodes, showPods)
+	if err != nil {
+		klog.ErrorS(err, "Failed to process cluster", "cluster", cluster.Name)
+		return nil, err
+	}
+
+	// Append the in-cluster nodes and edges.
+	topologyData.Nodes = append(topologyData.Nodes, clusterNodes...)
+	topologyData.Edges = append(topologyData.Edges, clusterEdges...)
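+
+	// The resulting graph is a tree rooted at the control plane, for example
+	// (hypothetical names): karmada-control-plane -> member1 ->
+	// node-member1-worker -> pod-member1-default-nginx-abc12, connected by
+	// edges such as "edge-member1-to-node-member1-worker" built in processCluster.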
+
+	// Update the summary counters.
+	topologyData.Summary.TotalNodes = clusterStats.TotalNodes
+	topologyData.Summary.TotalPods = clusterStats.TotalPods
+	topologyData.Summary.ResourceDistribution = clusterStats.ResourceDistribution
+
+	return topologyData, nil
+}
+
+// processCluster collects the topology information of a single cluster.
+func processCluster(cluster *clusterv1alpha1.Cluster, showResources, showNodes, showPods bool) ([]v1.TopologyNode, []v1.TopologyEdge, *v1.TopologySummary, error) {
+	var nodes []v1.TopologyNode
+	var edges []v1.TopologyEdge
+	summary := &v1.TopologySummary{
+		TotalNodes:           0,
+		TotalPods:            0,
+		ResourceDistribution: make(map[string]int),
+	}
+
+	// Get the member cluster client.
+	kubeClient := client.InClusterClientForMemberCluster(cluster.Name)
+	if kubeClient == nil {
+		return nil, nil, nil, fmt.Errorf("failed to get client for cluster %s", cluster.Name)
+	}
+
+	// Create a dynamic client from the same configuration.
+	config, err := client.GetMemberConfig()
+	if err != nil {
+		return nil, nil, nil, fmt.Errorf("failed to get member config for cluster %s: %v", cluster.Name, err)
+	}
+
+	// Point the config at the specific cluster through the Karmada proxy.
+	restConfig := rest.CopyConfig(config)
+	karmadaConfig, _, err := client.GetKarmadaConfig()
+	if err != nil {
+		return nil, nil, nil, fmt.Errorf("failed to get karmada config for cluster %s: %v", cluster.Name, err)
+	}
+	proxyURL := "/apis/cluster.karmada.io/v1alpha1/clusters/%s/proxy/"
+	restConfig.Host = karmadaConfig.Host + fmt.Sprintf(proxyURL, cluster.Name)
+
+	dynamicClient, err := dynamic.NewForConfig(restConfig)
+	if err != nil {
+		return nil, nil, nil, fmt.Errorf("failed to create dynamic client for cluster %s: %v", cluster.Name, err)
+	}
+
+	// Collect node information.
+	if showNodes {
+		nodeList, err := dynamicClient.Resource(supportedResources["Node"]).List(context.TODO(), metav1.ListOptions{})
+		if err != nil {
+			klog.ErrorS(err, "Failed to get nodes", "cluster", cluster.Name)
+		} else {
+			summary.TotalNodes = len(nodeList.Items)
+
+			// Process each node.
+			for _, nodeObj := range nodeList.Items {
+				nodeName, _, _ := unstructured.NestedString(nodeObj.Object, "metadata", "name")
+				if nodeName == "" {
+					continue
+				}
+
+				// Derive the node status from its Ready condition.
+				nodeStatus := "notready"
+				conditions, _, _ := unstructured.NestedSlice(nodeObj.Object, "status", "conditions")
+				for _, condObj := range conditions {
+					cond, ok := condObj.(map[string]interface{})
+					if !ok {
+						continue
+					}
+					condType, _, _ := unstructured.NestedString(cond, "type")
+					condStatus, _, _ := unstructured.NestedString(cond, "status")
+					if condType == "Ready" && condStatus == "True" {
+						nodeStatus = "ready"
+						break
+					}
+				}
+
+				// Build the node resource usage.
+				nodeResources := &v1.NodeResources{}
+
+				// CPU capacity.
+				allocatableCPU, _, _ := unstructured.NestedString(nodeObj.Object, "status", "allocatable", "cpu")
+				capacityCPU, _, _ := unstructured.NestedString(nodeObj.Object, "status", "capacity", "cpu")
+				if allocatableCPU != "" && capacityCPU != "" {
+					nodeResources.CPU = &v1.ResourceUsage{
+						Used:      allocatableCPU,
+						Total:     capacityCPU,
+						UsageRate: 0, // TODO: compute the real usage rate.
+					}
+				}
+
+				// Memory capacity.
+				allocatableMemory, _, _ := unstructured.NestedString(nodeObj.Object, "status", "allocatable", "memory")
+				capacityMemory, _, _ := unstructured.NestedString(nodeObj.Object, "status", "capacity", "memory")
+				if allocatableMemory != "" && capacityMemory != "" {
+					nodeResources.Memory = &v1.ResourceUsage{
+						Used:      allocatableMemory,
+						Total:     capacityMemory,
+						UsageRate: 0, // TODO: compute the real usage rate.
+					}
+				}
+
+				// Pod capacity.
+				allocatablePods, _, _ := unstructured.NestedString(nodeObj.Object, "status", "allocatable", "pods")
+				capacityPods, _, _ := unstructured.NestedString(nodeObj.Object, "status", "capacity", "pods")
+				if allocatablePods != "" && capacityPods != "" {
allocatablePods != "" && capacityPods != "" { + nodeResources.Pods = &v1.ResourceUsage{ + Used: allocatablePods, + Total: capacityPods, + UsageRate: 0, // 需要计算实际使用率 + } + } + + // 获取节点标签 + nodeLabels := make(map[string]string) + labels, _, _ := unstructured.NestedMap(nodeObj.Object, "metadata", "labels") + for k, v := range labels { + if strVal, ok := v.(string); ok { + nodeLabels[k] = strVal + } + } + + // 创建节点 + nodeID := fmt.Sprintf("node-%s-%s", cluster.Name, nodeName) + node := v1.TopologyNode{ + ID: nodeID, + Name: nodeName, + Type: "node", + Status: nodeStatus, + ParentID: cluster.Name, + Resources: nodeResources, + Labels: nodeLabels, + } + nodes = append(nodes, node) + + // 添加从集群到节点的边 + clusterToNodeEdge := v1.TopologyEdge{ + ID: fmt.Sprintf("edge-%s-to-%s", cluster.Name, nodeID), + Source: cluster.Name, + Target: nodeID, + Type: "control", + Value: 1, + } + edges = append(edges, clusterToNodeEdge) + + // 如果需要显示Pod,则获取该节点上的Pod + if showPods { + fieldSelector := fmt.Sprintf("spec.nodeName=%s", nodeName) + podList, err := dynamicClient.Resource(supportedResources["Pod"]).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{ + FieldSelector: fieldSelector, + }) + if err != nil { + klog.ErrorS(err, "Failed to get pods for node", "cluster", cluster.Name, "node", nodeName) + continue + } + + // 处理每个Pod + for _, podObj := range podList.Items { + podName, _, _ := unstructured.NestedString(podObj.Object, "metadata", "name") + podNamespace, _, _ := unstructured.NestedString(podObj.Object, "metadata", "namespace") + if podName == "" { + continue + } + + // 获取Pod状态 + podStatus := "notready" + phase, _, _ := unstructured.NestedString(podObj.Object, "status", "phase") + if phase == "Running" { + podStatus = "ready" + } + + // 创建Pod节点 + podID := fmt.Sprintf("pod-%s-%s-%s", cluster.Name, podNamespace, podName) + pod := v1.TopologyNode{ + ID: podID, + Name: podName, + Type: "pod", + Status: podStatus, + ParentID: nodeID, + Metadata: map[string]interface{}{ + "namespace": podNamespace, + "phase": phase, + }, + } + nodes = append(nodes, pod) + + // 添加从节点到Pod的边 + nodeToPodEdge := v1.TopologyEdge{ + ID: fmt.Sprintf("edge-%s-to-%s", nodeID, podID), + Source: nodeID, + Target: podID, + Type: "schedule", + Value: 1, + } + edges = append(edges, nodeToPodEdge) + + // 更新Pod计数 + summary.TotalPods++ + } + } + } + } + } + + // 获取资源信息 + if showResources { + // 获取Deployment资源 + deployList, err := dynamicClient.Resource(supportedResources["Deployment"]).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get deployments", "cluster", cluster.Name) + } else { + summary.ResourceDistribution["Deployment"] = len(deployList.Items) + } + + // 获取Service资源 + serviceList, err := dynamicClient.Resource(supportedResources["Service"]).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get services", "cluster", cluster.Name) + } else { + summary.ResourceDistribution["Service"] = len(serviceList.Items) + } + + // 获取StatefulSet资源 + statefulSetList, err := dynamicClient.Resource(supportedResources["StatefulSet"]).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get statefulsets", "cluster", cluster.Name) + } else { + summary.ResourceDistribution["StatefulSet"] = len(statefulSetList.Items) + } + + // 获取DaemonSet资源 + daemonSetList, err := 
dynamicClient.Resource(supportedResources["DaemonSet"]).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get daemonsets", "cluster", cluster.Name) + } else { + summary.ResourceDistribution["DaemonSet"] = len(daemonSetList.Items) + } + + // 如果不显示Pod,则单独获取Pod数量 + if !showPods { + podList, err := dynamicClient.Resource(supportedResources["Pod"]).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{}) + if err != nil { + klog.ErrorS(err, "Failed to get pods", "cluster", cluster.Name) + } else { + summary.TotalPods = len(podList.Items) + summary.ResourceDistribution["Pod"] = len(podList.Items) + } + } + } + + return nodes, edges, summary, nil +} + +// 创建集群节点 +func createClusterNode(cluster *clusterv1alpha1.Cluster) v1.TopologyNode { + // 确定集群状态 + clusterStatus := "notready" + for _, condition := range cluster.Status.Conditions { + if condition.Type == clusterv1alpha1.ClusterConditionReady && condition.Status == metav1.ConditionTrue { + clusterStatus = "ready" + break + } + } + + // 获取集群标签 + clusterLabels := make(map[string]string) + for k, v := range cluster.Labels { + clusterLabels[k] = v + } + + // 创建资源使用情况 + clusterResources := &v1.NodeResources{} + + // 如果有资源信息,则填充 + if cluster.Status.ResourceSummary != nil { + // CPU资源 + if cluster.Status.ResourceSummary.Allocatable != nil && cluster.Status.ResourceSummary.Allocatable.Cpu() != nil { + allocatableCPU := cluster.Status.ResourceSummary.Allocatable.Cpu().String() + // 使用Allocated作为已分配资源 + cpuUsage := "0" + if cluster.Status.ResourceSummary.Allocated != nil && cluster.Status.ResourceSummary.Allocated.Cpu() != nil { + cpuUsage = cluster.Status.ResourceSummary.Allocated.Cpu().String() + } + clusterResources.CPU = &v1.ResourceUsage{ + Used: cpuUsage, + Total: allocatableCPU, + UsageRate: calculateResourceUsageRate(cpuUsage, allocatableCPU), + } + } + + // 内存资源 + if cluster.Status.ResourceSummary.Allocatable != nil && cluster.Status.ResourceSummary.Allocatable.Memory() != nil { + allocatableMemory := cluster.Status.ResourceSummary.Allocatable.Memory().String() + memoryUsage := "0" + if cluster.Status.ResourceSummary.Allocated != nil && cluster.Status.ResourceSummary.Allocated.Memory() != nil { + memoryUsage = cluster.Status.ResourceSummary.Allocated.Memory().String() + } + clusterResources.Memory = &v1.ResourceUsage{ + Used: memoryUsage, + Total: allocatableMemory, + UsageRate: calculateResourceUsageRate(memoryUsage, allocatableMemory), + } + } + + // Pod资源 + if cluster.Status.ResourceSummary.Allocatable != nil && cluster.Status.ResourceSummary.Allocatable.Pods() != nil { + allocatablePods := cluster.Status.ResourceSummary.Allocatable.Pods().String() + podsUsage := "0" + if cluster.Status.ResourceSummary.Allocated != nil && cluster.Status.ResourceSummary.Allocated.Pods() != nil { + podsUsage = cluster.Status.ResourceSummary.Allocated.Pods().String() + } + clusterResources.Pods = &v1.ResourceUsage{ + Used: podsUsage, + Total: allocatablePods, + UsageRate: calculateResourceUsageRate(podsUsage, allocatablePods), + } + } + } + + return v1.TopologyNode{ + ID: cluster.Name, + Name: cluster.Name, + Type: "cluster", + Status: clusterStatus, + ParentID: "karmada-control-plane", + Resources: clusterResources, + Labels: clusterLabels, + Metadata: map[string]interface{}{ + "apiEndpoint": cluster.Spec.APIEndpoint, + "syncMode": cluster.Spec.SyncMode, + }, + } +} + +// 计算资源使用率 +func calculateResourceUsageRate(used, total string) float64 { + // 解析资源字符串 + usedValue, err := 
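+
+// The two helpers below parse quantity strings by hand. An alternative sketch
+// using apimachinery's canonical parser (assuming the additional import
+// "k8s.io/apimachinery/pkg/api/resource"; not wired in here):
+//
+//	func parseQuantity(s string) (float64, error) {
+//		q, err := resource.ParseQuantity(s)
+//		if err != nil {
+//			return 0, err
+//		}
+//		return q.AsApproximateFloat64(), nil
+//	}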
+
+// calculateResourceUsageRate computes used/total as a percentage.
+func calculateResourceUsageRate(used, total string) float64 {
+	// Parse both quantity strings.
+	usedValue, err := parseResourceValue(used)
+	if err != nil {
+		return 0
+	}
+
+	totalValue, err := parseResourceValue(total)
+	if err != nil || totalValue == 0 {
+		return 0
+	}
+
+	return usedValue / totalValue * 100
+}
+
+// parseResourceValue converts a quantity string such as "500m", "2Ki" or "4Gi"
+// into a float64. The unit suffix is applied as a multiplier rather than simply
+// stripped, so values carrying different suffixes stay comparable.
+func parseResourceValue(resource string) (float64, error) {
+	suffixes := []struct {
+		suffix     string
+		multiplier float64
+	}{
+		{"Ki", 1 << 10}, {"Mi", 1 << 20}, {"Gi", 1 << 30}, {"Ti", 1 << 40},
+		{"k", 1e3}, {"M", 1e6}, {"G", 1e9}, {"T", 1e12}, {"m", 1e-3},
+	}
+	multiplier := 1.0
+	for _, u := range suffixes {
+		if strings.HasSuffix(resource, u.suffix) {
+			multiplier = u.multiplier
+			resource = strings.TrimSuffix(resource, u.suffix)
+			break
+		}
+	}
+	value, err := strconv.ParseFloat(resource, 64)
+	return value * multiplier, err
+}
+
+// getClusterResourceUsage aggregates the resource usage of a cluster from its nodes and pods.
+func getClusterResourceUsage(cluster *clusterv1alpha1.Cluster) (*v1.NodeResources, error) {
+	// Build the resource usage.
+	resources := &v1.NodeResources{}
+
+	// Get the member cluster config.
+	config, err := client.GetMemberConfig()
+	if err != nil {
+		return nil, err
+	}
+
+	// Point the config at the specific cluster through the Karmada proxy.
+	restConfig := rest.CopyConfig(config)
+	karmadaConfig, _, err := client.GetKarmadaConfig()
+	if err != nil {
+		return nil, fmt.Errorf("failed to get karmada config: %v", err)
+	}
+	proxyURL := "/apis/cluster.karmada.io/v1alpha1/clusters/%s/proxy/"
+	restConfig.Host = karmadaConfig.Host + fmt.Sprintf(proxyURL, cluster.Name)
+
+	kubeClient, err := kubernetes.NewForConfig(restConfig)
+	if err != nil {
+		return nil, err
+	}
+
+	// List the nodes.
+	nodeList, err := kubeClient.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
+	if err != nil {
+		return nil, err
+	}
+
+	// Accumulate total and used resources.
+	var totalCPU, usedCPU int64
+	var totalMemory, usedMemory int64
+	var totalPods, usedPods int64
+
+	// Walk all nodes.
+	for _, node := range nodeList.Items {
+		// Node capacity.
+		nodeTotalCPU := node.Status.Capacity.Cpu().MilliValue()
+		nodeTotalMemory := node.Status.Capacity.Memory().Value()
+		nodeTotalPods := node.Status.Capacity.Pods().Value()
+
+		// Add to the totals.
+		totalCPU += nodeTotalCPU
+		totalMemory += nodeTotalMemory
+		totalPods += nodeTotalPods
+
+		// List the pods on this node.
+		fieldSelector := fmt.Sprintf("spec.nodeName=%s", node.Name)
+		podList, err := kubeClient.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
+			FieldSelector: fieldSelector,
+		})
+		if err != nil {
+			continue
+		}
+
+		// Accumulate the used resources.
+		for _, pod := range podList.Items {
+			if pod.Status.Phase != corev1.PodRunning && pod.Status.Phase != corev1.PodPending {
+				continue
+			}
+
+			// Count the pod.
+			usedPods++
+
+			// Add the CPU and memory requests.
+			for _, container := range pod.Spec.Containers {
+				usedCPU += container.Resources.Requests.Cpu().MilliValue()
+				usedMemory += container.Resources.Requests.Memory().Value()
+			}
+		}
+	}
+
+	// Guard against empty clusters: a zero capacity would otherwise produce NaN.
+	rate := func(used, total int64) float64 {
+		if total == 0 {
+			return 0
+		}
+		return float64(used) / float64(total) * 100
+	}
+
+	// CPU usage.
+	resources.CPU = &v1.ResourceUsage{
+		Used:      fmt.Sprintf("%dm", usedCPU),
+		Total:     fmt.Sprintf("%dm", totalCPU),
+		UsageRate: rate(usedCPU, totalCPU),
+	}
+
+	// Memory usage.
+	resources.Memory = &v1.ResourceUsage{
+		Used:      fmt.Sprintf("%dMi", usedMemory/(1024*1024)),
+		Total:     fmt.Sprintf("%dMi", totalMemory/(1024*1024)),
+		UsageRate: rate(usedMemory, totalMemory),
+	}
+
+	// Pod usage.
+	resources.Pods = &v1.ResourceUsage{
+		Used:      fmt.Sprintf("%d", usedPods),
+		Total:     fmt.Sprintf("%d", totalPods),
+		UsageRate: rate(usedPods, totalPods),
+	}
+
+	return resources, nil
+}
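+
+// Hypothetical usage of getClusterResourceUsage (it is not called by the
+// handlers in this package yet):
+//
+//	if res, err := getClusterResourceUsage(cluster); err == nil {
+//		klog.V(4).Infof("cluster %s CPU %s/%s (%.1f%%)",
+//			cluster.Name, res.CPU.Used, res.CPU.Total, res.CPU.UsageRate)
+//	}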
} common.Success(c, propagationList) } + +// 获取传播策略详情 func handleGetPropagationPolicyDetail(c *gin.Context) { karmadaClient := client.InClusterKarmadaClient() namespace := c.Param("namespace") @@ -59,6 +62,8 @@ func handleGetPropagationPolicyDetail(c *gin.Context) { } common.Success(c, result) } + +// 创建传播策略 func handlePostPropagationPolicy(c *gin.Context) { // todo precheck existence of namespace, now we tested it under scope of default, it's ok till now. ctx := context.Context(c) @@ -97,6 +102,8 @@ func handlePostPropagationPolicy(c *gin.Context) { } common.Success(c, "ok") } + +// 更新传播策略 func handlePutPropagationPolicy(c *gin.Context) { ctx := context.Context(c) propagationpolicyRequest := new(v1.PutPropagationPolicyRequest) @@ -138,6 +145,8 @@ func handlePutPropagationPolicy(c *gin.Context) { } common.Success(c, "ok") } + +// 删除传播策略 func handleDeletePropagationPolicy(c *gin.Context) { ctx := context.Context(c) propagationpolicyRequest := new(v1.DeletePropagationPolicyRequest) @@ -175,11 +184,17 @@ func handleDeletePropagationPolicy(c *gin.Context) { common.Success(c, "ok") } +// 初始化路由 func init() { r := router.V1() + // 获取传播策略列表 r.GET("/propagationpolicy", handleGetPropagationPolicyList) + // 获取传播策略详情 r.GET("/propagationpolicy/namespace/:namespace/:propagationPolicyName", handleGetPropagationPolicyDetail) + // 创建传播策略 r.POST("/propagationpolicy", handlePostPropagationPolicy) + // 更新传播策略 r.PUT("/propagationpolicy", handlePutPropagationPolicy) + // 删除传播策略 r.DELETE("/propagationpolicy", handleDeletePropagationPolicy) } diff --git a/cmd/api/app/routes/secret/handler.go b/cmd/api/app/routes/secret/handler.go index fe2fecf0..c1407f3c 100644 --- a/cmd/api/app/routes/secret/handler.go +++ b/cmd/api/app/routes/secret/handler.go @@ -25,6 +25,7 @@ import ( "github.com/karmada-io/dashboard/pkg/resource/secret" ) +// 获取secret列表 func handleGetSecrets(c *gin.Context) { k8sClient := client.InClusterClientForKarmadaAPIServer() dataSelect := common.ParseDataSelectPathParameter(c) @@ -37,6 +38,7 @@ func handleGetSecrets(c *gin.Context) { common.Success(c, result) } +// 获取secret详情 func handleGetSecretDetail(c *gin.Context) { k8sClient := client.InClusterClientForKarmadaAPIServer() namespace := c.Param("namespace") @@ -48,9 +50,14 @@ func handleGetSecretDetail(c *gin.Context) { } common.Success(c, result) } + +// 初始化路由 func init() { r := router.V1() + // 获取secret列表 r.GET("/secret", handleGetSecrets) + // 获取secret列表 r.GET("/secret/:namespace", handleGetSecrets) + // 获取secret详情 r.GET("/secret/:namespace/:service", handleGetSecretDetail) } diff --git a/cmd/api/app/routes/service/handler.go b/cmd/api/app/routes/service/handler.go index ee03a0f9..ac16d2e4 100644 --- a/cmd/api/app/routes/service/handler.go +++ b/cmd/api/app/routes/service/handler.go @@ -25,6 +25,7 @@ import ( "github.com/karmada-io/dashboard/pkg/resource/service" ) +// 获取service列表 func handleGetServices(c *gin.Context) { k8sClient := client.InClusterClientForKarmadaAPIServer() dataSelect := common.ParseDataSelectPathParameter(c) @@ -37,6 +38,7 @@ func handleGetServices(c *gin.Context) { common.Success(c, result) } +// 获取service详情 func handleGetServiceDetail(c *gin.Context) { k8sClient := client.InClusterClientForKarmadaAPIServer() namespace := c.Param("namespace") @@ -49,6 +51,7 @@ func handleGetServiceDetail(c *gin.Context) { common.Success(c, result) } +// 获取service事件 func handleGetServiceEvents(c *gin.Context) { k8sClient := client.InClusterClientForKarmadaAPIServer() namespace := c.Param("namespace") @@ -62,10 +65,15 @@ func 
handleGetServiceEvents(c *gin.Context) { common.Success(c, result) } +// 初始化路由 func init() { r := router.V1() + // 获取service列表 r.GET("/service", handleGetServices) + // 获取service列表 r.GET("/service/:namespace", handleGetServices) + // 获取service详情 r.GET("/service/:namespace/:service", handleGetServiceDetail) + // 获取service事件 r.GET("/service/:namespace/:service/event", handleGetServiceEvents) } diff --git a/cmd/api/app/routes/statefulset/handler.go b/cmd/api/app/routes/statefulset/handler.go index 6b71664e..c3c12d37 100644 --- a/cmd/api/app/routes/statefulset/handler.go +++ b/cmd/api/app/routes/statefulset/handler.go @@ -26,6 +26,7 @@ import ( "github.com/karmada-io/dashboard/pkg/resource/statefulset" ) +// 获取statefulset列表 func handleGetStatefulsets(c *gin.Context) { namespace := common.ParseNamespacePathParameter(c) dataSelect := common.ParseDataSelectPathParameter(c) @@ -38,6 +39,7 @@ func handleGetStatefulsets(c *gin.Context) { common.Success(c, result) } +// 获取statefulset详情 func handleGetStatefulsetDetail(c *gin.Context) { namespace := c.Param("namespace") name := c.Param("statefulset") @@ -50,6 +52,7 @@ func handleGetStatefulsetDetail(c *gin.Context) { common.Success(c, result) } +// 获取statefulset事件 func handleGetStatefulsetEvents(c *gin.Context) { namespace := c.Param("namespace") name := c.Param("statefulset") @@ -62,10 +65,16 @@ func handleGetStatefulsetEvents(c *gin.Context) { } common.Success(c, result) } + +// 初始化路由 func init() { r := router.V1() + // 获取statefulset列表 r.GET("/statefulset", handleGetStatefulsets) + // 获取statefulset列表 r.GET("/statefulset/:namespace", handleGetStatefulsets) + // 获取statefulset详情 r.GET("/statefulset/:namespace/:statefulset", handleGetStatefulsetDetail) + // 获取statefulset事件 r.GET("/statefulset/:namespace/:statefulset/event", handleGetStatefulsetEvents) } diff --git a/cmd/api/app/routes/unstructured/handler.go b/cmd/api/app/routes/unstructured/handler.go index 739d5ad3..9ddb1d17 100644 --- a/cmd/api/app/routes/unstructured/handler.go +++ b/cmd/api/app/routes/unstructured/handler.go @@ -30,6 +30,7 @@ import ( "github.com/karmada-io/dashboard/pkg/client" ) +// 删除资源 func handleDeleteResource(c *gin.Context) { verber, err := client.VerberClient(c.Request) if err != nil { @@ -112,6 +113,7 @@ func handlePutResource(c *gin.Context) { common.Success(c, "ok") } +// 创建资源 func handleCreateResource(c *gin.Context) { // todo double-check existence of target resources, if exist return directly. verber, err := client.VerberClient(c.Request) @@ -141,16 +143,25 @@ func handleCreateResource(c *gin.Context) { common.Success(c, "ok") } +// 初始化路由 func init() { r := router.V1() + // 删除资源 r.DELETE("/_raw/:kind/namespace/:namespace/name/:name", handleDeleteResource) + // 获取资源 r.GET("/_raw/:kind/namespace/:namespace/name/:name", handleGetResource) + // 更新资源 r.PUT("/_raw/:kind/namespace/:namespace/name/:name", handlePutResource) + // 创建资源 r.POST("/_raw/:kind/namespace/:namespace/name/:name", handleCreateResource) // Verber (non-namespaced) + // 删除资源 r.DELETE("/_raw/:kind/name/:name", handleDeleteResource) + // 获取资源 r.GET("/_raw/:kind/name/:name", handleGetResource) + // 更新资源 r.PUT("/_raw/:kind/name/:name", handlePutResource) + // 创建资源 r.POST("/_raw/:kind/name/:name", handleCreateResource) } diff --git a/cmd/api/app/types/api/v1/auth.go b/cmd/api/app/types/api/v1/auth.go index b556ce9d..8c342924 100644 --- a/cmd/api/app/types/api/v1/auth.go +++ b/cmd/api/app/types/api/v1/auth.go @@ -17,22 +17,26 @@ limitations under the License. package v1 // LoginRequest is the request for login. 
+// LoginRequest 是登录请求 type LoginRequest struct { Token string `json:"token"` } // LoginResponse is the response for login. +// LoginResponse 是登录响应 type LoginResponse struct { Token string `json:"token"` } // User is the user info. +// User 是用户信息 type User struct { Name string `json:"name,omitempty"` Authenticated bool `json:"authenticated"` } // ServiceAccount is the service account info. +// ServiceAccount 是服务账户信息 type ServiceAccount struct { Name string `json:"name"` UID string `json:"uid"` diff --git a/cmd/api/app/types/api/v1/cluster.go b/cmd/api/app/types/api/v1/cluster.go index f4c2f3fd..c482ba13 100644 --- a/cmd/api/app/types/api/v1/cluster.go +++ b/cmd/api/app/types/api/v1/cluster.go @@ -22,48 +22,72 @@ import ( ) // PostClusterRequest is the request body for creating a cluster. +// PostClusterRequest 是创建集群的请求 type PostClusterRequest struct { + // MemberClusterKubeConfig 是成员集群的 kubeconfig MemberClusterKubeConfig string `json:"memberClusterKubeconfig" binding:"required"` + // SyncMode 是集群同步模式 SyncMode v1alpha1.ClusterSyncMode `json:"syncMode" binding:"required"` + // MemberClusterName 是成员集群的名称 MemberClusterName string `json:"memberClusterName" binding:"required"` + // MemberClusterEndpoint 是成员集群的端点 MemberClusterEndpoint string `json:"memberClusterEndpoint"` + // MemberClusterNamespace 是成员集群的命名空间 MemberClusterNamespace string `json:"memberClusterNamespace"` + // ClusterProvider 是集群提供商 ClusterProvider string `json:"clusterProvider"` + // ClusterRegion 是集群区域 ClusterRegion string `json:"clusterRegion"` + // ClusterZones 是集群区域 ClusterZones []string `json:"clusterZones"` } // PostClusterResponse is the response body for creating a cluster. +// PostClusterResponse 是创建集群的响应 type PostClusterResponse struct { } // LabelRequest is the request body for labeling a cluster. +// LabelRequest 是标签集群的请求 type LabelRequest struct { + // Key 是标签的键 Key string `json:"key"` + // Value 是标签的值 Value string `json:"value"` } // TaintRequest is the request body for tainting a cluster. +// TaintRequest 是污点集群的请求 type TaintRequest struct { + // Effect 是污点的效果 Effect corev1.TaintEffect `json:"effect"` + // Key 是污点的键 Key string `json:"key"` + // Value 是污点的值 Value string `json:"value"` } // PutClusterRequest is the request body for updating a cluster. +// PutClusterRequest 是更新集群的请求 type PutClusterRequest struct { + // Labels 是标签 Labels *[]LabelRequest `json:"labels"` + // Taints 是污点 Taints *[]TaintRequest `json:"taints"` } // PutClusterResponse is the response body for updating a cluster. +// PutClusterResponse 是更新集群的响应 type PutClusterResponse struct{} // DeleteClusterRequest is the request body for deleting a cluster. +// DeleteClusterRequest 是删除集群的请求 type DeleteClusterRequest struct { + // MemberClusterName 是成员集群的名称 MemberClusterName string `uri:"name" binding:"required"` } // DeleteClusterResponse is the response body for deleting a cluster. 
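For reference, joining a member cluster in push mode means posting a body shaped like `PostClusterRequest`. A hedged sketch, assuming the route is `POST /api/v1/cluster` (the cluster routes themselves are not part of this hunk) and using only the field names declared above:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"

	clusterv1alpha1 "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"

	v1 "github.com/karmada-io/dashboard/cmd/api/app/types/api/v1"
)

func main() {
	kubeconfig, err := os.ReadFile(os.ExpandEnv("$HOME/.kube/member1.config"))
	if err != nil {
		panic(err)
	}
	body, err := json.Marshal(v1.PostClusterRequest{
		MemberClusterKubeConfig: string(kubeconfig),
		SyncMode:                clusterv1alpha1.Push, // or clusterv1alpha1.Pull for agent-based clusters
		MemberClusterName:       "member1",
	})
	if err != nil {
		panic(err)
	}
	resp, err := http.Post("http://localhost:8000/api/v1/cluster", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```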
+// DeleteClusterResponse 是删除集群的响应 type DeleteClusterResponse struct { } diff --git a/cmd/api/app/types/api/v1/config.go b/cmd/api/app/types/api/v1/config.go index 0e650666..81e15e53 100644 --- a/cmd/api/app/types/api/v1/config.go +++ b/cmd/api/app/types/api/v1/config.go @@ -19,8 +19,12 @@ package v1 import "github.com/karmada-io/dashboard/pkg/config" // SetDashboardConfigRequest is the request for setting dashboard config +// SetDashboardConfigRequest 是设置 dashboard 配置的请求 type SetDashboardConfigRequest struct { + // DockerRegistries 是 docker 注册表 DockerRegistries []config.DockerRegistry `json:"docker_registries"` + // ChartRegistries 是 chart 注册表 ChartRegistries []config.ChartRegistry `json:"chart_registries"` + // MenuConfigs 是菜单配置 MenuConfigs []config.MenuConfig `json:"menu_configs"` } diff --git a/cmd/api/app/types/api/v1/deployment.go b/cmd/api/app/types/api/v1/deployment.go index 55ba8ccf..04784457 100644 --- a/cmd/api/app/types/api/v1/deployment.go +++ b/cmd/api/app/types/api/v1/deployment.go @@ -17,11 +17,16 @@ limitations under the License. package v1 // CreateDeploymentRequest defines the request structure for creating a deployment. +// CreateDeploymentRequest 是创建部署的请求 type CreateDeploymentRequest struct { + // Namespace 是命名空间 Namespace string `json:"namespace"` + // Name 是名称 Name string `json:"name"` + // Content 是内容 Content string `json:"content"` } // CreateDeploymentResponse defines the response structure for creating a deployment. +// CreateDeploymentResponse 是创建部署的响应 type CreateDeploymentResponse struct{} diff --git a/cmd/api/app/types/api/v1/namespace.go b/cmd/api/app/types/api/v1/namespace.go index 557b82b9..205c8a08 100644 --- a/cmd/api/app/types/api/v1/namespace.go +++ b/cmd/api/app/types/api/v1/namespace.go @@ -17,10 +17,14 @@ limitations under the License. package v1 // CreateNamesapceRequest is the request body for creating a namespace. +// CreateNamesapceRequest 是创建命名空间的请求 type CreateNamesapceRequest struct { + // Name 是命名空间的名称 Name string `json:"name" required:"true"` + // SkipAutoPropagation 是是否跳过自动传播 SkipAutoPropagation bool `json:"skipAutoPropagation"` } // CreateNamesapceResponse is the response body for creating a namespace. +// CreateNamesapceResponse 是创建命名空间的响应 type CreateNamesapceResponse struct{} diff --git a/cmd/api/app/types/api/v1/overridepolicy.go b/cmd/api/app/types/api/v1/overridepolicy.go index b1a86cf4..61b23b9a 100644 --- a/cmd/api/app/types/api/v1/overridepolicy.go +++ b/cmd/api/app/types/api/v1/overridepolicy.go @@ -17,35 +17,51 @@ limitations under the License. package v1 // PostOverridePolicyRequest is the request body for creating an override policy. +// PostOverridePolicyRequest 是创建覆盖策略的请求 type PostOverridePolicyRequest struct { + // OverrideData 是覆盖策略的数据 OverrideData string `json:"overrideData" binding:"required"` + // IsClusterScope 是是否集群范围 IsClusterScope bool `json:"isClusterScope"` + // Namespace 是命名空间 Namespace string `json:"namespace"` } // PostOverridePolicyResponse is the response body for creating an override policy. +// PostOverridePolicyResponse 是创建覆盖策略的响应 type PostOverridePolicyResponse struct { } // PutOverridePolicyRequest is the request body for updating an override policy. 
+// PutOverridePolicyRequest 是更新覆盖策略的请求 type PutOverridePolicyRequest struct { + // OverrideData 是覆盖策略的数据 OverrideData string `json:"overrideData" binding:"required"` + // IsClusterScope 是是否集群范围 IsClusterScope bool `json:"isClusterScope"` + // Namespace 是命名空间 Namespace string `json:"namespace"` + // Name 是名称 Name string `json:"name" binding:"required"` } // PutOverridePolicyResponse is the response body for updating an override policy. +// PutOverridePolicyResponse 是更新覆盖策略的响应 type PutOverridePolicyResponse struct { } // DeleteOverridePolicyRequest is the request body for deleting an override policy. +// DeleteOverridePolicyRequest 是删除覆盖策略的请求 type DeleteOverridePolicyRequest struct { + // IsClusterScope 是是否集群范围 IsClusterScope bool `json:"isClusterScope"` + // Namespace 是命名空间 Namespace string `json:"namespace"` + // Name 是名称 Name string `json:"name" binding:"required"` } // DeleteOverridePolicyResponse is the response body for deleting an override policy. +// DeleteOverridePolicyResponse 是删除覆盖策略的响应 type DeleteOverridePolicyResponse struct { } diff --git a/cmd/api/app/types/api/v1/overview.go b/cmd/api/app/types/api/v1/overview.go index 84466555..52c0c609 100644 --- a/cmd/api/app/types/api/v1/overview.go +++ b/cmd/api/app/types/api/v1/overview.go @@ -18,61 +18,245 @@ package v1 import ( "github.com/karmada-io/karmada/pkg/version" + v1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) // OverviewResponse represents the response structure for the overview API. +// OverviewResponse 是概览 API 的响应 type OverviewResponse struct { - KarmadaInfo *KarmadaInfo `json:"karmadaInfo"` - MemberClusterStatus *MemberClusterStatus `json:"memberClusterStatus"` + // KarmadaInfo 是 Karmada 系统的信息 + KarmadaInfo *KarmadaInfo `json:"karmadaInfo"` + // MemberClusterStatus 是成员集群的状态 + MemberClusterStatus *MemberClusterStatus `json:"memberClusterStatus"` + // ClusterResourceStatus 是集群资源的状态 ClusterResourceStatus *ClusterResourceStatus `json:"clusterResourceStatus"` } // KarmadaInfo contains information about the Karmada system. +// KarmadaInfo 包含 Karmada 系统的信息 type KarmadaInfo struct { - Version *version.Info `json:"version"` - Status string `json:"status"` - CreateTime metav1.Time `json:"createTime"` + // Version 是 Karmada 的版本 + Version *version.Info `json:"version"` + // Status 是 Karmada 的状态 + Status string `json:"status"` + // CreateTime 是 Karmada 的创建时间 + CreateTime metav1.Time `json:"createTime"` } // NodeSummary provides a summary of node statistics. +// NodeSummary 提供节点统计的摘要 type NodeSummary struct { + // TotalNum 是节点总数 TotalNum int32 `json:"totalNum"` + // ReadyNum 是就绪节点数 ReadyNum int32 `json:"readyNum"` } // CPUSummary provides a summary of CPU resource usage. +// CPUSummary 提供 CPU 资源使用的摘要 type CPUSummary struct { - TotalCPU int64 `json:"totalCPU"` + // TotalCPU 是 CPU 总数 + TotalCPU int64 `json:"totalCPU"` + // AllocatedCPU 是已分配的 CPU AllocatedCPU float64 `json:"allocatedCPU"` } // MemorySummary provides a summary of memory resource usage. +// MemorySummary 提供内存资源使用的摘要 type MemorySummary struct { - TotalMemory int64 `json:"totalMemory"` // Kib => 8 * KiB + // TotalMemory 是内存总数 + TotalMemory int64 `json:"totalMemory"` // Kib => 8 * KiB + // AllocatedMemory 是已分配的内存 AllocatedMemory float64 `json:"allocatedMemory"` } // PodSummary provides a summary of pod statistics. 
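Putting the summary types in this hunk together, a populated `OverviewResponse` nests like the sketch below (all numbers invented for illustration; this shows the shape, not real output):

```go
// Fragment: v1 aliases this types package; metav1 is the usual apimachinery import.
var overview = v1.OverviewResponse{
	KarmadaInfo: &v1.KarmadaInfo{
		Status:     "running", // illustrative status string
		CreateTime: metav1.Now(),
	},
	MemberClusterStatus: &v1.MemberClusterStatus{
		NodeSummary:   &v1.NodeSummary{TotalNum: 6, ReadyNum: 6},
		CPUSummary:    &v1.CPUSummary{TotalCPU: 24, AllocatedCPU: 9.5},
		MemorySummary: &v1.MemorySummary{TotalMemory: 50331648, AllocatedMemory: 20971520},
		PodSummary:    &v1.PodSummary{TotalPod: 660, AllocatedPod: 123},
	},
	ClusterResourceStatus: &v1.ClusterResourceStatus{
		PropagationPolicyNum: 3,
		NamespaceNum:         12,
		ServiceNum:           8,
	},
}
```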
+// PodSummary 提供 Pod 统计的摘要 type PodSummary struct { - TotalPod int64 `json:"totalPod"` + // TotalPod 是 Pod 总数 + TotalPod int64 `json:"totalPod"` + // AllocatedPod 是已分配的 Pod AllocatedPod int64 `json:"allocatedPod"` } // MemberClusterStatus represents the status of member clusters. +// MemberClusterStatus 表示成员集群的状态 type MemberClusterStatus struct { - NodeSummary *NodeSummary `json:"nodeSummary"` - CPUSummary *CPUSummary `json:"cpuSummary"` + // NodeSummary 是节点统计的摘要 + NodeSummary *NodeSummary `json:"nodeSummary"` + // CPUSummary 是 CPU 资源使用的摘要 + CPUSummary *CPUSummary `json:"cpuSummary"` + // MemorySummary 是内存资源使用的摘要 MemorySummary *MemorySummary `json:"memorySummary"` - PodSummary *PodSummary `json:"podSummary"` + // PodSummary 是 Pod 统计的摘要 + PodSummary *PodSummary `json:"podSummary"` } // ClusterResourceStatus represents the status of various resources in the cluster. +// ClusterResourceStatus 表示集群中各种资源的状态 type ClusterResourceStatus struct { + // PropagationPolicyNum 是传播策略的数量 PropagationPolicyNum int `json:"propagationPolicyNum"` - OverridePolicyNum int `json:"overridePolicyNum"` - NamespaceNum int `json:"namespaceNum"` - WorkloadNum int `json:"workloadNum"` - ServiceNum int `json:"serviceNum"` - ConfigNum int `json:"configNum"` + // OverridePolicyNum 是覆盖策略的数量 + OverridePolicyNum int `json:"overridePolicyNum"` + // NamespaceNum 是命名空间的数量 + NamespaceNum int `json:"namespaceNum"` + // WorkloadNum 是工作负载的数量 + WorkloadNum int `json:"workloadNum"` + // ServiceNum 是服务数量 + ServiceNum int `json:"serviceNum"` + // ConfigNum 是配置数量 + ConfigNum int `json:"configNum"` +} + +// ResourcesSummary 表示所有集群资源的汇总统计信息 +type ResourcesSummary struct { + // Node 节点资源统计 + Node struct { + // Total 总节点数 + Total int64 `json:"total"` + // Ready 就绪节点数 + Ready int64 `json:"ready"` + } `json:"node"` + + // Pod Pod资源统计 + Pod struct { + // Capacity Pod总容量 + Capacity int64 `json:"capacity"` + // Allocated 已分配Pod数 + Allocated int64 `json:"allocated"` + } `json:"pod"` + + // CPU CPU资源统计 + CPU struct { + // Capacity CPU总容量(核) + Capacity int64 `json:"capacity"` + // Usage CPU使用量(核) + Usage int64 `json:"usage"` + } `json:"cpu"` + + // Memory 内存资源统计 + Memory struct { + // Capacity 内存总容量(KiB) + Capacity int64 `json:"capacity"` + // Usage 内存使用量(KiB) + Usage int64 `json:"usage"` + } `json:"memory"` +} + +// NodeItem 表示单个节点信息 +type NodeItem struct { + // ClusterName 集群名称 + ClusterName string `json:"clusterName"` + // Name 节点名称 + Name string `json:"name"` + // Ready 是否就绪 + Ready bool `json:"ready"` + // Role 角色 (master/worker) + Role string `json:"role"` + // CPUCapacity CPU容量 (核) + CPUCapacity int64 `json:"cpuCapacity"` + // CPUUsage CPU使用率 + CPUUsage int64 `json:"cpuUsage"` + // MemoryCapacity 内存容量 (KB) + MemoryCapacity int64 `json:"memoryCapacity"` + // MemoryUsage 内存使用率 + MemoryUsage int64 `json:"memoryUsage"` + // PodCapacity Pod容量 + PodCapacity int64 `json:"podCapacity"` + // PodUsage Pod使用量 + PodUsage int64 `json:"podUsage"` + // Status 状态 + Status string `json:"status"` + // Labels 标签 + Labels map[string]string `json:"labels"` + // CreationTimestamp 创建时间 + CreationTimestamp metav1.Time `json:"creationTimestamp"` +} + +// NodesResponse 包含所有集群的节点信息 +type NodesResponse struct { + // Items 节点列表 + Items []NodeItem `json:"items"` + // Summary 节点状态统计 + Summary NodeSummary `json:"summary"` +} + +// PodItem 表示单个Pod信息 +type PodItem struct { + // ClusterName 集群名称 + ClusterName string `json:"clusterName"` + // Namespace 命名空间 + Namespace string `json:"namespace"` + // Name Pod名称 + Name string `json:"name"` + // Phase Pod阶段 + Phase v1.PodPhase 
`json:"phase"` + // Status Pod状态 + Status string `json:"status"` + // ReadyContainers 就绪容器数量 + ReadyContainers int `json:"readyContainers"` + // TotalContainers 总容器数量 + TotalContainers int `json:"totalContainers"` + // CPURequest CPU请求量(核) + CPURequest int64 `json:"cpuRequest"` + // MemoryRequest 内存请求量(KB) + MemoryRequest int64 `json:"memoryRequest"` + // CPULimit CPU限制(核) + CPULimit int64 `json:"cpuLimit"` + // MemoryLimit 内存限制(KB) + MemoryLimit int64 `json:"memoryLimit"` + // RestartCount 重启次数 + RestartCount int32 `json:"restartCount"` + // PodIP Pod IP + PodIP string `json:"podIP"` + // NodeName 节点名称 + NodeName string `json:"nodeName"` + // CreationTimestamp 创建时间 + CreationTimestamp metav1.Time `json:"creationTimestamp"` +} + +// PodSummaryStats 表示Pod状态统计信息 +type PodSummaryStats struct { + // Running 运行中的Pod数量 + Running int `json:"running"` + // Pending 挂起中的Pod数量 + Pending int `json:"pending"` + // Succeeded 成功的Pod数量 + Succeeded int `json:"succeeded"` + // Failed 失败的Pod数量 + Failed int `json:"failed"` + // Unknown 未知状态的Pod数量 + Unknown int `json:"unknown"` + // Total 总Pod数量 + Total int `json:"total"` +} + +// NamespacePodsStats 表示命名空间Pod统计信息 +type NamespacePodsStats struct { + // Namespace 命名空间名称 + Namespace string `json:"namespace"` + // PodCount Pod数量 + PodCount int `json:"podCount"` +} + +// ClusterPodsStats 表示集群Pod统计信息 +type ClusterPodsStats struct { + // ClusterName 集群名称 + ClusterName string `json:"clusterName"` + // PodCount Pod数量 + PodCount int `json:"podCount"` +} + +// PodsResponse 包含所有集群的Pod信息 +type PodsResponse struct { + // Items Pod列表 + Items []PodItem `json:"items"` + // StatusStats Pod状态统计 + StatusStats PodSummaryStats `json:"statusStats"` + // NamespaceStats 命名空间Pod统计 + NamespaceStats []NamespacePodsStats `json:"namespaceStats"` + // ClusterStats 集群Pod统计 + ClusterStats []ClusterPodsStats `json:"clusterStats"` } diff --git a/cmd/api/app/types/api/v1/propagationpolicy.go b/cmd/api/app/types/api/v1/propagationpolicy.go index df95f52e..2f8ed902 100644 --- a/cmd/api/app/types/api/v1/propagationpolicy.go +++ b/cmd/api/app/types/api/v1/propagationpolicy.go @@ -18,35 +18,51 @@ package v1 // PostPropagationPolicyRequest defines the request structure for creating a propagation policy. // todo this is only a simple version of pp request, just for POC +// PostPropagationPolicyRequest 是创建传播策略的请求 type PostPropagationPolicyRequest struct { + // PropagationData 是传播策略的数据 PropagationData string `json:"propagationData" binding:"required"` + // IsClusterScope 是是否集群范围 IsClusterScope bool `json:"isClusterScope"` + // Namespace 是命名空间 Namespace string `json:"namespace"` } // PostPropagationPolicyResponse defines the response structure for creating a propagation policy. +// PostPropagationPolicyResponse 是创建传播策略的响应 type PostPropagationPolicyResponse struct { } // PutPropagationPolicyRequest defines the request structure for updating a propagation policy. +// PutPropagationPolicyRequest 是更新传播策略的请求 type PutPropagationPolicyRequest struct { + // PropagationData 是传播策略的数据 PropagationData string `json:"propagationData" binding:"required"` + // IsClusterScope 是是否集群范围 IsClusterScope bool `json:"isClusterScope"` + // Namespace 是命名空间 Namespace string `json:"namespace"` + // Name 是名称 Name string `json:"name" binding:"required"` } // PutPropagationPolicyResponse defines the response structure for updating a propagation policy. 
+// PutPropagationPolicyResponse 是更新传播策略的响应 type PutPropagationPolicyResponse struct { } // DeletePropagationPolicyRequest defines the request structure for deleting a propagation policy. +// DeletePropagationPolicyRequest 是删除传播策略的请求 type DeletePropagationPolicyRequest struct { + // IsClusterScope 是是否集群范围 IsClusterScope bool `json:"isClusterScope"` + // Namespace 是命名空间 Namespace string `json:"namespace"` + // Name 是名称 Name string `json:"name" binding:"required"` } // DeletePropagationPolicyResponse defines the response structure for deleting a propagation policy. +// DeletePropagationPolicyResponse 是删除传播策略的响应 type DeletePropagationPolicyResponse struct { } diff --git a/cmd/api/app/types/api/v1/schedule.go b/cmd/api/app/types/api/v1/schedule.go new file mode 100644 index 00000000..30e4fbd3 --- /dev/null +++ b/cmd/api/app/types/api/v1/schedule.go @@ -0,0 +1,195 @@ +/* +Copyright 2024 The Karmada Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1 + +// ScheduleNode 表示调度图中的一个节点 +type ScheduleNode struct { + // ID 节点唯一标识 + ID string `json:"id"` + // Name 节点显示名称 + Name string `json:"name"` + // Type 节点类型 (control-plane/member-cluster) + Type string `json:"type"` + // SchedulingParams 集群调度参数 + SchedulingParams *SchedulingParams `json:"schedulingParams,omitempty"` + // ResourceInfo 资源信息(当节点是资源时) + ResourceInfo *ResourceNodeInfo `json:"resourceInfo,omitempty"` +} + +// ScheduleLink 表示调度图中的连接线 +type ScheduleLink struct { + // Source 源节点ID + Source string `json:"source"` + // Target 目标节点ID + Target string `json:"target"` + // Value 连接的权重/值 + Value int `json:"value"` + // Type 资源类型 + Type string `json:"type"` +} + +// ClusterDistribution 表示资源在单个集群中的分布情况 +type ClusterDistribution struct { + // ClusterName 集群名称 + ClusterName string `json:"clusterName"` + // Count 资源数量 + Count int `json:"count"` +} + +// ResourceTypeDistribution 表示单种资源类型在各集群中的分布情况 +type ResourceTypeDistribution struct { + // ResourceType 资源类型 + ResourceType string `json:"resourceType"` + // ClusterDist 各集群分布情况 + ClusterDist []ClusterDistribution `json:"clusterDist"` +} + +// ScheduleSummary 调度概览统计信息 +type ScheduleSummary struct { + // TotalClusters 总集群数 + TotalClusters int `json:"totalClusters"` + // TotalPropagationPolicy 总传播策略数 + TotalPropagationPolicy int `json:"totalPropagationPolicy"` + // TotalResourceBinding 总资源绑定数 + TotalResourceBinding int `json:"totalResourceBinding"` +} + +// ResourceDeploymentStatus 表示资源在集群中的部署状态 +type ResourceDeploymentStatus struct { + // Scheduled 是否已调度 + Scheduled bool `json:"scheduled"` + // Actual 是否实际部署 + Actual bool `json:"actual"` + // ScheduledCount 调度计划的数量 + ScheduledCount int `json:"scheduledCount"` + // ActualCount 实际部署的数量 + ActualCount int `json:"actualCount"` +} + +// ActualClusterDistribution 表示资源在单个集群中的实际分布情况 +type ActualClusterDistribution struct { + // ClusterName 集群名称 + ClusterName string `json:"clusterName"` + // ScheduledCount 调度计划数量 + ScheduledCount int `json:"scheduledCount"` + // ActualCount 实际部署数量 + ActualCount int `json:"actualCount"` + // Status 部署状态 + Status 
ResourceDeploymentStatus `json:"status"` +} + +// ActualResourceTypeDistribution 表示单种资源类型在各集群中的实际部署情况 +type ActualResourceTypeDistribution struct { + // ResourceType 资源类型 + ResourceType string `json:"resourceType"` + // ResourceGroup 资源分组 + ResourceGroup string `json:"resourceGroup"` + // ClusterDist 各集群实际分布情况 + ClusterDist []ActualClusterDistribution `json:"clusterDist"` + // TotalScheduledCount 总调度计划数量 + TotalScheduledCount int `json:"totalScheduledCount"` + // TotalActualCount 总实际部署数量 + TotalActualCount int `json:"totalActualCount"` + // ResourceNames 该资源类型下的具体资源名称列表 + ResourceNames []string `json:"resourceNames,omitempty"` +} + +// SchedulePreviewResponse 集群调度预览响应 +type SchedulePreviewResponse struct { + // Nodes 节点列表 + Nodes []ScheduleNode `json:"nodes"` + // Links 连接线列表 + Links []ScheduleLink `json:"links"` + // ResourceDist 资源分布统计 + ResourceDist []ResourceTypeDistribution `json:"resourceDist"` + // Summary 概览统计信息 + Summary ScheduleSummary `json:"summary"` + // ActualResourceDist 实际资源分布统计(可选,扩展功能) + ActualResourceDist []ActualResourceTypeDistribution `json:"actualResourceDist,omitempty"` + // DetailedResources 详细资源信息列表 + DetailedResources []ResourceDetailInfo `json:"detailedResources,omitempty"` +} + +// ResourceNodeInfo 资源节点信息 +type ResourceNodeInfo struct { + // ResourceKind 资源类型 + ResourceKind string `json:"resourceKind"` + // ResourceGroup 资源分组 + ResourceGroup string `json:"resourceGroup"` + // Namespace 命名空间 + Namespace string `json:"namespace"` + // PropagationPolicy 传播策略 + PropagationPolicy string `json:"propagationPolicy"` +} + +// ResourceDetailInfo 资源详细信息 +type ResourceDetailInfo struct { + // ResourceName 资源名称 + ResourceName string `json:"resourceName"` + // ResourceKind 资源类型 + ResourceKind string `json:"resourceKind"` + // ResourceGroup 资源组 + ResourceGroup string `json:"resourceGroup"` + // Namespace 命名空间 + Namespace string `json:"namespace"` + // PropagationPolicy 传播策略 + PropagationPolicy string `json:"propagationPolicy"` + // Weight 权重 + Weight int32 `json:"weight"` + // ClusterWeights 集群权重映射 + ClusterWeights map[string]int32 `json:"clusterWeights,omitempty"` + // ClusterDist 集群分布 + ClusterDist []ActualClusterDistribution `json:"clusterDist"` + // TotalScheduledCount 计划总数 + TotalScheduledCount int `json:"totalScheduledCount"` + // TotalActualCount 实际总数 + TotalActualCount int `json:"totalActualCount"` +} + +// Taint 表示集群污点 +type Taint struct { + // Key 污点键 + Key string `json:"key"` + // Value 污点值 + Value string `json:"value"` + // Effect 污点效果 + Effect string `json:"effect"` +} + +// Toleration 表示容忍 +type Toleration struct { + // Key 容忍键 + Key string `json:"key"` + // Value 容忍值 + Value string `json:"value,omitempty"` + // Effect 容忍效果 + Effect string `json:"effect,omitempty"` + // Operator 操作符 + Operator string `json:"operator,omitempty"` +} + +// SchedulingParams 集群调度参数 +type SchedulingParams struct { + // Weight 集群权重 + Weight int32 `json:"weight,omitempty"` + // Taints 集群污点 + Taints []Taint `json:"taints,omitempty"` + // Tolerations 集群容忍 + Tolerations []Toleration `json:"tolerations,omitempty"` + // Labels 集群标签 + Labels map[string]string `json:"labels,omitempty"` +} diff --git a/cmd/api/app/types/api/v1/topology.go b/cmd/api/app/types/api/v1/topology.go new file mode 100644 index 00000000..3354b48f --- /dev/null +++ b/cmd/api/app/types/api/v1/topology.go @@ -0,0 +1,103 @@ +/* +Copyright 2024 The Karmada Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package v1 + +// TopologyNode 表示拓扑图中的节点 +type TopologyNode struct { + // 节点ID,唯一标识 + ID string `json:"id"` + // 节点名称 + Name string `json:"name"` + // 节点类型:control-plane, cluster, node, pod + Type string `json:"type"` + // 节点状态:ready, notready + Status string `json:"status"` + // 父节点ID + ParentID string `json:"parentId"` + // 节点元数据 + Metadata map[string]interface{} `json:"metadata,omitempty"` + // 资源使用情况 + Resources *NodeResources `json:"resources,omitempty"` + // 标签 + Labels map[string]string `json:"labels,omitempty"` +} + +// TopologyEdge 表示拓扑图中的边 +type TopologyEdge struct { + // 边ID + ID string `json:"id"` + // 源节点ID + Source string `json:"source"` + // 目标节点ID + Target string `json:"target"` + // 边类型:control, schedule + Type string `json:"type"` + // 边的权重或值 + Value int `json:"value"` + // 边的元数据 + Metadata map[string]interface{} `json:"metadata,omitempty"` +} + +// NodeResources 表示节点的资源使用情况 +type NodeResources struct { + // CPU使用情况 + CPU *ResourceUsage `json:"cpu,omitempty"` + // 内存使用情况 + Memory *ResourceUsage `json:"memory,omitempty"` + // Pod使用情况 + Pods *ResourceUsage `json:"pods,omitempty"` + // 存储使用情况 + Storage *ResourceUsage `json:"storage,omitempty"` +} + +// ResourceUsage 表示资源使用情况 +type ResourceUsage struct { + // 已使用量 + Used string `json:"used"` + // 总量 + Total string `json:"total"` + // 使用率(百分比) + UsageRate float64 `json:"usageRate"` +} + +// TopologyData 表示整个拓扑图数据 +type TopologyData struct { + // 节点列表 + Nodes []TopologyNode `json:"nodes"` + // 边列表 + Edges []TopologyEdge `json:"edges"` + // 统计信息 + Summary *TopologySummary `json:"summary,omitempty"` +} + +// TopologySummary 表示拓扑图的统计信息 +type TopologySummary struct { + // 集群总数 + TotalClusters int `json:"totalClusters"` + // 节点总数 + TotalNodes int `json:"totalNodes"` + // Pod总数 + TotalPods int `json:"totalPods"` + // 资源类型分布 + ResourceDistribution map[string]int `json:"resourceDistribution,omitempty"` +} + +// TopologyResponse 表示拓扑图API响应 +type TopologyResponse struct { + // 拓扑图数据 + Data TopologyData `json:"data"` +} diff --git a/cmd/api/app/types/common/errors.go b/cmd/api/app/types/common/errors.go new file mode 100644 index 00000000..e37209fa --- /dev/null +++ b/cmd/api/app/types/common/errors.go @@ -0,0 +1,68 @@ +/* +Copyright 2024 The Karmada Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package common + +// ErrorResponse 表示API错误响应 +type ErrorResponse struct { + Code int `json:"code"` + Message string `json:"message"` +} + +// Error 实现error接口 +func (e *ErrorResponse) Error() string { + return e.Message +} + +// NewBadRequestError 创建一个400错误 +func NewBadRequestError(message string) error { + return &ErrorResponse{ + Code: 400, + Message: message, + } +} + +// NewInternalServerError 创建一个500错误 +func NewInternalServerError(message string) error { + return &ErrorResponse{ + Code: 500, + Message: message, + } +} + +// NewNotFoundError 创建一个404错误 +func NewNotFoundError(message string) error { + return &ErrorResponse{ + Code: 404, + Message: message, + } +} + +// NewForbiddenError 创建一个403错误 +func NewForbiddenError(message string) error { + return &ErrorResponse{ + Code: 403, + Message: message, + } +} + +// NewUnauthorizedError 创建一个401错误 +func NewUnauthorizedError(message string) error { + return &ErrorResponse{ + Code: 401, + Message: message, + } +} diff --git a/cmd/api/app/types/common/request.go b/cmd/api/app/types/common/request.go index d00d0d41..4b9e150d 100644 --- a/cmd/api/app/types/common/request.go +++ b/cmd/api/app/types/common/request.go @@ -26,50 +26,71 @@ import ( "github.com/karmada-io/dashboard/pkg/resource/common" ) +// parsePaginationPathParameter 解析分页路径参数 +// 解析分页路径参数 func parsePaginationPathParameter(request *gin.Context) *dataselect.PaginationQuery { + // 获取请求的 itemsPerPage 参数并转换为 int64 itemsPerPage, err := strconv.ParseInt(request.Query("itemsPerPage"), 10, 0) if err != nil { return dataselect.NoPagination } - + // 获取请求的 page 参数并转换为 int64 page, err := strconv.ParseInt(request.Query("page"), 10, 0) if err != nil { return dataselect.NoPagination } // Frontend pages start from 1 and backend starts from 0 + // 前端页面从 1 开始,后端从 0 开始 return dataselect.NewPaginationQuery(int(itemsPerPage), int(page-1)) } +// parseFilterPathParameter 解析过滤路径参数 +// 解析过滤路径参数 func parseFilterPathParameter(request *gin.Context) *dataselect.FilterQuery { + // 获取请求的 filterBy 参数并转换为字符串 return dataselect.NewFilterQuery(strings.Split(request.Query("filterBy"), ",")) } -// Parses query parameters of the request and returns a SortQuery object +// parseSortPathParameter 解析排序路径参数 +// 解析排序路径参数 func parseSortPathParameter(request *gin.Context) *dataselect.SortQuery { + // 获取请求的 sortBy 参数并转换为字符串 return dataselect.NewSortQuery(strings.Split(request.Query("sortBy"), ",")) } -// ParseDataSelectPathParameter parses query parameters of the request and returns a DataSelectQuery object +// ParseDataSelectPathParameter 解析请求的查询参数并返回一个 DataSelectQuery 对象 func ParseDataSelectPathParameter(request *gin.Context) *dataselect.DataSelectQuery { + // 解析分页路径参数 paginationQuery := parsePaginationPathParameter(request) + // 解析排序路径参数 sortQuery := parseSortPathParameter(request) + // 解析过滤路径参数 filterQuery := parseFilterPathParameter(request) + // 返回一个 DataSelectQuery 对象 return dataselect.NewDataSelectQuery(paginationQuery, sortQuery, filterQuery) } // ParseNamespacePathParameter parses namespace selector for list pages in path parameter. // The namespace selector is a comma separated list of namespaces that are trimmed. // No namespaces mean "view all user namespaces", i.e., everything except kube-system. 
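Taken together, the three parsers above are what every list endpoint feeds into `ParseDataSelectPathParameter`. A sketch of the query-string mapping; the `sortBy`/`filterBy` encodings (`d,<field>` for descending, `name,<value>` for filtering) follow the upstream kubernetes/dashboard convention this package derives from, which is an assumption rather than something this hunk states:

```go
package common_test

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/gin-gonic/gin"

	"github.com/karmada-io/dashboard/cmd/api/app/types/common"
)

func TestParseDataSelectPathParameter(t *testing.T) {
	w := httptest.NewRecorder()
	c, _ := gin.CreateTestContext(w)
	c.Request = httptest.NewRequest(http.MethodGet,
		"/api/v1/service?itemsPerPage=10&page=2&sortBy=d,creationTimestamp&filterBy=name,nginx", nil)

	q := common.ParseDataSelectPathParameter(c)
	if q == nil {
		t.Fatal("expected a DataSelectQuery")
	}
	// itemsPerPage=10&page=2 becomes NewPaginationQuery(10, 1):
	// UI pages are 1-based while the backend page index is 0-based.
}
```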
+// 解析请求的命名空间路径参数并返回一个 NamespaceQuery 对象 func ParseNamespacePathParameter(request *gin.Context) *common.NamespaceQuery { + // 获取请求的 namespace 参数并转换为字符串 namespace := request.Param("namespace") + // 将命名空间参数按逗号分割成一个列表 namespaces := strings.Split(namespace, ",") + // 创建一个非空命名空间列表 var nonEmptyNamespaces []string + // 遍历命名空间列表 for _, n := range namespaces { + // 去除命名空间参数两端的空格 n = strings.Trim(n, " ") + // 如果命名空间参数不为空,则添加到非空命名空间列表中 if len(n) > 0 { nonEmptyNamespaces = append(nonEmptyNamespaces, n) } } + // 返回一个 NamespaceQuery 对象 return common.NewNamespaceQuery(nonEmptyNamespaces) -} +} \ No newline at end of file diff --git a/cmd/api/app/types/common/response.go b/cmd/api/app/types/common/response.go index da04cb69..3a4c66aa 100644 --- a/cmd/api/app/types/common/response.go +++ b/cmd/api/app/types/common/response.go @@ -23,6 +23,7 @@ import ( ) // BaseResponse is the base response +// BaseResponse 是基础响应 type BaseResponse struct { Code int `json:"code"` Msg string `json:"message"` @@ -30,16 +31,19 @@ type BaseResponse struct { } // Success generate success response +// Success 生成成功响应 func Success(c *gin.Context, obj interface{}) { Response(c, nil, obj) } // Fail generate fail response +// Fail 生成失败响应 func Fail(c *gin.Context, err error) { Response(c, err, nil) } // Response generate response +// Response 生成响应 func Response(c *gin.Context, err error, data interface{}) { code := 200 // biz status code message := "success" // biz status message diff --git a/cmd/api/main.go b/cmd/api/main.go index 82666ee7..ad976650 100644 --- a/cmd/api/main.go +++ b/cmd/api/main.go @@ -26,8 +26,13 @@ import ( ) func main() { + // 创建一个上下文 ctx := context.TODO() + // 创建一个 API 命令 cmd := app.NewAPICommand(ctx) + // 运行命令 + // cli.Run(cmd) 是 Kubernetes 代码库中 k8s.io/component-base/cli 包提供的一个函数,它的作用是运行一个命令行工具(cmd)并返回执行结果的退出码(exit code)。 code := cli.Run(cmd) + // 退出程序 os.Exit(code) } diff --git a/cmd/metrics-scraper/app/db/consts.go b/cmd/metrics-scraper/app/db/consts.go index 08614534..083579cc 100644 --- a/cmd/metrics-scraper/app/db/consts.go +++ b/cmd/metrics-scraper/app/db/consts.go @@ -18,17 +18,24 @@ package db const ( // Namespace is the namespace of karmada. + // Namespace 是 karmada 的命名空间 Namespace = "karmada-system" // KarmadaAgent is the name of karmada agent. + // KarmadaAgent 是 karmada 代理的名称 KarmadaAgent = "karmada-agent" // KarmadaScheduler is the name of karmada scheduler. + // KarmadaScheduler 是 karmada 调度器的名称 KarmadaScheduler = "karmada-scheduler" // KarmadaSchedulerEstimator is the name of karmada scheduler estimator. + // KarmadaSchedulerEstimator 是 karmada 调度器估计器的名称 KarmadaSchedulerEstimator = "karmada-scheduler-estimator" // KarmadaControllerManager is the name of karmada controller manager. + // KarmadaControllerManager 是 karmada 控制器管理器的名称 KarmadaControllerManager = "karmada-controller-manager" // SchedulerPort is the port of karmada scheduler. + // SchedulerPort 是 karmada 调度器的端口 SchedulerPort = "10351" // ControllerManagerPort is the port of karmada controller manager. + // ControllerManagerPort 是 karmada 控制器管理器的端口 ControllerManagerPort = "8080" ) diff --git a/cmd/metrics-scraper/app/db/models.go b/cmd/metrics-scraper/app/db/models.go index 9d70fc17..7cc440b6 100644 --- a/cmd/metrics-scraper/app/db/models.go +++ b/cmd/metrics-scraper/app/db/models.go @@ -16,12 +16,12 @@ limitations under the License. package db -// PodInfo is the pod info. +// PodInfo 是 pod 信息。 type PodInfo struct { Name string `json:"name"` } -// Metric is the metric info. 
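Since every handler replies through `Success`/`Fail`, clients can unwrap one envelope everywhere. A client-side sketch; only the `code` and `message` json tags are visible in this hunk, so the `data` field name is an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeEnvelope unwraps the BaseResponse wrapper shared by all endpoints.
func decodeEnvelope(raw []byte, out any) error {
	var env struct {
		Code int             `json:"code"`
		Msg  string          `json:"message"`
		Data json.RawMessage `json:"data"` // field name assumed, not shown in this hunk
	}
	if err := json.Unmarshal(raw, &env); err != nil {
		return err
	}
	if env.Code != 200 {
		return fmt.Errorf("api error %d: %s", env.Code, env.Msg)
	}
	return json.Unmarshal(env.Data, out)
}

func main() {
	var names []string
	raw := []byte(`{"code":200,"message":"success","data":["default","karmada-system"]}`)
	if err := decodeEnvelope(raw, &names); err != nil {
		panic(err)
	}
	fmt.Println(names) // [default karmada-system]
}
```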
+// Metric 是指标信息。 type Metric struct { Name string `json:"name"` Help string `json:"help"` @@ -29,14 +29,14 @@ type Metric struct { Values []MetricValue `json:"values,omitempty"` } -// MetricValue is the metric value info. +// MetricValue 是指标值信息。 type MetricValue struct { Labels map[string]string `json:"labels,omitempty"` Value string `json:"value"` Measure string `json:"measure"` } -// ParsedData is the parsed data info. +// ParsedData 是解析的数据信息。 type ParsedData struct { CurrentTime string `json:"currentTime"` Metrics map[string]*Metric `json:"metrics"` diff --git a/cmd/metrics-scraper/app/options/options.go b/cmd/metrics-scraper/app/options/options.go index 9b617c3c..cd7678c3 100644 --- a/cmd/metrics-scraper/app/options/options.go +++ b/cmd/metrics-scraper/app/options/options.go @@ -22,7 +22,7 @@ import ( "github.com/spf13/pflag" ) -// Options contains everything necessary to create and run api. +// Options 包含创建和运行 api 所需的所有内容。 type Options struct { BindAddress net.IP Port int @@ -39,12 +39,12 @@ type Options struct { OpenAPIEnabled bool } -// NewOptions returns initialized Options. +// NewOptions 返回初始化的 Options。 func NewOptions() *Options { return &Options{} } -// AddFlags adds flags of api to the specified FlagSet +// AddFlags 将 api 的标志添加到指定的 FlagSet。 func (o *Options) AddFlags(fs *pflag.FlagSet) { if o == nil { return diff --git a/cmd/metrics-scraper/app/router/setup.go b/cmd/metrics-scraper/app/router/setup.go index de8dd780..2583955b 100644 --- a/cmd/metrics-scraper/app/router/setup.go +++ b/cmd/metrics-scraper/app/router/setup.go @@ -27,6 +27,7 @@ var ( v1 *gin.RouterGroup ) +// init 初始化函数 func init() { if !environment.IsDev() { gin.SetMode(gin.ReleaseMode) @@ -44,12 +45,12 @@ func init() { }) } -// V1 returns the router group for /api/v1. +// V1 返回 /api/v1 的路由组 func V1() *gin.RouterGroup { return v1 } -// Router returns the main Gin engine instance. +// Router 返回主 Gin 引擎实例 func Router() *gin.Engine { return router } diff --git a/cmd/metrics-scraper/app/routes/metrics/handler.go b/cmd/metrics-scraper/app/routes/metrics/handler.go index deed68e0..ece3c2f5 100644 --- a/cmd/metrics-scraper/app/routes/metrics/handler.go +++ b/cmd/metrics-scraper/app/routes/metrics/handler.go @@ -26,7 +26,7 @@ import ( var requests = make(chan scrape.SaveRequest) -// GetMetrics returns the metrics for the given app name +// GetMetrics 返回给定应用名称的指标 func GetMetrics(c *gin.Context) { appName := c.Param("app_name") queryType := c.Query("type") diff --git a/cmd/metrics-scraper/app/routes/metrics/handlerqueries.go b/cmd/metrics-scraper/app/routes/metrics/handlerqueries.go index 726e158a..27e8564f 100644 --- a/cmd/metrics-scraper/app/routes/metrics/handlerqueries.go +++ b/cmd/metrics-scraper/app/routes/metrics/handlerqueries.go @@ -29,18 +29,18 @@ import ( "github.com/karmada-io/dashboard/cmd/metrics-scraper/app/scrape" ) -// MetricInfo represents the information about a metric. +// MetricInfo 表示指标信息。 type MetricInfo struct { Help string `json:"help"` Type string `json:"type"` } -// QueryMetrics handles the querying of metrics. 
+// QueryMetrics 处理指标查询。 func QueryMetrics(c *gin.Context) { appName := c.Param("app_name") podName := c.Param("pod_name") - queryType := c.Query("type") // Use a query parameter to determine the action - metricName := c.Query("mname") // Optional: only needed for details + queryType := c.Query("type") // 使用查询参数来确定操作 + metricName := c.Query("mname") // 可选:仅在需要时需要 sanitizedAppName := strings.ReplaceAll(appName, "-", "_") sanitizedPodName := strings.ReplaceAll(podName, "-", "_") @@ -52,7 +52,7 @@ func QueryMetrics(c *gin.Context) { return } - // Add transaction for consistent reads + // 添加事务以确保一致的读取 tx, err := db.Begin() if err != nil { log.Printf("Error starting transaction: %v", err) @@ -73,6 +73,7 @@ func QueryMetrics(c *gin.Context) { } } +// queryMetricNames 查询指标名称 func queryMetricNames(c *gin.Context, tx *sql.Tx, sanitizedPodName string) { rows, err := tx.Query(fmt.Sprintf("SELECT DISTINCT name FROM %s", sanitizedPodName)) if err != nil { @@ -96,6 +97,7 @@ func queryMetricNames(c *gin.Context, tx *sql.Tx, sanitizedPodName string) { c.JSON(http.StatusOK, gin.H{"metricNames": metricNames}) } +// queryMetricDetailsByName 查询指标详细信息 func queryMetricDetailsByName(c *gin.Context, tx *sql.Tx, sanitizedPodName, metricName string) { if metricName == "" { c.JSON(http.StatusBadRequest, gin.H{"error": "Metric name required for details"}) @@ -185,6 +187,7 @@ func queryMetricDetailsByName(c *gin.Context, tx *sql.Tx, sanitizedPodName, metr c.JSON(http.StatusOK, gin.H{"details": detailsMap}) } +// queryMetricDetails 查询指标详细信息 func queryMetricDetails(c *gin.Context, appName string) { // Handle metricsdetails query type db, err := sql.Open("sqlite", strings.ReplaceAll(appName, "-", "_")+".db") diff --git a/cmd/metrics-scraper/app/scrape/consts.go b/cmd/metrics-scraper/app/scrape/consts.go index dc67eefa..eb974188 100644 --- a/cmd/metrics-scraper/app/scrape/consts.go +++ b/cmd/metrics-scraper/app/scrape/consts.go @@ -17,6 +17,7 @@ limitations under the License. package scrape const ( + // createMainTableSQL 创建主表的 SQL 语句 createMainTableSQL = ` CREATE TABLE IF NOT EXISTS %s ( id INTEGER PRIMARY KEY AUTOINCREMENT, @@ -27,6 +28,7 @@ const ( ) ` + // createValuesTableSQL 创建值表的 SQL 语句 createValuesTableSQL = ` CREATE TABLE IF NOT EXISTS %s_values ( id INTEGER PRIMARY KEY AUTOINCREMENT, @@ -37,36 +39,44 @@ const ( ) ` + // createTimeLoadTableSQL 创建时间加载表的 SQL 语句 createTimeLoadTableSQL = ` CREATE TABLE IF NOT EXISTS %s ( time_entry DATETIME PRIMARY KEY ) ` + // insertTimeLoadSQL 插入时间加载的 SQL 语句 insertTimeLoadSQL = ` INSERT OR REPLACE INTO %s (time_entry) VALUES (?) ` - // 900 is 15 minutes in seconds + + // getOldestTimeSQL 获取最旧时间的 SQL 语句 getOldestTimeSQL = ` SELECT time_entry FROM %s ORDER BY time_entry DESC LIMIT 1 OFFSET 900 ` + // deleteOldTimeSQL 删除旧时间的 SQL 语句 deleteOldTimeSQL = `DELETE FROM %s WHERE time_entry <= ?` deleteAssociatedMetricsSQL = ` DELETE FROM %s WHERE currentTime <= ? ` + // deleteAssociatedValuesSQL 删除关联值的 SQL 语句 deleteAssociatedValuesSQL = ` DELETE FROM %s_values WHERE metric_id NOT IN (SELECT id FROM %s) ` + // insertMainSQL 插入主表的 SQL 语句 insertMainSQL = ` INSERT INTO %s (name, help, type, currentTime) VALUES (?, ?, ?, ?) 
` + + // createLabelsTableSQL 创建标签表的 SQL 语句 createLabelsTableSQL = `CREATE TABLE IF NOT EXISTS %s_labels ( id INTEGER PRIMARY KEY AUTOINCREMENT, value_id INTEGER, diff --git a/cmd/metrics-scraper/app/scrape/db.go b/cmd/metrics-scraper/app/scrape/db.go index dc022b8c..91d89d36 100644 --- a/cmd/metrics-scraper/app/scrape/db.go +++ b/cmd/metrics-scraper/app/scrape/db.go @@ -28,7 +28,7 @@ var ( dbMapLock sync.RWMutex ) -// GetDB returns an existing database connection or creates a new one +// GetDB 返回一个现有的数据库连接或创建一个新的连接 func GetDB(appName string) (*sql.DB, error) { sanitizedAppName := strings.ReplaceAll(appName, "-", "_") @@ -43,7 +43,7 @@ func GetDB(appName string) (*sql.DB, error) { dbMapLock.Lock() defer dbMapLock.Unlock() - // Double-check after acquiring write lock + // 在获取写锁后再次检查 if db, exists := dbMap[sanitizedAppName]; exists { return db, nil } @@ -53,8 +53,8 @@ func GetDB(appName string) (*sql.DB, error) { return nil, err } - // Set connection pool settings - db.SetMaxOpenConns(1) // Restrict to 1 connection to prevent lock conflicts + // 设置连接池设置 + db.SetMaxOpenConns(1) // 限制为 1 个连接以防止锁冲突 db.SetMaxIdleConns(1) dbMap[sanitizedAppName] = db diff --git a/cmd/metrics-scraper/app/scrape/job.go b/cmd/metrics-scraper/app/scrape/job.go index 18991401..58ebb506 100644 --- a/cmd/metrics-scraper/app/scrape/job.go +++ b/cmd/metrics-scraper/app/scrape/job.go @@ -35,6 +35,7 @@ import ( ) // SaveRequest Define a struct for save requests +// SaveRequest 定义一个用于保存请求的结构体 type SaveRequest struct { appName string podName string @@ -42,10 +43,10 @@ type SaveRequest struct { result chan error } -// FetchMetrics fetches metrics from all pods of the given app name +// FetchMetrics 从给定应用名称的所有 pod 中获取指标 func FetchMetrics(ctx context.Context, appName string, requests chan SaveRequest) (map[string]*db.ParsedData, []string, error) { kubeClient := client.InClusterClient() - podsMap, errors := getKarmadaPods(ctx, appName) // Pass context here + podsMap, errors := getKarmadaPods(ctx, appName) // 传递上下文 if len(podsMap) == 0 && len(errors) > 0 { return nil, errors, fmt.Errorf("no pods found") } @@ -119,6 +120,7 @@ func FetchMetrics(ctx context.Context, appName string, requests chan SaveRequest return allMetrics, errors, nil } +// getKarmadaPods 获取 karmada 的 pod 信息 func getKarmadaPods(ctx context.Context, appName string) (map[string][]db.PodInfo, []string) { kubeClient := client.InClusterClient() podsMap := make(map[string][]db.PodInfo) @@ -159,6 +161,7 @@ func getKarmadaPods(ctx context.Context, appName string) (map[string][]db.PodInf return podsMap, errors } +// getClusterPods 获取集群的 pod 信息 func getClusterPods(ctx context.Context, cluster *v1alpha1.Cluster) ([]db.PodInfo, error) { fmt.Printf("Getting pods for cluster: %s\n", cluster.Name) @@ -200,6 +203,7 @@ func getClusterPods(ctx context.Context, cluster *v1alpha1.Cluster) ([]db.PodInf return podInfos, nil } +// getKarmadaAgentMetrics 获取 karmada 代理的指标 func getKarmadaAgentMetrics(ctx context.Context, podName string, clusterName string, requests chan SaveRequest) (*db.ParsedData, error) { kubeClient := client.InClusterKarmadaClient() clusters, err := kubeClient.ClusterV1alpha1().Clusters().List(ctx, metav1.ListOptions{}) diff --git a/docs/images/readme-dashboard-cn.png b/docs/images/readme-dashboard-cn.png index 3c866533..b14b8bc2 100644 Binary files a/docs/images/readme-dashboard-cn.png and b/docs/images/readme-dashboard-cn.png differ diff --git a/docs/user-guide.md b/docs/user-guide.md index dcea71c4..af1dbd59 100644 --- a/docs/user-guide.md +++ b/docs/user-guide.md 
@@ -8,7 +8,6 @@ sudo sysctl -w fs.inotify.max_user_instances=2099999999 sudo sysctl -w fs.inotify.max_queued_events=2099999999 ``` - ## Create Cluster For Test Once the system environment is set up, you can proceed with installing the test cluster. The test cluster consists of a control plane and three member clusters. The control plane will have the karmada control plane installed, while the member clusters include two push mode member clusters and one pull mode member cluster. The architecture of the test cluster is as follows: diff --git a/go.mod b/go.mod index bf97e312..578ea12a 100644 --- a/go.mod +++ b/go.mod @@ -10,7 +10,6 @@ require ( github.com/golang-jwt/jwt/v5 v5.2.1 github.com/karmada-io/karmada v1.13.0 github.com/prometheus/common v0.55.0 - github.com/samber/lo v1.39.0 github.com/spf13/cobra v1.8.1 github.com/spf13/pflag v1.0.5 gopkg.in/yaml.v3 v3.0.1 diff --git a/go.sum b/go.sum index a425b5f6..d4fd3f4f 100644 --- a/go.sum +++ b/go.sum @@ -208,8 +208,6 @@ github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4= github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= -github.com/samber/lo v1.39.0 h1:4gTz1wUhNYLhFSKl6O+8peW0v2F4BCY034GRpU9WnuA= -github.com/samber/lo v1.39.0/go.mod h1:+m/ZKRl6ClXCE2Lgf3MsQlWfh4bn1bz6CXEOxnEXnEA= github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ= github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM= github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM= diff --git a/hack/images/image.list b/hack/images/image.list new file mode 100644 index 00000000..cab0c8b8 --- /dev/null +++ b/hack/images/image.list @@ -0,0 +1,29 @@ +# The hack/ops/load-images.sh script will load images (from an online or offline source) +# and rename each image to the image name described below. Each line should be in the following format: +# component-name;image-name;online-image-name;offline-image-name; +# restrictions: +# - `component-name` and `image-name` are required fields. +# - each line must have four fields, i.e. four semicolons.
+# - if `online-image-name` is empty, it shares its value with `image-name`, +# which has the same effect as `component-name;image-name;image-name;` +# - if both `online-image-name` and `offline-image-name` are set, `online-image-name` takes priority, +# meaning only `online-image-name` will be loaded +# - the offline images must be stored under the folder: ${REPO_ROOT}/hack/images/ + +# third-party dependencies +etcd;registry.k8s.io/etcd:3.5.9-0;;; +karmada-apiserver;registry.k8s.io/kube-apiserver:v1.27.11;;; +kube-controller-manager;registry.k8s.io/kube-controller-manager:v1.27.11;;; +kind;docker.io/kindest/node:v1.27.11;;; +metrics-server;registry.k8s.io/metrics-server/metrics-server:v0.6.3;;; + +# karmada +karmada-controller-manager;docker.io/karmada/karmada-controller-manager:v1.9.0;;; +karmada-scheduler;docker.io/karmada/karmada-scheduler:v1.9.0;;; +karmada-descheduler;docker.io/karmada/karmada-descheduler:v1.9.0;;; +karmada-webhook;docker.io/karmada/karmada-webhook:v1.9.0;;; +karmada-scheduler-estimator;docker.io/karmada/karmada-scheduler-estimator:v1.9.0;;; +karmada-aggregated-apiserver;docker.io/karmada/karmada-aggregated-apiserver:v1.9.0;;; +karmada-search;docker.io/karmada/karmada-search:v1.9.0;;; +karmada-metrics-adapter;docker.io/karmada/karmada-metrics-adapter:v1.9.0;;; +karmada-agent;docker.io/karmada/karmada-agent:v1.9.0;;; diff --git a/hack/local-build.sh b/hack/local-build.sh new file mode 100755 index 00000000..f830b6fc --- /dev/null +++ b/hack/local-build.sh @@ -0,0 +1,120 @@ +#!/bin/bash +# 本地构建Karmada Dashboard镜像 +# 不依赖网络下载包,使用本地构建方式 + + +# 直接使用本地构建脚本来构建镜像 + +# 指定版本(脚本读取 VERSION 环境变量,不解析 -v 参数): +# VERSION=v1.0.0 ./local-build.sh + +# 部署时,需要修改部署YAML文件中的镜像标签,从main改为您构建的标签(如dev): +# image: karmada/karmada-dashboard-api:dev +# image: karmada/karmada-dashboard-web:dev + +set -e + +# 修正REPO_ROOT的计算方式,以支持从任何目录调用脚本 +SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +if [[ $(basename "$SCRIPT_DIR") == "hack" ]]; then + # 如果脚本在hack目录下 + REPO_ROOT=$(cd "$SCRIPT_DIR/.." && pwd) +else + # 如果脚本已经在项目根目录 + REPO_ROOT="$SCRIPT_DIR" +fi + +VERSION=${VERSION:-"latest"} +REGISTRY=${REGISTRY:-"docker.io/karmada"} + +echo "==================== Karmada Dashboard 本地镜像构建 ====================" +echo "版本: $VERSION" +echo "镜像仓库: $REGISTRY" +echo "项目根目录: $REPO_ROOT" +echo "==================================================================" + +# 创建临时构建目录 +BUILD_DIR=$(mktemp -d) +echo "使用临时构建目录: $BUILD_DIR" + +# 确保退出时删除临时目录 +trap "rm -rf $BUILD_DIR" EXIT + +# 构建API镜像 +echo "1. 构建API镜像" + +# 编译API二进制文件 +echo "1.1 编译API二进制文件" +cd $REPO_ROOT +make karmada-dashboard-api GOOS=linux +cp -f $REPO_ROOT/_output/bin/linux/amd64/karmada-dashboard-api $BUILD_DIR/ + +# 创建不需要网络的Dockerfile +cat > $BUILD_DIR/Dockerfile.api << EOF +FROM alpine:3.18 + +# 不使用apk添加包,使用最小镜像 +COPY karmada-dashboard-api /bin/karmada-dashboard-api +WORKDIR /bin + +# 设置容器入口点 +ENTRYPOINT ["/bin/karmada-dashboard-api"] +EOF + +# 构建镜像 +echo "1.2 构建API镜像" +(cd $BUILD_DIR && docker build -t ${REGISTRY}/karmada-dashboard-api:${VERSION} -f Dockerfile.api .) +echo "API镜像构建完成: ${REGISTRY}/karmada-dashboard-api:${VERSION}" + +# 构建Web镜像 +echo "2. 构建Web镜像" + +# 编译前端代码 +echo "2.1 构建前端代码" +cd $REPO_ROOT/ui +# 检查和安装依赖 +if [ ! -d "node_modules" ]; then + echo "安装前端依赖..." + pnpm install +fi +# 构建前端项目 +echo "编译前端项目..."
+pnpm run dashboard:build +cd $REPO_ROOT + +# 编译Web二进制文件 +echo "2.2 编译Web二进制文件" +make karmada-dashboard-web GOOS=linux +cp -f $REPO_ROOT/_output/bin/linux/amd64/karmada-dashboard-web $BUILD_DIR/ + +# 复制前端构建产物 +echo "2.3 复制前端构建产物" +mkdir -p $BUILD_DIR/dist +cp -r $REPO_ROOT/ui/apps/dashboard/dist/* $BUILD_DIR/dist/ + +# 创建不需要网络的Dockerfile +cat > $BUILD_DIR/Dockerfile.web << EOF +FROM alpine:3.18 + +# 不使用apk添加包,使用最小镜像 +COPY dist /static +COPY karmada-dashboard-web /bin/karmada-dashboard-web +WORKDIR /bin + +# 设置容器入口点 +ENTRYPOINT ["/bin/karmada-dashboard-web"] +EOF + +# 构建镜像 +echo "2.4 构建Web镜像" +(cd $BUILD_DIR && docker build -t ${REGISTRY}/karmada-dashboard-web:${VERSION} -f Dockerfile.web .) +echo "Web镜像构建完成: ${REGISTRY}/karmada-dashboard-web:${VERSION}" + +# 输出结果 +echo "" +echo "==================== 构建完成 ====================" +echo "API 镜像: $REGISTRY/karmada-dashboard-api:$VERSION" +echo "Web 镜像: $REGISTRY/karmada-dashboard-web:$VERSION" +echo "使用 docker images 命令查看已构建的镜像" +echo "=================================================" + diff --git a/package.json b/package.json new file mode 100644 index 00000000..148bee11 --- /dev/null +++ b/package.json @@ -0,0 +1,5 @@ +{ + "dependencies": { + "@antv/g6": "4.8.23" + } +} diff --git a/pkg/client/auth.go b/pkg/client/auth.go index 1c450ca5..7fc26fc7 100644 --- a/pkg/client/auth.go +++ b/pkg/client/auth.go @@ -28,12 +28,16 @@ import ( const ( // authorizationHeader is the default authorization header name. + // 授权头名称 authorizationHeader = "Authorization" // authorizationTokenPrefix is the default bearer token prefix. + // 授权令牌前缀 authorizationTokenPrefix = "Bearer " ) +// karmadaConfigFromRequest 从 HTTP 请求创建一个 Karmada 配置 func karmadaConfigFromRequest(request *http.Request) (*rest.Config, error) { + // 构建授权信息 authInfo, err := buildAuthInfo(request) if err != nil { return nil, err @@ -42,89 +46,119 @@ func karmadaConfigFromRequest(request *http.Request) (*rest.Config, error) { return buildConfigFromAuthInfo(authInfo) } +// buildConfigFromAuthInfo 从授权信息构建一个 Karmada 配置 func buildConfigFromAuthInfo(authInfo *clientcmdapi.AuthInfo) (*rest.Config, error) { - cmdCfg := clientcmdapi.NewConfig() + // clientcmdapi.AuthInfo 是 clientcmd 包中的一个结构体,用于存储认证信息 + // clientcmdapi 是 client-go 包中的一个子包,用于存储客户端配置 + // clientcmd.NewDefaultClientConfig 是 client-go 包中的一个函数,用于创建一个默认的客户端配置 + // clientcmd.ConfigOverrides 是 client-go 包中的一个结构体,用于存储客户端配置的覆盖信息 + // clientcmd.NewConfig 是 client-go 包中的一个函数,用于创建一个默认的客户端配置 + // 创建一个 Karmada 配置 + cmdCfg := clientcmdapi.NewConfig() + // 设置集群 cmdCfg.Clusters[DefaultCmdConfigName] = &clientcmdapi.Cluster{ + // 设置集群的 Server Server: karmadaRestConfig.Host, + // 设置集群的 CertificateAuthority CertificateAuthority: karmadaRestConfig.TLSClientConfig.CAFile, + // 设置集群的 CertificateAuthorityData CertificateAuthorityData: karmadaRestConfig.TLSClientConfig.CAData, + // 设置集群的 InsecureSkipTLSVerify InsecureSkipTLSVerify: karmadaRestConfig.TLSClientConfig.Insecure, } - + // 设置认证信息 cmdCfg.AuthInfos[DefaultCmdConfigName] = authInfo - + // 设置上下文 cmdCfg.Contexts[DefaultCmdConfigName] = &clientcmdapi.Context{ + // 设置上下文的集群 Cluster: DefaultCmdConfigName, + // 设置上下文的认证信息 AuthInfo: DefaultCmdConfigName, } - + // 设置当前上下文 cmdCfg.CurrentContext = DefaultCmdConfigName - + // 返回 Karmada 配置 return clientcmd.NewDefaultClientConfig( *cmdCfg, &clientcmd.ConfigOverrides{}, ).ClientConfig() } +// buildAuthInfo 构建授权信息 func buildAuthInfo(request *http.Request) (*clientcmdapi.AuthInfo, error) { + // 检查请求头中是否包含授权信息 if !HasAuthorizationHeader(request) { return nil, 
 		return nil, k8serrors.NewUnauthorized("MSG_LOGIN_UNAUTHORIZED_ERROR")
 	}
-
+	// Extract the bearer token.
 	token := GetBearerToken(request)
+	// Create the auth info.
 	authInfo := &clientcmdapi.AuthInfo{
 		Token:                token,
 		ImpersonateUserExtra: make(map[string][]string),
 	}
-
+	// Handle impersonation headers, if any.
 	handleImpersonation(authInfo, request)
 	return authInfo, nil
 }
 
-// HasAuthorizationHeader checks if the request has an authorization header.
+// HasAuthorizationHeader checks whether the request carries a Bearer authorization header.
 func HasAuthorizationHeader(req *http.Request) bool {
 	header := req.Header.Get(authorizationHeader)
+	// An empty header cannot carry a token.
 	if len(header) == 0 {
 		return false
 	}
-
+	// Extract the bearer token.
 	token := extractBearerToken(header)
+	if len(token) == 0 {
+		return false
+	}
 	return strings.HasPrefix(header, authorizationTokenPrefix) && len(token) > 0
 }
 
-// GetBearerToken returns the bearer token from the authorization header.
+// GetBearerToken returns the bearer token carried by the authorization header.
 func GetBearerToken(req *http.Request) string {
 	header := req.Header.Get(authorizationHeader)
 	return extractBearerToken(header)
 }
 
 // SetAuthorizationHeader sets the authorization header for the given request.
 func SetAuthorizationHeader(req *http.Request, token string) {
 	req.Header.Set(authorizationHeader, authorizationTokenPrefix+token)
 }
 
+// extractBearerToken strips the bearer prefix and returns the raw token.
 func extractBearerToken(header string) string {
 	return strings.TrimPrefix(header, authorizationTokenPrefix)
 }
 
+// handleImpersonation copies the request's impersonation headers into the auth info.
 func handleImpersonation(authInfo *clientcmdapi.AuthInfo, request *http.Request) {
 	user := request.Header.Get(ImpersonateUserHeader)
 	groups := request.Header[ImpersonateGroupHeader]
-
+	// Nothing to do when no user is impersonated.
 	if len(user) == 0 {
 		return
 	}
 
-	// Impersonate user
 	authInfo.Impersonate = user
-
-	// Impersonate groups if available
+	// Impersonate groups if available.
 	if len(groups) > 0 {
 		authInfo.ImpersonateGroups = groups
 	}
-
-	// Add extra impersonation fields if available
+	// Add extra impersonation fields if available.
 	for name, values := range request.Header {
 		if strings.HasPrefix(name, ImpersonateUserExtraHeader) {
 			extraName := strings.TrimPrefix(name, ImpersonateUserExtraHeader)
diff --git a/pkg/client/client.go b/pkg/client/client.go
index bca55aae..b1955030 100644
--- a/pkg/client/client.go
+++ b/pkg/client/client.go
@@ -29,6 +29,7 @@ import (
 )
 
 // LoadRestConfig creates a rest.Config using the passed kubeconfig. If context is empty, current context in kubeconfig will be used.
 func LoadRestConfig(kubeconfig string, context string) (*rest.Config, error) {
 	loader := &clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfig}
 	loadedConfig, err := loader.Load()
@@ -50,6 +51,7 @@ func LoadRestConfig(kubeconfig string, context string) (*rest.Config, error) {
 }
 
 // LoadAPIConfig creates a clientcmdapi.Config using the passed kubeconfig. If currentContext is empty, current context in kubeconfig will be used.
 func LoadAPIConfig(kubeconfig string, currentContext string) (*clientcmdapi.Config, error) {
 	config, err := clientcmd.LoadFromFile(kubeconfig)
 	if err != nil {
@@ -86,6 +88,7 @@ func LoadAPIConfig(kubeconfig string, currentContext string) (*clientcmdapi.Conf
 }
 
 // LoadRestConfigFromKubeConfig creates a rest.Config from a kubeconfig string.
 func LoadRestConfigFromKubeConfig(kubeconfig string) (*rest.Config, error) {
 	apiConfig, err := clientcmd.Load([]byte(kubeconfig))
 	if err != nil {
@@ -100,6 +103,7 @@ func LoadRestConfigFromKubeConfig(kubeconfig string) (*rest.Config, error) {
 }
 
 // KubeClientSetFromKubeConfig creates a Kubernetes clientset from a kubeconfig string.
 func KubeClientSetFromKubeConfig(kubeconfig string) (*kubeclient.Clientset, error) {
 	restConfig, err := LoadRestConfigFromKubeConfig(kubeconfig)
 	if err != nil {
@@ -110,14 +114,19 @@ func KubeClientSetFromKubeConfig(kubeconfig string) (*kubeclient.Clientset, erro
 }
 
 // GetKarmadaClientFromRequest creates a Karmada clientset from an HTTP request.
 func GetKarmadaClientFromRequest(request *http.Request) (karmadaclientset.Interface, error) {
+	// Make sure the Karmada config has been initialized.
 	if !isKarmadaInitialized() {
 		return nil, fmt.Errorf("client package not initialized")
 	}
+	// Build the clientset from the request's credentials.
 	return karmadaClientFromRequest(request)
 }
 
+// karmadaClientFromRequest creates a Karmada clientset from an HTTP request.
 func karmadaClientFromRequest(request *http.Request) (karmadaclientset.Interface, error) {
+	// Derive a rest.Config from the HTTP request.
 	config, err := karmadaConfigFromRequest(request)
 	if err != nil {
 		return nil, err
diff --git a/pkg/client/init.go b/pkg/client/init.go
index 9595fc02..3d5ae622 100644
--- a/pkg/client/init.go
+++ b/pkg/client/init.go
@@ -29,21 +29,33 @@ import (
 	"k8s.io/klog/v2"
 )
 
+// proxyURL is the URL template for proxying requests to a member cluster's apiserver.
 const proxyURL = "/apis/cluster.karmada.io/v1alpha1/clusters/%s/proxy/"
 
 var (
+	// kubernetesRestConfig is the rest.Config for the Kubernetes (host) apiserver.
 	kubernetesRestConfig *rest.Config
+	// kubernetesAPIConfig is the clientcmdapi.Config for the Kubernetes (host) apiserver.
 	kubernetesAPIConfig *clientcmdapi.Config
+	// inClusterClient is the in-cluster Kubernetes client.
 	inClusterClient kubeclient.Interface
+	// karmadaRestConfig is the rest.Config for the Karmada apiserver.
 	karmadaRestConfig *rest.Config
+	// karmadaAPIConfig is the clientcmdapi.Config for the Karmada apiserver.
 	karmadaAPIConfig *clientcmdapi.Config
+	// karmadaMemberConfig is the rest.Config used to reach member clusters.
 	karmadaMemberConfig *rest.Config
+	// inClusterKarmadaClient is the Karmada clientset.
 	inClusterKarmadaClient karmadaclientset.Interface
+	// inClusterClientForKarmadaAPIServer is the Kubernetes client for the Karmada apiserver.
 	inClusterClientForKarmadaAPIServer kubeclient.Interface
+	// inClusterClientForMemberAPIServer is the Kubernetes client for a member cluster's apiserver.
 	inClusterClientForMemberAPIServer kubeclient.Interface
+	// memberClients caches clients for member clusters.
 	memberClients sync.Map
 )
 
+// configBuilder collects the options used to build client configs.
 type configBuilder struct {
 	kubeconfigPath string
 	kubeContext    string
@@ -51,37 +63,38 @@ type configBuilder struct {
 	userAgent string
 }
 
 // Option is a function that configures a configBuilder.
 type Option func(*configBuilder)
 
 // WithUserAgent is an option to set the user agent.
 func WithUserAgent(agent string) Option {
 	return func(c *configBuilder) {
 		c.userAgent = agent
 	}
 }
 
 // WithKubeconfig is an option to set the kubeconfig path.
 func WithKubeconfig(path string) Option {
 	return func(c *configBuilder) {
 		c.kubeconfigPath = path
 	}
 }
 
 // WithKubeContext is an option to set the kubeconfig context.
 func WithKubeContext(kubecontext string) Option {
 	return func(c *configBuilder) {
 		c.kubeContext = kubecontext
 	}
 }
 
 // WithInsecureTLSSkipVerify is an option to set the insecure tls skip verify.
 func WithInsecureTLSSkipVerify(insecure bool) Option {
 	return func(c *configBuilder) {
 		c.insecure = insecure
 	}
 }
 
+// newConfigBuilder creates a configBuilder with the given options applied.
 func newConfigBuilder(options ...Option) *configBuilder {
 	builder := &configBuilder{}
 
@@ -92,6 +105,7 @@ func newConfigBuilder(options ...Option) *configBuilder {
 	return builder
 }
 
+// buildRestConfig builds a rest.Config from the builder's options.
 func (in *configBuilder) buildRestConfig() (*rest.Config, error) {
 	if len(in.kubeconfigPath) == 0 {
 		return nil, errors.New("must specify kubeconfig")
@@ -112,6 +126,7 @@ func (in *configBuilder) buildRestConfig() (*rest.Config, error) {
 	return restConfig, nil
 }
 
+// buildAPIConfig builds a clientcmdapi.Config from the builder's options.
 func (in *configBuilder) buildAPIConfig() (*clientcmdapi.Config, error) {
 	if len(in.kubeconfigPath) == 0 {
 		return nil, errors.New("must specify kubeconfig")
@@ -124,6 +139,7 @@ func (in *configBuilder) buildAPIConfig() (*clientcmdapi.Config, error) {
 	return apiConfig, nil
 }
 
+// isKubeInitialized checks whether the Kubernetes client config has been initialized.
 func isKubeInitialized() bool {
 	if kubernetesRestConfig == nil || kubernetesAPIConfig == nil {
 		klog.Errorf(`karmada/karmada-dashboard/client' package has not been initialized properly. Run 'client.InitKubeConfig(...)' to initialize it. `)
@@ -132,7 +148,7 @@ func isKubeInitialized() bool {
 	return true
 }
 
 // InitKubeConfig initializes the kubernetes client config.
 func InitKubeConfig(options ...Option) {
 	builder := newConfigBuilder(options...)
 	// prefer InClusterConfig, if something wrong, use explicit kubeconfig path
@@ -163,7 +179,7 @@ func InitKubeConfig(options ...Option) {
 	}
 }
 
 // InClusterClient returns a kubernetes client.
 func InClusterClient() kubeclient.Interface {
 	if !isKubeInitialized() {
 		return nil
@@ -184,7 +200,7 @@ func InClusterClient() kubeclient.Interface {
 	return inClusterClient
 }
 
 // GetKubeConfig returns the kubernetes client config.
 func GetKubeConfig() (*rest.Config, *clientcmdapi.Config, error) {
 	if !isKubeInitialized() {
 		return nil, nil, fmt.Errorf("client package not initialized")
@@ -192,6 +208,7 @@ func GetKubeConfig() (*rest.Config, *clientcmdapi.Config, error) {
 	return kubernetesRestConfig, kubernetesAPIConfig, nil
 }
 
+// isKarmadaInitialized checks whether the Karmada client config has been initialized.
 func isKarmadaInitialized() bool {
 	if karmadaRestConfig == nil || karmadaAPIConfig == nil {
 		klog.Errorf(`karmada/karmada-dashboard/client' package has not been initialized properly. Run 'client.InitKarmadaConfig(...)' to initialize it. `)
@@ -200,7 +217,7 @@ func isKarmadaInitialized() bool {
 	return true
 }
 
 // InitKarmadaConfig initializes the karmada client config.
 func InitKarmadaConfig(options ...Option) {
 	builder := newConfigBuilder(options...)
 	restConfig, err := builder.buildRestConfig()
@@ -225,7 +242,7 @@ func InitKarmadaConfig(options ...Option) {
 	karmadaMemberConfig = memberConfig
 }
 
 // InClusterKarmadaClient returns a karmada client.
 func InClusterKarmadaClient() karmadaclientset.Interface {
 	if !isKarmadaInitialized() {
 		return nil
@@ -244,7 +261,7 @@ func InClusterKarmadaClient() karmadaclientset.Interface {
 	return inClusterKarmadaClient
 }
 
 // GetKarmadaConfig returns the karmada client config.
 func GetKarmadaConfig() (*rest.Config, *clientcmdapi.Config, error) {
 	if !isKarmadaInitialized() {
 		return nil, nil, fmt.Errorf("client package not initialized")
@@ -252,7 +269,7 @@ func GetKarmadaConfig() (*rest.Config, *clientcmdapi.Config, error) {
 	return karmadaRestConfig, karmadaAPIConfig, nil
 }
 
-// GetMemberConfig returns the member client config.
+// GetMemberConfig returns the member cluster client config.
 func GetMemberConfig() (*rest.Config, error) {
 	if !isKarmadaInitialized() {
 		return nil, fmt.Errorf("client package not initialized")
@@ -260,7 +277,7 @@ func GetMemberConfig() (*rest.Config, error) {
 	return karmadaMemberConfig, nil
 }
 
-// InClusterClientForKarmadaAPIServer returns a kubernetes client for karmada apiserver.
+// InClusterClientForKarmadaAPIServer returns a kubernetes client for the karmada apiserver.
 func InClusterClientForKarmadaAPIServer() kubeclient.Interface {
 	if !isKarmadaInitialized() {
 		return nil
@@ -282,13 +299,14 @@ func InClusterClientForKarmadaAPIServer() kubeclient.Interface {
 	return inClusterClientForKarmadaAPIServer
 }
 
-// InClusterClientForMemberCluster returns a kubernetes client for member apiserver.
+// InClusterClientForMemberCluster returns a kubernetes client for a member cluster's apiserver.
 func InClusterClientForMemberCluster(clusterName string) kubeclient.Interface {
 	if !isKarmadaInitialized() {
 		return nil
 	}
 
 	// Load and return Interface for member apiserver if already exist
 	if value, ok := memberClients.Load(clusterName); ok {
 		if inClusterClientForMemberAPIServer, ok = value.(kubeclient.Interface); ok {
 			return inClusterClientForMemberAPIServer
@@ -297,7 +315,7 @@ func InClusterClientForMemberCluster(clusterName string) kubeclient.Interface {
 		return nil
 	}
 
-	// Client for new member apiserver
+	// Create a client for a member apiserver that has not been cached yet.
 	restConfig, _, err := GetKarmadaConfig()
 	if err != nil {
 		klog.ErrorS(err, "Could not get karmada restConfig")
@@ -319,7 +337,7 @@ func InClusterClientForMemberCluster(clusterName string) kubeclient.Interface {
 	return inClusterClientForMemberAPIServer
 }
 
 // ConvertRestConfigToAPIConfig converts a rest.Config to a clientcmdapi.Config.
 func ConvertRestConfigToAPIConfig(restConfig *rest.Config) *clientcmdapi.Config {
 	clientcmdConfig := clientcmdapi.NewConfig()
diff --git a/pkg/client/types.go b/pkg/client/types.go
index fd89cf07..95ef07f7 100644
--- a/pkg/client/types.go
+++ b/pkg/client/types.go
@@ -24,28 +24,38 @@ import (
 const (
 	// DefaultQPS is the default globalClient QPS configuration. High enough QPS to fit all expected use cases.
 	// QPS=0 is not set here, because globalClient code is overriding it.
 	DefaultQPS = 1e6
 	// DefaultBurst is the default globalClient burst configuration. High enough Burst to fit all expected use cases.
 	// Burst=0 is not set here, because globalClient code is overriding it.
 	DefaultBurst = 1e6
 	// DefaultUserAgent is the default http header for user-agent
 	DefaultUserAgent = "dashboard"
 	// DefaultCmdConfigName is the default cluster/context/auth name to be set in clientcmd config
 	DefaultCmdConfigName = "kubernetes"
 	// ImpersonateUserHeader is the header name to identify username to act as.
 	ImpersonateUserHeader = "Impersonate-User"
 	// ImpersonateGroupHeader is the header name to identify group name to act as.
 	// Can be provided multiple times to set multiple groups.
 	ImpersonateGroupHeader = "Impersonate-Group"
 	// ImpersonateUserExtraHeader is the header name used to associate extra fields with the user.
 	// It is optional, and it requires ImpersonateUserHeader to be set.
 	ImpersonateUserExtraHeader = "Impersonate-Extra-"
 )
 
-// ResourceVerber is responsible for performing generic CRUD operations on all supported resources.
+// ResourceVerber is the interface responsible for performing generic CRUD operations on all supported resources.
 type ResourceVerber interface {
+	// Update updates the given resource.
 	Update(object *unstructured.Unstructured) error
+	// Get fetches a resource by kind, namespace and name.
 	Get(kind string, namespace string, name string) (runtime.Object, error)
+	// Delete deletes a resource by kind, namespace and name.
 	Delete(kind string, namespace string, name string, deleteNow bool) error
+	// Create creates the given resource.
 	Create(object *unstructured.Unstructured) (*unstructured.Unstructured, error)
 }
diff --git a/pkg/client/verber.go b/pkg/client/verber.go
index 1bda0f51..29b48470 100644
--- a/pkg/client/verber.go
+++ b/pkg/client/verber.go
@@ -36,16 +36,17 @@ import (
 )
 
 var (
+	// kindToGroupVersionResource maps a kind to its GroupVersionResource.
 	kindToGroupVersionResource = map[string]schema.GroupVersionResource{}
 )
 
-// resourceVerber is a struct responsible for doing common verb operations on resources, like
-// DELETE, PUT, UPDATE.
+// resourceVerber is a struct responsible for common CRUD verb operations on resources,
+// such as DELETE, PUT and UPDATE.
 type resourceVerber struct {
 	client    dynamic.Interface
 	discovery discovery.DiscoveryInterface
 }
 
+// groupVersionResourceFromUnstructured derives the GroupVersionResource from an unstructured object.
 func (v *resourceVerber) groupVersionResourceFromUnstructured(object *unstructured.Unstructured) schema.GroupVersionResource {
 	gvk := object.GetObjectKind().GroupVersionKind()
 
@@ -56,6 +57,7 @@ func (v *resourceVerber) groupVersionResourceFromUnstructured(object *unstructur
 	}
 }
 
+// groupVersionResourceFromKind resolves the GroupVersionResource for the given kind.
 func (v *resourceVerber) groupVersionResourceFromKind(kind string) (schema.GroupVersionResource, error) {
 	if gvr, exists := kindToGroupVersionResource[kind]; exists {
 		klog.V(3).InfoS("GroupVersionResource cache hit", "kind", kind)
@@ -80,6 +82,7 @@ return schema.GroupVersionResource{}, fmt.Errorf("could not find GVR for kind %s", kind)
 }
 
+// buildGroupVersionResourceCache builds the kind-to-GroupVersionResource cache.
 func (v *resourceVerber) buildGroupVersionResourceCache(resourceList []*metav1.APIResourceList) error {
 	for _, resource := range resourceList {
 		gv, err := schema.ParseGroupVersion(resource.GroupVersion)
@@ -111,7 +114,7 @@ func (v *resourceVerber) buildGroupVersionResourceCache(resourceList []*metav1.A
 	return nil
 }
 
 // Delete deletes the resource of the given kind in the given namespace with the given name.
 func (v *resourceVerber) Delete(kind string, namespace string, name string, deleteNow bool) error {
 	gvr, err := v.groupVersionResourceFromKind(kind)
 	if err != nil {
@@ -132,7 +135,7 @@ func (v *resourceVerber) Delete(kind string, namespace string, name string, dele
 	return v.client.Resource(gvr).Namespace(namespace).Delete(context.TODO(), name, defaultDeleteOptions)
 }
 
-// Update patches resource of the given kind in the given namespace with the given name.
+// Update patches the resource of the given kind in the given namespace with the given name.
 func (v *resourceVerber) Update(object *unstructured.Unstructured) error {
 	name := object.GetName()
 	namespace := object.GetNamespace()
@@ -168,7 +171,7 @@ func (v *resourceVerber) Update(object *unstructured.Unstructured) error {
 	})
 }
 
 // Get gets the resource of the given kind in the given namespace with the given name.
 func (v *resourceVerber) Get(kind string, namespace string, name string) (runtime.Object, error) {
 	gvr, err := v.groupVersionResourceFromKind(kind)
 	if err != nil {
@@ -177,7 +180,7 @@ func (v *resourceVerber) Get(kind string, namespace string, name string) (runtim
 	return v.client.Resource(gvr).Namespace(namespace).Get(context.TODO(), name, metav1.GetOptions{})
 }
 
 // Create creates the resource of the given kind in the given namespace with the given name.
 func (v *resourceVerber) Create(object *unstructured.Unstructured) (*unstructured.Unstructured, error) {
 	namespace := object.GetNamespace()
 	gvr := v.groupVersionResourceFromUnstructured(object)
@@ -185,7 +188,7 @@ func (v *resourceVerber) Create(object *unstructured.Unstructured) (*unstructure
 	return v.client.Resource(gvr).Namespace(namespace).Create(context.TODO(), object, metav1.CreateOptions{})
 }
 
 // VerberClient returns a resourceVerber client.
 func VerberClient(_ *http.Request) (ResourceVerber, error) {
 	// todo currently ignore rest.config from http.Request
 	restConfig, _, err := GetKarmadaConfig()
diff --git a/pkg/config/config.go b/pkg/config/config.go
index edf31366..c7c8e3f4 100644
--- a/pkg/config/config.go
+++ b/pkg/config/config.go
@@ -31,6 +31,7 @@ import (
 	"k8s.io/klog/v2"
 )
 
+// dashboardConfig holds the current dashboard configuration.
 var dashboardConfig DashboardConfig
 
 const (
@@ -47,7 +48,7 @@ var (
 	}
 )
 
 // GetConfigKey returns the configuration key based on the environment name.
 func GetConfigKey() string {
 	envName := os.Getenv("ENV_NAME")
 	if envName == "" {
@@ -56,7 +57,7 @@ func GetConfigKey() string {
 	return fmt.Sprintf("%s.yaml", envName)
 }
 
 // InitDashboardConfig initializes the dashboard configuration using a Kubernetes client.
 func InitDashboardConfig(k8sClient kubernetes.Interface, stopper <-chan struct{}) {
 	factory := informers.NewSharedInformerFactory(k8sClient, 0)
 	resource, err := factory.ForResource(configmapGVR)
@@ -100,7 +101,7 @@ func InitDashboardConfig(k8sClient kubernetes.Interface, stopper <-chan struct{}
 	klog.Infof("ConfigMap informer started, waiting for ConfigMap events...")
 }
 
 // GetDashboardConfig returns a copy of the current dashboard configuration.
 func GetDashboardConfig() DashboardConfig {
 	return DashboardConfig{
 		DockerRegistries: dashboardConfig.DockerRegistries,
@@ -110,7 +111,7 @@ func GetDashboardConfig() DashboardConfig {
 	}
 }
 
 // UpdateDashboardConfig updates the dashboard configuration in the Kubernetes ConfigMap.
 func UpdateDashboardConfig(k8sClient kubernetes.Interface, newDashboardConfig DashboardConfig) error {
 	ctx := context.TODO()
 	oldConfigMap, err := k8sClient.CoreV1().ConfigMaps(configNamespace).Get(ctx, configName, metav1.GetOptions{})
@@ -133,7 +134,7 @@ func UpdateDashboardConfig(k8sClient kubernetes.Interface, newDashboardConfig Da
 	return nil
 }
 
 // InitDashboardConfigFromMountFile initializes the dashboard configuration from a mounted file.
 func InitDashboardConfigFromMountFile(mountPath string) error {
 	_, err := os.Stat(mountPath)
 	if os.IsNotExist(err) {
diff --git a/pkg/config/model.go b/pkg/config/model.go
index 9bef29c1..b6129afa 100644
--- a/pkg/config/model.go
+++ b/pkg/config/model.go
@@ -17,6 +17,7 @@ limitations under the License.
 package config
 
 // DockerRegistry represents a Docker registry configuration.
 type DockerRegistry struct {
 	Name string `yaml:"name" json:"name"`
 	URL  string `yaml:"url" json:"url"`
@@ -25,7 +26,7 @@ type DockerRegistry struct {
 	AddTime int64 `yaml:"add_time" json:"add_time"`
 }
 
 // ChartRegistry represents a Helm chart registry configuration.
 type ChartRegistry struct {
 	Name string `yaml:"name" json:"name"`
 	URL  string `yaml:"url" json:"url"`
@@ -34,7 +35,7 @@ type ChartRegistry struct {
 	AddTime int64 `yaml:"add_time" json:"add_time"`
 }
 
 // MenuConfig represents a menu configuration.
 type MenuConfig struct {
 	Path     string `yaml:"path" json:"path"`
 	Enable   bool   `yaml:"enable" json:"enable"`
@@ -42,7 +43,7 @@ type MenuConfig struct {
 	Children []MenuConfig `yaml:"children" json:"children,omitempty"`
 }
 
 // DashboardConfig represents the configuration structure for the Karmada dashboard.
 type DashboardConfig struct {
 	DockerRegistries []DockerRegistry `yaml:"docker_registries" json:"docker_registries"`
 	ChartRegistries  []ChartRegistry  `yaml:"chart_registries" json:"chart_registries"`
diff --git a/pkg/dataselect/dataselect_test.go b/pkg/dataselect/dataselect_test.go
index 0234b60a..d8eb63cf 100644
--- a/pkg/dataselect/dataselect_test.go
+++ b/pkg/dataselect/dataselect_test.go
@@ -19,23 +19,27 @@ import (
 	"testing"
 )
 
+// PaginationTestCase describes a single pagination test case.
 type PaginationTestCase struct {
 	Info            string
 	PaginationQuery *PaginationQuery
 	ExpectedOrder   []int
 }
 
+// SortTestCase describes a single sort test case.
 type SortTestCase struct {
 	Info          string
 	SortQuery     *SortQuery
 	ExpectedOrder []int
 }
 
+// TestDataCell is a data cell used in tests.
 type TestDataCell struct {
 	Name string
 	ID   int
 }
 
+// GetProperty returns the requested property of the cell.
 func (c TestDataCell) GetProperty(name PropertyName) ComparableValue {
 	switch name {
 	case NameProperty:
@@ -47,6 +51,7 @@ func (c TestDataCell) GetProperty(name PropertyName) ComparableValue {
 	}
 }
 
+// toCells converts a slice of TestDataCell into DataCell values.
 func toCells(std []TestDataCell) []DataCell {
 	cells := make([]DataCell, len(std))
 	for i := range std {
@@ -55,6 +60,7 @@ func toCells(std []TestDataCell) []DataCell {
 	return cells
 }
 
+// fromCells converts DataCell values back into TestDataCell values.
 func fromCells(cells []DataCell) []TestDataCell {
 	std := make([]TestDataCell, len(cells))
 	for i := range std {
@@ -63,6 +69,7 @@ func fromCells(cells []DataCell) []TestDataCell {
 	return std
 }
 
+// getDataCellList returns the fixture list of data cells.
 func getDataCellList() []DataCell {
 	return toCells([]TestDataCell{
 		{"ab", 1},
@@ -78,6 +85,7 @@ func getDataCellList() []DataCell {
 	})
 }
 
+// getOrder returns the IDs of the cells in their current order.
 func getOrder(dataList []TestDataCell) []int {
 	idOrder := []int{}
 	for _, e := range dataList {
@@ -86,6 +94,7 @@ func getOrder(dataList []TestDataCell) []int {
 	return idOrder
 }
 
+// TestSort exercises the sort functionality.
 func TestSort(t *testing.T) {
 	testCases := []SortTestCase{
 		{
@@ -154,6 +163,7 @@ func TestSort(t *testing.T) {
 	}
 }
 
+// TestPagination exercises the pagination functionality.
 func TestPagination(t *testing.T) {
 	testCases := []PaginationTestCase{
 		{
diff --git a/pkg/dataselect/dataselectquery.go b/pkg/dataselect/dataselectquery.go
index 827989bf..4a473b86 100644
--- a/pkg/dataselect/dataselectquery.go
+++ b/pkg/dataselect/dataselectquery.go
@@ -17,12 +17,18 @@ package dataselect
 
 // DataSelectQuery is options for GenericDataSelect which takes []GenericDataCell and returns selected data.
 // Can be extended to include any kind of selection - for example filtering.
 // Currently included only Pagination and Sort options.
 type DataSelectQuery struct {
 	// PaginationQuery holds options for pagination of data select.
 	PaginationQuery *PaginationQuery
 	// SortQuery holds options for sort functionality of data select.
 	SortQuery *SortQuery
 	// FilterQuery holds options for filter functionality of data select.
 	FilterQuery *FilterQuery
 	//MetricQuery *MetricQuery
 }
@@ -48,24 +54,31 @@ var NoSort = &SortQuery{
 }
 
 // FilterQuery holds options for filter functionality of data select.
 type FilterQuery struct {
 	// FilterByList is a list of filter criteria for data selection.
 	FilterByList []FilterBy
 }
 
 // FilterBy defines a filter criterion for data selection.
 // It specifies a property to filter on and the value to compare against.
 type FilterBy struct {
 	// Property is the name of the field or attribute to filter by.
 	Property PropertyName
 	// Value is the comparable value to match against the specified property.
 	Value ComparableValue
 }
 
 // NoFilter is an option for no filter.
 var NoFilter = &FilterQuery{
 	// FilterByList is a list of filter criteria for data selection.
 	FilterByList: []FilterBy{},
 }
@@ -73,10 +86,14 @@ var NoDataSelect = NewDataSelectQuery(NoPagination, NoSort, NoFilter)
 
 // NewDataSelectQuery creates DataSelectQuery object from simpler data select queries.
 func NewDataSelectQuery(paginationQuery *PaginationQuery, sortQuery *SortQuery, filterQuery *FilterQuery) *DataSelectQuery {
 	return &DataSelectQuery{
 		PaginationQuery: paginationQuery,
 		SortQuery:       sortQuery,
 		FilterQuery:     filterQuery,
 	}
 }
diff --git a/pkg/dataselect/pagination.go b/pkg/dataselect/pagination.go
index a01bd74f..070be522 100644
--- a/pkg/dataselect/pagination.go
+++ b/pkg/dataselect/pagination.go
@@ -15,23 +15,28 @@ package dataselect
 
 // NoPagination By default backend pagination will not be applied.
 var NoPagination = NewPaginationQuery(-1, -1)
 
 // EmptyPagination No items will be returned
 var EmptyPagination = NewPaginationQuery(0, 0)
 
 // DefaultPagination Returns 10 items from page 1
 var DefaultPagination = NewPaginationQuery(10, 0)
 
 // PaginationQuery structure represents pagination settings
 type PaginationQuery struct {
 	// How many items per page should be returned
 	ItemsPerPage int
 	// Number of page that should be returned when pagination is applied to the list
 	Page int
 }
 
-// NewPaginationQuery return pagination query structure based on given parameters
+// NewPaginationQuery returns a pagination query structure based on the given parameters.
 func NewPaginationQuery(itemsPerPage, page int) *PaginationQuery {
 	return &PaginationQuery{itemsPerPage, page}
 }
diff --git a/pkg/resource/clusteroverridepolicy/common.go b/pkg/resource/clusteroverridepolicy/common.go
index 154c2919..96321b95 100644
--- a/pkg/resource/clusteroverridepolicy/common.go
+++ b/pkg/resource/clusteroverridepolicy/common.go
@@ -23,9 +23,10 @@ import (
 )
 
 // ClusterOverridePolicyCell wraps v1alpha1.ClusterOverridePolicy for data selection.
 type ClusterOverridePolicyCell v1alpha1.ClusterOverridePolicy
 
 // GetProperty returns a property of the cluster override policy cell.
 func (c ClusterOverridePolicyCell) GetProperty(name dataselect.PropertyName) dataselect.ComparableValue {
 	switch name {
 	case dataselect.NameProperty:
@@ -38,6 +39,7 @@ func (c ClusterOverridePolicyCell) GetProperty(name dataselect.PropertyName) dat
 	}
 }
 
+// toCells converts a list of v1alpha1.ClusterOverridePolicy into a list of dataselect.DataCell.
 func toCells(std []v1alpha1.ClusterOverridePolicy) []dataselect.DataCell {
 	cells := make([]dataselect.DataCell, len(std))
 	for i := range std {
@@ -46,6 +48,7 @@ func toCells(std []v1alpha1.ClusterOverridePolicy) []dataselect.DataCell {
 	return cells
 }
 
+// fromCells converts a list of dataselect.DataCell back into a list of v1alpha1.ClusterOverridePolicy.
 func fromCells(cells []dataselect.DataCell) []v1alpha1.ClusterOverridePolicy {
 	std := make([]v1alpha1.ClusterOverridePolicy, len(cells))
 	for i := range std {
diff --git a/pkg/resource/clusteroverridepolicy/detail.go b/pkg/resource/clusteroverridepolicy/detail.go
index 1cf12679..c4c958d4 100644
--- a/pkg/resource/clusteroverridepolicy/detail.go
+++ b/pkg/resource/clusteroverridepolicy/detail.go
@@ -27,15 +27,16 @@ import (
 )
 
-// ClusterOverridePolicyDetail contains clusterPropagationPolicy details and non-critical errors.
+// ClusterOverridePolicyDetail contains clusterOverridePolicy details and non-critical errors.
 type ClusterOverridePolicyDetail struct {
 	// Extends list item structure.
 	ClusterOverridePolicy `json:",inline"`
 
 	// List of non-critical errors, that occurred during resource retrieval.
 	Errors []error `json:"errors"`
 }
 
-// GetClusterOverridePolicyDetail gets clusterPropagationPolicy details.
+// GetClusterOverridePolicyDetail gets clusterOverridePolicy details.
 func GetClusterOverridePolicyDetail(client karmadaclientset.Interface, name string) (*ClusterOverridePolicyDetail, error) {
 	overridepolicyData, err := client.PolicyV1alpha1().ClusterOverridePolicies().Get(context.TODO(), name, metaV1.GetOptions{})
 	if err != nil {
@@ -51,6 +52,8 @@ func GetClusterOverridePolicyDetail(client karmadaclientset.Interface, name stri
 	return &propagationpolicy, nil
 }
 
+// toOverridePolicyDetail converts a ClusterOverridePolicy into a ClusterOverridePolicyDetail,
+// attaching any non-critical errors.
 func toOverridePolicyDetail(clusterOverridepolicy *v1alpha1.ClusterOverridePolicy, nonCriticalErrors []error) ClusterOverridePolicyDetail {
 	return ClusterOverridePolicyDetail{
 		ClusterOverridePolicy: toClusterOverridePolicy(clusterOverridepolicy),
diff --git a/pkg/resource/clusteroverridepolicy/list.go b/pkg/resource/clusteroverridepolicy/list.go
index 7eb50334..1d628c40 100644
--- a/pkg/resource/clusteroverridepolicy/list.go
+++ b/pkg/resource/clusteroverridepolicy/list.go
@@ -29,26 +29,27 @@ import (
 )
 
-// ClusterOverridePolicyList contains a list of overriders in the karmada control-plane.
+// ClusterOverridePolicyList contains a list of cluster override policies in the karmada control-plane.
 type ClusterOverridePolicyList struct {
 	ListMeta types.ListMeta `json:"listMeta"`
 
 	// Unordered list of clusterOverridePolicies.
 	ClusterOverridePolicies []ClusterOverridePolicy `json:"clusterOverridePolicies"`
 
 	// List of non-critical errors, that occurred during resource retrieval.
 	Errors []error `json:"errors"`
 }
 
 // ClusterOverridePolicy contains information about a single clusterOverridePolicy.
 type ClusterOverridePolicy struct {
 	ObjectMeta types.ObjectMeta `json:"objectMeta"`
 	TypeMeta   types.TypeMeta   `json:"typeMeta"`
 
-	// Override specificed data
+	// Override-specific data
 	ResourceSelectors []v1alpha1.ResourceSelector `json:"resourceSelectors"`
 	OverrideRules     []v1alpha1.RuleWithCluster  `json:"overrideRules"`
 }
 
-// GetClusterOverridePolicyList returns a list of all overiders in the karmada control-plance.
+// GetClusterOverridePolicyList returns a list of all cluster override policies in the karmada control-plane.
 func GetClusterOverridePolicyList(client karmadaclientset.Interface, dsQuery *dataselect.DataSelectQuery) (*ClusterOverridePolicyList, error) {
 	clusterOverridePolicies, err := client.PolicyV1alpha1().ClusterOverridePolicies().List(context.TODO(), helpers.ListEverything)
 	nonCriticalErrors, criticalError := errors.ExtractErrors(err)
@@ -59,6 +60,7 @@ func GetClusterOverridePolicyList(client karmadaclientset.Interface, dsQuery *da
 	return toClusterOverridePolicyList(clusterOverridePolicies.Items, nonCriticalErrors, dsQuery), nil
 }
 
+// toClusterOverridePolicyList converts a list of v1alpha1.ClusterOverridePolicy into a ClusterOverridePolicyList.
 func toClusterOverridePolicyList(clusterOverridePolicies []v1alpha1.ClusterOverridePolicy, nonCriticalErrors []error, dsQuery *dataselect.DataSelectQuery) *ClusterOverridePolicyList {
 	overridepolicyList := &ClusterOverridePolicyList{
 		ClusterOverridePolicies: make([]ClusterOverridePolicy, 0),
@@ -76,6 +78,7 @@ func toClusterOverridePolicyList(clusterOverridePolicies []v1alpha1.ClusterOverr
 	return overridepolicyList
 }
 
+// toClusterOverridePolicy converts a v1alpha1.ClusterOverridePolicy into a ClusterOverridePolicy.
 func toClusterOverridePolicy(overridepolicy *v1alpha1.ClusterOverridePolicy) ClusterOverridePolicy {
 	return ClusterOverridePolicy{
 		ObjectMeta: types.NewObjectMeta(overridepolicy.ObjectMeta),
diff --git a/pkg/resource/clusterpropagationpolicy/common.go b/pkg/resource/clusterpropagationpolicy/common.go
index 53de069f..9cee5b03 100644
--- a/pkg/resource/clusterpropagationpolicy/common.go
+++ b/pkg/resource/clusterpropagationpolicy/common.go
@@ -23,9 +23,10 @@ import (
 )
 
 // ClusterPropagationPolicyCell wraps v1alpha1.ClusterPropagationPolicy for data selection.
 type ClusterPropagationPolicyCell v1alpha1.ClusterPropagationPolicy
 
 // GetProperty returns a property of the cluster propagation policy cell.
 func (c ClusterPropagationPolicyCell) GetProperty(name dataselect.PropertyName) dataselect.ComparableValue {
 	switch name {
 	case dataselect.NameProperty:
@@ -38,6 +39,7 @@ func (c ClusterPropagationPolicyCell) GetProperty(name dataselect.PropertyName)
 	}
 }
 
+// toCells converts a list of v1alpha1.ClusterPropagationPolicy into a list of dataselect.DataCell.
 func toCells(std []v1alpha1.ClusterPropagationPolicy) []dataselect.DataCell {
 	cells := make([]dataselect.DataCell, len(std))
 	for i := range std {
@@ -46,6 +48,7 @@ func toCells(std []v1alpha1.ClusterPropagationPolicy) []dataselect.DataCell {
 	return cells
 }
 
+// fromCells converts a list of dataselect.DataCell back into a list of v1alpha1.ClusterPropagationPolicy.
 func fromCells(cells []dataselect.DataCell) []v1alpha1.ClusterPropagationPolicy {
 	std := make([]v1alpha1.ClusterPropagationPolicy, len(cells))
 	for i := range std {
diff --git a/pkg/resource/clusterpropagationpolicy/detail.go b/pkg/resource/clusterpropagationpolicy/detail.go
index d3efc90e..bd00c55e 100644
--- a/pkg/resource/clusterpropagationpolicy/detail.go
+++ b/pkg/resource/clusterpropagationpolicy/detail.go
@@ -27,6 +27,7 @@ import (
 )
 
 // ClusterPropagationPolicyDetail contains clusterPropagationPolicy details.
 type ClusterPropagationPolicyDetail struct {
 	// Extends list item structure.
 	ClusterPropagationPolicy `json:",inline"`
@@ -35,7 +36,7 @@ type ClusterPropagationPolicyDetail struct {
 	Errors []error `json:"errors"`
 }
 
 // GetClusterPropagationPolicyDetail gets clusterPropagationPolicy details.
 func GetClusterPropagationPolicyDetail(client karmadaclientset.Interface, name string) (*ClusterPropagationPolicyDetail, error) {
 	propagationpolicyData, err := client.PolicyV1alpha1().ClusterPropagationPolicies().Get(context.TODO(), name, metaV1.GetOptions{})
 	if err != nil {
@@ -51,6 +52,8 @@ func GetClusterPropagationPolicyDetail(client karmadaclientset.Interface, name s
 	return &propagationpolicy, nil
 }
 
+// toPropagationPolicyDetail converts a ClusterPropagationPolicy into a ClusterPropagationPolicyDetail,
+// attaching any non-critical errors.
 func toPropagationPolicyDetail(clusterPropagationpolicy *v1alpha1.ClusterPropagationPolicy, nonCriticalErrors []error) ClusterPropagationPolicyDetail {
 	return ClusterPropagationPolicyDetail{
 		ClusterPropagationPolicy: toClusterPropagationPolicy(clusterPropagationpolicy),
diff --git a/pkg/resource/clusterpropagationpolicy/list.go b/pkg/resource/clusterpropagationpolicy/list.go
index 71a33fb9..2f46f5ee 100644
--- a/pkg/resource/clusterpropagationpolicy/list.go
+++ b/pkg/resource/clusterpropagationpolicy/list.go
@@ -29,6 +29,7 @@ import (
 )
 
-// ClusterPropagationPolicyList contains a list of propagation in the karmada control-plane.
+// ClusterPropagationPolicyList contains a list of cluster propagation policies in the karmada control-plane.
 type ClusterPropagationPolicyList struct {
 	ListMeta types.ListMeta `json:"listMeta"`
 
@@ -39,7 +40,7 @@ type ClusterPropagationPolicyList struct {
 	Errors []error `json:"errors"`
 }
 
 // ClusterPropagationPolicy represents a cluster propagation policy.
 type ClusterPropagationPolicy struct {
 	ObjectMeta types.ObjectMeta `json:"objectMeta"`
 	TypeMeta   types.TypeMeta   `json:"typeMeta"`
@@ -48,7 +49,7 @@ type ClusterPropagationPolicy struct {
 	ResourceSelectors []v1alpha1.ResourceSelector `json:"resourceSelectors"`
 }
 
-// GetClusterPropagationPolicyList returns a list of all propagations in the karmada control-plance.
+// GetClusterPropagationPolicyList returns a list of all cluster propagation policies in the karmada control-plane.
 func GetClusterPropagationPolicyList(client karmadaclientset.Interface, dsQuery *dataselect.DataSelectQuery) (*ClusterPropagationPolicyList, error) {
 	clusterPropagationPolicies, err := client.PolicyV1alpha1().ClusterPropagationPolicies().List(context.TODO(), helpers.ListEverything)
 	nonCriticalErrors, criticalError := errors.ExtractErrors(err)
@@ -59,6 +60,7 @@ func GetClusterPropagationPolicyList(client karmadaclientset.Interface, dsQuery
 	return toClusterPropagationPolicyList(clusterPropagationPolicies.Items, nonCriticalErrors, dsQuery), nil
 }
 
+// toClusterPropagationPolicyList converts a list of v1alpha1.ClusterPropagationPolicy into a ClusterPropagationPolicyList.
 func toClusterPropagationPolicyList(clusterPropagationPolicies []v1alpha1.ClusterPropagationPolicy, nonCriticalErrors []error, dsQuery *dataselect.DataSelectQuery) *ClusterPropagationPolicyList {
 	propagationpolicyList := &ClusterPropagationPolicyList{
 		ClusterPropagationPolicies: make([]ClusterPropagationPolicy, 0),
@@ -76,6 +78,7 @@ func toClusterPropagationPolicyList(clusterPropagationPolicies []v1alpha1.Cluste
 	return propagationpolicyList
 }
 
+// toClusterPropagationPolicy converts a v1alpha1.ClusterPropagationPolicy into a ClusterPropagationPolicy.
 func toClusterPropagationPolicy(propagationpolicy *v1alpha1.ClusterPropagationPolicy) ClusterPropagationPolicy {
 	return ClusterPropagationPolicy{
 		ObjectMeta: types.NewObjectMeta(propagationpolicy.ObjectMeta),
diff --git a/pkg/resource/common/namespace.go b/pkg/resource/common/namespace.go
index c6bdd20f..76bd7ac9 100644
--- a/pkg/resource/common/namespace.go
+++ b/pkg/resource/common/namespace.go
@@ -23,39 +23,50 @@ import api "k8s.io/api/core/v1"
 // 3. More than one namespace selected: resources from all namespaces are queried and then
 //    filtered here.
 type NamespaceQuery struct {
+	// namespaces is the list of selected namespaces.
 	namespaces []string
 }
 
 // NewSameNamespaceQuery creates new namespace query that queries single namespace.
 func NewSameNamespaceQuery(namespace string) *NamespaceQuery {
 	return &NamespaceQuery{[]string{namespace}}
 }
 
 // NewNamespaceQuery creates new query for given namespaces.
 func NewNamespaceQuery(namespaces []string) *NamespaceQuery {
 	return &NamespaceQuery{namespaces}
 }
 
 // ToRequestParam returns K8s API namespace query for list of objects from this namespaces.
 // This is an optimization to query for single namespace if one was selected and for all
 // namespaces otherwise.
 func (n *NamespaceQuery) ToRequestParam() string {
+	// Query the single selected namespace directly.
 	if len(n.namespaces) == 1 {
 		return n.namespaces[0]
 	}
+	// Otherwise query all namespaces.
 	return api.NamespaceAll
 }
 
 // Matches returns true when the given namespace matches this query.
 func (n *NamespaceQuery) Matches(namespace string) bool {
+	// An empty query matches every namespace.
 	if len(n.namespaces) == 0 {
 		return true
 	}
 	for _, queryNamespace := range n.namespaces {
 		if namespace == queryNamespace {
 			return true
 		}
 	}
 	return false
 }
diff --git a/pkg/resource/namespace/list.go b/pkg/resource/namespace/list.go
index 2a667472..26cf0415 100644
--- a/pkg/resource/namespace/list.go
+++ b/pkg/resource/namespace/list.go
@@ -29,29 +29,29 @@ import (
 	"github.com/karmada-io/dashboard/pkg/dataselect"
 )
 
 // NamespaceList contains a list of namespaces in the cluster.
 type NamespaceList struct {
 	ListMeta types.ListMeta `json:"listMeta"`
 
 	// Unordered list of Namespaces.
 	Namespaces []Namespace `json:"namespaces"`
 
 	// List of non-critical errors, that occurred during resource retrieval.
 	Errors []error `json:"errors"`
 }
 
 // Namespace is a presentation layer view of Kubernetes namespaces. This means it is namespace plus
 // additional augmented data we can get from other sources.
 type Namespace struct {
 	ObjectMeta types.ObjectMeta `json:"objectMeta"`
 	TypeMeta   types.TypeMeta   `json:"typeMeta"`
 
 	// Phase is the current lifecycle phase of the namespace.
 	Phase v1.NamespacePhase `json:"phase"`
 
 	SkipAutoPropagation bool `json:"skipAutoPropagation"`
 }
 
 // GetNamespaceList returns a list of all namespaces in the cluster.
 func GetNamespaceList(client kubernetes.Interface, dsQuery *dataselect.DataSelectQuery) (*NamespaceList, error) {
 	log.Println("Getting list of namespaces")
 	namespaces, err := client.CoreV1().Namespaces().List(context.TODO(), helpers.ListEverything)
@@ -64,6 +64,7 @@ func GetNamespaceList(client kubernetes.Interface, dsQuery *dataselect.DataSelec
 	return toNamespaceList(namespaces.Items, nonCriticalErrors, dsQuery), nil
 }
 
+// toNamespaceList converts a list of v1.Namespace into a NamespaceList.
 func toNamespaceList(namespaces []v1.Namespace, nonCriticalErrors []error, dsQuery *dataselect.DataSelectQuery) *NamespaceList {
 	namespaceList := &NamespaceList{
 		Namespaces: make([]Namespace, 0),
@@ -82,6 +83,7 @@ func toNamespaceList(namespaces []v1.Namespace, nonCriticalErrors []error, dsQue
 	return namespaceList
 }
 
+// toNamespace converts a v1.Namespace into its presentation-layer Namespace view.
 func toNamespace(namespace v1.Namespace) Namespace {
 	_, exist := namespace.Labels[skipAutoPropagationLable]
diff --git a/pkg/resource/overridepolicy/common.go b/pkg/resource/overridepolicy/common.go
index a747fb50..35c92f13 100644
--- a/pkg/resource/overridepolicy/common.go
+++ b/pkg/resource/overridepolicy/common.go
@@ -23,9 +23,10 @@ import (
 )
 
 // OverridePolicyCell represents an OverridePolicy that implements the DataCell interface.
 type OverridePolicyCell v1alpha1.OverridePolicy
 
 // GetProperty returns a comparable value for a specified property name.
 func (c OverridePolicyCell) GetProperty(name dataselect.PropertyName) dataselect.ComparableValue {
 	switch name {
 	case dataselect.NameProperty:
@@ -41,6 +42,7 @@ func (c OverridePolicyCell) GetProperty(name dataselect.PropertyName) dataselect
 	}
 }
 
+// toCells converts a list of v1alpha1.OverridePolicy into a list of dataselect.DataCell.
 func toCells(std []v1alpha1.OverridePolicy) []dataselect.DataCell {
 	cells := make([]dataselect.DataCell, len(std))
 	for i := range std {
@@ -49,6 +51,7 @@ func toCells(std []v1alpha1.OverridePolicy) []dataselect.DataCell {
 	return cells
 }
 
+// fromCells converts a list of dataselect.DataCell back into a list of v1alpha1.OverridePolicy.
 func fromCells(cells []dataselect.DataCell) []v1alpha1.OverridePolicy {
 	std := make([]v1alpha1.OverridePolicy, len(cells))
 	for i := range std {
diff --git a/pkg/resource/overridepolicy/detail.go b/pkg/resource/overridepolicy/detail.go
index 41a5c59e..834fa71d 100644
--- a/pkg/resource/overridepolicy/detail.go
+++ b/pkg/resource/overridepolicy/detail.go
@@ -28,6 +28,8 @@ import (
 )
 
 // OverridePolicyDetail is a presentation layer view of Karmada OverridePolicy resource. This means it is OverridePolicy plus
 // additional augmented data we can get from other sources.
 type OverridePolicyDetail struct {
 	// Extends list item structure.
 	OverridePolicy `json:",inline"`
@@ -36,7 +38,7 @@ type OverridePolicyDetail struct {
 	Errors []error `json:"errors"`
 }
 
-// GetOverridePolicyDetail gets Overridepolicy details.
+// GetOverridePolicyDetail gets OverridePolicy details.
 func GetOverridePolicyDetail(client karmadaclientset.Interface, namespace, name string) (*OverridePolicyDetail, error) {
 	OverridepolicyData, err := client.PolicyV1alpha1().OverridePolicies(namespace).Get(context.TODO(), name, metaV1.GetOptions{})
 	if err != nil {
@@ -52,6 +54,8 @@ func GetOverridePolicyDetail(client karmadaclientset.Interface, namespace, name
 	return &Overridepolicy, nil
 }
 
+// toOverridePolicyDetail converts an OverridePolicy into an OverridePolicyDetail,
+// attaching any non-critical errors.
 func toOverridePolicyDetail(Overridepolicy *v1alpha1.OverridePolicy, nonCriticalErrors []error) OverridePolicyDetail {
 	return OverridePolicyDetail{
 		OverridePolicy: toOverridePolicy(Overridepolicy),
diff --git a/pkg/resource/overridepolicy/list.go b/pkg/resource/overridepolicy/list.go
index f8468500..097c6efb 100644
--- a/pkg/resource/overridepolicy/list.go
+++ b/pkg/resource/overridepolicy/list.go
@@ -31,27 +31,28 @@ import (
 	"github.com/karmada-io/dashboard/pkg/resource/common"
 )
 
-// OverridePolicyList contains a list of propagation in the karmada control-plance.
+// OverridePolicyList contains a list of override policies in the karmada control-plane.
 type OverridePolicyList struct {
 	ListMeta types.ListMeta `json:"listMeta"`
 
-	// Unordered list of OverridePolicys.
+	// Unordered list of override policies.
 	OverridePolicys []OverridePolicy `json:"overridepolicys"`
 
 	// List of non-critical errors, that occurred during resource retrieval.
 	Errors []error `json:"errors"`
 }
 
-// OverridePolicy contains information about a single override.
+// OverridePolicy contains information about a single override policy.
 type OverridePolicy struct {
 	ObjectMeta types.ObjectMeta `json:"objectMeta"`
 	TypeMeta   types.TypeMeta   `json:"typeMeta"`
 
-	// Override specificed data
+	// Override-specific data
 	ResourceSelectors []v1alpha1.ResourceSelector `json:"resourceSelectors"`
 	OverrideRules     []v1alpha1.RuleWithCluster  `json:"overrideRules"`
 }
 
 // GetOverridePolicyList returns a list of all override policies in the Karmada control-plane.
 func GetOverridePolicyList(client karmadaclientset.Interface, k8sClient kubernetes.Interface, nsQuery *common.NamespaceQuery, dsQuery *dataselect.DataSelectQuery) (*OverridePolicyList, error) {
 	log.Println("Getting list of overridepolicy")
 	overridePolicies, err := client.PolicyV1alpha1().OverridePolicies(nsQuery.ToRequestParam()).List(context.TODO(), helpers.ListEverything)
@@ -63,6 +64,7 @@ func GetOverridePolicyList(client karmadaclientset.Interface, k8sClient kubernet
 	return toOverridePolicyList(k8sClient, overridePolicies.Items, nonCriticalErrors, dsQuery), nil
 }
 
+// toOverridePolicyList converts a list of v1alpha1.OverridePolicy into an OverridePolicyList.
 func toOverridePolicyList(_ kubernetes.Interface, overridepolicies []v1alpha1.OverridePolicy, nonCriticalErrors []error, dsQuery *dataselect.DataSelectQuery) *OverridePolicyList {
 	overridepolicyList := &OverridePolicyList{
 		OverridePolicys: make([]OverridePolicy, 0),
@@ -80,6 +82,7 @@ func toOverridePolicyList(_ kubernetes.Interface, overridepolicies []v1alpha1.Ov
 	return overridepolicyList
 }
 
+// toOverridePolicy converts a v1alpha1.OverridePolicy into an OverridePolicy.
 func toOverridePolicy(overridepolicy *v1alpha1.OverridePolicy) OverridePolicy {
 	return OverridePolicy{
 		ObjectMeta: types.NewObjectMeta(overridepolicy.ObjectMeta),
diff --git a/pkg/resource/propagationpolicy/common.go b/pkg/resource/propagationpolicy/common.go
index 5a829d1b..d831d017 100644
--- a/pkg/resource/propagationpolicy/common.go
+++ b/pkg/resource/propagationpolicy/common.go
@@ -23,9 +23,11 @@ import (
 )
 
 // PropagationPolicyCell is a wrapper around PropagationPolicy type
+// so that PropagationPolicy objects can be stored and processed by dataselect.
 type PropagationPolicyCell v1alpha1.PropagationPolicy
 
 // GetProperty returns the given property of the PropagationPolicy.
 func (c PropagationPolicyCell) GetProperty(name dataselect.PropertyName) dataselect.ComparableValue {
 	switch name {
 	case dataselect.NameProperty:
@@ -42,6 +44,7 @@ func (c PropagationPolicyCell) GetProperty(name dataselect.PropertyName) datasel
 	}
 }
 
+// toCells converts a list of v1alpha1.PropagationPolicy into a list of dataselect.DataCell.
 func toCells(std []v1alpha1.PropagationPolicy) []dataselect.DataCell {
 	cells := make([]dataselect.DataCell, len(std))
 	for i := range std {
@@ -50,6 +53,7 @@ func toCells(std []v1alpha1.PropagationPolicy) []dataselect.DataCell {
 	return cells
 }
 
+// fromCells converts a list of dataselect.DataCell back into a list of v1alpha1.PropagationPolicy.
 func fromCells(cells []dataselect.DataCell) []v1alpha1.PropagationPolicy {
 	std := make([]v1alpha1.PropagationPolicy, len(cells))
 	for i := range std {
diff --git a/pkg/resource/propagationpolicy/detail.go b/pkg/resource/propagationpolicy/detail.go
index 66c4e6b7..88b92dbb 100644
--- a/pkg/resource/propagationpolicy/detail.go
+++ b/pkg/resource/propagationpolicy/detail.go
@@ -28,6 +28,8 @@ import (
 )
 
 // PropagationPolicyDetail is a presentation layer view of Karmada PropagationPolicy resource. This means it is PropagationPolicy plus
 // additional augmented data we can get from other sources.
 type PropagationPolicyDetail struct {
 	// Extends list item structure.
 	PropagationPolicy `json:",inline"`
@@ -37,6 +39,7 @@ type PropagationPolicyDetail struct {
 }
 
 // GetPropagationPolicyDetail gets propagationpolicy details.
 func GetPropagationPolicyDetail(client karmadaclientset.Interface, namespace, name string) (*PropagationPolicyDetail, error) {
 	propagationpolicyData, err := client.PolicyV1alpha1().PropagationPolicies(namespace).Get(context.TODO(), name, metaV1.GetOptions{})
 	if err != nil {
@@ -52,6 +55,8 @@ func GetPropagationPolicyDetail(client karmadaclientset.Interface, namespace, na
 	return &propagationpolicy, nil
 }
 
+// toPropagationPolicyDetail converts a PropagationPolicy into a PropagationPolicyDetail,
+// attaching any non-critical errors.
 func toPropagationPolicyDetail(propagationpolicy *v1alpha1.PropagationPolicy, nonCriticalErrors []error) PropagationPolicyDetail {
 	return PropagationPolicyDetail{
 		PropagationPolicy: toPropagationPolicy(propagationpolicy),
diff --git a/pkg/resource/propagationpolicy/list.go b/pkg/resource/propagationpolicy/list.go
index 7c475e1a..2f65b2f9 100644
--- a/pkg/resource/propagationpolicy/list.go
+++ b/pkg/resource/propagationpolicy/list.go
@@ -35,27 +35,28 @@ import (
 )
 
-// PropagationPolicyList contains a list of propagation in the karmada control-plance.
+// PropagationPolicyList contains a list of propagation policies in the karmada control-plane.
 type PropagationPolicyList struct {
 	ListMeta types.ListMeta `json:"listMeta"`
 
-	// Unordered list of PropagationPolicys.
+	// Unordered list of propagation policies.
 	PropagationPolicys []PropagationPolicy `json:"propagationpolicys"`
 
 	// List of non-critical errors, that occurred during resource retrieval.
 	Errors []error `json:"errors"`
 }
 
-// PropagationPolicy contains information about a single propagation.
+// PropagationPolicy contains information about a single propagation policy.
 type PropagationPolicy struct {
 	ObjectMeta types.ObjectMeta `json:"objectMeta"`
 	TypeMeta   types.TypeMeta   `json:"typeMeta"`
 
-	// propagation specificed data
+	// Propagation-specific data
 	SchedulerName    string                    `json:"schedulerName"`
 	ClusterAffinity  *v1alpha1.ClusterAffinity `json:"clusterAffinity"`
 	RelatedResources []string                  `json:"relatedResources"`
 }
 
-// GetPropagationPolicyList returns a list of all propagations in the karmada control-plance.
+// GetPropagationPolicyList returns a list of all propagation policies in the karmada control-plane.
 func GetPropagationPolicyList(client karmadaclientset.Interface, k8sClient kubernetes.Interface, nsQuery *common.NamespaceQuery, dsQuery *dataselect.DataSelectQuery) (*PropagationPolicyList, error) {
 	log.Println("Getting list of namespaces")
 	propagationpolicies, err := client.PolicyV1alpha1().PropagationPolicies(nsQuery.ToRequestParam()).List(context.TODO(), helpers.ListEverything)
@@ -67,6 +68,7 @@ func GetPropagationPolicyList(client karmadaclientset.Interface, k8sClient kuber
 	return toPropagationPolicyList(k8sClient, propagationpolicies.Items, nonCriticalErrors, dsQuery), nil
 }
 
+// toPropagationPolicyList converts a list of v1alpha1.PropagationPolicy into a PropagationPolicyList.
 func toPropagationPolicyList(_ kubernetes.Interface, propagationpolicies []v1alpha1.PropagationPolicy, nonCriticalErrors []error, dsQuery *dataselect.DataSelectQuery) *PropagationPolicyList {
 	propagationpolicyList := &PropagationPolicyList{
 		PropagationPolicys: make([]PropagationPolicy, 0),
@@ -101,6 +103,7 @@ func toPropagationPolicyList(_ kubernetes.Interface, propagationpolicies []v1alp
 	return propagationpolicyList
 }
 
+// toPropagationPolicy converts a v1alpha1.PropagationPolicy into a PropagationPolicy.
 func toPropagationPolicy(propagationpolicy *v1alpha1.PropagationPolicy) PropagationPolicy {
 	return PropagationPolicy{
 		ObjectMeta: types.NewObjectMeta(propagationpolicy.ObjectMeta),
diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml
new file mode 100644
index 00000000..288cd144
--- /dev/null
+++ b/pnpm-lock.yaml
@@ -0,0 +1,557 @@
+lockfileVersion: '9.0'
+
+settings:
+  autoInstallPeers: true
+  excludeLinksFromLockfile: false
+
+importers:
+
+  .:
+    dependencies:
+      '@antv/g6':
+        specifier: 4.8.23
+        version: 4.8.23
+
+packages:
+
+  '@ant-design/colors@4.0.5':
+    resolution: {integrity: sha512-3mnuX2prnWOWvpFTS2WH2LoouWlOgtnIpc6IarWN6GOzzLF8dW/U8UctuvIPhoboETehZfJ61XP+CGakBEPJ3Q==}
+
+  '@antv/algorithm@0.1.26':
+    resolution: {integrity: sha512-DVhcFSQ8YQnMNW34Mk8BSsfc61iC1sAnmcfYoXTAshYHuU50p/6b7x3QYaGctDNKWGvi1ub7mPcSY0bK+aN0qg==}
+
+  '@antv/dom-util@2.0.4':
+    resolution: {integrity: sha512-2shXUl504fKwt82T3GkuT4Uoc6p9qjCKnJ8gXGLSW4T1W37dqf9AV28aCfoVPHp2BUXpSsB+PAJX2rG/jLHsLQ==}
+
+  '@antv/event-emitter@0.1.3':
+    resolution: {integrity: sha512-4ddpsiHN9Pd4UIlWuKVK1C4IiZIdbwQvy9i7DUSI3xNJ89FPUFt8lxDYj8GzzfdllV0NkJTRxnG+FvLk0llidg==}
+
+  '@antv/g-base@0.5.16':
+    resolution: {integrity: sha512-jP06wggTubDPHXoKwFg3/f1lyxBX9ywwN3E/HG74Nd7DXqOXQis8tsIWW+O6dS/h9vyuXLd1/wDWkMMm3ZzXdg==}
+
+  '@antv/g-canvas@0.5.17':
+    resolution: {integrity: sha512-sXYJMWTOlb/Ycb6sTKu00LcJqInXJY4t99+kSM40u2OfqrXYmaXDjHR7D2V0roMkbK/QWiWS9UnEidCR1VtMOA==}
+
+  '@antv/g-math@0.1.9':
+    resolution: {integrity: sha512-KHMSfPfZ5XHM1PZnG42Q2gxXfOitYveNTA7L61lR6mhZ8Y/aExsYmHqaKBsSarU0z+6WLrl9C07PQJZaw0uljQ==}
+
+  '@antv/g-svg@0.5.7':
+    resolution: {integrity: sha512-jUbWoPgr4YNsOat2Y/rGAouNQYGpw4R0cvlN0YafwOyacFFYy2zC8RslNd6KkPhhR3XHNSqJOuCYZj/YmLUwYw==}
+
+  '@antv/g-webgpu-core@0.7.2':
+    resolution: {integrity: sha512-xUMmop7f3Rs34zFYKXLqHhDR1CQTeDl/7vI7Sn3X/73BqJc3X3HIIRvm83Fg2CjVACaOzw4WeLRXNaOCp9fz9w==}
+
+  '@antv/g-webgpu-engine@0.7.2':
+    resolution: {integrity: sha512-lx8Y93IW2cnJvdoDRKyMmTdYqSC1pOmF0nyG3PGGyA0NI9vBYVgO0KTF6hkyWjdTWVq7XDZyf/h8CJridLh3lg==}
+
+  '@antv/g-webgpu@0.7.2':
+    resolution: {integrity: sha512-kw+oYGsdvj5qeUfy5DPb/jztZBV+2fmqBd3Vv8NlKatfBmv8AirYX/CCW74AUSdWm99rEiLyxFB1VdRZ6b/wnQ==}
+
+  '@antv/g6-core@0.8.23':
+    resolution: {integrity: 
sha512-JWdnba5Bx4/hLhbIQeyvdgh68SDYZisveukuBifxLKODCNJNKTopmWf1w6tU+RxAT2k5ByXkTGWQE1IkIL8O+Q==} + + '@antv/g6-core@0.8.24': + resolution: {integrity: sha512-rgI3dArAD8uoSz2+skS4ctN4x/Of33ivTIKaEYYvClxgkLZWVz9zvocy+5AWcVPBHZsAXkZcdh9zndIoWY/33A==} + + '@antv/g6-element@0.8.23': + resolution: {integrity: sha512-KdJOiu4D7UExsYjKOJUcd7YCD/gCfxqHOlS01zkyOqsaindWVLdshBAZWXc2zgzVwHS/fadxwUI+DcllsRkH0g==} + + '@antv/g6-pc@0.8.23': + resolution: {integrity: sha512-8H5n1U8T4pyBcoaEKB8g4TRKycHtONSA+qOeFMq7XIDh1DCn0tUF1uLvwj096Zp+/bUXtAfaRvg+n1KKyCVZ0w==} + + '@antv/g6-plugin@0.8.23': + resolution: {integrity: sha512-DwhSuUc0a0foIM4nrhXR/+ooZafkVve0IEErldhsygKWLDSz/c9HRLON66OEdzQX7Ed1uE0SMcBUsIDe+wPQrw==} + + '@antv/g6@4.8.23': + resolution: {integrity: sha512-tsnJzlZCiOKvwAULGom6ppARutRmoAgV1wZzkOmDRm8ZdokUkpEYfb3faV6802VMs82DLP0zZ0KavapoK1q8hQ==} + + '@antv/graphlib@1.2.0': + resolution: {integrity: sha512-hhJOMThec51nU4Fe5p/viLlNIL71uDEgYFzKPajWjr2715SFG1HAgiP6AVylIeqBcAZ04u3Lw7usjl/TuI5RuQ==} + + '@antv/hierarchy@0.6.14': + resolution: {integrity: sha512-V3uknf7bhynOqQDw2sg+9r9DwZ9pc6k/EcqyTFdfXB1+ydr7urisP0MipIuimucvQKN+Qkd+d6w601r1UIroqQ==} + + '@antv/layout@0.3.25': + resolution: {integrity: sha512-d29Aw1PXoAavMRZy7iTB9L5rMBeChFEX0BJ9ELP4TI35ySdCu07YbmPo9ju9OH/6sG2/NB3o85Ayxrre3iwX/g==} + + '@antv/matrix-util@3.0.4': + resolution: {integrity: sha512-BAPyu6dUliHcQ7fm9hZSGKqkwcjEDVLVAstlHULLvcMZvANHeLXgHEgV7JqcAV/GIhIz8aZChIlzM1ZboiXpYQ==} + + '@antv/matrix-util@3.1.0-beta.3': + resolution: {integrity: sha512-W2R6Za3A6CmG51Y/4jZUM/tFgYSq7vTqJL1VD9dKrvwxS4sE0ZcXINtkp55CdyBwJ6Cwm8pfoRpnD4FnHahN0A==} + + '@antv/path-util@2.0.15': + resolution: {integrity: sha512-R2VLZ5C8PLPtr3VciNyxtjKqJ0XlANzpFb5sE9GE61UQqSRuSVSzIakMxjEPrpqbgc+s+y8i+fmc89Snu7qbNw==} + + '@antv/scale@0.3.18': + resolution: {integrity: sha512-GHwE6Lo7S/Q5fgaLPaCsW+CH+3zl4aXpnN1skOiEY0Ue9/u+s2EySv6aDXYkAqs//i0uilMDD/0/4n8caX9U9w==} + + '@antv/util@2.0.17': + resolution: {integrity: sha512-o6I9hi5CIUvLGDhth0RxNSFDRwXeywmt6ExR4+RmVAzIi48ps6HUy+svxOCayvrPBN37uE6TAc2KDofRo0nK9Q==} + + '@antv/util@3.3.10': + resolution: {integrity: sha512-basGML3DFA3O87INnzvDStjzS+n0JLEhRnRsDzP9keiXz8gT1z/fTdmJAZFOzMMWxy+HKbi7NbSt0+8vz/OsBQ==} + + '@babel/runtime@7.27.1': + resolution: {integrity: sha512-1x3D2xEk2fRo3PAhwQwu5UubzgiVWSXTBfWpVd2Mx2AzRqJuDJCsgaDVZ7HB5iGzDW1Hl1sWN2mFyKjmR9uAog==} + engines: {node: '>=6.9.0'} + + '@probe.gl/env@3.6.0': + resolution: {integrity: sha512-4tTZYUg/8BICC3Yyb9rOeoKeijKbZHRXBEKObrfPmX4sQmYB15ZOUpoVBhAyJkOYVAM8EkPci6Uw5dLCwx2BEQ==} + + '@probe.gl/log@3.6.0': + resolution: {integrity: sha512-hjpyenpEvOdowgZ1qMeCJxfRD4JkKdlXz0RC14m42Un62NtOT+GpWyKA4LssT0+xyLULCByRAtG2fzZorpIAcA==} + + '@probe.gl/stats@3.6.0': + resolution: {integrity: sha512-JdALQXB44OP4kUBN/UrQgzbJe4qokbVF4Y8lkIA8iVCFnjVowWIgkD/z/0QO65yELT54tTrtepw1jScjKB+rhQ==} + + '@types/d3-timer@2.0.3': + resolution: {integrity: sha512-jhAJzaanK5LqyLQ50jJNIrB8fjL9gwWZTgYjevPvkDLMU+kTAZkYsobI59nYoeSrH1PucuyJEi247Pb90t6XUg==} + + color-convert@1.9.3: + resolution: {integrity: sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==} + + color-name@1.1.3: + resolution: {integrity: sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw==} + + color-name@1.1.4: + resolution: {integrity: sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==} + + color-string@1.9.1: + resolution: {integrity: 
sha512-shrVawQFojnZv6xM40anx4CkoDP+fZsw/ZerEMsW/pyzsRbElpsL/DBVW7q3ExxwusdNXI3lXpuhEZkzs8p5Eg==} + + color@3.2.1: + resolution: {integrity: sha512-aBl7dZI9ENN6fUGC7mWpMTPNHmWUSNan9tuWN6ahh5ZLNk9baLJOnSMlrQkHcrfFgz2/RigjUVAjdx36VcemKA==} + + csstype@3.1.3: + resolution: {integrity: sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw==} + + d3-color@1.4.1: + resolution: {integrity: sha512-p2sTHSLCJI2QKunbGb7ocOh7DgTAn8IrLx21QRc/BSnodXM4sv6aLQlnfpvehFMLZEfBc6g9pH9SWQccFYfJ9Q==} + + d3-dispatch@2.0.0: + resolution: {integrity: sha512-S/m2VsXI7gAti2pBoLClFFTMOO1HTtT0j99AuXLoGFKO6deHDdnv6ZGTxSTTUTgO1zVcv82fCOtDjYK4EECmWA==} + + d3-ease@1.0.7: + resolution: {integrity: sha512-lx14ZPYkhNx0s/2HX5sLFUI3mbasHjSSpwO/KaaNACweVwxUruKyWVcb293wMv1RqTPZyZ8kSZ2NogUZNcLOFQ==} + + d3-force@2.1.1: + resolution: {integrity: sha512-nAuHEzBqMvpFVMf9OX75d00OxvOXdxY+xECIXjW6Gv8BRrXu6gAWbv/9XKrvfJ5i5DCokDW7RYE50LRoK092ew==} + + d3-interpolate@3.0.1: + resolution: {integrity: sha512-3bYs1rOD33uo8aqJfKP3JWPAibgw8Zm2+L9vBKEHJ2Rg+viTR7o5Mmv5mZcieN+FRYaAOWX5SJATX6k1PWz72g==} + engines: {node: '>=12'} + + d3-quadtree@2.0.0: + resolution: {integrity: sha512-b0Ed2t1UUalJpc3qXzKi+cPGxeXRr4KU9YSlocN74aTzp6R/Ud43t79yLLqxHRWZfsvWXmbDWPpoENK1K539xw==} + + d3-timer@1.0.10: + resolution: {integrity: sha512-B1JDm0XDaQC+uvo4DT79H0XmBskgS3l6Ve+1SBCfxgmtIb1AVrPIoqd+nPSv+loMX8szQ0sVUhGngL7D5QPiXw==} + + d3-timer@2.0.0: + resolution: {integrity: sha512-TO4VLh0/420Y/9dO3+f9abDEFYeCUr2WZRlxJvbp4HPTQcSylXNiL6yZa9FIUvV1yRiFufl1bszTCLDqv9PWNA==} + + dagre-compound@0.0.11: + resolution: {integrity: sha512-UrSgRP9LtOZCYb9e5doolZXpc7xayyszgyOs7uakTK4n4KsLegLVTRRtq01GpQd/iZjYw5fWMapx9ed+c80MAQ==} + engines: {node: '>=6.0.0'} + peerDependencies: + dagre: ^0.8.5 + + dagre@0.8.5: + resolution: {integrity: sha512-/aTqmnRta7x7MCCpExk7HQL2O4owCT2h8NT//9I1OQ9vt29Pa0BzSAkR5lwFUcQ7491yVi/3CXU9jQ5o0Mn2Sw==} + + detect-browser@5.3.0: + resolution: {integrity: sha512-53rsFbGdwMwlF7qvCt0ypLM5V5/Mbl0szB7GPN8y9NCcbknYOeVVXdrXEq+90IwAfrrzt6Hd+u2E2ntakICU8w==} + + eventemitter3@4.0.7: + resolution: {integrity: sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==} + + fast-deep-equal@3.1.3: + resolution: {integrity: sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==} + + fecha@4.2.3: + resolution: {integrity: sha512-OP2IUU6HeYKJi3i0z4A19kHMQoLVs4Hc+DPqqxI2h/DPZHTm/vjsfC6P0b4jCMy14XizLBqvndQ+UilD7707Jw==} + + gl-matrix@3.4.3: + resolution: {integrity: sha512-wcCp8vu8FT22BnvKVPjXa/ICBWRq/zjFfdofZy1WSpQZpphblv12/bOQLBC1rMM7SGOFS9ltVmKOHil5+Ml7gA==} + + gl-vec2@1.3.0: + resolution: {integrity: sha512-YiqaAuNsheWmUV0Sa8k94kBB0D6RWjwZztyO+trEYS8KzJ6OQB/4686gdrf59wld4hHFIvaxynO3nRxpk1Ij/A==} + + graphlib@2.1.8: + resolution: {integrity: sha512-jcLLfkpoVGmH7/InMC/1hIvOPSUh38oJtGhvrOFGzioE1DZ+0YW16RgmOJhHiuWTvGiJQ9Z1Ik43JvkRPRvE+A==} + + insert-css@2.0.0: + resolution: {integrity: sha512-xGq5ISgcUP5cvGkS2MMFLtPDBtrtQPSFfC6gA6U8wHKqfjTIMZLZNxOItQnoSjdOzlXOLU/yD32RKC4SvjNbtA==} + + is-any-array@2.0.1: + resolution: {integrity: sha512-UtilS7hLRu++wb/WBAw9bNuP1Eg04Ivn1vERJck8zJthEvXCBEBpGR/33u/xLKWEQf95803oalHrVDptcAvFdQ==} + + is-arrayish@0.3.2: + resolution: {integrity: sha512-eVRqCvVlZbuw3GrM63ovNSNAeA1K16kaR/LRY/92w0zxQ5/1YzwblUX652i4Xs9RwAGjW9d9y6X88t8OaAJfWQ==} + + lodash@4.17.21: + resolution: {integrity: sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==} + + ml-array-max@1.2.4: + resolution: 
{integrity: sha512-BlEeg80jI0tW6WaPyGxf5Sa4sqvcyY6lbSn5Vcv44lp1I2GR6AWojfUvLnGTNsIXrZ8uqWmo8VcG1WpkI2ONMQ==} + + ml-array-min@1.2.3: + resolution: {integrity: sha512-VcZ5f3VZ1iihtrGvgfh/q0XlMobG6GQ8FsNyQXD3T+IlstDv85g8kfV0xUG1QPRO/t21aukaJowDzMTc7j5V6Q==} + + ml-array-rescale@1.3.7: + resolution: {integrity: sha512-48NGChTouvEo9KBctDfHC3udWnQKNKEWN0ziELvY3KG25GR5cA8K8wNVzracsqSW1QEkAXjTNx+ycgAv06/1mQ==} + + ml-matrix@6.12.1: + resolution: {integrity: sha512-TJ+8eOFdp+INvzR4zAuwBQJznDUfktMtOB6g/hUcGh3rcyjxbz4Te57Pgri8Q9bhSQ7Zys4IYOGhFdnlgeB6Lw==} + + ml-matrix@6.5.0: + resolution: {integrity: sha512-sms732Dge+rs5dU4mnjE0oqLWm1WujvR2fr38LgUHRG2cjXjWlO3WJupLYaSz3++2iYr0UrGDK72OAivr3J8dg==} + + probe.gl@3.6.0: + resolution: {integrity: sha512-19JydJWI7+DtR4feV+pu4Mn1I5TAc0xojuxVgZdXIyfmTLfUaFnk4OloWK1bKbPtkgGKLr2lnbnCXmpZEcEp9g==} + + regl@1.7.0: + resolution: {integrity: sha512-bEAtp/qrtKucxXSJkD4ebopFZYP0q1+3Vb2WECWv/T8yQEgKxDxJ7ztO285tAMaYZVR6mM1GgI6CCn8FROtL1w==} + + simple-swizzle@0.2.2: + resolution: {integrity: sha512-JA//kQgZtbuY83m+xT+tXJkmJncGMTFT+C+g2h2R9uxkYIrE2yy9sgmcLhCnw57/WSD+Eh3J97FPEDFnbXnDUg==} + + tinycolor2@1.6.0: + resolution: {integrity: sha512-XPaBkWQJdsf3pLKJV9p4qN/S+fm2Oj8AIPo1BTUhg5oxkvm9+SVEGFdhyOz7tTdUTfvxMiAs4sp6/eZO2Ew+pw==} + + tslib@2.8.1: + resolution: {integrity: sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==} + +snapshots: + + '@ant-design/colors@4.0.5': + dependencies: + tinycolor2: 1.6.0 + + '@antv/algorithm@0.1.26': + dependencies: + '@antv/util': 2.0.17 + tslib: 2.8.1 + + '@antv/dom-util@2.0.4': + dependencies: + tslib: 2.8.1 + + '@antv/event-emitter@0.1.3': {} + + '@antv/g-base@0.5.16': + dependencies: + '@antv/event-emitter': 0.1.3 + '@antv/g-math': 0.1.9 + '@antv/matrix-util': 3.1.0-beta.3 + '@antv/path-util': 2.0.15 + '@antv/util': 2.0.17 + '@types/d3-timer': 2.0.3 + d3-ease: 1.0.7 + d3-interpolate: 3.0.1 + d3-timer: 1.0.10 + detect-browser: 5.3.0 + tslib: 2.8.1 + + '@antv/g-canvas@0.5.17': + dependencies: + '@antv/g-base': 0.5.16 + '@antv/g-math': 0.1.9 + '@antv/matrix-util': 3.1.0-beta.3 + '@antv/path-util': 2.0.15 + '@antv/util': 2.0.17 + gl-matrix: 3.4.3 + tslib: 2.8.1 + + '@antv/g-math@0.1.9': + dependencies: + '@antv/util': 2.0.17 + gl-matrix: 3.4.3 + + '@antv/g-svg@0.5.7': + dependencies: + '@antv/g-base': 0.5.16 + '@antv/g-math': 0.1.9 + '@antv/util': 2.0.17 + detect-browser: 5.3.0 + tslib: 2.8.1 + + '@antv/g-webgpu-core@0.7.2': + dependencies: + eventemitter3: 4.0.7 + gl-matrix: 3.4.3 + lodash: 4.17.21 + probe.gl: 3.6.0 + + '@antv/g-webgpu-engine@0.7.2': + dependencies: + '@antv/g-webgpu-core': 0.7.2 + gl-matrix: 3.4.3 + lodash: 4.17.21 + regl: 1.7.0 + + '@antv/g-webgpu@0.7.2': + dependencies: + '@antv/g-webgpu-core': 0.7.2 + '@antv/g-webgpu-engine': 0.7.2 + gl-matrix: 3.4.3 + gl-vec2: 1.3.0 + lodash: 4.17.21 + + '@antv/g6-core@0.8.23': + dependencies: + '@antv/algorithm': 0.1.26 + '@antv/dom-util': 2.0.4 + '@antv/event-emitter': 0.1.3 + '@antv/g-base': 0.5.16 + '@antv/g-math': 0.1.9 + '@antv/matrix-util': 3.1.0-beta.3 + '@antv/path-util': 2.0.15 + '@antv/util': 2.0.17 + ml-matrix: 6.12.1 + tslib: 2.8.1 + + '@antv/g6-core@0.8.24': + dependencies: + '@antv/algorithm': 0.1.26 + '@antv/dom-util': 2.0.4 + '@antv/event-emitter': 0.1.3 + '@antv/g-base': 0.5.16 + '@antv/g-math': 0.1.9 + '@antv/matrix-util': 3.1.0-beta.3 + '@antv/path-util': 2.0.15 + '@antv/util': 2.0.17 + ml-matrix: 6.12.1 + tslib: 2.8.1 + + '@antv/g6-element@0.8.23': + dependencies: + '@antv/g-base': 0.5.16 + '@antv/g6-core': 0.8.23 
+ '@antv/util': 2.0.17 + + '@antv/g6-pc@0.8.23': + dependencies: + '@ant-design/colors': 4.0.5 + '@antv/algorithm': 0.1.26 + '@antv/dom-util': 2.0.4 + '@antv/event-emitter': 0.1.3 + '@antv/g-base': 0.5.16 + '@antv/g-canvas': 0.5.17 + '@antv/g-math': 0.1.9 + '@antv/g-svg': 0.5.7 + '@antv/g6-core': 0.8.24 + '@antv/g6-element': 0.8.23 + '@antv/g6-plugin': 0.8.23 + '@antv/hierarchy': 0.6.14 + '@antv/layout': 0.3.25(dagre@0.8.5) + '@antv/matrix-util': 3.1.0-beta.3 + '@antv/path-util': 2.0.15 + '@antv/util': 2.0.17 + color: 3.2.1 + d3-force: 2.1.1 + dagre: 0.8.5 + insert-css: 2.0.0 + ml-matrix: 6.12.1 + + '@antv/g6-plugin@0.8.23': + dependencies: + '@antv/dom-util': 2.0.4 + '@antv/g-base': 0.5.16 + '@antv/g-canvas': 0.5.17 + '@antv/g-svg': 0.5.7 + '@antv/g6-core': 0.8.23 + '@antv/g6-element': 0.8.23 + '@antv/matrix-util': 3.1.0-beta.3 + '@antv/path-util': 2.0.15 + '@antv/scale': 0.3.18 + '@antv/util': 2.0.17 + insert-css: 2.0.0 + + '@antv/g6@4.8.23': + dependencies: + '@antv/g6-pc': 0.8.23 + + '@antv/graphlib@1.2.0': {} + + '@antv/hierarchy@0.6.14': {} + + '@antv/layout@0.3.25(dagre@0.8.5)': + dependencies: + '@antv/g-webgpu': 0.7.2 + '@antv/graphlib': 1.2.0 + '@antv/util': 3.3.10 + d3-force: 2.1.1 + d3-quadtree: 2.0.0 + dagre-compound: 0.0.11(dagre@0.8.5) + ml-matrix: 6.5.0 + transitivePeerDependencies: + - dagre + + '@antv/matrix-util@3.0.4': + dependencies: + '@antv/util': 2.0.17 + gl-matrix: 3.4.3 + tslib: 2.8.1 + + '@antv/matrix-util@3.1.0-beta.3': + dependencies: + '@antv/util': 2.0.17 + gl-matrix: 3.4.3 + tslib: 2.8.1 + + '@antv/path-util@2.0.15': + dependencies: + '@antv/matrix-util': 3.0.4 + '@antv/util': 2.0.17 + tslib: 2.8.1 + + '@antv/scale@0.3.18': + dependencies: + '@antv/util': 2.0.17 + fecha: 4.2.3 + tslib: 2.8.1 + + '@antv/util@2.0.17': + dependencies: + csstype: 3.1.3 + tslib: 2.8.1 + + '@antv/util@3.3.10': + dependencies: + fast-deep-equal: 3.1.3 + gl-matrix: 3.4.3 + tslib: 2.8.1 + + '@babel/runtime@7.27.1': {} + + '@probe.gl/env@3.6.0': + dependencies: + '@babel/runtime': 7.27.1 + + '@probe.gl/log@3.6.0': + dependencies: + '@babel/runtime': 7.27.1 + '@probe.gl/env': 3.6.0 + + '@probe.gl/stats@3.6.0': + dependencies: + '@babel/runtime': 7.27.1 + + '@types/d3-timer@2.0.3': {} + + color-convert@1.9.3: + dependencies: + color-name: 1.1.3 + + color-name@1.1.3: {} + + color-name@1.1.4: {} + + color-string@1.9.1: + dependencies: + color-name: 1.1.4 + simple-swizzle: 0.2.2 + + color@3.2.1: + dependencies: + color-convert: 1.9.3 + color-string: 1.9.1 + + csstype@3.1.3: {} + + d3-color@1.4.1: {} + + d3-dispatch@2.0.0: {} + + d3-ease@1.0.7: {} + + d3-force@2.1.1: + dependencies: + d3-dispatch: 2.0.0 + d3-quadtree: 2.0.0 + d3-timer: 2.0.0 + + d3-interpolate@3.0.1: + dependencies: + d3-color: 1.4.1 + + d3-quadtree@2.0.0: {} + + d3-timer@1.0.10: {} + + d3-timer@2.0.0: {} + + dagre-compound@0.0.11(dagre@0.8.5): + dependencies: + dagre: 0.8.5 + + dagre@0.8.5: + dependencies: + graphlib: 2.1.8 + lodash: 4.17.21 + + detect-browser@5.3.0: {} + + eventemitter3@4.0.7: {} + + fast-deep-equal@3.1.3: {} + + fecha@4.2.3: {} + + gl-matrix@3.4.3: {} + + gl-vec2@1.3.0: {} + + graphlib@2.1.8: + dependencies: + lodash: 4.17.21 + + insert-css@2.0.0: {} + + is-any-array@2.0.1: {} + + is-arrayish@0.3.2: {} + + lodash@4.17.21: {} + + ml-array-max@1.2.4: + dependencies: + is-any-array: 2.0.1 + + ml-array-min@1.2.3: + dependencies: + is-any-array: 2.0.1 + + ml-array-rescale@1.3.7: + dependencies: + is-any-array: 2.0.1 + ml-array-max: 1.2.4 + ml-array-min: 1.2.3 + + ml-matrix@6.12.1: + dependencies: + 
is-any-array: 2.0.1 + ml-array-rescale: 1.3.7 + + ml-matrix@6.5.0: + dependencies: + ml-array-rescale: 1.3.7 + + probe.gl@3.6.0: + dependencies: + '@babel/runtime': 7.27.1 + '@probe.gl/env': 3.6.0 + '@probe.gl/log': 3.6.0 + '@probe.gl/stats': 3.6.0 + + regl@1.7.0: {} + + simple-swizzle@0.2.2: + dependencies: + is-arrayish: 0.3.2 + + tinycolor2@1.6.0: {} + + tslib@2.8.1: {} diff --git a/schedule.json b/schedule.json new file mode 100644 index 00000000..3e68bd37 --- /dev/null +++ b/schedule.json @@ -0,0 +1 @@ +{"code":200,"message":"success","data":{"nodes":[{"id":"karmada-control-plane","name":"Karmada控制平面","type":"control-plane"},{"id":"member1","name":"member1","type":"member-cluster","schedulingParams":{"weight":1}},{"id":"member2","name":"member2","type":"member-cluster","schedulingParams":{"weight":1}},{"id":"member3","name":"member3","type":"member-cluster","schedulingParams":{"weight":1}},{"id":"resource-default-Deployment-example-deployment","name":"example-deployment","type":"resource","resourceInfo":{"resourceKind":"Deployment","resourceGroup":"Workloads","namespace":"default","propagationPolicy":"example-namespace-policy"}},{"id":"resource-default-Deployment-example-deployment2","name":"example-deployment2","type":"resource","resourceInfo":{"resourceKind":"Deployment","resourceGroup":"Workloads","namespace":"default","propagationPolicy":"example-namespace-policy2"}}],"links":[{"source":"karmada-control-plane","target":"resource-default-Deployment-example-deployment","value":1,"type":"Deployment"},{"source":"resource-default-Deployment-example-deployment","target":"member2","value":2,"type":"Deployment"},{"source":"resource-default-Deployment-example-deployment","target":"member3","value":2,"type":"Deployment"},{"source":"resource-default-Deployment-example-deployment","target":"member1","value":1,"type":"Deployment"},{"source":"karmada-control-plane","target":"resource-default-Deployment-example-deployment2","value":1,"type":"Deployment"},{"source":"resource-default-Deployment-example-deployment2","target":"member2","value":4,"type":"Deployment"},{"source":"resource-default-Deployment-example-deployment2","target":"member1","value":2,"type":"Deployment"}],"resourceDist":[{"resourceType":"Deployment","clusterDist":[{"clusterName":"member1","count":3},{"clusterName":"member2","count":6},{"clusterName":"member3","count":2}]}],"summary":{"totalClusters":3,"totalPropagationPolicy":2,"totalResourceBinding":2},"detailedResources":[{"resourceName":"example-deployment","resourceKind":"Deployment","resourceGroup":"Workloads","namespace":"default","propagationPolicy":"example-namespace-policy","weight":0,"clusterWeights":{"member1":1,"member2":2,"member3":2},"clusterDist":[{"clusterName":"member2","scheduledCount":2,"actualCount":3,"status":{"scheduled":true,"actual":true,"scheduledCount":2,"actualCount":3}},{"clusterName":"member3","scheduledCount":2,"actualCount":1,"status":{"scheduled":true,"actual":true,"scheduledCount":2,"actualCount":1}},{"clusterName":"member1","scheduledCount":1,"actualCount":1,"status":{"scheduled":true,"actual":true,"scheduledCount":1,"actualCount":1}}],"totalScheduledCount":5,"totalActualCount":5},{"resourceName":"example-deployment2","resourceKind":"Deployment","resourceGroup":"Workloads","namespace":"default","propagationPolicy":"example-namespace-policy2","weight":0,"clusterWeights":{"member1":1,"member2":2},"clusterDist":[{"clusterName":"member2","scheduledCount":4,"actualCount":3,"status":{"scheduled":true,"actual":true,"scheduledCount":4,"actualCount":3}},{"
clusterName":"member1","scheduledCount":2,"actualCount":1,"status":{"scheduled":true,"actual":true,"scheduledCount":2,"actualCount":1}}],"totalScheduledCount":6,"totalActualCount":4}]}} \ No newline at end of file diff --git a/test.yaml b/test.yaml new file mode 100644 index 00000000..81790f6f --- /dev/null +++ b/test.yaml @@ -0,0 +1,146 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: + description: 示例 Deployment + creationTimestamp: 2025-05-05T15:35:22Z + generation: 1 + labels: + app: example + managedFields: + - apiVersion: apps/v1 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: {} + f:description: {} + f:labels: + .: {} + f:app: {} + f:spec: + f:progressDeadlineSeconds: {} + f:replicas: {} + f:revisionHistoryLimit: {} + f:selector: {} + f:strategy: + f:rollingUpdate: + .: {} + f:maxSurge: {} + f:maxUnavailable: {} + f:type: {} + f:template: + f:metadata: + f:labels: + .: {} + f:app: {} + f:spec: + f:containers: + k:{"name":"example"}: + .: {} + f:env: + .: {} + k:{"name":"ENV_VAR"}: + .: {} + f:name: {} + f:value: {} + f:image: {} + f:imagePullPolicy: {} + f:name: {} + f:ports: + .: {} + k:{"containerPort":80,"protocol":"TCP"}: + .: {} + f:containerPort: {} + f:protocol: {} + f:resources: + .: {} + f:limits: + .: {} + f:cpu: {} + f:memory: {} + f:requests: + .: {} + f:cpu: {} + f:memory: {} + f:terminationMessagePath: {} + f:terminationMessagePolicy: {} + f:volumeMounts: + .: {} + k:{"mountPath":"/data"}: + .: {} + f:mountPath: {} + f:name: {} + f:dnsPolicy: {} + f:imagePullSecrets: + .: {} + k:{"name":"myregistrykey"}: {} + f:restartPolicy: {} + f:schedulerName: {} + f:securityContext: {} + f:terminationGracePeriodSeconds: {} + f:volumes: + .: {} + k:{"name":"data"}: + .: {} + f:emptyDir: {} + f:name: {} + manager: dashboard + operation: Update + time: 2025-05-05T15:35:22Z + name: example-deployment + namespace: default + resourceVersion: "129391" + uid: beadf88b-c2ca-4945-b9d2-10dee03f288d +spec: + progressDeadlineSeconds: 600 + replicas: 1 + revisionHistoryLimit: 10 + selector: + matchLabels: + app: example + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + type: RollingUpdate + template: + metadata: + creationTimestamp: null + labels: + app: example + spec: + containers: + - env: + - name: ENV_VAR + value: value + image: nginx:latest + imagePullPolicy: Always + name: example + ports: + - containerPort: 80 + protocol: TCP + resources: + limits: + cpu: 500m + memory: 256Mi + requests: + cpu: 250m + memory: 128Mi + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /data + name: data + dnsPolicy: ClusterFirst + imagePullSecrets: + - name: myregistrykey + restartPolicy: Always + schedulerName: default-scheduler + securityContext: {} + terminationGracePeriodSeconds: 30 + volumes: + - emptyDir: {} + name: data +status: {} + diff --git a/ui/apps/dashboard/i18n.config.cjs b/ui/apps/dashboard/i18n.config.cjs index 89576328..498b13c5 100644 --- a/ui/apps/dashboard/i18n.config.cjs +++ b/ui/apps/dashboard/i18n.config.cjs @@ -49,4 +49,13 @@ module.exports = { key: 'please input your key', token: '', }, + // [i18n resources] + resources: { + zh: { + translation: zhTexts, + }, + en: { + translation: enTexts, + }, + }, }; diff --git a/ui/apps/dashboard/index.html b/ui/apps/dashboard/index.html index 570085d0..d86e68a9 100644 --- a/ui/apps/dashboard/index.html +++ b/ui/apps/dashboard/index.html @@ -18,9 +18,11 @@
- + -
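
A note on the `resources` block added to `ui/apps/dashboard/i18n.config.cjs` above: `zhTexts` and `enTexts` are referenced without their imports appearing in the hunk, so they are presumably required near the top of the file. The `{ <lang>: { translation: ... } }` shape is the standard i18next resource-bundle format. As a rough sketch of how such a map is typically wired into a React app — assuming i18next and react-i18next are the runtime consumers, with hypothetical locale paths — it might look like:

```typescript
// Hedged sketch only — not the dashboard's actual bootstrap code.
// Assumes i18next + react-i18next; the locale import paths are hypothetical
// stand-ins for wherever zhTexts/enTexts are actually loaded from.
import i18n from 'i18next';
import { initReactI18next } from 'react-i18next';

import zhTexts from './locales/zh-CN.json'; // hypothetical path
import enTexts from './locales/en-US.json'; // hypothetical path

i18n
  .use(initReactI18next) // binds the i18n instance to React components
  .init({
    resources: {
      zh: { translation: zhTexts },
      en: { translation: enTexts },
    },
    lng: 'zh', // hypothetical initial language
    fallbackLng: 'en', // fall back to English for keys missing in Chinese
    interpolation: { escapeValue: false }, // React already escapes rendered output
  });

export default i18n;
```

Bundling the resource map at config time, rather than lazy-loading each language with a backend plugin, keeps the setup simple at the cost of shipping every translation in the initial bundle.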