Note: This is not an officially supported Google product. This project is not eligible for the Google Open Source Software Vulnerability Rewards Program.
CloudGauge is a web application designed to run a comprehensive set of compliance, security, cost optimization, and best-practice checks against a Google Cloud Organization.
It is built with Python/Flask and deployed as a serverless application on Google Cloud Run. The application leverages Cloud Tasks to run scans asynchronously, ensuring that even very large organizations can be scanned without browser timeouts.
Final results are delivered as an interactive HTML report and a CSV file stored in a Google Cloud Storage bucket. The reports also feature Gemini-powered executive summaries and gcloud remediation suggestions.
- Features
- Architecture
- Deployment Instructions
- How to Use
- Troubleshooting
- Cleanup Script
- License & Support
CloudGauge scans your organization across several key domains, modeled after the Google Cloud Architecture Framework.
- Organization Policies: Checks boolean policies against a list of best practices.
- Organization IAM: Scans for public principals (`allUsers`, `allAuthenticatedUsers`) and primitive roles (owner, orgAdmin) at the org level.
- Project IAM: Scans all projects for use of the primitive roles `roles/owner` and `roles/editor`.
- Security Command Center: Verifies that SCC Premium is enabled.
- SA Key Rotation: Finds user-managed service account keys older than 90 days.
- Public GCS Buckets: Detects GCS buckets that are publicly accessible.
- Open Firewall Rules: Scans all VPCs for firewall rules open to the internet (`0.0.0.0/0`).
- Idle Resources: Finds idle Cloud SQL instances, VMs, persistent disks, and unassociated IP addresses.
- Rightsizing: Identifies overprovisioned VMs and underutilized reservations.
- Cost Insights: Provides an on-demand, detailed scan for CPU/memory usage, idle images, and more.
- Essential Contacts: Ensures contacts are configured for the `SECURITY`, `TECHNICAL`, and `LEGAL` categories.
- Service Health: Verifies that the Personalized Service Health API is enabled.
- Cloud SQL Resilience: Checks for High Availability (HA) configuration, automated backups, and Point-in-Time Recovery (PITR).
- GCS Versioning: Finds buckets without object versioning enabled.
- GKE Hygiene: Checks for clusters not on a release channel and node pools with auto-upgrade disabled.
- Resilience Assets: Identifies zonal MIGs (recommends regional) and single-region disk snapshots.
- Audit Logging: Checks for an organization-level log sink.
- OS Config Coverage: Identifies running VMs (excluding GKE/Dataproc) that are not reporting to the OS Config service.
- Monitoring Coverage: Scans for projects missing key alert policies (e.g., Quota, Cloud SQL, GKE).
- Network Analyzer: Ingests and normalizes insights for VPC, GKE, and PSA IP address utilization.
- Standalone VMs: Finds VMs not managed by a Managed Instance Group (MIG).
- Quota Utilization: Identifies any regional compute quotas exceeding 80% utilization.
- Unattended Projects: Flags projects with low utilization.
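As a concrete illustration of one of these checks, the SA Key Rotation scan's 90-day rule can be approximated from the command line. The commented `gcloud` invocation is a real command but needs a real service account email; `is_older_than_90_days` is a hypothetical helper mirroring the cutoff logic, not CloudGauge's actual code:

```shell
#!/usr/bin/env bash
# List user-managed keys and their creation times for one service account.
# (Placeholder email -- substitute a real service account before running.)
#   gcloud iam service-accounts keys list \
#     --iam-account="sa-name@project-id.iam.gserviceaccount.com" \
#     --managed-by=user \
#     --format="value(name,validAfterTime)"

# Hypothetical helper applying the same 90-day age test to one timestamp.
is_older_than_90_days() {
  local created="$1"                      # RFC 3339, e.g. 2024-01-01T00:00:00Z
  local created_s now_s
  created_s=$(date -u -d "$created" +%s)  # GNU date
  now_s=$(date -u +%s)
  if (( (now_s - created_s) / 86400 > 90 )); then
    echo "ROTATE"
  else
    echo "OK"
  fi
}

is_older_than_90_days "2020-01-01T00:00:00Z"   # a key this old needs rotation
```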
The application follows a robust, scalable, and asynchronous "fire-and-forget" pattern. This ensures the user gets an immediate response while the heavy work (which can take many minutes) is done in the background.
- UI Trigger: A user navigates to the Cloud Run URL and submits an Organization ID.
- Task Creation: The `/scan` endpoint creates a Cloud Task with the scan details and redirects the user to a status page.
- Background Worker: Cloud Tasks securely invokes the `/run-scan` endpoint in the background.
- Parallel Processing: The worker executes dozens of checks, running project-level scans in parallel using a thread pool.
- Report Storage: The worker generates the HTML/CSV reports and uploads them to Google Cloud Storage.
- Status Polling: The user's status page polls an API endpoint until the report files are found in GCS, at which point it displays the download links.
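CloudGauge performs the task-creation step with the Cloud Tasks client library; the same hand-off can be sketched with the gcloud CLI. Every value below (queue path, URL, payload, service account) is a placeholder based on this guide's defaults, not output from the app:

```shell
# Sketch of the task hand-off performed by the /scan endpoint.
# All values are placeholders; the fully qualified queue path avoids
# needing a separately configured Cloud Tasks location.
gcloud tasks create-http-task scan-demo-task \
  --queue="projects/your-project-id/locations/asia-south1/queues/cloudgauge-scan-queue" \
  --url="https://your-cloudgauge-url.a.run.app/run-scan" \
  --method=POST \
  --header="Content-Type: application/json" \
  --body-content='{"org_id": "123456789012"}' \
  --oidc-service-account-email="cloudgauge-sa@your-project-id.iam.gserviceaccount.com"
```

Cloud Tasks retries the POST until the worker returns a success status, which is what makes the fire-and-forget pattern resilient to transient failures.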
The diagram below illustrates the asynchronous "fire-and-forget" pattern.
graph LR
%% The diagram is now Left-to-Right for a clearer flow.
%% Custom styling has been removed to ensure readability on any background.
%% Column 1: User
subgraph User
A[Selects Scan Scope] --> B{Lists Resources};
B --> C[Selects Resource ID];
C --> D[Submits Form];
end
%% Column 2: The "Frontend" part of the Cloud Run service
subgraph Cloud Run - Initial Request
D -- "1. POST Request" --> E{Scan Endpoint};
E -- "2. Creates Task" --> F[(Cloud Tasks)];
E -- "3. Redirects" --> G[Status Page];
G -- "7. Polls API" --> H{Status API};
end
%% Column 3: The "Backend" part of the Cloud Run service (the worker)
subgraph Cloud Run - Background Worker
F -- "4. Invokes Worker" --> I{Worker Endpoint};
subgraph Worker Process
I --> J[1. Init status.json];
J --> K{2. Start Parallel Checks};
K -- "Dispatches" --> L1[IAM Checks];
K -- "Dispatches" --> L2[Cost Checks];
K -- "Dispatches" --> L3[...];
L1 -- "Writes to" --> M([Local /tmp Files]);
L2 -- "Writes to" --> M;
L3 -- "Writes to" --> M;
M --> N[3. Aggregate Findings];
N --> O[4. Generate Reports];
O --> P[5. Upload Reports];
P --> Q[6. Cleanup tmp Files];
end
end
%% Column 4: External Google Cloud Services
subgraph External GCP Services
L1 -- "queries" --> APIS([Cloud APIs]);
L2 -- "queries" --> APIS;
L3 -- "queries" --> APIS;
H -- "reads" --> GCS_STATUS([status.json in GCS]);
K -- "sends progress updates to" --> GCS_STATUS;
P -- "writes to" --> GCS_REPORTS([Final Reports in GCS]);
end
Follow the Common Prerequisites first, then choose Method 1 or Method 2 to deploy.
- Enable APIs:
  - A Google Cloud Project with billing enabled.
  - gcloud CLI installed and authenticated (`gcloud auth login`).
  - Run the following command to enable all necessary APIs:
gcloud services enable \
  run.googleapis.com \
  cloudbuild.googleapis.com \
  cloudtasks.googleapis.com \
  iam.googleapis.com \
  cloudresourcemanager.googleapis.com \
  logging.googleapis.com \
  recommender.googleapis.com \
  securitycenter.googleapis.com \
  servicehealth.googleapis.com \
  essentialcontacts.googleapis.com \
  compute.googleapis.com \
  container.googleapis.com \
  sqladmin.googleapis.com \
  osconfig.googleapis.com \
  monitoring.googleapis.com \
  storage.googleapis.com \
  aiplatform.googleapis.com \
  cloudasset.googleapis.com
- Create Service Account & Grant Permissions:
- This SA will be used by the Cloud Run service to scan the organization and create tasks.
# Set your Organization ID
export ORG_ID="<your-org-id>"
# Set Project and SA variables
export PROJECT_ID=$(gcloud config get-value project)
export SA_NAME="cloudgauge-sa"
export SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
# Create the Service Account
gcloud iam service-accounts create ${SA_NAME} --display-name="CloudGauge Service Account"
# Grant Permissions
# 1. Grant ORG-level roles to read assets and policies
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/browser"
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/cloudasset.viewer"
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/compute.networkViewer"
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/essentialcontacts.viewer"
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/recommender.iamViewer"
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/logging.viewer"
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/monitoring.viewer"
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/orgpolicy.policyViewer"
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/resourcemanager.organizationViewer"
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/servicehealth.viewer"
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/securitycenter.settingsViewer"
gcloud organizations add-iam-policy-binding ${ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/iam.securityReviewer"
# 2. Grant PROJECT-level roles (on the project where Cloud Run is deployed)
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/aiplatform.user"
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/cloudtasks.admin"
# 3. Grant the Service Account Token Creator and User roles to the SA itself (needed for signed URLs)
gcloud iam service-accounts add-iam-policy-binding ${SA_EMAIL} --member="serviceAccount:${SA_EMAIL}" --role="roles/iam.serviceAccountTokenCreator"
gcloud iam service-accounts add-iam-policy-binding ${SA_EMAIL} --member="serviceAccount:${SA_EMAIL}" --role="roles/iam.serviceAccountUser"
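The Token Creator binding above matters because a service running on Cloud Run has no local private key, so V4 signed URLs are produced through the IAM credentials API. As a quick sanity check that the grant works (bucket and object names here are placeholders):

```shell
# Generate a V4 signed URL by impersonating the service account.
# Placeholder bucket/object; relies on the Token Creator binding above.
gcloud storage sign-url "gs://cloudgauge-reports-your-project-id/report.html" \
  --duration=1h \
  --impersonate-service-account="cloudgauge-sa@your-project-id.iam.gserviceaccount.com"
```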
- Create GCS Bucket:
export BUCKET_NAME="cloudgauge-reports-${PROJECT_ID}"
gsutil mb -p ${PROJECT_ID} gs://${BUCKET_NAME}
gcloud storage buckets add-iam-policy-binding gs://${BUCKET_NAME} --member="serviceAccount:${SA_EMAIL}" --role="roles/storage.objectAdmin"
First, you need your own copy of the code.
- Navigate to the CloudGauge GitHub repository.
- Click the Fork button in the top-right corner of the page.
- Choose your GitHub account as the destination for the fork. This will create a copy of the repository under your account (e.g., https://github.com/your-username/CloudGauge).
Now, let's create the initial Cloud Run service and connect it to your new repository.
- In the Google Cloud Console, go to the Cloud Run page.
- Click Create Service.
- Select Continuously deploy new revisions from a source repository and click Set up with Cloud Build.
- A new panel will appear. In the "Source" section, under "Repository", click Manage connected repositories.
- A new window will pop up, prompting you to install the Google Cloud Build app on GitHub.
- Select your GitHub username or organization.
- In the "Repository access" section, choose either All repositories or Only select repositories. If you choose the latter, make sure you select your forked `CloudGauge` repository.
- Click Install or Save.
- Back in the Cloud Console, select your newly connected forked repository and branch (`main`), then click Next.
- In the Build Settings section:
  - Build Type: Select `Dockerfile`.
  - Source location: Keep the default `/Dockerfile`.
  - Click Save.
- Configure the service details:
  - Service name: Give it a name like `cloudgauge-service`.
  - Region: Choose a region, for example, `asia-south1`.
- Expand the "Container(s), Volumes, Networking, Security" section.
  - Go to the Identity & Security tab and select the service account you previously created (e.g., `cloudgauge-sa@...`).
  - Go to the General tab and set the Request Timeout to `3600` seconds.
  - Go to the Variables & Secrets tab and add the following Environment Variables, replacing the example values with your own:
    - `PROJECT_ID`: Your GCP Project ID (e.g., `my-gcp-project`)
    - `TASK_QUEUE`: `cloudgauge-scan-queue`
    - `RESULTS_BUCKET`: The name of your GCS bucket (e.g., `cloudgauge-reports-my-gcp-project`)
    - `SERVICE_ACCOUNT_EMAIL`: The full email of your service account
    - `LOCATION`: The region you selected (e.g., `asia-south1`)
- Click Create. The service will start building and deploying.
To function correctly, the service account needs two key permissions granted directly on the Cloud Run service itself. This ensures all permissions are tightly scoped and follow security best practices.
Cloud Run Invoker (roles/run.invoker): This role is required to allow the Cloud Tasks service to securely trigger your CloudGauge service to start a scan. This permission is granted specifically on the new Cloud Run service you just deployed.
Cloud Run Viewer (roles/run.viewer): This role allows the service to automatically discover its own public URL when it starts up. This feature enables a single-step deployment, removing the need to manually update the service with its own URL. This permission is granted at the service level.
By granting both roles at the service level, you ensure the service account only has the minimum permissions required on the specific resource it needs to access.
Open Cloud Shell or a local terminal with the gcloud CLI installed, and run the following commands, replacing the placeholders with your values.
# Store your service account email in a variable for convenience
SA_EMAIL="cloudgauge-sa@your-project-id.iam.gserviceaccount.com" # Replace with your service account email
SERVICE_NAME="your-chosen-service-name"
export REGION="asia-south1" # Or your chosen region
# Grants permission to be invoked by Cloud Tasks
gcloud run services add-iam-policy-binding ${SERVICE_NAME} --member="serviceAccount:${SA_EMAIL}" --role="roles/run.invoker" --region=${REGION}
# Grants permission to view its own service details to find its URL
gcloud run services add-iam-policy-binding ${SERVICE_NAME} --member="serviceAccount:${SA_EMAIL}" --role="roles/run.viewer" --region=${REGION}
With these permissions set, your CloudGauge instance is fully deployed and ready to use. You can now proceed to the application's URL to start your first scan.
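For reference, the lookup that the `run.viewer` role enables is equivalent to the following describe call; running it yourself is also a quick way to confirm the service is up and to grab its URL:

```shell
# Fetch the service's public URL -- the same information the app reads
# at startup via its run.viewer permission.
gcloud run services describe ${SERVICE_NAME} \
  --region=${REGION} \
  --format="value(status.url)"
```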
This method gives you manual control over the build and deploy steps.
- Clone this repository:
git clone https://github.com/GoogleCloudPlatform/CloudGauge
cd CloudGauge
- Set Environment Variables:
- (You should already have `PROJECT_ID` and `SA_EMAIL` from the common setup.)
export REGION="asia-south1" # Or your preferred region
export SERVICE_NAME="cloudgauge-service"
export BUCKET_NAME="cloudgauge-reports-${PROJECT_ID}"
export QUEUE_NAME="cloudgauge-scan-queue"
- **Build and Deploy Service**:
- This command builds the container and deploys it.
# Build the container image using Cloud Build
gcloud builds submit . --tag "gcr.io/${PROJECT_ID}/${SERVICE_NAME}" --region=${REGION}
# Deploy to Cloud Run
gcloud run deploy ${SERVICE_NAME} \
--image "gcr.io/${PROJECT_ID}/${SERVICE_NAME}" \
--service-account ${SA_EMAIL} \
--region ${REGION} \
--allow-unauthenticated \
--platform managed \
--timeout=3600 \
--memory=1Gi \
--set-env-vars=PROJECT_ID=${PROJECT_ID},TASK_QUEUE=${QUEUE_NAME},RESULTS_BUCKET=${BUCKET_NAME},SERVICE_ACCOUNT_EMAIL=${SA_EMAIL},LOCATION=${REGION}
- Grant Invoker & Viewer Permission:
- Now that the service exists, give its SA permission to invoke it.
gcloud run services add-iam-policy-binding ${SERVICE_NAME} \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/run.invoker" \
--region=${REGION}
gcloud run services add-iam-policy-binding ${SERVICE_NAME} \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/run.viewer" \
--region=${REGION}
Your service is now fully deployed and configured!
- Navigate to your service's URL (`${SERVICE_URL}`).
- Select your scope from the dropdown menu: Organization, Folder, or Project.
- Select the resource from the dropdown.
- Click "Start Scan".
- You will be redirected to a status page. Wait for the scan to complete (this can take 5-15 minutes depending on org size).
- Once finished, links to the Interactive HTML Report and Download CSV Report will appear.
If the status page is stuck for a long time, the background worker is likely failing.
- Go to the Cloud Run page in the Google Cloud Console.
- Click on your service (`cloudgauge-service`).
- Go to the LOGS tab.
- Look for log entries for requests made to the `/run-scan` URL.
- If you see logs for `/run-scan`, look for any errors in red.
- Go to the Cloud Tasks page in the Google Cloud Console.
- Click on your queue (`cloudgauge-scan-queue`).
- Go to the LOGS tab.
- Look at the status of the task attempts. If you see a `PERMISSION_DENIED` (HTTP 403) error, it means you missed the "Grant Invoker Permission" step.
- Error Message: You might see an error in the Cloud Run logs that says: "Memory limit of 512 MiB exceeded …"
- Cause: The scan process is using more memory than the default amount allocated to the Cloud Run service.
- Solution: You need to increase the memory allocated to your service.
- Via Console:
- Click "Edit & Deploy New Revision" on your Cloud Run service page.
- Under the "General" tab, find "Memory allocation" and increase it (e.g., to `2 GiB`).
- Click Deploy.
- Via gcloud CLI:
gcloud run services update cloudgauge-service \
--memory=2Gi \
--region=<your-region>
- Error Message: The logs show a `PERMISSION_DENIED` error related to a specific Google Cloud service, such as `storage.googleapis.com` or `iam.googleapis.com`.
- Cause: The service account (`cloudgauge-sa@...`) is missing an IAM role needed to access a specific API.
- Solution: The error message will usually state which permission is missing. Go back to the "Common Prerequisites" section of the deployment instructions and verify that all `gcloud ... add-iam-policy-binding` commands were run successfully against the correct service account. You may need to re-run the command for the missing role.
- Error Message: The Cloud Run revision is not becoming healthy, and the logs show an error right at startup, such as `ModuleNotFoundError` in Python or an error about a missing environment variable.
- Cause: This typically happens for one of two reasons:
- An environment variable was not set correctly.
- There is a bug in the application code or a missing dependency in `requirements.txt`.
- Solution:
- Check the LOGS tab for the specific error message that occurs when the container tries to start.
- If the error is related to a variable, click "Edit & Deploy New Revision," go to the "Variables & Secrets" tab, and ensure all required variables (`PROJECT_ID`, `WORKER_URL`, etc.) are present and have the correct values.
- If it is a code error, you will need to fix the source code and deploy a new revision.
- Error Message: The logs show an HTTP `504` status code or a message like "The request has been terminated because it has reached its deadline."
- Cause: The scan is taking longer to complete than the configured request timeout on the Cloud Run service.
- Solution: The deployment instructions recommend a timeout of `3600` seconds (1 hour). Verify this was set correctly.
  - Via Console: Go to "Edit & Deploy New Revision" and check the "Request timeout" value under the "General" tab.
- Via gcloud CLI: You can update the service with the correct timeout using:
gcloud run services update cloudgauge-service \
--timeout=3600 \
--region=<your-region>
- Symptom: When running a Cloud Build, the process fails during steps that fetch external packages (e.g., `pip install`, `apt-get update`, or `npm install`). You may see timeout errors or messages about network connectivity and blocked egress.
- Cause: By default, Cloud Build runs on workers in a Google-managed project that is outside your organization's VPC Service Controls (VPC SC) perimeter. Your perimeter is correctly blocking egress traffic from these external workers, preventing them from accessing public repositories to download dependencies.
- Solution: Use Cloud Build private pools. This provisions dedicated build workers that run inside your own VPC network, making all build traffic internal and compliant with your security perimeter.
1. Create a Private Pool in Your VPC: First, create a private worker pool connected to your VPC network. This ensures all build steps are executed within your perimeter.
gcloud builds worker-pools create [POOL_NAME] \
--project=[PROJECT_ID] \
--region=[REGION] \
--peered-network=projects/[PROJECT_ID]/global/networks/[VPC_NETWORK]
Replace [POOL_NAME], [PROJECT_ID], [REGION], and [VPC_NETWORK] with your specific values.
2. Configure a Secure Egress Route for the Private Pool
A private pool inside a VPC SC perimeter cannot access public package repositories by default. You need to provide a route to the internet.
Note: Cloud NAT will not work for this use case. Private pools reside in a separate, Google-managed VPC peered to yours. Cloud NAT does not provide service across a VPC peering connection.
The recommended solution is to use a dedicated Compute Engine VM as a secure NAT Gateway.
- Create a NAT Gateway VM: Provision a small Compute Engine VM within your VPC. This VM should have an external IP address and be configured to perform network address translation (masquerading). You can use a startup script to enable IP forwarding and set the necessary iptables rules.
- Create Custom Routes: You must create custom routes to direct traffic from your private pool's IP range to the NAT gateway VM. This ensures only the build workers' traffic is routed for external access, leaving other resources unaffected.
- Configure Firewall Rules: Create VPC firewall rules to:
- Allow ingress traffic from the private pool's IP range to the NAT gateway VM.
- Allow egress traffic from the NAT gateway VM to the internet (0.0.0.0/0).
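A minimal startup script for the NAT gateway VM described above might look like the following. This is a sketch, not a hardened configuration: the interface name `ens4` is an assumption (common on current GCE Debian/Ubuntu images; verify with `ip link`):

```shell
#!/usr/bin/env bash
# Hypothetical NAT gateway startup script for a GCE VM.
set -euo pipefail

# Enable IPv4 forwarding immediately and persist it across reboots.
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/99-nat.conf

# Masquerade traffic leaving through the external-facing interface
# (assumed to be ens4 here).
iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE
```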
3. Run Your Build Using the Private Pool: Modify your gcloud builds submit command to include the --worker-pool flag, pointing it to your newly created private pool.
gcloud builds submit . \
--tag "gcr.io/[PROJECT_ID]/[SERVICE_NAME]" \
--region=[REGION] \
--worker-pool=projects/[PROJECT_ID]/locations/[REGION]/workerPools/[POOL_NAME]
This command now directs Cloud Build to use a worker from your internal pool. The worker's traffic is routed through your secure NAT Gateway VM, allowing it to fetch external dependencies while remaining fully compliant with your VPC SC perimeter.
- Symptom: You need to store your container images in a specific Google Cloud region (e.g., `asia-south1` for organization policy resource location constraints), but by default, gcr.io hosts images in multi-regional locations (us, eu, asia) and does not offer regional control.
- Cause: Google Container Registry (gcr.io) is a multi-regional service. For fine-grained control over the storage location of your images, use Artifact Registry, Google Cloud's recommended service for managing container images and language packages.
- Solution: Create a Docker repository in Artifact Registry in your desired region and update your build commands to point to the new regional endpoint.
Step 1: Create a Regional Artifact Registry Repository
First, create a new Docker-format repository in your chosen region. This example uses asia-south1 (Mumbai).
gcloud artifacts repositories create cloudgauge-repo \
--repository-format=docker \
--location=asia-south1 \
--description="CloudGauge Docker repository in Mumbai"
You only need to run this command once to set up the repository.
Step 2: Update Your Build and Push Commands
Next, you must change the image path in your build and push commands from gcr.io/... to the new Artifact Registry path. The new format is [REGION]-docker.pkg.dev/[PROJECT_ID]/[REPO_NAME]/[IMAGE_NAME].
If you're using Cloud Build, update the --tag flag in your gcloud builds submit command:
gcloud builds submit . --tag "asia-south1-docker.pkg.dev/[PROJECT_ID]/cloudgauge-repo/[SERVICE_NAME]"
If you are building your image locally, update your docker tag and docker push commands:
# 1. Build the image
docker build -t cloudgauge-image .
# 2. Tag the image for your new Artifact Registry repo
docker tag cloudgauge-image asia-south1-docker.pkg.dev/[PROJECT_ID]/cloudgauge-repo/cloudgauge-image
# 3. Push the image
docker push asia-south1-docker.pkg.dev/[PROJECT_ID]/cloudgauge-repo/cloudgauge-image
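If the local `docker push` is rejected with an authentication error, you likely still need to register the regional Artifact Registry endpoint as a Docker credential helper (a one-time setup step):

```shell
# One-time: configure Docker to authenticate to the regional endpoint.
gcloud auth configure-docker asia-south1-docker.pkg.dev
```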
By following these steps, you can ensure your container images are stored and managed in the specific Google Cloud region that meets your requirements.
This gcloud script will remove all the resources created by the tool.
Before running the script, replace the placeholder values in the first few lines with the ones you used during deployment.
#!/bin/bash
# --- CONFIGURE THESE VARIABLES ---
export YOUR_ORG_ID="123456789012" # Replace with your Organization ID
export PROJECT_ID="your-gcp-project-id" # Replace with your Project ID
export REGION="asia-south1" # Replace with the region you deployed to
# --- END CONFIGURATION ---
# Set derived variables
export SERVICE_NAME="cloudgauge-service"
export QUEUE_NAME="cloudgauge-scan-queue"
export BUCKET_NAME="cloudgauge-reports-${PROJECT_ID}"
export SA_NAME="cloudgauge-sa"
export SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
# --- DELETION SCRIPT STARTS HERE ---
echo "--- Starting Cleanup for CloudGauge in project ${PROJECT_ID} ---"
# 1. Delete the Cloud Run service
echo "Deleting Cloud Run service: ${SERVICE_NAME}..."
gcloud run services delete ${SERVICE_NAME} --region=${REGION} --platform=managed --quiet
# 2. Delete the Cloud Tasks queue
echo "Deleting Cloud Tasks queue: ${QUEUE_NAME}..."
gcloud tasks queues delete ${QUEUE_NAME} --location=${REGION} --quiet
# 3. Delete the GCS bucket and all its contents
echo "Deleting GCS bucket: gs://${BUCKET_NAME}..."
gsutil -m rm -r "gs://${BUCKET_NAME}"
# 4. Delete the container image from GCR
echo "Deleting container image..."
gcloud container images delete "gcr.io/${PROJECT_ID}/${SERVICE_NAME}" --force-delete-tags --quiet
# 5. Remove all IAM policy bindings for the service account
echo "Removing IAM bindings for ${SA_EMAIL}..."
# Organization-level roles (mirror every role granted during setup)
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/browser" --quiet
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/cloudasset.viewer" --quiet
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/compute.networkViewer" --quiet
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/essentialcontacts.viewer" --quiet
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/recommender.iamViewer" --quiet
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/logging.viewer" --quiet
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/monitoring.viewer" --quiet
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/orgpolicy.policyViewer" --quiet
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/resourcemanager.organizationViewer" --quiet
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/servicehealth.viewer" --quiet
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/securitycenter.settingsViewer" --quiet
gcloud organizations remove-iam-policy-binding ${YOUR_ORG_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/iam.securityReviewer" --quiet
# Project-level roles
gcloud projects remove-iam-policy-binding ${PROJECT_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/cloudtasks.admin" --quiet
gcloud projects remove-iam-policy-binding ${PROJECT_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/storage.objectAdmin" --quiet
gcloud projects remove-iam-policy-binding ${PROJECT_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/iam.serviceAccountTokenCreator" --quiet
gcloud projects remove-iam-policy-binding ${PROJECT_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/aiplatform.user" --quiet
gcloud projects remove-iam-policy-binding ${PROJECT_ID} --member="serviceAccount:${SA_EMAIL}" --role="roles/run.invoker" --quiet # Invoker was granted on the Cloud Run service (deleted above); included here as a safeguard
# 6. Delete the Service Account
echo "Deleting service account: ${SA_EMAIL}..."
gcloud iam service-accounts delete ${SA_EMAIL} --quiet
echo "✅ Cleanup complete!"
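After the script finishes, a few list commands can confirm nothing was left behind. These are illustrative checks using this guide's default names:

```shell
# Each command should return no CloudGauge resources after cleanup.
gcloud run services list --region="${REGION}" --filter="metadata.name:cloudgauge"
gcloud iam service-accounts list --project="${PROJECT_ID}" --filter="email:cloudgauge-sa"
gsutil ls -p "${PROJECT_ID}" | grep cloudgauge || echo "No CloudGauge buckets remain."
```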
Here's a breakdown of what each command in the script does:
- Delete Cloud Run Service: `gcloud run services delete` removes the main web application itself, stopping it from running and incurring costs.
- Delete Cloud Tasks Queue: `gcloud tasks queues delete` removes the `cloudgauge-scan-queue` queue that the application creates automatically.
- Delete GCS Bucket: `gsutil -m rm -r` deletes the `cloudgauge-reports-...` bucket and all the HTML/CSV reports stored inside it. The `-m` flag speeds this up when there are many report files.
- Delete Container Image: `gcloud container images delete` removes the container image that Cloud Build created and stored in Google Container Registry (GCR) during deployment, keeping your registry clean.
- Remove IAM Bindings: `gcloud ... remove-iam-policy-binding` is a critical step. Before deleting the service account, you should remove all the permissions (roles) you granted it at both the Organization and Project levels. This prevents "ghost" principals from showing up in your IAM policies.
- Delete Service Account: `gcloud iam service-accounts delete` is the final step. After removing its permissions, you can safely delete the `cloudgauge-sa` service account itself.
This is not an officially supported Google product. This project is not eligible for the Google Open Source Software Vulnerability Rewards Program.
This project is licensed under the Apache 2.0 License. See the LICENSE file for details.
For issues or feature requests, please file an issue on the project's GitHub page.
