A Rust library for retrieving carbon emission values from cloud providers.
Carbem provides a unified interface for querying carbon emission data from various cloud service providers. This library helps developers build more environmentally conscious applications by making it easy to access and analyze the carbon footprint of cloud infrastructure.
- 🌍 Multi-provider support: Unified API for different cloud providers
- ⚡ Async/await: Built with modern async Rust for high performance
- 🔒 Type-safe: Leverages Rust's type system for reliable carbon data handling
- 🚀 Easy to use: Simple and intuitive API design
- 🐍 Python Bindings: Native Python integration via PyO3
- 🔧 Flexible Filtering: Filter by regions, services, and resources
Add this to your Cargo.toml:
[dependencies]
carbem = "0.2.0"Install from PyPI:
pip install carbem-python
For development setup with maturin:
pip install maturin
maturin develop
For standalone Rust applications, use the builder pattern with environment variables:
use carbem::{CarbemClient, EmissionQuery, TimePeriod};
use chrono::{Utc, Duration};
#[tokio::main]
async fn main() -> carbem::Result<()> {
    // Configure client from environment variables
    let client = CarbemClient::new()
        .with_azure_from_env()?;

    // Create a query
    let query = EmissionQuery {
        provider: "azure".to_string(),
        regions: vec!["subscription-id".to_string()],
        time_period: TimePeriod {
            start: Utc::now() - Duration::days(30),
            end: Utc::now(),
        },
        services: Some(vec!["compute".to_string(), "storage".to_string()]),
        resources: None,
    };

    let emissions = client.query_emissions(&query).await?;
    for emission in emissions {
        println!("Service: {}, Emissions: {} kg CO2eq",
            emission.service.unwrap_or_default(),
            emission.emissions_kg_co2eq);
    }
    Ok(())
}
For Python applications, use the get_emissions_py function:
import carbem
import json
from datetime import datetime, timedelta
# Azure configuration
config = json.dumps({
"access_token": "your-azure-bearer-token"
})
# Query for last 30 days
end_date = datetime.utcnow()
start_date = end_date - timedelta(days=30)
query = json.dumps({
"start_date": start_date.strftime("%Y-%m-%dT%H:%M:%SZ"),
"end_date": end_date.strftime("%Y-%m-%dT%H:%M:%SZ"),
"regions": ["your-subscription-id"],
})
# Get emissions data
result = carbem.get_emissions_py("azure", config, query)
emissions = json.loads(result)
print(f"Found {len(emissions)} emission records")Create a .env file in your project root:
# Azure Carbon Emissions Configuration
CARBEM_AZURE_ACCESS_TOKEN=your_azure_bearer_token_here
# OR alternatively use:
# AZURE_TOKEN=your_azure_bearer_token_here
- CARBEM_AZURE_ACCESS_TOKEN: Azure access token
- AZURE_TOKEN: Alternative Azure access token variable
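Note that a standalone Rust binary does not load .env files automatically. A minimal sketch of wiring this up, assuming the dotenvy crate is added as a dependency (dotenvy is an illustrative choice, not something carbem depends on):
use carbem::CarbemClient;

fn main() -> carbem::Result<()> {
    // Load variables from .env into the process environment;
    // dotenvy is an assumed helper crate, not part of carbem.
    dotenvy::dotenv().ok();

    // The builder then picks up CARBEM_AZURE_ACCESS_TOKEN (or AZURE_TOKEN)
    let _client = CarbemClient::new().with_azure_from_env()?;
    Ok(())
}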
For Python applications, configuration is passed as JSON strings to the get_emissions_py function. See the Python API Documentation for detailed configuration examples and usage patterns.
The Azure provider requires minimal configuration:
use carbem::AzureConfig;

let config = AzureConfig {
    access_token: "your-bearer-token".to_string(),
};
A complete example:
use carbem::{CarbemClient, AzureConfig, EmissionQuery, TimePeriod};
use chrono::{Utc, Duration};
#[tokio::main]
async fn main() -> carbem::Result<()> {
    // Create a client and configure Azure provider
    let config = AzureConfig {
        access_token: "your-bearer-token".to_string(),
    };
    let client = CarbemClient::new()
        .with_azure(config)?;

    // Query carbon emissions for the last 30 days
    let query = EmissionQuery {
        provider: "azure".to_string(),
        regions: vec!["subscription-id".to_string()], // Use your subscription IDs
        time_period: TimePeriod {
            start: Utc::now() - Duration::days(30),
            end: Utc::now(),
        },
        services: None,
        resources: None,
    };

    let emissions = client.query_emissions(&query).await?;
    for emission in emissions {
        println!("Date: {}, Region: {}, Emissions: {} kg CO2eq",
            emission.time_period.start.format("%Y-%m-%d"),
            emission.region,
            emission.emissions_kg_co2eq);
    }
    Ok(())
}
- Report Types: All report types from the API are supported.
- Queries: All query parameters are supported.
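As a quick illustration, a sketch of a query that sets every available filter (the service and resource identifiers below are placeholders, not values taken from the Azure API):
use carbem::{EmissionQuery, TimePeriod};
use chrono::{Utc, Duration};

// All optional filters populated; replace the placeholder strings
// with real subscription, service, and resource identifiers.
let query = EmissionQuery {
    provider: "azure".to_string(),
    regions: vec!["subscription-id".to_string()],
    time_period: TimePeriod {
        start: Utc::now() - Duration::days(7),
        end: Utc::now(),
    },
    services: Some(vec!["compute".to_string()]),
    resources: Some(vec!["resource-id".to_string()]),
};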
Google Cloud Platform is not supported at the moment (October 11th, 2025). Carbon data is only available after exporting it to BigQuery, as discussed in this page. Retrieving it therefore requires querying the BigQuery API, which rules out a standard implementation for now.
AWS is not supported at the moment (October 11th, 2025). Data is only available through exports to S3 buckets, as discussed in this page. A dedicated endpoint existed but was discontinued on July 23rd, 2025 (ref).
- Core library infrastructure
- Azure Carbon Emission Reports API
- IBM Cloud Carbon Calculator
- Amazon Web Services (AWS)
- Google Cloud Platform (GCP)
- Additional providers planned
The library includes a comprehensive test suite:
# Run all tests
cargo test
# Run specific Azure provider tests
cargo test providers::azure
# Run with output
cargo test -- --nocapture
Test coverage includes:
- Provider creation and configuration
- Query conversion and validation
- Date parsing and time period handling
- Data conversion from Azure API responses
- Error handling for invalid configurations
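For example, an invalid configuration (such as a missing access token) surfaces as an Err value rather than a panic. A minimal sketch of handling it, assuming the error type implements Display as the carbem::Result examples above suggest:
use carbem::CarbemClient;

fn main() {
    // If neither CARBEM_AZURE_ACCESS_TOKEN nor AZURE_TOKEN is set,
    // the builder returns a configuration error.
    match CarbemClient::new().with_azure_from_env() {
        Ok(_client) => println!("Azure provider configured"),
        Err(e) => eprintln!("Configuration error: {e}"),
    }
}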
- API Documentation: Available on docs.rs
- Python API Reference - Detailed function documentation and usage patterns
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0).
This project aims to support sustainability efforts in cloud computing by making carbon emission data more accessible to developers and organizations.