# Fleet Telemetry Consumer

## Overview

Fleet Telemetry Consumer consumes vehicle telemetry data from Tesla's [Fleet Telemetry project](https://github.com/teslamotors/fleet-telemetry), processes it using Protobuf, and exports the results as Prometheus metrics for real-time monitoring and analysis. The application is containerized with Docker and designed for deployment in Kubernetes environments using Kustomize.

## Features

- **Kafka Consumer**: Consumes messages from a Kafka topic (`tesla_V` by default) containing vehicle telemetry data from Tesla’s Fleet Telemetry project.
- **Protobuf Processing**: Deserializes Protobuf data defined in Tesla's [vehicle_data.proto](https://github.com/teslamotors/fleet-telemetry/blob/main/protos/vehicle_data.proto) to process telemetry information such as location, door status, and various sensor values.
- **Prometheus Metrics**: Exposes vehicle telemetry data as Prometheus metrics for real-time monitoring.
- **Docker & Kubernetes**: Dockerized and equipped with `Kustomize` support for streamlined Kubernetes deployments.
- **Dashboard Integration**: Includes Grafana dashboards for visualizing telemetry data across the fleet.

## Prerequisites

Before setting up the Fleet Telemetry Consumer, ensure you have the following installed and configured:

- **Docker**: To build and run the application containers.
- **Docker Compose**: For orchestrating multi-container Docker applications.
- **Kubernetes**: For deploying the application in a cluster environment.
- **Kustomize**: To manage Kubernetes configurations.
- **Kafka**: A running Kafka cluster with the topic `tesla_V`.
- **Prometheus**: To scrape and store metrics exposed by the application.
- **Grafana**: For visualizing the metrics collected by Prometheus.

## Tesla's Fleet Telemetry Data

The application consumes telemetry data structured according to Tesla's [Protobuf definition](https://github.com/teslamotors/fleet-telemetry/blob/main/protos/vehicle_data.proto). The Protobuf message `Payload` includes fields for various vehicle data such as:

- `LocationValue`: GPS coordinates (latitude, longitude)
- `DoorValue`: Open/closed status of the vehicle’s doors
- `DoubleValue`, `FloatValue`, `IntValue`, etc.: Different sensor values
- `TimeValue`: Timestamp data related to telemetry events

The application extracts and processes these fields, converting them into Prometheus metrics for monitoring and analysis.

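For a sense of what that conversion looks like, here is a minimal sketch of the decoding step. It assumes the generated Go bindings from `vehicle_data.proto` are imported from `github.com/teslamotors/fleet-telemetry/protos`; the accessor names below are written against the upstream schema and standard `protoc-gen-go` output, and the `vehicle_data` gauge mirrors the sample query in the Prometheus section further down, but the repository's actual implementation may differ.

```go
package consumer

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/teslamotors/fleet-telemetry/protos"
	"google.golang.org/protobuf/proto"
)

// vehicleData holds the latest numeric reading per VIN and field, in the
// shape queried later in this README (vehicle_data{field="Latitude"}).
var vehicleData = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{Name: "vehicle_data", Help: "Latest numeric telemetry value per VIN and field."},
	[]string{"vin", "field"},
)

func init() {
	prometheus.MustRegister(vehicleData)
}

// handleRecord decodes one Kafka message value into a Payload and exports the
// numeric fields it understands; door states, strings, and other value types
// would need their own handling.
func handleRecord(value []byte) error {
	payload := &protos.Payload{}
	if err := proto.Unmarshal(value, payload); err != nil {
		return err
	}
	for _, datum := range payload.GetData() {
		field := datum.GetKey().String() // field name, e.g. "VehicleSpeed"
		switch v := datum.GetValue().GetValue().(type) {
		case *protos.Value_FloatValue:
			vehicleData.WithLabelValues(payload.GetVin(), field).Set(float64(v.FloatValue))
		case *protos.Value_DoubleValue:
			vehicleData.WithLabelValues(payload.GetVin(), field).Set(v.DoubleValue)
		case *protos.Value_IntValue:
			vehicleData.WithLabelValues(payload.GetVin(), field).Set(float64(v.IntValue))
		case *protos.Value_LocationValue:
			// Split a location into the Latitude/Longitude fields used in the
			// sample Prometheus query.
			vehicleData.WithLabelValues(payload.GetVin(), "Latitude").Set(v.LocationValue.GetLatitude())
			vehicleData.WithLabelValues(payload.GetVin(), "Longitude").Set(v.LocationValue.GetLongitude())
		}
	}
	return nil
}
```
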
## Configuration

The application requires a configuration file in YAML format. This file contains the Kafka consumer settings and the AWS integration settings. By default, the app looks for `examples/config.yaml`.

### Example `config.yaml`

```yaml
bootstrap.servers: "localhost:9092"
group.id: "fleet-telemetry-consumer"
auto.offset.reset: "earliest"
aws:
  access_key_id: "YOUR_AWS_ACCESS_KEY_ID"
  secret_access_key: "YOUR_AWS_SECRET_ACCESS_KEY"
  bucket:
    name: "ceph-tesla"
    host: "rook-ceph-rgw-my-store.rook-ceph.svc.cluster.local"
    port: 80
    protocol: "http"
    region: "us-east-1"
  enabled: true
```

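The loader itself is not reproduced in this README; the sketch below shows one way these settings could be read and handed to a Kafka consumer, assuming `gopkg.in/yaml.v3` for parsing and the confluent-kafka-go client (consistent with the librdkafka build in the Dockerfile later on). The struct, flag, and function names are illustrative rather than the project's actual API.

```go
package config

import (
	"flag"
	"os"

	"github.com/confluentinc/confluent-kafka-go/kafka"
	"gopkg.in/yaml.v3"
)

// Config mirrors examples/config.yaml; field names here are illustrative.
type Config struct {
	BootstrapServers string `yaml:"bootstrap.servers"`
	GroupID          string `yaml:"group.id"`
	AutoOffsetReset  string `yaml:"auto.offset.reset"`
	AWS              struct {
		AccessKeyID     string `yaml:"access_key_id"`
		SecretAccessKey string `yaml:"secret_access_key"`
		Bucket          struct {
			Name     string `yaml:"name"`
			Host     string `yaml:"host"`
			Port     int    `yaml:"port"`
			Protocol string `yaml:"protocol"`
			Region   string `yaml:"region"`
		} `yaml:"bucket"`
		Enabled bool `yaml:"enabled"`
	} `yaml:"aws"`
}

// Load reads the YAML file given by the -config flag (examples/config.yaml by
// default, matching the go run example later in this README).
func Load() (*Config, error) {
	path := flag.String("config", "examples/config.yaml", "path to the YAML configuration file")
	flag.Parse()

	raw, err := os.ReadFile(*path)
	if err != nil {
		return nil, err
	}
	cfg := &Config{}
	if err := yaml.Unmarshal(raw, cfg); err != nil {
		return nil, err
	}
	return cfg, nil
}

// NewConsumer builds a Kafka consumer from the loaded settings and subscribes
// to the telemetry topic.
func NewConsumer(cfg *Config) (*kafka.Consumer, error) {
	consumer, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": cfg.BootstrapServers,
		"group.id":          cfg.GroupID,
		"auto.offset.reset": cfg.AutoOffsetReset,
	})
	if err != nil {
		return nil, err
	}
	return consumer, consumer.SubscribeTopics([]string{"tesla_V"}, nil)
}
```
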
## Build and Run

### Using Makefile

The project includes a `Makefile` to simplify build and run commands.

#### Build the Docker Image

```bash
make build
```

#### Run the Application with Docker Compose

```bash
make run
```

### Docker Instructions

#### Build Docker Image Manually

```bash
docker buildx build --platform linux/amd64 --load -t fleet-telemetry-consumer .
```

#### Run with Docker Compose

```bash
docker compose up --build
```

### Kubernetes Deployment

Deploy the application in a Kubernetes environment using Kustomize.

#### Setup Kustomization

Ensure your `kustomization.yaml` includes the necessary resources:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ./fleet-telemetry-consumer
  - ./dashboards
```

#### Apply the Configuration

```bash
kubectl apply -k kustomization/
```

This command will deploy the Fleet Telemetry Consumer into your Kubernetes cluster along with the configured Grafana dashboards.

## Prometheus Metrics

The consumer exposes metrics at `/metrics` on port `2112`. Prometheus can scrape this endpoint to monitor various vehicle telemetry data, including:

- **Vehicle Location**: GPS coordinates (latitude, longitude)
- **Door States**: Open or closed status of each door
- **Time and Speed Metrics**: Timestamps and speed-related data
- **Sensor Data**: Boolean and numerical telemetry values

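The endpoint itself is served with the standard Prometheus Go client. A minimal sketch of exposing the registered collectors (such as the `vehicle_data` gauge sketched earlier) on port `2112`; the function name is illustrative:

```go
package metrics

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Serve exposes all registered collectors at /metrics on port 2112 so that
// Prometheus can scrape them.
func Serve() error {
	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.Handler())
	return http.ListenAndServe(":2112", mux)
}
```
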
### Sample Prometheus Query

```promql
vehicle_data{field="Latitude"}
```

## Grafana Dashboards

The application includes pre-configured Grafana dashboards for visualizing telemetry data:

- **Vehicle Locations**: Displays real-time locations of vehicles on a geomap.
- **Odometer**: Shows odometer readings per vehicle.
- **Battery Level**: Monitors battery levels across the fleet.
- **Temperature Readings**: Tracks inside and outside temperatures.
- **Vehicle Speed**: Visualizes speed metrics.
- **Gear Status**: Displays current gear states.
- **Battery Metrics**: Provides detailed battery performance insights.

### Importing Dashboards

Ensure that the dashboards are included in the Kubernetes deployment by verifying the `kustomization/dashboards/vehicle-data.json` file is correctly referenced.

## Examples

### Example Data

Sample telemetry data can be found in `examples/data.json`. This data follows the structure defined by Tesla's Protobuf schema and is used for testing and development purposes.

### Commands

Common commands for managing the application:

```bash
# Ensure dependencies are up to date
go mod tidy

# Run the application with a specific configuration
go run main.go -config examples/config.yaml

# Set environment variables for AWS integration
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
export AWS_BUCKET_HOST=rook-ceph-rgw-my-store.rook-ceph.svc.cluster.local
export AWS_BUCKET_NAME=ceph-tesla
export AWS_BUCKET_PORT=80
export AWS_BUCKET_PROTOCOL=http
export AWS_BUCKET_REGION=us-east-1
export AWS_ENABLED=true
```

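These AWS settings point the consumer at an S3-compatible Ceph RGW endpoint (served by Rook inside the cluster) rather than AWS itself. How the project wires this up internally is not shown in this README; the sketch below is one way such a client could be built, assuming the AWS SDK for Go v1 with path-style addressing. All names are illustrative.

```go
package storage

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// NewS3Client builds an S3 client pointed at the Ceph RGW service described
// by the AWS_* settings above. Path-style addressing is used because the
// bucket is reached as http://<host>:<port>/<bucket> rather than as a
// virtual-hosted subdomain.
func NewS3Client(accessKey, secretKey, host, protocol, region string, port int) (*s3.S3, error) {
	sess, err := session.NewSession(&aws.Config{
		Endpoint:         aws.String(fmt.Sprintf("%s://%s:%d", protocol, host, port)),
		Region:           aws.String(region),
		Credentials:      credentials.NewStaticCredentials(accessKey, secretKey, ""),
		S3ForcePathStyle: aws.Bool(true),
	})
	if err != nil {
		return nil, err
	}
	return s3.New(sess), nil
}
```
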
## Development

### Dependencies

The project manages dependencies using Go Modules. Ensure all dependencies are installed by running:

```bash
go mod download
```

### Dockerfile

The `Dockerfile` uses a multi-stage build to keep build times down and the final image small.

```dockerfile
# syntax=docker/dockerfile:1

FROM golang:1.22.5-bullseye AS build

# Install build dependencies and librdkafka dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    wget \
    libssl-dev \
    libsasl2-dev \
    libzstd-dev \
    pkg-config \
    liblz4-dev \
    && rm -rf /var/lib/apt/lists/*

# Build and install librdkafka from source
ENV LIBRDKAFKA_VERSION=1.9.2
RUN wget https://github.com/edenhill/librdkafka/archive/refs/tags/v${LIBRDKAFKA_VERSION}.tar.gz && \
    tar -xzf v${LIBRDKAFKA_VERSION}.tar.gz && \
    cd librdkafka-${LIBRDKAFKA_VERSION} && \
    ./configure --prefix=/usr && \
    make && make install && \
    ldconfig

WORKDIR /go/src/fleet-telemetry-consumer

COPY go.mod go.sum ./
RUN go mod download

COPY . .

# Build with dynamic linking
RUN go build -tags dynamic -o /go/bin/fleet-telemetry-consumer

# Use a minimal base image
FROM debian:bullseye-slim

WORKDIR /

# Install runtime dependencies
RUN apt-get update && apt-get install -y \
    libssl1.1 \
    libsasl2-2 \
    libzstd1 \
    liblz4-1 \
    && rm -rf /var/lib/apt/lists/*

COPY --from=build /go/bin/fleet-telemetry-consumer /fleet-telemetry-consumer

ENTRYPOINT ["/fleet-telemetry-consumer"]
```

## Contribution

Contributions are welcome! If you'd like to improve Fleet Telemetry Consumer, please follow these steps:

1. Fork the repository.
2. Create a new branch for your feature or bugfix.
3. Commit your changes with clear and descriptive messages.
4. Push your branch to your forked repository.
5. Open a pull request detailing your changes.

Please ensure your code adheres to the project's coding standards and passes all tests.

## License

This project is licensed under the [MIT License](LICENSE).

---

*For any questions or support, please open an issue on the [GitHub repository](https://github.com/rajsinghtech/fleet-telemetry-consumer/issues).*