diff --git a/CFPwireframe.md b/CFPwireframe.md deleted file mode 100644 index 9be8453..0000000 --- a/CFPwireframe.md +++ /dev/null @@ -1,63 +0,0 @@ -### Homepage -``` -+-----------------------------+ -| Homepage | -|-----------------------------| -| Headline | -| Introduction | -| [CTA Button] | -+-----------------------------+ -``` - -### Onboarding Flow -``` -+-----------------------------+ -| Onboarding Flow | -|-----------------------------| -| Step 1: Introduction | -| Step 2: Features Overview | -| Step 3: Set Up Profile | -| [Start Using the App] | -+-----------------------------+ -``` - -### Dashboard -``` -+-----------------------------+ -| Dashboard | -|-----------------------------| -| Carbon Footprint Overview | -| [Charts & Graphs] | -| Recommendations | -+-----------------------------+ -``` - -### User Profile -``` -+-----------------------------+ -| User Profile | -|-----------------------------| -| Profile Info | -| Settings & Preferences | -| Social Media Connections | -+-----------------------------+ -``` - -### Resource Center -``` -+-----------------------------+ -| Resource Center | -|-----------------------------| -| Articles & Videos | -| [Search & Filter] | -+-----------------------------+ -``` - -### Community -``` -+-----------------------------+ -| Community | -|-----------------------------| -| Forums & Discussions | -| User Groups & Challenges | -+-----------------------------+ diff --git a/CO2_emissions_per_capita,_2017_(Our_World_in_Data).svg b/CO2_emissions_per_capita,_2017_(Our_World_in_Data).svg new file mode 100644 index 0000000..b0c4ff0 --- /dev/null +++ b/CO2_emissions_per_capita,_2017_(Our_World_in_Data).svg @@ -0,0 +1,12 @@ +[SVG markup omitted; recoverable text content: "CO₂ emissions per capita, 2017. Average carbon dioxide (CO₂) emissions per capita measured in tonnes per year." Legend: No data, <0 t, 1 t, 2.5 t, 5 t, 7.5 t, 10 t, 12.5 t, 15 t, 17.5 t, 20 t, 25 t, >50 t. Source: OWID based on CDIAC; Global Carbon Project; Gapminder & UN. OurWorldInData.org/co2-and-other-greenhouse-gas-emissions/] \ No newline at end of file diff --git a/Electricity_consumption_per_country_map.png b/Electricity_consumption_per_country_map.png new file mode 100644 index 0000000..41ab9b4 Binary files /dev/null and b/Electricity_consumption_per_country_map.png differ diff --git a/README.md b/README.md index 0ea1419..4d2608e 100644 --- a/README.md +++ b/README.md @@ -4,124 +4,9 @@ The AI Onboarding webApp aims to address the pressing issue of carbon footprint reduction through innovative technology. By leveraging autonomous small satellites (smallsats) for earth observation, this webApp provides users with personalized insights and recommendations to help them reduce their carbon footprint. The goal is to empower individuals and organizations to take actionable steps towards a more sustainable future. -## Project Structure - -This project includes the following components: -1. **Wireframe Design** -2. **Prototype** -3. 
**Mockup Design** - -## Wireframe Design - -### Homepage -``` -+-----------------------------+ -| Homepage | -|-----------------------------| -| Headline | -| Introduction | -| [CTA Button] | -+-----------------------------+ -``` - -### Onboarding Flow -``` -+-----------------------------+ -| Onboarding Flow | -|-----------------------------| -| Step 1: Introduction | -| Step 2: Features Overview | -| Step 3: Set Up Profile | -| [Start Using the App] | -+-----------------------------+ -``` - -### Dashboard -``` -+-----------------------------+ -| Dashboard | -|-----------------------------| -| Carbon Footprint Overview | -| [Charts & Graphs] | -| Recommendations | -+-----------------------------+ -``` - -### User Profile -``` -+-----------------------------+ -| User Profile | -|-----------------------------| -| Profile Info | -| Settings & Preferences | -| Social Media Connections | -+-----------------------------+ -``` - -### Resource Center -``` -+-----------------------------+ -| Resource Center | -|-----------------------------| -| Articles & Videos | -| [Search & Filter] | -+-----------------------------+ -``` - -### Community -``` -+-----------------------------+ -| Community | -|-----------------------------| -| Forums & Discussions | -| User Groups & Challenges | -+-----------------------------+ -``` - -## Prototype - -The interactive prototype can be found [here](https://aton4st.blogspot.com). It includes detailed mockups and user flows to illustrate the user experience and interactions. - -## Mockup Design - -### Homepage Mockup -![Homepage Mockup](https://github.com/aimtyaem/EOInfo/blob/ea27746647fb4cf297cf11372eb35207329a6180/1739718419%20(1).jpg) - -### Onboarding Flow Mockup -![Onboarding Flow Mockup](#) - -### Dashboard Mockup -![Dashboard Mockup](#) - -### User Profile Mockup -![User Profile Mockup](#) - -### Resource Center Mockup -![Resource Center Mockup](#) - -### Community Mockup -![Community Mockup](#) - -## Team Members - -- **Project Manager**: Oversees project timelines, coordinates tasks, ensures communication, manages resources. -- **Frontend Developer**: Designs user interface, implements interactive elements, ensures responsive design. -- **Backend Developer**: Manages server-side logic, databases, API integration, ensures security. -- **AI Specialist**: Develops machine learning models, trains AI systems, integrates AI with the web app. -- **Data Scientist**: Collects and processes data, performs data analysis, ensures data accuracy. -- **UX/UI Designer**: Designs user-friendly interfaces, creates visual designs, conducts user testing. -- **Sustainability Expert**: Provides sustainability insights, suggests carbon reduction strategies, validates data. -- **Marketing Specialist**: Promotes the web app, engages with users, gathers feedback, manages social media. - -## Getting Started - -To get started with the development of this project, follow the steps below: -1. Clone the repository. -2. Install necessary dependencies. -3. Follow the wireframe and mockup designs to develop the frontend and backend components. -4. Integrate AI models and data processing modules. -5. Conduct user testing and gather feedback for improvements. - -We hope this project inspires and empowers users to contribute to a sustainable future by reducing their carbon footprints with the help of advanced technology. - -For more information, please contact aimt16@hotmail.com. \ No newline at end of file +## Datasets +1. CSV files. +2. Raster Images. +3. Manual input. +4. Cloud carbon report. 
+5. Electricity expenses incurred from grid power. diff --git a/cloud_carbon_report (1).md b/cloud_carbon_report (1).md new file mode 100644 index 0000000..8ed19de --- /dev/null +++ b/cloud_carbon_report (1).md @@ -0,0 +1,61 @@ +# Carbon Footprint Report for CloudTech Solutions + +## Energy Cost Analysis +| Energy Source | Cost per Unit | +|---------------|-------------| +| electricity | $85.00 | +| natural_gas | $12.00 | +| fuel_oil | $0.75 | + +## Cloud Infrastructure Profile +### AWS Region Emissions Factors +| Region | Emissions Factor (tCO2e/kWh) | Data Source | +|--------|------------------------------|-------------| +| us-east-1 | 0.000416 | EPA | +| us-east-2 | 0.000440 | EPA | +| us-west-1 | 0.000351 | EPA | +| us-west-2 | 0.000351 | EPA | +| us-gov-east-1 | 0.000416 | EPA | +| us-gov-west-1 | 0.000351 | EPA | +| af-south-1 | 0.000928 | carbonfootprint.com | +| ap-east-1 | 0.000810 | carbonfootprint.com | +| ap-south-1 | 0.000708 | carbonfootprint.com | +| ap-northeast-3 | 0.000506 | carbonfootprint.com | +| ap-northeast-2 | 0.000500 | carbonfootprint.com | +| ap-southeast-1 | 0.000409 | EMA Singapore | +| ap-southeast-2 | 0.000790 | carbonfootprint.com | +| ap-northeast-1 | 0.000506 | carbonfootprint.com | +| ca-central-1 | 0.000130 | carbonfootprint.com | +| cn-north-1 | 0.000555 | carbonfootprint.com | +| cn-northwest-1 | 0.000555 | carbonfootprint.com | +| eu-central-1 | 0.000338 | EEA | +| eu-west-1 | 0.000316 | EEA | +| eu-west-2 | 0.000228 | EEA | +| eu-south-1 | 0.000233 | EEA | +| eu-west-3 | 0.000052 | EEA | +| eu-north-1 | 0.000008 | EEA | +| me-south-1 | 0.000732 | carbonfootprint.com | +| sa-east-1 | 0.000074 | carbonfootprint.com | + +For reference, a workload drawing an average of 5 W around the clock for a 720-hour month in us-east-1 emits roughly 0.005 kW × 720 h × 0.000416 tCO2e/kWh ≈ 1.5 kg CO2e. + +### Server Architecture Efficiency +| Architecture | Power Consumption Range | +|--------------|--------------------------| +| Graviton | 0.47-1.69 W | +| Ivy Bridge | 3.04-8.25 W | +| Sandy Bridge | 2.17-8.58 W | +| Haswell | 1.90-6.01 W | +| Sky Lake | 0.64-4.19 W | +| Cascade Lake | 0.64-3.97 W | +| EPYC 2nd Gen | 0.47-1.69 W | +| Graviton2 | 0.47-1.69 W | +| Broadwell | 0.71-3.69 W | +| EPYC 1st Gen | 0.82-2.55 W | +| Coffee Lake | 1.14-5.42 W | + +## Optimization Recommendations +1. **Region Optimization**: Consider shifting workloads to lower-emission regions like eu-north-1 +2. **Architecture Upgrade**: Migrate to Graviton-based instances for better energy efficiency +3. **Renewable Energy**: Explore AWS Renewable Energy Programs for carbon offset +4. 
**Instance Right-Sizing**: Use compute-optimized architectures for energy-intensive workloads \ No newline at end of file diff --git a/coefficients-aws-use.csv b/coefficients-aws-use.csv new file mode 100644 index 0000000..50d2ff2 --- /dev/null +++ b/coefficients-aws-use.csv @@ -0,0 +1,12 @@ +,Architecture,Min Watts,Max Watts,GB/Chip +0,Graviton,0.4742621527777778,1.6929615162037037,129.77777777777777 +1,Ivy Bridge,3.0369270833333335,8.248611111111112,14.933333333333334 +2,Sandy Bridge,2.1694411458333334,8.575357663690477,16.480916030534353 +3,Haswell,1.9005681818181814,6.012910353535353,27.310344827586206 +4,Sky Lake,0.6446044454253452,4.193436438541878,80.43037974683544 +5,Cascade Lake,0.6389493581523519,3.9673047343937564,98.11764705882354 +6,EPYC 2nd Gen,0.4742621527777778,1.6929615162037037,129.77777777777777 +7,Graviton2,0.4742621527777778,1.6929615162037037,129.77777777777777 +8,Broadwell,0.7128342245989304,3.6853275401069516,69.6470588235294 +9,EPYC 1st Gen,0.82265625,2.553125,89.6 +10,Coffee Lake,1.138425925925926,5.421759259259258,19.555555555555557 diff --git a/energy_costs.csv b/energy_costs.csv new file mode 100644 index 0000000..e194261 --- /dev/null +++ b/energy_costs.csv @@ -0,0 +1,4 @@ +energy_source,cost_per_unit +electricity,$85.00 +natural_gas,$12.00 +fuel_oil,$0.75 \ No newline at end of file diff --git a/grid-emissions-factors-aws.csv b/grid-emissions-factors-aws.csv new file mode 100644 index 0000000..187f6e5 --- /dev/null +++ b/grid-emissions-factors-aws.csv @@ -0,0 +1,26 @@ +Region,Country,NERC Region,CO2e (metric ton/kWh),Source +us-east-1,United States,SERC,0.000415755,EPA +us-east-2,United States,RFC,0.000440187,EPA +us-west-1,United States,WECC,0.000350861,EPA +us-west-2,United States,WECC,0.000350861,EPA +us-gov-east-1,United States,SERC,0.000415755,EPA +us-gov-west-1,United States,WECC,0.000350861,EPA +af-south-1,South Africa,,0.000928,carbonfootprint.com +ap-east-1,Hong Kong,,0.00081,carbonfootprint.com +ap-south-1,India,,0.000708,carbonfootprint.com +ap-northeast-3,Japan,,0.000506,carbonfootprint.com +ap-northeast-2,South Korea,,0.0005,carbonfootprint.com +ap-southeast-1,Singapore,,0.0004085,EMA Singapore +ap-southeast-2,Australia,,0.00079,carbonfootprint.com +ap-northeast-1,Japan,,0.000506,carbonfootprint.com +ca-central-1,Canada,,0.00013,carbonfootprint.com +cn-north-1,China,,0.000555,carbonfootprint.com +cn-northwest-1,China,,0.000555,carbonfootprint.com +eu-central-1,Germany,,0.000338,EEA +eu-west-1,Ireland,,0.000316,EEA +eu-west-2,England,,0.000228,EEA +eu-south-1,Italy,,0.000233,EEA +eu-west-3,France,,0.000052,EEA +eu-north-1,Sweden,,0.000008,EEA +me-south-1,Bahrain,,0.000732,carbonfootprint.com +sa-east-1,Brazil,,0.000074,carbonfootprint.com \ No newline at end of file diff --git a/notebooks/python-api-bindings-GHG.ipynb b/notebooks/python-api-bindings-GHG.ipynb new file mode 100644 index 0000000..ed27c8f --- /dev/null +++ b/notebooks/python-api-bindings-GHG.ipynb @@ -0,0 +1,1917 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "KPtzOzgJ-Ak2" + }, + "source": [ + "# Edge Impulse Python API Bindings Example" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "sKIz3w8K_dN1" + }, + "source": [ + "[![View in Edge Impulse docs](https://raw.githubusercontent.com/edgeimpulse/notebooks/main/.assets/images/ei-badge.svg)](https://docs.edgeimpulse.com/docs/tutorials/api-examples/python-api-bindings-example)\n", + "[![Open in Google 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/edgeimpulse/notebooks/blob/main/notebooks/python-api-bindings-example.ipynb)\n", + "[![View on GitHub](https://raw.githubusercontent.com/edgeimpulse/notebooks/main/.assets/images/badge-view-on-github.svg)](https://github.com/edgeimpulse/notebooks/blob/main/notebooks/python-api-bindings-example.ipynb)\n", + "[![Download notebook](https://raw.githubusercontent.com/edgeimpulse/notebooks/main/.assets/images/badge-download-notebook.svg)](https://raw.githubusercontent.com/edgeimpulse/notebooks/main/notebooks/python-api-bindings-example.ipynb)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "gLWb8AoZ_ZyY" + }, + "source": [ + "The [Python SDK](https://docs.edgeimpulse.com/docs/tools/edge-impulse-python-sdk) is built on top of the [Edge Impulse Python API bindings](https://pypi.org/project/edgeimpulse-api/), which are known as the _edgeimpulse_api_ package. These are Python wrappers for all of the [web API calls](https://docs.edgeimpulse.com/reference/edge-impulse-api/edge-impulse-api) that you can use to interact with Edge Impulse projects programmatically (i.e. without needing to use the Studio graphical interface).\n", + "\n", + "The API reference guide for using the Python API bindings can be found [here](https://docs.edgeimpulse.com/reference/python-api-bindings/edgeimpulse_api).\n", + "\n", + "This example will walk you through the process of using the Edge Impulse API bindings to upload data, define an impulse, process features, train a model, and deploy the impulse as a C++ library.\n", + "\n", + "After creating your project and copying the API key, feel free to leave the project open in a browser window so you can watch the changes as we make API calls. You might need to refresh the browser after each call to see the changes take effect.\n", + "\n", + "> **Important!** Running this notebook will add data and remove any current features and models in your project. We highly recommend creating a new project when running this notebook! Don't say we didn't warn you if you mess up an existing project."
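Since the bindings are thin wrappers over the REST API, it can help to see what one of these calls looks like as a plain HTTP request before diving into the wrapped versions. The sketch below lists your projects using the `requests` package; the `/api/projects` path and the `x-api-key` header are assumptions inferred from the URLs and headers that appear later in this notebook, so verify them against the API reference before relying on them.

```python
# Hypothetical raw-REST equivalent of the list_projects binding call.
# The /api/projects path and x-api-key header are assumptions; check
# https://docs.edgeimpulse.com/reference/edge-impulse-api/edge-impulse-api
import requests

API_HOST = "https://studio.edgeimpulse.com/v1"
API_KEY = "ei_..."  # paste your Edge Impulse API key here

response = requests.get(
    f"{API_HOST}/api/projects",
    headers={"x-api-key": API_KEY},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

Every binding call used below reduces to a request like this one, which is handy when debugging or when no wrapper exists yet for a newer endpoint.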
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "id": "TFny1qVW99dN", + "outputId": "4b0687dc-b39e-4365-e5c7-3644f353e0a4", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Collecting edgeimpulse-api\n", + " Downloading edgeimpulse_api-1.75.2-py3-none-any.whl.metadata (1.5 kB)\n", + "Requirement already satisfied: requests in /usr/local/lib/python3.12/dist-packages (2.32.4)\n", + "Collecting aenum<4.0.0,>=3.1.11 (from edgeimpulse-api)\n", + " Downloading aenum-3.1.16-py3-none-any.whl.metadata (3.8 kB)\n", + "Requirement already satisfied: pydantic<3,>=1.10.17 in /usr/local/lib/python3.12/dist-packages (from edgeimpulse-api) (2.11.9)\n", + "Requirement already satisfied: python_dateutil<3.0.0,>=2.5.3 in /usr/local/lib/python3.12/dist-packages (from edgeimpulse-api) (2.9.0.post0)\n", + "Collecting urllib3<2.0.0,>=1.25.3 (from edgeimpulse-api)\n", + " Downloading urllib3-1.26.20-py2.py3-none-any.whl.metadata (50 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m50.1/50.1 kB\u001b[0m \u001b[31m1.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hRequirement already satisfied: charset_normalizer<4,>=2 in /usr/local/lib/python3.12/dist-packages (from requests) (3.4.3)\n", + "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.12/dist-packages (from requests) (3.10)\n", + "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.12/dist-packages (from requests) (2025.8.3)\n", + "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.12/dist-packages (from pydantic<3,>=1.10.17->edgeimpulse-api) (0.7.0)\n", + "Requirement already satisfied: pydantic-core==2.33.2 in /usr/local/lib/python3.12/dist-packages (from pydantic<3,>=1.10.17->edgeimpulse-api) (2.33.2)\n", + "Requirement already satisfied: typing-extensions>=4.12.2 in /usr/local/lib/python3.12/dist-packages (from pydantic<3,>=1.10.17->edgeimpulse-api) (4.15.0)\n", + "Requirement already satisfied: typing-inspection>=0.4.0 in /usr/local/lib/python3.12/dist-packages (from pydantic<3,>=1.10.17->edgeimpulse-api) (0.4.2)\n", + "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.12/dist-packages (from python_dateutil<3.0.0,>=2.5.3->edgeimpulse-api) (1.17.0)\n", + "Downloading edgeimpulse_api-1.75.2-py3-none-any.whl (1.6 MB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.6/1.6 MB\u001b[0m \u001b[31m21.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hDownloading aenum-3.1.16-py3-none-any.whl (165 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m165.6/165.6 kB\u001b[0m \u001b[31m10.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hDownloading urllib3-1.26.20-py2.py3-none-any.whl (144 kB)\n", + "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m144.2/144.2 kB\u001b[0m \u001b[31m10.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[?25hInstalling collected packages: aenum, urllib3, edgeimpulse-api\n", + " Attempting uninstall: urllib3\n", + " Found existing installation: urllib3 2.5.0\n", + " Uninstalling urllib3-2.5.0:\n", + " Successfully uninstalled urllib3-2.5.0\n", + "Successfully installed aenum-3.1.16 edgeimpulse-api-1.75.2 urllib3-1.26.20\n" + ] + } + ], + "source": [ + "# Install the Edge Impulse API bindings and the requests package\n", + 
"!python -m pip install edgeimpulse-api requests" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "id": "kV6EOSOuC9nV" + }, + "outputs": [], + "source": [ + "import json\n", + "import re\n", + "import os\n", + "import pprint\n", + "import time\n", + "\n", + "import requests" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "id": "IppiSCw4_0eH" + }, + "outputs": [], + "source": [ + "# Import the API objects we plan to use\n", + "from edgeimpulse_api import (\n", + " ApiClient,\n", + " BuildOnDeviceModelRequest,\n", + " Configuration,\n", + " DeploymentApi,\n", + " DSPApi,\n", + " DSPConfigRequest,\n", + " GenerateFeaturesRequest,\n", + " Impulse,\n", + " ImpulseApi,\n", + " JobsApi,\n", + " ProjectsApi,\n", + " SetKerasParameterRequest,\n", + " StartClassifyJobRequest,\n", + " UpdateProjectRequest,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "tHum_KkPAfhG" + }, + "source": [ + "You will need to obtain an API key from an Edge Impulse project. Log into [edgeimpulse.com](https://edgeimpulse.com/) and create a new project. Open the project, navigate to **Dashboard** and click on the **Keys** tab to view your API keys. Double-click on the API key to highlight it, right-click, and select **Copy**.\n", + "\n", + "![Copy API key from Edge Impulse project](https://raw.githubusercontent.com/edgeimpulse/notebooks/main/.assets/images/python-sdk-copy-ei-api-key.png)\n", + "\n", + "Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.\n", + "\n", + "Paste that API key string in the `EI_API_KEY` value in the following cell:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "id": "GpIaKwEJAhpI" + }, + "outputs": [], + "source": [ + "# Settings\n", + "API_KEY = \"ei_b42a548790554d9ffa4cd6f624e480573afccf4a670dbfcddf33085c4f4da15f\" # Change this to your Edge Impulse API key\n", + "API_HOST = \"https://studio.edgeimpulse.com/v1\"\n", + "DATASET_PATH = \"dataset/gestures\"\n", + "OUTPUT_PATH = \".\"" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "W0qE0bWCrNvP" + }, + "source": [ + "## Initialize API clients\n", + "\n", + "The Python API bindings use a series of submodules, each encapsulating one of the API subsections (e.g. Projects, DSP, Learn, etc.). To use these submodules, you need to instantiate a generic API module and use that to instantiate the individual API objects. We'll use these objects to make the API calls later.\n", + "\n", + "To configure a client, you generally create a configuration object (often from a dict) and then pass that object as an argument to the client." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": { + "id": "NB0g7vxErNQF" + }, + "outputs": [], + "source": [ + "# Create top-level API client\n", + "config = Configuration(\n", + " host=API_HOST,\n", + " api_key={\"ApiKeyAuthentication\": API_KEY}\n", + ")\n", + "client = ApiClient(config)\n", + "\n", + "# Instantiate sub-clients\n", + "deployment_api = DeploymentApi(client)\n", + "dsp_api = DSPApi(client)\n", + "impulse_api = ImpulseApi(client)\n", + "jobs_api = JobsApi(client)\n", + "projects_api = ProjectsApi(client)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "lPOr6bSjqse4" + }, + "source": [ + "## Initialize project\n", + "\n", + "Before uploading data, we should make sure the project is in the regular impulse flow mode, rather than [BYOM mode](https://docs.edgeimpulse.com/docs/edge-impulse-studio/bring-your-own-model-byom). We'll also need the project ID for most of the other API calls in the future.\n", + "\n", + "Notice that the general pattern for calling API functions is to instantiate a configuration/request object and pass it to the API method that's part of the submodule. You can find which parameters a specific API call expects by looking at [the call's documentation page](https://docs.edgeimpulse.com/reference/edge-impulse-api/projects/update_project).\n", + "\n", + "API calls (links to associated documentation):\n", + "\n", + " * [Projects / List (active) projects](https://docs.edgeimpulse.com/reference/edge-impulse-api/projects/list_active_projects)\n", + " * [Projects / Update project](https://docs.edgeimpulse.com/reference/edge-impulse-api/projects/update_project)" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": { + "id": "AFOytMLU_ulh", + "outputId": "d41e49a6-97a5-4977-d015-0da34f193a2c", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Project ID: 797297\n" + ] + } + ], + "source": [ + "# Get the project ID, which we'll need for future API calls\n", + "response = projects_api.list_projects()\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") == False:\n", + " raise RuntimeError(\"Could not obtain the project ID.\")\n", + "else:\n", + " project_id = response.projects[0].id\n", + "\n", + "# Print the project ID\n", + "print(f\"Project ID: {project_id}\")" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "id": "cWggMwaIqrpS", + "outputId": "45af2019-2c8c-464c-ece2-22d0d8379c58", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Project is now in impulse workflow.\n" + ] + } + ], + "source": [ + "# Create request object with the required parameters\n", + "update_project_request = UpdateProjectRequest.from_dict({\n", + " \"inPretrainedModelFlow\": False,\n", + "})\n", + "\n", + "# Update the project and check the response for errors\n", + "response = projects_api.update_project(\n", + " project_id=project_id,\n", + " update_project_request=update_project_request,\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") == False:\n", + " raise RuntimeError(\"Could not obtain the project ID.\")\n", + "else:\n", + " print(\"Project is now in impulse workflow.\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "z_GzBa0YBzGo" + }, + "source": [ + "## Upload dataset\n", + "\n", + "We'll start by downloading the gesture 
dataset from https://docs.edgeimpulse.com/docs/pre-built-datasets/continuous-gestures. Note that the [ingestion API](https://docs.edgeimpulse.com/reference/data-ingestion/ingestion-api) is separate from the regular Edge Impulse API: the URL and interface are different. As a result, we must construct the request manually and cannot rely on the Python API bindings.\n", + "\n", + "We rely on the ingestion service using the string before the first period in the filename to determine the label. For example, \"idle.1.cbor\" will be automatically assigned the label \"idle.\" If you wish to set a label manually, you must specify the `x-label` parameter in the headers. Note that you can only define a label this way when uploading a group of data at a time. For example, setting `\"x-label\": \"idle\"` in the headers would give all data uploaded with that call the label \"idle.\"\n", + "\n", + "API calls used with associated documentation:\n", + "\n", + " * [Ingestion service](https://docs.edgeimpulse.com/reference/data-ingestion/ingestion-api)" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": { + "id": "InjgAOyRAn6z" + }, + "outputs": [], + "source": [ + "# Download and unzip gesture dataset\n", + "!mkdir -p dataset/\n", + "!wget -P dataset -q https://cdn.edgeimpulse.com/datasets/gestures.zip\n", + "!unzip -q dataset/gestures.zip -d {DATASET_PATH}" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "id": "OGMm_7ELHMFb" + }, + "outputs": [], + "source": [ + "def upload_files(api_key, path, subset):\n", + " \"\"\"\n", + " Upload files in the given path/subset (where subset is \"training\" or\n", + " \"testing\")\n", + " \"\"\"\n", + "\n", + " # Construct request\n", + " url = f\"https://ingestion.edgeimpulse.com/api/{subset}/files\"\n", + " headers = {\n", + " \"x-api-key\": api_key,\n", + " \"x-disallow-duplicates\": \"true\",\n", + " }\n", + "\n", + " # Get file handles and create dataset to upload\n", + " files = []\n", + " file_list = os.listdir(os.path.join(path, subset))\n", + " for file_name in file_list:\n", + " file_path = os.path.join(path, subset, file_name)\n", + " if os.path.isfile(file_path):\n", + " file_handle = open(file_path, \"rb\")\n", + " files.append((\"data\", (file_name, file_handle, \"multipart/form-data\")))\n", + "\n", + " # Upload the files\n", + " response = requests.post(\n", + " url=url,\n", + " headers=headers,\n", + " files=files,\n", + " )\n", + "\n", + " # Print any errors for files that did not upload\n", + " upload_responses = response.json()[\"files\"]\n", + " for resp in upload_responses:\n", + " if not resp[\"success\"]:\n", + " print(resp)\n", + "\n", + " # Close all the handles\n", + " for handle in files:\n", + " handle[1][1].close()" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": { + "id": "8witLfBgH-Ay", + "outputId": "af447da7-702b-4bac-fc75-f631cb7b925a", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Uploading training dataset...\n", + "{'success': False, 'error': 'An item with this hash already exists (ids: 2287781475)'}\n", + "{'success': False, 'error': 'An item with this hash already exists (ids: 2287781500)'}\n", + "{'success': False, 'error': 'An item with this hash already exists (ids: 2287781508)'}\n", + "Uploading testing dataset...\n" + ] + } + ], + "source": [ + "# Upload the dataset to the project\n", + "print(\"Uploading training dataset...\")\n", + 
"upload_files(API_KEY, DATASET_PATH, \"training\")\n", + "print(\"Uploading testing dataset...\")\n", + "upload_files(API_KEY, DATASET_PATH, \"testing\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "8isx_nKdOqSs" + }, + "source": [ + "## Create an impulse\n", + "\n", + "Now that we uploaded our data, it's time to create an impulse. An \"impulse\" is a combination of processing (feature extraction) and learning blocks. The general flow of data is:\n", + "\n", + "> data -> input block -> processing block(s) -> learning block(s)\n", + "\n", + "Only the processing and learning blocks make up the \"impulse.\" However, we must still specify the input block, as it allows us to perform preprocessing, like windowing (for time series data) or cropping/scaling (for image data).\n", + "\n", + "Your project will have one input block, but it can contain multiple processing and learning blocks. Specific outputs from the processing block can be specified as inputs to the learning blocks. However, for simplicity, we'll just show one processing block and one learning block.\n", + "\n", + "> **Note:** Historically, processing blocks were called \"DSP blocks,\" as they focused on time series data. In Studio, the name has been changed to \"Processing block,\" as the blocks work with different types of data, but you'll see it referred to as \"DSP block\" in the API.\n", + "\n", + "It's important that you define the input block with the same parameters as your captured data, especially the sampling rate! Additionally, the processing block axes names **must** match up with their names in the dataset.\n", + "\n", + "API calls (links to associated documentation):\n", + "\n", + " * [Impulse / Get impulse blocks](https://docs.edgeimpulse.com/reference/edge-impulse-api/impulse/get_impulse_blocks)\n", + " * [Impulse / Delete impulse](https://docs.edgeimpulse.com/reference/edge-impulse-api/impulse/delete_impulse)\n", + " * [Impulse / Create impulse](https://docs.edgeimpulse.com/reference/edge-impulse-api/impulse/create_impulse)" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": { + "id": "Djn91Lq-ZpR8" + }, + "outputs": [], + "source": [ + "# To start, let's fetch a list of all the available blocks\n", + "response = impulse_api.get_impulse_blocks(\n", + " project_id=project_id\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " raise RuntimeError(\"Could not get impulse blocks.\")" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "id": "nTOB4175asrn", + "outputId": "df3ba75f-51ce-48ed-bb8e-ab8b86fa025b", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Input blocks\n", + "[\n", + " {\n", + " \"type\": \"time-series\",\n", + " \"title\": \"Time series data\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Operates on time series sensor data like vibration or audio data. 
Lets you slice up data into windows.\",\n", + " \"name\": \"Time series\",\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"image\",\n", + " \"title\": \"Images\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Processes discrete images for object detection or classification.\",\n", + " \"name\": \"Image\",\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"features\",\n", + " \"title\": \"Pre-processed features\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Processes pre-processed features, or non time-series data.\",\n", + " \"name\": \"Features\",\n", + " \"blockType\": \"official\"\n", + " }\n", + "]\n" + ] + } + ], + "source": [ + "# Print the available input blocks\n", + "print(\"Input blocks\")\n", + "print(json.dumps(json.loads(response.to_json())[\"inputBlocks\"], indent=2))" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": { + "id": "7UIhLBJLa2U-", + "outputId": "b9bdc649-d720-4e56-ac07-3627b70abd35", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Processing blocks\n", + "[\n", + " {\n", + " \"type\": \"flatten\",\n", + " \"title\": \"Flatten\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Flatten an axis into a single value, useful for slow-moving averages like temperature data, in combination with other blocks.\",\n", + " \"name\": \"Flatten\",\n", + " \"recommended\": true,\n", + " \"experimental\": false,\n", + " \"latestImplementationVersion\": 1,\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"image\",\n", + " \"title\": \"Image\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Preprocess and normalize image data, and optionally reduce the color depth.\",\n", + " \"name\": \"Image\",\n", + " \"recommended\": true,\n", + " \"experimental\": false,\n", + " \"latestImplementationVersion\": 1,\n", + " \"blockType\": \"official\",\n", + " \"namedAxes\": [\n", + " {\n", + " \"name\": \"Image\",\n", + " \"required\": true\n", + " }\n", + " ]\n", + " },\n", + " {\n", + " \"type\": \"mfcc\",\n", + " \"title\": \"Audio (MFCC)\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Extracts features from audio signals using Mel Frequency Cepstral Coefficients, great for human voice.\",\n", + " \"name\": \"MFCC\",\n", + " \"recommended\": true,\n", + " \"experimental\": false,\n", + " \"latestImplementationVersion\": 4,\n", + " \"blockType\": \"official\",\n", + " \"namedAxes\": [\n", + " {\n", + " \"name\": \"Signal\",\n", + " \"description\": \"The input signal to create an MFCC spectrogram from\",\n", + " \"required\": true\n", + " }\n", + " ]\n", + " },\n", + " {\n", + " \"type\": \"mfe\",\n", + " \"title\": \"Audio (MFE)\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Extracts a spectrogram from audio signals using Mel-filterbank energy features, great for both voice and non-voice audio.\",\n", + " \"name\": \"MFE\",\n", + " \"recommended\": true,\n", + " \"experimental\": false,\n", + " \"latestImplementationVersion\": 4,\n", + " \"blockType\": \"official\",\n", + " \"namedAxes\": [\n", + " {\n", + " \"name\": \"Signal\",\n", + " \"description\": \"The input signal to create an MFE spectrogram from\",\n", + " \"required\": true\n", + " }\n", + " ]\n", + " },\n", + " {\n", + " \"type\": \"spectral-analysis\",\n", + " \"title\": \"Spectral Analysis\",\n", + " \"author\": 
\"Edge Impulse\",\n", + " \"description\": \"Great for analyzing repetitive motion, such as data from accelerometers. Extracts the frequency and power characteristics of a signal over time.\",\n", + " \"name\": \"Spectral features\",\n", + " \"recommended\": true,\n", + " \"experimental\": false,\n", + " \"latestImplementationVersion\": 4,\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"spectrogram\",\n", + " \"title\": \"Spectrogram\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Extracts a spectrogram from audio or sensor data, great for non-voice audio or data with continuous frequencies.\",\n", + " \"name\": \"Spectrogram\",\n", + " \"recommended\": true,\n", + " \"experimental\": false,\n", + " \"latestImplementationVersion\": 4,\n", + " \"blockType\": \"official\",\n", + " \"namedAxes\": [\n", + " {\n", + " \"name\": \"Signal\",\n", + " \"description\": \"The input signal to create a spectrogram from\",\n", + " \"required\": true\n", + " }\n", + " ]\n", + " },\n", + " {\n", + " \"type\": \"syntiant\",\n", + " \"title\": \"Audio (Syntiant)\",\n", + " \"author\": \"Syntiant\",\n", + " \"description\": \"Syntiant only. Compute log Mel-filterbank energy features from an audio signal.\",\n", + " \"name\": \"Syntiant\",\n", + " \"recommended\": true,\n", + " \"experimental\": true,\n", + " \"latestImplementationVersion\": 1,\n", + " \"blockType\": \"official\",\n", + " \"namedAxes\": [\n", + " {\n", + " \"name\": \"Signal\",\n", + " \"description\": \"The input signal to create a spectrogram from\",\n", + " \"required\": true\n", + " }\n", + " ]\n", + " },\n", + " {\n", + " \"type\": \"syntiant-imu\",\n", + " \"title\": \"IMU (Syntiant)\",\n", + " \"author\": \"Syntiant\",\n", + " \"description\": \"Syntiant only. Great for analyzing repetitive motion, such as data from accelerometers. Extracts the frequency and power characteristics of a signal over time.\",\n", + " \"name\": \"Syntiant IMU\",\n", + " \"recommended\": true,\n", + " \"experimental\": false,\n", + " \"latestImplementationVersion\": 1,\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"hr\",\n", + " \"title\": \"HR and HRV features\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Process PPG or ECG data into heart rate and heart rate variability features.\",\n", + " \"name\": \"HR/HRV\",\n", + " \"recommended\": true,\n", + " \"experimental\": false,\n", + " \"latestImplementationVersion\": 1,\n", + " \"blockType\": \"official\",\n", + " \"namedAxes\": [\n", + " {\n", + " \"name\": \"PPG/ECG\",\n", + " \"description\": \"PPG signal to convert to heart rate\",\n", + " \"required\": true\n", + " },\n", + " {\n", + " \"name\": \"Accelerometer X\",\n", + " \"description\": \"One channel of accelerometer data\",\n", + " \"required\": false\n", + " },\n", + " {\n", + " \"name\": \"Accelerometer Y\",\n", + " \"description\": \"One channel of accelerometer data\",\n", + " \"required\": false\n", + " },\n", + " {\n", + " \"name\": \"Accelerometer Z\",\n", + " \"description\": \"One channel of accelerometer data\",\n", + " \"required\": false\n", + " }\n", + " ]\n", + " },\n", + " {\n", + " \"type\": \"raw\",\n", + " \"title\": \"Raw Data\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Use data without pre-processing. 
Useful if you want to use deep learning to learn features.\",\n", + " \"name\": \"Raw data\",\n", + " \"recommended\": false,\n", + " \"experimental\": false,\n", + " \"latestImplementationVersion\": 1,\n", + " \"blockType\": \"official\"\n", + " }\n", + "]\n" + ] + } + ], + "source": [ + "# Print the available processing blocks\n", + "print(\"Processing blocks\")\n", + "print(json.dumps(json.loads(response.to_json())[\"dspBlocks\"], indent=2))" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": { + "id": "MYrjrUB7a7Et", + "outputId": "648a17ed-608b-41ea-a550-26f9ddbadf60", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Learning blocks\n", + "[\n", + " {\n", + " \"type\": \"keras\",\n", + " \"title\": \"Classification\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Learns patterns from data, and can apply these to new data. Great for categorizing movement or recognizing audio.\",\n", + " \"name\": \"Classifier\",\n", + " \"recommended\": false,\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"keras-transfer-image\",\n", + " \"title\": \"Transfer Learning (Images)\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Fine tune a pre-trained image classification model on your data. Good performance even with relatively small image datasets.\",\n", + " \"name\": \"Transfer learning\",\n", + " \"recommended\": false,\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"keras-object-detection\",\n", + " \"title\": \"Object Detection (Images)\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Fine tune a pre-trained object detection model on your data. Good performance even with relatively small image datasets.\",\n", + " \"name\": \"Object detection\",\n", + " \"recommended\": false,\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"keras-regression\",\n", + " \"title\": \"Regression\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Learns patterns from data, and can apply these to new data. Great for predicting numeric continuous values.\",\n", + " \"name\": \"Regression\",\n", + " \"recommended\": false,\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"keras-transfer-kws\",\n", + " \"title\": \"Transfer Learning (Keyword Spotting)\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Fine tune a pre-trained keyword spotting model on your data. Good performance even with relatively small keyword datasets.\",\n", + " \"name\": \"Transfer learning (Keyword Spotting)\",\n", + " \"recommended\": false,\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"anomaly-gmm\",\n", + " \"title\": \"Anomaly Detection (GMM)\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Find outliers in new data. A Gaussian mixture model (GMM) models the shape of data using a probability distribution. New data that is unlikely according to this model can be considered anomalous.\",\n", + " \"name\": \"Anomaly detection (GMM)\",\n", + " \"recommended\": false,\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"anomaly\",\n", + " \"title\": \"Anomaly Detection (K-means)\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Find outliers in new data. Good for recognizing unknown states, and to complement classifiers. 
Works best with low dimensionality features like the output of the spectral features block.\",\n", + " \"name\": \"Anomaly detection\",\n", + " \"recommended\": false,\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"keras-visual-anomaly\",\n", + " \"title\": \"Visual Anomaly Detection - FOMO-AD\",\n", + " \"author\": \"Edge Impulse\",\n", + " \"description\": \"Detect visual anomalies. Extracts visual features using a pre-trained backbone, and applies a scoring function to evaluate how anomalous a sample is by comparing the extracted features to the learned model. Does not require anomalous data.\",\n", + " \"name\": \"Visual Anomaly Detection\",\n", + " \"recommended\": false,\n", + " \"blockType\": \"official\"\n", + " },\n", + " {\n", + " \"type\": \"keras-akida\",\n", + " \"title\": \"Classification - BrainChip Akida\\u2122\",\n", + " \"author\": \"BrainChip\",\n", + " \"description\": \"Learns patterns from data, and can apply these to new data. Great for categorizing movement or recognizing audio. Only works with BrainChip Akida devices\",\n", + " \"name\": \"Classifier\",\n", + " \"recommended\": false,\n", + " \"blockType\": \"official\",\n", + " \"supportedTargets\": [\n", + " \"brainchip-akd1000\"\n", + " ]\n", + " },\n", + " {\n", + " \"type\": \"keras-akida-transfer-image\",\n", + " \"title\": \"Transfer Learning (Images) - BrainChip Akida\\u2122\",\n", + " \"author\": \"BrainChip\",\n", + " \"description\": \"Fine tune a pre-trained image classification model on your data. Good performance even with relatively small image datasets. Only works with BrainChip Akida devices\",\n", + " \"name\": \"Transfer learning\",\n", + " \"recommended\": false,\n", + " \"blockType\": \"official\",\n", + " \"supportedTargets\": [\n", + " \"brainchip-akd1000\"\n", + " ]\n", + " },\n", + " {\n", + " \"type\": \"keras-akida-object-detection\",\n", + " \"title\": \"Object Detection (Images) - BrainChip Akida\\u2122\",\n", + " \"author\": \"BrainChip\",\n", + " \"description\": \"Fine tune a pre-trained object detection model on your data. Good performance even with relatively small image datasets. 
Only works with BrainChip Akida devices\",\n", + " \"name\": \"Object detection\",\n", + " \"recommended\": false,\n", + " \"blockType\": \"official\",\n", + " \"supportedTargets\": [\n", + " \"brainchip-akd1000\"\n", + " ]\n", + " }\n", + "]\n" + ] + } + ], + "source": [ + "# Print the available learning blocks\n", + "print(\"Learning blocks\")\n", + "print(json.dumps(json.loads(response.to_json())[\"learnBlocks\"], indent=2))" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": { + "id": "5j-g9mkrLB9k" + }, + "outputs": [], + "source": [ + "# Give our impulse blocks IDs, which we'll use later\n", + "processing_id = 2\n", + "learning_id = 3\n", + "\n", + "# Impulses (and their blocks) are defined as a collection of key/value pairs\n", + "impulse = Impulse.from_dict({\n", + " \"inputBlocks\": [\n", + " {\n", + " \"id\": 1,\n", + " \"type\": \"time-series\",\n", + " \"name\": \"Time series\",\n", + " \"title\": \"Time series data\",\n", + " \"windowSizeMs\": 1000,\n", + " \"windowIncreaseMs\": 500,\n", + " \"frequencyHz\": 62.5,\n", + " \"padZeros\": True,\n", + " }\n", + " ],\n", + " \"dspBlocks\": [\n", + " {\n", + " \"id\": processing_id,\n", + " \"type\": \"spectral-analysis\",\n", + " \"name\": \"Spectral Analysis\",\n", + " \"implementationVersion\": 4,\n", + " \"title\": \"processing\",\n", + " \"axes\": [\"accX\", \"accY\", \"accZ\"],\n", + " \"input\": 1,\n", + " }\n", + " ],\n", + " \"learnBlocks\": [\n", + " {\n", + " \"id\": learning_id,\n", + " \"type\": \"keras\",\n", + " \"name\": \"Classifier\",\n", + " \"title\": \"Classification\",\n", + " \"dsp\": [processing_id],\n", + " }\n", + " ],\n", + "})" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": { + "id": "NxgDfPVFRxAO" + }, + "outputs": [], + "source": [ + "# Delete the current impulse in the project\n", + "response = impulse_api.delete_impulse(\n", + " project_id=project_id\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " raise RuntimeError(\"Could not delete current impulse.\")\n", + "\n", + "# Add blocks to impulse\n", + "response = impulse_api.create_impulse(\n", + " project_id=project_id,\n", + " impulse=impulse\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " raise RuntimeError(\"Could not create impulse.\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "1vuJumLp58U1" + }, + "source": [ + "## Configure processing block\n", + "\n", + "Before generating features, we need to configure the processing block. 
We'll start by printing all the available parameters for the `spectral-analysis` block, which we set when we created the impulse above.\n", + "\n", + "API calls (links to associated documentation):\n", + "\n", + " * [DSP / Get config](https://docs.edgeimpulse.com/reference/edge-impulse-api/dsp/get_config)\n", + " * [DSP / Set config](https://docs.edgeimpulse.com/reference/edge-impulse-api/dsp/set_config)" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": { + "id": "Ht2LegOF1rYb", + "outputId": "6a3f7df5-05cf-4d72-cdca-c7676a426c99", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "[\n", + " {\n", + " \"parameter\": \"scale-axes\",\n", + " \"description\": \"Multiplies axes by this number\",\n", + " \"currentValue\": \"1\",\n", + " \"defaultValue\": \"1\",\n", + " \"type\": \"float\"\n", + " },\n", + " {\n", + " \"parameter\": \"input-decimation-ratio\",\n", + " \"description\": \"Decimate signal to improve effeciency\",\n", + " \"currentValue\": \"1\",\n", + " \"defaultValue\": \"1\",\n", + " \"type\": \"select\",\n", + " \"options\": [\n", + " \"1\",\n", + " \"3\",\n", + " \"10\",\n", + " \"30\",\n", + " \"100\",\n", + " \"1000\"\n", + " ]\n", + " },\n", + " {\n", + " \"parameter\": \"filter-type\",\n", + " \"description\": \"Type of filter to apply to the raw data. (Example: low is low pass)\",\n", + " \"currentValue\": \"none\",\n", + " \"defaultValue\": \"none\",\n", + " \"type\": \"select\",\n", + " \"options\": [\n", + " \"low\",\n", + " \"high\",\n", + " \"none\"\n", + " ]\n", + " },\n", + " {\n", + " \"parameter\": \"filter-cutoff\",\n", + " \"description\": \"Cut-off frequency in hertz\",\n", + " \"currentValue\": \"3\",\n", + " \"defaultValue\": \"3\",\n", + " \"type\": \"float\"\n", + " },\n", + " {\n", + " \"parameter\": \"filter-order\",\n", + " \"description\": \"Number of poles to use in filter. More improves filtering at expense of latency. 
Use zero to only mask FFT bins and skip filtering.\",\n", + " \"currentValue\": \"6\",\n", + " \"defaultValue\": \"6\",\n", + " \"type\": \"int\"\n", + " },\n", + " {\n", + " \"parameter\": \"analysis-type\",\n", + " \"description\": \"Type of spectral analysis to apply\",\n", + " \"currentValue\": \"FFT\",\n", + " \"defaultValue\": \"FFT\",\n", + " \"type\": \"select\",\n", + " \"options\": [\n", + " \"FFT\",\n", + " \"Wavelet\"\n", + " ]\n", + " },\n", + " {\n", + " \"parameter\": \"fft-length\",\n", + " \"description\": \"Number of FFT points\",\n", + " \"currentValue\": \"16\",\n", + " \"defaultValue\": \"16\",\n", + " \"type\": \"int\"\n", + " },\n", + " {\n", + " \"parameter\": \"spectral-peaks-count\",\n", + " \"description\": \"Number of spectral power peaks\",\n", + " \"currentValue\": \"3\",\n", + " \"defaultValue\": \"3\",\n", + " \"type\": \"int\"\n", + " },\n", + " {\n", + " \"parameter\": \"spectral-peaks-threshold\",\n", + " \"description\": \"Minimum (normalized) threshold for a peak, this eliminates peaks that are very close\",\n", + " \"currentValue\": \"0.1\",\n", + " \"defaultValue\": \"0.1\",\n", + " \"type\": \"float\"\n", + " },\n", + " {\n", + " \"parameter\": \"spectral-power-edges\",\n", + " \"description\": \"Splits the spectral density in various buckets\",\n", + " \"currentValue\": \"0.1, 0.5, 1.0, 2.0, 5.0\",\n", + " \"defaultValue\": \"0.1, 0.5, 1.0, 2.0, 5.0\",\n", + " \"type\": \"string\"\n", + " },\n", + " {\n", + " \"parameter\": \"do-log\",\n", + " \"description\": \"Apply log base 10 to spectrum\",\n", + " \"currentValue\": \"true\",\n", + " \"defaultValue\": \"true\",\n", + " \"type\": \"boolean\"\n", + " },\n", + " {\n", + " \"parameter\": \"do-fft-overlap\",\n", + " \"description\": \"When more than one FFT is needed to cover a window, then setting true will reuse the last half of the previous FFT frame. 
Similar to frame stride.\",\n", + " \"currentValue\": \"true\",\n", + " \"defaultValue\": \"true\",\n", + " \"type\": \"boolean\"\n", + " },\n", + " {\n", + " \"parameter\": \"wavelet-level\",\n", + " \"description\": \"Decomposition level (must be >= 0)\",\n", + " \"currentValue\": \"1\",\n", + " \"defaultValue\": \"1\",\n", + " \"type\": \"int\"\n", + " },\n", + " {\n", + " \"parameter\": \"wavelet\",\n", + " \"description\": \"Wavelet to use\",\n", + " \"currentValue\": \"db4\",\n", + " \"defaultValue\": \"db4\",\n", + " \"type\": \"select\",\n", + " \"options\": [\n", + " \"bior1.3\",\n", + " \"bior1.5\",\n", + " \"bior2.2\",\n", + " \"bior2.4\",\n", + " \"bior2.6\",\n", + " \"bior2.8\",\n", + " \"bior3.1\",\n", + " \"bior3.3\",\n", + " \"bior3.5\",\n", + " \"bior3.7\",\n", + " \"bior3.9\",\n", + " \"bior4.4\",\n", + " \"bior5.5\",\n", + " \"bior6.8\",\n", + " \"coif1\",\n", + " \"coif2\",\n", + " \"coif3\",\n", + " \"db2\",\n", + " \"db3\",\n", + " \"db4\",\n", + " \"db5\",\n", + " \"db6\",\n", + " \"db7\",\n", + " \"db8\",\n", + " \"db9\",\n", + " \"db10\",\n", + " \"haar\",\n", + " \"rbio1.3\",\n", + " \"rbio1.5\",\n", + " \"rbio2.2\",\n", + " \"rbio2.4\",\n", + " \"rbio2.6\",\n", + " \"rbio2.8\",\n", + " \"rbio3.1\",\n", + " \"rbio3.3\",\n", + " \"rbio3.5\",\n", + " \"rbio3.7\",\n", + " \"rbio3.9\",\n", + " \"rbio4.4\",\n", + " \"rbio5.5\",\n", + " \"rbio6.8\",\n", + " \"sym2\",\n", + " \"sym3\",\n", + " \"sym4\",\n", + " \"sym5\",\n", + " \"sym6\",\n", + " \"sym7\",\n", + " \"sym8\",\n", + " \"sym9\",\n", + " \"sym10\"\n", + " ]\n", + " },\n", + " {\n", + " \"parameter\": \"extra-low-freq\",\n", + " \"description\": \"Decimate signal to improve low frequency resolution\",\n", + " \"currentValue\": \"false\",\n", + " \"defaultValue\": \"false\",\n", + " \"type\": \"boolean\"\n", + " }\n", + "]\n" + ] + } + ], + "source": [ + "# Get processing block config\n", + "response = dsp_api.get_dsp_config(\n", + " project_id=project_id,\n", + " dsp_id=processing_id\n", + ")\n", + "\n", + "# Construct user-readable parameters\n", + "settings = []\n", + "for group in response.config:\n", + " for item in group.items:\n", + " element = {}\n", + " element[\"parameter\"] = item.param\n", + " element[\"description\"] = item.help\n", + " element[\"currentValue\"] = item.value\n", + " element[\"defaultValue\"] = item.default_value\n", + " element[\"type\"] = item.type\n", + " if hasattr(item, \"select_options\") and \\\n", + " getattr(item, \"select_options\") is not None:\n", + " element[\"options\"] = [i.value for i in item.select_options]\n", + " settings.append(element)\n", + "\n", + "# Print the settings\n", + "print(json.dumps(settings, indent=2))" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": { + "id": "TPEuV3ku3vuN", + "outputId": "1e2e6561-b552-4f3c-ff97-b5ea3afcada0", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Processing block has been configured.\n" + ] + } + ], + "source": [ + "# Define processing block configuration\n", + "config_request = DSPConfigRequest.from_dict({\n", + " \"config\": {\n", + " \"scale-axes\": 1.0,\n", + " \"input-decimation-ratio\": 1,\n", + " \"filter-type\": \"none\",\n", + " \"analysis-type\": \"FFT\",\n", + " \"fft-length\": 16,\n", + " \"do-log\": True,\n", + " \"do-fft-overlap\": True,\n", + " \"extra-low-freq\": False,\n", + " }\n", + "})\n", + "\n", + "# Set processing block configuration\n", + "response = 
dsp_api.set_dsp_config(\n", + " project_id=project_id,\n", + " dsp_id=processing_id,\n", + " dsp_config_request=config_request\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " raise RuntimeError(\"Could not start feature generation job.\")\n", + "else:\n", + " print(\"Processing block has been configured.\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "dJxMwnVhRrYG" + }, + "source": [ + "## Run processing block to generate features\n", + "\n", + "After we've defined the impulse, we then want to use our processing block(s) to extract features from our data. We'll skip feature importance and feature explorer to make this go faster.\n", + "\n", + "Generating features kicks off a job in Studio. A \"job\" involves instantiating a Docker container and running a custom script in the container to perform some action. In our case, that involves reading in data, extracting features from that data, and saving those features as Numpy (.npy) files in our project.\n", + "\n", + "Because jobs can take a while, the API call will return immediately. If the call was successful, the response will contain a job number. We can then monitor that job and wait for it to finish before continuing.\n", + "\n", + "API calls (links to associated documentation):\n", + "\n", + " * [Jobs / Generate features](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/generate_features)\n", + " * [Jobs / Get job status](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/get_job_status)" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": { + "id": "gdLwkXUS_QMR" + }, + "outputs": [], + "source": [ + "def poll_job(jobs_api, project_id, job_id):\n", + " \"\"\"Wait for job to complete\"\"\"\n", + "\n", + " # Wait for job to complete\n", + " while True:\n", + "\n", + " # Check on job status\n", + " response = jobs_api.get_job_status(\n", + " project_id=project_id,\n", + " job_id=job_id\n", + " )\n", + " if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " print(\"ERROR: Could not get job status\")\n", + " return False\n", + " else:\n", + " if hasattr(response, \"job\") and hasattr(response.job, \"finished\"):\n", + " if response.job.finished:\n", + " print(f\"Job completed at {response.job.finished}\")\n", + " return response.job.finished_successful\n", + " else:\n", + " print(\"ERROR: Response did not contain a 'job' field.\")\n", + " return False\n", + "\n", + " # Print that we're still running and wait\n", + " print(f\"Waiting for job {job_id} to finish...\")\n", + " time.sleep(2.0)" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": { + "id": "dxddUwKWWcj7", + "outputId": "e7058157-25e3-4821-ad0b-38cca5016235", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to 
finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Waiting for job 38711472 to finish...\n", + "Job completed at 2025-10-10T13:31:27.149Z\n", + "Features have been generated.\n" + ] + } + ], + "source": [ + "# Define generate features request\n", + "generate_features_request = GenerateFeaturesRequest.from_dict({\n", + " \"dspId\": processing_id,\n", + " \"calculate_feature_importance\": False,\n", + " \"skip_feature_explorer\": True,\n", + "})\n", + "\n", + "# Generate features\n", + "response = jobs_api.generate_features_job(\n", + " project_id=project_id,\n", + " generate_features_request=generate_features_request,\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " raise RuntimeError(\"Could not start feature generation job.\")\n", + "\n", + "# Extract job ID\n", + "job_id = response.id\n", + "\n", + "# Wait for job to complete\n", + "success = poll_job(jobs_api, project_id, job_id)\n", + "if success:\n", + " print(\"Features have been generated.\")\n", + "else:\n", + " print(f\"ERROR: Job failed. See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.\")" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": { + "id": "0wk6uWvwAVia", + "outputId": "fdf764e6-0b7e-4c7e-946c-1c25ce576c7a", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Go here to download the generated features in NumPy format:\n", + "https://studio.edgeimpulse.com/v1/api/797297/dsp-data/2/x/training\n", + "https://studio.edgeimpulse.com/v1/api/797297/dsp-data/2/y/training\n" + ] + } + ], + "source": [ + "# Optional: download NumPy features (x: training data, y: training labels)\n", + "print(\"Go here to download the generated features in NumPy format:\")\n", + "print(f\"https://studio.edgeimpulse.com/v1/api/{project_id}/dsp-data/{processing_id}/x/training\")\n", + "print(f\"https://studio.edgeimpulse.com/v1/api/{project_id}/dsp-data/{processing_id}/y/training\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "8q3_LLwwEoEA" + }, + "source": [ + "## Use learning block to train model\n", + "\n", + "Now that we have trained features, we can run the learning block to train the model on those features. Note that Edge Impulse has a number of learning blocks, each with different methods of configuration. We'll be using the \"keras\" block, which uses TensorFlow and Keras under the hood.\n", + "\n", + "You can use the [get_keras](https://docs.edgeimpulse.com/reference/python-api-bindings/edgeimpulse_api/api/learn_api#get_keras) and [set_keras](https://docs.edgeimpulse.com/reference/python-api-bindings/edgeimpulse_api/api/learn_api#set_keras) functions to configure the granular settings. 
We'll use the defaults for that block and just set a few basics for training: the number of training cycles (epochs), the learning rate, and the train/test split.\n", + "\n", + "API calls (links to associated documentation):\n", + "\n", + " * [Jobs / Train model (Keras)](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/train_model_-keras)\n", + " * [Jobs / Get job status](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/get_job_status)\n", + " * [Jobs / Get logs](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/get_logs)" + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "metadata": { + "id": "_PtkJ0ikBf9l", + "outputId": "49e7342a-9c2f-47d0-cf6a-36eb8bc2a427", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Waiting for job 38711522 to finish...\n", + "Job completed at 2025-10-10T13:32:55.022Z\n", + "Model has been trained.\n" + ] + } + ], + "source": [ + "# Define training request\n", + "keras_parameter_request = SetKerasParameterRequest.from_dict({\n", + " \"mode\": \"visual\",\n", + " \"training_cycles\": 10,\n", + " \"learning_rate\": 0.001,\n", + " \"train_test_split\": 0.8,\n", + " \"skip_embeddings_and_memory\": True,\n", + "})\n", + "\n", + "# Train model\n", + "response = jobs_api.train_keras_job(\n", + " project_id=project_id,\n", + " learn_id=learning_id,\n", + " set_keras_parameter_request=keras_parameter_request,\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " raise RuntimeError(\"Could not start training job.\")\n", + "\n", + "# Extract job ID\n", + "job_id = response.id\n", + "\n", + "# Wait for job to complete\n", + "success = poll_job(jobs_api, project_id, job_id)\n", + "if success:\n", + " print(\"Model has been trained.\")\n", + "else:\n", + " print(f\"ERROR: Job failed. See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "8LAglLwn6Jma" + }, + "source": [ + "Now that the model has been trained, we can go back to the job logs to find the accuracy metrics for both the float32 and int8 quantization levels. Because the logs are returned with the most recent events first, we'll work backwards through them to find these metrics."
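, + "\n", + "For reference, the metrics line we're searching for sits inside a log entry's `data` field and looks something like this (the key names and values are illustrative, not taken from this run):\n", + "\n", + "```\n", + "calculate_classification_metrics{\"accuracy\": 0.97, \"loss\": 0.12}\n", + "```"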
+ ] + }, + { + "cell_type": "code", + "execution_count": 23, + "metadata": { + "id": "y3fb0yfm6ceG" + }, + "outputs": [], + "source": [ + "def get_metrics(response, quantization=None):\n", + " \"\"\"\n", + " Parse the response to find the accuracy/training metrics for a given\n", + " quantization level. If quantization is None, return the first set of metrics\n", + " found.\n", + " \"\"\"\n", + " metrics = None\n", + " delimiter_str = \"calculate_classification_metrics\"\n", + "\n", + " # Skip finding quantization metrics if not given\n", + " if quantization:\n", + " quantization_found = False\n", + " else:\n", + " quantization_found = True\n", + "\n", + " # Parse logs in chronological order (the API returns the most recent entries first)\n", + " for log in reversed(response.to_dict()[\"stdout\"]):\n", + " data_field = log[\"data\"]\n", + " if quantization_found:\n", + " substrings = data_field.split(\"\\n\")\n", + " for substring in substrings:\n", + " substring = substring.strip()\n", + " if substring.startswith(delimiter_str):\n", + " metrics = json.loads(substring[len(delimiter_str):])\n", + " break\n", + " # Stop at the first set of metrics found\n", + " if metrics is not None:\n", + " break\n", + " else:\n", + " if data_field.startswith(f\"Calculating {quantization} accuracy\"):\n", + " quantization_found = True\n", + "\n", + " return metrics" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "metadata": { + "id": "AB47VpTXxwnL", + "outputId": "6189a558-ce7c-4e60-9972-c90a972b1ee3", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "ERROR: Could not get training metrics.\n" + ] + } + ], + "source": [ + "# Get the job logs for the previous job\n", + "response = jobs_api.get_jobs_logs(\n", + " project_id=project_id,\n", + " job_id=job_id\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " raise RuntimeError(\"Could not get job log.\")\n", + "\n", + "# Print training metrics (quantization is \"float32\" or \"int8\")\n", + "quantization = \"float32\"\n", + "metrics = get_metrics(response, quantization)\n", + "if metrics:\n", + " print(f\"Training metrics for {quantization} quantization:\")\n", + " pprint.pprint(metrics)\n", + "else:\n", + " print(\"ERROR: Could not get training metrics.\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "dIuT-Mhp-71J" + }, + "source": [ + "## Test the impulse\n", + "\n", + "As with any good machine learning project, we should test the accuracy of the model using our holdout (\"testing\") set. 
We'll call the `classify` API function to make that happen and then parse the job logs to get the results.\n", + "\n", + "In most cases, using `int8` quantization will result in a faster, smaller model, but at the cost of a small amount of accuracy.\n", + "\n", + "API calls (links to associated documentation):\n", + "\n", + " * [Jobs / Classify](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/classify)\n", + " * [Jobs / Get job status](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/get_job_status)\n", + " * [Jobs / Get logs](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/get_logs)" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "metadata": { + "id": "HdEksW2M-7Ob", + "outputId": "763c70a8-c0f6-4e0c-8f62-85fd7ce83a40", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Waiting for job 38711606 to finish...\n", + "Job completed at 2025-10-10T13:34:39.367Z\n", + "Inference performed on test set.\n" + ] + } + ], + "source": [ + "# Set the model quantization level (\"float32\", \"int8\", or \"akida\")\n", + "quantization = \"int8\"\n", + "classify_request = StartClassifyJobRequest.from_dict({\n", + " \"model_variants\": quantization\n", + "})\n", + "\n", + "# Start model testing job\n", + "response = jobs_api.start_classify_job(\n", + " project_id=project_id,\n", + " start_classify_job_request=classify_request\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " raise RuntimeError(\"Could not start classify job.\")\n", + "\n", + "# Extract job ID\n", + "job_id = response.id\n", + "\n", + "# Wait for job to complete\n", + "success = poll_job(jobs_api, project_id, job_id)\n", + "if success:\n", + " print(\"Inference performed on test set.\")\n", + "else:\n", + " print(f\"ERROR: Job failed. 
See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.\")" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "metadata": { + "id": "RYTJl-7GCC65", + "outputId": "0137c9c1-d6d3-440d-caa8-5d000eee0ad6", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "ERROR: Could not get test metrics.\n" + ] + } + ], + "source": [ + "# Get the job logs for the previous job\n", + "response = jobs_api.get_jobs_logs(\n", + " project_id=project_id,\n", + " job_id=job_id\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " raise RuntimeError(\"Could not get job log.\")\n", + "\n", + "# Parse and print the test metrics\n", + "metrics = get_metrics(response)\n", + "if metrics:\n", + " print(f\"Test metrics for {quantization} quantization:\")\n", + " pprint.pprint(metrics)\n", + "else:\n", + " print(\"ERROR: Could not get test metrics.\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "MsEAb8V6MO2B" + }, + "source": [ + "## Deploy the impulse\n", + "\n", + "Now that you've trained the model, let's build it as a C++ library and download it. We'll start by printing out the available target devices. Note that this list changes depending on how you've configured your impulse. For example, if you use a Syntiant-specific learning block, then you'll see Syntiant boards listed. We'll use the \"zip\" target, which gives us a generic C++ library that we can use for nearly any hardware.\n", + "\n", + "The `engine` must be one of:\n", + "\n", + "```\n", + "tflite\n", + "tflite-eon\n", + "tflite-eon-ram-optimized\n", + "tensorrt\n", + "tensaiflow\n", + "drp-ai\n", + "tidl\n", + "akida\n", + "syntiant\n", + "memryx\n", + "neox\n", + "```\n", + "\n", + "We'll use `tflite`, as it's the most widely supported.\n", + "\n", + "`modelType` is the quantization level. 
Your options are:\n", + "\n", + "```\n", + "float32\n", + "int8\n", + "```\n", + "\n", + "In most cases, using `int8` quantization will result in a faster, smaller model, but you will slightly lose some accuracy.\n", + "\n", + "API calls (links to associated documentation):\n", + "\n", + " * [Deployment / Deployment targets (data sources)](https://docs.edgeimpulse.com/reference/edge-impulse-api/deployment/deployment_targets_-data_sources)\n", + " * [Jobs / Build on-device model](https://docs.edgeimpulse.com/reference/edge-impulse-api/jobs/build_on-device_model)\n", + " * [Deployment / Download](https://docs.edgeimpulse.com/reference/edge-impulse-api/deployment/download)" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "metadata": { + "id": "9kePPtX7OsbM", + "outputId": "897d2de3-843b-4a30-9736-3108e0dab425", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "zip\n", + "zip-linux\n", + "android-cpp\n", + "arduino\n", + "cubemx\n", + "wasm\n", + "wasm-browser-simd\n", + "wasm-node-simd\n", + "tensorrt\n", + "ethos-alif-ensemble-e7-hp\n", + "ethos-alif-ensemble-e7-he\n", + "ethos-nxp-imx93\n", + "ethos-alif-ensemble-e7-he-cmsis-pack\n", + "ethos-alif-ensemble-e7-hp-cmsis-pack\n", + "ethos-himax-wiseeye2\n", + "ethos-u85\n", + "ethos-u85-cmsis-pack\n", + "synaptics-tensaiflow-lib\n", + "meta-tf\n", + "memryx-dfp\n", + "tidl-lib-am62a\n", + "tidl-lib-am68a\n", + "slcc\n", + "disco-l475vg\n", + "ambiq-apollo5\n", + "arduino-nano-33-ble-sense\n", + "arduino-nicla-vision\n", + "runner-linux-aarch64-advantech-icam540\n", + "espressif-esp32\n", + "raspberry-pi-rp2040\n", + "raspberry-pi-pico2\n", + "raspberry-pi-pico2-w\n", + "silabs-thunderboard2\n", + "silabs-xg24\n", + "himax-we-i\n", + "infineon-cy8ckit-062s2\n", + "infineon-cy8ckit-062-ble\n", + "nordic-nrf52840-dk\n", + "nordic-nrf5340-dk\n", + "nordic-nrf9160-dk\n", + "nordic-thingy53\n", + "nordic-thingy53-nrf7002eb\n", + "nordic-thingy91\n", + "nordic-nrf7002-dk\n", + "nordic-nrf9161-dk\n", + "nordic-nrf9151-dk\n", + "nordic-nrf54l15-dk\n", + "sony-spresense\n", + "sony-spresense-commonsense\n", + "ti-launchxl\n", + "renesas-ck-ra6m5\n", + "brickml\n", + "brickml-module\n", + "alif-ensemble-e7\n", + "alif-ensemble-e7-he\n", + "alif-ensemble-e7-hp-sram\n", + "alif-ensemble-e7-devkit\n", + "alif-ensemble-e7-he-devkit\n", + "alif-ensemble-e7-hp-sram-devkit\n", + "seeed-grove-vision-ai\n", + "runner-linux-aarch64\n", + "runner-linux-armv7\n", + "runner-linux-x86_64\n", + "runner-linux-aarch64-akd1000\n", + "runner-linux-x86_64-akd1000\n", + "runner-linux-aarch64-qnn\n", + "runner-linux-aarch64-gpu\n", + "qualcomm-gstreamer-ml-pipeline-eim\n", + "runner-mac-x86_64\n", + "runner-mac-arm64\n", + "runner-linux-aarch64-tda4vm\n", + "runner-linux-aarch64-am62a\n", + "particle\n", + "iar\n", + "runner-linux-aarch64-am68a\n", + "particle-p2\n", + "cmsis-package\n", + "runner-linux-aarch64-jetson-nano\n", + "runner-linux-aarch64-rzg2l\n", + "runner-linux-aarch64-jetson-orin\n", + "runner-linux-aarch64-jetson-orin-6-0\n", + "st-aton-lib\n" + ] + } + ], + "source": [ + "# Get the available devices\n", + "response = deployment_api.list_deployment_targets_for_project_data_sources(\n", + " project_id=project_id\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " raise RuntimeError(\"Could not get device list.\")\n", + "\n", + "# Print the available devices\n", + "targets = [x.to_dict()[\"format\"] 
for x in response.targets]\n", + "for target in targets:\n", + " print(target)" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "metadata": { + "id": "qInW3vE6OaN6", + "outputId": "0147eb66-cf77-4997-f9bc-c29487cb50ca", + "colab": { + "base_uri": "https://localhost:8080/" + } + }, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Waiting for job 38711667 to finish...\n", + "Job completed at 2025-10-10T13:36:41.499Z\n", + "Impulse built.\n" + ] + } + ], + "source": [ + "# Choose the target hardware (from the list above), engine,\n", + "target_hardware = \"zip\"\n", + "engine = \"tflite\"\n", + "quantization = \"int8\"\n", + "\n", + "# Construct request\n", + "device_model_request = BuildOnDeviceModelRequest.from_dict({\n", + " \"engine\": engine,\n", + " \"modelType\": quantization\n", + "})\n", + "\n", + "# Start build job\n", + "response = jobs_api.build_on_device_model_job(\n", + " project_id=project_id,\n", + " type=target_hardware,\n", + " build_on_device_model_request=device_model_request,\n", + ")\n", + "if not hasattr(response, \"success\") or getattr(response, \"success\") is False:\n", + " raise RuntimeError(\"Could not start feature generation job.\")\n", + "\n", + "# Extract job ID\n", + "job_id = response.id\n", + "\n", + "# Wait for job to complete\n", + "success = poll_job(jobs_api, project_id, job_id)\n", + "if success:\n", + " print(\"Impulse built.\")\n", + "else:\n", + " print(f\"ERROR: Job failed. See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.\")" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": { + "id": "QLU8jDNFpv9T" + }, + "outputs": [], + "source": [ + "# Get the download link information\n", + "response = deployment_api.download_build(\n", + " project_id=project_id,\n", + " type=target_hardware,\n", + " model_type=quantization,\n", + " engine=engine,\n", + " _preload_content=False,\n", + ")\n", + "if response.status != 200:\n", + " raise RuntimeError(\"Could not get download information.\")\n", + "\n", + "# Find the file name in the headers\n", + "file_name = re.findall(r\"filename\\*?=(.+)\", response.headers[\"Content-Disposition\"])[0].replace(\"utf-8''\", \"\")\n", + "file_path = os.path.join(OUTPUT_PATH, file_name)\n", + "\n", + "# Write the contents to a file\n", + "with open(file_path, \"wb\") as f:\n", + " f.write(response.data)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "klQoJH9yvO2C" + }, + "source": [ + "You should have a .zip file in the same directory as this notebook. 
Download or move it to somewhere else on your computer and unzip it. You can now follow [this guide](https://docs.edgeimpulse.com/docs/run-inference/cpp-library/deploy-your-model-as-a-c-library) to link and compile the library as part of an application." + ] + } + ], + "metadata": { + "colab": { + "provenance": [] + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + }, + "language_info": { + "name": "python" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file