This Rust library provides two modes for running Edge Impulse models:
- FFI Mode (Default & Recommended): Uses direct C++ SDK integration via FFI bindings. This is the default and recommended mode for all new applications.
 - EIM Mode (Legacy/Compatibility): Uses Edge Impulse's EIM (Edge Impulse Model) format for inference. This mode is only for backward compatibility and is not recommended for new projects due to performance penalties.
 
- Run Edge Impulse models on Linux and macOS
- Support for different model types:
  - Classification models
  - Object detection models
  - Visual anomaly detection models
- Support for different sensor types:
  - Camera
  - Microphone
  - Accelerometer
  - Positional sensors
- Continuous classification mode support
- Debug output option

- Upload data to Edge Impulse projects (see the upload sketch after this list)
- Support for multiple data categories:
  - Training data
  - Testing data
  - Anomaly data
- Handle various file formats:
  - Images (JPG, PNG)
  - Audio (WAV)
  - Video (MP4, AVI)
  - Sensor data (CBOR, JSON, CSV)
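For the data-upload features listed above, the crate provides an `ingestion` module (see the module overview near the end of this document). The sketch below is illustrative only: it assumes an `Ingestion` client with an async `upload_file` method and a tokio runtime, which may not match the crate's actual signatures, so check the crate documentation before copying it.

```rust
use edge_impulse_runner::ingestion::Ingestion;

// Illustrative sketch: the constructor and upload_file signature shown here are
// assumptions, not the confirmed API; consult the crate docs for the real calls.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create an ingestion client with your Edge Impulse API key
    let ingestion = Ingestion::new("ei_your_api_key".to_string());

    // Upload a sample to the training category of your project
    ingestion
        .upload_file(
            "test_image.jpg", // JPG/PNG, WAV, MP4/AVI, or CBOR/JSON/CSV
            "training",       // category: training, testing, or anomaly
            None,             // optional label
            None,             // optional metadata/options
        )
        .await?;

    Ok(())
}
```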
 
 
You can copy your Edge Impulse model from a custom directory path using the `EI_MODEL` environment variable:

```bash
export EI_MODEL=/path/to/your/edge-impulse-model
cargo build
```

This will automatically copy the model files and build the C++ SDK bindings.
Set your Edge Impulse project credentials and build:

```bash
export EI_PROJECT_ID=12345
export EI_API_KEY=ei_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
cargo clean
cargo build
```

This will automatically download your model from Edge Impulse and build the C++ SDK bindings.
Note: The download process may take several minutes on the first build. Never commit your API key to version control.
The build system checks for models in the following order:
1. Custom model path specified by the `EI_MODEL` environment variable
2. Edge Impulse API download using `EI_PROJECT_ID` and `EI_API_KEY`
This means you can:
- Copy from a custom path (useful for Docker builds, CI/CD)
 - Download from Edge Impulse Studio (requires API credentials)
 
Note: Since `edge-impulse-ffi-rs` is managed as a Cargo dependency, you cannot simply copy files into its `model/` directory by hand. Use environment variables instead.
You can use environment variables to control model selection and build options for host builds:
```bash
# With local model
EI_MODEL=~/Downloads/model-person-detection cargo build --release --features ffi

# With API credentials
EI_PROJECT_ID=12345 EI_API_KEY=your-api-key cargo build --release --features ffi

# With full TensorFlow Lite
USE_FULL_TFLITE=1 EI_MODEL=~/Downloads/model-person-detection cargo build --release --features ffi
```

This project supports cross-compilation to aarch64 (ARM64) using Docker. This is useful for deploying to ARM64 devices like Raspberry Pi 4, NVIDIA Jetson, or other ARM64 servers.
- Docker and docker-compose installed
- Model files available (via `EI_MODEL`, `EI_PROJECT_ID`/`EI_API_KEY`, or copied to `model/`)
```bash
# Build for aarch64 using Docker with local model
EI_MODEL=~/Downloads/model-person-detection docker-compose up --build

# Build for aarch64 using Docker with API credentials
EI_PROJECT_ID=12345 EI_API_KEY=your-api-key docker-compose up --build

# Build for aarch64 using Docker with existing model in model/ directory
docker-compose up --build
```

After building, you can run examples inside the Docker container:
```bash
# Run image inference example
docker-compose run --rm -e EI_MODEL=/host-model aarch64-build bash -c "target/aarch64-unknown-linux-gnu/release/examples/image_infer --image /assets/test_image.jpg"

# Run basic inference example
docker-compose run --rm -e EI_MODEL=/host-model aarch64-build bash -c "target/aarch64-unknown-linux-gnu/release/examples/basic_infer --features '0.1,0.2,0.3'"
```

You can place test images in `examples/assets/` to avoid copying them each time. This folder is gitignored, so your test images won't be committed to the repository.
```bash
# Copy your test image
cp ~/Downloads/person.5j8hm0ug.jpg examples/assets/

# Run with the test image
docker-compose run --rm -e EI_MODEL=/host-model aarch64-build bash -c "target/aarch64-unknown-linux-gnu/release/examples/image_infer --image /assets/person.5j8hm0ug.jpg"
```

The Docker setup supports the same environment variables as local builds:
```bash
# Use custom model path
EI_MODEL=~/Downloads/model-person-detection docker-compose up --build

# Use Edge Impulse API
EI_PROJECT_ID=12345 EI_API_KEY=your-api-key docker-compose up --build

# Use full TensorFlow Lite
USE_FULL_TFLITE=1 EI_MODEL=~/Downloads/model-person-detection docker-compose up --build
```

You can also run the Docker build and example steps manually:

```bash
# Build Docker image
docker-compose build

# Build example in container
docker-compose run --rm aarch64-build cargo build --example image_infer --target aarch64-unknown-linux-gnu --features ffi --release

# Run example in container
docker-compose run --rm aarch64-build ./target/aarch64-unknown-linux-gnu/release/examples/image_infer --image /assets/test_image.jpg --debug
```

To run the examples locally:

```bash
cargo run --example basic_infer -- --features "0.1,0.2,0.3"

cargo run --example image_infer -- --image /path/to/image.png

cargo run --example audio_infer -- --audio /path/to/audio.wav

# For EIM mode, enable the eim feature and provide a model path
cargo run --example basic_infer --no-default-features --features eim -- --model path/to/model.eim --features "0.1,0.2,0.3"
```

Add this to your Cargo.toml:
```toml
[dependencies]
edge-impulse-runner = "2.0.0"
```

FFI mode is enabled by default. You can provide model files in several ways:
- Copy from custom path: set `EI_MODEL=/path/to/your/model`
- Download from Edge Impulse Studio: set the `EI_PROJECT_ID` and `EI_API_KEY` environment variables
Note: The project now has its own `model/` directory that follows the same pattern as `edge-impulse-ffi-rs`. Models are automatically copied here during aarch64 builds, and this directory is gitignored.
```rust
use edge_impulse_runner::{EdgeImpulseModel, InferenceResult};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a new model instance using FFI (default)
    let mut model = EdgeImpulseModel::new()?;
    // Prepare normalized features (e.g., image pixels, audio samples)
    let features: Vec<f32> = vec![0.1, 0.2, 0.3];
    // Run inference
    let result = model.infer(features, None)?;
    // Process results
    match result.result {
        InferenceResult::Classification { classification } => {
            println!("Classification: {:?}", classification);
        }
        InferenceResult::ObjectDetection { bounding_boxes, classification, object_tracking: _ } => {
            println!("Detected objects: {:?}", bounding_boxes);
            if !classification.is_empty() {
                println!("Classification: {:?}", classification);
            }
        }
        InferenceResult::VisualAnomaly { visual_anomaly_grid, visual_anomaly_max, visual_anomaly_mean, anomaly } => {
            let (normalized_anomaly, normalized_max, normalized_mean, normalized_regions) =
                model.normalize_visual_anomaly(
                    anomaly,
                    visual_anomaly_max,
                    visual_anomaly_mean,
                    &visual_anomaly_grid.iter()
                        .map(|bbox| (bbox.value, bbox.x as u32, bbox.y as u32, bbox.width as u32, bbox.height as u32))
                        .collect::<Vec<_>>()
                );
            println!("Anomaly score: {:.2}%", normalized_anomaly * 100.0);
        }
    }
    Ok(())
}
```

You can dynamically set thresholds for different model types at runtime.
```rust
use edge_impulse_runner::{EdgeImpulseModel, InferenceResult};
use edge_impulse_runner::types::ModelThreshold;
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut model = EdgeImpulseModel::new()?;
    // Set object detection threshold
    let obj_threshold = ModelThreshold::ObjectDetection {
        id: 8,  // Block ID (use actual ID from your model)
        min_score: 0.3,  // Minimum confidence score
    };
    model.set_threshold(obj_threshold)?;
    // Set anomaly detection threshold
    let anomaly_threshold = ModelThreshold::AnomalyGMM {
        id: 1,  // Block ID (use actual ID from your model)
        min_anomaly_score: 0.4,  // Minimum anomaly score
    };
    model.set_threshold(anomaly_threshold)?;
    // Set object tracking threshold
    let tracking_threshold = ModelThreshold::ObjectTracking {
        id: 2,  // Block ID
        keep_grace: 5,  // Grace period
        max_observations: 10,  // Max observations
        threshold: 0.7,  // Tracking threshold
    };
    model.set_threshold(tracking_threshold)?;
    Ok(())
}
```

To enable debug output, use the debug constructor:

```rust
let mut model = EdgeImpulseModel::new_with_debug(true)?;
```

EIM mode is not recommended except for legacy or development use:

```rust
use edge_impulse_runner::{EdgeImpulseModel, InferenceResult};
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a new model instance using EIM (legacy)
    let mut model = EdgeImpulseModel::new_eim("path/to/model.eim")?;
    // ...
    Ok(())
}
```

The crate supports different backends through Cargo features:
```toml
[dependencies]
edge-impulse-runner = { version = "2.0.0" }
```

- `ffi` (default): Enable FFI direct mode (requires the `edge-impulse-ffi-rs` dependency)
- `eim`: Enable EIM binary communication mode (legacy)
Version 2.0.0 introduces significant improvements and new features while maintaining backward compatibility for EIM mode.
- Direct FFI calls: New FFI backend for improved performance
 - No inter-process communication: Eliminates socket overhead
 - Lower latency: Direct calls to Edge Impulse C++ SDK
 - Reduced memory usage: No separate model process
 
- Consistent naming: `EimModel` renamed to `EdgeImpulseModel` for clarity
- Dual backend support: Same API works with both FFI and EIM modes
- Feature-based selection: Choose backend via Cargo features
- Constructor change: `EdgeImpulseModel::new()` is now FFI; `new_eim()` is for legacy EIM (see the sketch below)
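The snippet below illustrates the constructor change; the commented 1.x call is shown for comparison only and may not match your previous code exactly.

```rust
use edge_impulse_runner::EdgeImpulseModel;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1.x style (illustrative): the old EimModel constructor took an .eim path:
    // let mut model = EimModel::new("path/to/model.eim")?;

    // 2.0, FFI mode (default): the model is compiled into the binary, no path needed
    let _ffi_model = EdgeImpulseModel::new()?;

    // 2.0, EIM mode (legacy, requires the `eim` feature): the path-based
    // constructor is now new_eim()
    let _eim_model = EdgeImpulseModel::new_eim("path/to/model.eim")?;

    Ok(())
}
```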
- Trait-based backends: Clean abstraction for different inference engines
 
- EIM mode is only for backward compatibility and is not recommended for new projects.
 
- Default (TensorFlow Lite Micro): `cargo build`
- Full TensorFlow Lite: `USE_FULL_TFLITE=1 cargo build`
You can specify the target platform explicitly using these environment variables:
```bash
# Apple Silicon (M1/M2/M3)
TARGET_MAC_ARM64=1 USE_FULL_TFLITE=1 cargo build

# Intel Mac
TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 cargo build
```

```bash
# Linux x86_64
TARGET_LINUX_X86=1 USE_FULL_TFLITE=1 cargo build

# Linux ARM64
TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 cargo build

# Linux ARMv7
TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 cargo build
```

```bash
# Jetson Nano
TARGET_JETSON_NANO=1 USE_FULL_TFLITE=1 cargo build

# Jetson Orin
TARGET_JETSON_ORIN=1 USE_FULL_TFLITE=1 cargo build
```

Full TensorFlow Lite uses prebuilt binaries from the `tflite/` directory:
| Platform | Directory | 
|---|---|
| macOS ARM64 | tflite/mac-arm64/ | 
| macOS x86_64 | tflite/mac-x86_64/ | 
| Linux x86 | tflite/linux-x86/ | 
| Linux ARM64 | tflite/linux-aarch64/ | 
| Linux ARMv7 | tflite/linux-armv7/ | 
| Jetson Nano | tflite/linux-jetson-nano/ | 
Note: If no platform is specified, the build system will auto-detect based on your current system architecture.
```bash
# Copy model from a mounted volume in Docker
EI_MODEL=/mnt/models/my-project cargo build

# Copy model from a relative path
EI_MODEL=../shared-models/project-123 cargo build

# Copy model and use full TensorFlow Lite
EI_MODEL=/opt/models/my-project USE_FULL_TFLITE=1 cargo build

# Copy model with platform-specific flags
EI_MODEL=/path/to/model TARGET_MAC_ARM64=1 USE_FULL_TFLITE=1 cargo build
```

If you want to switch between models or make sure you always get the latest version of your model, you need to clean the `edge-impulse-ffi-rs` dependency:
```bash
# Clean the edge-impulse-ffi-rs dependency completely (including model files)
cargo clean -p edge-impulse-ffi-rs

# Then rebuild with your new model source
EI_MODEL=/path/to/new/model cargo build
# or
EI_PROJECT_ID=12345 EI_API_KEY=your-api-key cargo build
```

To completely start over and remove all cached model files:
```bash
# Clean everything
cargo clean

# Rebuild with your desired model source
EI_MODEL=/path/to/model cargo build
```

Note: The `CLEAN_MODEL=1` environment variable only works when building `edge-impulse-ffi-rs` directly. When using it as a dependency through `edge-impulse-runner-rs`, use `cargo clean -p edge-impulse-ffi-rs` instead.
This project supports a wide range of advanced build flags for hardware accelerators, backends, and cross-compilation, mirroring the Makefile from Edge Impulse's example-standalone-inferencing-linux. You can combine these flags as needed:
| Flag | Purpose / Effect |
|---|---|
| `USE_TVM=1` | Enable Apache TVM backend (requires `TVM_HOME` env var) |
| `USE_ONNX=1` | Enable ONNX Runtime backend |
| `USE_QUALCOMM_QNN=1` | Enable Qualcomm QNN delegate (requires `QNN_SDK_ROOT` env var) |
| `USE_ETHOS=1` | Enable ARM Ethos-U delegate |
| `USE_AKIDA=1` | Enable BrainChip Akida backend |
| `USE_MEMRYX=1` | Enable MemryX backend |
| `LINK_TFLITE_FLEX_LIBRARY=1` | Link TensorFlow Lite Flex library |
| `EI_CLASSIFIER_USE_MEMRYX_SOFTWARE=1` | Use MemryX software mode (with Python bindings) |
| `TENSORRT_VERSION=8.5.2` | Set TensorRT version for Jetson platforms |
| `TVM_HOME=/path/to/tvm` | Path to TVM installation (required for `USE_TVM=1`) |
| `QNN_SDK_ROOT=/path/to/qnn` | Path to Qualcomm QNN SDK (required for `USE_QUALCOMM_QNN=1`) |
| `PYTHON_CROSS_PATH=...` | Path prefix for cross-compiling Python bindings |
```bash
# Build with ONNX Runtime and full TensorFlow Lite for TI AM68A
USE_ONNX=1 TARGET_AM68A=1 USE_FULL_TFLITE=1 cargo build
# Build with TVM backend (requires TVM_HOME)
USE_TVM=1 TVM_HOME=/opt/tvm TARGET_RENESAS_RZV2L=1 USE_FULL_TFLITE=1 cargo build
# Build with Qualcomm QNN delegate (requires QNN_SDK_ROOT)
USE_QUALCOMM_QNN=1 QNN_SDK_ROOT=/opt/qnn TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 cargo build
# Build with ARM Ethos-U delegate
USE_ETHOS=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 cargo build
# Build with MemryX backend in software mode
USE_MEMRYX=1 EI_CLASSIFIER_USE_MEMRYX_SOFTWARE=1 TARGET_LINUX_X86=1 USE_FULL_TFLITE=1 cargo build
# Build with TensorFlow Lite Flex library
LINK_TFLITE_FLEX_LIBRARY=1 USE_FULL_TFLITE=1 cargo build
# Build for Jetson Nano with specific TensorRT version
TARGET_JETSON_NANO=1 TENSORRT_VERSION=8.5.2 USE_FULL_TFLITE=1 cargo build
```

See the Makefile in Edge Impulse's example-standalone-inferencing-linux for more details on what each flag does. Not all combinations are valid for all models/platforms.
Some examples (particularly video capture) require GStreamer to be installed:
- macOS: Install both runtime and development packages from gstreamer.freedesktop.org
- Linux: Install the required packages (`libgstreamer1.0-dev` and related packages)
The crate uses the EdgeImpulseError type to provide detailed error information:
```rust
use edge_impulse_runner::{EdgeImpulseModel, EdgeImpulseError};

match EdgeImpulseModel::new_eim("model.eim") {
    Ok(mut model) => {
        match model.infer(vec![0.1, 0.2, 0.3], None) {
            Ok(result) => println!("Success!"),
            Err(EdgeImpulseError::InvalidInput(msg)) => println!("Invalid input: {}", msg),
            Err(e) => println!("Other error: {}", e),
        }
    },
    Err(e) => println!("Failed to load model: {}", e),
}
```

The library uses a trait-based backend abstraction that allows switching between different inference engines:
```rust
pub trait InferenceBackend: Send + Sync {
    fn new(config: BackendConfig) -> Result<Self, EdgeImpulseError> where Self: Sized;
    fn infer(&mut self, features: Vec<f32>, debug: Option<bool>) -> Result<InferenceResponse, EdgeImpulseError>;
    fn parameters(&self) -> Result<&ModelParameters, EdgeImpulseError>;
    // ... other methods
}
```

EIM backend (legacy):
- Communicates with Edge Impulse binary files over Unix sockets
- Supports all features including continuous mode and threshold configuration
- Requires `.eim` files to be present
FFI backend (default):
- Direct FFI calls to the Edge Impulse C++ SDK
- Improved performance with no inter-process communication overhead
- Requires the `edge-impulse-ffi-rs` crate as a dependency
- Model must be compiled into the binary
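Both backends expose the model's parameters through the trait above. As a rough sketch (this assumes `EdgeImpulseModel` forwards a `parameters()` accessor and that `ModelParameters` has public fields matching the names in the EIM protocol below; check the crate docs for the exact API):

```rust
use edge_impulse_runner::EdgeImpulseModel;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let model = EdgeImpulseModel::new()?;

    // Assumption: a parameters() accessor mirroring InferenceBackend::parameters()
    let params = model.parameters()?;
    println!(
        "model expects {} input features across {} label(s): {:?}",
        params.input_features_count, params.label_count, params.labels
    );

    Ok(())
}
```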
 
The Edge Impulse Inference Runner uses a Unix socket-based IPC mechanism to communicate with the model process in EIM mode. The protocol is JSON-based and follows a request-response pattern.
When a new model is created, the following sequence occurs:
```json
{
    "hello": 1,
    "id": 1
}
```

The model responds with its parameters and project information:

```json
{
    "success": true,
    "id": 1,
    "model_parameters": {
        "axis_count": 1,
        "frequency": 16000.0,
        "has_anomaly": 0,
        "image_channel_count": 3,
        "image_input_frames": 1,
        "image_input_height": 96,
        "image_input_width": 96,
        "image_resize_mode": "fit-shortest",
        "inferencing_engine": 4,
        "input_features_count": 9216,
        "interval_ms": 1.0,
        "label_count": 1,
        "labels": ["class1"],
        "model_type": "classification",
        "sensor": 3,
        "slice_size": 2304,
        "threshold": 0.5,
        "use_continuous_mode": false
    },
    "project": {
        "deploy_version": 1,
        "id": 12345,
        "name": "Project Name",
        "owner": "Owner Name"
    }
}
```

For each inference request:

```json
{
    "classify": [0.1, 0.2, 0.3],
    "id": 1,
    "debug": false
}
```

For classification models:

```json
{
    "success": true,
    "id": 2,
    "result": {
        "classification": {
            "class1": 0.8,
            "class2": 0.2
        }
    }
}
```

For object detection models:

```json
{
    "success": true,
    "id": 2,
    "result": {
        "bounding_boxes": [
            {
                "label": "object1",
                "value": 0.95,
                "x": 100,
                "y": 150,
                "width": 50,
                "height": 50
            }
        ],
        "classification": {
            "class1": 0.8,
            "class2": 0.2
        }
    }
}
```

For visual anomaly detection models:

```json
{
    "success": true,
    "id": 2,
    "result": {
        "visual_anomaly": {
            "anomaly": 5.23,
            "visual_anomaly_max": 7.89,
            "visual_anomaly_mean": 4.12,
            "visual_anomaly_grid": [
                {
                    "value": 0.955,
                    "x": 24,
                    "y": 40,
                    "width": 8,
                    "height": 16
                }
            ]
        }
    }
}
```

When errors occur:

```json
{
    "success": false,
    "error": "Error message",
    "id": 2
}
```

The crate is organized into the following modules:

- `error`: Error types and handling
- `inference`: Model management and inference functionality
- `backends`: Backend abstraction and implementations
- `ingestion`: Data upload and project management
- `types`: Common types and parameters
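Because the EIM protocol messages above are plain JSON, you can inspect them with serde when debugging the transport yourself (the runner normally handles this for you). The structs below are illustrative only and are not the crate's internal types; the example assumes `serde` (with the `derive` feature) and `serde_json` are available.

```rust
use serde::Deserialize;
use std::collections::HashMap;

// Illustrative structs for the classification response shown above;
// they are not the crate's internal protocol types.
#[derive(Debug, Deserialize)]
struct ClassifyResponse {
    success: bool,
    id: u32,
    result: ClassifyResult,
}

#[derive(Debug, Deserialize)]
struct ClassifyResult {
    classification: HashMap<String, f32>,
}

fn main() -> Result<(), serde_json::Error> {
    // The classification response example from the protocol section above
    let raw = r#"{"success":true,"id":2,"result":{"classification":{"class1":0.8,"class2":0.2}}}"#;
    let parsed: ClassifyResponse = serde_json::from_str(raw)?;
    println!(
        "id={} success={} scores={:?}",
        parsed.id, parsed.success, parsed.result.classification
    );
    Ok(())
}
```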
Model is not downloaded, or you see warnings like `Could not find edge-impulse-ffi-rs build output directory`:
- Make sure the environment variables are set: `EI_PROJECT_ID` and `EI_API_KEY`
- Run `cargo clean` followed by `cargo build --features ffi`
- The FFI crate is always pulled from the official GitHub repository, not from a local path.
Download fails with authentication error:
- Verify your API key is correct and has access to the project
- Check that the project ID exists and is accessible
- Ensure the environment variables are set correctly: `EI_PROJECT_ID` and `EI_API_KEY`
Download times out:
- The download process can take several minutes for large models
 - Check your internet connection
 
Build job fails:
- Ensure your Edge Impulse project has a valid model deployed
 - Check the Edge Impulse Studio for any build errors
 - Verify the project has the correct deployment target (Linux)
 
If automated download fails, you can:
- Manually download the model from Edge Impulse Studio
- Extract it to the `model/` directory
- Unset the environment variables: `unset EI_PROJECT_ID EI_API_KEY`
- Build normally with `cargo build`
BSD-3-Clause