TensorFlow Lite for Microcontrollers Library

The Meadow.TensorFlow library provides Meadow users with the ability to run machine learning models directly on microcontrollers using TensorFlow Lite. It is designed to run inference on pre-trained models efficiently, using minimal resources, and is optimized for IoT environments.

With this library, you can load and run models for tasks like image recognition, gesture detection, and other forms of inference. This documentation will walk you through how to integrate TensorFlow into your Meadow application, load models, and run inferences.

Using TensorFlow on Meadow

TensorFlow Lite for Microcontrollers on Meadow provides a modern .NET API, allowing you to run machine learning inference directly on your Meadow devices. This enables you to build intelligent IoT applications that can process data locally without relying on cloud services.

Integrating TensorFlow into Your Meadow Application

To get started, you'll need to add the Meadow.TensorFlowLiteForMicrocontrollers library to your Meadow application. You can do this by adding the NuGet package to your project:

dotnet add package Meadow.TensorFlowLiteForMicrocontrollers

Loading a TensorFlow Lite Model

First, include your TensorFlow Lite model (.tflite file) in your Meadow project. Set its Build Action to EmbeddedResource so it gets embedded into your application assembly.
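For example, if the model file sits in a Models folder, the project file entry might look like this (the folder and file name here are illustrative):

<ItemGroup>
  <EmbeddedResource Include="Models\gesture_model.tflite" />
</ItemGroup>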

To load the model, read it as a byte array from the embedded resources:

using System.IO;
using System.Reflection;

// Load the model from embedded resources
var assembly = Assembly.GetExecutingAssembly();

using (var stream = assembly.GetManifestResourceStream("YourNamespace.Models.gesture_model.tflite"))
using (var memoryStream = new MemoryStream())
{
    // CopyTo reads the whole stream; a single Read call is not
    // guaranteed to fill the buffer
    stream.CopyTo(memoryStream);
    byte[] modelData = memoryStream.ToArray();

    // Create the GestureModel object with the model data
    gestureModel = new GestureModel(modelData);
}

Replace "YourNamespace.Models.gesture_model.tflite" with the actual namespace and resource path of your model file.
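If GetManifestResourceStream returns null, the resource name doesn't match what the compiler generated. You can print the names the assembly actually contains to find the correct one:

// List every embedded resource name in the current assembly
foreach (var name in Assembly.GetExecutingAssembly().GetManifestResourceNames())
{
    Console.WriteLine(name);
}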

Preparing Input Data

The GestureModel class provides a ModelInput property to manage inputs. You need to prepare your input data according to the model's expected input format.

For example, if your model expects a float array of a specific size:

// Prepare your input data (e.g., sensor readings)
float[] inputData = new float[] { /* your input data */ };

// Set the input data
gestureModel.ModelInput.SetData(inputData);
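As a more concrete sketch, suppose the gesture model was trained on a fixed window of three-axis accelerometer samples. The window length, normalization range, and ReadAccelerometer helper below are assumptions for illustration, not part of the library:

const int SampleCount = 100;              // assumed window length for this model
float[] inputData = new float[SampleCount * 3];

for (int i = 0; i < SampleCount; i++)
{
    // ReadAccelerometer is a hypothetical helper returning readings in g
    var (x, y, z) = ReadAccelerometer();

    // Assumed normalization of a +/-4 g range into [0, 1]
    inputData[i * 3 + 0] = (x + 4.0f) / 8.0f;
    inputData[i * 3 + 1] = (y + 4.0f) / 8.0f;
    inputData[i * 3 + 2] = (z + 4.0f) / 8.0f;
}

gestureModel.ModelInput.SetData(inputData);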

Running Inference

With the model loaded and input data prepared, you can run inference by calling the Predict method:

// Run the prediction
var output = gestureModel.Predict();

Interpreting the Results

The ModelOutput object returned by Predict contains the output data from the model, which you can interpret according to your application's logic.

For example, if the model outputs probabilities for different classes:

using System.Linq;

// Assuming output contains a float array of probabilities
float[] probabilities = output.GetOutputData();

// Find the class with the highest probability (Max() requires System.Linq)
int predictedClass = Array.IndexOf(probabilities, probabilities.Max());

// Use the predicted class in your application
Console.WriteLine($"Predicted Class: {predictedClass}");

Best Practices

  • Model Optimization - Optimize your model for size and performance using quantization and pruning techniques.
  • Resource Management - Meadow devices have limited resources. Keep your models small and efficient.
  • Input Validation - Ensure that the input data matches the model's expected format and data types.
  • Error Handling - Implement error handling to manage issues with model loading and inference execution; a minimal sketch covering this and the input-validation point follows this list.
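Here is a minimal sketch of those last two points, validating the input length before inference and catching failures. The expected length is an assumption about your model, and the library's actual exception types may differ:

// Validate the input before handing it to the model
const int ExpectedInputLength = 300;      // assumed size for this model
if (inputData.Length != ExpectedInputLength)
{
    throw new ArgumentException(
        $"Expected {ExpectedInputLength} values, got {inputData.Length}");
}

try
{
    gestureModel.ModelInput.SetData(inputData);
    var output = gestureModel.Predict();
    // ... interpret output as shown above ...
}
catch (Exception ex)
{
    Console.WriteLine($"Inference failed: {ex.Message}");
}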