Porting to a new platform

At its core, TFLM is a portable library that can be used on a variety of target hardware to run inference on TfLite models.

Before integrating TFLM with specific hardware, you will need to complete some tasks that are outside the scope of the TFLM project, including:

  • Toolchain setup - TFLM requires support for C++17
  • Set up and installation of board-specific SDKs and IDEs
  • Compiler flag and linker setup
  • Integrating peripherals such as cameras, microphones and accelerometers to provide the sensor inputs for the ML models.

In this guide we outline our recommended approach for integrating TFLM with new target hardware, assuming that you have already set up a development and debugging environment for your board independent of TFLM.

Step 1: Build TFLM Static Library with Reference Kernels

Use the TFLM project generation script to create a directory tree containing only the sources that are necessary to build the core TFLM library.

python3 tensorflow/lite/micro/tools/project_generation/create_tflm_tree.py \
  -e hello_world \
  -e micro_speech \
  -e person_detection \
  /tmp/tflm-tree

This will create a folder that looks like the following at the top-level:

examples  LICENSE  tensorflow  third_party

All the code in the tensorflow and third_party folders can be compiled into a single static library (for example libtflm.a) using your platform-specific build system.

TFLM's third-party dependencies are separated out in case you need to build the third-party code as shared libraries to avoid symbol collisions.

Note that for IDEs, it might be sufficient to simply include the folder created by the TFLM project generation script into the overall IDE tree.

Step 2: Customize Logging and Timing Functions for your Platform

Replace the following files with versions that are specific to your target platform:

  • debug_log.cc
  • micro_time.cc
  • system_setup.cc

These can be placed anywhere in your directory tree. The only requirement is that, when linking TFLM into a binary, the implementations of the functions declared in debug_log.h, micro_time.h and system_setup.h can be found.

For example, the Sparkfun Edge port provides implementations of these functions for that board; a generic sketch of what such replacements can look like follows below.
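The following is an illustrative sketch, not code from any existing port: everything prefixed with uart_ or systick_, plus the kClockRateHz constant, is a hypothetical stand-in for your SDK's own calls, and the single-string DebugLog signature shown here should be checked against the debug_log.h in your TFLM checkout, since it has changed across versions. In a real port, the three functions would live in your replacement debug_log.cc, micro_time.cc and system_setup.cc.

#include <cstdint>

#include "tensorflow/lite/micro/debug_log.h"
#include "tensorflow/lite/micro/micro_time.h"
#include "tensorflow/lite/micro/system_setup.h"

// Hypothetical SDK hooks -- substitute your board's equivalents.
extern "C" void uart_init(uint32_t baud_rate);
extern "C" void uart_write_byte(char c);
extern "C" uint32_t systick_count();
constexpr uint32_t kClockRateHz = 64000000;  // assumed tick frequency

// debug_log.cc replacement: route TFLM's log output to the UART.
extern "C" void DebugLog(const char* s) {
  while (*s != '\0') {
    uart_write_byte(*s++);
  }
}

// micro_time.cc replacement: back TFLM's profiling timers with a
// monotonic hardware counter.
namespace tflite {

uint32_t ticks_per_second() { return kClockRateHz; }

uint32_t GetCurrentTimeTicks() { return systick_count(); }

}  // namespace tflite

// system_setup.cc replacement: one-time board bring-up, called before
// any other TFLM functionality is used.
namespace tflite {

void InitializeTarget() { uart_init(115200); }

}  // namespace tflite

If your board does not yet have a convenient counter, GetCurrentTimeTicks() can simply return 0 to get everything linking; logging and inference still work, and only profiling output becomes meaningless.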

Step 3: Running the hello_world Example

Once you have completed step 2, you should be set up to run the hello_world example and see the output over the UART.

cp -r /tmp/tflm-tree/examples/hello_world <path-to-platform-specific-hello-world>

The hello_world example should not need any customization and you should be able to directly build and run it.
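Before building the full example, it can be useful to flash a bare-bones binary that exercises only the step 2 customizations. Below is a minimal smoke-test sketch; it is not part of the hello_world sources, and it again assumes the single-string DebugLog signature.

#include "tensorflow/lite/micro/debug_log.h"
#include "tensorflow/lite/micro/system_setup.h"

int main(int argc, char* argv[]) {
  // Board bring-up from your system_setup.cc replacement.
  tflite::InitializeTarget();

  // If this string appears on the UART, the toolchain, libtflm.a and
  // the step 2 customizations are all wired up correctly.
  DebugLog("TFLM smoke test\r\n");

  while (true) {
  }
}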

Step 4: Building and Customizing Additional Examples

We recommend that you fork the TFLM examples and then modify them as needed (for example, to add support for peripherals) to run on your target platform.

Step 5: Integrating Optimized Kernel Implementations

TFLM has optimized kernel implementations for a variety of targets that are in sub-folders of the kernels directory.

It is possible to use the project generation script to create a tree with these optimized kernel implementations (and associated third party dependencies).

For example:

python3 tensorflow/lite/micro/tools/project_generation/create_tflm_tree.py \
  -e hello_world -e micro_speech -e person_detection \
  --makefile_options="TARGET=cortex_m_generic OPTIMIZED_KERNEL_DIR=cmsis_nn TARGET_ARCH=project_generation" \
  /tmp/tflm-cmsis

will create an output tree with all the sources and headers needed to use the optimized cmsis_nn kernels for Cortex-M platforms.
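Switching to optimized kernels is a build-time change only: since reference and optimized kernels share the same registration API, application code is identical for both trees and simply links against whichever implementations were generated. As a sketch, with placeholder ops (your model determines the real list):

#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// The same registration code works against reference kernels and against
// cmsis_nn ones; only the contents of the generated tree differ.
TfLiteStatus RegisterOps(tflite::MicroMutableOpResolver<3>& resolver) {
  if (resolver.AddConv2D() != kTfLiteOk) return kTfLiteError;
  if (resolver.AddFullyConnected() != kTfLiteOk) return kTfLiteError;
  return resolver.AddSoftmax();
}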

Advanced Integration Topics

In order to have tighter coupling between your platform-specific TFLM integration and the upstream TFLM repository, you might want to consider the following:

  1. Set up a GitHub repository for your platform
  2. Nightly sync between TFLM and your platform-specific GitHub repository
  3. Using GitHub actions for CI

For some pointers on how to set this up, we refer you to the GitHub repositories that integrate TFLM for the following platforms:

  • Arduino: supported by the TFLM team
  • Sparkfun Edge: for demonstration purposes only, not officially supported.

Once you are set up with continuous integration and the ability to integrate newer versions of TFLM with your platform, feel free to add a build badge to TFLM's Community Supported TFLM Examples.

Getting Help

Here are some ways that you can reach out to get help.