updated compression and generic benchmark documentation
ddavis-2015 committed Dec 11, 2024
1 parent 9353cda commit 4d07fef
Showing 2 changed files with 135 additions and 50 deletions.
113 changes: 73 additions & 40 deletions tensorflow/lite/micro/docs/compression.md
Only the following methods are required to implement decompression within kernels:
* `MicroContext::AllocateDecompressionScratchBuffer` ([micro_context.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_context.h)):
Allocates a scratch memory buffer within the `MicroInterpreter` to hold the
decompressed tensor data. The returned scratch memory handle must be retained
(typically through kernel `OpData`) for use during the kernel inference operation.
* `MicroContext::GetTensorCompressionData` ([micro_context.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_context.h)):
Retrieves compressed tensor information (see [compression.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/compression.h)).
* `tflite::micro::GetTensorData` ([kernel_util.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/kernels/kernel_util.h)):
The four-parameter version of this method will automatically decompress the
tensor data into the supplied scratch memory buffer. The lifetime of a scratch
buffer is the same as the lifetime of the current kernel operator being processed.
If the supplied tensor is compressed, each call to the four-parameter version
of this method will perform a decompression operation.
Please see the [TRANSPOSE_CONV](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/kernels/transpose_conv.cc)
reference kernel code for a complete example of how to implement tensor
decompression within a kernel; a simplified sketch follows.
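The following minimal sketch shows how these APIs fit together in a kernel's
`Prepare` and `Eval` phases, loosely following the TRANSPOSE_CONV reference
kernel. The `OpData` layout, the `kWeightsTensor` index, and the exact call
signatures are illustrative assumptions; consult the headers linked above and
the reference kernel for the authoritative declarations.
```
// Hedged sketch of kernel-side decompression (requires a build with
// USE_TFLM_COMPRESSION=1). OpData, kWeightsTensor, and the exact signatures
// are assumptions for illustration only.
struct OpData {
  // Scratch buffer handle, retained from Prepare() for use in Eval().
  int weights_scratch_index;
};

TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
  MicroContext* micro_context = GetMicroContext(context);
  auto* data = static_cast<OpData*>(node->user_data);
  // Reserve scratch memory sized for the decompressed tensor data and
  // retain the returned handle for use during inference.
  data->weights_scratch_index =
      micro_context->AllocateDecompressionScratchBuffer(node, kWeightsTensor);
  return kTfLiteOk;
}

TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
  const auto* data = static_cast<const OpData*>(node->user_data);
  MicroContext* micro_context = GetMicroContext(context);
  const TfLiteEvalTensor* weights =
      tflite::micro::GetEvalInput(context, node, kWeightsTensor);
  // Compression metadata for the tensor (nullptr when uncompressed).
  const CompressionTensorData* comp_td =
      micro_context->GetTensorCompressionData(node, kWeightsTensor);
  // The four-parameter overload decompresses into the scratch buffer on
  // every call when comp_td is non-null; otherwise it returns the tensor
  // data unchanged.
  const int8_t* weights_data = tflite::micro::GetTensorData<int8_t>(
      micro_context, weights, comp_td, data->weights_scratch_index);
  // ... perform the kernel computation using weights_data ...
  return kTfLiteOk;
}
```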
### Alternate Decompression Memory
Alternate decompression memory regions allow specialized memory available to
the processor to be used as the target of a tensor decompression operation.
Such memory is typically mapped by the application through a linker script.
The application would then use a C++ attribute of the form:
```
__attribute__((section(".your-specialized-memory")))
```
to link one or more application symbols to the specialized memory region.
Only a single API is required to use alternate decompression memory regions in
an application:
* `MicroInterpreter::SetDecompressionMemory` ([micro_interpreter.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_interpreter.h)):
Specify the address and size of each alternate decompression
memory region. This method must be called before the application calls
`MicroInterpreter::AllocateTensors`. The lifetime of the method parameter must
equal the lifetime of the `MicroInterpreter` instance. The memory regions
specified by the method parameter must not overlap, and each region is considered
to be non-contiguous with all other regions.
Specifying alternate decompression memory will cause `MicroContext::AllocateDecompressionScratchBuffer`
and `tflite::micro::GetTensorData` (the four-parameter version)
to automatically attempt to allocate memory for the decompression destination
buffer from available memory in one of the alternate memory regions. If no
alternate memory region of sufficient size is available, a scratch buffer will
be allocated within the `MicroInterpreter` arena.
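As a rough sketch of how an application wires this together (the
`AlternateMemoryRegion` type and the exact `SetDecompressionMemory` signature
are assumptions; see [micro_interpreter.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_interpreter.h)
for the actual declarations):
```
// Hedged sketch: registering a single alternate decompression memory region.
#include <cstdint>
#include <initializer_list>

#include "tensorflow/lite/micro/micro_interpreter.h"

// Buffer placed in the specialized memory region mapped by the linker
// script (the section name here is hypothetical).
__attribute__((section(".your-specialized-memory")))
alignas(16) static uint8_t g_alt_memory[16 * 1024];

// The regions must not overlap, and the list must remain valid for the
// lifetime of the MicroInterpreter instance, so it has static storage.
static const std::initializer_list<tflite::MicroContext::AlternateMemoryRegion>
    g_alt_regions = {{g_alt_memory, sizeof(g_alt_memory)}};

void SetupInterpreter(tflite::MicroInterpreter& interpreter) {
  // Must be called before AllocateTensors().
  interpreter.SetDecompressionMemory(g_alt_regions);
  interpreter.AllocateTensors();
}
```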
# How to Compress a Model
By binning a tensor to just four values among the tensor elements, a fixed-width
of two bits can be used for each element. This would result in nearly a
four-fold decrease in the size of an INT8 tensor (each 8-bit element is
replaced by a 2-bit index, plus a small value table).
Tensors to compress are specified with a `YAML` file. For example, if tensors
5, 10, 11, 22 of subgraph 0 of the model are to be compressed, the contents of
the file would be as follows:
```
tensors:

  - subgraph: 0
    tensor: 5
    compression:
      - lut:
          index_bitwidth: 4

  - subgraph: 0
    tensor: 10
    compression:
      - lut:
          index_bitwidth: 4

  - subgraph: 0
    tensor: 11
    compression:
      - lut:
          index_bitwidth: 2

  - subgraph: 0
    tensor: 22
    compression:
      - lut:
          index_bitwidth: 2
```
Note that each tensor can have a different bit width (1 through 7 bits).
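Save the specification to a file (for example `spec.yaml`, as used in the
compression command below) and supply it to the compression tool with the
`--spec` flag.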
First, align your binned model:
```
bazel run -s tensorflow/lite/micro/tools:tflite_flatbuffer_align -- binned_model.tflite binned_and_aligned.tflite
```
Next, compress the model:
```
bazel run -s tensorflow/lite/micro/compression:compress -- --input=binned_and_aligned.tflite --output=compressed.tflite --spec=spec.yaml
```
Then align the model:
```
bazel run -s tensorflow/lite/micro/tools:tflite_flatbuffer_align -- compressed.tflite compressed_and_aligned.tflite
```
# The Generic Benchmark Application
The Generic Benchmark Application can be used to see the size of the model, the
amount of arena memory used, and the size of the interpreter data structures,
including those involved with tensor compression.
The benchmark also reports total inference time, as well as time taken for
tensor decompression. Timing data may be either wall-clock time or processor
cycle time. The type of timing data is dependent on the underlying platform
and/or simulator used. In some cases, no timing data is available.
The benchmark output includes a CRC32 of the output tensor(s) for comparison
within the same platform on which the benchmark is run.
For additional information on the Generic Benchmark Application, please refer to
this [document](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/tools/benchmarking/README.md).
## How to Run the Generic Benchmark Application
The Generic Benchmark Application can only be built using `make`.
### Without Compression
HIFI3 example:
```
make -f ${TENSORFLOW_ROOT}tensorflow/lite/micro/tools/make/Makefile BUILD_TYPE=default run_tflm_benchmark -j$(nproc) GENERIC_BENCHMARK_MODEL_PATH=binned_and_aligned.tflite TARGET=xtensa TARGET_ARCH=hifi3 OPTIMIZED_KERNEL_DIR=xtensa XTENSA_CORE=HIFI_190304_swupgrade
```
The model path can be an absolute path, or relative to your local TFLM repository.
### With Compression
HIFI5 example:
```
make -f ${TENSORFLOW_ROOT}tensorflow/lite/micro/tools/make/Makefile BUILD_TYPE=default run_tflm_benchmark -j$(nproc) GENERIC_BENCHMARK_MODEL_PATH=compressed_and_aligned.tflite TARGET=xtensa TARGET_ARCH=hifi5 OPTIMIZED_KERNEL_DIR=xtensa XTENSA_CORE=PRD_H5_RDO_07_01_2022 USE_TFLM_COMPRESSION=1
```
The model path can be an absolute path, or relative to your local TFLM repository.
72 changes: 62 additions & 10 deletions tensorflow/lite/micro/tools/benchmarking/README.md
# Generic Benchmarking Tool
This tool can be used to benchmark any TfLite format model. The tool can be
compiled in one of two ways:
1. Such that it takes command line arguments, allowing the path to the model
file to be specified as a program argument
2. With a model compiled into the tool, allowing use in any simulator or on
any hardware platform

All tool output is prefaced with metadata. The metadata consists of compiler
version and flags, and the target information supplied on the `make` command
line. For some targets, version information for external libraries used with
optimized kernels is available.

If the model is compiled into the tool, additional model analysis information
is added to the metadata. This includes data usage within the model, each model
subgraph and operation in inference execution order, and information on all
tensors in the model.

The tool will output a CRC32 of all input tensors, followed by the profiling
times for the pre-inference phase of the MicroInterpreter. Next is the output
of the inference profiling times for each operator, and a summary total for
all inference operations. Finally, a CRC32 of all output tensors and the
MicroInterpreter arena memory usage are output.

# Generic Benchmarking Tool build/run instructions
Building the tool with the model compiled in uses two additional `make`
variables:
* `GENERIC_BENCHMARK_MODEL_PATH`: the path to the TfLite format model file.
The model path can be an absolute path, or relative to your local TFLM repository.
This variable is required.
* `GENERIC_BENCHMARK_ARENA_SIZE`: the size of the TFLM interpreter arena, in bytes.
This variable is optional.

## Tested targets
* x86
* cortex_m_qemu (no timing data)
* Xtensa (p6, hifi3, hifi5)
* cortex_m_corstone_300

## Use with compressed models
When the tool is used with compressed models, additional profiling timing will
be output: the profiling timing for each tensor decompressed during inference,
and a summary total. While this profiling timing is output separately, the
timing for decompression is also included in the normal inference profiling
timing and summary total.

To use the tool with a compressed model, the `make` variables must include:
```
USE_TFLM_COMPRESSION=1
```

The tensor decompression operation can target an alternate destination memory
region, allowing specialized memory to be used as the decompressed data
destination. The tool supports a single alternate decompression region.
Use the following `make` variables to specify an alternate decompression region:
* `GENERIC_BENCHMARK_ALT_MEM_ATTR`: a C++ attribute specifying the alternate
memory as mapped through a linker script.
* `GENERIC_BENCHMARK_ALT_MEM_SIZE`: the alternate memory region size in bytes.

Both `make` variables are required (along with `USE_TFLM_COMPRESSION=1`) for the
tool to use the alternate decompression region.

An example build and run command line for Xtensa with alternate decompression memory:
```
make -f tensorflow/lite/micro/tools/make/Makefile BUILD_TYPE=default run_tflm_benchmark -j$(nproc) GENERIC_BENCHMARK_MODEL_PATH=compressed.tflite TARGET=xtensa TARGET_ARCH=hifi3 OPTIMIZED_KERNEL_DIR=xtensa XTENSA_CORE=HIFI_190304_swupgrade USE_TFLM_COMPRESSION=1 GENERIC_BENCHMARK_ALT_MEM_ATTR='__attribute__\(\(section\(\".specialized_memory_region\"\)\)\)' GENERIC_BENCHMARK_ALT_MEM_SIZE=`expr 64 \* 1024`
```

For more information on model compression, please see the
[compression document](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/docs/compression.md).

## Build and run for x86
Build for command line arguments:
Build and run with model compiled into tool:
```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa TARGET_ARCH=vision_p6 OPTIMIZED_KERNEL_DIR=xtensa XTENSA_CORE=P6_200528 BUILD_TYPE=default run_tflm_benchmark -j$(nproc) GENERIC_BENCHMARK_MODEL_PATH=/tmp/keyword_scrambled.tflite GENERIC_BENCHMARK_ARENA_SIZE=`expr 50 \* 1024`
```

Build and run with a compressed model compiled into the tool:
```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa TARGET_ARCH=vision_p6 OPTIMIZED_KERNEL_DIR=xtensa XTENSA_CORE=P6_200528 BUILD_TYPE=default run_tflm_benchmark -j$(nproc) GENERIC_BENCHMARK_MODEL_PATH=/tmp/keyword_scrambled.tflite GENERIC_BENCHMARK_ARENA_SIZE=`expr 50 \* 1024` USE_TFLM_COMPRESSION=1
```

## Build and run for Cortex-M using Corstone 300 simulator
Build and run with model compiled into tool:
```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=cortex_m_corstone_300 ...
```

## Build and run using Bazel

This is only for the x86 command line argument build, and does not contain metadata:
```
bazel build tensorflow/lite/micro/tools/benchmarking:tflm_benchmark
bazel-bin/tensorflow/lite/micro/tools/benchmarking/tflm_benchmark tensorflow/lite/micro/models/person_detect.tflite
```

## Example output with metadata and built-in model layer information

This sample output is for Cortex-M using Corstone 300:
(sample output elided)
