diff --git a/tensorflow/lite/micro/docs/compression.md b/tensorflow/lite/micro/docs/compression.md
index 267bdfb1f7b..e29069fbfef 100644
--- a/tensorflow/lite/micro/docs/compression.md
+++ b/tensorflow/lite/micro/docs/compression.md
@@ -234,15 +234,48 @@ Only the following methods are required to implement decompression within kernel
 
 * `MicroContext::AllocateDecompressionScratchBuffer` ([micro_context.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_context.h)):
 Allocates a scratch memory buffer within the `MicroInterpreter` to hold the
-decompressed tensor data.
+decompressed tensor data. The returned scratch memory handle must be retained
+(typically in the kernel `OpData`) for use during the kernel inference operation.
 * `MicroContext::GetTensorCompressionData` ([micro_context.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_context.h)):
 Retrieves compressed tensor information (see [compression.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/compression.h)).
 * `tflite::micro::GetTensorData` ([kernel_util.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/kernels/kernel_util.h)):
-The four argument version of this method will automatically decompress the
-tensor data into the supplied scratch memory buffer.
+The four-parameter version of this method automatically decompresses the
+tensor data into the supplied scratch memory buffer. A scratch buffer remains
+valid only for the lifetime of the kernel operator currently being processed.
+If the supplied tensor is compressed, each call to the four-parameter version
+of this method performs a fresh decompression operation.
 
 Please see the [TRANSPOSE_CONV](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/kernels/transpose_conv.cc)
-reference kernel code for an example of how tensor decompression is implemented.
+reference kernel code for a complete example of how to implement tensor
+decompression within a kernel.
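+
+The following condensed sketch shows how these methods typically fit together
+in a kernel. It is a sketch only, not the reference implementation: the
+`OpData` member names, the `kFilterTensor` index, and the surrounding
+`Prepare`/`Eval` scaffolding are illustrative, and the exact method signatures
+should be confirmed against
+[micro_context.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_context.h)
+and [kernel_util.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/kernels/kernel_util.h):
+```
+#ifdef USE_TFLM_COMPRESSION
+
+TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
+  OpData* data = static_cast<OpData*>(node->user_data);
+  MicroContext* micro_context = GetMicroContext(context);
+
+  // Allocate a scratch buffer to hold the decompressed tensor data, and
+  // retain the returned handle in OpData for use during Eval().
+  data->filter_scratch_index =
+      micro_context->AllocateDecompressionScratchBuffer(node, kFilterTensor);
+
+  return kTfLiteOk;
+}
+
+TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
+  const OpData* data = static_cast<const OpData*>(node->user_data);
+  MicroContext* micro_context = GetMicroContext(context);
+  const TfLiteEvalTensor* filter =
+      tflite::micro::GetEvalInput(context, node, kFilterTensor);
+
+  // Compression metadata; nullptr if the tensor is not compressed.
+  const CompressionTensorData* filter_comp =
+      micro_context->GetTensorCompressionData(node, kFilterTensor);
+
+  // The four-parameter version decompresses into the scratch buffer on each
+  // call when the tensor is compressed; otherwise it returns the tensor data
+  // directly.
+  const int8_t* filter_data = tflite::micro::GetTensorData<int8_t>(
+      micro_context, filter, filter_comp, data->filter_scratch_index);
+
+  // ... compute the operator output using filter_data ...
+  return kTfLiteOk;
+}
+
+#endif  // USE_TFLM_COMPRESSION
+```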
+
+### Alternate Decompression Memory
+
+Alternate decompression memory regions allow specialized memory available to
+the processor to be used as the target of a tensor decompression operation.
+Such memory is typically mapped by the application through a linker script.
+The application would then use a C++ attribute of the form:
+```
+__attribute__((section(".your-specialized-memory")))
+```
+to link one or more application symbols to the specialized memory region.
+
+Only a single API is required to use alternate decompression memory regions in
+an application:
+* `MicroInterpreter::SetDecompressionMemory` ([micro_interpreter.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_interpreter.h)):
+Specifies the address and size of each alternate decompression memory region.
+This method must be called before the application calls
+`MicroInterpreter::AllocateTensors`. The lifetime of the method parameter must
+equal the lifetime of the `MicroInterpreter` instance. The memory regions
+specified by the method parameter must not overlap, and each region is
+considered to be non-contiguous with all other regions.
+
+Specifying alternate decompression memory will cause
+`MicroContext::AllocateDecompressionScratchBuffer` and
+`tflite::micro::GetTensorData` (the four-parameter version) to automatically
+attempt to allocate the decompression destination buffer from available memory
+in one of the alternate memory regions. If no alternate memory region of
+sufficient size is available, a scratch buffer will be allocated within the
+`MicroInterpreter` arena.
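+
+The following minimal sketch shows one way an application might use this API.
+The `MicroContext::AlternateMemoryRegion` descriptor and the exact
+`SetDecompressionMemory` signature are assumptions for illustration; confirm
+them against
+[micro_interpreter.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_interpreter.h):
+```
+// Symbol placed in the specialized memory region by the linker script.
+__attribute__((section(".your-specialized-memory")))
+alignas(16) static uint8_t alt_decompression_memory[16 * 1024];
+
+// The regions must remain valid for the lifetime of the MicroInterpreter
+// instance, so they are given static storage duration here.
+static const std::initializer_list<tflite::MicroContext::AlternateMemoryRegion>
+    kAltMemoryRegions = {
+        {alt_decompression_memory, sizeof(alt_decompression_memory)},
+};
+
+// ... construct the MicroInterpreter (model, op resolver, arena) as usual ...
+
+// Must be called before AllocateTensors().
+if (interpreter.SetDecompressionMemory(kAltMemoryRegions) != kTfLiteOk) {
+  // handle the error
+}
+if (interpreter.AllocateTensors() != kTfLiteOk) {
+  // handle the error
+}
+```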
 
 # How to Compress a Model
 
@@ -253,61 +286,54 @@ a tensor to just four values among the tensor elements, a fixed-width of two bit
 can be used for each element. This would result in nearly a four-fold decrease
 in the size of an INT8 tensor.
 
-Tensors to compress are specified with the `--tensors="#, #, ...#"` flag.
-Per-channel quantized tensors using an alternate quantization axis (such as the
-filter tensor supplied to DEPTHWISE_CONV) must use the `--alt_axis_tensors=` flag.
-
-First, align your binned model:
+Tensors to compress are specified with a `YAML` file. For example, if tensors
+5, 10, 11, 22 of subgraph 0 of the model are to be compressed, the contents of
+the file would be as follows:
 ```
-bazel run --cache_test_results=no --test_output=all -s tensorflow/lite/micro/tools:tflite_flatbuffer_align -- binned_model.tflite binned_and_aligned.tflite
+tensors:
+
+  - subgraph: 0
+    tensor: 5
+    compression:
+      - lut:
+          index_bitwidth: 4
+
+  - subgraph: 0
+    tensor: 10
+    compression:
+      - lut:
+          index_bitwidth: 4
+
+  - subgraph: 0
+    tensor: 11
+    compression:
+      - lut:
+          index_bitwidth: 2
+
+  - subgraph: 0
+    tensor: 22
+    compression:
+      - lut:
+          index_bitwidth: 2
 ```
+Note that each tensor can have a different bit width (1 through 7 bits).
 
-Next, compress the model, supplying as arguments the target tensors:
+Once the `YAML` specification is ready, compress the model using the following:
 ```
-bazel run --cache_test_results=no --test_output=all -s tensorflow/lite/micro/compression:compress -- binned_and_aligned.tflite compressed.tflite --tensors="1, 2, 7, 10, 3, 5"
+bazel run -s tensorflow/lite/micro/compression:compress -- --input=binned.tflite --output=compressed.tflite --spec=spec.yaml
 ```
 
 Then align the model:
 ```
-bazel run --cache_test_results=no --test_output=all -s tensorflow/lite/micro/tools:tflite_flatbuffer_align -- compressed.tflite compressed_and_aligned.tflite
+bazel run -s tensorflow/lite/micro/tools:tflite_flatbuffer_align -- compressed.tflite compressed_and_aligned.tflite
 ```
 
 # The Generic Benchmark Application
 
 The Generic Benchmark Application can be used to see the size of the model, the
 amount of arena memory used, and the size of the interpreter data structures
-including those involved with tensor conpression.
-
-The benchmark also reports total inference time, as well as time taken for
-tensor decompression. Timing data may be either wall-clock time or processor
-cycle time. The type of timing data is dependent on the underlying platform
-and/or simulator used. In some cases, no timing data is available.
-
-The benchmark output includes a CRC32 of the output tensor(s) for comparison
-within the same platform on which the benchmark is run.
+including those involved with tensor compression. The benchmark also reports
+total inference time, as well as time taken for tensor decompression.
 
 For additional information on the Generic Benchmark Application, please refer to
 this [document](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/tools/benchmarking/README.md).
-
-## How to Run the Generic Benchmark Application
-
-The Generic Benchmark Application can only be built using `make`.
-
-### Without Compression
-
-HIFI3 example:
-```
-make -f ${TENSORFLOW_ROOT}tensorflow/lite/micro/tools/make/Makefile BUILD_TYPE=default run_tflm_benchmark -j$(nproc) GENERIC_BENCHMARK_MODEL_PATH=binned_and_aligned.tflite TARGET=xtensa TARGET_ARCH=hifi3 OPTIMIZED_KERNEL_DIR=xtensa XTENSA_CORE=HIFI_190304_swupgrade
-```
-
-The model path can be an abolute path, or relative to your local TFLM repository.
-
-### With Compression
-
-HIFI5 example:
-```
-make -f ${TENSORFLOW_ROOT}tensorflow/lite/micro/tools/make/Makefile BUILD_TYPE=default run_tflm_benchmark -j$(nproc) GENERIC_BENCHMARK_MODEL_PATH=compressed_and_aligned.tflite TARGET=xtensa TARGET_ARCH=hifi5 OPTIMIZED_KERNEL_DIR=xtensa XTENSA_CORE=PRD_H5_RDO_07_01_2022 USE_TFLM_COMPRESSION=1
-```
-
-The model path can be an abolute path, or relative to your local TFLM repository.
-