
Draft PR: testing #2628

Draft · wants to merge 73 commits into base: main
82a3b15
testing
ddavis-2015 Aug 8, 2024
9af664d
Add DEPTHWISE_CONV kernel unit test.
ddavis-2015 Aug 16, 2024
cc1a4a0
cleanup
ddavis-2015 Aug 16, 2024
13c5f71
Add DEPTHWISE_CONV compressed kernel unit tests.
ddavis-2015 Aug 17, 2024
8dbda0f
Fix XTENSA compiler warnings.
ddavis-2015 Aug 24, 2024
56cb57e
Add FULLY_CONNECTED compression support for XTENSA.
ddavis-2015 Aug 25, 2024
37bec9a
fix xtensa FULLY_CONNECTED copyright
ddavis-2015 Aug 25, 2024
3f1d113
Add DEPTHWISE_CONV compression support to XTENSA.
ddavis-2015 Aug 25, 2024
120bdbd
Change DecompressToBuffer API
ddavis-2015 Aug 27, 2024
7c406d7
Add compression support for DEPTHWISE_CONV for XTENSA P6.
ddavis-2015 Aug 27, 2024
f1a49ea
Add compression support for CONV2D for XTENSA P6.
ddavis-2015 Aug 27, 2024
404bd9c
Add compression support for FULLY_CONNECTED for XTENSA P6.
ddavis-2015 Aug 27, 2024
7fed169
fix copyright
ddavis-2015 Aug 27, 2024
23d2111
Add TFLM compression testing to XTENSA CI
ddavis-2015 Aug 27, 2024
26e53de
update micro speech test arena size
ddavis-2015 Aug 27, 2024
d458447
more viewer output
ddavis-2015 Aug 30, 2024
41dd3a3
update code style check exclusions
ddavis-2015 Aug 30, 2024
ff63716
Add profiling to decompression code using external context.
ddavis-2015 Sep 4, 2024
b392b6e
viewer works with more models
ddavis-2015 Sep 6, 2024
e85d9b7
compile without optimized kernel directory (reference kernels only)
ddavis-2015 Sep 6, 2024
7ac50a0
Update CONCATENATION kernel to more closely match TfLite.
ddavis-2015 Sep 7, 2024
e099723
Add TFLM compression support for CONCATENATION.
ddavis-2015 Sep 8, 2024
ab3cc79
Allow compression script to handle value tables of length 1.
ddavis-2015 Sep 8, 2024
37e8ee3
update copyright
ddavis-2015 Sep 9, 2024
4c6e6ad
Update compression script to support buffer sharing between tensors i…
ddavis-2015 Sep 9, 2024
a072400
Add TFLM compression support to the ASSIGN_VARIABLE kernel
ddavis-2015 Sep 9, 2024
c77d48e
model viewer also shows tensor is_variable value
ddavis-2015 Sep 11, 2024
307430f
make profilers static.
ddavis-2015 Sep 11, 2024
ac2054a
Handle \ (backslash) appearing within the metadata
ddavis-2015 Sep 11, 2024
5f26a6b
refactor bit width 4 decompression code.
ddavis-2015 Sep 19, 2024
be37537
Add TFLM compression README
ddavis-2015 Sep 19, 2024
14bc841
TFLM compression bitwidth 2 optimization
ddavis-2015 Sep 20, 2024
7cf87b5
TFLM compression bitwidth 2 improvements
ddavis-2015 Sep 21, 2024
2bac419
cleanup and fix issues with bitwidth 2 decompression optimization
ddavis-2015 Sep 21, 2024
46e8fe6
TFLM compression bitwidth 3 optimized
ddavis-2015 Sep 24, 2024
d51a035
compression document updates
ddavis-2015 Sep 24, 2024
1cf9c78
refactor and further optimize bitwidth 4 decompression.
ddavis-2015 Sep 24, 2024
e9b4621
Fix name of bitwidth 4 decompression method
ddavis-2015 Sep 26, 2024
8c0a01c
refactoring
ddavis-2015 Sep 27, 2024
24c1152
refactoring
ddavis-2015 Sep 28, 2024
3f901a7
add comment
ddavis-2015 Sep 28, 2024
445278f
Merge branch 'main' into bq-compression
ddavis-2015 Sep 29, 2024
d1a281e
Improve compression documentation
ddavis-2015 Oct 2, 2024
eb85180
add xtensa bit width 4 decompression code
ddavis-2015 Oct 4, 2024
487c17a
add xtensa any bit width decompression code
ddavis-2015 Oct 4, 2024
b84853c
testing
ddavis-2015 Oct 8, 2024
2388549
refactor decompression code into reference and platform specific
ddavis-2015 Oct 11, 2024
99c6e35
revert to original Cadence bit width 4 code
ddavis-2015 Oct 11, 2024
ad2b1c3
reduce HIFI5 decompression code size
ddavis-2015 Oct 13, 2024
77bb05d
align compressed tensor data as per schema
ddavis-2015 Oct 14, 2024
9bb2b63
cleanup
ddavis-2015 Oct 14, 2024
b318421
add decompression unit test
ddavis-2015 Oct 17, 2024
81ecf2e
decompression unit test improvements
ddavis-2015 Oct 18, 2024
efedcc2
working decompression unit test
ddavis-2015 Oct 18, 2024
a110e41
fix C++ bitwidth 6 & 7 decompression
ddavis-2015 Oct 18, 2024
4894265
pre-merge empty commit
ddavis-2015 Oct 19, 2024
3d765e6
Squashed commit of the following:
ddavis-2015 Oct 19, 2024
4a02b22
cleanup
ddavis-2015 Oct 19, 2024
821dfdf
Cleanup header file usage.
ddavis-2015 Oct 20, 2024
d96b614
fix CI code style errors
ddavis-2015 Oct 21, 2024
122db20
add compression build/test to bazel default test script
ddavis-2015 Oct 21, 2024
459569a
use kernel optimzer level -O3 and -LNO:simd for Xtensa HIFI5
ddavis-2015 Oct 21, 2024
2d825e3
fix code style errors.
ddavis-2015 Oct 21, 2024
b43c16c
fix code style errors.
ddavis-2015 Oct 21, 2024
7dc34a9
update to latest Cadence decompression code.
ddavis-2015 Oct 22, 2024
df29a4c
header file cleanup.
ddavis-2015 Oct 23, 2024
8c53ee3
first cut at op relocation script.
ddavis-2015 Oct 30, 2024
fc4b473
additional operator relocation check.
ddavis-2015 Oct 30, 2024
ac4a0a4
cleanup
ddavis-2015 Oct 31, 2024
4dca8e7
keep generic benchmark application binary size stable regardless of w…
ddavis-2015 Oct 31, 2024
0d889e0
Fix MicroProfiler bug with ClearEvents().
ddavis-2015 Nov 2, 2024
40e7530
fix arena
ddavis-2015 Nov 4, 2024
7776cda
remove [[maybe_unused]]
ddavis-2015 Nov 5, 2024
2 changes: 1 addition & 1 deletion .github/workflows/sync.yml
Original file line number Diff line number Diff line change
@@ -62,5 +62,5 @@ jobs:
author: TFLM-bot <[email protected]>
body: "BUG=automated sync from upstream\nNO_CHECK_TFLITE_FILES=automated sync from upstream"
labels: bot:sync-tf, ci:run
reviewers: rascani
reviewers: suleshahid

3 changes: 3 additions & 0 deletions .style.yapf
@@ -0,0 +1,3 @@
[style]
based_on_style = pep8
indent_width = 2
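With the repository-level `.style.yapf` above in place, yapf picks up the style automatically, which is why the explicit `--style` argument is dropped from CONTRIBUTING.md in this PR. A hypothetical invocation from the repository root:

```shell
pip install yapf
# Style (pep8, indent_width=2) is discovered from .style.yapf automatically.
yapf log_parser.py -i
```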
14 changes: 14 additions & 0 deletions BUILD
@@ -7,3 +7,17 @@ refresh_compile_commands(
name = "refresh_compile_commands",
targets = ["//..."],
)

load("@bazel_skylib//rules:common_settings.bzl", "bool_flag")

bool_flag(
name = "with_compression",
build_setting_default = False,
)

config_setting(
name = "with_compression_enabled",
flag_values = {
":with_compression": "True",
},
)
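The `bool_flag` plus `config_setting` pair above is the standard Bazel pattern for a user-settable build flag. A sketch of how it might be toggled on the command line (the target paths here are assumptions, not taken from this PR):

```shell
# Enable compression support for a build or test run; the flag name
# //:with_compression comes from the BUILD hunk above.
bazel build --//:with_compression=True //tensorflow/lite/micro/...
bazel test  --//:with_compression=True //tensorflow/lite/micro/kernels:conv_test
```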
4 changes: 2 additions & 2 deletions CODEOWNERS
@@ -1,4 +1,4 @@
* @tensorflow/micro

/.github/ @advaitjain @rockyrhodes @rascani
/ci/ @advaitjain @rockyrhodes @rascani
/.github/ @advaitjain @rockyrhodes @suleshahid
/ci/ @advaitjain @rockyrhodes @suleshahid
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -190,7 +190,7 @@ Below are some tips that might be useful and improve the development experience.

```
pip install yapf
yapf log_parser.py -i --style='{based_on_style: pep8, indent_width: 2}'
yapf log_parser.py -i
```

* Add a git hook to check for code style etc. prior to creating a pull request:
34 changes: 0 additions & 34 deletions ci/temp_patches/tf_update_visibility.patch

This file was deleted.

11 changes: 7 additions & 4 deletions codegen/build_def.bzl
@@ -1,6 +1,6 @@
""" Build rule for generating ML inference code from TFLite model. """

load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load("//tensorflow/lite/micro:build_def.bzl", "tflm_cc_library")

def tflm_inference_library(
name,
@@ -20,12 +20,12 @@ def tflm_inference_library(
srcs = [tflite_model],
outs = [name + ".h", name + ".cc"],
tools = ["//codegen:code_generator"],
cmd = "$(location //codegen:code_generator) " +
cmd = "$(location //codegen:code_generator) --quiet " +
"--model=$< --output_dir=$(RULEDIR) --output_name=%s" % name,
visibility = ["//visibility:private"],
)

native.cc_library(
tflm_cc_library(
name = name,
hdrs = [name + ".h"],
srcs = [name + ".cc"],
@@ -39,6 +39,9 @@ def tflm_inference_library(
"//tensorflow/lite/micro:micro_common",
"//tensorflow/lite/micro:micro_context",
],
copts = micro_copts(),
target_compatible_with = select({
"//conditions:default": [],
"//:with_compression_enabled": ["@platforms//:incompatible"],
}),
visibility = visibility,
)
22 changes: 20 additions & 2 deletions codegen/code_generator.py
@@ -15,14 +15,14 @@
""" Generates C/C++ source code capable of performing inference for a model. """

import os
import pathlib

from absl import app
from absl import flags
from collections.abc import Sequence

from tflite_micro.codegen import inference_generator
from tflite_micro.codegen import graph
from tflite_micro.tensorflow.lite.tools import flatbuffer_utils

# Usage information:
# Default:
@@ -48,15 +48,33 @@
"'model' basename."),
required=False)

_QUIET = flags.DEFINE_bool(
name="quiet",
default=False,
help="Suppress informational output (e.g., for use in for build system)",
required=False)


def main(argv: Sequence[str]) -> None:
if _QUIET.value:
restore = os.environ.get("TF_CPP_MIN_LOG_LEVEL", "0")
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
from tflite_micro.tensorflow.lite.tools import flatbuffer_utils
os.environ["TF_CPP_MIN_LOG_LEVEL"] = restore
else:
from tflite_micro.tensorflow.lite.tools import flatbuffer_utils

output_dir = _OUTPUT_DIR.value or os.path.dirname(_MODEL_PATH.value)
output_name = _OUTPUT_NAME.value or os.path.splitext(
os.path.basename(_MODEL_PATH.value))[0]

model = flatbuffer_utils.read_model(_MODEL_PATH.value)

print("Generating inference code for model: {}".format(_MODEL_PATH.value))
if not _QUIET.value:
print("Generating inference code for model: {}".format(_MODEL_PATH.value))
output_path = pathlib.Path(output_dir) / output_name
print(f"Generating {output_path}.h")
print(f"Generating {output_path}.cc")

inference_generator.generate(output_dir, output_name,
graph.OpCodeTable([model]), graph.Graph(model))
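The `--quiet` path in the hunk above works by raising TensorFlow's C++ log threshold around the heavyweight import and then restoring it. A minimal standalone sketch of the same save/restore pattern (with `json` standing in for `flatbuffer_utils`, since importing the real module is not reproducible here):

```python
import os

def import_quietly(quiet: bool):
    """Sketch of the --quiet import pattern: temporarily raise
    TF_CPP_MIN_LOG_LEVEL to 3 (errors only) so the chatty import stays
    silent, then restore the prior value."""
    if quiet:
        restore = os.environ.get("TF_CPP_MIN_LOG_LEVEL", "0")
        os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
        import json  # the noisy import happens while the level is raised
        os.environ["TF_CPP_MIN_LOG_LEVEL"] = restore
    else:
        import json
    return json
```

Note one quirk the sketch mirrors from the PR: when the variable was previously unset, `restore` defaults to `"0"`, so the function leaves `TF_CPP_MIN_LOG_LEVEL=0` set rather than unsetting it.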
1 change: 0 additions & 1 deletion codegen/inference_generator.py
@@ -35,7 +35,6 @@ class ModelData(TypedDict):

def _render(output_file: pathlib.Path, template_file: pathlib.Path,
model_data: ModelData) -> None:
print("Generating {}".format(output_file))
t = template.Template(filename=str(template_file))
with output_file.open('w+') as file:
file.write(t.render(**model_data))
5 changes: 2 additions & 3 deletions codegen/runtime/BUILD
@@ -1,12 +1,11 @@
load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load("//tensorflow/lite/micro:build_def.bzl", "tflm_cc_library")

package(default_visibility = ["//visibility:public"])

cc_library(
tflm_cc_library(
name = "micro_codegen_context",
srcs = ["micro_codegen_context.cc"],
hdrs = ["micro_codegen_context.h"],
copts = micro_copts(),
deps = [
"//tensorflow/lite/c:common",
"//tensorflow/lite/kernels:op_macros",
5 changes: 2 additions & 3 deletions python/tflite_micro/BUILD
@@ -7,7 +7,7 @@ load("@rules_python//python:packaging.bzl", "py_package", "py_wheel")
load("@tflm_pip_deps//:requirements.bzl", "requirement")
load(
"//tensorflow/lite/micro:build_def.bzl",
"micro_copts",
"tflm_cc_library",
)
load(
"//tensorflow:extra_rules.bzl",
@@ -24,15 +24,14 @@ package_group(
packages = tflm_python_op_resolver_friends(),
)

cc_library(
tflm_cc_library(
name = "python_ops_resolver",
srcs = [
"python_ops_resolver.cc",
],
hdrs = [
"python_ops_resolver.h",
],
copts = micro_copts(),
visibility = [
":op_resolver_friends",
"//tensorflow/lite/micro/integration_tests:__subpackages__",
5 changes: 0 additions & 5 deletions python/tflite_micro/interpreter_wrapper.cc
@@ -104,11 +104,6 @@ bool CheckTensor(const TfLiteTensor* tensor) {
return false;
}

if (tensor->sparsity != nullptr) {
PyErr_SetString(PyExc_ValueError, "TFLM doesn't support sparse tensors");
return false;
}

int py_type_num = TfLiteTypeToPyArrayType(tensor->type);
if (py_type_num == NPY_NOTYPE) {
PyErr_SetString(PyExc_ValueError, "Unknown tensor type.");
5 changes: 2 additions & 3 deletions signal/micro/kernels/BUILD
@@ -1,11 +1,11 @@
load(
"//tensorflow/lite/micro:build_def.bzl",
"micro_copts",
"tflm_cc_library",
)

package(licenses = ["notice"])

cc_library(
tflm_cc_library(
name = "register_signal_ops",
srcs = [
"delay.cc",
@@ -31,7 +31,6 @@ cc_library(
"irfft.h",
"rfft.h",
],
copts = micro_copts(),
visibility = [
"//tensorflow/lite/micro",
],
10 changes: 7 additions & 3 deletions tensorflow/compiler/mlir/lite/core/api/BUILD
@@ -1,15 +1,19 @@
load("//tensorflow/lite:build_def.bzl", "tflite_copts")
load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load(
"//tensorflow/lite/micro:build_def.bzl",
"tflm_cc_library",
"tflm_copts",
)

package(
default_visibility = ["//visibility:public"],
licenses = ["notice"],
)

cc_library(
tflm_cc_library(
name = "error_reporter",
srcs = ["error_reporter.cc"],
hdrs = ["error_reporter.h"],
copts = tflite_copts() + micro_copts(),
copts = tflm_copts() + tflite_copts(),
deps = [],
)
14 changes: 9 additions & 5 deletions tensorflow/lite/core/api/BUILD
@@ -1,12 +1,16 @@
load("//tensorflow/lite:build_def.bzl", "tflite_copts")
load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load(
"//tensorflow/lite/micro:build_def.bzl",
"tflm_cc_library",
"tflm_copts",
)

package(
default_visibility = ["//visibility:private"],
licenses = ["notice"],
)

cc_library(
tflm_cc_library(
name = "api",
srcs = [
"flatbuffer_conversions.cc",
@@ -17,7 +21,7 @@ cc_library(
"flatbuffer_conversions.h",
"tensor_utils.h",
],
copts = tflite_copts() + micro_copts(),
copts = tflm_copts() + tflite_copts(),
visibility = ["//visibility:public"],
deps = [
":error_reporter",
@@ -33,13 +37,13 @@ cc_library(
# also exported by the "api" target, so that targets which only want to depend
# on these small abstract base class modules can express more fine-grained
# dependencies without pulling in tensor_utils and flatbuffer_conversions.
cc_library(
tflm_cc_library(
name = "error_reporter",
hdrs = [
"error_reporter.h",
"//tensorflow/compiler/mlir/lite/core/api:error_reporter.h",
],
copts = tflite_copts() + micro_copts(),
copts = tflm_copts() + tflite_copts(),
visibility = [
"//visibility:public",
],
12 changes: 6 additions & 6 deletions tensorflow/lite/experimental/microfrontend/lib/BUILD
@@ -144,7 +144,7 @@ cc_test(
name = "filterbank_test",
srcs = ["filterbank_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":filterbank",
Expand All @@ -156,7 +156,7 @@ cc_test(
name = "frontend_test",
srcs = ["frontend_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":frontend",
Expand All @@ -168,7 +168,7 @@ cc_test(
name = "log_scale_test",
srcs = ["log_scale_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":log_scale",
Expand All @@ -180,7 +180,7 @@ cc_test(
name = "noise_reduction_test",
srcs = ["noise_reduction_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":noise_reduction",
Expand All @@ -192,7 +192,7 @@ cc_test(
name = "pcan_gain_control_test",
srcs = ["pcan_gain_control_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":pcan_gain_control",
Expand All @@ -204,7 +204,7 @@ cc_test(
name = "window_test",
srcs = ["window_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":window",
10 changes: 7 additions & 3 deletions tensorflow/lite/kernels/BUILD
@@ -1,5 +1,9 @@
load("//tensorflow/lite:build_def.bzl", "tflite_copts")
load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load(
"//tensorflow/lite/micro:build_def.bzl",
"tflm_cc_library",
"tflm_copts",
)

package(
default_visibility = [
@@ -17,15 +21,15 @@ cc_library(
deps = ["//tensorflow/lite/micro:micro_log"],
)

cc_library(
tflm_cc_library(
name = "kernel_util",
srcs = [
"kernel_util.cc",
],
hdrs = [
"kernel_util.h",
],
copts = tflite_copts() + micro_copts(),
copts = tflm_copts() + tflite_copts(),
deps = [
"//tensorflow/lite:array",
"//tensorflow/lite:kernel_api",