Draft PR: testing #2628

Draft: 95 commits to be merged into base branch main.

Commits (95)
All commits are by ddavis-2015.

82a3b15  testing (Aug 8, 2024)
9af664d  Add DEPTHWISE_CONV kernel unit test. (Aug 16, 2024)
cc1a4a0  cleanup (Aug 16, 2024)
13c5f71  Add DEPTHWISE_CONV compressed kernel unit tests. (Aug 17, 2024)
8dbda0f  Fix XTENSA compiler warnings. (Aug 24, 2024)
56cb57e  Add FULLY_CONNECTED compression support for XTENSA. (Aug 25, 2024)
37bec9a  fix xtensa FULLY_CONNECTED copyright (Aug 25, 2024)
3f1d113  Add DEPTHWISE_CONV compression support to XTENSA. (Aug 25, 2024)
120bdbd  Change DecompressToBuffer API (Aug 27, 2024)
7c406d7  Add compression support for DEPTHWISE_CONV for XTENSA P6. (Aug 27, 2024)
f1a49ea  Add compression support for CONV2D for XTENSA P6. (Aug 27, 2024)
404bd9c  Add compression support for FULLY_CONNECTED for XTENSA P6. (Aug 27, 2024)
7fed169  fix copyright (Aug 27, 2024)
23d2111  Add TFLM compression testing to XTENSA CI (Aug 27, 2024)
26e53de  update micro speech test arena size (Aug 27, 2024)
d458447  more viewer output (Aug 30, 2024)
41dd3a3  update code style check exclusions (Aug 30, 2024)
ff63716  Add profiling to decompression code using external context. (Sep 4, 2024)
b392b6e  viewer works with more models (Sep 6, 2024)
e85d9b7  compile without optimized kernel directory (reference kernels only) (Sep 6, 2024)
7ac50a0  Update CONCATENATION kernel to more closely match TfLite. (Sep 7, 2024)
e099723  Add TFLM compression support for CONCATENATION. (Sep 8, 2024)
ab3cc79  Allow compression script to handle value tables of length 1. (Sep 8, 2024)
37e8ee3  update copyright (Sep 9, 2024)
4c6e6ad  Update compression script to support buffer sharing between tensors i… (Sep 9, 2024)
a072400  Add TFLM compression support to the ASSIGN_VARIABLE kernel (Sep 9, 2024)
c77d48e  model viewer also shows tensor is_variable value (Sep 11, 2024)
307430f  make profilers static. (Sep 11, 2024)
ac2054a  Handle \ (backslash) appearing within the metadata (Sep 11, 2024)
5f26a6b  refactor bit width 4 decompression code. (Sep 19, 2024)
be37537  Add TFLM compression README (Sep 19, 2024)
14bc841  TFLM compression bitwidth 2 optimization (Sep 20, 2024)
7cf87b5  TFLM compression bitwidth 2 improvements (Sep 21, 2024)
2bac419  cleanup and fix issues with bitwidth 2 decompression optimization (Sep 21, 2024)
46e8fe6  TFLM compression bitwidth 3 optimized (Sep 24, 2024)
d51a035  compression document updates (Sep 24, 2024)
1cf9c78  refactor and further optimize bitwidth 4 decompression. (Sep 24, 2024)
e9b4621  Fix name of bitwidth 4 decompression method (Sep 26, 2024)
8c0a01c  refactoring (Sep 27, 2024)
24c1152  refactoring (Sep 28, 2024)
3f901a7  add comment (Sep 28, 2024)
445278f  Merge branch 'main' into bq-compression (Sep 29, 2024)
d1a281e  Improve compression documentation (Oct 2, 2024)
eb85180  add xtensa bit width 4 decompression code (Oct 4, 2024)
487c17a  add xtensa any bit width decompression code (Oct 4, 2024)
b84853c  testing (Oct 8, 2024)
2388549  refactor decompression code into reference and platform specific (Oct 11, 2024)
99c6e35  revert to original Cadence bit width 4 code (Oct 11, 2024)
ad2b1c3  reduce HIFI5 decompression code size (Oct 13, 2024)
77bb05d  align compressed tensor data as per schema (Oct 14, 2024)
9bb2b63  cleanup (Oct 14, 2024)
b318421  add decompression unit test (Oct 17, 2024)
81ecf2e  decompression unit test improvements (Oct 18, 2024)
efedcc2  working decompression unit test (Oct 18, 2024)
a110e41  fix C++ bitwidth 6 & 7 decompression (Oct 18, 2024)
4894265  pre-merge empty commit (Oct 19, 2024)
3d765e6  Squashed commit of the following: (Oct 19, 2024)
4a02b22  cleanup (Oct 19, 2024)
821dfdf  Cleanup header file usage. (Oct 20, 2024)
d96b614  fix CI code style errors (Oct 21, 2024)
122db20  add compression build/test to bazel default test script (Oct 21, 2024)
459569a  use kernel optimzer level -O3 and -LNO:simd for Xtensa HIFI5 (Oct 21, 2024)
2d825e3  fix code style errors. (Oct 21, 2024)
b43c16c  fix code style errors. (Oct 21, 2024)
7dc34a9  update to latest Cadence decompression code. (Oct 22, 2024)
df29a4c  header file cleanup. (Oct 23, 2024)
8c53ee3  first cut at op relocation script. (Oct 30, 2024)
fc4b473  additional operator relocation check. (Oct 30, 2024)
ac4a0a4  cleanup (Oct 31, 2024)
4dca8e7  keep generic benchmark application binary size stable regardless of w… (Oct 31, 2024)
0d889e0  Fix MicroProfiler bug with ClearEvents(). (Nov 2, 2024)
40e7530  fix arena (Nov 4, 2024)
7776cda  remove [[maybe_unused]] (Nov 5, 2024)
cfd9890  expand model_facade (Nov 10, 2024)
f651c88  single pending ops queue (Nov 10, 2024)
5e1a1c9  changes to make the memory planner debug output easier to interpret (Nov 11, 2024)
ae6a207  Implement alternate profiler for MicroInterpreter. (Nov 18, 2024)
fddf003  Fix typo. (Nov 18, 2024)
2f8cead  Revert FakeMicroContext changes for alternate profiler. (Nov 18, 2024)
83dafce  cleanup (Nov 19, 2024)
0a49b2a  Add input tensor CRC to Generic Benchmark application. (Nov 20, 2024)
300751d  Update to latest Cadence code. Int8 any bitwidth on normal quant axi… (Nov 20, 2024)
a6dc3e0  Pre-rebase to main empty commit (Nov 20, 2024)
defad29  Squashed commit of the following: (Nov 25, 2024)
2788d32  support for alternate decompression memory. (Dec 2, 2024)
b9c62b9  initial refactor of compression unit tests (Dec 6, 2024)
81795e7  finished refactor of compression unit tests (Dec 6, 2024)
cc1fb65  place TestConvFloat() back into conv_test_common.cc (Dec 6, 2024)
9353cda  fix concatenation unit test to match refactored unit test helper fram… (Dec 6, 2024)
4d07fef  updated compression and generic benchmark documentation (Dec 11, 2024)
81e548b  Add GetOptionalTensorData (four parameter version). (Dec 13, 2024)
4c5079b  Update transpose_conv for optional bias tensors when compression is e… (Dec 13, 2024)
e788456  Fixes to depthwise_conv for optional bias tensor when compression is … (Dec 13, 2024)
01eb927  Fixed scratch size calculation for conv for HiFi targets for scenario… (Dec 13, 2024)
e54866c  Fixes to CONV for optional bias tensor when compression is enabled. (Dec 14, 2024)
Files changed
9 changes: 9 additions & 0 deletions .bazelrc
@@ -33,6 +33,15 @@ build --cxxopt -std=c++17
# Treat warnings as errors
build --copt -Werror

# Common options for --config=ci
build:ci --curses=no
build:ci --color=no
build:ci --noshow_progress
build:ci --noshow_loading_progress
build:ci --show_timestamps
build:ci --terminal_columns=0
build:ci --verbose_failures

# When building with the address sanitizer
# E.g., bazel build --config asan
build:asan --repo_env CC=clang
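The new `--config=ci` group only adjusts console output (no color or progress bars, timestamps, verbose failures), which keeps CI logs readable. As a rough sketch of how a job might opt in, with an illustrative target pattern:

```
# Layer the ci config on top of any normal build or test invocation.
bazel test --config=ci //tensorflow/lite/micro/...
```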
8 changes: 4 additions & 4 deletions .github/mergify.yml
@@ -2,9 +2,13 @@ queue_rules:
- name: default
checks_timeout: 2 h
branch_protection_injection_mode: queue
merge_method: squash
conditions:
- base=main
- label=ci:ready_to_merge
commit_message_template: |
{{ title }} (#{{ number }})
{{ body_raw }}


pull_request_rules:
@@ -15,10 +19,6 @@ pull_request_rules:
actions:
queue:
name: default
method: squash
commit_message_template: |
{{ title }} (#{{ number }})
{{ body_raw }}

- name: remove ci:ready_to_merge label
conditions:
2 changes: 1 addition & 1 deletion .github/workflows/sync.yml
@@ -62,5 +62,5 @@ jobs:
author: TFLM-bot <[email protected]>
body: "BUG=automated sync from upstream\nNO_CHECK_TFLITE_FILES=automated sync from upstream"
labels: bot:sync-tf, ci:run
reviewers: rascani
reviewers: suleshahid

3 changes: 3 additions & 0 deletions .style.yapf
@@ -0,0 +1,3 @@
[style]
based_on_style = pep8
indent_width = 2
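With this repository-level .style.yapf in place, yapf no longer needs an explicit --style argument; a minimal sketch (the file name is only an example):

```
pip install yapf
# yapf discovers .style.yapf in the repository root and applies pep8 with 2-space indents.
yapf -i log_parser.py
```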
14 changes: 14 additions & 0 deletions BUILD
@@ -7,3 +7,17 @@ refresh_compile_commands(
name = "refresh_compile_commands",
targets = ["//..."],
)

load("@bazel_skylib//rules:common_settings.bzl", "bool_flag")

bool_flag(
name = "with_compression",
build_setting_default = False,
)

config_setting(
name = "with_compression_enabled",
flag_values = {
":with_compression": "True",
},
)
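The bool_flag plus config_setting pair exposes compression as a user-settable build flag that other BUILD files can select() on. A hedged example of enabling it from the command line (the target pattern is illustrative):

```
# --//:with_compression=true flips the bool_flag defined in the root BUILD file,
# which makes the :with_compression_enabled config_setting match.
bazel test --//:with_compression=true //tensorflow/lite/micro/...
```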
4 changes: 2 additions & 2 deletions CODEOWNERS
@@ -1,4 +1,4 @@
* @tensorflow/micro

/.github/ @advaitjain @rockyrhodes @rascani
/ci/ @advaitjain @rockyrhodes @rascani
/.github/ @advaitjain @rockyrhodes @suleshahid
/ci/ @advaitjain @rockyrhodes @suleshahid
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -190,7 +190,7 @@ Below are some tips that might be useful and improve the development experience.

```
pip install yapf
yapf log_parser.py -i --style='{based_on_style: pep8, indent_width: 2}'
yapf log_parser.py -i'
```

* Add a git hook to check for code style etc. prior to creating a pull request:
3 changes: 2 additions & 1 deletion WORKSPACE
@@ -86,7 +86,7 @@ load("//python:py_pkg_cc_deps.bzl", "py_pkg_cc_deps")

py_pkg_cc_deps(
name = "numpy_cc_deps",
includes = ["numpy/core/include"],
includes = ["numpy/_core/include"],
pkg = requirement("numpy"),
)

@@ -101,6 +101,7 @@ py_pkg_cc_deps(
http_archive(
name = "nnlib_hifi4",
build_file = "@tflite_micro//third_party/xtensa/nnlib_hifi4:nnlib_hifi4.BUILD",
integrity = "sha256-ulZ+uY4dRsbDUMZbZtD972eghclWQrqYRb0Y4Znfyyc=",
strip_prefix = "nnlib-hifi4-34f5f995f28d298ae2b6e2ba6e76c32a5cb34989",
urls = ["https://github.com/foss-xtensa/nnlib-hifi4/archive/34f5f995f28d298ae2b6e2ba6e76c32a5cb34989.zip"],
)
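The added integrity attribute pins the nnlib_hifi4 archive using Bazel's Subresource-Integrity format (sha256, base64-encoded). If the archive ever needs to be re-pinned, a matching value can be computed roughly like this (sketch only; the downloaded file name is a placeholder):

```
# Produces a value of the form sha256-<base64 digest> for http_archive(integrity = ...).
echo "sha256-$(openssl dgst -sha256 -binary nnlib-hifi4.zip | openssl base64 -A)"
```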
34 changes: 0 additions & 34 deletions ci/temp_patches/tf_update_visibility.patch

This file was deleted.

29 changes: 23 additions & 6 deletions codegen/build_def.bzl
@@ -1,6 +1,6 @@
""" Build rule for generating ML inference code from TFLite model. """

load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load("//tensorflow/lite/micro:build_def.bzl", "tflm_cc_library")

def tflm_inference_library(
name,
@@ -18,14 +18,28 @@ def tflm_inference_library(
native.genrule(
name = generated_target,
srcs = [tflite_model],
outs = [name + ".h", name + ".cc"],
outs = [
name + ".h",
name + ".cc",
name + ".log",
],
tools = ["//codegen:code_generator"],
cmd = "$(location //codegen:code_generator) " +
"--model=$< --output_dir=$(RULEDIR) --output_name=%s" % name,
cmd = """
# code_generator (partially because it uses Tensorflow) outputs
# much noise to the console. Intead, write output to a logfile to
# prevent noise in the error-free bazel output.
NAME=%s
LOGFILE=$(RULEDIR)/$$NAME.log
$(location //codegen:code_generator) \
--model=$< \
--output_dir=$(RULEDIR) \
--output_name=$$NAME \
>$$LOGFILE 2>&1
""" % name,
visibility = ["//visibility:private"],
)

native.cc_library(
tflm_cc_library(
name = name,
hdrs = [name + ".h"],
srcs = [name + ".cc"],
@@ -39,6 +53,9 @@ def tflm_inference_library(
"//tensorflow/lite/micro:micro_common",
"//tensorflow/lite/micro:micro_context",
],
copts = micro_copts(),
target_compatible_with = select({
"//conditions:default": [],
"//:with_compression_enabled": ["@platforms//:incompatible"],
}),
visibility = visibility,
)
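The target_compatible_with select marks tflm_inference_library targets as incompatible whenever //:with_compression is enabled, presumably because codegen does not yet handle compressed models. In wildcard builds Bazel then skips those targets instead of failing to compile them; a sketch with an illustrative package path:

```
# Incompatible codegen targets are skipped when building with compression enabled.
bazel build --//:with_compression=true //codegen/...
```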
22 changes: 20 additions & 2 deletions codegen/code_generator.py
@@ -15,14 +15,14 @@
""" Generates C/C++ source code capable of performing inference for a model. """

import os
import pathlib

from absl import app
from absl import flags
from collections.abc import Sequence

from tflite_micro.codegen import inference_generator
from tflite_micro.codegen import graph
from tflite_micro.tensorflow.lite.tools import flatbuffer_utils

# Usage information:
# Default:
@@ -48,15 +48,33 @@
"'model' basename."),
required=False)

_QUIET = flags.DEFINE_bool(
name="quiet",
default=False,
help="Suppress informational output (e.g., for use in for build system)",
required=False)


def main(argv: Sequence[str]) -> None:
if _QUIET.value:
restore = os.environ.get("TF_CPP_MIN_LOG_LEVEL", "0")
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
from tflite_micro.tensorflow.lite.tools import flatbuffer_utils
os.environ["TF_CPP_MIN_LOG_LEVEL"] = restore
else:
from tflite_micro.tensorflow.lite.tools import flatbuffer_utils

output_dir = _OUTPUT_DIR.value or os.path.dirname(_MODEL_PATH.value)
output_name = _OUTPUT_NAME.value or os.path.splitext(
os.path.basename(_MODEL_PATH.value))[0]

model = flatbuffer_utils.read_model(_MODEL_PATH.value)

print("Generating inference code for model: {}".format(_MODEL_PATH.value))
if not _QUIET.value:
print("Generating inference code for model: {}".format(_MODEL_PATH.value))
output_path = pathlib.Path(output_dir) / output_name
print(f"Generating {output_path}.h")
print(f"Generating {output_path}.cc")

inference_generator.generate(output_dir, output_name,
graph.OpCodeTable([model]), graph.Graph(model))
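The genrule now captures the generator's console noise in a per-target .log file, and the new --quiet flag suppresses the informational prints and TensorFlow C++ logging during import. Run directly, a hypothetical invocation might look like this (paths are placeholders):

```
# With --quiet, only errors reach the terminal; progress messages are suppressed.
bazel run //codegen:code_generator -- \
  --model=/tmp/model.tflite --output_dir=/tmp/out --output_name=model --quiet
```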
1 change: 0 additions & 1 deletion codegen/inference_generator.py
@@ -35,7 +35,6 @@ class ModelData(TypedDict):

def _render(output_file: pathlib.Path, template_file: pathlib.Path,
model_data: ModelData) -> None:
print("Generating {}".format(output_file))
t = template.Template(filename=str(template_file))
with output_file.open('w+') as file:
file.write(t.render(**model_data))
5 changes: 2 additions & 3 deletions codegen/runtime/BUILD
@@ -1,12 +1,11 @@
load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load("//tensorflow/lite/micro:build_def.bzl", "tflm_cc_library")

package(default_visibility = ["//visibility:public"])

cc_library(
tflm_cc_library(
name = "micro_codegen_context",
srcs = ["micro_codegen_context.cc"],
hdrs = ["micro_codegen_context.h"],
copts = micro_copts(),
deps = [
"//tensorflow/lite/c:common",
"//tensorflow/lite/kernels:op_macros",
5 changes: 2 additions & 3 deletions python/tflite_micro/BUILD
@@ -7,7 +7,7 @@ load("@rules_python//python:packaging.bzl", "py_package", "py_wheel")
load("@tflm_pip_deps//:requirements.bzl", "requirement")
load(
"//tensorflow/lite/micro:build_def.bzl",
"micro_copts",
"tflm_cc_library",
)
load(
"//tensorflow:extra_rules.bzl",
@@ -24,15 +24,14 @@ package_group(
packages = tflm_python_op_resolver_friends(),
)

cc_library(
tflm_cc_library(
name = "python_ops_resolver",
srcs = [
"python_ops_resolver.cc",
],
hdrs = [
"python_ops_resolver.h",
],
copts = micro_copts(),
visibility = [
":op_resolver_friends",
"//tensorflow/lite/micro/integration_tests:__subpackages__",
5 changes: 0 additions & 5 deletions python/tflite_micro/interpreter_wrapper.cc
@@ -104,11 +104,6 @@ bool CheckTensor(const TfLiteTensor* tensor) {
return false;
}

if (tensor->sparsity != nullptr) {
PyErr_SetString(PyExc_ValueError, "TFLM doesn't support sparse tensors");
return false;
}

int py_type_num = TfLiteTypeToPyArrayType(tensor->type);
if (py_type_num == NPY_NOTYPE) {
PyErr_SetString(PyExc_ValueError, "Unknown tensor type.");
8 changes: 6 additions & 2 deletions python/tflite_micro/numpy_utils.cc
@@ -41,8 +41,8 @@ int TfLiteTypeToPyArrayType(TfLiteType tf_lite_type) {
case kTfLiteFloat16:
return NPY_FLOAT16;
case kTfLiteBFloat16:
// TODO(b/329491949): NPY_BFLOAT16 currently doesn't exist
return NPY_FLOAT16;
// TODO(b/329491949): Supports other ml_dtypes user-defined types.
return NPY_USERDEF;
case kTfLiteFloat64:
return NPY_FLOAT64;
case kTfLiteInt32:
@@ -114,6 +114,10 @@ TfLiteType TfLiteTypeFromPyType(int py_type) {
return kTfLiteComplex64;
case NPY_COMPLEX128:
return kTfLiteComplex128;
case NPY_USERDEF:
// User-defined types are defined in ml_dtypes. (bfloat16, float8, etc.)
// Fow now, we only support bfloat16.
return kTfLiteBFloat16;
// Avoid default so compiler errors created when new types are made.
}
return kTfLiteNoType;
5 changes: 2 additions & 3 deletions signal/micro/kernels/BUILD
@@ -1,11 +1,11 @@
load(
"//tensorflow/lite/micro:build_def.bzl",
"micro_copts",
"tflm_cc_library",
)

package(licenses = ["notice"])

cc_library(
tflm_cc_library(
name = "register_signal_ops",
srcs = [
"delay.cc",
@@ -31,7 +31,6 @@ cc_library(
"irfft.h",
"rfft.h",
],
copts = micro_copts(),
visibility = [
"//tensorflow/lite/micro",
],
10 changes: 7 additions & 3 deletions tensorflow/compiler/mlir/lite/core/api/BUILD
@@ -1,15 +1,19 @@
load("//tensorflow/lite:build_def.bzl", "tflite_copts")
load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load(
"//tensorflow/lite/micro:build_def.bzl",
"tflm_cc_library",
"tflm_copts",
)

package(
default_visibility = ["//visibility:public"],
licenses = ["notice"],
)

cc_library(
tflm_cc_library(
name = "error_reporter",
srcs = ["error_reporter.cc"],
hdrs = ["error_reporter.h"],
copts = tflite_copts() + micro_copts(),
copts = tflm_copts() + tflite_copts(),
deps = [],
)
1 change: 1 addition & 0 deletions tensorflow/lite/BUILD
@@ -8,6 +8,7 @@ cc_library(
srcs = ["array.cc"],
hdrs = ["array.h"],
deps = [
"//tensorflow/lite/c:common",
"//tensorflow/lite/core/c:common",
],
)
2 changes: 2 additions & 0 deletions tensorflow/lite/array.cc
@@ -15,6 +15,8 @@ limitations under the License.

#include "tensorflow/lite/array.h"

#include "tensorflow/lite/c/common.h"

namespace tflite {
namespace array_internal {
