
[DRAFT] Fix op tensor nhwc #3190

Open · wants to merge 13 commits into base: develop
Conversation

@atamazov (Contributor) commented Aug 9, 2024

WIP for #2860

…ut is not set explicitly and strides are known. [API] Extend layouts with 1/2/3D defaults and "unknown", for internal use.
… for internal use. [tensor] Add support for non-default 1/2/3D layouts.
…icit "NCHWc4" and "NCHWc8", "CHWNc" - to "CHWNc4" and "CHWNc8". This is necessary to fully support conversion of strings to enums.
@atamazov (Contributor, Author) commented Aug 9, 2024

@CAHEK7 @averinevg For early preview

@CAHEK7 (Contributor) commented Aug 13, 2024

Tensor operations are layout-agnostic; the most important limitation is that all the tensors must have the same layout. Another limitation comes from the current implementation: none of the kernels, even the generic one, supports strides in the last dimension. So even if the layout is not the default but the last dimension is tightly packed, we can still use most of the kernels (in other words, we must check that the last dimension's stride is 1, and can then skip any layout checks).
Alternatively, if the layout is transposed via strides, we should rearrange the lengths and strides of all the tensors so that the tightly packed dimension becomes the last one.
Layout information is then required only to keep all the tensors in the same layout.
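The rearrangement mentioned above could be sketched roughly as follows. This is a hypothetical helper, not MIOpen code; the name `NormalizeToPackedLast` and the `PermutedDesc` struct are invented for illustration. It sorts dimensions by decreasing stride, so the stride-1 dimension lands last and the same permutation can be applied to every tensor of the elementwise op:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <numeric>
#include <vector>

// Hypothetical helper (not MIOpen API): given lengths and strides of a tensor
// whose layout is expressed purely via strides, permute the dimensions so they
// are ordered by decreasing stride. The tightly packed (stride == 1) dimension
// ends up last, which is what the kernels require.
struct PermutedDesc
{
    std::vector<std::size_t> lengths;
    std::vector<std::size_t> strides;
};

inline PermutedDesc NormalizeToPackedLast(const std::vector<std::size_t>& lengths,
                                          const std::vector<std::size_t>& strides)
{
    assert(lengths.size() == strides.size());
    std::vector<std::size_t> perm(lengths.size());
    std::iota(perm.begin(), perm.end(), 0);
    // Stable sort of dimension indices by decreasing stride.
    std::stable_sort(perm.begin(), perm.end(), [&](std::size_t a, std::size_t b) {
        return strides[a] > strides[b];
    });
    PermutedDesc out;
    for(auto i : perm)
    {
        out.lengths.push_back(lengths[i]);
        out.strides.push_back(strides[i]);
    }
    return out;
}
```

For an NHWC tensor viewed with logical NCHW lengths {2, 3, 4, 5} and strides {60, 1, 15, 3}, the permutation reorders it to lengths {2, 4, 5, 3} with strides {60, 15, 3, 1}, i.e. the packed dimension is last.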

Technically we could even omit the `if(lay_a != lay_b)` check if we can normalize tensor B's layout and still have the last non-singular dimension's stride equal to 1.
For example, to an NCHW tensor A we can apply any NXXX tensor B whose dimensions are N111, because only one scalar value is applied to each whole CHW instance.
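The two conditions suggested above (the last non-singleton dimension is tightly packed, and B is a per-batch N111 scalar) could be checked with something like this sketch; both function names are invented for illustration and are not MIOpen API:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical check (not MIOpen API): true when the last non-singleton
// dimension has stride 1, i.e. the data the kernels iterate over fastest is
// tightly packed, regardless of the nominal layout.
inline bool LastNonSingletonPacked(const std::vector<std::size_t>& lens,
                                   const std::vector<std::size_t>& strides)
{
    for(std::size_t i = lens.size(); i-- > 0;)
        if(lens[i] != 1)
            return strides[i] == 1;
    return true; // all dimensions are singleton; any strides are acceptable
}

// Hypothetical check (not MIOpen API): true when tensor B is N111-shaped,
// i.e. a single scalar is broadcast over each CHW instance of tensor A, so
// B's layout is irrelevant and the layout-equality check can be skipped.
inline bool IsPerBatchScalar(const std::vector<std::size_t>& b_lens)
{
    return !b_lens.empty() &&
           std::all_of(b_lens.begin() + 1, b_lens.end(),
                       [](std::size_t l) { return l == 1; });
}
```

With these, the dispatch logic could skip layout comparison whenever `IsPerBatchScalar(b_lens)` holds, and otherwise only verify `LastNonSingletonPacked` for each operand after normalization.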

… & adapt to changes from PR ROCm#3213

# RESOLVED Conflicts:
#	src/include/miopen/conv/problem_description.hpp
#	src/include/miopen/graphapi/tensor.hpp
#	src/include/miopen/tensor.hpp
#	src/tensor.cpp
…t4D5D, GetLayoutEnum, GetLayout_str: add support for n_dims = {1,2,3}. Remove the "4D5D" suffixes.
…oes not resolve the issue; remove it. [tensor] Removed notion of "default layout".
@atamazov (Contributor, Author) commented:
🛠️ Detection of "mismatched tensor layouts" cannot help resolve the problem. The PR will be reworked.

@junliume junliume marked this pull request as ready for review September 27, 2024 23:08