[BugFix] Skip identity conversions for 8-bit and 16-bit types #3622
Conversation
… it is an identity conversion

This change affects the following type of shaders (which do identity conversions for 8/16-bit types):

```
layout(local_size_x = 1, local_size_y = 1, local_size_z = 1) in;

layout(binding = 0) readonly buffer A { TYPE data_a[]; };
layout(binding = 1) writeonly buffer D { TYPE data_d[]; };

void main() {
    const uint i = gl_GlobalInvocationID.x;
    data_d[i] = TYPE(data_a[i]);
}
```

Earlier (before this fix), it generated incorrect SPIR-V convert instructions.
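For concreteness, here is a minimal sketch of one instantiation of the pattern above, with TYPE = float16_t (the extension choice and the comment are mine, describing the behavior reported in this PR):

```
#version 450
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require

layout(local_size_x = 1, local_size_y = 1, local_size_z = 1) in;

layout(binding = 0) readonly buffer A { float16_t data_a[]; };
layout(binding = 1) writeonly buffer D { float16_t data_d[]; };

void main() {
    const uint i = gl_GlobalInvocationID.x;
    // float16_t -> float16_t is an identity conversion. With this fix it
    // compiles to a plain load/store; before the fix, glslang emitted a
    // convert instruction whose operand and result types were the same.
    data_d[i] = float16_t(data_a[i]);
}
```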
The Contribution License Agreement instructed me to include this.
If you work for a tech company, they may own the copyright on your work. Thanks!
Thanks for the PR. I think the change itself looks reasonable. However, it would be good to add some unit tests for this (and possibly for the pattern in the similar issue #3607, if your PR fixes that as well). Let me know if you have any questions about the testing framework that glslang uses or about how to add tests.
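For reference, a test shader for this fix might look something like the sketch below. This is only my illustration: the extension list is an assumption, and (if I understand glslang's test layout correctly) the shader would live as a file under Test/ with its expected SPIR-V checked into Test/baseResults/, with the exact file name and registration left to the maintainers.

```
#version 450
#extension GL_EXT_shader_explicit_arithmetic_types_int8 : require
#extension GL_EXT_shader_explicit_arithmetic_types_int16 : require
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require

layout(local_size_x = 1, local_size_y = 1, local_size_z = 1) in;

layout(binding = 0) buffer B {
    uint8_t   u8;
    uint16_t  u16;
    float16_t f16;
};

void main() {
    // Each of these is an identity conversion; the generated SPIR-V
    // should contain no OpUConvert/OpSConvert/OpFConvert for them.
    u8  = uint8_t(u8);
    u16 = uint16_t(u16);
    f16 = float16_t(f16);
}
```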
Actually, I'm just going to make a commit adding some test cases on top of yours and then I will merge this.
Hi @arcady-lunarg, apologies for the late response. This change doesn't fix issue #3607; I plan to look into that this weekend, though I may be late. You may proceed to merge this for now.
Though, now that I actually read the GLSL_EXT_shader_16bit_storage spec, it seems that these identity conversions are actually forbidden, and indeed that converting directly from 8-bit to 16-bit without going through int is also forbidden. That seems unnecessarily strict, and I have to wonder whether the spec writers originally intended to forbid this. @jeffbolznv, as the original spec writer, might you weigh in here?
The original GLSL spec does say "Identity constructors, like float(float) are also legal, but of little use", so IMO this should still be allowed in the sized storage extensions as well.
The original 8/16-bit storage support in Vulkan was very limited because some IHVs didn't really support 8/16-bit types and could only load them from memory but not really operate on them as smaller types. I don't remember specifics of the GLSL extension beyond that.
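To make the spec question concrete, here is a sketch of the distinction under the storage-only extensions as I read them; the commented-out lines mark the constructs a strict reading forbids (this is my illustration, not text from the spec):

```
#version 450
#extension GL_EXT_shader_8bit_storage : require
#extension GL_EXT_shader_16bit_storage : require

layout(local_size_x = 1, local_size_y = 1, local_size_z = 1) in;

layout(binding = 0) buffer B {
    uint8_t  u8;
    uint16_t u16;
};

void main() {
    u16 = uint16_t(uint(u8));  // allowed: widen to 32-bit, then narrow
    // u16 = uint16_t(u8);     // a strict reading forbids direct 8-bit -> 16-bit
    // u8  = uint8_t(u8);      // identity conversion; also forbidden by a
                               //   strict reading, which is the point at issue
}
```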
I'm going to add the tests myself. Thanks, this change looks good.
Original Issue Reported: #3616
Ran `ctest` and it reports all tests are passing:

```
Test project /home/ravi/OpenSource/glslang/build
    Start 1: glslang-testsuite
1/2 Test #1: glslang-testsuite ................ Passed 30.53 sec
    Start 2: glslang-gtests
2/2 Test #2: glslang-gtests ................... Passed 15.38 sec

100% tests passed, 0 tests failed out of 2

Total Test time (real) = 45.92 sec
```