
xe: ocl: gemm: fix gemm_with_post_ops accumulator type #2289

Open · wants to merge 1 commit into base: main
Conversation

rjoursler (Contributor) commented:
Fixes the following error observed on XeLP.

$ ~/dnnl/build_clang/tests/benchdnn/benchdnn  --mode-modifier=P --matmul --engine=gpu --dt=s8:u8:f16 --attr-scales=src:common:0.25 2048x13:13x512
[   0][DST][0:0] exp_f32:         9.5 exp:         9.5 got:           9 diff:     0.5 rdiff:0.0526316
[   1][DST][0:1] exp_f32:         1.5 exp:         1.5 got:           1 diff:     0.5 rdiff:0.333333
[   2][DST][0:2] exp_f32:         5.5 exp:         5.5 got:           5 diff:     0.5 rdiff:0.0909091
[   3][DST][0:3] exp_f32:         9.5 exp:         9.5 got:           9 diff:     0.5 rdiff:0.0526316
[   4][DST][0:4] exp_f32:        4.25 exp:        4.25 got:           4 diff:    0.25 rdiff:0.0588235
[   5][DST][0:5] exp_f32:        14.5 exp:        14.5 got:          14 diff:     0.5 rdiff:0.0344828
[   8][DST][0:8] exp_f32:       -1.25 exp:       -1.25 got:          -1 diff:    0.25 rdiff:     0.2
[   9][DST][0:9] exp_f32:        9.25 exp:        9.25 got:           9 diff:    0.25 rdiff:0.027027
[  10][DST][0:10] exp_f32:       11.25 exp:       11.25 got:          11 diff:    0.25 rdiff:0.0222222
[  11][DST][0:11] exp_f32:        -2.5 exp:        -2.5 got:          -2 diff:     0.5 rdiff:     0.2
[COMPARE_STATS][DST]: trh=0 err_max_diff:    0.75 err_max_rdiff:       1 all_max_diff:    0.75 all_max_rdiff:       1
[PRIM_REF][INFO]: L2_size:327680 bytes; per_core_L3_size:2621440 bytes; nthr:24; impl_name:gemm:jit:f32
[PRIM_REF][REPRO]: --mode-modifier=P --matmul --engine=cpu --attr-scales=src:common:0.25 2048x13:13x512
0:FAILED (errors:776201 total:1048576) __REPRO: --mode-modifier=P --matmul --engine=gpu --dt=s8:u8:f16 --attr-scales=src:common:0.25 2048x13:13x512
tests:1 passed:0 skipped:0 mistrusted:0 unimplemented:0 invalid_arguments:0 failed:1 listed:0
total: 0.47s; fill: 0.00s (0%); compute_ref: 0.01s (2%); compare: 0.01s (2%);

Fixes MFDNN-12822

@rjoursler rjoursler requested a review from a team as a code owner December 18, 2024 22:29
@github-actions github-actions bot added the platform:gpu-intel Codeowner: @oneapi-src/onednn-gpu-intel label Dec 18, 2024
@@ -98,7 +98,7 @@ __kernel void gemm_post_ops(__global SRC_DATA_T *src, __global BIA_DATA_T *bias,
 #else
     ACC_DATA_T acc = SRC_TO_ACC(src[data_idx]);
 #endif
-    float accumulator = acc;
+    float accumulator = convert_float(acc);
Review comment (Contributor):
unrelated to this issue but do we need to be concerned with potential f64 precision loss?

@rjoursler (Contributor Author) replied on Dec 18, 2024:

💯 Yes. Here is a tracker: MFDNN-12893.

@rjoursler (Contributor Author) commented:
make test
disable device_cpu
enable device_gpu
