Patches relating to OpenMP imperfect loop collapse #958

Open
wants to merge 7 commits into base: aomp-dev

Conversation

@jtb20 commented Jun 19, 2024

Various patches relating to internal issue SWDEV-229378 (upstream PR llvm/llvm-project#96087)

@@ -49,6 +52,10 @@ function parse(){
fi
[[ "$line" =~ $testname_regex ]]
llnltest=${BASH_REMATCH[1]}
if [ "$outfile" = "failing-tests.txt" ] && \
grep -Fq "$llnltest" xfail-list then
Contributor

Missing a semi-colon here after xfail-list.

Author

...huh, but yeah. I wonder why it didn't complain! I'll fix that, thanks.

@estewart08
Contributor

Just to confirm, we expect openmp5.0-tests/test_imperfect_loop_warning.cxx to pass once the llvm patch lands?

@jtb20 commented Jun 20, 2024

> Just to confirm, we expect openmp5.0-tests/test_imperfect_loop_warning.cxx to pass once the llvm patch lands?

Yes (although the "test_imperfect_loop" of course xfails, in this hackish new way).

This patch uses the subprocess 'communicate' method in the LLNL Python
test script, as recommended in the documentation, and adds some
return-code checks.  This was mostly done as a side-effect of
investigating another problem, but is probably helpful anyway.
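
In rough terms the pattern is the following (the command and variable
names are illustrative, not the script's actual ones):

  import subprocess

  # Illustrative command; the real script builds its own command line.
  cmd = ["./run_one_test.sh"]

  # communicate() reads stdout/stderr to completion and waits for the
  # child, avoiding the pipe deadlocks that manual reads can hit.
  proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE)
  out, err = proc.communicate()

  # Check the return code instead of assuming success.
  if proc.returncode != 0:
      print("test command failed with exit code", proc.returncode)
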
This patch provides a somewhat ad-hoc way of adding compilation options
to particular tests in the LLNL/openmp5.0-tests suite. In the test_list
file, just write options after a forward slash:

  foo.cpp / --extra-opt
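
The splitting itself is simple; a Python sketch of the idea (the
function name is made up for illustration):

  # Split a test_list entry like "foo.cpp / --extra-opt" into the test
  # name and any extra compilation options (illustrative only).
  def split_test_entry(line):
      name, _, opts = line.partition("/")
      return name.strip(), opts.split()

  print(split_test_entry("foo.cpp / --extra-opt"))
  # -> ('foo.cpp', ['--extra-opt'])
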
This patch fixes a mysterious bug with the LLNL test script: if it takes
more than two minutes to run, it silently exits without producing any
output, even with logging enabled.  This can happen for debug builds of
the compiler, even on a fast machine.

This happens because the underlying Python script buffers its output
when it detects it is writing to a pipe.  If the 'timeout' command kills
the script before it has completed, all of that buffered output is
discarded.

The fix is twofold: first, the "timed-out" condition is now detected in
the check_LLNL.sh script, and second, the timeout itself is increased.
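
The real change is in check_LLNL.sh; purely as an illustration of the
"report the timeout instead of losing output silently" idea, in Python
terms (the script name and time limit are placeholders):

  import subprocess

  try:
      result = subprocess.run(["python3", "run_llnl_tests.py"],
                              capture_output=True, text=True, timeout=600)
      print(result.stdout, end="")
  except subprocess.TimeoutExpired:
      # Make the timed-out condition visible rather than exiting silently.
      print("LLNL test run timed out; output may be incomplete")
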
This patch adds the -Ropenmp-opt=analysis option to the compilation
of the test_imperfect_loop.cxx LLNL/openmp5.0-tests test, and a new
pseudo-test to check that the right 'remark' is emitted by the compiler.

This depends on the upstream PR:

  llvm/llvm-project#96087
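
What the pseudo-test amounts to is roughly the following (the compile
command and the remark wording come from the upstream patch, so both
are placeholders here):

  import re
  import subprocess

  # Compile with optimization remarks enabled and look for the expected
  # imperfect-loop remark on stderr (command and pattern are placeholders).
  cc = subprocess.run(
      ["clang++", "-fopenmp", "-Ropenmp-opt=analysis", "-c",
       "test_imperfect_loop.cxx"],
      capture_output=True, text=True)
  if re.search(r"remark:.*imperfect", cc.stderr):
      print("PASS: expected remark emitted")
  else:
      print("FAIL: expected remark not emitted")
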
The test test_imperfect_loop.cxx contains a loop collapse operation that
cannot be executed safely in parallel, so it should somehow be marked as
expected to fail.  This patch adds a basic way to do that.

This patch adds some error checking to the parse_LLNL.sh script, and
avoids parsing `ls` output when looking for the results file.
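
Avoiding `ls` parsing generally means matching the filename directly;
the same idea expressed in Python (the pattern is only a guess at the
results-file naming):

  import glob

  # Match the results file directly instead of parsing `ls` output,
  # which breaks on unusual filenames.
  candidates = sorted(glob.glob("*-results.txt"))
  if not candidates:
      raise SystemExit("error: no results file found")
  results_file = candidates[-1]
  print("using results file:", results_file)
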
This patch splits the smoke-fails/imperfect-loop-collapse test into two
variants, one of which does and one of which doesn't need unified shared
memory (which is orthogonal to the main purpose of the test).  It also
fixes the HSA_XNACK setting for the smoke-fails/imperfect-loop-collapse-usm
version, so both versions now work.

@jtb20 commented Jun 28, 2024

Thank you for the approval!

I can't merge this though -- it could wait until the upstream imperfect loop analysis option lands on amd-staging, but it probably doesn't need to (just the "remark" output-scanning test will fail for now, I guess).
