
ALPAKA support #562

Open
jeffhammond opened this issue Apr 2, 2021 · 8 comments

jeffhammond commented Apr 2, 2021

Add support for https://github.com/alpaka-group/alpaka because we want to support all the C++ programming models.

@ax3l know anybody who can help here? 😉

ax3l commented Apr 2, 2021

Fantastic idea! Let's ping @psychocoderHPC @BenjaminW3 @j-stephan @bernhardmgruber @sbastrakov @bussmann.

Background: The Parallel Research Kernels (ParRes Kernels) are a set of simple programs that can be used to explore the features of a parallel platform: https://github.com/ParRes/Kernels

jeffhammond commented

The context here is that we support a wide range of modern C++ parallel models, including Kokkos, TBB, OpenMP, and C++17 Parallel STL, so adding an Alpaka port means people can compare a lot of things at once using tests created by relatively objective people. https://youtu.be/bXeDfA21-VA shows some examples of things that have been done with them before.

I also suspect that the total porting time is less than a day, since the total amount of code that needs porting is very small (nstream is ~3 lines, transpose is ~4 lines, stencil is ~4 lines plus code generation, etc).
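To give a sense of that scale: the nstream body really is just one triad-style loop. A minimal sketch in the style of the repo's Python references (array names and the scalar value here are illustrative, not the exact benchmark code):

```python
def nstream(A, B, C, scalar):
    # PRK nstream core: a STREAM-triad-like update over every element.
    # The real benchmark wraps this in timing iterations and a checksum.
    for i in range(len(A)):
        A[i] += B[i] + scalar * C[i]
    return A
```

Porting this to a new model means mapping this one loop onto that model's 1D parallel-for.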

bussmann commented Apr 3, 2021

Great idea. Will do our best to support this. Happy Easter holidays from Germany!


jeffhammond commented Apr 3, 2021

Porting guide

Do it in this order:

  1. nstream (1D parallelism)
  2. transpose (2D parallelism)
  3. stencil (2D parallelism)
  4. dgemm (3D parallelism) optional
  5. p2p (complicated parallelism)

Look at the Python implementations if you want the easiest-to-read code as a reference. Or look at whatever language you like best. The simplest implementation will be named "kernel.suffix".

Detail

transpose

It must use standard row- or column-major storage. In distributed memory, you must decompose in only one dimension so the communication is all-to-all.

Blocking for cache/TLB is useful on CPUs. GPU optimizations are tricky; the CUDA implementation is not optimal and will be fixed eventually.
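A sketch of the tiled traversal for CPUs, assuming row-major flat storage (note the actual PRK transpose accumulates B += A^T and increments A each iteration; this shows only the blocking structure):

```python
def transpose_tiled(A, order, tile=32):
    # B = A^T over an order x order matrix stored as a flat row-major list,
    # with loop tiling so each tile of A and B stays cache/TLB resident.
    B = [0.0] * (order * order)
    for it in range(0, order, tile):
        for jt in range(0, order, tile):
            for i in range(it, min(it + tile, order)):
                for j in range(jt, min(jt + tile, order)):
                    B[i * order + j] = A[j * order + i]
    return B
```

On a GPU the same tiling idea usually moves the tile through shared/local memory instead.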

stencil

Figure out one pattern (e.g. star with radius=2) and then tell me so I can roll it into the code generator.
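For orientation, a star/radius=2 pattern touches the center plus two neighbors in each of the four axis directions. A sketch with uniform weights (purely illustrative; the PRK code generator emits its own weight table):

```python
def star_stencil(inp, n, r=2):
    # Star-shaped stencil of radius r: 4*r + 1 points per output element
    # (center plus r neighbors up/down/left/right, no diagonals).
    # Halo cells (within r of the edge) are left untouched.
    out = [row[:] for row in inp]
    w = 1.0 / (4 * r + 1)  # uniform weighting, for illustration only
    for i in range(r, n - r):
        for j in range(r, n - r):
            acc = w * inp[i][j]
            for k in range(1, r + 1):
                acc += w * (inp[i - k][j] + inp[i + k][j]
                            + inp[i][j - k] + inp[i][j + k])
            out[i][j] = acc
    return out
```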

dgemm

Read https://www.cs.utexas.edu/users/flame/pubs/blis3_ipdps14.pdf and implement that if you can, but I've never done this myself and won't judge you at all for just writing triple loops and calling it good.
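The "triple loops" baseline looks like this (C = A * B for square row-major matrices; the i-k-j loop order with the A element hoisted is a small locality win over the naive i-j-k order):

```python
def dgemm(A, B, n):
    # Plain triple-loop matrix multiply: C = A * B, n x n nested lists.
    # The BLIS paper's blocked/packed variant is the serious approach;
    # this is the judged-kindly baseline mentioned above.
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = A[i][k]
            for j in range(n):
                C[i][j] += aik * B[k][j]
    return C
```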

p2p

Look at slides 30-37 of https://drive.google.com/file/d/1yNQiG-wjBI4Iu6yDPV6WcQL-r8Yt9RSV/view if it helps to understand the design space. The hyperplane method is probably best on a GPU unless you use cooperative groups or do other tricky stuff.
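The hyperplane idea, sketched: sweep the grid along anti-diagonals so every point on one diagonal depends only on the previous two diagonals, making the inner loop fully parallel. The recurrence below is the p2p pipeline update; grid layout and boundary initialization are assumptions of this sketch:

```python
def p2p_hyperplane(grid, m, n):
    # Sweep an m x n grid (row 0 and column 0 pre-filled as boundary)
    # by anti-diagonals d = i + j. Within one diagonal every update
    # grid[i][j] = grid[i-1][j] + grid[i][j-1] - grid[i-1][j-1]
    # is independent, so the inner i-loop can be a parallel-for on GPU.
    for d in range(2, (m - 1) + (n - 1) + 1):
        lo = max(1, d - (n - 1))
        hi = min(d - 1, m - 1)
        for i in range(lo, hi + 1):
            j = d - i
            grid[i][j] = grid[i - 1][j] + grid[i][j - 1] - grid[i - 1][j - 1]
    return grid
```

With boundaries set to grid[i][0] = i and grid[0][j] = j, the closed-form result is grid[i][j] = i + j, which makes verification easy.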

jyoung3131 commented

Hi @jeffhammond - I'm helping with some benchmarks for PIConGPU and Alpaka, so I'll take a look at these kernel ports in more detail.

jeffhammond commented

btw @jyoung3131 if you want to be a PIC boss, you'll see there is a PIC PRK with a limited number of implementations. @hattom added SOA and AOS versions in Fortran that would be great targets to study with the C++ stuff.

PRK % find . -name "pic*" | grep -v dep
./AMPI/PIC/pic.c
./Cxx11/pic-sycl.cc
./Cxx11/pic.cc
./MPI1/PIC-static/pic.c
./FORTRAN/pic_soa.F90
./FORTRAN/pic.F90
./FORTRAN/pic-openmp.F90
./FORTRAN/pic_soa-openmp.F90
./FG_MPI/PIC-static/pic.c
./SERIAL/PIC/pic.c
./OPENMP/PIC/pic.c

bussmann commented Apr 6, 2021

LOVE the PIC PRK stuff, @jeffhammond!

If one dares to use some more 'experimental' work, I recommend looking into combining Alpaka with LLAMA to tackle SoA/AoS and other data layout decisions with a single source code.

Thanks for looking into this, @jyoung3131 , please coordinate with the Alpaka team, we'll be glad to support this.

psychocoderHPC commented

@bussmann you forgot to link llama documentation + llama github
