Unified tests #1117
Conversation
format!
Force-pushed from 6a738ff to 123f3c4
Co-authored-by: Tobias Ribizel <[email protected]>
Force-pushed from 53759b2 to a02d707
Co-authored-by: Yuhsiang M. Tsai <[email protected]>
Really nice work! A few comments and questions.
Thanks for tackling this!
It is a lot to go through so I'd suggest a few of us do it.
@@ -368,7 +368,7 @@ TYPED_TEST(ResidualNorm, SelfCalulatesAndWaitsTillResidualGoal)
     ASSERT_EQ(stop_status.get_data()[0].has_converged(), false);
     ASSERT_EQ(one_changed, false);

-    solution->at(0) = rhs_val - r<T>::value * T{0.9} * rhs_norm->at(0);
+    solution->at(0) = rhs_val - r<T>::value * T{0.5} * rhs_norm->at(0);
Because 0.9 can't be exactly represented in single precision? 👍
no, because I relaxed the complex error bounds to include a sqrt(2) factor
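(For context, my reading of where the sqrt(2) comes from, not necessarily the exact argument used in the PR: if the real and imaginary parts of a complex value each carry a rounding error of at most $\varepsilon$, the combined error in magnitude can be up to a factor $\sqrt{2}$ larger.)

$$|\delta z| = \sqrt{(\delta a)^2 + (\delta b)^2} \le \sqrt{\varepsilon^2 + \varepsilon^2} = \sqrt{2}\,\varepsilon \qquad \text{for } z = a + b\,\mathrm{i},\ |\delta a|, |\delta b| \le \varepsilon.$$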
looks very good. I've left some smaller remarks. I guess we need to see if the coverage changes due to this.
I would also suggest getting more than 2 approvals, because a single person won't find all issues that might be hiding somewhere.
@@ -189,7 +189,8 @@ struct reduction_factor {
     using nc_output = remove_complex<OutputType>;
     using nc_precision = remove_complex<Precision>;
     static constexpr nc_output value{
-        std::numeric_limits<nc_precision>::epsilon() * nc_output{10}};
+        std::numeric_limits<nc_precision>::epsilon() * nc_output{10} *
+        (gko::is_complex<Precision>() ? nc_output{1.4142} : one<nc_output>())};
This is sqrt(2) right? So why not
-        (gko::is_complex<Precision>() ? nc_output{1.4142} : one<nc_output>())};
+        (gko::is_complex<Precision>() ? nc_output{std::sqrt(2f)} : one<nc_output>())};
nice suggestion, thanks!
unfortunately this doesn't work, because it needs to be constexpr
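A possible workaround, shown only as a sketch and not taken from the PR: `std::sqrt` is not `constexpr` before C++26, but a small hand-rolled Newton iteration is (or one can simply spell the literal with more digits), so the factor can still be computed at compile time.

```cpp
// Hypothetical sketch: a constexpr Newton iteration for sqrt, usable where
// std::sqrt is rejected because the initializer must be a constant expression.
template <typename T>
constexpr T constexpr_sqrt(T x, T guess = T{1}, int iters = 32)
{
    // x_{n+1} = (x_n + x / x_n) / 2 converges quadratically for x > 0
    return iters == 0 ? guess
                      : constexpr_sqrt(x, (guess + x / guess) / T{2}, iters - 1);
}

static_assert(constexpr_sqrt(2.0) > 1.41421 && constexpr_sqrt(2.0) < 1.41422,
              "compile-time sqrt(2)");
```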
TEST_F(Csr, OneAutomaticalWorksWithDifferentMatrices)
{
    auto automatical = std::make_shared<Mtx::automatical>(exec);
#ifdef GKO_COMPILING_CUDA
nit:
-#ifdef GKO_COMPILING_CUDA
+#if defined(GKO_COMPILING_CUDA)
for uniformity
We're not very consistent about this, surprisingly, but most cases use #ifdef, not #if defined:
grep -rh "#if" test | sort | uniq -c
2 #if !(GINKGO_COMMON_SINGLE_MODE)
25 #if GINKGO_COMMON_SINGLE_MODE
1 #if GKO_HAVE_PAPI_SDE
3 #if HAS_REFERENCE
2 #if defined(GKO_COMPILING_CUDA) || defined(GKO_COMPILING_HIP) || \
1 #if defined(GKO_COMPILING_OMP) || defined(GKO_COMPILING_CUDA) || \
1 #if defined(HAS_CUDA)
1 #if defined(HAS_CUDA) || defined(HAS_HIP)
2 #if defined(HAS_HIP) || defined(HAS_CUDA)
6 #ifdef GINKGO_FAST_TESTS
6 #ifdef GKO_COMPILING_CUDA
1 #ifdef GKO_COMPILING_DPCPP
6 #ifdef GKO_COMPILING_HIP
4 #ifdef GKO_COMPILING_OMP
2 #ifndef GKO_COMPILING_DPCPP
2 #ifndef GKO_COMPILING_HIP
1 #ifndef GKO_COMPILING_OMP
1 #ifndef GKO_TEST_UTILS_EXECUTOR_HPP_
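Just to spell out the trade-off (illustrative snippet, not from the PR): both spellings behave identically for a single macro, and the #if defined(...) form is only required when conditions are combined.

```cpp
// For a single macro these two guards are equivalent:
#ifdef GKO_COMPILING_CUDA
// CUDA-only test code
#endif

#if defined(GKO_COMPILING_CUDA)
// CUDA-only test code
#endif

// Only the defined(...) form composes with logical operators:
#if defined(GKO_COMPILING_CUDA) || defined(GKO_COMPILING_HIP)
// code shared between the two device backends
#endif
```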
* remove unnecessary includes
* pull common type aliases into CommonTestFixture
* fix some test bounds

Co-authored-by: Marcel Koch <[email protected]>
Co-authored-by: Fritz Göbel <[email protected]>
The excess system in Isai uses true/false in different tests.
I am not in favor of init_executor using overloads selected by passing a nullptr, rather than a template parameter, to choose which implementation is used.
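For readers unfamiliar with the pattern under discussion, here is a generic illustration (simplified names, not Ginkgo's actual init_executor utility): the implementation is chosen by the static type of a null pointer argument rather than by an explicit template parameter.

```cpp
#include <iostream>

struct reference_backend {};
struct cuda_backend {};

// Overloads are selected purely by the pointer's static type;
// the pointer value itself (nullptr) is never used.
void init_executor(reference_backend*) { std::cout << "set up reference executor\n"; }
void init_executor(cuda_backend*) { std::cout << "set up CUDA executor\n"; }

int main()
{
    init_executor(static_cast<cuda_backend*>(nullptr));  // picks the CUDA overload
}
```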
    GKO_ASSERT_MTX_EQ_SPARSITY(square_dmtx, square_mtx);
    ASSERT_TRUE(square_dmtx->is_sorted_by_column_index());
I did not look into the eq-sparsity check.
Maybe put the sorted_by_column check before eq_sparsity, in case eq_sparsity sorts the matrix?
all the assertions take the matrix by const pointer, so this shouldn't be an issue
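A small illustration of that argument (hypothetical types, not the actual assertion macros): a function that receives the matrix through a pointer-to-const cannot call mutating members such as an in-place sort.

```cpp
struct matrix {
    void sort_by_column_index() {}                            // mutating
    bool is_sorted_by_column_index() const { return true; }   // non-mutating
};

void check_sparsity(const matrix* a, const matrix* b)
{
    // a->sort_by_column_index();          // error: non-const member on const matrix*
    (void)a->is_sorted_by_column_index();  // fine: const member
    (void)b;
}
```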
-    GKO_ASSERT_MTX_NEAR(inverse, d_inverse, 10 * r<value_type>::value);
+    GKO_ASSERT_MTX_NEAR(inverse, d_inverse, 100 * r<value_type>::value);
Maybe we need to be careful here?
I like to increase the error bounds with big margins to make sure we don't have any flaky tests if some compiler optimizations change the evaluation order. Do you think this is an issue here?
I do not know, actually.
Without knowing the test problem, I can somewhat accept 10*r as a rounding issue. 100*r is a little higher, enough that I start wondering whether something is wrong, or whether incorrect code might also pass the test.
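To put rough numbers on that (assuming r<value_type>::value is on the order of the unit roundoff of the value type, which is how I read the exchange above):

$$10\,\varepsilon_{\mathrm{float}} \approx 1.2\times 10^{-6},\quad 100\,\varepsilon_{\mathrm{float}} \approx 1.2\times 10^{-5},\qquad 10\,\varepsilon_{\mathrm{double}} \approx 2.2\times 10^{-15},\quad 100\,\varepsilon_{\mathrm{double}} \approx 2.2\times 10^{-14}.$$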
    gko::stop::ResidualNorm<value_type>::build()
        .with_reduction_factor(value_type{1e-15})
Still surprised that the float variants can use 1e-15 as the reduction_factor?
1e-15 is representable as a normal float; are you talking about the unit roundoff here? But I think we always run into the Iteration stopping criterion anyway.
Yes, the unit roundoff. It's hard to reach that in float.
I see, I missed the iteration criterion.
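For reference, a minimal sketch of how such a combined criterion is typically built in Ginkgo examples (the solver choice and iteration count here are illustrative, not taken from the test in question): whichever criterion triggers first stops the solve, so in float the 1e-15 reduction factor is effectively unreachable and the iteration limit kicks in.

```cpp
#include <ginkgo/ginkgo.hpp>

int main()
{
    using value_type = double;
    auto exec = gko::ReferenceExecutor::create();

    // The solver stops as soon as either criterion is satisfied:
    // a hard iteration cap or a residual reduction by 1e-15.
    auto solver_factory =
        gko::solver::Cg<value_type>::build()
            .with_criteria(
                gko::stop::Iteration::build().with_max_iters(100u).on(exec),
                gko::stop::ResidualNorm<value_type>::build()
                    .with_reduction_factor(value_type{1e-15})
                    .on(exec))
            .on(exec);
}
```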
LGTM!
Codecov Report: Base: 90.61% // Head: 90.06% // Decreases project coverage by -0.55%.

Additional details and impacted files:

@@            Coverage Diff             @@
##           develop    #1117      +/-   ##
===========================================
- Coverage    90.61%   90.06%    -0.55%
===========================================
  Files          508      508
  Lines        44334    44327        -7
===========================================
- Hits         40173    39925      -248
- Misses        4161     4402      +241
☔ View full report at Codecov.
LGTM!
Co-authored-by: Yuhsiang M. Tsai <[email protected]>
format!
Co-authored-by: Tobias Ribizel <[email protected]>
SonarCloud Quality Gate failed. 0 Bugs
Advertise release 1.5.0 and last changes
+ Add changelog
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers;
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,
and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.

Algorithm and important feature additions:
+ Add MPI-based multi-node for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Porting the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110), [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982), [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))

Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name ([#1149](#1149))
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184))
+ Drop official support for old CUDA < 9.2 ([#887](#887))

Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967))
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))

Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))

Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136))
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903), [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
This is a big one 😀 Most of our tests are 90% - 100% identical between different executors. In terms of maintenance, I don't think there is value in keeping them separate. If there is specific behavior we need to test, we can+
TODO:
Depends on #1123
Closes #483