
Commit

[EASTL 3.17.06] (#412)
Added more information for read_depends in the atomic.h doc

Added a comment about the 128-bit atomic intrinsics on MSVC

Added some extra 128-bit atomic load tests

set_decomposition/set_difference_2 implementation

Allow eastl::atomic_load_cond to be used with non-default-constructible types
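A minimal usage sketch (the atomic object and predicate below are illustrative, not part of the change): atomic_load_cond spins, pausing the CPU between attempts, until the predicate is satisfied; the loaded value is now declared inside the loop, so the value type no longer needs a default constructor.

    eastl::atomic<int> gFlag{0};
    // Spins until another thread stores a non-zero value, then returns the loaded value.
    int observed = eastl::atomic_load_cond(&gFlag, [](int v) { return v != 0; });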

Added static_assert tests for eastl::size and eastl::ssize

Added support for constant initialization to eastl::atomic<T>
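A rough sketch of what this enables (gCounter is an illustrative name): with the constructors now marked EA_CONSTEXPR, a namespace-scope atomic can be constant-initialized at compile time instead of running a constructor during static initialization.

    // Constant-initialized; no runtime constructor is involved, avoiding static-init-order issues.
    static eastl::atomic<int> gCounter{0};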

Fixed an MSVC warning in variant

Fixed a Clang warning about the indentation of single-line if statements

Added Paul Pedriana's documentation on the motivation for EASTL, as submitted to the ISO committee.

Fixed an off-by-one error in Timsort: if the final run has a length of exactly 2, under some conditions we end up trying to sort past the end of the input array.

Added missing header to bonus/adaptors.h

Fixed use of the wrong template parameter for the alignment of the atomic type-pun cast
MaxEWinkler authored Jan 18, 2021
1 parent dda5082 commit fad5471
Showing 26 changed files with 4,903 additions and 4,530 deletions.
7 changes: 5 additions & 2 deletions README.md
@@ -44,8 +44,11 @@ Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on compiling and testi

EASTL was created by Paul Pedriana, who maintained the project for roughly 10 years.

Roberto Parolin is the current EASTL owner and primary maintainer within EA and is responsible for the open source repository.
Max Winkler is the secondary maintainer for EASTL within EA and on the open source repository.
EASTL was subsequently maintained by Roberto Parolin for more than 8 years.
He was the driver and proponent for getting EASTL open sourced.
Rob was a mentor to all members of the team and taught us everything we ever wanted to know about C++ spookiness.

Max Winkler is the current EASTL owner and primary maintainer within EA and is responsible for the open source repository.

Significant EASTL contributions were made by (in alphabetical order):

Binary file added doc/EASTL-n2271.pdf
857 changes: 491 additions & 366 deletions include/EASTL/algorithm.h

Large diffs are not rendered by default.

72 changes: 56 additions & 16 deletions include/EASTL/atomic.h
@@ -243,36 +243,27 @@
// Deviations from the standard. This does not include new features added:
//
// 1.
// Description: Atomic class constructors are not and will not be constexpr.
// Reasoning : We assert in the constructor that the this pointer is properly aligned.
// There are no other constexpr functions that can be called in a constexpr
// context. The only use for constexpr here is const-init time or ensuring
// that the object's value is placed in the executable at compile-time instead
// of having to call the ctor at static-init time. If you are using constexpr
// to solve static-init order fiasco, there are other solutions for that.
//
// 2.
// Description: Atomics are always lock free
// Reasoning : We don't want people to fall into performance traps where implicit locking
// is done. If your user defined type is large enough to not support atomic
// instructions then your user code should do the locking.
//
// 3.
// 2.
// Description: Atomic objects can not be volatile
// Reasoning : Volatile objects do not make sense in the context of eastl::atomic<T>.
// Use the given memory orders to get the ordering you need.
// Atomic objects have to become visible on the bus. See below for details.
//
// 4.
// 3.
// Description: Consume memory order is not supported
// Reasoning : See below for the reasoning.
//
// 5.
// 4.
// Description: ATOMIC_INIT() macros and the ATOMIC_LOCK_FREE macros are not implemented
// Reasoning : Use the is_lock_free() method instead of the macros.
// ATOMIC_INIT() macros aren't needed since the default constructor value-initializes.
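// For example, a small illustrative sketch (gValue is a placeholder name):
//
// eastl::atomic<int> gValue; // default constructor value-initializes, so the underlying int is zero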
//
// 6.
// 5.
// Description: compare_exchange failure memory order cannot be stronger than success memory order
// Reasoning : Besides the argument that it ideologically does not make sense for a failure
// of the atomic operation to have a stricter ordering guarantee than the
@@ -284,7 +275,7 @@
// that versions of compilers that say they support C++17 do not properly adhere to this
// new requirement in their intrinsics. Thus we will not support this.
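// As a hypothetical illustration (atomicInt, expected, and desired are placeholder names), a call that
// asks for a failure ordering stronger than the success ordering, such as:
//
// atomicInt.compare_exchange_strong(expected, desired, memory_order_relaxed, memory_order_acquire);
//
// falls under this deviation and is not supported.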
//
// 7.
// 6.
// Description: All memory orders are distinct types instead of enum values
// Reasoning : This will not affect how the API is used in user code.
// It allows us to statically assert on invalid memory orders since they are compile-time types
@@ -1384,13 +1375,62 @@
// The read_depends operation can be used only on loads from an eastl::atomic<T*> type. The pointer returned by the load may only be used to load further values, and nothing else.
// If you are unsure, upgrade this load to an acquire operation.
//
// MyStruct* ptr = gAtomicPtr.load(read_depends);
// MyStruct* ptr = gAtomicPtr.load(memory_order_read_depends);
// int a = ptr->a;
// int b = ptr->b;
// return a + b;
//
// The loads from ptr after the gAtomicPtr load ensure that the correct values of a and b are observed. This pairs with a Release operation on the writer side by releasing gAtomicPtr.
//
//
// As stated above, the pointer returned from a .load(memory_order_read_depends) can only be used to load further values.
// Dereferencing (*) and arrow dereferencing (->) are valid operations on return values from .load(memory_order_read_depends).
//
// MyStruct* ptr = gAtomicPtr.load(memory_order_read_depends);
// int a = ptr->a; - VALID
// int a = *ptr; - VALID
//
// Since dereferencing is just indexing via some offset from some base address, this also means addition and subtraction of constants is ok.
//
// int* ptr = gAtomicPtr.load(memory_order_read_depends);
// int a = *(ptr + 1) - VALID
// int a = *(ptr - 1) - VALID
//
// Casts also work correctly since casting is just offsetting a pointer depending on the inheritance hierarchy or if using intrusive containers.
//
// ReadDependsIntrusive** intrusivePtr = gAtomicPtr.load(memory_order_read_depends);
// ReadDependsIntrusive* ptr = ((ReadDependsIntrusive*)(((char*)intrusivePtr) - offsetof(ReadDependsIntrusive, next)));
//
// Base* basePtr = gAtomicPtr.load(memory_order_read_depends);
// Derived* derivedPtr = static_cast<Derived*>(basePtr);
//
// Both of the above casts from the result of the load are valid for this memory order.
//
// You can reinterpret_cast the returned pointer value to a uintptr_t to set bits, clear bits, or xor bits, but the pointer must be cast back before doing anything else.
//
// int* ptr = gAtomicPtr.load(memory_order_read_depends);
// ptr = reinterpret_cast<int*>(reinterpret_cast<uintptr_t>(ptr) & ~3);
//
// Do not use the results of any equality or relational operators (==, !=, >, <, >=, <=) in the computation of offsets before dereferencing.
// As we learned above in the Control Dependencies section, CPUs will not order Load-Load Control Dependencies. Relational and equality operators are often compiled using branches.
// They don't have to be compiled to branches; conditional instructions could be used, and some architectures provide comparison instructions, such as set-less-than, which do not need
// branches when the result of the relational operator is used in arithmetic statements. Then again, short circuiting may need to introduce branches since C++ guarantees that the
// rest of the expression must not be evaluated.
// The following odd code is forbidden.
//
// int* ptr = gAtomicPtr.load(memory_order_read_depends);
// int* ptr2 = ptr + (ptr >= 0);
// int a = *ptr2;
//
// Only equality comparisons against nullptr are allowed. This is because the compiler cannot assume that the address of the loaded value is some known address and substitute our loaded value.
// int* ptr = gAtomicPtr.load(memory_order_read_depends);
// if (ptr == nullptr); - VALID
// if (ptr != nullptr); - VALID
//
// Thus the above sentence that states:
// The pointer returned by the load may only be used to load further values, and nothing else.
// must be respected by the programmer. This memory order is an optimization added for efficient read-heavy pointer-swapping data structures. If you are unsure, use memory_order_acquire.
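//
// A minimal writer/reader pairing sketch, reusing the illustrative MyStruct and gAtomicPtr names from above;
// the release store on the writer side pairs with the read_depends load on the reader side:
//
// Writer:
// MyStruct* p = new MyStruct{1, 2}; // assumed aggregate, for illustration only
// gAtomicPtr.store(p, memory_order_release);
//
// Reader:
// MyStruct* ptr = gAtomicPtr.load(memory_order_read_depends);
// int sum = ptr->a + ptr->b; // only data-dependent loads through ptr; nothing else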
//
// ******** Relaxed && eastl::atomic<T> guarantees ********
//
// We saw various ways that compiler barriers do not help us and that we need something more granular to make sure accesses are not mangled by the compiler to be considered atomic.
@@ -1586,7 +1626,7 @@
// ----------------------------------------------------------------------------------------
//
// In this example it is entirely possible that we observe r0 = 1 && r1 = 0 even though we have source code causality and sequentially consistent operations.
// Observability is tied to the atomic object on which the operation was performed and the thread fence doesn't synchronize-with the fetch_add because there is no
// Observability is tied to the atomic object on which the operation was performed and the thread fence doesn't synchronize-with the fetch_add because
// there is no load above the fence that reads the value from the fetch_add.
//
// ******** Sequential Consistency Semantics ********
1 change: 1 addition & 0 deletions include/EASTL/bonus/adaptors.h
@@ -13,6 +13,7 @@
#include <EASTL/internal/config.h>
#include <EASTL/internal/move_help.h>
#include <EASTL/type_traits.h>
#include <EASTL/iterator.h>

#if defined(EA_PRAGMA_ONCE_SUPPORTED)
#pragma once // Some compilers (e.g. VC++) benefit significantly from using this. We've measured 3-4% build speed improvements in apps as a result.
2 changes: 1 addition & 1 deletion include/EASTL/fixed_vector.h
@@ -356,7 +356,7 @@ namespace eastl
get_allocator() = x.get_allocator(); // The primary effect of this is to copy the overflow allocator.
#endif

base_type::template DoAssign<move_iterator<iterator>, true>(make_move_iterator(x.begin()), make_move_iterator(x.end()), false_type()); // Shorter route.
base_type::template DoAssign<move_iterator<iterator>, true>(eastl::make_move_iterator(x.begin()), eastl::make_move_iterator(x.end()), false_type()); // Shorter route.
}
return *this;
}
5 changes: 2 additions & 3 deletions include/EASTL/internal/atomic/atomic.h
@@ -127,12 +127,12 @@ namespace internal
\
public: /* ctors */ \
\
atomic(type desired) EA_NOEXCEPT \
EA_CONSTEXPR atomic(type desired) EA_NOEXCEPT \
: Base{ desired } \
{ \
} \
\
atomic() EA_NOEXCEPT_IF(eastl::is_nothrow_default_constructible_v<type>) = default; \
EA_CONSTEXPR atomic() EA_NOEXCEPT_IF(eastl::is_nothrow_default_constructible_v<type>) = default; \
\
public: \
\
@@ -148,7 +148,6 @@ namespace internal
}



#define EASTL_ATOMIC_USING_ATOMIC_BASE(type) \
public: \
\
4 changes: 2 additions & 2 deletions include/EASTL/internal/atomic/atomic_base_width.h
@@ -198,12 +198,12 @@ namespace internal
\
public: /* ctors */ \
\
atomic_base_width(T desired) EA_NOEXCEPT \
EA_CONSTEXPR atomic_base_width(T desired) EA_NOEXCEPT \
: Base{ desired } \
{ \
} \
\
atomic_base_width() EA_NOEXCEPT_IF(eastl::is_nothrow_default_constructible_v<T>) = default; \
EA_CONSTEXPR atomic_base_width() EA_NOEXCEPT_IF(eastl::is_nothrow_default_constructible_v<T>) = default; \
\
atomic_base_width(const atomic_base_width&) EA_NOEXCEPT = delete; \
\
2 changes: 1 addition & 1 deletion include/EASTL/internal/atomic/atomic_casts.h
@@ -126,7 +126,7 @@ EASTL_FORCE_INLINE Pun AtomicTypePunCast(const T& fromType) EA_NOEXCEPT
* aligned_storage ensures we can TypePun objects that aren't trivially default constructible
* but are still trivially copyable.
*/
typename eastl::aligned_storage<sizeof(Pun), alignof(T)>::type ret;
typename eastl::aligned_storage<sizeof(Pun), alignof(Pun)>::type ret;
memcpy(eastl::addressof(ret), eastl::addressof(fromType), sizeof(Pun));
return reinterpret_cast<Pun&>(ret);
}
4 changes: 2 additions & 2 deletions include/EASTL/internal/atomic/atomic_flag.h
@@ -22,12 +22,12 @@ class atomic_flag
{
public: /* ctors */

atomic_flag(bool desired)
EA_CONSTEXPR atomic_flag(bool desired) EA_NOEXCEPT
: mFlag{ desired }
{
}

atomic_flag() EA_NOEXCEPT
EA_CONSTEXPR atomic_flag() EA_NOEXCEPT
: mFlag{ false }
{
}
8 changes: 4 additions & 4 deletions include/EASTL/internal/atomic/atomic_integral.h
@@ -69,12 +69,12 @@ namespace internal

public: /* ctors */

atomic_integral_base(T desired) EA_NOEXCEPT
EA_CONSTEXPR atomic_integral_base(T desired) EA_NOEXCEPT
: Base{ desired }
{
}

atomic_integral_base() EA_NOEXCEPT = default;
EA_CONSTEXPR atomic_integral_base() EA_NOEXCEPT = default;

atomic_integral_base(const atomic_integral_base&) EA_NOEXCEPT = delete;

@@ -227,12 +227,12 @@
\
public: /* ctors */ \
\
atomic_integral_width(T desired) EA_NOEXCEPT \
EA_CONSTEXPR atomic_integral_width(T desired) EA_NOEXCEPT \
: Base{ desired } \
{ \
} \
\
atomic_integral_width() EA_NOEXCEPT = default; \
EA_CONSTEXPR atomic_integral_width() EA_NOEXCEPT = default; \
\
atomic_integral_width(const atomic_integral_width&) EA_NOEXCEPT = delete; \
\
8 changes: 4 additions & 4 deletions include/EASTL/internal/atomic/atomic_pointer.h
@@ -70,12 +70,12 @@ namespace internal

public: /* ctors */

atomic_pointer_base(T* desired) EA_NOEXCEPT
EA_CONSTEXPR atomic_pointer_base(T* desired) EA_NOEXCEPT
: Base{ desired }
{
}

atomic_pointer_base() EA_NOEXCEPT = default;
EA_CONSTEXPR atomic_pointer_base() EA_NOEXCEPT = default;

atomic_pointer_base(const atomic_pointer_base&) EA_NOEXCEPT = delete;

@@ -203,12 +203,12 @@
\
public: /* ctors */ \
\
atomic_pointer_width(T* desired) EA_NOEXCEPT \
EA_CONSTEXPR atomic_pointer_width(T* desired) EA_NOEXCEPT \
: Base{ desired } \
{ \
} \
\
atomic_pointer_width() EA_NOEXCEPT = default; \
EA_CONSTEXPR atomic_pointer_width() EA_NOEXCEPT = default; \
\
atomic_pointer_width(const atomic_pointer_width&) EA_NOEXCEPT = delete; \
\
6 changes: 2 additions & 4 deletions include/EASTL/internal/atomic/atomic_size_aligned.h
@@ -75,16 +75,14 @@ namespace internal
{
public: /* ctors */

atomic_size_aligned(T desired) EA_NOEXCEPT
EA_CONSTEXPR atomic_size_aligned(T desired) EA_NOEXCEPT
: mAtomic{ desired }
{
EASTL_ATOMIC_ASSERT_ALIGNED(sizeof(T));
}

atomic_size_aligned() EA_NOEXCEPT_IF(eastl::is_nothrow_default_constructible_v<T>)
EA_CONSTEXPR atomic_size_aligned() EA_NOEXCEPT_IF(eastl::is_nothrow_default_constructible_v<T>)
: mAtomic{} /* Value-Initialize, which will Zero-Initialize Trivially Constructible types */
{
EASTL_ATOMIC_ASSERT_ALIGNED(sizeof(T));
}

atomic_size_aligned(const atomic_size_aligned&) EA_NOEXCEPT = delete;
16 changes: 4 additions & 12 deletions include/EASTL/internal/atomic/atomic_standalone.h
@@ -303,41 +303,33 @@ EASTL_FORCE_INLINE typename eastl::atomic<T>::value_type atomic_load_explicit(co
template <typename T, typename Predicate>
EASTL_FORCE_INLINE typename eastl::atomic<T>::value_type atomic_load_cond(const eastl::atomic<T>* atomicObj, Predicate pred) EA_NOEXCEPT
{
typename eastl::atomic<T>::value_type ret;

for (;;)
{
ret = atomicObj->load();
typename eastl::atomic<T>::value_type ret = atomicObj->load();

if (pred(ret))
{
break;
return ret;
}

EASTL_ATOMIC_CPU_PAUSE();
}

return ret;
}

template <typename T, typename Predicate, typename Order>
EASTL_FORCE_INLINE typename eastl::atomic<T>::value_type atomic_load_cond_explicit(const eastl::atomic<T>* atomicObj, Predicate pred, Order order) EA_NOEXCEPT
{
typename eastl::atomic<T>::value_type ret;

for (;;)
{
ret = atomicObj->load(order);
typename eastl::atomic<T>::value_type ret = atomicObj->load(order);

if (pred(ret))
{
break;
return ret;
}

EASTL_ATOMIC_CPU_PAUSE();
}

return ret;
}


16 changes: 12 additions & 4 deletions include/EASTL/internal/atomic/compiler/msvc/compiler_msvc.h
@@ -173,6 +173,16 @@ struct FixedWidth128
} \
}

/**
 * In my opinion, the wording in the Microsoft docs is a little confusing.
 * ExchangeHigh means the top 8 bytes, so (ptr + 8).
 * ExchangeLow means the low 8 bytes, so (ptr).
 * Endianness does not matter since we are just loading data and comparing data.
 * Think of it as memcpy() and memcmp() function calls whereby the layout of the
 * data itself is irrelevant.
 * Only after we type pun back to the original type and load from memory does
 * the layout of the data matter again.
*/
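/*
 * For reference, a sketch of the MSVC intrinsic this macro is assumed to wrap (signature as
 * documented by Microsoft; shown only to anchor the ExchangeHigh/ExchangeLow wording above):
 *
 *   unsigned char _InterlockedCompareExchange128(__int64 volatile* Destination,
 *                                                __int64 ExchangeHigh,
 *                                                __int64 ExchangeLow,
 *                                                __int64* ComparandResult);
 *
 * ExchangeHigh is the 8 bytes at (Destination + 1); ExchangeLow is the 8 bytes at (Destination).
 */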
#define EASTL_MSVC_ATOMIC_CMPXCHG_STRONG_INTRIN_128(type, ret, ptr, expected, desired, MemoryOrder) \
{ \
union TypePun \
@@ -181,9 +191,7 @@ struct FixedWidth128
\
struct exchange128 \
{ \
EASTL_SYSTEM_BIG_ENDIAN_STATEMENT(__int64 hi, lo); \
\
EASTL_SYSTEM_LITTLE_ENDIAN_STATEMENT(__int64 lo, hi); \
__int64 value[2]; \
}; \
\
struct exchange128 exchangePun; \
@@ -194,7 +202,7 @@
unsigned char cmpxchgRetChar; \
cmpxchgRetChar = EASTL_MSVC_ATOMIC_CMPXCHG_STRONG_128_OP(cmpxchgRetChar, EASTL_ATOMIC_VOLATILE_TYPE_CAST(__int64, (ptr)), \
EASTL_ATOMIC_TYPE_CAST(__int64, (expected)), \
typePun.exchangePun.hi, typePun.exchangePun.lo, \
typePun.exchangePun.value[1], typePun.exchangePun.value[0], \
MemoryOrder); \
\
ret = static_cast<bool>(cmpxchgRetChar); \
4 changes: 2 additions & 2 deletions include/EASTL/internal/config.h
@@ -89,8 +89,8 @@
///////////////////////////////////////////////////////////////////////////////

#ifndef EASTL_VERSION
#define EASTL_VERSION "3.17.03"
#define EASTL_VERSION_N 31703
#define EASTL_VERSION "3.17.06"
#define EASTL_VERSION_N 31706
#endif


2 changes: 1 addition & 1 deletion include/EASTL/internal/function_detail.h
@@ -649,7 +649,7 @@ namespace eastl
// We cannot assume that R is default constructible.
// This function is called only when the function object CANNOT be called because it is empty;
// it will always throw or assert, so we never use the return value anyway and neither should the caller.
static R DefaultInvoker(Args..., const FunctorStorageType&)
static R DefaultInvoker(Args... /*args*/, const FunctorStorageType& /*functor*/)
{
#if EASTL_EXCEPTIONS_ENABLED
throw eastl::bad_function_call();