add `--convert-tensor-to-scalars` pass #763 (base: main)
Conversation
Explicitly, this means it has no control flow?

Yes, this was something that came up at the HW summit in Leuven last month: at this point, most of the accelerators don't seem to have a native sense of control flow, expecting the host to supply a simple stream of instructions.
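To illustrate what "no native control flow" means in IR terms (a hand-written MLIR sketch, not code from this PR), data-dependent branching has to be flattened by the host or compiler before offloading:

```mlir
// Hypothetical sketch: not representable as a flat instruction stream,
// since the accelerator would need to evaluate a branch natively.
%r0 = scf.if %cond -> (i32) {
  scf.yield %a : i32
} else {
  scf.yield %b : i32
}

// Representable: straight-line code, with the branch flattened into a select.
%r1 = arith.select %cond, %a, %b : i32
```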
That turned out to be easier than expected, see PR here: #769
This is a helper pass to remove `tensor.insert`/`tensor.extract` (with constant indices) and (statically shaped) `tensor<...>` types from the IR by effectively "unrolling" `tensor<axbx!element_type>` into a `TypeRange` of a*b copies of `!element_type`.
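As a concrete sketch of the intended effect (illustrative only, using plain `i32` elements instead of polynomial types; not taken from the PR's tests):

```mlir
// Before: a statically shaped tensor is built and read back element-wise.
func.func @demo(%a: i32, %b: i32) -> i32 {
  %c0 = arith.constant 0 : index
  %t = tensor.from_elements %a, %b : tensor<2xi32>
  %e = tensor.extract %t[%c0] : tensor<2xi32>
  return %e : i32
}

// After unrolling: the tensor<2xi32> value becomes two scalar i32 SSA
// values, and the tensor ops fold away entirely.
func.func @demo(%a: i32, %b: i32) -> i32 {
  return %a : i32
}
```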
This is necessary for targeting RLWE HW accelerators, the first generation of which "understand" only polynomial operations and datatypes. Specifically, this is necessary as passes such as `-bgv-to-polynomial` produce polynomial ops on tensors (e.g., adding two standard ciphertexts lowers to a polynomial addition of two `tensor<2x!polynomial.polynomial>`). While `-convert-elementwise-to-affine` (see #524) lowers this to (loops over) polynomial operations on individual polynomial values (and the loops can be unrolled via `-full-loop-unroll`), the resulting IR still contains various tensor operations, primarily `tensor.insert`/`tensor.extract`.
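For context, the IR this pass has to clean up looks roughly like the following (a hand-written sketch with a placeholder `!poly` type alias and simplified `polynomial.add` syntax, not output copied from the pipeline):

```mlir
// Rough sketch (assumed/simplified syntax): after -convert-elementwise-to-affine
// and -full-loop-unroll, the polynomial ops themselves act on individual values,
// but the IR is still stitched together with tensor.extract/tensor.insert.
%c0 = arith.constant 0 : index
%p0 = tensor.extract %ct0[%c0] : tensor<2x!poly>
%q0 = tensor.extract %ct1[%c0] : tensor<2x!poly>
%s0 = polynomial.add %p0, %q0 : !poly
%out0 = tensor.insert %s0 into %acc[%c0] : tensor<2x!poly>
// ... and likewise for index 1 ...
```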
This PR introduces a simple pass that replaces tensor types (with static shape) by a `TypeRange` of dim1 x dim2 x ... copies of the element type via the `OneToNTypeConversion` framework. Note: this framework apparently exists because the standard `DialectConversion` and associated `TypeConverter` are very broken when it comes to handling 1->N type conversions (as I found out the hard way).

In addition to doing the type conversion, there are also patterns to translate
`tensor` operations to corresponding operations on the `ValueRange`, but the list is fairly incomplete right now, as there are only patterns for `tensor.from_elements` and `tensor.insert`. Surprisingly, together with folding (which, because left-over `ValueRange`s are materialized as `tensor.from_elements`, takes care of `tensor.extract`), this is actually already enough for my primary use case. However, I'd like to make this more robust and actually support any tensor ops that can conceptually be "unrolled" this way.

I'm posting this as a draft as I'd love to get some suggestions for test cases beyond what the `-bgv-to-polynomial -convert-elementwise-to-affine -full-loop-unroll` pipeline produces, especially given I might want to suggest #524 and this for inclusion upstream at some point.

Open ToDos for this PR before it's ready-for-review:
- Support more tensor ops (e.g., `tensor.slice`)

Related ToDos:
- Handle `polynomial.ntt`/`intt`/`mul_scalar`, which aren't `ElementwiseMappable` and therefore not handled by `-convert-elementwise-to-affine` (PolyToStandard: handling tensors of poly? #143)