Hi @willow-ahrens @hameerabbasi,

This issue is meant to track progress of implementing the Array API standard for finch-tensor.

I thought we could try adding short notes to the bullet points, saying which Finch.jl functions should be called to implement a given entry. I think we already had some ideas during one of our first calls.

Array API: https://data-apis.org/array-api/latest/index.html
## Backlog

### main namespace
- `astype` - API: `finch.astype` function #15 - eager
- element-wise functions (`add`, `multiply`, `cos`, ...) - API: Lazy API #17 (partially...)
- reductions (`xp.prod`, `xp.sum`) - `jl.sum` and `jl.prod`, also just `jl.reduce` - API: Lazy API #17
- `matmul` - implemented with `finch.tensordot` for non-stacked input. Should be rewritten with `jl.mul` / Finch einsum.
- `tensordot` - `finch.tensordot` - API: Implement `tensordot` and `matmul` #22
- `where` - `jl.broadcast(jl.ifelse, cond, a, b)` - API: Implement `where` and `nonzero` #30
- `argmin`/`argmax` - `jl.argmin` (bug willow if this isn't implemented already) - eager for now
- `take` - `jl.getindex` - eager for now
- `nonzero` - this is an eager function, but it is implemented as `ffindnz(arr)` - API: Implement `where` and `nonzero` #30
- creation functions: `asarray`, `ones`, `full`, `full_like`, ... - the `finch.Tensor` constructor, as well as `jl.copyto!(arr, jl.broadcasted(Scalar(1)))`, as well as changing the default value of the tensor with `Tensor(Dense(Element(1.0)))`. We may need to distinguish some of these. API: Add `asarray` function #28, API: Add `eye` function #32
- stats functions: `max`, `mean`, `min`, `std`, `var`
- set functions: `unique_all`, `unique_counts`, `unique_inverse`, `unique_values` - eager
- `all`, `any`
- `concat` - eager for now
- `expand_dims` - lazy
- `flip` - eager for now
- `reshape` - eager for now
- `roll` - eager for now
- `squeeze` - lazy
- `stack` - eager for now
- `argsort`/`sort` - eager
- `broadcast_arrays` - eager for now
- `broadcast_to` - eager for now
- `can_cast`/`finfo`/`iinfo`/`result_type`
- `bitwise_and`/`bitwise_left_shift`/`bitwise_invert`/`bitwise_or`/`bitwise_right_shift`/`bitwise_xor`
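For reference, the target semantics of the `where` and `nonzero` entries above can be illustrated with NumPy (used here purely as a stand-in for finch-tensor; the Finch-backed versions would lower to `jl.broadcast(jl.ifelse, ...)` and `ffindnz` respectively):

```python
import numpy as np

# where(cond, a, b): element-wise selection with broadcasting across
# all three inputs -- expressible lazily as a broadcast of ifelse.
cond = np.array([[True, False], [False, True]])
a = np.array([[1, 2], [3, 4]])
b = np.zeros((2, 2), dtype=int)
print(np.where(cond, a, b))  # [[1 0]
                             #  [0 4]]

# nonzero(x): a tuple of index arrays, one per dimension. The output
# shape depends on the data, which is why the entry above is eager.
x = np.array([[0, 5], [7, 0]])
rows, cols = np.nonzero(x)
print(rows, cols)  # [0 1] [1 0]
```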
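Likewise, the `unique_*` set functions are eager for the same data-dependent-shape reason; a NumPy sketch (again only a stand-in, not the finch-tensor implementation) shows the four pieces that `unique_all` bundles together:

```python
import numpy as np

x = np.array([3, 1, 3, 2, 1])

# Array API unique_all returns (values, indices, inverse_indices, counts);
# np.unique exposes the same pieces via flags. unique_counts,
# unique_inverse, and unique_values are subsets of this tuple.
values, indices, inverse, counts = np.unique(
    x, return_index=True, return_inverse=True, return_counts=True
)
print(values)   # [1 2 3]
print(indices)  # [1 3 0]  -- first occurrence of each value
print(inverse)  # [2 0 2 1 0]  -- values[inverse] reconstructs x
print(counts)   # [2 1 2]
```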
### linalg namespace

(I copied these from the benchmark suite. If something turns out to be infeasible we can drop it.)
- `linalg.vecdot` - `finch.tensordot`
- `linalg.vector_norm` - `finch.norm`
- `linalg.trace` - eager
- `linalg.tensordot` - implemented in the main namespace; just needs an alias
- `linalg.outer`
- `linalg.cross` - eager for now
- `linalg.matrix_transpose` - lazy
- `linalg.matrix_power` - eager (call `matmul` on the sparse matrix until it gets too dense)
- `linalg.matrix_norm` - for `nuc` or `2`, call an external library. For `fro`, `inf`, `1`, `0`, `-1`, `-inf`, call `jl.norm`.
- `xp.linalg.diagonal` - `finch.tensordot(finch.diagmask(), mtx)`
- `xp.linalg.cholesky` - call CHOLMOD or something
- `xp.linalg.det` - call Eigen or something
- `xp.linalg.eigh` - call an external library
- `xp.linalg.eigvalsh` - call an external library
- `xp.linalg.inv` - call an external library - `scipy.sparse.linalg.inv`
- `xp.linalg.matrix_rank` - call an external library
- `xp.linalg.pinv` - call an external library

### `Tensor` methods and attributes

- `Tensor.to_device()` - `finch.moveto`
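The `linalg.matrix_power` note above (repeated matmuls on the sparse matrix until it gets too dense) can be sketched roughly as follows. This uses `scipy.sparse` as a stand-in rather than finch-tensor, and the `density_limit` name and its 0.25 cutoff are illustrative assumptions, not part of the plan:

```python
import numpy as np
import scipy.sparse as sp

def sparse_matrix_power(A, n, density_limit=0.25):
    """Raise a square sparse matrix A to the n-th power by repeated matmul.

    Falls back to dense matmuls once the intermediate result's density
    exceeds density_limit (an illustrative cutoff), since a mostly-dense
    result no longer benefits from sparse storage.
    """
    result = sp.identity(A.shape[0], dtype=A.dtype, format="csr")
    for step in range(n):
        result = result @ A
        density = result.nnz / (result.shape[0] * result.shape[1])
        if density > density_limit and step + 1 < n:
            # Intermediate got too dense: finish the remaining
            # multiplications with dense arrays.
            dense = result.toarray()
            A_dense = A.toarray()
            for _ in range(n - step - 1):
                dense = dense @ A_dense
            return dense
    return result.toarray()

A = sp.csr_matrix(np.array([[0, 1], [1, 1]]))  # Fibonacci matrix
print(sparse_matrix_power(A, 5))  # [[3 5]
                                  #  [5 8]]
```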
### miscellaneous