Performance and array semantics #33
Thank you for your insightful comments! Indeed, it never occurred to me to load an array in RAM with the ...

Now, after my recent commit:

It is true that ... What alternatives could be implemented?

```julia
ds["myvar"][:,:]   # does not work so well if the dimensionality of myvar is not known a priori
Array(ds["myvar"]) # works now, but it feels a bit wordy to me
ds["myvar"][]      # like references, but it looks a bit obscure to most people I guess
```
Great, thanks! I don't think there's anything wrong with ...
I had a look at ... `variable = read(ds["myvar"])`
Calling functions which iterate over array elements (e.g. `Statistics.mean`) on netCDF datasets can be very slow. It would be useful to have some performance tips, e.g. to first convert a dataset to an `Array`.

On that note, I noticed that simply calling `Array(dataset)` is also slow. I take it from the examples in the manual that the suggested way to convert is to call `dataset[:]`. However, this has different behaviour from ordinary Julia multidimensional arrays, which return a vector when indexed with `[:]`.
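A minimal sketch of the performance tip being asked for, assuming NCDatasets.jl; the file name `data.nc` and variable name `myvar` are placeholders:

```julia
using NCDatasets, Statistics

ds = NCDataset("data.nc")   # hypothetical example file
v = ds["myvar"]

# Slow: mean falls back to element-by-element access on the on-disk variable.
# mean(v)

# Faster: materialize the variable in RAM first, then reduce.
A = Array(v)
mean(A)

# The [:] semantics differ: on an ordinary Julia array, [:] flattens to a
# Vector, whereas the issue reports that dataset[:] returns the full
# N-dimensional array.
B = rand(3, 4)
size(B[:])   # (12,) -- a Vector, not a 3×4 Matrix

close(ds)
```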