Distributed index in pandas on pyspark does not work as expected #2204
Comments
Yeah, you described it all correctly. The order isn't guaranteed in general. Can you try to turn on
I did that now with that setting. Order is not the only source of my confusion, though; there is a lot I do not understand in the example I have described above.
Many thanks for the time and effort you are putting in to help me. :)
I do understand the number-of-partitions issue; it was a lapse in my understanding. But I still think the default indexes are not generated correctly. Are there any updates on this?
This was done with pandas on PySpark, but the same is true for Koalas as well (at least as of when I last tested).
There are three different kinds of default index in pandas on PySpark: `sequence`, `distributed-sequence`, and `distributed`. I am not able to reproduce their documented behaviour:
Setting up to test:
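The setup snippet did not survive the page export. A minimal sketch of what the configuration may have looked like — the `local[8]` master, the option value, and the small test frame are my assumptions, not the original code:

```python
# Assumed setup sketch -- the original snippet was lost in the export.
# Requires a live Spark runtime; session and frame shapes are guesses.
from pyspark.sql import SparkSession
import pyspark.pandas as ps

spark = SparkSession.builder.master("local[8]").getOrCreate()

# The option under test: 'sequence', 'distributed-sequence', or 'distributed'.
ps.set_option("compute.default_index_type", "sequence")

psdf = ps.DataFrame({"x": range(100)})         # picks up the default index
print(psdf.to_spark().rdd.getNumPartitions())  # inspect how the data is split
```

This is environment-dependent configuration, so it is shown untested.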
tests:
Question: Why is the number of partitions not 1, given that when the default index is set to 'sequence' all the data must be collected on a single node?
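As a mental model (an illustrative sketch, not the actual implementation): the `sequence` index is computed with a window over the entire dataset with no partition key, which is why Spark has to gather every row together to number them. The semantics, in plain Python:

```python
def sequence_index(partitions):
    # 'sequence': a gap-free, ordered global 0..n-1 numbering.
    # Semantically this requires seeing every row in order,
    # hence the data ends up in a single partition.
    rows = [row for part in partitions for row in part]  # gather all rows
    return list(enumerate(rows))

parts = [["a", "b"], ["c"], ["d", "e"]]  # 3 input partitions
print(sequence_index(parts))
# [(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd'), (4, 'e')]
```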
tests:
Questions: The DataFrame being distributed across all 8 cores is the expected behaviour, but the indexes should not be ordered, yet they are. This behaviour also looks like that of the `sequence` type default index only.
tests:
Questions: This is also `sequence`-type behaviour only. The index generated is an ordered sequence from 1 upwards. It should instead be monotonically increasing numbers with indeterministic gaps.
Can somebody please help me clarify what I am not understanding correctly, and what the exact expected behaviour is for all three types of default index?
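For contrast, here are pure-Python sketches of the two distributed schemes (illustrative semantics only, not pandas-on-Spark source code). `distributed-sequence` counts rows per partition first and then hands each partition a starting offset, so it still yields a continuous 0..n-1 sequence without a single-node collect — which is one reason an ordered result can appear. `distributed` follows the bit layout of Spark's `monotonically_increasing_id`: partition ID in the upper bits, row number within the partition in the lower 33 bits, giving increasing but gappy values.

```python
def distributed_sequence_index(partitions):
    # 'distributed-sequence': one pass to count partition sizes, then
    # each partition numbers its rows from a precomputed offset. The
    # result is still a continuous global 0..n-1 sequence.
    counts = [len(p) for p in partitions]
    offsets = [sum(counts[:i]) for i in range(len(counts))]
    return [(off + i, row)
            for off, part in zip(offsets, partitions)
            for i, row in enumerate(part)]

def distributed_index(partitions):
    # 'distributed': mimic monotonically_increasing_id -- partition id
    # shifted into the upper bits, row number in the lower 33 bits.
    # Monotonically increasing and unique, but with large gaps.
    return [((pid << 33) | i, row)
            for pid, part in enumerate(partitions)
            for i, row in enumerate(part)]

parts = [["a", "b"], ["c"], ["d", "e"]]
print(distributed_sequence_index(parts))
# [(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd'), (4, 'e')]
print(distributed_index(parts))
# [(0, 'a'), (1, 'b'), (8589934592, 'c'), (17179869184, 'd'), (17179869185, 'e')]
```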