Grouper object design doc #8510 — merged Mar 6, 2024 (4 commits).

File: `design_notes/grouper_objects.md` (244 additions, 0 deletions)

# Grouper Objects
**Author**: Deepak Cherian <[email protected]>
**Created**: Nov 21, 2023

## Abstract

I propose the addition of Grouper objects to Xarray's public API so that
```python
Dataset.groupby(x=BinGrouper(bins=np.arange(0, 10, 2)))
```
is identical to today's syntax:
```python
Dataset.groupby_bins("x", bins=np.arange(0, 10, 2))
```

## Motivation and scope

Xarray's GroupBy API implements the split-apply-combine pattern (Wickham, 2011)[^1], which applies to a very large number of problems: histogramming, compositing, climatological averaging, resampling to a different time frequency, etc.
The pattern abstracts the following pseudocode:
```python
results = []
for element in unique_labels:
    subset = ds.sel(x=(ds.x == element))  # split
    # subset = ds.where(ds.x == element, drop=True)  # alternative
    result = subset.mean()  # apply
    results.append(result)

xr.concat(results)  # combine
```

to
```python
ds.groupby('x').mean() # splits, applies, and combines
```

Efficient vectorized implementations of this pattern are implemented in numpy's [`ufunc.at`](https://numpy.org/doc/stable/reference/generated/numpy.ufunc.at.html), [`ufunc.reduceat`](https://numpy.org/doc/stable/reference/generated/numpy.ufunc.reduceat.html), [`numbagg.grouped`](https://github.com/numbagg/numbagg/blob/main/numbagg/grouped.py), [`numpy_groupies`](https://github.com/ml31415/numpy-groupies), and probably more.
These vectorized implementations *all* require, as input, an array of integer codes or labels that identify unique elements in the array being grouped over (`'x'` in the example above).
```python
import numpy as np

# array to reduce
a = np.array([1, 1, 1, 1, 2])

# initial value for result
out = np.zeros((3,), dtype=int)

# integer codes
labels = np.array([0, 0, 1, 2, 1])

# groupby-reduction
np.add.at(out, labels, a)
out # array([2, 3, 1])
```

One can 'factorize' or construct such an array of integer codes using `pandas.factorize` or `numpy.unique(..., return_inverse=True)` for categorical arrays; `pandas.cut`, `pandas.qcut`, or `np.digitize` for discretizing continuous variables.
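As a concrete illustration, here is a minimal sketch of generating integer codes with pandas directly, for both a categorical array and a binned continuous array:

```python
import numpy as np
import pandas as pd

# Categorical labels -> integer codes plus the array of unique values
codes, uniques = pd.factorize(np.array(["a", "b", "a", "c"]))
print(codes)          # [0 1 0 2]
print(list(uniques))  # ['a', 'b', 'c']

# Continuous values -> integer codes via binning
binned = pd.cut([0.5, 1.5, 2.5, 0.1], bins=[0, 1, 2, 3])
print(list(binned.codes))  # [0, 1, 2, 0]
```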
In practice, since `GroupBy` objects exist, much of the complexity in applying the groupby paradigm stems from appropriately factorizing or generating labels for the operation.
Consider these three examples:
1. [Bins that vary in a dimension](https://flox.readthedocs.io/en/latest/user-stories/nD-bins.html)
2. [Overlapping groups](https://flox.readthedocs.io/en/latest/user-stories/overlaps.html)
3. [Rolling resampling](https://github.com/pydata/xarray/discussions/8361)

Anecdotally, less sophisticated users commonly resort to the for-loopy implementation illustrated by the pseudocode above when the analysis at hand is not easily expressed using the API presented by Xarray's GroupBy object.
Xarray's GroupBy API today abstracts away the split, apply, and combine stages but not the "factorize" stage.
Grouper objects will close the gap.

## Usage and impact

<!-- This section describes how users of NumPy will use features described in this -->
<!-- NEP. It should be comprised mainly of code examples that wouldn't be possible -->
<!-- without acceptance and implementation of this NEP, as well as the impact the -->
<!-- proposed changes would have on the ecosystem. This section should be written -->
<!-- from the perspective of the users of NumPy, and the benefits it will provide -->
<!-- them; and as such, it should include implementation details only if -->
<!-- necessary to explain the functionality. -->

Grouper objects
1. Will abstract useful factorization algorithms, and
2. Present a natural way to extend GroupBy to grouping by multiple variables: `ds.groupby(x=BinGrouper(...), t=Resampler(freq="M", ...)).mean()`.

In addition, Grouper objects provide a nice interface for adding often-requested grouping functionality:
1. A new `SpaceResampler` would allow resampling along spatial dimensions ([issue](https://github.com/pydata/xarray/issues/4008)).
2. A `RollingTimeResampler` would allow rolling-like functionality that understands timestamps ([issue](https://github.com/pydata/xarray/issues/3216)).
> **Collaborator (on lines +74 to +75):** How fundamentally different are space and time resampling? If the difference is small enough (units / step size), would it be possible to merge both into a single `Resampler` Grouper object? Then it might also be possible to apply this to coordinates with physical dimensions other than space / time.
>
> **dcherian (Author):** Yes, this could be done; they're the same, it's just the user interface that could be different. For time it's nice to write `freq="M"`, and for space `spacing=2.5` (?). The other arguments carry over, but for time they can be timedelta or datetime objects. So it feels nice to separate them for easy validation, and it makes user code more readable.
>
> **keewis (Collaborator, Jan 2, 2024):** My main point was that we might want to resample using (numeric?) coordinates that are neither time nor space, but "spacing" might be appropriate in that case as well? In which case we'd only need a general `Resampler` and a specialized `TimeResampler` (4 in total: `Resampler` / `RollingResampler` and `TimeResampler` / `RollingTimeResampler`). Unless I'm missing something?
>
> **dcherian (Author, Jan 2, 2024):** Ah yes, agree! The `Resampler` is really just a convenient wrapper around `BinGrouper`.

3. A `QuantileBinGrouper` to abstract away `pd.qcut` ([issue](https://github.com/pydata/xarray/discussions/7110))
4. A `SeasonGrouper` and `SeasonResampler` would abstract away common annoyances with such calculations today:
   1. Support seasons that span a year-end.
   2. Only include seasons with complete data coverage.
   3. Allow grouping over seasons of unequal length.
   4. Return results with seasons in a sensible order.
   5. See [this xcdat discussion](https://github.com/xCDAT/xcdat/issues/416) for `SeasonGrouper`-like functionality.
5. Weighted grouping ([issue](https://github.com/pydata/xarray/issues/3937))
   1. Once `IntervalIndex`-like objects are supported, `Resampler` groupers can account for interval lengths when resampling.

## Backward Compatibility

Xarray's existing grouping functionality will be exposed using three new Groupers:
1. `UniqueGrouper` which uses `pandas.factorize`
2. `BinGrouper` which uses `pandas.cut`
3. `TimeResampler` which mimics pandas' `.resample`

Grouping by single variables will be unaffected so that `ds.groupby('x')` will be identical to `ds.groupby(x=UniqueGrouper())`.
Similarly, `ds.groupby_bins('x', bins=np.arange(0, 10, 2))` will be unchanged and identical to `ds.groupby(x=BinGrouper(bins=np.arange(0, 10, 2)))`.

## Detailed description

All Grouper objects will subclass an abstract `Grouper` base class:
```python
import abc

class Grouper(abc.ABC):
    @abc.abstractmethod
    def factorize(self, group):
        raise NotImplementedError

class CustomGrouper(Grouper):
    def factorize(self, group):
        ...
        return codes, group_indices, unique_coord, full_index

    def weights(self, group):
        ...
        return weights
```

### The `factorize` method
> **dcherian (Author):** Is there a better name? "encode"? "to_codes"? "label"? "to_integer_labels"?
>
> **Contributor:** +1 for something involving "labels", as IIUC this method is about grouping coordinate "labels" into common groups.
>
> **benbovy (Member, Dec 2, 2023):** `.find_groups()`? IIUC the goal of a Grouper is to find and extract the groups from a variable? Assuming that this method returns all the information needed to represent the groups, wouldn't it be better to encapsulate the returned items in an instance of a simple dataclass `Groups`? And maybe rename the method argument from `group` to `var`? I find "group" a bit confusing here... It is more a variable that possibly contains groups of a certain kind.
>
> ```python
> class MyGrouper(Grouper):
>     def find_groups(self, var: Variable) -> Groups:
>         ...
>         return Groups(...)
> ```
>
> **Collaborator:** I kind of like `find_groups`. (As an aside, this maybe does not matter so much, as this method will almost only be used internally?)
>
> **dcherian (Author):** I agree. In flox I use `by`.
>
> **Collaborator:** I'm a bit late, but throughout the document you're using the term "factorize" when you talk about constructing an array of integer indices for the "split" step, so maybe that means that using `factorize` as the method name would be fine?
>
> **dcherian (Author):** I didn't like `factorize` because no one knows what it means :) AFAICT pandas chose that name, and I haven't seen it used anywhere else. I think `label_groups` would be best, and most easy to understand for a new user.
>
> **keewis (Collaborator, Jan 2, 2024):** Okay, that makes sense. However, since this is advanced API I think we can require people to learn the new meaning, which is not actually new since we're borrowing from pandas (we could take `split_indices`, `to_split_indices`, or just `.split` if we wanted to stay closer to "split-apply-combine", though). What does other "split-apply-combine" software call this? From what I can tell:
>
> - polars: `to_physical` (I don't get what the background there is)
> - dataframes.jl: `compute_indices`

Today, the `factorize` method takes as input the group variable and returns 4 variables (I propose to clean this up below):
1. `codes`: An array of same shape as the `group` with int dtype. NaNs in `group` are coded by `-1` and ignored later.
2. `group_indices`: a list with one entry per group, giving the index locations of the `group` elements that belong to that group.
> **Member:** Does this need to be part of the interface? I think it can be calculated from `codes` if necessary.
>
> **dcherian (Author):** I agree that it can be calculated. We'd lose an optimization where the current code uses slices for resampling, and unindexed coordinates. I think it's valuable to preserve the former at least. What do you think of this function returning a named tuple with an optional `indices` property? That way, Groupers can opt in to optimizing the indices, potentially reusing any intermediate step they've computed for generating the codes.
>
> **Member:** I think returning a dataclass with some optional fields would indeed be a nice way to handle this. It would also be forward compatible with any future interface changes.

3. `unique_coord` is (usually) a `pandas.Index` object of all unique `group` members present in `group`.
4. `full_index` is a `pandas.Index` of all `group` members. This is different from `unique_coord` for binning and resampling, where not all groups in the output may be represented in the input `group`. For grouping by a categorical variable e.g. `['a', 'b', 'a', 'c']`, `full_index` and `unique_coord` are identical.
There is some redundancy here since `unique_coord` is always equal to or a subset of `full_index`.
We can clean this up (see Implementation below).
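For concreteness, here is a minimal sketch of what a `UniqueGrouper`-style `factorize` could compute using pandas (the helper name `factorize_unique` is hypothetical for illustration, not Xarray's actual implementation):

```python
import numpy as np
import pandas as pd

def factorize_unique(group):
    # Hypothetical sketch of the four return values described above.
    codes, uniques = pd.factorize(np.asarray(group))  # NaNs become -1
    full_index = pd.Index(uniques)
    # For categorical grouping, unique_coord and full_index are identical.
    unique_coord = full_index
    # One entry per group: index locations of that group's elements.
    group_indices = [np.flatnonzero(codes == i) for i in range(len(full_index))]
    return codes, group_indices, unique_coord, full_index

codes, indices, unique_coord, full_index = factorize_unique(["a", "b", "a", "c"])
print(codes)                           # [0 1 0 2]
print([list(ix) for ix in indices])    # [[0, 2], [1], [3]]
```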

### The `weights` method (?)
> **dcherian (Author):** "to_weights"?
>
> **Member:** "compute_weights"? "find_weights"?


The proposed `weights` method is optional and unimplemented today.
Groupers with `weights` will allow composing `weighted` and `groupby` ([issue](https://github.com/pydata/xarray/issues/3937)).
> **Member:** `weights` seems potentially useful, but I'm not sure we need it for composing `weighted()` and `groupby()`. For example, you could implement `x.weighted(w).groupby(k).mean()` with something like `(x * w).groupby(k).sum() / w.groupby(k).sum()`. This would likely be more efficient, too, at least if the grouped-over dimension is large.
>
> **dcherian (Author):** I agree with your implementation point, but the weights can depend on the parameters to the Grouper object. For example, a time-bounds-aware resampling would have non-uniform weights (and the Grouper object is more of a Regridder object). Adding the `weights` method would allow us to encapsulate that functionality.

The `weights` method should return an appropriate array of weights such that the following property is satisfied:
> **Member:** If we want to allow these operations to be efficient, we should definitely allow and encourage returning weights as sparse arrays.

```python
gb_sum = ds.groupby(by).sum()

weights = CustomGrouper.weights(by)
weighted_sum = xr.dot(ds, weights)

assert_identical(gb_sum, weighted_sum)
```
For example, the boolean weights for `group=np.array(['a', 'b', 'c', 'a', 'a'])` should be
```
[[1, 0, 0, 1, 1],
 [0, 1, 0, 0, 0],
 [0, 0, 1, 0, 0]]
```
This is the boolean "summarization matrix" referred to in the classic Iverson (1980, Section 4.3)[^2].
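A sketch of how such a matrix can be constructed from integer codes (assuming all codes are non-negative, i.e. no missing values):

```python
import numpy as np

group = np.array(["a", "b", "c", "a", "a"])
# integer codes for each element
uniques, codes = np.unique(group, return_inverse=True)

# one row per group: row i is 1 wherever codes == i
weights = (codes == np.arange(len(uniques))[:, None]).astype(int)
print(weights)
# [[1 0 0 1 1]
#  [0 1 0 0 0]
#  [0 0 1 0 0]]

# the groupby-sum of an array `a` is then simply weights @ a
a = np.array([10, 20, 30, 40, 50])
print(weights @ a)  # [100  20  30]
```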

For a rolling resampling, windowed weights are possible
```
[[0.5, 1, 0.5, 0, 0],
 [0, 0.25, 1, 1, 0],
 [0, 0, 0, 1, 1]]
```

### The `preferred_chunks` method (?)
> **dcherian (Author):** Quite speculative but seems cool!


Rechunking support is another optional extension point.
In `flox` I experimented some with automatically rechunking to make a groupby more parallel-friendly ([example 1](https://flox.readthedocs.io/en/latest/generated/flox.rechunk_for_blockwise.html), [example 2](https://flox.readthedocs.io/en/latest/generated/flox.rechunk_for_cohorts.html)).
A great example is for resampling-style groupby reductions, for which `codes` might look like
```
0001|11122|3333
```
where `|` represents chunk boundaries. A simple rechunking to
```
000|111122|3333
```
would make this resampling reduction an embarrassingly parallel blockwise problem.

Similarly consider monthly-mean climatologies for which the month numbers might be
```
1 2 3 4 5 | 6 7 8 9 10 | 11 12 1 2 3 | 4 5 6 7 8 | 9 10 11 12 |
```
A slight rechunking to
```
1 2 3 4 | 5 6 7 8 | 9 10 11 12 | 1 2 3 4 | 5 6 7 8 | 9 10 11 12 |
```
allows us to reduce `1, 2, 3, 4` separately from `5,6,7,8` and `9, 10, 11, 12` while still being parallel friendly (see the [flox documentation](https://flox.readthedocs.io/en/latest/implementation.html#method-cohorts) for more).
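The cohort structure can be verified directly: with the period-aligned chunks above, each chunk contains one of only a few repeating label sets (a small sketch):

```python
import numpy as np

months = np.tile(np.arange(1, 13), 2)  # two years of monthly labels
chunks = (4, 4, 4, 4, 4, 4)            # the "slight rechunking" above
offsets = np.cumsum((0,) + chunks)

# the set of labels contained in each chunk
cohorts = [set(months[a:b].tolist()) for a, b in zip(offsets[:-1], offsets[1:])]
print(cohorts)
# [{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12},
#  {1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}]
```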

We could attempt to detect these patterns, or we could just have the Grouper take as input `chunks` and return a tuple of "nice" chunk sizes to rechunk to.
```python
def preferred_chunks(self, chunks: ChunksTuple) -> ChunksTuple:
    pass
```
For monthly means, since the period of repetition of labels is 12, the Grouper might choose possible chunk sizes of `((2,),(3,),(4,),(6,))`.
For resampling, the Grouper could choose to resample to a multiple or an even fraction of the resampling frequency.
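A hypothetical sketch of such a chunk-size negotiation for periodic labels (the helper name and its snapping policy are illustrative assumptions, not a proposed API):

```python
def preferred_chunks_for_period(chunks, period=12):
    # Snap each requested chunk size down to the largest divisor of the
    # label period, so every chunk holds only whole groups (e.g. months).
    divisors = [d for d in range(1, period + 1) if period % d == 0]

    def snap(size):
        # assumes size >= 1
        return max(d for d in divisors if d <= size)

    return tuple(snap(s) for s in chunks)

# monthly labels chunked as 5|5|5|5|4 become period-friendly 4|4|4|4|4
print(preferred_chunks_for_period((5, 5, 5, 5, 4)))  # (4, 4, 4, 4, 4)
```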

## Related work
> **dcherian (Author):** Very interested in what else is out there.

Pandas has [Grouper objects](https://pandas.pydata.org/docs/reference/api/pandas.Grouper.html#pandas-grouper) that represent the GroupBy instruction.
However, these objects do not appear to be extension points, unlike the Grouper objects proposed here.
Instead, Pandas' `ExtensionArray` has a [`factorize`](https://pandas.pydata.org/docs/reference/api/pandas.api.extensions.ExtensionArray.factorize.html) method.

Composing rolling with time resampling is a common workload:
1. Polars has [`group_by_dynamic`](https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.group_by_dynamic.html) which appears to be like the proposed `RollingResampler`.
2. scikit-downscale provides [`PaddedDOYGrouper`](https://github.com/pangeo-data/scikit-downscale/blob/e16944a32b44f774980fa953ea18e29a628c71b8/skdownscale/pointwise_models/groupers.py#L19)
> **Contributor (on lines +197 to +198):** We would be quite happy to have this directly in xarray! We have our own [implementation in xclim](https://github.com/Ouranosinc/xclim/blob/b905e5529f2757a49f4485b7713b0ef01a45df2c/xclim/sdba/base.py#L103) but it's kind of a mess. Grouping per time period with a window is very useful and common in model output statistics.


## Implementation Proposal

1. Get rid of `squeeze` ([issue](https://github.com/pydata/xarray/issues/2157), [PR](https://github.com/pydata/xarray/pull/8506))
2. Merge the existing two-class implementation into a single Grouper class.
   1. This design was implemented in [this PR](https://github.com/pydata/xarray/pull/7206) to account for some annoying data dependencies.
   2. See [this PR](https://github.com/pydata/xarray/pull/8509).
3. Clean up what's returned by `factorize` methods.
> **dcherian (Author):** This needs input on what we expect custom `factorize` methods to implement.
>
> **Collaborator:** It's probably best to try implementing the most important use cases from the lists you've been assembling; by doing so I believe we can figure out the most important ones.

   1. A solution here might be to have `group_indices: Mapping[int, Sequence[int]]` be a mapping from group index in `full_index` to a sequence of integers.
   2. Return a `namedtuple` or `dataclass` from existing Grouper `factorize` methods to facilitate API changes in the future.
4. Figure out what to pass to `factorize`
> **dcherian (Author):** Could use some input here too; requires some internal cleanup.

   1. Xarray eagerly reshapes nD variables to 1D. This is an implementation detail we need not expose.
   2. When grouping by an unindexed variable Xarray passes a `_DummyGroup` object. This seems like something we don't want in the public interface. We could special-case "internal" Groupers to preserve the optimizations in `UniqueGrouper`.
5. Grouper objects will be exposed under the `xr.groupers` namespace. At first these will include `UniqueGrouper`, `BinGrouper`, and `TimeResampler`.
> **dcherian (Author, Dec 2, 2023):** Thoughts on the namespace? And what other "groupers" would be useful?
>
> **Collaborator:** That seems fine to me. We'd need to figure out which groupers to declare out-of-scope for inclusion in xarray itself, though.


## Alternatives
> **dcherian (Author):** We discussed this at a dev meeting a while ago, but would be good to rethink and commit to a path now.


One major design choice made here was to adopt the syntax `ds.groupby(x=BinGrouper(...))` instead of `ds.groupby(BinGrouper('x', ...))`.
This allows reuse of Grouper objects, for example:
```python
grouper = BinGrouper(...)
ds.groupby(x=grouper, y=grouper)
```
> **Member (on lines +217 to +221):** Are there cases where reusing Groupers would actually be useful? I think this is mostly just about syntax.
>
> **dcherian (Author):** I had one where I wanted to use the same bin specification for two variables, but I agree that it's minor. Nominally, every property on the grouper is independent of the data variable it is applied to. I think that motivated my suggestion to not encapsulate the data variable in the grouper. But every method on the grouper relies on the data variable as input. So 🤷🏾‍♂️. Do you have any advice here on how to think about the coupling/decoupling?

but requires that all variables being grouped by (`x` and `y` above) are present in Dataset `ds`. This does not seem like a bad requirement.
> **Member:** I agree, this is probably fine. The variables can also be coordinates (e.g., on a DataArray).
>
> **Collaborator:** If we go with that, how would we support variables that don't align with the dataset? I was under the impression that that currently works (not sure, though). If we require a new dimension name for the grouper variable that would be fine with me, but it's also something we should explicitly state.
>
> **dcherian (Author):** We don't. We use an "exact" join today:
>
> ```python
> if isinstance(group, DataArray):
>     try:
>         align(obj, group, join="exact", copy=False)
>     except ValueError:
>         raise ValueError(error_msg)
> ```
>
> **Collaborator:** In that case, sounds good to me; `.assign_coords({name: arr})` is not that much more to write.

Importantly, `Grouper` instances will be copied internally so that they can safely cache state that might be shared between `factorize` and `weights`.

Today, it is possible to `ds.groupby(DataArray, ...)`. This syntax will still be supported.

## Discussion

This proposal builds on these discussions:
1. https://github.com/xarray-contrib/flox/issues/191#issuecomment-1328898836
2. https://github.com/pydata/xarray/issues/6610

## Copyright

This document has been placed in the public domain.

## References and footnotes

[^1]: Wickham, H. (2011). The split-apply-combine strategy for data analysis. https://vita.had.co.nz/papers/plyr.html
[^2]: Iverson, K.E. (1980). Notation as a tool of thought. Commun. ACM 23, 8 (Aug. 1980), 444–465. https://doi.org/10.1145/358896.358899