Heatmap binning reducing cardinality for log-scale test points #5

Open

athewsey opened this issue Oct 8, 2024 · 0 comments

athewsey (Contributor) commented Oct 8, 2024

I recently ran a `LatencyHeatmap` with `input_lengths=[50, 100, 500, 1000]`, for which my actual generated/observed prompt lengths (`num_tokens_input`) came out as [76, 136, 656, 1282] due to the usual tokenizer estimation error, prompt prefixing, etc.

The problem is that because the first two observed values are much closer to each other than to the rest, the `binning()` function mapped the four points onto only three bins, labelled [226, 528, 1131] on the plot.

I believe this would also affect similar tests that space their points (pseudo-)logarithmically. Suggested solution options:

1. Since `LatencyHeatmap` only generates exactly Nbins prompts today, binning them seems needlessly indirect (a minimal sketch of this idea follows the list):
    - We could update `binning()` to detect when the cardinality of the input vector already exactly matches the number of bins, and return the data as-is with no binning.
    - It might also be nice to indicate on the plot whether binning was actually applied on each axis, e.g. "Input tokens (exact)" vs "Output tokens (binned)".
    - This would not address the same issue on output tokens, which would still be binned linearly regardless of the user's choice of target values. It's less of a blocker there, though, because the output token counts have high cardinality, so it's less likely that fewer than the target number of bins end up containing data.
2. Maybe we could let users toggle whether the bins are linearly or log-spaced (see the second sketch below)? But it would mean more to configure...
3. Since the distribution of the data (both input and output token counts) is likely to be highly non-uniform, might it make more sense to use adaptive binning techniques to represent it? I know the process is more complex and therefore opaque, but the outputs might align more closely with whatever non-uniformly-spaced target values the user set for the test.
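For concreteness, here's a minimal sketch of option 1. This is hypothetical: I'm assuming `binning()` takes a 1-D array of token counts plus a bin count, and I've invented the `(values, labels, was_binned)` return shape to support the "(exact)" / "(binned)" axis hint; the real signature in the repo may differ:

```python
import numpy as np

def binning(values, n_bins):
    """Bin 1-D token counts into `n_bins` buckets, unless the data
    already has at most `n_bins` distinct values.

    Returns (binned_values, bin_labels, was_binned) so the caller can
    label the axis e.g. "Input tokens (exact)" when was_binned=False.
    """
    values = np.asarray(values)
    uniques = np.unique(values)
    if len(uniques) <= n_bins:
        # Cardinality already fits in the requested bins: pass the data
        # through unchanged, so log-spaced test points like
        # [76, 136, 656, 1282] keep four distinct labels.
        return values, uniques, False

    # Otherwise fall back to current-style behaviour: equal-width
    # (linear) bins, labelled by their centres.
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    idx = np.clip(np.digitize(values, edges) - 1, 0, n_bins - 1)
    centres = ((edges[:-1] + edges[1:]) / 2).round().astype(int)
    return centres[idx], centres, True
```

With the numbers from this issue, `binning([76, 136, 656, 1282], 4)` would return the four values untouched, while a high-cardinality output-token vector would still be binned as before.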
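And a sketch of how options 2 and 3 could share one code path via a hypothetical `spacing` parameter (nothing here exists in the library today, and quantile binning is just one adaptive technique among several):

```python
import numpy as np

def bin_edges(values, n_bins, spacing="linear"):
    """Compute n_bins + 1 bin edges under different spacing strategies."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    if spacing == "linear":
        # Current behaviour: equal-width bins.
        return np.linspace(lo, hi, n_bins + 1)
    if spacing == "log":
        # Option 2: bins get proportionally narrower at the low end, so
        # nearby small values are less likely to be merged together.
        return np.geomspace(max(lo, 1.0), hi, n_bins + 1)
    if spacing == "quantile":
        # Option 3: equal-count (adaptive) bins that follow the
        # empirical distribution, so each bin holds roughly the same
        # number of observations however the targets were spaced.
        return np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))
    raise ValueError(f"Unknown spacing: {spacing!r}")
```

For the observed values above, `bin_edges([76, 136, 656, 1282], 4, spacing="quantile")` yields edges that put exactly one observation in each bin, which seems like the behaviour the non-uniform target spacing calls for.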
athewsey added a commit that referenced this issue Oct 9, 2024:

> In (heatmap) plotting, preserve original values if the dataset exactly matches the requested binning cardinality. Partially addresses #5