
Create a new MET tool to interface with the Scores Python Verification package #2978

Open
22 tasks
JohnHalleyGotway opened this issue Sep 23, 2024 · 1 comment
Labels

  • alert: NEED ACCOUNT KEY (Need to assign an account key to this issue)
  • alert: NEED CYCLE ASSIGNMENT (Need to assign to a release development cycle)
  • alert: NEED MORE DEFINITION (Not yet actionable, additional definition required)
  • MET: Python Embedding
  • priority: high (High Priority)
  • requestor: Australian BOM (Australian Bureau of Meteorology)
  • type: new feature (Make it do something new)

Comments

JohnHalleyGotway (Collaborator) commented Sep 23, 2024

Describe the New Feature

As discussed on September 23, 2024 with @nicholasloveday, @michelleharrold, and @hertneky (see notes), consider creating a new MET tool to interface with the Scores Python Verification package:

The idea is to create a new tool, somewhat similar to Ensemble-Stat or Series-Analysis, that generates paired data. The forecast could be a single model or an ensemble, and the observations could be points or gridded analyses. The tool could process a single output time or a time series. It would loop through one or more verification tasks, as defined by a configuration file, and generate matched pairs. It would store the forecast and observation data in XArray objects, pass that data to the Scores verification package, compute one or more statistics on the data, retrieve the "result" from Scores, and write it to a structured output file as name/value pairs of statistics.

There are many more questions and details to be worked out, but this is the basic idea; a rough sketch of the intended flow follows the questions below.

  • Where should the documentation of the Scores statistics reside? Within the MET repo or somewhere managed by BOM?
  • Should MET redistribute Scores? Or should we just use tagged release numbers?
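
For illustration only, here is a minimal sketch of the flow described above: matched pairs held in XArray objects, statistics computed by the scores package, and the result written out as name/value pairs. It is not a design. The variable names, grid, chosen statistics, and JSON output format are all placeholders, and the scores.continuous calls reflect the current public API as we understand it.

```python
# Minimal sketch (illustrative only): matched pairs in xarray objects,
# statistics computed by the scores package, results written as name/value pairs.
import json

import numpy as np
import xarray as xr
import scores.continuous

# Hypothetical matched forecast/observation pairs produced by the pairing step.
lat = np.linspace(-40.0, -10.0, 31)
lon = np.linspace(110.0, 155.0, 46)
fcst = xr.DataArray(np.random.rand(31, 46), dims=("lat", "lon"),
                    coords={"lat": lat, "lon": lon}, name="fcst")
obs = xr.DataArray(np.random.rand(31, 46), dims=("lat", "lon"),
                   coords={"lat": lat, "lon": lon}, name="obs")

# Hand the pairs to scores and collect a couple of statistics over the full domain.
stats = {
    "MSE": float(scores.continuous.mse(fcst, obs)),
    "MAE": float(scores.continuous.mae(fcst, obs)),
}

# Write the results as simple name/value pairs of statistics.
with open("scores_output.json", "w") as f:
    json.dump(stats, f, indent=2)
```

In the actual tool, the verification tasks, the list of statistics to request from Scores, and the structured output format would all be driven by the MET configuration file.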

Acceptance Testing

List input data types and sources.
Describe tests required for new functionality.

Time Estimate

Estimate the amount of work required here.
Issues should represent approximately 1 to 3 days of work.

Sub-Issues

Consider breaking the new feature down into sub-issues.

  • Add a checkbox for each sub-issue here.

Relevant Deadlines

List relevant project deadlines here or state NONE.

Funding Source

Define the source of funding and account keys here or state NONE.

Define the Metadata

Assignee

  • Select engineer(s) or no engineer required
  • Select scientist(s) or no scientist required

Labels

  • Review default alert labels
  • Select component(s)
  • Select priority
  • Select requestor(s)

Milestone and Projects

  • Select Milestone as a MET-X.Y.Z version, Consider for Next Release, or Backlog of Development Ideas
  • For a MET-X.Y.Z version, select the MET-X.Y.Z Development project

Define Related Issue(s)

Consider the impact on the other METplus components.

New Feature Checklist

See the METplus Workflow for details.

  • Complete the issue definition above, including the Time Estimate and Funding source.
  • Fork this repository or create a branch of develop.
    Branch name: feature_<Issue Number>_<Description>
  • Complete the development and test your changes.
  • Add/update log messages for easier debugging.
  • Add/update unit tests.
  • Add/update documentation.
  • Push local changes to GitHub.
  • Submit a pull request to merge into develop.
    Pull request: feature <Issue Number> <Description>
  • Define the pull request metadata, as permissions allow.
    Select: Reviewer(s) and Development issue
    Select: Milestone as the next official version
    Select: MET-X.Y.Z Development project for development toward the next official release
  • Iterate until the reviewer(s) accept and merge your changes.
  • Delete your fork or branch.
  • Close this issue.
nicholasloveday commented Sep 23, 2024

Current dependencies for a minimal install of scores are listed below (a small environment-check sketch follows the list):

  • python >= 3.9 (we will probably drop support for 3.9 sometime soon). We currently test for 3.9-3.12.
  • xarray
  • pandas
  • scipy
  • bottleneck (could probably be made optional as it's just there to speed up a few calculations)
  • scikit-learn (we hope to remove this dependency at some point)
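
Not part of scores itself, just an illustrative check: given the list above, a MET wrapper could confirm the Python environment before calling into scores. The package names come from the list; everything else here is an assumption.

```python
# Illustrative only: confirm the minimal-install dependencies listed above are
# present in the current Python environment before calling into scores.
import sys
from importlib.metadata import PackageNotFoundError, version

assert sys.version_info >= (3, 9), "scores currently supports Python >= 3.9"

# Distribution names taken from the minimal-install list above.
for pkg in ("scores", "xarray", "pandas", "scipy", "bottleneck", "scikit-learn"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: MISSING")
```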
