Metrics for segmentation
In Torchmetrics v1.4, we are happy to introduce a new domain of metrics to the library: segmentation metrics. Segmentation metrics are used to evaluate how well segmentation algorithms perform, e.g., algorithms that take in an image and decide, pixel by pixel, what kind of object each pixel belongs to. These kinds of algorithms are necessary in applications such as self-driving cars. Segmentation metrics are closely related to classification metrics, but for now they expect their input to be formatted differently in Torchmetrics; see the documentation for more info. For now, MeanIoU and GeneralizedDiceScore have been added to the subpackage, with many more to follow in upcoming releases of Torchmetrics. We are happy to receive any feedback on metrics to add in the future or on the user interface for the new segmentation metrics.
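To give a feel for the new subpackage, here is a minimal sketch of how the two new metrics might be used. The tensor shapes and the one-hot encoding below are illustrative assumptions; check the segmentation documentation for the exact input format the metrics expect.

```python
import torch
from torch.nn.functional import one_hot
from torchmetrics.segmentation import MeanIoU, GeneralizedDiceScore

# Illustrative data (assumed format): a batch of 4 predicted and 4 ground-truth
# masks over 3 classes on a 16x16 grid, one-hot encoded to shape (N, C, H, W).
num_classes = 3
preds = one_hot(torch.randint(num_classes, (4, 16, 16)), num_classes).movedim(-1, 1)
target = one_hot(torch.randint(num_classes, (4, 16, 16)), num_classes).movedim(-1, 1)

mean_iou = MeanIoU(num_classes=num_classes)
dice = GeneralizedDiceScore(num_classes=num_classes)

# Both metrics follow the usual Torchmetrics update/compute interface.
mean_iou.update(preds, target)
dice.update(preds, target)
print(mean_iou.compute(), dice.compute())
```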
Besides the new segmentation domain, Torchmetrics v1.4 adds new metrics to the classification and image subpackages and has multiple bug fixes and other quality-of-life improvements. We refer to the changelog for the complete list of changes.
[1.4.0] - 2024-05-03
Added
- Added SensitivityAtSpecificity metric to classification subpackage (#2217)
- Added QualityWithNoReference metric to image subpackage (#2288)
- Added new segmentation metrics: MeanIoU and GeneralizedDiceScore
- Added support for calculating segmentation quality and recognition quality in PanopticQuality metric (#2381)
- Added pretty-errors for improving error prints (#2431)
- Added support for torch.float weighted networks for FID and KID calculations (#2483)
- Added zero_division argument to selected classification metrics (#2198); see the sketch after this list
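Of the additions above, the new zero_division argument is perhaps the easiest to show in code. The snippet below is a hedged sketch assuming the argument is accepted by MulticlassPrecision as introduced in #2198; it controls the value reported for classes where precision would otherwise be 0/0.

```python
import torch
from torchmetrics.classification import MulticlassPrecision

# Hypothetical corner case: class 2 never occurs in either preds or target,
# so its precision is 0/0; zero_division picks the value reported instead.
preds = torch.tensor([0, 0, 1, 1])
target = torch.tensor([0, 1, 1, 1])

precision = MulticlassPrecision(num_classes=3, average=None, zero_division=0.0)
print(precision(preds, target))  # per-class precision, with 0.0 for class 2
```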
Changed
- Made __getattr__ and __setattr__ of ClasswiseWrapper more general (#2424)
Fixed
- Fixed getitem for metric collection when prefix/postfix is set (#2430)
- Fixed axis names with Precision-Recall curve (#2462)
- Fixed list synchronization with partly empty lists (#2468)
- Fixed memory leak in metrics using list states (#2492)
- Fixed bug in computation of ERGAS metric (#2498)
- Fixed BootStrapper wrapper not working when kwargs are provided as arguments (#2503)
- Fixed warnings being suppressed in MeanAveragePrecision when requested (#2501)
- Fixed corner-case in binary_average_precision when only negative samples are provided (#2507)
Key Contributors
@baskrahmer, @Borda, @ChristophReich1996, @daniel-code, @furkan-celik, @i-aki-y, @jlcsilva, @NielsRogge, @oguz-hanoglu, @SkafteNicki, @ywchan2005
New Contributors since 1.3.0
- @eamonn-zh made their first contribution in #2345
- @nsmlzl made their first contribution in #2346
- @fschlatt made their first contribution in #2364
- @JonasVerbickas made their first contribution in #2358
- @AtomicVar made their first contribution in #2391
- @JDongian made their first contribution in #2400
- @daniel-code made their first contribution in #2390
- @baskrahmer made their first contribution in #2457
- @ChristophReich1996 made their first contribution in #2381
- @lukazso made their first contribution in #2491
- @S-aiueo32 made their first contribution in #2499
- @dominicgkerr made their first contribution in #2493
- @Shoumik-Gandre made their first contribution in #2482
- @randombenj made their first contribution in #2511
- @NielsRogge made their first contribution in #1236
- @i-aki-y made their first contribution in #2198
If we forgot someone due to not matching commit email with GitHub account, let us know :]
Full Changelog: v1.3.0...v1.4.0