public_repos/torchmetrics/MANIFEST.in
# Manifest syntax https://packaging.python.org/en/latest/guides/using-manifest-in/
graft wheelhouse
recursive-exclude __pycache__ *.py[cod] *.orig
# include also models
recursive-include src *.pth
# Include the README and CHANGELOG
include *.md
recursive-include src *.md
# Include the license file
include LICENSE
# Include Citation file
include *.cff
# Include marker file for PEP 561
recursive-include src *.typed
exclude *.sh
exclude *.toml
exclude *.svg
# exclude tests from package
recursive-exclude tests *
recursive-exclude site *
exclude tests
# Exclude the documentation files
recursive-exclude docs *
exclude docs
# Include the Requirements
include requirements.txt
recursive-include requirements *.txt
recursive-exclude requirements *.py
# Exclude build configs
exclude *.yml
exclude *.yaml
exclude Makefile
prune .devcontainer
prune .git
prune .github
prune examples*
prune temp*
prune test*
prune SandBox*
public_repos/torchmetrics/CITATION.cff
cff-version: 1.2.0
message: "If you want to cite the framework, feel free to use this (but only if you loved it 😊)"
title: "TorchMetrics - Measuring Reproducibility in PyTorch"
abstract:
"A main problem with reproducing machine learning publications is the variance of metric implementations across papers.
A lack of standardization leads to different behavior in mechanisms such as checkpointing, learning rate schedulers or early stopping, that will influence the reported results.
For example, a complex metric such as Fréchet inception distance (FID) for synthetic image quality evaluation will differ based on the specific interpolation method used.
There have been a few attempts at tackling the reproducibility issues.
Papers With Code links research code with its corresponding paper. Similarly, arXiv recently added a code and data section that links both official and community code to papers.
However, these methods rely on the paper code to be made publicly accessible which is not always possible.
Our approach is to provide the de-facto reference implementation for metrics.
This approach enables proprietary work to still be comparable as long as they’ve used our reference implementations.
We introduce TorchMetrics, a general-purpose metrics package that covers a wide variety of tasks and domains used in the machine learning community.
TorchMetrics provides standard classification and regression metrics; and domain-specific metrics for audio, computer vision, natural language processing, and information retrieval.
Our process for adding a new metric is as follows, first we integrate a well-tested and established third-party library.
Once we’ve verified the implementations and written tests for them, we re-implement them in native PyTorch to enable hardware acceleration and remove any bottlenecks in inter-device transfer."
authors:
- name: Nicki Skafte Detlefsen
orcid: "https://orcid.org/0000-0002-8133-682X"
- name: Jiri Borovec
orcid: "https://orcid.org/0000-0001-7437-824X"
- name: Justus Schock
orcid: "https://orcid.org/0000-0003-0512-3053"
- name: Ananya Harsh
- name: Teddy Koker
- name: Luca Di Liello
- name: Daniel Stancl
- name: Changsheng Quan
- name: Maxim Grechkin
- name: William Falcon
doi: 10.21105/joss.04101
license: "Apache-2.0"
url: "https://www.pytorchlightning.ai"
repository-code: "https://github.com/Lightning-AI/torchmetrics"
date-released: 2022-02-11
keywords:
- machine learning
- deep learning
- artificial intelligence
- metrics
- pytorch
public_repos/torchmetrics/setup.py
#!/usr/bin/env python
import glob
import os
import re
from functools import partial
from importlib.util import module_from_spec, spec_from_file_location
from itertools import chain
from pathlib import Path
from typing import Any, Iterable, Iterator, List, Optional, Tuple, Union
from pkg_resources import Requirement, yield_lines
from setuptools import find_packages, setup
_PATH_ROOT = os.path.realpath(os.path.dirname(__file__))
_PATH_SOURCE = os.path.join(_PATH_ROOT, "src")
_PATH_REQUIRE = os.path.join(_PATH_ROOT, "requirements")
_FREEZE_REQUIREMENTS = os.environ.get("FREEZE_REQUIREMENTS", "0").lower() in ("1", "true")
class _RequirementWithComment(Requirement):
strict_string = "# strict"
def __init__(self, *args: Any, comment: str = "", pip_argument: Optional[str] = None, **kwargs: Any) -> None:
super().__init__(*args, **kwargs)
self.comment = comment
if pip_argument is not None and not pip_argument:
raise ValueError("Expected `pip_argument` to either be `None` or an str, but got an empty string")
self.pip_argument = pip_argument
self.strict = self.strict_string in comment.lower()
def adjust(self, unfreeze: bool) -> str:
"""Remove version restrictions unless they are strict.
>>> _RequirementWithComment("arrow<=1.2.2,>=1.2.0", comment="# anything").adjust(False)
'arrow<=1.2.2,>=1.2.0'
>>> _RequirementWithComment("arrow<=1.2.2,>=1.2.0", comment="# strict").adjust(False)
'arrow<=1.2.2,>=1.2.0 # strict'
>>> _RequirementWithComment("arrow<=1.2.2,>=1.2.0", comment="# my name").adjust(True)
'arrow>=1.2.0'
>>> _RequirementWithComment("arrow>=1.2.0, <=1.2.2", comment="# strict").adjust(True)
'arrow<=1.2.2,>=1.2.0 # strict'
>>> _RequirementWithComment("arrow").adjust(True)
'arrow'
"""
out = str(self)
if self.strict:
return f"{out} {self.strict_string}"
if unfreeze:
for operator, version in self.specs:
if operator in ("<", "<="):
# drop upper bound
return out.replace(f"{operator}{version},", "")
return out
def _parse_requirements(strs: Union[str, Iterable[str]]) -> Iterator[_RequirementWithComment]:
r"""Adapted from `pkg_resources.parse_requirements` to include comments.
>>> txt = ['# ignored', '', 'this # is an', '--piparg', 'example', 'foo # strict', 'thing', '-r different/file.txt']
>>> [r.adjust('none') for r in _parse_requirements(txt)]
['this', 'example', 'foo # strict', 'thing']
>>> txt = '\\n'.join(txt)
>>> [r.adjust('none') for r in _parse_requirements(txt)]
['this', 'example', 'foo # strict', 'thing']
"""
lines = yield_lines(strs)
pip_argument = None
for line in lines:
# Drop comments -- a hash without a space may be in a URL.
if " #" in line:
comment_pos = line.find(" #")
line, comment = line[:comment_pos], line[comment_pos:]
else:
comment = ""
# If there is a line continuation, drop it, and append the next line.
if line.endswith("\\"):
line = line[:-2].strip()
try:
line += next(lines)
except StopIteration:
return
if "@" in line or re.search("https?://", line):
# skip lines with links like `pesq @ git+https://github.com/ludlows/python-pesq`
continue
# If there's a pip argument, save it
if line.startswith("--"):
pip_argument = line
continue
if line.startswith("-r "):
# linked requirement files are unsupported
continue
yield _RequirementWithComment(line, comment=comment, pip_argument=pip_argument)
pip_argument = None
def _load_requirements(
path_dir: str, file_name: str = "base.txt", unfreeze: bool = not _FREEZE_REQUIREMENTS
) -> List[str]:
"""Load requirements from a file.
>>> _load_requirements(_PATH_REQUIRE)
['numpy...', 'torch...']
"""
path = Path(path_dir) / file_name
if not path.exists():
raise ValueError(f"Path {path} not found for input dir {path_dir} and filename {file_name}.")
text = path.read_text()
return [req.adjust(unfreeze) for req in _parse_requirements(text)]
def _load_readme_description(path_dir: str, homepage: str, version: str) -> str:
"""Load readme as decribtion.
>>> _load_readme_description(_PATH_ROOT, "", "")
'<div align="center">...'
"""
path_readme = os.path.join(path_dir, "README.md")
with open(path_readme, encoding="utf-8") as fp:
text = fp.read()
# https://github.com/Lightning-AI/torchmetrics/raw/master/docs/source/_static/images/lightning_module/pt_to_pl.png
github_source_url = os.path.join(homepage, "raw", version)
# replace relative repository path to absolute link to the release
# do not replace all "docs" as in the readme we replace some other sources with particular path to docs
text = text.replace("docs/source/_static/", f"{os.path.join(github_source_url, 'docs/source/_static/')}")
# readthedocs badge
text = text.replace("badge/?version=stable", f"badge/?version={version}")
text = text.replace("torchmetrics.readthedocs.io/en/stable/", f"torchmetrics.readthedocs.io/en/{version}")
# codecov badge
text = text.replace("/branch/master/graph/badge.svg", f"/release/{version}/graph/badge.svg")
# replace github badges for release ones
text = text.replace("badge.svg?branch=master&event=push", f"badge.svg?tag={version}")
# Azure...
text = text.replace("?branchName=master", f"?branchName=refs%2Ftags%2F{version}")
text = re.sub(r"\?definitionId=\d+&branchName=master", f"?definitionId=2&branchName=refs%2Ftags%2F{version}", text)
skip_begin = r"<!-- following section will be skipped from PyPI description -->"
skip_end = r"<!-- end skipping PyPI description -->"
# todo: wrap content as commented description
return re.sub(rf"{skip_begin}.+?{skip_end}", "<!-- -->", text, flags=re.IGNORECASE + re.DOTALL)
def _load_py_module(fname: str, pkg: str = "torchmetrics"):
spec = spec_from_file_location(os.path.join(pkg, fname), os.path.join(_PATH_SOURCE, pkg, fname))
py = module_from_spec(spec)
spec.loader.exec_module(py)
return py
ABOUT = _load_py_module("__about__.py")
LONG_DESCRIPTION = _load_readme_description(
_PATH_ROOT,
homepage=ABOUT.__homepage__,
version=f"v{ABOUT.__version__}",
)
BASE_REQUIREMENTS = _load_requirements(path_dir=_PATH_REQUIRE, file_name="base.txt")
def _prepare_extras(skip_pattern: str = "^_", skip_files: Tuple[str] = ("base.txt",)) -> dict:
"""Preparing extras for the package listing requirements.
Args:
skip_pattern: ignore files with this pattern, by default all files starting with _
skip_files: ignore some additional files, by default base requirements
Note: particular domain test requirements are aggregated in a single "_tests" extra (which is not directly accessible).
"""
# find all extra requirements
_load_req = partial(_load_requirements, path_dir=_PATH_REQUIRE)
found_req_files = sorted(os.path.basename(p) for p in glob.glob(os.path.join(_PATH_REQUIRE, "*.txt")))
# filter unwanted files
found_req_files = [n for n in found_req_files if not re.match(skip_pattern, n)]
found_req_files = [n for n in found_req_files if n not in skip_files]
found_req_names = [os.path.splitext(req)[0] for req in found_req_files]
# define basic and extra extras
extras_req = {"_tests": []}
for name, fname in zip(found_req_names, found_req_files):
if name.endswith("_test"):
extras_req["_tests"] += _load_req(file_name=fname)
else:
extras_req[name] = _load_req(file_name=fname)
# filter the uniques
extras_req = {n: list(set(req)) for n, req in extras_req.items()}
# create an 'all' keyword that install all possible dependencies
extras_req["all"] = list(chain([pkgs for k, pkgs in extras_req.items() if k not in ("_test", "_tests")]))
extras_req["dev"] = extras_req["all"] + extras_req["_tests"]
return extras_req
# https://packaging.python.org/discussions/install-requires-vs-requirements /
# keep the meta-data here for simplicity in reading this file... it's not obvious
# what happens and to non-engineers they won't know to look in init ...
# the goal of the project is simplicity for researchers, don't want to add too much
# engineer specific practices
if __name__ == "__main__":
setup(
name="torchmetrics",
version=ABOUT.__version__,
description=ABOUT.__docs__,
author=ABOUT.__author__,
author_email=ABOUT.__author_email__,
url=ABOUT.__homepage__,
download_url=os.path.join(ABOUT.__homepage__, "archive", "master.zip"),
license=ABOUT.__license__,
packages=find_packages(where="src"),
package_dir={"": "src"},
long_description=LONG_DESCRIPTION,
long_description_content_type="text/markdown",
include_package_data=True,
zip_safe=False,
keywords=["deep learning", "machine learning", "pytorch", "metrics", "AI"],
python_requires=">=3.8",
setup_requires=[],
install_requires=BASE_REQUIREMENTS,
extras_require=_prepare_extras(),
project_urls={
"Bug Tracker": os.path.join(ABOUT.__homepage__, "issues"),
"Documentation": "https://torchmetrics.rtfd.io/en/latest/",
"Source Code": ABOUT.__homepage__,
},
classifiers=[
"Environment :: Console",
"Natural Language :: English",
# How mature is this project? Common values are
# 3 - Alpha, 4 - Beta, 5 - Production/Stable
"Development Status :: 5 - Production/Stable",
# Indicate who your project is intended for
"Intended Audience :: Developers",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Image Recognition",
"Topic :: Scientific/Engineering :: Information Analysis",
# Pick your license as you wish
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
],
)
public_repos/torchmetrics/.readthedocs.yml
# Copyright The Lightning AI team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
# Build documentation in the docs/ directory with Sphinx
# reference: https://docs.readthedocs.io/en/stable/config-file/v2.html#sphinx
sphinx:
fail_on_warning: true
build:
os: "ubuntu-22.04"
tools:
python: "3.9"
commands:
- printenv
- pwd ; ls -lh
- pip install -U pip awscli --user
- python -m awscli s3 sync --no-sign-request s3://sphinx-packages/ dist/ ; ls -lh dist/
- >
pip install -e . -q -r requirements/_docs.txt \
-f 'https://download.pytorch.org/whl/cpu/torch_stable.html' -f dist/ ;
pip list
# this needs to be split so that `sphinx-build` is picked up from the previous installation
- bash docs/rtfd-build.sh
- mkdir -p _readthedocs ; mv docs/build/html _readthedocs/html
public_repos/torchmetrics/requirements.txt
-r requirements/base.txt
public_repos/torchmetrics/.codecov.yml
# see https://docs.codecov.io/docs/codecov-yaml
# Validation check:
# $ curl --data-binary @.codecov.yml https://codecov.io/validate
# https://docs.codecov.io/docs/codecovyml-reference
codecov:
bot: "codecov-io"
strict_yaml_branch: "yaml-config"
require_ci_to_pass: yes
notify:
# after_n_builds: 2
wait_for_ci: yes
coverage:
precision: 0 # 2 = xx.xx%, 0 = xx%
round: nearest # how coverage is rounded: down/up/nearest
range: 40...100 # custom range of coverage colors from red -> yellow -> green
status:
# https://codecov.readme.io/v1.0/docs/commit-status
project:
default:
informational: true
target: 95% # specify the target coverage for each commit status
threshold: 30% # allow this much decrease on project
# https://github.com/codecov/support/wiki/Filtering-Branches
# branches: master
if_ci_failed: error
# https://github.com/codecov/support/wiki/Patch-Status
patch:
default:
informational: true
threshold: 50% # allow this much decrease on patch
changes: false
# https://docs.codecov.com/docs/github-checks#disabling-github-checks-patch-annotations
github_checks:
annotations: false
parsers:
gcov:
branch_detection:
conditional: true
loop: true
macro: false
method: false
javascript:
enable_partials: false
comment:
layout: header, diff
require_changes: false
behavior: default # update if exists else create new
# branches: *
public_repos/torchmetrics/Makefile
.PHONY: test clean docs env data
export FREEZE_REQUIREMENTS=1
# assume you have installed the needed packages
export SPHINX_MOCK_REQUIREMENTS=1
export SPHINX_FETCH_ASSETS=0
clean:
# clean all temp runs
rm -rf $(shell find . -name "mlruns")
rm -rf _ckpt_*
rm -rf .mypy_cache
rm -rf .pytest_cache
rm -rf tests/.pytest_cache
rm -rf ./docs/build
rm -rf ./docs/source/generated
rm -rf ./docs/source/*/generated
rm -rf ./docs/source/api
rm -rf build
rm -rf dist
rm -rf *.egg-info
rm -rf src/*.egg-info
test: clean env data
# run tests with coverage
cd src && python -m pytest torchmetrics
cd tests && python -m pytest unittests -v --cov=torchmetrics
cd tests && python -m coverage report
docs: clean
pip install -e . --quiet -r requirements/_docs.txt
# apt-get install -y texlive-latex-extra dvipng texlive-pictures texlive-fonts-recommended cm-super
TOKENIZERS_PARALLELISM=false python -m sphinx -b html -W --keep-going docs/source docs/build
env:
pip install -e . -U -r requirements/_devel.txt
data:
python -c "from urllib.request import urlretrieve ; urlretrieve('https://pl-public-data.s3.amazonaws.com/metrics/data.zip', 'data.zip')"
unzip -o data.zip -d ./tests
public_repos/torchmetrics/.pre-commit-config.yaml
# Copyright The Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
default_language_version:
python: python3
ci:
autofix_prs: true
autoupdate_commit_msg: "[pre-commit.ci] pre-commit suggestions"
autoupdate_schedule: quarterly
# submodules: true
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: end-of-file-fixer
- id: trailing-whitespace
# - id: check-json
- id: check-yaml
- id: check-toml
- id: check-docstring-first
- id: check-executables-have-shebangs
- id: check-case-conflict
- id: check-added-large-files
args: ["--maxkb=100", "--enforce-all"]
- id: detect-private-key
- repo: https://github.com/asottile/pyupgrade
rev: v3.14.0
hooks:
- id: pyupgrade
args: ["--py38-plus"]
name: Upgrade code
- repo: https://github.com/codespell-project/codespell
rev: v2.2.6
hooks:
- id: codespell
additional_dependencies: [tomli]
args: ["--write-changes"]
exclude: pyproject.toml
- repo: https://github.com/crate-ci/typos
rev: v1.16.17
hooks:
- id: typos
# empty so it does not write fixes
args: []
exclude: pyproject.toml
- repo: https://github.com/PyCQA/docformatter
rev: v1.7.5
hooks:
- id: docformatter
additional_dependencies: [tomli]
args: ["--in-place"]
- repo: https://github.com/psf/black
rev: 23.9.1
hooks:
- id: black
name: Format code
- repo: https://github.com/executablebooks/mdformat
rev: 0.7.17
hooks:
- id: mdformat
additional_dependencies:
- mdformat-gfm
- mdformat-black
- mdformat_frontmatter
exclude: |
(?x)^(
CHANGELOG.md|
docs/paper_JOSS/paper.md
)$
- repo: https://github.com/pre-commit/mirrors-prettier
rev: v3.0.3
hooks:
- id: prettier
# https://prettier.io/docs/en/options.html#print-width
args: ["--print-width=120"]
- repo: https://github.com/asottile/yesqa
rev: v1.5.0
hooks:
- id: yesqa
additional_dependencies:
- pep8-naming
- flake8-pytest-style
- flake8-bandit
- flake8-builtins
- flake8-bugbear
- repo: https://github.com/pre-commit/pygrep-hooks
rev: v1.10.0
hooks:
# Enforce that noqa annotations always occur with specific codes. Sample annotations: # noqa: F401, # noqa: F401,W203
- id: python-check-blanket-noqa
# Enforce that # type: ignore annotations always occur with specific codes. Sample annotations: # type: ignore[attr-defined], # type: ignore[attr-defined, name-defined]
#- id: python-check-blanket-type-ignore # TODO
# Prevent common mistakes of assert mck.not_called(), assert mck.called_once_with(...) and mck.assert_called.
- id: python-check-mock-methods
# A quick check for the eval() built-in function
#- id: python-no-eval broken check - https://github.com/pre-commit/pygrep-hooks/issues/135
# A quick check for the deprecated .warn() method of python loggers
- id: python-no-log-warn
# Enforce that python3.6+ type annotations are used instead of type comments
#- id: python-use-type-annotations # false positive - https://github.com/pre-commit/pygrep-hooks/issues/154
# Detect common mistake of using single backticks when writing rst
#- id: rst-backticks # todo
# Detect mistake of rst directive not ending with double colon or space before the double colon
- id: rst-directive-colons
# Detect mistake of inline code touching normal text in rst
- id: rst-inline-touching-normal
# Forbid files which have a UTF-8 Unicode replacement character
- id: text-unicode-replacement-char
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.0.292
hooks:
- id: ruff
args: ["--fix"]
public_repos/torchmetrics/pyproject.toml
[metadata]
license_file = "LICENSE"
description-file = "README.md"
[build-system]
requires = ["setuptools", "wheel"]
[tool.check-manifest]
ignore = ["*.yml", ".github", ".github/*"]
[tool.pytest.ini_options]
norecursedirs = [".git", ".github", "dist", "build", "docs"]
addopts = [
"--strict-markers",
"--doctest-modules",
"--doctest-plus",
"--color=yes",
"--disable-pytest-warnings",
]
#filterwarnings = ["error::FutureWarning"] # ToDo
xfail_strict = true
junit_duration_report = "call"
[tool.coverage.report]
exclude_lines = ["pragma: no cover", "pass"]
[tool.coverage.run]
parallel = true
concurrency = "thread"
relative_files = true
[tool.black]
# https://github.com/psf/black
line-length = 120
exclude = "(.eggs|.git|.hg|.mypy_cache|.venv|_build|buck-out|build|dist)"
[tool.docformatter]
recursive = true
# some docstring start with r"""
wrap-summaries = 119
wrap-descriptions = 120
blank = true
[tool.codespell]
#skip = '*.py'
quiet-level = 3
# Todo: comma separated list of words; waiting for:
# https://github.com/codespell-project/codespell/issues/2839#issuecomment-1731601603
# Todo: also add links until they are ignored by their nature:
# https://github.com/codespell-project/codespell/issues/2243#issuecomment-1732019960
ignore-words-list = """
rouge, \
mape, \
wil, \
fpr, \
raison, \
archiv
"""
[tool.typos.default]
extend-ignore-identifiers-re = [
# *sigh* this just isn't worth the cost of fixing
"AttributeID.*Supress.*",
]
[tool.typos.default.extend-identifiers]
# *sigh* this just isn't worth the cost of fixing
MAPE = "MAPE"
WIL = "WIL"
Raison = "Raison"
[tool.typos.default.extend-words]
# Don't correct the surname "Teh"
fpr = "fpr"
mape = "mape"
wil = "wil"
[tool.ruff]
target-version = "py38"
line-length = 120
# Enable Pyflakes `E` and `F` codes by default.
select = [
"E",
"W", # see: https://pypi.org/project/pycodestyle
"F", # see: https://pypi.org/project/pyflakes
"I", #see: https://pypi.org/project/isort/
"D", # see: https://pypi.org/project/pydocstyle
"N", # see: https://pypi.org/project/pep8-naming
"S", # see: https://pypi.org/project/flake8-bandit
]
extend-select = [
"A", # see: https://pypi.org/project/flake8-builtins
"B", # see: https://pypi.org/project/flake8-bugbear
"C4", # see: https://pypi.org/project/flake8-comprehensions
"PT", # see: https://pypi.org/project/flake8-pytest-style
"RET", # see: https://pypi.org/project/flake8-return
"SIM", # see: https://pypi.org/project/flake8-simplify
"YTT", # see: https://pypi.org/project/flake8-2020
"ANN", # see: https://pypi.org/project/flake8-annotations
"TID", # see: https://pypi.org/project/flake8-tidy-imports/
"T10", # see: https://pypi.org/project/flake8-debugger
"Q", # see: https://pypi.org/project/flake8-quotes
"RUF", # Ruff-specific rules
"EXE", # see: https://pypi.org/project/flake8-executable
"ISC", # see: https://pypi.org/project/flake8-implicit-str-concat
"PIE", # see: https://pypi.org/project/flake8-pie
"PLE", # see: https://pypi.org/project/pylint/
"PERF", # see: https://pypi.org/project/perflint/
"PYI", # see: https://pypi.org/project/flake8-pyi/
]
ignore = [
"E731", # Do not assign a lambda expression, use a def
"D100", # todo: Missing docstring in public module
"D104", # todo: Missing docstring in public package
"D107", # Missing docstring in `__init__`
"ANN101", # Missing type annotation for `self` in method
"S301", # todo: `pickle` and modules that wrap it can be unsafe when used to deserialize untrusted data, possible security issue # todo
"S310", # todo: Audit URL open for permitted schemes. Allowing use of `file:` or custom schemes is often unexpected. # todo
"B905", # todo: `zip()` without an explicit `strict=` parameter
]
# Exclude a variety of commonly ignored directories.
exclude = [
".eggs",
".git",
".mypy_cache",
".ruff_cache",
"__pypackages__",
"_build",
"build",
"dist",
"docs",
]
ignore-init-module-imports = true
unfixable = ["F401"]
[tool.ruff.per-file-ignores]
"setup.py" = ["ANN202", "ANN401"]
"src/**" = ["ANN401"]
"tests/**" = ["S101", "ANN001", "ANN201", "ANN202", "ANN401"]
[tool.ruff.pydocstyle]
# Use Google-style docstrings.
convention = "google"
#[tool.ruff.pycodestyle]
#ignore-overlong-task-comments = true
[tool.ruff.mccabe]
# Unlike Flake8, default to a complexity level of 10.
max-complexity = 10
[tool.mypy]
files = ["src/torchmetrics"]
install_types = "True"
non_interactive = "True"
disallow_untyped_defs = "True"
ignore_missing_imports = "True"
show_error_codes = "True"
warn_redundant_casts = "True"
warn_unused_configs = "True"
warn_unused_ignores = "True"
allow_redefinition = "True"
# disable this rule as the Trainer attributes are defined in the connectors, not in its __init__
disable_error_code = "attr-defined"
# style choices
warn_no_return = "False"
# Ignore mypy errors for these files
# TODO: the goal is for this to be empty
[[tool.mypy.overrides]]
module = [
"torchmetrics.classification.exact_match",
"torchmetrics.classification.f_beta",
"torchmetrics.classification.precision_recall",
"torchmetrics.classification.ranking",
"torchmetrics.classification.recall_at_fixed_precision",
"torchmetrics.classification.roc",
"torchmetrics.classification.stat_scores",
"torchmetrics.detection._mean_ap",
"torchmetrics.detection.mean_ap",
"torchmetrics.functional.image.psnr",
"torchmetrics.functional.image.ssim",
"torchmetrics.image.psnr",
"torchmetrics.image.ssim",
]
ignore_errors = "True"
public_repos/torchmetrics/.prettierignore
# Ignore all MD files:
**/*.md
public_repos/torchmetrics/README.md
<div align="center">
<img src="docs/source/_static/images/logo.png" width="400px">
**Machine learning metrics for distributed, scalable PyTorch applications.**
______________________________________________________________________
<p align="center">
<a href="#what-is-torchmetrics">What is Torchmetrics</a> •
<a href="#implementing-your-own-module-metric">Implementing a metric</a> •
<a href="#build-in-metrics">Built-in metrics</a> •
<a href="https://lightning.ai/docs/torchmetrics/stable/">Docs</a> •
<a href="#community">Community</a> •
<a href="#license">License</a>
</p>
______________________________________________________________________
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/torchmetrics)](https://pypi.org/project/torchmetrics/)
[![PyPI Status](https://badge.fury.io/py/torchmetrics.svg)](https://badge.fury.io/py/torchmetrics)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/torchmetrics)
](https://pepy.tech/project/torchmetrics)
[![Conda](https://img.shields.io/conda/v/conda-forge/torchmetrics?label=conda&color=success)](https://anaconda.org/conda-forge/torchmetrics)
[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/Lightning-AI/torchmetrics/blob/master/LICENSE)
[![CI testing | CPU](https://github.com/Lightning-AI/torchmetrics/actions/workflows/ci-tests.yml/badge.svg?event=push)](https://github.com/Lightning-AI/torchmetrics/actions/workflows/ci-tests.yml)
[![Build Status](https://dev.azure.com/Lightning-AI/Metrics/_apis/build/status%2FTM.unittests?branchName=master)](https://dev.azure.com/Lightning-AI/Metrics/_build/latest?definitionId=54&branchName=master)
[![codecov](https://codecov.io/gh/Lightning-AI/torchmetrics/branch/master/graph/badge.svg?token=NER6LPI3HS)](https://codecov.io/gh/Lightning-AI/torchmetrics)
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/Lightning-AI/torchmetrics/master.svg)](https://results.pre-commit.ci/latest/github/Lightning-AI/torchmetrics/master)
[![Documentation Status](https://readthedocs.org/projects/torchmetrics/badge/?version=latest)](https://torchmetrics.readthedocs.io/en/latest/?badge=latest)
[![Discord](https://img.shields.io/discord/1077906959069626439?style=plastic)](https://discord.gg/VptPCZkGNa)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5844769.svg)](https://doi.org/10.5281/zenodo.5844769)
[![JOSS status](https://joss.theoj.org/papers/561d9bb59b400158bc8204e2639dca43/status.svg)](https://joss.theoj.org/papers/561d9bb59b400158bc8204e2639dca43)
______________________________________________________________________
</div>
## Installation
Simple installation from PyPI
```bash
pip install torchmetrics
```
<details>
<summary>Other installations</summary>
Install using conda
```bash
conda install -c conda-forge torchmetrics
```
Pip from source
```bash
# with git
pip install git+https://github.com/Lightning-AI/torchmetrics.git@release/stable
```
Pip from archive
```bash
pip install https://github.com/Lightning-AI/torchmetrics/archive/refs/heads/release/stable.zip
```
Extra dependencies for specialized metrics:
```bash
pip install torchmetrics[audio]
pip install torchmetrics[image]
pip install torchmetrics[text]
pip install torchmetrics[all] # install all of the above
```
Install latest developer version
```bash
pip install https://github.com/Lightning-AI/torchmetrics/archive/master.zip
```
</details>
______________________________________________________________________
## What is TorchMetrics
TorchMetrics is a collection of 100+ PyTorch metrics implementations and an easy-to-use API to create custom metrics. It offers:
- A standardized interface to increase reproducibility
- Reduces boilerplate
- Automatic accumulation over batches
- Metrics optimized for distributed-training
- Automatic synchronization between multiple devices
You can use TorchMetrics with any PyTorch model or with [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/) to enjoy additional features such as:
- Module metrics are automatically placed on the correct device.
- Native support for logging metrics in Lightning to reduce even more boilerplate.
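As a rough illustration of the Lightning integration, here is a minimal sketch of logging a metric from a `LightningModule`. The `LitClassifier` model, its layer sizes and the optimizer settings are invented for this example and are not part of this repository.

<!--phmdoctest-mark.skip-->

```python
import torch
import pytorch_lightning as pl
from torch import nn
from torch.nn import functional as F

import torchmetrics


class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 5)
        # metrics are nn.Modules, so Lightning moves them to the correct device
        self.train_acc = torchmetrics.classification.Accuracy(task="multiclass", num_classes=5)

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self.layer(x)
        loss = F.cross_entropy(logits, y)
        # update the metric state on the current batch
        self.train_acc(logits, y)
        # logging the metric object lets Lightning handle accumulation and epoch-end compute
        self.log("train_acc", self.train_acc, on_step=True, on_epoch=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```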
## Using TorchMetrics
### Module metrics
The [module-based metrics](https://lightning.ai/docs/torchmetrics/stable/references/metric.html) contain internal metric states (similar to the parameters of the PyTorch module) that automate accumulation and synchronization across devices!
- Automatic accumulation over multiple batches
- Automatic synchronization between multiple devices
- Metric arithmetic
**This can be run on CPU, single GPU or multi-GPUs!**
For the single GPU/CPU case:
```python
import torch
# import our library
import torchmetrics
# initialize metric
metric = torchmetrics.classification.Accuracy(task="multiclass", num_classes=5)
# move the metric to device you want computations to take place
device = "cuda" if torch.cuda.is_available() else "cpu"
metric.to(device)
n_batches = 10
for i in range(n_batches):
# simulate a classification problem
preds = torch.randn(10, 5).softmax(dim=-1).to(device)
target = torch.randint(5, (10,)).to(device)
# metric on current batch
acc = metric(preds, target)
print(f"Accuracy on batch {i}: {acc}")
# metric on all batches using custom accumulation
acc = metric.compute()
print(f"Accuracy on all data: {acc}")
```
Module metric usage remains the same when using multiple GPUs or multiple nodes.
<details>
<summary>Example using DDP</summary>
<!--phmdoctest-mark.skip-->
```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP
import torchmetrics
def metric_ddp(rank, world_size):
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "12355"
# create default process group
dist.init_process_group("gloo", rank=rank, world_size=world_size)
# initialize model
metric = torchmetrics.classification.Accuracy(task="multiclass", num_classes=5)
# define a model and append your metric to it
# this allows metric states to be placed on correct accelerators when
# .to(device) is called on the model
model = nn.Linear(10, 10)
model.metric = metric
model = model.to(rank)
# initialize DDP
model = DDP(model, device_ids=[rank])
n_epochs = 5
# this shows iteration over multiple training epochs
for n in range(n_epochs):
# this will be replaced by a DataLoader with a DistributedSampler
n_batches = 10
for i in range(n_batches):
# simulate a classification problem
preds = torch.randn(10, 5).softmax(dim=-1)
target = torch.randint(5, (10,))
# metric on current batch
acc = metric(preds, target)
if rank == 0: # print only for rank 0
print(f"Accuracy on batch {i}: {acc}")
# metric on all batches and all accelerators using custom accumulation
# accuracy is same across both accelerators
acc = metric.compute()
print(f"Accuracy on all data: {acc}, accelerator rank: {rank}")
# Resetting internal state such that metric ready for new data
metric.reset()
# cleanup
dist.destroy_process_group()
if __name__ == "__main__":
world_size = 2 # number of gpus to parallelize over
mp.spawn(metric_ddp, args=(world_size,), nprocs=world_size, join=True)
```
</details>
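Module metrics also support the "Metric arithmetic" mentioned above: metric objects can be combined with standard Python operators into a new, composed metric. The snippet below is a small sketch of that idea; the specific metrics and the simple sum chosen here are only illustrative.

```python
import torch
from torchmetrics.classification import MulticlassPrecision, MulticlassRecall

precision = MulticlassPrecision(num_classes=5)
recall = MulticlassRecall(num_classes=5)

# adding two metric objects builds a composed metric that updates both children
# and combines their results when `compute` is called
precision_plus_recall = precision + recall

preds = torch.randn(10, 5).softmax(dim=-1)
target = torch.randint(5, (10,))

precision_plus_recall.update(preds, target)
print(precision_plus_recall.compute())
```

According to the TorchMetrics documentation, most other Python arithmetic operators (e.g. `*`, `/`) compose metrics in the same way.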
### Implementing your own Module metric
Implementing your own metric is as easy as subclassing a [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). Simply subclass `torchmetrics.Metric`
and implement the `update` and `compute` methods:
```python
import torch
from torchmetrics import Metric
class MyAccuracy(Metric):
def __init__(self):
# remember to call super
super().__init__()
# call `self.add_state` for every internal state that is needed for the metric's computations
# dist_reduce_fx indicates the function that should be used to reduce
# state from multiple processes
self.add_state("correct", default=torch.tensor(0), dist_reduce_fx="sum")
self.add_state("total", default=torch.tensor(0), dist_reduce_fx="sum")
def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
# extract predicted class index for computing accuracy
preds = preds.argmax(dim=-1)
assert preds.shape == target.shape
# update metric states
self.correct += torch.sum(preds == target)
self.total += target.numel()
def compute(self) -> torch.Tensor:
# compute final result
return self.correct.float() / self.total
my_metric = MyAccuracy()
preds = torch.randn(10, 5).softmax(dim=-1)
target = torch.randint(5, (10,))
print(my_metric(preds, target))
```
### Functional metrics
Similar to [`torch.nn`](https://pytorch.org/docs/stable/nn.html), most metrics have both a [module-based](https://lightning.ai/docs/torchmetrics/stable/references/metric.html) and a functional version.
The functional versions are simple Python functions that take [torch.tensors](https://pytorch.org/docs/stable/tensors.html) as input and return the corresponding metric as a [torch.tensor](https://pytorch.org/docs/stable/tensors.html).
```python
import torch
# import our library
import torchmetrics
# simulate a classification problem
preds = torch.randn(10, 5).softmax(dim=-1)
target = torch.randint(5, (10,))
acc = torchmetrics.functional.classification.multiclass_accuracy(
preds, target, num_classes=5
)
```
### Covered domains and example metrics
In total TorchMetrics contains [100+ metrics](https://lightning.ai/docs/torchmetrics/stable/all-metrics.html), which
cover the following domains:
- Audio
- Classification
- Detection
- Information Retrieval
- Image
- Multimodal (Image-Text)
- Nominal
- Regression
- Text
Each domain may require some additional dependencies which can be installed with `pip install torchmetrics[audio]`,
`pip install torchmetrics['image']` etc.
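Regardless of the domain, the metrics follow the same `update`/`compute` API. As a small sketch, text metrics operate directly on strings (the example strings below are made up for illustration):

```python
from torchmetrics.text import CharErrorRate, WordErrorRate

preds = ["hello world", "torch metrics"]
target = ["hello world", "torchmetrics"]

cer = CharErrorRate()
wer = WordErrorRate()

print(cer(preds, target))  # fraction of character-level edits needed
print(wer(preds, target))  # fraction of word-level edits needed
```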
### Additional features
#### Plotting
Visualization of metrics can be important to help understand what is going on with your machine learning algorithms.
TorchMetrics has built-in plotting support (install dependencies with `pip install torchmetrics[visual]`) for nearly
all modular metrics through the `.plot` method. Simply call the method to get a simple visualization of any metric!
```python
import torch
from torchmetrics.classification import MulticlassAccuracy, MulticlassConfusionMatrix
num_classes = 3
# this will generate two distributions that become more similar as iterations increase
w = torch.randn(num_classes)
target = lambda it: torch.multinomial((it * w).softmax(dim=-1), 100, replacement=True)
preds = lambda it: torch.multinomial((it * w).softmax(dim=-1), 100, replacement=True)
acc = MulticlassAccuracy(num_classes=num_classes, average="micro")
acc_per_class = MulticlassAccuracy(num_classes=num_classes, average=None)
confmat = MulticlassConfusionMatrix(num_classes=num_classes)
# plot single value
for i in range(5):
acc_per_class.update(preds(i), target(i))
confmat.update(preds(i), target(i))
fig1, ax1 = acc_per_class.plot()
fig2, ax2 = confmat.plot()
# plot multiple values
values = []
for i in range(10):
values.append(acc(preds(i), target(i)))
fig3, ax3 = acc.plot(values)
```
<p align="center">
<img src="docs/source/_static/images/plot_example.png" width="1000">
</p>
For examples of plotting different metrics try running [this example file](examples/plotting.py).
## Contribute!
The lightning + TorchMetrics team is hard at work adding even more metrics.
But we're looking for incredible contributors like you to submit new metrics
and improve existing ones!
Join our [Slack](https://www.pytorchlightning.ai/community) to get help with becoming a contributor!
## Community
For help or questions, join our huge community on [Slack](https://www.pytorchlightning.ai/community)!
## Citation
We’re excited to continue the strong legacy of open source software and have been inspired
over the years by Caffe, Theano, Keras, PyTorch, torchbearer, ignite, sklearn and fast.ai.
If you want to cite this framework feel free to use GitHub's built-in citation option to generate a bibtex or APA-Style citation based on [this file](https://github.com/Lightning-AI/torchmetrics/blob/master/CITATION.cff) (but only if you loved it 😊).
## License
Please observe the Apache 2.0 license that is listed in this repository.
In addition, the Lightning framework is Patent Pending.
public_repos/torchmetrics/LICENSE
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2020-2022 Lightning-AI team
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
public_repos/torchmetrics/CHANGELOG.md
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
**Note: we move fast, but we still preserve backward compatibility for one feature release (0.1 version) back.**
## [UnReleased] - 2023-MM-DD
### Added
- Added more tokenizers for `SacreBLEU` metric ([#2068](https://github.com/Lightning-AI/torchmetrics/pull/2068))
- Added `average` argument to multiclass versions of `PrecisionRecallCurve` and `ROC` ([#2084](https://github.com/Lightning-AI/torchmetrics/pull/2084))
- Added error if `NoTrainInceptionV3` is being initialized without `torch-fidelity` being installed ([#2143](https://github.com/Lightning-AI/torchmetrics/pull/2143))
- Added support for Pytorch v2.1 ([#2142](https://github.com/Lightning-AI/torchmetrics/pull/2142))
- Added support for logging `MultiTaskWrapper` directly with lightning's `log_dict` method ([#2213](https://github.com/Lightning-AI/torchmetrics/pull/2213))
### Changed
- Change default state of `SpectralAngleMapper` and `UniversalImageQualityIndex` to be tensors ([#2089](https://github.com/Lightning-AI/torchmetrics/pull/2089))
- Changed minimum supported Pytorch version from 1.8 to 1.10 ([#2145](https://github.com/Lightning-AI/torchmetrics/pull/2145))
- Changed x-/y-axis order for `PrecisionRecallCurve` to be consistent with scikit-learn ([#2183](https://github.com/Lightning-AI/torchmetrics/pull/2183))
### Deprecated
- Deprecated `metric._update_called` ([#2141](https://github.com/Lightning-AI/torchmetrics/pull/2141))
### Removed
-
### Fixed
- Fixed numerical stability bug in `LearnedPerceptualImagePatchSimilarity` metric ([#2144](https://github.com/Lightning-AI/torchmetrics/pull/2144))
## [1.2.0] - 2023-09-22
### Added
- Added metric to cluster package:
- `MutualInformationScore` ([#2008](https://github.com/Lightning-AI/torchmetrics/pull/2008))
- `RandScore` ([#2025](https://github.com/Lightning-AI/torchmetrics/pull/2025))
- `NormalizedMutualInfoScore` ([#2029](https://github.com/Lightning-AI/torchmetrics/pull/2029))
- `AdjustedRandScore` ([#2032](https://github.com/Lightning-AI/torchmetrics/pull/2032))
- `CalinskiHarabaszScore` ([#2036](https://github.com/Lightning-AI/torchmetrics/pull/2036))
- `DunnIndex` ([#2049](https://github.com/Lightning-AI/torchmetrics/pull/2049))
- `HomogeneityScore` ([#2053](https://github.com/Lightning-AI/torchmetrics/pull/2053))
- `CompletenessScore` ([#2053](https://github.com/Lightning-AI/torchmetrics/pull/2053))
- `VMeasureScore` ([#2053](https://github.com/Lightning-AI/torchmetrics/pull/2053))
- `FowlkesMallowsIndex` ([#2066](https://github.com/Lightning-AI/torchmetrics/pull/2066))
- `AdjustedMutualInfoScore` ([#2058](https://github.com/Lightning-AI/torchmetrics/pull/2058))
- `DaviesBouldinScore` ([#2071](https://github.com/Lightning-AI/torchmetrics/pull/2071))
- Added `backend` argument to `MeanAveragePrecision` ([#2034](https://github.com/Lightning-AI/torchmetrics/pull/2034))
## [1.1.2] - 2023-09-11
### Fixed
- Fixed tie breaking in ndcg metric ([#2031](https://github.com/Lightning-AI/torchmetrics/pull/2031))
- Fixed bug in `BootStrapper` when very few samples were evaluated that could lead to crash ([#2052](https://github.com/Lightning-AI/torchmetrics/pull/2052))
- Fixed bug when creating multiple plots that lead to not all plots being shown ([#2060](https://github.com/Lightning-AI/torchmetrics/pull/2060))
- Fixed performance issues in `RecallAtFixedPrecision` for large batch sizes ([#2042](https://github.com/Lightning-AI/torchmetrics/pull/2042))
- Fixed bug related to `MetricCollection` used with custom metrics that have `prefix`/`postfix` attributes ([#2070](https://github.com/Lightning-AI/torchmetrics/pull/2070))
## [1.1.1] - 2023-08-29
### Added
- Added `average` argument to `MeanAveragePrecision` ([#2018](https://github.com/Lightning-AI/torchmetrics/pull/2018))
### Fixed
- Fixed bug in `PearsonCorrCoef` when it is updated on single samples at a time ([#2019](https://github.com/Lightning-AI/torchmetrics/pull/2019))
- Fixed support for pixel-wise MSE ([#2017](https://github.com/Lightning-AI/torchmetrics/pull/2017))
- Fixed bug in `MetricCollection` when used with multiple metrics that return dicts with same keys ([#2027](https://github.com/Lightning-AI/torchmetrics/pull/2027))
- Fixed bug in detection intersection metrics when `class_metrics=True` resulting in wrong values ([#1924](https://github.com/Lightning-AI/torchmetrics/pull/1924))
- Fixed missing attributes `higher_is_better`, `is_differentiable` for some metrics ([#2028](https://github.com/Lightning-AI/torchmetrics/pull/2028))
## [1.1.0] - 2023-08-22
### Added
- Added source aggregated signal-to-distortion ratio (SA-SDR) metric ([#1882](https://github.com/Lightning-AI/torchmetrics/pull/1882))
- Added `VisualInformationFidelity` to image package ([#1830](https://github.com/Lightning-AI/torchmetrics/pull/1830))
- Added `EditDistance` to text package ([#1906](https://github.com/Lightning-AI/torchmetrics/pull/1906))
- Added `top_k` argument to `RetrievalMRR` in retrieval package ([#1961](https://github.com/Lightning-AI/torchmetrics/pull/1961))
- Added support for evaluating `"segm"` and `"bbox"` detection in `MeanAveragePrecision` at the same time ([#1928](https://github.com/Lightning-AI/torchmetrics/pull/1928))
- Added `PerceptualPathLength` to image package ([#1939](https://github.com/Lightning-AI/torchmetrics/pull/1939))
- Added support for multioutput evaluation in `MeanSquaredError` ([#1937](https://github.com/Lightning-AI/torchmetrics/pull/1937))
- Added argument `extended_summary` to `MeanAveragePrecision` such that precision, recall, iou can be easily returned ([#1983](https://github.com/Lightning-AI/torchmetrics/pull/1983))
- Added warning to `ClipScore` if long captions are detected and truncated ([#2001](https://github.com/Lightning-AI/torchmetrics/pull/2001))
- Added `CLIPImageQualityAssessment` to multimodal package ([#1931](https://github.com/Lightning-AI/torchmetrics/pull/1931))
- Added new property `metric_state` to all metrics for users to investigate currently stored tensors in memory ([#2006](https://github.com/Lightning-AI/torchmetrics/pull/2006))
## [1.0.3] - 2023-08-08
### Added
- Added warning to `MeanAveragePrecision` if too many detections are observed ([#1978](https://github.com/Lightning-AI/torchmetrics/pull/1978))
### Fixed
- Fixed support for int input when `multidim_average="samplewise"` in classification metrics ([#1977](https://github.com/Lightning-AI/torchmetrics/pull/1977))
- Fixed x/y labels when plotting confusion matrices ([#1976](https://github.com/Lightning-AI/torchmetrics/pull/1976))
- Fixed IOU compute in cuda ([#1982](https://github.com/Lightning-AI/torchmetrics/pull/1982))
## [1.0.2] - 2023-08-02
### Added
- Added warning to `PearsonCorrCoeff` if input has a very small variance for its given dtype ([#1926](https://github.com/Lightning-AI/torchmetrics/pull/1926))
### Changed
- Changed all non-task specific classification metrics to be true subtypes of `Metric` ([#1963](https://github.com/Lightning-AI/torchmetrics/pull/1963))
### Fixed
- Fixed bug in `CalibrationError` where calculations for double precision input were performed in float precision ([#1919](https://github.com/Lightning-AI/torchmetrics/pull/1919))
- Fixed bug related to the `prefix/postfix` arguments in `MetricCollection` and `ClasswiseWrapper` being duplicated ([#1918](https://github.com/Lightning-AI/torchmetrics/pull/1918))
- Fixed missing AUC score when plotting classification metrics that support the `score` argument ([#1948](https://github.com/Lightning-AI/torchmetrics/pull/1948))
## [1.0.1] - 2023-07-13
### Fixed
- Fixed corner case when using `MetricCollection` together with aggregation metrics ([#1896](https://github.com/Lightning-AI/torchmetrics/pull/1896))
- Fixed the use of `max_fpr` in `AUROC` metric when only one class is present ([#1895](https://github.com/Lightning-AI/torchmetrics/pull/1895))
- Fixed bug related to empty predictions for `IntersectionOverUnion` metric ([#1892](https://github.com/Lightning-AI/torchmetrics/pull/1892))
- Fixed bug related to `MeanMetric` and broadcasting of weights when Nans are present ([#1898](https://github.com/Lightning-AI/torchmetrics/pull/1898))
- Fixed bug related to expected input format of pycoco in `MeanAveragePrecision` ([#1913](https://github.com/Lightning-AI/torchmetrics/pull/1913))
## [1.0.0] - 2023-07-04
### Added
- Added `prefix` and `postfix` arguments to `ClasswiseWrapper` ([#1866](https://github.com/Lightning-AI/torchmetrics/pull/1866))
- Added speech-to-reverberation modulation energy ratio (SRMR) metric ([#1792](https://github.com/Lightning-AI/torchmetrics/pull/1792), [#1872](https://github.com/Lightning-AI/torchmetrics/pull/1872))
- Added new global arg `compute_with_cache` to control caching behaviour after `compute` method ([#1754](https://github.com/Lightning-AI/torchmetrics/pull/1754))
- Added `ComplexScaleInvariantSignalNoiseRatio` for audio package ([#1785](https://github.com/Lightning-AI/torchmetrics/pull/1785))
- Added `Running` wrapper for calculating running statistics; see the sketch after this list ([#1752](https://github.com/Lightning-AI/torchmetrics/pull/1752))
- Added `RelativeAverageSpectralError` and `RootMeanSquaredErrorUsingSlidingWindow` to image package ([#816](https://github.com/PyTorchLightning/metrics/pull/816))
- Added support for `SpecificityAtSensitivity` Metric ([#1432](https://github.com/Lightning-AI/metrics/pull/1432))
- Added support for plotting of metrics through `.plot()` method (
[#1328](https://github.com/Lightning-AI/metrics/pull/1328),
[#1481](https://github.com/Lightning-AI/metrics/pull/1481),
[#1480](https://github.com/Lightning-AI/metrics/pull/1480),
[#1490](https://github.com/Lightning-AI/metrics/pull/1490),
[#1581](https://github.com/Lightning-AI/metrics/pull/1581),
[#1585](https://github.com/Lightning-AI/metrics/pull/1585),
[#1593](https://github.com/Lightning-AI/metrics/pull/1593),
[#1600](https://github.com/Lightning-AI/metrics/pull/1600),
[#1605](https://github.com/Lightning-AI/metrics/pull/1605),
[#1610](https://github.com/Lightning-AI/metrics/pull/1610),
[#1609](https://github.com/Lightning-AI/metrics/pull/1609),
[#1621](https://github.com/Lightning-AI/metrics/pull/1621),
[#1624](https://github.com/Lightning-AI/metrics/pull/1624),
[#1623](https://github.com/Lightning-AI/metrics/pull/1623),
[#1638](https://github.com/Lightning-AI/metrics/pull/1638),
[#1631](https://github.com/Lightning-AI/metrics/pull/1631),
[#1650](https://github.com/Lightning-AI/metrics/pull/1650),
[#1639](https://github.com/Lightning-AI/metrics/pull/1639),
[#1660](https://github.com/Lightning-AI/metrics/pull/1660),
[#1682](https://github.com/Lightning-AI/torchmetrics/pull/1682),
[#1786](https://github.com/Lightning-AI/torchmetrics/pull/1786),
)
- Added support for plotting of audio metrics through `.plot()` method ([#1434](https://github.com/Lightning-AI/metrics/pull/1434))
- Added `classes` to output from `MAP` metric ([#1419](https://github.com/Lightning-AI/metrics/pull/1419))
- Added Binary group fairness metrics to classification package ([#1404](https://github.com/Lightning-AI/metrics/pull/1404))
- Added `MinkowskiDistance` to regression package ([#1362](https://github.com/Lightning-AI/metrics/pull/1362))
- Added `pairwise_minkowski_distance` to pairwise package ([#1362](https://github.com/Lightning-AI/metrics/pull/1362))
- Added new detection metric `PanopticQuality` (
[#929](https://github.com/PyTorchLightning/metrics/pull/929),
[#1527](https://github.com/PyTorchLightning/metrics/pull/1527),
)
- Added `PSNRB` metric ([#1421](https://github.com/Lightning-AI/metrics/pull/1421))
- Added `ClassificationTask` Enum and use in metrics ([#1479](https://github.com/Lightning-AI/metrics/pull/1479))
- Added `ignore_index` option to `exact_match` metric ([#1540](https://github.com/Lightning-AI/metrics/pull/1540))
- Added parameter `top_k` to `RetrievalMAP` ([#1501](https://github.com/Lightning-AI/metrics/pull/1501))
- Added support for deterministic evaluation on GPU for metrics that use the `torch.cumsum` operator ([#1499](https://github.com/Lightning-AI/metrics/pull/1499))
- Added support for plotting of aggregation metrics through `.plot()` method ([#1485](https://github.com/Lightning-AI/metrics/pull/1485))
- Added support for python 3.11 ([#1612](https://github.com/Lightning-AI/metrics/pull/1612))
- Added support for auto clamping of input for metrics that use the `data_range` argument ([#1606](https://github.com/Lightning-AI/metrics/pull/1606))
- Added `ModifiedPanopticQuality` metric to detection package ([#1627](https://github.com/Lightning-AI/metrics/pull/1627))
- Added `PrecisionAtFixedRecall` metric to classification package ([#1683](https://github.com/Lightning-AI/torchmetrics/pull/1683))
- Added multiple metrics to detection package ([#1284](https://github.com/Lightning-AI/metrics/pull/1284))
* `IntersectionOverUnion`
* `GeneralizedIntersectionOverUnion`
* `CompleteIntersectionOverUnion`
* `DistanceIntersectionOverUnion`
- Added `MultitaskWrapper` to wrapper package ([#1762](https://github.com/Lightning-AI/torchmetrics/pull/1762))
- Added `RelativeSquaredError` metric to regression package ([#1765](https://github.com/Lightning-AI/torchmetrics/pull/1765))
- Added `MemorizationInformedFrechetInceptionDistance` metric to image package ([#1580](https://github.com/Lightning-AI/torchmetrics/pull/1580))
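A minimal sketch of the `Running` wrapper referenced above, assuming the default behaviour of computing the statistic over the last `window` update calls:

```python
import torch
from torchmetrics.aggregation import MeanMetric
from torchmetrics.wrappers import Running

# running mean computed only over the last 3 calls to `update`
running_mean = Running(MeanMetric(), window=3)
for value in (1.0, 2.0, 3.0, 4.0):
    running_mean.update(torch.tensor(value))
print(running_mean.compute())  # mean of the last window (2, 3, 4) -> tensor(3.)
```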
### Changed
- Changed `permutation_invariant_training` to allow using a `'permutation-wise'` metric function ([#1794](https://github.com/Lightning-AI/metrics/pull/1794))
- Changed `update_count` and `update_called` from private to public methods ([#1370](https://github.com/Lightning-AI/metrics/pull/1370))
- Raise exception for invalid kwargs in Metric base class ([#1427](https://github.com/Lightning-AI/metrics/pull/1427))
- Extended `EnumStr` to raise `ValueError` for invalid values ([#1479](https://github.com/Lightning-AI/metrics/pull/1479))
- Improved speed and memory consumption of binned `PrecisionRecallCurve` with a large number of samples ([#1493](https://github.com/Lightning-AI/metrics/pull/1493))
- Changed `__iter__` method from raising `NotImplementedError` to `TypeError` by setting to `None` ([#1538](https://github.com/Lightning-AI/metrics/pull/1538))
- `FID` metric will now raise an error if too few samples are provided ([#1655](https://github.com/Lightning-AI/metrics/pull/1655))
- Allowed FID with `torch.float64` ([#1628](https://github.com/Lightning-AI/metrics/pull/1628))
- Changed `LPIPS` implementation to no longer rely on a third-party package ([#1575](https://github.com/Lightning-AI/metrics/pull/1575))
- Changed FID matrix square root calculation from `scipy` to `torch` ([#1708](https://github.com/Lightning-AI/torchmetrics/pull/1708))
- Changed calculation in `PearsonCorrCoef` to be more robust in certain cases ([#1729](https://github.com/Lightning-AI/torchmetrics/pull/1729))
- Changed `MeanAveragePrecision` to `pycocotools` backend ([#1832](https://github.com/Lightning-AI/torchmetrics/pull/1832))
### Deprecated
- Deprecated domain metrics import from package root (
[#1685](https://github.com/Lightning-AI/metrics/pull/1685),
[#1694](https://github.com/Lightning-AI/metrics/pull/1694),
[#1696](https://github.com/Lightning-AI/metrics/pull/1696),
[#1699](https://github.com/Lightning-AI/metrics/pull/1699),
[#1703](https://github.com/Lightning-AI/metrics/pull/1703),
)
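For illustration, the deprecation amounts to preferring imports from the domain subpackages over imports from the package root, roughly as follows (shown with an image metric as the example):

```python
# deprecated: importing a domain metric from the package root
# from torchmetrics import PeakSignalNoiseRatio

# preferred: import from the corresponding domain subpackage
from torchmetrics.image import PeakSignalNoiseRatio
```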
### Removed
- Support for python 3.7 ([#1640](https://github.com/Lightning-AI/metrics/pull/1640))
### Fixed
- Fixed support in `MetricTracker` for `MultioutputWrapper` and nested structures ([#1608](https://github.com/Lightning-AI/metrics/pull/1608))
- Fixed restrictive check in `PearsonCorrCoef` ([#1649](https://github.com/Lightning-AI/metrics/pull/1649))
- Fixed integration with `jsonargparse` and `LightningCLI` ([#1651](https://github.com/Lightning-AI/metrics/pull/1651))
- Fixed corner case in calibration error for zero confidence input ([#1648](https://github.com/Lightning-AI/metrics/pull/1648))
- Fixed precision-recall curve based computations for float target ([#1642](https://github.com/Lightning-AI/metrics/pull/1642))
- Fixed missing kwarg squeeze in `MultiOutputWrapper` ([#1675](https://github.com/Lightning-AI/torchmetrics/pull/1675))
- Fixed padding removal for 3d input in `MSSSIM` ([#1674](https://github.com/Lightning-AI/torchmetrics/pull/1674))
- Fixed `max_det_threshold` in MAP detection ([#1712](https://github.com/Lightning-AI/torchmetrics/pull/1712))
- Fixed states being saved in metrics that use `register_buffer` ([#1728](https://github.com/Lightning-AI/torchmetrics/pull/1728))
- Fixed states not being correctly synced and device transferred in `MeanAveragePrecision` for `iou_type="segm"` ([#1763](https://github.com/Lightning-AI/torchmetrics/pull/1763))
- Fixed use of `prefix` and `postfix` in nested `MetricCollection` ([#1773](https://github.com/Lightning-AI/torchmetrics/pull/1773))
- Fixed `ax` plotting logging in `MetricCollection` ([#1783](https://github.com/Lightning-AI/torchmetrics/pull/1783))
- Fixed lookup for punkt sources being downloaded in `RougeScore` ([#1789](https://github.com/Lightning-AI/torchmetrics/pull/1789))
- Fixed integration with lightning for `CompositionalMetric` ([#1761](https://github.com/Lightning-AI/torchmetrics/pull/1761))
- Fixed several bugs in `SpectralDistortionIndex` metric ([#1808](https://github.com/Lightning-AI/torchmetrics/pull/1808))
- Fixed bug for corner cases in `MatthewsCorrCoef` (
[#1812](https://github.com/Lightning-AI/torchmetrics/pull/1812),
[#1863](https://github.com/Lightning-AI/torchmetrics/pull/1863)
)
- Fixed support for half precision in `PearsonCorrCoef` ([#1819](https://github.com/Lightning-AI/torchmetrics/pull/1819))
- Fixed a number of bugs related to `average="macro"` in classification metrics ([#1821](https://github.com/Lightning-AI/torchmetrics/pull/1821))
- Fixed off-by-one issue when `ignore_index = num_classes + 1` in Multiclass-jaccard ([#1860](https://github.com/Lightning-AI/torchmetrics/pull/1860))
## [0.11.4] - 2023-03-10
### Fixed
- Fixed evaluation of `R2Score` with near constant target ([#1576](https://github.com/Lightning-AI/metrics/pull/1576))
- Fixed dtype conversion when metric is submodule ([#1583](https://github.com/Lightning-AI/metrics/pull/1583))
- Fixed bug related to `top_k>1` and `ignore_index!=None` in `StatScores` based metrics ([#1589](https://github.com/Lightning-AI/metrics/pull/1589))
- Fixed corner case for `PearsonCorrCoef` when running in ddp mode but only on single device ([#1587](https://github.com/Lightning-AI/metrics/pull/1587))
- Fixed overflow error for specific cases in `MAP` when big areas are calculated ([#1607](https://github.com/Lightning-AI/metrics/pull/1607))
## [0.11.3] - 2023-02-28
### Fixed
- Fixed classification metrics for `byte` input ([#1521](https://github.com/Lightning-AI/metrics/pull/1474))
- Fixed the use of `ignore_index` in `MulticlassJaccardIndex` ([#1386](https://github.com/Lightning-AI/metrics/pull/1386))
## [0.11.2] - 2023-02-21
### Fixed
- Fixed compatibility with XLA in the `_bincount` function ([#1471](https://github.com/Lightning-AI/metrics/pull/1471))
- Fixed type hints in methods belonging to `MetricTracker` wrapper ([#1472](https://github.com/Lightning-AI/metrics/pull/1472))
- Fixed `multilabel` in `ExactMatch` ([#1474](https://github.com/Lightning-AI/metrics/pull/1474))
## [0.11.1] - 2023-01-30
### Fixed
- Fixed type checking on the `maximize` parameter at the initialization of `MetricTracker` ([#1428](https://github.com/Lightning-AI/metrics/issues/1428))
- Fixed mixed precision autocast for `SSIM` metric ([#1454](https://github.com/Lightning-AI/metrics/pull/1454))
- Fixed checking for `nltk.punkt` in `RougeScore` if a machine is not online ([#1456](https://github.com/Lightning-AI/metrics/pull/1456))
- Fixed wrongly reset method in `MultioutputWrapper` ([#1460](https://github.com/Lightning-AI/metrics/issues/1460))
- Fixed dtype checking in `PrecisionRecallCurve` for `target` tensor ([#1457](https://github.com/Lightning-AI/metrics/pull/1457))
## [0.11.0] - 2022-11-30
### Added
- Added `MulticlassExactMatch` to classification metrics ([#1343](https://github.com/Lightning-AI/metrics/pull/1343))
- Added `TotalVariation` to image package ([#978](https://github.com/Lightning-AI/metrics/pull/978))
- Added `CLIPScore` to new multimodal package ([#1314](https://github.com/Lightning-AI/metrics/pull/1314))
- Added regression metrics:
* `KendallRankCorrCoef` ([#1271](https://github.com/Lightning-AI/metrics/pull/1271))
* `LogCoshError` ([#1316](https://github.com/Lightning-AI/metrics/pull/1316))
- Added new nominal metrics:
* `CramersV` ([#1298](https://github.com/Lightning-AI/metrics/pull/1298))
* `PearsonsContingencyCoefficient` ([#1334](https://github.com/Lightning-AI/metrics/pull/1334))
* `TschuprowsT` ([#1334](https://github.com/Lightning-AI/metrics/pull/1334))
* `TheilsU` ([#1337](https://github.com/Lightning-AI/metrics/pull/1334))
- Added option to pass `distributed_available_fn` to metrics to allow checks for custom communication backend for making `dist_sync_fn` actually useful ([#1301](https://github.com/Lightning-AI/metrics/pull/1301))
- Added `normalize` argument to `Inception`, `FID`, `KID` metrics ([#1246](https://github.com/Lightning-AI/metrics/pull/1246))
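A small sketch of the new `normalize` argument, shown for `FrechetInceptionDistance` and assuming the optional `torch-fidelity` dependency is installed; with `normalize=True` the metric accepts float images in `[0, 1]` instead of `uint8` images in `[0, 255]`:

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=64, normalize=True)
# float images in [0, 1] are accepted directly when normalize=True
real = torch.rand(8, 3, 299, 299)
fake = torch.rand(8, 3, 299, 299)
fid.update(real, real=True)
fid.update(fake, real=False)
print(fid.compute())
```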
### Changed
- Changed minimum Pytorch version to be 1.8 ([#1263](https://github.com/Lightning-AI/metrics/pull/1263))
- Changed interface for all functional and modular classification metrics after refactor ([#1252](https://github.com/Lightning-AI/metrics/pull/1252))
### Removed
- Removed deprecated `BinnedAveragePrecision`, `BinnedPrecisionRecallCurve`, `RecallAtFixedPrecision` ([#1251](https://github.com/Lightning-AI/metrics/pull/1251))
- Removed deprecated `LabelRankingAveragePrecision`, `LabelRankingLoss` and `CoverageError` ([#1251](https://github.com/Lightning-AI/metrics/pull/1251))
- Removed deprecated `KLDivergence` and `AUC` ([#1251](https://github.com/Lightning-AI/metrics/pull/1251))
### Fixed
- Fixed precision bug in `pairwise_euclidean_distance` ([#1352](https://github.com/Lightning-AI/metrics/pull/1352))
## [0.10.3] - 2022-11-16
### Fixed
- Fixed bug in `MetricTracker.best_metric` when `return_step=False` ([#1306](https://github.com/Lightning-AI/metrics/pull/1306))
- Fixed bug to prevent users from going into an infinite loop if trying to iterate over a single metric ([#1320](https://github.com/Lightning-AI/metrics/pull/1320))
## [0.10.2] - 2022-10-31
### Changed
- Changed in-place operation to out-of-place operation in `pairwise_cosine_similarity` ([#1288](https://github.com/Lightning-AI/metrics/pull/1288))
### Fixed
- Fixed high memory usage for certain classification metrics when `average='micro'` ([#1286](https://github.com/Lightning-AI/metrics/pull/1286))
- Fixed precision problems when `structural_similarity_index_measure` was used with autocast ([#1291](https://github.com/Lightning-AI/metrics/pull/1291))
- Fixed slow performance for confusion matrix based metrics ([#1302](https://github.com/Lightning-AI/metrics/pull/1302))
- Fixed restrictive dtype checking in `spearman_corrcoef` when used with autocast ([#1303](https://github.com/Lightning-AI/metrics/pull/1303))
## [0.10.1] - 2022-10-21
### Fixed
- Fixed broken clone method for classification metrics ([#1250](https://github.com/Lightning-AI/metrics/pull/1250))
- Fixed unintentional downloading of `nltk.punkt` when `lsum` not in `rouge_keys` ([#1258](https://github.com/Lightning-AI/metrics/pull/1258))
- Fixed type casting in `MAP` metric between `bool` and `float32` ([#1150](https://github.com/Lightning-AI/metrics/pull/1150))
## [0.10.0] - 2022-10-04
### Added
- Added a new NLP metric `InfoLM` ([#915](https://github.com/Lightning-AI/metrics/pull/915))
- Added `Perplexity` metric ([#922](https://github.com/Lightning-AI/metrics/pull/922))
- Added `ConcordanceCorrCoef` metric to regression package ([#1201](https://github.com/Lightning-AI/metrics/pull/1201))
- Added argument `normalize` to `LPIPS` metric ([#1216](https://github.com/Lightning-AI/metrics/pull/1216))
- Added support for multiprocessing of batches in `PESQ` metric ([#1227](https://github.com/Lightning-AI/metrics/pull/1227))
- Added support for multioutput in `PearsonCorrCoef` and `SpearmanCorrCoef` ([#1200](https://github.com/Lightning-AI/metrics/pull/1200))
### Changed
- Classification refactor (
[#1054](https://github.com/Lightning-AI/metrics/pull/1054),
[#1143](https://github.com/Lightning-AI/metrics/pull/1143),
[#1145](https://github.com/Lightning-AI/metrics/pull/1145),
[#1151](https://github.com/Lightning-AI/metrics/pull/1151),
[#1159](https://github.com/Lightning-AI/metrics/pull/1159),
[#1163](https://github.com/Lightning-AI/metrics/pull/1163),
[#1167](https://github.com/Lightning-AI/metrics/pull/1167),
[#1175](https://github.com/Lightning-AI/metrics/pull/1175),
[#1189](https://github.com/Lightning-AI/metrics/pull/1189),
[#1197](https://github.com/Lightning-AI/metrics/pull/1197),
[#1215](https://github.com/Lightning-AI/metrics/pull/1215),
[#1195](https://github.com/Lightning-AI/metrics/pull/1195)
)
- Changed update in `FID` metric to be done in online fashion to save memory ([#1199](https://github.com/Lightning-AI/metrics/pull/1199))
- Improved performance of retrieval metrics ([#1242](https://github.com/Lightning-AI/metrics/pull/1242))
- Changed `SSIM` and `MSSSIM` update to be online to reduce memory usage ([#1231](https://github.com/Lightning-AI/metrics/pull/1231))
### Deprecated
- Deprecated `BinnedAveragePrecision`, `BinnedPrecisionRecallCurve`, `BinnedRecallAtFixedPrecision`; see the sketch after this section ([#1163](https://github.com/Lightning-AI/metrics/pull/1163))
* `BinnedAveragePrecision` -> use `AveragePrecision` with `thresholds` arg
    * `BinnedPrecisionRecallCurve` -> use `PrecisionRecallCurve` with `thresholds` arg
* `BinnedRecallAtFixedPrecision` -> use `RecallAtFixedPrecision` with `thresholds` arg
- Renamed and refactored `LabelRankingAveragePrecision`, `LabelRankingLoss` and `CoverageError` ([#1167](https://github.com/Lightning-AI/metrics/pull/1167))
* `LabelRankingAveragePrecision` -> `MultilabelRankingAveragePrecision`
* `LabelRankingLoss` -> `MultilabelRankingLoss`
* `CoverageError` -> `MultilabelCoverageError`
- Deprecated `KLDivergence` and `AUC` from classification package ([#1189](https://github.com/Lightning-AI/metrics/pull/1189))
* `KLDivergence` moved to `regression` package
    * Instead of `AUC` use `torchmetrics.utilities.compute.auc`
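A sketch of the `thresholds`-based replacement mentioned above, shown with the binary task class (the multiclass and multilabel variants accept the same argument):

```python
import torch
from torchmetrics.classification import BinaryAveragePrecision

# instead of the deprecated BinnedAveragePrecision, pass `thresholds` to the
# regular metric to get the memory-efficient binned computation
metric = BinaryAveragePrecision(thresholds=100)
preds = torch.rand(16)
target = torch.randint(0, 2, (16,))
print(metric(preds, target))
```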
### Fixed
- Fixed a bug in `ssim` when `return_full_image=True` where the score was still reduced ([#1204](https://github.com/Lightning-AI/metrics/pull/1204))
- Fixed MPS support for:
* MAE metric ([#1210](https://github.com/Lightning-AI/metrics/pull/1210))
* Jaccard index ([#1205](https://github.com/Lightning-AI/metrics/pull/1205))
- Fixed bug in `ClasswiseWrapper` such that `compute` gave wrong result ([#1225](https://github.com/Lightning-AI/metrics/pull/1225))
- Fixed synchronization of empty list states ([#1219](https://github.com/Lightning-AI/metrics/pull/1219))
## [0.9.3] - 2022-08-22
### Added
- Added global option `sync_on_compute` to disable automatic synchronization when `compute` is called ([#1107](https://github.com/Lightning-AI/metrics/pull/1107))
### Fixed
- Fixed missing reset in `ClasswiseWrapper` ([#1129](https://github.com/Lightning-AI/metrics/pull/1129))
- Fixed `JaccardIndex` multi-label compute ([#1125](https://github.com/Lightning-AI/metrics/pull/1125))
- Fixed `SSIM` to propagate device if `gaussian_kernel` is False, and added a test ([#1149](https://github.com/Lightning-AI/metrics/pull/1149))
## [0.9.2] - 2022-06-29
### Fixed
- Fixed mAP calculation for areas with 0 predictions ([#1080](https://github.com/Lightning-AI/metrics/pull/1080))
- Fixed bug where average precision state and AUROC state were not merged when using `MetricCollection` ([#1086](https://github.com/Lightning-AI/metrics/pull/1086))
- Skip box conversion if no boxes are present in `MeanAveragePrecision` ([#1097](https://github.com/Lightning-AI/metrics/pull/1097))
- Fixed inconsistency in docs and code when setting `average="none"` in `AveragePrecision` metric ([#1116](https://github.com/Lightning-AI/metrics/pull/1116))
## [0.9.1] - 2022-06-08
### Added
- Added specific `RuntimeError` when metric object is on the wrong device ([#1056](https://github.com/Lightning-AI/metrics/pull/1056))
- Added an option to specify own n-gram weights for `BLEUScore` and `SacreBLEUScore` instead of using uniform weights only. ([#1075](https://github.com/Lightning-AI/metrics/pull/1075))
### Fixed
- Fixed aggregation metrics when input only contains zero ([#1070](https://github.com/Lightning-AI/metrics/pull/1070))
- Fixed `TypeError` when providing superclass arguments as `kwargs` ([#1069](https://github.com/Lightning-AI/metrics/pull/1069))
- Fixed bug related to state reference in metric collection when using compute groups ([#1076](https://github.com/Lightning-AI/metrics/pull/1076))
## [0.9.0] - 2022-05-30
### Added
- Added `RetrievalPrecisionRecallCurve` and `RetrievalRecallAtFixedPrecision` to retrieval package ([#951](https://github.com/Lightning-AI/metrics/pull/951))
- Added class property `full_state_update` that determines whether `forward` should call `update` once or twice; see the sketch after this list (
[#984](https://github.com/Lightning-AI/metrics/pull/984),
[#1033](https://github.com/Lightning-AI/metrics/pull/1033))
- Added support for nested metric collections ([#1003](https://github.com/Lightning-AI/metrics/pull/1003))
- Added `Dice` to classification package ([#1021](https://github.com/Lightning-AI/metrics/pull/1021))
- Added support to segmentation type `segm` as IOU for mean average precision ([#822](https://github.com/Lightning-AI/metrics/pull/822))
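A minimal sketch of how the `full_state_update` property is typically set when subclassing `Metric` (a toy sum metric, not taken from the library itself):

```python
import torch
from torchmetrics import Metric

class MySum(Metric):
    # the state update does not depend on previously seen batches,
    # so `forward` only needs to call `update` once per batch
    full_state_update = False

    def __init__(self):
        super().__init__()
        self.add_state("total", default=torch.tensor(0.0), dist_reduce_fx="sum")

    def update(self, x: torch.Tensor) -> None:
        self.total += x.sum()

    def compute(self) -> torch.Tensor:
        return self.total
```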
### Changed
- Renamed `reduction` argument to `average` in Jaccard score and added additional options ([#874](https://github.com/Lightning-AI/metrics/pull/874))
### Removed
- Removed deprecated `compute_on_step` argument (
[#962](https://github.com/Lightning-AI/metrics/pull/962),
[#967](https://github.com/Lightning-AI/metrics/pull/967),
[#979](https://github.com/Lightning-AI/metrics/pull/979),
[#990](https://github.com/Lightning-AI/metrics/pull/990),
[#991](https://github.com/Lightning-AI/metrics/pull/991),
[#993](https://github.com/Lightning-AI/metrics/pull/993),
[#1005](https://github.com/Lightning-AI/metrics/pull/1005),
[#1004](https://github.com/Lightning-AI/metrics/pull/1004),
[#1007](https://github.com/Lightning-AI/metrics/pull/1007)
)
### Fixed
- Fixed non-empty state dict for a few metrics ([#1012](https://github.com/Lightning-AI/metrics/pull/1012))
- Fixed bug when comparing states while finding compute groups ([#1022](https://github.com/Lightning-AI/metrics/pull/1022))
- Fixed `torch.double` support in stat score metrics ([#1023](https://github.com/Lightning-AI/metrics/pull/1023))
- Fixed `FID` calculation for non-equal size real and fake input ([#1028](https://github.com/Lightning-AI/metrics/pull/1028))
- Fixed case where `KLDivergence` could output `Nan` ([#1030](https://github.com/Lightning-AI/metrics/pull/1030))
- Fixed deterministic for PyTorch<1.8 ([#1035](https://github.com/Lightning-AI/metrics/pull/1035))
- Fixed default value for `mdmc_average` in `Accuracy` ([#1036](https://github.com/Lightning-AI/metrics/pull/1036))
- Fixed missing copy of property when using compute groups in `MetricCollection` ([#1052](https://github.com/Lightning-AI/metrics/pull/1052))
## [0.8.2] - 2022-05-06
### Fixed
- Fixed multi device aggregation in `PearsonCorrCoef` ([#998](https://github.com/Lightning-AI/metrics/pull/998))
- Fixed MAP metric when using custom list of thresholds ([#995](https://github.com/Lightning-AI/metrics/pull/995))
- Fixed compatibility between compute groups in `MetricCollection` and prefix/postfix arg ([#1007](https://github.com/Lightning-AI/metrics/pull/1008))
- Fixed compatibility with future Pytorch 1.12 in `safe_matmul` ([#1011](https://github.com/Lightning-AI/metrics/pull/1011), [#1014](https://github.com/Lightning-AI/metrics/pull/1014))
## [0.8.1] - 2022-04-27
### Changed
- Reimplemented the `signal_distortion_ratio` metric, which removed the absolute requirement of `fast-bss-eval` ([#964](https://github.com/Lightning-AI/metrics/pull/964))
### Fixed
- Fixed "Sort currently does not support bool dtype on CUDA" error in MAP for empty preds ([#983](https://github.com/Lightning-AI/metrics/pull/983))
- Fixed `BinnedPrecisionRecallCurve` when `thresholds` argument is not provided ([#968](https://github.com/Lightning-AI/metrics/pull/968))
- Fixed `CalibrationError` to work on logit input ([#985](https://github.com/Lightning-AI/metrics/pull/985))
## [0.8.0] - 2022-04-14
### Added
- Added `WeightedMeanAbsolutePercentageError` to regression package ([#948](https://github.com/Lightning-AI/metrics/pull/948))
- Added new classification metrics:
* `CoverageError` ([#787](https://github.com/Lightning-AI/metrics/pull/787))
* `LabelRankingAveragePrecision` and `LabelRankingLoss` ([#787](https://github.com/Lightning-AI/metrics/pull/787))
- Added new image metrics:
* `SpectralAngleMapper` ([#885](https://github.com/Lightning-AI/metrics/pull/885))
* `ErrorRelativeGlobalDimensionlessSynthesis` ([#894](https://github.com/Lightning-AI/metrics/pull/894))
* `UniversalImageQualityIndex` ([#824](https://github.com/Lightning-AI/metrics/pull/824))
* `SpectralDistortionIndex` ([#873](https://github.com/Lightning-AI/metrics/pull/873))
- Added support for `MetricCollection` in `MetricTracker` ([#718](https://github.com/Lightning-AI/metrics/pull/718))
- Added support for 3D image and uniform kernel in `StructuralSimilarityIndexMeasure` ([#818](https://github.com/Lightning-AI/metrics/pull/818))
- Added smart update of `MetricCollection` ([#709](https://github.com/Lightning-AI/metrics/pull/709))
- Added `ClasswiseWrapper` for better logging of classification metrics with multiple output values ([#832](https://github.com/Lightning-AI/metrics/pull/832))
- Added `**kwargs` argument for passing additional arguments to base class ([#833](https://github.com/Lightning-AI/metrics/pull/833))
- Added negative `ignore_index` for the Accuracy metric ([#362](https://github.com/Lightning-AI/metrics/pull/362))
- Added `adaptive_k` for the `RetrievalPrecision` metric ([#910](https://github.com/Lightning-AI/metrics/pull/910))
- Added `reset_real_features` argument to image quality assessment metrics ([#722](https://github.com/Lightning-AI/metrics/pull/722))
- Added new keyword argument `compute_on_cpu` to all metrics ([#867](https://github.com/Lightning-AI/metrics/pull/867))
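A rough sketch of the new `compute_on_cpu` keyword, shown with a metric that keeps list states (the flag is forwarded to the base `Metric` via `**kwargs`):

```python
import torch
from torchmetrics.regression import SpearmanCorrCoef

# with compute_on_cpu=True the (potentially large) list states of preds and
# targets are moved to CPU after each update to save accelerator memory
metric = SpearmanCorrCoef(compute_on_cpu=True)
metric.update(torch.randn(100), torch.randn(100))
print(metric.compute())
```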
### Changed
- Made `num_classes` in `jaccard_index` a required argument ([#853](https://github.com/Lightning-AI/metrics/pull/853), [#914](https://github.com/Lightning-AI/metrics/pull/914))
- Added normalizer, tokenizer to ROUGE metric ([#838](https://github.com/Lightning-AI/metrics/pull/838))
- Improved shape checking of `permutation_invariant_training` ([#864](https://github.com/Lightning-AI/metrics/pull/864))
- Allowed reduction `None` ([#891](https://github.com/Lightning-AI/metrics/pull/891))
- `MetricTracker.best_metric` will now give a warning when computing on metrics that do not have a defined best value ([#913](https://github.com/Lightning-AI/metrics/pull/913))
### Deprecated
- Deprecated argument `compute_on_step` ([#792](https://github.com/Lightning-AI/metrics/pull/792))
- Deprecated passing in `dist_sync_on_step`, `process_group`, `dist_sync_fn` as direct arguments ([#833](https://github.com/Lightning-AI/metrics/pull/833))
### Removed
- Removed support for versions of [Pytorch-Lightning](https://github.com/Lightning-AI/lightning) lower than v1.5 ([#788](https://github.com/Lightning-AI/metrics/pull/788))
- Removed deprecated functions, and warnings in Text ([#773](https://github.com/Lightning-AI/metrics/pull/773))
* `WER` and `functional.wer`
- Removed deprecated functions and warnings in Image ([#796](https://github.com/Lightning-AI/metrics/pull/796))
* `SSIM` and `functional.ssim`
* `PSNR` and `functional.psnr`
- Removed deprecated functions, and warnings in classification and regression ([#806](https://github.com/Lightning-AI/metrics/pull/806))
* `FBeta` and `functional.fbeta`
* `F1` and `functional.f1`
* `Hinge` and `functional.hinge`
* `IoU` and `functional.iou`
* `MatthewsCorrcoef`
* `PearsonCorrcoef`
* `SpearmanCorrcoef`
- Removed deprecated functions, and warnings in detection and pairwise ([#804](https://github.com/Lightning-AI/metrics/pull/804))
* `MAP` and `functional.pairwise.manhatten`
- Removed deprecated functions, and warnings in Audio ([#805](https://github.com/Lightning-AI/metrics/pull/805))
* `PESQ` and `functional.audio.pesq`
* `PIT` and `functional.audio.pit`
* `SDR` and `functional.audio.sdr` and `functional.audio.si_sdr`
* `SNR` and `functional.audio.snr` and `functional.audio.si_snr`
* `STOI` and `functional.audio.stoi`
- Removed unused `get_num_classes` from `torchmetrics.utilities.data` ([#914](https://github.com/Lightning-AI/metrics/pull/914))
### Fixed
- Fixed device mismatch for `MAP` metric in specific cases ([#950](https://github.com/Lightning-AI/metrics/pull/950))
- Improved testing speed ([#820](https://github.com/Lightning-AI/metrics/pull/820))
- Fixed compatibility of `ClasswiseWrapper` with the `prefix` argument of `MetricCollection` ([#843](https://github.com/Lightning-AI/metrics/pull/843))
- Fixed `BestScore` on GPU ([#912](https://github.com/Lightning-AI/metrics/pull/912))
- Fixed Lsum computation for `ROUGEScore` ([#944](https://github.com/Lightning-AI/metrics/pull/944))
## [0.7.3] - 2022-03-23
### Fixed
- Fixed unsafe log operation in `TweedieDevianceScore` for power=1 ([#847](https://github.com/Lightning-AI/metrics/pull/847))
- Fixed bug in MAP metric related to either no ground truth or no predictions ([#884](https://github.com/Lightning-AI/metrics/pull/884))
- Fixed `ConfusionMatrix`, `AUROC` and `AveragePrecision` on GPU when running in deterministic mode ([#900](https://github.com/Lightning-AI/metrics/pull/900))
- Fixed NaN or Inf results returned by `signal_distortion_ratio` ([#899](https://github.com/Lightning-AI/metrics/pull/899))
- Fixed memory leak when using `update` method with tensor where `requires_grad=True` ([#902](https://github.com/Lightning-AI/metrics/pull/902))
## [0.7.2] - 2022-02-10
### Fixed
- Minor patches in JOSS paper.
## [0.7.1] - 2022-02-03
### Changed
- Used `torch.bucketize` in calibration error when `torch>1.8` for faster computations ([#769](https://github.com/Lightning-AI/metrics/pull/769))
- Improved mAP performance ([#742](https://github.com/Lightning-AI/metrics/pull/742))
### Fixed
- Fixed check for available modules ([#772](https://github.com/Lightning-AI/metrics/pull/772))
- Fixed Matthews correlation coefficient when the denominator is 0 ([#781](https://github.com/Lightning-AI/metrics/pull/781))
## [0.7.0] - 2022-01-17
### Added
- Added NLP metrics:
- `MatchErrorRate` ([#619](https://github.com/Lightning-AI/metrics/pull/619))
- `WordInfoLost` and `WordInfoPreserved` ([#630](https://github.com/Lightning-AI/metrics/pull/630))
- `SQuAD` ([#623](https://github.com/Lightning-AI/metrics/pull/623))
- `CHRFScore` ([#641](https://github.com/Lightning-AI/metrics/pull/641))
- `TranslationEditRate` ([#646](https://github.com/Lightning-AI/metrics/pull/646))
- `ExtendedEditDistance` ([#668](https://github.com/Lightning-AI/metrics/pull/668))
- Added `MultiScaleSSIM` into image metrics ([#679](https://github.com/Lightning-AI/metrics/pull/679))
- Added Signal to Distortion Ratio (`SDR`) to audio package ([#565](https://github.com/Lightning-AI/metrics/pull/565))
- Added `MinMaxMetric` to wrappers; see the sketch after this list ([#556](https://github.com/Lightning-AI/metrics/pull/556))
- Added `ignore_index` to retrieval metrics ([#676](https://github.com/Lightning-AI/metrics/pull/676))
- Added support for multi references in `ROUGEScore` ([#680](https://github.com/Lightning-AI/metrics/pull/680))
- Added a default VSCode devcontainer configuration ([#621](https://github.com/Lightning-AI/metrics/pull/621))
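A small sketch of the `MinMaxMetric` wrapper mentioned above, which tracks the minimum and maximum value the wrapped metric has reached across updates:

```python
import torch
from torchmetrics.aggregation import MeanMetric
from torchmetrics.wrappers import MinMaxMetric

minmax = MinMaxMetric(MeanMetric())
minmax.update(torch.tensor(2.0))
minmax.update(torch.tensor(4.0))
# returns a dict with the current ("raw"), minimum and maximum observed value
print(minmax.compute())
```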
### Changed
- Scalar metrics will now consistently have additional dimensions squeezed ([#622](https://github.com/Lightning-AI/metrics/pull/622))
- Metrics having third-party dependencies were removed from the global import ([#463](https://github.com/Lightning-AI/metrics/pull/463))
- Untokenized input for `BLEUScore` now stays consistent with all the other text metrics ([#640](https://github.com/Lightning-AI/metrics/pull/640))
- Arguments reordered for `TER`, `BLEUScore`, `SacreBLEUScore`, `CHRFScore` now expect input order as predictions first and target second ([#696](https://github.com/Lightning-AI/metrics/pull/696))
- Changed dtype of metric state from `torch.float` to `torch.long` in `ConfusionMatrix` to accommodate larger values ([#715](https://github.com/Lightning-AI/metrics/pull/715))
- Unified `preds`, `target` input argument naming across all text metrics ([#723](https://github.com/Lightning-AI/metrics/pull/723), [#727](https://github.com/Lightning-AI/metrics/pull/727))
* `bert`, `bleu`, `chrf`, `sacre_bleu`, `wip`, `wil`, `cer`, `ter`, `wer`, `mer`, `rouge`, `squad`
### Deprecated
- Renamed IoU -> Jaccard Index ([#662](https://github.com/Lightning-AI/metrics/pull/662))
- Renamed text WER metric ([#714](https://github.com/Lightning-AI/metrics/pull/714))
* `functional.wer` -> `functional.word_error_rate`
* `WER` -> `WordErrorRate`
- Renamed correlation coefficient classes: ([#710](https://github.com/Lightning-AI/metrics/pull/710))
* `MatthewsCorrcoef` -> `MatthewsCorrCoef`
* `PearsonCorrcoef` -> `PearsonCorrCoef`
* `SpearmanCorrcoef` -> `SpearmanCorrCoef`
- Renamed audio STOI metric: ([#753](https://github.com/Lightning-AI/metrics/pull/753), [#758](https://github.com/Lightning-AI/metrics/pull/758))
* `audio.STOI` to `audio.ShortTimeObjectiveIntelligibility`
* `functional.audio.stoi` to `functional.audio.short_time_objective_intelligibility`
- Renamed audio PESQ metrics: ([#751](https://github.com/Lightning-AI/metrics/pull/751))
* `functional.audio.pesq` -> `functional.audio.perceptual_evaluation_speech_quality`
* `audio.PESQ` -> `audio.PerceptualEvaluationSpeechQuality`
- Renamed audio SDR metrics: ([#711](https://github.com/Lightning-AI/metrics/pull/711))
* `functional.sdr` -> `functional.signal_distortion_ratio`
* `functional.si_sdr` -> `functional.scale_invariant_signal_distortion_ratio`
* `SDR` -> `SignalDistortionRatio`
* `SI_SDR` -> `ScaleInvariantSignalDistortionRatio`
- Renamed audio SNR metrics: ([#712](https://github.com/Lightning-AI/metrics/pull/712))
    * `functional.snr` -> `functional.signal_noise_ratio`
* `functional.si_snr` -> `functional.scale_invariant_signal_noise_ratio`
* `SNR` -> `SignalNoiseRatio`
* `SI_SNR` -> `ScaleInvariantSignalNoiseRatio`
- Renamed F-score metrics: ([#731](https://github.com/Lightning-AI/metrics/pull/731), [#740](https://github.com/Lightning-AI/metrics/pull/740))
* `functional.f1` -> `functional.f1_score`
* `F1` -> `F1Score`
* `functional.fbeta` -> `functional.fbeta_score`
* `FBeta` -> `FBetaScore`
- Renamed Hinge metric: ([#734](https://github.com/Lightning-AI/metrics/pull/734))
* `functional.hinge` -> `functional.hinge_loss`
* `Hinge` -> `HingeLoss`
- Renamed image PSNR metrics ([#732](https://github.com/Lightning-AI/metrics/pull/732))
* `functional.psnr` -> `functional.peak_signal_noise_ratio`
* `PSNR` -> `PeakSignalNoiseRatio`
- Renamed audio PIT metric: ([#737](https://github.com/Lightning-AI/metrics/pull/737))
* `functional.pit` -> `functional.permutation_invariant_training`
* `PIT` -> `PermutationInvariantTraining`
- Renamed image SSIM metric: ([#747](https://github.com/Lightning-AI/metrics/pull/747))
    * `functional.ssim` -> `functional.structural_similarity_index_measure`
* `SSIM` -> `StructuralSimilarityIndexMeasure`
- Renamed detection `MAP` to `MeanAveragePrecision` metric ([#754](https://github.com/Lightning-AI/metrics/pull/754))
- Renamed Fidelity & LPIPS image metric: ([#752](https://github.com/Lightning-AI/metrics/pull/752))
* `image.FID` -> `image.FrechetInceptionDistance`
* `image.KID` -> `image.KernelInceptionDistance`
* `image.LPIPS` -> `image.LearnedPerceptualImagePatchSimilarity`
### Removed
- Removed `embedding_similarity` metric ([#638](https://github.com/Lightning-AI/metrics/pull/638))
- Removed argument `concatenate_texts` from `wer` metric ([#638](https://github.com/Lightning-AI/metrics/pull/638))
- Removed arguments `newline_sep` and `decimal_places` from `rouge` metric ([#638](https://github.com/Lightning-AI/metrics/pull/638))
### Fixed
- Fixed MetricCollection kwargs filtering when no `kwargs` are present in update signature ([#707](https://github.com/Lightning-AI/metrics/pull/707))
## [0.6.2] - 2021-12-15
### Fixed
- Fixed "`torch.sort` currently does not support bool `dtype` on CUDA" error ([#665](https://github.com/Lightning-AI/metrics/pull/665))
- Fixed mAP to properly check if ground truths are empty ([#684](https://github.com/Lightning-AI/metrics/pull/684))
- Fixed initialization of tensors to be on correct device for `MAP` metric ([#673](https://github.com/Lightning-AI/metrics/pull/673))
## [0.6.1] - 2021-12-06
### Changed
- Migrate MAP metrics from pycocotools to PyTorch ([#632](https://github.com/Lightning-AI/metrics/pull/632))
- Use `torch.topk` instead of `torch.argsort` in retrieval precision for speedup ([#627](https://github.com/Lightning-AI/metrics/pull/627))
### Fixed
- Fixed empty predictions in MAP metric ([#594](https://github.com/Lightning-AI/metrics/pull/594), [#610](https://github.com/Lightning-AI/metrics/pull/610), [#624](https://github.com/Lightning-AI/metrics/pull/624))
- Fixed edge case of AUROC with `average=weighted` on GPU ([#606](https://github.com/Lightning-AI/metrics/pull/606))
- Fixed `forward` in compositional metrics ([#645](https://github.com/Lightning-AI/metrics/pull/645))
## [0.6.0] - 2021-10-28
### Added
- Added audio metrics:
- Perceptual Evaluation of Speech Quality (PESQ) ([#353](https://github.com/Lightning-AI/metrics/pull/353))
- Short-Time Objective Intelligibility (STOI) ([#353](https://github.com/Lightning-AI/metrics/pull/353))
- Added Information retrieval metrics:
- `RetrievalRPrecision` ([#577](https://github.com/Lightning-AI/metrics/pull/577))
- `RetrievalHitRate` ([#576](https://github.com/Lightning-AI/metrics/pull/576))
- Added NLP metrics:
- `SacreBLEUScore` ([#546](https://github.com/Lightning-AI/metrics/pull/546))
- `CharErrorRate` ([#575](https://github.com/Lightning-AI/metrics/pull/575))
- Added other metrics:
- Tweedie Deviance Score ([#499](https://github.com/Lightning-AI/metrics/pull/499))
- Learned Perceptual Image Patch Similarity (LPIPS) ([#431](https://github.com/Lightning-AI/metrics/pull/431))
- Added `MAP` (mean average precision) metric to new detection package ([#467](https://github.com/Lightning-AI/metrics/pull/467))
- Added support for float targets in `nDCG` metric ([#437](https://github.com/Lightning-AI/metrics/pull/437))
- Added `average` argument to `AveragePrecision` metric for reducing multi-label and multi-class problems ([#477](https://github.com/Lightning-AI/metrics/pull/477))
- Added `MultioutputWrapper` ([#510](https://github.com/Lightning-AI/metrics/pull/510))
- Added metric sweeping:
- `higher_is_better` as constant attribute ([#544](https://github.com/Lightning-AI/metrics/pull/544))
- `higher_is_better` to rest of codebase ([#584](https://github.com/Lightning-AI/metrics/pull/584))
- Added simple aggregation metrics: `SumMetric`, `MeanMetric`, `CatMetric`, `MinMetric`, `MaxMetric` ([#506](https://github.com/Lightning-AI/metrics/pull/506))
- Added pairwise submodule with metrics; see the sketch after this list ([#553](https://github.com/Lightning-AI/metrics/pull/553))
- `pairwise_cosine_similarity`
- `pairwise_euclidean_distance`
- `pairwise_linear_similarity`
- `pairwise_manhatten_distance`
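A minimal sketch of the pairwise functional interface, using cosine similarity as the example:

```python
import torch
from torchmetrics.functional import pairwise_cosine_similarity

x = torch.tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = torch.tensor([[1.0, 1.0]])
# returns an (N, M) matrix with the similarity between every row in x and y
print(pairwise_cosine_similarity(x, y))
```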
### Changed
- `AveragePrecision` will now by default output the `macro` average for multilabel and multiclass problems ([#477](https://github.com/Lightning-AI/metrics/pull/477))
- `half`, `double`, `float` will no longer change the dtype of the metric states. Use `metric.set_dtype` instead ([#493](https://github.com/Lightning-AI/metrics/pull/493))
- Renamed `AverageMeter` to `MeanMetric` ([#506](https://github.com/Lightning-AI/metrics/pull/506))
- Changed `is_differentiable` from property to a constant attribute ([#551](https://github.com/Lightning-AI/metrics/pull/551))
- `ROC` and `AUROC` will no longer throw an error when either the positive or negative class is missing; instead, a score of 0 is returned together with a warning
### Deprecated
- Deprecated `functional.self_supervised.embedding_similarity` in favour of new pairwise submodule
### Removed
- Removed `dtype` property ([#493](https://github.com/Lightning-AI/metrics/pull/493))
### Fixed
- Fixed bug in `F1` with `average='macro'` and `ignore_index!=None` ([#495](https://github.com/Lightning-AI/metrics/pull/495))
- Fixed bug in `pit` by using the returned first result to initialize device and type ([#533](https://github.com/Lightning-AI/metrics/pull/533))
- Fixed `SSIM` metric using too much memory ([#539](https://github.com/Lightning-AI/metrics/pull/539))
- Fixed bug where `device` property was not properly updated when the metric was a child of a module (#542)
## [0.5.1] - 2021-08-30
### Added
- Added `device` and `dtype` properties ([#462](https://github.com/Lightning-AI/metrics/pull/462))
- Added `TextTester` class for robustly testing text metrics ([#450](https://github.com/Lightning-AI/metrics/pull/450))
### Changed
- Added support for float targets in `nDCG` metric ([#437](https://github.com/Lightning-AI/metrics/pull/437))
### Removed
- Removed `rouge-score` as dependency for text package ([#443](https://github.com/Lightning-AI/metrics/pull/443))
- Removed `jiwer` as dependency for text package ([#446](https://github.com/Lightning-AI/metrics/pull/446))
- Removed `bert-score` as dependency for text package ([#473](https://github.com/Lightning-AI/metrics/pull/473))
### Fixed
- Fixed ranking of samples in `SpearmanCorrCoef` metric ([#448](https://github.com/Lightning-AI/metrics/pull/448))
- Fixed bug where compositional metrics were unable to sync because of type mismatch ([#454](https://github.com/Lightning-AI/metrics/pull/454))
- Fixed metric hashing ([#478](https://github.com/Lightning-AI/metrics/pull/478))
- Fixed `BootStrapper` metrics not working on GPU ([#462](https://github.com/Lightning-AI/metrics/pull/462))
- Fixed the semantic ordering of kernel height and width in `SSIM` metric ([#474](https://github.com/Lightning-AI/metrics/pull/474))
## [0.5.0] - 2021-08-09
### Added
- Added **Text-related (NLP) metrics**:
- Word Error Rate (WER) ([#383](https://github.com/Lightning-AI/metrics/pull/383))
- ROUGE ([#399](https://github.com/Lightning-AI/metrics/pull/399))
- BERT score ([#424](https://github.com/Lightning-AI/metrics/pull/424))
  - BLEU score ([#360](https://github.com/Lightning-AI/metrics/pull/360))
- Added `MetricTracker` wrapper metric for keeping track of the same metric over multiple epochs ([#238](https://github.com/Lightning-AI/metrics/pull/238))
- Added other metrics:
- Symmetric Mean Absolute Percentage error (SMAPE) ([#375](https://github.com/Lightning-AI/metrics/pull/375))
- Calibration error ([#394](https://github.com/Lightning-AI/metrics/pull/394))
- Permutation Invariant Training (PIT) ([#384](https://github.com/Lightning-AI/metrics/pull/384))
- Added support in `nDCG` metric for target with values larger than 1 ([#349](https://github.com/Lightning-AI/metrics/pull/349))
- Added support for negative targets in `nDCG` metric ([#378](https://github.com/Lightning-AI/metrics/pull/378))
- Added `None` as reduction option in `CosineSimilarity` metric ([#400](https://github.com/Lightning-AI/metrics/pull/400))
- Allowed passing labels in (n_samples, n_classes) to `AveragePrecision` ([#386](https://github.com/Lightning-AI/metrics/pull/386))
### Changed
- Moved `psnr` and `ssim` from `functional.regression.*` to `functional.image.*` ([#382](https://github.com/Lightning-AI/metrics/pull/382))
- Moved `image_gradient` from `functional.image_gradients` to `functional.image.gradients` ([#381](https://github.com/Lightning-AI/metrics/pull/381))
- Moved `R2Score` from `regression.r2score` to `regression.r2` ([#371](https://github.com/Lightning-AI/metrics/pull/371))
- Pearson metric now only stores 6 statistics instead of all predictions and targets ([#380](https://github.com/Lightning-AI/metrics/pull/380))
- Use `torch.argmax` instead of `torch.topk` when `k=1` for better performance ([#419](https://github.com/Lightning-AI/metrics/pull/419))
- Moved check for number of samples in R2 score to support single sample updating ([#426](https://github.com/Lightning-AI/metrics/pull/426))
### Deprecated
- Renamed `r2score` -> `r2_score` and `kldivergence` -> `kl_divergence` in `functional` ([#371](https://github.com/Lightning-AI/metrics/pull/371))
- Moved `bleu_score` from `functional.nlp` to `functional.text.bleu` ([#360](https://github.com/Lightning-AI/metrics/pull/360))
### Removed
- Removed restriction that `threshold` has to be in (0,1) range to support logit input (
[#351](https://github.com/Lightning-AI/metrics/pull/351)
[#401](https://github.com/Lightning-AI/metrics/pull/401))
- Removed restriction that `preds` could not be bigger than `num_classes` to support logit input ([#357](https://github.com/Lightning-AI/metrics/pull/357))
- Removed modules `regression.psnr` and `regression.ssim` ([#382](https://github.com/Lightning-AI/metrics/pull/382))
- Removed ([#379](https://github.com/Lightning-AI/metrics/pull/379)):
* function `functional.mean_relative_error`
* `num_thresholds` argument in `BinnedPrecisionRecallCurve`
### Fixed
- Fixed bug where classification metrics with `average='macro'` would lead to wrong result if a class was missing ([#303](https://github.com/Lightning-AI/metrics/pull/303))
- Fixed `weighted`, `multi-class` AUROC computation to allow for 0 observations of some class, as contribution to final AUROC is 0 ([#376](https://github.com/Lightning-AI/metrics/pull/376))
- Fixed that `_forward_cache` and `_computed` attributes are also moved to the correct device if metric is moved ([#413](https://github.com/Lightning-AI/metrics/pull/413))
- Fixed calculation in `IoU` metric when using `ignore_index` argument ([#328](https://github.com/Lightning-AI/metrics/pull/328))
## [0.4.1] - 2021-07-05
### Changed
- Extend typing ([#330](https://github.com/Lightning-AI/metrics/pull/330),
[#332](https://github.com/Lightning-AI/metrics/pull/332),
[#333](https://github.com/Lightning-AI/metrics/pull/333),
[#335](https://github.com/Lightning-AI/metrics/pull/335),
[#314](https://github.com/Lightning-AI/metrics/pull/314))
### Fixed
- Fixed DDP by adding `is_sync` logic to `Metric` ([#339](https://github.com/Lightning-AI/metrics/pull/339))
## [0.4.0] - 2021-06-29
### Added
- Added **Image-related metrics**:
- Fréchet inception distance (FID) ([#213](https://github.com/Lightning-AI/metrics/pull/213))
- Kernel Inception Distance (KID) ([#301](https://github.com/Lightning-AI/metrics/pull/301))
- Inception Score ([#299](https://github.com/Lightning-AI/metrics/pull/299))
- KL divergence ([#247](https://github.com/Lightning-AI/metrics/pull/247))
- Added **Audio metrics**: SNR, SI_SDR, SI_SNR ([#292](https://github.com/Lightning-AI/metrics/pull/292))
- Added other metrics:
- Cosine Similarity ([#305](https://github.com/Lightning-AI/metrics/pull/305))
- Specificity ([#210](https://github.com/Lightning-AI/metrics/pull/210))
- Mean Absolute Percentage error (MAPE) ([#248](https://github.com/Lightning-AI/metrics/pull/248))
- Added `add_metrics` method to `MetricCollection` for adding additional metrics after initialization ([#221](https://github.com/Lightning-AI/metrics/pull/221))
- Added pre-gather reduction in the case of `dist_reduce_fx="cat"` to reduce communication cost ([#217](https://github.com/Lightning-AI/metrics/pull/217))
- Added better error message for `AUROC` when `num_classes` is not provided for multiclass input ([#244](https://github.com/Lightning-AI/metrics/pull/244))
- Added support for unnormalized scores (e.g. logits) in `Accuracy`, `Precision`, `Recall`, `FBeta`, `F1`, `StatScore`, `Hamming`, `ConfusionMatrix` metrics ([#200](https://github.com/Lightning-AI/metrics/pull/200))
- Added `squared` argument to `MeanSquaredError` for computing `RMSE` ([#249](https://github.com/Lightning-AI/metrics/pull/249))
- Added `is_differentiable` property to `ConfusionMatrix`, `F1`, `FBeta`, `Hamming`, `Hinge`, `IOU`, `MatthewsCorrcoef`, `Precision`, `Recall`, `PrecisionRecallCurve`, `ROC`, `StatScores` ([#253](https://github.com/Lightning-AI/metrics/pull/253))
- Added `sync` and `sync_context` methods for manually controlling when metric states are synced ([#302](https://github.com/Lightning-AI/metrics/pull/302))
### Changed
- Forward cache is reset when `reset` method is called ([#260](https://github.com/Lightning-AI/metrics/pull/260))
- Improved per-class metric handling for imbalanced datasets for `precision`, `recall`, `precision_recall`, `fbeta`, `f1`, `accuracy`, and `specificity` ([#204](https://github.com/Lightning-AI/metrics/pull/204))
- Applied the `torch.jit.unused` decorator to `MetricCollection` forward ([#307](https://github.com/Lightning-AI/metrics/pull/307))
- Renamed `thresholds` argument to binned metrics for manually controlling the thresholds ([#322](https://github.com/Lightning-AI/metrics/pull/322))
- Extend typing ([#324](https://github.com/Lightning-AI/metrics/pull/324),
[#326](https://github.com/Lightning-AI/metrics/pull/326),
[#327](https://github.com/Lightning-AI/metrics/pull/327))
### Deprecated
- Deprecated `functional.mean_relative_error`, use `functional.mean_absolute_percentage_error` ([#248](https://github.com/Lightning-AI/metrics/pull/248))
- Deprecated `num_thresholds` argument in `BinnedPrecisionRecallCurve` ([#322](https://github.com/Lightning-AI/metrics/pull/322))
### Removed
- Removed argument `is_multiclass` ([#319](https://github.com/Lightning-AI/metrics/pull/319))
### Fixed
- AUC can also support more dimensional inputs when all but one dimension are of size 1 ([#242](https://github.com/Lightning-AI/metrics/pull/242))
- Fixed `dtype` of modular metrics after reset has been called ([#243](https://github.com/Lightning-AI/metrics/pull/243))
- Fixed calculation in `matthews_corrcoef` to correctly match formula ([#321](https://github.com/Lightning-AI/metrics/pull/321))
## [0.3.2] - 2021-05-10
### Added
- Added `is_differentiable` property:
* To `AUC`, `AUROC`, `CohenKappa` and `AveragePrecision` ([#178](https://github.com/Lightning-AI/metrics/pull/178))
* To `PearsonCorrCoef`, `SpearmanCorrcoef`, `R2Score` and `ExplainedVariance` ([#225](https://github.com/Lightning-AI/metrics/pull/225))
### Changed
- `MetricCollection` should return metrics with prefix on `items()`, `keys()` ([#209](https://github.com/Lightning-AI/metrics/pull/209))
- Calling `compute` before `update` will now give warning ([#164](https://github.com/Lightning-AI/metrics/pull/164))
### Removed
- Removed `numpy` as direct dependency ([#212](https://github.com/Lightning-AI/metrics/pull/212))
### Fixed
- Fixed auc calculation and add tests ([#197](https://github.com/Lightning-AI/metrics/pull/197))
- Fixed loading persisted metric states using `load_state_dict()` ([#202](https://github.com/Lightning-AI/metrics/pull/202))
- Fixed `PSNR` not working with `DDP` ([#214](https://github.com/Lightning-AI/metrics/pull/214))
- Fixed metric calculation with unequal batch sizes ([#220](https://github.com/Lightning-AI/metrics/pull/220))
- Fixed metric concatenation for list states for zero-dim input ([#229](https://github.com/Lightning-AI/metrics/pull/229))
- Fixed numerical instability in `AUROC` metric for large input ([#230](https://github.com/Lightning-AI/metrics/pull/230))
## [0.3.1] - 2021-04-21
- Cleaned up remaining inconsistencies and fixed PL develop integration (
[#191](https://github.com/Lightning-AI/metrics/pull/191),
[#192](https://github.com/Lightning-AI/metrics/pull/192),
[#193](https://github.com/Lightning-AI/metrics/pull/193),
[#194](https://github.com/Lightning-AI/metrics/pull/194)
)
## [0.3.0] - 2021-04-20
### Added
- Added `BootStrapper` to easily calculate confidence intervals for metrics ([#101](https://github.com/Lightning-AI/metrics/pull/101))
- Added Binned metrics ([#128](https://github.com/Lightning-AI/metrics/pull/128))
- Added metrics for Information Retrieval ([PL^5032](https://github.com/Lightning-AI/lightning/pull/5032)):
* `RetrievalMAP` ([PL^5032](https://github.com/Lightning-AI/lightning/pull/5032))
* `RetrievalMRR` ([#119](https://github.com/Lightning-AI/metrics/pull/119))
* `RetrievalPrecision` ([#139](https://github.com/Lightning-AI/metrics/pull/139))
* `RetrievalRecall` ([#146](https://github.com/Lightning-AI/metrics/pull/146))
* `RetrievalNormalizedDCG` ([#160](https://github.com/Lightning-AI/metrics/pull/160))
* `RetrievalFallOut` ([#161](https://github.com/Lightning-AI/metrics/pull/161))
- Added other metrics:
* `CohenKappa` ([#69](https://github.com/Lightning-AI/metrics/pull/69))
* `MatthewsCorrcoef` ([#98](https://github.com/Lightning-AI/metrics/pull/98))
* `PearsonCorrcoef` ([#157](https://github.com/Lightning-AI/metrics/pull/157))
* `SpearmanCorrcoef` ([#158](https://github.com/Lightning-AI/metrics/pull/158))
* `Hinge` ([#120](https://github.com/Lightning-AI/metrics/pull/120))
- Added `average='micro'` as an option in AUROC for multilabel problems ([#110](https://github.com/Lightning-AI/metrics/pull/110))
- Added multilabel support to `ROC` metric ([#114](https://github.com/Lightning-AI/metrics/pull/114))
- Added testing for `half` precision ([#77](https://github.com/Lightning-AI/metrics/pull/77),
[#135](https://github.com/Lightning-AI/metrics/pull/135)
)
- Added `AverageMeter` for ad-hoc averages of values ([#138](https://github.com/Lightning-AI/metrics/pull/138))
- Added `prefix` argument to `MetricCollection` ([#70](https://github.com/Lightning-AI/metrics/pull/70))
- Added `__getitem__` as metric arithmetic operation ([#142](https://github.com/Lightning-AI/metrics/pull/142))
- Added property `is_differentiable` to metrics and test for differentiability ([#154](https://github.com/Lightning-AI/metrics/pull/154))
- Added support for `average`, `ignore_index` and `mdmc_average` in `Accuracy` metric ([#166](https://github.com/Lightning-AI/metrics/pull/166))
- Added `postfix` arg to `MetricCollection` ([#188](https://github.com/Lightning-AI/metrics/pull/188))
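A brief sketch of the `prefix` (and `postfix`) arguments on `MetricCollection`, shown here with present-day class names for the wrapped metrics:

```python
import torch
from torchmetrics import MetricCollection
from torchmetrics.classification import BinaryAccuracy, BinaryPrecision

collection = MetricCollection([BinaryAccuracy(), BinaryPrecision()], prefix="val_")
preds = torch.tensor([0.8, 0.2, 0.6, 0.4])
target = torch.tensor([1, 0, 1, 0])
# result keys are prefixed, e.g. "val_BinaryAccuracy" and "val_BinaryPrecision"
print(collection(preds, target))
```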
### Changed
- Changed `ExplainedVariance` from storing all preds/targets to tracking 5 statistics ([#68](https://github.com/Lightning-AI/metrics/pull/68))
- Changed behaviour of `confusionmatrix` for multilabel data to better match `multilabel_confusion_matrix` from sklearn ([#134](https://github.com/Lightning-AI/metrics/pull/134))
- Updated FBeta arguments ([#111](https://github.com/Lightning-AI/metrics/pull/111))
- Changed `reset` method to use `detach().clone()` instead of `deepcopy` when resetting to default ([#163](https://github.com/Lightning-AI/metrics/pull/163))
- Metrics passed as dict to `MetricCollection` will now always be in deterministic order ([#173](https://github.com/Lightning-AI/metrics/pull/173))
- Allowed passing metrics directly as arguments to `MetricCollection` ([#176](https://github.com/Lightning-AI/metrics/pull/176))
### Deprecated
- Rename argument `is_multiclass` -> `multiclass` ([#162](https://github.com/Lightning-AI/metrics/pull/162))
### Removed
- Pruned remaining deprecated code ([#92](https://github.com/Lightning-AI/metrics/pull/92))
### Fixed
- Fixed `_stable_1d_sort` to work when `n>=N` ([PL^6177](https://github.com/Lightning-AI/lightning/pull/6177))
- Fixed `_computed` attribute not being correctly reset ([#147](https://github.com/Lightning-AI/metrics/pull/147))
- Fixed BLEU score computation ([#165](https://github.com/Lightning-AI/metrics/pull/165))
- Fixed backwards compatibility for logging with older version of pytorch-lightning ([#182](https://github.com/Lightning-AI/metrics/pull/182))
## [0.2.0] - 2021-03-12
### Changed
- Decoupled PL dependency ([#13](https://github.com/Lightning-AI/metrics/pull/13))
- Refactored functional - mimic the module-like structure: classification, regression, etc. ([#16](https://github.com/Lightning-AI/metrics/pull/16))
- Refactored utilities - split to topics/submodules ([#14](https://github.com/Lightning-AI/metrics/pull/14))
- Refactored `MetricCollection` ([#19](https://github.com/Lightning-AI/metrics/pull/19))
### Removed
- Removed deprecated metrics from PL base ([#12](https://github.com/Lightning-AI/metrics/pull/12),
[#15](https://github.com/Lightning-AI/metrics/pull/15))
## [0.1.0] - 2021-02-22
- Added `top_k` parameter to the `Accuracy` metric, generalizing it to Top-k accuracy for (multi-dimensional) multi-class inputs ([PL^4838](https://github.com/Lightning-AI/lightning/pull/4838))
- Added `subset_accuracy` parameter to the `Accuracy` metric, enabling the computation of subset accuracy for multi-label or multi-dimensional multi-class inputs ([PL^4838](https://github.com/Lightning-AI/lightning/pull/4838))
- Added `HammingDistance` metric to compute the Hamming distance (loss) ([PL^4838](https://github.com/Lightning-AI/lightning/pull/4838))
- Added `StatScores` metric to compute the number of true positives, false positives, true negatives and false negatives ([PL^4839](https://github.com/Lightning-AI/lightning/pull/4839))
- Added `R2Score` metric ([PL^5241](https://github.com/Lightning-AI/lightning/pull/5241))
- Added `MetricCollection` ([PL^4318](https://github.com/Lightning-AI/lightning/pull/4318))
- Added `.clone()` method to metrics ([PL^4318](https://github.com/Lightning-AI/lightning/pull/4318))
- Added `IoU` class interface ([PL^4704](https://github.com/Lightning-AI/lightning/pull/4704))
- The `Recall` and `Precision` metrics (and their functional counterparts `recall` and `precision`) can now be generalized to Recall@K and Precision@K with the use of `top_k` parameter ([PL^4842](https://github.com/Lightning-AI/lightning/pull/4842))
- Added compositional metrics ([PL^5464](https://github.com/Lightning-AI/lightning/pull/5464))
- Added AUC/AUROC class interface ([PL^5479](https://github.com/Lightning-AI/lightning/pull/5479))
- Added `QuantizationAwareTraining` callback ([PL^5706](https://github.com/Lightning-AI/lightning/pull/5706))
- Added `ConfusionMatrix` class interface ([PL^4348](https://github.com/Lightning-AI/lightning/pull/4348))
- Added multiclass AUROC metric ([PL^4236](https://github.com/Lightning-AI/lightning/pull/4236))
- Added `PrecisionRecallCurve, ROC, AveragePrecision` class metric ([PL^4549](https://github.com/Lightning-AI/lightning/pull/4549))
- Classification metrics overhaul ([PL^4837](https://github.com/Lightning-AI/lightning/pull/4837))
- Added `F1` class metric ([PL^4656](https://github.com/Lightning-AI/lightning/pull/4656))
- Added metrics aggregation in Horovod and fixed early stopping ([PL^3775](https://github.com/Lightning-AI/lightning/pull/3775))
- Added `persistent(mode)` method to metrics, to enable and disable metric states being added to `state_dict` ([PL^4482](https://github.com/Lightning-AI/lightning/pull/4482))
- Added unification of regression metrics ([PL^4166](https://github.com/Lightning-AI/lightning/pull/4166))
- Added persistent flag to `Metric.add_state` ([PL^4195](https://github.com/Lightning-AI/lightning/pull/4195))
- Added classification metrics ([PL^4043](https://github.com/Lightning-AI/lightning/pull/4043))
- Added new Metrics API. ([PL^3868](https://github.com/Lightning-AI/lightning/pull/3868), [PL^3921](https://github.com/Lightning-AI/lightning/pull/3921))
- Added EMB similarity ([PL^3349](https://github.com/Lightning-AI/lightning/pull/3349))
- Added SSIM metrics ([PL^2671](https://github.com/Lightning-AI/lightning/pull/2671))
- Added BLEU metrics ([PL^2535](https://github.com/Lightning-AI/lightning/pull/2535))
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/audio.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
# this needs to be the same as the one used inside speechmetrics
pesq @ git+https://github.com/ludlows/python-pesq
pystoi >=0.3.0, <=0.3.3
torchaudio >=0.10.0
gammatone @ https://github.com/detly/gammatone/archive/master.zip#egg=Gammatone
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/nominal_test.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
pandas >1.0.0, <=2.0.3 # cannot pin version due to numpy version incompatibility
dython <=0.7.4
scipy >1.0.0, <1.11.0 # cannot pin version due to some version conflicts with `oldest` CI configuration
statsmodels >0.13.5, <=0.14.0
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/_docs.txt | sphinx ==5.3.0
myst-parser ==1.0.0
nbsphinx ==0.9.3
pandoc ==2.3
docutils ==0.19
sphinxcontrib-fulltoc >=1.0
sphinxcontrib-mockautodoc
lai-sphinx-theme # need to be downloaded from s3://sphinx-packages/
sphinx-autodoc-typehints ==1.23.0
sphinx-paramlinks ==0.6.0
sphinx-togglebutton ==0.3.2
sphinx-copybutton ==0.5.2
lightning >=1.8.0, <2.2.0
lightning-utilities >=0.9.0, <0.10.0
pydantic > 1.0.0, < 3.0.0
# integrations
-r _integrate.txt
-r visual.txt
-r audio.txt
-r detection.txt
-r image.txt
-r multimodal.txt
-r text.txt
-r text_test.txt
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/text_test.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
jiwer >=2.3.0, <3.1.0
rouge-score >0.1.0, <=0.1.2
bert_score ==0.3.13
huggingface-hub <0.19 # hotfix, failing SDR for latest PT 1.11
sacrebleu >=2.3.0, <2.4.0
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/_integrate.txt | # contentiously validated integration with these expected ranges
# ToDo: investigate and add validation with 2.0+ on GPU
pytorch-lightning >=1.9.0, <2.0.0
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/classification_test.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
pandas >=1.4.0, <=2.0.3
netcal >1.0.0, <=1.3.5 # calibration_error
numpy <1.25.0
fairlearn # group_fairness
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/audio_test.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
pypesq @ git+https://github.com/vBaiCai/python-pesq
mir-eval >=0.6, <=0.7
speechmetrics @ git+https://github.com/aliutkus/speechmetrics
fast-bss-eval >=0.1.0, <0.1.5
torch_complex <=0.4.3 # needed for fast-bss-eval
srmrpy @ git+https://github.com/jfsantos/SRMRpy
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/detection_test.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
faster-coco-eval >=1.3.3
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/_devel.txt | # use mandatory dependencies
-r base.txt
# add the testing dependencies
-r _tests.txt
# add extra requirements
-r image.txt
-r text.txt
-r detection.txt
-r audio.txt
-r multimodal.txt
-r visual.txt
# add extra testing
-r image_test.txt
-r text_test.txt
-r audio_test.txt
-r detection_test.txt
-r classification_test.txt
-r nominal_test.txt
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/image.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
scipy >1.0.0, <1.11.0
torchvision >=0.8, <0.17.0
torch-fidelity <=0.4.0 # bumping to allow install version from master, now used in testing
lpips <=0.1.4
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/text.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
nltk >=3.6, <=3.8.1
tqdm >=4.41.0, <=4.66.1
regex >=2021.9.24, <=2023.10.3
transformers >4.4.0, <4.34.2
mecab-python3 >=1.0.6, <1.1.0
mecab-ko >=1.0.0, <1.1.0
mecab-ko-dic >=1.0.0, <1.1.0
ipadic >=1.0.0, <1.1.0
sentencepiece >=0.1.98, <=0.1.99
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/_tests.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
coverage ==7.3.2
pytest ==7.4.3
pytest-cov ==4.1.0
pytest-doctestplus ==1.0.0
pytest-rerunfailures ==12.0
pytest-timeout ==2.2.0
phmdoctest ==1.4.0
psutil <5.10.0
requests <=2.31.0
fire <=0.5.0
cloudpickle >1.3, <=3.0.0
scikit-learn >=1.1.1, <1.4.0
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/image_test.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
scikit-image >=0.19.0, <=0.21.0
kornia >=0.6.7, <0.7.1
pytorch-msssim ==1.0.0
sewar >=0.4.4, <=0.4.6
numpy <1.25.0
torch-fidelity @ git+https://github.com/toshas/torch-fidelity@master
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/_doctest.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
pytest >=6.0.0, <7.5.0
pytest-doctestplus >=0.9.0, <=1.0.0
pytest-rerunfailures >=10.0, <13.0
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/visual.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
matplotlib >=3.2.0, <3.8.0
SciencePlots >=2.0.0, <=2.1.0
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/README.md | # Project Requirements
This folder contains all requirements files for the project. The base requirements are located in the `base.txt` file.
Files prefixed with `_` are only meant for development and testing purposes. In general, each subdomain of the project
has a `<domain>.txt` file that contains the necessary requirements for using that subdomain and a `<domain>_test.txt`
file that contains the necessary requirements for testing that subdomain.
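For example, to work on and test only the audio subdomain, you would typically combine that pair of files (an illustrative sketch of the pattern described above):
```bash
pip install -r requirements/audio.txt -r requirements/audio_test.txt
```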
To install all extra requirements such that all tests can be run, use the following commands:
```bash
pip install -r requirements/_devel.txt # unittests
pip install -r requirements/_integrate.txt # integration tests
```
To install all extra requirements so that the documentation can be built, use the following command:
```bash
pip install -r requirements/_docs.txt
# OR just run `make docs`
```
## CI/CD upper bounds automation
For CI stability, we set an upper bound (the latest known compatible version) on every package version, so a sudden
new release cannot break our development. Dependabot manages the continuous updates of these upper bounds.
Note that these upper bounds are lifted when installing the package from source or as a package.
If you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment.
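As an illustrative sketch of that convention (the package name and version below are hypothetical, not an actual pin from this repository), a strict requirement would look like this:
```
some-package ==1.2.3 # strict
```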
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/typing.txt | mypy ==1.6.1
torch ==2.1.0
types-PyYAML
types-emoji
types-protobuf
types-requests
types-setuptools
types-six
types-tabulate
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/detection.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
torchvision >=0.8, <0.17.0
pycocotools >2.0.0, <=2.0.7
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/multimodal.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
transformers >=4.10.0, <4.34.2
piq <=0.8.0
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/requirements/base.txt | # NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment
numpy >1.20.0
packaging >17.1
torch >=1.10.0, <=2.1.0
typing-extensions; python_version < '3.9'
lightning-utilities >=0.8.0, <0.10.0
| 0 |
public_repos/torchmetrics | public_repos/torchmetrics/dockers/README.md | # Docker images
## Build images from Dockerfiles
You can build it on your own; note that it takes a lot of time, so be prepared.
```bash
git clone https://github.com/Lightning-AI/torchmetrics.git
# build with the default arguments
docker image build -t torchmetrics:latest -f dockers/ubuntu-cuda/Dockerfile .
# build with specific arguments
docker image build -t torchmetrics:ubuntu-cuda11.7.1-py3.9-torch1.13 \
-f dockers/base-cuda/Dockerfile \
--build-arg PYTHON_VERSION=3.9 \
--build-arg PYTORCH_VERSION=1.13 \
--build-arg CUDA_VERSION=11.7.1 \
.
```
To run your docker image, use
```bash
docker image list
docker run --rm -it torchmetrics:latest bash
```
and if you do not need it anymore, just clean it:
```bash
docker image list
docker image rm torchmetrics:latest
```
## Run docker image with GPUs
To run the docker image with access to your GPUs, you first need to install the NVIDIA Container Toolkit:
```bash
# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
and later run the docker image with `--gpus all`. For example,
```bash
docker run --rm -it --gpus all torchmetrics:ubuntu-cuda11.7.1-py3.9-torch1.12
```
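To quickly verify that the GPUs are actually visible inside the container, a one-liner such as the following can be used (the image tag is only an example, and this assumes PyTorch is available in the image, as it is for images built from this repository):
```bash
docker run --rm --gpus all torchmetrics:latest python -c "import torch; print(torch.cuda.is_available())"
```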
| 0 |
public_repos/torchmetrics/dockers | public_repos/torchmetrics/dockers/ubuntu-cuda/Dockerfile | # Copyright The Lightning AI team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARG UBUNTU_VERSION=22.04
ARG CUDA_VERSION=11.7.1
FROM nvidia/cuda:${CUDA_VERSION}-runtime-ubuntu${UBUNTU_VERSION}
ARG PYTHON_VERSION=3.10
ARG PYTORCH_VERSION=2.0
SHELL ["/bin/bash", "-c"]
# https://techoverflow.net/2019/05/18/how-to-fix-configuring-tzdata-interactive-input-when-building-docker-images/
ENV \
DEBIAN_FRONTEND="noninteractive" \
TZ="Etc/UTC" \
PATH="$PATH:/root/.local/bin" \
CUDA_TOOLKIT_ROOT_DIR="/usr/local/cuda" \
MKL_THREADING_LAYER="GNU" \
# MAKEFLAGS="-j$(nproc)"
MAKEFLAGS="-j2"
RUN \
apt-get -y update --fix-missing && \
apt-get install -y --no-install-recommends --allow-downgrades --allow-change-held-packages \
build-essential \
pkg-config \
cmake \
git \
wget \
curl \
unzip \
g++ \
        ffmpeg \
libsndfile1 \
ca-certificates \
software-properties-common \
libopenmpi-dev \
openmpi-bin \
ssh \
&& \
# Install python
add-apt-repository ppa:deadsnakes/ppa && \
apt-get install -y \
python${PYTHON_VERSION} \
python${PYTHON_VERSION}-distutils \
python${PYTHON_VERSION}-dev \
&& \
update-alternatives --install /usr/bin/python${PYTHON_VERSION%%.*} python${PYTHON_VERSION%%.*} /usr/bin/python${PYTHON_VERSION} 1 && \
update-alternatives --install /usr/bin/python python /usr/bin/python${PYTHON_VERSION} 1 && \
curl https://bootstrap.pypa.io/get-pip.py | python && \
# Cleaning
apt-get autoremove -y && \
apt-get clean && \
rm -rf /root/.cache && \
rm -rf /var/lib/apt/lists/*
ENV PYTHONPATH="/usr/lib/python${PYTHON_VERSION}/site-packages"
COPY requirements/ requirements/
RUN \
# set particular PyTorch version
pip install -q wget packaging && \
python -m wget https://raw.githubusercontent.com/Lightning-AI/utilities/main/scripts/adjust-torch-versions.py && \
for fpath in `ls requirements/*.txt`; do \
python ./adjust-torch-versions.py $fpath ${PYTORCH_VERSION}; \
done && \
# trying to resolve pesq installation issue
pip install -q "numpy<1.24" && \
CUDA_VERSION_MM=${CUDA_VERSION%.*} && \
CU_VERSION_MM=${CUDA_VERSION_MM//'.'/''} && \
pip install --no-cache-dir -r requirements/_devel.txt \
--find-links "https://download.pytorch.org/whl/cu${CU_VERSION_MM}/torch_stable.html" && \
rm -rf requirements/
RUN \
# Show what we have
pip --version && \
pip list && \
python -c "import sys; ver = sys.version_info ; assert f'{ver.major}.{ver.minor}' == '$PYTHON_VERSION', ver" && \
python -c "import torch; assert torch.__version__.startswith('$PYTORCH_VERSION'), torch.__version__"
| 0 |
public_repos/torchmetrics/src | public_repos/torchmetrics/src/torchmetrics/__about__.py | __version__ = "1.3.0dev"
__author__ = "Lightning-AI et al."
__author_email__ = "name@pytorchlightning.ai"
__license__ = "Apache-2.0"
__copyright__ = f"Copyright (c) 2020-2023, {__author__}."
__homepage__ = "https://github.com/Lightning-AI/torchmetrics"
__docs__ = "PyTorch native Metrics"
__docs_url__ = "https://lightning.ai/docs/torchmetrics/stable/"
__long_doc__ = """
Torchmetrics is a metrics API created for easy metric development and usage in both PyTorch and
[PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/). It was originally a part of
PyTorch Lightning, but got split off so users could take advantage of the large collection of metrics
implemented without having to install PyTorch Lightning (even though we would love for you to try it out).
We currently have 100+ metrics implemented, and we are continuously adding more, both within
already covered domains (classification, regression, etc.) and in new domains (object detection, etc.).
We make sure that all our metrics are rigorously tested such that you can trust them.
"""
__all__ = [
"__author__",
"__author_email__",
"__copyright__",
"__docs__",
"__docs_url__",
"__homepage__",
"__license__",
"__version__",
]
| 0 |
public_repos/torchmetrics/src | public_repos/torchmetrics/src/torchmetrics/aggregation.py | # Copyright The Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Callable, List, Optional, Sequence, Tuple, Union
import torch
from torch import Tensor
from torchmetrics.metric import Metric
from torchmetrics.utilities import rank_zero_warn
from torchmetrics.utilities.data import dim_zero_cat
from torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE
from torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE
from torchmetrics.wrappers.running import Running
if not _MATPLOTLIB_AVAILABLE:
__doctest_skip__ = ["SumMetric.plot", "MeanMetric.plot", "MaxMetric.plot", "MinMetric.plot"]
class BaseAggregator(Metric):
"""Base class for aggregation metrics.
Args:
fn: string specifying the reduction function
default_value: default tensor value to use for the metric state
nan_strategy: options:
- ``'error'``: if any `nan` values are encountered will give a RuntimeError
- ``'warn'``: if any `nan` values are encountered will give a warning and continue
- ``'ignore'``: all `nan` values are silently removed
- a float: if a float is provided will impute any `nan` values with this value
state_name: name of the metric state
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Raises:
ValueError:
If ``nan_strategy`` is not one of ``error``, ``warn``, ``ignore`` or a float
"""
is_differentiable = None
higher_is_better = None
full_state_update: bool = False
def __init__(
self,
fn: Union[Callable, str],
default_value: Union[Tensor, List],
nan_strategy: Union[str, float] = "error",
state_name: str = "value",
**kwargs: Any,
) -> None:
super().__init__(**kwargs)
allowed_nan_strategy = ("error", "warn", "ignore")
if nan_strategy not in allowed_nan_strategy and not isinstance(nan_strategy, float):
raise ValueError(
f"Arg `nan_strategy` should either be a float or one of {allowed_nan_strategy}"
f" but got {nan_strategy}."
)
self.nan_strategy = nan_strategy
self.add_state(state_name, default=default_value, dist_reduce_fx=fn)
self.state_name = state_name
def _cast_and_nan_check_input(
self, x: Union[float, Tensor], weight: Optional[Union[float, Tensor]] = None
) -> Tuple[Tensor, Tensor]:
"""Convert input ``x`` to a tensor and check for Nans."""
if not isinstance(x, Tensor):
x = torch.as_tensor(x, dtype=torch.float32, device=self.device)
if weight is not None and not isinstance(weight, Tensor):
weight = torch.as_tensor(weight, dtype=torch.float32, device=self.device)
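        # Handle NaN values according to the chosen `nan_strategy`: "error" raises a RuntimeError,
        # "warn"/"ignore" drop the offending entries (with or without a warning), and a float imputes them.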
nans = torch.isnan(x)
if weight is not None:
nans_weight = torch.isnan(weight)
else:
nans_weight = torch.zeros_like(nans).bool()
weight = torch.ones_like(x)
if nans.any() or nans_weight.any():
if self.nan_strategy == "error":
raise RuntimeError("Encountered `nan` values in tensor")
if self.nan_strategy in ("ignore", "warn"):
if self.nan_strategy == "warn":
rank_zero_warn("Encountered `nan` values in tensor. Will be removed.", UserWarning)
x = x[~(nans | nans_weight)]
weight = weight[~(nans | nans_weight)]
else:
if not isinstance(self.nan_strategy, float):
                    raise ValueError(f"`nan_strategy` should be a float but you passed {self.nan_strategy}")
x[nans | nans_weight] = self.nan_strategy
weight[nans | nans_weight] = self.nan_strategy
return x.float(), weight.float()
def update(self, value: Union[float, Tensor]) -> None:
"""Overwrite in child class."""
def compute(self) -> Tensor:
"""Compute the aggregated value."""
return getattr(self, self.state_name)
class MaxMetric(BaseAggregator):
"""Aggregate a stream of value into their maximum value.
As input to ``forward`` and ``update`` the metric accepts the following input
    - ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or a tensor of float values with
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
- ``agg`` (:class:`~torch.Tensor`): scalar float tensor with aggregated maximum value over all inputs received
Args:
nan_strategy: options:
- ``'error'``: if any `nan` values are encountered will give a RuntimeError
- ``'warn'``: if any `nan` values are encountered will give a warning and continue
- ``'ignore'``: all `nan` values are silently removed
- a float: if a float is provided will impute any `nan` values with this value
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Raises:
ValueError:
If ``nan_strategy`` is not one of ``error``, ``warn``, ``ignore`` or a float
Example:
>>> from torch import tensor
>>> from torchmetrics.aggregation import MaxMetric
>>> metric = MaxMetric()
>>> metric.update(1)
>>> metric.update(tensor([2, 3]))
>>> metric.compute()
tensor(3.)
"""
full_state_update: bool = True
max_value: Tensor
def __init__(
self,
nan_strategy: Union[str, float] = "warn",
**kwargs: Any,
) -> None:
super().__init__(
"max",
-torch.tensor(float("inf")),
nan_strategy,
state_name="max_value",
**kwargs,
)
def update(self, value: Union[float, Tensor]) -> None:
"""Update state with data.
Args:
value: Either a float or tensor containing data. Additional tensor
dimensions will be flattened
"""
value, _ = self._cast_and_nan_check_input(value)
if value.numel(): # make sure tensor not empty
self.max_value = torch.max(self.max_value, torch.max(value))
def plot(
self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None
) -> _PLOT_OUT_TYPE:
"""Plot a single or multiple values from the metric.
Args:
val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.
If no value is provided, will automatically call `metric.compute` and plot that result.
            ax: A matplotlib axis object. If provided will add plot to that axis
Returns:
Figure and Axes object
Raises:
ModuleNotFoundError:
If `matplotlib` is not installed
.. plot::
:scale: 75
>>> # Example plotting a single value
>>> from torchmetrics.aggregation import MaxMetric
>>> metric = MaxMetric()
>>> metric.update([1, 2, 3])
>>> fig_, ax_ = metric.plot()
.. plot::
:scale: 75
>>> # Example plotting multiple values
>>> from torchmetrics.aggregation import MaxMetric
>>> metric = MaxMetric()
>>> values = [ ]
>>> for i in range(10):
... values.append(metric(i))
>>> fig_, ax_ = metric.plot(values)
"""
return self._plot(val, ax)
class MinMetric(BaseAggregator):
"""Aggregate a stream of value into their minimum value.
As input to ``forward`` and ``update`` the metric accepts the following input
    - ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or a tensor of float values with
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
- ``agg`` (:class:`~torch.Tensor`): scalar float tensor with aggregated minimum value over all inputs received
Args:
nan_strategy: options:
- ``'error'``: if any `nan` values are encountered will give a RuntimeError
- ``'warn'``: if any `nan` values are encountered will give a warning and continue
- ``'ignore'``: all `nan` values are silently removed
- a float: if a float is provided will impute any `nan` values with this value
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Raises:
ValueError:
If ``nan_strategy`` is not one of ``error``, ``warn``, ``ignore`` or a float
Example:
>>> from torch import tensor
>>> from torchmetrics.aggregation import MinMetric
>>> metric = MinMetric()
>>> metric.update(1)
>>> metric.update(tensor([2, 3]))
>>> metric.compute()
tensor(1.)
"""
full_state_update: bool = True
min_value: Tensor
def __init__(
self,
nan_strategy: Union[str, float] = "warn",
**kwargs: Any,
) -> None:
super().__init__(
"min",
torch.tensor(float("inf")),
nan_strategy,
state_name="min_value",
**kwargs,
)
def update(self, value: Union[float, Tensor]) -> None:
"""Update state with data.
Args:
value: Either a float or tensor containing data. Additional tensor
dimensions will be flattened
"""
value, _ = self._cast_and_nan_check_input(value)
if value.numel(): # make sure tensor not empty
self.min_value = torch.min(self.min_value, torch.min(value))
def plot(
self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None
) -> _PLOT_OUT_TYPE:
"""Plot a single or multiple values from the metric.
Args:
val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.
If no value is provided, will automatically call `metric.compute` and plot that result.
            ax: A matplotlib axis object. If provided will add plot to that axis
Returns:
Figure and Axes object
Raises:
ModuleNotFoundError:
If `matplotlib` is not installed
.. plot::
:scale: 75
>>> # Example plotting a single value
>>> from torchmetrics.aggregation import MinMetric
>>> metric = MinMetric()
>>> metric.update([1, 2, 3])
>>> fig_, ax_ = metric.plot()
.. plot::
:scale: 75
>>> # Example plotting multiple values
>>> from torchmetrics.aggregation import MinMetric
>>> metric = MinMetric()
>>> values = [ ]
>>> for i in range(10):
... values.append(metric(i))
>>> fig_, ax_ = metric.plot(values)
"""
return self._plot(val, ax)
class SumMetric(BaseAggregator):
"""Aggregate a stream of value into their sum.
As input to ``forward`` and ``update`` the metric accepts the following input
    - ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or a tensor of float values with
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
- ``agg`` (:class:`~torch.Tensor`): scalar float tensor with aggregated sum over all inputs received
Args:
nan_strategy: options:
- ``'error'``: if any `nan` values are encountered will give a RuntimeError
- ``'warn'``: if any `nan` values are encountered will give a warning and continue
- ``'ignore'``: all `nan` values are silently removed
- a float: if a float is provided will impute any `nan` values with this value
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Raises:
ValueError:
If ``nan_strategy`` is not one of ``error``, ``warn``, ``ignore`` or a float
Example:
>>> from torch import tensor
>>> from torchmetrics.aggregation import SumMetric
>>> metric = SumMetric()
>>> metric.update(1)
>>> metric.update(tensor([2, 3]))
>>> metric.compute()
tensor(6.)
"""
sum_value: Tensor
def __init__(
self,
nan_strategy: Union[str, float] = "warn",
**kwargs: Any,
) -> None:
super().__init__(
"sum",
torch.tensor(0.0),
nan_strategy,
state_name="sum_value",
**kwargs,
)
def update(self, value: Union[float, Tensor]) -> None:
"""Update state with data.
Args:
value: Either a float or tensor containing data. Additional tensor
dimensions will be flattened
"""
value, _ = self._cast_and_nan_check_input(value)
if value.numel():
self.sum_value += value.sum()
def plot(
self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None
) -> _PLOT_OUT_TYPE:
"""Plot a single or multiple values from the metric.
Args:
val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.
If no value is provided, will automatically call `metric.compute` and plot that result.
            ax: A matplotlib axis object. If provided will add plot to that axis
Returns:
Figure and Axes object
Raises:
ModuleNotFoundError:
If `matplotlib` is not installed
.. plot::
:scale: 75
>>> # Example plotting a single value
>>> from torchmetrics.aggregation import SumMetric
>>> metric = SumMetric()
>>> metric.update([1, 2, 3])
>>> fig_, ax_ = metric.plot()
.. plot::
:scale: 75
>>> # Example plotting multiple values
>>> from torch import rand, randint
>>> from torchmetrics.aggregation import SumMetric
>>> metric = SumMetric()
>>> values = [ ]
>>> for i in range(10):
... values.append(metric([i, i+1]))
>>> fig_, ax_ = metric.plot(values)
"""
return self._plot(val, ax)
class CatMetric(BaseAggregator):
"""Concatenate a stream of values.
As input to ``forward`` and ``update`` the metric accepts the following input
    - ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or a tensor of float values with
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
    - ``agg`` (:class:`~torch.Tensor`): float tensor with concatenated values over all inputs received
Args:
nan_strategy: options:
- ``'error'``: if any `nan` values are encountered will give a RuntimeError
- ``'warn'``: if any `nan` values are encountered will give a warning and continue
- ``'ignore'``: all `nan` values are silently removed
- a float: if a float is provided will impute any `nan` values with this value
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Raises:
ValueError:
If ``nan_strategy`` is not one of ``error``, ``warn``, ``ignore`` or a float
Example:
>>> from torch import tensor
>>> from torchmetrics.aggregation import CatMetric
>>> metric = CatMetric()
>>> metric.update(1)
>>> metric.update(tensor([2, 3]))
>>> metric.compute()
tensor([1., 2., 3.])
"""
value: Tensor
def __init__(
self,
nan_strategy: Union[str, float] = "warn",
**kwargs: Any,
) -> None:
super().__init__("cat", [], nan_strategy, **kwargs)
def update(self, value: Union[float, Tensor]) -> None:
"""Update state with data.
Args:
value: Either a float or tensor containing data. Additional tensor
dimensions will be flattened
"""
value, _ = self._cast_and_nan_check_input(value)
if value.numel():
self.value.append(value)
def compute(self) -> Tensor:
"""Compute the aggregated value."""
if isinstance(self.value, list) and self.value:
return dim_zero_cat(self.value)
return self.value
class MeanMetric(BaseAggregator):
"""Aggregate a stream of value into their mean value.
As input to ``forward`` and ``update`` the metric accepts the following input
    - ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or a tensor of float values with
arbitrary shape ``(...,)``.
    - ``weight`` (:class:`~float` or :class:`~torch.Tensor`): a single float or a tensor of float values with
arbitrary shape ``(...,)``. Needs to be broadcastable with the shape of ``value`` tensor.
As output of `forward` and `compute` the metric returns the following output
- ``agg`` (:class:`~torch.Tensor`): scalar float tensor with aggregated (weighted) mean over all inputs received
Args:
nan_strategy: options:
- ``'error'``: if any `nan` values are encountered will give a RuntimeError
- ``'warn'``: if any `nan` values are encountered will give a warning and continue
- ``'ignore'``: all `nan` values are silently removed
- a float: if a float is provided will impute any `nan` values with this value
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Raises:
ValueError:
If ``nan_strategy`` is not one of ``error``, ``warn``, ``ignore`` or a float
Example:
        >>> from torch import tensor
        >>> from torchmetrics.aggregation import MeanMetric
        >>> metric = MeanMetric()
        >>> metric.update(1)
        >>> metric.update(tensor([2, 3]))
>>> metric.compute()
tensor(2.)
"""
mean_value: Tensor
def __init__(
self,
nan_strategy: Union[str, float] = "warn",
**kwargs: Any,
) -> None:
super().__init__(
"sum",
torch.tensor(0.0),
nan_strategy,
state_name="mean_value",
**kwargs,
)
self.add_state("weight", default=torch.tensor(0.0), dist_reduce_fx="sum")
def update(self, value: Union[float, Tensor], weight: Union[float, Tensor] = 1.0) -> None:
"""Update state with data.
Args:
value: Either a float or tensor containing data. Additional tensor
dimensions will be flattened
weight: Either a float or tensor containing weights for calculating
the average. Shape of weight should be able to broadcast with
                the shape of `value`. Defaults to `1.0`, corresponding to a simple
                (unweighted) average.
"""
# broadcast weight to value shape
if not isinstance(value, Tensor):
value = torch.as_tensor(value, dtype=torch.float32, device=self.device)
if weight is not None and not isinstance(weight, Tensor):
weight = torch.as_tensor(weight, dtype=torch.float32, device=self.device)
weight = torch.broadcast_to(weight, value.shape)
value, weight = self._cast_and_nan_check_input(value, weight)
if value.numel() == 0:
return
self.mean_value += (value * weight).sum()
self.weight += weight.sum()
def compute(self) -> Tensor:
"""Compute the aggregated value."""
return self.mean_value / self.weight
def plot(
self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None
) -> _PLOT_OUT_TYPE:
"""Plot a single or multiple values from the metric.
Args:
val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.
If no value is provided, will automatically call `metric.compute` and plot that result.
            ax: A matplotlib axis object. If provided will add plot to that axis
Returns:
Figure and Axes object
Raises:
ModuleNotFoundError:
If `matplotlib` is not installed
.. plot::
:scale: 75
>>> # Example plotting a single value
>>> from torchmetrics.aggregation import MeanMetric
>>> metric = MeanMetric()
>>> metric.update([1, 2, 3])
>>> fig_, ax_ = metric.plot()
.. plot::
:scale: 75
>>> # Example plotting multiple values
>>> from torchmetrics.aggregation import MeanMetric
>>> metric = MeanMetric()
>>> values = [ ]
>>> for i in range(10):
... values.append(metric([i, i+1]))
>>> fig_, ax_ = metric.plot(values)
"""
return self._plot(val, ax)
class RunningMean(Running):
"""Aggregate a stream of value into their mean over a running window.
Using this metric compared to `MeanMetric` allows for calculating metrics over a running window of values, instead
of the whole history of values. This is beneficial when you want to get a better estimate of the metric during
training and don't want to wait for the whole training to finish to get epoch level estimates.
As input to ``forward`` and ``update`` the metric accepts the following input
    - ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or a tensor of float values with
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
    - ``agg`` (:class:`~torch.Tensor`): scalar float tensor with the aggregated mean over the last ``window`` inputs received
Args:
window: The size of the running window.
nan_strategy: options:
- ``'error'``: if any `nan` values are encountered will give a RuntimeError
- ``'warn'``: if any `nan` values are encountered will give a warning and continue
- ``'ignore'``: all `nan` values are silently removed
- a float: if a float is provided will impute any `nan` values with this value
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Raises:
ValueError:
If ``nan_strategy`` is not one of ``error``, ``warn``, ``ignore`` or a float
Example:
>>> from torch import tensor
>>> from torchmetrics.aggregation import RunningMean
>>> metric = RunningMean(window=3)
>>> for i in range(6):
... current_val = metric(tensor([i]))
... running_val = metric.compute()
... total_val = tensor(sum(list(range(i+1)))) / (i+1) # total mean over all samples
... print(f"{current_val=}, {running_val=}, {total_val=}")
current_val=tensor(0.), running_val=tensor(0.), total_val=tensor(0.)
current_val=tensor(1.), running_val=tensor(0.5000), total_val=tensor(0.5000)
current_val=tensor(2.), running_val=tensor(1.), total_val=tensor(1.)
current_val=tensor(3.), running_val=tensor(2.), total_val=tensor(1.5000)
current_val=tensor(4.), running_val=tensor(3.), total_val=tensor(2.)
current_val=tensor(5.), running_val=tensor(4.), total_val=tensor(2.5000)
"""
def __init__(
self,
window: int = 5,
nan_strategy: Union[str, float] = "warn",
**kwargs: Any,
) -> None:
super().__init__(base_metric=MeanMetric(nan_strategy=nan_strategy, **kwargs), window=window)
class RunningSum(Running):
"""Aggregate a stream of value into their sum over a running window.
Using this metric compared to `SumMetric` allows for calculating metrics over a running window of values, instead
of the whole history of values. This is beneficial when you want to get a better estimate of the metric during
training and don't want to wait for the whole training to finish to get epoch level estimates.
As input to ``forward`` and ``update`` the metric accepts the following input
    - ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or a tensor of float values with
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
    - ``agg`` (:class:`~torch.Tensor`): scalar float tensor with the aggregated sum over the last ``window`` inputs received
Args:
window: The size of the running window.
nan_strategy: options:
- ``'error'``: if any `nan` values are encountered will give a RuntimeError
- ``'warn'``: if any `nan` values are encountered will give a warning and continue
- ``'ignore'``: all `nan` values are silently removed
- a float: if a float is provided will impute any `nan` values with this value
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Raises:
ValueError:
If ``nan_strategy`` is not one of ``error``, ``warn``, ``ignore`` or a float
Example:
>>> from torch import tensor
>>> from torchmetrics.aggregation import RunningSum
>>> metric = RunningSum(window=3)
>>> for i in range(6):
... current_val = metric(tensor([i]))
... running_val = metric.compute()
... total_val = tensor(sum(list(range(i+1)))) # total sum over all samples
... print(f"{current_val=}, {running_val=}, {total_val=}")
current_val=tensor(0.), running_val=tensor(0.), total_val=tensor(0)
current_val=tensor(1.), running_val=tensor(1.), total_val=tensor(1)
current_val=tensor(2.), running_val=tensor(3.), total_val=tensor(3)
current_val=tensor(3.), running_val=tensor(6.), total_val=tensor(6)
current_val=tensor(4.), running_val=tensor(9.), total_val=tensor(10)
current_val=tensor(5.), running_val=tensor(12.), total_val=tensor(15)
"""
def __init__(
self,
window: int = 5,
nan_strategy: Union[str, float] = "warn",
**kwargs: Any,
) -> None:
super().__init__(base_metric=SumMetric(nan_strategy=nan_strategy, **kwargs), window=window)
| 0 |