---
title: ccc
tags:
- evaluate
- metric
description: Concordance correlation coefficient
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
---

# Metric Card for CCC

## Metric Description

The concordance correlation coefficient (CCC) measures the agreement between two sets of values. It is often used as a measure of inter-rater agreement when the ratings are continuous.
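
For sequences $x$ and $y$ with means $\mu_x, \mu_y$, variances $\sigma_x^2, \sigma_y^2$, and covariance $\sigma_{xy}$, the coefficient is defined as (Lin, 1989):

$$\rho_c = \frac{2\sigma_{xy}}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2}$$

Unlike Pearson's correlation, the CCC penalises differences in both mean and scale, so a value of 1 requires exact agreement rather than just a linear relationship.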

## How to Use

The inputs are two sequences of floating-point values. For example:

```python
import evaluate

ccc_metric = evaluate.load("agkphysics/ccc")
results = ccc_metric.compute(references=[0.2, 0.1], predictions=[0.1, 0.2])
```
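
The metric is resolved from the Hugging Face Hub by the `evaluate` library, so it requires `evaluate` to be installed (`pip install evaluate`) and network access on first load.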

### Inputs

- `predictions` (list of `float`): model predictions.
- `references` (list of `float`): reference labels.

### Output Values

- `ccc` (`float`): the concordance correlation coefficient. This is a value between -1 (perfect anti-agreement) and 1 (perfect agreement), with 0 indicating no agreement.
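
For reference, here is a minimal NumPy sketch of the computation, assuming the population-moment formulation from Lin (1989); the hosted metric's exact implementation may differ (e.g. in float precision, as the examples below suggest):

```python
import numpy as np

def concordance_cc(predictions, references):
    """Concordance correlation coefficient between two sequences."""
    x = np.asarray(predictions, dtype=np.float64)
    y = np.asarray(references, dtype=np.float64)
    # Population (biased) moments, per Lin (1989).
    mean_x, mean_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mean_x) * (y - mean_y)).mean()
    # The denominator penalises both scale and location differences.
    return 2 * cov_xy / (var_x + var_y + (mean_x - mean_y) ** 2)

concordance_cc([0.1, 0.2], [0.1, 0.3])  # ~0.6667, matching the third example below
```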

### Examples

```python
>>> ccc_metric = evaluate.load("agkphysics/ccc")
>>> results = ccc_metric.compute(references=[0.2, 0.1], predictions=[0.1, 0.2])
>>> print(results)
{'ccc': -1.0}
>>> results = ccc_metric.compute(references=[0.1, 0.2], predictions=[0.1, 0.2])
>>> print(results)
{'ccc': 1.0}
>>> results = ccc_metric.compute(references=[0.1, 0.3], predictions=[0.1, 0.2])
>>> print(results)
{'ccc': 0.666666641831399}
```

## Limitations and Bias

Like Pearson's correlation, the CCC only captures linear agreement, and as a single summary statistic it can hide systematic patterns of disagreement. It is also undefined (zero denominator) when both sequences are constant with equal means.

## Citation

```bibtex
@article{linConcordanceCorrelationCoefficient1989,
  title = {A {{Concordance Correlation Coefficient}} to {{Evaluate Reproducibility}}},
  author = {Lin, Lawrence I-Kuei},
  year = {1989},
  journal = {Biometrics},
  volume = {45},
  number = {1},
  pages = {255--268},
  publisher = {{International Biometric Society}},
  issn = {0006-341X},
  url = {https://www.jstor.org/stable/2532051},
  doi = {10.2307/2532051}
}
```

## Further References

- Wikipedia: [Concordance correlation coefficient](https://en.wikipedia.org/wiki/Concordance_correlation_coefficient)