---
title: README
emoji: 🤗
colorFrom: green
colorTo: purple
sdk: static
pinned: false
tags:
- evaluate
---

🤗 Evaluate provides access to a wide range of evaluation tools. It covers a range of modalities such as text, computer vision, audio, etc., as well as tools to evaluate models or datasets.

It has three types of evaluations:

- **Comparison**: used to compare the performance of two or more models on a single test dataset, e.g. by comparing their predictions to ground truth labels and computing their agreement -- covered in this Space.
- **Measurement**: for gaining more insights on datasets and model predictions based on their properties and characteristics -- covered in the [Evaluate Measurement](https://huggingface.co/evaluate-measurement) Spaces.
- **Metric**: measures the performance of a model on a given dataset, usually by comparing the model's predictions to some ground truth labels -- covered in the [Evaluate Metric](https://huggingface.co/evaluate-metric) Spaces.

All three types of evaluation supported by the 🤗 Evaluate library are meant to be mutually complementary, and help our community carry out more mindful and responsible evaluation!
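
As a quick illustration, all three types can be loaded through the same `evaluate.load` interface, with `module_type` selecting a comparison or measurement. The sketch below assumes the `mcnemar` comparison, `word_length` measurement, and `accuracy` metric modules; see the respective Spaces for the full list of available modules and their exact arguments.

```python
import evaluate

# Comparison: agreement between two models' predictions on the same references
mcnemar = evaluate.load("mcnemar", module_type="comparison")
print(mcnemar.compute(
    predictions1=[0, 1, 1, 0],
    predictions2=[1, 1, 1, 0],
    references=[1, 1, 0, 0],
))

# Measurement: properties of the data itself, no model involved
word_length = evaluate.load("word_length", module_type="measurement")
print(word_length.compute(data=["hello world", "evaluate models responsibly"]))

# Metric: model predictions compared to ground truth labels
accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=[0, 1, 1, 0], references=[1, 1, 1, 0]))
```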