lvwerra committed
Commit c3d50eb • Parent: 558b491

Update README.md

Files changed (1): README.md +15 -4
README.md CHANGED
@@ -1,10 +1,21 @@
  ---
  title: README
- emoji: 🦀
- colorFrom: gray
- colorTo: green
+ emoji: 🤗
+ colorFrom: green
+ colorTo: purple
  sdk: static
  pinned: false
+ tags:
+ - evaluate
+ - comparison
  ---

- Edit this `README.md` markdown file to author your organization card 🔥
+ 🤗 Evaluate provides access to a wide range of evaluation tools. It covers modalities such as text, computer vision, and audio, and includes tools to evaluate both models and datasets.
+
+ It has three types of evaluations:
+ - **Comparison**: used to compare the performance of two or more models on a single test dataset, e.g. by comparing their predictions to ground-truth labels and computing their agreement -- covered in this Space.
+ - **Measurement**: for gaining more insight into datasets and model predictions based on their properties and characteristics -- covered in the [Evaluate Measurement](https://huggingface.co/evaluate-measurement) Spaces.
+ - **Metric**: measures the performance of a model on a given dataset, usually by comparing the model's predictions to ground-truth labels -- covered in the [Evaluate Metric](https://huggingface.co/evaluate-metric) Spaces.
+
+
+ All three types of evaluation supported by the 🤗 Evaluate library are meant to be mutually complementary and to help our community carry out more mindful and responsible evaluation!
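+
+ A minimal usage sketch of all three evaluation types (assuming the `evaluate` library is installed; `accuracy`, `mcnemar`, and `label_distribution` are used here as one illustrative module of each type):
+
+ ```python
+ import evaluate
+
+ # Metric: score a single model's predictions against ground-truth labels
+ accuracy = evaluate.load("accuracy")
+ print(accuracy.compute(predictions=[1, 0, 0, 1], references=[1, 1, 0, 1]))
+ # {'accuracy': 0.75}
+
+ # Comparison: test whether two models' predictions on the same test set
+ # differ significantly (McNemar's test)
+ mcnemar = evaluate.load("mcnemar", module_type="comparison")
+ print(mcnemar.compute(
+     predictions1=[1, 0, 0, 1],
+     predictions2=[1, 1, 0, 1],
+     references=[1, 1, 0, 1],
+ ))
+
+ # Measurement: inspect a property of the data itself, e.g. its label balance
+ label_distribution = evaluate.load("label_distribution", module_type="measurement")
+ print(label_distribution.compute(data=[1, 0, 1, 1]))
+ ```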