Victoria Oberascher committed
Commit
e1aaeed
1 Parent(s): 490df31

fix typo in readme

Files changed (1)
  1. README.md +24 -29
README.md CHANGED
@@ -10,10 +10,6 @@ app_file: app.py
 pinned: false
 ---
 
-# Metric Card for horizon--metrics
-
-**_Module Card Instructions:_** _Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples._
-
 ## SEA-AI/horizon-metrics
 
 This huggingface metric uses `seametrics.horizon.HorizonMetrics` under the hood to calculate the slope and midpoint errors.
@@ -37,41 +33,40 @@ To get started with horizon-metrics, make sure you have the necessary dependencies
 This is how you can quickly evaluate your horizon prediction models using SEA-AI/horizon-metrics:
 
 ```python
-import evaluate
-
-ground_truth_points = [[[0.0, 0.5384765625], [1.0, 0.4931640625]],
-                       [[0.0, 0.53796875], [1.0, 0.4928515625]],
-                       [[0.0, 0.5374609375], [1.0, 0.4925390625]],
-                       [[0.0, 0.536953125], [1.0, 0.4922265625]],
-                       [[0.0, 0.5364453125], [1.0, 0.4919140625]]]
-
-prediction_points = [[[0.0, 0.5428930956049597], [1.0, 0.4642497615378973]],
-                     [[0.0, 0.5428930956049597], [1.0, 0.4642497615378973]],
-                     [[0.0, 0.523573113510805], [1.0, 0.47642688648919496]],
-                     [[0.0, 0.5200016849393765], [1.0, 0.4728554579177664]],
-                     [[0.0, 0.523573113510805], [1.0, 0.47642688648919496]]]
-
-module = evaluate.load("SEA-AI/horizon-metrics")
-module.add(predictions=ground_truth_points, references=prediction_points)
-result = module.compute()
-
-print(result)
+import evaluate
+
+ground_truth_points = [[[0.0, 0.5384765625], [1.0, 0.4931640625]],
+                       [[0.0, 0.53796875], [1.0, 0.4928515625]],
+                       [[0.0, 0.5374609375], [1.0, 0.4925390625]],
+                       [[0.0, 0.536953125], [1.0, 0.4922265625]],
+                       [[0.0, 0.5364453125], [1.0, 0.4919140625]]]
+
+prediction_points = [[[0.0, 0.5428930956049597], [1.0, 0.4642497615378973]],
+                     [[0.0, 0.5428930956049597], [1.0, 0.4642497615378973]],
+                     [[0.0, 0.523573113510805], [1.0, 0.47642688648919496]],
+                     [[0.0, 0.5200016849393765], [1.0, 0.4728554579177664]],
+                     [[0.0, 0.523573113510805], [1.0, 0.47642688648919496]]]
+
+module = evaluate.load("SEA-AI/horizon-metrics")
+module.add(predictions=ground_truth_points, references=prediction_points)
+module.compute()
 ```
 
 This will output the evaluation metrics for your horizon prediction model:
 
 ```python
 {
  'average_slope_error': 0.014823194839790999,
  'average_midpoint_error': 0.014285714285714301,
  'stddev_slope_error': 0.01519178791378349,
  'stddev_midpoint_error': 0.0022661781575342445,
  'max_slope_error': 0.033526146567062376,
  'max_midpoint_error': 0.018161272321428612,
  'num_slope_error_jumps': 1,
  'num_midpoint_error_jumps': 1
 }
 ```
 
 ### Output Values
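
For context on what the diff above evaluates: slope and midpoint errors can be approximated directly from the horizon endpoints. The sketch below is an illustration only, assuming each horizon is an endpoint pair `[[x0, y0], [x1, y1]]` in normalized image coordinates and that errors are plain absolute differences; the actual `seametrics.horizon.HorizonMetrics` implementation may normalize differently (e.g. via angles), so its numbers need not match this sketch exactly.

```python
# Illustrative sketch only: a horizon is assumed to be two endpoints
# [[x0, y0], [x1, y1]]. The real seametrics.horizon.HorizonMetrics
# may compute slope and midpoint errors differently.

def slope_and_midpoint(points):
    """Return (slope, midpoint_y) for one endpoint pair."""
    (x0, y0), (x1, y1) = points
    slope = (y1 - y0) / (x1 - x0)
    midpoint_y = (y0 + y1) / 2.0
    return slope, midpoint_y

def horizon_errors(ground_truth, predictions):
    """Per-frame absolute slope and midpoint-y differences."""
    slope_errors, midpoint_errors = [], []
    for gt, pred in zip(ground_truth, predictions):
        gt_slope, gt_mid = slope_and_midpoint(gt)
        pr_slope, pr_mid = slope_and_midpoint(pred)
        slope_errors.append(abs(gt_slope - pr_slope))
        midpoint_errors.append(abs(gt_mid - pr_mid))
    return slope_errors, midpoint_errors

# First frame from the README example above
gt = [[[0.0, 0.5384765625], [1.0, 0.4931640625]]]
pred = [[[0.0, 0.5428930956049597], [1.0, 0.4642497615378973]]]
slope_errs, midpoint_errs = horizon_errors(gt, pred)
```

Summary statistics such as `average_slope_error` or `max_midpoint_error` would then be reductions over these per-frame lists.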