jdev8 committed on
Commit 3706ed7
1 Parent(s): 2cc360c

Update README.md

Files changed (1)
  1. README.md +3 -6
README.md CHANGED
@@ -35,14 +35,11 @@ configs:
 ---
 # 🏟️ Long Code Arena (Module Summarization)
 
-This is the data for Module Summarization benchmark as part of Long Code Arena provided by Jetbrains Research.
+This is the data for Module Summarization benchmark as part of [Long Code Arena](https://huggingface.co/spaces/JetBrains-Research/long-code-arena) provided by Jetbrains Research.
 
 ## How-to
 
-1. List all the available configs via [`datasets.get_dataset_config_names`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.get_dataset_config_names) and choose an appropriate one
-
-
-3. Load the data via [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):
+Load the data via [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):
 
 ```
 from datasets import load_dataset
@@ -78,4 +75,4 @@ The data points could be removed upon request
 
 To compare predicted and ground truth metrics we introduce the new metric based on LLM as an assessor. Our approach involves feeding the LLM with relevant code and two versions of documentation: the ground truth and the model-generated text. The LLM evaluates which documentation better explains and fits the code. To mitigate variance and potential ordering effects in model responses, we calculate the probability that the generated documentation is superior by averaging the results of two queries:
 
-For more details about metric implementation go to [our github repository](https://github.com/JetBrains-Research/lca-baselines/tree/module2text)
+For more details about metric implementation go to [our github repository](https://github.com/JetBrains-Research/lca-baselines)