fbohorquez committed
Commit dd2058b • Parent(s): 6ee4d1c
update model card README.md

README.md CHANGED
@@ -1,125 +1,47 @@
 ---
 license: apache-2.0
-datasets:
-- dev2bit/es2bash
-language:
-- es
-pipeline_tag: text2text-generation
 tags:
-widget:
-- example_title: ls
-- text: Por favor, cambia al directorio /home/user/project/
-  example_title: cd
-- text: Lista todos los átomos del universo
-  example_title: noCommand
-- text: ls -lh
-  example_title: literal
-- text: file.txt
-  example_title: simple
 ---
-
-<p align="center">
-  <img width="460" height="300" src="https://dev2bit.com/wp-content/themes/lovecraft_child/assets/icons/dev2bit_monitor2.svg">
-</p>
-
-Developed by dev2bit, es2bash-mt5 is a language transformer model that predicts the optimal Bash command for a natural language request in Spanish, providing a natural language interface to Unix operating system commands.
-
-## About the Model
-
-es2bash-mt5 is a fine-tuned model based on mt5-small. It was trained on the dev2bit/es2bash dataset, which specializes in translating natural language in Spanish into Bash commands.
-
-This model is optimized for processing requests related to the commands:
-
-* `cat`
-* `ls`
-* `cd`
-
-## Usage
-
-Below is an example of how to use es2bash-mt5 with the Hugging Face Transformers library:
-
-```python
-from transformers import pipeline
-
-# Load the fine-tuned checkpoint as a translation pipeline
-# (Spanish request in, Bash command out).
-translator = pipeline('translation', model='dev2bit/es2bash-mt5')
-
-request = "listar los archivos en el directorio actual"
-translated = translator(request, max_length=512)
-print(translated[0]['translation_text'])
-```
-
-This will print the Bash command corresponding to the given Spanish request.
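(Editorial note, not part of either card version: the pipeline call removed above can also be sketched with the generic Transformers seq2seq classes, which expose the tokenization and generation steps explicitly. This is a sketch assuming the same `dev2bit/es2bash-mt5` checkpoint; it is not taken from the original README.)

```python
# Sketch: same inference as the pipeline example, but with explicit
# tokenizer/model objects and a manual generate() call.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('dev2bit/es2bash-mt5')
model = AutoModelForSeq2SeqLM.from_pretrained('dev2bit/es2bash-mt5')

request = "listar los archivos en el directorio actual"
inputs = tokenizer(request, return_tensors='pt')   # tokenize the Spanish request
outputs = model.generate(**inputs, max_length=512) # generate the Bash command ids
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```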
-
-## Contributions
-
-We appreciate your contributions! You can help improve es2bash-mt5 in various ways, including:
-
-* Improving the documentation.
-* Providing usage examples.
-It achieves the following results on the evaluation set:
-- Loss: 0.0928

 ## Training procedure

 ### Training hyperparameters

 The following hyperparameters were used during training:
-- learning_rate: 0.
 - train_batch_size: 8
 - eval_batch_size: 1
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs:

 ### Training results
@@ -149,7 +71,10 @@ The following hyperparameters were used during training:
 | 0.123  | 22.0 | 14784 | 0.0938 |
 | 0.113  | 23.0 | 15456 | 0.0931 |
 | 0.1185 | 24.0 | 16128 | 0.0929 |
-| 0.1125 | 25.0 | 16800 | 0.

 ### Framework versions

 ---
 license: apache-2.0
 tags:
+- generated_from_trainer
+datasets:
+- es2bash
+model-index:
+- name: es2bash-mt5
+  results: []
 ---

+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->

+# es2bash-mt5

+This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the es2bash dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.0919
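(Editorial note: the auto-generated card drops the usage snippet that the previous version carried. A minimal sketch, assuming the `dev2bit/es2bash-mt5` checkpoint id from the old card and the generic `text2text-generation` pipeline task; neither appears in the generated card itself.)

```python
from transformers import pipeline

# Assumed usage, carried over from the previous card version:
# a Spanish request in, the predicted Bash command out.
generator = pipeline('text2text-generation', model='dev2bit/es2bash-mt5')

request = "listar los archivos en el directorio actual"
result = generator(request, max_length=512)
print(result[0]['generated_text'])
```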

+## Model description

+More information needed

+## Intended uses & limitations

+More information needed

+## Training and evaluation data

+More information needed

 ## Training procedure

 ### Training hyperparameters

 The following hyperparameters were used during training:
+- learning_rate: 0.1
 - train_batch_size: 8
 - eval_batch_size: 1
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
+- num_epochs: 28

 ### Training results

 | 0.123  | 22.0 | 14784 | 0.0938 |
 | 0.113  | 23.0 | 15456 | 0.0931 |
 | 0.1185 | 24.0 | 16128 | 0.0929 |
+| 0.1125 | 25.0 | 16800 | 0.0927 |
+| 0.1213 | 26.0 | 17472 | 0.0925 |
+| 0.1153 | 27.0 | 18144 | 0.0921 |
+| 0.1134 | 28.0 | 18816 | 0.0919 |

 ### Framework versions