---
license: mit
---
# Conversational Language Model Interface using FastText
This project provides a Command Line Interface (CLI) for interacting with a FastText language model, enabling users to generate text sequences based on their input. The script allows customization of parameters such as temperature, input text, output sequence length, and model file path.
## Installation
Before running the script, ensure you have Python installed on your system. Additionally, you'll need to install the FastText library:
```bash
pip install fasttext
```
## Colab
[Google Colab Notebook](https://colab.research.google.com/drive/1jX1NShX7MzJnuL2whHNOA39Xu-meQ1ap?usp=sharing)
## Usage
To use the script, you should first obtain or train a FastText model. Place the model file (usually with a `.bin` extension) in a known directory.
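The script treats generation as repeated classification: each FastText label stands for the next word. For reference, a compatible model could be trained on a file where every line pairs a `__label__` for the next word with its preceding context. This is a minimal sketch only; the file name, hyperparameters, and data format below are assumptions, not part of this project:
```python
import fasttext

# Hypothetical training file "train.txt", one example per line, e.g.:
#   __label__world hello
#   __label__are hello world how
# Each line's label is the word that should follow the context text.
model = fasttext.train_supervised(
    input="train.txt",  # assumed path to the prepared training data
    lr=0.1,             # illustrative hyperparameters
    epoch=25,
    wordNgrams=3,       # word n-grams so the context's word order matters
)
model.save_model("model.bin")  # produces the .bin file this script loads
```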
The script can be executed with various command-line arguments to specify the behavior:
```python
import argparse

import fasttext
import numpy as np


def apply_repetition_penalty(labels, probabilities, used_labels, penalty_scale=1.9):
    """
    Applies a repetition penalty to reduce the probability of already used labels.

    :param labels: List of possible labels.
    :param probabilities: Corresponding array of probabilities.
    :param used_labels: Set of labels that have already been used.
    :param penalty_scale: Scale of the penalty to be applied.
    :return: Adjusted probabilities.
    """
    adjusted_probabilities = probabilities.copy()
    for i, label in enumerate(labels):
        if label in used_labels:
            adjusted_probabilities[i] /= penalty_scale
    # Normalize the probabilities so they sum to 1 again
    adjusted_probabilities /= adjusted_probabilities.sum()
    return adjusted_probabilities


def predict_sequence(model, text, sequence_length=20, temperature=0.5, penalty_scale=1.9):
    """
    Generates a sequence of labels using the FastText model with repetition penalty.

    :param model: Loaded FastText model.
    :param text: Initial text to start the prediction from.
    :param sequence_length: Desired length of the sequence.
    :param temperature: Temperature for sampling.
    :param penalty_scale: Scale of repetition penalty.
    :return: List of predicted labels.
    """
    used_labels = set()
    sequence = []
    for _ in range(sequence_length):
        # Predict the top k most probable labels
        labels, probabilities = model.predict(text, k=40)
        labels = [label.replace('__label__', '') for label in labels]
        probabilities = np.array(probabilities)
        # Apply the temperature: values below 1 sharpen the distribution,
        # values above 1 flatten it (more diverse output)
        probabilities = probabilities ** (1.0 / temperature)
        probabilities /= probabilities.sum()
        # Adjust the probabilities with the repetition penalty
        probabilities = apply_repetition_penalty(labels, probabilities, used_labels, penalty_scale)
        # Sample according to the adjusted probabilities
        label_index = np.random.choice(range(len(labels)), p=probabilities)
        chosen_label = labels[label_index]
        # Add the chosen label to the sequence and to the set of used labels
        sequence.append(chosen_label)
        used_labels.add(chosen_label)
        # Append the chosen label to the text for the next prediction
        text += ' ' + chosen_label
    return sequence


def generate_response(model, input_text, sequence_length=512, temperature=0.5, penalty_scale=1.9):
    generated_sequence = predict_sequence(model, input_text, sequence_length, temperature, penalty_scale)
    return ' '.join(generated_sequence)


def main():
    parser = argparse.ArgumentParser(description="Run the language model with specified parameters.")
    parser.add_argument('-t', '--temperature', type=float, default=0.5, help='Temperature for sampling.')
    parser.add_argument('-f', '--file', type=str, help='File containing input text.')
    parser.add_argument('-p', '--text', type=str, help='Direct input text.')
    parser.add_argument('-n', '--length', type=int, default=50, help='Length of the generated sequence.')
    parser.add_argument('-m', '--model', type=str, required=True, help='Path to the FastText model file.')
    args = parser.parse_args()

    # Load the model
    model = fasttext.load_model(args.model)

    input_text = ''
    if args.file:
        with open(args.file, 'r') as file:
            # fasttext's predict() handles one line at a time, so drop newlines
            input_text = file.read().replace('\n', ' ')
    elif args.text:
        input_text = args.text
    else:
        print("No input text provided. Please use -f to specify a file or -p for direct text input.")
        return

    # Generate and print the response
    response = generate_response(model, input_text + " [RESPONSE]",
                                 sequence_length=args.length, temperature=args.temperature)
    print("\nResponse:")
    print(response)


if __name__ == "__main__":
    main()
```
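To see what `apply_repetition_penalty` does in isolation, here is a small worked example with made-up numbers:
```python
import numpy as np

labels = ['the', 'a', 'cat']
probabilities = np.array([0.5, 0.3, 0.2])
used_labels = {'the'}  # 'the' has already been generated

adjusted = probabilities.copy()
adjusted[0] /= 1.9           # divide the repeated label's probability by the penalty scale
adjusted /= adjusted.sum()   # renormalize to a valid distribution
print(adjusted.round(3))     # [0.345 0.393 0.262] -> 'a' now outranks 'the'
```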
```bash
python conversation_app.py -t TEMPERATURE [-f FILE | -p TEXT] -n LENGTH -m MODEL_PATH
```
- `-t TEMPERATURE` or `--temperature TEMPERATURE`: Sets the temperature for sampling. A higher temperature produces more diverse output (see the sketch after this list). Default is 0.5.
- `-f FILE` or `--file FILE`: Specifies a path to a file containing input text. The script will read this file and use its contents as input.
- `-p TEXT` or `--text TEXT`: Directly provide the input text as a string.
- `-n LENGTH` or `--length LENGTH`: Sets the number of words to generate. Default is 50.
- `-m MODEL_PATH` or `--model MODEL_PATH`: The path to the FastText model file (required).
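As a concrete illustration of the temperature option, the sketch below applies the same scaling the script uses to a made-up three-label distribution:
```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])
for T in (0.5, 1.0, 2.0):
    q = p ** (1.0 / T)  # temperature scaling, as in predict_sequence
    q /= q.sum()
    print(T, q.round(3))
# T=0.5 sharpens the distribution (~[0.907, 0.074, 0.019]);
# T=2.0 flattens it (~[0.523, 0.279, 0.198]), so samples are more diverse.
```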
### Example
```bash
python conversation_app.py -t 0.7 -p "What is the future of AI?" -n 40 -m /path/to/model.bin
```
This command sets the temperature to 0.7, uses the provided question as input, generates a 40-word response, and specifies the model file path.
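The same functions can also be used from Python directly. A minimal sketch, assuming the script is saved as `conversation_app.py` and a trained model exists at `model.bin`:
```python
import fasttext
from conversation_app import generate_response  # assumes the script is on the import path

model = fasttext.load_model("model.bin")  # assumed model location
reply = generate_response(
    model,
    "What is the future of AI? [RESPONSE]",  # the CLI appends this marker automatically
    sequence_length=40,
    temperature=0.7,
)
print(reply)
```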
## Note
- The script's output depends on the quality and training of the FastText model used.
- Ensure the specified model file path and input file path (if used) are correct.