anto18671 committed
Commit
63199ca
1 Parent(s): c9d78a1

Upload README.md

Files changed (1): README.md (+19 -9)
README.md CHANGED
@@ -1,6 +1,6 @@
-# Linformer-based Language Model Inference on Hugging Face
+# Linformer-based Language Model Inference
 
-This repository provides the code and configuration needed to use the Linformer-based language model hosted on Hugging Face under the model ID `anto18671/lumenspark`. The model is designed for efficient inference, leveraging the Linformer architecture to handle long sequences with reduced memory and computational overhead.
+This repository provides the code and configuration needed to use the Linformer-based language model. The model is designed for efficient inference, leveraging the Linformer architecture to handle long sequences with reduced memory and computational overhead.
 
 ## Table of Contents
 
@@ -13,11 +13,11 @@ This repository provides the code and configuration needed to use the Linformer-
 
 ## Introduction
 
-This project provides the necessary setup and guidance to perform text generation using the Linformer-based language model, optimized for fast and efficient inference. The model is hosted on Hugging Face and can be loaded directly for tasks like text generation, completion, and other language modeling tasks.
+This project provides the setup and guidance needed to perform text generation with the Linformer-based language model, optimized for fast and efficient inference. The model can be loaded for text generation, completion, and other language modeling tasks.
 
 The model has been trained on large datasets like OpenWebText and BookCorpus, but this repository focuses on inference, allowing you to generate text quickly with minimal resource consumption.
 
-**Note**: This model uses a custom attention mechanism based on Linformer, which is not supported by Hugging Face's `AutoModel` API. Therefore, you must use the provided `LumensparkModel` and `LumensparkConfig` to load the model, as described below.
+**Note**: This model uses a custom attention mechanism based on Linformer. Therefore, you must use the provided `LumensparkModel` and `LumensparkConfig` to load the model.
 
 ## Model Architecture
 
@@ -46,18 +46,28 @@ These parameters can be adjusted during inference to control the nature of the g
 
 ## Usage
 
-You can easily load the model and perform inference using Hugging Face's Transformers library. However, as this model uses Linformer-based attention, you **cannot** use the `AutoModel` APIs. Instead, the `LumensparkModel` and `LumensparkConfig` must be loaded, as shown in the following example:
+You can load the model and perform inference by installing the package via pip. Since this model uses Linformer-based attention, you **must** install the custom package and load the model through `LumensparkModel` and `LumensparkConfig`, as shown in the following example:
+
+### Installation
+
+First, install the package:
+
+```bash
+pip install lumenspark
+```
+
+### Inference Example
 
 ```python
 from lumenspark import LumensparkConfig, LumensparkModel
 from transformers import AutoTokenizer
 
-# Load the configuration and model from Hugging Face
-config = LumensparkConfig.from_pretrained("anto18671/lumenspark")
-model = LumensparkModel.from_pretrained("anto18671/lumenspark", config=config)
+# Load the configuration and model
+config = LumensparkConfig.from_pretrained("path/to/your/model/config")
+model = LumensparkModel.from_pretrained("path/to/your/model", config=config)
 
 # Load the tokenizer
-tokenizer = AutoTokenizer.from_pretrained("anto18671/lumenspark")
+tokenizer = AutoTokenizer.from_pretrained("path/to/your/tokenizer")
 
 # Example input text
 input_text = "Once upon a time"
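
The README's usage example stops after preparing `input_text`, and the surrounding text notes that generation parameters can be adjusted to control the generated text. As an illustration only, here is a minimal pure-Python sketch of how two common knobs, temperature and top-k, shape next-token sampling; it is independent of the `lumenspark` package, and the logits values are hypothetical:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, rng=random):
    """Pick a token id from raw logits using temperature and top-k filtering."""
    # Temperature rescales logits: values < 1.0 sharpen the distribution,
    # values > 1.0 flatten it toward uniform.
    scaled = [l / temperature for l in logits]
    # Top-k keeps only the k highest-scoring tokens before sampling.
    indices = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)
    if top_k is not None:
        indices = indices[:top_k]
    # Softmax over the kept tokens (subtract the max for numerical stability).
    m = max(scaled[i] for i in indices)
    weights = [math.exp(scaled[i] - m) for i in indices]
    return rng.choices(indices, weights=weights, k=1)[0]

# Hypothetical logits for a 5-token vocabulary.
logits = [2.0, 1.0, 0.5, -1.0, -3.0]
token = sample_next_token(logits, temperature=0.8, top_k=3)
print(token)  # always one of the top-3 token ids: 0, 1, or 2
```

With `top_k=1` this reduces to greedy decoding; raising the temperature spreads probability mass across the kept tokens, producing more varied text.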