Bitext committed on
Commit a91ace9
1 Parent(s): 6f0603c

Update README.md

Files changed (1)
  1. README.md +11 -11
README.md CHANGED
@@ -1,22 +1,22 @@
 ---
 license: apache-2.0
 tags:
-  - axolotl
-  - generated_from_trainer
-  - text-generation-inference
+  - axolotl
+  - generated_from_trainer
+  - text-generation-inference
 base_model: mistralai/Mistral-7B-Instruct-v0.2
 model_type: mistral
 pipeline_tag: text-generation
 model-index:
-  - name: Mistral-7B-Retail-v2
-    results: []
+  - name: Mistral-7B-Retail-v1
+    results: []
 ---
 
-# Mistral-7B-Retail-v2
+# Mistral-7B-Retail-v1
 
 ## Model Description
 
-This model, named "Mistral-7B-Retail-v2," is a specially adjusted version of the [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). It is fine-tuned to manage questions and provide answers related to retail services.
+This model, named "Mistral-7B-Retail-v1," is a specially adjusted version of the [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). It is fine-tuned to manage questions and provide answers related to retail services.
 
 ## Intended Use
 
@@ -28,8 +28,8 @@ This model, named "Mistral-7B-Retail-v2," is a specially adjusted version of the
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model = AutoModelForCausalLM.from_pretrained("bitext-llm/Mistral-7B-Retail-v2")
-tokenizer = AutoTokenizer.from_pretrained("bitext-llm/Mistral-7B-Retail-v2")
+model = AutoModelForCausalLM.from_pretrained("bitext-llm/Mistral-7B-Retail-v1")
+tokenizer = AutoTokenizer.from_pretrained("bitext-llm/Mistral-7B-Retail-v1")
 
 inputs = tokenizer("<s>[INST] How can I return a purchased item? [/INST]", return_tensors="pt")
 outputs = model.generate(inputs['input_ids'], max_length=50)
@@ -38,7 +38,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 ## Model Architecture
 
-The "Mistral-7B-Retail-v2" uses the `MistralForCausalLM` structure with a `LlamaTokenizer`. It maintains the setup of the base model but is enhanced to better respond to retail-related questions.
+The "Mistral-7B-Retail-v1" uses the `MistralForCausalLM` structure with a `LlamaTokenizer`. It maintains the setup of the base model but is enhanced to better respond to retail-related questions.
 
 ## Training Data
 
@@ -78,7 +78,7 @@ This model was developed by the Bitext and trained on infrastructure provided by
 
 ## License
 
-This model, "Mistral-7B-Retail-v2", is licensed under the Apache License 2.0 by Bitext Innovations International, Inc. This open-source license allows for free use, modification, and distribution of the model but requires that proper credit be given to Bitext.
+This model, "Mistral-7B-Retail-v1", is licensed under the Apache License 2.0 by Bitext Innovations International, Inc. This open-source license allows for free use, modification, and distribution of the model but requires that proper credit be given to Bitext.
 
 ### Key Points of the Apache 2.0 License
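The README's usage snippet hardcodes the Mistral instruction format (`<s>[INST] … [/INST]`) directly in the prompt string. As a minimal sketch of what that formatting does, here is a small hypothetical helper (not part of the repository) that builds the same prompt string; note that in practice the tokenizer normally adds the `<s>` BOS token itself, so hardcoding it can duplicate the token:

```python
def build_mistral_prompt(user_message: str) -> str:
    # Mistral-7B-Instruct expects user turns wrapped in [INST] ... [/INST].
    # <s> is the BOS token; it is included here only to mirror the README's
    # snippet, which passes it explicitly in the raw string.
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_mistral_prompt("How can I return a purchased item?")
print(prompt)  # <s>[INST] How can I return a purchased item? [/INST]
```

Passing this string to the tokenizer, as the README does, reproduces the same input the snippet sends to `model.generate`.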