Stevross committed
Commit 4aef56b
1 Parent(s): bfc5b09

Update README.md

Files changed (1):
  1. README.md +16 -8
README.md CHANGED
@@ -6,16 +6,24 @@ tags:
 - gpt
 - llm
 - large language model
-- h2o-llmstudio
+- PAIX.Cloud
-inference: false
+inference: true
-thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
+thumbnail: https://static.wixstatic.com/media/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png/v1/fill/w_192%2Ch_192%2Clg_1%2Cusm_0.66_1.00_0.01/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png
 ---
 # Model Card
 ## Summary
 
-This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
 - Base model: [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
 
+This model, Astrid-7B, is a Mistral-7B model for causal language modeling, designed to generate human-like text.
+It's part of our mission to make AI technology accessible to everyone, focusing on personalization, data privacy, and transparent AI governance.
+Trained in English, it's a versatile tool for a variety of applications.
+It is one of the many models available on our platform; we currently offer 1B and 7B open-source models.
+
+This model was trained by [PAIX.Cloud](https://www.paix.cloud/).
+- Wait list: [Wait List](https://www.paix.cloud/join-waitlist)
 
 ## Usage
 
@@ -37,7 +45,7 @@ Also make sure you are providing your huggingface token to the pipeline if the m
 from transformers import pipeline
 
 generate_text = pipeline(
-    model="Stevross/Astrid-7b-Instruct",
+    model="PAIXAI/Astrid-7b-Instruct",
     torch_dtype="auto",
     trust_remote_code=True,
     use_fast=True,
@@ -75,13 +83,13 @@ from h2oai_pipeline import H2OTextGenerationPipeline
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 tokenizer = AutoTokenizer.from_pretrained(
-    "Stevross/Astrid-7b-Instruct",
+    "PAIXAI/Astrid-7b-Instruct",
     use_fast=True,
     padding_side="left",
     trust_remote_code=True,
 )
 model = AutoModelForCausalLM.from_pretrained(
-    "Stevross/Astrid-7b-Instruct",
+    "PAIXAI/Astrid-7b-Instruct",
     torch_dtype="auto",
     device_map={"": "cuda:0"},
     trust_remote_code=True,
@@ -107,7 +115,7 @@ You may also construct the pipeline from the loaded model and tokenizer yourself
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "Stevross/Astrid-7b-Instruct"  # either local folder or huggingface model name
+model_name = "PAIXAI/Astrid-7b-Instruct"  # either local folder or huggingface model name
 # Important: The prompt needs to be in the same format the model was trained with.
 # You can find an example prompt in the experiment logs.
 prompt = "<|prompt|>How are you?</s><|answer|>"
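
The diff leaves the training prompt format unchanged: `<|prompt|>question</s><|answer|>`. Since getting this format wrong silently degrades output quality, a minimal sketch of building and parsing it with plain string helpers may be useful — note that `build_prompt` and `extract_answer` are illustrative names for this sketch, not functions shipped with the `PAIXAI/Astrid-7b-Instruct` repository:

```python
# Helpers for the prompt format shown in the model card.
# build_prompt/extract_answer are illustrative, not part of the model repo.

PROMPT_TOKEN = "<|prompt|>"
ANSWER_TOKEN = "<|answer|>"
EOS_TOKEN = "</s>"


def build_prompt(question: str) -> str:
    """Wrap a user question in the format the model was trained with."""
    return f"{PROMPT_TOKEN}{question}{EOS_TOKEN}{ANSWER_TOKEN}"


def extract_answer(generated: str) -> str:
    """Pull the answer text out of a fully generated sequence."""
    # Keep everything after the answer token, then drop a trailing EOS.
    answer = generated.split(ANSWER_TOKEN, 1)[-1]
    return answer.removesuffix(EOS_TOKEN).strip()


print(build_prompt("How are you?"))
# -> <|prompt|>How are you?</s><|answer|>
```

The same strings can then be passed to the `generate_text` pipeline from the README, or the parsing step applied to its raw output when `return_full_text` is left enabled.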