Commit 979b5c9 by Stevross
Parent(s): e92d368

Update README.md

Files changed (1):
  1. README.md +16 -9
README.md CHANGED
@@ -6,15 +6,22 @@ tags:
 - gpt
 - llm
 - large language model
-- h2o-llmstudio
-inference: false
-thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
+- PAIX.Cloud
+inference: true
+thumbnail: https://static.wixstatic.com/media/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png/v1/fill/w_192%2Ch_192%2Clg_1%2Cusm_0.66_1.00_0.01/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png
 ---
 # Model Card
 ## Summary
 
-This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
-- Base model: [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
+- Base model: [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
+
+This model, Astrid-7B-Assistant, is a Mistral-7B base model for causal language modeling, designed to generate human-like text.
+It's part of our mission to make AI technology accessible to everyone, focusing on personalization, data privacy, and transparent AI governance.
+Trained in English, it's a versatile tool for a variety of applications.
+This model is one of the many models available on our platform; we currently offer 1B and 7B open-source models.
+
+This model was trained by [PAIX.Cloud](https://www.paix.cloud/).
+- Wait list: [Wait List](https://www.paix.cloud/join-waitlist)
 
 
 ## Usage
@@ -37,7 +44,7 @@ Also make sure you are providing your huggingface token to the pipeline if the m
 from transformers import pipeline
 
 generate_text = pipeline(
-    model="Stevross/Astrid-7B-OpenOrce-3",
+    model="PAIXAI/Astrid-Mistral-7B",
     torch_dtype="auto",
     trust_remote_code=True,
     use_fast=True,
@@ -75,13 +82,13 @@ from h2oai_pipeline import H2OTextGenerationPipeline
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 tokenizer = AutoTokenizer.from_pretrained(
-    "Stevross/Astrid-7B-OpenOrce-3",
+    "PAIXAI/Astrid-Mistral-7B",
     use_fast=True,
     padding_side="left",
     trust_remote_code=True,
 )
 model = AutoModelForCausalLM.from_pretrained(
-    "Stevross/Astrid-7B-OpenOrce-3",
+    "PAIXAI/Astrid-Mistral-7B",
     torch_dtype="auto",
     device_map={"": "cuda:0"},
     trust_remote_code=True,
@@ -107,7 +114,7 @@ You may also construct the pipeline from the loaded model and tokenizer yoursel
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "Stevross/Astrid-7B-OpenOrce-3"  # either local folder or huggingface model name
+model_name = "PAIXAI/Astrid-Mistral-7B"  # either local folder or huggingface model name
 # Important: The prompt needs to be in the same format the model was trained with.
 # You can find an example prompt in the experiment logs.
 prompt = "<|prompt|>How are you?<|im_end|><|answer|>"
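For readers assembling the updated card's snippets into one script, here is a minimal sketch combining the new repo id, the pipeline arguments, and the `<|prompt|>...<|im_end|><|answer|>` prompt template from the diff. The `build_prompt`/`generate` helper names and `max_new_tokens=128` are illustrative assumptions, not part of the card; the heavy model download only happens when `generate` is actually called.

```python
def build_prompt(question: str) -> str:
    # Prompt template from the model card; the special tokens must match
    # the format the model was trained with.
    return f"<|prompt|>{question}<|im_end|><|answer|>"


def generate(question: str, repo_id: str = "PAIXAI/Astrid-Mistral-7B") -> str:
    # Heavy step: downloads the ~7B checkpoint on first use and
    # requires a CUDA device (device_map pins everything to cuda:0).
    from transformers import pipeline

    generate_text = pipeline(
        model=repo_id,
        torch_dtype="auto",
        trust_remote_code=True,
        use_fast=True,
        device_map={"": "cuda:0"},
    )
    out = generate_text(build_prompt(question), max_new_tokens=128)
    return out[0]["generated_text"]


print(build_prompt("How are you?"))
```

Printing the built prompt yields `<|prompt|>How are you?<|im_end|><|answer|>`, which matches the example prompt shown in the card.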