neighborwang committed on
Commit 9c40b31
1 Parent(s): c04275b

Update README.md

Files changed (1)
  1. README.md +6 -36
README.md CHANGED
@@ -1,48 +1,18 @@
 ---
 tags:
-- autotrain
 - text-generation-inference
 - text-generation
 - peft
 library_name: transformers
 base_model: meta-llama/Llama-3.1-8B-Instruct
-widget:
-- messages:
-  - role: user
-    content: What is your favorite condiment?
-license: other
+license: apache-2.0
 datasets:
 - neighborwang/modelica_libraries
+language:
+- en
+pipeline_tag: text2text-generation
 ---
 
-# Model Trained Using AutoTrain
+# Codelica
 
-This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
-
-# Usage
-
-```python
-
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-model_path = "PATH_TO_THIS_REPO"
-
-tokenizer = AutoTokenizer.from_pretrained(model_path)
-model = AutoModelForCausalLM.from_pretrained(
-    model_path,
-    device_map="auto",
-    torch_dtype='auto'
-).eval()
-
-# Prompt content: "hi"
-messages = [
-    {"role": "user", "content": "hi"}
-]
-
-input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
-output_ids = model.generate(input_ids.to('cuda'))
-response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
-
-# Model response: "Hello! How can I assist you today?"
-print(response)
-```
+A cutting-edge LLM designed to empower users of all skill levels to effortlessly model complex systems and unlock the full potential of Modelica.