FINGU-AI committed
Commit 1b335e7
Parent: 4f985e9

Update README.md

Files changed (1): README.md (+64 −0)
README.md CHANGED
@@ -8,3 +8,67 @@ model-index:
- name: Qwen-Orpo-v1
  results: []
---

## FINGU-AI/Qwen-Orpo-v1

### Overview

The FINGU-AI/Qwen-Orpo-v1 model offers a specialized curriculum tailored to English, Korean, and Japanese speakers interested in finance, investment, and legal frameworks. It aims to enhance language proficiency while providing insights into global financial markets and regulatory landscapes.

### Key Features

- **Global Perspective**: Explores diverse financial markets and regulations across English, Korean, and Japanese contexts.
- **Language Proficiency**: Enhances language skills in English, Korean, and Japanese for effective communication in finance and legal domains.
- **Career Advancement**: Equips learners with knowledge and skills for roles in investment banking, corporate finance, asset management, and regulatory compliance.

### Model Information

- **Model Name**: FINGU-AI/Qwen-Orpo-v1
- **Description**: A FINGU-AI model trained on data in multiple languages, including English.
- **Checkpoint**: FINGU-AI/Qwen-Orpo-v1
- **Author**: Grinda AI Inc.
- **License**: Apache-2.0

### Training Details

- **Fine-Tuning**: The model was fine-tuned from the base model Qwen/Qwen1.5-0.5B-Chat via supervised fine-tuning using the TRL library and Hugging Face Transformers (an illustrative sketch follows below).
- **Dataset**: The fine-tuning dataset consisted of 28k training samples.
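
For illustration, the sketch below shows what such a supervised fine-tuning run could look like with TRL's `SFTTrainer` (assuming a TRL 0.9+-style API). This is not the actual training script: the dataset file, text column name, sequence length, and output directory are assumptions.

```python
# Illustrative sketch only: mirrors the supervised fine-tuning setup described
# above, but the dataset path, column name, and hyperparameters are assumed.
from datasets import load_dataset
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

base_id = "Qwen/Qwen1.5-0.5B-Chat"  # base model named in this card
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder dataset: the card only states that ~28k training samples were used.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(
        output_dir="qwen-orpo-v1-sft",
        dataset_text_field="text",  # assumes each sample has a "text" column
        max_seq_length=1024,
    ),
    train_dataset=dataset,
)
trainer.train()
```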

### How to Use

To use the FINGU-AI/Qwen-Orpo-v1 model, you can use the Hugging Face Transformers library. Here's a Python snippet demonstrating how to load the model and generate a response:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = 'FINGU-AI/Qwen-Orpo-v1'

# Load the model in bfloat16 with FlashAttention 2 (requires the flash-attn
# package; omit attn_implementation to use the default attention).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer)  # prints tokens to stdout as they are generated
model.to('cuda')

messages = [
    {"role": "system", "content": "You are a finance specialist; help the user and provide accurate information."},
    {"role": "user", "content": "What are the best approaches to prevent loss?"},
]

# Apply the chat template and move the prompt tensors to the GPU.
tokenized_chat = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

generation_params = {
    'max_new_tokens': 1000,
    'use_cache': True,
    'do_sample': True,   # sample rather than greedy-decode
    'temperature': 0.7,
    'top_p': 0.9,
    'top_k': 50,
}

# Generate; the streamer prints the reply incrementally.
outputs = model.generate(tokenized_chat, **generation_params, streamer=streamer)
decoded_outputs = tokenizer.batch_decode(outputs)
```
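
Note that `attn_implementation="flash_attention_2"` requires the separate `flash-attn` package and a supported NVIDIA GPU; if either is unavailable, simply omit that argument to fall back to the default attention implementation. The sampling settings (`temperature=0.7`, `top_p=0.9`, `top_k=50`) favor varied answers; lower the temperature or set `do_sample: False` for more deterministic output.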