---
library_name: transformers
license: cc-by-nc-4.0
base_model:
- upstage/SOLAR-10.7B-v1.0
language:
- en
pipeline_tag: text-generation
---

### **Loading the Model**

Use the following Python code to load the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nayohan/corningQA-solar-10.7b-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "nayohan/corningQA-solar-10.7b-v1.0",
    device_map="auto",
    torch_dtype=torch.float16,
)
```
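
If GPU memory is limited, the model can likely also be loaded in 4-bit. This is a sketch, not part of the original card; it assumes the `bitsandbytes` package is installed:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hypothetical memory-saving variant: 4-bit NF4 quantization via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "nayohan/corningQA-solar-10.7b-v1.0",
    device_map="auto",
    quantization_config=bnb_config,
)
```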

### **Generating Text**

To generate text, use the following Python code:

````python
text = """You will be shown dialogues between Speaker 1 and Speaker 2. Please read and understand given Dialogue Session, then complete the task under the guidance of Task Introduction.

```
Context:
{context}
```

```
Dialogue Session:
{dialogues}
```

```
Task Introduction:
After reading the Dialogue Session, please create an appropriate response in the parts marked ###.
```

Task Result:
"""

# Replace {context} and {dialogues} with your own passage and dialogue history,
# e.g. text = text.format(context=..., dialogues=...)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
````
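
As a minimal sketch (not from the original card), continuing from the code above, the placeholders can be filled and only the newly generated tokens decoded; the `context` and `dialogues` values below are hypothetical:

```python
# Hypothetical example values; substitute your own passage and dialogue history.
context = "Shadowing has severe detrimental effects on millimeter-wave links."
dialogues = (
    "Speaker 1: How does shadowing affect millimeter-wave channel models?\n"
    "Speaker 2: ###"
)
prompt = text.format(context=context, dialogues=dialogues)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)

# Slice off the prompt tokens so only the generated answer is printed.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```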

Below is an example from the fine-tuning data, showing the full prompt format and the expected output:

```json
{"input":"You will be shown a dialogues between Speaker 1 and Speaker 2. Please read Context and understand given Dialogue Session, then complete the task under the guidance of Task Introduction.\n\n```\nContext:\npropagation mechanism with line-of-sight, \ufb01rst-order re\ufb02ec-tions and scattering becoming much more dominant. Thismeans shadowing will have severe detrimental effects on theaverage received power. Indeed, channel models developedfor millimeter-wave include a third state, in addition toline-of-sight and non-line-of-sight, to explicitly model an out-age event when received power is too weak to establish alink [3]. Although adaptive beam steering techniques can```\n\n```\nDialogue Session:\nSpeaker 1: How does shadowing affect millimeter-wave channel models?\nSpeaker 2: Shadowing has severe detrimental effects on the average received power and can cause an outage event, leading to a third state in channel models to model this event.\nSpeaker 1: So, what are some techniques that can be used to mitigate the effects of shadowing?\nSpeaker 2: Well, as I mentioned earlier, adaptive beam steering is one approach. By constantly adjusting the direction of the transmit and receive beams, it is possible to maintain a strong enough signal to overcome the effects of shadowing.\nSpeaker 1: That's interesting. But in a highly dynamic environment, like a crowded city, would this technique still be effective?\nSpeaker 2: Good point. In that case, another technique that can be used is diversity reception. By using multiple antennas and taking advantage of the different paths that signals can take, it is possible to improve the signal quality and overcome the effects of shadowing.\nSpeaker 1: Ah, I see. And how does this compare to traditional cellular systems where shadowing is also a concern?\nSpeaker 2: That's a great question. In traditional cellular systems, shadowing is not as severe due to the lower frequencies used. However, in millimeter-wave systems, the wavelength is much smaller, making them more susceptible to shadowing and requiring these specialized techniques for mitigation.\nSpeaker 1: I see. Well, it looks like careful consideration and implementation of these techniques will be crucial for successful millimeter-wave communication. Thank you for explaining this to me.\nSpeaker 2: ###\n```\n\n```\nTask Introduction:\nAfter reading the Dialogue Session, please create an appropriate response in the parts marked ###.\n```\n\nTask Result:","output":" No problem, happy to share my knowledge on the subject.\n"}
```

### **License**
- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0): apache-2.0
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0): cc-by-nc-4.0
- Since some non-commercial datasets, such as Alpaca, are used for fine-tuning, we release the fine-tuned model under cc-by-nc-4.0.

```bibtex
@misc{kim2023solar,
      title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling},
      author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
      year={2023},
      eprint={2312.15166},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```