asiansoul committed on
Commit b104739
1 Parent(s): 58f34f1

Update README.md

Files changed (1):
  1. README.md +1 -2
README.md CHANGED
@@ -40,13 +40,12 @@ I'll find the answer for you.
 
 Soon: real PoSE to extend Llama's context length to 64k using my merge method, [reborn](https://medium.com/@puffanddmx82/reborn-elevating-model-adaptation-with-merging-for-superior-nlp-performance-f604e8e307b2).
 
-I have found that most merged models out there so far do not actually have 64k in their configs. I will improve it in the next merge.
+I have found that most merged models out there so far do not actually have 64k in their configs. I will improve it in the next merge with my reborn method. If that doesn't work, I guess I'll have to find another way, right?
 
 256k is not possible. My computer is running out of memory.
 
 If you support me, I will try it on a machine with maximum specifications, and I would also like to run thorough tests by building a network with high-capacity traffic and 10G speeds for you.
 
-### Merge Method
-
 ### Merge Method
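The claim above, that many merged models do not actually advertise a 64k context, can be verified by inspecting `max_position_embeddings` in a repo's `config.json`. A minimal sketch (the config contents here are hypothetical, illustrative values, not from any specific merged model):

```python
import json

# Hypothetical config.json excerpt from a merged Llama-style repo
# (illustrative values only).
config_text = '{"model_type": "llama", "max_position_embeddings": 8192}'
config = json.loads(config_text)

# A model only claims a 64k context if max_position_embeddings says so.
ctx = config.get("max_position_embeddings", 0)
print(f"context length in config: {ctx}")
print(f"advertises 64k context: {ctx >= 64 * 1024}")
```

For a real repository, the same check can be run against the downloaded `config.json` before assuming the merge carried the extended context over.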