altomek committed on
Commit
3e09482
1 Parent(s): 1a566df

Update README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -12,11 +12,12 @@ pipeline_tag: text-generation
 
 ## CodeRosa-70B-AB1
 
-I wanted a model that could serve as an everyday helpful companion with some coding skills. The idea was that Llama's censorship implies a deeper understanding of human emotions, which seems to have been integrated into this merge.
+I wanted a model that could serve as an everyday helpful companion with some coding skills.
+The idea was that Llama's censorship implies a deeper understanding of human emotions, and I wanted that part of Llama to carry over into this merge.
 
-The model adopted a task-oriented approach from CodeLlama Python and thus requires precise prompting. It can produce longer texts as well as shorter responses. It tends to avoid happy endings, instead surprising with open-ended scenarios that invite further interaction. It prefers spelling numbers out over writing them as digits.
+The model adopted a task-oriented approach from CodeLlama Python and thus requires precise prompting. It can produce longer texts as well as shorter responses. It tends to avoid happy endings, instead surprising with open-ended scenarios that invite further interaction. It prefers spelling numbers out over writing them as digits, but YMMV.
 
-This model is for personal use, made for myself as an experiment. I would like to make a next iteration of this model in the future. The mission is the same. A very nice bot, able to talk about a variety of topics in a very emotional way, with some kick for programming and the ability to teach some things. Besides all this, to be a good text summarizer, ideally with Polish as an available option. These are the goals. Did I succeed with this merge? I have to experiment with the two models below more. I like this result; this was an iteration worth publishing even though it is not much tested!
+This model is for personal use, made for myself as an experiment. I would like to make a next iteration of this model in the future. The mission is the same: a very nice bot, able to talk about a variety of topics in a very emotional way, with some kick for programming and the ability to teach some things. Besides all this, to be a good text summarizer, ideally with Polish as an available option. These are the goals. Did I succeed with this merge? I have to experiment with the two models below more. I like this result; this was an iteration worth publishing even though it is not much tested!
 
 ### Ingredients
 
@@ -32,7 +33,7 @@ I use max_seq_len 8K with alpha_value 2.65.
 
 ### Quants
 
-- [3.92](https://huggingface.co/altomek/CodeRosa-70B-AB1-3.92bpw-EXL2) --> 40GB VRAM
+- [3.92bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-3.92bpw-EXL2) --> 40GB VRAM
 
 ### PS
 I welcome your comments about this model.
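The README's numbers (max_seq_len 8K with alpha_value 2.65, and 3.92 bpw paired with ~40GB VRAM) can be sanity-checked with some quick arithmetic. This is a sketch under stated assumptions: it uses the common NTK-aware RoPE formula `base' = base * alpha^(d/(d-2))` with Llama's default base 10000 and head dim 128 (the exact scaling exllama applies may differ), and a plain bits-per-weight size estimate; the helper names are made up for illustration.

```python
# Hedged sketch of the arithmetic behind the README's context and VRAM figures.
# Assumptions (not taken from the commit itself): NTK-aware RoPE alpha scaling
# and a naive bits-per-weight size estimate that ignores KV cache and overhead.

def scaled_rope_base(alpha: float, base: float = 10000.0, head_dim: int = 128) -> float:
    """Effective RoPE base after NTK-aware alpha scaling: base * alpha^(d/(d-2))."""
    return base * alpha ** (head_dim / (head_dim - 2))

def weight_gigabytes(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights alone, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# alpha_value 2.65 raises the effective RoPE base from 10000 to roughly 27000,
# which is what lets an 4K-trained Llama stretch toward an 8K usable context.
print(round(scaled_rope_base(2.65)))           # ~26900

# 70B parameters at 3.92 bpw is about 34.3 GB of weights; KV cache and runtime
# overhead push a full load toward the ~40GB VRAM the README quotes.
print(round(weight_gigabytes(70e9, 3.92), 1))  # ~34.3
```

The gap between the 34.3 GB of raw weights and the quoted 40GB is plausible headroom for the 8K-token KV cache and loader overhead, which is why the README lists the larger figure.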