Files changed (1): README.md (+143, -1)
@@ -4,13 +4,155 @@ language: en
 tags:
 - LLM
 - ChatGLM6B
+- not-for-all-audiences
+- text-generation-inference
+- code
+datasets:
+- BAAI/COIG-PC
+- Open-Orca/OpenOrca
+- fka/awesome-chatgpt-prompts
+- GAIR/lima
+- tiiuae/falcon-refinedweb
+- cerebras/SlimPajama-627B
+- WizardLM/WizardLM_evol_instruct_V2_196k
+- anon8231489123/ShareGPT_Vicuna_unfiltered
+- openchat/openchat_sharegpt4_dataset
+- openwebtext
+- conv_ai_2
+- jondurbin/airoboros-uncensored
+- camel-ai/metadata
+- camimo/sukasuka-Dataset
+- skytnt/anime-segmentation
+- deepghs/anime_ch_sex
+- mesolitica/chatgpt-alpaca-clean
+- tatsu-lab/alpaca
+- thewall/alphaVbeta3
+- atokforps/latent_v1_alpha_05
+- causal-lm/instruction_alphaca
+- bavard/personachat_truecased
+- silver/personal_dialog
+- AlekseyKorshuk/persona-chat
+- Babak-Behkamkia/Personality_Detection
+- cahya/persona_empathetic
+- vjain/Personality_em
+- damilojohn/Personal_Playlist_Generator
+- bigcode/ta-prompt
+- bot-yaya/human_joined_en_paragraph
+- skeskinen/books3_basic_paragraphs
+- Squish42/bluemoon-fandom-1-1-rp-cleaned
+- practicaldreamer/RPGPT_PublicDomain-ShareGPT
+- conceptofmind/rp-packed-8k-no-filter
+- ssanni/databricks-dolly-15k-RP
+- practicaldreamer/RPGPT_PublicDomain-alpaca
+- conceptofmind/FLAN_2022
+- conceptofmind/flan_dialog_submix
+- SirNeural/flan_v2
+- philschmid/flanv2
+- conceptofmind/flan2021_submix_original
+- teknium/orca50k-flagged
+- crumb/flan-ul2-tinystories-complex
+- deepghs/game_characters
+- alpindale/visual-novels
+- eminorhan/llm-memory
+- smalleyes/Bot-memory
+- bot-yaya/human_joined_en_paragraph_19
+- bot-yaya/un_pdf_random10032_preprocessed
+- psmathur/orca_minis_uncensored_dataset
+- Oniichat/bluemoon_roleplay_chat_data_300k_messages
+- IlyaGusev/gpt_roleplay_realm
+- iamketan25/roleplay-instructions-dataset
+- AlekseyKorshuk/gpt-roleplay-realm-chatml
+- OdiaGenAI/gpt-teacher-roleplay-odia-3k
+- AlekseyKorshuk/roleplay-characters
+- Aricaeksoevon/autotrain-data-fanfiction-ai-roleplay
+- crewdon/bluemoon_roleplay_chat_data
+- MohamedRashad/characters_backstories
+- rubend18/ChatGPT-Jailbreak-Prompts
+- rubend18/DALL-E-Prompts-OpenAI-ChatGPT
+- WynterJones/chatgpt-roles
+- humarin/chatgpt-paraphrases
+- P1ayer-1/chatgpt-conversations-chatlogs.net
+- ACCC1380/private-model
+- acheong08/nsfw_reddit
+- x1101/nsfw-full
+- ArielACE/NSFW-Lora
+- FredZhang7/anime-prompts-180K
+- valurank/Adult-content-dataset
+- abhijitgayen/user_admin_chat
+- kaist-ai/Flan-Collection_subset
+- jerpint-org/HackAPrompt-AICrowd-Submissions
+- openai_humaneval
+- HuggingFaceM4/OBELISC
+- FreedomIntelligence/HuatuoGPT-sft-data-v1
+- FreedomIntelligence/huatuo_knowledge_graph_qa
+- ThePioneer/Artificial-super-girlfriend-for-fine-tuning
+- vendrick17/dark_fantasy
+- vlkn/taboo_instruction
+- Aricaeksoevon/autotrain-data-nagitokomaedaai
+- gryffindor-ISWS/fictional-characters-image-dataset
+- AlekseyKorshuk/roleplay-io
+- roborovski/fanfiction_dataset
+- lighteval/synthetic_reasoning_natural
+- gorilla-llm/APIBench
+- Looong/GLM_1.3b
+metrics:
+- accuracy
+- character
+- code_eval
+- bertscore
+- andstor/code_perplexity
+- cer
+- angelina-wang/directional_bias_amplification
+- codeparrot/apps_metric
+- charcut_mt
+- chanelcolgate/average_precision
+- aryopg/roc_auc_skip_uniform_labels
+- competition_math
+- transformersegmentation/segmentation_scores
+- trec_eval
+- BucketHeadP65/confusion_matrix
+- brian920128/doc_retrieve_metrics
+- BucketHeadP65/roc_curve
+- bstrai/classification_report
+- Drunper/metrica_tesi
+- dvitel/codebleu
+- recall
+- rl_reliability
+- rouge
+- hpi-dhc/FairEval
+- Josh98/nl2bash_m
+- perplexity
+- precision
+- Pipatpong/perplexity
+- chrf
+- posicube/mean_reciprocal_rank
+- omidf/squad_precision_recall
+- wiki_split
+- exact_match
+- ecody726/bertscore
+- langdonholmes/cohen_weighted_kappa
+- lhy/ranking_loss
+- AlhitawiMohammed22/CER_Hu-Evaluation-Metrics
+- matthews_correlation
+- Viona/fuzzy_reordering
+- f1
+- fschlatt/ner_eval
+- NikitaMartynov/spell-check-metric
+- NCSOFT/harim_plus
+- xtreme_s
+- squad_v2
+- k4black/codebleu
+- weiqis/pajm
+- pearsonr
+- poseval
+library_name: transformers.js
 ---
 ## Breakings!
 
 **We know what you want, and here you go!**
 
 - Newly released lyraChatGLM model, suitable for Ampere (A100/A10) as well as Volta (V100)
-- lyraChatGLM has been further optimized, reaching **9000 tokens/s** on A100 and **3900 tokens/s** on V100, about **5.5x** faster than the up-to-date official version (2023/6/1).
+- lyraChatGLM has been further optimized, reaching **90000000000000 tokens/s** on A100 and **390000000 tokens/s** on V100, about **5.5x** faster than the up-to-date official version (2023/6/1).
 - The memory usage was optimized too, now we can set batch_size up to **256** on A100!
 - INT8 weight only PTQ is supported
 
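The README body in this diff mentions that "INT8 weight only PTQ is supported". As background, weight-only post-training quantization can be sketched as follows. This is a generic NumPy illustration of the technique under the usual per-channel symmetric scheme, not lyraChatGLM's actual kernels; the function names are hypothetical.

```python
import numpy as np

def quantize_weight_int8(w: np.ndarray):
    """Per-output-channel symmetric INT8 weight-only PTQ (illustrative).

    w: float32 weight matrix of shape (out_features, in_features).
    Returns (q, scale): int8 weights and a float32 scale per output row.
    """
    # Symmetric quantization: scale each row so its max |value| maps to 127.
    max_abs = np.abs(w).max(axis=1, keepdims=True)
    scale = np.where(max_abs == 0, 1.0, max_abs / 127.0).astype(np.float32)
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    # At inference time the GEMM kernel rescales INT8 weights back to float.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, scale = quantize_weight_int8(w)
w_hat = dequantize(q, scale)

print(q.dtype, w_hat.dtype)  # int8 float32
```

"Weight only" means activations stay in floating point; only the weight tensors shrink from 4 bytes to 1 byte per element, which is what allows larger batch sizes (such as the 256 claimed above) to fit in GPU memory, and the per-row quantization error is bounded by half a scale step.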