---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- SmolLM2-1.7B-Instruct
---
Quantizations of https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct
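
One of the quantized files can be fetched with `huggingface_hub` before pointing a client at it. This is a hedged sketch: the repo id and filename below are assumptions, so check this repo's file list for the exact quant names.

```python
# Hedged sketch: download a single GGUF quant with huggingface_hub.
# Both repo_id and filename are assumptions; verify against the file list.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="duyntnet/SmolLM2-1.7B-Instruct-imatrix-GGUF",  # assumed repo id
    filename="SmolLM2-1.7B-Instruct-Q4_K_M.gguf",           # assumed filename
)
print(model_path)  # local path to the downloaded .gguf file
```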

### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
* [jan](https://github.com/janhq/jan)
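
As a minimal local-inference sketch, the downloaded quant can also be loaded directly from Python with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), the Python bindings for llama.cpp; the filename is again an assumed example.

```python
# Minimal sketch: chat with a GGUF quant via llama-cpp-python
# (pip install llama-cpp-python). The filename is an assumed example.
from llama_cpp import Llama

llm = Llama(
    model_path="SmolLM2-1.7B-Instruct-Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,  # context window; adjust to taste
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=50,
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])
```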

---

# From original readme

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.

The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).

The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
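
As a hedged illustration of the function-calling support, recent `transformers` releases let you pass Python functions to `apply_chat_template` through its `tools` argument; how (and whether) the schema is rendered depends on the model's chat template, so treat this as a sketch rather than the model's documented API.

```python
# Sketch: exposing a tool schema through the chat template's `tools` argument
# (transformers >= 4.42). get_weather is a hypothetical example tool.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-1.7B-Instruct")

def get_weather(city: str):
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    ...  # placeholder; only the signature and docstring feed the schema

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # inspect how the tool schema is injected into the prompt
```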

### How to use

### Transformers
```bash
pip install transformers
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"

device = "cuda"  # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "What is the capital of France?"}]
# add_generation_prompt=True opens the assistant turn so the model answers
# instead of continuing the user message
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```

### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-1.7B-Instruct
```