Text Generation
Transformers
Safetensors
mistral
Inference Endpoints
text-generation-inference
Kukedlc committed on
Commit 0176c9d
1 Parent(s): 4ebe198

Update README.md

Files changed (1)
  1. README.md +22 -1
README.md CHANGED
@@ -34,4 +34,25 @@ Each dataset contributed 20,000 data points to the training process, ensuring a
 - If interested in contributing or experimenting with this model, please feel free to reach out or access the code directly from my Kaggle profile.
 
 ## Contact Information
-- For any inquiries, suggestions, or collaboration proposals, please contact [Your Name] at [Your Email].
+- For any inquiries, suggestions, or collaboration proposals, please contact me!
+
+!pip install -qU transformers accelerate
+
+from transformers import AutoTokenizer
+import transformers
+import torch
+
+model = "Kukedlc/NeuralExperiment-7b-MagicCoder-v6"
+messages = [{"role": "user", "content": "What is a large language model?"}]
+
+tokenizer = AutoTokenizer.from_pretrained(model)
+prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+pipeline = transformers.pipeline(
+    "text-generation",
+    model=model,
+    torch_dtype=torch.float16,
+    device_map="auto",
+)
+
+outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+print(outputs[0]["generated_text"])
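The snippet added by this commit relies on `tokenizer.apply_chat_template` to turn the `messages` list into a prompt string before handing it to the pipeline. As a rough, self-contained illustration of what that step produces for a Mistral-style chat template (the `<s>[INST] ... [/INST]` wrapping below is an assumption about this model's template; the authoritative template ships with the tokenizer and requires downloading it):

```python
# Hypothetical sketch of a Mistral-style chat template, for illustration only.
# The real formatting is defined by the tokenizer's chat_template attribute.
def mistral_chat_prompt(messages):
    parts = []
    for m in messages:
        if m["role"] == "user":
            # User turns are wrapped in [INST] ... [/INST] markers (assumed format).
            parts.append(f"[INST] {m['content']} [/INST]")
        elif m["role"] == "assistant":
            # Assistant turns are appended as plain text.
            parts.append(m["content"])
    # A beginning-of-sequence token starts the prompt.
    return "<s>" + "".join(parts)

print(mistral_chat_prompt([{"role": "user", "content": "What is a large language model?"}]))
# → <s>[INST] What is a large language model? [/INST]
```

The pipeline then continues generation from the end of this string, which is why `add_generation_prompt=True` matters in the real call: it leaves the prompt positioned for the assistant's reply.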