Unlike our encrypted DistilBERT, this model's weights reside on Nesa's secure server, but the tokenizer is on Hugging Face. You can still use the tokenizer to encode and decode text, then submit it for inference via the Nesa network!

###### Load the Tokenizer

```python
from transformers import AutoTokenizer

hf_token = "<HF TOKEN>"  # Replace with your Hugging Face access token
model_id = "nesaorg/Llama-3.2-1B-Instruct-Encrypted"
tokenizer = AutoTokenizer.from_pretrained(model_id, token=hf_token, local_files_only=False)
```

###### Tokenize and Decode Text

```python
text = "I'm super excited to join Nesa's Equivariant Encryption initiative!"

# Encode text into token IDs
token_ids = tokenizer.encode(text)
print("Token IDs:", token_ids)

# Decode token IDs back to text
decoded_text = tokenizer.decode(token_ids)
print("Decoded Text:", decoded_text)
```

###### Example Output

```
Token IDs: [128000, 1495, 1135, 2544, 6705, 284, 2219, 11659, 17098, 22968, 8707, 2544, 3539, 285, 34479]
Decoded Text: I'm super excited to join Nesa's Equivariant Encryption initiative!
```
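The encode/decode round trip above should reproduce the input text. As a quick sanity check, a small helper can verify this for any Hugging Face tokenizer — the function name `round_trips` is our own illustration, not part of the model card, and `skip_special_tokens` is used to ignore markers such as BOS that `encode()` may prepend:

```python
def round_trips(tokenizer, text: str) -> bool:
    """Check that decoding the encoded token IDs reproduces the text.

    skip_special_tokens drops special markers (e.g. BOS) that encode()
    may add, so the comparison is against the visible text only.
    """
    token_ids = tokenizer.encode(text)
    decoded = tokenizer.decode(token_ids, skip_special_tokens=True)
    return decoded.strip() == text.strip()
```

With the tokenizer loaded above, `round_trips(tokenizer, "I'm super excited to join Nesa's Equivariant Encryption initiative!")` should return `True`.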