---
license: mit
task_categories:
- translation
language:
- en
tags:
- code
pretty_name: Base64 decode version1
size_categories:
- 10K<n<100K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/data.jsonl
---

# Dataset: Base64 decode version1

This dataset is for improving the base64 decoding capabilities of language models.

The number of bytes encoded in each base64 input ranges from 0 to 127.

`GPT 4o` is already great at base64 decoding.

However, `llama3` is terrible at it.

A few short examples of what `data.jsonl` looks like:

```text
{"instruction": "Transform base64 to HEX", "input": "464pNBlIObA=", "output": "e3ae2934194839b0"}
{"instruction": "Decode Base64 to json", "input": "NQ==", "output": "[53]"}
{"instruction": "Base64 to Hexadecimal", "input": "ax0WaQ==", "output": "6b1d1669"}
{"instruction": "convert base64 to Hexadecimal", "input": "8X43", "output": "f17e37"}
{"instruction": "Change base64 to JSON", "input": "7MmBZO4=", "output": "[236,201,129,100,238]"}
{"instruction": "Json from Base64", "input": "ytBBCmPRA6De+Ow=", "output": "[202,208,65,10,99,209,3,160,222,248,236]"}
{"instruction": "BASE64 to Hex", "input": "m/A=", "output": "9bf0"}
```
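
Each `output` is either the lowercase hex encoding of the decoded bytes or a JSON array of the raw byte values. A quick way to verify a row with Python's standard library (row contents copied from the examples above):

```python
import base64
import json

# Hex-style target: output is the lowercase hex of the decoded bytes.
row = {"instruction": "Transform base64 to HEX",
       "input": "464pNBlIObA=", "output": "e3ae2934194839b0"}
assert base64.b64decode(row["input"]).hex() == row["output"]

# JSON-style target: output is a JSON array of the raw byte values.
row = {"instruction": "Decode Base64 to json",
       "input": "NQ==", "output": "[53]"}
assert list(base64.b64decode(row["input"])) == json.loads(row["output"])
```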

# Generate dataset

```text
PROMPT> python generate_dataset.py
```

This creates the `data.jsonl` file.
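
`generate_dataset.py` itself is not reproduced on this card. Below is a minimal sketch of what such a generator could look like: the instruction wordings and the 0..127-byte payload range are taken from this card, while the hex/JSON split, row count, and seed are assumptions.

```python
import base64
import json
import random

# Hypothetical sketch; the real generate_dataset.py may differ.
HEX_INSTRUCTIONS = ["Transform base64 to HEX", "Base64 to Hexadecimal", "BASE64 to Hex"]
JSON_INSTRUCTIONS = ["Decode Base64 to json", "Change base64 to JSON", "Json from Base64"]

def make_row(rng: random.Random) -> dict:
    # The card states payload sizes span 0..127 bytes.
    payload = rng.randbytes(rng.randint(0, 127))
    encoded = base64.b64encode(payload).decode("ascii")
    if rng.random() < 0.5:
        # Hex-style target.
        return {"instruction": rng.choice(HEX_INSTRUCTIONS),
                "input": encoded, "output": payload.hex()}
    # JSON-style target: compact array of byte values.
    return {"instruction": rng.choice(JSON_INSTRUCTIONS),
            "input": encoded,
            "output": json.dumps(list(payload), separators=(",", ":"))}

if __name__ == "__main__":
    rng = random.Random(42)  # assumed seed
    with open("data/data.jsonl", "w") as f:
        for _ in range(20_000):  # assumed row count; size category is 10K<n<100K
            f.write(json.dumps(make_row(rng)) + "\n")
```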