---
license: mit
task_categories:
- translation
language:
- en
tags:
- code
pretty_name: Base64 decode version1
size_categories:
- 10K<n<100K
---
# Dataset: Base64 decode version1

This dataset is for improving base64 decoding capabilities.
GPT 4o is great at base64 decoding. However, llama3 is terrible at it.
Short examples of what `data.jsonl` looks like:
{"instruction": "Transform base64 to HEX", "input": "464pNBlIObA=", "output": "e3ae2934194839b0"}
{"instruction": "Decode Base64 to json", "input": "NQ==", "output": "[53]"}
{"instruction": "Base64 to Hexadecimal", "input": "ax0WaQ==", "output": "6b1d1669"}
{"instruction": "convert base64 to Hexadecimal", "input": "8X43", "output": "f17e37"}
{"instruction": "Change base64 to JSON", "input": "7MmBZO4=", "output": "[236,201,129,100,238]"}
{"instruction": "Json from Base64", "input": "ytBBCmPRA6De+Ow=", "output": "[202,208,65,10,99,209,3,160,222,248,236]"}
{"instruction": "BASE64 to Hex", "input": "m/A=", "output": "9bf0"}
## Generate dataset

```bash
PROMPT> python generate_dataset.py
```
This creates the `data.jsonl` file.
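
The contents of `generate_dataset.py` are not shown here. A minimal sketch of what such a generator could look like, assuming random byte strings and the instruction phrasings seen in the examples (all names and parameters in the sketch are hypothetical):

```python
# Hypothetical sketch of a generator for records like those above.
# This is not the real generate_dataset.py; it only illustrates the idea.
import base64
import json
import random

INSTRUCTIONS_HEX = ["Transform base64 to HEX", "Base64 to Hexadecimal", "BASE64 to Hex"]
INSTRUCTIONS_JSON = ["Decode Base64 to json", "Change base64 to JSON", "Json from Base64"]

def make_record(rng: random.Random) -> dict:
    raw = rng.randbytes(rng.randint(1, 12))            # short random byte string
    encoded = base64.b64encode(raw).decode("ascii")
    if rng.random() < 0.5:
        # hex-style target
        return {"instruction": rng.choice(INSTRUCTIONS_HEX), "input": encoded, "output": raw.hex()}
    # JSON-style target: list of byte values
    return {"instruction": rng.choice(INSTRUCTIONS_JSON), "input": encoded,
            "output": json.dumps(list(raw), separators=(",", ":"))}

if __name__ == "__main__":
    rng = random.Random(0)
    with open("data.jsonl", "w") as f:
        for _ in range(10_000):
            f.write(json.dumps(make_record(rng)) + "\n")
```

The resulting file can be loaded directly, for example with `datasets.load_dataset("json", data_files="data.jsonl")`.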