---
library_name: transformers
datasets:
- ahmed000000000/cybersec
- dzakwan/cybersec
language:
- en
tags:
- conversational
---
# Model Card for Cyber_assist3.0
Cyber_assist3.0 works as a cybersecurity assistant.
## Model Details
### Model Description
This is the model card of a 🤗 transformers model that has been pushed to the Hub.
- **Developed by:** <a href="https://github.com/Zardian18">Zardian18</a>
- **Model type:** GPT-2
- **Language(s) (NLP):** English
- **Finetuned from model:** <a href="https://huggingface.co/openai-community/gpt2-medium">OpenAI GPT-2 Medium</a>
### Model Sources
- **Repository:** <a href="https://github.com/Zardian18/CyberAssist">Github repo</a>
## Uses
The model can be used to answer basic cybersecurity queries.
## Bias, Risks, and Limitations
The model is fine-tuned from GPT-2 Medium, which performs reasonably but is not comparable to state-of-the-art LLMs. Moreover, the fine-tuning dataset is relatively small (roughly 26k examples across the two sources).
## How to Get Started with the Model
```
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Zardian/Cyber_assist3.0")
```
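The pipeline can then be called directly on a prompt. A minimal sketch is shown below; the question and generation parameters (`max_new_tokens`, `do_sample`, `top_p`) are illustrative and can be tuned as needed.
```
# Example usage (illustrative prompt; adjust generation parameters as needed)
result = pipe(
    "What is a phishing attack and how can I defend against it?",
    max_new_tokens=100,
    do_sample=True,
    top_p=0.95,
)
print(result[0]["generated_text"])
```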
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Zardian/Cyber_assist3.0")
model = AutoModelForCausalLM.from_pretrained("Zardian/Cyber_assist3.0")
```
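When loading the model directly, generation goes through `model.generate`. The sketch below assumes CPU inference and uses an illustrative prompt; sampling settings can be adjusted.
```
# Minimal generation sketch (illustrative prompt; runs on CPU by default)
prompt = "How do I secure an SSH server?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```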
## Training Details
### Training Data
- <a href="https://huggingface.co/datasets/ahmed000000000/cybersec">Cybersec queries and responses dataset</a>, consisting of ~12k entries.
- <a href="https://huggingface.co/datasets/dzakwan/cybersec">Cybersec dataset with instructions and outputs</a>, consisting of ~14k entries.
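For reference, both training sets can be pulled with the 🤗 `datasets` library. This is a sketch for inspection only; check each dataset card for the actual splits and column names.
```
# Sketch: inspect the two fine-tuning datasets (splits/columns vary by dataset card)
from datasets import load_dataset

cybersec_qa = load_dataset("ahmed000000000/cybersec")
cybersec_instruct = load_dataset("dzakwan/cybersec")

print(cybersec_qa)        # prints available splits and columns
print(cybersec_instruct)
```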