Alex Sadovsky committed: Update README.md
pipeline_tag: text-generation
tags:
- MBTI
- psychology
- question-generation
- profiling
- random-question-generator
---

# Flan-T5 Base — MBTI Random Question Generator

This repository hosts a fine-tuned version of **google/flan-t5-base**, adapted for generating *random, personality-themed questions* in the context of the **Myers–Briggs Type Indicator (MBTI)** framework.

The model produces short, standalone prompts designed to encourage self-reflection and discussion related to personality traits, emotions, and decision-making.
It operates as a **randomized question generator** rather than an interactive conversational model.

---

## Model Purpose

The goal of this model is to generate concise, psychologically relevant questions similar to those found in MBTI-style interviews or self-assessment forms.
Each output question is intended to provoke reflection or reveal an aspect of human cognition, motivation, or behavior.

**Key Characteristics:**
- Generates *independent questions* — no memory or contextual carryover between generations.
- Optimized for **single-turn usage** (no long-term dialogue support).
- Produces diverse questions across multiple MBTI domains (e.g., intuition, sensing, thinking, feeling, judging, perceiving).
- Ideal for personality research tools, psychological chatbots, or training datasets for reflective AI dialogue.

---

## Example Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned checkpoint and its tokenizer from the Hub
model = AutoModelForSeq2SeqLM.from_pretrained("f3nsmart/ft-flan-t5-base-qgen")
tokenizer = AutoTokenizer.from_pretrained("f3nsmart/ft-flan-t5-base-qgen")

# Single-turn generation: one prompt in, one question out
prompt = "Generate a question about emotional decision-making."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
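Because the card describes a *randomized* generator, varied questions can be drawn from a single prompt by switching `model.generate` from greedy decoding to sampling. A minimal sketch of such decoding settings follows; the `top_p` and `temperature` values are illustrative choices, not parameters shipped with this repository:

```python
from transformers import GenerationConfig

# Illustrative sampling settings (not shipped with this repository) for
# drawing varied, independent questions from the same prompt.
gen_cfg = GenerationConfig(
    max_new_tokens=60,
    do_sample=True,    # sample tokens instead of greedy argmax decoding
    top_p=0.95,        # nucleus sampling: keep the top 95% probability mass
    temperature=0.9,   # mildly flatten the distribution for variety
)

# Reuse with the model loaded above, e.g.:
# outputs = model.generate(**inputs, generation_config=gen_cfg, num_return_sequences=3)
```

Combined with `num_return_sequences`, one call yields several independent questions, which matches the model's single-turn, no-carryover design.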