---
license: mit
language:
- en
---
# NPC Model

This repo contains the domain-specific NPC model we've fine-tuned from **Phi-3**, using LoRA.

This model parses a text description of a game scene, and outputs commands like:
* `say <player1> "Hello Adventurer, care to join me on a quest?"`
* `greet <player1>`
* `attack <player1>`
* Any other `<action> <param>` you add to the prompt! (We call these "skills"!)

⚠️ This model has been trained to **overfit** on our input prompt format. Follow it closely to reach optimal performance ⚠️
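To illustrate the `<action> <param>` command format, here is a minimal sketch of how such an output line could be split into an action name and its parameters. The `parse_command` helper below is our own illustration, not part of the gigax library (which does its own output parsing):

```python
import shlex

def parse_command(line: str) -> tuple[str, list[str]]:
    """Split a command like 'say <player1> "Hello!"' into its action
    name and parameter list. Hypothetical helper, not the gigax parser."""
    tokens = shlex.split(line)  # shlex keeps quoted strings together
    return tokens[0], tokens[1:]

action, params = parse_command(
    'say <player1> "Hello Adventurer, care to join me on a quest?"'
)
# action == "say"
# params == ["<player1>", "Hello Adventurer, care to join me on a quest?"]
```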
## Usage

**Make your life easier: use our [Python client library](https://github.com/GigaxGames/gigax)**

* Instantiating the model using outlines:

```py
from outlines import models
from transformers import AutoModelForCausalLM, AutoTokenizer
from gigax.step import NPCStepper

# Download the model from the Hub
model_name = "Gigax/NPC-LLM-3_8B"
llm = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Our stepper takes an Outlines model to enable guided generation,
# which forces the model to follow our output format
model = models.Transformers(llm, tokenizer)

# Instantiate a stepper: it handles prompting + output parsing
stepper = NPCStepper(model=model)
```
* Calling the model on your game's data:

```py
from gigax.parse import CharacterAction
from gigax.scene import (
    Character,
    Item,
    Location,
    ProtagonistCharacter,
    Skill,
    ParameterType,
)

# Use sample data
context = "A vast open world full of mystery and adventure."
current_location = Location(name="Old Town", description="A quiet and peaceful town.")
locations = [current_location]
NPCs = [
    Character(
        name="John the Brave",
        description="A fearless warrior",
        current_location=current_location,
    )
]
protagonist = ProtagonistCharacter(
    name="Aldren",
    description="Brave and curious",
    current_location=current_location,
    memories=["Saved the village", "Lost a friend"],
    quests=["Find the ancient artifact", "Defeat the evil warlock"],
    skills=[
        Skill(
            name="Attack",
            description="Deliver a powerful blow",
            parameter_types=[ParameterType.character],
        )
    ],
    psychological_profile="Determined and compassionate",
)
items = [Item(name="Sword", description="A sharp blade")]
events = [
    CharacterAction(
        command="Say",
        protagonist=protagonist,
        parameters=[items[0], "What a fine sword!"],
    )
]

action = stepper.get_action(
    context=context,
    locations=locations,
    NPCs=NPCs,
    protagonist=protagonist,
    items=items,
    events=events,
)
```
## Input prompt

Here's a sample input prompt, showing you the format on which the model has been trained:

```txt
- WORLD KNOWLEDGE: A vast open world full of mystery and adventure.
- KNOWN LOCATIONS: Old Town
- NPCS: John the Brave
- CURRENT LOCATION: Old Town: A quiet and peaceful town.
- CURRENT LOCATION ITEMS: Sword
- LAST EVENTS:
Aldren: Say Sword What a fine sword!
- PROTAGONIST NAME: Aldren
- PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious
- PROTAGONIST MEMORIES:
Saved the village
Lost a friend
- PROTAGONIST PENDING QUESTS:
Find the ancient artifact
Defeat the evil warlock
- PROTAGONIST ALLOWED ACTIONS:
Attack <character> : Deliver a powerful blow
Aldren:
```
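A prompt in the format above can be assembled mechanically from plain scene data. The `build_prompt` function below is our own sketch of that assembly, mirroring the field layout shown above; it is a hypothetical helper, not the library's internal prompt builder:

```python
def build_prompt(world, locations, npcs, current_location, items, events,
                 name, profile, memories, quests, actions):
    """Assemble the training-time prompt format from scene data.
    Hypothetical helper mirroring the sample prompt above."""
    lines = [
        f"- WORLD KNOWLEDGE: {world}",
        f"- KNOWN LOCATIONS: {', '.join(locations)}",
        f"- NPCS: {', '.join(npcs)}",
        f"- CURRENT LOCATION: {current_location}",
        f"- CURRENT LOCATION ITEMS: {', '.join(items)}",
        "- LAST EVENTS:",
        *events,
        f"- PROTAGONIST NAME: {name}",
        f"- PROTAGONIST PSYCHOLOGICAL PROFILE: {profile}",
        "- PROTAGONIST MEMORIES:",
        *memories,
        "- PROTAGONIST PENDING QUESTS:",
        *quests,
        "- PROTAGONIST ALLOWED ACTIONS:",
        *actions,
        f"{name}:",  # the model completes from here
    ]
    return "\n".join(lines)

prompt = build_prompt(
    world="A vast open world full of mystery and adventure.",
    locations=["Old Town"],
    npcs=["John the Brave"],
    current_location="Old Town: A quiet and peaceful town.",
    items=["Sword"],
    events=["Aldren: Say Sword What a fine sword!"],
    name="Aldren",
    profile="Brave and curious",
    memories=["Saved the village", "Lost a friend"],
    quests=["Find the ancient artifact", "Defeat the evil warlock"],
    actions=["Attack <character> : Deliver a powerful blow"],
)
```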
### 🤗 We are currently working hard on training on the latest SoTA models (Phi-3, Llama, etc.), and on better data! 🤗

## Model info

- **Developed by:** Gigax
- **Language(s) (NLP):** English
- **Finetuned from model:** [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- **Contact:** Join our [Discord](https://discord.gg/xES2Z8X4J6) for info, help, and more!

## How to Cite

```bibtex
@misc{NPC-LLM-3_8B,
  url={https://huggingface.co/Gigax/NPC-LLM-3_8B},
  title={NPC-LLM-3_8B},
  author={Gigax team}
}
```