LlameUser committed on
Commit 458f861
1 Parent(s): afb575a

Update README.md

Files changed (1)
  1. README.md +54 -1
README.md CHANGED
@@ -26,4 +26,57 @@ tags:
  pretty_name: relative-positioning
  size_categories:
  - 10K<n<100K
- ---
+ ---
+ # Dataset Card for relative-positioning
+
+ This dataset aims to teach LLMs relative positioning (e.g. above, left of, below, etc.),
+ which, in my findings, most LLMs, even SOTA ones, were not able to produce reliably under all circumstances.
+ I will be pushing a fine-tuned mixtral-7x8B trained on this dataset.
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ Contains data for relative positioning on a 256 x 256 grid.
+ Assumes the origin [0, 0] is in the bottom left.
+ Two objects (Object 1, Object 2) are randomly created.
+ The answer is their relative position to one another.
+
+ - **Curated by:** Antoine Angert
+ - **Language(s) (NLP):** English
+ - **License:** apache-2.0
+
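+ As an illustration of the scheme described above, here is a minimal sketch (my reconstruction,
+ not the exact script behind the dataset) of how the relative position of two objects on such a
+ grid can be classified, assuming y grows upward from the bottom-left origin:
+
+ ```python
+ def relative_position(obj1, obj2):
+     """Describe where obj1 sits relative to obj2 (origin [0, 0] in the bottom left)."""
+     dx = obj1[0] - obj2[0]
+     dy = obj1[1] - obj2[1]
+     vertical = "above" if dy > 0 else ("below" if dy < 0 else "")
+     horizontal = "right of" if dx > 0 else ("left of" if dx < 0 else "")
+     parts = [p for p in (vertical, horizontal) if p]
+     return " and ".join(parts) if parts else "at the same position as"
+
+ print(relative_position((10, 200), (50, 50)))  # above and left of
+ ```
+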
+ ## Uses
+
+ ### Direct Use
+
+ Can be used to fine-tune language models.
+ (Although this has not been tested so far; I will update once it has.)
+
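+ As a sketch, records could be flattened into single training strings for causal-LM fine-tuning.
+ The dataset id `LlameUser/relative-positioning` and the column names below are assumptions on my part:
+
+ ```python
+ from datasets import load_dataset
+
+ # Hypothetical repo id; adjust to the actual dataset path.
+ ds = load_dataset("LlameUser/relative-positioning", split="train")
+
+ def format_example(example):
+     # Concatenate prompt and response into one text field for supervised fine-tuning.
+     return {"text": f"{example['Prompt']}\n{example['Response']}"}
+
+ ds = ds.map(format_example)
+ print(ds[0]["text"])
+ ```
+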
+ ## Dataset Structure
+
+ Features:
+ - Prompt (string)
+ - Response (string)
+
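+ Expressed with the `datasets` library, the schema would look roughly like this (a sketch,
+ assuming the column names are exactly as listed above):
+
+ ```python
+ from datasets import Features, Value
+
+ # Two plain string columns per record.
+ features = Features({"Prompt": Value("string"), "Response": Value("string")})
+ ```
+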
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ I did some testing to see how well LLMs are able to handle positional data (2D, 3D).
+ I found that most small models (tested: llama-7B, llama-13B, mistral-7B) have very poor positional understanding.
+ Most bigger models (tested: gpt-3.5-turbo, gpt-4, llama-70B, mixtral-7x8B) have fairly good positional understanding, as long as no other context is provided.
+ When I tried positional reasoning combined with unrelated context, the performance of these bigger models dropped immensely.
+ This is my first attempt at embedding this understanding directly into the models rather than through context.
+
+ #### Data Collection and Processing
+
+ The dataset was generated using a Python script.
+
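+ A minimal sketch of what such a script could look like (my reconstruction, reusing the
+ classification logic sketched earlier; the exact prompt/response templates are assumptions):
+
+ ```python
+ import json
+ import random
+
+ GRID = 256  # grid is (256, 256), origin [0, 0] in the bottom left
+
+ def describe(obj1, obj2):
+     # Classify where Object 1 sits relative to Object 2.
+     dx, dy = obj1[0] - obj2[0], obj1[1] - obj2[1]
+     vertical = "above" if dy > 0 else ("below" if dy < 0 else "")
+     horizontal = "right of" if dx > 0 else ("left of" if dx < 0 else "")
+     parts = [p for p in (vertical, horizontal) if p]
+     return " and ".join(parts) if parts else "at the same position as"
+
+ with open("relative_positioning.jsonl", "w") as f:
+     for _ in range(10_000):
+         obj1 = (random.randrange(GRID), random.randrange(GRID))
+         obj2 = (random.randrange(GRID), random.randrange(GRID))
+         prompt = (f"Object 1 is at {list(obj1)} and Object 2 is at {list(obj2)} on a "
+                   f"{GRID}x{GRID} grid with origin [0, 0] in the bottom left. "
+                   f"Where is Object 1 relative to Object 2?")
+         response = f"Object 1 is {describe(obj1, obj2)} Object 2."
+         f.write(json.dumps({"Prompt": prompt, "Response": response}) + "\n")
+ ```
+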
+ ## Dataset Card Authors
+
+ Antoine Angert
+
+ ## Dataset Card Contact
+
+ Contact: antoine.angert@hsbi.de