---
language:
- en
size_categories:
- 10K<n<100K
config_names:
- chat
configs:
- config_name: chat
default: true
data_files:
- split: train
path: data/train.csv
- split: validation
path: data/valid.csv
- split: test
path: data/test_iid.csv
- split: test_geo
path: data/test_geo.csv
- split: test_vis
path: data/test_vis.csv
- split: test_cat
path: data/test_cat.csv
- split: test_web
path: data/test_web.csv
tags:
- conversational
- image-to-text
- vision
- convAI
task_categories:
- image-to-text
- text-generation
- text2text-generation
- sentence-similarity
pretty_name: weblinx
---
<div align="center">
<h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1>
<em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em>
</div>
<div style="margin-bottom: 2em"></div>
<div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;">
<div><a href="https://arxiv.org/abs/2402.05930">📄Paper</a></div>
<div><a href="https://mcgill-nlp.github.io/weblinx">🌐Website</a></div>
<div><a href="https://huggingface.co/spaces/McGill-NLP/weblinx-explorer">💻Explorer</a></div>
<div><a href="https://github.com/McGill-NLP/WebLINX">💾Code</a></div>
<div><a href="https://twitter.com/sivareddyg/status/1755799365031965140">🐦Tweets</a></div>
<div><a href="https://huggingface.co/collections/McGill-NLP/weblinx-models-65c57d4afeeb282d1dcf8434">🤖Models</a></div>
</div>
<video width="100%" controls autoplay muted loop>
<source src="https://huggingface.co/datasets/McGill-NLP/WebLINX/resolve/main/WeblinxWebsiteDemo.mp4?download=false" type="video/mp4">
Your browser does not support the video tag.
</video>
## Quickstart
To get started, simply install `datasets` with `pip install datasets` (this also installs `huggingface_hub`) and load the chat data splits:
```python
from datasets import load_dataset
from huggingface_hub import snapshot_download
# Load the validation split
valid = load_dataset("McGill-NLP/weblinx", split="validation")
# Download the input templates and use the LLaMA one
snapshot_download(
"McGill-NLP/WebLINX", repo_type="dataset", allow_patterns="templates/*", local_dir="."
)
with open('templates/llama.txt') as f:
template = f.read()
# To get the input text, simply pass a turn from the valid split to the template
turn = valid[0]
turn_text = template.format(**turn)
```
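Besides `train` and `validation`, the `chat` config (the default) also defines an IID `test` split and four out-of-domain splits: `test_geo`, `test_vis`, `test_cat`, and `test_web` (see the configuration above). A minimal sketch of loading a couple of them, assuming nothing beyond the split names listed in the config:
```python
from datasets import load_dataset

# "chat" is the default config, so only the split name is needed
test_iid = load_dataset("McGill-NLP/weblinx", split="test")
test_geo = load_dataset("McGill-NLP/weblinx", split="test_geo")

print(len(test_iid), len(test_geo))
```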
You can now use `turn_text` as an input to LLaMA-style models. For example, you can use Sheared-LLaMA:
```python
from transformers import pipeline
action_model = pipeline(
model="McGill-NLP/Sheared-LLaMA-2.7B-weblinx", device=0, torch_dtype='auto'
)
out = action_model(turn_text, return_full_text=False, max_new_tokens=64, truncation=True)
pred = out[0]['generated_text']
print("Ref:", turn["action"])
print("Pred:", pred)
```
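The snippet above scores a single turn; the same pattern extends to several turns at once. A minimal sketch that reuses `valid`, `template`, and `action_model` from the snippets above (the 5-turn slice is an arbitrary choice for illustration):
```python
# Compare predictions against reference actions for a few validation turns.
# Reuses `valid`, `template`, and `action_model` defined above.
for turn in valid.select(range(5)):  # arbitrary 5-turn slice
    turn_text = template.format(**turn)
    out = action_model(turn_text, return_full_text=False, max_new_tokens=64, truncation=True)
    print("Ref: ", turn["action"])
    print("Pred:", out[0]["generated_text"])
    print("-" * 40)
```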
## Raw Data
To use the raw data, you will need the `huggingface_hub` library:
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="McGill-NLP/WebLINX-full", repo_type="dataset", local_dir="./wl_data")
# You can download specific demos, for example
demo_names = ['saabwsg', 'ygprzve', 'iqaazif'] # 3 random demos from the valid split
patterns = [f"demonstrations/{name}/*" for name in demo_names]
snapshot_download(
repo_id="McGill-NLP/WebLINX-full", repo_type="dataset", local_dir="./wl_data", allow_patterns=patterns
)
```
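Once downloaded, each demonstration lives in its own folder under `wl_data/demonstrations/`. A minimal sketch that inspects what was fetched, using only the standard library (it assumes the download paths used above and makes no assumptions about the file names inside each demonstration):
```python
from pathlib import Path

# List the demonstration folders downloaded above and count their contents
demos_dir = Path("./wl_data/demonstrations")
for demo_dir in sorted(demos_dir.iterdir()):
    if not demo_dir.is_dir():
        continue
    files = sorted(p.name for p in demo_dir.iterdir())
    print(f"{demo_dir.name}: {len(files)} entries")
```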
For more information on how to use this data with our [official library](https://github.com/McGill-NLP/WebLINX), please refer to the [WebLINX documentation](https://mcgill-nlp.github.io/weblinx/docs).
## Citation
If you use our dataset, please cite our work as follows:
```bibtex
@misc{lu-2024-weblinx,
title={WebLINX: Real-World Website Navigation with Multi-Turn Dialogue},
author={Xing Han Lù and Zdeněk Kasner and Siva Reddy},
year={2024},
eprint={2402.05930},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |