---
language:
- en
size_categories:
- 10K<n<100K
config_names:
- chat
configs:
- config_name: chat
default: true
data_files:
- split: train
path: data/train.csv
- split: validation
path: data/valid.csv
- split: test
path: data/test_iid.csv
- split: test_geo
path: data/test_geo.csv
- split: test_vis
path: data/test_vis.csv
- split: test_cat
path: data/test_cat.csv
- split: test_web
path: data/test_web.csv
tags:
- conversational
- image-to-text
- vision
- convAI
task_categories:
- image-to-text
- text-generation
- text2text-generation
- sentence-similarity
pretty_name: weblinx
---
<div align="center">
<h1 style="margin-bottom: 0.5em;">WebLINX: Real-World Website Navigation with Multi-Turn Dialogue</h1>
<em>Xing Han Lù*, Zdeněk Kasner*, Siva Reddy</em>
</div>
<div style="margin-bottom: 2em"></div>
<div style="display: flex; justify-content: space-around; align-items: center; font-size: 120%;">
<div><a href="https://arxiv.org/abs/2402.05930">📄Paper</a></div>
<div><a href="https://mcgill-nlp.github.io/weblinx">🌐Website</a></div>
<div><a href="https://huggingface.co/spaces/McGill-NLP/weblinx-explorer">💻Explorer</a></div>
<div><a href="https://github.com/McGill-NLP/WebLINX">💾Code</a></div>
<div><a href="https://twitter.com/sivareddyg/status/1755799365031965140">🐦Tweets</a></div>
<div><a href="https://huggingface.co/collections/McGill-NLP/weblinx-models-65c57d4afeeb282d1dcf8434">🤖Models</a></div>
</div>
<video width="100%" controls autoplay muted loop>
<source src="https://huggingface.co/datasets/McGill-NLP/WebLINX/resolve/main/WeblinxWebsiteDemo.mp4?download=false" type="video/mp4">
Your browser does not support the video tag.
</video>
## Quickstart
To get started, install `datasets` and `huggingface_hub` with `pip install datasets huggingface_hub`, then load the chat data splits:
```python
from datasets import load_dataset
from huggingface_hub import snapshot_download

# Load the validation split
valid = load_dataset("McGill-NLP/weblinx", split="validation")

# Download the input templates and use the LLaMA one
snapshot_download(
    "McGill-NLP/WebLINX", repo_type="dataset", allow_patterns="templates/*", local_dir="."
)
with open("templates/llama.txt") as f:
    template = f.read()

# To get the input text, pass a turn from the validation split to the template
turn = valid[0]
turn_text = template.format(**turn)
```
You can now use `turn_text` as an input to LLaMA-style models. For example, you can use Sheared-LLaMA:
```python
from transformers import pipeline

# Load a model fine-tuned on WebLINX to predict the next action
action_model = pipeline(
    model="McGill-NLP/Sheared-LLaMA-2.7B-weblinx", device=0, torch_dtype="auto"
)
out = action_model(turn_text, return_full_text=False, max_new_tokens=64, truncation=True)
pred = out[0]["generated_text"]

print("Ref:", turn["action"])
print("Pred:", pred)
```
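Both the reference action and the model's prediction are strings such as `click(uid="abc-123")`. The official evaluation metrics and the exact action grammar are implemented in the [WebLINX library](https://github.com/McGill-NLP/WebLINX); as an illustration only, a minimal regex-based sketch (assuming actions follow the `intent(key="value", ...)` shape) could split a prediction into its intent and arguments:

```python
import re

def parse_action(action_str):
    """Split an action string such as 'click(uid="abc-123")' into an
    intent name and a dict of keyword arguments (illustrative only;
    use the official WebLINX library for evaluation)."""
    match = re.match(r'(\w+)\((.*)\)\s*$', action_str.strip(), flags=re.DOTALL)
    if match is None:
        return None, {}
    intent, arg_str = match.groups()
    # Capture key="value" pairs; values may contain escaped quotes
    args = dict(re.findall(r'(\w+)="((?:[^"\\]|\\.)*)"', arg_str))
    return intent, args

intent, args = parse_action('click(uid="abc-123")')
print(intent, args)  # click {'uid': 'abc-123'}
```

This makes it easy to check, for example, whether the predicted intent matches the reference even when the arguments differ.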
## Raw Data
To use the raw data, you will need the `huggingface_hub` library:
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="McGill-NLP/WebLINX-full", repo_type="dataset", local_dir="./data/weblinx")
```
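The call above places the raw data under `./data/weblinx`. The precise directory layout is described in the WebLINX documentation; as a small sketch (assuming each demonstration lives in its own top-level folder, which may not match the actual layout), you can enumerate what was downloaded with the standard library:

```python
from pathlib import Path

def list_downloaded_demos(data_dir="./data/weblinx"):
    """Return the sorted names of directories under data_dir.
    Assumes one top-level folder per demonstration; consult the
    WebLINX documentation for the authoritative layout."""
    root = Path(data_dir)
    if not root.exists():
        return []
    return sorted(p.name for p in root.iterdir() if p.is_dir())

print(list_downloaded_demos())
```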
For more information on how to use this data using our [official library](https://github.com/McGill-NLP/WebLINX), please refer to the [WebLINX documentation](https://mcgill-nlp.github.io/weblinx/docs).
## Citation
If you use our dataset, please cite our work as follows:
```bibtex
@misc{lu-2024-weblinx,
title={WebLINX: Real-World Website Navigation with Multi-Turn Dialogue},
author={Xing Han Lù and Zdeněk Kasner and Siva Reddy},
year={2024},
eprint={2402.05930},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```