(Works with Mobile-Env >=4.0.)

WikiHow Task Set

The WikiHow task set is an InfoUI interaction task set based on Mobile-Env, proposed in Mobile-Env: An Evaluation Platform and Benchmark for Interactive Agents in LLM Era. WikiHow is a collaborative wiki site collecting various real-life tips, with more than 340,000 articles online. To construct the task set, 107,448 pages were crawled, and the dumped website data occupy about 88 GiB in total.

Several task definition templates are designed according to the functions of the WikiHow app, and task definitions are instantiated through the template toolkit in Mobile-Env. From the extended set, 577 tasks are sampled to form the canonical set (wikihow-canonical.tar.xz). Owing to budget limits, only 150 of these tasks are tested with the proposed LLM-based agent. These 150 tasks are given in wikihow-microcanon.tar.xz; we call this the canonical subset or the micro canonical set.
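
Both archives are plain xz-compressed tarballs; for instance, the micro canonical set can be unpacked with:

tar -xJf wikihow-microcanon.tar.xz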

Website Data Replay

The replay script for mitmproxy is given as replay_url.py. To use this replay script, the information retrieval tool Pyserini is required. Four parameters are expected to be assigned in the script:

  • The crawled data from WikiHow website (dumps in wikihow.data.tar.xz)
  • The HTML templates used to mock the search result page (templates in wikihow.data.tar.xz)
  • The indices for the search engine based on Pyserini (indices-t/indices in wikihow.data.tar.xz)
  • The metadata of the crawled articles (indices-t/docs/doc_meta.csv in wikihow.data.tar.xz)

All the required data are offered in wikihow.data.tar.xz. (The archive is about 78 GiB, and the decompressed data are about 88 GiB.) The archive is split into two pieces (wikihow.data.tar.xz.00 and wikihow.data.tar.xz.01). You can use cat to concatenate them:

cat wikihow.data.tar.xz.00 wikihow.data.tar.xz.01 >wikihow.data.tar.xz

The SHA256 checksums are provided in wikihow.data.tar.xz.sha256 to check the integrity.
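
For example, assuming the checksum file follows the standard sha256sum format, you can verify the concatenated archive and then unpack it:

# verify the integrity of the concatenated archive
sha256sum -c wikihow.data.tar.xz.sha256
# extract; this takes a while for the ~88-GiB contents
tar -xJf wikihow.data.tar.xz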

To set up the environment for the mitmproxy server:

pip install -r requirements-mitm.txt
# Then you also need to set up JRE 11.0

# or comment out the PyPI parts in the file and uncomment the conda parts
conda create -n mitmproxy --file requirements-mitm.txt 
# OpenJDK-11.0.13 will be installed automatically
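
If you take the PyPI route, the JRE can come from the system package manager; for example, on Debian or Ubuntu:

sudo apt-get install openjdk-11-jre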

The environment for the mitmproxy server can be independent of the environment for Mobile-Env.

To run the script:

mitmproxy --showhost -s replay_url.py
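
mitmproxy listens on port 8080 by default, so one way to route the Android emulator's traffic through it is to launch the emulator with an HTTP proxy option (a sketch; replace <avd-name> with your AVD):

emulator -avd <avd-name> -http-proxy http://127.0.0.1:8080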

Certificate Unpinning Plan

The syscert plan proposed by Mobile-Env works for the WikiHow app. You can complete the configuration according to the guideline of Mobile-Env. The APK package available from APKCombo is provided. Note that an AVD image of Android 11.0 (API Level 30, Google APIs) should be used to obtain the best compatibility and a root-enabled ADBD.
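
A sketch of fetching such an image and creating a matching AVD with the Android command-line tools (the AVD name wikihow-test is just an example):

sdkmanager "system-images;android-30;google_apis;x86_64"
avdmanager create avd -n wikihow-test -k "system-images;android-30;google_apis;x86_64"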

Human-Rewritten Instructions

Human-rewritten instructions for the canonical set are released under instruction_rewriting/. An AndroidEnv wrapper InstructionRewritingWrapper is provided to load the rewritten instructions (merged_doccano.json) and the public patterns (pattern-*.txt). The annotations are collected via doccano. The patterns are parsed by sentence_pattern.py.

To use InstructionRewritingWrapper, NLTK and lemminflect are needed. You can install them with:

pip install -r requirements-instruction_rewriting.txt
python -m nltk.downloader popular

If your data is not downloaded to a common place, you may need to set the NLTK_DATA environment variable. See NLTK's documentation for details.
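
For example, to download the data to a custom location and point NLTK at it:

export NLTK_DATA=/path/to/nltk_data
python -m nltk.downloader -d "$NLTK_DATA" popular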

Details of Sub-Tasks

WikiHow tasks are crafted from 16 types of sub-tasks:

  • home2search, instructing to search for an article from the home page.
  • search2article, author2article, & category2article, instructing to access an article from the search result page, the author information page, and the category content page, respectively.
  • article2about, instructing to access the about page from the article page.
  • article2author, instructing to access the author information page from the article page.
  • article2category, instructing to access the category content page from the article page.
  • article2reference, instructing to check the reference list on the article page.
  • article2rate_no, instructing to rate no for an article.
  • article2rate_yes, instructing to rate yes for an article.
  • article2share, instructing to share an article.
  • article2bookmark, instructing to bookmark an article and then check the bookmarks.
  • article2steps, crafted from the stepped_summary questions in wikihow-lists.
  • article2ingredients, crafted from the ingredients questions in wikihow-lists.
  • article2needed_items, crafted from the needed_items questions in wikihow-lists.
  • article2summary, crafted from WikiHowNFQA tasks.

A template is composed for each sub-task, containing a group of filling slots expecting keywords like the article title, the author name, the question, and the ground-truth answer. These keywords are sampled from the crawled app data or from the two QA datasets to instantiate the templates. Subsequently, the instantiated templates are concatenated into multi-stage task definitions under the constraint that the target page/element/answer (the part after 2, e.g., share from article2share) is directly on, or referenced by, the current page (the part before 2, e.g., article from article2share). For instance, home2search, search2article, and article2share chain into the multi-stage task home-search-article-share. Finally, we obtained a task set of 150 multi-stage tasks, each containing 2.68 single-stage sub-tasks on average.

Multi-stage tasks composed of different sub-tasks are distinguished by numeric suffixes. The meanings of the suffixes and the numbers of suffixed tasks in the micro canonical set are listed in the following table:

Suffix  Sub-tasks                               #Tasks
0       home-search-article-about                   18
1       home-search-article-rate_no                  6
2       home-search-article-rate_yes                10
3       home-search-article-share                   11
4       home-search-article-author[-article]         7
5       home-search-article-bookmark                13
6       home-search-article-category[-article]       9
7       home-search-article-reference               11
8       home-search-article                         25
9       home-search-steps                           15
10      home-search-needed_items                    10
11      home-search-ingredients                      5
12      home-search-summary                         10

About

This task set is developed and maintained by SJTU X-Lance. The corresponding paper is available at https://arxiv.org/abs/2305.08144.

If you find the WikiHow task set useful in your research, you can cite the project using the following BibTeX:

@article{DanyangZhang2023_MobileEnv_WikiHow,
  title     = {{Mobile-Env}: An Evaluation Platform and Benchmark for LLM-GUI Interaction},
  author    = {Danyang Zhang and
               Lu Chen and
               Zihan Zhao and
               Ruisheng Cao and
               Kai Yu},
  journal   = {CoRR},
  volume    = {abs/2305.08144},
  year      = {2023},
  url       = {https://arxiv.org/abs/2305.08144},
  eprinttype = {arXiv},
  eprint    = {2305.08144},
}