
Abstract

Existing benchmarks for grounding language in interactive environments either lack real-world linguistic elements, or prove difficult to scale up due to substantial human involvement in the collection of data or feedback signals. To bridge this gap, we develop WebShop – a simulated e-commerce website environment with 1.18 million real-world products and 12,087 crowd-sourced text instructions. Given a text instruction specifying a product requirement, an agent needs to navigate multiple types of webpages and issue diverse actions to find, customize, and purchase an item. WebShop provides several challenges for language grounding including understanding compositional instructions, query (re-)formulation, comprehending and acting on noisy text in webpages, and performing strategic exploration. We collect over 1,600 human demonstrations for the task, and train and evaluate a diverse range of agents using reinforcement learning, imitation learning, and pre-trained image and language models. Our best model achieves a task success rate of 29%, which outperforms rule-based heuristics (9.6%) but is far lower than human expert performance (59%). We also analyze agent and human trajectories and ablate various model components to provide insights for developing future agents with stronger language understanding and decision making abilities. Finally, we show that agents trained on WebShop exhibit non-trivial sim-to-real transfer when evaluated on amazon.com, indicating the potential value of WebShop in developing practical web-based agents that can operate in the wild.
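The abstract describes a loop in which an agent reads a text instruction and webpage, then issues search and click actions until it purchases an item. The sketch below illustrates that loop with a toy mock environment and a scripted policy; the class, observation strings, and action format are illustrative stand-ins, not the actual WebShop API.

```python
from dataclasses import dataclass


@dataclass
class MockWebShopEnv:
    """Toy stand-in for the environment: text observations in, text actions out."""
    steps: int = 0

    def reset(self, instruction: str) -> str:
        self.steps = 0
        return f"Instruction: {instruction}\n[search bar]"

    def step(self, action: str):
        """Return (observation, reward, done); reward is 1.0 only upon purchase."""
        self.steps += 1
        if action.startswith("click[buy"):
            return "Thank you for your purchase!", 1.0, True
        if action.startswith("search["):
            return "Results: item_A | item_B | item_C", 0.0, False
        return "Product page for item_A [buy now]", 0.0, False


def scripted_agent(obs: str) -> str:
    """Trivial hand-written policy: search, open the first result, then buy."""
    if "[search bar]" in obs:
        return "search[teal fitness strap band]"
    if obs.startswith("Results:"):
        return "click[item_A]"
    return "click[buy now]"


env = MockWebShopEnv()
obs = env.reset("teal fitbit band, price lower than 40 dollars")
done, total = False, 0.0
while not done:
    obs, reward, done = env.step(scripted_agent(obs))
    total += reward
```

A learned agent would replace `scripted_agent` with a policy (e.g. imitation- or RL-trained) mapping the noisy page text to actions; the episode reward in the real benchmark scores how well the purchased item matches the instruction's attributes, options, and price.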

WebShop Environment


Demo

Interactive Web App

Trajectories

The slides below show the step-by-step actions of trajectories generated by different agents and humans performing the task of searching for a product based on a goal instruction.

Goal Instruction: I’m looking for a quick-release replacement fitness strap band; it should match my chic teal fitbit, and price lower than 40.00 dollars

The first four slideshows showcase trajectories by an MTurk worker, a rule-based heuristic, an imitation learning agent, and an imitation learning + reinforcement learning agent, each searching for a product on WebShop given the same goal instruction.
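To give a rough sense of what a rule-based heuristic could do with an instruction like the one above, the sketch below extracts a price cap and content keywords, then scores candidate products by keyword overlap. The parsing regex, stopword list, and scoring rule are illustrative assumptions, not the paper's actual baseline.

```python
import re


def parse_instruction(instruction: str):
    """Extract a price cap ('lower than X dollars') and content keywords."""
    m = re.search(r"(?:lower|less) than (\d+(?:\.\d+)?)", instruction)
    price_cap = float(m.group(1)) if m else float("inf")
    stopwords = {"i'm", "looking", "for", "a", "an", "it", "should", "match",
                 "my", "and", "price", "lower", "than", "dollars", "the"}
    words = re.findall(r"[a-z'\-]+", instruction.lower())
    keywords = [w for w in words if w not in stopwords]
    return price_cap, keywords


def score_product(title: str, price: float, price_cap: float, keywords) -> float:
    """Fraction of instruction keywords found in the title; 0 if over budget."""
    if price > price_cap:
        return 0.0
    title_words = set(re.findall(r"[a-z'\-]+", title.lower()))
    return sum(w in title_words for w in keywords) / max(len(keywords), 1)


instruction = ("I'm looking for a quick-release replacement fitness strap band; "
               "it should match my chic teal fitbit, and price lower than "
               "40.00 dollars")
cap, kws = parse_instruction(instruction)
products = [
    ("Teal quick-release fitness strap band for Fitbit", 19.99),
    ("Leather watch band, brown", 55.00),
]
best = max(products, key=lambda p: score_product(p[0], p[1], cap, kws))
```

Heuristics like this handle surface-level matching but fail on the benchmark's harder cases (option selection, query reformulation, noisy page text), which is where the learned agents shown in the remaining slideshows come in.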

Sim-to-real Transfer: The last slideshow shows a trajectory generated by an imitation learning agent searching on www.amazon.com, achieved via sim-to-real transfer.

Citation

@article{yao2022webshop,
  title = {WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents},
  author = {Yao, Shunyu and Chen, Howard and Yang, John and Narasimhan, Karthik},
  journal = {arXiv preprint arXiv:2207.01206},
  year = {2022},
  url = {https://arxiv.org/abs/2207.01206}
}

Authors

Shunyu Yao
Howard Chen
John Yang
Karthik Narasimhan