WebGLM: Towards An Efficient Web-enhanced Question Answering System with Human Preference

📃 Paper (KDD 2023) | 💻 GitHub Repo

Introduction

WebGLM aspires to provide an efficient and cost-effective web-enhanced question-answering system using the 10-billion-parameter General Language Model (GLM). It aims to improve real-world application deployment by integrating web search and retrieval capabilities into the pre-trained language model.

WebGLM is built from the following components:

  • LLM-augmented Retriever: Enhances the retrieval of relevant web content to better aid in answering questions accurately.
  • Bootstrapped Generator: Generates human-like responses to questions, leveraging the power of the GLM to provide refined answers.
  • Human Preference-aware Scorer: Estimates the quality of generated responses by prioritizing human preferences, ensuring the system produces useful and engaging content.

This repository contains the implementation of the Bootstrapped Generator.

See our GitHub Repo for more detailed usage.
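Below is a minimal loading sketch, assuming this checkpoint follows the same `transformers` loading convention as the base GLM-10B model (`AutoModelForSeq2SeqLM` with `trust_remote_code=True`). The reference-style prompt shown is only illustrative; the authoritative generation pipeline, including the retriever and the human preference-aware scorer, lives in the GitHub repo.

```python
# Minimal sketch: load the Bootstrapped Generator checkpoint and generate an answer.
# Assumes the checkpoint is loadable like the base GLM-10B model; the prompt
# template below is illustrative, not the exact format used in training.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("THUDM/WebGLM", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/WebGLM", trust_remote_code=True)
model = model.half().cuda().eval()  # fp16 on GPU; adjust to your hardware

# Illustrative prompt: retrieved references followed by the question, with a
# GLM mask token marking where the generated answer should go.
prompt = (
    "Reference [1]: The Eiffel Tower is 330 metres tall.\n"
    "Question: How tall is the Eiffel Tower?\n"
    "Answer: [gMASK]"
)
inputs = tokenizer(prompt, return_tensors="pt")
inputs = tokenizer.build_inputs_for_generation(inputs, max_gen_length=128)
inputs = inputs.to("cuda")
outputs = model.generate(**inputs, max_length=512, eos_token_id=tokenizer.eop_token_id)
print(tokenizer.decode(outputs[0].tolist()))
```

For the full web-enhanced pipeline (search, LLM-augmented retrieval, and answer scoring), follow the instructions in the GitHub repo rather than calling the generator directly.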
