Question Answering
Transformers
English
Inference Endpoints

Model Details

Model Description

Mozi is the first large-scale language model for the scientific paper domain, supporting tasks such as question answering and emotional support. With the help of a large-scale language model and the evidence retrieval model SciDPR, Mozi generates concise and accurate responses to users' questions about specific papers and provides emotional support for academic researchers.

  • Developed by: See GitHub repo for model developers
  • Model date: Mozi was trained in May 2023.
  • Model version: This is version 1 of the model.
  • Model type: Mozi is an auto-regressive language model based on the transformer architecture. It is currently released in one size: 7B parameters.
  • Language(s) (NLP): English
  • License: Apache 2.0
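
Since the checkpoint follows the standard LLaMA format, it can be loaded with the Hugging Face transformers library. The sketch below is illustrative only: the prompt template and generation settings are assumptions, not the official Mozi inference recipe (see the GitHub repo for that).

```python
def build_prompt(question: str, evidence: str) -> str:
    """Combine retrieved evidence (e.g. from SciDPR) and a user question
    into a single prompt. This format is hypothetical, not the official
    Mozi template."""
    return f"Paper excerpt:\n{evidence}\n\nQuestion: {question}\nAnswer:"

def answer(question: str, evidence: str) -> str:
    """Generate an answer with the 7B checkpoint (requires
    `pip install transformers torch` and enough memory for a 7B model)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("DataHammer/mozi_llama_7b")
    model = AutoModelForCausalLM.from_pretrained("DataHammer/mozi_llama_7b")
    inputs = tokenizer(build_prompt(question, evidence), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

In practice the evidence passage would come from SciDPR retrieval over the target paper before being passed to `answer`.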

Model Sources

