DataHammer committed on
Commit 3097552 (1 parent: 5d01062)

Update README.md

Files changed (1): README.md (+2, −2)
README.md CHANGED
@@ -18,9 +18,9 @@ pipeline_tag: question-answering
  Mozi is the first large-scale language model for the scientific paper domain, such as question answering and emotional support. With the help of the large-scale language and evidence retrieval models, SciDPR, Mozi generates concise and accurate responses to users' questions about specific papers and provides emotional support for academic researchers.
 
  - **Developed by:** See [GitHub repo](https://github.com/gmftbyGMFTBY/science-llm) for model developers
- - **Model date:** LLaMA was trained In May. 2023.
+ - **Model date:** Mozi was trained In May. 2023.
  - **Model version:** This is version 1 of the model.
- - **Model type:** mozi_llama is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B parameters.
+ - **Model type:** Mozi is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B parameters.
  - **Language(s) (NLP):** [Apache 2.0](https://github.com/gmftbyGMFTBY/science-llm/blob/main/LICENSE)
  - **License:** English
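
For readers of this model card, below is a minimal sketch of how a 7B auto-regressive checkpoint like Mozi could be queried for paper question answering with the Hugging Face transformers library. The repository id `DataHammer/mozi-7b`, the prompt format, and the generation settings are illustrative assumptions, not taken from this commit; the released weights and the SciDPR retrieval step are documented in the [GitHub repo](https://github.com/gmftbyGMFTBY/science-llm).

```python
# Minimal usage sketch (assumptions: repo id and prompt format are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "DataHammer/mozi-7b"  # hypothetical repo id; replace with the released checkpoint

# Load tokenizer and the 7B causal language model.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Ask a question about a specific paper. In the full pipeline described above,
# evidence passages retrieved by SciDPR would be prepended to this prompt.
prompt = "Question: What problem does the paper address?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```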