

PMC_LLaMA - finetuned on PubMed Central papers

This is a ggml conversion of chaoyi-wu's PMC_LLAMA_7B_10_epoch model.

It is a LLaMA model finetuned on PubMed Central papers from the Semantic Scholar Open Research Corpus (S2ORC) dataset.

Currently I have only converted it using the new k-quant method Q5_K_M. I will gladly make more versions on request.

Other possible quantizations include: q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K
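
If you want to produce one of these other quantizations yourself, the usual route is llama.cpp's quantize tool. The sketch below is only an illustration: both file names are placeholders, and the exact spelling of the type argument can differ between llama.cpp builds.

```python
# Illustrative sketch only: invoking llama.cpp's `quantize` tool from Python.
# The binary must be built from llama.cpp first; both file names below are
# placeholders, and the accepted spelling of the type string (e.g. "q4_K_M"
# vs. "Q4_K_M") can vary between llama.cpp versions.
import subprocess

subprocess.run(
    [
        "./quantize",                         # llama.cpp quantization binary
        "PMC_LLAMA_7B_10_epoch-f16.bin",      # placeholder: unquantized GGML source file
        "PMC_LLAMA_7B_10_epoch-q4_K_M.bin",   # placeholder: output file for the new quant
        "q4_K_M",                             # target k-quant type from the list above
    ],
    check=True,
)
```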

Compatible with llama.cpp, but also with:

  • text-generation-webui
  • KoboldCpp
  • ParisNeo/GPT4All-UI
  • llama-cpp-python
  • ctransformers
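
As a quick start with one of the listed options, here is a minimal llama-cpp-python sketch. It assumes the Q5_K_M file has been downloaded locally (the file name below is a placeholder) and that your llama-cpp-python version still reads GGML files; newer releases expect GGUF.

```python
# Minimal usage sketch with llama-cpp-python (file name is a placeholder).
# Note: this is a GGML file, so a llama-cpp-python release from before the
# GGUF switch (or a converted file) is required.
from llama_cpp import Llama

llm = Llama(
    model_path="pmc_llama_7b_10_epoch.q5_K_M.bin",  # path to the downloaded Q5_K_M file
    n_ctx=512,  # matches the 512-token cutoff length used during finetuning
)

output = llm(
    "Question: What is the mechanism of action of metformin?\nAnswer:",
    max_tokens=200,
    temperature=0.2,  # conservative sampling for factual-style prompts
)
print(output["choices"][0]["text"])
```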

CAVE! (Caution!)

Being a professional in this field myself and having tested the model, I strongly advise that it be left in the hands of professionals.

This model can produce very detailed and elaborate responses, but in my opinion it confabulates quite often, which is a serious issue given the field of use.

Because its answers appear so detailed and precise, it is difficult for a layperson to tell when the model is returning facts and when it is returning bullshit.

So unless you are a subject matter expert (biology, medicine, chemistry, pharmacy, etc.), I appeal to your sense of responsibility and ask you to use the model only for testing, exploration, and just for fun. In no case should this model's answers lead to decisions that affect your health.


Here is what the author(s) write in the original model card:

This repo contains the latest version of PMC_LLaMA_7B, which is LLaMA-7b finetuned on the PMC papers in the S2ORC dataset.

Notably, different from chaoyi-wu/PMC_LLAMA_7B, this model is further trained for 10 epochs.

The model was trained with the following hyperparameters:

  • Epochs: 10
  • Batch size: 128
  • Cutoff length: 512
  • Learning rate: 2e-5

Each epoch, we sample 512 tokens per paper for training.
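
Purely as an illustration of how these values map onto a standard finetuning setup (this is not the authors' training code), a Hugging Face TrainingArguments sketch might look like this; everything except the four reported hyperparameters is an assumption:

```python
# Illustration only: the reported hyperparameters expressed as Hugging Face
# TrainingArguments. The output directory and batch handling are assumptions;
# only the epoch count, batch size, cutoff length and learning rate come
# from the card above.
from transformers import TrainingArguments

CUTOFF_LENGTH = 512  # max tokens sampled per paper in each epoch (applied when tokenizing)

training_args = TrainingArguments(
    output_dir="pmc_llama_7b_finetune",  # placeholder output directory
    num_train_epochs=10,                 # Epochs: 10
    per_device_train_batch_size=128,     # Batch size: 128 (in practice split across devices or gradient accumulation)
    learning_rate=2e-5,                  # Learning rate: 2e-5
)
```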

That's it!

If you have any further questions, feel free to contact me or start a discussion.
