Are the BLING models fine-tuned on a finance/legal-specific dataset?

#1 by rkow - opened

Are you able to share any other information about the dataset used to fine-tune the BLING models?

Many thanks

llmware org

Thanks for your question - yes, the BLING models are fine-tuned on a custom instruct dataset that, from a domain perspective, consists primarily of legal, finance, regulatory, and general business content.

Can I specify multiple documents in the context and prompt the model to choose the best context and rephrase it

llmware org

Thanks for your question on multiple documents in the context. The short answer is "results may vary" - it may take some experimentation to set up the prompt in the most effective way for your use case.

What will likely work best: (1) if you join a list of unrelated sentences and ask a fact-retrieval question answered by one of those sentences, then yes, the model will generally use the applicable information correctly; or (2) if it is a relatively simple selection task, such as listing three sections of an agreement (e.g., Section 6.3, Section 6.4, and Section 6.5) and asking which section addresses a particular topic, then results should generally be OK.
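To make case (2) concrete, here is a minimal sketch of packing several labeled sections into a single context string before prompting. The section labels and text are illustrative, not from a real agreement; the `<human>: ... <bot>:` wrapper follows the prompt format described on the BLING model cards.

```python
def build_prompt(sections, question):
    """Join labeled sections into one context block, then append the
    question using the "<human>: ... <bot>:" wrapper from the BLING
    model cards."""
    context = "\n\n".join(f"{label}: {text}" for label, text in sections)
    return f"<human>: {context}\n{question}\n<bot>:"

# Illustrative sections - placeholder text, not real contract language.
sections = [
    ("Section 6.3", "The Supplier shall maintain insurance coverage of ..."),
    ("Section 6.4", "Either party may terminate this Agreement upon ..."),
    ("Section 6.5", "All notices shall be delivered in writing to ..."),
]
prompt = build_prompt(sections, "Which section addresses termination?")
print(prompt)
```

The resulting string can be passed directly to the model; keeping each section clearly labeled gives the model an unambiguous handle to cite in its answer.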

Where the model will struggle: more complex, open-ended selection and multiple-choice tasks. Models in this size range can also fail to recognize when a question cannot be answered from a particular context. We have found it difficult to train models in the 1B-3B range to recognize these more complex instructions. Please feel free to try the "llmware/dragon" models, which are 7B-parameter models better suited to more complex tasks like this.

Thanks for the reply. Is there a plan to release the training set? That would be immensely helpful.
