
LexGPT-Beta

Description

LexGPT-Beta is a Large Language Model (LLM) fine-tuned from Mistral-7B-Instruct-v0.2. It is designed to excel at mathematical instruction and reasoning, covering a wide range of mathematical and logical tasks. The model is still in beta, with development and testing ongoing.

Model Information

  • Base Model: Mistral-7B-Instruct-v0.2, an instruction-tuned LLM known for following complex instructions. It improves on its predecessor, Mistral-7B-Instruct-v0.1, and is widely used across AI applications.
  • Fine-tuning Datasets:
  • Fine-tuning Time: Approximately 90 hours.
  • Parameters: 7.24 billion.
  • Tensor Type: BF16.
  • Censorship: sometimes censored, sometimes uncensored.

Application

LexGPT-Beta can be used for a wide range of tasks, and is strongest at mathematical reasoning and instruction comprehension. For best results, write prompts with concise, explicit instructions.
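A minimal prompt-formatting sketch is shown below. Since LexGPT-Beta is fine-tuned from Mistral-7B-Instruct-v0.2, it is reasonable to assume it expects Mistral's `[INST] … [/INST]` chat format; the exact template used during fine-tuning is an assumption here, not confirmed by this card.

```python
def format_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Mistral-Instruct prompt template.

    Assumes LexGPT-Beta inherits the [INST] format from its base model,
    Mistral-7B-Instruct-v0.2.
    """
    return f"<s>[INST] {instruction.strip()} [/INST]"


prompt = format_prompt("Solve for x: 3x + 7 = 22. Show each step.")
print(prompt)
# The formatted string can then be passed to the model, for example via
# the Hugging Face `transformers` text-generation pipeline loaded with
# model="lex-hue/LexGPT-Beta".
```

If you load the model through a tokenizer that ships a chat template, `tokenizer.apply_chat_template` can produce this format for you instead of hand-building the string.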

Comparison to ChatGPT

Preliminary assessments suggest that LexGPT-Beta performs on par with ChatGPT, particularly on mathematical tasks. However, no formal benchmark evaluation (such as HumanEval) has been run to compare the two definitively.

Development Status

LexGPT-Beta is presently in the beta phase. Input and collaboration are encouraged to expand the model's functionalities and applications.

Caution

As LexGPT-Beta is uncensored and in beta, users should exercise caution when using the model. Its outputs should be reviewed carefully, especially in sensitive contexts.

Credits

LexGPT-Beta is derived from Mistral-7B-Instruct-v0.2, created by the Mistral AI Team. The fine-tuning process leveraged datasets accessible on Hugging Face, significantly contributing to the model's specialized features.

