
This is the 4-bit model!
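
A minimal loading sketch, assuming the standard transformers API and that the 4-bit (bitsandbytes) quantization config is saved in the repo; the prompt is only an example:

```python
# Sketch: load the pre-quantized 4-bit checkpoint and generate a short completion.
# Assumes the quantization config is stored in the repo (repo id taken from this card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LeroyDyer/_LCARS_merged_4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("def reverse_string(s):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```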

Updated to include LangChain code and documentation, as the model previously could not generate correct LangChain code. Transformer documentation was also re-added (it had not been lost) as Python code fragments, as well as markdown and HTML pages. This data was fit to a loss of 0.9. (The Bible was attempted again, but the loss is still stuck around 2.4; perhaps a specialized session will be needed for that data.)
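As a rough illustration of driving the model from LangChain (a sketch only; the pipeline settings and prompt are assumptions, not part of this card):

```python
# Sketch: wrap the model in a transformers pipeline and expose it to LangChain.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_community.llms import HuggingFacePipeline

model_id = "LeroyDyer/_LCARS_merged_4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer,
                max_new_tokens=256, do_sample=False)
llm = HuggingFacePipeline(pipeline=pipe)

print(llm.invoke("Write a LangChain LLMChain that summarises a web page."))
```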

In training this model only about 2 million parameters were moved: this is easy-to-fit data, since the model can already program (there is no need for deep embedding).

A LoRA of rank 1 is useful for data that only needs to be slightly fit, or data that the model is already close to.
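
A sketch of how such a rank-1 adapter might be configured with Unsloth (the target modules and sequence length are assumptions, not the exact settings used here):

```python
# Sketch: attach a rank-1 LoRA adapter to the base model with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/Mixtral_AI_LCARS_tg_1",  # base model from this card
    max_seq_length=2048,                           # assumption
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=1,              # rank-1: only a tiny slice of parameters is trained
    lora_alpha=1,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)

# Confirm how few parameters are actually "moved" during training.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```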

Hence, for Bible training more tensors will have to be moved, making a more drastic change to the model (not greatly desirable). Instead, a specific merge candidate will have to be created and that data merged into the prime model.
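
A sketch of what such a merge candidate could look like with peft (the adapter path and output directory are hypothetical):

```python
# Sketch: merge a separately trained adapter into the prime model
# rather than stacking it on the 4-bit checkpoint.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "LeroyDyer/Mixtral_AI_LCARS_tg_1", torch_dtype=torch.float16
)
# Hypothetical adapter path for the Bible-specific LoRA.
merged = PeftModel.from_pretrained(base, "path/to/bible_adapter").merge_and_unload()
merged.save_pretrained("LCARS_bible_merge_candidate")  # hypothetical output name
```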

Uploaded model

  • Developed by: LeroyDyer
  • License: apache-2.0
  • Finetuned from model: LeroyDyer/Mixtral_AI_LCARS_tg_1

This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
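
Roughly, an Unsloth + TRL fine-tuning run is wired up as below. This is a sketch only; the dataset file and hyperparameters are placeholders, not the settings used for this model:

```python
# Sketch: supervised fine-tuning of the Unsloth-wrapped model with TRL's SFTTrainer.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/Mixtral_AI_LCARS_tg_1", max_seq_length=2048, load_in_4bit=True
)
model = FastLanguageModel.get_peft_model(
    model, r=1, lora_alpha=1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)

dataset = load_dataset("json", data_files="train.json", split="train")  # placeholder data

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumes each record has a "text" field
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```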
