Maximus Model Card

Model Details

Model Name: Maximus

Model Type: Transformer-based, built on Microsoft Phi-3 14B with a 128k-token context window

Publisher: Awels Engineering

License: MIT

Model Description: Maximus is a model designed to serve as an AI agent focused on the Maximo Application Suite. It has been trained on the full public document corpus of MAS 8.5.

Dataset

Dataset Name: awels/maximo_admin_dataset

Dataset Source: Hugging Face Datasets

Dataset License: MIT

Dataset Description: The dataset used to train Maximus consists of all the public documents available for the Maximo Application Suite. It is curated to give comprehensive coverage of the typical administrative scenarios encountered in Maximo.

Training Details

Training Data: The training data comprises 67,000 question-and-answer pairs generated by the Bonito LLM. The dataset is split into training, test, and validation sets to ensure robust model performance; a loading sketch follows below.
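
For reference, the dataset can be pulled directly from the Hugging Face Hub with the datasets library. This is a minimal sketch; the split names are assumptions based on the three-way split described above.

```python
# Minimal sketch of loading the training dataset from the Hugging Face Hub.
# Split names ("train", "test", "validation") are assumed, not confirmed.
from datasets import load_dataset

dataset = load_dataset("awels/maximo_admin_dataset")

train_set = dataset["train"]  # assumed split name
print(f"Training examples: {len(train_set)}")
print(train_set[0])           # inspect one Q&A pair
```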

Training Procedure: Maximus was trained using supervised learning with cross-entropy loss and the Adam optimizer. The training involved 1 epoch, a batch size of 4, a learning rate of 5.0e-06, and a cosine learning rate scheduler with gradient checkpointing for memory efficiency.
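
The training script itself is not published with this card; the snippet below is only a sketch of how the stated hyperparameters map onto Hugging Face transformers TrainingArguments. The output directory name is a placeholder.

```python
# Hypothetical reconstruction of the training configuration; illustrative only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="maximus-14b",        # placeholder output path
    num_train_epochs=1,              # 1 epoch, as reported above
    per_device_train_batch_size=4,   # batch size of 4
    learning_rate=5.0e-06,           # learning rate from the card
    lr_scheduler_type="cosine",      # cosine learning rate scheduler
    gradient_checkpointing=True,     # trades compute for memory efficiency
)
```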

Hardware: The model was trained on a single NVIDIA H100 SXM GPU.

Framework: The training was conducted using PyTorch.

Evaluation

Evaluation Metrics: Maximus reported the following metrics on the training dataset:

epoch                    = 1.0
total_flos               = 233585641GF
train_loss               = 1.7111
train_runtime            = 1:08:52.73
train_samples_per_second = 11.41
train_steps_per_second   = 2.853

Performance: The model achieved the following results on the evaluation dataset:

epoch                   = 1.0
eval_loss               = 1.4482
eval_runtime            = 0:03:24.92
eval_samples            = 10773
eval_samples_per_second = 57.386
eval_steps_per_second   = 14.347
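
Assuming the reported eval_loss is a mean cross-entropy in nats, it corresponds to a perplexity of exp(1.4482) ≈ 4.26 on the evaluation set.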

Intended Use

Primary Use Case: Maximus is intended to be deployed locally as part of an agent swarm, where agents collaborate to solve Maximo Application Suite related problems; a local-inference sketch follows below.
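
Because the weights are published in GGUF format, one way to run the model locally is via llama-cpp-python. This is a sketch under stated assumptions: the quantization filename, context size, and example prompt are not part of this card.

```python
# Minimal local-inference sketch using llama-cpp-python. The filename
# pattern is an assumption -- check the repository for the actual
# quantization files available.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="awels/maximusLLM-14b-128k-gguf",
    filename="*Q4_K_M.gguf",  # assumed quantization; adjust as needed
    n_ctx=8192,               # working context; the model supports up to 128k tokens
)

response = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "How do I restart the Maximo Manage pods in MAS 8.5?"}]
)
print(response["choices"][0]["message"]["content"])
```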

Limitations: This 14B model is a scaled-up counterpart of the 3B model. It reaches a noticeably lower loss than the 3B version, so its results should be better.
