Model Card for Cerebras 111M Dollyfied

This is a finetuned version of the Cerebras 111M model, trained using the Databricks Labs Dolly framework.

Model Details

Model Description

This is a finetuned version of Cerebras' 111-million-parameter model that has been trained to follow instructions.

Finetuning was done with the Databricks Dolly training tools on the Alpaca dataset, for 2 epochs.
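
The Dolly training tools format each example with an Alpaca-style instruction template. Assuming that template was used here unchanged (an assumption; verify against the training scripts), prompts at inference time would look roughly like:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```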

Uses

This is a simple GPT chatbot that has been finetuned to follow instructions. Its knowledge of facts about the world should be considered suspect at best.

Direct Use

If you find a use for it, please let me know.

[More Information Needed]

Downstream Use [optional]

[More Information Needed]

Out-of-Scope Use

Any use where any degree of accuracy is needed. FOR THE LOVE OF GOD, DO NOT FOLLOW MEDICAL ADVICE FROM THIS MODEL. Or financial advice.

[More Information Needed]

Bias, Risks, and Limitations

Limitations... yes, I am sure there are so, so many.

[More Information Needed]

How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
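
In the absence of an official snippet, here is a minimal sketch. It assumes the standard `transformers` API, the model id `Corianas/111m` (taken from the repository this card belongs to), and the Alpaca-style prompt template shown above:

```python
# Minimal sketch: load the model with Hugging Face transformers and prompt it
# with an assumed Alpaca-style instruction template (verify the template
# against the Dolly training code).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Corianas/111m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive me three uses for a paperclip.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```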

Training Details

Training Data

[More Information Needed]

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: 8xA100 (this run was done while I was downloading the model I was actually meant to be training.)
  • Minutes used: 7.5
  • Cloud Provider: Lambda GPU Cloud
  • Compute Region: USA
  • Carbon Emitted: [More Information Needed]
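
As a rough, unofficial estimate from the figures above, the Lacoste et al. (2019) approach reduces to power × time × carbon intensity. The per-GPU power draw (400 W per A100) and grid carbon intensity (~0.4 kg CO2eq/kWh) below are assumptions, not measured values:

```python
# Back-of-the-envelope carbon estimate from the figures above.
# Assumptions: 400 W per A100 (TDP), ~0.4 kg CO2eq/kWh grid intensity,
# and all 8 GPUs at full power for the whole 7.5-minute run.
gpus = 8
gpu_power_kw = 0.4            # 400 W per A100 (assumed TDP)
hours = 7.5 / 60              # 7.5 minutes
carbon_intensity = 0.4        # kg CO2eq per kWh (assumed)

energy_kwh = gpus * gpu_power_kw * hours
emissions_kg = energy_kwh * carbon_intensity
print(f"{energy_kwh:.2f} kWh, ~{emissions_kg:.2f} kg CO2eq")  # 0.40 kWh, ~0.16 kg CO2eq
```

On these assumptions the run comes out to well under a kilogram of CO2eq.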

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

  • Avg.: 24.04
  • ARC (25-shot): 19.71
  • HellaSwag (10-shot): 26.68
  • MMLU (5-shot): 25.28
  • TruthfulQA (0-shot): 43.72
  • Winogrande (5-shot): 50.2
  • GSM8K (5-shot): 0.0
  • DROP (3-shot): 2.69
Model size: 153M params (safetensors; BF16 and U8 tensor types).
