---
base_model: ./core2
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: core2
    results: []
---

# Model Card: Llama 2 - Version 7b (Embedding + Output + 1 Hidden Layer)

## Overview

- **Training progress:** WandB Training Progress
- **Model name:** Llama 2 - Version 7b
- **Total parameters:** 446 million
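
To try the checkpoint, a minimal loading-and-generation sketch with the Hugging Face `transformers` library is shown below; the repo id is assumed from the repository name and may need to be adjusted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the repository name; adjust if the checkpoint
# lives under a different Hub path.
model_id = "ccore/LLAMA2-446m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Simple greedy generation as a smoke test.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```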

## Training Data

The model is being trained on the following sequence of datasets:

1. **GPT-2 data (done):** The initial training phase used GPT-2 data and has been finalized.

2. **Wikipedia QA in Markdown (in progress):** Training continues with Wikipedia question-answering data in Markdown format.

3. **QA with rhetoric (future stages):** The model will then be further fine-tuned on question-answering data generated by various Llama models, incorporating rhetorical elements.

## Model Description

Llama 2 - Version 7b is a compact language model with a total of 446 million parameters. It consists of the token embedding, the output (LM head) layer, and a single hidden transformer layer, and it is intended to handle a wide range of natural language processing tasks. Training is conducted in multiple stages, each focused on a different dataset and objective.
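
For intuition on how a parameter count in this range follows from the reduced layout (embedding, one hidden layer, LM head), here is a rough sketch using `transformers`; the dimensions below are illustrative assumptions, not the checkpoint's actual configuration, so the printed count will not match 446M exactly.

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Illustrative single-layer Llama 2 configuration. Hidden size, head count,
# intermediate size, and vocabulary size are assumptions for this sketch.
config = LlamaConfig(
    num_hidden_layers=1,   # token embedding + one transformer block + LM head
    hidden_size=4096,
    intermediate_size=11008,
    num_attention_heads=32,
    vocab_size=32000,
)

model = LlamaForCausalLM(config)  # randomly initialized, used here only for counting
print(f"Parameters: {model.num_parameters() / 1e6:.0f}M")
```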

## Disclaimer

This model card provides an overview of the Llama 2 - Version 7b model, its training data, and intended use cases. Keep in mind that the model's performance may vary depending on the specific task or dataset. Users are encouraged to evaluate the model's suitability for their applications and to exercise caution when using it in real-world scenarios.

For any further inquiries or issues related to this model, please contact the model developers through the provided training progress link.