This repository contains the weights for a BLAST-compressed version of the LLaMA-7B model. The model is under a non-commercial license (see the LICENSE file). You should only use this repository if you have been granted access to the original LLaMA weights by filling out Meta's request form.

Model Card for cwoolee/blast-llama-4B

Model Details

Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card was automatically generated.

  • Developed by: Meta, Changwoo Lee, Soo Min Kwon, Qing Qu, Hun-Seok Kim
  • Model type: Text Generation
  • Language(s) (NLP): English
  • License: This model inherits the LLaMA license (see LICENSE).
  • Finetuned from model: huggyllama/llama-7b

Model Sources

  • Repository: https://github.com/changwoolee/BLAST
  • Paper: Changwoo Lee, Soo Min Kwon, Qing Qu, and Hun-Seok Kim. "BLAST: Block-Level Adaptive Structured Matrices for Efficient Deep Neural Network Inference." NeurIPS 2024

How to Get Started with the Model

Use the code below to get started with the model.

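A minimal loading sketch with 🤗 transformers is shown below. Because this repository contains custom modeling code for the BLAST layers (see the Inference API note below), trust_remote_code=True is assumed to be required; the prompt and generation settings are illustrative only.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cwoolee/blast-llama-4B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # weights are stored as FP16
    trust_remote_code=True,      # BLAST layers are defined by custom modeling code in the repo
    device_map="auto",
)

prompt = "Structured matrices can speed up inference because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

If the custom classes are registered differently in the repository, the exact loading call may vary; the GitHub repository linked above is the authoritative reference.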

Citation

BibTeX:

@inproceedings{
    lee2024blast,
    title={{BLAST}: Block-Level Adaptive Structured Matrices for Efficient Deep Neural Network Inference},
    author={Lee, Changwoo and Kwon, Soo Min and Qu, Qing and Kim, Hun-Seok},
    booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
    year={2024},
}
Safetensors

  • Model size: 3.56B params
  • Tensor type: FP16
Inference Examples

Inference API (serverless) does not yet support model repos that contain custom code.
