# Model Card for float-7b
This model is a fully fine-tuned version of Llama-7B on synthetically generated arithmetic tasks. It was introduced in the paper [Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking](https://arxiv.org/abs/2402.14811). It is very similar to Goat-7B, except that it was trained with full fine-tuning rather than LoRA.

For inquiries about intermediate checkpoints from the fine-tuning process, please reach out to Nikhil via email.
## Model Details

### Model Description
- Developed by: Nikhil Prakash
- Model type: Autoregressive decoder-only language model
- License: MIT License
- Finetuned from model: Llama-7B
### Model Sources

- Repository: Link
- Paper: [Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking](https://arxiv.org/abs/2402.14811)
## How to Get Started with the Model
Use the code below to get started with the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nikhil07prakash/float-7b")
model = AutoModelForCausalLM.from_pretrained("nikhil07prakash/float-7b")
```
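Since the model was fine-tuned on arithmetic tasks, a quick way to try it out is to prompt it with a simple calculation. The snippet below is a minimal generation sketch; the prompt format is an assumption for illustration, not the exact task template used during fine-tuning (see the paper for those details).

```python
# Hypothetical arithmetic prompt; the exact template used during
# fine-tuning is described in the paper, not reproduced here.
prompt = "What is 397 * 4429?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```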
## Citation

**BibTeX:**
```bibtex
@inproceedings{prakash2023fine,
  title={Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking},
  author={Prakash, Nikhil and Shaham, Tamar Rott and Haklay, Tal and Belinkov, Yonatan and Bau, David},
  booktitle={Proceedings of the 2024 International Conference on Learning Representations},
  note={arXiv:2402.14811},
  year={2024}
}
```