Tags: Text Generation · Transformers · Safetensors · llama · text-generation-inference · Inference Endpoints


This model is gated: access is granted upon accepting the AI2 ImpACT License - Medium Risk Artifacts (“MR Agreement”).


Model Card for WildLlama-7b-assistant-only

Model Description

WildLlama-7b-assistant-only is a chatbot derived from Meta's Llama-2 model (released under the Llama 2 License) and fine-tuned on the user-ChatGPT conversations in the WildChat Dataset. It is trained to predict only assistant responses. For a model that predicts both user prompts and assistant responses, see WildLlama-7b-user-assistant.

Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

Recommendations

We recommend that this model not be used for any high-impact or human-facing purposes as its biases and limitations need to be further explored. We intend this to be a research artifact to advance AI's ability to better serve human needs.

Citation

BibTeX:

@inproceedings{
  zhao2024wildchat,
  title={WildChat: 1M Chat{GPT} Interaction Logs in the Wild},
  author={Wenting Zhao and Xiang Ren and Jack Hessel and Claire Cardie and Yejin Choi and Yuntian Deng},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=Bl8u7ZRlbM}
}
@misc{deng2024wildvisopensourcevisualizer,
  title={WildVis: Open Source Visualizer for Million-Scale Chat Logs in the Wild}, 
  author={Yuntian Deng and Wenting Zhao and Jack Hessel and Xiang Ren and Claire Cardie and Yejin Choi},
  year={2024},
  eprint={2409.03753},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2409.03753}, 
}

How to Get Started with the Model

Use the code below to get started with the model.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = 'allenai/WildLlama-7b-assistant-only'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Note the </s> turn separator! The format differs slightly from allenai/WildLlama-7b-user-assistant.
# Format: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: abc ASSISTANT: def</s>USER: def ASSISTANT: adfs</s>USER: asdf
# To generate an assistant response
user_prompt = 'Write a story about a dinosaur on an airplane.'
prompt = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user_prompt} ASSISTANT:"
model_inputs = tokenizer(prompt, return_tensors='pt', add_special_tokens=False).to(device)
output = model.generate(**model_inputs, max_new_tokens=512)  # without max_new_tokens, generate() stops at a short default length

print("Output:\n" + 100 * '-')
print(tokenizer.decode(output[0], skip_special_tokens=True))
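For multi-turn conversations, the `</s>`-separated format documented in the comments above can be assembled programmatically. Below is a minimal sketch; the helper name `build_prompt` is ours for illustration and not part of the model's API:

```python
# System preamble used by the model's prompt format (copied from the card above).
SYSTEM_PREAMBLE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(history, user_message):
    """Build a WildLlama-7b-assistant-only prompt.

    history: list of (user, assistant) pairs from earlier turns.
    user_message: the new user turn to be answered.
    """
    prompt = SYSTEM_PREAMBLE + " "
    # Completed turns are separated by </s> with no intervening whitespace.
    for user_turn, assistant_turn in history:
        prompt += f"USER: {user_turn} ASSISTANT: {assistant_turn}</s>"
    # End with an open ASSISTANT: tag so the model generates the next response.
    prompt += f"USER: {user_message} ASSISTANT:"
    return prompt
```

The resulting string can be passed to `tokenizer(...)` exactly as in the single-turn example above (keeping `add_special_tokens=False`, since the `</s>` separators are already spelled out in the prompt).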
Model size: 6.74B parameters (F32 tensors, Safetensors format).
