---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
  - conversational
base_model: google/gemma-2-27b-it
---

# DataGemma RIG model card

**Resources and Technical Documentation**:

**Terms of Use**: Terms

**Authors**: Google

## Model Information

### Description

DataGemma is a series of fine-tuned Gemma 2 models that help LLMs access and incorporate reliable public statistical data from Data Commons into their responses. DataGemma RIG implements the retrieval interleaved generation (RIG) approach (based on tool-use approaches): the model is trained to annotate its responses with natural language queries to Data Commons’ existing natural language interface wherever statistics appear. More information can be found in this research paper.

### Inputs and outputs

- **Input:** Text string, such as a question or a prompt.
- **Output:** Generated English-language text in response to the input, in which each statistic is annotated with `[DC(<natural language query to get the statistic from Data Commons>)]` (see the parsing sketch below).
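
These annotations are what the rest of the RIG pipeline resolves against Data Commons. As a hedged illustration of working with the format above (not part of the official pipeline; the helper name and example string are hypothetical), a minimal parser might look like this:

```python
import re

def extract_dc_queries(answer: str) -> list[str]:
    # Pull the natural language queries out of [DC(<query>)] annotations.
    return re.findall(r'\[DC\((.*?)\)\]', answer)

# Illustrative, made-up response text in the annotated format.
annotated = ('Sunnyvale has a population of '
             '[DC(what is the population of Sunnyvale, CA?)] about 150,000.')
print(extract_dc_queries(annotated))
# ['what is the population of Sunnyvale, CA?']
```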

### Usage

Below we provide a code snippet for running the fine-tuned model, which is just one step in the full RIG approach explained in the DataGemma paper. You can try out the entire RIG approach in this Colab notebook.

To run this model, first install the dependencies with `pip install -U transformers bitsandbytes accelerate`, then run the snippet below.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Quantize the 27B model to 4-bit NF4 so it fits in less GPU memory.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = 'google/datagemma-rig-27b-it'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map='auto',
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)

input_text = 'What are some interesting trends in Sunnyvale spanning gender, age, race, immigration, health conditions, economic conditions, crime and education?'
inputs = tokenizer(input_text, return_tensors='pt').to(model.device)

outputs = model.generate(**inputs, max_new_tokens=4096)
# Decode only the newly generated tokens, skipping the echoed prompt.
answer = tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0]
print(answer)
```
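
The snippet above passes the prompt as raw text, matching the pipeline in the paper. Since DataGemma RIG is fine-tuned from an instruction-tuned Gemma 2 model, you could also experiment with Gemma's chat template; this variant is an untested sketch (it reuses `tokenizer`, `model`, and `input_text` from the snippet above), not the officially documented invocation:

```python
# Sketch: same generation call, with the prompt wrapped in Gemma's chat template.
messages = [{'role': 'user', 'content': input_text}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt',
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=4096)
answer = tokenizer.batch_decode(outputs[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
print(answer)
```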

## Citation

TODO

## Model Data

The base model was trained on a dataset of text data that includes a wide variety of sources; see the Gemma 2 documentation for more details. The DataGemma RIG model is fine-tuned on synthetically generated data. More details can be found in the DataGemma paper.

## Implementation Information

Like Gemma, DataGemma RIG was trained on TPUv5e, using JAX.

## Evaluation

Evaluation of the model was done as part of the evaluation of the full RIG workflow and is documented in the DataGemma paper.

## Ethics and Safety

We are releasing an early version of the models. They are meant for trusted tester use (primarily for academic and research purposes) and are not yet ready for commercial or general public use. This version was trained on a very small corpus of examples and may exhibit unintended, and at times controversial or inflammatory, behavior. Please anticipate errors and limitations as we actively develop this LLM interface.

- We red-teamed and checked the Data Commons Natural Language interface pre-launch against a set of potentially dangerous queries that could result in misleading, controversial, or inflammatory results.
- We ran these same queries against the outputs of the RIG and RAG models, finding a few examples where query responses were controversial, but not dangerous.
- As this model is meant purely for academic and research purposes, it has not been subjected to our usual safety evaluations.

## Usage and Limitations

These models have certain limitations that users should be aware of.

This is a very early version of DataGemma RIG. It is meant for trusted tester use (primarily for academic and research use) and is not yet ready for commercial or general public use. This version was trained on a very small corpus of examples and may exhibit unintended, and at times controversial or inflammatory, behavior. Please anticipate errors and limitations as we actively develop this large language model interface.

Your feedback and evaluations are critical to refining DataGemma's performance and will directly contribute to its training process. Known limitations are detailed in the DataGemma paper, and we encourage you to consult it for a comprehensive understanding of DataGemma's current capabilities.