The GeoV model was designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER), developed by Georges Harik and Varuna Jayasiri.
RoPER, in addition to using relative positions in the attention score calculation via RoPE embeddings, adds relative positional information explicitly to value embeddings. Specifically, it incorporates the relative positions of the tokens being attended to. RoPER has given better performance in some algorithmic tasks, and seems comparable to RoPE in language modeling.
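The value-rotation idea can be sketched numerically. The following is a minimal illustration (not the GeoV implementation; the helper name and shapes are ours) of why rotating each value by its absolute position and un-rotating the attention output by the current position leaves each value tagged with its relative distance:

```python
import numpy as np

def rope_rotate(x, pos, base=10000):
    # Rotate consecutive feature pairs of x by angles pos * theta_i,
    # with theta_i = base**(-2i/d) as in standard RoPE.
    d = x.shape[-1]
    theta = base ** (-np.arange(0, d, 2) / d)   # (d/2,)
    angles = pos * theta
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Rotations compose additively per pair: rotating a value by its
# position n and un-rotating the result by the current position m
# is the same as rotating by the relative distance (n - m).
d, m, n = 8, 5, 2
v = np.random.randn(d)
roundtrip = rope_rotate(rope_rotate(v, n), -m)
direct = rope_rotate(v, n - m)
assert np.allclose(roundtrip, direct)
```

This additivity is what lets RoPER inject relative (rather than absolute) positions into the value path without computing pairwise distances explicitly.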
The GeoV tokenizer uses SentencePiece unigram language model and tokenizes symbols, digits and new line characters separately, in order to achieve better performance on mathematical content and code.
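The effect of splitting digits, symbols, and new lines into standalone pieces can be pictured with a small pre-splitting pass. The regex below is purely illustrative (an assumption, not the tokenizer's actual rule set), but it shows why a number like "12" never becomes a single opaque subword:

```python
import re

def pre_split(text):
    # Illustrative only: emit each digit, each symbol, and each newline
    # as its own piece; keep letter runs and space runs intact.
    return re.findall(r"\d|\n|[ ]+|[A-Za-z_]+|[^\w\s]", text)

print(pre_split("x=12\n"))  # ['x', '=', '1', '2', '\n']
```

Keeping digits as individual tokens gives the model a consistent view of numbers, which tends to help on arithmetic-heavy text and code.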
This model was contributed by gharik and vpj.
We have shared a 9B-parameter pre-trained model at GeoV/GeoV-9b. We plan to release checkpoints roughly every 20B tokens of training from here until around 300B tokens. We will also train smaller and larger versions, with the aim of making models of a range of sizes broadly available.
The original model code is at geov-ai/geov.
The generate() method can be used to generate text using the GeoV model.
```python
>>> from transformers import GeoVForCausalLM, GeoVTokenizer

>>> model = GeoVForCausalLM.from_pretrained("GeoV/GeoV-9b")
>>> tokenizer = GeoVTokenizer.from_pretrained("GeoV/GeoV-9b")

>>> prompt = "In mathematics, topology is the study of"

>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids

>>> gen_tokens = model.generate(
...     input_ids,
...     do_sample=True,
...     temperature=0.9,
...     max_length=100,
... )
>>> gen_text = tokenizer.batch_decode(gen_tokens)[0]
```
( vocab_size = 65536, hidden_size = 5120, num_hidden_layers = 32, num_attention_heads = 40, intermediate_size = 20480, layer_norm_eps = 0.0001, rotary_emb_base = 10000, max_position_embeddings = 2048, initializer_range = 0.02, use_extra_biases_ffn = False, use_cache = True, bos_token_id = 0, eos_token_id = 2, tie_word_embeddings = False, use_parallel_residual = False, **kwargs )
Parameters

- **vocab_size** (`int`, *optional*, defaults to 65536) — Vocabulary size of the GeoV model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling GeoVModel.
- **hidden_size** (`int`, *optional*, defaults to 5120) — Dimension of the encoder layers and the pooler layer.
- **num_hidden_layers** (`int`, *optional*, defaults to 32) — Number of hidden layers in the Transformer encoder.
- **num_attention_heads** (`int`, *optional*, defaults to 40) — Number of attention heads for each attention layer in the Transformer encoder.
- **intermediate_size** (`int`, *optional*, defaults to 20480) — Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- **rotary_emb_base** (`int`, *optional*, defaults to 10000) — Base for computing the rotary embedding frequencies.
- **max_position_embeddings** (`int`, *optional*, defaults to 2048) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- **initializer_range** (`float`, *optional*, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- **layer_norm_eps** (`float`, *optional*, defaults to 1e-4) — The epsilon used by the layer normalization layers.
- **use_cache** (`bool`, *optional*, defaults to `True`) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`.
- **use_extra_biases_ffn** (`bool`, *optional*, defaults to `False`) — Whether or not to have extra bias parameters in the final layer of FFN modules.
- **use_parallel_residual** (`bool`, *optional*, defaults to `False`) — Whether to use a "parallel" formulation in each Transformer layer, which can provide a slight training speedup at large scales (e.g. 20B).
This is the configuration class to store the configuration of a GeoVModel. It is used to instantiate a GeoV model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GeoV GeoV/GeoV-9b architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
```python
>>> from transformers import GeoVConfig, GeoVModel

>>> # Initializing a GeoV configuration
>>> configuration = GeoVConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = GeoVModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
( vocab_file, bos_token = '<s>', eos_token = '</s>', unk_token = '<unk>', new_line_token_id = 65499, **kwargs )
Parameters

- **vocab_file** (`str`) — SentencePiece file (generally has a *.spm* extension) that contains the vocabulary necessary to instantiate a tokenizer.
- **bos_token** (`str`, *optional*, defaults to `"<s>"`) — The beginning of sequence token that was used during pretraining.
- **eos_token** (`str`, *optional*, defaults to `"</s>"`) — The end of sequence token.
- **unk_token** (`str`, *optional*, defaults to `"<unk>"`) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- **new_line_token_id** (`int`, *optional*, defaults to 65499) — The token id of the new line character.
- **sp_model** (`SentencePieceProcessor`) — The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Construct a GeoV tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
Converts a sequence of tokens (strings for sub-words) into a single string.
( config: GeoVConfig )
Parameters
The bare GeoV Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, seq_len)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, seq_len)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**.
- **position_ids** (`torch.LongTensor` of shape `(batch_size, seq_len)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.n_positions - 1]`.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **past_key_values** (`Tuple[Tuple[torch.FloatTensor]]` of length `n_layers`, with each tuple having 2 tensors of shape `(batch_size, n_heads, seq_len - 1, head_size)`, *optional*) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns

transformers.modeling_outputs.BaseModelOutputWithPast or `tuple(torch.FloatTensor)`

A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (GeoVConfig) and inputs.

- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model. If `past_key_values` is used, only the last hidden-state of the sequences, of shape `(batch_size, 1, hidden_size)`, is output.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and optionally, if `config.is_encoder_decoder=True`, 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally, if `config.is_encoder_decoder=True`, in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The GeoVModel forward method overrides the `__call__` special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
To get proper results, you should use GeoV/GeoV-9b. If you get out-of-memory when loading that checkpoint, you can try adding `device_map="auto"` in the `from_pretrained` call.
Example:
```python
>>> from transformers import AutoTokenizer, GeoVModel
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("GeoV/GeoV-9b")
>>> model = GeoVModel.from_pretrained("GeoV/GeoV-9b")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
```
( config: GeoVConfig )
Parameters
GeoV Model with a language modeling head on top for CLM fine-tuning. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters

- **input_ids** (`torch.LongTensor` of shape `(batch_size, seq_len)`) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, seq_len)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**.
- **position_ids** (`torch.LongTensor` of shape `(batch_size, seq_len)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.n_positions - 1]`.
- **head_mask** (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*) — Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: 1 indicates the head is **not masked**, 0 indicates the head is **masked**.
- **past_key_values** (`Tuple[Tuple[torch.FloatTensor]]` of length `n_layers`, with each tuple having 2 tensors of shape `(batch_size, n_heads, seq_len - 1, head_size)`, *optional*) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
- **use_cache** (`bool`, *optional*) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- **output_attentions** (`bool`, *optional*) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- **output_hidden_states** (`bool`, *optional*) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- **return_dict** (`bool`, *optional*) — Whether or not to return a ModelOutput instead of a plain tuple.
- **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
Returns

transformers.modeling_outputs.CausalLMOutputWithPast or `tuple(torch.FloatTensor)`

A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (GeoVConfig) and inputs.

- **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) — Language modeling loss (for next-token prediction).
- **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The GeoVForCausalLM forward method overrides the `__call__` special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
```python
>>> from transformers import AutoTokenizer, GeoVForCausalLM, GeoVConfig
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("GeoV/GeoV-9b")
>>> model = GeoVForCausalLM.from_pretrained("GeoV/GeoV-9b")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> prediction_logits = outputs.logits
```
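The `-100` masking convention for `labels` can be mimicked with a small, self-contained sketch. This is a hedged illustration, not the library's loss code (Transformers causal LMs also typically shift labels internally for next-token prediction, which is omitted here):

```python
import numpy as np

def clm_loss(logits, labels, ignore_index=-100):
    # Mean cross-entropy over positions whose label is not ignore_index,
    # mirroring how labels == -100 are masked out of the LM loss.
    logits = logits - logits.max(axis=-1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    mask = labels != ignore_index
    # Replace ignored labels with a valid index (0) just for gathering;
    # the mask zeroes their contribution afterwards.
    picked = logp[np.arange(len(labels)), np.where(mask, labels, 0)]
    return -(picked * mask).sum() / mask.sum()

logits = np.zeros((4, 10))              # uniform predictions over 10 tokens
labels = np.array([3, -100, 7, -100])   # two positions masked out
loss = clm_loss(logits, labels)
assert np.isclose(loss, np.log(10))     # uniform model: loss = ln(vocab_size)
```

Setting prompt positions to `-100` in `labels` is the standard way to fine-tune only on the completion part of each training example.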