---
license: apache-2.0
---

# Model Card for Extended-Mind-Llama-2-70b-chat

* GitHub: https://github.com/normal-computing/extended-mind-transformers/
* ArXiv: https://arxiv.org/abs/2406.02332
* Developed by: Normal Computing, adapted from Meta
* License: Apache 2.0

Original architecture and code by Meta.

This model is part of the Extended Mind Transformers collection and implements the methods described in our [paper](https://arxiv.org/abs/2406.02332). It retrieves and attends to an external cache of key-value pairs (or memories), and has not been fine-tuned: the original model weights are unchanged.

## Model Usage

### External Memory

Passing external memories to the model is easy. Simply pass the token ids to the model during instantiation, as the following examples illustrate. Generating and caching the memories is handled internally, during the first `model.generate()` call. You can update the memories using the following sequence of commands:

```python
model.clear_memories()
model.memory_ids = list_of_new_token_ids
```

Set `trust_remote_code=True` to avoid warnings. Pass the memories to the model as a list of token ids.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

ag_wiki_entry = """Alexander Grothendieck (/ˈɡroʊtəndiːk/; German pronunciation: [ˌalɛˈksandɐ ˈɡʁoːtn̩ˌdiːk] (listen); French: [ɡʁɔtɛndik]; 28 March 1928 – 13 November 2014) was a stateless (and then, since 1971, French) mathematician who became the leading figure in the creation of modern algebraic geometry.[7][8] His research extended the scope of the field and added elements of commutative algebra, homological algebra, sheaf theory, and category theory to its foundations, while his so-called "relative" perspective led to revolutionary advances in many areas of pure mathematics.[7][9] He is considered by many to be the greatest mathematician of the twentieth century.[10][11]"""

tokenizer = AutoTokenizer.from_pretrained("normalcomputing/extended-mind-llama-2-70b-chat")
memories = tokenizer(ag_wiki_entry).input_ids

model = AutoModelForCausalLM.from_pretrained(
    "normalcomputing/extended-mind-llama-2-70b-chat",
    external_memories=memories,
    trust_remote_code=True,
)
```

After this, you can generate text with the model as usual. The model will automatically use the memories during generation. You can update any config parameters (we set `topk` below) by passing new values to the `model.generate()` method.

```python
inputs = "When did Alexander Grothendieck become a French citizen?"
input_ids = tokenizer(inputs, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_length=40, topk=2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Citations

By setting `output_retrieved_memory_idx=True` in the `model.generate()` method, you can retrieve the memory indices used during generation. We walk through an example in the [demo notebook](), and sketch the idea below.
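A minimal sketch, reusing `model` and `tokenizer` from the examples above. The exact key under which the indices appear in the generation output (`retrieved_memory_idx` here) is an assumption; see the demo notebook for the authoritative walkthrough.

```python
# Sketch: inspect which external memories were retrieved during generation.
# Assumes the generation output exposes the indices as `retrieved_memory_idx`.
input_ids = tokenizer(
    "When did Alexander Grothendieck become a French citizen?",
    return_tensors="pt",
).input_ids

outputs = model.generate(
    input_ids,
    max_length=40,
    output_retrieved_memory_idx=True,
    return_dict_in_generate=True,
)

print(tokenizer.decode(outputs["sequences"][0], skip_special_tokens=True))
retrieved = outputs["retrieved_memory_idx"]  # indices into the external memory cache
print(retrieved)
```

Mapping these indices back to the `memories` token ids shows which passages of the external cache informed each generated token, which is the basis for producing citations.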
### Additional configuration

The model accepts several other configuration parameters:

* `memory_type` (`string`, *optional*, defaults to `manual`): Whether to store external memories manually or in a vector database.
* `mask_by_sim` (`bool`, *optional*, defaults to `True`): Whether or not to mask retrieved memories by similarity.
* `sim_threshold` (`float`, *optional*, defaults to `0.25`): Threshold for masking retrieved memories.
* `tokenizer_all_special_ids` (`list`, *optional*, defaults to `[0, 50278]`): Ids for special tokens to remove from memories.
* `remove_special_tokens` (`bool`, *optional*, defaults to `True`): Remove memories that correspond to tokenizer special ids.

Additionally, the stride used to compute the memory representations can be set within the `generate_cache()` method. Smaller strides generate higher-quality representations, while larger strides require fewer computations. A configuration sketch follows.
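The sketch below loads the model with non-default retrieval settings. It assumes these parameters are accepted at load time the same way `external_memories` is; the values shown are illustrative, not recommendations.

```python
# Sketch: load the model with custom memory-retrieval settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "normalcomputing/extended-mind-llama-2-70b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
memories = tokenizer("Some reference text to attend to.").input_ids

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    external_memories=memories,
    memory_type="manual",        # or a vector-database backend
    mask_by_sim=True,            # mask retrieved memories by similarity
    sim_threshold=0.25,          # similarity threshold for masking
    remove_special_tokens=True,  # drop memories for special tokens
    trust_remote_code=True,
)
```

Per the note above, the stride used when computing memory representations is set within `generate_cache()`.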
## Limitations

This model is part of ongoing research at Normal Computing.