
Model Card for Walsh_Instruct-1.7b

Model Details

  • Model Dimension: 2048
  • Hidden Layers: 32
  • Attention Heads: 32
  • Feedforward Dimension: 8192
  • Feedforward Network Type: Conventional MLP with GeLU activation
  • Vocabulary Size: 32000
  • Max Sequence Length: 16K (14-bit absolute positional encoding via Walsh matrix)
  • Weight Initialization: DeepNet, https://arxiv.org/abs/2203.00555
  • Pretraining Datasets: RedPajama-Data-1T, mostly "books" and some Wikipedia.
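
For orientation, here are the same hyperparameters as a plain Python dict; the field names are my own shorthand, not necessarily those used by the repo's custom configuration class:

walsh_config = dict(
    d_model=2048,                   # model dimension
    num_hidden_layers=32,
    num_attention_heads=32,
    d_feedforward=8192,             # conventional MLP with GeLU activation
    vocab_size=32000,
    max_sequence_length=16 * 1024,  # 2**14 positions, hence the 14-bit codes
)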

Model Description

This is an instruction-tuned fork of my "dinalt/walsh-1-7b" model... mostly for fun.

Hadamard-Walsh 1.7B is an experimental model using a new positional encoder. The encoder represents absolute positions as combinations of rows from the Hadamard-Walsh matrix (https://en.wikipedia.org/wiki/Hadamard_code). Each row corresponds to one binary digit of the positional code: the presence of a row codes for a one, and its absence for a zero. During training, the base offset into the sequence is randomly chosen for each batch, so the model sees positional codes from across the full range. As a result, the model handles sequences much longer than those seen in training.
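
A minimal sketch of the idea follows; how the actual encoder combines and scales the selected rows may differ:

import torch

def hadamard_walsh_matrix(k: int) -> torch.Tensor:
    # Sylvester construction: a 2**k x 2**k matrix with +1/-1 entries.
    h = torch.tensor([[1.0]])
    for _ in range(k):
        h = torch.cat(
            [torch.cat([h, h], dim=1), torch.cat([h, -h], dim=1)], dim=0
        )
    return h

def walsh_positions(positions: torch.Tensor, d_model: int = 2048, bits: int = 14) -> torch.Tensor:
    # One matrix row per binary digit: bit i of a position selects row i.
    k = (d_model - 1).bit_length()  # 2**11 == 2048
    rows = hadamard_walsh_matrix(k)[:bits, :d_model]
    bit_mask = (positions.unsqueeze(-1) >> torch.arange(bits)) & 1
    # Summing the rows selected by the set bits yields the position's code.
    return bit_mask.float() @ rows

# During training, a random base offset per batch exposes the model to codes
# from across the full 16K range, e.g.:
# offset = int(torch.randint(0, 16384 - seq_len, (1,)))
# codes = walsh_positions(torch.arange(seq_len) + offset)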

Aside from the unusual positional encoder, the most interesting aspect of this model is the application of DITTO training:

Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation https://arxiv.org/abs/2206.02369

As described in the paper, the procedure is very effective at eliminating sentence-level repetition, and it also reduces perplexity slightly.
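
For reference, here is a rough sketch of the penalization term applied to a pseudo-repetitive sample (one sentence repeated many times), as I read the paper; the "ditto-loss" function in the model implementation is the authoritative version:

import torch

def ditto_penalization(token_probs: torch.Tensor, sent_len: int, lam: float = 0.5) -> torch.Tensor:
    # token_probs: (batch, seq) probabilities the model assigns to the target
    # tokens of a sequence built by repeating one sentence of `sent_len` tokens.
    # Each token's probability is pushed toward lam times the probability of
    # the same token one repetition earlier, so probabilities cannot keep
    # growing as the sentence repeats (lam < 1 actively damps them).
    p_curr = token_probs[:, sent_len:]
    p_prev = token_probs[:, :-sent_len].detach()  # soft target, gradient stopped
    # Loss is zero when p_curr == lam * p_prev; the epsilon guards log(0).
    return -torch.log(1.0 - (p_curr - lam * p_prev).abs() + 1e-6).mean()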

I will see about posting the code for running the training and generating a DITTO dataset later, although the "ditto-loss" function is already in the model implementation.

  • Developed by: Jason dinAlt
  • Model type: Causal language model for instruction following and text generation.

Uses

This is a toy instruction-following model. It's occasionally reliable at following directions.

Direct Use

[More Information Needed]

Bias, Risks, and Limitations

This is an uncensored instruction following model. No attempt has been made to make the model "safe." It may offend your sensibilities. It will likely provide inaccurate information. Use at your own risk. Whatever you do, don't put it in charge of the global defense grid!

How to Get Started with the Model

The easiest way to get started with the model is to use text-generation-webui, which needs to be started with the "--trust-remote-code" flag.

https://github.com/oobabooga/text-generation-webui

It appears to work best with the "Big O" and "Simple-1" generation presets.

Prompt Format

As an instruction model, it has been trained to use the ChatML prompt format:

<|im_start|>system 
Provide some context and/or instructions to the model.
<|im_end|> 
<|im_start|>user 
The user’s message goes here
<|im_end|> 
<|im_start|>assistant 

For details, see: https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/ai-services/openai/includes/chat-markup-language.md#chatml
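
For example, a small helper for assembling such a prompt (the function name is my own; if the bundled tokenizer defines a chat template, tokenizer.apply_chat_template is the more robust route):

def chatml_prompt(system: str, user: str) -> str:
    # Ends with the assistant header so that generation continues as the reply.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )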

Loading:

The model implementation is all my own, so you will need to pass "trust_remote_code=True" to load the model.

import torch
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
)

model_id = "dinalt/walsh_instruct-1-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    # flash_attention_2 requires bfloat16 or float16
    torch_dtype=torch.bfloat16,
    # One of ["flash_attention_2", "sdpa", "eager"]
    attn_implementation="flash_attention_2",
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
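
From there, a minimal generation sketch; the sampling settings below are illustrative rather than the presets mentioned above, and a stopping criterion for <|im_end|> (see the link below) would normally be added:

# flash_attention_2 requires a CUDA device.
model = model.to("cuda")

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about winter.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0, inputs.input_ids.shape[-1]:], skip_special_tokens=True))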

For batch instruction generation, see my example code here: https://discuss.huggingface.co/t/implimentation-of-stopping-criteria-list/20040/16?u=dinalt

Training Details

Training Data

Pretraining: RedPajama-Data-1T, mostly "books" and some Wikipedia (see Model Details above). Instruction-tuning data: [More Information Needed]

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Model Examination [optional]

[More Information Needed]

Environmental Impact

It keeps my house warm in the winter...

Technical Specifications [optional]

Model Architecture and Objective

Decoder-only transformer trained with a causal language modeling objective. Notable components: Hadamard-Walsh absolute positional encoding, conventional MLP feedforward blocks with GeLU activation, and DeepNet weight initialization. DITTO training was additionally applied to mitigate repetition (see Model Description).

Compute Infrastructure

Hardware

6 × RTX 4090

Software

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]
