daviswer committed on
Commit
8866072
1 Parent(s): b906bb5

Condense/contextualize description

Files changed (1)
  1. README.md +5 -6
README.md CHANGED
```diff
@@ -5,12 +5,11 @@ license: llama2
 ## Description
 
 This model is intended to be used as an accelerator for [llama 13B (chat)](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) and takes inspiration
-from the Medusa architecture and modifies the MLP into a multi-stage MLP, where each stage predicts
-a single token in the draft. Each stage takes as input both a state vector and sampled token embedding
-from the prior stage (the base model can be considered stage 0). The inputs are projected and passed
-through a LayerNorm/GeLU activation, forming a new state vector. This state vector is used to predict
-the next draft token, which, with the new state vector, acts as input for the next stage of prediction.
-We sample multiple tokens at each stage, and emit a tree of candidate suffixes to evaluate in parallel.
+from the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts
+a single token in the draft based on both a state vector and sampled token
+from the prior stage (the base model can be considered stage 0).
+The state vector from the base model provides contextual information to the accelerator,
+while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.
 
 ## Code
 
```
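For readers skimming the diff, here is a minimal PyTorch sketch of the staged design both versions of the description refer to: each stage mixes the prior stage's state vector with the sampled token's embedding, applies a LayerNorm/GeLU to form a new state, and predicts the next draft token, with top-k sampling at each stage producing a tree of candidate suffixes. The module name `SpeculatorStage`, the layer shapes, and the `draft_tree` helper are illustrative assumptions, not this repository's actual implementation.

```python
import torch
import torch.nn as nn

class SpeculatorStage(nn.Module):
    """One stage of the multi-stage MLP (hypothetical sketch).

    Combines the prior stage's state vector with the embedding of the token
    sampled at that stage (the base model acts as stage 0), passes the
    projected sum through LayerNorm/GeLU to form a new state vector, and
    uses that state to predict the next draft token.
    """

    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.state_proj = nn.Linear(d_model, d_model, bias=False)
        self.emb_proj = nn.Linear(d_model, d_model, bias=False)
        self.norm = nn.LayerNorm(d_model)
        self.act = nn.GELU()
        self.head = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, state, token_emb):
        # Project both inputs, then LayerNorm/GeLU -> new state vector.
        new_state = self.act(self.norm(self.state_proj(state) + self.emb_proj(token_emb)))
        # The new state predicts logits for this stage's draft token.
        return new_state, self.head(new_state)


def draft_tree(stages, embed, state, last_token, k=2):
    """Keep the top-k tokens at each stage, yielding a tree of
    k ** len(stages) candidate suffixes to verify in parallel."""
    beams = [(state, [last_token])]
    for stage in stages:
        grown = []
        for s, toks in beams:
            new_s, logits = stage(s, embed(torch.tensor(toks[-1])))
            for tok in logits.topk(k).indices.tolist():
                grown.append((new_s, toks + [tok]))
        beams = grown
    return [toks[1:] for _, toks in beams]  # drop the seed token
```

Under these illustrative assumptions, three stages with k=2 would emit eight 3-token candidate suffixes per step for the base model to evaluate in parallel.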