---
license: mit
language:
  - en
library_name: transformers
---

This is a GQA (grouped-query attention) version of the original model facebook/opt-125m. The original MHA architecture is preserved: rather than reducing each group to a single K/V head, all K/V heads belonging to the same group carry the same mean-pooled K and V values. Each layer therefore has 6 groups of KV heads instead of the 12 distinct KV heads of the original MHA implementation.
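
Below is a minimal sketch of how such a conversion could be reproduced with transformers. It assumes groups of two consecutive heads and mean-pooling of the `k_proj`/`v_proj` weights per group; the exact grouping and pooling script used for this checkpoint are not specified, so treat this as illustrative rather than the authors' method.

```python
import torch
from transformers import OPTForCausalLM

model = OPTForCausalLM.from_pretrained("facebook/opt-125m")

num_heads = model.config.num_attention_heads      # 12 for opt-125m
head_dim = model.config.hidden_size // num_heads  # 64
num_groups = 6                                    # target: 6 KV groups per layer
group_size = num_heads // num_groups              # assumed: 2 consecutive heads per group

with torch.no_grad():
    for layer in model.model.decoder.layers:
        for proj in (layer.self_attn.k_proj, layer.self_attn.v_proj):
            # Split the projection into per-head blocks: (num_heads, head_dim, hidden)
            w = proj.weight.view(num_heads, head_dim, -1)
            b = proj.bias.view(num_heads, head_dim)
            # Mean-pool the heads within each group, then copy the pooled values
            # back to every head of the group (the MHA weight shapes stay unchanged).
            w_pooled = w.view(num_groups, group_size, head_dim, -1).mean(1, keepdim=True)
            b_pooled = b.view(num_groups, group_size, head_dim).mean(1, keepdim=True)
            proj.weight.copy_(w_pooled.expand(-1, group_size, -1, -1).reshape_as(proj.weight))
            proj.bias.copy_(b_pooled.expand(-1, group_size, -1).reshape_as(proj.bias))
```

Because the pooled values are written back into the full set of 12 K/V projections, the resulting checkpoint loads with the standard OPT attention code while behaving like a 6-group GQA model.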