Reproducing the throughput results
Dear authors,
thank you for the great work and for making the model public.
I would like to reproduce the throughput results from Figure 2. Could you assist me with this?
Currently, with the transformers library and all the required Hymba dependencies installed, Hymba is much slower than Llama 1B in my generation experiments.
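For reference, this is roughly how I time generation. It is a minimal sketch: the `measure_throughput` helper and its parameters are my own naming, not from the Hymba repo, and you would pass in a closure that calls `model.generate(...)` for the model under test.

```python
import time

def measure_throughput(generate_fn, batch_size, new_tokens, warmup=1, iters=3):
    """Time a generation callable and report aggregate tokens/second.

    generate_fn() is expected to produce `new_tokens` tokens for each of
    `batch_size` sequences (e.g. a lambda wrapping model.generate).
    """
    for _ in range(warmup):
        generate_fn()  # warm up kernels / allocator before timing
    start = time.perf_counter()
    for _ in range(iters):
        generate_fn()
    elapsed = time.perf_counter() - start
    return batch_size * new_tokens * iters / elapsed
```

Note that with CUDA models you would also need to synchronize the device (`torch.cuda.synchronize()`) before reading the clock, otherwise the timing only covers kernel launches.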
Thanks in advance!
The single-stream generation speed is highly disappointing. I suspect their "throughput" is just what the paper says it is: measured at sequence length 8k and batch size 128, which are of course quite unfavorable conditions for the transformer model they compare against. But even so, the reported throughput in Table 1 is both worse than Llama's and 43% worse than pure Mamba's. In Figure 2 it somehow miraculously becomes better than Llama's (edit: probably because Table 1 compares against a 300M Llama model), and the comparison with unadorned Mamba is dropped.

Anyhow, what matters is not "throughput" but end-user experience, and with this model's architecture/implementation that seems to be capped at "very poor". Needless to say, what's published in the paper is not an honest way to report real-world performance.
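To make the throughput-vs-experience point concrete: aggregate tokens/second at a large batch size says little about what one user sees, because the aggregate is split across all concurrent sequences. A trivial sketch (the numbers below are illustrative, not measurements from the paper):

```python
def per_user_speed(aggregate_tps, batch_size):
    """End-user generation speed when aggregate throughput is split
    evenly across a batch of concurrent requests."""
    return aggregate_tps / batch_size

# Hypothetical example: 1000 tok/s aggregate at batch size 128
# still means each individual stream decodes at under 8 tok/s.
print(per_user_speed(1000, 128))  # 7.8125
```

So a model can post a large "throughput" number at batch 128 while single-stream decoding, which is what an interactive user experiences, remains slow.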