Optional dependencies on custom kernels don't work (Flash Depthwise, flash FFT)

#2, opened by Maykeye
  • Flash depthwise:
  1. There is no import for FlashDepthwiseConv1d (in both the GitHub repo and the HF repo the name appears only once, at the point where it is instantiated).
  2. I'm not sure which package is intended, but the flashfftconv package mentioned in the GitHub repo has FlashDepthWiseConv1d (uppercase W).
  3. If I add from flashfftconv import FlashDepthWiseConv1d as FlashDepthwiseConv1d and enable flash_depthwise in the config (see the sketch after the log below), I get warnings about uninitialized parameters:
In [1]: model = AutoModelForCausalLM.from_pretrained(".", device="cuda", dtype=torch.float16, load_in_4bit=True, trust_remote_code=True)
bin /home/fella/src/sd/sd/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so
Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:04<00:00,  2.21s/it]
Some weights of StripedHyenaModelForCausalLM were not initialized from the model checkpoint at . and are newly initialized: ['backbone.blocks.2.filter.fir_fn.bias', 'backbone.blocks.24.filter.fir_fn.bias', 'backbone.blocks.16.filter.fir_fn.weights', 'backbone.blocks.8.filter.fir_fn.bias', 'backbone.blocks.10.filter.fir_fn.weights', 'backbone.blocks.14.filter.fir_fn.bias', 'backbone.blocks.10.filter.fir_fn.bias', 'backbone.blocks.16.filter.fir_fn.bias', 'backbone.blocks.4.filter.fir_fn.weights', 'backbone.blocks.20.filter.fir_fn.bias', 'backbone.blocks.18.filter.fir_fn.weights', 'backbone.blocks.0.filter.fir_fn.bias', 'backbone.blocks.18.filter.fir_fn.bias', 'backbone.blocks.28.filter.fir_fn.bias', 'backbone.blocks.26.filter.fir_fn.bias', 'backbone.blocks.14.filter.fir_fn.weights', 'backbone.blocks.8.filter.fir_fn.weights', 'backbone.blocks.12.filter.fir_fn.bias', 'backbone.blocks.26.filter.fir_fn.weights', 'backbone.blocks.0.filter.fir_fn.weights', 'backbone.blocks.22.filter.fir_fn.bias', 'backbone.blocks.24.filter.fir_fn.weights', 'backbone.blocks.4.filter.fir_fn.bias', 'backbone.blocks.6.filter.fir_fn.bias', 'backbone.blocks.12.filter.fir_fn.weights', 'backbone.blocks.20.filter.fir_fn.weights', 'backbone.blocks.30.filter.fir_fn.weights', 'backbone.blocks.6.filter.fir_fn.weights', 'backbone.blocks.30.filter.fir_fn.bias', 'backbone.blocks.28.filter.fir_fn.weights', 'backbone.blocks.22.filter.fir_fn.weights', 'backbone.blocks.2.filter.fir_fn.weights']
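For reference, a minimal sketch of the shim point 3 refers to, assuming the flashfftconv package from the flash-fft-conv project is the intended dependency; the alias goes near the top of the downloaded model.py so that the name the modeling code instantiates actually resolves:

    # model.py: the package exports FlashDepthWiseConv1d (capital W), while
    # the modeling code instantiates FlashDepthwiseConv1d (lowercase w),
    # so alias it at import time
    from flashfftconv import FlashDepthWiseConv1d as FlashDepthwiseConv1d

Even with this alias in place, the fir_fn.weights/fir_fn.bias entries in the log above are reported as newly initialized, which suggests the checkpoint simply does not contain parameters under the names the flash path expects.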
  • flash FFT:
    It's marked as compatible in the YAML file, but if I try to use it, model.py raises an error:
        if config.get("use_flashfft", "False"):
            raise NotImplementedError("Please use standalone SH code for other custom kernels")

(The GitHub version is different, but once again there is a name mismatch: it imports flash_fft.conv, while a recently built flash-fft-conv installs as flashfftconv; the GitHub code also uses config.seqlen, which is None in the HF config.)
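One more observation on the guard quoted above, if the snippet is exactly as it appears in model.py: the default "False" is a non-empty string and therefore truthy in Python, so the raise branch is taken whenever the stored value is truthy, and even when the key is absent. A self-contained illustration:

    config = {}

    # the string "False" is truthy, so this guard fires even when the key is unset
    if config.get("use_flashfft", "False"):
        print("NotImplementedError would be raised here")

    # a boolean default behaves as presumably intended
    if config.get("use_flashfft", False):
        print("never reached when the key is absent")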

Together org

Additional optimizations will trickle in (including better support for custom kernels and quantization). Stay tuned :)
