Unable to quantize model for local use with Ollama
#75 opened by adamjheard
Running the `ollama/quantize` Docker image against a local clone of `bert-base-uncased` fails with an architecture error:

```
➜ bert-base-uncased git:(main) docker run --rm -v .:/model ollama/quantize -q q4_K_M /model
unknown architecture BertForMaskedLM
```
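For reference, an alternative path I'm considering (unverified; it assumes llama.cpp's converter accepts this BERT checkpoint and that its quantize tool has been built from the repo) is to produce and quantize the GGUF with llama.cpp directly, then load it into Ollama through a Modelfile:

```sh
# Sketch only, not verified: assumes a local clone of llama.cpp whose
# convert_hf_to_gguf.py script recognizes the bert-base-uncased checkpoint,
# and that the llama-quantize binary has already been built from that clone.
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the Hugging Face checkpoint to an f16 GGUF file.
python llama.cpp/convert_hf_to_gguf.py ./bert-base-uncased \
  --outfile bert-base-uncased-f16.gguf --outtype f16

# Quantize to q4_K_M.
./llama.cpp/llama-quantize bert-base-uncased-f16.gguf \
  bert-base-uncased-q4_K_M.gguf Q4_K_M

# Point Ollama at the quantized GGUF via a minimal Modelfile.
echo 'FROM ./bert-base-uncased-q4_K_M.gguf' > Modelfile
ollama create bert-base-uncased-local -f Modelfile
```

If anyone knows whether this route is expected to work for a masked-LM checkpoint like this one, that would be helpful.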