Request for a finetuned Phi-4 14B (SLM) version
#2 opened by Gamertams
Hello there, I saw the technical report stating that Phi-4 14B performs on par with Llama 3.3 70B, so I wonder if you could finetune that model so it works well for people with low compute to run inference.
Note: I don't know for sure about the Phi-4 14B performance claims; they might be BS since they come from MSFT.
If you can, try testing that model and, if possible, finetune it; something along the lines of the sketch below is what I have in mind.
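To be clear, this is only a rough sketch of a low-compute setup, assuming the Hugging Face repo id is microsoft/phi-4 and that transformers, bitsandbytes, peft and accelerate are installed; the LoRA settings are just illustrative defaults I have not tested:

```python
# Rough sketch (untested): load Phi-4 in 4-bit and attach a small QLoRA adapter
# so both finetuning and inference can fit on a single consumer GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "microsoft/phi-4"  # assumed repo id, double-check on the Hub

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # NF4 4-bit weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for quality
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Illustrative LoRA config; rank, alpha and target modules would need tuning.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From there a normal SFT loop (e.g. TRL's SFTTrainer) should work, and the adapter or merged weights could then be shared for low-VRAM inference.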
Great job on the Qwen models, buddy!
Traditionally, Phi models have been very censored, even more than Qwen. Even with abliteration they don't have any knowledge of NSFW content. Also, their prose is very dry and bland. Despite what the benchmarks tell you, I prefer Qwen for chat purposes, as it feels more natural and pays more attention to the system prompt.
Guess we'll have to wait until Mistral releases their Xmas gifts.