Hacker News

Looks like someone converted it for Ollama use already: https://ollama.com/vanilj/Phi-4
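For reference, running a community conversion like this with the Ollama CLI is typically just a pull and a run; a minimal sketch, assuming Ollama is installed and using the model name from the linked page (the prompt is illustrative):

```shell
# Fetch the community Phi-4 conversion from the Ollama registry
ollama pull vanilj/Phi-4

# Run a one-off prompt against it (or omit the prompt for an interactive session)
ollama run vanilj/Phi-4 "Summarize the tradeoffs of model quantization."
```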


I've had great success with quantized Phi-4 (14B) and Ollama so far. It's as fast as Llama 3.1 8B, but the results have been (subjectively) higher quality. I copy/pasted some past requests into Phi-4 and found the answers were generally better.



