This works well for me, except the 15B+ models don't run fast enough on a 4090. Hopefully exllama supports non-llama models, or maybe it supports CodeLlama already; I'm not sure.
For general chat testing/usage, this works pretty well and has lots of options: https://github.com/oobabooga/text-generation-webui/
I assume quantized models will run a lot better. TheBloke already seems to be on it:
https://huggingface.co/TheBloke/CodeLlama-13B-fp16
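As a rough sanity check on why quantization matters here (my own back-of-envelope arithmetic, not numbers from either repo): 13B parameters at fp16 need about 26 GB for the weights alone, which already exceeds a 4090's 24 GB, while 4-bit weights need roughly 6.5 GB. This ignores the KV cache and activation overhead, so actual usage is higher.

```python
# Back-of-envelope VRAM needed just for the model weights.
# Ignores KV cache and activation memory, so real usage is higher.
def weight_vram_gb(n_params_billion: float, bits_per_weight: int) -> float:
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(weight_vram_gb(13, 16))  # fp16 13B: 26.0 GB, over a 4090's 24 GB
print(weight_vram_gb(13, 4))   # 4-bit 13B: 6.5 GB, fits comfortably
```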
Since CodeLlama is llama-based, it may just work.