Hi, you’ve found this Community on Lemmy, welcome!
This Community is intended as a replacement for r/LocalLLaMA, because I think we need to move beyond centralized Reddit in general (and obviously also because of the API changes).
I will moderate this Community for now, but if you want to help, you are very welcome; just contact me!
I will mirror or rewrite posts from r/LocalLLaMA for this Community for now, but maybe we could eventually all move to this Community (or any Community on Lemmy; seriously, I don’t care about being mod or “owning” it).
Hi, sure, thank you so much for helping out! As for LLaMA, I would point you at llama.cpp (https://github.com/ggerganov/llama.cpp), which is the absolute bleeding edge, but also has pretty useful instructions on the page (https://github.com/ggerganov/llama.cpp#usage). You could also use Kobold.cpp, but I don’t have any experience with it, so I can’t help you if you have issues.
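If you’d rather drive it from Python, here’s a minimal sketch of what inference looks like through the llama-cpp-python bindings (more on those in the next comment). The model path and prompt are just placeholders; point it at whatever GGML model file you actually downloaded:

```python
# Minimal CPU-only inference sketch using llama-cpp-python.
# The model path below is a placeholder; use your own downloaded model file.
from llama_cpp import Llama

# Load the model; n_ctx is the context window size in tokens.
llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin", n_ctx=512)

# Run a completion; stop sequences keep the model from rambling past the answer.
output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"])
```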
Adding to this: text-generation-webui (https://github.com/oobabooga/text-generation-webui) works with the latest bleeding-edge llama.cpp via llama-cpp-python, and it has a nice graphical front-end. You do have to manually tell pip to install llama-cpp-python with the right compiler flags to get GPU acceleration working, but the llama-cpp-python GitHub page and the ooba GitHub page explain how to do this.
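To give a rough idea of what that looks like in practice, here’s a sketch for an NVIDIA card (the exact CMAKE flags depend on your setup, so double-check the llama-cpp-python README; the model path and layer count are placeholders you’ll want to tune):

```python
# Rough sketch of GPU-offloaded inference via llama-cpp-python.
# You first need to (re)install the package with the right compiler flags,
# e.g. for NVIDIA/cuBLAS:
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --no-cache-dir
from llama_cpp import Llama

llm = Llama(
    model_path="./models/13B/ggml-model-q4_0.bin",  # placeholder path
    n_gpu_layers=32,  # how many layers to offload to the GPU; tune to fit your VRAM
    n_ctx=2048,
)
print(llm("Tell me a joke.", max_tokens=64)["choices"][0]["text"])
```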
You can even set up GPU acceleration through Metal on M1 Macs. I’ve seen some fucking INSANE performance numbers online for the higher-RAM MacBook Pros (20+ tokens/sec, I think with a 33B model, but it might have been 13B; either way, impressive).
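If you want to sanity-check numbers like that on your own Mac, something along these lines should work, assuming you built llama-cpp-python with Metal enabled (again, the model path is a placeholder):

```python
# Rough tokens/sec check on Apple Silicon via llama-cpp-python with Metal.
# Install with Metal support first (see the llama-cpp-python README):
#   CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --no-cache-dir
import time

from llama_cpp import Llama

llm = Llama(
    model_path="./models/13B/ggml-model-q4_0.bin",  # placeholder path
    n_gpu_layers=1,  # any value > 0 enables Metal offload
    n_ctx=2048,
)

# Time one generation and divide by the number of completion tokens.
start = time.perf_counter()
output = llm("Write a short poem about local LLMs.", max_tokens=128)
elapsed = time.perf_counter() - start

generated = output["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tokens/sec")
```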