Saiga GGUF
Russian fine-tunes of different base LLMs in the GGUF format, compatible with llama.cpp.
There are several ways to install llama.cpp.

Install from winget (Windows):

winget install llama.cpp

Install from brew (macOS/Linux):

brew install llama.cpp

Or download a pre-built binary from:

https://github.com/ggerganov/llama.cpp/releases

Or build from source:

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

Once installed (use ./llama-server and ./llama-cli for a pre-built binary in the current directory, or ./build/bin/llama-server and ./build/bin/llama-cli after building from source):

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf IlyaGusev/saiga_llama3_8b_gguf
# Run inference directly in the terminal:
llama-cli -hf IlyaGusev/saiga_llama3_8b_gguf
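Once the server is up, any OpenAI-compatible client can talk to it. Below is a minimal sketch using Python's requests library, assuming llama-server's default bind address of 127.0.0.1:8080 (change it with --host/--port); the temperature value is an arbitrary example:

import requests

# Minimal chat request against llama-server's OpenAI-compatible endpoint.
# Assumes the default bind address 127.0.0.1:8080.
resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Привет! Кто ты?"}],
        "temperature": 0.6,  # example sampling temperature
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])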
You can also run the model directly through Docker Model Runner:

docker model run hf.co/IlyaGusev/saiga_llama3_8b_gguf

This repository contains llama.cpp-compatible versions of the original 8B model.
Download one of the versions, for example model-q4_K.gguf.
wget https://huggingface.co/IlyaGusev/saiga_llama3_8b_gguf/resolve/main/model-q4_K.gguf
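If you prefer to fetch files from Python, the same download can be done with the huggingface_hub library; a small sketch (note the file lands in the local Hugging Face cache rather than the current directory):

from huggingface_hub import hf_hub_download

# Downloads model-q4_K.gguf into the local Hugging Face cache
# and returns the full path to the downloaded file.
path = hf_hub_download(
    repo_id="IlyaGusev/saiga_llama3_8b_gguf",
    filename="model-q4_K.gguf",
)
print(path)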
Download interact_llama3_llamacpp.py:
wget https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llama3_llamacpp.py
How to run:
pip install llama-cpp-python fire
python3 interact_llama3_llamacpp.py model-q4_K.gguf
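The interact script wraps llama-cpp-python; for reference, here is a minimal sketch of the same idea. The system prompt, context size, and sampling settings below are illustrative assumptions; see interact_llama3_llamacpp.py for the values actually used:

from llama_cpp import Llama

llm = Llama(
    model_path="model-q4_K.gguf",
    n_ctx=8192,             # context window; assumed value
    chat_format="llama-3",  # Saiga Llama 3 models use the Llama 3 chat template
)

out = llm.create_chat_completion(
    messages=[
        # Hypothetical system prompt; the interact script ships the real one.
        {"role": "system", "content": "Ты — Сайга, русскоязычный ассистент."},
        {"role": "user", "content": "Расскажи о себе."},
    ],
    temperature=0.6,
)
print(out["choices"][0]["message"]["content"])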
System requirements:

RAM usage depends on the chosen quantization; the available files range from 2-bit up to 16-bit precision, and memory use scales roughly with file size.
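As a rough rule of thumb, RAM use is close to the GGUF file size plus KV-cache and scratch-buffer overhead. A back-of-the-envelope sketch (the fixed 2 GB overhead is an assumption and grows with context length):

import os

def estimate_ram_gb(gguf_path: str, overhead_gb: float = 2.0) -> float:
    """Very rough RAM estimate: file size plus assumed runtime overhead."""
    return os.path.getsize(gguf_path) / 2**30 + overhead_gb

print(f"~{estimate_ram_gb('model-q4_K.gguf'):.1f} GB")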