Unless you specify otherwise, it seems that when you run Llama 3 with Ollama you get Llama 3 *Instruct*, not the base model. This is probably true for other models as well. So if you want output similar to Ollama's, use the Instruct variant.
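The practical difference is the prompt format: the Instruct model expects Meta's chat template (the `<|start_header_id|>` / `<|eot_id|>` markers), while the base model just does plain next-token completion. As a rough sketch, here is what that Instruct prompt looks like when built by hand — in practice you would let `tokenizer.apply_chat_template` (Transformers) or Ollama's built-in template do this for you:

```python
def build_llama3_prompt(messages):
    """Render chat messages into the Llama 3 Instruct prompt format."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

Feeding a bare question to the base model skips all of this, which is why its completions look so different from what Ollama returns.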
John6666