Run locally hosted LLMs
Some of the Features in ClassifAI can be set up to use locally hosted LLMs. This gives you complete privacy and data control and lets you run the Features without any usage costs. The trade-offs are slower performance and potentially less accurate results.
Right now, this is powered by Ollama, a tool that allows you to host and run LLMs locally. To set this up, follow the steps below:
1. Install Ollama
- Install Ollama on your local machine.
- By default, Ollama runs at `http://localhost:11434/`. You can verify it's running with the check shown after this step.
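
To confirm Ollama is up before moving on, you can query it from your terminal. The commands below assume the default address; the second one also lists any models you've already pulled.

```sh
# Should respond with "Ollama is running" if the server is up.
curl http://localhost:11434/

# Lists the models currently installed (empty until you pull one).
curl http://localhost:11434/api/tags
```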
2. Install the model
- Decide which models you want to use. This will depend on the Feature you are setting up. For instance, if you want to use Image Processing Features, ensure you install a Vision model. If you want to use the Classification Feature, ensure you install an Embedding model. All other Features should work with standard models.
- Install the model locally by running `ollama pull <model-name>` in your terminal (see the example commands below).
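
As an example, the commands below pull one model of each type. The model names are illustrative picks from the Ollama library, not requirements; choose whichever models suit the Features you plan to enable.

```sh
# Vision model, e.g. for Image Processing Features.
ollama pull llava

# Embedding model, e.g. for the Classification Feature.
ollama pull nomic-embed-text

# General-purpose model for the remaining Features.
ollama pull llama3
```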
3. Configure Provider
- Once Ollama is running and the model is installed, you can proceed to use it as a Provider for the desired Feature (a quick way to verify the model responds is shown after these steps).
- Note that when using locally hosted LLMs, performance may be slower than with cloud-based services, especially for initial requests. Results may also be less accurate, but these are the trade-offs for privacy and data control.
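
Before selecting Ollama as the Provider in a Feature's settings, it can help to confirm the model responds on its own. This is a minimal check using Ollama's generate endpoint, assuming the default address and the illustrative `llama3` model pulled above.

```sh
# Send a one-off prompt directly to Ollama; a JSON reply containing a
# "response" field confirms the model is loaded and answering.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Reply with the single word: ready",
  "stream": false
}'
```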