Guess it’s all in the subject. I’ve found some AI implementations practical, but they’re always asking for more data, more everything. Just curious how others use AI as carefully as possible.
Alpaca on Flathub makes it simple to set up a local instance and get chatting. https://flathub.org/en/apps/com.jeffser.Alpaca
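Under the hood, Alpaca manages a local Ollama instance, so you can also script against the same models it downloads. A minimal sketch, assuming Ollama’s default endpoint on localhost:11434 and a model called "llama3" that’s already been pulled (both are assumptions; adjust for your setup):

```python
# Minimal sketch: query the local Ollama instance that Alpaca manages.
# Assumes Ollama's default endpoint (http://localhost:11434) and a model
# called "llama3" that has already been pulled -- adjust both for your setup.
import json
import urllib.request

def ask_local(prompt, model="llama3"):
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local("Summarize this text, which never leaves my machine: ..."))
```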
Use a local model, learn some tool calling, and have it retrieve factual answers from a knowledge base like Wolfram Alpha if needed. We have a community over at c/[email protected] all about local models. If you’re not very techy, I recommend starting with a simple llamafile: a one-click executable that packages the engine and model together in a single file.
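Tool calling is less magic than it sounds: the model emits a structured request, your code executes it, and the result goes back into the context for a final answer. Here’s a minimal sketch against an OpenAI-compatible local endpoint (llamafile serves one on port 8080 by default, kobold.cpp on 5001; the Wolfram Alpha Short Answers app ID is a placeholder you’d get from their developer portal):

```python
# Minimal tool-calling loop. Assumes a local engine serving an OpenAI-compatible
# API (llamafile defaults to port 8080, kobold.cpp to 5001) and a Wolfram Alpha
# Short Answers API app ID -- both the port and the app ID are assumptions.
import json
import urllib.parse
import urllib.request

LLM_URL = "http://localhost:8080/v1/chat/completions"
WOLFRAM_APPID = "YOUR-APPID"  # hypothetical placeholder, get one from Wolfram

def ask_llm(messages):
    payload = json.dumps({
        "model": "local",  # most local servers accept or ignore the model name
        "messages": messages,
        "temperature": 0,
    }).encode()
    req = urllib.request.Request(LLM_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

def wolfram(query):
    # Wolfram Alpha's Short Answers API returns a plain-text answer.
    url = ("http://api.wolframalpha.com/v1/result?"
           + urllib.parse.urlencode({"appid": WOLFRAM_APPID, "i": query}))
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

SYSTEM = ('If the user asks for a fact or calculation, reply ONLY with JSON '
          'like {"tool": "wolfram", "query": "..."}. Otherwise answer normally.')
messages = [{"role": "system", "content": SYSTEM},
            {"role": "user", "content": "What is the population of France?"}]

reply = ask_llm(messages)
try:
    call = json.loads(reply)  # the model chose to call the tool
    fact = wolfram(call["query"])
    messages += [{"role": "assistant", "content": reply},
                 {"role": "user", "content": f"Tool result: {fact}. Now answer."}]
    reply = ask_llm(messages)
except (json.JSONDecodeError, KeyError, TypeError):
    pass  # the model answered directly, no tool call to run
print(reply)
```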
Then move on to a real local model engine like kobold.cpp running a quantized model that fits on your machine, especially if you have a graphics card and want to offload layers via CUDA or Vulkan. Feel free to reply/message me if you need further clarification/guidance.
https://github.com/mozilla-ai/llamafile
https://github.com/LostRuins/koboldcpp
I would start with a 7B Q4_K_M quant and see if your system can run that.
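For a rough sense of what fits: Q4_K_M stores roughly 4.8 bits per weight, so a 7B model is about 4 GiB of weights, plus headroom for the KV cache and your OS. A back-of-envelope sketch (the bits-per-weight figures are approximations):

```python
# Back-of-envelope memory estimate for quantized model weights.
# Bits-per-weight values are rough approximations for llama.cpp quant formats.
BITS_PER_WEIGHT = {"q4_k_m": 4.8, "q5_k_m": 5.7, "q8_0": 8.5, "f16": 16.0}

def weight_gib(params_billion, quant):
    total_bits = params_billion * 1e9 * BITS_PER_WEIGHT[quant]
    return total_bits / 8 / 1024**3

for quant in ("q4_k_m", "q8_0", "f16"):
    print(f"7B at {quant}: ~{weight_gib(7, quant):.1f} GiB "
          "(weights only; add KV cache and OS overhead)")
```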
Run local models.
Indeed. I saved a set of instructions and have just been waiting for the time to implement them.
The chatbot only knows what you tell it. Don’t tell it what you don’t want it to know.
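If you want to enforce that mechanically rather than by discipline, you can scrub obvious identifiers before a prompt ever leaves your machine. A crude sketch; the patterns are illustrative, nowhere near exhaustive:

```python
# Crude pre-send scrubber: strip obvious identifiers from a prompt before it
# goes anywhere. Patterns are illustrative only, nowhere near exhaustive.
import re

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
    (re.compile(r"\b\d{1,5} [A-Z][a-z]+ (Street|St|Ave|Road|Rd)\b"), "[ADDRESS]"),
]

def scrub(text):
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(scrub("Email me at [email protected] or call +1 (555) 123-4567."))
# -> "Email me at [EMAIL] or call [PHONE]."
```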
The bigger issue to me would be them crawling data for training. If you don’t want it training on your data then keep it offline/hidden.
So you never have the odd question for it?