Depends on what AI you’re looking for. I don’t know of an LLM (a language model, think ChatGPT) that works decently on personal hardware, but I also haven’t really looked. For art generation though, look up the Automatic1111 installation instructions for Stable Diffusion. If you have a decent GPU (I was running it slowly on a 1060 until I upgraded), it’s a simple enough process to get started, there’s tons of info online about it, and it all runs on local hardware.
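If you want to script it once it’s set up, here’s a minimal sketch of hitting the webui’s local API from Python. It assumes you launched Automatic1111 with the --api flag on the default port (7860); the endpoint and field names follow the /sdapi/v1/txt2img route and may differ a bit between webui versions, and the prompt is just an example.

```python
# Minimal sketch: generate an image through a locally running Automatic1111 webui.
# Assumes the webui was started with --api and is listening on the default port;
# payload fields follow /sdapi/v1/txt2img and may vary by webui version.
import base64
import requests

payload = {
    "prompt": "a watercolor painting of a lighthouse at dusk",  # example prompt
    "negative_prompt": "blurry, low quality",
    "steps": 20,
    "width": 512,
    "height": 512,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=300)
resp.raise_for_status()

# The API returns base64-encoded PNGs in the "images" list.
with open("output.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```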
“I don’t know of an LLM that works decently on personal hardware”
Ollama with ollama-webui. Models like solar-10.7b and mistral-7b work nicely on local hardware. Solar 10.7b should run well on a card with 8GB of VRAM.
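For anyone who wants to poke at it from code rather than the webui, this is a minimal sketch of calling a local Ollama server over its REST API. It assumes Ollama is running on its default port (11434) and that you’ve already pulled the model; the solar:10.7b tag and the prompt are just examples.

```python
# Minimal sketch: query a model served by a local Ollama instance.
# Assumes Ollama is running on the default port and the model has been pulled
# beforehand (e.g. `ollama pull solar:10.7b`); the exact tag is an example.
import requests

payload = {
    "model": "solar:10.7b",
    "prompt": "Explain in two sentences why running an LLM locally matters.",
    "stream": False,  # ask for one JSON response instead of a token stream
}

resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=300)
resp.raise_for_status()

print(resp.json()["response"])
```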
I want said AI to be open source and run locally on my computer
It’s getting there. In the next few years, as hardware gets better and models get more efficient, we’ll be able to run these systems entirely locally.
I’m already doing it, but I have some higher end hardware.
Could you please share your process for us mortals ?
Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation.
Ollama with ollama-webui for an LLM. I like the Solar 10.7B model. It’s lightweight, fast, and gives really good results.
I have some beefy hardware that I run it on, but it’s not necessary to have.
If you have really low specs, use the recently open-sourced Microsoft Phi model.
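One way to try Phi on a modest machine is through Hugging Face transformers, sketched below. The microsoft/phi-2 model ID, the prompt, and the generation settings here are examples; it runs on CPU without a big GPU, though generation will be slow, and exact behavior depends on your transformers/torch versions.

```python
# Minimal sketch: run Microsoft's Phi-2 on CPU with Hugging Face transformers.
# Assumes the transformers and torch packages are installed; Phi-2 is small
# enough (~2.7B parameters) to fit on machines without much VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # example checkpoint name on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

inputs = tokenizer("Write a haiku about running AI locally.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```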
deleted by creator
You seem to have missed the point a bit
deleted by creator
I do, so thank you :)
“I wish I had X”
“Here’s X”
What point was missed here?
The post: “I wish X instead of Y”
The comment: “And run it [X] locally”
The next comment: “You can run Y locally”
Also, the person I said this to literally admitted that I was right, and you’re still arguing.
I want mine in an emotive-looking airborne bot like Flubber
This technology will be running on your phone within the next few years.
Because, like every other app on smartphones, it’ll require an external server to do all of the processing.
I mean, that’s already where we are. The future is going to be localized.
Yeah, if you’re willing to carry a brick, or at least a power bank (also a brick), if you don’t want it to constantly overheat or deal with 2-3 hours of battery life. There’s only so much copper can take, and there are limits to miniaturization.
It’s not like that though. Newer phones are going to have dedicated hardware for running neural networks, LLMs, and other generative tools. That dedicated hardware will make these processes just barely sip the battery.
A lot of it can if you have a big enough computer.
Hey me too.
And I do have a couple different LLMs installed on my rig. But having that resource running locally is years and years away from being remotely performant.
On the bright side there are many open-source LLMs, and it seems like there are more every day.
Check out /r/localLlama, Ollama, and Mistral.
This is all possible and became a lot easier to do recently.
Ha. Lame.
Edit: lol. Sign out of Google, nerds. Bring me your hypocrite neckbeard downvotes.
I want some of whatever you have, man.
Reckless disregard for the opinions of the fanatically security and privacy conscious? Or just a good-natured appreciation for pissing people off? :)
Drugs. I want the drugs.