Based on our records, RecurseChat appears to be more popular than local.ai: it has been mentioned 15 times since March 2021. We track product recommendations and mentions across various public social media platforms and blogs. These can help you identify which product is more popular and what people think of it.
If you are on a Mac, give https://recurse.chat/ a try. As simple as download the model and start chatting. Just added the new multimodal support in LLaMA.cpp. - Source: Hacker News / 5 months ago
If anyone on macOS wants to use llama.cpp with ease, check out https://recurse.chat/. Supports importing ChatGPT history & continue chats offline using llama.cpp. Built this so I can use local AI as a daily driver. - Source: Hacker News / 10 months ago
If you are interested in no config setup for local LLM, give https://recurse.chat/ a try (I'm the dev). The app is designed to be self-contained and as simple as you can imagine. - Source: Hacker News / 11 months ago
Shameless plug: If you are on a Mac, check out RecurseChat: https://recurse.chat/ A few outstanding features. - Source: Hacker News / 11 months ago
Give https://recurse.chat/ a try - I'm the developer. One particular advantage over alternative apps is importing ChatGPT history and speed of the app, including full-text search. You can import your thousands of conversations and every chat loads instantly. We also recently added floating chat feature. Check out the demo: https://x.com/recursechat/status/1846309980091330815. - Source: Hacker News / 11 months ago
I tried to launch gpt4all on my laptop with 16gb ram and Ryzen 7 4700u. Gpt4all doesn't work properly. It uses the iGPU at 100% instead of using the CPU. And it can't manage to load any model; I can't type any question in its window. Faraday.dev, secondbrain.sh, localai.app, lmstudio.ai, rwkv runner, LoLLMs WebUI, kobold cpp: all these apps run normally. Only gpt4all and oobabooga fail to run. Source: about 2 years ago
Sidenote: can you try out localai.app and see if it's faster than oobabooga on your end? (It's all CPU inferencing as well, but just curious if there's any speed gain). Source: over 2 years ago
Ollama - The easiest way to run large language models locally
GPT4All - A powerful assistant chatbot that you can run on your laptop
150 ChatGPT 4.0 prompts for SEO - Unlock the power of AI to boost your website's visibility.
LM Studio - Discover, download, and run local LLMs
Claude AI - A next generation AI assistant from Anthropic, built for work and trained to be safe, accurate, and secure.
KoboldCpp - Run GGUF models easily with a KoboldAI UI. One File. Zero Install. - LostRuins/koboldcpp