
Ollama Reviews and details

Screenshots and images

  • Ollama landing page (captured 2023-08-22)

Badges

Promote Ollama by adding a SaaSHub badge to your website.

Videos

Code Llama: First Look at this New Coding Model with Ollama

What's New in Ollama 0.0.12, the Best AI Runner Around

Social recommendations and mentions

We have tracked the following product recommendations or mentions on various public social media platforms and blogs. They can help you see what people think about Ollama and what they use it for.
  • Google CodeGemma: Open Code Models Based on Gemma [pdf]
    One thing I've noticed is that Gemma is much less verbose by default. [0] https://github.com/ollama/ollama. - Source: Hacker News / about 1 month ago
  • Preloading Ollama Models
    A few weeks ago, I started using Ollama to run language models (LLMs), and I've been really enjoying it. After getting the hang of it, I thought it was about time to try it out on one of our real-world cases (I'll share more about this later). - Source: dev.to / about 2 months ago
  • k8s-snap (Canonical Kubernetes) for a simple and fast deployment of a k8s cluster …
    GitHub - ollama/ollama: Get up and running with Llama 2, Mistral, Gemma, and other large language models. - Source: dev.to / 3 months ago
  • Ollama is now available on Windows in preview
    Looks like it's already available on Linux & Mac. The change is that they're adding Windows: https://github.com/ollama/ollama. - Source: Hacker News / 3 months ago
  • Ollama Python and JavaScript Libraries
    They have a high level summary of ram requirements for the parameter size of each model and how much storage each model uses on their GitHub: https://github.com/ollama/ollama#model-library. - Source: Hacker News / 4 months ago
  • Emacs-copilot: Large language model code completion for Emacs
    https://github.com/s-kostyaev/ellama (which it calls for local LLM goodness). - Source: Hacker News / 5 months ago
  • Went down the rabbit hole of 100% local RAG, it works but are there better options?
    I used Ollama (with Mistral 7B) and Quivr to get a local RAG up and running and it works fine, but was surprised to find there are no easy user-friendly ways to do it. Most other local LLM UIs don't implement this use case (I looked here), even though it is one of the most useful local LLM use-cases I can think of: search and summarize information from sensitive / confidential documents. Source: 5 months ago
  • What are the best open-source projects out there that aim to perform similar to chatGPT? [D]
    I've been playing a lot with Llama 2 using https://github.com/jmorganca/ollama. Source: 5 months ago
  • LibreChat
    The primary use case here seems to be that it might be possible to use this tool to spend <$20/mo for the same feature set as ChatGPT+. It does not currently make any effort to support locally-hosted open source models, which is what I would have assumed from its name. If you're interested in a fully Libre LLM stack, I've had fun lately with ollama [0] and ollama-webui [1]. It was pretty trivial to take... - Source: Hacker News / 6 months ago
  • Update on the OpenAI drama: Altman and the board had till 5pm to reach a truce
    You can get a local chatbot running in seconds with this tool: https://github.com/jmorganca/ollama None of the open models available to these local tools benchmark as well as the closed OpenAI model but the benchmarks are weird. In my own poking around, some models are definitely less coherent or more detailed for some specific prompts but it’s mostly the same. - Source: Hacker News / 6 months ago
  • Ask HN: How are you handling the Biggest Vendor Lock In of the Decade?
    Sure. I’m not a computer scientist so I’m using ollama[1] along with a number of the most popular models[2]. It publishes a REST API on localhost and you can make use of “Modelfiles”, which are modeled on Dockerfile, to create customized models (ala GPTs).[3] It took 10 minutes to get all this working — most of it waiting for model downloads. I had working code and Modelfiles in another 20 minutes after that. I... - Source: Hacker News / 6 months ago
  • Plugin to bring the power of local LLMs to logseq with (ollama-logseq)
    Hello guys, I was jealous that Obsidian had a plugin integrating with Ollama, so I decided to make one for Logseq myself. Ollama essentially lets you play with local LLMs like Llama 2, Orca Mini, Vicuna, and many more. Some of these LLMs perform at levels close to ChatGPT-3.5 on some tasks, and the fact that they run locally is awesome; here is a quick demo. Source: 6 months ago
  • Llama Ollama: Unlocking Local LLMs with ollama.ai
    Now, how about getting started with Ollama.ai? It's as easy as pie! The Ollama GUI is your friendly interface, making the setup process smoother than a llama’s coat. Just download and install the Ollama CLI, throw in a couple of commands like ollama pull and ollama serve, and voila! You're on your way to running Large Language Models on your local machine. And if you ever find yourself in a pickle, just read the... - Source: dev.to / 7 months ago
  • Can I run Ollama on this Server with GPU?
    Hey guys. I am thinking about renting a server with a GPU to run Llama 2 via Ollama. Source: 7 months ago
  • Ask HN: Why aren't we using ChatGPT in the CLI?
    Many people are. It's quite popular to build yourself as it's quite easy. There are also more extensive implementations like https://github.com/KillianLucas/open-interpreter. - Source: Hacker News / 8 months ago
  • Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
    Hi HN, Over the last few months I've been working with some folks on a tool named Ollama (https://github.com/jmorganca/ollama) to run open-source LLMs like Llama 2, Code Llama and Falcon locally, starting with macOS. The biggest ask since then has been "how can I run Ollama on Linux?" with GPU support out of the box. Setting up and configuring CUDA and then... - Source: Hacker News / 8 months ago
  • Run LLMs at home, BitTorrent‑style
    This is neat. Model weights are split into their layers and distributed across several machines, which then report themselves in a big hash table when they are ready to perform inference or fine-tuning "as a team" over their subset of the layers. It's early, but I've been working on hosting model weights in a Docker registry for https://github.com/jmorganca/ollama.... - Source: Hacker News / 8 months ago
  • I asked 60 LLMs a set of 20 questions
    This is very cool. Sorry if I missed it (poked around the site and your GitHub repo), but is the script available anywhere? Would love to try running this against a series of open-source models with different quantization levels using Ollama and a 192GB M2 Ultra Mac studio: https://github.com/jmorganca/ollama#model-library. - Source: Hacker News / 8 months ago
  • Run ChatGPT-like LLMs on your laptop in 3 lines of code
    Love how simple of an interface this has. Local LLM tooling can be super daunting, but reducing it to a simple ingest() and then prompt() is really neat. By chance, have you checked out Ollama (https://github.com/jmorganca/ollama)? Thought I'd share. Best of luck with the project! - Source: Hacker News / 8 months ago
  • Continue with LocalAI: An alternative to GitHub's Copilot that runs locally
    Continue has a great guide on using the new Code Llama model launched by Facebook last week (https://continue.dev/docs/walkthroughs/codellama):
      ollama pull codellama
    - Source: Hacker News / 9 months ago
  • Show HN: Beating GPT-4 on HumanEval with a fine-tuned CodeLlama-34B
    It really is good. Surprisingly it seems to answer instruct-like prompts well! I’ve been using it with Ollama (https://github.com/jmorganca/ollama) with prompts like:
      ollama run phind-codellama "write c code to reverse a linked list"
    - Source: Hacker News / 9 months ago
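
One commenter above describes Ollama's "Modelfiles", which are modeled on Dockerfiles and used to build customized models. As a minimal, hedged sketch of that workflow (the `my-assistant` name, the `llama2` base, and the parameter values are illustrative, not taken from this page):

```shell
# Write a minimal Modelfile: base model, a sampling parameter, and a
# system prompt. The exact directives follow Ollama's Modelfile format.
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER temperature 0.7
SYSTEM "You are a concise assistant that answers in one paragraph."
EOF

# With Ollama installed, the customized model would then be built and run:
#   ollama create my-assistant -f Modelfile
#   ollama run my-assistant "Summarize what a Modelfile is."
```

The commented commands are left unexecuted here because they require a local Ollama installation and a downloaded base model.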
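
The same comment notes that Ollama publishes a REST API on localhost. A hedged sketch of what a request to the generate endpoint might look like (Ollama's default port is 11434; the model name and prompt are illustrative):

```shell
# Build the JSON payload for Ollama's /api/generate endpoint.
# "stream": false asks for a single JSON response instead of a token stream.
payload='{"model": "llama2", "prompt": "Why is the sky blue?", "stream": false}'
echo "$payload"

# With the Ollama server running locally, the request would be:
#   curl http://localhost:11434/api/generate -d "$payload"
```

The curl line is commented out because it only succeeds against a running local server with the model already pulled.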

Do you know an article comparing Ollama to other products?
Suggest a link to a post with product alternatives.

Suggest an article

Ollama discussion


This is an informative page about Ollama. You can review and discuss the product here. The primary details have not been verified within the last quarter and may be outdated. If you think we are missing something, please use the means on this page to comment or suggest changes. All reviews and comments are highly encouraged and appreciated, as they help everyone in the community make an informed choice. Please always be kind and objective when evaluating a product and sharing your opinion.