Based on our records, Wit.ai should be more popular than MiniGPT-4. It has been mentioned 23 times since March 2021. We track product recommendations and mentions across various public social media platforms and blogs, which can help you identify which product is more popular and what people think of it.
Hello everyone, new to LLMs. I am working on my thesis project. The whole idea is to create a mixed-reality voice assistant that can control some devices in a room and that you can have a more intelligent conversation with than other voice assistants (Alexa, Google, etc.). I initially thought of using wit.ai for command extraction and, if a command isn't recognized, sending the request to the ChatGPT API... Source: 5 months ago
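The fallback pattern described in this mention can be sketched roughly as follows: ask Wit.ai for an intent first, and hand the utterance off to an LLM API only when no confident intent comes back. This is a minimal sketch, not a real integration — the token is a placeholder, the confidence threshold is an assumption, and the LLM call is left out.

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen


def wit_intent(utterance: str, token: str) -> dict:
    """Query Wit.ai's GET /message endpoint for intents and entities."""
    url = "https://api.wit.ai/message?" + urlencode({"q": utterance})
    req = Request(url, headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as resp:
        return json.load(resp)


def route(wit_response: dict, threshold: float = 0.8) -> str:
    """Return the top intent name if Wit.ai is confident enough,
    otherwise signal a fallback to the LLM (threshold is a guess)."""
    intents = wit_response.get("intents", [])
    if intents and intents[0].get("confidence", 0.0) >= threshold:
        return intents[0]["name"]
    return "fallback_llm"
```

The routing logic is kept separate from the HTTP call so it can be tested against canned Wit.ai-shaped responses; the actual ChatGPT request would go wherever `"fallback_llm"` is returned.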
I can't find anything wrong with the code you posted. It is possible that wit.ai is expecting some default header that Unity is not sending (and that you are not setting). Source: about 1 year ago
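One way to test the missing-header hypothesis from this mention (a sketch, not code from the thread) is to issue the same request outside Unity with the headers spelled out explicitly, then compare against what Unity sends. The Bearer token value is a placeholder.

```python
def wit_request_headers(token: str) -> dict:
    """Headers to send explicitly when calling Wit.ai, rather than
    relying on whatever defaults the HTTP client adds. Authorization
    is required; Accept is set explicitly to rule out content
    negotiation as the culprit."""
    return {
        "Authorization": f"Bearer {token}",  # Wit.ai server access token
        "Accept": "application/json",
    }
```

If the request succeeds from a standalone script with these headers but fails from Unity, diffing the two requests (e.g. against a local echo server) should reveal which default header Unity omits.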
Even though this was made for VR hopefully the scripts for wit.ai and GPT will be helpful to anyone who wants to explore this topic and doesn't know where to start. Source: about 1 year ago
Hey HN, We're Alex, Martin and Laurent. We previously founded [Wit.ai](http://wit.ai/) (W14), which we sold to Facebook in 2015. Since 2019, we've been working on Nabla (https://www.nabla.com), an intelligent assistant for health practitioners. When GPT-3 was released in 2020, we investigated its use in a medical context[0], with mixed results. Since then we’ve kept exploring opportunities at the intersection of... - Source: Hacker News / about 1 year ago
Thank you, that's helpful except that currently we're not running our own server. I'm currently using wit.ai for NLP which is a web API service provided by Meta. I'm trying to budget for what it would cost to roll out our own on a private cloud. Source: about 1 year ago
Aren't there only two open multimodal LLMs, LLaVA and MiniGPT-4? Source: 11 months ago
So we use MiniGPT-4 for image parsing, and yep it does return a pretty detailed (albeit not always accurate) description of the photo. You can actually play around with it on Huggingface here. Source: almost 1 year ago
We use MiniGPT-4 first to interpret the image and then pass the results onto GPT-4. Hopefully, once GPT-4 makes its multi-modal functionality available, we can do it all in one request. Source: almost 1 year ago
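The two-stage pipeline this mention describes (a local multimodal model producing a caption, which is then folded into a text-only GPT-4 request) can be sketched as below. Both `caption_image` and `ask_gpt4` are hypothetical helpers standing in for a MiniGPT-4 wrapper and an OpenAI client call; neither is a real API.

```python
def compose_prompt(caption: str, question: str) -> str:
    """Fold an image caption from a local multimodal model into a
    single text prompt that a text-only LLM can answer."""
    return (
        "An image-captioning model described the attached photo as:\n"
        f'"{caption}"\n\n'
        f"Based on that description, {question}"
    )


def describe_and_ask(image_path: str, question: str) -> str:
    """Stage 1: caption locally; stage 2: reason over the caption."""
    caption = caption_image(image_path)   # hypothetical MiniGPT-4 wrapper
    prompt = compose_prompt(caption, question)
    return ask_gpt4(prompt)               # hypothetical OpenAI client call
```

Once a genuinely multimodal endpoint is available, `describe_and_ask` collapses into a single request that sends the image and the question together, as the mention anticipates.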
But I would like to bring up that there are some multimodal models (LLaVA, MiniGPT-4) that are built on censored LLaMA-based models like Vicuna. I tried several multimodal models, including LLaVA, MiniGPT-4, and BLIP-2. LLaVA has very good captioning and question-answering abilities and is also much faster than the others (basically real time), though it has some hallucination issues. Source: about 1 year ago
https://minigpt-4.github.io/ <-- free image recognition, although not powered by true GPT-4. Source: about 1 year ago
Dialogflow - Conversational UX Platform. (ex API.ai)
LangChain - Framework for building applications with LLMs through composability
Botpress - Open-source platform for developers to build high-quality digital assistants
Hugging Face - The Tamagotchi powered by Artificial Intelligence 🤗
Microsoft Bot Framework - Framework to build and connect intelligent bots.
Haystack NLP Framework - Haystack is an open source NLP framework to build applications with Transformer models and LLMs.