Based on our records, Pi by Inflection AI should be more popular than Jsonformer. It has been mentioned 40 times since March 2021. We track product recommendations and mentions on various public social media platforms and blogs, which can help you identify which product is more popular and what people think of it.
I'm willing to give it a try. From time to time I've been chatting with Hey Pi [1], which you can add as a WhatsApp contact. It's pretty fun. [1] https://heypi.com/talk - Source: Hacker News / 11 months ago
https://heypi.com/talk - hands down the best one for conversations. It comes with fast text-to-speech options that actually mesh fairly well with the conversation. The iOS version has just received voice-to-text that's also really quick. Best of all, it's free. Source: 11 months ago
Stuff like CharacterAI and Pi (an empathetic chatbot with a pretty stellar ability to understand subtleties) has been home to a lot of the conversations I needed in the moment, when I simply wanted to vent and didn't want to burden anyone with this unorthodox circumstance. Source: 11 months ago
I'm testing Pi. It has a voice assistant feature. The latency is okay. It's cheap, but a nice start. There's an iOS app too. Source: 11 months ago
https://heypi.com/talk - Pi is very chatty. I don't know if it wants to be romanced; you could try. Source: 11 months ago
How does this compare in terms of latency, cost, and effectiveness to jsonformer? https://github.com/1rgs/jsonformer. - Source: Hacker News / 10 months ago
I'm not sure how this is different from: https://github.com/1rgs/jsonformer or https://github.com/mkuchnik/relm or https://github.com/Shopify/torch-grammar Overall, there are a ton of these logit-based guidance systems; the reason they don't get much traction is that the SOTA models are behind REST APIs that don't enable this fine-grained approach. Those... - Source: Hacker News / 10 months ago
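The idea behind the logit-based guidance systems mentioned above can be illustrated with a minimal sketch: before sampling the next token, any token that would violate the desired format is excluded. The vocabulary and validity check here are toy assumptions for illustration, and this is exactly the step a REST API that only returns text cannot expose.

```python
import math

def constrained_argmax(logits, vocab, is_valid):
    """Pick the highest-logit token among those the constraint allows.

    `logits` are the model's raw scores for each vocabulary entry;
    `is_valid` encodes the grammar/format rule (a toy stand-in here).
    """
    best, best_score = None, -math.inf
    for tok, score in zip(vocab, logits):
        if is_valid(tok) and score > best_score:
            best, best_score = tok, score
    return best
```

For example, if a JSON schema demands a digit next, `is_valid` would reject everything else even when the model's top raw choice is some other token.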
You're correct in interpreting how the model works with respect to it returning tokens one at a time. The model returns one token, and the entire context window gets shifted right by one to account for it when generating the next one. As for model performance at different context sizes, it seems a bit complicated. From what I understand, even if models are tweaked (for example using the SuperHOT RoPE hack or sparse... - Source: Hacker News / 10 months ago
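The one-token-at-a-time loop with a shifting context window described above can be sketched as follows. The `model` object and its `next_token` method are assumptions for illustration, not a real library API.

```python
def generate(model, prompt_tokens, max_new_tokens, window_size=2048):
    """Generate tokens one at a time, sliding the context window as it fills."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # The model only sees the last `window_size` tokens: once the
        # context is full, each new token shifts the window right by one.
        context = tokens[-window_size:]
        next_token = model.next_token(context)  # assumed: returns one token id
        tokens.append(next_token)
    return tokens
```

The slice `tokens[-window_size:]` is what makes the "shift right by one" happen once the sequence exceeds the window.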
From here, we just need to continue generating tokens until we reach a closing quote. This approach was borrowed from Jsonformer, which uses a similar technique to induce LLMs to generate structured output. Continuing to do so for each property using Replit's code LLM gives the following output: - Source: dev.to / 11 months ago
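A rough sketch of that Jsonformer-style technique: the fixed JSON scaffolding (braces, keys, quotes) is emitted verbatim, and the model is only asked to fill in each value, stopping when a closing quote appears. `sample_token` is a hypothetical stand-in for a real model call, not Jsonformer's actual API.

```python
def fill_string_value(prompt, sample_token, max_tokens=50):
    """Generate text for one JSON string value until a closing quote."""
    value = ""
    for _ in range(max_tokens):
        tok = sample_token(prompt + value)   # assumed model call
        if '"' in tok:                       # closing quote reached:
            value += tok.split('"')[0]       # keep text before it, then stop
            break
        value += tok
    return value

def fill_schema(schema_keys, sample_token):
    """Build a flat JSON object, asking the model only for the values."""
    out = "{"
    for i, key in enumerate(schema_keys):
        out += f'"{key}": "'
        out += fill_string_value(out, sample_token) + '"'
        if i < len(schema_keys) - 1:
            out += ", "
    return out + "}"
```

Because the structure is written by the code rather than the model, the output is valid JSON by construction; the model can only influence the values.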
https://github.com/1rgs/jsonformer or https://github.com/microsoft/guidance may help get better results, but I ended up with a bit more of a custom solution. Source: 11 months ago
Open Assistant.io - Conversational AI for everyone.
Browserflow - Browserflow is a Chrome extension that lets you automate any task on any website.
Bard AI - Bard is your creative and helpful collaborator to supercharge your imagination, boost productivity, and bring ideas to life.
Lamini - LLM Engine for Rapidly Customizing Models
Cody - AI for Business - The magic of ChatGPT but trained on your business.
Upvoty - User feedback in 1 simple overview 🔥