Scale Nucleus might be slightly more popular than Evidently AI. We know about 2 links to it since March 2021 and only 2 links to Evidently AI. We track product recommendations and mentions on various public social media platforms and blogs. These mentions can help you identify which product is more popular and what people think of it.
It is doable. However, the main focus of MLflow is experiment tracking. I would suggest looking into other monitoring tools such as Evidently AI. You can track more than just performance (e.g. data drift), which may be helpful in a production setting. Source: almost 2 years ago
Evidently is an open-source Python library that analyzes and monitors machine learning models. It generates interactive reports based on Pandas DataFrames and CSV files for troubleshooting models and checking data integrity. These reports cover model health, data drift, target drift, data integrity, feature analysis, and performance by segment. - Source: dev.to / over 2 years ago
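At its core, a data-drift report like Evidently's compares the distribution of a feature in reference (training-time) data against current production data. As a rough illustration of that underlying idea (not Evidently's actual API), a two-sample Kolmogorov–Smirnov statistic can flag a drifted feature:

```python
def ks_statistic(reference, current):
    """Two-sample KS statistic: the largest gap between the two
    empirical cumulative distribution functions (0 = identical,
    values near 1 = strongly drifted)."""
    def ecdf(sample, x):
        # Fraction of the sample at or below x.
        return sum(1 for v in sample if v <= x) / len(sample)
    all_vals = set(reference) | set(current)
    return max(abs(ecdf(reference, x) - ecdf(current, x)) for x in all_vals)

# Hypothetical feature values: same range at training time,
# shifted upward in production (e.g. a sensor recalibration).
reference = [i / 10 for i in range(100)]
shifted = [i / 10 + 5.0 for i in range(100)]

print(ks_statistic(reference, reference))  # → 0.0 (no drift)
print(ks_statistic(reference, shifted))    # well above 0: drift detected
```

A real library applies tests like this per feature, picks appropriate tests for categorical vs. numeric columns, and renders the results as a report; the sketch only shows the comparison at the heart of it.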
At Scale we built a tool for model debugging in computer vision called Nucleus (scale.com/nucleus) designed exactly for this. It is free to try out if you're curious to see where your model predictions are most at odds with your ground truth. Source: over 2 years ago
To address your point about gathering edge cases, which can also be defined as cases where the model performs poorly for our use cases, there is active learning, and tools such as Aquarium Learning and Scale Nucleus make it easy to integrate into existing workflows. Source: almost 3 years ago
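The active-learning loop alluded to above can be sketched in a few lines: score unlabeled items by model uncertainty and queue the least-confident ones for labeling. The softmax scores and item IDs below are hypothetical, not the API of Aquarium or Nucleus:

```python
def least_confident(predictions, k=2):
    """Pick the k items whose top-class probability is lowest.

    `predictions` maps an item id to its list of class probabilities;
    a low maximum probability means the model is unsure, which is a
    common (if simple) proxy for an edge case worth labeling."""
    ranked = sorted(predictions.items(), key=lambda kv: max(kv[1]))
    return [item_id for item_id, _ in ranked[:k]]

# Hypothetical classifier outputs over three unlabeled images.
preds = {
    "img_001": [0.98, 0.01, 0.01],  # confident -> probably fine
    "img_002": [0.40, 0.35, 0.25],  # very uncertain -> likely edge case
    "img_003": [0.55, 0.30, 0.15],  # borderline
}
print(least_confident(preds))  # → ['img_002', 'img_003']
```

Dedicated tools add the parts this sketch omits: surfacing the selected items visually, comparing predictions against ground truth, and feeding labeled results back into training.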
ML5.js - Friendly machine learning for the web
Aquarium - Improve ML models by improving datasets they’re trained on
ML Showcase - A curated collection of machine learning projects
PerceptiLabs - A tool to build your machine learning model at warp speed
Lobe - Visual tool for building custom deep learning models
ML Image Classifier - Quickly train custom machine learning models in your browser