Spring is here and we are introducing important new features and improvements with Clarifai Release 7.4. Clarifai is democratizing AI, giving developers access to tools that accelerate AI solutions at unprecedented speed. We are adding new models, workflows, and platform functionality that reduce the complexity of deploying AI solutions in the real world.
How do your customers “feel” about your product? How do you identify opportunities for product improvement? How do you quickly identify problems before they get out of hand?
Introducing Clarifai’s new product sentiment review model. The model automatically analyzes text passages and rates them on a sentiment scale of 1-5 stars, where 1 represents the most negative sentiment and 5 the most positive. Learn more and try it yourself.
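To give a feel for how an application might consume a 1-5 star prediction, here is a minimal sketch. It assumes the model returns one concept per star level (named "1" through "5"), each with a confidence score; the concept names and response shape here are illustrative, not the documented API schema.

```python
# Hypothetical sketch: picking the star rating from a sentiment prediction.
# The concept names ("1"-"5") and list-of-dicts response shape are assumptions
# for illustration, not the exact Clarifai response format.

def star_rating(concepts):
    """Return the star level (1-5) with the highest confidence."""
    best = max(concepts, key=lambda c: c["value"])
    return int(best["name"])

# Example prediction for a glowing product review:
prediction = [
    {"name": "1", "value": 0.01},
    {"name": "2", "value": 0.02},
    {"name": "3", "value": 0.05},
    {"name": "4", "value": 0.30},
    {"name": "5", "value": 0.62},
]
print(star_rating(prediction))  # → 5
```

In practice you would build `prediction` from the concepts returned by the API rather than hard-coding it.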
Content posted on social media, forums, micro-blogs, or social networking sites can be a rich and valuable source of information for your business. Clarifai’s new social media sentiment model helps you understand how customers feel about your products and services. Learn more and try it yourself.
Face v4 features an all-new architecture for cutting-edge performance on face recognition. Taking advantage of our new Angular Margin Visual-Embedder, Face v4 can distinguish between a large number of people using only a small number of sample images.
Our technology consistently outperforms the state of the art and identifies highly discriminative features for face recognition. Face v4 has also been implemented with low computational overhead, so predictions are made quickly. Learn more and try it yourself.
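Under the hood, an embedder like the one behind Face v4 maps each face image to a vector, and two images are judged to show the same person when their embeddings point in nearly the same direction. A minimal sketch of that comparison, using made-up toy vectors (real face embeddings are high-dimensional):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional embeddings, invented for illustration:
alice_ref   = [0.9, 0.1, 0.3, 0.2]   # enrolled sample for "Alice"
alice_query = [0.8, 0.2, 0.3, 0.1]   # new photo of the same person
bob_query   = [0.1, 0.9, 0.2, 0.7]   # photo of a different person

same = cosine_similarity(alice_ref, alice_query)
diff = cosine_similarity(alice_ref, bob_query)
print(same > diff)  # same-person pairs score higher
```

This is why only a few sample images per person are needed: enrollment just stores reference embeddings, and recognition compares a new embedding against them with a cheap similarity test.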
Video can be a rich source of visual data, and efficient labeling tools can allow you to label thousands of frames of video in a fraction of the time that it would take to label individual images. We have improved our video labeling interface, introducing keyframes and improved timeline editing support. Read more.
Every second saved in an individual labeling task is critical to reducing the overall cost and improving the efficiency of a labeling project. With new extended hotkey support for concept toggling, data labeling is faster and more efficient than ever before. Read more.
Now it is easier than ever to label visual data for detection models. Whether annotations come from AI assist, auto-annotation, or human labelers, users need the flexibility to edit the concepts ascribed to a bounding box that has already been created. Read more.
We have unified the interface used to create components in Explorer view in Portal. The model prediction tab now gives you easy access to the predictions and annotations for the model that you have selected. Once a model is selected, predictions and existing annotations can be viewed and updated. Helpful tooltips will let you know when a model needs to be trained. Annotating and viewing annotations in Explorer is now easier than ever before. Read more.
You now have the ability to toggle between classification and detection predictions right in Explorer. Annotations can be viewed, edited, and updated all in one view. Read more.