Today we are proud to pull back the curtain on one of our most important releases yet. Clarifai Release 7.5 introduces sophisticated new segmentation models, classification models, and a whole new approach to building workflows on the Clarifai platform. Plus, we have made dozens of behind-the-scenes improvements to performance and stability.
We are looking for your feedback! We’ve reimagined and redesigned workflows on the Clarifai Platform. It is easier than ever to build AI solutions with our intuitive new visual graph editor.
You can now easily create the skeleton of a workflow by connecting one or more model types and then configuring the settings for each “node” in your workflow. Plus, “non-trainable” model types (also known as model operators) can be added to workflows on the fly, without the need to pre-configure them in Model Mode. You can even view and edit the output settings of any model in a workflow directly in the graph editor. Learn more and try it out in Portal.
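To make the idea concrete, here is a minimal, hypothetical sketch of what a workflow graph does conceptually: each model is a node, and each node’s output feeds the next node’s input. The class and function names below are illustrative only and are not Clarifai’s SDK or API.

```python
# Conceptual sketch of a workflow as a chain of model "nodes".
# Names (Node, Workflow, the toy models) are illustrative, not Clarifai's API.

class Node:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn  # stands in for the model's predict function

class Workflow:
    def __init__(self, nodes):
        self.nodes = nodes

    def run(self, data):
        # Each node's output becomes the next node's input.
        for node in self.nodes:
            data = node.fn(data)
        return data

# Toy "models": a tokenizer node followed by a classifier node.
tokenize = Node("tokenizer", lambda text: text.split())
classify = Node("classifier", lambda tokens: [(t, "WORD") for t in tokens])

wf = Workflow([tokenize, classify])
print(wf.run("hello world"))  # [('hello', 'WORD'), ('world', 'WORD')]
```

The same chaining idea underlies the graph editor: connecting nodes wires one model’s output to the next model’s input.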
The Clarifai “General” model is the model that started it all. Upon its release in 2013, the Clarifai General model outperformed the models produced by the largest technology companies in the world. Since that time, we have developed our highly popular “General Detection” model, as well as hundreds of specialized models. Now we are proud to announce the release of our most advanced general-purpose model yet: the General Visual Segmenter. With the General Visual Segmenter, you can identify objects in your images and videos with pixel-level accuracy. Learn more and try it yourself in Portal.
In many cases, an image contains one clear subject; we would say the picture is a picture “of” a given object. The Subject Visual Segmenter takes advantage of this fact by automatically segmenting the main subject of an image. Use this robust, versatile model for fast and accurate segmentation in product photography, portraiture, or as a component in a more complex workflow. Learn more and try it yourself in Portal.
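Conceptually, a segmenter produces a per-pixel mask the same size as the image, which downstream code can use to isolate the subject. The toy example below (plain Python lists, not the Clarifai API, with an illustrative `apply_mask` helper) shows what “pixel-level” output makes possible: blanking out every pixel that falls outside the subject.

```python
# Illustrative sketch only: applying a binary segmentation mask to an image.
# `apply_mask` is a hypothetical helper, not part of any Clarifai SDK.

def apply_mask(image, mask, background=0):
    """Keep pixels where mask == 1; replace the rest with `background`."""
    return [
        [pixel if keep else background for pixel, keep in zip(row, mask_row)]
        for row, mask_row in zip(image, mask)
    ]

# A tiny 2x3 grayscale "image" and a mask marking the subject's pixels.
image = [
    [10, 20, 30],
    [40, 50, 60],
]
mask = [
    [0, 1, 0],
    [1, 1, 0],
]
print(apply_mask(image, mask))
# [[0, 20, 0], [40, 50, 0]]
```

In a real pipeline the same masking step can feed a background-removal or compositing stage, which is why subject segmentation works well as a component in a larger workflow.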
Our new and improved Room-Types classifier offers a richer taxonomy and better performance than our original model. With the new model architecture, you can quickly and accurately classify indoor scenes by room type. Learn more and try it yourself in Portal.
Sometimes, a person’s facial expression says it all. The human face is one of the most powerful tools for communication, and our new Face Sentiment model will help you extract valuable insights from images of faces. Understand audience sentiment in a way that was never possible before, so that you can make informed decisions about products and services. Learn more and try it out yourself in Portal.
Text tokens are small units of text (usually individual words) that have been extracted from phrases, sentences, paragraphs, or an entire text document. The new Token-To-Entity model type aggregates sequential tokens with the same classification into grouped “entities.”
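The aggregation idea can be sketched in a few lines: scan the tagged tokens in order and merge each run of consecutive tokens that share a label into a single entity. This is a hedged illustration of the concept, not the Clarifai model itself, and `tokens_to_entities` is a hypothetical name.

```python
# Sketch of token-to-entity aggregation: merge runs of consecutive tokens
# that share a classification label into single "entities".
# Illustrative only -- not the Clarifai Token-To-Entity implementation.
from itertools import groupby

def tokens_to_entities(tagged_tokens):
    """Group consecutive (token, label) pairs with the same label."""
    entities = []
    for label, run in groupby(tagged_tokens, key=lambda pair: pair[1]):
        text = " ".join(token for token, _ in run)
        entities.append((text, label))
    return entities

tokens = [
    ("New", "LOC"), ("York", "LOC"), ("City", "LOC"),
    ("is", "O"), ("in", "O"), ("the", "O"),
    ("United", "LOC"), ("States", "LOC"),
]
print(tokens_to_entities(tokens))
# [('New York City', 'LOC'), ('is in the', 'O'), ('United States', 'LOC')]
```

Note that only *consecutive* tokens are merged: the two “LOC” runs above stay separate entities because other labels sit between them.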