April 9, 2024

Clarifai 10.3: Template Wizardry: Build Apps with a Click

This blog post focuses on new features and improvements. For a comprehensive list, including bug fixes, please see the release notes.

Introduced app templates for streamlined app creation

We now provide pre-built, ready-to-use templates that expedite the app creation process. Each template comes with a range of resources, such as datasets, models, workflows, and modules, so you can hit the ground running when building your app.

To access the templates:

  1. Go to the Community Apps section and filter the apps by selecting the "Templates" option on the right side.
  2. Alternatively, choose the "Use an App template" option when creating your app from the create option at the top right.

Here are the five templates currently available, covering various use cases.

  1. Chatbot-Template: The Chatbot App Template serves as an extensive guide for building an AI chatbot swiftly and effectively, utilizing the capabilities of Clarifai's Large Language Models (LLMs).
  2. RAG-Template: This RAG App Template offers a comprehensive guide for building RAG (Retrieval-Augmented Generation) applications effectively using Clarifai. It enables you to quickly experiment with RAG using your datasets without the need for extensive coding.
  3. Document-Summarization Template: This template provides you with multiple workflows for various levels of summarization, such as summarizing a couple of paragraphs with a prompt, summarizing multiple pages, and summarizing an entire book.
  4. Content-Generation Template: This App Template covers several content generation use cases such as email writing, blog writing, question answering, etc., and comes with several ready-to-use workflows for content creation, leveraging different LLMs and optimized through various prompt engineering techniques.
  5. Image-Moderation Template: This template explores various image moderation scenarios and offers ready-to-use workflows tailored to different use cases. It leverages various computer vision models trained by Clarifai for image moderation.

Released a new Node SDK [Developer Preview]

  • We released the first open-source version (as a developer preview) of a Node SDK for JavaScript/TypeScript developers building web services and web apps that consume AI models.
  • It is designed to offer a simple, fast, and efficient way to experience the power of Clarifai’s AI platform — all with just a few lines of code.

  • You can check its documentation here.


Published new models

  • Clarifai-hosted Mxbai-embed-large-v1, a state-of-the-art, versatile sentence embedding model trained on a unique dataset for superior performance across a wide range of NLP tasks. It also tops the MTEB Leaderboard.

  • Clarifai-hosted Genstruct 7B, an instruction-generation LLM, designed to create valid instructions given a raw text corpus. It enables the creation of new, partially synthetic instruction fine-tuning datasets from any raw-text corpus.

  • Wrapped Deepgram’s Aura Text-to-Speech model, which offers rapid, high-quality, and efficient speech synthesis, enabling lifelike voices for AI agents across various applications.

  • Wrapped Mistral-Large, a flagship LLM developed by Mistral AI, renowned for its robust multilingual capabilities, advanced reasoning skills, mathematical prowess, and proficient code generation abilities.

  • Wrapped Mistral-Medium, Mistral AI's medium-sized model. It supports a context window of 32k tokens (around 24,000 words) and outperforms Mixtral 8x7B and Mistral-7B on benchmarks across the board.

  • Wrapped Mistral-Small, a balanced, efficient large language model offering high performance across various tasks with lower latency and broad application potential.

  • Wrapped DBRX-Instruct, a state-of-the-art, efficient, open LLM by Databricks. It’s capable of handling input length of up to 32K tokens. The model excels at a broad set of natural language tasks, such as text summarization, question-answering, extraction, and coding.
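All of these models can be called from the platform UI or through our SDKs once you locate them in the Community. As a rough illustration, a minimal Python SDK call to one of the newly wrapped LLMs looks something like the sketch below; the model URL is a placeholder and should be replaced with the URL of the model you want to use.

```python
import os

from clarifai.client.model import Model

# Placeholder URL for one of the newly published LLMs; copy the real URL
# from the model's page in the Clarifai Community.
MODEL_URL = "https://clarifai.com/mistralai/completion/models/mistral-large"

# Authenticate with a Personal Access Token (PAT) from your account settings.
model = Model(url=MODEL_URL, pat=os.environ["CLARIFAI_PAT"])

# Send a text prompt and print the generated completion.
prediction = model.predict_by_bytes(
    b"Summarize the Clarifai 10.3 release in one sentence.",
    input_type="text",
)
print(prediction.outputs[0].data.text.raw)
```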

Added ability to import datasets via archive files with ease

  • Within the Input Manager, users can now seamlessly upload archive or zipped files containing diverse data types such as text, images, and more.


Devtools Integrations

Integrated the unstructured Python library with Clarifai as a target destination

  • The unstructured library provides open-source components for ingesting and pre-processing images and text documents. We’ve integrated it with Clarifai to allow our users to streamline and optimize their data processing pipelines for LLMs.
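The exact destination configuration lives on the unstructured side, so refer to the unstructured and Clarifai documentation for the official connector setup. Purely as a sketch of the underlying idea (this is not the connector's own API, and the user ID, app ID, and file name below are placeholders), you could partition a document with unstructured and push the extracted text into a Clarifai app via the Python SDK:

```python
import os

from clarifai.client.input import Inputs
from unstructured.partition.auto import partition

# Break a local document into text elements with the unstructured library.
elements = partition(filename="quarterly-report.pdf")

# Placeholder user and app IDs; point these at the Clarifai app used as the destination.
inputs_client = Inputs(user_id="me", app_id="llm-data", pat=os.environ["CLARIFAI_PAT"])

# Upload each extracted element as a text input so it can feed downstream LLM pipelines.
for i, element in enumerate(elements):
    inputs_client.upload_from_bytes(
        input_id=f"report-chunk-{i}",
        text_bytes=str(element).encode("utf-8"),
    )
```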

Added support for exporting your own trained models [Enterprise-only]

  • You can now export the models you own from our platform. Upon export, you'll receive model files accessible via pre-signed URLs or private cloud buckets, along with access credentials.
  • Please note that we only support exporting trainable model types. Models such as embedding-classifiers, clusterers, and agent system operators are not eligible for export.

Improved the Model-Viewer UI of multimodal models

  • For multimodal models like GPT-4V, users can provide input text prompts, include images, and optionally adjust inference settings. The output consists of generated text.
  • They also support the use of 3rd party API keys (for Enterprise Customers).

Added support for exporting models

  • You can now use the Python SDK to export your own trained models to an external environment.
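As a minimal sketch of that flow, assuming the SDK's export() helper and using a placeholder model URL and output directory:

```python
import os

from clarifai.client.model import Model

# Placeholder URL of a trained model you own on the platform.
model = Model(
    url="https://clarifai.com/your-user-id/your-app/models/your-trained-model",
    pat=os.environ["CLARIFAI_PAT"],
)

# Download the exported model archive into a local directory.
model.export("./exported_model")
```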

Introduced improvements to the dataloader module

  • We added retry mechanisms for failed uploads and introduced systematic handling of failed inputs. These improvements optimize the data import process and minimize errors within the dataloader module.

Added support for dataset version ID

  • Previously, it was not possible to access or interact with specific versions of a dataset within the Python SDK. This update introduces support for dataset versions in several key areas as detailed here.
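As a hedged sketch of the idea (all IDs below are placeholders, and the dataset_version_id parameter name is illustrative; check the linked documentation for the exact parameters and the areas that accept a version), you can pin a Dataset client to a specific version:

```python
import os

from clarifai.client.dataset import Dataset

# All IDs are placeholders; the version parameter name shown here is illustrative
# and may differ from the actual SDK signature.
dataset = Dataset(
    user_id="me",
    app_id="my-app",
    dataset_id="my-dataset",
    dataset_version_id="d2f6e3a1b4c5",
    pat=os.environ["CLARIFAI_PAT"],
)

# Operations performed through this client (listing inputs, exporting, and so on)
# now refer to that specific dataset version rather than the latest one.
```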

Made improvements to the local model upload functionality

  • We now provide users with a pre-signed URL for uploading models.
  • We added educational materials and tooltips to the local model upload UI.
  • We made other improvements to make the process of uploading models simple and intuitive.

Enhanced the functionality of the Actions column within a model’s versions table

  • We refactored the column into an intuitive context menu. Now, when a user clicks on the three dots, a dropdown menu presents various options, optimizing user experience and accessibility.

Enabled deletion of associated model assets when removing a model annotation

  • Now, when deleting a model annotation, the associated model assets are also marked as deleted.

Improved the functionality of the Face workflow

  • You can now use the Face workflow to effectively generate face landmarks and perform face visual searches within your applications.

Added Python SDK code snippets to the Use Model / Workflow modal window

  • To use a model or a workflow for making API calls, click the Use Model / Workflow button at the upper-right corner of its individual page. The modal that pops up provides code snippets in various programming languages, which you can copy and use.
  • We introduced Python SDK code snippets as a primary tab, so users can now conveniently access and copy them directly from the modal, as illustrated below.
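For reference, a Python SDK snippet for running a workflow looks roughly like the example below; the workflow URL and sample image URL are placeholders, and the exact snippet generated by the modal may differ.

```python
import os

from clarifai.client.workflow import Workflow

# Placeholder workflow URL; copy the real one from the Use Workflow modal.
workflow = Workflow(
    url="https://clarifai.com/clarifai/main/workflows/General",
    pat=os.environ["CLARIFAI_PAT"],
)

# Run the workflow on a sample image and print any predicted concepts.
result = workflow.predict_by_url(
    "https://samples.clarifai.com/metro-north.jpg",
    input_type="image",
)
for output in result.results[0].outputs:
    for concept in output.data.concepts:
        print(concept.name, round(concept.value, 3))
```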

Revamped the resource filtering experience on desktop devices

  • We relocated the filtering sidebar from the right to the left side of the screen, optimizing accessibility and user flow.
  • We also made other improvements to the filtering feature, such as using chevrons to mark the collapsible sections, improving the alignment of the clear button, and refining the appearance of the divider line.
  • We also added Multimodal-to-text, Multimodal-embedder, and text-to-audio filtering options.

Revamped mobile resource filters with a fresh design

  • Implemented a new and improved design for resource filters on mobile platforms.

Added ability to sort apps listed on the collapsible left sidebar of your individual app page

  • You can now sort the apps alphabetically (from A to Z) or by "Last Updated." This lets you find the apps you need quickly and efficiently.

Enhanced markdown template functionality with custom variables

  • We have introduced a feature that allows users to insert custom variables, such as ones for the user ID and app ID, into markdown templates, particularly in sections like the Notes section of a model. These variables are dynamically replaced with the corresponding user_id and app_id extracted from the URL, allowing you to personalize content within your templates.
  • For example, within the Notes section of a model, you can now add the user ID variable to dynamically display the user who created the model.

Improved responsiveness for 13-inch MacBooks

  • We resolved responsiveness issues to ensure an optimal viewing experience on 13-inch MacBook devices with a 1440px × 900px viewport.

Made enhancements to the RAG (Retrieval Augmented Generation) feature

  • Enhanced the RAG SDK's upload() function to accept the dataset_id parameter.
  • Enabled custom workflow names to be specified in the RAG SDK's setup() function.
  • Added support for chunk sequence numbers in the metadata when uploading chunked documents via the RAG SDK.
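Taken together, a typical flow with these additions might look like the sketch below. The file path, dataset ID, and question are placeholders, the custom workflow name is omitted for brevity, and parameter names should be double-checked against the RAG SDK documentation.

```python
import os

from clarifai.rag import RAG

# Requires a CLARIFAI_PAT environment variable; the user ID is a placeholder.
rag = RAG.setup(user_id="me")

# Upload a chunked document into a specific dataset (upload() now accepts dataset_id).
rag.upload(
    file_path="docs/release-notes-10-3.pdf",
    dataset_id="release-notes",
    chunk_size=1024,
)

# Ask a question grounded in the uploaded documents.
response = rag.chat(
    messages=[{"role": "human", "content": "What changed in the 10.3 release?"}]
)
print(response)
```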