September 4, 2023

Easily Create a Variety of LLM Apps with Streamlit and Clarifai


Quickly experiment with different LLMs and applications just by changing the plain-English prompt

Author: Ian Kelk, Product Marketing Manager, Clarifai

👉 TLDR: This blog post showcases how to build an engaging and versatile chatbot using the Clarifai API and Streamlit. Links: Here's the app and the code.

This Streamlit app lets you chat with several Large Language Models. It has three main capabilities:

  • It demonstrates how powerful and easy it is to integrate models provided by Clarifai using Streamlit and LangChain.
  • You can compare the responses from multiple LLMs and choose the one that best suits your purpose.
  • You can see how changing just the hidden initial prompt sent to the LLM completely changes the nature of the app.

https://llm-text-adventure.streamlit.app

Introduction

Hello, Streamlit Community! 👋 I'm Ian Kelk, a machine learning enthusiast and Developer Relations Manager at Clarifai. My journey into data science began with a strong fascination for AI and its applications, particularly through the lens of natural language processing.

Problem statement

It can seem intimidating to create an entirely new Streamlit app every time you find a new use case for an LLM. It also requires knowing a decent amount of Python and the Streamlit API. What if, instead, we could create completely different apps just by changing the prompt? That requires nearly zero programming expertise, and the results can be surprisingly good. To that end, I've created a Streamlit chatbot application that works with a hidden starting prompt that can radically change its behavior. It combines the interactivity of Streamlit's features with the intelligence of Clarifai's models.

In this post, you’ll learn how to build an AI-powered Chatbot:

Step 1: Create the environment to work with Streamlit locally

Step 2: Create the Secrets File and define the Prompt

Step 3: Set Up the Streamlit App

Step 4: Deploy the app on Streamlit's cloud.

App overview / Technical details

The application integrates the Clarifai API with a Streamlit interface. Clarifai is known for its complete toolkit for building production-scale AI, including models, a vector database, workflows, and UI modules, while Streamlit provides an elegant framework for user interaction. Using a secrets.toml file for secure handling of the Clarifai Personal Access Token (PAT) and additional settings, the application lets users interact with different large language models (LLMs) through a chat interface. The secret sauce, however, is a separate prompts.py file that changes the behavior of the application purely based on the prompt.

Let's take a look at the app in action:

[Screenshot: the Italian tutor mode in action]

Step A: Create the environment

As with any Python project, it's always best to create a virtual environment. Here's how to create a virtual environment named llm-text-adventure using both conda and venv in Linux:

1. Using conda:

  1. Create the virtual environment:

    Note: Here, I'm specifying Python 3.8 as an example. You can replace it with your desired version.

  2. Activate the virtual environment:

  3. When you're done and wish to deactivate the environment:

 

2. Using venv:

  1. First, ensure you have the venv module installed. If not, install a version of Python that includes venv by default. If you have Python 3.3 or newer, venv should be included.

  2. Create the virtual environment:

    Note: You may need to replace python3 with just python or another specific version, depending on your system setup.

  3. Activate the virtual environment:

    When the environment is activated, you'll see the environment name (llm-text-adventure) at the beginning of your command prompt.

  4. To deactivate the virtual environment and return to the global Python environment:

    That's it! Depending on your project requirements and the tools you're familiar with, you can choose either conda or venv.
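Put together, the two workflows look something like this (a sketch; the environment name and Python version follow the examples above, and you can substitute your own):

```shell
# --- Option 1: conda ---
conda create --name llm-text-adventure python=3.8   # create the environment
conda activate llm-text-adventure                   # activate it
conda deactivate                                    # deactivate when done

# --- Option 2: venv ---
python3 -m venv llm-text-adventure                  # create the environment
source llm-text-adventure/bin/activate              # activate it (Linux)
deactivate                                          # deactivate when done
```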

Step B: Create the secrets file

The next step starts with creating a secrets.toml file which stores Clarifai's PAT and defines the language learning models that will be available to the chatbot.

This file holds the PAT (personal access token) for your app, which you should never share publicly. The other line defines our default models, which isn't a sensitive secret but determines which LLMs you'll offer.

Here's an example secrets.toml. Note that when hosting this on the Streamlit cloud, you need to go into your app settings -> secrets and add these lines so that the Streamlit servers can use the information. The following DEFAULT_MODELS provides GPT-3.5 and GPT-4, Claude v2, and the three sizes of instruction-tuned Llama 2.
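A minimal secrets.toml might look like the following (the PAT is a placeholder, and the key names and model IDs are illustrative; use the model names listed on Clarifai's community page):

```toml
# secrets.toml -- never commit this file to a public repository
CLARIFAI_PAT = "your-personal-access-token-here"
DEFAULT_MODELS = "GPT-3_5-turbo, GPT-4, claude-v2, llama2-7b-chat, llama2-13b-chat, llama2-70b-chat"
```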

On Streamlit's cloud, this would appear like this:

[Screenshot: the secrets editor in the Streamlit Cloud app settings]

Step C: Set up the Streamlit app

The next step entails setting up the Streamlit app (app.py). I've broken it into several substeps since this is a long section.

  1. Importing Python libraries and modules:

    Import the essential modules the application needs: Streamlit for the app interface, Clarifai for access to the Clarifai API, and the chat-related APIs.
  2. Set the layout: Configure the layout of the Streamlit app to "wide" layout which allows using more horizontal space on the page.
  3. Define helper functions:

    These functions load the PAT and LLMs, keep a record of the chat history, and handle the chat interactions between the user and the AI.
  4. Define prompt lists and load PAT:

    Define the list of available prompts and load the personal access token (PAT) from the secrets.toml file. Select models and append them to the llms_map.
  5. Prompt the user for prompt selection:

    Use Streamlit's built-in select box widget to prompt the user to select one of the provided prompts from prompt_list.
  6. Choose the LLM:

    Present a choice of large language models (LLMs) to the user so they can select the desired LLM.
  7. Initialize the model and set the chatbot instruction:

    Load the language model selected by the user. Initialize the chat with the selected prompt.
  8. Initialize the conversation chain:

    Use a ConversationChain to handle the conversation between the user and the AI.
  9. Initialize the chatbot:

    Use the model to generate the first message and store it into the chat history in the session state.
  10. Manage Conversation and Display Messages:

    Show all previous chats and call chatbot() function to continue the conversation.

That's the step-by-step walkthrough of what each section in app.py does. Here is the full implementation:
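A condensed sketch of that structure might look like the following. It assumes the langchain Clarifai integration is installed; the user_id/app_id values, the secrets keys, and the variable names are illustrative rather than the exact ones from the repo:

```python
# app.py -- condensed sketch of the chatbot's structure (names are illustrative)
import streamlit as st
from langchain.llms import Clarifai
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

from prompts import prompt_list  # our plain-English prompts, one per "app"

# 2. Set the layout to use the full width of the page
st.set_page_config(layout="wide")

# 3-4. Load the PAT and the available models from secrets.toml
pat = st.secrets["CLARIFAI_PAT"]
llms_map = {name.strip(): name.strip() for name in st.secrets["DEFAULT_MODELS"].split(",")}

# 5-6. Let the user pick a prompt (the "app") and an LLM
prompt_name = st.selectbox("Choose a prompt", list(prompt_list.keys()))
model_name = st.selectbox("Choose an LLM", list(llms_map.keys()))

# 7-8. Initialize the selected model and wrap it in a conversation chain.
# The user_id/app_id pair varies per model on Clarifai; shown here for the
# OpenAI-hosted models as an example.
llm = Clarifai(pat=pat, user_id="openai", app_id="chat-completion", model_id=llms_map[model_name])
conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory())

# 9. Seed the chat history with the hidden prompt's first response
if "history" not in st.session_state:
    st.session_state.history = [
        {"role": "assistant", "text": conversation.predict(input=prompt_list[prompt_name])}
    ]

# 10. Display the history so far, then keep the conversation going
for message in st.session_state.history:
    st.chat_message(message["role"]).write(message["text"])

if user_input := st.chat_input("Your move..."):
    st.session_state.history.append({"role": "user", "text": user_input})
    st.chat_message("user").write(user_input)
    reply = conversation.predict(input=user_input)
    st.session_state.history.append({"role": "assistant", "text": reply})
    st.chat_message("assistant").write(reply)
```

Because the sketch calls out to Streamlit's session state and the Clarifai API, it only runs inside `streamlit run app.py` with a valid PAT configured.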

Step D: Define the prompts

This is the fun part! All the other code in this tutorial works out of the box, and the only thing you need to change to get different behavior is the prompts.py file:

  1. "Text Adventure": In this mode, the chatbot is instructed to behave as a text-adventure video game. The game is set in the world of "A Song of Ice and Fire". As a recently knighted character, the user's interactions determine how the game unfolds. The chatbot presents the user with six options at each turn, including an ASCII map and the option to 'Attack with a weapon.' The user interacts with the game by entering the corresponding option numbers. It is intended to give a realistic text-based RPG experience, with conditions, like the user's inventory, changing based on the user's actions.
    [Screenshot: the Text Adventure mode]
  2. "Italian Tutor": Here, the bot plays the role of an Italian tutor. It presents sentences for the user to translate, alternating between English-to-Italian and Italian-to-English. If the user makes a mistake, the bot corrects them and gives the right translation. It's designed for users who wish to practice their Italian language skills in a conversational setting.
    [Screenshot: the Italian Tutor mode]
  3. "Jeopardy": In this mode, the chatbot emulates a game of 'Jeopardy' with the user. The bot presents several categories and an ASCII representation of a game board. Each category has five questions, valued from 100 to 500. The user selects a category and a value, and the bot asks the corresponding question. The user answers in Jeopardy's signature style, phrased as a question. If the user gets it right, they earn points; if they get it wrong, points are deducted. The game ends when all questions have been answered, and the bot reports the final score.
    [Screenshot: the Jeopardy mode]
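The three modes above can be defined in prompts.py as nothing more than a dictionary mapping a mode name to a plain-English instruction, along these lines (heavily abbreviated sketches; the real prompts in the repo are far more detailed):

```python
# prompts.py -- each entry is a hidden starting prompt that defines a whole "app"
# (abbreviated sketches, not the exact prompts from the repo)
prompt_list = {
    "Text Adventure": (
        "You are a text adventure video game set in the world of 'A Song of "
        "Ice and Fire'. The player is a recently knighted character. On every "
        "turn, show an ASCII map and exactly six numbered options, always "
        "including 'Attack with a weapon.' Track the player's inventory."
    ),
    "Italian Tutor": (
        "You are an Italian tutor. Alternate between giving the user English "
        "sentences to translate into Italian and Italian sentences to "
        "translate into English. Correct any mistakes and give the right "
        "translation."
    ),
    "Jeopardy": (
        "You are the host of Jeopardy. Draw an ASCII game board with several "
        "categories, five questions each, valued 100 to 500. The user picks a "
        "category and value and must answer in the form of a question. Track "
        "the score and report it when the board is cleared."
    ),
}
```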

Pretty cool, right? All running the same code! You can add new applications just by adding new plain-English options to the prompts.py file, and experiment away!