Quick Start

Dive right into code examples to get up and running as quickly as possible.


// The JavaScript client works in both Node.js and the browser.


// Install the client from NPM
npm install clarifai

// Require the client
const Clarifai = require('clarifai');

// Initialize with your API key. This also works in the browser via http://browserify.org/
const app = new Clarifai.App({
  apiKey: 'YOUR_API_KEY'
});

// You can also use the SDK by adding this script to your HTML:
<script type="text/javascript" src="https://sdk.clarifai.com/js/clarifai-latest.js"></script>

# Pip install the client:
# pip install clarifai

# The package will be accessible by importing clarifai:

from clarifai import rest
from clarifai.rest import ClarifaiApp

# The client takes the API key you created in your Clarifai
# account. You can also set it in your environment as:

# - `CLARIFAI_API_KEY`
app = ClarifaiApp()

// Our API client is hosted on jCenter, Maven Central, and JitPack.

///////////////////////////////////////////////////////////////////////////////
// Installation - via Gradle (recommended)
///////////////////////////////////////////////////////////////////////////////

// Add the client to your dependencies:
dependencies {
    compile 'com.clarifai.clarifai-api2:core:2.2.12'
}

// Make sure you have the Maven Central repository in your Gradle file
repositories {
    mavenCentral()
}

///////////////////////////////////////////////////////////////////////////////
// Installation - via Maven
///////////////////////////////////////////////////////////////////////////////

/*
<!-- Add the client to your dependencies: -->
<dependency>
  <groupId>com.clarifai.clarifai-api2</groupId>
  <artifactId>core</artifactId>
  <version>2.2.12</version>
</dependency>
*/


///////////////////////////////////////////////////////////////////////////////
// Required imports
///////////////////////////////////////////////////////////////////////////////

// Make sure to import what you need. Below is for the quickstart code:

import clarifai2.api.ClarifaiBuilder;
import clarifai2.api.ClarifaiClient;
import clarifai2.api.request.ClarifaiRequest;
import clarifai2.dto.input.ClarifaiInput;
import clarifai2.dto.input.image.ClarifaiImage;
import clarifai2.dto.model.output.ClarifaiOutput;
import clarifai2.dto.prediction.Concept;

///////////////////////////////////////////////////////////////////////////////
// Initialize client
///////////////////////////////////////////////////////////////////////////////

final ClarifaiClient client = new ClarifaiBuilder("YOUR_API_KEY")
    .client(new OkHttpClient()) // OPTIONAL: lets you customize the underlying OkHttp client (requires the okhttp3.OkHttpClient import)
    .buildSync(); // or use .build() to get a Future<ClarifaiClient>

// Installation via CocoaPods

// 1. Create a new Xcode project, or use a current one.

// 2. Create a workspace with CocoaPods.

  // a) Add Clarifai to your Podfile.
  pod 'Clarifai'

  // b) Generate the xcworkspace from the command line.
  pod install

// 3. Import ClarifaiApp.h in your project code.
#import "ClarifaiApp.h"

// 4. Go to [clarifai.com/developer/applications](https://clarifai.com/developer/applications), 
// click on your application, then copy the API Key value (if you don't 
// already have an account or application, you'll need to sign up first).

// 5. Create a ClarifaiApp object in your project with your API Key.
ClarifaiApp *app = [[ClarifaiApp alloc] initWithApiKey:@"YOUR_API_KEY"];

// 6. That's it! Explore the [API docs and guide](https://clarifai.com/developer).

// NOTE: To use Clarifai in Swift, add use_frameworks! to your Podfile and import the framework into any Swift file using:
import Clarifai

// Install cURL: https://curl.haxx.se/download.html

// Instantiate a new Clarifai app, passing in your API key.
const app = new Clarifai.App({
  apiKey: 'YOUR_API_KEY'
});

// Predict the contents of an image by passing in a URL
app.models.predict(Clarifai.GENERAL_MODEL, 'https://samples.clarifai.com/metro-north.jpg').then(
  function(response) {
    console.log(response);
  },
  function(err) {
    console.error(err);
  }
);
# Make sure to upgrade to the latest version of Python

from clarifai.rest import ClarifaiApp

app = ClarifaiApp(api_key='YOUR_API_KEY')

# get the general model
model = app.models.get("general-v1.3")

# predict with the model
model.predict_by_url(url='https://samples.clarifai.com/metro-north.jpg')

final ClarifaiClient client = new ClarifaiBuilder("YOUR_API_KEY").buildSync();

final List<ClarifaiOutput<Concept>> predictionResults =
    client.getDefaultModels().generalModel() // You can also use client.getModelByID("id") to get a custom model
        .predict()
        .withInputs(
            ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg"))
        )
        .executeSync() // optionally, pass a ClarifaiClient parameter to override the default client instance with another one
        .get();

ClarifaiApp *app = [[ClarifaiApp alloc] initWithApiKey:@"YOUR_API_KEY"];

ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/metro-north.jpg"];
[app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
    [model predictOnImages:@[image]
                completion:^(NSArray<ClarifaiOutput *> *outputs, NSError *error) {
                    NSLog(@"outputs: %@", outputs);
    }];
}];

curl -X POST \
  -H "Authorization: Key YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/models/aaa03c23b3724a16a56b629203edc62c/outputs

// Instantiate a new Clarifai app, passing in your API key
const app = new Clarifai.App({
  apiKey: 'YOUR_API_KEY'
});

// add some inputs
app.inputs.create('https://samples.clarifai.com/puppy.jpeg').then(
  searchForDog,
  function(err) {
    console.error(err);
  }
);

// search for concepts
function searchForDog(response) {
  app.inputs.search({
    concept: {
      name: 'dog'
    }
  }).then(
    function(response) {
      console.log(response);
    },
    function(response) {
      console.error(response);
    }
  );
}

from clarifai.rest import ClarifaiApp

app = ClarifaiApp(api_key='YOUR_API_KEY')

# Before searching, you first need to upload a few images
app.inputs.create_image_from_url("https://samples.clarifai.com/puppy.jpeg")

# search by public concept
app.inputs.search_by_predicted_concepts(concept='dog')

final ClarifaiClient client = new ClarifaiBuilder("YOUR_API_KEY").buildSync();

final ClarifaiResponse<List<SearchHit>> trainImages = client.searchInputs(
    // Finds images that match this picture of a train
    SearchClause.matchImageVisually(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg"))
)
    .getPage(1)
    .executeSync();

ClarifaiApp *app = [[ClarifaiApp alloc] initWithApiKey:@"YOUR_API_KEY"];

ClarifaiImage *dog1 = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/dog1.jpeg"];
ClarifaiImage *dog2 = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/dog2.jpeg"];

[app addInputs:@[dog1, dog2] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    // Create a Search Term to search for images that look similar to the supplied image.
    ClarifaiSearchTerm *dogSearchTerm = [ClarifaiSearchTerm searchVisuallyWithImageURL:@"https://samples.clarifai.com/dog3.jpeg"];
    [app search:@[dogSearchTerm] page:@1 perPage:@20 completion:^(NSArray<ClarifaiSearchResult *> *outputs, NSError *error) {
        NSLog(@"outputs: %@", outputs);
    }];
}];

curl -X POST \
  -H "Authorization: Key YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '
  {
    "query": {
      "ands": [
        {
          "output": {
            "data": {
              "concepts": [
                {
                  "name": "dog"
                }
              ]
            }
          }
        }
      ]
    }
  }' \
  https://api.clarifai.com/v2/searches

// Instantiate a new Clarifai app, passing in your API key
const app = new Clarifai.App({
  apiKey: 'YOUR_API_KEY'
});

// add inputs with concepts
app.inputs.create([{
  "url": "https://samples.clarifai.com/dog1.jpeg",
  "concepts": [
    { "id": "cat", "value": false },
    { "id": "dog", "value": true }
  ]
}, {
  "url": "https://samples.clarifai.com/dog2.jpeg",
  "concepts": [
    { "id": "cat", "value": false },
    { "id": "dog", "value": true }
  ]
}, {
  "url": "https://samples.clarifai.com/cat1.jpeg",
  "concepts": [
    { "id": "cat", "value": true },
    { "id": "dog", "value": false }
  ]
}, {
  "url": "https://samples.clarifai.com/cat2.jpeg",
  "concepts": [
    { "id": "cat", "value": true },
    { "id": "dog", "value": false }
  ]
}]).then(
  createModel,
  errorHandler
);

// Once the inputs are created, create a model by giving it a name and a list of concepts
function createModel(inputs) {
  app.models.create('pets', ["dog", "cat"]).then(
    trainModel,
    errorHandler
  );
}

// after model is created, you can now train the model
function trainModel(model) {
  model.train().then(
    predictModel,
    errorHandler
  );
}

// after training the model, you can now use it to predict on other inputs
function predictModel(model) {
  model.predict(['https://samples.clarifai.com/dog3.jpeg', 'https://samples.clarifai.com/cat3.jpeg']).then(
    function(response) {
      console.log(response);
    }, errorHandler
  );
}

function errorHandler(err) {
  console.error(err);
}

from clarifai.rest import ClarifaiApp

app = ClarifaiApp(api_key='YOUR_API_KEY')

# import a few labelled images
app.inputs.create_image_from_url(url="https://samples.clarifai.com/dog1.jpeg", concepts=["cute dog"], not_concepts=["cute cat"])
app.inputs.create_image_from_url(url="https://samples.clarifai.com/dog2.jpeg", concepts=["cute dog"], not_concepts=["cute cat"])

app.inputs.create_image_from_url(url="https://samples.clarifai.com/cat1.jpeg", concepts=["cute cat"], not_concepts=["cute dog"])
app.inputs.create_image_from_url(url="https://samples.clarifai.com/cat2.jpeg", concepts=["cute cat"], not_concepts=["cute dog"])

model = app.models.create(model_id="pets", concepts=["cute cat", "cute dog"])

model = model.train()

# predict with samples
print(model.predict_by_url(url="https://samples.clarifai.com/dog3.jpeg"))
print(model.predict_by_url(url="https://samples.clarifai.com/cat3.jpeg"))

final ClarifaiClient client = new ClarifaiBuilder("YOUR_API_KEY").buildSync();

// Create some concepts
client.addConcepts()
    .plus(
        Concept.forID("boscoe")
    )
    .executeSync();

// All concepts need at least one "positive example" (i.e., an input whose image contains that concept),
// so we will add a positive and a negative example of Boscoe
client.addInputs()
    .plus(
        ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/puppy.jpeg"))
            .withConcepts(
                Concept.forID("boscoe")
            ),
        ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/wedding.jpg"))
            .withConcepts(
                Concept.forID("boscoe").withValue(false)
            )
    )
    .executeSync();


// Now that you have created the boscoe concept, and you have positive
// examples of this concept, you can create a Model that knows this concept
final ConceptModel petsModel = client.createModel("pets")
    .withOutputInfo(ConceptOutputInfo.forConcepts(
        Concept.forID("boscoe")
    ))
    .executeSync()
    .get();

// Now that your app contains inputs with the concepts that you wanted to
// detect, you can train your "pets" model
petsModel.train().executeSync();

ClarifaiApp *app = [[ClarifaiApp alloc] initWithApiKey:@"YOUR_API_KEY"];

ClarifaiImage *dog1 = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/dog1.jpeg" andConcepts:@[@"cute_dog"]];
ClarifaiImage *dog2 = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/dog2.jpeg" andConcepts:@[@"cute_dog"]];

ClarifaiImage *cat1 = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/cat1.jpeg" andConcepts:@[@"cute_cat"]];
ClarifaiImage *cat2 = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/cat2.jpeg" andConcepts:@[@"cute_cat"]];

[app addInputs:@[dog1, dog2, cat1, cat2] completion:^(NSArray<ClarifaiInput *> *inputs, NSError *error) {
    [app createModel:@[@"cute_dog", @"cute_cat"] name:@"pets" conceptsMutuallyExclusive:NO closedEnvironment:NO completion:^(ClarifaiModel *model, NSError *error) {
        [model train:^(ClarifaiModel *model, NSError *error) {
            NSLog(@"model has been submitted to training queue");
        }];
    }];
}];

// Wait for the model to finish training, and then:
ClarifaiImage *dog3 = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/dog3.jpeg"];
ClarifaiImage *cat3 = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/cat3.jpeg"];
[model predictOnImages:@[dog3, cat3] completion:^(NSArray<ClarifaiOutput *> *outputs, NSError *error) {
  NSLog(@"outputs: %@", outputs);
}];

// add inputs with concepts

curl -X POST \
  -H "Authorization: Key YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/puppy.jpeg"
          },
          "concepts":[
            {
              "id": "boscoe",
              "value": true
            }
          ]
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/inputs

// Once the inputs are created, create a model by giving it a name and a list of concepts

curl -X POST \
  -H "Authorization: Key YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '
  {
    "model": {
      "name": "pets",
      "output_info": {  
        "data": {
          "concepts": [
            {
              "id": "boscoe"
            }
          ]
        },
        "output_config": {
          "concepts_mutually_exclusive": false,
          "closed_environment":false
        }
      }
    }
  }'\
  https://api.clarifai.com/v2/models

// After the model is created, you can train it (replace {model_id} with your model's ID)

curl -X POST \
  -H "Authorization: Key YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  https://api.clarifai.com/v2/models/{model_id}/versions


// After training the model, you can use it to predict on other inputs (replace {model_id} with your model's ID)

curl -X POST \
  -H "Authorization: Key YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '
  {
    "inputs": [
      {
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/puppy.jpeg"
          }
        }
      }
    ]
  }'\
  https://api.clarifai.com/v2/models/{model_id}/outputs

Install

Choose a client above and follow the instructions to get up and running.

Predict

Predict analyzes your images and tells you what's inside them.

The API returns a list of concepts, each with a probability of how likely it is that the concept is contained within the image.
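A predict response nests concepts under `outputs[0].data.concepts`. As a sketch, here is one way to pull out the top concepts in Python (the IDs and values below are illustrative, not real API output):

```python
# Extract (name, probability) pairs from a predict response.
# The nesting mirrors the v2 /outputs response; the values here are made up.
sample_response = {
    "outputs": [{
        "data": {
            "concepts": [
                {"id": "ai_train", "name": "train", "value": 0.9989},
                {"id": "ai_railway", "name": "railway", "value": 0.9975},
                {"id": "ai_station", "name": "station", "value": 0.9926},
            ]
        }
    }]
}

def top_concepts(response, limit=3):
    """Return (name, probability) pairs sorted by probability, highest first."""
    concepts = response["outputs"][0]["data"]["concepts"]
    ranked = sorted(concepts, key=lambda c: c["value"], reverse=True)
    return [(c["name"], c["value"]) for c in ranked[:limit]]
```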

When you make a prediction through the API, you tell it what model to use. A model contains a group of concepts. A model will only 'see' the concepts it contains.

You can use different models to analyze images in different ways. The above example is using the 'general' model.
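For example, a minimal Python sketch of building the request for the predict endpoint shown in the cURL example above (the model ID is the general model's ID from that example):

```python
def predict_request(model_id, image_url):
    """Build the endpoint URL and JSON body for POST /v2/models/{model_id}/outputs,
    matching the cURL predict example."""
    url = f"https://api.clarifai.com/v2/models/{model_id}/outputs"
    body = {"inputs": [{"data": {"image": {"url": image_url}}}]}
    return url, body

# The general model's ID, as used in the cURL example:
GENERAL_MODEL_ID = "aaa03c23b3724a16a56b629203edc62c"
```

Send the body with your HTTP client of choice, with the `Authorization: Key YOUR_API_KEY` header as shown in the cURL example.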

Search

You can use text or visual content to search across your collection of images.

You start by adding images (inputs) to an app. These get automatically tagged with the 'general' model. You can then search by concept and get ranked results based on the probability your images contain that concept.
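As a sketch, the search-by-concept request body from the cURL example can be built in Python like this:

```python
def concept_search_query(concept_name):
    """Build the JSON body for POST /v2/searches, matching inputs whose
    predicted concepts include the given name (same shape as the cURL example)."""
    return {
        "query": {
            "ands": [
                {"output": {"data": {"concepts": [{"name": concept_name}]}}}
            ]
        }
    }
```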

You can also use images to do reverse image search on your collection. The API will return ranked results based on how similar the results are to the image you provided in your query.
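A reverse image search goes to the same `/v2/searches` endpoint. A sketch of the query body, assuming the `output.input` nesting used for visual similarity (verify this shape against the current API reference before relying on it):

```python
def visual_search_query(image_url):
    """Build the /v2/searches body for reverse image search: find inputs
    visually similar to the given image. The output.input nesting is an
    assumption; check the API reference."""
    return {
        "query": {
            "ands": [
                {"output": {"input": {"data": {"image": {"url": image_url}}}}}
            ]
        }
    }
```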

Train

Train allows you to create your own model using your own custom concepts.

You start by adding inputs (images) that you already know contain the concepts you are interested in. You do not need many images to get started. We recommend starting with 10 and adding more as needed.

You then create a model and tell it what concepts it contains.
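As a sketch, the model-creation body from the cURL example can be assembled in Python; `concepts_mutually_exclusive` and `closed_environment` default to false, as in that example:

```python
def create_model_body(name, concept_ids,
                      mutually_exclusive=False, closed_environment=False):
    """Build the JSON body for POST /v2/models, mirroring the cURL example."""
    return {
        "model": {
            "name": name,
            "output_info": {
                "data": {"concepts": [{"id": cid} for cid in concept_ids]},
                "output_config": {
                    "concepts_mutually_exclusive": mutually_exclusive,
                    "closed_environment": closed_environment,
                },
            },
        }
    }
```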

After creating the model, you 'train' it to learn based on the images and concepts you provided. This train operation is asynchronous. It may take a few seconds for your model to be fully trained and ready.
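Since training is asynchronous, one common pattern is to poll the model version's status until it reports trained. A minimal sketch, assuming status code 21100 means trained and 21103 means still queued (these codes are illustrative; check the API's status-code reference):

```python
import time

# Illustrative status codes; verify against the API's status-code reference.
TRAINED = 21100
IN_QUEUE = 21103

def wait_until_trained(get_status, poll_interval=1.0, max_attempts=30):
    """Poll `get_status` (a callable standing in for a model-version status
    fetch) until the model reports trained. Returns True on success, False
    on failure or timeout."""
    for _ in range(max_attempts):
        code = get_status()
        if code == TRAINED:
            return True
        if code != IN_QUEUE:
            return False  # training failed or returned an unexpected status
        time.sleep(poll_interval)
    return False
```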

After training is done, you can use that model to predict those concepts on new images.