MLFairy
Collecting Predictions from Your App

MLFairy can automatically collect your app's CoreML predictions. Predictions are encrypted with AES before being collected and transmitted, keeping your data secure.

These predictions let you improve your CoreML model during retraining. You can then re-deploy or redistribute the updated model through MLFairy if you choose.

The following image shows a CoreML model performing a prediction, and how that prediction appears in MLFairy.

Note: The source code for the sample app shown in this image is available in the project repo.

Collecting predictions

To start collecting predictions, create a model from your dashboard.

Note: You don't need to upload a file unless you're using MLFairy for distribution.

Each model has a unique token, which you'll use to send all of its inference data to MLFairy.


With your token copied and the MLFairy SDK added to your app, you have three options for collecting predictions.

Wrap your model with MLFairy

The simplest way to collect predictions is to invoke MLFairy.wrapCoreMLModel, passing in the copied token and a reference to your CoreML model.

import MLFairy
...
let coremlModel = <SomeXcodeGeneratedClass>()
let MLFAIRY_TOKEN = <copied token from your account>

coremlModel.model = MLFairy.wrapCoreMLModel(coremlModel.model, token: MLFAIRY_TOKEN)

Then continue using your CoreML model as normal.
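For example, a prediction made through the wrapped model might look like the following. (This is a sketch: ImageClassifier and its image input are hypothetical placeholders for your own Xcode-generated class, not part of the MLFairy SDK.)

```swift
import CoreML
import MLFairy

// Hypothetical Xcode-generated model class; substitute your own.
let classifier = ImageClassifier()
classifier.model = MLFairy.wrapCoreMLModel(classifier.model, token: MLFAIRY_TOKEN)

// Calls through the generated convenience API now go through the wrapped
// MLModel, so MLFairy records each input/output pair automatically.
let output = try classifier.prediction(image: somePixelBuffer)
print(output.classLabel)
```

Nothing else about your prediction code needs to change; only the underlying MLModel instance is swapped out.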

Send your predictions directly to MLFairy

If you tend to work with raw MLModel objects, or don't want to wrap your CoreML model in an MLFairy model, you can instead pass predictions directly to MLFairy using the MLFairy.collectCoreMLPrediction method.

import MLFairy
...
let coremlModel = <SomeXcodeGeneratedClass>()
let MLFAIRY_TOKEN = <copied token from your account>

let input: MLFeatureProvider = <some input>
// MLModel.prediction(from:) can throw, so call it with `try`.
let output = try coremlModel.model.prediction(from: input)
MLFairy.collectCoreMLPrediction(token: MLFAIRY_TOKEN, input: input, output: output)

Use the MLFModel returned after downloading your model

If you're using MLFairy to distribute your CoreML models, the SDK provides a convenience object in the MLFModelResult returned by MLFairy.getCoreMLModel. Its mlFairyModel property holds an already-wrapped CoreML model, and all predictions made through it are automatically associated with that download.

import MLFairy
...

let coremlModel = <SomeXcodeGeneratedClass>()
let MLFAIRY_TOKEN = <copied token from your account>

MLFairy.getCoreMLModel(MLFAIRY_TOKEN) { result in
  switch result.result {
    case .success:
      guard let model = result.mlFairyModel else {
        print("Failed to get CoreML model.")
        return
      }

      print("Model successfully downloaded")
      coremlModel.model = model
    case .failure(let error):
      print("Failed to get CoreML model \(String(describing: error)).")
  }
}
Copyright © 2020 9594426 Canada Inc.