Downloading Models into Your App

MLFairy provides production-grade deployment of your CoreML models. If your model has already been added to your app once, you can use MLFairy to distribute updated versions of that model.

If you upload multiple versions of your model to MLFairy, you decide which version your app is updated with by marking that version as the actively deployed model.

Mark model as actively deployed

Once you've activated a model for deployment, you'll need that model's token. Each model has a unique token, so you can control which model is downloaded into your app.

Copy token to clipboard

With your token copied and the MLFairy SDK added to your app, simply invoke MLFairy.getCoreMLModel, passing in the copied token.

Note: This token should be kept private. Anyone can download your model if this token is made public.

import MLFairy
...

// Instance of the class Xcode generated from your .mlmodel file
let coremlModel = <SomeXcodeGeneratedClass>()
// The token copied from your model's page on MLFairy
let MLFAIRY_TOKEN = <copied token from your account>

MLFairy.getCoreMLModel(MLFAIRY_TOKEN) { result in
  switch result.result {
  case .success:
    // compiledModel can be nil (for example on watchOS, which can't compile models)
    guard let model = result.compiledModel else {
      print("Failed to get CoreML model.")
      return
    }

    print("Model successfully downloaded")
    // Swap the downloaded model into the Xcode-generated wrapper
    coremlModel.model = model
  case .failure(let error):
    print("Failed to get CoreML model \(String(describing: error)).")
  }
}

Note: Although the download happens on a background thread, the model is returned on the main UI thread. You can pass in an optional DispatchQueue to receive the model from MLFairy on a different thread.
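
For example, here is a minimal sketch of receiving the callback on a background queue. The queue: argument label is an assumption for illustration only; check the SDK's getCoreMLModel signature for the exact parameter name.

import Foundation
import MLFairy

// Hypothetical argument label `queue:` -- verify against the SDK's signature.
let downloadQueue = DispatchQueue(label: "com.example.mlfairy")

MLFairy.getCoreMLModel(MLFAIRY_TOKEN, queue: downloadQueue) { result in
  // This closure now runs on downloadQueue rather than the main thread.
  switch result.result {
  case .success:
    guard let model = result.compiledModel else { return }
    // Hop back to the main thread before touching anything UI-facing.
    DispatchQueue.main.async {
      coremlModel.model = model
    }
  case .failure(let error):
    print("Failed to get CoreML model \(String(describing: error)).")
  }
}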

MLFairy will attempt to update the underlying MLModel on each call to MLFairy.getCoreMLModel.

Models are downloaded and stored on the device. If there's a failure to download the latest version, the last saved version will be used.

You can track your model downloads from your model's dashboard.

Note: The source code for the sample app in this image is in the project repo.

See model downloads

If a model was successfully downloaded, the MLFModelResult object returns a compiled MLModel. It also returns the path to the downloaded model, the path to the compiled model, and an MLFModel that conforms to MLModel and can be used to collect predictions.
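
As an illustration, the sketch below reads those fields off the result. Only result and compiledModel appear in the snippet above; the other property names (downloadedModelPath, compiledModelPath, mlFairyModel) are hypothetical placeholders, so check MLFModelResult's definition for the real names.

import CoreML
import MLFairy

MLFairy.getCoreMLModel(MLFAIRY_TOKEN) { result in
  if case .failure(let error) = result.result {
    print("Download failed: \(String(describing: error)).")
    return
  }

  // Compiled model, ready to assign to the Xcode-generated class (nil on watchOS).
  if let compiled = result.compiledModel {
    coremlModel.model = compiled
  }

  // Hypothetical property names -- the result also exposes the on-disk
  // locations of the downloaded and compiled model files:
  // let downloadedURL = result.downloadedModelPath
  // let compiledURL = result.compiledModelPath

  // Hypothetical property name -- an MLFModel conforming to MLModel that
  // MLFairy can use to collect predictions:
  // let instrumentedModel = result.mlFairyModel
}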

Note: Currently, watchOS doesn't support compiling models, so the returned MLFModelResult will always contain a nil MLFModel.
