Core ML: Machine Learning for iOS

published in Agile Methodology, iOS Development, Tutorials
by Piotr Przeliorz

You have probably heard phrases like “Machine Learning” or “Artificial Intelligence” without knowing exactly what they mean. In this article, I will shed some light on what Machine Learning is by walking you through a Core ML iOS sample.

It’s really hard to explain what Machine Learning is in one sentence, because nowadays it is a sprawling field. In my opinion, the shortest answer to that question is: learning from experience.

What is Machine Learning?

In computer science, Machine Learning means that a computer is able to learn from experience in order to understand data (for example images, text, numbers or almost anything that can be expressed as binary data).

Just as important as the data is the algorithm, which tells the computer how to learn from that data. Applying the algorithm to the data on a powerful computer is called training, and it produces an artifact as a result. That artifact can be exported as a lightweight executable: the trained model.
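To make this concrete, here is a minimal sketch of such a training step on macOS using Apple’s Create ML framework; the “Flowers” folder of labeled images and the output path are only placeholders.

import CreateML
import Foundation

// Hypothetical folder of labeled images: Flowers/rose/*.jpg, Flowers/tulip/*.jpg, ...
let trainingData = MLImageClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/path/to/Flowers"))

// Training: the algorithm learns from the data...
let classifier = try MLImageClassifier(trainingData: trainingData)

// ...and the resulting artifact is exported as a lightweight .mlmodel file.
try classifier.write(to: URL(fileURLWithPath: "/path/to/Flowers.mlmodel"))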

What is a machine learning model?

Apple introduced the Core ML framework in 2017. It provides the functionality to integrate trained machine learning models into an iOS or macOS app.

Apple Core ML model. Source: medium.com

As mobile developers, we don’t need to be experts in machine learning at all. We just want to play with machine learning features without spending time creating our own trained models.

Fortunately, you can find hundreds of trained models on the internet, for example in the Caffe Model Zoo, TensorFlow Models or the MXNet Model Zoo. These can easily be converted to the Core ML model format (files with a .mlmodel extension) using conversion tools like mxnet-to-coreml or tfcoreml.

You can also get a model from the Apple Developer site that works out of the box.
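For example, a ready-made model such as MobileNetV2 could be loaded like this once the .mlmodel file has been added to an Xcode project (the generated class name and the configuration-based initializer below are assumptions, not code from the sample project):

import CoreML
import Vision

// Xcode generates a Swift class (assumed here to be MobileNetV2) for every
// .mlmodel file added to the project; its .model property exposes the raw MLModel.
func makeVisionModel() throws -> VNCoreMLModel {
    let mobileNet = try MobileNetV2(configuration: MLModelConfiguration())
    return try VNCoreMLModel(for: mobileNet.model)
}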

Let’s get through the process of integrating a Core ML model into your app step by step.

Core ML: Machine Learning for iOS

I’ve created a sample project which shows how to use a machine learning model on iOS 11 using Core ML.

For this purpose, I’ve used Vision, a framework for image analysis built on top of Core ML, and the Pixabay API, which allows us to download photos.

Core ML model. Source: developer.apple.com

The idea is simple:

  1. Download pictures from Pixabay (thanks to the Kingfisher library for downloading and caching images from the web)
  2. Predict where a photo was taken based on the view from that photo

The first part of the work is pretty easy: make the API call using URLSession, parse the response and display the results in a UICollectionView.
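A rough sketch of that step could look like this; the PixabayResponse and PixabayImage types and the fetchImages function are simplified placeholders, not the exact code from the sample project.

import Foundation

// Minimal Decodable types for the parts of the Pixabay response we care about.
struct PixabayImage: Decodable { let webformatURL: URL }
struct PixabayResponse: Decodable { let hits: [PixabayImage] }

func fetchImages(query: String, apiKey: String,
                 completion: @escaping ([PixabayImage]) -> Void) {
    var components = URLComponents(string: "https://pixabay.com/api/")!
    components.queryItems = [
        URLQueryItem(name: "key", value: apiKey),
        URLQueryItem(name: "q", value: query)
    ]
    URLSession.shared.dataTask(with: components.url!) { data, _, error in
        guard let data = data, error == nil,
              let response = try? JSONDecoder().decode(PixabayResponse.self, from: data)
        else { return }
        // Hand the parsed results back on the main queue for the UICollectionView.
        DispatchQueue.main.async { completion(response.hits) }
    }.resume()
}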

Once we have our image downloaded, we can make a prediction. First, we need to create a VNCoreMLModel, which is a container for a Core ML model used with Vision requests (for this app I have used RN1015k500).

After that, we need to initialize our request object of type VNCoreMLRequest. To do that, we need the previously initialized VNCoreMLModel. The request is initialized with a completion closure that returns a VNRequest or an Error.

The last object that we need to create for our prediction is VNImageRequestHandler, which accepts data types like:

  • CGImage
  • CVPixelBuffer
  • Data
  • URL

When predicting from an image, the easiest way is to use the CGImage data type. It’s available via the cgImage property on UIImage.
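For reference, the four handler variants listed above could be created like this (a simplified sketch; the helper function is only illustrative):

import Foundation
import CoreGraphics
import CoreVideo
import Vision

// One VNImageRequestHandler per supported input type; the options dictionary
// can stay empty for simple cases.
func makeHandlers(cgImage: CGImage, pixelBuffer: CVPixelBuffer,
                  imageData: Data, imageURL: URL) -> [VNImageRequestHandler] {
    return [
        VNImageRequestHandler(cgImage: cgImage, options: [:]),
        VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]),
        VNImageRequestHandler(data: imageData, options: [:]),
        VNImageRequestHandler(url: imageURL, options: [:])
    ]
}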

Now that we have all the necessary components, we can call the perform method on the handler (we can run many requests at the same time, because perform accepts an array of requests).



private func predict(for image: CGImage) {
    // Wrap the Core ML model in a Vision container.
    guard let visionModel = try? VNCoreMLModel(for: mlModel.model) else { return }
    // The completion closure hands back the request together with its results.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        self.processRequest(request: request)
    }
    request.imageCropAndScaleOption = .centerCrop
    // The handler runs one or more Vision requests against a single image.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}

The final step of integrating a Core ML model into an app is to map the results of our VNRequest to a data format that can be used in the user interface. In our case, we want to get the location coordinates of the place where a photo was taken.

First, we need to access the prediction results via the results property on the VNRequest object and cast them to an array of VNClassificationObservation. The rest of the work depends on what kind of output our Core ML model returns. Here is a snippet of the code I used to process the request.


private func processRequest(request: VNRequest) {
    guard let observations = request.results as? [VNClassificationObservation] else { return }
    // Take the top observation: its identifier encodes the location, its confidence the certainty.
    guard let best = observations.first else { return }
    // The RN1015k500 identifier is a tab-separated string; the coordinates follow the first component.
    let latLong = best.identifier.components(separatedBy: "\t").dropFirst().compactMap { Double($0) }
    guard let lat = latLong.first, let long = latLong.last else { return }
    let result = ImageRecognitionData(latitude: lat, longitude: long, confidence: Double(best.confidence))
    DispatchQueue.main.async {
        self.completion(result)
    }
}

Once the request is processed, I pass the result to the completion closure and update the location on the map. And that’s it! Now you know how to use a machine learning model on iOS 11 using Core ML.
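The map update itself could look roughly like this (a simplified sketch assuming an MKMapView; the helper function name is only illustrative):

import MapKit

// Matches the type used in processRequest above.
struct ImageRecognitionData {
    let latitude: Double
    let longitude: Double
    let confidence: Double
}

func showPrediction(_ result: ImageRecognitionData, on mapView: MKMapView) {
    let coordinate = CLLocationCoordinate2D(latitude: result.latitude,
                                            longitude: result.longitude)
    // Drop a pin at the predicted location and show the confidence in its title.
    let annotation = MKPointAnnotation()
    annotation.coordinate = coordinate
    annotation.title = String(format: "Confidence: %.2f", result.confidence)
    mapView.addAnnotation(annotation)
    // Zoom to roughly a 50 km region around the predicted location.
    mapView.setRegion(MKCoordinateRegion(center: coordinate,
                                         latitudinalMeters: 50_000,
                                         longitudinalMeters: 50_000),
                      animated: true)
}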


I hope that this Core ML tutorial has shed some light on Machine Learning. Remember, this is only one of the ways to use Core ML to build more intelligent apps. Core ML offers many more possibilities: for example, you can work not only with pictures but also with text, video or audio.


If you want to learn more about Core ML, check out the WWDC 2017 sessions. Feel free to browse the full application code here and share your comments about what Machine Learning is below.

Piotr Przeliorz

Despite his complicated surname, Piotr believes in simple solutions. He loves to create, to modify, to make mistakes in iOS apps and to correct them. And, last but not least, to have fun with every single line of new code.
