The WWDC17 announcement that will change the app ecosystem

This year b2cloud was lucky enough to win the lottery for WWDC17 tickets, and I was given the incredible opportunity to fly to San Jose and represent our business.

I had high expectations given that it’s the 10th anniversary of the release of the first iPhone, and I was not disappointed.

In the midst of the hype of the conference, one thing that stood out to me as having far-reaching implications for the app ecosystem the world over was CoreML.

CoreML allows app developers to incorporate the cumulative work of machine learning researchers around the world directly into their apps. It allows smart people (developers) to leverage really smart people (PhD researchers).

First, what is machine learning?

Wikipedia defines machine learning as:

the subfield of computer science that, according to Arthur Samuel in 1959, “gives computers the ability to learn without being explicitly programmed”.

What this means in context is that you can create a machine learning algorithm, feed lots of pre-classified ‘training’ data into it, and out pops a “trained” machine learning model. You can then feed it new, never-before-seen data, and it will make decisions about it. You’ve now “trained” a computer to tell you things that you didn’t know before (e.g. data mining), be better than you can be (e.g. chess), or handle things worse than you, but in vast quantities (e.g. every concurrent Siri conversation around the world)!
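To make that workflow concrete, here is a toy sketch I put together purely for illustration (plain Swift, not CoreML, and far simpler than any real model): labelled training data goes in, a trivial 1-nearest-neighbour “model” comes out, and it then classifies a point it has never seen.

```swift
// Toy illustration of "train, then predict": a 1-nearest-neighbour classifier.
// Real machine learning models are vastly more sophisticated, but the workflow
// is the same: labelled data in, a model out, then predictions on new data.

struct LabelledPoint {
    let features: [Double]
    let label: String
}

struct NearestNeighbourModel {
    let trainingData: [LabelledPoint]

    // "Training" here is just remembering the labelled examples.
    init(trainingData: [LabelledPoint]) {
        self.trainingData = trainingData
    }

    // Classify a never-before-seen point by finding the closest training example.
    func predict(_ features: [Double]) -> String? {
        return trainingData
            .min(by: { distance($0.features, features) < distance($1.features, features) })?
            .label
    }

    private func distance(_ a: [Double], _ b: [Double]) -> Double {
        return zip(a, b).map { ($0.0 - $0.1) * ($0.0 - $0.1) }.reduce(0, +)
    }
}

// Pre-classified training data: [height, weight] -> label
let training = [
    LabelledPoint(features: [30, 4], label: "cat"),
    LabelledPoint(features: [60, 25], label: "dog"),
]

let model = NearestNeighbourModel(trainingData: training)
print(model.predict([35, 6]) ?? "unknown")   // prints "cat"
```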

Machine learning, and by extension CoreML, is not “self-teaching”. A model learns from the data it is initially trained on, but it does not pick up new information unless taught by its masters (some exceptions apply).

What CoreML lets you do

In the past, if you wanted to incorporate any machine-learned process (a trained machine learning model) into an app, you needed to pull in vast amounts of code to run it. This code was often arcane, not Swift or even Objective-C, and, unless you’re a machine learning expert, basically inscrutable. Alternatively, you could do it all in the cloud, with the same difficulties, usually much more slowly, and with additional privacy concerns.

Now, with CoreML, you just need to drag a single file into your app. Apple has created a tool that will convert many existing formats of pre-trained machine learning models into a single black box file (stuff goes in, results come out), which under the hood uses the full suite of optimisation and hardware acceleration available on modern Apple devices.
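As a rough sketch of what that looks like in practice (the model name FlowerClassifier and its “image”/“classLabel” feature names below are hypothetical placeholders; the real names come from whichever .mlmodel file you drag in, for which Xcode generates a Swift class):

```swift
import CoreML
import CoreVideo

// Hypothetical example: dragging FlowerClassifier.mlmodel into an Xcode project
// causes Xcode to generate a FlowerClassifier class. The input ("image") and
// output ("classLabel") names are defined by the model itself.
func classify(pixelBuffer: CVPixelBuffer) {
    do {
        let model = FlowerClassifier()
        let output = try model.prediction(image: pixelBuffer)
        print("Predicted label: \(output.classLabel)")
    } catch {
        print("Prediction failed: \(error)")
    }
}
```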

This is important, because the former method was basically reserved for giant companies with the resources to hire post-doctorate researchers, whereas now with CoreML, any Johnny Appleseed app developer can get quite far without expert knowledge of machine learning.

A worked example

A friend of my partner, let’s call him Dr. S, wants to be an ophthalmologist (eye surgeon), and wondered if he and I could make an app that could help rural doctors diagnose complex eye disorders. This would, incidentally, strengthen his bid to gain entry into the ophthalmology programme.

His idea was to use a phone’s front-facing camera to help diagnose vision disorders based on eye gaze direction problems (when the eyes don’t look in the same direction properly). When he floated this idea to me he, as is often the case, vastly underestimated the complexity of making apps. In particular, he failed to realise that while his idea is a good subject for a PhD, it is not something you can build in a few weekends.

However, just out of interest, I scoured GitHub and happened upon OpenFaceIOS*, some work on pupil tracking that I could at least run on an iPhone. It is a small project that brings a larger facial feature identification system, OpenFace^, to iOS, which in turn uses the OpenCV# computer vision framework.

So basically, a long chain of extremely smart people have performed research, written papers, written code, released it as open source, and so on and so forth. Shoehorning all of that into an app is at best messy and time consuming, and requires a high level of expertise.

Thankfully, in 2017, we have CoreML. CoreML builds on decades of research by the world at large into machine learning, computer vision, and natural language processing, to more or less offer all of the above in a nice, easy to use format for Apple devices, with some of it built into the OS.

CoreML vastly reduces the amount of code needed to use machine learning on Apple devices. The onerous process mentioned above, pulling tens of thousands of lines of code into an iOS app, shrinks to a couple of hundred lines.

Leveraging Apple’s new, aptly named Vision framework (itself built on CoreML), and referencing example code from Apple, I got back to where my first look at Dr. S’s idea had left me within two hours, versus the two days I originally spent.
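For the curious, the landmark-detection step of that starting point boils down to something like the sketch below, following the general Vision pattern from Apple’s documentation (camera capture, error handling, and all of the actual diagnostic logic are omitted):

```swift
import Vision
import CoreGraphics

// Detect face landmarks (including pupil positions) in a still image using Vision.
func detectPupils(in image: CGImage) {
    let request = VNDetectFaceLandmarksRequest { req, error in
        guard let faces = req.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Landmark points are reported in coordinates normalised to the face bounding box.
            if let leftPupil = face.landmarks?.leftPupil,
               let rightPupil = face.landmarks?.rightPupil {
                print("Left pupil: \(leftPupil.normalizedPoints), right pupil: \(rightPupil.normalizedPoints)")
            }
        }
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```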

Maybe, with CoreML, I can help Dr. S in a few weekends, without doing a PhD, and that’s why CoreML is kind of a big deal.

————————————————————————————————————————–

References:

* OpenFaceIOS – A project on GitHub – https://github.com/FaceAR/OpenFaceIOS

^ OpenFace – An open source facial landmark detection system by Tadas Baltrusaitis – https://github.com/TadasBaltrusaitis/OpenFace

# OpenCV – An open source computer vision framework – http://opencv.org/