
Apple hopes you'll figure out what to do with AI on the iPhone XS


One of the toughest problems in machine learning, within the broader field of AI, is figuring out what problem the computer should be solving. Computers can only learn and understand, if they understand at all, when something is framed as a matter of finding a solution to a problem.

Apple is approaching that challenge by hoping to lure developers to use its chips and software tools to supply new use cases for neural networks on a mobile device.

During Wednesday’s media event at its Cupertino headquarters to unveil the iPhone XS, XS Max, and XR, Apple discussed the “neural engine,” a section of the A-series processor in the iPhone that is dedicated to machine learning workloads.

Also: Meet Apple’s iPhone XS, iPhone XS Max, and iPhone XR: Prices and specs

This year’s A12 chip features the second version of the neural engine, which debuted last year in the iPhone X’s A11 processor. The new version has eight cores, up from two, which Apple says allows the circuitry to process five trillion operations per second, an increase from the 600 billion it quoted last year.

Neural engine workloads

What to do with all that dedicated computing power is the question. Apple has some suggestions, but it is clearly hoping that developers using its machine learning framework, Core ML, will fill in the blanks.
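As a rough illustration of what that looks like for a developer, here is a minimal sketch of running an inference through Core ML and Vision in Swift. The model name "FlowerClassifier" and its labels are hypothetical stand-ins for whatever model a developer ships; Core ML decides at runtime whether supported layers run on the CPU, the GPU, or the neural engine.

```swift
import CoreML
import Vision
import UIKit

// Minimal sketch of running an inference with Core ML and Vision.
// "FlowerClassifier.mlmodelc" is a hypothetical compiled model bundled with the app.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let modelURL = Bundle.main.url(forResource: "FlowerClassifier",
                                         withExtension: "mlmodelc") else { return }

    let config = MLModelConfiguration()
    config.computeUnits = .all   // allow Core ML to use the CPU, GPU, or neural engine

    guard let coreMLModel = try? MLModel(contentsOf: modelURL, configuration: config),
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else { return }

    // Vision resizes and crops the image to match the model's expected input.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("Top label: \(top.identifier) (confidence \(top.confidence))")
        }
    }

    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```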

Last year’s iPhone X already used the neural engine to perform facial recognition. Yesterday, the company’s marketing lead, Phil Schiller, discussed how the iPhone XS now lets the neural engine work with another area of the chip, the one dedicated to photo processing, called the image signal processor. The neural engine helps the image signal processor create sharper “segmentation masks” that identify where the features of a face are in a portrait, which can be used to improve the way lighting is applied to the face when a portrait is taken.

To show off what a developer can do, Apple invited onstage Nex Team, a startup building augmented reality applications. The company showed how basketball video can be used to train players. In consultation with Steve Nash, a former player for the Phoenix Suns who is now a professional trainer, it has developed a program that takes a video and tracks, in real time, the posture of the player as he or she takes practice shots at the hoop. It also plots the trajectory of the basketball in flight, in real time, and gathers various other metrics.
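Apple has not published how Nex Team’s app is built, but the general pattern of feeding live camera frames through a Core ML model by way of the Vision framework looks roughly like the sketch below. "ShotPoseModel" and its multi-array output are hypothetical stand-ins for whatever pose model the startup actually uses.

```swift
import AVFoundation
import CoreML
import Vision

// Rough sketch of per-frame inference on live video, in the spirit of a
// pose-tracking app. The model and its post-processing are hypothetical.
final class FrameAnalyzer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let request: VNCoreMLRequest

    init(model: VNCoreMLModel) {
        let req = VNCoreMLRequest(model: model) { request, _ in
            guard let observation = request.results?.first
                    as? VNCoreMLFeatureValueObservation,
                  let joints = observation.featureValue.multiArrayValue else { return }
            // e.g. turn the multi-array of joint positions into a skeleton overlay
            // and derive release angle, elbow height, and similar metrics.
            print("Received \(joints.count) joint values for this frame")
        }
        req.imageCropAndScaleOption = .scaleFill
        request = req
        super.init()
    }

    // Called by AVFoundation for every captured video frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }
}
```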

Also: AI: Everything you need to know about Artificial Intelligence | Deep learning: Everything you need to know | Machine learning: Everything you need to know

Thus, training can be re-conceived as an ML problem of how to measure the statistics that underlie the best athletic performance using mobile video.

Signing up developers like Nex Team is a way to put some distance between Apple and the various other chip makers that are either building merchant silicon for AI or running captive in-house efforts for their own phones. Chip giant Qualcomm, for example, includes AI acceleration in its Snapdragon line of mobile processors. Huawei has built AI circuitry into the Kirin chips for its smartphones, as has Samsung Electronics with its own Exynos processors for the Galaxy smartphones.

Much like digital signal processors, which took over some math functions from the CPU, such as video decoding, the machine learning circuitry anticipates a wave of ML workloads on mobile devices, says Linley Gwennap, principal analyst with chip research firm The Linley Group.

Also: Apple’s Siri: A cheat sheet TechRepublic

“This follows a well-established path that you don’t want to use the CPU for a lot of things,” says Gwennap. “It’s always more efficient to take a common function and put it in a separate functional block.”

“The CPU could run simple neural networks for face recognition and things, but when you put that same task into a hardware accelerator,” such as the neural engine, “you can do the same work for one tenth the power consumption of doing it on the CPU.”

For example, “In a convolutional neural network, about 80 percent of the computation is a matrix multiplication,” says Gwennap, referring to one of the “primitives” that underlie one of the most common types of machine learning structures. Identifying such common primitives is a simple way to boost performance across a broad assortment of AI workloads, even as algorithms change.
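Gwennap’s point can be made concrete with a little arithmetic: a convolutional layer can be rewritten as one large matrix multiplication (the standard “im2col” layout), so a block that accelerates matrix multiplication covers most of the layer’s work. The layer sizes in this back-of-the-envelope sketch are illustrative, not drawn from any particular network.

```swift
// Back-of-the-envelope sketch of how a convolutional layer becomes a matrix
// multiplication, which is why a dedicated matrix-multiply block covers so
// much of a CNN's arithmetic. Dimensions are illustrative only.
let inputHeight = 56, inputWidth = 56        // feature-map size
let inputChannels = 64, outputChannels = 128
let kernel = 3                               // 3x3 convolution, stride 1, "same" padding

// im2col unrolls every 3x3x64 input patch into a row of a matrix A,
// and the layer's filters into a matrix B; the layer output is simply A x B.
let rowsOfA = inputHeight * inputWidth            // one row per output position
let colsOfA = kernel * kernel * inputChannels     // one column per patch element
let colsOfB = outputChannels

// Multiply-accumulate count of that single matrix multiplication.
let macs = rowsOfA * colsOfA * colsOfB
print("A is \(rowsOfA) x \(colsOfA), B is \(colsOfA) x \(colsOfB)")
print("Multiply-accumulates for this one layer: \(macs)")   // ~231 million
```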

As machine learning becomes a more common function, presumably Apple could have a leg up on Huawei and Samsung and others by having the best functional block in hardware to run an increasing number of neural nets written with its Core ML framework.

Also: Siri will show you your passwords if you ask CNET

Notably absent from Wednesday’s neural net talk was Apple’s own Siri intelligent assistant. Siri’s performance has been mixed, and it would seem a good candidate for some kind of local acceleration on the phone. Gwennap offers that local processing of Siri could be useful for tasks such as telling the lights in your home to turn on: you wouldn’t have to wait for Siri to first connect to the cloud to understand your voice commands.

“Just to have some Siri presence greet you” without having to wait for the latency of going roundtrip to the cloud, might be an improvement in Siri’s lackluster track record, he offers.
