Computer scientists at Apple have posted a research paper online describing how autonomous cars can better detect cyclists and pedestrians while using fewer sensors, as first spotted by Reuters. CEO Tim Cook has called self-driving vehicles "the mother of all A.I. projects", and the company filed a self-driving test plan with regulators in California back in April. The paper also follows the sighting of Apple's self-driving Lexus SUV test vehicle last month.
Apple spent most of 2016 coming to grips with the reality that it couldn't build a vehicle of its own, but this year the company showed it still has ambitions in autonomous driving technology. The 3D detection network is pitched as an alternative to pairing LiDAR, a laser-surveying method used in some self-driving cars to spot potential obstacles, with conventional cameras.
Published by Yin Zhou and Oncel Tuzel on arXiv, a site for scientific papers that have yet to be peer reviewed, the report details software called VoxelNet.
Autonomous vehicles typically use a combination of 2D cameras and LiDAR (a remote sensing method that measures distances with pulsed laser light) to navigate the world around them. LiDAR provides depth information, but its low resolution makes it hard to detect small, distant objects without help from a conventional camera linked in real time. "Furthermore, our network learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists, based on only LiDAR", the researchers write.
Apple's team has achieved "highly encouraging results" in using AI to examine the sparse LiDAR data alone and pick out distant pedestrians and cyclists.
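The core trick in VoxelNet-style pipelines is to partition the sparse LiDAR point cloud into a 3D grid of voxels before a neural network learns features from each occupied cell. The sketch below illustrates only that partitioning step; the grid size, point format (x, y, z, reflectance), and per-voxel point cap are illustrative assumptions, not Apple's published parameters.

```python
import numpy as np

def voxelize(points, voxel_size=(0.4, 0.2, 0.2), max_points_per_voxel=35):
    """Group LiDAR points into a sparse mapping of voxel index -> points.

    points: (N, 4) array of x, y, z, reflectance (assumed format).
    Returns a dict so empty voxels cost nothing, mirroring the sparsity
    that makes LiDAR-only processing tractable.
    """
    # Integer voxel coordinates for each point's x, y, z
    coords = np.floor(points[:, :3] / np.array(voxel_size)).astype(np.int32)
    voxels = {}
    for idx, pt in zip(map(tuple, coords), points):
        bucket = voxels.setdefault(idx, [])
        if len(bucket) < max_points_per_voxel:  # cap points per voxel
            bucket.append(pt)
    return voxels

# Toy point cloud: 1000 random points in a 40m x 20m x 4m volume
rng = np.random.default_rng(0)
cloud = rng.uniform([0.0, -10.0, -2.0, 0.0], [40.0, 10.0, 2.0, 1.0],
                    size=(1000, 4))
voxels = voxelize(cloud)
print(len(voxels), "occupied voxels")  # sparse: far fewer than the full grid
```

In the full method, a network then learns a feature vector per occupied voxel and a detection head predicts 3D boxes from the resulting grid; this fragment only shows why the representation stays compact despite the sensor's low resolution.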
LiDAR is great at pinpointing the exact position of objects in 3D space, but it has notoriously low resolution. In July, Apple took a step toward sharing its research by publishing the Apple Machine Learning Journal.
The paper offers one of the clearest looks yet at Apple's work on self-driving technology.