Kenneth Joel

A Brief History of Embedded AI

How sensor hubs on smartphones helped bring artificial intelligence to the edge


AIoT, TinyML, EdgeAI and MLSensors are the latest buzzwords in the Artificial Intelligence and Embedded Systems communities. There’s plenty of hype around designing highly optimised, low-footprint machine learning algorithms that run on ultra-low-power microcontrollers and DSPs targeted at sensing applications, including but not limited to audio and video. This approach is particularly useful in ‘always-on’ applications, where power consumption needs to be minimised and invasion of privacy is a serious concern.


Some existing applications already leverage embedded intelligence:


- Wake word detection: ‘Ok Google’ and ‘Hey Siri’ are prime examples that we use almost every day

- Car crash detection: Newer smartphones can fuse data from multiple sensors like the microphone, inertial sensors and GPS to detect a car crash and alert emergency services

- Step counting and workout detection: Wearables use data from inertial and biosensors, along with intelligent algorithms, to track daily activities.




What’s common in all of these cases is the use of dedicated low-power processing hardware that is tightly coupled with sensors, running highly optimised algorithms to make the relevant inferences. Another commonality is the fact that they are all subsystems of more complex devices.

Owing to these developments, in the near future we may see embedded intelligence in standalone devices as well:

- Completely offline smart cameras to detect human presence [1]

- Environmental sensors to detect forest fires [2] and illegal tree felling [3]


The growing popularity and widespread applications of Embedded Intelligence are inspiring.

But where did this all begin? Here’s my quick take on how ‘Sensor hubs’, particularly those in smartphones, helped kick-start this movement.


But first, what is a Sensor hub?

By definition, it is a co-processor, microcontroller or DSP that integrates data from multiple low-power sensors in a device and provides simplified insights, with the intent of freeing up the processing bandwidth of the main application processor.
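The division of labour in that definition can be sketched in a few lines: the hub services the sensor at a high rate, and the power-hungry application processor sleeps until a simple on-hub check fires. Everything here (`is_significant`, `wake_ap`, the trigger condition) is an illustrative stub of mine, not a real driver API.

```c
#include <stdbool.h>

/* Hypothetical sensor-hub loop: the hub screens every sample locally and
   interrupts the main SoC only for events worth waking it up for. */

static int ap_wakeups = 0;  /* how many times the application processor woke */

static bool is_significant(int sample) {
    return sample > 100;    /* assumed on-hub trigger condition */
}

static void wake_ap(int sample) {
    (void)sample;
    ap_wakeups++;           /* stand-in for raising an interrupt to the SoC */
}

/* Process a batch of raw samples entirely on the hub; the application
   processor stays asleep unless something interesting happens. */
static void hub_process(const int *samples, int n) {
    for (int i = 0; i < n; i++) {
        if (is_significant(samples[i])) {
            wake_ap(samples[i]);
        }
    }
}
```

The power saving comes from the asymmetry: a microcontroller burning microwatts screens thousands of samples so the main processor, burning watts, handles only the handful that matter.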


The Sensor hub started off as a neat power optimisation trick in smartphones. Phones as early as the iPhone 5s and the Nexus 5X and 6P featured the Apple M7 co-processor [4] and the Android Sensor Hub [5] respectively. Apple used the M7 to handle demanding inertial sensors like the accelerometer, gyroscope and compass and to run sensor fusion algorithms, while the Android Sensor Hub did the same and also ran algorithms for advanced activity recognition.
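As a flavour of the sensor fusion workloads these co-processors took over, here is a classic complementary filter that fuses gyroscope and accelerometer readings into an orientation estimate. This is a textbook technique, not Apple's or Google's actual algorithm; the weight `ALPHA` and the 100 Hz sample rate are assumptions for the sketch.

```c
/* Hypothetical complementary filter fusing two noisy orientation sources:
   trust the gyroscope (integrated rate) for fast changes, and the
   accelerometer (gravity direction) to correct long-term drift. */
#define ALPHA 0.98f   /* assumed weight of the gyro integration path */
#define DT    0.01f   /* assumed sample period in seconds (100 Hz) */

/* angle:       current pitch estimate (degrees)
   gyro_rate:   angular rate from the gyroscope (deg/s)
   accel_angle: pitch derived from gravity via the accelerometer (deg) */
static float complementary_filter(float angle, float gyro_rate, float accel_angle) {
    return ALPHA * (angle + gyro_rate * DT) + (1.0f - ALPHA) * accel_angle;
}
```

Run in a loop, the estimate follows the gyro instantaneously while slowly converging to the accelerometer's angle, which is why this cheap filter was a staple of early sensor hubs long before neural networks arrived on them.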

Motorola further innovated on inertial sensor features with the catchy “chop-chop” gesture to turn on the flashlight.




We also began to see the pairing of a sensor (the microphone) with machine learning on a low-power processor gain popularity through wake word detection features like “Hey Siri” and “Ok Google”. These features are being pushed to greater heights with Quick Phrases and Now Playing on the latest Pixel phones.




Thus, over the last 6–7 years, smartphones and their sensor hubs proved to be the perfect proof of concept to show the world that it is possible to deploy machine learning algorithms for sensors on very low-power computing platforms like microcontrollers and DSPs.

It’s great to see this movement get its own name and independent audience in the form of the TinyML, EdgeAI and MLSensors communities.


 

Interestingly enough, semiconductor giants like Analog Devices, TDK and Robert Bosch, which design and manufacture many of the sensors that go into smartphones, have their own unique take on Sensor hubs.


While the goal is still the same, to provide useful insights from multiple sensors while consuming as little power as possible, the applications are much broader. Since smartphones already have Sensor hubs of their own, independent ones are being developed for wearables, automobiles and other smart gadgets.


Sensor hubs on smartphones were initially discrete components. The M7 motion co-processor, for example, is an independent chip based on the NXP LPC18A1. But over time these co-processors got integrated into the main smartphone SoC.



Apple A7 alongside M7 (LPC18A1): Source — WikiMedia Commons


However, discrete Sensor hubs are still available from semiconductor and sensor manufacturers. They combine AI and sensors to enable niche use cases like swim coaching [8].

These are often just microcontrollers, typically ARM Cortex-M series, that are tightly coupled with the sensor and pre-loaded with algorithms to enable a specific use case. This is great for the manufacturers of these sensors because they are able to monetise not only the hardware but also the algorithms they develop for it.

It’s also great for companies using these ‘smart sensors’ to develop their own gadgets as they don’t need to spend time developing niche algorithms and can focus on system integration instead.


Sensor hubs are still at a very nascent stage, using mostly general-purpose microcontrollers, but as hardware continues to improve the possibilities are endless. Sensors themselves are getting more accurate and precise. ARMv9 and its focus on DSP and ML capabilities will greatly expand the set of models that can be implemented on an embedded device. The Ethos-U55, a microNPU (Neural Processing Unit) from ARM, could soon find its way into Sensor hubs that already implement ARM IPs [9]. Many startups like Syntiant are also developing specialised hardware for neural network inference at the edge.


Exciting times ahead in the world of sensors! Stay tuned for more musings on EdgeAI and Smart Sensors…



 

References and Links

