Edge AI – bringing artificial intelligence to where it is needed

AI at the edge – Intelligent and self-learning systems

Intelligent, self-learning systems are becoming an increasingly important part of business processes, from process automation to big data analysis. In the past, these intelligent systems had to be connected to the cloud, as this was the only way to provide the computing power needed to run their mathematical algorithms. Now Edge AI, short for Edge Artificial Intelligence, is enabling the next generation of intelligent systems by moving that intelligence onto the devices themselves.

Preserving data sovereignty

Storing and analyzing personal or mission-critical data in the cloud can lead to serious privacy issues.

In addition to the exposure to hackers and other cyberthreats that comes with internet connectivity, various legal rules and regulations also have to be considered.

For example, the CLOUD Act (Clarifying Lawful Overseas Use of Data Act), signed into law in 2018, gives U.S. authorities wide-ranging access to data held by U.S. cloud providers, regardless of where it is stored. Edge AI can contribute to data sovereignty by evaluating data directly on the device, which also makes it a strategic choice.

Cost-effectiveness and economy

Modern LiDAR or image sensors generate huge amounts of data. By evaluating this data locally and transmitting only the computational results to other systems, the amount of data transferred can be reduced significantly. Another advantage is that Edge AI lets you use artificial intelligence and its benefits even in places where network and internet connections are poor or non-existent.
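To illustrate the principle, here is a minimal Python sketch with a placeholder detector standing in for a real on-device model; the device name and detection result are made up. Instead of uploading a raw camera frame, only a compact JSON result would leave the device.

    import json

    def detect_objects(frame):
        # Placeholder for an on-device neural network; a real deployment would
        # run a quantized model on a local accelerator instead.
        return [{"label": "person", "confidence": 0.91}]

    # A single uncompressed 1920x1080 RGB frame: roughly 6 MB of raw sensor data.
    frame = bytearray(1920 * 1080 * 3)

    # The result that actually needs to be transmitted is only a few dozen bytes.
    payload = json.dumps({"device": "cam-01", "detections": detect_objects(frame)})

    print(f"raw frame: {len(frame)} bytes")
    print(f"payload:   {len(payload.encode('utf-8'))} bytes")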

Practical implementations with Raspberry Pi and Google

The versatile Raspberry Pi, which has long since found its way into industrial environments, is also ideally suited for running artificial intelligence locally.

For this purpose, Google offers the Coral USB Accelerator, a USB stick with a Tensor Processing Unit (TPU) that can perform up to 4 trillion computing operations per second while consuming only 2 watts.
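As a sketch of what this looks like in practice, the following Python snippet uses Google's PyCoral library to run image classification on the Edge TPU; the model and image file names are placeholders and assume a quantized model that has already been compiled for the Edge TPU. On a Raspberry Pi, the Edge TPU runtime and the pycoral package have to be installed first.

    from PIL import Image
    from pycoral.adapters import classify, common
    from pycoral.utils.edgetpu import make_interpreter

    # Load a quantized model compiled for the Edge TPU (file name is a placeholder).
    interpreter = make_interpreter("mobilenet_v2_1.0_224_quant_edgetpu.tflite")
    interpreter.allocate_tensors()

    # Scale the input image to the resolution the model expects and feed it in.
    image = Image.open("test_image.jpg").resize(common.input_size(interpreter))
    common.set_input(interpreter, image)

    # Inference runs on the USB-attached TPU rather than on the Pi's CPU.
    interpreter.invoke()

    # Print the three most likely classes with their confidence scores.
    for c in classify.get_classes(interpreter, top_k=3):
        print(c.id, c.score)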

If you want to use a standalone board instead, you can use Google’s Coral Dev Board Mini, which relies on the same TPU. It runs Mendel Linux, a Debian derivative developed by Google that is specifically tailored to the needs of AI development.

Maxim

The MAX78000 Feather scores in terms of both price and energy efficiency: its algorithms require only a few microjoules of energy per inference.

The board, which measures just under 2.3 x 6.6 cm, nevertheless contains the well-known Arm® Cortex®-M4 and a RISC-V coprocessor for real-time tasks in addition to an accelerator for convolutional neural networks (CNNs). CNNs can process 1D and 2D data very efficiently, for example for keyword recognition in speech and object recognition in images.
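To give a sense of scale, the sketch below defines a deliberately small image-classification CNN in PyTorch. It is purely illustrative and does not use Maxim's own training tooling, but networks of roughly this size are what a microcontroller-class accelerator of this kind is designed to run.

    import torch
    from torch import nn

    class TinyCNN(nn.Module):
        # A deliberately small CNN for 32x32 RGB images and 10 output classes.
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                              # 32x32 -> 16x16
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                              # 16x16 -> 8x8
            )
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(torch.flatten(x, 1))

    model = TinyCNN()
    print("parameters:", sum(p.numel() for p in model.parameters()))  # ~26k weights
    print(model(torch.randn(1, 3, 32, 32)).shape)                     # torch.Size([1, 10])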

Maxim provides an Eclipse-based Software Development Kit (SDK) with many sample programs for this purpose. A speech-recognition example comes pre-installed on the board, so you can get started right away.

More in-depth examples and explanations can be found in Elektor’s learning set, which includes an English-language book in addition to the board. Across more than 250 color pages you will find many practical examples.

Conclusion

Artificial intelligence and machine learning don’t need huge data centers or expensive specialized accelerators like NVIDIA’s Hopper GPUs. You can also put AI into mobile speakers or wearable devices. The MAX78000 Feather is already prepared for battery operation thanks to the integrated MAX20303 power-management IC.

Completely independent designs can be realized with the MAX78000 Feather’s MAX78000EXG+ microcontroller. The chip, along with many other 32-bit microcontrollers, is available in the Reichelt Shop.

Image: Adobe Stock
