Artificial intelligence (AI) applications are well suited to pattern recognition and enable flexible decision making, for example in driver assistance systems in cars or in industrial applications such as quality assurance. The availability of specialised tensor coprocessors and their integration into single-board computers and Computer-on-Module (COM) boards make it possible to execute extremely compute-intensive AI operations directly on site as needed.
Artificial intelligence is no longer a buzzword or the domain of a few mathematics specialists. It uses artificial neural networks, modelled on structures in the brain, to process information with a high degree of abstraction.
While this does not give devices and machines the ability to think, it does allow them to draw analogical conclusions from historical information. It also enables them to gather experience through statistically based learning methods such as machine learning and deep learning.
More responsive machines
This learning ability has several advantages. For one thing, devices and machines can react to unexpected operating situations without the software having to map every eventuality in detail from the outset. This enables commissioning with basic programming and self-optimisation during operation. In industrial applications, these kinds of algorithms can be used, for example, to give machines a time advantage through predictive positioning of the tool or workpiece.
In addition, the use of machine learning and deep learning keeps the software development effort manageable, because part of the fine-tuning can take place while training on operating situations rather than during live operation. Much of this training, especially of more abstract, general functionality, can be carried out in advance in computer simulation, safely and with far more training rounds than would be possible in reality.
Demanding information processing
The applications of artificial intelligence range from speech recognition and personal identification to the recognition of objects by their location, size and characteristics, through to quality assurance. It helps that artificial neural networks are ideally suited to in-depth analysis of large amounts of data, for example for high-accuracy pattern recognition in sound, image and video data.
However, these methods place very high demands on computing power. In the past, this required AI applications to be outsourced to high-performance systems. Such applications are often offered as software-as-a-service (SaaS) in the cloud.
Due to limited communication bandwidth, processing was often not possible in real time. In addition, it sometimes entailed high costs for data transmission over public telecommunications networks. Many users also have concerns about the reliability of data connections and the loss of control over their information, given the large amount of data at play.

Decentralised intelligence
Digitalisation and Industry 4.0 require a systemic shift away from central structures towards decentralised data processing. This also applies to devices and machines. Instead of developing the hardware from the ground up, their manufacturers often integrate single-board computers, controllers or standards-based computer-on-modules.
These come in a huge variety and are also available in rugged versions with an extended temperature range for industrial use. Not least because of its low cost and small dimensions, embedded computing allows individual control and data-processing tasks to be solved directly on site.
Embedded boards communicate with each other as processing units on the Internet of Things, as well as with higher-level services and increasingly with cloud services. There, too, there is now a shift away from strictly centralised processing. Instead of one central intelligence, the edge devices increasingly interact with decentralised, often task-specific edge servers.
Different access points to AI
In order to remain independent of the transmission bandwidth, time-critical operations, such as the inference calculations of artificial intelligence, are increasingly carried out at device level, at the edge of the system boundary.
For this purpose, the well-known semiconductor manufacturers already offer powerful processors (central processing units, CPUs) with directly integrated functions dedicated to AI processing. Since inference calculations have certain similarities with image processing, powerful graphics processing units (GPUs) are even better suited than conventional CPUs to many AI tasks. Some well-known manufacturers of graphics boards have therefore jumped on the bandwagon and now offer their hardware and supporting developer tools specifically for AI applications.
Recently, many semiconductor manufacturers have also launched special AI accelerators known as tensor processing units (TPUs), such as Google’s Coral tensor processor. Even more than GPUs, these relieve the main processor of the particularly compute-intensive AI operations, above all the inference of already trained neural networks. Some of them, such as the Hailo-8™ AI accelerator with 26 TOPS, are particularly fast thanks to built-in memory.
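To illustrate how such an accelerator relieves the main processor, the following minimal sketch shows how an application might hand the inference of a pre-trained, quantised model to a Coral Edge TPU via TensorFlow Lite's delegate mechanism. The model file name and the dummy input frame are illustrative assumptions, not details taken from this article.

```python
# Hypothetical sketch: running a pre-trained, quantised model on a Coral Edge TPU
# through the TensorFlow Lite runtime. Only input preparation and result handling
# remain on the host CPU; the network computation runs on the accelerator.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",                     # placeholder model file
    experimental_delegates=[load_delegate("libedgetpu.so.1")],  # Edge TPU delegate
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# A dummy frame stands in for real sensor or camera data here.
frame = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()                                       # inference on the accelerator

scores = interpreter.get_tensor(output_details["index"])
print("Most likely class:", int(np.argmax(scores)))
```

In this arrangement, the host processor only feeds inputs and reads back results, which is what makes real-time inference feasible even on compact embedded boards.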
Independence through local AI
Leading manufacturers of development boards, single-board computers and computer-on-modules now offer products of all sizes and performance classes with integrated TPUs. This allows AI applications to run in real time at the edge or even at device level, directly on site. As a result, even very compact machines and devices can easily be equipped with machine learning and deep learning capabilities.
The decentralisation of artificial intelligence opens up previously unimagined application possibilities. It allows developers to design devices and machines so that their functionality grows during operation, reaching a scope that would not be feasible with conventional programming, or only with vast programming effort.
It also supports and facilitates the modularisation of larger machines. Individual, semi-autonomous modules and assemblies can use the integrated AI capabilities to coordinate with each other. Thus, some of the problems connected with the interaction of different system parts can be delegated to the modules themselves. In addition, this enables functional optimisation of the entire machine or system by mutual coordination of the individual parts.
A change in thinking required
Taking the wide variety of optimisation goals into account makes this a multi-dimensional task. This example shows that AI does not make software developers and control programmers superfluous; rather, it lets them approach problems differently and gives them different, often more convenient tools.
However, working with these tools has to be learned and practised. It also requires a change in thinking, away from the previously widespread sequential approach to control, regulation and automation tasks.
