Overview
Introduction
The TwinCAT Machine Learning Server enables the execution of AI models directly on the control IPC or on an Edge Device.
The TwinCAT Machine Learning Server consists of two components:
- The PLC function block as a client of the Machine Learning Server.
- The Machine Learning Server as a provider of services such as loading and executing AI models.
These components provide asynchronous execution functionality for PLC programs. Asynchronous calculation effectively decouples the AI model's execution time from the cyclic operation of the PLC. The Machine Learning Server can execute even sophisticated AI models on both CPUs and NVIDIA GPUs and is therefore particularly well suited for use with the C6043 Industrial PC. The Machine Learning Server runs in the user mode of the operating system. This results in non-deterministic behavior that can only be partially mitigated by configuring the user-mode components accordingly.
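The decoupling described above can be illustrated with a minimal sketch in Python (not the TwinCAT API; all names here are hypothetical). A cyclic "task" hands an inference request to a server running in a separate thread and then polls for the result each cycle instead of blocking, which mirrors how the client function block keeps the PLC cycle independent of the model's execution time:

```python
import queue
import threading
import time

class InferenceServer:
    """Hypothetical stand-in for the Machine Learning Server process."""

    def __init__(self):
        self._requests = queue.Queue()
        self._results = {}
        self._lock = threading.Lock()
        threading.Thread(target=self._run, daemon=True).start()

    def submit(self, job_id, data):
        """Non-blocking: enqueue a request and return immediately."""
        self._requests.put((job_id, data))

    def poll(self, job_id):
        """Return the result if ready, otherwise None (still busy)."""
        with self._lock:
            return self._results.pop(job_id, None)

    def _run(self):
        # Worker loop: executes "models" outside the cyclic task.
        while True:
            job_id, data = self._requests.get()
            result = sum(data)  # placeholder for the actual AI model
            with self._lock:
                self._results[job_id] = result

server = InferenceServer()
server.submit(1, [1.0, 2.0, 3.0])

# Cyclic task: each "cycle" checks for the result without blocking.
result = None
while result is None:
    result = server.poll(1)
    time.sleep(0.001)  # one cycle of other work
print(result)  # prints 6.0
```

In the real product the cyclic side is a PLC function block and the server is a separate user-mode process, but the request/poll pattern is the same.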
The TwinCAT Machine Learning Server loads AI models that are provided as ONNX files. All relevant AI frameworks, such as TensorFlow, PyTorch, and scikit-learn, support this interoperability standard, which decouples the training environment from the execution environment: any training environment can be used to create AI models, which can then be executed with the TwinCAT Machine Learning Server.
Target groups and use cases
The Machine Learning Server is aimed at the following use cases, among others:
- Use of computationally intensive AI models where the expected reduction in computing time from GPU acceleration outweighs the expected fluctuations in computing time (jitter). In particular, vision AI models for image classification, object detection, or segmentation should be mentioned here.
- Use of AI models in low-priority tasks that are only loosely coupled with the deterministic PLC program.
- AI models whose results are not used by the control system itself but are communicated to systems above the control level, for example process analysis models that inform the machine operator, or predictive maintenance models that inform the service personnel.
- AI models whose results are not required by the controller at a specific point in time, for example AI models that provide optimized or adapted process parameters.
Differentiation and comparison with similar TwinCAT products
In addition to the TwinCAT Machine Learning Server, there are other TwinCAT products with similar functionality, namely the execution of AI models:
- TF3800 TwinCAT Machine Learning Inference Engine
- TF3810 TwinCAT Neural Network Inference Engine
- TF7800 TwinCAT Vision Machine Learning
- TF7810 TwinCAT Vision Neural Network
The main differences between these products and the TwinCAT Machine Learning Server are summarized in the following table.
| Deterministic AI: TF3800, TF3810, TF7810 | Accelerated AI: TF3820 |
|---|---|
| Deterministic AI execution in the TwinCAT process | Near-real-time execution in a separate process |
| Execution on standard x64 CPUs | Hardware acceleration on NVIDIA GPUs possible |
| Supports selected AI models and operators | Supports the current ONNX opset version and thus current and diverse AI models |
| Standard PLC function block for easy integration in TwinCAT | Standard PLC function block for easy integration in TwinCAT |
| Interoperability through ONNX support | Interoperability through ONNX support |
| License bundle: TF3810 includes TF3800, TF7800 and TF7810 | Can also be used as a server in a network with several clients |