Overview

Introduction

The idea behind machine learning is to learn a generalized correlation between inputs and outputs on the basis of example data. Accordingly, a certain amount of training data is required, on the basis of which a model is trained. During training, the parameters of the model are automatically adapted to the training data by means of a mathematical process. In machine learning, a large number of different models is available to the user; selecting and designing a suitable model is part of the engineering process. Different types of models fulfill different tasks, the most important subdivision being between classification and regression.
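The automatic adaptation of model parameters to training data can be illustrated with a minimal sketch (made-up example data; NumPy's least-squares polynomial fit stands in for the training process):

```python
import numpy as np

# Example data: inputs x and noisy outputs y with the underlying relation y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.0, 8.9])

# "Training": least squares automatically adapts the model parameters
# (slope and intercept) to the example data
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)  # close to the underlying values 2 and 1
```

The fitted parameters are not programmed explicitly; they are derived from the example data, which is the essential difference to a conventionally engineered function.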

Classification: The model receives an input (an image, one or more vectors, etc.) and assigns it to a class. The output is correspondingly a categorical variable. These classes could be, for example, good part and bad part. A model can also distinguish between several classes, for example quality classes A, B, C and D.
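A minimal classification sketch, assuming hypothetical quality features and scikit-learn as the training tool (feature names and values are invented for illustration):

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical quality features per part: [diameter deviation, surface roughness]
X_train = [[0.01, 0.2], [0.02, 0.1], [0.30, 0.9], [0.25, 0.8]]
y_train = ["good", "good", "bad", "bad"]  # categorical labels

# Train a classifier on the labeled example data
clf = DecisionTreeClassifier().fit(X_train, y_train)

# The output for new inputs is a categorical variable (a class label)
print(clf.predict([[0.03, 0.15], [0.28, 0.85]]))  # → ['good' 'bad']
```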

Regression: The model receives an input and generates a continuous output. Not only are learned inputs assigned to learned outputs (as with a lookup table); the model is also able to interpolate or extrapolate to inputs that were not part of the training data, provided it generalizes well. A functional correlation is learned.
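A minimal regression sketch with scikit-learn (made-up data), showing that the model interpolates to an input it has never seen, unlike a lookup table:

```python
from sklearn.linear_model import LinearRegression

# Example data for a functional correlation, e.g. sensor value -> physical quantity
X_train = [[10.0], [20.0], [30.0], [40.0]]
y_train = [1.0, 2.0, 3.0, 4.0]

reg = LinearRegression().fit(X_train, y_train)

# 25.0 was never part of the training data; the model interpolates
print(reg.predict([[25.0]]))  # → [2.5]
```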

Once a model has been trained, it can be used for the learned task; this use of the trained model is referred to as inference.

TC3 Machine Learning Runtime

With the products TF3800 TwinCAT 3 Machine Learning Inference Engine and TF3810 TwinCAT 3 Neural Network Inference Engine, Beckhoff supplies components for the inference of models in the TwinCAT XAR. Both products share a common software basis, which is referred to in the following as the Machine Learning Runtime (ML Runtime for short).

The ML Runtime is a module (TcCOM) integrated in TwinCAT 3 and executed in the TwinCAT XAR. It is thus possible to access the model interface (model inputs and model outputs) and to execute the model loaded in the ML Runtime in hard real-time.

The TF3800 and TF3810 are licensed separately; which license is required depends on the ML model to be loaded. In principle, TF3800 is required for loading and executing classic ML models, while the TF3810 license is required for loading neural networks. The TF3810 license includes TF3800.

Additional information on supported models and required licenses.

Workflow

The process of machine learning and its integration into TwinCAT 3 essentially consists of three phases:

  1. The collection of data
  2. The training of a model
  3. The deployment in the TwinCAT XAR

A large number of TwinCAT products is available for collecting data from the controller, e.g. TwinCAT Scope, TwinCAT Database Server, TwinCAT Analytics Logger and TwinCAT IoT.

ML models can be trained in a large number of software tools; they are usually created in programming languages such as Python or R. Various open-source and free frameworks are suitable for creating ML models, such as PyTorch, Keras and Scikit-learn. Trained models can be exported from these tools in a standardized format as an ONNX file. The ONNX file is a standardized description of the trained ML model. For use in TwinCAT, this file is first converted into a format conditioned for TwinCAT (XML or BML file).


TwinCAT offers two methods for deploying the model:

  1. The library TC3_MLL is provided for use in the PLC environment. ML models can be loaded asynchronously via a method call and subsequently executed cyclically in the PLC program by calling a further method. Additional information on the PLC API
  2. A simple way to use machine learning without programming effort: a TcCOM object that can be inserted and configured in the TwinCAT object tree in the development environment. On starting the system, the TcCOM loads the configured model and executes it in the assigned cycle time.
    Additional information on the ML TcCOM

The picture below illustrates the deep integration of the Machine Learning Runtime in the TwinCAT XAR. Like all TwinCAT runtime objects, the module is a TcCOM and is accordingly anchored deeply in the hard real-time environment.

Overview 1:

Integration of machine learning into TwinCAT Analytics

The products Machine Learning Inference Engine and Neural Network Inference Engine can also be integrated into the TwinCAT Analytics workflow. Refer to the TwinCAT Analytics documentation for detailed information.

Overview 2: