TcCOM
Using a TcCOM object for the inference of an ML model in TwinCAT is a straightforward way to execute trained models in the TwinCAT XAR. The entire procedure is documented in the quick start; the steps described there are repeated here and supplemented with further details.
Incorporation of a model by means of TcCOM object
This section deals with the execution of machine learning models by means of a prepared TcCOM object. This interface offers a simple and clear way of loading models, executing them in real time, and creating links to your own application via the process image.
Generate a prepared TcCOM object TcMachineLearningModelCycal
- 1. To do this, right-click the node TcCOM Objects and select Add New Item…

Under Tasks, create a new TwinCAT task and assign its task context to the newly generated instance of TcMachineLearningModelCycal.
- 2. To do this, open the Context tab of the generated object.
- 3. Select your generated task in the drop-down menu.
- The instance of TcMachineLearningModelCycal has a tab called ML model configuration, where you can load the description file of the ML algorithm (XML or BML). The available data types for the inputs and outputs of the selected model are then displayed.
- The file does not have to be on the target system. It can be selected from the development system and is then loaded to the target system on activating the configuration.
- A distinction is made between preferred and supported data types. The only difference is that if a non-preferred type is selected, a data type conversion takes place at runtime, which may cause a slight loss of performance.
- The data types for inputs and outputs are initially set to the preferred data types automatically. The process image of the selected model is created by clicking Generate IO. Accordingly, loading KerasMLPExample_cos.xml yields a process image with one input of type REAL and one output of type REAL.
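The precision side of such a runtime conversion can be illustrated outside TwinCAT. The sketch below assumes that the preferred type corresponds to a 32-bit IEEE float (TwinCAT REAL) and the non-preferred type to a 64-bit double (LREAL), per IEC 61131-3; it is only an illustration, not the actual conversion performed by the TcCOM object.

```python
import struct

# If a non-preferred type is selected, the TcCOM object must convert
# each value on every cycle. Here we mimic one such conversion:
# a 64-bit double (LREAL) is round-tripped through a 32-bit REAL.
lreal_value = 0.1  # stored as a 64-bit double in Python
real_value = struct.unpack("f", struct.pack("f", lreal_value))[0]

# The round trip through 32 bits changes the value slightly,
# in addition to the per-cycle cost of the conversion itself.
print(real_value == lreal_value)
print(real_value)
```

Besides the (small) runtime cost, the sketch shows why sticking to the preferred types also avoids introducing an extra rounding step.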
Activating the project on the target
- 1. Before activating the project on a target, you must manually select the TF3810 license on the Manage Licenses tab under System > License in the project tree, since the model to be loaded is a multi-layer perceptron (MLP).
- 2. Activate the configuration.
- You can now test the model by manually writing values to the input.
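When testing manually, it helps to know what output to expect. Assuming, as the file name KerasMLPExample_cos suggests, that the example model approximates the cosine function, reference values for the REAL input can be computed as in this sketch (the tolerance in mind is an assumption about the model's accuracy, not a documented figure):

```python
import math

# Reference values for manually testing the loaded model:
# write x to the REAL input and compare the REAL output
# against cos(x). Assumes the example model approximates cosine.
def expected_output(x: float) -> float:
    return math.cos(x)

for x in (0.0, 0.5, math.pi):
    print(f"input {x:+.4f} -> expected output near {expected_output(x):+.4f}")
```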
If the process image is large, i.e. the model has many inputs or outputs, it can be helpful not to generate each input and output as an individual PDO, but to define them as array types instead. To do this, check the Generate IO as array checkbox and click Generate IO.
Models with several engines (cf. the XML tag parameters) can be loaded, but only EngineId = 0 is used. Switching between EngineIds is not supported by the TcCOM API.
The ML description file used is automatically transferred from the engineering system to the runtime system when the configuration is activated. File management details are described in the section File management of the ML description files.