Updating the AI model

AI models can be exchanged at runtime. Two cases are described below, together with the steps to follow in each.

Case 1: Model update without changing the model interface

Definition of the case:

In this case, the input and output interface of the AI model remains identical when the model is replaced. For this to hold, the input and output nodes must remain unchanged in their order (if there are several nodes) and in their shape.
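As an illustration, assume a model with a single input node of shape [1, 3] and a single output node of shape [1, 2]; these shapes and the variable names are chosen purely as an example. The PLC-side buffers could then be declared as follows, and a replacement model for Case 1 must keep exactly these node shapes and their order:

PROGRAM MAIN
VAR
    // Illustrative buffers for a model with one input node of shape [1, 3]
    // and one output node of shape [1, 2] (example values only).
    fInputData  : ARRAY [0..2] OF REAL;   // matches the input node shape [1, 3]
    fOutputData : ARRAY [0..1] OF REAL;   // matches the output node shape [1, 2]
END_VAR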

It is recommended not to change the model architecture when updating the model, but instead to carry out transfer learning/fine-tuning on the existing AI model. The interfaces to the model then remain unchanged and the runtime behavior stays the same.

A model update can be carried out without a compile process and without a TwinCAT stop.

Steps to update the model:

The model is exchanged by calling the Deconfigure method of the function block used for inference, referencing the new model, and then calling the Configure method again. During the time from the Deconfigure call until the Configure method has completed, no inference calls can be sent to the server with this function block.
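The following Structured Text sketch outlines this sequence. The function block type FB_PredictionClient, the assumption that Deconfigure and Configure return TRUE once the respective operation has completed, and all instance and variable names are illustrative assumptions and must be adapted to the client function block actually used.

PROGRAM MAIN
VAR
    fbPredict           : FB_PredictionClient; // placeholder type: use the actual server client function block
    nUpdateState        : INT := 0;
    bTriggerModelUpdate : BOOL;
END_VAR

CASE nUpdateState OF
    0:  // Normal operation: inference calls are allowed.
        IF bTriggerModelUpdate THEN
            nUpdateState := 10;
        END_IF
    10: // Release the current model; no inference calls from here on.
        IF fbPredict.Deconfigure() THEN
            nUpdateState := 20;
        END_IF
    20: // Reference the new model file here. How the model path is passed
        // depends on the function block used and is therefore only indicated
        // as a comment, e.g. update the configured model path to 'NewModel.onnx'.
        nUpdateState := 30;
    30: // Load the new model; inference calls are possible again afterwards.
        IF fbPredict.Configure() THEN
            bTriggerModelUpdate := FALSE;
            nUpdateState        := 0;
        END_IF
END_CASE

Because the new model keeps the same input and output interface, the buffers declared for the old model can be reused unchanged.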

Case 2: Model update with model interface change

Definition of the case:

In this case, the input and output interface of the AI model changes when the model is exchanged. As a result, the input and output data types in the PLC no longer match the model. As a rule, several places in the source code have to be adapted.
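Continuing the illustrative shapes from the Case 1 example: if, for instance, the output node of the new model changes from shape [1, 2] to [1, 4], the PLC-side declaration and every place in the code that reads the output buffer must be adapted (names and shapes again purely illustrative):

PROGRAM MAIN
VAR
    // Adaptation when the output node shape changes, e.g. from [1, 2] to [1, 4];
    // the array bounds and all code accessing the buffer must follow.
    fInputData  : ARRAY [0..2] OF REAL;   // unchanged input node shape [1, 3]
    fOutputData : ARRAY [0..3] OF REAL;   // adapted to the new output shape [1, 4]
END_VAR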

It is recommended to revise the source code in TwinCAT XAE, retest the project and only then load it onto the machine. Please note that the runtime behavior of the AI model may also have changed.

In this case, a model update therefore involves a TwinCAT stop, since the new, modified TwinCAT project has to be loaded.