Beckhoff ML XML

Introduction to Beckhoff ML XML

The Beckhoff-specific XML format for the representation of trained machine learning models forms a core component of the TwinCAT Machine Learning Inference Engine and the TwinCAT Neural Network Inference Engine. The file is created from an ONNX file using the TC3 Machine Learning Model Manager, the Machine Learning Toolbox, or the provided Python package.

In contrast to ONNX, the XML-based description file can map TwinCAT-specific properties. On the one hand, the XML enables an extended functional scope of the TwinCAT Machine Learning product, for example the concept of the Multi-engines. On the other hand, it supports seamless cooperation between the creator and the user of the description file, for example via Input and output transformations and Custom Attributes.

The essential areas of the Beckhoff ML XML are described below to help you understand the functions it provides.

XML Tag <MachineLearningModel>

Obligatory tag with two obligatory attributes. The tag is generated automatically and must not be edited manually.

Sample:

<MachineLearningModel modelName="Support_Vector_Machine" defaultEngine="svm_fp64_engine">

The attribute modelName can be read in the PLC via the method GetModelName. The model type to be loaded is identified by the model name; for example, the attribute can take the values support_vector_machine or mlp_neural_network.

The attribute modelName in this tag should not be confused with the attribute str_modelName from <ModelDescription>.
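Outside the PLC, the modelName attribute can also be read directly from the file, for instance in a build or deployment script. A minimal sketch using Python's standard library; the snippet is reduced to the single sample tag from above:

```python
import xml.etree.ElementTree as ET

# The <MachineLearningModel> tag from the sample above, reduced to one element.
xml_text = '<MachineLearningModel modelName="Support_Vector_Machine" defaultEngine="svm_fp64_engine"/>'

root = ET.fromstring(xml_text)
model_name = root.get("modelName")        # what GetModelName returns in the PLC
default_engine = root.get("defaultEngine")
```

In a real description file the tag is the document root, so the same two lines of parsing apply to the full XML as well.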

XML Tag <CustomAttributes>

The tag CustomAttributes is optional and may be freely used by the user. The depth of the tree and the number of attributes are not limited. Creation can take place via the TC3 Machine Learning Model Manager. The XML can also be manually edited in this area.

Attributes can be read in the PLC via the methods GetCustomAttribute_array, GetCustomAttribute_fp64, GetCustomAttribute_int64 and GetCustomAttribute_str. In the XML, the type is indicated by the prefixes str_, int64_, fp64_ and so on.

Sample:

<CustomAttributes>
   <Model str_Name="TempEstimator" str_Version="1.2.11.0" />
   <MetaInfo arrfp64_InputRange="0.10000000000000001,0.90000000000000002" int64_TheAnswer="42" />
</CustomAttributes>

Here, a model with the name "TempEstimator" in version 1.2.11.0 is described. In addition, an array and an integer value are provided as further information. Sample code for reading the CustomAttributes can be downloaded from the Samples section.
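The typed getter methods can be mirrored outside the PLC by decoding the attribute names according to their type prefixes. A minimal sketch; the XML snippet is the sample from above, and the decode function is an illustrative helper, not part of any Beckhoff API:

```python
import xml.etree.ElementTree as ET

xml_text = """
<CustomAttributes>
   <Model str_Name="TempEstimator" str_Version="1.2.11.0" />
   <MetaInfo arrfp64_InputRange="0.10000000000000001,0.90000000000000002" int64_TheAnswer="42" />
</CustomAttributes>
"""

def decode(name, value):
    """Map an XML attribute to a Python value based on its type prefix."""
    if name.startswith("arrfp64_"):
        return name[len("arrfp64_"):], [float(v) for v in value.split(",")]
    if name.startswith("fp64_"):
        return name[len("fp64_"):], float(value)
    if name.startswith("int64_"):
        return name[len("int64_"):], int(value)
    if name.startswith("str_"):
        return name[len("str_"):], value
    return name, value  # unknown prefix: keep the raw string

attributes = {}
for node in ET.fromstring(xml_text):
    for raw_name, raw_value in node.attrib.items():
        key, val = decode(raw_name, raw_value)
        attributes[key] = val
```

After running the sketch, `attributes["TheAnswer"]` is the integer 42 and `attributes["InputRange"]` is a list of two floats, matching what the typed PLC getters would deliver.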

XML Tag <AuxiliarySpecifications>

The AuxiliarySpecifications area is optional and is subdivided into the children <PTI>, <ModelDescription> and <IOModification>.

Sample:

<AuxiliarySpecifications>
   <PTI str_producer="Beckhoff MLlib Keras Exporter" str_producerVersion="3.0.200525.0" str_requiredVersion="3.0.200517.0"/>
   <ModelDescription str_modelVersion="2.3.1.0" str_modelName="CurrentPreControlAxis42" str_modelDesc="This is the most awesome model to control Axis42" str_modelAuthor="Max" str_modelTags="awesome,ingenious,astounding" />
   <IOModification>
      <OutputTransformation str_type="SCALED_OFFSET" fp64_offsets="0.48288949404162623" fp64_scalings="1.4183887951105305"/>
   </IOModification>
</AuxiliarySpecifications>

<PTI>

PTI stands for "Product Version and Target Version Information". The tool with which the XML was created, and the version of that tool at the time of XML generation, are specified here.

A minimum version of the ML Runtime executing the model can also be specified via the attribute str_requiredVersion. If the attribute is not set, the check is regarded as passed. If the attribute is set, the check is regarded as passed if the ML Runtime version is higher than or equal to the required version. If the check fails, i.e. if the version of the ML Runtime used is lower than the required version, a warning is issued when the Configure method is executed.
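The check described above can be sketched as a plain version comparison. This is an illustration, not the actual runtime implementation; it assumes the version strings are dot-separated integers:

```python
def version_check_passes(runtime_version, required_version=None):
    """Mimic the str_requiredVersion check: pass if the attribute is absent,
    or if the runtime version is >= the required version."""
    if required_version is None:      # attribute not set -> regarded as passed
        return True
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(runtime_version) >= to_tuple(required_version)

print(version_check_passes("3.0.200525.0", "3.0.200517.0"))  # True
print(version_check_passes("3.0.200510.0", "3.0.200517.0"))  # False -> warning on Configure
```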

<IOModification>

If inputs or outputs of the learned model are scaled in the training environment, the scaling parameters used can be integrated directly in the XML file so that TwinCAT automatically performs the scaling in the ML Runtime.

The scaling takes place by means of y = x * Scaling + Offset.
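With the SCALED_OFFSET values from the sample above, the transformation and its inverse look as follows in plain Python. The ML Runtime performs this automatically when <IOModification> is present; this sketch only illustrates the arithmetic:

```python
scaling = 1.4183887951105305   # fp64_scalings from the sample
offset = 0.48288949404162623   # fp64_offsets from the sample

def transform(x):
    # Scaling as performed by the ML Runtime: y = x * Scaling + Offset
    return x * scaling + offset

def inverse_transform(y):
    # Undo the scaling, e.g. to compare against values from the training environment.
    return (y - offset) / scaling

x = 0.25
assert abs(inverse_transform(transform(x)) - x) < 1e-12
```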

<ModelDescription>

This optional tag carries descriptive attributes of the model, such as str_modelName, str_modelVersion, str_modelDesc, str_modelAuthor and str_modelTags (see the sample above).

XML Tag <Configuration>

The obligatory area Configuration describes the structure of the loaded model.

Example - SVM

<Configuration str_operationType="SVM_TYPE_NU_REGRESSION" fp64_cost="0.1" fp64_nu="0.3" str_kernelFunction="KERNEL_FN_RBF" fp64_gamma="1.0" int64_numInputAttributes="1"/>

Example - MLP

<Configuration int_numInputNeurons="1" int_numLayers="2" bool_usesBias="true">
   <MlpLayer1 int_numNeurons="3" str_activationFunction="ACT_FN_TANH"/>
   <MlpLayer2 int_numNeurons="1" str_activationFunction="ACT_FN_IDENTITY"/>
</Configuration>

A configuration exists once only and is generated automatically.

XML Tag <Parameters>

The obligatory area Parameters instantiates the model structure described in <Configuration>. The learned parameters of the model are stored here, e.g. the weights of the neurons.

In the standard case, i.e. when a single trained model is described in the XML, the <Parameters> tag occurs only once.

<Parameters str_engine="mlp_fp32_engine" int_numLayers="2" bool_usesBias="true">

Several models with an identical <Configuration> can be merged via the Machine Learning Model Manager, so that all of them are described in a single XML. The parameter sets are then distinguished by engine, which is specified as an attribute of each <Parameters> tag.

Sample:

<Parameters str_engine="mlp_fp32_engine::merge0" int64_numLayers="2" bool_usesBias="true">
   …
</Parameters>
<Parameters str_engine="mlp_fp32_engine::merge1" int64_numLayers="2" bool_usesBias="true">
   …
</Parameters>
<IODistributor str_distributor="multi_engine_io_distributor::mlp_fp32_engine-merge" str_engine_type="mlp_fp32_engine" int64_engine_count="2">
   <Engine0 str_engine_name="merge0" str_reference="sin_engine" />
   <Engine1 str_engine_name="merge1" str_reference="cos_engine" />
</IODistributor>

Two MLPs with an identical Configuration were merged here. The first engine bears the ID 0 and the internal name "mlp_fp32_engine::merge0" and can be addressed by the user via the reference "sin_engine". The second engine bears the ID 1 and the internal name "mlp_fp32_engine::merge1" and the reference "cos_engine".

The ID of the engine is incremented sequentially by one, starting from zero. The reference is a string that can be specified in the Model Manager during the merge.

If several engines are merged in an XML, all engines are loaded in the ML Runtime and are available for inference. The ID of the engine to be used is passed when calling the Predict method. Alternatively, the reference of the engine can be passed via the PredictRef method. A GetEngineIdFromRef method is also available to determine the associated ID from a reference. Switching between the engines is possible without latency.
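The mapping that GetEngineIdFromRef resolves is fully determined by the <IODistributor> tag: each user-defined reference corresponds to a sequential engine ID, starting at zero. A minimal sketch using the sample from above; the variable names are illustrative:

```python
import xml.etree.ElementTree as ET

# The <IODistributor> tag from the multi-engine sample above.
xml_text = """
<IODistributor str_distributor="multi_engine_io_distributor::mlp_fp32_engine-merge" str_engine_type="mlp_fp32_engine" int64_engine_count="2">
   <Engine0 str_engine_name="merge0" str_reference="sin_engine" />
   <Engine1 str_engine_name="merge1" str_reference="cos_engine" />
</IODistributor>
"""

# Child elements appear in document order, so enumerate() yields the engine IDs.
id_from_ref = {}
for engine_id, node in enumerate(ET.fromstring(xml_text)):
    id_from_ref[node.get("str_reference")] = engine_id

print(id_from_ref)  # {'sin_engine': 0, 'cos_engine': 1}
```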

There is an example of the use of multi-engines in the PLC in the Samples area.