ONNX Support
Supported ONNX opset version
The TwinCAT Machine Learning Server, more precisely the TcMlServer service, supports ONNX Opset version 21. Backward compatibility with older ONNX Opset versions is generally provided by ONNX.
The Opset version used is normally listed in the ONNX file under imports. The image below, visualized with Netron, shows ONNX Opset version 18 as an example.
Restrictions on supported ONNX properties
No dynamic input or output shapes
Dynamic input or output shapes are not supported. Only the leading batchsize parameter may be a dynamic value (see next section).
Permitted are, for example:
float32[244, 244, 1]
float32[1, 3, 244, 244]
float32[5, 244, 244, 3]
Not permitted are:
float32[244, 244, ?]
float32[244, height, 3]
float32[?, 244, 244, ?]
float32[1, 244, 244, unknown]
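The shape rule above can be sketched as a small helper. The function name and the list representation (an integer per fixed dimension, None for a dynamic dimension such as ? or a named parameter) are illustrative only and not part of the TwinCAT API:

```python
def is_shape_supported(dims):
    """Illustrative check of the shape rule: every dimension must be
    a fixed positive integer, except that the leading (batchsize)
    dimension may be dynamic (represented here as None)."""
    # All dimensions after the first must be concrete integers.
    tail_fixed = all(isinstance(d, int) and d > 0 for d in dims[1:])
    # The leading dimension may be dynamic or fixed.
    leading_ok = dims[0] is None or (isinstance(dims[0], int) and dims[0] > 0)
    return tail_fixed and leading_ok

# Examples from the lists above; None marks a dynamic dimension.
print(is_shape_supported([244, 244, 1]))           # True (permitted)
print(is_shape_supported([None, 3, 244, 244]))     # True (dynamic batchsize only)
print(is_shape_supported([244, 244, None]))        # False (trailing dim dynamic)
print(is_shape_supported([None, 244, 244, None]))  # False (trailing dim dynamic)
```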
You can quickly make the shapes of ONNX nodes fixed, e.g. using the onnxruntime package in Python; see Make dynamic input shape fixed | onnxruntime.
For example, the shape
float32[-1, 3, ?, ?]
becomes, after running
python -m onnxruntime.tools.make_dynamic_shape_fixed --input_name x --input_shape 1,3,960,960 model.onnx model.fixed.onnx
the fixed shape
float32[1, 3, 960, 960]
Batchsize must be leading
The TwinCAT Machine Learning Server only supports models with the batchsize as the leading parameter of the input node. Unlike all other parameters, the batchsize parameter may be dynamic. Permitted are, for example:
float32[batch, 3, 244, 244]
float32[?, 3, 244, 244]
float32[unknown, 244, 244, 3]
The PredictBatched() method can only be used if the model contains a dynamic batchsize as the leading parameter.
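Whether a model qualifies can be read off the input shape as displayed by Netron. A minimal sketch, parsing such a shape string and reporting whether the leading dimension is dynamic (the function name is illustrative, not part of the TwinCAT API):

```python
def has_dynamic_batchsize(shape_str):
    """Parse a Netron-style shape string such as
    'float32[batch, 3, 244, 244]' and report whether the leading
    dimension is dynamic (a name, '?', 'unknown', or -1)."""
    dims = shape_str[shape_str.index("[") + 1 : shape_str.index("]")].split(",")
    leading = dims[0].strip()
    # A dimension is fixed only if it is a positive integer literal.
    return not (leading.isdigit() and int(leading) > 0)

print(has_dynamic_batchsize("float32[batch, 3, 244, 244]"))  # True
print(has_dynamic_batchsize("float32[?, 3, 244, 244]"))      # True
print(has_dynamic_batchsize("float32[1, 3, 244, 244]"))      # False
```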
ONNX file size
Currently the size limit for a loadable ONNX file is 2 GB. Contact Beckhoff (see Support and Service) if this limit is a challenge for your application.
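A quick way to check the limit before deployment is to compare the file size on disk against 2 GB. A minimal sketch (the function name is illustrative):

```python
import os

TWO_GB = 2 * 1024**3  # current limit for a loadable ONNX file

def fits_size_limit(path):
    """Report whether an ONNX file is below the 2 GB load limit."""
    return os.path.getsize(path) < TWO_GB
```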