Neural Network Compatibility Check
This tool checks the compatibility of ONNX models with the functions of the product "TF7810 | TwinCAT 3 Vision Neural Network". The "TF7810_NeuralNetworkSupplements.zip" package can be downloaded directly from the Beckhoff website.
The package contains the executable file "OnnxCompatibilityCheck.exe", the PowerShell script "CheckModelCompatibility.ps1" for simplified use, and a folder "ONNX_Samples" with sample models.
Execution of the PowerShell script
After the script starts, version information is displayed first: the version of the application itself, followed by the TwinCAT 3 Vision library versions to which the compatibility check refers. You are then asked to specify a folder or a model path. If a folder containing several ONNX models is specified, they are tested in alphabetical order. A single model can also be specified explicitly, e.g. "C:\ONNX_Samples\LemonModel.onnx".
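The path handling described above can be sketched as follows. This is an illustrative Python sketch of the file-collection logic only, not the actual implementation of the tool; the function name is chosen here for illustration:

```python
from pathlib import Path

def collect_onnx_models(path_str: str) -> list[Path]:
    """Return the ONNX models to test: either the single file given,
    or all *.onnx files in the given folder in alphabetical order."""
    path = Path(path_str)
    if path.is_file():
        return [path]
    # Sort case-insensitively so the order matches a typical directory listing.
    return sorted(path.glob("*.onnx"), key=lambda p: p.name.lower())
```

Passing a folder thus tests every contained model in a predictable order, while passing a file path tests exactly that model.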
A successful execution, e.g. with the Lemon Sample model, looks as follows:
At the end, the execution is paused so that you can view the output.
Options of the PowerShell script
When you open the script for editing, you will find the parameters $modelPath and $inputShape in lines 2 and 3. The first parameter, $modelPath, is empty by default, so you are prompted to enter the path. Alternatively, a folder or model path can be stored in the script; in that case no prompt appears and the stored path is used directly.
By default, the second, optional parameter $inputShape is set to the word "skip", so the corresponding prompt does not appear and the application runs without this option. A fixed value can also be entered here, as shown in the comment with "1 3 224 224". Alternatively, the word "skip" can be deleted so that you are prompted for the value when the script runs. This prompt for the "Model Input Shape" can also be skipped by pressing Enter when executing the script.
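The decision logic for the optional parameter can be sketched as follows (in Python for illustration; the actual script is PowerShell, and its internals are not reproduced here):

```python
def resolve_input_shape(stored_value, prompt):
    """Mimic the described handling of the optional $inputShape parameter.

    stored_value: the value written in the script -- "skip", a fixed value
    such as "1 3 224 224", or "" (empty) to trigger a prompt.
    prompt: a callable used to ask the user; only called when needed.
    """
    if stored_value == "skip":
        return None                   # run the check without the option
    value = stored_value or prompt("Model Input Shape (Enter to skip): ")
    return value.strip() or None      # pressing Enter also skips the option
```

So "skip" suppresses the prompt entirely, a stored value is used directly, and an empty parameter asks the user, who may still skip by pressing Enter.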
Using the "OnnxCompatibilityCheck.exe"
The "OnnxCompatibilityCheck.exe" can be executed directly without a Power Shell script. In the Windows command prompt, the successful test of the Lemon Sample model looks like this, for example:
If only the name of the application is entered, or if -?, -h or -help is added, and the input is confirmed with Enter, the following information and notes on using the application are displayed.
Results
The following describes the results that can be output in different scenarios:
- At the start of execution, the system checks whether the path is valid and the file is in ONNX format. If this is not the case, execution is aborted and a corresponding message is displayed.
- If the ONNX model was loaded successfully, this is reported first, together with the file name.
- This is followed by information on the input and output layers of the model, which are read from the ONNX file:
- For the input and output layers, the shape of the tensor is shown in square brackets, listing the number of elements in each dimension.
- If the input layer has more than two dimensions, the ETcVnElementType for the conversion (F_VN_ConvertDataLayout) from a 2D image to the respective input format of the model is also output. As a prerequisite for correct detection of the ETcVnElementType, it is assumed that N (the number of samples, or batch size) is 1 and that the number of channels is smaller than the width and height.
- For the input layer of the Lemon Sample model, for example, this looks like this: "Model input shape: [1, 3, 224, 224], TCVN_DL_4D_NCHW".
- If there is more than one input layer, their number and the shape of each tensor are displayed. Since the execution of models with more than one input layer is not supported, the following message appears: "Not supported: Only models with a single input can be executed".
- For the output layers, the number is output first, followed by a list of the layers with their names and the shape of the tensor.
- For the output layer of the Lemon Sample model, for example, it looks like this:
"Model outputs: 1
Layer: '367' [1, 3]"
- Neural networks can either be designed for a fixed (static) input format or be able to process several different input formats (dynamic). For most models, the input layer is fixed and the values of the tensor are stored in the ONNX file. These values can then be read, displayed and used directly for testing.
- If a model supports several input/image sizes, the dimension information, such as the width and height of the image, is missing from the ONNX file. This is referred to as a dynamic shape of the input layer. If such a shape is recognized, the following message is displayed:
"Model input shape: dynamic
To test the execution, please provide the optional [ModelInputshape]
(e.g. OnnxCompatibilityCheck.exe DynamicShape.onnx 1 3 224 224)."
- To test a model with a dynamic input layer, the values for the "Model Input Shape" can be passed when the application is called. A sample call with these values is included in the message above and in the help output.
- There are models in which the input layer is fixed in the ONNX file, but which can also be executed with other values. If you have this information or know the other values, you can likewise call the application with the additional "Model Input Shape" specification.
- The information on the supported input formats can usually be found in the documentation of the model provider, the source of the model or in the associated research article. If this information is not explicitly specified, it can also be derived by examining the model architecture or by using tools to inspect the ONNX file.
- If a model cannot be executed, the tool tries to output a possible cause in addition to the basic information. If the cause is not clear, no additional information is displayed. Possible causes are, for example, unknown or unsupported operators or data types, or a newer ONNX "opset number". If an operator is named as the cause, this can also be a follow-on problem; therefore, the operators of the preceding layers should also be analyzed.
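The shape evaluation described above (layout detection and the dynamic-shape case) can be illustrated with a small Python sketch. It only mirrors the documented heuristic (N = 1, channel count smaller than width and height); the exact logic of "OnnxCompatibilityCheck.exe" is not reproduced, and the name TCVN_DL_4D_NHWC for a channels-last layout is an assumption here — only TCVN_DL_4D_NCHW is taken from the Lemon Sample output above:

```python
def detect_4d_layout(shape):
    """Guess the data layout of a 4D input tensor from its shape.

    Dynamic dimensions, which are missing in the ONNX file, are passed
    as None. Heuristic per the documentation: batch size N is assumed
    to be 1, and the channel count is smaller than width and height.
    """
    if any(dim is None for dim in shape):
        return "dynamic"          # shape must be supplied when calling the tool
    if len(shape) != 4 or shape[0] != 1:
        return "unknown"
    _, d1, d2, d3 = shape
    if d1 < d2 and d1 < d3:
        return "TCVN_DL_4D_NCHW"  # channels-first, as in the Lemon Sample model
    if d3 < d1 and d3 < d2:
        return "TCVN_DL_4D_NHWC"  # assumed name for the channels-last case
    return "unknown"
```

For the Lemon Sample input shape [1, 3, 224, 224] this yields TCVN_DL_4D_NCHW, matching the output shown above, while a shape with a missing dimension is classified as dynamic.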