AI-based image processing
This sample demonstrates how to:
- Use TwinCAT Vision to load image data from the local hard disk and make it available in the PLC.
- Pre-process images with the TwinCAT Vision library.
- Use FB_MlSvrPrediction to start a session on the locally installed TwinCAT Machine Learning Server and load an AI model (classification model).
- Execute the inference on the TwinCAT Machine Learning Server.
- Continue to use the result in the PLC.
Download and overview of the files
You can download the project here: AI_based_Vision
- The ZIP contains a tnzip, which you can open in TwinCAT XAE via File > Open > Open Solution from Archive....
- The models folder contains the ONNX file as well as the already created JSON and PlcOpenXml files.
- The dataset folder contains sample images that are to be processed.
Requirements
Install the following workloads:
- TwinCAT Standard
- TF3820 | TwinCAT Machine Learning Server
- TF3830 | TwinCAT Machine Learning Server Client
- TF7xxx | TwinCAT 3 Vision
Setting up the project
- Open the tnzip and save your project.
- Set up the FileSource:
- In the System Manager, select the tree item Vision > FileSource > ImageSource.
- Add the images from the dataset folder.
- Set the Cycle Time of the ImageSource to 100 ms.
- Enter the full path to the model JSON in line 9 of MAIN. The ONNX file must be located in the same folder.
- Make sure that all required software licenses are available, at least as 7-day trial licenses:
- TC1200 TwinCAT PLC
- TF3820 TwinCAT Machine Learning Server
- TF7100 TwinCAT Vision Base
Optional: Adaptation of the TwinCAT Vision version
The sample was created with version 5.6.5.0. If you want to use a different version, manually select the version that is installed on your system and that you want to use: select the corresponding objects, right-click in the OTCID column to open the context menu, and choose “Reload TMI/TMC Description(s) with changed version”. See also the TwinCAT Vision documentation.
If version 5.6.5.0 is not installed and you do not manually change the version in use as described above, you will receive the error message “Error loading Repository driver”.
The latest PLC library version available on the system will be used automatically.

Executing the project
Start the application with Activate Configuration on your target system. If everything is set up correctly, after a short time eState should be set to eInference and the variable sLabel should display the result of the current inference.

If an error has occurred, eState is set to eError. You can then open the fbMlSvr instance and read out the error code nErrorCode. Use the table of error codes to narrow down the problem.
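For diagnostics, the error code can additionally be written to the TwinCAT log from the error state. The following is a minimal sketch; it assumes that the Tc2_System library is referenced (for ADSLOGDINT) and uses a hypothetical helper flag bErrorLogged:

// In E_State.eError: report the client's error code once to the ADS log
IF NOT bErrorLogged THEN
    ADSLOGDINT(msgCtrlMask := ADSLOG_MSGTYPE_ERROR OR ADSLOG_MSGTYPE_LOG,
               msgFmtStr   := 'FB_MlSvrPrediction failed, nErrorCode = %d',
               dintArg     := fbMlSvr.nErrorCode); // convert if nErrorCode is not declared as DINT
    bErrorLogged := TRUE;
END_IF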
Excerpts from the PLC program
Declaration
In the declaration, the parts relevant to the handling of the TwinCAT Machine Learning Server are the input and output data types of the AI model and the instance of the client to the Machine Learning Server.
stModelInput : ST_lemon_modelInput; //model input datatype, imported via PlcOpenXml
stModelOutput : ST_lemon_modelOutput; //model output datatype, imported via PlcOpenXml
fbMlSvr : FB_MlSvrPrediction(); // Instance of Client to TcMlServer
The data types have already been read into the TwinCAT project via the PlcOpenXml (see models folder). The description can be found in the DUTs folder.
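The generated data types are not reproduced here in full. As an orientation, they follow the pattern sketched below; the field names correspond to the code used in this sample, but the array dimensions and element types are placeholders and are determined by the actual ONNX model.

TYPE ST_lemon_modelInput :
STRUCT
    in_input1 : ARRAY [0..149, 0..149, 0..2] OF REAL; // image tensor; placeholder dimensions
END_STRUCT
END_TYPE

TYPE ST_lemon_modelOutput :
STRUCT
    out_367 : ARRAY [0..2] OF REAL; // one score per class; placeholder length
END_STRUCT
END_TYPE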
Configuration of the session
In this sample, the client opens a session on the TwinCAT Machine Learning Server in the E_State.eMlSvrConfiguration state. This specifies the system on which the server is accessible, which model is loaded in the session and on which hardware the model is to be executed.
fbMlSvr.stPredictionParameter.sMlModelFilePath   := 'C:\models\lemon_model.json'; // fullpath to model
fbMlSvr.stPredictionParameter.sMlSvrNetId        := '127.0.0.1.1.1';               // Server on local system
fbMlSvr.stPredictionParameter.eExecutionProvider := E_EXECUTIONPROVIDER.CPU;       // CPU execution
// Submit configuration request to the TcMlServer
// Provide a generous nTimeout, as the configuration can take a substantial amount of time
IF fbMlSvr.Configure(nTimeout := 1000, nPriority := 0) THEN
    IF fbMlSvr.nErrorCode <> 0 THEN
        // If nErrorCode -1 is encountered, increase nTimeout
        eState := E_State.eError;
    ELSE
        eState := E_State.eImageAcquisition;
    END_IF
END_IF
Calling the Configure() method sends the request to open a session to the server. The call is asynchronous to the PLC task and is acknowledged with TRUE when the session setup has been completed successfully.
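The states used in this sample (eMlSvrConfiguration, eImageAcquisition, eInference, eError) belong to a simple state machine in MAIN. The actual enumeration in the project may contain further states; as a rough sketch:

TYPE E_State :
(
    eMlSvrConfiguration, // open the session on the TcMlServer via Configure()
    eImageAcquisition,   // fetch the next image from the Vision FileSource
    eInference,          // send the Predict() request and evaluate the result
    eError               // error state; read fbMlSvr.nErrorCode for diagnostics
);
END_TYPE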
Executing the model
In the state E_State.eInference, the Predict call is sent to the Machine Learning Server. This call is also asynchronous to the PLC task. The method returns TRUE if the result is available.
In this sample, the image of type ITcVnImage is copied to the model input data type using the F_VN_ExportImage function before the inference call.
F_VN_ExportImage(ipTensorImage, ADR(stModelInput.in_input1), nImageSize, hrVision);

// Submission of the asynchronous inference request to the TcMlServer
IF fbMlSvr.Predict(pDataIn      := ADR(stModelInput),
                   nDataInSize  := SIZEOF(stModelInput),
                   pDataOut     := ADR(stModelOutput),
                   nDataOutSize := SIZEOF(stModelOutput),
                   nTimeout     := 100,
                   nPriority    := 0) THEN

    IF fbMlSvr.nErrorCode <> 0 AND NOT fbMlSvr.bConfigured THEN
        // If nErrorCode -1 is encountered, increase nTimeout
        eState := E_State.eError;
    ELSE
        // Postprocessing of the inference results
        F_Softmax(stModelOutput.out_367);
        nPredictedClass := F_ArgMax(stModelOutput.out_367);
    END_IF
END_IF
The result of the inference can be used after a successful error check.
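F_Softmax and F_ArgMax are helper functions used in the sample: F_Softmax normalizes the raw model outputs to class probabilities, and F_ArgMax returns the index of the largest score. A possible implementation of the argmax step, assuming a one-dimensional REAL array as input (the actual function in the sample may differ), could look like this:

FUNCTION F_ArgMax : DINT
VAR_IN_OUT
    aScores : ARRAY [*] OF REAL; // class scores, e.g. stModelOutput.out_367
END_VAR
VAR
    i     : DINT;
    nBest : DINT;
END_VAR

nBest := LOWER_BOUND(aScores, 1);
FOR i := LOWER_BOUND(aScores, 1) + 1 TO UPPER_BOUND(aScores, 1) DO
    IF aScores[i] > aScores[nBest] THEN
        nBest := i;
    END_IF
END_FOR
F_ArgMax := nBest;

The class index determined in this way can then be mapped to a readable text, for example via an array of class names, to fill the sLabel variable shown in the online view.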