AI-based image processing

This sample demonstrates how to:

- configure a session with the TwinCAT Machine Learning Server from the PLC,
- transfer image data acquired with TwinCAT Vision to an AI model,
- execute the model and evaluate the inference result in the PLC program.

Download and overview of the files

You can download the project here: AI_based_Vision

Requirements

Install the following workloads:

- TwinCAT Vision
- TwinCAT Machine Learning Server

Setting up the project

Optional: Adaptation of the TwinCAT Vision version

The example was created with TwinCAT Vision version 5.6.5.0. If you want to use a different version, you must select the installed version manually. To do this, first select the corresponding objects, then right-click the OTCID column to open the context menu and select “Reload TMI/TMC Description(s) with changed version”. See also the TwinCAT Vision documentation.

If you have not installed version 5.6.5.0 and do not change the version setting manually, you will receive the error message “Error loading Repository driver”.

The latest PLC library version available on the system will be used automatically.


Executing the project

Start the application with Activate Configuration on your target system. If everything is set up correctly, eState changes to eInference after a short time and the variable sLabel displays the result of the current inference.


If an error has occurred, eState is set to eError. In this case, open the fbMlSvr instance and read out the error code. Use the table of error codes to narrow down the problem.

Excerpts from the PLC program

Declaration

In the declaration, the main points concerning the handling of the TwinCAT Machine Learning Server are the input and output data types of the AI model and the instance of the client to the Machine Learning Server.

stModelInput  : ST_lemon_modelInput;     // Model input data type, imported via PlcOpenXml
stModelOutput : ST_lemon_modelOutput;    // Model output data type, imported via PlcOpenXml
fbMlSvr       : FB_MlSvrPrediction();    // Instance of the client to the TcMlServer

The data types have already been imported into the TwinCAT project via PlcOpenXml (see the models folder). Their definitions can be found in the DUTs folder.
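
The exact contents of these structures are determined by the exported model description. As a rough sketch only, with array sizes and element types assumed here for illustration, the imported data types could look like this:

TYPE ST_lemon_modelInput :
STRUCT
   in_input1 : ARRAY [0..150527] OF REAL;   // Model input, e.g. 224 x 224 x 3 values (size assumed)
END_STRUCT
END_TYPE

TYPE ST_lemon_modelOutput :
STRUCT
   out_367 : ARRAY [0..2] OF REAL;          // One score per class (number of classes assumed)
END_STRUCT
END_TYPE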

Configuration of the session

In this sample, the client opens a session on the TwinCAT Machine Learning Server in the E_State.eMlSvrConfiguration state. The configuration specifies on which system the server is accessible, which model is loaded in the session and on which hardware the model is executed.

fbMlSvr.stPredictionParameter.sMlModelFilePath   := 'C:\models\lemon_model.json';   // Full path to the model
fbMlSvr.stPredictionParameter.sMlSvrNetId        := '127.0.0.1.1.1';                // Server on the local system
fbMlSvr.stPredictionParameter.eExecutionProvider := E_EXECUTIONPROVIDER.CPU;        // CPU execution

// Submit configuration request to the TcMlServer
// Provide a generous nTimeout, as the configuration can take a substantial amount of time
IF fbMlSvr.Configure(nTimeout := 1000, nPriority := 0) THEN
   IF fbMlSvr.nErrorCode <> 0 THEN
      // If nErrorCode -1 is encountered, increase nTimeout
      eState := E_State.eError;
   ELSE
      eState := E_State.eImageAcquisition;
   END_IF
END_IF

Calling the Configure() method sends the request to open a session to the server. The call is asynchronous to the PLC task and is acknowledged with TRUE once the request has been processed; the subsequent check of nErrorCode indicates whether the session was established successfully.
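
Since the call is asynchronous, Configure() is called cyclically in the corresponding state until it is acknowledged. A condensed sketch of such a state machine, based on the states used in this sample (the actual sample program may be structured differently), could look like this:

CASE eState OF
   E_State.eMlSvrConfiguration:
      // Call Configure() here in every cycle until it is acknowledged with TRUE (see code above)
      ;
   E_State.eImageAcquisition:
      // Acquire an image with TwinCAT Vision and copy it into stModelInput
      ;
   E_State.eInference:
      // Send the asynchronous Predict() request (see below)
      ;
   E_State.eError:
      // Evaluate fbMlSvr.nErrorCode
      ;
END_CASE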

Executing the model

In the state E_State.eInference, the Predict call is sent to the Machine Learning Server. This call is also asynchronous to the PLC task; the method returns TRUE as soon as the result is available.

In this sample, the image of type ITcVnImage is copied to the model input data type using the F_VN_ExportImage function before the inference call.

F_VN_ExportImage(ipTensorImage, ADR(stModelInput.in_input1), nImageSize, hrVision);
// Submission of the asynchronous inference request to the TcMlServer
IF fbMlSvr.Predict(pDataIn      := ADR(stModelInput),
                   nDataInSize  := SIZEOF(stModelInput),
                   pDataOut     := ADR(stModelOutput),
                   nDataOutSize := SIZEOF(stModelOutput),
                   nTimeout     := 100,
                   nPriority    := 0) THEN

   IF fbMlSvr.nErrorCode <> 0 AND NOT fbMlSvr.bConfigured THEN
      // If nErrorCode -1 is encountered, increase nTimeout
      eState := E_State.eError;
   ELSE
      // Postprocessing of the inference results
      F_Softmax(stModelOutput.out_367);
      nPredictedClass := F_ArgMax(stModelOutput.out_367);
      // ... further processing of the result, e.g. mapping to sLabel
   END_IF
END_IF

The result of the inference can be used after a successful error check.
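
For example, the class index determined via F_ArgMax can be mapped to a readable class name in sLabel. The following sketch uses a hypothetical label array with placeholder texts, as the actual class names depend on the trained model:

// Declaration (aLabels is a placeholder, not part of the original sample)
aLabels : ARRAY [0..2] OF STRING := ['class 0', 'class 1', 'class 2'];

// After the successful error check following the Predict() call
IF nPredictedClass >= 0 AND nPredictedClass <= 2 THEN
   sLabel := aLabels[nPredictedClass];
END_IF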