PredictBatched()

The PredictBatched method submits an asynchronous batch inference request to the TcMlServer.

The method expects an array of the model's input data type, as specified in the ONNX file; see Preparing ONNX for use with TwinCAT Machine Learning Server.

The provided output pointer must be valid and point to an instance of an array of the model's output data type, as defined in the generated PlcOpenXml. Once the asynchronous inference has completed successfully, the data in the provided output memory area is valid and released for further processing.

See also Execute AI model.
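The call pattern can be sketched in Structured Text. This is a minimal, hedged example: the function block type FB_MlSvrPrediction, the instance name fbPredict, and the data types ST_ModelInput / ST_ModelOutput are assumptions for illustration; use the data types generated in the PlcOpenXml for your specific ONNX model.

```iecst
VAR
    fbPredict : FB_MlSvrPrediction;               // assumed server-prediction FB type
    aInput    : ARRAY [0..3] OF ST_ModelInput;    // batch of 4 input samples (hypothetical type)
    aOutput   : ARRAY [0..3] OF ST_ModelOutput;   // batch of 4 output samples (hypothetical type)
    bDone     : BOOL;
END_VAR

// Cyclic call: returns TRUE as soon as the asynchronous result is available.
bDone := fbPredict.PredictBatched(
    pDataIn      := ADR(aInput),
    nDataInSize  := SIZEOF(ST_ModelInput),    // size of one element, not of the array
    nBatchSize   := 4,
    pDataOut     := ADR(aOutput),
    nDataOutSize := SIZEOF(ST_ModelOutput),
    nTimeout     := 100,                      // PLC task cycles until a timeout error is returned
    nPriority    := 0);

IF bDone THEN
    IF fbPredict.bError THEN
        // Evaluate fbPredict.nErrorCode
    ELSE
        // aOutput now holds valid results for all batch elements
    END_IF
END_IF
```

Note that the output buffer must remain valid until the call signals completion; its contents are only released for further processing once the method returns TRUE without error.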

 

|        | Parameter      | Type  | Default | Description |
|--------|----------------|-------|---------|-------------|
| INPUT  | pDataIn        | PVOID |         | Pointer to the instance of an array of input data types |
| INPUT  | nDataInSize    | UDINT | 0       | Size of the input data type (size of one element, not of the array) |
| INPUT  | nBatchSize     | UINT  |         | Number of elements in the batch |
| INPUT  | pDataOut       | PVOID |         | Pointer to the instance of an array of output data types |
| INPUT  | nDataOutSize   | UDINT |         | Size of the output data type |
| INPUT  | nTimeout       | ULINT |         | Number of PLC task cycles before the timeout error is returned |
| INPUT  | nPriority      | UDINT | 0       | Priority of the request; a larger value means higher priority |
| OUTPUT | PredictBatched | BOOL  |         | Return value. TRUE as soon as the result of the asynchronous call is available. The result of the call can then be checked using the bError and nErrorCode properties. |