ST_LatencyMonitor

Latency monitoring of the last inference. The total latency, from sending the input data to receiving the result, is made up of the inference time of the model, the overhead of the ML server, the overhead of the ADS communication, and the overhead of the PLC cycle. The latter two together are referred to as communication latency. PLC cycle overhead arises when the inference result has already been sent to the PLC via ADS, but the task cycle that executes the predict method has not yet started.
Further information can also be found here: Performance, latencies and configuration.
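
The total latency described above corresponds to the sum of the three structure members listed in the table below. A minimal Structured Text sketch, assuming an already populated instance; the variable names stLatencyMonitor and nTotalLatency are illustrative and not part of this structure definition:

VAR
    stLatencyMonitor : ST_LatencyMonitor; // assumed to be filled after a completed inference
    nTotalLatency    : LINT;              // total latency in microseconds (illustrative variable)
END_VAR

// Total latency from sending the data to receiving the result:
// inference time + server overhead + communication latency (ADS + PLC cycle overhead).
nTotalLatency := stLatencyMonitor.nInferenceLatency
               + stLatencyMonitor.nServerLatency
               + stLatencyMonitor.nCommunicationLatency;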

Name                  | Type | Default | Description
----------------------|------|---------|------------------------------------------------------------------------
nInferenceLatency     | LINT |         | Time in microseconds required to execute the inference on the server.
nServerLatency        | LINT |         | Execution time of the TcMlServer in microseconds, excluding the inference latency.
nCommunicationLatency | LINT |         | ADS communication time plus the PLC cycle overhead caused by the asynchronous execution, in microseconds.
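
As an illustration of how the individual components might be evaluated in a cyclic PLC task, the following sketch tracks the worst-case values observed so far. The variable names and the way stLatencyMonitor is refreshed are assumptions for this example, not part of the structure definition:

VAR
    stLatencyMonitor : ST_LatencyMonitor; // assumed to be updated after each completed inference
    nMaxInferenceLat : LINT := 0;         // worst-case inference time observed [µs]
    nMaxServerLat    : LINT := 0;         // worst-case server overhead observed [µs]
    nMaxCommLat      : LINT := 0;         // worst-case communication latency observed [µs]
END_VAR

// Update the worst-case values once per cycle.
IF stLatencyMonitor.nInferenceLatency > nMaxInferenceLat THEN
    nMaxInferenceLat := stLatencyMonitor.nInferenceLatency;
END_IF
IF stLatencyMonitor.nServerLatency > nMaxServerLat THEN
    nMaxServerLat := stLatencyMonitor.nServerLatency;
END_IF
IF stLatencyMonitor.nCommunicationLatency > nMaxCommLat THEN
    nMaxCommLat := stLatencyMonitor.nCommunicationLatency;
END_IF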