ST_PredictionParameter
Configuration options for an inference session on the TcMlServer.
Name | Type | Default | Description |
---|---|---|---|
sMlModelFilePath | STRING(255) | | Full path to the created JSON file (see Model Manager) |
eExecutionProvider | ExecutionProvider | ExecutionProvider.CPU | Hardware on which the AI model specified under sMlModelFilePath is executed. |
nDeviceId | UDINT | 0 | Index of the desired GPU device when using the "CUDA" ExecutionProvider. The index corresponds to the CUDA Compute Index and is only relevant for IPCs with multiple GPUs. |
bExclusiveSession | BOOL | TRUE | Determines the exclusivity of the inference session that is created for the FB instance on the TcMlServer. If TRUE, the TcMlServer creates an exclusive session, which is necessary for state-dependent models (e.g. RNNs) in order to avoid interference. If FALSE, the session can be shared with other FB instances that request the same configuration, which can reduce the memory load. |
nSessionTimeout | ULINT | 72 | Duration of inactivity in hours after which the FB session on the TcMlServer expires. The server then releases the resources allocated for that client. |
sMlSvrNetId | T_AmsSvrNetId | '127.0.0.1.1.1' | AMS Net Id of the device on which the TcMlServer service is accessible. Default is 'local'. |
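The structure is typically declared in the PLC and its members assigned before an inference session is requested. The following Structured Text sketch shows one possible assignment; the file path and the GPU-related values are illustrative, the enum member name ExecutionProvider.CUDA is assumed from the nDeviceId description, and only the structure members documented above are taken from this page.

```
VAR
    // Configuration passed to the TcMlServer when the session is created
    stParam : ST_PredictionParameter;
END_VAR

// Path to the JSON file created with the Model Manager (example path)
stParam.sMlModelFilePath   := 'C:\TwinCAT\3.1\Boot\MlModel\MyModel.json';

// Execute the model on a GPU instead of the default CPU
// (member name CUDA assumed; CPU is the documented default)
stParam.eExecutionProvider := ExecutionProvider.CUDA;

// First GPU according to the CUDA Compute Index (only relevant with multiple GPUs)
stParam.nDeviceId          := 0;

// Exclusive session, e.g. required for state-dependent models such as RNNs
stParam.bExclusiveSession  := TRUE;

// Release the allocated server resources after 72 hours of inactivity
stParam.nSessionTimeout    := 72;

// TcMlServer service runs on the local device
stParam.sMlSvrNetId        := '127.0.0.1.1.1';
```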