XGBoost
An XGBoost model can be used for both classification and regression.
Compared to plain Gradient Boosting, XGBoost offers advantages with regard to model generalization. The training data set should be large, i.e. it should contain considerably more samples than the number of features used.
Supported properties
ONNX support
- TreeEnsembleClassifier
- TreeEnsembleRegressor
Samples showing the export of XGBoost models can be found here: ONNX export of XGBoost
Note – Classification limitation: With classification models, only the output of the labels is mapped in the PLC. The scores/probabilities are not available in the PLC.
Supported data types
A distinction must be made between a "supported data type" and the "preferred data type". The preferred data type corresponds to the precision of the execution engine.
The preferred data type is 64-bit floating point (E_MLLDT_FP64-LREAL).
When a supported data type is used, an efficient type conversion takes place automatically in the library. This type conversion can cause slight performance losses.
A list of the supported data types can be found in ETcMllDataType.
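The effect of the preferred 64-bit precision can be illustrated with a plain NumPy sketch (illustrative only; the actual conversion is performed inside the library):

```python
import numpy as np

# The execution engine computes in 64-bit floating point (LREAL).
x64 = np.float64(0.1)

# Passing a 32-bit input (REAL) also works, but the upcast carries the
# float32 rounding error into the float64 computation.
x32 = np.float32(0.1)
upcast = np.float64(x32)

print(abs(upcast - x64))  # small but non-zero: float32 has fewer mantissa bits
```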