Random Forest
A Random Forest can be used both for classification and for regression. The algorithm belongs to the ensemble methods, since a user-defined number of uncorrelated decision trees is built and trained. In the Random Forest, the prediction of the ensemble results from the averaged prediction of the individual trees.
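The averaging of the individual tree predictions can be illustrated with a minimal sketch (plain Python; the tree outputs below are hypothetical values for illustration, not produced by a real trained forest):

```python
# Minimal sketch: for regression, the Random Forest prediction is the
# mean of the predictions of the individual, independently trained trees.

def forest_predict(tree_predictions):
    """Average the predictions of the individual trees for one sample."""
    return sum(tree_predictions) / len(tree_predictions)

# Hypothetical outputs of three trees for a single input sample:
tree_outputs = [2.0, 2.5, 1.5]
print(forest_predict(tree_outputs))  # prints 2.0
```

For classification, the ensemble result is obtained analogously by aggregating the votes or class probabilities of the individual trees.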
Compared to individual Decision Trees, a Random Forest often achieves a better accuracy, at the cost of reduced transparency with regard to the predictions made.
Compared to an SVM, a Random Forest is more efficient in terms of computing time, especially for high-dimensional data.
Supported properties
ONNX support
- TreeEnsembleClassifier
- TreeEnsembleRegressor
Samples of the export of Random Forest models can be found here: ONNX export of a Random Forest
Classification limitation
With classification models, only the output of the labels is mapped in the PLC. The scores/probabilities are not available in the PLC.
Supported data types
A distinction must be made between "supported datatype" and "preferred datatype". The preferred datatype corresponds to the precision of the execution engine.
The preferred datatype is floating point 64 (E_MLLDT_FP64-LREAL).
When a supported datatype is used, an efficient type conversion takes place automatically in the library. Slight performance losses can occur due to this type conversion.
A list of the supported datatypes can be found in ETcMllDataType.