From df257b82fcf31e2ebe05b12b9209c0821d637f7a Mon Sep 17 00:00:00 2001
From: Saulo Martiello Mastelini
Date: Mon, 29 Jul 2024 09:52:24 -0300
Subject: [PATCH] [docs] remove unavailable tree algorithm and fix tree depth
 behavior description in the tree guidelines

---
 docs/recipes/on-hoeffding-trees.ipynb | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/docs/recipes/on-hoeffding-trees.ipynb b/docs/recipes/on-hoeffding-trees.ipynb
index 54d77140cc..eb0218b68f 100644
--- a/docs/recipes/on-hoeffding-trees.ipynb
+++ b/docs/recipes/on-hoeffding-trees.ipynb
@@ -73,7 +73,6 @@
     "| Hoeffding Tree Regressor | HTR | Regression | No | Basic HT for regression tasks. It is an adaptation of the [FIRT/FIMT](https://link.springer.com/article/10.1007/s10618-010-0201-y) algorithm that bears some resemblance to HTC | [[4]](https://link.springer.com/article/10.1007/s10618-010-0201-y)\n",
     "| Hoeffding Adaptive Tree Regressor | HATR | Regression | Yes | Modifies HTR by adding an instance of ADWIN to each node to detect and react to drifts | -\n",
     "| incremental Structured-Output Prediction Tree Regressor | iSOUPT | Multi-target regression | No | Multi-target version of HTR | [[5]](https://link.springer.com/article/10.1007/s10844-017-0462-7)\n",
-    "| Label Combination Hoeffding Tree Classifier | LCHTC | Multi-label classification | No | Creates a numerical code for each combination of the binary labels and uses HTC to learn from this encoded representation. At prediction time, decodes the modified representation to obtain the original label set | -\n",
     "\n",
     "\n",
     "As we can see, although their application fields might overlap sometimes, the HT variations have specific situations in which they are better suited to work. Moreover, in River we provide standardized API access to all the HT variants since they share many properties."
@@ -832,7 +831,7 @@
     "\n",
     "HTs monitor the incoming feature values to perform split attempts. To do so, they rely on a class of algorithms called *Attribute Observers* (AO) or *Splitters* (spoiler alert!). Each leaf node in an HT keeps one AO per incoming feature. After pre-determined intervals (`grace_period` parameter), leaves query their AOs for split candidates. However, monitoring the input features (mainly the numerical ones) comes at a cost. In fact, AOs correspond to one of the most time- and memory-consuming portions of the HTs. To manage memory usage, an HT first determines its least promising leaves, w.r.t. how likely they are to be split. Then, these leaves' AOs are removed, and the tree nodes are said to be \"deactivated.\" That's it! The deactivated leaves do not perform split attempts anymore, but they continue to be updated to provide responses. They will be kept as leaves as long as there are no resources available to enable tree growth. These leaves can be activated again (meaning that new AOs will be created for them) if there is available memory, so don't worry!\n",
     "\n",
-    "**Hint:** another indirect way to bound memory usage is to limit the tree depth. By default, the trees can grow indefinitely, but the `max_depth` parameter can control this behavior."
+    "**Hint:** another indirect way to bound memory usage is to limit the tree depth. By default, the trees can grow until they approach the system's maximum recursion limit, but the `max_depth` parameter can control this behavior."
    ]
   },
   {
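The hint this patch rewrites can be made concrete with a back-of-the-envelope estimate. The sketch below is not part of the patch and not River's implementation; it only assumes a binary tree with one attribute observer (AO) per feature per active leaf, to show why capping `max_depth` indirectly bounds the memory the AOs consume:

```python
# Toy estimate (NOT River's implementation): a binary tree of depth d has
# at most 2**d leaves, and each active leaf keeps one attribute observer
# per incoming feature. Capping the depth therefore caps the worst-case
# number of observers, which is where most of an HT's memory goes.

def max_leaves(max_depth: int) -> int:
    # A binary tree of depth `max_depth` has at most 2**max_depth leaves.
    return 2 ** max_depth

def max_observers(max_depth: int, n_features: int) -> int:
    # Worst case: one attribute observer per feature per active leaf.
    return max_leaves(max_depth) * n_features

# With max_depth=5 and 10 input features, at most 320 observers can exist
# at once; an unbounded tree has no such cap.
print(max_observers(5, 10))  # 320
```

In River itself, `max_depth` and `grace_period` are constructor parameters of the Hoeffding tree estimators (e.g. `tree.HoeffdingTreeClassifier`), and the leaf deactivation mechanism described above is applied automatically when memory runs low.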