Use one attribute only once in a scikit-learn decision tree in Python
I am using the scikit-learn Python module to create a decision tree, and it is working like a charm. I would like to achieve one more thing: to make the tree split on each attribute only once.
The reason is my rather unusual dataset: it is noisy, and I am genuinely interested in the noise as well. My class outcomes are binary, say [+, -], and I have a number of attributes with values mostly in the range (0, 1).
When scikit-learn builds the tree it splits on the same attribute multiple times to make the tree "better". I understand that this makes the leaf nodes purer, but that is not what I am trying to achieve.
What I did instead was to define a cutoff for every attribute by computing the information gain at different candidate cutoffs and choosing the one with the maximum gain. With "leave-one-out" and "1/3-2/3" cross-validation I get better results this way than with the original tree.
The problem is that when I try to automate this, I run into trouble near the lower and upper bounds, i.e. around 0 and 1: almost all of the elements fall below/above such a cutoff, so I get a very high information gain because one of the two subsets is pure, even though it contains only 1-2% of the full data.
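For concreteness, this is roughly what my cutoff search looks like (a simplified sketch, assuming 0/1-encoded labels; the function names are only illustrative, and the min_fraction guard is one way I am considering to suppress those near-pure boundary splits):

    import numpy as np

    def entropy(y):
        """Binary entropy of a 0/1 label array."""
        p = np.mean(y)
        if p == 0 or p == 1:
            return 0.0
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def best_cutoff(x, y, min_fraction=0.05):
        """Return the cutoff on one attribute that maximizes information gain,
        requiring each side of the split to hold at least min_fraction of the data."""
        x, y = np.asarray(x), np.asarray(y)
        n = len(y)
        parent = entropy(y)
        best_gain, best_t = 0.0, None
        for t in np.unique(x)[:-1]:                      # candidate thresholds
            left, right = y[x <= t], y[x > t]
            if min(len(left), len(right)) < min_fraction * n:
                continue                                  # skip near-pure boundary splits
            gain = parent - (len(left) * entropy(left) + len(right) * entropy(right)) / n
            if gain > best_gain:
                best_gain, best_t = gain, t
        return best_t, best_gain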
All in all, I would like to make scikit-learn split on each attribute only once.
If that cannot be done, do you have any advice on how to generate those cutoffs in a nicer way?
Thanks a lot!
python scikit-learn decision-tree
asked Nov 26 '15 at 11:30
Gábor Erdős
I wonder if you can achieve what you want by tuning min_samples_leaf, min_samples_split or min_weight_fraction_leaf. – maxymoo Nov 15 '16 at 3:50
Is it indeed possible as the OP describes? – Dror Jun 1 '17 at 8:11
I haven't found a way with scikit-learn; I wrote my own methods. – Gábor Erdős Jun 3 '17 at 8:41
Do you really want to use each attribute only once? What's wrong if the same feature x_i is used both in the left and right subtrees? If you want to avoid overfitting, you could use gradient boosting with very small trees instead of a single tree. – David Dale Nov 2 '17 at 16:11
Have you tried just using logistic regression? By definition it will use every attribute only once. – Josep Valls Jun 14 at 23:36
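As an illustrative sketch of the suggestions in these comments (the parameter values are placeholders, not recommendations): a single tree can be constrained with the min_* parameters, while depth-1 gradient boosting makes each individual tree split on exactly one feature.

    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import GradientBoostingClassifier

    # Single tree: instead of forbidding repeated features, forbid tiny leaves.
    # The values below are placeholders to tune by cross-validation.
    tree = DecisionTreeClassifier(min_samples_leaf=20,
                                  min_weight_fraction_leaf=0.05)

    # Alternative from the comments: boost decision stumps (max_depth=1),
    # so each individual tree splits on exactly one feature.
    stumps = GradientBoostingClassifier(max_depth=1, n_estimators=100)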
1 Answer
I am not giving a method to directly stop the classifier from using a feature multiple times. (You could do it by defining your own splitter and wiring it in, but that is a lot of work.)
I would suggest making sure that you balance your classes in the first place; take a look at the class_weight parameter for details. That should help a lot with your issue. If that does not work, you can still enforce that no leaf carries too little weight by using min_weight_fraction_leaf or similar parameters, as maxymoo suggested.
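A minimal sketch of what that could look like (the values are placeholders to tune, not recommendations):

    from sklearn.tree import DecisionTreeClassifier

    # Balance the two classes and refuse leaves that carry less than 5%
    # of the total sample weight; both settings are placeholders to tune.
    clf = DecisionTreeClassifier(class_weight='balanced',
                                 min_weight_fraction_leaf=0.05)
    # clf.fit(X, y)  # fit on your own data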
answered Nov 11 at 1:22
zsomko