Topic: information gain
In the worked example, the concept of information gain is mentioned. Mathematically it is the reduction in bits after splitting, but what is its exact meaning?
When we partition some data points using a binary question, we hope to make the distribution of values of the predictee less uniform and more predictable. In other words, we try to reduce the entropy of the probability distribution of the predictee.
If we manage to do that, we have gained some information about the value of the predictee. We know more about it (= we are more certain of its value) after the split than before it.
The reduction in entropy from before to after the split is the information gain, measured in bits.
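For concreteness, here is a minimal Python sketch (not from the course materials; the class labels and the helper names `entropy` and `information_gain` are just illustrative) that computes the entropy of the predictee's distribution before and after a binary split and reports the difference as the information gain:

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy (in bits) of the empirical distribution over the labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(parent, left, right):
    """Reduction in entropy from splitting `parent` into `left` and `right`."""
    n = len(parent)
    weighted_child_entropy = (len(left) / n) * entropy(left) \
                           + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted_child_entropy

# Hypothetical predictee values (e.g. a phone property quantised into two classes)
parent = ["short", "short", "long", "long", "long", "short", "long", "short"]
left   = ["short", "short", "short", "short"]  # data points answering "yes" to the question
right  = ["long", "long", "long", "long"]      # data points answering "no"

print(entropy(parent))                        # 1.0 bit: maximally uncertain before the split
print(information_gain(parent, left, right))  # 1.0 bit: this split removes all the uncertainty
```

In this toy case the split separates the two classes perfectly, so the children have zero entropy and the gain equals the full 1 bit of the parent; a less discriminative question would leave some entropy in the children and yield a smaller gain.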
[other part of question answered separately – please include only one question per post]