In information theory, the binary entropy function, denoted H(p) or H_b(p), is defined as the entropy of a Bernoulli process with probability p of one of two values. It is a special case of H(X), the entropy function. Mathematically, the Bernoulli trial is modelled as a random variable X that can take on only two values: 0 and 1, which are mutually exclusive and exhaustive.
If Pr(X = 1) = p, then Pr(X = 0) = 1 − p and the entropy of X is given by

H(X) = H_b(p) = −p log₂(p) − (1 − p) log₂(1 − p),
where 0 log₂(0) is taken to be 0. The logarithms in this formula are usually taken to base 2 (the binary logarithm), giving the entropy in bits.
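The formula above, including the convention that 0 log₂(0) = 0 at the endpoints, can be sketched as a small Python function (the name `binary_entropy` is illustrative, not from the source):

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy H_b(p) in bits, with the convention 0*log2(0) = 0."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must lie in [0, 1]")
    # Endpoints: a certain outcome carries no entropy (0*log2(0) := 0).
    if p == 0.0 or p == 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
```

For example, `binary_entropy(0.5)` returns 1.0 bit, while `binary_entropy(0.0)` and `binary_entropy(1.0)` return 0.0, since a deterministic outcome is not uncertain at all.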
When p = 1/2, the binary entropy function attains its maximum value, 1 bit. This is the case of an unbiased coin flip.
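That the maximum sits at p = 1/2 can be checked by differentiating the entropy formula; a short derivation:

\[
\frac{d}{dp}\operatorname{H}_{\text{b}}(p)
  = \left(-\log_2 p - \frac{1}{\ln 2}\right)
  + \left(\log_2(1-p) + \frac{1}{\ln 2}\right)
  = \log_2\frac{1-p}{p},
\]

which vanishes exactly when (1 − p)/p = 1, i.e. at p = 1/2, where H_b(1/2) = −(1/2) log₂(1/2) − (1/2) log₂(1/2) = 1 bit.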