Points   Grade
0-15     0
16-18    1
19-21    2
22-24    3
25-27    4
28-30    5

You must get at least grade 1 to pass the course.
Problems:
1.1 max 3p | A discrete memoryless source emits a sequence of statistically independent binary digits with probabilities p(1) = 0.005 and p(0) = 0.995. The digits are taken 100 at a time and a binary codeword is provided for every sequence of 100 digits containing three or fewer ones.
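No solution is given for this problem here, and the question itself appears to be cut off after the setup. As a quick numerical look at the stated setup (a minimal Python sketch; it only computes the obvious quantities: how many such sequences there are, how many bits a fixed-length codeword for them would need, and how much probability they cover):

    from math import comb, ceil, log2

    n = 100        # block length
    p1 = 0.005     # probability of a 1

    # Number of length-100 sequences containing three or fewer ones
    count = sum(comb(n, k) for k in range(4))

    # Fixed-length bits needed to give each such sequence a distinct codeword
    bits = ceil(log2(count))

    # Total probability of all such sequences
    prob = sum(comb(n, k) * p1**k * (1 - p1)**(n - k) for k in range(4))

    print(count, bits, prob)   # 166751 sequences, 18 bits, coverage ~0.998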
1.2 max 2p | Show that the entropy of the probability distribution (p_1, ..., p_i, ..., p_j, ..., p_m) is less than the entropy of the distribution (p_1, ..., (p_i+p_j)/2, ..., (p_i+p_j)/2, ..., p_m). Show that in general any transfer of probability that makes the distribution more uniform increases the entropy.
Solution: We only need to look at the difference between the entropies of the two distributions. Define f(p) = -p log p. The difference between the new and the old entropy is

    2 f((p_i+p_j)/2) - f(p_i) - f(p_j),

which we need to show to be greater than or equal to zero, i.e.

    f((p_i+p_j)/2) >= (f(p_i) + f(p_j)) / 2.

This follows if f is concave. Since df/dp = -(log p + 1), we have d^2f/dp^2 = -1/p < 0 when p > 0, hence f is concave and the inequality holds.

For the second claim we need to define "more uniform". This is conveniently done using the Kullback-Leibler divergence D(p, u), where u is the uniform distribution over the m outcomes with probabilities 1/m. If D(p, u) > D(q, u), we say that q is "more uniform" than p. Since

    D(q, u) = sum_i q_i log(q_i / (1/m)) = -H(q) - sum_i q_i log(1/m) = -H(q) + log m,

the inequality D(p, u) > D(q, u) becomes -H(p) > -H(q), i.e. H(p) < H(q), as desired.
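As a quick numerical sanity check of the first claim (a minimal sketch; the example distribution is arbitrary), averaging two entries of a distribution should never decrease its entropy:

    import numpy as np

    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    p = np.array([0.5, 0.3, 0.15, 0.05])   # arbitrary example distribution
    q = p.copy()
    q[1] = q[2] = (p[1] + p[2]) / 2        # replace p_i, p_j by their average

    print(entropy(p), entropy(q))          # entropy(q) >= entropy(p)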
2.1 max 2p |
Determine a Huffman code for the text of this problem.
Solution: Answer omitted; the code varies depending on whether you include spaces or not, etc.
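The code itself is not reproduced here, but the construction can be illustrated with a short Python sketch (tie-breaking among equal frequencies is arbitrary, so the resulting code is not unique):

    import heapq
    from collections import Counter

    def huffman_code(text):
        # Count symbol frequencies and start with one singleton code per symbol.
        freq = Counter(text)
        heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            w1, _, c1 = heapq.heappop(heap)      # two least probable subtrees
            w2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in c1.items()}
            merged.update({s: "1" + c for s, c in c2.items()})
            heapq.heappush(heap, (w1 + w2, tiebreak, merged))
            tiebreak += 1
        return heap[0][2]                        # {symbol: codeword}

    code = huffman_code("determine a huffman code for the text of this problem")
    for symbol, word in sorted(code.items(), key=lambda kv: len(kv[1])):
        print(repr(symbol), word)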
2.2 max 2p |
Consider all possible 256×256 pixel images with 8 bit grayscale
pixels. How many images can be compressed by 20 bits or more using a
uniquely decodable code? Compare the result with the number of all
possible images.
Solution: Each image has n = 256*256*8 bits, so there are 2^n possible images. Denote l = n - 20, which is the largest codelength that fulfils the requirement given in the problem. The Kraft inequality then gives an upper bound k on the number of such codewords: k*2^(-l) <= 1, i.e. k <= 2^l = 2^(n-20). This is an upper bound, since shorter codelengths m < l would contribute even larger terms 2^(-m) to the Kraft sum. Comparing k with the number of all images, k/2^n <= 2^(n-20)/2^n = 2^(-20), which is approximately 0.000095 percent. So compression is difficult!
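A quick check of the fraction (plain Python; the only input that matters is the required saving of 20 bits):

    n = 256 * 256 * 8             # bits per uncompressed image
    saving = 20                   # require codewords at least 20 bits shorter
    # Kraft upper bound: at most 2**(n - saving) codewords of length <= n - saving,
    # out of 2**n images, so the compressible fraction is at most 2**(-saving).
    fraction = 2.0 ** (-saving)
    print(f"{fraction:.2e}  (~{100 * fraction:.6f} %)")   # ~9.54e-07, i.e. ~0.000095 %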
2.3 max 2p | Assume there is a source emitting symbols a and b with probability P(a | β) = β. Given a sequence s with N_a symbols a and N_b symbols b, what is the probability P(a | s) if p(β) = [β(1-β)]^(-1/2) / π?
Solution: The problem is otherwise identical to the one solved in MacKay's book in section 4.2.4, except that the prior is different. The answer is P(a | s) = (N_a + 1/2) / (N_a + N_b + 1).
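The prior is the Beta(1/2, 1/2) density, so the predictive probability is a ratio of Beta functions. A quick numerical check (a sketch; the counts N_a and N_b are chosen arbitrarily):

    from scipy.special import beta as B

    Na, Nb = 7, 3   # arbitrary example counts

    # Posterior is proportional to beta^(Na-1/2) (1-beta)^(Nb-1/2), so
    # P(a | s) = B(Na + 3/2, Nb + 1/2) / B(Na + 1/2, Nb + 1/2)
    predictive = B(Na + 1.5, Nb + 0.5) / B(Na + 0.5, Nb + 0.5)

    print(predictive, (Na + 0.5) / (Na + Nb + 1))   # both ~0.6818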
3.1 max 2p | The
noisy channel coding theorem states that for a discrete memoryless
channel, it is possible to achieve a finite rate of transmission with
arbitrarily small error probability. Discuss the reasons why in real
applications higher error probabilities are unavoidable.
Solution: Some reasons: the noisy channel coding theorem assumes that the code is a block code and that the block length can be chosen arbitrarily large. This is not reasonable in practice, since very long blocks mean long coding delays and high encoding and decoding complexity. In addition, real channels are not known exactly and are usually not memoryless or stationary, so the conditions of the theorem hold only approximately.
3.2 max 2p | Which distributions p(x) maximize the differential entropy of x under each of the following constraints: (a) the expectation of |x| is fixed, (b) x is restricted to the interval [-1, 1], (c) the expectation of x^2 is fixed?
Solution: The task is to maximize the integral of -p(x) log p(x) under constraints, each of which can be expressed as an integral of p(x) f(x) being equal to a constant. The first constraint is always that p(x) has to integrate to unity (f(x) = 1), and the next constraint is the one given in the problem: in (a) f(x) = |x| and in (c) f(x) = x^2, while in (b) the domain of x is [-1, 1] and there are no other constraints. Using the method of Lagrange multipliers one can show that the general form of the solution is proportional to exp(c_1 f_1(x) + c_2 f_2(x) + ...), where the c_i are the Lagrange multipliers, one for each constraint (the variational step is sketched below). The solutions are: (a) the Laplace distribution, p(x) proportional to exp(-c|x|); (b) the uniform distribution on [-1, 1]; and (c) the Gaussian distribution, p(x) proportional to exp(-c x^2), with the constants determined by the constraints.
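The variational step behind the general form, written out in LaTeX (a standard calculus-of-variations argument, not copied from the original solution; F_i denotes the constant value of the i-th constraint):

    \begin{aligned}
    J[p] &= -\int p(x)\log p(x)\,dx + \sum_i c_i \Big( \int p(x) f_i(x)\,dx - F_i \Big), \\
    \frac{\delta J}{\delta p(x)} &= -\log p(x) - 1 + \sum_i c_i f_i(x) = 0
    \quad\Longrightarrow\quad
    p(x) = \exp\Big( \sum_i c_i f_i(x) - 1 \Big).
    \end{aligned}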
4.1 max 2p |
Recompute the posterior probability (10.33) in the book by using the
given conjugate priors for the mean and variance.
Solution: Insert the conjugate priors into the posterior probability according to Bayes' theorem. Details omitted here.
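Equation (10.33) is not reproduced here; as a general illustration of conjugate updating of a Gaussian's mean and variance, the sketch below uses a normal-inverse-gamma prior (the hyperparameter names are illustrative, not the book's notation):

    import numpy as np

    def nig_posterior(x, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
        # Prior: sigma^2 ~ InvGamma(alpha0, beta0), mu | sigma^2 ~ N(mu0, sigma^2 / kappa0).
        # Returns the posterior hyperparameters after observing x ~ N(mu, sigma^2) i.i.d.
        x = np.asarray(x, dtype=float)
        n, xbar = len(x), x.mean()
        kappa_n = kappa0 + n
        mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
        alpha_n = alpha0 + n / 2
        beta_n = (beta0 + 0.5 * np.sum((x - xbar) ** 2)
                  + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n))
        return mu_n, kappa_n, alpha_n, beta_n

    data = np.random.default_rng(0).normal(2.0, 1.0, size=50)
    print(nig_posterior(data))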
4.2 max 2p | With Laplace's method, what is the approximation for the posterior probability density if the posterior has n local maxima at x_1, x_2, ..., x_n, the corresponding joint densities of the x_i and the data D are p(x_1, D), ..., p(x_n, D), and the corresponding Hessian matrices of -log p(x_i, D) are H_1, ..., H_n?
Solution: The solution is obtained as explained in the supplement to the problem. Approximating the joint density around each maximum by p(x_i, D) exp(-(1/2)(x - x_i)^T H_i (x - x_i)) and assuming the maxima are well separated, we get

    p(x | D) ≈ [ sum_i p(x_i, D) exp(-(1/2)(x - x_i)^T H_i (x - x_i)) ] / [ sum_i p(x_i, D) (2π)^(d/2) det(H_i)^(-1/2) ],

where d is the dimension of x and the denominator is the Laplace approximation of the evidence p(D).
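A small numerical sketch of this mixture-of-Gaussians form (one-dimensional, with made-up modes, joint densities and Hessians):

    import numpy as np

    # Hypothetical inputs: modes x_i, joint densities p(x_i, D), and "Hessians" H_i
    modes = np.array([-2.0, 1.5])
    joint = np.array([0.4, 1.0])     # p(x_i, D), unnormalized
    hess  = np.array([4.0, 1.0])     # second derivatives of -log p(x, D) at the modes

    def laplace_posterior(x):
        # Numerator: sum_i p(x_i, D) exp(-0.5 * H_i * (x - x_i)^2)
        num = np.sum(joint * np.exp(-0.5 * hess * (x - modes) ** 2))
        # Denominator: Laplace approximation of p(D) = sum_i p(x_i, D) sqrt(2*pi / H_i)
        evidence = np.sum(joint * np.sqrt(2 * np.pi / hess))
        return num / evidence

    xs = np.linspace(-6.0, 6.0, 2001)
    density = np.array([laplace_posterior(x) for x in xs])
    print(np.trapz(density, xs))     # ~1.0: the approximate posterior integrates to one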
5.1 max 2p | Estimate in bits the total sensory experience that you have had in your life - visual information, auditory information etc. Estimate how much information you have memorized. Compare these with the capacity of your brain assuming you have 10^11 neurons each making 1000 synaptic connections and that the capacity result for one neuron (two bits per connection) applies. Is your brain full yet?
Solution: You need to use heavy approximations. First, ignore everything but visual information; other sensory information should be a small fraction of the visual information. Estimate your lifetime, say, in seconds, and use only one significant digit; a rough calculation along these lines is sketched below.
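Every number in the sketch below is an assumed round figure (lifetime, effective visual bit rate, memorization rate) chosen only to illustrate the orders of magnitude; the original figures are not reproduced here.

    # Back-of-envelope estimate; every number is an assumed round figure.
    lifetime_s    = 1e9       # ~30 years in seconds
    visual_rate   = 1e7       # assumed effective bits/s delivered by the eyes
    memorize_rate = 10        # assumed bits/s actually committed to memory

    sensory_bits   = lifetime_s * visual_rate     # ~1e16 bits of sensory experience
    memorized_bits = lifetime_s * memorize_rate   # ~1e10 bits memorized

    capacity_bits = 1e11 * 1000 * 2               # 10^11 neurons * 1000 synapses * 2 bits

    print(f"sensory   ~ {sensory_bits:.0e} bits")
    print(f"memorized ~ {memorized_bits:.0e} bits")
    print(f"capacity  ~ {capacity_bits:.0e} bits")
    # capacity ~2e14: far less than the raw sensory stream, but far more than what is
    # plausibly memorized, so the brain is not full yet.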
6.1 max 2p | In the MML principle the coding cost is measured by assuming a random discretisation of the parameters w_i with accuracies e_i. The coding cost of the accuracies is neglected. Show that the MML principle can be seen as a special case of ensemble learning and that the bits-back argument also justifies neglecting the coding cost of the accuracies. (Hint: consider what kind of ensemble distribution for the weights is implicitly used in the MML principle.)
Solution: The ensemble distribution implicitly assumed in the MML method is a hyper-box whose sides have lengths e_i; denote this distribution by Q(w). The coding length in MML is

    L(parameters) + L(data | parameters) = -log P(w_1)e_1 - log P(w_2)e_2 - ... - E_Q{log P(D | w)}.

The bits-back argument assumes that the sender picks the parameters w from the distribution Q(w) and encodes them with an accuracy approaching infinity (e_i -> 0). This makes the terms -log P(w_i)e_i infinite, but an infinite number of bits is also recovered, since the sender can embed a secondary message in the choice of the parameters. The expected total number of bits used in coding is

    L(parameters) + L(data | parameters) - bits back = -E_Q{log P(w)} - E_Q{log P(D | w)} + E_Q{log Q(w)}.

Choosing Q(w) to be the same hyper-box as in MML yields an identical coding length, since

    -E_Q{log P(w)} + E_Q{log Q(w)} = -log P(w_1)e_1 - log P(w_2)e_2 - ...

In the MML scheme the coding cost of the discretisation accuracies e_i is neglected, but in the bits-back scheme the sender and the receiver can agree in advance on arbitrarily small discretisation intervals e_i, so no cost needs to be paid for transmitting them.
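To make the connection to ensemble learning explicit, the expected code length above can be rewritten (a standard identity, stated here in LaTeX):

    \begin{aligned}
    \mathbb{E}_Q\{L_{\text{total}}\}
    &= -\mathbb{E}_Q\{\log P(w)\} - \mathbb{E}_Q\{\log P(D \mid w)\} + \mathbb{E}_Q\{\log Q(w)\} \\
    &= \mathbb{E}_Q\left\{ \log \frac{Q(w)}{P(w)\,P(D \mid w)} \right\}
     = -\log P(D) + D_{\mathrm{KL}}\big( Q(w) \,\|\, P(w \mid D) \big),
    \end{aligned}

so minimizing the expected code length over Q(w) is exactly the ensemble learning (variational) objective, and the MML hyper-box is one particular choice of Q(w).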
7.1 max 1p |
Exercise 20.1 in the book, page 387.
Solution: Only the sign of the constant n is significant: the activation is a linear combination of the weights, so the constant appears as a multiplier of the activation, and the state is updated using a step function of the activation, for which only the sign of the activation matters.
7.2 max 2p |
Exercise 21.3 in the book, page 403.
Solution: Straightforward but tedious calculation, omitted here.
8.1 max 2p |
Exercise 19.1 in the book, page 379.
Solution: Straightforward but tedious calculation, omitted here.
8.2 max 2p |
Exercise 13.3 in the book, page 276.
Solution: Straightforward calculation, omitted here.