15-359: Probability and Computing (Fall 2010)

December 11, 2010

Averages for Final Parts

Filed under: Announcements — cmu15359 @ 4:48 pm

Hi All,

Here are the averages for the final HWs, in the format Average / Total (Std. Dev.):

HW 9: 37.92 / 50 (10.02)

HW 10: 39.29 / 50 (9.04)

HW 11: 41.08 / 60 (13.53)

–TAs

December 6, 2010

Final announcement

Filed under: Announcements — cmu15359 @ 1:00 pm

Just a repeat announcement: You are allowed to bring a hand-written, single-sided sheet of notes to the final exam on Tuesday for reference.

All the best!

December 2, 2010

Lecture 27

Filed under: Lectures — cmu15359 @ 11:12 pm

Today in the final class, we discussed game theory (slides posted in lecture-notes panel). We discussed general-sum games, zero-sum games, bluffing in poker, and we sketched how our bounds for the Randomized Weighted Majority algorithm in fact give a proof of the minimax theorem. We did not go into the proof of existence of Nash equilibria, but I left a proof sketch in the slides posted in case you are interested (don’t worry, you won’t be tested on that).

BTW, on 12/6 my (Avrim’s) office hours will be 10-11 instead of 4-5.

December 1, 2010

Lecture 26

Filed under: Lectures — Venkat Guruswami @ 3:39 pm

This lecture served as an introduction to information theory, one of the most important branches of applied mathematics invented in the 20th century. We began with some introductory remarks about the birth of this subject in Claude Shannon’s 1948 masterpiece “A Mathematical Theory of Communication,” mentioning the two key problems studied by the theory: data compression (removing redundancy in data) and coding for reliable communication (adding judicious redundancy to the data to make it resilient to errors).

We discussed some axioms that the “surprise” S(p) associated with an event of probability p should satisfy, and deduced the formula S(p) = \log_2 (1/p) for this function. We then defined an(other) important quantity associated with a (discrete) random variable X, namely its entropy H(X), which is the expected surprise of the outcome of the random variable. More formally,

\displaystyle H(X) = {\mathbb E}_{x \leftarrow X} \log_2 \left( \frac{1}{\mathrm{Pr}[X=x]} \right) = \sum_{x \in \mathrm{range}(X)} p_X(x) \log_2 (1/p_X(x))

For X \sim \mathrm{Bernoulli}(p), H(X) = h(p) = p \log_2(1/p) + (1-p)\log_2 (1/(1-p)), where h : [0,1] \rightarrow [0,1] is the binary entropy function. This function plays an important role in asymptotic combinatorics, since {n \choose {\alpha n}} \approx 2^{h(\alpha) n} for fixed \alpha as n \to \infty. We argued that if X takes at most n values, then H(X) \le \log_2 n, with equality attained when X is uniformly distributed over its range of n elements. Along the way, we saw the very useful Jensen’s inequality, which says that for a convex function f, {\mathbb E}[f(X)] \ge f({\mathbb E}[X]) (the inequality {\mathbb E}[X^2] \ge {\mathbb E}[X]^2 is the special case f(x) = x^2), and for concave functions, the inequality is reversed: f({\mathbb E}[X]) \ge {\mathbb E}[f(X)].
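As a quick numerical sanity check (my own illustration, not part of the lecture), here is a short Python snippet that computes the binary entropy function and compares \log_2 {n \choose \alpha n} with h(\alpha) n for one choice of n and \alpha:

```python
# Sanity check of h(p) and the approximation C(n, alpha*n) ~= 2^{h(alpha) n}.
import math

def binary_entropy(p: float) -> float:
    """h(p) = p log2(1/p) + (1-p) log2(1/(1-p)), with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

n, alpha = 1000, 0.3
exact = math.log2(math.comb(n, int(alpha * n)))  # log2 of the binomial coefficient (~876)
approx = binary_entropy(alpha) * n               # the exponent h(alpha) * n (~881)
print(f"log2 C(n, alpha*n) = {exact:.1f}")
print(f"h(alpha) * n       = {approx:.1f}")
# The gap is a lower-order O(log n) term, consistent with the approximation above.
```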

Here are some notes from the Fall 2009 offering for further details about this.  (We only covered Section 1 of the notes.)

In the second half of the lecture, we discussed the Shannon coding theorem for noisy channels, focusing on a simple channel called the binary symmetric channel, which flips each input bit independently with probability p. (In other words, in n transmissions, it XORs onto the actual transmitted string an error vector whose n bits are i.i.d. \mathrm{Bernoulli}(p), so the number of flipped positions is distributed as \mathrm{Binomial}(n,p).) This second part is not included for the final, but if you are interested in learning more or just want a reference for what we covered, here are some notes from the coding theory course I taught in Spring 2010. (The same notes in pdf.)
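If you want to play with the channel yourself, here is a minimal simulation sketch (my own code, not from the course notes) of BSC(p):

```python
# Simulate a binary symmetric channel: each bit is flipped independently with prob. p.
import random

def bsc(bits, p, rng=random):
    """Send a list of 0/1 bits through a BSC with crossover probability p."""
    return [b ^ (1 if rng.random() < p else 0) for b in bits]

random.seed(0)
sent = [random.randint(0, 1) for _ in range(10_000)]
received = bsc(sent, p=0.1)
flips = sum(s != r for s, r in zip(sent, received))
print(f"empirical flip rate: {flips / len(sent):.3f}")   # close to p = 0.1
```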

November 30, 2010

Announcements

Filed under: Announcements — cmu15359 @ 1:54 pm
  1. The final exam is Tues Dec 7, 1:00-4:00pm, in GHC 4307.
  2. There will be a review session on Sunday (Dec 5) 11 am – 12:30 pm in GHC 4211. We will go through last year’s final exam. Please attempt the exam beforehand, and also please bring any questions you may have.
  3. In this week’s recitation, we will give a quick overview of the topics covered since the midterm.
  4. Please fill out the faculty course evaluation form by December 14.
  5. Please also fill out evaluation forms for your TAs by December 10: [Varun] [Ravi]

November 29, 2010

Hint for Q2

Filed under: Announcements,Problem sets — cmu15359 @ 6:17 pm

To obtain a lower bound on \Phi(-u), try to upper bound \int_{-\infty}^{-u} \frac{e^{-x^2/2}}{x^2} \, dx in terms of \Phi(-u).

Additional office hours

Filed under: Announcements — cmu15359 @ 2:34 am

Since HW 11 is due on Tuesday, I (Varun) will have this week’s office hour on Monday (Nov 29) 5-6 pm.

November 28, 2010

Lecture 25

Filed under: Lectures — cmu15359 @ 10:52 pm

In this lecture, we began by completing our “elementary” analysis of PAC learning, showing that for any class of functions H, if we see m \geq \frac{1}{\epsilon}(\ln(|H|) + \ln(1/\delta)) examples, then with probability at least 1-\delta, all hypotheses h \in H with err_D(h) \geq \epsilon will make at least one mistake on the sample.  This means that we can be confident in any rules in H we find that are consistent with the data.  The analysis of this was just a direct probability argument for a single function (fix a high error function h and then draw the sample) followed by a union bound.
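As a quick numerical illustration (the numbers below are made up, not from the course), here is the bound evaluated in code for a concrete |H|, \epsilon, and \delta:

```python
# Smallest m satisfying m >= (1/eps) * (ln|H| + ln(1/delta)).
import math

def pac_sample_size(hypothesis_count: int, eps: float, delta: float) -> int:
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / eps)

# Example: |H| = 2^20 hypotheses, target error 0.1, confidence 95%.
print(pac_sample_size(2**20, eps=0.1, delta=0.05))   # 169 examples suffice
```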

We then saw that one could interpret this as a mathematical justification of Occam’s razor.  In particular, for any way of describing functions that you have in your head, there can be at most 2^s functions of size (in bits) < s.  So this means that (plugging in \epsilon = 0.1) if you draw m examples, you can be confident in any rule you find that can be described in less than m/10 bits.  In particular, even if each of us has a different way of describing things and a different notion of what is “simpler”, we are each justified in following the principle of Occam’s razor.

We then asked the question: what about classes like “linear separators in the plane,” where there are infinitely many functions in the class? To analyze this, we discussed the idea of shatter coefficients and went into a tricky “double-sample” argument to show that you could replace “log of the total number of functions in H” with “log of the number of ways you can use functions in H to label a set of 2m points”. E.g., for the case of linear separators in the plane, the former quantity is infinite but the latter quantity is at most \log((2m)^2) = O(\log m).
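Here is a rough numerical illustration (my own sketch, not from the lecture) of why this helps: sampling many random halfplanes over a fixed set of m points finds only on the order of m^2 distinct labelings, far fewer than the 2^m labelings one might naively worry about. (The sampled count is only a lower bound on the true number of achievable labelings.)

```python
# Estimate (from below) how many distinct labelings of m fixed points
# are achievable by halfplanes, by sampling random halfplanes.
import math, random

random.seed(0)
m = 10
points = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(m)]

labelings = set()
for _ in range(200_000):
    # A random halfplane {(x, y) : a*x + b*y + c > 0}.
    angle = random.uniform(0, 2 * math.pi)
    a, b = math.cos(angle), math.sin(angle)
    c = random.gauss(0, 2)
    labelings.add(tuple(1 if a * x + b * y + c > 0 else 0 for (x, y) in points))

print(f"distinct halfplane labelings found: {len(labelings)}")  # on the order of m^2
print(f"compare with 2^m = {2 ** m} and m^2 = {m ** 2}")
```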

We then discussed a different model where rather than assuming examples are drawn from a probability distribution and labeled by a target function, we just have an arbitrary sequence of examples, and our aim is just to do “nearly as well as” the best of some set of predictors.  This is often called the problem of combining expert advice, and we analyzed the deterministic and randomized Weighted Majority algorithms for this problem.  What is especially nice is that the randomized version gets a particularly strong bound on the expected number of mistakes over any adversarially-chosen sequence of examples. For instance, we can set parameters to make at most 1.07m + 8\ln(n) mistakes, where m is the number of mistakes of the best predictor in hindsight.
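For concreteness, here is a hedged sketch (my own implementation, not the course’s code) of the Randomized Weighted Majority algorithm in the experts setting, using a multiplicative penalty of (1 - \epsilon) for each expert that errs:

```python
import random

def randomized_weighted_majority(expert_predictions, outcomes, eps=0.5, rng=random):
    """expert_predictions[t][i] is expert i's prediction (0/1) in round t;
    outcomes[t] is the true answer. Returns the algorithm's mistake count."""
    n = len(expert_predictions[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, truth in zip(expert_predictions, outcomes):
        # Follow a random expert, chosen with probability proportional to its weight.
        chosen = rng.choices(range(n), weights=weights, k=1)[0]
        mistakes += int(preds[chosen] != truth)
        # Multiply the weight of every expert that was wrong by (1 - eps).
        weights = [w * (1 - eps) if p != truth else w
                   for w, p in zip(weights, preds)]
    return mistakes

# Toy usage: four experts that guess randomly plus one that is right 90% of the time.
random.seed(1)
T = 1000
truth = [random.randint(0, 1) for _ in range(T)]
preds = [[random.randint(0, 1) for _ in range(4)]
         + [truth[t] if random.random() < 0.9 else 1 - truth[t]]
         for t in range(T)]
print(randomized_weighted_majority(preds, truth, eps=0.5))
```

With the right choice of \epsilon, the analysis from lecture gives bounds of the 1.07m + 8\ln(n) flavor; the eps=0.5 used here is just for illustration.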

November 19, 2010

Recitation 11/19

Filed under: Recitation — cmu15359 @ 2:19 pm

Today we looked at the following topics:

(1) Sampling techniques, in particular the Box-Muller method for generating samples from two independent N(0,1) distributions. We began by trying the invert-the-CDF approach for sampling, and noticed that we get “stuck” at the same place where we got stuck in class while computing I = \int_{-\infty}^{\infty} e^{-x^2/2} dx. The trick there was to compute I^2 instead and exploit its radial symmetry. We borrowed the same trick for our problem: instead of trying to generate one Gaussian sample, what if we generate two at once?

We then saw that if x and y are sampled independently from standard Gaussians, then the random variable R^2 = x^2 + y^2 behaves as an exponential R.V. (with mean 2), and the angle \theta = \tan^{-1}(y/x) behaves as a uniform R.V. on [0,2\pi). From this, we saw that if we “sample” R^2 and \theta from these distributions and convert back to Cartesian coordinates, we get a pair (x,y) in which each coordinate is distributed as a standard Gaussian, and furthermore the two are independent! (A short code sketch of this appears at the end of this post.)

(2) We then moved on to the “opposite” problem of estimating a Gaussian from a collection of samples. For this, we calculated the parameters (mean and variance) that maximize the likelihood of the sample we are given, and some calculus showed that the maximum-likelihood estimate of the mean is \hat{\mu} = (1/N) \sum_{i=1}^{N} x_i and of the variance is \hat{\sigma}^2 = (1/N) \sum_{i=1}^{N} (x_i - \hat{\mu})^2. (See the second code sketch at the end of this post.)

(3) Finally, we saw the difference between convergence in probability and convergence “almost surely,” and how these two notions give the Weak and Strong forms of the Law of Large Numbers, respectively.
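Here is a minimal code sketch (my own, not from the recitation) of the Box-Muller transform from item (1): sample R^2 = -2\ln U_1, which is exponential with mean 2, and \theta = 2\pi U_2, then convert back to Cartesian coordinates.

```python
import math, random

def box_muller(rng=random):
    """Return two independent N(0,1) samples via the Box-Muller transform."""
    u1 = 1.0 - rng.random()              # in (0, 1], avoids log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))   # R^2 = -2 ln U1 ~ Exponential with mean 2
    theta = 2.0 * math.pi * u2           # theta ~ Uniform[0, 2*pi)
    return r * math.cos(theta), r * math.sin(theta)

random.seed(0)
xs = [box_muller()[0] for _ in range(100_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(f"sample mean ~ {mean:.3f}, sample variance ~ {var:.3f}")  # close to 0 and 1
```

And a similarly small sketch of the maximum-likelihood estimates from item (2), run on synthetic data (the “true” parameters below are made up purely for illustration).

```python
import random

def gaussian_mle(xs):
    """MLE for a Gaussian: mean (1/N) sum x_i, variance (1/N) sum (x_i - mean)^2."""
    n = len(xs)
    mu_hat = sum(xs) / n
    var_hat = sum((x - mu_hat) ** 2 for x in xs) / n   # note the 1/N, not 1/(N-1)
    return mu_hat, var_hat

random.seed(0)
data = [random.gauss(2.0, 3.0) for _ in range(50_000)]   # true mean 2, std dev 3
mu_hat, var_hat = gaussian_mle(data)
print(f"mu_hat ~ {mu_hat:.2f}, var_hat ~ {var_hat:.2f}")  # close to 2 and 9
```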

November 18, 2010

Lecture 24

Filed under: Lectures — cmu15359 @ 4:00 pm

We defined the moment generating function (MGF) \mathbb{E}[e^{tX}] of a random variable X. We stated the result that the MGF, when it exists, characterizes the distribution of the random variable. The reason for the name moment generating function is that the MGF is the exponential generating function of the moments \mathbb{E}[X^k] of X. We calculated the MGF of the standard Gaussian to be e^{t^2/2}. We saw that the MGF behaves very nicely under convolution and scaling:

\displaystyle M_{X+Y}(t) = M_X(t) M_Y(t) \quad \mathrm{and} \quad M_{cX}(t) = M_X(ct)

Using this we proved that if X_1,X_2,\ldots,X_n are i.i.d. copies of a random variable X with mean 0, variance 1, and bounded moments, then the MGF of Z_n = (X_1+X_2+\cdots+X_n)/\sqrt{n} tends to e^{t^2/2} for large n, i.e.,

\displaystyle \lim_{n \to \infty} M_{Z_n}(t) = e^{t^2/2}

We concluded that the Z_n‘s converge to the standard Gaussian distribution, in the sense stated by the central limit theorem. We also stated the Berry-Esseen theorem, which gives a quantitative bound on the error term (or “Kolmogorov distance”) in the CLT.
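As a quick numerical check of the limit above (my own illustration, using a concrete choice of X not specified in the lecture), take X uniform on \{-1,+1\}, so M_X(t) = \cosh(t) and M_{Z_n}(t) = \cosh(t/\sqrt{n})^n:

```python
import math

t = 1.5
for n in (1, 10, 100, 10_000):
    mgf_zn = math.cosh(t / math.sqrt(n)) ** n   # M_{Z_n}(t) for X uniform on {-1,+1}
    print(f"n = {n:>6}:  M_Zn(t) = {mgf_zn:.5f}")
print(f"limit exp(t^2/2)   = {math.exp(t * t / 2):.5f}")   # about 3.08
```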

With this, we moved to the final special-topics segment of the course. We began a discussion of the first special topic: Introduction to Machine Learning. Today we saw the concept learning setting and an algorithm for learning decision lists, along with its analysis.
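For reference, here is a hedged sketch (my own code, following the standard greedy approach, which may differ in details from the in-class presentation) of learning a 1-decision list consistent with a labeled sample: repeatedly find an “if literal then label” rule that is correct on every remaining example it fires on (and fires on at least one), output it, and discard the covered examples.

```python
def learn_decision_list(examples):
    """Greedy learner for 1-decision lists.
    examples: list of (x, y) with x a tuple of 0/1 features and y in {0, 1}.
    Returns a list of rules (feature_index, feature_value, label);
    a rule with feature_index None is the default ("else") rule."""
    remaining = list(examples)
    n_features = len(examples[0][0])
    rules = []
    while remaining:
        rule = None
        # Look for a literal x_i = v such that every remaining example
        # satisfying it has the same label (and at least one satisfies it).
        for i in range(n_features):
            for v in (0, 1):
                covered = [(x, y) for x, y in remaining if x[i] == v]
                if covered and len({y for _, y in covered}) == 1:
                    rule = (i, v, covered[0][1])
                    break
            if rule:
                break
        if rule is None:
            labels = {y for _, y in remaining}
            if len(labels) == 1:
                rules.append((None, None, labels.pop()))   # default rule
                break
            raise ValueError("no consistent 1-decision list over single literals")
        i, v, _ = rule
        rules.append(rule)
        remaining = [(x, y) for x, y in remaining if x[i] != v]
    return rules

# Toy usage: data labeled by the list "if x0=1 then 1, else if x2=0 then 0, else 1".
data = [((1, 0, 1), 1), ((1, 1, 0), 1), ((0, 1, 0), 0), ((0, 0, 0), 0), ((0, 1, 1), 1)]
print(learn_decision_list(data))
```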
