2011-12-10

Re: Practice Final Answers - Post your answers here.

Originally Posted By: jnewth

Problem 8: Show GNI is in IP.


Addressed on page 148 of the book and in the notes of November 30th. The problem is relevant because it is a problem in IP not known to be in NP.

There is a great image of two isomorphic graphs and the permutation description at wikipedia: http://en.wikipedia.org/wiki/Graph_isomorphism

Graph isomorphism (GI)

GI, the problem of deciding whether two graphs are isomorphic (i.e., the vertices of one can be permuted, without changing the edge structure, to obtain the other graph), is in NP, because a certificate can be a description of the permutation. It is not known whether GNI, the problem of deciding whether two graphs are NOT isomorphic, is in NP. It can, however, be stated as a problem in IP in the following way:

V: Pick i ∈ {0,1} uniformly at random. Randomly permute the vertices of graph Gi to get a new graph H.
V: Send H to the prover P.
P: Identify which graph, G0 or G1, H came from. If it could have come from either, guess which one. Send the answer j back to V.
V: If i == j accept; else reject.
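
To make the round concrete, here is a toy Python simulation of my own (the honest prover simply brute-forces isomorphism, which an all-powerful prover is allowed to do; all names below are mine, not from the notes):

import random
from itertools import permutations

def edges(pairs):
    """Undirected edge set represented as frozensets."""
    return {frozenset(p) for p in pairs}

def relabel(edge_set, perm):
    """Apply the vertex relabeling perm (sequence: old index -> new label) to an edge set."""
    return {frozenset(perm[v] for v in e) for e in edge_set}

def isomorphic(e0, e1, n):
    """Brute force over all n! relabelings (fine for toy graphs)."""
    return any(relabel(e0, p) == e1 for p in permutations(range(n)))

def one_round(g0, g1, n):
    """One round of the protocol with an honest, computationally unbounded prover."""
    i = random.randrange(2)                       # V: secret coin
    perm = list(range(n)); random.shuffle(perm)   # V: random permutation
    h = relabel((g0, g1)[i], perm)                # V: H sent to P
    fits0, fits1 = isomorphic(g0, h, n), isomorphic(g1, h, n)
    j = random.randrange(2) if fits0 and fits1 else (0 if fits0 else 1)  # P's answer
    return i == j                                 # V accepts iff the answer matches the coin

path     = edges([(0, 1), (1, 2)])                # not isomorphic to a triangle
triangle = edges([(0, 1), (1, 2), (0, 2)])
print(sum(one_round(path, triangle, 3) for _ in range(200)) / 200)   # -> 1.0
same     = edges([(0, 2), (2, 1)])                # just a relabeled path
print(sum(one_round(path, same, 3) for _ in range(200)) / 200)       # -> about 0.5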

If the graphs are not isomorphic, then there is a prover that can always tell them apart; that is, there exists a P such that Pr[V accepts] = 1. This is greater than 2/3, so the completeness requirement for IP is met.

If, on the other hand, the graphs are isomorphic, then H could have come from either graph, so the best any prover can do is guess between G0 and G1. In that case Pr[V accepts] = 1/2 for every prover, which can be pushed below the 1/3 soundness threshold by repeating the protocol, so the soundness requirement for IP is met as well.

jnewth — Sat Dec 10, 2011 6:46 pm

Practice Final Answers - Post your answers here
Originally Posted By: jnewth

Problem 7: Show BPP is in Sigma-P2.


This is the Sipser–Gács proof, presented on page 136 of the book and in the notes of November 28.

Proof Idea:
BPP is contained in the intersection of Sigma-P2 and Pi-P2. It is enough to show containment in Sigma-P2: the Pi-P2 containment then follows because BPP = coBPP, i.e. BPP is closed under complementation.

If x is in a language L in BPP, then most certificates r cause the machine M to accept x; if x is not in L, then only a small number of certificates cause M to accept x erroneously. The rest of this proof uses the concept of "shifts" to show that comparing the sizes of these two sets of certificates (those that cause M to accept x correctly or erroneously) can be phrased as a Sigma-P2 computation. Math ensues:

For a language L in BPP:

x ∈ L ⇒ Pr[M(x, r) accepts] ≥ 1 - 2^-n
x ∉ L ⇒ Pr[M(x, r) accepts] ≤ 2^-n

For x of length n, let Sx be the set of certificates r of length m for which M(x, r) accepts.

If x ∈ L, then |Sx| = Pr[M(x, r) accepts] * (number of strings of length m), so
|Sx| ≥ (1 - 2^-n) * 2^m

And by similar reasoning, for x ∉ L:

|Sx| = Pr[M(x, r) accepts erroneously] * (number of strings of length m), so
|Sx| ≤ 2^-n * 2^m = 2^(m-n)

Shift: If we have a set S of strings of length m and a shift string u of length m, we shift S by u by taking the bitwise XOR (bitwise addition mod 2) of every string in S with u, creating a new set S ⊕ u of the same size as the original set S. We generate a new set for each shift.

Claim 1: For every set S ⊆ {0,1}^m with |S| ≤ 2^(m-n) and every k vectors u1,...,uk, the union of the shifted sets (S ⊕ u1) ∪ ... ∪ (S ⊕ uk) is not all of {0,1}^m.

Proof: Each shifted set has the same number of strings as the original set, so the union of all the shifted sets has at most k*|S| ≤ k*2^(m-n) strings. Since k is polynomial in n (in the standard argument k = ⌈m/n⌉ + 1, and m = |r| is polynomial in n), we have k < 2^n for large enough n, so k*|S| < 2^m, the total number of strings of length m. The algebra is in the notes.

Claim 2: For every set S⊆{0,1}^m with |S|≥(1-2^-n)*(2^m), there exist u1,...,uk such that the union of all shifted sets = {0,1}^m

(i.e., a big enough set with only k shifts CAN cover all 2^m strings of length m)
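
Before the proof, a quick numerical sanity check of Claim 2 in Python on toy parameters (m = 12, n = 4, k = m/n + 1 = 4; my own illustration, not part of the argument):

import random

m, n = 12, 4
k = m // n + 1                      # k shifts suffice whenever n*k > m
universe = list(range(2 ** m))

# S: a "large" set, missing only a 2^-n fraction of all length-m strings.
missing = set(random.sample(universe, 2 ** (m - n)))
S = set(universe) - missing

trials, covered = 200, 0
for _ in range(trials):
    shifts = [random.randrange(2 ** m) for _ in range(k)]
    union = {x ^ u for u in shifts for x in S}   # union of the shifted copies S XOR u_i
    covered += (len(union) == 2 ** m)
# Typically all or nearly all trials cover the whole cube {0,1}^m.
print(f"random shifts covered all of {{0,1}}^{m} in {covered}/{trials} trials")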

We want to show that there is at least SOME choice of shifts for which this is true. We do this by choosing u1,...,uk uniformly at random and showing
Pr[there exists a string of length m that is not in S', the union of the shifted sets] < 1,
so by the probabilistic method some choice of shifts covers all of {0,1}^m. Combining the two claims, x ∈ L if and only if ∃ u1,...,uk ∀ r ∈ {0,1}^m: M(x, r ⊕ ui) accepts for at least one i ≤ k; the inner "for at least one i" is a polynomial-size OR that the verifier can check directly, so this is a Sigma-P2 statement.

jnewth — Sat Dec 10, 2011 6:25 pm

-- Practice Final Answers - Post your answers here
Originally Posted By: Nishant
Q-2) Give an NC circuit family whose nth member, Cn, computes the parity of its n input bits.

Solution:
The language PARITY = {x : x has an odd number of 1s} is in NC1. The circuit computing it has the form of a binary tree. The answer appears at the root; the left subtree computes the parity of the first |x|/2 bits and the right subtree computes the parity of the remaining bits. The gate at the root computes the parity (XOR) of these two results. Unwrapping the recursion implicit in this description gives a circuit of depth O(log n). The family is also logspace-uniform.
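
Here is a small Python sketch of that recursion (my own illustration; each XOR can be expanded into a constant number of AND/OR/NOT gates, so the O(log n) depth bound is unaffected):

import math

def parity_circuit(bits):
    """Evaluate the balanced XOR tree on the input bits; return (value, depth)."""
    if len(bits) == 1:
        return bits[0], 0
    mid = len(bits) // 2
    left, dl = parity_circuit(bits[:mid])    # parity of the first half
    right, dr = parity_circuit(bits[mid:])   # parity of the second half
    return left ^ right, 1 + max(dl, dr)     # one more XOR gate at the root

x = [1, 0, 1, 1, 0, 1, 0, 1]
value, depth = parity_circuit(x)
print(value, depth, math.ceil(math.log2(len(x))))   # 1 3 3 -- depth is ceil(log2 n)
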
2011-12-13

-- Practice Final Answers - Post your answers here
Originally Posted By: cwong

-- Practice Final Answers - Post your answers here
Originally Posted By: Jay_Reynolds_Freeman
Problem 5 -- submitted by Jay Freeman:

Give the randomized p-time algorithm for 2SAT and explain how its running time is determined.

This matter is described in the lecture of 23 November, slides 3 through 5. The basic idea is to start with any (random) assignment of variables and repeat the following N times:

If the formula is satisfied
    emit "Satisfied"
Else
    take any unsatisfied clause, and flip one of its literals (randomly chosen)

After N repetitions, emit, "Probably not satisfiable."
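
A minimal Python sketch of this walk (my own; I assume clauses are given as pairs of nonzero integers, with -v meaning the negation of variable v):

import random

def random_walk_2sat(clauses, n, N=None):
    """Return a satisfying assignment, or None after N random flips."""
    N = N if N is not None else 2 * n * n          # 2n**2 flips, as analyzed below
    assign = {v: random.choice([False, True]) for v in range(1, n + 1)}
    value = lambda lit: assign[abs(lit)] if lit > 0 else not assign[abs(lit)]
    for _ in range(N):
        unsat = [c for c in clauses if not (value(c[0]) or value(c[1]))]
        if not unsat:
            return assign                          # "Satisfied"
        lit = random.choice(random.choice(unsat))  # random literal of a random unsat clause
        assign[abs(lit)] = not assign[abs(lit)]    # flip it
    return None                                    # "Probably not satisfiable"

# (x1 or x2) and (not x1 or x2) and (x1 or not x2): satisfied only by x1 = x2 = True.
print(random_walk_2sat([(1, 2), (-1, 2), (1, -2)], n=2, N=100))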

The key is to pick the number of repetitions N large enough to give a decent probability of finding a satisfying assignment, if indeed there is one.

For 2SAT, it turns out that N = 2 * n**2, where n is the number of variables, will make that probability at least 0.5. The proof involves setting up recurrence relations in t(i), the expected number of "flip" steps to obtain a solution when we start with i variables set differently from some fixed satisfying assignment. t(0) = 0 (of course), t(n) <= t(n-1) + 1, and t(i) <= 1 + (t(i-1) + t(i+1))/2 for 0 < i < n, because every unsatisfied clause contains at least one such "wrong" variable, so a random flip moves us closer with probability at least 1/2. Solving the recurrence gives t(i) <= n**2, and by Markov's inequality running for N = 2 * n**2 flips finds a satisfying assignment with probability at least 0.5 whenever one exists.

Jay_Reynolds_Freeman — Tue Dec 13, 2011 2:31 am

-- Practice Final Answers - Post your answers here
Originally Posted By: stang
Question #3: Consider the language {〈a, k, 〈a_1,...,a_n〉〉 | a is the k-th smallest a_i}. Show this is in BPP.

Answer:
The class notes of Nov 21 on PTMs show that the k-th smallest of a_1,...,a_n can be found by a probabilistic polynomial-time TM (for example, by randomized selection with random pivots, cutting the run off after polynomially many steps so the error stays bounded). Having found the k-th smallest a_i, the machine compares it with the given a and accepts if and only if they are equal. Since this is a polynomial-time PTM with bounded error, and BPP = U_c BPTIME(n^c) is exactly the class of languages decided by such machines, the language is in BPP.
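
For concreteness, here is a Python sketch of the kind of randomized selection the notes presumably have in mind (random pivots; the helper names are mine):

import random

def kth_smallest(items, k):
    """Return the k-th smallest element (1-indexed) using a random pivot."""
    pivot = random.choice(items)
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    if k <= len(smaller):
        return kth_smallest(smaller, k)
    if k <= len(smaller) + len(equal):
        return pivot
    return kth_smallest([x for x in items if x > pivot], k - len(smaller) - len(equal))

def in_language(a, k, a_list):
    """Accept iff a is the k-th smallest a_i."""
    return kth_smallest(list(a_list), k) == a

print(in_language(7, 3, [9, 7, 1, 4, 12]))   # True: sorted order is 1, 4, 7, 9, 12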

-- Practice Final Answers - Post your answers here
Originally Posted By: sctice
I apologize for posting this so late in the game. I'm not sure whether I was supposed to do 4 or 6, so I'll start with 4.

Question 4: State Markov's Theorem and prove it.

I'm assuming this question is referring to Markov's inequality, which is defined and proven here: http://www.cs.sjsu.edu/faculty/pollett/ ... ml#%285%29. I'll go ahead and write it out. The theorem says that any nonnegative random variable X satisfies Pr(X >= k * E[X]) <= 1/k for every k > 0. Proof: E[X] = sum over x of x * Pr(X = x) >= sum over x >= k*E[X] of x * Pr(X = x) >= k * E[X] * Pr(X >= k * E[X]); dividing both sides by k * E[X] gives the bound. There is also a nice survey of these and related inequalities at http://saravananthirumuruganathan.wordp ... qualities/.
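
A quick numerical check of the inequality (my own illustration) on a simple nonnegative random variable, the waiting time of a geometric distribution:

import random

def geometric(p=0.25):
    """Number of coin flips until the first success (mean 1/p = 4)."""
    t = 1
    while random.random() >= p:
        t += 1
    return t

samples = [geometric() for _ in range(100_000)]
mean = sum(samples) / len(samples)            # close to E[X] = 4
for k in (2, 3, 5):
    frac = sum(x >= k * mean for x in samples) / len(samples)
    print(f"Pr[X >= {k}*E[X]] ~ {frac:.3f} <= 1/{k} = {1/k:.3f}")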

Question 6: State Chernoff bounds.

Chernoff bounds are defined (and proved on the following slide) here: http://www.cs.sjsu.edu/faculty/pollett/ ... ml#%288%29. One common form: if X is the sum of n independent 0/1 random variables with E[X] = mu, then Pr[|X - mu| >= delta * mu] <= 2 * e^(-delta^2 * mu / 3) for 0 < delta < 1. The bounds are useful (among other reasons) because they provide guarantees on error reduction for repeated trials, even when (as with BPP) both false positives and false negatives are possible.
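
A small simulation (my own illustration) of that error-reduction use case: a hypothetical machine that errs with probability 0.3, run t times with a majority vote; the empirical error falls off roughly as the Chernoff bound predicts:

import random

def noisy_decider(correct_answer=True, error=0.3):
    """One run of a hypothetical bounded-error machine."""
    return correct_answer if random.random() >= error else not correct_answer

def majority_vote(t):
    votes = sum(noisy_decider() for _ in range(t))
    return votes > t / 2

for t in (1, 11, 41, 101):
    wrong = sum(not majority_vote(t) for _ in range(20_000)) / 20_000
    print(f"t = {t:4d} runs: empirical error ~ {wrong:.4f}")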