
Lower Bounds on Kernelization

Venkatesh Raman

Institute of Mathematical Sciences, Chennai

March 6, 2014

Venkatesh Raman Lower Bounds on Kernelization


Some known kernelization results

Linear: MaxSat – 2k clauses, k variables

Quadratic: k-Vertex Cover – 2k vertices but O(k^2) edges

Cubic: k-Dominating Set in graphs without C4 – O(k^3) vertices

Exponential: k-Path – 2^{O(k)}

No Kernel: k-Dominating Set is W[2]-hard, so it is not expected to have a kernel of any size.

In this lecture, we will see some techniques to rule out polynomial kernels.
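The MaxSat bound above comes from a simple averaging argument: a uniformly random assignment satisfies each clause with probability at least 1/2, so an instance with at least 2k clauses is automatically a YES-instance. A minimal sketch of that kernelization (the clause encoding is my own assumption):

```python
def maxsat_kernel(clauses, k):
    """Kernelize MaxSat (parameter k = number of clauses to satisfy).
    A random assignment satisfies >= m/2 clauses in expectation, so if
    m >= 2k some assignment satisfies >= k clauses: answer YES outright.
    Otherwise the instance itself has < 2k clauses, i.e. it is the kernel.
    Clauses are sets of nonzero ints; -v denotes the negation of v."""
    if len(clauses) >= 2 * k:
        return "YES"
    return clauses, k
```

Note the kernel also has O(k) relevant variables, since each surviving clause can be shrunk to at most k literals.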


OR of a language

Definition

Let L ⊆ {0, 1}∗ be a language. Then define

Or(L) = {(x_1, . . . , x_p) | ∃i such that x_i ∈ L}

Definition

Let t : N → N \ {0} be a function. Then define

Or_t(L) = {(x_1, . . . , x_{t(|x_1|)}) | ∀j, |x_j| = |x_1|, and ∃i such that x_i ∈ L}
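Concretely, membership in Or_t(L) is a length check plus one membership test succeeding. A small sketch, where the decider for L is a toy stand-in of my own:

```python
def in_or_t(instances, in_L, t):
    """Membership test for Or_t(L): there are t(n) instances, all sharing
    the length n of the first one, and at least one belongs to L."""
    n = len(instances[0])
    return (len(instances) == t(n)
            and all(len(x) == n for x in instances)
            and any(in_L(x) for x in instances))

# toy language L: binary strings with an even number of 1s
even_parity = lambda s: s.count("1") % 2 == 0
```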


Distillation

Let L, L′ ⊆ {0, 1}∗ be a pair of languages and let t : N → N \ {0} be a function. We say that L has a t-bounded distillation algorithm if there exists

a polynomial time computable function f : {0, 1}∗ → {0, 1}∗

such that

f((x_1, . . . , x_{t(|x_1|)})) ∈ L′ if and only if (x_1, . . . , x_{t(|x_1|)}) ∈ Or_t(L), and

|f((x_1, . . . , x_{t(|x_1|)}))| ≤ O(t(|x_1|) log t(|x_1|)).


Fortnow-Santhanam

Theorem (FS 09)

Suppose for a pair of languages L, L′ ⊆ {0, 1}∗, there exists a polynomially bounded function t : N → N \ {0} such that L has a t-bounded distillation algorithm. Then the complement of L is in NP/poly. In particular, if L is NP-hard, then coNP ⊆ NP/poly.


Outline of proof of the Fortnow–Santhanam theorem

An NP-complete problem L with A, a t-bounded distillation algorithm.

Use A to design an NDTM that, with a polynomial advice string, decides the complement of L in polynomial time.

L̄ ∈ NP/poly ⇒ coNP ⊆ NP/poly, and we get the theorem!


Filling in the details

For the proof, we define the notions needed and the requirements.

Let |x_i| = n ∀i ∈ [t(n)].

Let α(n) = O(t(n) log(t(n))).

Let L̄n = {x ∈ {0, 1}^n : x /∈ L}, the length-n strings outside L, and let L̄′_{≤α(n)} be the strings of length at most α(n) outside L′.

Given any (x_1, x_2, · · · , x_{t(n)}) /∈ Or_t(L) (i.e., x_i ∈ L̄n ∀i ∈ [t(n)]),

A maps it to some y ∈ L̄′_{≤α(n)}.

We want to obtain a set S_n ⊆ L̄′_{≤α(n)} with |S_n| polynomially bounded in n such that:

If x ∈ L̄n: ∃ strings x_1, · · · , x_{t(n)} ∈ Σ^n with x_i = x for some i such that A(x_1, · · · , x_{t(n)}) ∈ S_n.

If x /∈ L̄n: ∀ strings x_1, · · · , x_{t(n)} ∈ Σ^n with x_i = x for some i, A(x_1, · · · , x_{t(n)}) /∈ S_n.


How will the nondeterministic algorithm work?

Having S_n as advice gives the desired NDTM which, given x such that |x| = n, checks whether x ∈ L̄ (i.e., decides the complement of L) in the following way.

Guesses t(n) strings x_1, · · · , x_{t(n)} ∈ Σ^n.

Checks whether one of them is x

Computes A(x_1, · · · , x_{t(n)}) and accepts iff the output is in S_n.
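For intuition, the nondeterministic guess can be replaced by exhaustive search over all tuples, feasible only at toy sizes. A sketch (the function name and the toy map A in the test are my own):

```python
from itertools import product

def decide_with_advice(x, t, A, S_n, alphabet="01"):
    """Deterministic simulation of the NDTM: accept x (of length n)
    iff some tuple of t length-n strings that contains x is mapped
    by the distillation A into the advice set S_n."""
    n = len(x)
    strings = ["".join(p) for p in product(alphabet, repeat=n)]
    return any(x in tup and A(tup) in S_n
               for tup in product(strings, repeat=t))
```

The two conditions on S_n from the previous slide are exactly what make this simulation sound: members of the language are covered, non-members never lead into S_n.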


How to get Sn

A : (L̄n)^{t(n)} → L̄′_{≤α(n)}

y ∈ L̄′_{≤α(n)} covers a string x ∈ L̄n if ∃ x_1, · · · , x_{t(n)} ∈ Σ^n with x_i = x for some i and A(x_1, · · · , x_{t(n)}) = y.

We construct S_n by iteratively picking the string in L̄′_{≤α(n)} that covers the largest number of uncovered instances in L̄n, until there are no strings left to cover.

Let us consider one step of the process. Let F be the set of uncovered instances in L̄n at the start of the step.

By the pigeonhole principle there exists a string y ∈ L̄′_{≤α(n)} such that A maps at least |F|^{t(n)} / |L̄′_{≤α(n)}| tuples in F^{t(n)} to y.
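The iterative choice here is the standard greedy set-cover heuristic. A sketch, assuming the cover sets have been precomputed (by enumerating tuples) rather than derived from A on the fly:

```python
def build_advice(universe, cover_of):
    """Greedy construction of S_n: repeatedly pick the candidate y whose
    cover set hits the most still-uncovered strings, until the universe
    (the strings to be covered) is exhausted."""
    S, uncovered = [], set(universe)
    while uncovered:
        y = max(cover_of, key=lambda c: len(cover_of[c] & uncovered))
        if not (cover_of[y] & uncovered):
            raise RuntimeError("some string is uncoverable")  # cannot happen if A is total
        S.append(y)
        uncovered -= cover_of[y]
    return S
```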


How to get Sn (Cont.)

At least (|F|^{t(n)} / |L̄′_{≤α(n)}|)^{1/t(n)} = |F| / |L̄′_{≤α(n)}|^{1/t(n)} strings in F are covered by y in each step.

We can restate this by saying that at least a ϕ(n) fraction of the remaining set is covered in each iteration, where

ϕ(n) = 1 / |L̄′_{≤α(n)}|^{1/t(n)} = 1 / 2^{(α(n)+1)/t(n)}

There were 2^n strings to cover at the start. So the number of strings left to cover after p steps is at most

(1 − ϕ(n))^p · 2^n ≤ 2^n / e^{ϕ(n)·p}

which is less than one for p = O(n/ϕ(n)).

So the process ends after O(n/ϕ(n)) ≤ n · 2^{(α(n)+1)/t(n)} steps, which is polynomial in n since α(n) = O(t(n) log t(n)) and t is polynomially bounded.
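A quick numeric sanity check of the last estimate (the function name is mine): for fixed n and ϕ, the smallest p with (1 − ϕ)^p · 2^n < 1 is indeed within the O(n/ϕ) bound.

```python
import math

def steps_until_covered(n, phi):
    """Smallest p with (1 - phi)**p * 2**n < 1, computed from the
    closed form and nudged up to absorb rounding; the proof bounds
    this by O(n / phi)."""
    p = math.ceil(n * math.log(2) / -math.log1p(-phi))
    while (1 - phi) ** p * 2 ** n >= 1:
        p += 1
    return p
```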


Take away: a few comments about the theorem

coNP ⊆ NP/poly implies PH = Σ_3^p.

The theorem gives us the collapse even if the distillation algorithm is allowed to be co-nondeterministic.

The main message is that if you have t(n) instances of size n, you cannot, in polynomial time, produce an instance of size O(t(n) log t(n)) that is equivalent to the OR of them.
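To see how drastic that bound is, compare the total input size t·n with the allowed output size (constants in the O(·) taken as 1; a back-of-the-envelope sketch with names of my own):

```python
import math

def distillation_gap(n, t):
    """Total input size of t instances of size n versus the O(t log t)
    output budget of a distillation; the output budget does not depend
    on n at all, so the gap grows linearly in n."""
    total_in = t * n
    budget = t * max(1, math.ceil(math.log2(t)))
    return total_in, budget
```

For instance, a thousand instances of size a thousand occupy a million bits, while the distillation output may use only about ten thousand.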


How to use the theorem to prove kernel lower bounds

We know that NP-complete problems cannot have a distillation algorithm unless coNP ⊆ NP/poly.

We want to define some analogue of distillation that produces an instance (x, k) of a parameterized problem L′, starting from many instances of an NP-complete language L.

We call such an algorithm a composition algorithm. We willdefine it formally in the next slide.

The goal is that a composition of an NP-complete language L into L′, combined with a kernel of a certain size for L′, gives us a distillation of L.

So, if we can show that a composition algorithm exists from L to L′ with the desired properties, then L′ cannot have a kernel of a certain size.


Weak d-Composition

(Weak d-composition). Let L̃ ⊆ Σ∗ be a set and let Q ⊆ Σ∗ × N be a parameterized problem. We say that L̃ weak d-composes into Q if there is an algorithm C which, given t strings x_1, x_2, . . . , x_t, takes time polynomial in ∑_{i=1}^t |x_i| and outputs an instance (y, k) ∈ Σ∗ × N such that the following hold:

k ≤ t^{1/d} · (max_{i=1}^t |x_i|)^{O(1)}

The output is a YES-instance of Q if and only if at least one instance x_i is a YES-instance of L̃.

Theorem

Let L̃ ⊆ Σ∗ be an NP-hard set. If L̃ weak d-composes into the parameterized problem Q, then Q has no kernel of size O(k^{d−ε}) for any ε > 0 unless coNP ⊆ NP/poly.
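The canonical example of a composition is k-Path: the disjoint union of t graphs, all carrying the same parameter k, has a path on k vertices iff some component does, and k does not grow with t at all. A sketch (the adjacency-dict encoding is my own assumption):

```python
def compose_k_path(instances):
    """Compose t instances (graph, k) of k-Path, all sharing the same k,
    into their disjoint union with parameter k: the union is a
    YES-instance iff some input is. Each graph is a dict mapping a
    vertex to the set of its neighbours."""
    ks = {k for _, k in instances}
    assert len(ks) == 1, "composition assumes a shared parameter"
    union = {}
    for i, (graph, _) in enumerate(instances):
        for v, nbrs in graph.items():
            union[(i, v)] = {(i, u) for u in nbrs}  # tag vertices per input
    return union, ks.pop()
```

Since the output parameter is independent of t, this rules out kernels of size k^{d−ε} for every d, so k-Path has no polynomial kernel unless coNP ⊆ NP/poly.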


Proof of the theorem

Theorem

Let L̃ ⊆ Σ∗ be an NP-hard set. If L̃ weak d-composes into the parameterized problem Q, then Q has no kernel of size O(k^{d−ε}) for any ε > 0 unless coNP ⊆ NP/poly.

Proof. Let |x_i| = n ∀i ∈ [t(n)] for the input of the composition. After applying the kernelization to the composed instance, the size of the instance we get is

O((t(n)^{1/d} n^c)^{d−ε}) = O(t(n)^{1−ε/d} n^{c(d−ε)})

= O(t(n)) (for t(n) a large enough polynomial in n)

= O(t(n) log t(n)),

so the composition followed by the kernelization is a distillation of L̃, contradicting the Fortnow–Santhanam theorem unless coNP ⊆ NP/poly.


Some comments about composition

In the composition, we asked for the parameter k to be at most t^{1/d} · n^{O(1)}. That ruled out kernels of size k^{d−ε}.

What if we can output an instance with k = t^{o(1)} · n^{O(1)}? Then we can rule out kernels of size k^{d−ε} for ALL d!

We call such an algorithm just “composition”.

Since the theorem of Fortnow–Santhanam allows co-nondeterminism, coNP compositions can also be used for proving lower bounds.

Sometimes getting a composition from arbitrary instances of a language can be difficult.

Some structure on the input instances helps to get a composition (next slide).


Polynomial Equivalence Relation

(Polynomial Equivalence Relation). An equivalence relation R on Σ∗ is called a polynomial equivalence relation if the following two conditions hold:

1. There is an algorithm that, given two strings x, y ∈ Σ∗, decides whether x and y belong to the same equivalence class in (|x| + |y|)^{O(1)} time.

2. For any finite set S ⊆ Σ∗, the equivalence relation R partitions the elements of S into at most (max_{x∈S} |x|)^{O(1)} classes.
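In practice such a relation is usually "agree on a few numeric parameters". A sketch where instances are (string, k) pairs and two instances are equivalent when they agree on (length, parameter):

```python
from collections import defaultdict

def per_classes(instances):
    """Partition (x, k) instances by (|x|, k): deciding equivalence is a
    constant-time comparison, and inputs of length <= N with k <= N fall
    into at most N**2 classes, polynomially many as the definition asks."""
    classes = defaultdict(list)
    for x, k in instances:
        classes[(len(x), k)].append((x, k))
    return dict(classes)
```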


What to do with Polynomial Equivalence Relation

The equivalence relation can partition the input on the basis of different parameters. These equivalence classes can be used to give the input of the composition a nice structure.

Helpful choices are often classes of instances that share the same number of vertices, or the same requested solution size, etc.

Then all we need to do is come up with a composition algorithm for instances belonging to the same equivalence class.

Since there are only polynomially many equivalence classes, in the end we can just output an instance of Or(L′).

The next slide is a nice illustration of this method by Michał Pilipczuk.


Proof

[Figure (schematic): NP-hard input instances are partitioned into equivalence classes 1, . . . , k′; each class is composed (cmp) into one instance of L with parameter poly(k); each composed instance is kernelized (kern); the kernels are combined into an OR-L instance, which is finally turned into OR-SAT. From Michał Pilipczuk's no-poly-kernels tutorial, slide 11/31.]

Page 50: Kernel Lower Bounds

ProofOR-SAT

OR-SAT

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

L

NP

-hrd

1

NP

-hrd

1

NP

-hrd

1

NP

-hrd

2

NP

-hrd

2

NP

-hrd

2NP

-hrdk′

NP

-hrd

k′

NP

-hrd

k′

L

cmp

poly(k)

cmp

poly(k)

cmp

poly(k)

L

kern

kern

kern

OR-L

OR-L

Michał Pilipczuk No-poly-kernels tutorial 11/31

Page 51: Kernel Lower Bounds

ProofOR-SAT

OR-SAT

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

L

NP

-hrd

1

NP

-hrd

1

NP

-hrd

1

NP

-hrd

2

NP

-hrd

2

NP

-hrd

2NP

-hrdk′

NP

-hrd

k′

NP

-hrd

k′

L

cmp

poly(k)

cmp

poly(k)

cmp

poly(k)

L

kern

kern

kern

OR-L

OR-L

Michał Pilipczuk No-poly-kernels tutorial 11/31

Page 52: Kernel Lower Bounds

ProofOR-SAT

OR-SAT

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

L

NP

-hrd

1

NP

-hrd

1

NP

-hrd

1

NP

-hrd

2

NP

-hrd

2

NP

-hrd

2NP

-hrdk′

NP

-hrd

k′

NP

-hrd

k′

L

cmp

poly(k)

cmp

poly(k)

cmp

poly(k)

L

kern

kern

kern

OR-L

OR-L

Michał Pilipczuk No-poly-kernels tutorial 11/31

Page 53: Kernel Lower Bounds

ProofOR-SAT

OR-SAT

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

L

NP

-hrd

1

NP

-hrd

1

NP

-hrd

1

NP

-hrd

2

NP

-hrd

2

NP

-hrd

2NP

-hrdk′

NP

-hrd

k′

NP

-hrd

k′

L

cmp

poly(k)

cmp

poly(k)

cmp

poly(k)

L

kern

kern

kern

OR-L

OR-L

Michał Pilipczuk No-poly-kernels tutorial 11/31

Page 54: Kernel Lower Bounds

ProofOR-SAT

OR-SAT

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

L

NP

-hrd

1

NP

-hrd

1

NP

-hrd

1

NP

-hrd

2

NP

-hrd

2

NP

-hrd

2NP

-hrdk′

NP

-hrd

k′

NP

-hrd

k′

L

cmp

poly(k)

cmp

poly(k)

cmp

poly(k)

L

kern

kern

kern

OR-L

OR-L

Michał Pilipczuk No-poly-kernels tutorial 11/31

Page 55: Kernel Lower Bounds

ProofOR-SAT

OR-SAT

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

L

NP

-hrd

1

NP

-hrd

1

NP

-hrd

1

NP

-hrd

2

NP

-hrd

2

NP

-hrd

2NP

-hrdk′

NP

-hrd

k′

NP

-hrd

k′

L

cmp

poly(k)

cmp

poly(k)

cmp

poly(k)

L

kern

kern

kern

OR-L

OR-L

Michał Pilipczuk No-poly-kernels tutorial 11/31

Page 56: Kernel Lower Bounds

ProofOR-SAT

OR-SAT

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

NP

-hrd

L

NP

-hrd

1

NP

-hrd

1

NP

-hrd

1

NP

-hrd

2

NP

-hrd

2

NP

-hrd

2NP

-hrdk′

NP

-hrd

k′

NP

-hrd

k′

L

cmp

poly(k)

cmp

poly(k)

cmp

poly(k)

L

kern

kern

kern

OR-L

OR-L

Michał Pilipczuk No-poly-kernels tutorial 11/31

Page 57: Kernel Lower Bounds

Take away

We use compositions to rule out polynomial kernels.

A composition from an NP-hard problem L to a parameterized problem L′ gives kernelization hardness for L′.

k = t^{o(1)} · n^c ⇒ no polynomial kernel.

k = t^{1/d} · n^c ⇒ no kernel of size k^{d−ε}.

We can make use of equivalence classes to give structure to the input of the composition.

Examples on the board!


Thank You!
