Exponential bound on capsets

In affine geometry, a cap set is a subset of $\mathbb{F}_3^n$ (an $n$-dimensional affine space over a three-element field) with no three elements in a line. See cap set for an introduction to the definitions and history of the problem. The cap set problem is the problem of finding the size of the largest possible cap set, as a function of $n$.[1] The first few cap set sizes are 1, 2, 4, 9, 20, 45, 112, ... (sequence A090245 in the OEIS).
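To illustrate the definition, here is a minimal Python sketch that checks the cap set condition directly, using the fact that three distinct points of $\mathbb{F}_3^n$ lie on a line exactly when they sum to zero coordinatewise. The helper name `is_cap_set` and the example set are illustrative choices, not taken from the source.

```python
from itertools import combinations

def is_cap_set(points, n):
    """Check that no three distinct points of `points` lie on a line in F_3^n.

    In F_3^n, three distinct points x, y, z are collinear exactly when
    x + y + z = 0 coordinatewise (mod 3).
    """
    pts = [tuple(p) for p in points]
    assert all(len(p) == n for p in pts)
    for x, y, z in combinations(pts, 3):
        if all((a + b + c) % 3 == 0 for a, b, c in zip(x, y, z)):
            return False
    return True

# These four points form a cap set in F_3^2, matching the value 4 in the
# sequence 1, 2, 4, 9, 20, 45, 112, ... quoted above.
example = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(is_cap_set(example, 2))                 # True

# Adding a fifth point breaks the property here: (0,1) + (1,0) + (2,2) = 0.
print(is_cap_set(example + [(2, 2)], 2))      # False
```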

Cap sets may be defined more generally as subsets of finite affine or projective spaces with no three elements in a line, where these objects are simply called caps.[2]


We prove here that for any prime power $q$, a subset $A \subseteq \mathbb{F}_q^n$ that contains no arithmetic progression of length $3$ has size at most $c^n$ for some constant $c < q$. Thus we prove that caps are small. The proof is taken from

Tao, Terry (May 2016). "Symmetric version of capset". 

who reformulated the original argument; the reformulation not only simplified the proof but also helped generalize it to many other results.

Proof idea

The method is reminiscent of the linear algebra or polynomial method in combinatorics. We will use the defining property of $A$ (that it contains no three-term arithmetic progression) to build a function that on the one hand is as complicated as $A$ is large, but on the other hand can be shown to be not complicated, and thus conclude that $A$ is small.

Fix a field $\mathbb{F}_q$. Recall that for any set $B$ and any $a \in B$, $\delta_a \colon B \to \mathbb{F}_q$ is the function returning $1$ on $a$ and $0$ otherwise.

If $A \subseteq \mathbb{F}_q^n$ contains no three-term arithmetic progression, then, as functions on $A \times A \times A$,

$$\delta_0(x + y + z) = \sum_{a \in A} \delta_a(x)\,\delta_a(y)\,\delta_a(z).$$

(For the cap set problem $q = 3$, and in characteristic $3$ three elements $x, y, z$ form an arithmetic progression exactly when $x + y + z = 0$; since $A$ is progression-free, the only solutions with $x, y, z \in A$ are the trivial ones $x = y = z$, which is exactly what the right-hand side records. For general odd $q$ the same argument works with the linear form $x - 2y + z$ in place of $x + y + z$.)
The proof then consists of showing that the right-hand side is as "complicated" as $A$ is large, which we do in the next subsection, and later that the left-hand side is simple, thus concluding that $A$ is small.
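As a quick sanity check of the displayed identity, the following Python sketch verifies it numerically for $q = 3$, $n = 2$, using an illustrative progression-free set; the helper names are ours, not from the source.

```python
from itertools import product

q, n = 3, 2
# A progression-free (cap) set in F_3^2, used only to illustrate the identity.
A = [(0, 0), (0, 1), (1, 0), (1, 1)]

def delta(a, x):
    """delta_a(x): 1 if x == a, else 0."""
    return 1 if x == a else 0

def delta0_sum(x, y, z):
    """delta_0(x + y + z): 1 if x + y + z = 0 coordinatewise in F_3^n."""
    return 1 if all((xi + yi + zi) % q == 0 for xi, yi, zi in zip(x, y, z)) else 0

# Check delta_0(x+y+z) == sum_a delta_a(x) delta_a(y) delta_a(z) on A x A x A.
ok = all(
    delta0_sum(x, y, z) == sum(delta(a, x) * delta(a, y) * delta(a, z) for a in A)
    for x, y, z in product(A, repeat=3)
)
print(ok)  # True, because A contains no nontrivial three-term progression
```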

Slice rank

Fix a field $\mathbb{F}$, and suppose we have arbitrary finite sets $X_1, \dots, X_k$. We can imagine any function $f \colon X_1 \times \cdots \times X_k \to \mathbb{F}$ as a $k$-dimensional cube containing elements of $\mathbb{F}$, its sides being indexed by $X_1, \dots, X_k$, and the value at position $(x_1, \dots, x_k)$ being $f(x_1, \dots, x_k)$.

In the case $k = 2$ this is a matrix, and we have its rank, which measures its "complexity". One of the definitions of the rank of a matrix is the minimal $r$ so that it can be written as the sum of $r$ rank-one matrices. Translating this to our correspondence of matrices indexed by $X_1 \times X_2$ with functions $f \colon X_1 \times X_2 \to \mathbb{F}$: for $k = 2$, a rank-one function is one of the form $f(x_1, x_2) = g(x_1)\,h(x_2)$ for some functions $g \colon X_1 \to \mathbb{F}$, $h \colon X_2 \to \mathbb{F}$. Thus the rank of a function $f \colon X_1 \times X_2 \to \mathbb{F}$ is the minimal $r$ so that

$$f(x_1, x_2) = \sum_{i=1}^{r} g_i(x_1)\,h_i(x_2)$$

for some functions $g_i \colon X_1 \to \mathbb{F}$, $h_i \colon X_2 \to \mathbb{F}$.
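As an illustration of this definition, the following NumPy sketch works over the real numbers for simplicity (whereas the argument above is over a finite field) and exhibits a rank-$3$ matrix as a sum of three rank-one summands obtained from its singular value decomposition; the variable names and the use of the SVD are our illustrative choices, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a real matrix of rank 3 as a product of thin factors.
M = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 6))
r = np.linalg.matrix_rank(M)
print(r)  # 3

# The SVD M = U S V^T gives one explicit decomposition of M into r rank-one
# summands (s_i * u_i) v_i^T, i.e. summands of the form g_i(x) h_i(y).
U, s, Vt = np.linalg.svd(M)
summands = [s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r)]
print(np.allclose(M, sum(summands)))  # True: r rank-one terms reconstruct M
```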

If we try to generalize rank to larger $k$, a natural suggestion is to say that the rank of $f$ is the minimal $r$ so that there is a decomposition

$$f(x_1, \dots, x_k) = \sum_{i=1}^{r} g_{i,1}(x_1)\,g_{i,2}(x_2)\cdots g_{i,k}(x_k),$$

and indeed one usually refers to this as the (tensor) rank. It turns out we will be interested in a similar notion, that of slice rank, which is the minimal $r$ so that there is a decomposition

$$f(x_1, \dots, x_k) = \sum_{i=1}^{r} g_i(x_{j_i})\,h_i(x_1, \dots, x_{j_i - 1}, x_{j_i + 1}, \dots, x_k)$$

for some indices $j_i \in \{1, \dots, k\}$ and functions $g_i$ of one variable and $h_i$ of the remaining $k - 1$ variables.

The meaning is that we are allowed to separate only one variable in each summand. For $k = 2$ this is the usual rank; for larger $k$ the slice rank is at most the tensor rank, but it can be smaller.

We let $\operatorname{srank}(f)$ denote the slice rank of $f$.
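As a small illustration that the slice rank can indeed be smaller than the tensor rank, the following sketch builds the standard $2 \times 2 \times 2$ example often called the W tensor, whose tensor rank is known to be $3$ (a fact we do not verify here), and checks a decomposition into only two slice summands. The construction and names are illustrative assumptions, not material from the source.

```python
import numpy as np

# The 2x2x2 "W tensor": T[x, y, z] = 1 exactly when one of x, y, z is 1
# and the other two are 0.  Its tensor rank is known to be 3, yet it admits
# a decomposition into only 2 slice summands, so its slice rank is at most 2.
T = np.zeros((2, 2, 2))
T[1, 0, 0] = T[0, 1, 0] = T[0, 0, 1] = 1

# Slice 1: separate x, with g1(x) = delta_0(x) and h1(y, z) supported on
# (y, z) in {(1, 0), (0, 1)}.
g1 = np.array([1.0, 0.0])
h1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
slice1 = np.einsum('x,yz->xyz', g1, h1)

# Slice 2: separate x again, with g2(x) = delta_1(x), h2(y, z) = delta_{(0,0)}(y, z).
g2 = np.array([0.0, 1.0])
h2 = np.array([[1.0, 0.0],
               [0.0, 0.0]])
slice2 = np.einsum('x,yz->xyz', g2, h2)

print(np.array_equal(T, slice1 + slice2))  # True: slice rank of T is at most 2
```

Note that a slice decomposition may separate the same variable in different summands; the definition only requires one separated variable per summand.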

Lemma 1 - the diagonal function is complicated

Suppose $A$ is a finite set, and $f \colon A^k \to \mathbb{F}$ is a diagonal function, that is, $f(x_1, \dots, x_k) = \sum_{a \in A} c_a\,\delta_a(x_1)\cdots\delta_a(x_k)$ for nonzero constants $c_a \in \mathbb{F}$; then $\operatorname{srank}(f) = |A|$.

Proof of Lemma 1

We may assume without loss of generality that all $c_a = 1$ (rescaling the $\delta_a$ does not change the slice rank). The proof is by induction on $k$. The case $k = 2$ is well known from linear algebra: a diagonal matrix with $|A|$ nonzero diagonal entries has rank $|A|$.

Suppose for contradiction that $\operatorname{srank}(f) = m < |A|$, and let $S_1, \dots, S_k$ be the sets indicating for each variable which summands separate it, so that $|S_1| + \cdots + |S_k| = m$.

Suppose without loss of generality (reordering the summands) that the summands separating the last variable are those indexed by $S_k$, i.e. they have the form $g_i(x_k)\,h_i(x_1, \dots, x_{k-1})$ for $i \in S_k$. We can view each of the $g_i$ as vectors in $\mathbb{F}^{A}$, and so the vector space perpendicular to all of them has dimension at least $|A| - |S_k|$. But if we have a matrix of full rank, there is a minor of full rank, and so we can find a vector in this perpendicular space that has at least $|A| - |S_k|$ nonzero entries (write a basis of the space as the rows of a full-rank matrix, take a full-rank minor, and combine the rows so that the combination is nonzero on all of the minor's columns). Call this vector, viewed as a function, $h \colon A \to \mathbb{F}$.

Then take the equality

$$\sum_{a \in A} \delta_a(x_1)\cdots\delta_a(x_k) \;=\; \sum_{j=1}^{k}\, \sum_{i \in S_j} g_i(x_j)\,h_i(x_1, \dots, x_{j-1}, x_{j+1}, \dots, x_k).$$

Now multiply both sides by $h(x_k)$ and sum over $x_k \in A$. On the right-hand side the summands with $i \in S_k$ vanish, since $h$ is perpendicular to the corresponding $g_i$, and each remaining summand still separates one of the variables $x_1, \dots, x_{k-1}$, so the right-hand side becomes a function of $(x_1, \dots, x_{k-1})$ of slice rank at most $m - |S_k|$. The left-hand side becomes $\sum_{a \in A} h(a)\,\delta_a(x_1)\cdots\delta_a(x_{k-1})$, a diagonal function with at least $|A| - |S_k|$ nonzero coefficients. Since $m - |S_k| < |A| - |S_k|$, this contradicts the induction hypothesis, and we win by induction.

In particular we see that

$$\sum_{a \in A} \delta_a(x)\,\delta_a(y)\,\delta_a(z)$$

has slice rank at least $|A|$ (and in fact exactly $|A|$, since the defining sum is itself a decomposition into $|A|$ slices).
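As a brute-force sanity check of Lemma 1 in the smallest nontrivial case, the following sketch takes $|A| = 2$, $k = 3$ over the two-element field (chosen purely for illustration), enumerates every candidate single-slice representation of the diagonal function, and confirms that none of them works, so its slice rank is indeed at least $2 = |A|$.

```python
from itertools import product

# Smallest nontrivial instance of Lemma 1: A = {0, 1}, k = 3, field F_2.
# f(x, y, z) = delta_{x=y=z} is diagonal with two nonzero entries, so Lemma 1
# predicts slice rank 2.  We brute-force all candidate single-slice
# representations f = g(x_j) * h(other two variables) over F_2.
A = (0, 1)

def f(x, y, z):
    return 1 if x == y == z else 0

found = 0
for j in range(3):                             # which variable is separated
    for g in product((0, 1), repeat=2):        # g: A -> F_2
        for h in product((0, 1), repeat=4):    # h: A x A -> F_2
            def candidate(x, y, z, j=j, g=g, h=h):
                rest = tuple(v for i, v in enumerate((x, y, z)) if i != j)
                return (g[(x, y, z)[j]] * h[2 * rest[0] + rest[1]]) % 2
            if all(candidate(x, y, z) == f(x, y, z)
                   for x, y, z in product(A, repeat=3)):
                found += 1
print(found)  # 0: no single slice equals f, so its slice rank is at least 2
```

Lemma 1 of course gives this (and far more) in general; the enumeration is only feasible because the field and the set are tiny.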

Lemma 2 - the function $\delta_0(x + y + z)$ is simple

This is a beautiful and simple argument. We claim that

$$(x, y, z) \mapsto \delta_0(x + y + z), \qquad x, y, z \in \mathbb{F}_q^n,$$

as a function has low slice rank, and in particular it follows that the restriction to $A \times A \times A$ has low slice rank. By low we mean at most $3c^n$ for some $c < q$.

Proof of Lemma 2

We have in $\mathbb{F}_q$ the identity $\delta_0(t) = 1 - t^{q-1}$ for every $t \in \mathbb{F}_q$, since $t^{q-1} = 1$ for all $t \neq 0$.

Thus, writing $x = (x_1, \dots, x_n)$ and similarly for $y$ and $z$,

$$\delta_0(x + y + z) = \prod_{i=1}^{n} \left(1 - (x_i + y_i + z_i)^{q-1}\right).$$
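The following sketch is a finite numerical check, with illustrative parameter choices of ours, of the coordinatewise identity $\delta_0(t) = 1 - t^{q-1}$ for a few prime values of $q$, and of the resulting product formula over $\mathbb{F}_3^2$.

```python
from itertools import product

# Check delta_0(t) = 1 - t^(q-1) in F_q for a few prime q.
for q in (3, 5, 7):
    ok = all((1 - pow(t, q - 1, q)) % q == (1 if t == 0 else 0) for t in range(q))
    print(q, ok)  # True for each q

# Check the product formula for delta_0(x + y + z) over F_3^2.
q, n = 3, 2
for x, y, z in product(product(range(q), repeat=n), repeat=3):
    lhs = 1 if all((xi + yi + zi) % q == 0 for xi, yi, zi in zip(x, y, z)) else 0
    rhs = 1
    for xi, yi, zi in zip(x, y, z):
        rhs = (rhs * (1 - pow(xi + yi + zi, q - 1, q))) % q
    assert lhs == rhs
print("product formula verified on F_3^2")
```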

Expanding the right-hand side of this product formula, we get a polynomial of total degree at most $(q-1)n$, with degree at most $q-1$ in each variable. For each monomial, by an averaging argument, its degree is at most $(q-1)n/3$ in at least one of the three sets of variables $x$, $y$, $z$ (the three degrees sum to at most $(q-1)n$, so the smallest of them is at most a third of that).

Thus if we take for instance the monomials whose degree in the $x$ variables is at most $(q-1)n/3$, we can group them by the exact $x$-part of the monomial, and each group sums to a function of the form $f(x)\,g(y,z)$. Thus, if we let $m$ be the number of monomials in $x_1, \dots, x_n$ of degree at most $q-1$ in each variable and of total degree at most $(q-1)n/3$, then doing the same for $y$ and $z$, we find the slice rank of the function is at most $3m$ (for monomials whose degree is at most $(q-1)n/3$ in more than one set of variables, arbitrarily choose which one to use). Finally, the point is that $m \le c^n$ for some $c < q$: we are counting integer solutions of $e_1 + \cdots + e_n \le (q-1)n/3$ with $0 \le e_i \le q-1$. The total number of potential exponent vectors is $q^n$, and most of them do not satisfy the constraint, since for a uniformly random exponent vector the expectation of $e_1 + \cdots + e_n$ is $(q-1)n/2$, well above the cutoff $(q-1)n/3$. This is formalized by using a concrete large-deviation bound such as the Chernoff bound, which shows that the proportion of exponent vectors below the cutoff is exponentially small.
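The following sketch (an illustrative computation; the function name is ours) counts these exponent vectors for $q = 3$ and several values of $n$ by dynamic programming. The $n$-th roots of the counts stay bounded strictly below $q = 3$, in line with the Chernoff-type estimate.

```python
import math

def count_low_degree_monomials(q, n):
    """Count monomials x_1^{e_1} ... x_n^{e_n} with 0 <= e_i <= q - 1
    and total degree e_1 + ... + e_n at most (q - 1) * n / 3."""
    limit = ((q - 1) * n) // 3
    counts = [1] + [0] * limit          # counts[d] = #vectors so far of degree d
    for _ in range(n):
        new = [0] * (limit + 1)
        for d, c in enumerate(counts):
            if c:
                for e in range(q):
                    if d + e <= limit:
                        new[d + e] += c
        counts = new
    return sum(counts)

q = 3
for n in (6, 30, 120, 480):
    m = count_low_degree_monomials(q, n)
    # Exponential growth rate m^(1/n): it stays strictly below q = 3,
    # slowly approaching a constant around 2.76 as n grows.
    print(n, round(math.exp(math.log(m) / n), 3))
```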

Finishing the proof

The two lemmas together show that $|A| \le 3c^n$ for some $c < q$: by the identity above and Lemma 1, the slice rank of the restriction of $\delta_0(x+y+z)$ to $A \times A \times A$ is at least $|A|$, while by Lemma 2 it is at most $3c^n$.

As a remark, by a tensor-power argument one may remove the constant factor and obtain $|A| \le c^n$; indeed, if some $A \subseteq \mathbb{F}_q^n$ is 3-progression free, then so is $A^m \subseteq \mathbb{F}_q^{nm}$ for every $m \ge 1$, and thus $|A|^m = |A^m| \le 3c^{nm}$; taking the $m$-th root we get $|A| \le 3^{1/m} c^n$, now take $m \to \infty$.


References

  1. ^ Austin, David (August 2016), "Game. SET. Polynomial.", Feature column, American Mathematical Society.
  2. ^ Cite error: The named reference edel was invoked but never defined (see the help page).