## Archive for the ‘**math**’ Category

## Antichains: Crocodiles and schoolgirls in parliament

An antichain is a subset of a partially ordered set such that any two elements of the subset are incomparable.

I’ve mentioned them a couple of times explicitly or implicitly: in Crocodile Dinner, in Important Theorems From Combinatorics, in the Crossing Lines problem and in its “anime variation”.

Here is another elementary problem that is kind of in between “schoolgirl rivalry” and “crocodiles that swallow each other”.

**Consider 435 persons. Assume that every one of them has lied to exactly one other person from this set** (assume they don’t lie to themselves). **Prove that it’s possible to find a subset of at least 145 people such that nobody in that subset has lied to anybody in that subset.**

Solution follows after a short break:
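In the meantime, the bound itself can be checked by brute force. Here is a Python sketch of mine (not from the original post): each person’s single lie defines a functional graph, every connected component of the underlying undirected graph contains at most one cycle, so the graph is 3-colorable, and the largest color class is a lie-free subset of size at least ⌈435/3⌉ = 145.

```python
import random
from collections import deque

def liar_free_subset(f):
    """f[i] is the one person i lied to (f[i] != i).  Returns a subset
    in which nobody lied to anybody, of size >= len(f) / 3."""
    n = len(f)
    adj = [set() for _ in range(n)]
    for i, j in enumerate(f):
        adj[i].add(j)
        adj[j].add(i)
    color = [None] * n
    for root in range(n):
        if color[root] is not None:
            continue
        # Each component of a functional graph contains at most one cycle,
        # so a BFS spanning tree misses at most one edge of the component.
        color[root] = 0
        parent = {root: None}
        extra = None
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if color[w] is None:
                    color[w] = 1 - color[v]
                    parent[w] = v
                    queue.append(w)
                elif w != parent[v]:
                    extra = (v, w)  # the unique non-tree edge
        # If the non-tree edge closes an odd cycle, push one of its
        # endpoints into a third color class.
        if extra and color[extra[0]] == color[extra[1]]:
            color[extra[0]] = 2
    best = max(range(3), key=lambda c: color.count(c))
    return [v for v in range(n) if color[v] == best]

n = 435
f = [random.choice([j for j in range(n) if j != i]) for i in range(n)]
subset = liar_free_subset(f)
chosen = set(subset)
assert len(chosen) >= 145
assert all(f[i] not in chosen for i in chosen)
```

The assertions hold for any random instance, since the 3-coloring argument doesn’t depend on who lied to whom.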

## A loaded die study

I got a set of handcrafted dice for Xmas. They look like this:

They are wooden, the shapes are obviously imperfect, and the dots are made of metal.

I immediately suspected that they can’t be well balanced and decided to conduct a little study, which turned out to be **fun and instructive**.

The first experiment I did was dropping the die into water. It appeared that the die is lighter than water and it **always floats the 1 side up and the 6 side down**:

*[Side quest: A perfect cube with the density of exactly 1/2 of the density of water floats in water. Will it float side-up, edge-up or vertex-up? This problem is too tough for me, and I don’t have a solution.]*

Anyway, after the float test it was evident to me that the die is biased in the 1–6 direction. But how much does this imbalance affect the outcome of rolling the die on a table?

My next test was the Roll Test. **I rolled the die 121 times.** Why 121? I wanted a number that is close to 100, close to a multiple of 6, and close to a perfect square. That is because I was pretty sure I’d be able to compute the sigma on a napkin and that would be it. Both 120 and 121 are good numbers. But after I did 120 rolls I thought, why not do one more?

Here are the results of my 121 rolls:

| Face | 1 | 2 | 3 | 4 | 5 | 6 |
|------|---|---|---|---|---|---|
| Count | 27 | 19 | 19 | 20 | 21 | 15 |

Right, 1 and 6 are obvious outliers… ??? … But **they are within the 2-sigma range**… But I’m more than confident in my alternative hypothesis! Like a true researcher, I’m going to find a way to confirm it!
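The napkin computation behind that 2-sigma remark can be sketched in Python (my reconstruction, the post itself doesn’t show it): each face count over 121 fair rolls is binomial with mean 121/6 ≈ 20.2 and sigma ≈ 4.1.

```python
import math

n, p = 121, 1 / 6
mean = n * p                        # expected count per face, ~20.2
sigma = math.sqrt(n * p * (1 - p))  # binomial standard deviation, ~4.1
lo, hi = mean - 2 * sigma, mean + 2 * sigma

print(f"mean={mean:.1f}, sigma={sigma:.1f}, 2-sigma range=({lo:.1f}, {hi:.1f})")
# both "outliers", 27 and 15, fall inside the 2-sigma range
assert lo < 15 and 27 < hi
```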

So, do I do another 100 rolls? No way, that would be no fun! I’m going to pretend that I’m not dealing with a stupid die, but rather with a particle accelerator, and that I’m over the budget, so this sample is all I have. I’ll do various stats tests on my sample, and I’m going to find one that confirms that this stupid die is loaded!

Sadly, both the Chi-square and the Kolmogorov–Smirnov tests yield p-values of about 0.5, which is ten times greater than what I need in order to reject the hypothesis that the die is fair.
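For the record, the Chi-square number can be reproduced with nothing but the standard library (my sketch, not the original code); the p-value for 5 degrees of freedom comes from the closed form of the upper incomplete gamma function.

```python
import math

counts = [27, 19, 19, 20, 21, 15]
expected = sum(counts) / 6
chi2 = sum((c - expected) ** 2 / expected for c in counts)

# Survival function of chi-square with df = 5 via the recurrence
# Gamma(s+1, t) = s*Gamma(s, t) + t^s * exp(-t), starting from
# Gamma(1/2, t) = sqrt(pi) * erfc(sqrt(t)).
t = chi2 / 2
g = math.sqrt(math.pi) * math.erfc(math.sqrt(t))  # Gamma(1/2, t)
g = 0.5 * g + math.sqrt(t) * math.exp(-t)         # Gamma(3/2, t)
g = 1.5 * g + t ** 1.5 * math.exp(-t)             # Gamma(5/2, t)
p_value = g / (0.75 * math.sqrt(math.pi))         # divide by Gamma(5/2)

print(f"chi2 = {chi2:.2f}, p = {p_value:.2f}")
```

This gives chi-squared ≈ 3.8 and p ≈ 0.58, consistent with the “about 0.5” above.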

But I’m not giving up. Why was I doing all those tests that attempted to **refute the null hypothesis without using any information about my alternative hypothesis: that the die is biased specifically towards 1 and specifically against 6**? And why was I doing all those old-fashioned tests at all? It’s not the 19th century, and I have a very capable computer at my disposal. I can simulate whatever I want.

Results:

| Alternative hypothesis (out of 121 rolls) | p-value |
|---|---|
| At least one number appears more than 26 times | 0.37 |
| At least one number appears more than 26 times AND at least one number appears fewer than 16 times | 0.3 |
| Exactly one number appears more than 26 times AND exactly one number appears fewer than 16 times | 0.2 |
| The number ONE appears more than 26 times AND the number SIX appears fewer than 16 times | 0.01 |
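A simulation along these lines is easy to reproduce. Here is my Python sketch (the original used R) estimating the p-value of the last, sharpest alternative:

```python
import random

random.seed(1)

def face_counts(rolls=121):
    """Roll a fair die `rolls` times and return the six face counts."""
    counts = [0] * 6
    for _ in range(rolls):
        counts[random.randrange(6)] += 1
    return counts

trials = 50_000
hits = 0
for _ in range(trials):
    c = face_counts()
    if c[0] > 26 and c[5] < 16:  # face 1 over 26 times AND face 6 under 16
        hits += 1

p_value = hits / trials
print(f"p ~ {p_value:.3f}")
```

Under the fair-die null hypothesis this event comes out at roughly one experiment in a hundred.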

So I guess I **could** now conclude that the die is biased specifically towards 1 and specifically against 6. But **should** I?

The next step should be checking how significant this bias is for some actual game. I’d need to simulate a game with a realistic bet and see how much money can be won using this die within a realistic timeframe.

I used R for the simulations. Below is the piece of code that does that. **Happy New Year!**
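The original R snippet didn’t survive in this archive. As a stand-in, here is a hypothetical Python version of such a game; the rules (betting on face 1 at fair 5-to-1 odds, 200 rolls a night) and the use of my empirical frequencies as the “true” distribution are all my own assumptions, not the original setup.

```python
import random

random.seed(7)

# Empirical frequencies from the 121 rolls above, treated as the "true" die.
weights = [27, 19, 19, 20, 21, 15]

def night_profit(rolls=200, stake=1):
    """Bet `stake` on face 1 on every roll, at fair 5-to-1 odds."""
    profit = 0
    for face in random.choices(range(1, 7), weights=weights, k=rolls):
        profit += 5 * stake if face == 1 else -stake
    return profit

month = sum(night_profit() for _ in range(30))
print(f"profit after a month of nightly play: {month}")
```

If the empirical bias is real, the expected gain is about 6·(27/121) − 1 ≈ 0.34 stakes per roll, which adds up quickly over a month.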

## Closest neighbors

A simple but curious statement:

**In any finite set of points on the plane, there are points that have less than 4 closest neighbors.**

Not sure if this is a deep or useful observation.

Not sure if I have a rigorous proof. I’ll publish a sketch of a proof later.
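While waiting for the proof, the claim is at least easy to check by brute force. A small Python sketch of mine: for each point, count the points tied at its minimal distance, and verify that some point has fewer than 4 of them. Random points almost never produce ties, so a lattice makes a better test case.

```python
def min_closest_neighbors(points):
    """For each point, count the points at exactly its minimal distance;
    return the smallest such count over all points."""
    best = len(points)
    for i, (x, y) in enumerate(points):
        d2 = [(x - a) ** 2 + (y - b) ** 2
              for j, (a, b) in enumerate(points) if j != i]
        best = min(best, d2.count(min(d2)))
    return best

# 5x5 integer grid: interior points have 4 closest neighbors,
# but the corner points have only 2.
grid = [(x, y) for x in range(5) for y in range(5)]
print(min_closest_neighbors(grid))  # → 2 (a corner point)
```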

And here is how you prove it:

## What did Euler know?

I heard a legend that I refuse to believe. It goes like this:

Euler studied irreducible factorizations of the polynomials x^n − 1. He noticed that the only coefficients that appear in the factors are 1, –1 and 0. E.g. x^4 − 1 = (x − 1)(x + 1)(x^2 + 1). He attempted to prove that that’s always the case, but he couldn’t. He computed all factorizations up to x^100 − 1 and he didn’t see any coefficients other than 1, –1 and 0.

He died convinced that this property always holds. But it doesn’t. Had he factorized x^105 − 1, he would have seen some 2s too.
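This is easy to verify today. A Python sketch of mine, using plain integer polynomial arithmetic and no libraries: the irreducible factors of x^n − 1 are the cyclotomic polynomials Φ_d for d dividing n, computed here by exact division; for every n below 105 the coefficients stay in {−1, 0, 1}, while Φ_105 contains a −2.

```python
def polydiv(num, den):
    """Exact division of integer polynomials; coefficients low-to-high."""
    num = num[:]
    out = [0] * (len(num) - len(den) + 1)
    for i in reversed(range(len(out))):
        out[i] = num[i + len(den) - 1] // den[-1]
        for j, c in enumerate(den):
            num[i + j] -= out[i] * c
    return out

cyclo = {}
def cyclotomic(n):
    """n-th cyclotomic polynomial: x^n - 1 divided by Phi_d for d | n, d < n."""
    if n not in cyclo:
        p = [-1] + [0] * (n - 1) + [1]  # x^n - 1
        for d in range(1, n):
            if n % d == 0:
                p = polydiv(p, cyclotomic(d))
        cyclo[n] = p
    return cyclo[n]

flat = max(abs(c) for n in range(1, 105) for c in cyclotomic(n))
print(flat)                    # → 1: nothing but -1, 0, 1 below 105
print(min(cyclotomic(105)))    # → -2
```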

Did Euler know that the factors are cyclotomic polynomials? Maybe not. Even if he noticed that, he probably wouldn’t be able to prove it. Even if he believed that, he wouldn’t be able to derive any general properties of the coefficients.

But if he really computed all those factorizations up to the power 100, how come he didn’t notice any patterns? I bet he did! He must have come up with some clever methods of computations, and he couldn’t miss the fact that the structure of the factorization of the polynomial depends on the prime factorization of the power. Then why would he stop at 100? Why not check 3*5*7 = 105 ?

Ok, let’s assume the legend is true. Then I’m curious: would Euler really think that the property is **true**, or would he consider it an open conjecture? We know that Euler didn’t care much about formal foundations. He died before Cauchy was born, so he never got a chance to put his work on formal ground. Laplace, on the other hand (according to another legend), did recheck all his work after he learned about Cauchy’s formalism. (I mentioned Laplace here; I think he was at least as brilliant as Euler.)

## Hilbert’s 10th problem and incompleteness

The negative solution of Hilbert’s 10th problem is one of the most striking manifestations of the incompleteness of arithmetic. Undecidable propositions don’t have to look “paradoxical” like the Gödel statement. And they don’t have to involve unimaginably large structures like the Goodstein sequence.

There are undecidable statements that simply talk about existence of solutions of Diophantine equations.

The reason for that is even more striking: any question of the form “is there a natural number possessing the property P” can be formulated as “does this particular Diophantine equation have a solution”. Basically, any property of the natural numbers can be expressed as a polynomial equation. And we are not talking about some unimaginably huge polynomials here. First of all, they are effectively defined, i.e. we can really construct them. And also their size is bounded by some very reasonable constants. E.g. we know that we can use equations of the degree not greater than 4 with no more than 58 unknowns. Or we can limit the number of unknowns to 26 so we can use the letters of the alphabet, in which case we would need to allow the degree to go up to 24. Even 9 unknowns are sufficient, but then the degree would have to grow up to 10^{45} .

I can’t help noticing once again that arithmetic has way way more expressive power than we actually need.

Here is a nice paper by Yuri Matiyasevich which explains this stuff on a very popular level. There is also another text by Matiyasevich with a historical account of his collaboration with Julia Robinson.

The history of Hilbert’s 10th is as fascinating as its contents.

It is curious that Julia Robinson’s teacher was Alfred Tarski who strongly believed that there exist non-Diophantine enumerable sets. And Martin Davis who formulated the key conjecture was a student of Alonzo Church. Yuri Matiyasevich (who gave the final proof of that conjecture now known as the DPRM theorem) in one of his lectures mentioned that he had used the Russian word наглый to describe Davis’s conjecture. This word is actually much stronger than “bold”, it’s almost “arrogant-obnoxious-badass-bold”.

I find it remarkable that this astonishing solution of a genuine number-theoretical problem was achieved by logicians. BTW, the “P” in “DPRM” is Hilary Putnam who is more of a philosopher than a logician.

Wikipedia does a good job too. Here and here. One small addition. Some texts give the impression that the DPRM theorem solves Hilbert’s 10th “because undecidable sets are known to exist”. In fact, we don’t need to know about undecidable sets or to invoke the halting argument. We can simply use the same diagonalization trick Turing used for the halting problem. Since we can enumerate all Diophantine equations, we can diagonalize by feeding the n-th equation with n as the parameter to the “oracle”. Then, since the output is a Diophantine set, its representation must be on the original list, and we can show that the “oracle” must lie…

## Doubting the Consistency: infinite vs. unimaginably large

*“The consistency of P remains an open problem.”*

*Ed Nelson.*

*“There was also a Beaver, that paced on the deck,
Or would sit making lace in the bow:
And had often (the Bellman said) saved them from wreck,
Though none of the sailors knew how.”*

*L. Carroll, “The Hunting of the Snark”.*

So, how could anyone doubt the consistency of PA? Isn’t ℕ a model of PA, which is almost “material” and can be used to verify the truth of all the axioms? And didn’t Gentzen prove the consistency of PA using induction up to ε₀, which is obviously true because it simply says that finite trees can be well-ordered?

If you believe in all those “true” and “exists” above (which most people and mathematicians do), then you have no reason to doubt the consistency of PA. But there are people who study foundations of math in more depth than regular mathematicians do. They learn that the structure of the natural numbers is far more complex and mysterious than even what Gödel uncovered. They run into stuff that makes them doubt all those “true” and “exists”. Or I should rather say, to doubt that the current established formalism justifies all those “true” and “exists”.

I’d like to give a few examples.

You can avoid Gödel’s Second Incompleteness theorem by doing away with the 2nd Hilbert–Bernays derivability condition, which is so subtle that no one cares about it. And you don’t even need to change anything in PA; you just need to change the numbering in the proof of Gödel’s 1st Theorem. After that, the formula “PA is consistent” becomes provable in PA. It would be a slightly different formula than the G statement, but its meaning would be almost exactly the same. (I believe this was first noticed by Solomon Feferman.) [This particular fact is not related to Nelson’s attempt to prove inconsistency; I just want to demonstrate how subtle the 2nd Theorem is.]

For any n, you can easily define an inconsistent theory in which the shortest proof of a contradiction has a length of the order 2^n.

For any n, you can (not so easily) define an inconsistent theory in which the shortest proof of a contradiction has a length of the order 2^2^…^2 with the exponentiation repeated 2^{n} times. (The latter result is due to George Boolos, I believe.) This number is so unimaginably large that this theory would be consistent for all practical purposes not only of gods themselves, but of creators of creators of gods as well.

You don’t need no induction. For any computable property P of natural numbers, there exists a number B such that if P(k) holds for all k ≤ B, then P(k) holds for all k. Seriously, it is always sufficient to check that the property holds on a finite segment of the naturals in order to prove that it holds for all naturals. But it is not possible to compute (derive in a mechanical manner) the numbers B or their upper bounds. Some lower bounds are known, and guess what – they are unimaginably large (see http://en.wikipedia.org/wiki/Busy_beaver#Applications). See also this popular essay by Scott Aaronson: http://www.scottaaronson.com/writings/bignumbers.html

Those are some of the facts that make me believe that there might indeed be something wrong with the Peano Axioms. This belief is a reason to be excited, not a reason to be worried. I am not an ultrafinitist, for two reasons: I believe in Church’s Thesis (and even more in the informal Church–Turing Thesis), and I think that to be an ultrafinitist one needs to possess way more knowledge about the Foundations of math than I do.

## Note on proofs of inconsistency

There are many discussions going on around the web in the wake of E. Nelson’s claim about the inconsistency of PA. While I can’t contribute on a professional level, I’d like to make a trivial note:

Nelson is not trying to prove the statement “the theory is inconsistent” within the theory itself. Such a proof would not have any consequences regarding actual consistency of the theory.

I wrote about this more than once already: there are consistent theories that prove their own inconsistency. The reason is that consistency is not a very strong condition. Consistent theories may potentially lie. Consistent theories cannot prove Π-falsehoods, but they can prove Σ-falsehoods. I outlined the relationships between consistency and soundness here.

Now, what Nelson **is** doing is really straightforward. He is simply deriving a contradiction within the theory. Not something like “0=1”, of course (though if he is right, then a proof of “0=1” does indeed exist).

Stay tuned, to be continued…