
Friday, January 08, 2016

(Another) Solution: Writing year as sums of runs of numbers

The solution of just grinding out the algebra is fine but it's a bit dull and unintuitive. We can use the discussion at the end of the last post to produce a simple proof with almost no algebra.

What we're going to prove is that the number of sequences of consecutive integers that sum to \(Y\) is equal to twice the number of odd divisors of \(Y\).

First off, note that if the sum is positive then there must be more positive numbers in the sequence than negative ones. I'm going to assume the sum is positive; if it's negative the procedure still works, just with the signs changed.

Next, note that the solutions to the problem always come in pairs. Every sequence of even length has a corresponding sequence of odd length. You can construct one from the other as follows.

If 0 appears in the sequence then you cancel every negative number with its opposite positive number and remove the zero. Every time you cancel a \(-\) with a \(+\) you remove 2 elements from the sequence, so it stays odd or even. Finally, removing the 0 flips from odd to even or vice versa. E.g. $$ -3 + -2 + -1 + 0 + 1 + 2 + 3 + 4 + 5 + 6 + 7 $$ becomes $$ 4 + 5 + 6 + 7 $$ Two slightly non-obvious cases of this are sequences beginning with 0 or 1. They have no negative elements, so you just either add or remove the 0 at the start.

Conversely if the sequence does not contain 0, prepend all the numbers from 0 up to the start of the sequence and also prepend their negatives. This does not change the sum but it flips the length from even to odd or vice versa. E.g. $$ 4 + 5 + 6 + 7 $$ becomes $$ -3 + -2 + -1 + 0 + 1 + 2 + 3 + 4 + 5 + 6 + 7 $$

So now we just consider the solutions of odd length. If we can show that there is exactly one of those for every odd divisor of \(Y\), we're done. That turns out to be fairly easy.

First, let's say you have an odd-length sequence that sums to \(Y\). Since it's odd-length, it must have a middle number, say \(n\), with \(k\) numbers before it and \(k\) numbers after. The befores and afters go together in pairs: \(n-1\) with \(n+1\), \(n-2\) with \(n+2\), etc. Each of these pairs sums to \(2n\) and there are \(k\) pairs plus a lone \(n\) from the middle, giving a total of \((2k+1)n\). So we have \(Y=(2k+1)n\), and so \(2k+1\) (the length of the sequence) must be an odd divisor of \(Y\). So this shows that any odd-length sequence that sums to \(Y\) leads to a specific odd divisor of \(Y\).

Conversely, given an odd divisor \(2k+1\) of \(Y\), set \(n = Y/(2k+1)\); then the sequence of length \(2k+1\) centred on \(n\) sums to \(Y\). So every odd divisor leads to a sequence of odd length and we're done.
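The whole claim is easy to check by brute force for small \(Y\). Here's a quick sketch (my own code, not from the original post) that counts every run of consecutive integers summing to \(Y\) and compares against twice the number of odd divisors:

```python
def count_runs(Y):
    """Count runs of consecutive integers (length >= 1) summing to Y.

    Brute force: try every start a and end b over a range wide enough
    to contain every solution for this Y.
    """
    bound = abs(Y) + 2
    count = 0
    for a in range(-bound, bound):
        total = 0
        for b in range(a, bound):
            total += b
            if total == Y:
                count += 1
    return count

def odd_divisors(Y):
    """Count the positive odd divisors of Y."""
    return sum(1 for d in range(1, abs(Y) + 1, 2) if Y % d == 0)

for Y in [9, 15, 21, 100]:
    assert count_runs(Y) == 2 * odd_divisors(Y)
```

For example, \(Y = 15\) has odd divisors 1, 3, 5 and 15, and indeed 8 runs sum to it (the four odd-length ones plus their even-length partners).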

Sunday, January 03, 2016

Solution: Writing year as sums of runs of numbers

It turns out the answer is fairly well known and involves what are known as polite numbers but I hadn't heard of them before.

So in order to solve this, we consider sequences of positive consecutive integers. I've added "positive" in there because sequences that are all negative are just positive sequences with the sign flipped and so are not interesting. Sequences that start negative and go positive are also not interesting, as a bunch of the numbers will cancel out and you'll be left with a sequence that's purely negative or purely positive.

There's a well-known formula for the sequences starting at \(1\), and that's the formula for triangular numbers: \(1 + 2 + ... + n = n(n+1)/2\). We can use this to get a formula for sequences that don't start at \(1\). The sum \((k+1) + (k+2) + ... + n\) is just the sum from 1 to \(n\) minus the sum from 1 to \(k\). More formally that's \begin{equation}\label{2.1} \begin{split} &{} n(n+1)/2 - k(k+1)/2\\ =& 1/2(n^2+n -k^2 -k)\\ =& 1/2(n^2-k^2 + n -k)\\ =& 1/2((n+k)(n-k) +(n -k))\\ =& 1/2((n+k + 1)(n-k))\\ \end{split} \end{equation}

So now if we want to get a specific number \(Y\) out of that then $$ Y = 1/2((n+k + 1)(n-k)) $$ so $$ 2Y = (n+k + 1)(n-k) $$

Notice that the factors on the right hand side differ by \(2k + 1\). This means that one is always even and one odd, and the odd one must be a divisor of \(Y\) (any odd divisor of \(2Y\) must divide \(Y\) also). So every odd divisor, \(d\) of \(Y\) gives us 2 possibilities. Either $$ \begin{split} &n+k+1& = 2Y/d\\ &n-k& = d\\ \end{split} $$ or $$ \begin{split} &{}n+k+1& = d\\ &{}n-k& = 2Y/d\\ \end{split} $$

Solving these for n and k we get either $$ \begin{split} n &= (d-1)/2 + Y/d\\ k &= Y/d - (d + 1)/2\\ \end{split} $$ or $$ \begin{split} n &= (d-1)/2 + Y/d\\ k &= (d - 1)/2 - Y/d\\ \end{split} $$

In both cases \(n\) is the same. The \(k\)s are different and in fact they are almost negatives of each other (add them together and you get \(-1\)). Since they are two integers summing to \(-1\), exactly one of them is non-negative. This means that for any odd divisor \(d\) you get exactly two candidate solutions but exactly one of them will have a non-negative \(k\). And since \(d\) and \(Y\) are positive, \(n\) is always positive.

So in fact for any \(Y\) there are exactly as many solutions as there are positive odd divisors of \(Y\). In the case of 2016 we have factors \(2016 = 2^5 \cdot 3^2 \cdot 7\) $$ \begin{split} d=1 &: 2016 = 2016\\ d=3 &: 2016 = 671+672+673\\ d=7 &: 2016 = 285+286+...+291\\ d=9 &: 2016 = 220+221+...+228\\ d=21 &: 2016 = 86+87+...+106\\ d=63 &: 2016 = 1+2+3+...+63\\ \end{split} $$ and for \(2015 = 5 \cdot 13 \cdot 31\) we have $$ \begin{split} d=1 &: 2015 = 2015\\ d=5 &: 2015 = 401 + 402 + ... + 405\\ d=13 &: 2015 = 149 + 150 + ... + 161\\ d=31 &: 2015 = 50 + 51 + ... + 80\\ d=65 &: 2015 = 2 + 3 + ... + 63\\ d=155 &: 2015 = 65 + 66 + ... + 90\\ d=403 &: 2015 = 197 + 198 + ... + 206\\ d=2015 &: 2015 = 1007 + 1008\\ \end{split} $$

In the 2015 example, the divisors 65, 155, 403 and 2015 are big enough that \(k\) becomes negative and we need to switch to the second solution. This doesn't happen at all for 2016: the odd divisors never get very big because a lot of 2016 is locked up in the factor \(2^5 = 32\).

When the sequence comes from the first solution, it has exactly \(d\) elements and is centred on \(Y/d\). This makes sense intuitively: you can sum the sequence by adding the first and last, the second and second last, etc. Each of the \((d-1)/2\) pairs contributes \(2Y/d\) and finally the middle element gives you one more \(Y/d\), for a total of \(d \cdot Y/d = Y\).

When \(d\) is big enough that the first solution gives a negative \(k\), that intuition doesn't work, but instead of switching to the second solution we can stick with the first and accept the negative \(k\). Then we get a \(d\)-element sequence that sums to \(Y\) $$ \begin{split} d=65 &: 2015 = -1 + 0 + 1 + 2 + 3 + ... + 63\\ d=155 &: 2015 = -64 + -63 + ... + 90\\ d=403 &: 2015 = -196 + -195 + ... + 206\\ d=2015 &: 2015 = -1006 + -1005 + ... + 1008\\ \end{split} $$ each of these is actually the same as the solutions above (just cancel all the negatives with their positive counterparts).
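Putting the closed form to work, here's a small sketch (mine, not from the post) that builds the run for an odd divisor \(d\) using the first solution, accepting a negative \(k\) when one appears:

```python
def run_for_divisor(Y, d):
    """Build the run of consecutive integers for odd divisor d of Y,
    using the first solution: n = (d-1)/2 + Y/d, k = Y/d - (d+1)/2."""
    assert d % 2 == 1 and Y % d == 0
    n = (d - 1) // 2 + Y // d
    k = Y // d - (d + 1) // 2
    return list(range(k + 1, n + 1))  # the run (k+1) + (k+2) + ... + n

assert run_for_divisor(2016, 3) == [671, 672, 673]
assert sum(run_for_divisor(2016, 63)) == 2016
assert run_for_divisor(2015, 65)[0] == -1   # big divisor: k goes negative
assert sum(run_for_divisor(2015, 65)) == 2015
```

The integer divisions are exact because \(d\) is odd and divides \(Y\).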

Puzzle: Writing year as sums of runs of numbers

This year is 2016 and \[ 2016 = 1 + 2 + 3 + ... + 63 \] I didn't notice this myself, I saw it written somewhere, but then I wondered if there were any other ways of writing 2016 as a sum of consecutive numbers and it turns out there are.

So, the puzzle is: how many ways are there of writing 2016 as a sum of consecutive numbers, and what's the answer in the general case where you're trying to write any number \(Y\) as such a sum? The answer is quite simple but non-obvious.

I'll post the solution shortly. My solution is here; please leave comments or solutions below.

Monday, May 02, 2011

Forget π (pi), τ (tau) is where it's at

Below is Vi Hart's video explaining why τ (= 2π) is the more natural "circle constant". Everything just works out more neatly with τ and thinking about a full turn rather than a half turn just makes sense. And yes,

\[ e^{i\tau} = 1 \]
is how it should be. The Tau Manifesto has more details but suffice to say, while Mar 14th never did much for me, on Jun 28th I might just make an effort this year.

No, this is not entirely a waste of time; using τ with beginners would actually eliminate quite a bit of confusion.

Monday, February 21, 2011

Bit of a maths puzzle.


Ugh! This comes out nicely on my blog but looks like crap in Google reader, presumably because the javascript can't run.
Define a function:
\[ f(x) = \frac{2x}{1 - x^2} \] In case that maths layout thing isn't working, that's f(x) = 2x/(1-x^2)
Now, \[ f(0) = 0 \] so that repeats. Also \[ f(\sqrt 3) = -\sqrt 3 \] and \[ f(-\sqrt 3) = \sqrt 3 \] so that repeats too.
The question is, what values repeat if you keep applying f to the result? I thought I knew the answer but now I think maybe it's more tricky.
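One way to poke at the question numerically (my own sketch, not part of the puzzle) is to iterate f and watch for the starting value coming back. Note that f blows up at x = ±1, so orbits that land there are out of bounds:

```python
import math

def f(x):
    return 2 * x / (1 - x * x)

def iterate(x, steps):
    """Return the orbit x, f(x), f(f(x)), ..."""
    orbit = [x]
    for _ in range(steps):
        x = f(x)
        orbit.append(x)
    return orbit

# 0 is a fixed point, and sqrt(3) comes back after two steps
assert f(0) == 0
orbit = iterate(math.sqrt(3), 2)
assert abs(orbit[1] + math.sqrt(3)) < 1e-9   # f(sqrt 3) = -sqrt 3
assert abs(orbit[2] - math.sqrt(3)) < 1e-9   # and back again
```

Floating point error grows as you iterate, so this is only good for spotting short cycles, not proving anything.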

Sunday, December 13, 2009

Computing combinatorials

I was commenting on this fun post about efficiently computing n-choose-c but my attempt to post some code seems to have been eaten by the comment system (I suspect use of square and angle brackets upset it), so I'll post something here instead.

Here's why c always takes integer values and more generally why (k + 1) * (k + 2) * ... * (k + n) is always divisible by factorial(n). There are three ways to show this.

Apologies for the crappy maths typesetting, I should really figure out how to do this nicely.

Proof by hand-waving.

First off, something hand-wavy. One of (k + 1)*(k + 2) is even so 2 divides into that product. One of (k + 1)*(k + 2)*(k + 3) is threeven (not a real word!) so 3 divides into that product. 4 is not so simple. One of (k + 1)*(k + 2)*(k + 3)*(k + 4) is fourven so 4 divides into that product - however maybe the fourven number was also the even number we used to cancel the 2 earlier on, so it only has one 2 left for division. But if that's the case then one of the remaining 3 numbers is also even and you can get a 2 from that to make up the 4. 5 works just like 2 and 3. 6 contends with both 2 and 3 but you can resolve that too. This gets messy quickly but essentially, for any prime p, every time you come to the next multiple of p in the divisor you will have already added at least one multiple of p in the k+i product, and by the time you hit a multiple of p^2 you will have already hit a corresponding one in the k+i product too, so you'll still have enough p's in the product overall to cancel all the p's in the divisor. Still a bit hand-wavy but it could be made rigorous; however there is another proof that's easily rigorous but less insightful.

Proof by counting prime factors.

There is a well known formula for finding the power of p in factorial(n). It's

sum(i=1..infinity, [n/p^i])

where [] means take the integer part, throwing away any fractions. So if p = 5 and n = 30 then this is
[30/5] + [30/25] + [30/125] + [30/625] + ... = 6 + 1 + 0 + 0 + ...

It's all zeros after that, so the end result is 7. So 5^7 divides factorial(30) and 7 is the highest power of 5 that divides it.
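That formula is easy to code up and cross-check against a direct count of factors (my sketch, not from the original post):

```python
from math import factorial

def power_in_factorial(n, p):
    """Power of the prime p dividing factorial(n):
    sum of [n/p^i] for i = 1, 2, ... (only finitely many nonzero terms)."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

def direct_count(n, p):
    """Count factors of p in factorial(n) by repeated division."""
    m, count = factorial(n), 0
    while m % p == 0:
        m //= p
        count += 1
    return count

assert power_in_factorial(30, 5) == 7 == direct_count(30, 5)
```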

So rewriting the k + i product as factorial(k + n)/factorial(k) we now know that for any prime p, it contains exactly

sum(i=1..infinity, [(n+k)/p^i]) - sum(i=1..infinity, [k/p^i])

powers of p. If this is at least
sum(i=1..infinity, [n/p^i])

(the power of p in factorial(n)) then we know that factorial(n) divides factorial(k + n)/factorial(k).

Well, [x + y] >= [x] + [y] so

[(n+k)/p^i] >= [n/p^i] + [k/p^i]

now bring the k term over to the left hand side to get

[(n+k)/p^i] - [k/p^i] >= [n/p^i]

Now sum over i=1..infinity and you get exactly what we needed to show. So for any prime p, there are at least as many powers of p in the k+i product as there are in factorial(n), all the p's in the divisor cancel out and the divisor divides in evenly.

Proof by "it just is".

The final and least insightful way is to note that factorial(n+k)/(factorial(k) * factorial(n)) is n+k choose n and so must be an integer.

Code

Here's a python snippet that checks that the division is always even with an assert.

Give it an n and it will print out comb(n, i) for all i. The assert never fires for me.

As it loops, c gets the values comb(k + 1, 1), comb(k + 2, 2), ..., comb(n, n - k), all of which are integers (and remember that comb(n, n - k) = comb(n, k) so c ends up with the desired value.)
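The snippet itself seems to have gone missing from the page, but based on the description above a reconstruction might look like this (my best guess, not the author's exact code — at step i the running value c is comb(k + i, i)):

```python
import sys

def comb(n, k):
    """n choose k, building up c = comb(k + i, i) for i = 1 .. n - k.

    The assert checks that every intermediate division is exact, which
    is what the three proofs above establish.
    """
    c = 1
    for i in range(1, n - k + 1):
        assert (c * (k + i)) % i == 0
        c = c * (k + i) // i
    return c  # this is comb(n, n - k), which equals comb(n, k)

if __name__ == "__main__":
    n = int(sys.argv[1]) if len(sys.argv) > 1 and sys.argv[1].isdigit() else 10
    for i in range(n + 1):
        print(i, comb(n, i))
```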

If your language does tail-call optimisation, you could define

comb(n, 0) = 1
comb(n, i) = comb(n - 1, i - 1) * n / i

and be done. With memoisation you could avoid lots of repeated calculations.
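In python, a memoised version of that idea might look like this (my sketch, using the recurrence comb(n, i) = comb(n - 1, i - 1) * n / i, whose division is always exact):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def comb(n, i):
    # comb(n, 0) = 1; comb(n, i) = comb(n - 1, i - 1) * n / i
    # The division is exact, so integer division is safe here.
    if i == 0:
        return 1
    return comb(n - 1, i - 1) * n // i

assert comb(52, 5) == 2598960  # number of 5-card poker hands
```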

Monday, October 12, 2009

Hyperbinary numbers

Update: The proof is over here.

I've been following along a discussion on The Math Less Travelled about hyperbinary numbers (here, here and here). I'm posting this here because it's a bit too big to fit in the margins of the other blog (although I have found no remarkable proof). Since I have very little time to work on this (and have already given myself a few nights with insufficient sleep!) I thought I'd do a brain-dump of what I did in case someone else can finish it. Apologies for the awful formatting, I don't post much maths so have no nice styles for it and no time to find them.

Quick summary

h(n) is the number of ways of writing n as a sum of powers of 2 where each power of 2 can only occur 0, 1 or 2 times. See the articles for more details but the key facts are (I'll refer to them by these numbers):

  1. h(2n) = h(n) + h(n - 1)
  2. h(2n + 1) = h(n)

From those you can get several relationships, like h(k * 2^n - 1) = h(k-1). Other interesting stuff is in those posts. The main question is how to find the inverse of h(n). That is, given k, what values of n have h(n) = k. It turns out that each even solution to that leads to an infinite number of odd solutions and all odd solutions can be traced back to an even solution. So the question then is what are the even solutions - the "primary occurrences"? My contribution so far has been to notice that there seem to be φ(k) even solutions for a given k although I am still stumped on how to prove that.

Another useful thing is that h(2n + 1) = h(n) means that all the odd positions are just repeats of earlier values. Substituting fact 2 into the right hand side of fact 1 gets you h(2n) = h(2n + 1) + h(2n - 1). So all the even positions are just the sum of the values that come immediately before and after.

Finally, h(2^n - 1) = 1 and h(2^n) = n + 1, and from those you can get h(2^n + 1) = n.

New stuff

The last bits got me thinking about h(2^n + k). It's fairly easy to figure that out for more and more values of k. For h(2^n + 2) just apply fact 1 to get that as the sum of 2 things we already know:

h(2^n + 2) = h(2^(n-1) + 1) + h(2^(n-1))
           = (n - 1) + (n - 1) + 1    # note this line for discussion below
           = 2n - 1
Looking at the table of primary occurrences (here) you can see that all of the odd entries do indeed have a 2^n + 2 occurrence. In fact after a bit, it's the lowest value for each odd entry.

What about 1? 2*1 - 1 = 1 but if n = 1 the line commented above is not valid, because it uses h(2^(n-1) + 1) = n - 1 and that rule only holds when n - 1 >= 1.

To get a bit further, consider h(2n) = h(2n + 1) + h(2n - 1) from above. Moving things around a little that becomes h(2n + 1) = h(2n) - h(2n - 1). So we can get the next odd entry by subtracting the previous odd entry from the previous even entry. So h(2^n + 3) = n - 1. Since 2^n + 3 is odd this doesn't tell us anything about primary occurrences but it's still useful as all odd entries are duplicated later on.

Now we can get h(2^n + 4) = h(2^(n-1) + 2) + h(2^(n-1) + 1) and again we know what those are: (2(n-1) - 1) + (n - 1) = 3(n-1) - 1 = 3n - 4. And lo and behold, all the entries (except 2) in the primary occurrences table of the form 3m - 1 do include a 2^(m+1) + 4 (5 -> 12, 8 -> 20, 11 -> 36, ...). What about 2? 2 = 3*1 - 1 but if m = 1 (that is, n = 2) the relation above breaks because the 2n - 1 rule is only valid for n > 1.

Continuing on in the same vein you get the first 100:

With the 2^n + k only being valid for n such that 2^n > k.

Obviously calculating these by hand gets a bit boring. To write some code to do it I had to play with f(n, m) = h(2^n + m), which breaks down into 2 cases for m even and odd.

For the odd case, let m = 2k + 1. So

f(n, m) = h(2^n + m) = h(2^n + 2k + 1) = h(2^n + 2k) - h(2^n + 2k - 1) = f(n, m - 1) - f(n, m - 2), using h(2j + 1) = h(2j) - h(2j - 1) from above.

For the even case, let m = 2k. So

f(n, m) = h(2^n + m) = h(2^n + 2k) = h(2(2^(n-1) + k)) = h(2^(n-1) + k) + h(2^(n-1) + k - 1) (applying fact 1) = f(n - 1, m/2) + f(n - 1, m/2 - 1)

Turning that into python and adding a couple of base cases gives the following code (it tries to also cope with 2^n - m but there's a bug somewhere).
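That script also seems to have gone missing from the page. A minimal sketch of the core (mine, just the two facts plus memoisation, without the 2^n - m handling the original attempted):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def h(n):
    """Number of hyperbinary representations of n."""
    if n == 0:
        return 1
    if n % 2 == 1:
        return h((n - 1) // 2)            # fact 2: h(2n + 1) = h(n)
    return h(n // 2) + h(n // 2 - 1)      # fact 1: h(2n) = h(n) + h(n - 1)

# spot-check the identities quoted above
assert all(h(2**n - 1) == 1 for n in range(1, 10))
assert all(h(2**n) == n + 1 for n in range(1, 10))
assert all(h(2**n + 2) == 2 * n - 1 for n in range(2, 10))
```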

You can run it as ./hyperbinary.py 0 100 or to get the first 50 even entries. Looking at the output it seems that h(2^n + m) = h(m) * n + c. I haven't proved it formally but it should be easy enough: just show that the coefficient of n obeys facts 1 and 2. I didn't find a good relation between m or h(m) and the c above (the best I can spot is that int(c/h(m)) is always log_2(m), and when m is 2^k, c = k*h(m) + 1). Nor did I relate any of this to φ, although at least now there appears to be some modulo arithmetic going on.

Wednesday, July 29, 2009

Answer to 1 of the 2 puzzles and a new puzzle.

I take Nicole's comment to mean she gives up, although Nicole, it turns out that your comment has a deep relation to this problem and raises a new question.

To find out on average how many items are left unmoved by a random permutation, you can consider every permutation, count how many items are left unmoved by each, add those counts all up and divide by the total number of permutations. To avoid mathsy notation I'm going to write a tiny program that calculates this total.

total = 0
# Check what every permutation does to every item, if it leaves it
# in the same place then increase the total by 1
for p in perms:
  for i in 1..N:
    if p(i) == i:
      total = total + 1
Now, here's the key to this problem - swap the loops. The final answer stays the same; we just end up adding things up in a different order.
total = 0
# Check what every permutation does to every item, if it leaves it
# in the same place then increase the total by 1
for i in 1..N:
  for p in perms:
    if p(i) == i:
      total = total + 1
So now what we're doing is looping over the items and counting how many permutations leave that item in place and that's something we can work out. The number of permutations that fix a given item is (N-1)! (where ! is factorial) because when you fix 1 item, you're left with N-1 other items which you move around however you like so it's just the number of permutations of N-1 items which is (N-1)!. So now the code can be just
total = 0
# Each item is left in place by exactly (N-1)! permutations
for i in 1..N:
  total = total + (N-1)!
which is just adding (N-1)! N times. That's N x (N-1)! which is just N!. Now we have to divide that by the total number of permutations which is also N! to get an average of 1.

You can also get to this answer by drawing a grid with all the permutations across the top and all the items down the side. You put a tick in a cell in the grid if the permutation on the top leaves the item on the left unmoved. Now count the ticks. Doing it column by column is hard but doing it row by row is easy because there will be (N-1)! ticks in each row.

So the conclusion is that if you give a deck of cards a perfectly random shuffle, on average 1 card will end up in the same place as before.
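The average of exactly 1 is easy to confirm by exhaustive enumeration for small decks (my check, not from the post):

```python
from itertools import permutations

def average_fixed_points(N):
    """Average number of items left unmoved, over all N! permutations."""
    total = count = 0
    for p in permutations(range(N)):
        count += 1
        total += sum(1 for i in range(N) if p[i] == i)
    return total / count

# exactly 1 for every deck size we can afford to enumerate
for N in range(1, 7):
    assert average_fixed_points(N) == 1.0
```

This is the row-by-row count from the grid argument: summing fixed points over all permutations gives N!, and dividing by the N! permutations gives 1.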

So where does Nicole's comment come in? I knew it was Ralph Wiggum who said it but I couldn't remember the context, it turns out. I found it here. It's

  Lisa: Hey Ralph, want to come with me and Alison to play "Anagrams"?
Alison: We take proper names and rearrange the letters to form a
        description of that person.
 Ralph: My cat's breath smells like cat food.
and what we've just shown is that given a random anagram, you'd expect 1 letter on average to end up in the same place - well kind of, that's only true if there are no repeated letters. I hadn't thought of this in terms of anagrams before and that adds a nice twist in terms of repeated letters. Thanks Nicole!

So the new puzzle is - given a string written in scrabble tiles (it can have repeated letters), if you mess up the tiles and make a new string at random, how many letters do you expect to be in the same place as before?

Monday, July 28, 2008

An interesting maths proof (to me anyway).

This is a comment I left on a blog but the comment box had no preview button and my comment came out kinda mangled, so I'm posting it here.

First of all, it's a long time since I proved anything so I may be wrong. Secondly, the markup is probably going to be terrible, there seems to be no preview button and I can't find wordpress's help section for markup. Finally, apologies if this is straying too far from the original idea of proof automation but I think this answers the question of whether this is true for Complex also. I can't really claim that this proof is discoverable by pattern matching as it's not really an epsilon-delta proof. That said, if I haven't made a mistake, the problem becomes "easy" if you consider a compactification of R and maybe looking at compactifications is a reasonable strategy for theorem provers.

Anyway, I think you can capture all of the fiddly parts and the proof-by-contradiction inside the Heine-Borel Theorem instead of using the Baire Category Theorem, which seems to be a bit more heavy-weight.

The essence is to show that for any δ > 0, there is a single N_δ such that |f(nX)| < δ for X ∈ [0,1] and n > N_δ. The conclusion follows from the fact that any Real > N_δ is something in [0,1] times an integer bigger than N_δ, and therefore |f| will be < δ for this Real too.

Let X ∈ Real. We are given that f(nX) -> 0 as n -> ∞. More formally, ∀ δ > 0, ∀ X ∈ Real, ∃ N_δ,X ∈ Natural such that |f(nX)| < δ ∀ n > N_δ,X. Since f is continuous there is an open set, O_δ,X containing X such that |f(N_δ,X y)| < δ ∀ y ∈ O_δ,X (it is the inverse image of (-δ, δ)).

For a fixed δ, {O_δ,X : X ∈ [0,1]} gives us an open cover of [0,1] and Heine-Borel tells us that there is a finite subcover. This means we have a finite set of O_δ,X, each with an N_δ,X, and we can take the maximum N_δ,X and call it N_δ. This is a Natural such that |f(nX)| < δ ∀ X ∈ [0,1], ∀ n > N_δ.

Now let X ∈ Real, X > N_δ. Let N = [X] + 1, where [] is the greatest integer function, and let θ = X/N. Note that θ ∈ [0,1] and N > N_δ. So |f(Nθ)| < δ. That is, |f(X)| < δ.

So all you need is the compactness of [0,1] and the fact that you can generate all of Real+ from [0,1] and multiplication by Natural. So it seems that this is true for Complex or Real^N etc., where you take x -> ∞ to mean |x| -> ∞ and you use [-1,1] instead of [0,1].

If Real itself was compact you could skip the scaling step entirely. I think rephrasing the question in terms of Real ∪ {-∞, ∞} makes it almost trivial but I must admit I'm going beyond what I can remember from my student days, I expect someone will tell me that rephrasing it like that causes something else to break.

I think it would be reasonable for a theorem prover to start by choosing a δ to aim for and use continuity to construct the cover of Real from the inverse images of (-δ, δ). It would get stuck then and I don't know whether there is a reasonable automated bridge to the solution from there.

Sunday, April 20, 2008

Listing the rational numbers

Here's a nice series of posts about how to list every rational number in its lowest terms, with no repeats (i.e. only include 2/5 not 4/10). The key is to make a tree with 1/1 as the root and, for each node i/j, put (i+j)/j as the left child and i/(i+j) as the right child. It's all laid out very clearly in the series of posts, which is based on a terse maths paper.
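A quick way to convince yourself is to walk the tree breadth-first and check that everything that comes out is in lowest terms with no repeats (my own sketch of the construction described above):

```python
from collections import deque
from math import gcd

def first_rationals(count):
    """First `count` rationals from the tree: root 1/1; node i/j has
    children (i+j)/j on the left and i/(i+j) on the right."""
    out = []
    queue = deque([(1, 1)])
    while len(out) < count:
        i, j = queue.popleft()
        out.append((i, j))
        queue.append((i + j, j))   # left child
        queue.append((i, i + j))   # right child
    return out

rats = first_rationals(1000)
assert all(gcd(i, j) == 1 for i, j in rats)   # all in lowest terms
assert len(set(rats)) == len(rats)            # no repeats
assert rats[:3] == [(1, 1), (2, 1), (1, 2)]
```

Of course this only checks a finite prefix; the linked posts give the actual proof that every positive rational appears exactly once.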

In the comments I found a link to another paper which comes at this using matrices and relates this tree with another previously known tree that also lists the rationals. It's a fairly readable paper too but a bit more technical.