• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Math Quiz Thread

Here's one. Consider a family of polynomials yn(x) with degree n in x: constant, linear, quadratic, cubic, quartic, quintic, ...

They satisfy the differential equation p2(x)*y'' + p1(x)*y' + (p0(x) + q(n))*y = 0

From the y's being those polynomials, what constraints can one find for p0, p1, p2, and q?

What kind of constraints are you looking for?
 
Hints:

Try plugging in y0(x) = y00, a nonzero constant. What does that differential equation become? What does it tell us about p0, p1, p2, and q?

Once you've done that, try plugging in y1(x) = y11*x + y10, where y11 is nonzero.

Then try plugging in y2(x) = y22*x^2 + y21*x + y20, where y22 is nonzero.
 
I was more asking if there is a big reveal at the end, like "they're all zero!".

If you're just looking for the form of the solutions, p0 is constant, p1 is linear, p2 is quadratic, and q is quadratic. If the y's are given, each of p0, p1, and p2 is determined by the appropriate coefficients of y and values of q.
 
That's pretty much right.

Here's a proof.

Start off with y0(x). It gives us (p0(x) + q(0)) = 0. Without loss of generality, we can set q(0) = 0. That makes p0 = 0 also.

Next is y1(x). It gives us p1(x)*y11 + q(1)*(y11*x + y10) = 0. So p1(x) is a linear function of x: p11*x + p10 where p11 is nonzero.

Then y2(x). It gives us 2*p2(x)*y22 + p1(x)*(2*y22*x + y21) + q(2)*(y22*x^2 + y21*x + y20) = 0

Thus, p2(x) is at most quadratic: p22*x^2 + p21*x + p20

However, p2(x) can be linear, nonzero constant, or zero.

Let's now work with the coefficient of x^n in yn(x). Looking at the coefficient of x^n in the differential equation, we get
q(n) = -n*(p22*(n-1) + p11)

So q is quadratic in n if p2 is quadratic, and linear in n otherwise.

Thus, from y1(x), y2(x), q(1), and q(2), one gets p2(x), p1(x), and q(n), thus determining the solutions of

p2(x)*y'' + p1(x)*y' + q(n)*y = 0

to within integration constants. For n >= 3, can it be shown that it has polynomial solutions with degree n?
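For a spot check of this setup, here is a sketch in Python, using the classical Legendre equation (1 - x^2)*y'' - 2*x*y' + n*(n+1)*y = 0 as a known instance (p2 quadratic, q(n) quadratic). It verifies with exact rational arithmetic that the degree-n Legendre polynomials satisfy it:

```python
from fractions import Fraction

def legendre(n):
    # coefficients (ascending powers) of P_n via the standard recurrence
    # (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    p_prev, p_cur = [Fraction(1)], [Fraction(0), Fraction(1)]
    if n == 0:
        return p_prev
    for k in range(1, n):
        nxt = [Fraction(0)] + [Fraction(2*k + 1, k + 1) * c for c in p_cur]
        for i, c in enumerate(p_prev):
            nxt[i] -= Fraction(k, k + 1) * c
        p_prev, p_cur = p_cur, nxt
    return p_cur

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [Fraction(0)]

def satisfies_de(n):
    # residual of (1 - x^2) y'' - 2x y' + n(n+1) y for y = P_n
    y = legendre(n)
    yp, ypp = deriv(y), deriv(deriv(y))
    r = [Fraction(0)] * (len(y) + 2)
    for i, c in enumerate(ypp):
        r[i] += c                # 1 * y''
        r[i + 2] -= c            # -x^2 * y''
    for i, c in enumerate(yp):
        r[i + 1] -= 2 * c        # -2x * y'
    for i, c in enumerate(y):
        r[i] += n * (n + 1) * c  # q(n) = n(n+1)
    return all(c == 0 for c in r)

assert all(satisfies_de(n) for n in range(8))
```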
 
I'll now show that one does indeed get a polynomial for every n.

Try a polynomial solution: y = sum_k c(k)*x^k

Plugging it in gets this recurrence equation for the c(k)'s:

(k-n)*(p22*(k+n-1) + p11)*c(k) + (k+1)*(p21*k + p10)*c(k+1) + (k+1)*(k+2)*p20*c(k+2) = 0

First, consider the case where p20 is zero. In general, one gets a polynomial solution with c(1) ... c(n) being multiples of c(0) and with c(n+1) and all later coefficients zero. Thus, a degree-n polynomial. There is an exceptional case: if p10 = -p21*m for some nonnegative integer m, then the recurrence breaks down at k = m (the coefficient of c(k+1) vanishes), thus limiting what polynomial solutions are possible. One may be able to start a polynomial at k = m+1, however.

But if p20 is nonzero, then the c's will be a sum of multiples of c(0) and c(1). If a solution has nonzero c(n+1), it will be an infinite series. But in that case, there is a way of avoiding that and getting a polynomial with degree at most n. Consider two solutions:
c(0) = 1, c(1) = 0 -- c(k) = c0(k)
c(0) = 0, c(1) = 1 -- c(k) = c1(k)
The polynomial solution is c(k) = c0(k)*c1(n+1) - c1(k)*c0(n+1), provided at least one of c0(n+1) and c1(n+1) is nonzero. If they are both zero, then there are two polynomial solutions.
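To make the p20-nonzero construction concrete, here is a sketch specialized to the Legendre equation (1 - x^2)*y'' - 2*x*y' + n*(n+1)*y = 0, whose textbook series recurrence is c(k+2) = (k-n)*(k+n+1)*c(k)/((k+1)*(k+2)). For n = 4 the even series terminates on its own, the odd series does not, and the combination above reproduces the polynomial:

```python
from fractions import Fraction

def series(n, c0, c1, kmax):
    # coefficients of a Legendre-equation solution from
    # c(k+2) = (k-n)(k+n+1) c(k) / ((k+1)(k+2))
    c = [Fraction(c0), Fraction(c1)]
    for k in range(kmax - 1):
        c.append(Fraction((k - n) * (k + n + 1), (k + 1) * (k + 2)) * c[k])
    return c

n = 4
even = series(n, 1, 0, 12)   # c(0)=1, c(1)=0 -- terminates at degree n
odd  = series(n, 0, 1, 12)   # c(0)=0, c(1)=1 -- infinite series
# the combination kills c(n+1) and everything after it:
poly = [e * odd[n + 1] - o * even[n + 1] for e, o in zip(even, odd)]
assert all(c == 0 for c in poly[n + 1:])
assert poly[n] != 0          # a genuine degree-n polynomial
```

The resulting coefficients are proportional to those of the Legendre polynomial P4, as expected.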

Finally, are there any families of polynomials that are special cases of the polynomials that I have described?
 
Evaluate \(\int_0^1 \int_0^1 n(1-xy)^{n-1} dxdy\)
It's easy to integrate over one of the variables.
\(\int_0^1 \frac{1 - (1-x)^n}{x} dx\)
Change variables: x -> 1-x. This integral becomes
\(\int_0^1 \frac{1 - x^n}{1-x} dx\)
The integrand becomes 1 + x + x^2 + ... + x^(n-1). Integrating it gives
\(\sum_{k=1}^n \frac1k\)
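The expansion-and-termwise-integration step can be checked exactly with rational arithmetic; a minimal sketch, using n = 6 as an arbitrary test value:

```python
from fractions import Fraction

n = 6
geom = [Fraction(1)] * n                 # 1 + x + ... + x^(n-1)

# multiply by (1 - x) and confirm we recover 1 - x^n
prod = [Fraction(0)] * (n + 1)
for i, c in enumerate(geom):
    prod[i] += c          # 1 * x^i
    prod[i + 1] -= c      # -x * x^i
assert prod == [Fraction(1)] + [Fraction(0)] * (n - 1) + [Fraction(-1)]

# integrate the geometric sum over [0,1] term by term: sum of 1/(k+1)
integral = sum(c / (k + 1) for k, c in enumerate(geom))
assert integral == sum(Fraction(1, k) for k in range(1, n + 1))
```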

Evaluate \(\int_0^\infty \frac{\ln x}{e^x} dx\)
One can do this by integration by parts:
\(\int \frac{\ln x}{e^x} dx = - (\ln x)e^{-x} + \int \frac{e^{-x}}{x} dx\)
giving
\(\int_x^\infty \frac{\ln t}{e^t} dt = (\ln x)e^{-x} + E_1(x)\)
That latter function is the exponential integral E1. Its series expansion gives
\(\int_0^\infty \frac{\ln x}{e^x} dx = - \gamma\)
where γ is the Euler-Mascheroni constant, about 0.577...
 

Yup. The second one can also be done using only the limit definition of e, and without needing knowledge of the exponential integral. First, split the integral and integrate by parts in each piece.

\(\int_0^\infty e^{-x} \ln x dx = \int_0^1 e^{-x} \ln x dx + \int_1^\infty e^{-x}\ln x dx = \int_0^1 \frac{e^{-x}-1}{x} dx + \int_1^\infty \frac{e^{-x}}{x} dx\)

Substituting in the limit definition of \(e\) and changing variables \(t = \frac{x}{n},\qquad s = 1 - t\):

\(\int_0^\infty e^{-x} \ln x dx= \lim_{n\to\infty}\left[ \int_0^1 \frac{(1-\frac{x}{n})^n-1}{x} dx + \int_1^n\frac{(1-\frac{x}{n})^n}{x} dx\right] = \lim_{n\to\infty}\left[ \int_0^{\frac{1}{n}} \frac{(1-t)^n-1}{t} dt + \int_{\frac{1}{n}}^1\frac{(1-t)^n}{t} dt\right]= \lim_{n\to\infty}\left[ \int_0^1 \frac{(1-t)^n-1}{t} dt + \int_{\frac{1}{n}}^1\frac{1}{t} dt\right] = \lim_{n\to\infty}\left[ -\int_0^1 \frac{1-s^n}{1-s} ds - \ln(\frac{1}{n})\right]\)

This becomes

\(\int_0^\infty e^{-x} \ln x dx= \lim_{n\to\infty}\left[\ln(n)- \int_0^1 1 + s + s^2 +\dots+s^{n-1} ds \right] = \lim_{n\to\infty}\left[\ln(n) -(1 + \frac{1}{2} + \dots + \frac{1}{n})\right] = -\gamma\)
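That final limit is easy to check numerically; a minimal sketch (n = 10^6 is an arbitrary cutoff, and the error of H_n - ln n is about 1/(2n)):

```python
from math import log

gamma = 0.5772156649015329   # Euler-Mascheroni constant
n = 10**6
h = sum(1.0 / k for k in range(1, n + 1))   # harmonic number H_n
# H_n - ln n -> gamma as n -> infinity
assert abs((h - log(n)) - gamma) < 1e-5
```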
 
Thanx. Good derivation.

Now for what sorts of polynomials are in that differential equation.
  • p2 = 0. Solution is yn(x) = (constant) * (p1(x))^n
  • p2 is constant: Hermite polynomials
  • p2 is linear: associated Laguerre polynomials -> plain Laguerre polynomials
  • p2 is quadratic: Jacobi polynomials -> Gegenbauer polynomials -> Legendre polynomials, Chebyshev polynomials
For nonzero p2, the polynomials have some interesting definitions and interrelationships and orthogonality properties.
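As a check on the p2-constant entry, here is a quick sketch verifying that the physicists' Hermite polynomials (built from the standard recurrence H_{n+1} = 2x*H_n - 2n*H_{n-1}) satisfy y'' - 2x*y' + 2n*y = 0:

```python
def hermite(n):
    # coefficients (ascending) of the physicists' Hermite polynomial H_n
    h0, h1 = [1], [0, 2]
    if n == 0:
        return h0
    for k in range(1, n):
        nxt = [0] + [2 * c for c in h1]   # 2x * H_k
        for i, c in enumerate(h0):
            nxt[i] -= 2 * k * c           # - 2k * H_{k-1}
        h0, h1 = h1, nxt
    return h1

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def check(n):
    # residual of y'' - 2x y' + 2n y for y = H_n
    y = hermite(n)
    yp, ypp = deriv(y), deriv(deriv(y))
    r = [0] * (len(y) + 1)
    for i, c in enumerate(ypp):
        r[i] += c             # p2 = 1
    for i, c in enumerate(yp):
        r[i + 1] -= 2 * c     # p1 = -2x
    for i, c in enumerate(y):
        r[i] += 2 * n * c     # q(n) = 2n
    return all(c == 0 for c in r)

assert all(check(n) for n in range(8))
```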
 
I've found some pages on "neutral geometry": theorems-euclid-hyper, theorems-plane-geom

Let's evaluate them using a space / manifold with a differentiable metric and appropriate definitions of "line", "distance", and "angle". Like geodesic curve for line.

Axiom 1 (The Set Postulate). The manifold and its submanifolds, like lines, are sets of points. Check.

Axiom 2 (The Existence Postulate). At least two points exist. Check.

Axiom 3 (The Incidence Postulate). One and only one line (geodesic) goes between any two points. True locally, but not necessarily true globally. For a sphere, two antipodal points have an infinite number of great circles going through them. For a rectangle with periodic boundary conditions, every pair of points has an infinite number of lines going through it.

Axiom 4 (The Distance Postulate). Between any two points P and Q there is a single value of the distance. Each geodesic between them may have its own distance value; if P and Q satisfy the Incidence Postulate, then the distance will be unique.

Axiom 5 (The Ruler Postulate). For points P and Q, distance(P,Q) = |f(Q) - f(P)| for some distance function f. Each geodesic between P and Q will have its own distance function; if P and Q satisfy the Incidence Postulate, then it will be unique.

Axiom 6 (The Plane Separation Postulate). A line always divides a plane into two sides. It can be generalized to an n-dimensional manifold by considering a submanifold S of it. If S satisfies this postulate, then it divides the rest of the manifold into two disjoint sets of points, the two side sets. Consider two points P and Q not in S and count how many times a line segment between them crosses S.
  • In the same side set <-> even
  • In different side sets <-> odd
S must have one less dimension than the original manifold for this to happen. It will hold locally, but not necessarily globally, as in a manifold with periodic boundary conditions or a Moebius-strip manifold.

Axiom 7 (The Angle Measure Postulate). Every angle has a value determined by the directions of the lines from it. For a manifold with a metric g, one defines an angle between two geodesic tangents t1 and t2 at some point with

\(\cos a = \frac{t_1.g.t_2}{\sqrt{(t_1.g.t_1)\cdot(t_2.g.t_2)}}\)

Axiom 8 (The Protractor Postulate). For a half-rotation around point O from point A to antipodal point A', the angle between lines out to points B and C in that half-rotation has a value given by |h(OC) - h(OB)|, where h is some angle function (written h here to avoid a clash with the metric g).

Let OA have tangent vector t0 and an in-between line have tangent vector t1. Then OB has tangent vector tb = a10*t0 + a11*t1 and OC has tangent vector tc = a20*t0 + a21*t1. Set h(OB) = value of angle AOB and h(OC) = value of angle AOC. Using the above formula for an angle, I was able to verify it without making the metric locally Euclidean.

Axiom 9 (The SAS Postulate). Consider a triangle given by two side lengths and the value of the angle between those sides, the SAS values. One can find the remaining side length and angle values from these. This postulate states that every triangle that shares SAS values will share the remaining values. That's locally true but not necessarily true globally.

What constraints on the manifold's metric can one find? One creates a slightly nonlocal triangle, by taking into account the effects of curvature to first order. That simplifies the solution process. In the case of constant curvature in 2D, the answer is
(sum of angles of triangle) = 180° + (curvature) * (area of triangle)

This will be true in the small limit, of course, and one can use it to come up with a general formula using the manifold's metric's Riemann curvature tensor.

Vertex offsets:
x12 = x2 - x1
x13 = x3 - x1

"Area tensor"
Aij = (1/2)*(x12i*x13j - x13i*x12j)
Raised indices don't mean powers here; that's a common differential-geometry convention

Area
A = sqrt((x12.g.x12)*(x13.g.x13) - (x12.g.x13)^2)
for metric tensor g

Angle excess = (1/4) * Rijkl*Aij*Akl / A

For the SAS postulate to hold, this must equal K*A for some constant K, since one can make a triangle anywhere. That means
Rijkl*Aij*Akl = K*A^2
Take x12 along coordinate axis 1 and x13 along coordinate axis 2. Then,
R1212 = K*(g11*g22 - (g12)^2)

More generally, Rijkl = K*(gik*gjl - gil*gjk)

In 2D, the Riemann tensor will always have this form, while in more than 2 dimensions, the Bianchi identity implies that K is constant. So in general, the SAS postulate implies maximal symmetry in all numbers of dimensions.
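As a sanity check with a standard example: for the unit sphere with metric \(ds^2 = d\theta^2 + \sin^2\theta \, d\phi^2\) (indices 1 = θ, 2 = φ), the only independent Riemann component is \(R_{1212} = \sin^2\theta\), while \(g_{11} g_{22} - (g_{12})^2 = \sin^2\theta\). So \(R_{1212} = K(g_{11}g_{22} - (g_{12})^2)\) with \(K = 1\): constant curvature, as expected for a surface on which SAS holds locally.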
 
Good. It should be easy to prove.

Here's another one.

The Fibonacci series fib(n) is defined as

fib(0) = 0, fib(1) = 1, fib(n) = fib(n-1) + fib(n-2)

Many of us are familiar with the series for positive n. But what is the series for negative n? Is it related to the series for positive n?
 
What about Binet's formula?
\(\frac{\Phi^n-\phi^n}{\sqrt{5}}\)

the Phi's don't look right here, sorry... it's the golden ratio \(\Phi\) and its conjugate \(\phi\)
In that formula, what do those letters mean?

Also, it ought to be possible to derive what I was asking about. How is fib(-n) related to fib(n)?
 
Ok, I rewrote it above. A bit more on it here:

I was very into the Fibonacci sequence and the golden ratio 4-5 years ago. Still love the golden ratio.... and how it pops up!

Messed around with the Pell and Lucas numbers as well- more interesting number sequences that can be generated with a similar type of formula (I believe they are called "Binet type formulas" after Jacques Philippe Marie Binet). I worked on some generic formulas that generate number sequences. Should probably look through my old work- I think I could generate a bunch of different number sequences with the generic formulas I worked on.

Here is the Binet type formula for the Fibonacci sequence (Phi and phi being the golden ratio and its conjugate):

\(\frac{\Phi^n-\phi^n}{\sqrt{5}}\)

Do you remember the formula that I posted, with the nested radicals (pi was with 2...)?

\( \sqrt[n]{x^n-x+\sqrt[n]{x^n-x+\sqrt[n]{x^n-x+\ldots}}} \)

Here was your response:


The solution is essentially recursive:
\( y = \sqrt[n]{x^n-x+y} \)
or
\( x^n - y^n = x - y \)
Thus, a solution is y = x.

What important number do you get from the following:

x=2, n=2, a = the total number of radicals; take the limit as a grows, rather than an infinite number of nestings
note: corrected the following equation, it was missing ()
\( \sqrt[n]{\left( x-\sqrt[n]{x^n-x+\sqrt[n]{x^n-x+\sqrt[n]{x^n-x+\ldots}}}\right) \times (n \times x^{n-1})^a } \)

Note that what you multiply the radicals by is simply the derivative of x^n...
I find that it converges to a value independent of a, but I cannot proceed any further.


What about this one, B > 1, x > 0:
\(\log_B(B^x-x + \log_B(B^x-x + \log_B(B^x-x+\ldots)))\)


Recursion again.
\(y = \log_B(B^x - x + y)\)
giving
\(B^y - B^x = y - x\)
or y = x again.
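Both fixed points can be checked numerically; a minimal sketch, with x = 1.5, n = 3, and B = 2 as arbitrary test values:

```python
from math import log

# iterating y <- (x^n - x + y)^(1/n) should settle at y = x
x, n = 1.5, 3
y = 0.0
for _ in range(200):
    y = (x**n - x + y) ** (1.0 / n)
assert abs(y - x) < 1e-9

# iterating y <- log_B(B^x - x + y) should also settle at y = x
B = 2.0
z = 0.0
for _ in range(200):
    z = log(B**x - x + z, B)
assert abs(z - x) < 1e-9
```

Both maps are contractions near y = x for these values, so the iterations converge.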






Well, if you input the golden ratio into x^2-x you get 1, which means that infinitely nested \(\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}\) gives you the golden ratio.

Likewise, the plastic number that satisfies x^3-x=1 can be generated by \(\sqrt[3]{1+\sqrt[3]{1+\sqrt[3]{1+\cdots}}}\)
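Both limits are easy to check by iterating the nested radical in Python (a minimal sketch; 100 iterations is an arbitrary cutoff):

```python
# sqrt(1 + sqrt(1 + ...)) -> golden ratio
y = 1.0
for _ in range(100):
    y = (1.0 + y) ** 0.5
assert abs(y**2 - y - 1.0) < 1e-12          # y satisfies x^2 - x = 1

# cbrt(1 + cbrt(1 + ...)) -> plastic number
p = 1.0
for _ in range(100):
    p = (1.0 + p) ** (1.0 / 3.0)
assert abs(p**3 - p - 1.0) < 1e-12          # p satisfies x^3 - x = 1

assert abs(y - 1.6180339887) < 1e-9
assert abs(p - 1.3247179572) < 1e-9
```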

 
For Kharakov:

That does not answer the question.

What are the Φ and φ in your formula? Could you please write out explicit expressions for them?

You have included a lot of irrelevant stuff in your most recent solution, but you still have not found a formula for fib(negative) in terms of fib(positive). There is one, and a very simple one.

 

Sure that it doesn't answer the question? :D

The golden ratio is \(\Phi=\frac{1+\sqrt{5}}{2}\) and its conjugate is \(\phi=\frac{1-\sqrt{5}}{2}\)

You can use this Wolfram Alpha script to generate the Fibonacci numbers. Just change n to whatever place in the sequence you want. n=6 will give you 8, n=-6 will give you -8, n=7 or -7 will give you 13, etc. etc..

Ok, having a problem with cross scripting protection. Hehe...

So, go to Wolfram Alpha and input this script into the parser:

[((sqrt(5)+1)/2)^n - ((1-sqrt(5))/2)^n]/sqrt(5); n=6

Change n to vary which Fibonacci number you get.

The formula generates the Fibonacci and negaFibonacci sequences.
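That claim is easy to check in a few lines of Python (a minimal sketch; fib_binet is just an illustrative name, rounding the floating-point Binet value to the nearest integer):

```python
from math import sqrt

def fib_binet(n):
    # Binet's formula; works for negative n too
    Phi = (1 + sqrt(5)) / 2
    phi = (1 - sqrt(5)) / 2
    return round((Phi**n - phi**n) / sqrt(5))

# the recurrence, extended in both directions
fibs = {0: 0, 1: 1}
for n in range(1, 12):
    fibs[n + 1] = fibs[n] + fibs[n - 1]
for n in range(0, -12, -1):
    fibs[n - 1] = fibs[n + 1] - fibs[n]   # reversed recurrence

assert all(fib_binet(n) == fibs[n] for n in range(-11, 12))
assert fib_binet(6) == 8 and fib_binet(-6) == -8
assert fib_binet(7) == 13 and fib_binet(-7) == 13
```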

 
More on my Fibonacci-series question:

I'll give the solution.

Consider the Fibonacci recurrence again.
fib(n) = fib(n-1) + fib(n-2)
Now reverse its direction:
fib(n-2) = - fib(n-1) + fib(n)
Substitute n -> 2-n:
fib(-n) = - fib(-n+1) + fib(-n+2)
Or, multiplying through by (-1)^(-n),
(-1)^(-n)*fib(-n) = (-1)^(-n+1)*fib(-n+1) + (-1)^(-n+2)*fib(-n+2)

So (-1)^n*fib(-n) obeys the same recurrence as fib(n).

Let's now consider the starting values.
fib(1) = fib(0) + fib(-1)
fib(-1) = 1
fib(0) = fib(-1) + fib(-2)
fib(-2) = -1

So fib(-1) = fib(1) and fib(-2) = - fib(2)
Since (-1)^n*fib(-n) satisfies the same recurrence, and its values at n = 1 and n = 2 are -fib(1) and -fib(2), we have

fib(-n) = (-1)^(n+1)*fib(n)

That's what I was looking for.

One can also prove this result from Binet's formula.
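Sketch of that Binet-formula proof, using \(\Phi \phi = -1\) (so \(\Phi^{-1} = -\phi\) and \(\phi^{-1} = -\Phi\)):

\(fib(-n) = \frac{\Phi^{-n} - \phi^{-n}}{\sqrt{5}} = \frac{(-1)^n \phi^n - (-1)^n \Phi^n}{\sqrt{5}} = (-1)^{n+1} \frac{\Phi^n - \phi^n}{\sqrt{5}} = (-1)^{n+1} fib(n)\)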

 
K, I thought that was implied by what you asked.

I like these 2 general forms of the Binet type formulas. They generate the Fibs, the Pell/Lucas numbers, etc.

\( G_n= \frac { (\frac{a+\sqrt{a^2+4}}{2})^n - (\frac{a-\sqrt{a^2+4}}{2})^n} {\sqrt{a^2+4}} \)

And the other general form simply switching the denominator (although above a=3... you'll see):

\( G_n= \frac { (\frac{a+\sqrt{a^2+4}}{2})^n - (\frac{a-\sqrt{a^2+4}}{2})^n} {a} \)
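A quick sketch checking the first general form against known sequences (the function name G is taken from the formula above; a = 1 should give the Fibonacci numbers and a = 2 the Pell numbers):

```python
from math import sqrt

def G(n, a):
    # first general Binet-type form, rounded to the nearest integer
    d = sqrt(a * a + 4)
    return round((((a + d) / 2)**n - ((a - d) / 2)**n) / d)

assert [G(n, 1) for n in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]    # Fibonacci
assert [G(n, 2) for n in range(8)] == [0, 1, 2, 5, 12, 29, 70, 169]  # Pell
```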


 