• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Playing with Willans Formulae and Analytic Step Function Approximations (and actual step functions)

Jarhyn
So, recently, I've been playing with analytical tools to do some dumb shit.

Willans-style formulae are functions that check for primality by brute force, testing a number against every potential divisor via a product.

Currently, I have been playing with a constructible waveform as follows:
\(p\left(x,x_{1},x_{2}\right)\ =\tanh\left(-k\left(x-\left(2x_{1}\right)\left(x_{2}+2\right)\right)\right)-\tanh\left(-k\left(x-\left(2x_{1}+1\right)\left(x_{2}+2\right)\right)\right)\)

The purple is this:
\(p\left(x,0,0\right)\)

The green is this:
\(\left(1+\sum_{x_{1}=-s}^{s}p\left(x,x_{1},0\right)-p\left(x,0,0\right)\right)\)
desmos-graph.png

As can be seen, this function can be provoked to have zeros at chosen places.
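A minimal numeric sketch of that pulse in Python (k = 10 is an arbitrary "large" choice, not a value from the graphs): the difference of two shifted tanh steps digs a dip of depth about 2 between \(2x_{1}\left(x_{2}+2\right)\) and \(\left(2x_{1}+1\right)\left(x_{2}+2\right)\), and is near zero elsewhere.

```python
import math

def p(x, x1, x2, k=10.0):
    # difference of two shifted tanh steps; for large k this is ~ -2 on the
    # interval (2*x1*(x2+2), (2*x1+1)*(x2+2)) and ~ 0 outside it
    left = 2 * x1 * (x2 + 2)
    right = (2 * x1 + 1) * (x2 + 2)
    return math.tanh(-k * (x - left)) - math.tanh(-k * (x - right))

# p(x, 0, 0) dips on (0, 2): deep inside the interval, flat outside it
print(round(p(1.0, 0, 0), 6))  # close to -2
print(round(p(5.0, 0, 0), 6))  # close to 0
```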

Recently, I also discovered THIS gem, which can be provoked to produce an even more exact structure:
\(k\arcsin\left(\sin\left(x\right)\right)+k\arcsin\left(\sin\left(x+v\right)\right)\)
As can be seen in the following graph, the resulting wave is trapezoidal. At sufficiently high k, with v close to but less than pi, the wave becomes square; however, using this to construct a Willans/Eratosthenes prime counter requires being able to take a bite out of it to make the product.
Unfortunately, I've been having a hard time finding a single sum that represents a single "bite" out of such a trapezoidal system:
desmos-graph(1).png
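A quick Python check of the trapezoid claim (the k and v values below are just illustrative): arcsin(sin(x)) is a triangle wave, and summing two copies offset by a v just under pi flattens the top into a plateau of height k(pi − v).

```python
import math

def tri(x):
    # arcsin(sin x): triangle wave, equal to x on [-pi/2, pi/2],
    # reflected on each adjacent half-period
    return math.asin(math.sin(x))

def trap(x, k=5.0, v=3.0):
    # two offset triangle waves; for v just under pi the opposing branches
    # cancel into a flat top of height k*(pi - v) around x = 0
    return k * tri(x) + k * tri(x + v)

# the top really is flat: the same value across the plateau
print(round(trap(0.0), 6), round(trap(0.5), 6))
```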

Is there even some function that continuously defines a line with a single trapezoid out of it? I think it would have to be the sum of two functions that define a single "zag" of a flat line the way the tanh sum does a "steplike shape", but I don't know any way to precisely define just one. It would dramatically speed up the productization time, and would allow an error-free infinite sum.

This connects to Willans methods in the following way:
\(\prod_{n=0}^{s}\left(1+\sum_{x_{1}=-s}^{s}p\left(x,x_{1},n\right)-p\left(x,0,n\right)\right)^{2}\)
\(\sum_{n_{2}=2}^{x}\prod_{n=0}^{s}\left(1+\sum_{x_{1}=-s}^{s}p\left(n_{2},x_{1},n\right)-p\left(n_{2},0,n\right)\right)^{2}\)
desmos-graph(3).png
desmos-graph(4).png
Such functions, once a bite is taken out of them, represent "is divisible by" as a statement through their zeros. By squaring such "bitten" wave functions, you can then productize them, representing "is divisible by any prior number".

The trapezoidal function seems the best fit for this, since it doesn't have error past the "shoulder". While the productization is still expensive, this allows placing the flats so that the shoulders can't interfere with the zero-ness of non-prime values between them, nor the one-ness at primes. I suspect the trapezoidal function will require high values of k at arbitrarily high values of x, and arbitrarily precise measurements of pi, to keep the "shoulders" of higher "multiple waves" from intruding, and that's assuming there's a function I can use to take a bite out of the trapezoidal wave in the first place.

Finally, there's another function, namely the derivative of arcsin(sin(x)), which because of its integral's triangular shape represents a step function, though it is undefined at certain points (unless you interpret the undefined points as zeroes, in which case the Willans formula works just as well, assuming you can do the "bite" operation). If you could make the square wave asymmetrical, though, you wouldn't even need that:
\(\frac{\cos x}{\sqrt{1-\sin^{2}x}}\)
desmos-graph(6).png
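Numerically (a small Python check of the expression above), cos(x)/sqrt(1 − sin²x) reduces to cos(x)/|cos(x)|: a ±1 square wave that is undefined only at the triangle wave's corners, where cos(x) = 0.

```python
import math

def step(x):
    # derivative of arcsin(sin x): cos x / sqrt(1 - sin^2 x) = cos x / |cos x|
    # = +1 on rising segments, -1 on falling ones; undefined at the corners
    return math.cos(x) / math.sqrt(1 - math.sin(x) ** 2)

print(round(step(1.0), 6))  # rising branch of the triangle: +1
print(round(step(2.0), 6))  # falling branch: -1
```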
 
Nicely done. Digital signals are quasi-trapezoidal, with no sharp discontinuities at the transition points. So a trapezoid with soft transitions is useful in simulations.
 
Well, it's more that I don't even really know what I did. I did realize last night how to make a "trapezoid parked on top of a line", as follows, using a difference between absolute values.

This SHOULD have a provable identity with a min/maxing of the absolute value function and the equations of two lines, letting me step away from tanh completely and use arcsin(sin(x+v)) + arcsin(sin(x-v)) as the wave carrier, eliminating approximation and making the inner function of the Willans product precise. This will hopefully make the visualization much quicker, granted I have a long slog ahead in isolating the relationships between the slopes of the trapezoidal wave, the size of v, the width of the trapezoidal wave, and all those same parameters in the "single bite", as there's going to be a very weird relationship between them.

I should be able to post the identities between sums of the min/max trapezoid and the arcsin(sin(x)) functions.

I don't know whether there would be value in converting the tanh sum into a sine sum, though there is an identity to be discovered there, I think, and it would probably have some analytical use: as the k of a tanh product becomes small, you can add a magnitude factor to the infinite product, and as k approaches zero, its shape approaches "sine shape" rather than "square shape".

At infinite k, tanh products become square, and at infinitesimal k they become sinusoidal, albeit requiring an inversely large factor to bring their amplitude up to sine's. So something like (1/k)·prod(tanh(kx+v)) (with additional modifiers on k and v, both inside and out) approaches something like sin(x).
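Here's a tiny Python check of just those two limits (not the full product identity, which isn't verified here): tanh(kx) saturates into a step at large k, and is linear at small k, so tanh(kx)/k → x.

```python
import math

x = 0.7
# large k: tanh saturates, the square/step regime
big = math.tanh(1e4 * x)
# small k: tanh is locally linear, so dividing by k recovers x itself
small = math.tanh(1e-4 * x) / 1e-4

print(round(big, 6), round(small, 6))  # near 1.0 and near 0.7
```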

I'll post examples later today if you're interested.

I can see why people get lost in number theory sometimes. The puzzles are fun, though admittedly I'm approaching them more like a software engineer than a mathematician: I don't care about proving those identities rigorously.

Usually, I just put both functions in Desmos and fiddle with factors until sliding a slider no longer "separates" them. I know I could set the two equal to one another and solve for 0=0 to prove any of it but some of these things would involve pathways I don't understand, like how to actually prove convergences between infinite products and "flatter" functions.

Anyway, it's Saturday, so I'm going to try discovering the sum identity for arcsin(sin(x+v)) + arcsin(sin(x-v)).
 
Ok, I did find a way to express a trapezoid on a line using the analytical max and min functions as follows:
\(f\left(x\right)=\left|x\right|\)
\(m\left(x\right)=\frac{f(x)+g-\left|f(x)-g\right|}{2}\)
\(Trapezoid\left(x\right)=\frac{m(x)+C+\left|m(x)-C\right|}{2}\)
where C determines the cutoff from the point and g determines the cutoff at the top.

desmos-graph(8).png

and a singleton:
\(\left|x+\frac{1}{2}\right|-\left|x-\frac{1}{2}\right|\)
desmos-graph(9).png
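These identities are easy to sanity-check in Python; the functions below mirror m(x) and Trapezoid(x) above (the g and C values in the check are arbitrary), and the assertion confirms they agree with plain min/max clipping.

```python
def analytic_min(a, b):
    # min(a, b) = (a + b - |a - b|) / 2
    return (a + b - abs(a - b)) / 2

def analytic_max(a, b):
    # max(a, b) = (a + b + |a - b|) / 2
    return (a + b + abs(a - b)) / 2

def trapezoid(x, g, C):
    # |x| cut off at g on top (the flat roof) and raised to C at the point
    return analytic_max(analytic_min(abs(x), g), C)

def singleton(x):
    # |x + 1/2| - |x - 1/2|: a single ramp from -1 to 1, flat outside
    return abs(x + 0.5) - abs(x - 0.5)

# agrees with ordinary clipping everywhere we try
for x in [-5, -2.5, -1, 0, 0.5, 2, 5]:
    assert trapezoid(x, g=3, C=1) == max(min(abs(x), 3), 1)
```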
 
Way back I read a short book, 'How To Read And Do Proofs' by Solow. I needed to be able to follow proofs in engineering texts.


There is the backwards-forwards method.

You can start with something that works and work backwards to a mathematical expression as to why it works, or start with a mathematical expression of why something should and work forward to show it does work.

In practice you go back and forth until you zero in on a solution.
 
More results: apparently the tanh approximation of the trapezoidal shape shares commonalities in where the slopes of the limits place the corners; in the presented function, the slope of the intercept at 0 is the same:
\(a\left(x\right)=\frac{\left|kx+1\right|-\left|kx-1\right|}{2}\)
\(a_{1}\left(x,x_{1},x_{2}\right)=-a\left(x-\left(2x_{1}\right)\left(x_{2}+2\right)\right)\) vs \(p_{1}\left(x,x_{1},x_{2}\right)=\tanh\left(-k\left(x-\left(2x_{1}\right)\left(x_{2}+2\right)\right)\right)\)
\(a_{2}\left(x,x_{1},x_{2}\right)=-a\left(x-\left(2x_{1}+1\right)\left(x_{2}+2\right)\right)\) vs \(p_{2}\left(x,x_{1},x_{2}\right)=\tanh\left(-k\left(x-\left(2x_{1}+1\right)\left(x_{2}+2\right)\right)\right)\)
\(\sum_{n_{2}=2}^{x}\prod_{n=0}^{s}\left(1+\sum_{x_{1}=-s}^{s}A\left(n_{2},x_{1},n\right)-A\left(n_{2},0,n\right)\right)^{2}\) vs \(\sum_{n_{2}=2}^{x}\prod_{n=0}^{s}\left(1+\sum_{x_{1}=-s}^{s}P\left(n_{2},x_{1},n\right)-P\left(n_{2},0,n\right)\right)^{2}\)
\(A\left(x,x_{1},x_{2}\right)\ =a_{1}\left(x,x_{1},x_{2}\right)-a_{2}\left(x,x_{1},x_{2}\right)\) vs \(P\left(x,x_{1},x_{2}\right)\ =p_{1}\left(x,x_{1},x_{2}\right)-p_{2}\left(x,x_{1},x_{2}\right)\)
Screenshot 2024-04-27 at 10-59-15 graph2.png

Now to tie this to the arcsin(sin(x)) difference trapezoidal wave, since I can now just take a simple bite out of that one, albeit the transforms are going to be gross.
 

Displayed are the following:
\(\sum_{n_{2}=2}^{x}\prod_{n=0}^{s}\left|\left(1+\sum_{x_{1}=-s}^{s}A\left(n_{2},x_{1},n\right)-A\left(n_{2},0,n\right)\right)\right|\)
\(\prod_{n=0}^{s}\left|\left(1+\sum_{x_{1}=-s}^{s}A\left(x,x_{1},n\right)-A\left(x,0,n\right)\right)\right|\)
\(\prod_{n=0}^{s}\left|\left(1+\sum_{x_{1}=-s}^{s}P\left(x,x_{1},n\right)-P\left(x,0,n\right)\right)\right|\)
Here k is set to 1. I should have said earlier that s is assumed to be infinity here.

I think the only thing left to do is to move from A to arcsin(sin(x)), so I can eliminate the product and see what it does at high s without actually doing that sum, because such sums-to-produce-waveforms are gross and make the result a mere approximation. Also, I just discovered this function does something REALLY weird at negative k, where the result of the product looks to be at log scale? At k = -1, the function becomes precise. I've never seen anything like it before, and I encourage the reader to hop over to Desmos and try plugging these in for negative k.
desmos-graph(10).png
what happens when K is -1:
desmos-graph(11).png
 

Well, I found the convergence on the arcsin(sin(x)) version and it was a doozy:
Defining \(t\left(x,x_{1},x_{2},x_{3},x_{4}\right)=-x_{4}\left(\frac{2\arcsin\left(\sin\left(\frac{\pi\left(x-\frac{x_{1}}{2}\right)x_{2}}{2}\right)\right)}{\pi}-\frac{2\arcsin\left(\sin\left(\frac{\pi\left(x-\frac{x_{1}}{2}\right)x_{2}}{2}-\pi x_{3}\right)\right)}{\pi}\right)\), the identity is \(1+\sum_{x_{1}=-s}^{s}A\left(x,x_{1},1\right)=t\left(x,-1,\frac{2}{3},1+\frac{1}{3},\frac{3}{4}\right)\)

To increase the indexes, it proceeds as follows:
\(t\left(x,-2,\frac{2}{4},1+\frac{2}{4},\frac{4}{4}\right)\)
\(t\left(x,-3,\frac{2}{5},1+\frac{3}{5},\frac{5}{4}\right)\)
and so on
 
Looks like you ended up with the Fourier series. Any real function has a unique transform.

If you know the Fourier coefficients of a waveform you can synthesize it. Coefficients for common waveforms are on the net. Rectangular and trapezoidal waves are common.
 
OK, I'm going to need to work on simplifying this soon, but I was able to find a parameterized function using the arcsin(sin(x)) operation as a kernel. So now I have a transformation defining the dependent variables of the function t above, for k = 1:
\(a\left(x\right)=\frac{\left|kx+1\right|-\left|kx-1\right|}{2}\)
\(a_{1}\left(x,x_{1},x_{2}\right)=-a\left(x-\left(2x_{1}\right)\left(x_{2}+2\right)\right)\)
\(a_{2}\left(x,x_{1},x_{2}\right)=-a\left(x-\left(2x_{1}+1\right)\left(x_{2}+2\right)\right)\)
\(A\left(x,x_{1},x_{2}\right)=a_{1}\left(x,x_{1},x_{2}\right)-a_{2}\left(x,x_{1},x_{2}\right)\)
\(t\left(x,x_{1},x_{2},x_{3},x_{4}\right)=-x_{4}\left(\frac{2\arcsin\left(\sin\left(\frac{\pi\left(\left(x-\frac{x_{1}}{2}\right)\right)x_{2}}{2}\right)\right)}{\pi}-\frac{2\arcsin\left(\sin\left(\frac{\pi\left(x-\frac{x_{1}}{2}\right)x_{2}}{2}-\pi x_{3}\right)\right)}{\pi}\right)\)
\(T\left(x,x_{1}\right)=t\left(x,-x_{1},\frac{2}{2+x_{1}},1+\frac{x_{1}}{x_{1}+2},\frac{2+x_{1}}{4}\right)\)

Then this defines "is prime", yielding exactly 1 or 0 every time:
\(\prod_{n=0}^{s}\left|\left(T\left(x,n\right)-A\left(x,0,n\right)\right)\right|\)

And this acts as a count thus far:
\(\sum_{n_{2}=2}^{x}\prod_{n=0}^{s}\left|\left(T\left(n_{2},n\right)-A\left(n_{2},0,n\right)\right)\right|\)
And this counts all primes up to s^2.

I haven't figured out how to fully express variation on k in the domain of T yet, though. I very much want to do this because I want to see what the function does when it goes into "negative K mode" at larger numbers.
desmos-graph(12).png

Heh, just realized I can set the upper bound on the product using sqrt(x):
\(\sum_{n_{2}=2}^{x}\prod_{n=0}^{\sqrt{x}}\left|\left(T\left(n_{2},n\right)-A\left(n_{2},0,n\right)\right)\right|\)
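As a sanity check on the overall idea, here's a discrete Python analogue (not the exact A/T functions above, and it assumes k = 1): each trial divisor d contributes a trapezoid-style wave that is 0 at multiples of d and flat at 1 in between, the product over d up to sqrt(x) is the "is prime" indicator, and the outer sum counts primes.

```python
import math

def divis_wave(x, d):
    # 0 at multiples of d, rising with slope 1 and clamped flat at 1 between
    # them -- a discrete stand-in for one "bitten" trapezoidal wave
    m = min(x % d, d - x % d)  # distance to the nearest multiple of d
    return min(m, 1)

def is_prime_indicator(x):
    # product over trial divisors: zero iff some d <= sqrt(x) divides x
    prod = 1.0
    for d in range(2, math.isqrt(x) + 1):
        prod *= abs(divis_wave(x, d))
    return prod

def prime_count(x):
    # analogue of the outer sum from n2 = 2 to x
    return sum(is_prime_indicator(n) for n in range(2, x + 1))

print(prime_count(30))  # → 10.0 (the ten primes up to 30)
```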
 
I don't suppose you know what I found when I found \(\prod_{n=0}^{s}\left|\left(1+\sum_{x_{1}=-s}^{s}A\left(x,x_{1},n\right)-A\left(x,0,n\right)\right)\right|\) with negative k? or how to reverse the synthetic version here into a whole function?
 
I think the answer is the forward and inverse Fourier transform. You can find it worked out for trapezoids on the net.

The inverse transform is used to synthesize waveforms, it is used in synthesizers.

When it comes to waveforms and signals all roads lead to the Fourier Transform and Fourier Series.

Scroll down to trapezoids. You can use the NumPy FFT to demonstrate it.



asin() and sin() are inverse functions, so asin(sin(x)) = x. I expect the derivative is cosine.
 
Well, arcsin and sin are inverse functions, but there's a catch: arcsin essentially "traps" the wave, such that outside of ±pi/2 the slope reflects. Much like the square root of a squared number is positive whether the original was positive or negative, arcsin(sin(x)) similarly produces a slope that is negative instead of positive, so that arcsin(sin(x)) only equals x for -pi/2 < x < pi/2; it's a "triangular" wave, and at odd intervals the slope reverses.

I'm not really interested in doing the whole transform and walking through the identity. I'm not sure I even know enough to do that work.

\(T\left(x,x_{1}\right)=-\frac{2+x_{1}}{2\pi}\left(\arcsin\left(\sin\left(\frac{\pi\left(x-\frac{x_{1}}{2}\right)}{2+x_{1}}\right)\right)-\arcsin\left(\sin\left(\frac{\pi\left(x-\frac{x_{1}}{2}-2\right)}{2+x_{1}}\right)\right)\right)\)
I did find a simpler way to express the function itself. Hopefully k will be an easier addition now that it's simplified and terms have cancelled.

I imagine if this was pulled apart all the way analytically, there would be something funky happening in the interaction, invoking some manner of complex relationship with imaginary parts

edit: found where to stick K:
\(T\left(x,x_{1}\right)=-\frac{k\left(2+x_{1}\right)}{2\pi}\left(\arcsin\left(\sin\left(\frac{\pi\left(x-\frac{x_{1}}{2}-\frac{k-1}{k}\right)}{2+x_{1}}\right)\right)-\arcsin\left(\sin\left(\frac{\pi\left(x-\frac{x_{1}}{2}-2+\frac{k-1}{k}\right)}{2+x_{1}}\right)\right)\right)\)

Sadly, this doesn't cause the same really fucking cool inversion that happens in the tanh product domain

edit2: found the function that looks like "with negative K"...
\(\sum_{n_{2}=2}^{x}\prod_{n=0}^{\sqrt{x}}\left|2-\left(T\left(n_{2},n\right)-A\left(n_{2},0,n\right)\right)\right|\)
 
I managed to adapt the "triangle wave" function from Wikipedia to the purpose:
\(T\left(x,x_{1}\right)=-\left(x_{1}+2\right)\left(\left|\frac{x+x_{1}+1}{2\left(x_{1}+2\right)}-\operatorname{floor}\left(\frac{x+x_{1}+1}{2\left(x_{1}+2\right)}+\frac{1}{2}\right)\right|-\left|\frac{x-x_{1}-1}{2\left(x_{1}+2\right)}-\operatorname{floor}\left(\frac{x-x_{1}-1}{2\left(x_{1}+2\right)}+\frac{1}{2}\right)\right|\right)\)

The result is a trapezoidal wave that "controls" well.
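One term of that adapted formula, pulled out as a hedged Python sketch (reading the 2x1+2² in the display as the period 2(x1+2)): each |u − floor(u + 1/2)| piece is a triangle wave with zeros at multiples of the period.

```python
import math

def tri_term(x, period):
    # |u - floor(u + 1/2)| with u = x / period: a triangle wave with zeros
    # at multiples of period and peaks of 1/2 halfway between them
    u = x / period
    return abs(u - math.floor(u + 0.5))

print(tri_term(0, 4), tri_term(2, 4), tri_term(4, 4))  # → 0.0 0.5 0.0
```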
 
The Tukey Window is one of several window functions used with sampled data and the discrete Fourier Transform.

The code creates a soft trapezoid. It would take some code to turn it into a repeating waveform scaled in time.

Scroll down to Tukey.



Code:
import numpy as np
import matplotlib.pyplot as plt

def trap_tukey(yr, alpha, amplitude):
    # Tukey window: a trapezoid pulse with soft (raised-cosine) transitions
    # 0 < alpha <= 1 sets the fraction of the window spent in the tapers
    n1 = len(yr)
    for i in range(n1):
        yr[i] = amplitude
    n2 = int(alpha * n1 / 2)
    for i in range(n2):
        w = .5 * (1. - np.cos(2 * np.pi * i / (alpha * (n1 - 1))))
        yr[i] = w * amplitude
        yr[n1 - 1 - i] = yr[i]


n = 2**10
t = np.linspace(0, 10, n)
yr = np.ndarray(shape=(n,), dtype=np.double)
trap_tukey(yr, .2, 2.5)

plt.plot(t, yr, linewidth=2.0, color="k")
plt.grid(color='k', linestyle='--', linewidth=1)
plt.show()
 

Synthesizing waveforms.
I read a bit about the triangle wave stuff. I couldn't really follow it well.

At any rate, I found some more convenient representations:
\(J\left(x,x_{1}\right)=-\frac{2+x_{1}}{2\pi}\left(\arcsin\left(\sin\left(\frac{\pi\left(x-\frac{x_{1}}{2}\right)}{2+x_{1}}\right)\right)-\arcsin\left(\sin\left(\frac{\pi\left(x-\frac{x_{1}}{2}-2\right)}{2+x_{1}}\right)\right)\right)-\frac{\left|x-1\right|+\left|x-x_{1}-1\right|-\left|x-x_{1}-3\right|-\left|x+1\right|}{2}\)
\(e^{\sum_{x_{1}=0}^{\sqrt{x}}\ln\left(\left|J\left(x,x_{1}\right)\right|\right)}\)
\(\sum_{x_{2}=2}^{x}\left(e^{\sum_{x_{1}=0}^{\sqrt{x}}\ln\left(\left|J\left(x_{2},x_{1}\right)\right|\right)}\right)\)

Screenshot 2024-04-28 at 20-56-54 Desmos Graphing Calculator.png
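The e^(Σ ln|J|) form above is the standard trick for turning a product into a sum. A quick Python check on arbitrary sample values, with one caveat: ln|J| diverges wherever |J| = 0, which is exactly where the indicator wants to be zero, so the sum form needs care at composites.

```python
import math

vals = [1.5, 2.0, 0.5, 3.0]  # arbitrary nonzero samples standing in for |J|
direct = math.prod(abs(v) for v in vals)
via_logs = math.exp(sum(math.log(abs(v)) for v in vals))

# the two forms agree up to floating-point roundoff
assert abs(direct - via_logs) < 1e-12
print(direct)  # → 4.5
```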
 
Screenshot 2024-04-29 at 08-17-24 J-equation.png

Hey @steve_bank I did some adjustment here to make both index from 2. I could probably make them both index from 0 too, but it's whatever.

The real question I have then is "what did I do?"

I know there are a lot of open questions about expressing primes, and I'm not sure really what this constitutes as a statement. There are probably all kinds of things that can be done with it as a theorem, but I'm also not "aware" enough to understand what of my work is "interesting" and what other of my work is "common knowledge".

If anyone here (@Swammerdami @Loren Pechtel @lpetrich) understands any of the big questions of number theory, let me know? I'm actually kinda excited about this project again after getting away from those double products and getting the indicator into a sum rather than a product.
 
Jarhyn,

As long as you are enjoying yourself, I think that is all that really matters.


As to triangle waves and Fourier series.

Put a sine wave into an audio amp and you hear a single tone. Combine two sines and you hear a sound. Put the combined signal through a filter that rejects one of the sines and you hear only a single tone.

A square or triangle wave has a series that describes the single sines that make up the signal. Put a triangle wave through a bandpass filter that selects one of the sines in its Fourier series and you hear that frequency.

Put a square wave with a frequency of 1 kHz through a low-pass filter that cuts off above 1 kHz and you get a 1 kHz sine wave.
 
That's the thing though... I mean, this can be used to calculate whether harmonics are produced between arbitrary values, and it makes a nice test for coprimeness, because I think a lot of terms cancel?

I do hope eventually that I start solving problems around this that aren't solved, though. My reason for this is that I do know that given my ability to do this without formal training, and the repeated nature of how I "learn through original discovery", I do think I have at least the potential to push into solving problems others haven't yet, and want to see if I can manage that. I know I've said it before to you in particular, but I discovered a new way to subtract with carries in elementary school that didn't require counting down like "they" said to.

I've started relying on known theorems at this point, and I actually do want to find the end of the branch and add whatever growth I can to it rather than merely growing like a second tree copying the first tree of knowledge, but worse.

I guess I do have some burning need to add a name to "math, all that we know of it".

Another reason why it's... More personal, I guess... Goes to how people will often accuse me of word salad, or otherwise fail to understand what I'm talking about, or to question my very ability to have insights in the first place.

In some ways Steve, you are part of that problem insofar as whenever I try to discuss something weird, out of your domain of expertise, such as the links between psychology, self-mastery, cognitive-behavioral therapy, and occult practices, you do the same and criticize my thoughts harshly instead of trying to actually find out what I mean.

A lot of that is on me because of how vigorously and often rudely I disagree with you on particular subjects, but you haven't made it easy for me to live and let live there much of the time we've known each other.

And some of it is just to get high on the feeling of having solved a puzzle, and the transcendently high feeling of knowing you solved something nobody else ever has as far as anyone knows, or the lesser version of just not needing to look at a solution.

So, it should be enough that it makes me happy, but how happy it can make me depends in part on how much I actually accomplish as objective progress.
 