
Finding Zeros

steve_bank
Finding zero crossings is straightforward: just look for a sign change in the data, an unambiguous indication of a zero.

The problem is that, given a peak that is mathematically exactly at zero, the odds of it being exactly zero with sampled data and digital computation are low.


My solution was to find the peaks and see how close to zero they are, setting a limit window for a zero. Alternatively, the data could be rounded to a limited number of decimal places.

The first derivative is used to find a turning point (local extremum) indicating a peak. The derivative is dy/dx, but dividing by dx is unnecessary since only the sign matters.

In the plot there is a local minimum at around 0.55 seconds that comes close to zero. Vary the number added to y + y2 in the code to move the curve up and down about zero to test the code.

Python

Code:
import numpy as np
import array as ar
import matplotlib.pyplot as plt

def plotxy(xlo,xhi,ylo,yhi,x,y,title,xlabel,ylabel):
        # single y plot
        font1 = {'family': 'arial',
        'color':  'black',
        'weight': 'heavy',
        'size': 15,
        }
        fig, p1 = plt.subplots(1)
        p1.plot(x,y,linewidth=2.0,color="k")
        p1.grid(color='k', linestyle='-', linewidth=1)
        p1.grid(which='major', color='k',linestyle='-', linewidth=0.8)
        p1.grid(which='minor', color='k', linestyle='-', linewidth=0.3)
        p1.set_xlim(xlo,xhi)
        p1.set_ylim(ylo,yhi)          
        p1.set_title(title, fontdict = font1)
        p1.set_xlabel(xlabel, fontdict = font1)
        p1.set_ylabel(ylabel, fontdict = font1)
        p1.minorticks_on()
        plt.show()

def sign(x):
    if x >= 0: return 1
    return -1
   
def find_zeros(zmin,x,y):
    # return sorted indices of zeros: sign-change crossings,
    # plus turning points within +/- zmin of zero
    n = len(y) - 1
    z = ar.array('i') # indices of zeros

    # find zero crossings
    for i in range(n):
        if y[i] < 0 and y[i+1] > 0:
            z.append(i)
        if y[i] > 0 and y[i+1] < 0:
            z.append(i)

    # check if peaks are zeros
    for i in range(n - 1):
        if sign(y[i+1]-y[i]) != sign(y[i+2]-y[i+1]): # turning point (local extremum)
            # approaching zero from the positive side
            if 0 <= y[i+1] < zmin and sign(y[i+1]-y[i]) == -1:
                z.append(i+1)
            # approaching zero from the negative side
            if -zmin < y[i+1] <= 0 and sign(y[i+1]-y[i]) == 1:
                z.append(i+1)

    # check end points
    if abs(y[0]) < zmin:
        z.append(0)
    if abs(y[n]) < zmin:
        z.append(n)

    return np.sort(z)
 

n = 1000  
t = np.linspace(0,1,n)
f1 = 2
f2 = 3
y = np.sin(2*np.pi*f1*t)
y2 = np.sin(2*np.pi*f2*t)
y = y + y2   + 0.2235
z = find_zeros(.001,t,y)

nz = len(z)
print(" zero count  ",nz)
print("time                    Y")
for i in range(nz): print("%.10f % .10f"%(t[z[i]],y[z[i]]))
plotxy(min(t),1,-3,3,t,y,"sines","time","")
 
Finding roots is greatly helped if one understands one's function well enough to get a good idea of where to look.

There are several theorems that are helpful for polynomials, for instance. A polynomial over the complex numbers with degree n, the maximum power of its independent variable, has n complex roots, counted with multiplicity. That means that a polynomial over the real numbers has at most n real roots; since non-real roots come in conjugate pairs, the number of distinct real roots is n - 2m for some nonnegative integer m. For coinciding (repeated) roots, counting by zero crossings is more difficult. From crossings of zero and its asymptotic values, a real-domain polynomial has at least one real root for odd degree, and may have no real roots for even degree.

 Descartes' rule of signs - "It asserts that the number of positive roots is at most the number of sign changes in the sequence of the polynomial's coefficients (omitting the zero coefficients), and that the difference between these two numbers is always even."

For a similar result about negative roots, one flips the sign of the independent variable, and for roots greater or less than some quantity, one shifts the independent variable by that amount.
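A quick sketch of the counting part in Python (the function name and example polynomial here are my own, not from the thread):

Code:
def descartes_positive_bound(coeffs):
    # coeffs: polynomial coefficients, highest degree first; zeros are skipped
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# x^3 - 3x + 2 = (x - 1)^2 (x + 2): 2 sign changes, so at most 2 positive roots
print(descartes_positive_bound([1, 0, -3, 2]))    # 2
# for negative roots, flip the sign of x: -x^3 + 3x + 2 has 1 sign change
print(descartes_positive_bound([-1, 0, 3, 2]))    # 1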

 Sturm's theorem
For a polynomial P, make a Sturm sequence or Sturm chain:
\( P_0 = P \)
\( P_1 = P' \), the derivative of P
...
\( P_i = - \) the remainder of dividing \( P_{i-2} \) by \( P_{i-1} \)

Let V(a) be the number of sign changes in the sequence \( P_0(a), P_1(a), \ldots \), ignoring zeros. For a square-free polynomial, there are V(a) - V(b) distinct real roots between a and b.
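A minimal sketch of the chain using NumPy's polynomial division, np.polydiv (my own illustration; it assumes a square-free polynomial with coefficients given highest degree first):

Code:
import numpy as np

def sturm_chain(p):
    # P0 = P, P1 = P', Pi = -remainder(P(i-2) / P(i-1)), down to a constant
    chain = [np.array(p, dtype=float), np.polyder(np.array(p, dtype=float))]
    while chain[-1].size > 1:
        q, r = np.polydiv(chain[-2], chain[-1])
        r = np.trim_zeros(r, 'f')
        if r.size == 0:          # exact division: p was not square-free
            break
        chain.append(-r)
    return chain

def V(chain, a):
    # number of sign changes in P0(a), P1(a), ..., ignoring zeros
    vals = [np.polyval(c, a) for c in chain]
    s = [x for x in vals if x != 0]
    return sum(1 for u, w in zip(s, s[1:]) if u * w < 0)

# x^2 - 2 has one root in (0, 2] and two in (-2, 2]
ch = sturm_chain([1, 0, -2])
print(V(ch, 0) - V(ch, 2))     # 1
print(V(ch, -2) - V(ch, 2))    # 2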

 Geometrical properties of polynomial roots - has some upper-limit theorems for polynomial roots.

 Rational root theorem - if an integer-coefficient polynomial has a rational root, p/q in lowest terms, then p divides the constant term (the lowest-order coefficient) and q divides the leading (highest-order) coefficient.
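A brute-force sketch of the candidate list (my own; it assumes integer coefficients with a nonzero constant term):

Code:
from math import gcd

def rational_root_candidates(coeffs):
    # coeffs: integer coefficients, highest degree first, constant term nonzero
    a0, an = abs(coeffs[-1]), abs(coeffs[0])
    ps = [d for d in range(1, a0 + 1) if a0 % d == 0]   # divisors of constant term
    qs = [d for d in range(1, an + 1) if an % d == 0]   # divisors of leading coeff
    return sorted({s * p / q for p in ps for q in qs
                   for s in (1, -1) if gcd(p, q) == 1})

# 2x^3 - 3x^2 - 3x + 2 has rational roots -1, 1/2, and 2
print(rational_root_candidates([2, -3, -3, 2]))   # [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]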

 Gauss–Lucas theorem - in the complex plane, all the roots of the derivative of a polynomial are inside the convex hull of the roots of that polynomial itself. The convex hull of some points is the smallest convex polygon that contains all those points.
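A quick numerical check with np.roots (my example): for x^3 - 1 the roots are the cube roots of unity, forming a triangle, and the derivative's double root at 0 sits inside it.

Code:
import numpy as np

p = [1, 0, 0, -1]                 # x^3 - 1: roots are the cube roots of unity
print(np.roots(p))                # triangle vertices 1, -0.5 +/- 0.866j
print(np.roots(np.polyder(p)))    # derivative 3x^2: double root at 0, inside the hull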
 
Ah yes, but try finding zeros of complex equations! Suddenly you are into the realm of Riemann, and maybe winning a Fields Medal.
 
A lot of numerical techniques predate computers or were developed in the era of slow mainframes and 4-megahertz PCs. Iteration counts could matter.

Number of iterations can still matter, but not so much.

For a sine at 1 hertz there is a zero at exactly 0.5 seconds.

From 100 to 1 million steps; each increase in the step count decreases dx.

steps        t at zero              y at that t
100          0.498989898989899      0.006346609218342
1000         0.499899899899900      0.000628947436730
10000        0.499989998999900      0.000062838136844
100000       0.499998999990000      0.000006283248140
1000000      0.499999899999900      0.000000628319159

An old technique is to do a coarse scan and then zero in on a zero by decreasing dx about the zero until an exit condition is reached.
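A minimal sketch of that scan-then-refine idea (my own, using bisection on each bracketing interval rather than a shrinking-dx loop):

Code:
import math

def refine_zero(f, a, b, tol=1e-12):
    # the coarse scan found a sign change on [a, b]; bisect to shrink the bracket
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# coarse scan of a 1 Hz sine on [0, 1], then refine each bracket found
f = lambda t: math.sin(2 * math.pi * t)
xs = [i / 100 for i in range(101)]
for x0, x1 in zip(xs, xs[1:]):
    if f(x0) * f(x1) < 0:
        print(refine_zero(f, x0, x1))   # ~0.5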

Newton's method, an oldie but a goodie. Successive approximation.


A zero complex number is 0 + i0. Or abs(x + iy) = 0.

Zeros of complex functions are important in linear and control systems.



A term like 1/((x1 + jy1) - (x2 + jy2)) that results in division by zero is a no-no in control systems.
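For what it's worth, Newton's method runs unchanged on Python complex numbers; a bare-bones sketch (mine) finding a zero of z^2 + 1:

Code:
def newton(f, df, z, iters=50):
    # plain Newton iteration; works for complex z as-is
    for _ in range(iters):
        z = z - f(z) / df(z)
    return z

# zeros of z^2 + 1 are +/- 1j; Newton converges to the one nearest the start
print(newton(lambda z: z*z + 1, lambda z: 2*z, 1 + 1j))   # ~1j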
 
A lot of numerical techniques predate computers or were developed in the era of slow mainframes and 4-megahertz PCs. Iteration counts could matter.

Number of iterations can still matter, but not so much.
But as the old saying goes, "waste not, want not". There are plenty of applications where one wants to do a LOT of calculations, like computer graphics and training AI models and a variety of theoretical simulations.

 Fast inverse square root
algorithm - John Carmack's Unusual Fast Inverse Square Root (Quake III) - Stack Overflow
A good initial guess, followed by Newton's method.
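The classic trick transliterated to Python (a sketch of the well-known algorithm, with struct doing the bit reinterpretation; 0x5f3759df is the famous magic constant):

Code:
import struct

def fast_inv_sqrt(x):
    # reinterpret the float's bits as an integer, apply the magic constant,
    # then polish with one Newton step for f(y) = 1/y^2 - x
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5f3759df - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)

print(fast_inv_sqrt(4.0))   # ~0.4999 vs the exact 0.5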

Newton's method, an oldie but a goodie. Successive approximation.
To find x where function f of it is zero: f(x) = 0:
\( \displaystyle{ x \gets x - \frac{f(x)}{f'(x)} } \)

That's what's used to calculate reciprocals and square roots and the like.

Find x such that \( x^n = a \):
\( \displaystyle{ x \gets x - \frac{ x^n - a}{n x^{n-1}} = \frac{x}{n} \left( (n - 1) + \frac{a}{x^n} \right) } \)

n = 2 (square root):
\( \displaystyle{ x \gets \frac{1}{2} \left( x + \frac{a}{x} \right) } \)

n = -1 (reciprocal):
\( \displaystyle{ x \gets x (2 - a x) } \)

n = -2 (reciprocal square root):
\( \displaystyle{ x \gets \frac{x}{2} (3 - a x^2) } \)
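A few lines of Python to watch the specializations converge (a sketch; note the division-free reciprocal form needs a starting guess with 0 < x < 2/a):

Code:
def newton_sqrt(a, x=1.0, iters=30):
    for _ in range(iters):
        x = 0.5 * (x + a / x)        # n = 2
    return x

def newton_recip(a, x, iters=30):
    for _ in range(iters):
        x = x * (2 - a * x)          # n = -1, uses no division at all
    return x

print(newton_sqrt(2.0))              # 1.4142135623...
print(newton_recip(7.0, 0.1))        # 0.1428571428...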
 
True, how to apply math is not black and white. It depends on the problem and the preferences of the individual.

Back in the 80s, when all I had was a calculator, I used Simpson's Rule to integrate functions. Today Python's NumPy and other math tools have a built-in trapezoid function that I can run with thousands of steps at high accuracy and little time penalty. Faster still in compiled C.
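For instance, np.trapz (renamed np.trapezoid in newer NumPy) does the trapezoid sum directly; SciPy also has scipy.integrate.simpson if Simpson's rule is preferred:

Code:
import numpy as np

# integrate sin(x) from 0 to pi; the exact answer is 2
x = np.linspace(0, np.pi, 10001)
y = np.sin(x)
print(np.trapz(y, x))    # 1.999999998..., error ~1e-8 with 10^4 steps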

Many practical problems are not solvable without iterative computer solutions. Beyond fourth-degree polynomials there are no general closed-form solutions for the zeros of polynomials (the Abel-Ruffini theorem), and even the cubic and quartic formulas are rarely used in practice.

A lot of technical work was done using GW-BASIC, which came with the original DOS distribution.

When I started out I took some pride in working out analytical solutions to problems, but the problems became too complex and took too much time.

For something to do I coded a solution to the inverse error function, ierf(). I used a series solution with 200 terms.
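One standard route is the Maclaurin series for the inverse error function, which converges for |z| < 1. A sketch of what such a series solution can look like (not necessarily the same series the post used):

Code:
import math

def ierf(z, nterms=200):
    # Maclaurin series: erfinv(z) = sum c_k/(2k+1) * (sqrt(pi)/2 * z)^(2k+1)
    # with c_0 = 1 and c_k = sum_{m<k} c_m * c_{k-1-m} / ((m+1)(2m+1))
    c = [1.0]
    for k in range(1, nterms):
        c.append(sum(c[m] * c[k - 1 - m] / ((m + 1) * (2 * m + 1))
                     for m in range(k)))
    u = 0.5 * math.sqrt(math.pi) * z
    return sum(c[k] / (2 * k + 1) * u ** (2 * k + 1) for k in range(nterms))

print(math.erf(ierf(0.5)))   # ~0.5, round trip through math.erf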
 