• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Python

Complex numbers

Magnitude = sqrt(real^2 + imaginary^2).
Phase = atan2(imaginary, real).

Code:
import math
import cmath
R2D = 360/(math.pi*2)  # radians to degrees
y = 0 + 1j
print('%.5f  \t%.5f' %(abs(y),cmath.phase(y)*R2D))
y = 0 - 1j
print('%.5f  \t%.5f' %(abs(y),cmath.phase(y)*R2D))
y = 1 + 1j
print('%.5f  \t%.5f' %(abs(y),cmath.phase(y)*R2D))
y = 1 - 1j
print('%.5f  \t%.5f' %(abs(y),cmath.phase(y)*R2D))

1.00000      90.00000
1.00000      -90.00000
1.41421      45.00000
1.41421      -45.00000
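As a check, the two formulas above reproduce what abs() and cmath.phase() return:

```python
import cmath
import math

y = 1 + 1j
mag = math.sqrt(y.real**2 + y.imag**2)   # sqrt(real^2 + imaginary^2)
ph = math.atan2(y.imag, y.real)          # atan2(imaginary, real)
print('%.5f  \t%.5f' % (mag, math.degrees(ph)))                 # 1.41421    45.00000
print('%.5f  \t%.5f' % (abs(y), math.degrees(cmath.phase(y))))  # same values
```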

Arrays of complex numbers

Code:
import cmath
import math
R2D = 360/(math.pi*2)  # radians to degrees

n = 10
x = n*[0. + 0.j]  # create complex array
ph = n*[0]  # phases
m = n*[0]   # magnitudes
for i in range(n):
    x[i] = i + 2*i*1j
    m[i] = abs(x[i])
    ph[i] = cmath.phase(x[i])*R2D

for i in range(n):
    print('%10.5f   \t%10.5fi  \t%10.5f   \t%10.5f ' %(x[i].real,x[i].imag,m[i],ph[i]))
 
How many decimal places do you get? It depends on the size of the integer part. There are only so many bits.

With a zero integer part you get about 16 decimal places for a double. Increase the integer part and the number of decimal places goes down.


Code:
x = 0.01234567890123456789
for i in range(16):
        print("%40.20f" %x)
        x += 9*pow(10,i)

0.01234567890123456843
9.01234567890123372536
99.01234567890122661993
999.01234567890128346335
9999.01234567890060134232
99999.01234567890060134232
999999.01234567887149751186
9999999.01234567910432815552
99999999.01234567165374755859
999999999.01234567165374755859
9999999999.01234626770019531250
99999999999.01234436035156250000
999999999999.01232910156250000000
9999999999999.01171875000000000000
99999999999999.01562500000000000000
999999999999999.00000000000000000000
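The same effect can be seen directly with math.ulp (Python 3.9+), which gives the gap between a double and the next representable value; the gap grows with magnitude, which is exactly why decimal places disappear as the integer part grows:

```python
import math

# math.ulp(x) is the spacing to the next representable double at x;
# it grows with magnitude, so fewer decimal places survive for big numbers
for x in (1.0, 1e3, 1e6, 1e9, 1e12, 1e15):
    print("%20.1f  one ulp = %g" % (x, math.ulp(x)))
```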
 
Python uses bignum integers and 8-byte floating-point numbers.

However, the Python standard library includes "array" - a class for efficiently storing numerical values. It covers all the data types available for C/C++ -- both signed and unsigned integers with 1, 2, 4, or 8 bytes, and 4-byte and 8-byte floating-point numbers.
 
Run the code snippet in C and you should get the same results.

8 byte floats are 64 bit doubles.

Python floats are IEEE 754 double-precision binary floating-point numbers, commonly referred to as “doubles”, and take up 64 bits. Of those: 1 is for the sign of the number; 11 are for the exponent; and 52 are for the fraction (significand).
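Those fields (1 sign bit, 11 exponent bits, 52 fraction bits) can be inspected by reinterpreting a double's raw bits with the struct module:

```python
import struct

def double_bits(x):
    # reinterpret the double as a 64-bit unsigned integer
    (bits,) = struct.unpack('>Q', struct.pack('>d', x))
    sign = bits >> 63                    # 1 bit
    exponent = (bits >> 52) & 0x7FF      # 11 bits, biased by 1023
    fraction = bits & ((1 << 52) - 1)    # 52 bits
    return sign, exponent, fraction

print(double_bits(1.0))    # (0, 1023, 0)
print(double_bits(-2.0))   # (1, 1024, 0)
```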


The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point arithmetic established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units use the IEEE 754 standard.

The standard defines:

arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special "not a number" values (NaNs)
interchange formats: encodings (bit strings) that may be used to exchange floating-point data in an efficient and compact form
rounding rules: properties to be satisfied when rounding numbers during arithmetic and conversions
operations: arithmetic and other operations (such as trigonometric functions) on arithmetic formats
exception handling: indications of exceptional conditions (such as division by zero, overflow, etc.)

IEEE 754-2008, published in August 2008, includes nearly all of the original IEEE 754-1985 standard, plus the IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic. The current version, IEEE 754-2019, was published in July 2019.[1] It is a minor revision of the previous version, incorporating mainly clarifications, defect fixes and new recommended operations.
 
So, in Python you can explicitly type a numerical array. The interpreter will raise an error if, for example, you try to write a float value to an int array.
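For example, an int-typed array rejects a float value with a TypeError:

```python
import array

y = array.array('i', [0, 0, 0])  # signed int array
y[0] = 5                         # OK: int value
try:
    y[1] = 2.5                   # float into an int array
except TypeError as err:
    print("TypeError:", err)
```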



I can see the utility of how Python is constructed, but flexibility can lead to complexity.

Code:
import array
n = 10
x = array.array('d',n*[0])  # 64-bit double
y = array.array('i',n*[0])  # signed int, typically 32-bit
for i in range(n):
    x[i] = i
    y[i] = i
    print("%f  %d" %(x[i],y[i]))
    
0.000000  0
1.000000  1
2.000000  2
3.000000  3
4.000000  4
5.000000  5
6.000000  6
7.000000  7
8.000000  8
9.000000  9
 
A Fourier Transform has been used as a speed benchmark.

I coded a DFT in C++ and Python. For an n = 2^14 transform, C++ executes in about 6 seconds and Python in about 60. To be expected: compiled versus interpreted execution times.

Code:
C++

void dftr(int N,double *yr,double fs,double *f,double *mag) {
    double W = _2PI/double(N),C3;
    compvar sum;
    int freq, k;
    double C2 = 2. / double(N);
    double fsum = 0,df = fs/double(N);

    for (freq = 0; freq < N/2; freq++) {
        f[freq] = fsum;

        C3 = W * freq; // angular frequency for this bin
        fsum += df;
        sum = 0. + 0.i;

        for (k = 0; k < N; k++)
            sum += yr[k]*exp(-1.i*C3*double(k));
            //sum += yr[k]*(cos(C3*double(k)) - 1.i*sin(C3*double(k)));
            //sum += yr[k]*cos(C3*double(k));

        if (freq == 0) {
            // mag[0] is the average value
            mag[freq] = real(sum)/double(N);
            cout << mag[freq] << endl;
        }
        else {
            mag[freq] = C2*abs(sum);
        }
    } // for freq
    cout << "TRANSFORM DONE" << endl;
} // dftr()

Python


import math as m
import cmath

def dft(n,y,fs,f,mag):
    print("Start")
    n2 = n//2
    print(n,"   ",n2)
    w = 2.*m.pi/n
    c2 = 2./n
    fsum = 0
    df = fs/n
    for freq in range(n2):
        f[freq] = fsum  # transform frequency scaling
        fsum += df
        c3 = w * freq
        sum = 0. + 0.j
        for k in range(n):
            sum += y[k]*cmath.exp(-1.j*c3*k)
            #sum += y[k]*(m.cos(c3*k) - 1j*m.sin(c3*k))
        if freq == 0:
            mag[freq] = sum.real/n  # mag[0] is the average value
        else:
            mag[freq] = c2*abs(sum)
    print("Done")
 
I fixed errors in your code.
My results are 6 seconds for C++ and 45 seconds for python code.
I guess my python is better than yours :)

Code:
#include <iostream>
#include <math.h>
#include <vector>
#include <complex>

using namespace std;
using compvar=complex<double>;

const double _2PI=(2*M_PI);

void dftr(int N,double *yr,double fs,double *f,double *mag) {
    double W = _2PI/double(N),C3;
    compvar sum;
    int freq, k;
    double C2 = 2. / double(N);
    double fsum = 0,df = fs/double(N);

    for (freq = 0; freq < N/2; freq++) {
        f[freq] = fsum;

        C3 = W * freq; // angular frequency for this bin
        fsum += df;
        sum = 0. + 0.i;

        for (k = 0; k < N; k++)
            sum += yr[k]*exp(-1.i*C3*double(k));
            //sum += yr[k]*(cos(C3*double(k)) - 1.i*sin(C3*double(k)));
            //sum += yr[k]*cos(C3*double(k));

        if (freq == 0) {
            // mag[0] is the average value
            mag[freq] = real(sum)/double(N);
            cout << mag[freq] << endl;
        }
        else {
            mag[freq] = C2*abs(sum);
        }
    } // for freq
    cout << "TRANSFORM DONE" << endl;
} // dftr()
int main()
{
    int nPoints=1<<14;
    vector<double> yr(nPoints);
    vector<double> f(nPoints);
    vector<double> mag(nPoints);
    dftr(nPoints,yr.data(), 100., f.data(), mag.data());
    return 0;
}

Code:
#!/usr/bin/python3
import array
import cmath

def dft(n,y,fs,f,mag):
    print("Start")
    n2 = int( n/2)
    print(n,"   ",n2)
    w = 2.*cmath.pi/n
    c2 = 2./n
    fsum = 0
    df = fs/n
    for freq in range(n2):
        f[freq] = fsum  # transform frequency scaling
        fsum +=  df
        c3 = w * freq
        sum = 0. + 0.j
        for k in range(n):
                sum += y[k]*cmath.exp(-1.j*c3*k)
                #sum += y[k]*( m.cos(c3*k) -1j*m.sin(c3*k));
        if freq == 0:
                mag[freq] = sum.real/n
        else:
                mag[freq] = c2 *abs(sum)
    print("Done")
    
if __name__ == "__main__":
    N=1<<14
    ar = N*[1.2]
    y=array.array("d",ar)
    f=array.array("d",ar)
    mag=array.array("d",ar)
    dft(N,y,100,f,mag)
 
Here's how to speed up the trigonometry calculations. Use trigonometric identities to reduce evaluation of trigonometric functions to a bare minimum.

cos(a+b) = cos(a)*cos(b) - sin(a)*sin(b)
sin(a+b) = sin(a)*cos(b) + cos(a)*sin(b)

So if one has a and k*a for some a and k, one can calculate (k+1)*a using these identities, and repeating this operation will give you the whole sequence.
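A minimal sketch of that recurrence (the helper name cos_sin_sequence is made up for illustration). Rounding error accumulates slowly with k, so very long runs may want periodic re-seeding from the library functions:

```python
import math

def cos_sin_sequence(a, n):
    # cos(k*a), sin(k*a) for k = 0..n-1 with only one cos() and one sin() call
    ca, sa = math.cos(a), math.sin(a)
    c, s = 1.0, 0.0                      # cos(0), sin(0)
    out = []
    for _ in range(n):
        out.append((c, s))
        # cos((k+1)a) = cos(ka)cos(a) - sin(ka)sin(a)
        # sin((k+1)a) = sin(ka)cos(a) + cos(ka)sin(a)
        c, s = c*ca - s*sa, s*ca + c*sa
    return out

for k, (c, s) in enumerate(cos_sin_sequence(0.3, 5)):
    print(k, c, s)   # agrees with math.cos(0.3*k), math.sin(0.3*k)
```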
 
lpetrich, math library does that already. It's called Euler's formula.
That's \( e^{ix} = \cos x + i \sin x \) -- what I was talking about is trig addition identities, for calculating the sines and cosines of angles a, 2a, 3a, 4a, 5a, ... quickly from those of a.
 
\(e^{i\cdot(a+b)}=e^{ia}\cdot e^{ib}\)
\(\cos (a+b) + i\sin (a+b)=(\cos (a) + i\sin (a))\cdot (\cos (b) + i\sin (b))\)
 
@bigfield have you done any machine learning? Wondering if there's any way to whip up a reasonably quick project that does something in this vein? With some kind of data set I can access somewhere.

I'm interested in taking a look at machine learning.. may just have to poke around the web.
 
lpetrich, math library does that already. It's called Euler's formula.
That's \( e^{ix} = \cos x + i \sin x \) -- what I was talking about is trig addition identities, for calculating the sines and cosines of angles a, 2a, 3a, 4a, 5a, ... quickly from those of a.
The power function executes faster than the trig form. I don't think exp() is done via Euler's identity.
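Whether cmath.exp beats the explicit cos/sin form is implementation-dependent; a quick way to check both that the two forms agree and how they compare in speed on your machine:

```python
import cmath
import math
import timeit

x = 0.7
z1 = cmath.exp(-1j * x)                    # exponential form
z2 = complex(math.cos(x), -math.sin(x))    # trig form
print(abs(z1 - z2))                        # essentially zero

# relative timing varies by platform and Python version
t_exp = timeit.timeit('cmath.exp(-1j*0.7)', 'import cmath', number=200000)
t_trig = timeit.timeit('complex(math.cos(0.7), -math.sin(0.7))', 'import math', number=200000)
print(t_exp, t_trig)
```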
 
@bigfield have you done any machine learning? Wondering if there's any way to whip up a reasonably quick project that does something in this vein? With some kind of data set I can access somewhere.

I'm interested in taking a look at machine learning.. may just have to poke around the web.
No, although that's on my to-do list for L&D on work time.

I haven't used it myself but Kaggle looks like a good place to get datasets and project ideas.

 
lpetrich, math library does that already. It's called Euler's formula.
That's \( e^{ix} = \cos x + i \sin x \) -- what I was talking about is trig addition identities, for calculating the sines and cosines of angles a, 2a, 3a, 4a, 5a, ... quickly from those of a.


You can look at it like taking logs of two numbers and adding the logs to multiply.

Phasors. The utility of the exponential form and the identity is multiplying and dividing vectors without the messy arithmetic in rectangular form.

Math trivia. Slide rules for engineering and science had logarithmic scales for multiplying and dividing. Before calculators.
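In polar (phasor) form the arithmetic reduces to multiplying magnitudes and adding phases, which cmath exposes directly:

```python
import cmath

a = 1 + 2j
b = 3 + 4j

ra, pa = cmath.polar(a)   # magnitude and phase of a
rb, pb = cmath.polar(b)   # magnitude and phase of b

prod = cmath.rect(ra*rb, pa + pb)   # multiply: magnitudes multiply, phases add
quot = cmath.rect(ra/rb, pa - pb)   # divide: magnitudes divide, phases subtract

print(prod, a * b)   # same value both ways
print(quot, a / b)
```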
 
Demonstrate how logarithms are used with scalars instead of complex numbers. Converting multiplication and division to addition and subtraction.


Code:
import math

# 2*3 and 2/3 using base e logs
x = math.log(2)
y = math.log(3)
zm = x + y  # multiply: 2 * 3
zd = x - y  # divide: 2 / 3
print(pow(math.e,zm))
print(pow(math.e,zd))


5.999999999999999
0.6666666666666666
 
And for complex numbers

Code:
import cmath
import math

a = 1 + 2j
b = 3 + 4j

cm = a * b
cd = a / b
print(abs(cm)," ",abs(cd))

al = cmath.log(a)
bl = cmath.log(b)
dm = al + bl  # multiply: a * b
dd = al - bl  # divide: a / b
dmp = pow(math.e,dm)
ddp = pow(math.e,dd)
print(abs(dmp)," ",abs(ddp))

11.180339887498949 0.4472135954999579
11.180339887498942 0.447213595499958
 
Demonstrate how logarithms are used with scalars instead of complex numbers. Converting multiplication and division to addition and subtraction.


# 2*3 and 2/3 using base e logs
x = math.log(2)
y = math.log(3)
zm = x + y #multiply x * y
zd = x - y #divide x / y
print(pow(math.e,zm))
print(pow(math.e,zd))


5.999999999999999
0.6666666666666666
You do realize that in order to calculate logarithm/exponent you will need an actual multiplication/division?
 