• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.


Complex numbers

Magnitude = sqrt(real^2 + imaginary^2).
Phase = atan2(imaginary, real).

import math
import cmath
R2D = 360/(math.pi*2)  # radians to degrees
y = 0 + 1j
print('%.5f  \t%.5f' %(abs(y),cmath.phase(y)*R2D))
y = 0 - 1j
print('%.5f  \t%.5f' %(abs(y),cmath.phase(y)*R2D))
y = 1 + 1j
print('%.5f  \t%.5f' %(abs(y),cmath.phase(y)*R2D))
y = 1 - 1j
print('%.5f  \t%.5f' %(abs(y),cmath.phase(y)*R2D))

1.00000      90.00000
1.00000      -90.00000
1.41421      45.00000
1.41421      -45.00000
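The standard library can also do the magnitude-and-phase conversion in a single call: cmath.polar returns the (magnitude, phase) pair, and cmath.rect inverts it. A small sketch of the round trip:

```python
import cmath
import math

y = 1 + 1j
r, phi = cmath.polar(y)       # r = sqrt(2), phi = pi/4 radians
z = cmath.rect(r, phi)        # back to rectangular form, approximately 1+1j
print('%.5f  \t%.5f' % (r, math.degrees(phi)))
```

This prints the same 1.41421 and 45.00000 as the 1 + 1j case above.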

Arrays of complex numbers

n = 10
x = n*[0. + 0.j]  # list of n complex zeros
ph = n*[0]        # phases in degrees
m = n*[0]         # magnitudes
for i in range(n):
        x[i] = i + 2*i*1j
        m[i] = abs(x[i])
        ph[i] = cmath.phase(x[i])*R2D

for i in range(n):
   print('%10.5f   \t%10.5fi  \t%10.5f   \t%10.5f ' %(x[i].real,x[i].imag,m[i],ph[i]))
How many decimal places do you get? It depends on the size of the integer part: there are only so many bits.

With a zero integer part you get about 16 decimal places from a double. Increase the integer part and the number of decimal places goes down.

x = 0.01234567890123456789
for i in range(16):
        print("%40.20f" %x)
        x += 9*pow(10,i)

Python uses bignum integers and 8-byte floating-point numbers.
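Both halves of that statement can be checked from the interpreter itself: sys.float_info reports the precision of the float type, and integers simply never overflow. A sketch, assuming a typical IEEE 754 platform:

```python
import sys

# doubles carry a 53-bit significand, roughly 15-17 decimal digits
print(sys.float_info.mant_dig)   # 53 significand bits
print(sys.float_info.dig)        # 15 decimal digits always preserved

# Python ints are arbitrary-precision ("bignum"): no overflow
big = 2**200
print(big)
```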

However, the Python standard library includes "array" - a class for efficiently storing numerical values. It covers the basic C numeric types -- signed and unsigned integers of 1, 2, 4, or 8 bytes, and 4-byte and 8-byte floating-point numbers.
Run the equivalent code in C and you should get the same results.

8-byte floats are 64-bit doubles.

Python floats are IEEE 754 double-precision binary floating-point numbers, commonly referred to as “doubles”, and take up 64 bits. Of those: 1 is for the sign of the number; 11 are for the exponent; and 52 are for the fraction (significand).
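You can look at that bit layout directly by reinterpreting a double's raw bytes as a 64-bit integer with the struct module. A sketch, using 1.0 whose fields are easy to check:

```python
import struct

# pack a double into its raw 64-bit pattern, then slice the three fields
bits = struct.unpack('>Q', struct.pack('>d', 1.0))[0]
sign     = bits >> 63               # 1 bit
exponent = (bits >> 52) & 0x7FF     # 11 bits, biased by 1023
fraction = bits & ((1 << 52) - 1)   # 52 bits of significand
print(sign, exponent, fraction)     # 0 1023 0 for 1.0
```

For 1.0 the stored exponent is the bias 1023 (true exponent 0) and the fraction is all zeros, since the leading 1 bit is implicit.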

The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point arithmetic established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units use the IEEE 754 standard.

The standard defines:

arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special "not a number" values (NaNs)
interchange formats: encodings (bit strings) that may be used to exchange floating-point data in an efficient and compact form
rounding rules: properties to be satisfied when rounding numbers during arithmetic and conversions
operations: arithmetic and other operations (such as trigonometric functions) on arithmetic formats
exception handling: indications of exceptional conditions (such as division by zero, overflow, etc.)
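The special values and exceptional conditions listed above are all reachable from ordinary Python. A short sketch of their defining behavior:

```python
import math

inf = float('inf')
nan = float('nan')
print(inf > 1e308)                # True: infinity exceeds any finite double
print(nan == nan)                 # False: NaN compares unequal even to itself
print(math.isnan(inf - inf))      # True: inf - inf yields a NaN
print(math.copysign(1.0, -0.0))   # -1.0: the signed zero keeps its sign
```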

IEEE 754-2008, published in August 2008, includes nearly all of the original IEEE 754-1985 standard, plus the IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic. The current version, IEEE 754-2019, was published in July 2019.[1] It is a minor revision of the previous version, incorporating mainly clarifications, defect fixes and new recommended operations.
So, in Python you can explicitly type a numerical array. The interpreter will raise an error if, for example, you try to write a float value to an int array.
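Concretely, assigning a float into an int-typed array raises a TypeError rather than silently truncating the value:

```python
import array

a = array.array('i', [0, 0, 0])  # typed array of C ints
a[0] = 7         # an int is accepted
try:
    a[1] = 3.5   # a float is rejected, not truncated to 3
except TypeError as e:
    print("TypeError:", e)
```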

I can see the utility of how Python is constructed, but flexibility can lead to complexity.

import array
n = 10
x = array.array('d',n*[0])  # 'd': 8-byte (64-bit) double
y = array.array('i',n*[0])  # 'i': C int, at least 2 bytes (4 on most platforms)
for i in range(n):
    x[i] = i
    y[i] = i
    print("%f  %d" %(x[i],y[i]))
0.000000  0
1.000000  1
2.000000  2
3.000000  3
4.000000  4
5.000000  5
6.000000  6
7.000000  7
8.000000  8
9.000000  9