
The Programming Thread

I know how logarithms work, but that does not solve the problem of finding the natural log of a number. You have e as a definition to work with.

Post code of an alternative solution to what I posted.


With e defined I am looking at fitting some form of e^-x to 1/x, which can then be evaluated as a definite integral.

There can also be a polynomial fit to 1/x.

I do this for mental exercise and for pleasure, anybody can look up solutions on the net.

I was always under time pressure to solve problems; taking my time working math problems is enjoyable.

If you view this as an existential battle that is your trip not mine.
 
As an approximation to log()

Is there an f(x) for which 1/x = 1 - e^f(x)?

If so, 1 - e^f(x) could be integrated to calculate log() without an iterative solution.

I will post something when I get it working.
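
Solving that relation for f(x) directly, as a quick symbolic check:

\( \displaystyle{ \frac{1}{x} = 1 - e^{f(x)} \;\Rightarrow\; e^{f(x)} = \frac{x-1}{x} \;\Rightarrow\; f(x) = \ln\left(\frac{x-1}{x}\right), \quad x > 1 } \)

So such an f(x) exists (for x > 1), though it is itself a logarithm.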
 
Calculating a logarithm without an iterative solution? What counts as an iterative solution? Adding up terms of an infinite series? Using some algorithm like Newton's method?

For function f(x) of variable x, where we want to find x that makes f(x) = 0, Newton's method involves iterations

\( \displaystyle{ x_{new} = x - \frac{f(x)}{f'(x)} } \)

For inverse powers, \( a^{1/n} \), one uses \( f(x) = x^n - a \)

\( \displaystyle{ x_{new} = \frac{x}{n} \left( n - 1 + \frac{a}{x^n} \right)} \)

For square roots, n = 2, and

\( \displaystyle{ x_{new} = \frac{1}{2} \left( x + \frac{a}{x} \right)} \)

For inverse square roots, n = -2, and

\( \displaystyle{ x_{new} = \frac{x}{2} \left( 3 - a x^2 \right)} \)

One can thus avoid doing division.

For reciprocals, n = -1, and

\( \displaystyle{ x_{new} = x \left( 2 - a x \right)} \)

One can use this algorithm to do division: a/b = a * 1/b
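
A minimal sketch of those iterations in C++ (the function names and the stopping tolerance are my own choices, not from any particular library): the square-root update uses a division, while the reciprocal update uses only multiplies and subtracts.

Code:
#include <stdio.h>
#include <math.h>

// Newton iteration for sqrt(a): x_new = (x + a/x) / 2
double newton_sqrt(double a)
{
    double x = a > 1 ? a : 1;               // crude starting guess
    for (int i = 0; i < 60; i++) {
        double xn = 0.5 * (x + a / x);
        if (fabs(xn - x) <= 1e-15 * xn) return xn;
        x = xn;
    }
    return x;
}

// Newton iteration for 1/a: x_new = x*(2 - a*x) -- no division anywhere
double newton_recip(double a, double x0)
{
    double x = x0;                          // needs a starting guess with 0 < x0 < 2/a
    for (int i = 0; i < 60; i++) {
        double xn = x * (2.0 - a * x);
        if (fabs(xn - x) <= 1e-15 * xn) return xn;
        x = xn;
    }
    return x;
}

int main()
{
    printf("sqrt(2) ~ %.15f\n", newton_sqrt(2.0));
    printf("1/7     ~ %.15f\n", newton_recip(7.0, 0.1));
    printf("3/7     ~ %.15f  (division as a * 1/b)\n", 3.0 * newton_recip(7.0, 0.1));
    return 0;
}

In practice the starting guess for the reciprocal iteration comes from a small lookup table so only a few iterations are needed; here a fixed guess of 0.1 is enough because 0 < 0.1 < 2/7.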
 
I decided to check on some CPU instruction sets to see what is built into them. Many CPUs use "microcoding" - a CPU contains a mini-CPU that runs a mini-program for every instruction.

One of the simplest CPU designs still in use is the MOS Technology 6502, with the 6502 Instruction Set. Introduced in 1975, it is an 8-bit chip, with 8-bit data size, though memory addressing is 16-bit.

It does arbitrary branching and arbitrary data access, so it is Turing-complete to within resource limitations. The only arithmetic it does is addition and subtraction. One has to write software to do multiplication and division and floating-point operations.

The Intel 8086, from 1978, with its 8086 instructions, is a 16-bit chip. It is also integer-only, though it does do multiplication and division, implemented in microcode.

Reverse-engineering the multiplication algorithm in the Intel 8086 processor - as one would expect, it's shifting and adding, a binary version of long multiplication.

Reverse-engineering the division microcode in the Intel 8086 processor - as one would expect, it's shifting and subtracting, a binary version of long division.
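
As a rough illustration of what that microcode is doing, here is a shift-and-add multiply and a shift-and-subtract (restoring) divide in C++. This is only a sketch of the general technique, not the 8086's actual microcode, which also deals with signs, flags, and overflow.

Code:
#include <stdio.h>
#include <stdint.h>

// Binary long multiplication: for each set bit of b, add a shifted copy of a.
uint32_t mul_shift_add(uint16_t a, uint16_t b)
{
    uint32_t product = 0;
    for (int i = 0; i < 16; i++)
        if (b & (1u << i))
            product += (uint32_t)a << i;
    return product;
}

// Binary long division: bring down one dividend bit at a time,
// subtracting the divisor whenever it fits (restoring division).
uint16_t div_shift_sub(uint16_t dividend, uint16_t divisor, uint16_t *remainder)
{
    uint32_t rem = 0;
    uint16_t quot = 0;
    for (int i = 15; i >= 0; i--) {
        rem = (rem << 1) | ((dividend >> i) & 1);
        if (rem >= divisor) {
            rem -= divisor;
            quot |= (uint16_t)(1u << i);
        }
    }
    *remainder = (uint16_t)rem;
    return quot;
}

int main()
{
    uint16_t r;
    uint16_t q = div_shift_sub(1000, 7, &r);
    printf("123 * 45 = %u\n", (unsigned)mul_shift_add(123, 45));
    printf("1000 / 7 = %u remainder %u\n", (unsigned)q, (unsigned)r);
    return 0;
}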

The  Intel 8087 chip with Data Types and Instruction Set of 8087 adds floating-point instructions to the Intel 8086 chips. It does addition, subtraction, and multiplication, but it has no division instruction. So one must do it in software.
 
#include <string>
#include <iostream>
#include <thread>
#include <unistd.h>
#include <stdio.h>
#include <math.h>
#include <vector>
using namespace std;
using UN64=unsigned long long;
using UN32=unsigned int;
union Udbl
{
  double d;
  struct                      // IEEE-754 double fields (little-endian layout assumed)
  {
    long long unsigned int mant:52;
    unsigned int  power:11;
    unsigned int  sign:1;
  } flds2;
};
double log_barbos(double x)
{
  long long int mant;
  int pwr;
  Udbl d1;
  d1.d=x;
  if(x<0)return -NAN;
  if(x==0)return -INFINITY;
  mant=d1.flds2.mant;                    // 52-bit fraction field
  pwr=d1.flds2.power;                    // biased exponent
  double xx0=(double)mant * exp2(-52);   // fractional part f, so x = 2^(pwr-1023) * (1+f)
  double result=0;
  double tbl_size=256;
  double shift=tbl_size;
  // Peel off factors (1+dx), with dx quantized to the current table step,
  // accumulating log(1+dx); in a real implementation those log(1+dx) values
  // would come from a lookup table instead of calling log().
  while(xx0>=exp2(-53))
  {
    int m0=xx0*shift;
    double dx=(double)m0/shift;
    result+=log(1+dx);
    xx0=(xx0-dx)/(1+dx);                 // remaining factor: (1+new xx0) = (1+old xx0)/(1+dx)
    shift*=tbl_size;
  }
  return result+log(2)*(pwr-1023);       // add the exponent's contribution
}
void  log_test(double x)
{
  printf("log(%g):\nLog        %.16g\nlog barbos %.16g\n",x,log(x),log_barbos(x));
  printf("______________________\n");
}
void main4()
{
      log_test(123);
      log_test(0.00000001);
      log_test(3.158875878e67);
      log_test(0.);
      log_test(-10);
}


The argument is assumed to be normalized, and the tables are not implemented; this is merely a demonstration.
The same thing can be done with exp(), sincos(), and sqrt().
 
The oldest CPU architecture still in use is the IBM System/360 series and its numerous successors. It was introduced in 1964 and first delivered in 1965, and its present-day successor is the Z series.

It does both integer and floating-point arithmetic: addition, subtraction, multiplication, and division.

Returning to the 8086 and 8087, I've found the Intel 8087 datasheet - it lists division along with the other three arithmetic operations.

Extracting ROM constants from the 8087 math coprocessor's die
The 8087 operated as a co-processor with the 8086 processor. When the 8086 encountered a special floating-point instruction, the processor ignored it and let the 8087 execute the instruction in parallel. I won't explain in detail how the 8087 works internally, but as an overview, floating-point operations are implemented using integer adds/subtracts and shifts. To add or subtract two floating-point numbers, the 8087 shifts the numbers until the binary points (i.e. the decimal points but in binary) line up, and then adds or subtracts the fraction. Multiplication, division, and square root are performed through repeated shifts and adds or subtracts. Transcendental operations (tan, arctan, log, power) use CORDIC algorithms, which use shifts and adds of special constants for efficient computation.
 CORDIC - "coordinate rotation digital computer"
CORDIC uses simple shift-add operations for several computing tasks such as the calculation of trigonometric, hyperbolic and logarithmic functions, real and complex multiplications, division, square-root calculation, solution of linear systems, eigenvalue estimation, singular value decomposition, QR factorization and many others.
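
In that spirit, here is a small shift-and-add style logarithm sketch (my own construction, not the 8087's actual CORDIC microcode): the argument is repeatedly multiplied by factors (1 - 2^-k), which in fixed point is just a shift and a subtract, and the constants ln(1/(1 - 2^-k)) are summed. A real implementation would store those constants in a table; here they are generated with log() for brevity.

Code:
#include <stdio.h>
#include <math.h>

// Shift-and-add logarithm for 1 <= x < 2.  Multiplying by (1 - 2^-k)
// is x - (x >> k) in fixed point; only the constants ln(1/(1 - 2^-k))
// need to be stored.
double shift_add_log(double x)
{
    double result = 0.0;
    for (int k = 1; k <= 52; k++) {
        double factor = 1.0 - ldexp(1.0, -k);    // 1 - 2^-k
        while (x * factor >= 1.0) {              // drive x down toward 1
            x *= factor;
            result += -log(factor);              // table constant ln(1/(1 - 2^-k))
        }
    }
    return result;    // ln of the original argument, since x has been reduced to ~1
}

int main()
{
    double tests[] = { 1.0, 1.25, 1.5, 1.999 };
    for (double x : tests)
        printf("x=%-6g  shift-add %.15f  log() %.15f\n", x, shift_add_log(x), log(x));
    return 0;
}

For a full argument range one would first pull out the power of two, as in the log_barbos() example earlier in the thread, and add (exponent) * ln 2.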
 
Motorola 68000 has all four integer operations, and the 68040 has all four floating-point instructions also.

ARMv1 - ARM - WikiChip - the first ARM chips had only integer addition and subtraction - no multiplication or division, no floating point

ARMv2 - ARM - WikiChip - has multiplication though not division.

But more recent ARM chips have division in them, and also floating-point support. I've found on-chip division in MIPS, SPARC, and PowerPC.
 
Iterative as in numerical integration versus a definite integral.

integral of e^x = e^x
integral of e^x from 0 to 1 = e^1 - e^0

or iterative
int i, n = 1000;
double hilim = 1, lolim = 0;
double dx = (hilim-lolim)/n, sum = 0, x = lolim;
for(i = 0; i < n; i++){
    sum += exp(x)*dx;
    x += dx;
}

Nonlinear systems typically require an iterative solution: iterate until a relative error is below some value. An example is the electrical circuit simulator SPICE. The general term for it is a solver.

Solid-state electronics are nonlinear.
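
A toy version of that kind of solver step: Newton's method applied to a series resistor and diode, iterating until the relative change in the diode voltage drops below a tolerance. The component values and the convergence threshold here are made up for illustration; a real simulator such as SPICE linearizes every nonlinear element this way at each iteration, and also adds damping and limiting so a bad starting guess does not blow up.

Code:
#include <stdio.h>
#include <math.h>

// Solve Vs = Vd + R*Id with Id = Is*(exp(Vd/Vt) - 1)  (series resistor + diode)
// by Newton iteration on the residual g(Vd) = Id - (Vs - Vd)/R.
int main()
{
    const double Vs = 5.0, R = 1000.0;      // 5 V source, 1 kohm resistor
    const double Is = 1e-14, Vt = 0.025;    // diode saturation current, thermal voltage
    double Vd = 0.6;                        // initial guess for the diode voltage

    for (int iter = 1; iter <= 100; iter++) {
        double Id   = Is * (exp(Vd / Vt) - 1.0);
        double g    = Id - (Vs - Vd) / R;                // current mismatch at the node
        double dgdV = (Is / Vt) * exp(Vd / Vt) + 1.0 / R;
        double Vnew = Vd - g / dgdV;                     // Newton update
        double relerr = fabs(Vnew - Vd) / fabs(Vnew);
        Vd = Vnew;
        if (relerr < 1e-9) {                             // converged
            printf("converged after %d iterations\n", iter);
            break;
        }
    }
    printf("diode drop %.6f V, current %.6f mA\n", Vd, 1000.0 * (Vs - Vd) / R);
    return 0;
}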
 
Motorola 68000 has all four integer operations, and the 68040 has all four floating-point instructions also.

ARMv1 - ARM - WikiChip - the first ARM chips had only integer addition and subtraction - no multiplication or division, no floating point

ARMv2 - ARM - WikiChip - has multiplication though not division.

But more recent ARM chips have division in them, and also floating-point support. I've found on-chip division in MIPS, SPARC, and PowerPC.
Floating point arithmetic is done with integer operations. The difference is interpreting the decimal point.
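
As a small illustration of that point, here is a floating-point add done with an integer add on the mantissas (a sketch only: it assumes both inputs are positive finite doubles and ignores signs, rounding, and special values, which real hardware has to handle). The idea is the one in the 8087 description above: split each number into mantissa and exponent, shift to line up the binary points, add the mantissas as integers, then put the exponent back.

Code:
#include <stdio.h>
#include <math.h>
#include <stdint.h>

// Add two positive doubles using only an integer add on the aligned mantissas.
double soft_add(double a, double b)
{
    int ea, eb;
    // frexp gives a = m * 2^ea with 0.5 <= m < 1; scale m up to a 53-bit integer
    int64_t ma = (int64_t)scalbn(frexp(a, &ea), 53);
    int64_t mb = (int64_t)scalbn(frexp(b, &eb), 53);
    // align the binary points by shifting the smaller operand right
    int d = ea - eb;
    if (d >= 0) { mb >>= (d > 62 ? 62 : d); }
    else        { ma >>= (-d > 62 ? 62 : -d); ea = eb; }
    int64_t sum = ma + mb;                      // the actual addition is an integer add
    return scalbn((double)sum, ea - 53);        // reattach the exponent
}

int main()
{
    printf("soft_add(3.25, 0.0078125) = %.17g\n", soft_add(3.25, 0.0078125));
    printf("3.25 + 0.0078125          = %.17g\n", 3.25 + 0.0078125);
    return 0;
}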
 
There must be a good reason; I am surprised a modern processor would not have a hardware multiply.

An intro to binary math

Fixed point is easier to see than floating point. As the name implies, in fixed point the decimal point is fixed: the number of fractional bits and integer bits is fixed.

For an 8-bit integer the bits are numbered left to right b7 b6 ... b0. Each bit is numerically 1 or 0.

The lsb, or least significant bit, is obviously 1.

The weighted value is lsb*(b7*2^7) + lsb*(b6*2^6) + ... + lsb*(b0*2^0)

So, 00000001 = 1, 00000010 = 2, 00000011 = 3, and so on.

In a processor fractional numbers are integers with a different binary weighting.

For an 8-bit fractional number the lsb is 1/256 = 0.00390625. That is the smallest nonzero value.

The decimal value is lsb*(b7*2^7) + lsb*(b6*2^6) + ... + lsb*(b0*2^0)

So, .10000000 = 0.5, .01000000 = 0.25, .10000001 = 0.50390625


A 16-bit fixed-point number with 8 integer and 8 fractional bits:

0000001110000000 = 00000011.10000000 = 3.5 decimal.

Multiplication and addition work as with base 10.

In fixed point the dynamic range, max to min, is fixed. It works fine if you know the bounds of the computations beforehand.

Floating point adjusts the decimal point as the integer part grows. As the integer part grows, the number of bits available for the fractional part goes down.

Some code I wrote a while back as a review. When you type a base-10 number on a computer, code converts it to binary floating point.

The inputs to the multiplier are a and b in fixed point, hex or binary. a and b are multiplied and the product converted to text for display. 1 sign bit, 7 integer bits, 8 fractional bits. I didn't work out signed multiplication.

I didn't bother to clean up the code but it is easy to follow. I didn't write a string-to-fixed-point function for console input.

Binary multiplication.

Code:
2 * 3

        0011    binary 3
      x 0010    binary 2
      ------
        0000    partial product for bit 0
       0011     partial product for bit 1, shifted left one place
      ------
        0110    binary 6

Code:
#include <stdio.h>
#include <math.h>

float fixpt2dec(unsigned int num, int nint, int nfrac){
    //converts fixed point to decimal for dsplay
    int i;
    unsigned int mask = 1;
    float fraclsb, decval = 0;
    fraclsb = 1 / pow(2, nfrac);
    //fractional sum
    for (i = 0; i < nfrac ; i++) {
        if (mask & num) decval += (fraclsb * pow(2, i));
        mask = mask << 1;
    }
    //integer sum
    for (i = 0; i<nint;i++){
        if (mask & num) decval +=  pow(2, i);
        mask = mask << 1;
    }

    if(num&mask)decval = decval * -1.;

    return(decval);
}//fixrftodec()

unsigned int  dec2fixpt(unsigned int num, int nint, int nfrac){

return(1);

}//dec2fixpt()

double dec2bin(int nb, double num) {
    double lsb, dval = 0, vmow, vlast;
    int i, n;
    n = pow(2, nb);
    lsb = 1 / double(n);
    for (i = 0; i <= n - 1; i++){
        if(dval < num) dval = (lsb * i);
        if (dval > num){ //i = n;
            //dval = lsb * (i - 1);


        }
    }
    return(dval);
}//dec2bin()

/*
float _atof(string s) {
    int dec_pt = -1, s_len, i, sign = 1, n_start = 0;;
    float value = 0;
    s_len = s.size(s);
    if (s[0] == '-') { sign = -1, n_start = 1; }
}//_atof)
*/

int main()
{

    double decval = 0;
    int Nbits = 16, Nfrac = 8, Nint = 7;
    unsigned int a = 0x7f01, b = 0x0100;
    unsigned int acc = 0,mask = 1;
    double dv1 = 0,dv2 = 0;
    int nbits = 15,i;

    // multiply shift and add
    for (i = 0; i < nbits; i++)if (a & (mask << i)) acc += (b << i);
    acc = acc >>8; // set decimal point
    // decimal values for display
    dv1 = fixpt2dec(a, Nint, Nfrac);
    dv2 = fixpt2dec(b, Nint, Nfrac);
    printf("a  %3.10f  b %3.10f\n",dv1,dv2);
    // decimal value of multiplication product
    decval = fixpt2dec(acc, Nint, Nfrac);
    printf("acc 0X%x\n",acc);
    printf("decval %4.8f\n",decval); ;

    return(0);

}

a = 0x7f00 (127.0), b = 0x0080 (0.5)
Result a * b = 63.5
 
Iterative as in numerical integration versus a definite integral.
That's not what iteration is. Iteration is repeating some operation to get some result, like in Newton's method.

As to fixed vs. floating point, fixed-point numbers are implemented as integers that are multiplied by some assumed scaling factor.

As to not having a hardware multiply, that is tolerable if one does not do much multiplication. It's Turing universality - as long as a chip is Turing complete, one doesn't need to have much on it. Having a lot on it is good for performance, however.
 
Colloquially, in my experience, iterative means iterating, as opposed to an equation into which you plug values. Semantics.


Anticipating the question, I wrote a simple example of a solver, or an optimizer. I think Excel calls it a goal seeker.

An application can have hundreds of equations, including integrals and derivatives. I worked in a pressure sensor group. I did not create it, but a solver/optimizer was used to take manufacturing measurement data on a sensor and optimize calibration for performance.

You can start with coarse resolution and progressively increase resolution, zeroing in on an optimum. In this case the goal is not an exact solution, but a set of numbers that place multiple variables within given bounds. It could be a chemical process or a pharmaceutical manufacturing process.

There are solver packages; the inputs are matrices.

Code:
#include <stdio.h>
#include <math.h>
#include <iostream>
using namespace std;

void linspace(int n,double lo,double hi, double *y){
    // array of equally spaced doubles
    int i;
    double dy = (hi-lo)/(n-1);
    y[0] = lo;
    for(i=1;i<n;i++)y[i] = y[0] + (i * dy);
}

int main(){
    const int n = 10000;
    int i,j;
    double x2lo = 0,x2hi = M_PI,x1lo = 1,x1hi = 10;
    double y1,y2,x1[n],x2[n];
    double y1tol = .01,y2tol = .01;
    double y1targ = 2, y2targ = 2.5;
    linspace(n,x1lo,x1hi,x1);
    linspace(n,x2lo,x2hi,x2);

    // brute-force scan: find an (x1, x2) pair that puts both
    // equations within tolerance of their targets
    for(i=0;i<n;i++){
      for(j = 0;j<n;j++){
        y1 = x1[i] + log(x1[i]);
        y2 = sin(x2[j]) + x1[i];
        if((fabs(y1-y1targ) <= y1tol) &&
           (fabs(y2-y2targ) <= y2tol) ){
                cout<<"SUCCESS"<<endl;
                printf("y1 Target  %f  y1  %f  x1  %f\n",y1targ,y1,x1[i]);
                printf("y2 Target  %f  y2  %f  x2  %f\n",y2targ,y2,x2[j]);
                return 0;
                }
      }//j
    }//i

    cout<<"fail"<<endl;
    return 0;
}
 
Iterative as in numerical integration versus a definite integral.
That's not what iteration is. Iteration is repeating some operation to get some result, like in Newton's method.

As to fixed vs. floating point, fixed-point numbers are implemented as integers that are multiplied by some assumed scaling factor.

As to not having a hardware multiply, that is tolerable if one does not do much multiplication. It's Turing universality - as long as a chip is Turing complete, one doesn't need to have much on it. Having a lot on it is good for performance, however.
True. Decimal digital multiplication can be and was done as I outlined in the multiplication example.

You could do 64 bit floats on an 8 bit processor, but it was slow.

I think it was the i386 that first included an on-chip math co-processor. Before that there were math coprocessor chips you could buy and plug into the motherboard to increase math speed. Back when processor speeds were in the 10 megahertz range.

Tailored fixed point ran faster. There were dedicated DSP chips that were based on fixed point.
 
  •  x86 - the first 80x86 chip to get an on-chip floating-point unit was the 80486. Before that, a CPU's FPU was a separate chip: 8087, 80287, 80387.
  •  Motorola 68000 series - first on-chip FPU in the 68040
  •  MIPS architecture, PA-RISC, and PowerPC are complete
  •  DEC Alpha is complete except for no integer divide, though it does have floating-point divide
  •  SPARC - initially no integer multiply or divide; later versions got those instructions. Floating point was always present.
  •  ARM architecture family - initially no integer multiply or divide; later ones got integer multiply, with integer divide being optional. Floating point is optional.
It may seem like a paradox that some advanced chips don't have built-in multiplication or division, but there it is.
 