
The Programming Thread

Macros vs. functions? A macro does not have function-call overhead, and it may be optimized with reference to the surrounding code. So a macro can be faster, and the inline feature of C++ is intended to get the performance of a macro while being type-safe and parse-safe.
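As a quick aside, here is a minimal sketch of the hazard that inline avoids (all names invented for illustration): a function-like macro evaluates its argument once per mention, while an inline function evaluates it exactly once.

Code:
#include <stdio.h>

/* Classic macro pitfall: the argument appears twice in the expansion,
   so any side effect in it happens twice. */
#define SQUARE_MACRO(x) ((x) * (x))

/* The inline function evaluates its argument exactly once and is
   type-checked like any other function. */
static inline int square_func(int x) { return x * x; }

static int calls = 0;
static int next_val(void) { return ++calls; }

int main(void)
{
    printf("%d\n", SQUARE_MACRO(next_val())); /* next_val() runs twice: 1*2 = 2 */
    calls = 0;
    printf("%d\n", square_func(next_val()));  /* runs once: 1*1 = 1 */
    return 0;
}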
Personally, I like macros for another reason: when you have a piece of code that is absolutely 100% right, made of heavy or clever math, and can be named in a sensible way, putting it in a big, hairy, scary macro can be more effective for keeping idiot newbies from touching it.

A functional macro that is clearly very fragile, put in a place where other scary and fragile things go, is a great way to keep devs away.
I would have thought it better to have a code review process where a senior dev can stop newbies from making such mistakes and perhaps teach them the right way to do whatever it is they've been tasked to do. Then they won't be idiots anymore.

Funny to think that there might be orgs out there who obfuscate source code to make it harder for their own people to read.
The source code for doing certain things deep in the OS has a certain look to it that reflects this same philosophy.

Oftentimes the macros are assembly code, even.
Is it an old-timey practice that pre-dates version-control software?
 
That predates people thinking compiler optimizations are acceptable in OS code.
 
There is an anomaly in both functions, ruff and ruf2. There are sequential duplicate numbers.

You persist in trying to find flaws in my code that aren't flaws.

I think you're referring to the fact that the return values from random() -- suppose they are 10, 11, 12, 13, 14, 15, 16 -- get the "|1" applied and become 11, 11, 13, 13, 15, 15, 17, ... Is that it?

So what? All that means is that one of the random() bits is being overwritten and forced to 1. This is deliberate, and I explained why. If you want more granularity, just use more random bits, or use some other easy workaround.

I previously posted:

Let's be clear on the effect of discarding those two random bits. When you call my ad-hoc ruf() to get a random float you might get, by chance
.0186264459
or you might get
.0186264496
But you will never get a number in between those two. Please. Look at those two numbers again. The ad hoc ruf() that I presented above cannot return a number larger than .0186264459 but smaller than .0186264496. The granularity imposed by the ">>2" is just too coarse. (Obviously it's the exact same granularity throughout the 0 < x < 1 range. No cherry-picking here.)

Look at the two numbers again. Raise your hand if you think this coarse granularity is likely to be a problem in any of 99.999% of applications.

Look at the numbers. If the granularity is really too coarse, just use more random bits! Do you really need help doing that??
 
My posts seem to attract anger or nitpicking. (One Infidel found fault with the comma placement in COMMENTS!) I assume Swammi's absence will bring happiness. I've continued to post in this thread just to bring closure to a conversation I started.
I think nitpicking is better than ignoring posts directed at somebody. I'm used to people nitpicking my posts in some other threads. My comments about your commas were based on this (post #374).

I assume that in "ignoring posts directed at somebody" the somebody is me. If I've failed to answer any of your questions, please notify me. Meanwhile I will respond to this post.

I've mentioned that I am (or was) an idiot savant, though it is the idiot that may be most apparent... If it weren't for the idiot part of my brain, I might be famous.
I couldn't see any reason for you to be called an "idiot". The issue with your commas was the only thing I could think of so I brought it up as a way of trying to agree with your statement. Sorry if it offended you or whatever.

And by the way, I assume you're well aware that your rejoinder about the comma "issue" was already given -- this is getting rather redundant. I brought it up yet again because I was absolutely boggled that there would be discussion of spaces in a comment.

When I speak of "stupidity" I am NOT referring to my work with computers. (I realize that some find my code hard to read -- that may be partly due to my autism.)

My stupidity manifests itself in MANY ways. I was the very last kid in Kindergarten to be able to tie my shoes. In high school, with an oral presentation due, I just kept hoping the teacher would forget about me. I was morbidly shy. Even in my mid-20s I would sometimes be completely silent, bizarrely so. All this despite the fact that I won all the scholastic medals and had scores on standardized tests exceeding the 99.99th percentile.

I was left-handed, but my 1st-grade teacher "fixed" that. I wrote very very tiny, squeezing TWO lines onto narrow-lined paper! My 3rd-grade teacher fixed that.

My business skills are absolutely atrocious; my social skills also quite bad. And -- some of you may have noticed this -- I have a tendency to get HORRIBLY annoyed and angry over absolutely irrelevant trivia. I've done bizarrely foolish things with women who wanted to date me.

In my careers as computer programmer, systems and circuit designer, and troubleshooter, nobody ever considered me stupid. My co-workers were awed. Needless to say my code was NEVER the butt of the picayune and wrong-headed nitpicking we've seen here. When I first started writing C code circa 1980, I squeezed things together without white space, but by the mid 80's, when I was first writing C professionally, I was using One True Style. (During the late 70's and early 80's I worked with hardware.)

Over the decades I've gradually gotten a little better. Some people find me friendly and fun, even handsome. I'm 73 years old but pretend I'm still in my 60's and enjoy life.

More questions?
 
On second thought, I shouldn't be expecting a reply from you when I raise some points about what you said, as in post #380... In a similar way, I don't really relate to you saying "I am less happy when someone quotes me without a Like" (well, if I were used to getting likes I might agree with you).
I tried explaining it to you -- it was purely about me trying to understand your statement that you can be an "idiot".
Thanks, that was very helpful. BTW, you said "I have a tendency to get HORRIBLY annoyed and angry over absolutely irrelevant trivia," yet you are "absolutely boggled" that I had a slight problem with your comments. I thought you would be more understanding, given that I am a bit like you in this way.
 
Swami

Your code generates the same number for different outputs of rand(). Multiple occurrences. This means that each number no longer has the same probability; as such it is useless.

Take a 6-sided die with the numbers 1-6 on each face. Toss it and the probability is 1/6 for each number.

Put 1,2,2,3,4,5 on the faces and toss. Each number no longer has the same probability.

The definition of a uniform distribution is that, theoretically, all numbers in the distribution have equal probability. Your code fails.

That is why I said the mean is not the best indicator of performance, or a histogram for that matter. You actually have to look at the outputs.

When you said normal distributions can only have a zero mean you lost credibility. That, and other threads involving statistics. On the exponential function problem I posed, you confused the sum of the probabilities equalling 1 with the sum of the variables.

Anybody can have a brain fart. You are consistently shallow and lacking comprehension.

From your posts over time my guess is that you never worked with people who had the savvy to understand what you did and question it. When somebody did, you probably bluffed your way through it.

Exreationist makes reasoned requests which apparently you cannot deal with, and you call him stupid. That says it all.

Down the thread you posted a multiply algorithm you said was faster than anything else. I showed that to be false, and when I asked you about limitations of the code you went silent.

You got snarky on the exponential problem I posed, and when I posted the solution using basic calculus -- average value of a function, integration -- you went silent.

All that being said, answer this question.

Given that your code generates duplicate numbers and no longer meets the definition of a uniform distribution, statistically how is that not a problem? I told you why it is a bug statistically; tell me exactly why I am wrong.
 

Steeeve

Your code generates the same number for different outputs of rand(). Multiple occurrences. This means that each number no longer has the same probability; as such it is useless.

This is getting VERY annoying. You keep bleating the same wrong narrative over and over. No matter how I try to educate you, it's impossible.

My code -- which was an ad-hoc function just in case someone wanted to compile my snippets -- DELIBERATELY discards bits from random(). I made no secret of that. And you figured out, all by yourself, that if you overwrite random()'s LSB with 1, you've in effect discarded another bit.

That bit doesn't matter. Look at the two numbers I discuss below.

Take a 6-sided die with the numbers 1-6 on each face. Toss it and the probability is 1/6 for each number.

Put 1,2,2,3,4,5 on the faces and toss. Each number no longer has the same probability.
Totally wrong analogy. Imagine that you want to simulate a six-sided die {1,2,3,4,5,6} and all you have is a 12-sided die {1,2,3,4,5,6,7,8,9,10,11,12}. An easy way to the goal is to add 1 and divide by two, with integer division: { (1+1)/2 --> 1, (2+1)/2 --> 1, (3+1)/2 --> 2, et cetera }. THAT is the proper analogy for the effect of "|1".
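A sketch of that analogy in code, with the d12() helper invented here for illustration (rand()'s slight modulo bias ignored for simplicity):

Code:
#include <stdlib.h>

/* Hypothetical 12-sided die, roughly uniform on 1..12. */
int d12(void) { return rand() % 12 + 1; }

/* (roll + 1) / 2 with integer division maps {1,2}->1, {3,4}->2, ...,
   {11,12}->6, so each simulated d6 face keeps probability 2/12 = 1/6. */
int d6_from_d12(void) { return (d12() + 1) / 2; }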


The definition of a uniform distribution is that, theoretically, all numbers in the distribution have equal probability. Your code fails.

Wrong. Using 28 random bits (ignoring the bit lost in "|1") as I do, 2^28 different floating-point values are possible and (if the random() being invoked is "perfect") each is generated with probability 2^-28. I showed what those 2^28 numbers are. Can you improve on that choice?

Do you even understand that much? Answer a simple True or False, please, right now, so we can verify whether you're playing with a full deck.

Did you ever actually LOOK at the two numbers I asked you to look at? Here they are again. I've increased the font-sizes of the points you are determined to ignore.

The one-liner I posted showing the conversion of a random integer to a random double is NOT taken from my personal library; it was just a throw-away snippet for here. Instead of using 31 random bits, it uses 29 random bits (or 2 fewer than however many bits random() supplies). I did this to minimize the risk that someone here would compile the snippets and run up against a limit. My actual library functions are perfectionistic and allow the caller to optionally specify how many random bits to spend when forming a random float.
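For anyone compiling along, here is a guess at the general shape of such a throw-away conversion -- a sketch assuming a POSIX random() that supplies 31 random bits, not necessarily the exact snippet posted earlier:

Code:
#include <stdlib.h>

/* Sketch only: random() assumed uniform on 0 .. 2^31-1.
   ">>2" keeps 29 bits; "|1" pins the low bit, leaving 28 free bits,
   so adjacent possible outputs differ by 2^-28 (about 3.7e-9). */
double ruf_sketch(void)
{
    return (double)((random() >> 2) | 1) / (double)(1L << 29);
}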

Let's be clear on the effect of discarding those two random bits. When you call my ad-hoc ruf() to get a random float you might get, by chance
.0186264459
or you might get
.0186264496
But you will never get a number in between those two. Please. Look at those two numbers again. The ad hoc ruf() that I presented above cannot return a number larger than .0186264459 but smaller than .0186264496. The granularity imposed by the ">>2" is just too coarse. (Obviously it's the exact same granularity throughout the 0 < x < 1 range. No cherry-picking here.)

Look at the two numbers again. Raise your hand if you think this coarse granularity is likely to be a problem in any of 99.999% of applications.

Rounded to 7 significant digits, the two numbers just mentioned are 0.01862645 and 0.01862645 respectively. Stare at those two numbers. Notice anything?

That is why I said the mean is not the best indicator of performance, or a histogram for that matter. You actually have to look at the outputs.

Whack! NOBODY ever said that "the mean is the best indicator." Whack! I responded to a particularly stupid statement by pointing out that my code's outputs had mean of 0.5 exactly, and an inferior method's did not (it had a very slight bias).

When you said normal distributions can only have a zero mean you lost credibility. That, and other threads involving statistics. On the exponential function problem I posed, you confused the sum of the probabilities equalling 1 with the sum of the variables.

NOBODY ever said that a normal distribution can only have zero mean. I even posted the snippet
y = x*std_dev + mean
to show how to convert a zero-mean variate x into a variate y with specified mean and std_dev. Apparently you need even more hand-holding than that.

I posted an elegant snippet that converts two uniform real variates into a 2D gaussian variate in polar coordinates, and then converts that into two 1D gaussian variates. Your comments showed that that was too much for you to understand, so you've been babbling over and over and over about the "remarkable" fact that discarding two bits with ">>2" DISCARDS TWO BITS!! Wow!!
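For reference, the standard construction that sentence describes is Box-Muller in polar form; a sketch (not necessarily the snippet originally posted):

Code:
#include <math.h>

/* Two uniforms on (0,1) in; a 2D gaussian in polar form (r, theta)
   in the middle; two independent standard 1D gaussians out.
   u1 must be strictly positive, since log(0) is undefined. */
void gauss_pair(double u1, double u2, double *g1, double *g2)
{
    const double PI = 3.14159265358979323846;
    double r     = sqrt(-2.0 * log(u1));
    double theta = 2.0 * PI * u2;
    *g1 = r * cos(theta);
    *g2 = r * sin(theta);
}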

You seem determined to find fault with my code, or my posts in general. Not just here, but in other threads as well. What is your problem?

Anybody can have a brain fart. You are consistently shallow and lacking comprehension.

From your posts over time my guess is that you never worked with people who had the savvy to understand what you did and question it. When somebody did, you probably bluffed your way through it.

I've posted synopses of my career. At my last job I worked with one of the top pattern-recognition scientists in the USA. We coauthored papers and coauthored patent applications. I've worked with many other top engineers. Sometimes a stranger would approach and ask for help since they'd heard I was the best mathematician in the company. Et cetera. Evidently you think what I tell about myself is a lie. You think you've caught me "bluffing."

Down the thread you posted a multiply algorithm you said was faster than anything else. I showed that to be false, and when I asked you about limitations of the code you went silent.

I think you're referring to the Babylonian approach using a table of squares. I NEVER said it was "faster than anything else." I DID mention that it outperformed the standard multiply on SOME 20th-century machines (without an FPU, obviously), though there is a storage-vs-speed tradeoff.

You got snarky on the exponential problem I posed, and when I posted the solution using basic calculus -- average value of a function, integration -- you went silent.

I don't know what this refers to.

-- -- -- -- -- --

You are intent on criticizing me, nitpicking any code snippet I post.

I've resisted any urge to criticize your code. Yet just recently we learn you don't know what a median is and/or are unsurprised that a uniform distribution's mean and median differ. You conjure up a bubble sort which does twice as many comparisons as needed. And somehow, you can't even figure out how to call qsort()!

That's just from your latest effort. If I were to criticize all your posted code, the list would get quite long.
 
I couldn't see any reason for you to be called an "idiot". The issue with your commas was the only thing I could think of so I brought it up as a way of trying to agree with your statement. Sorry if it offended you or whatever.

I apologize for losing my temper with you. My experience at IIDB is getting increasingly bad -- mostly due, I'm sure, to my own demeanor -- and my reactions are getting worse and worse. I probably should ask for my account to be suspended for a few weeks.

It was a digression to mention my alleged autism in this thread. It was a spill-over from another recent thread where other autists were "confessing." I DO know that my software programs are often hard for others to read; this might be related to my autism. I certainly find it MUCH easier to "talk" to computers than to talk to people!

But I post the snippets because I think they may provide interesting insights to mathematics or to C coding. And NOT because I solicit discussion of my stupidities.

Anyway, let me apologize again. I've always enjoyed your posts, and often get insight from them. And I need all the friends I can get! :cool:
 
Against my better judgement I responded to your plea in the post that started this all off; I knew where it would go.

I will go back to just ignoring your posts.

Be as critical of my posts as you please; I have nothing to prove.

I posted code derived from the original URAND paper, and three times you declared it was useless... why on Earth would I care what you think?
 
The talk on statistics led me to this. Math tools and libraries have functions to do this; spreadsheets create histograms.

Given an array of data, count the occurrences of points falling within a set of equal-width bins.

The function automatically sizes for arbitrary lengths and number of bins.

Tested with arrays of random uniform doubles.

Code:
#include <cstdio>
#include <cstdlib>

void save(int nbins,int *bins, double *limits ){
    int i;
    FILE *ptr = fopen("data.txt","w");
    if(!ptr)return;
    for(i=0;i<nbins;i++)
       fprintf(ptr,">= %.10f -> < %.10f %d\n",limits[i],limits[i+1],bins[i]);
    fclose(ptr);
}

void rand_doub(long long n,double *x,unsigned int sf,int os){
    long long i;
    for(i=0;i<n;i++){
        //add 1 to RAND_MAX as a double to avoid signed-int overflow
        x[i] = double(rand())/((double)RAND_MAX + 1.);
        x[i] += rand()%sf + os;
    }
}//rand_doub()


void hist_bins(long long ndata,int nbins, double *x,double * limits,int *bins){
    long long i;
    int j;
    for(j=0;j<nbins;j++)bins[j] = 0;
    double lo = x[0],hi = x[0];
    for(i = 0;i<ndata;i++){
        if(x[i] > hi)hi = x[i];
        if(x[i] < lo)lo = x[i];
    }
    double dbin = (hi-lo)/double(nbins);
    limits[0] = lo;
    for(j=1;j<nbins;j++)limits[j] = lo += dbin;
    limits[nbins] = hi;
    for(i=0;i<ndata;i++){
        for(j = 0;j<nbins;j++)
            if(x[i] >= limits[j] && x[i] < limits[j+1]){bins[j] += 1;break;}
        //the maximum equals the top edge; count it in the last bin
        if(x[i] == hi)bins[nbins-1] += 1;
    }//i
}//hist_bins()


int main()
{
    srand(1);
    long long i ,ndata=(1e6);
    int nbins = 20,nlimits = nbins+1;
    int *bins = new int[nbins];
    double *limits = new double[nlimits];
    double *data = new double[ndata];
    rand_doub(ndata,data,600,200);
    hist_bins(ndata,nbins,data, limits,bins);

    for(i=0;i<nbins;i++)
         printf(">= %.10f -> < %.10f %d\n",limits[i],limits[i+1],bins[i]);
    save(nbins,bins,limits);
return 0;
}



>= 200.0009765625 -> < 230.0009262085 50143
>= 230.0009262085 -> < 260.0008758545 50428
>= 260.0008758545 -> < 290.0008255005 50379
>= 290.0008255005 -> < 320.0007751465 50232
>= 320.0007751465 -> < 350.0007247925 50175
>= 350.0007247925 -> < 380.0006744385 50192
>= 380.0006744385 -> < 410.0006240845 50636
>= 410.0006240845 -> < 440.0005737305 50718
>= 440.0005737305 -> < 470.0005233765 50513
>= 470.0005233765 -> < 500.0004730225 50562
>= 500.0004730225 -> < 530.0004226685 50362
>= 530.0004226685 -> < 560.0003723145 50290
>= 560.0003723145 -> < 590.0003219604 49417
>= 590.0003219604 -> < 620.0002716064 49355
>= 620.0002716064 -> < 650.0002212524 49304
>= 650.0002212524 -> < 680.0001708984 49514
>= 680.0001708984 -> < 710.0001205444 49554
>= 710.0001205444 -> < 740.0000701904 49403
>= 740.0000701904 -> < 770.0000198364 49220
>= 770.0000198364 -> < 799.9999694824 49602
 
Continuing my leisurely foray into random numbers.

URAND is of the form

random_number[0] = seed
random_number[n] = random_number[n-1]*k1 + k2

The functions eliminate the need for a separate seed function like rand()/srand(seed) and an external static variable. If the seed value changes from the previous seed, the sequence is reinitialized. Not original; it is in the 70s paper I linked to.

The limit on the max number comes from the multiplication. Using 64-bit ints inside the function allows a full 32-bit signed int, i.e. a max number of 2^31 - 1. A number that large doesn't seem practical unless you are using 64- or 128-bit variables outside the function.

The code allows setting the max number and an offset, so you can set the range of random numbers.

y = urand_int(1,10,20) yields a random number from 20 to 30.

Creating random doubles -- for example, random doubles with an integer part from 20 to 30:

y = (double)(urand_int(1,10,20)) + urand_dec(1);

Utility? Platform and library independence, and direct control of the inner workings. It increases the max number over rand(). It can be extended to 64-bit numbers by using __int128 in the functions. A simple algorithm. From testing, URAND gives a flatter uniform distribution than rand().



Code:
#include <math.h>

int  urand_int(unsigned long long seed,unsigned long long max_num,
                          unsigned long long offset){
    //max random int  2^31 - 1
    static unsigned long long  IY = 1,s = 0;
    if(s != seed){IY = seed; s = seed;}
    unsigned long long  m = max_num;
    //URAND-style multiplier and increment derived from the modulus m
    unsigned long long  IC = 2*floor(m*(.5 - sqrt(3)/6.)) + 1;
    unsigned long long  IA = 8*m*ceil(atan(1.)/8.) + 5;
    IY = IY * IA + IC;
    return (IY % (m + 1)) + offset;
}

double  urand_dec(unsigned long long seed){
    //random decimals 0 - <1
    static unsigned long long  IY = 1,s = 0;
    if(s != seed){IY = seed; s = seed;}
    unsigned long long  m = pow(2,31)-1;
    unsigned long long  IC = 2*floor(m*(.5 - sqrt(3)/6.)) + 1;
    unsigned long long  IA = 8*m*ceil(atan(1.)/8.) + 5;
    IY = IY * IA + IC;
    return (double)(IY%(m+1))/(double)(m+1);
}
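A possible usage sketch for the two functions above (the ranges in the comments follow from the arguments, not from verified runs):

Code:
#include <stdio.h>

int main(void)
{
    int i;
    for (i = 0; i < 5; i++) {
        int    y = urand_int(1, 10, 20);                        /* 20 .. 30 */
        double d = (double)urand_int(1, 10, 20) + urand_dec(1); /* [20, 31) */
        printf("%d   %f\n", y, d);
    }
    return 0;
}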
 
Digital Pseudo Random Sequence Generators

Random sequences have long been created in digital circuits. They are used in spread-spectrum radio, like cell phones, to pseudo-randomly spread the RF energy so that it looks like noise.

It comes under the heading of linear feedback shift registers (LFSRs), a form of finite state machine.

The code is an implementation of a simple digital circuit in the second link.

A 5-bit shift register with b0-b3 as the sequence output. Take the exclusive OR (XOR) of b0 and b3; if it is 1, make b4 1 and shift right. XOR is true if either input is true but not both at the same time.

There are methods to design the feedback logic to create arbitrary sequences.

The seed has to be greater than zero and the cycle repeats.

There are Fibonacci and other code examples in the first link, in C and Python.


The most commonly used linear function of single bits is exclusive-or (XOR). Thus, an LFSR is most often a shift register whose input bit is driven by the XOR of some bits of the overall shift register value.

The initial value of the LFSR is called the seed, and because the operation of the register is deterministic, the stream of values produced by the register is completely determined by its current (or previous) state. Likewise, because the register has a finite number of possible states, it must eventually enter a repeating cycle. However, an LFSR with a well-chosen feedback function can produce a sequence of bits that appears random and has a very long cycle.

Applications of LFSRs include generating pseudo-random numbers, pseudo-noise sequences, fast digital counters, and whitening sequences. Both hardware and software implementations of LFSRs are common.

The mathematics of a cyclic redundancy check, used to provide a quick check against transmission errors, are closely related to those of an LFSR.[1] In general, the arithmetics behind LFSRs makes them very elegant as an object to study and implement. One can produce relatively complex logics with simple building blocks. However, other methods, that are less elegant but perform better, should be considered as well.




Code:
#include <stdio.h>

int main(){
    int i,n = 20, sequence = 1;  //seed; must be nonzero
    int fb = 0;
    for(i = 0;i<n;i++){
        printf("%d\n",sequence);
        //XOR b0 and b3
        fb = (sequence&0x1)^((sequence>>3)&0x1);
        //if b0 and b3 differ, make b4 1
        if(fb)sequence = sequence|0x10;
        //shift right for next sequence
        sequence = sequence>>1;
    }
    return 0;
}
 
A foray into matrix math. Working towards a linear equation solver.

The first thing I need is to be able to display and save arrays without having to rewrite code for each array.

A review of arrays.

In memory there is the stack, where function-call frames and non-dynamically-allocated local variables and arrays go, and the heap, where dynamically allocated variables and arrays go. Experimentally, approximately 2^32 doubles can be allocated on the heap. I don't know the stack limit.

A mini heap, MYHEAP, is created, and two arrays a1 and a2 are sequentially created in MYHEAP. a1 is a 1-column array and a2 is n x m. I don't know how C actually manages arrays and variables, but it gives you an idea of what C does behind the scenes.

C does not dynamically allocate multi-dimensional arrays; you have to manage that yourself. Macros can be used for indexing.

The code creates the two arrays on MYHEAP, prints them out as arrays, then prints the linear contents of MYHEAP.

Code:
#include <cstdio>

void mprint(char *s,int n,int m,double *x){
    int i,col,row;
    for(i = 0;i < n;i++){
        row = i*m;
        for(col=0;col<m;col++)printf(s,x[row+col]);
        printf("\n");
    }
}


int fmprint(char *file,char *fmt,int n,int m,double *x){
    //save to file
    int i,col,row;
    FILE *ptr = fopen(file,"w");
    if(!ptr) return 1;
    fprintf(ptr,"%d   %d\n",n,m);
    for(i = 0;i < n;i++){
        row = i*m;
        for(col=0;col<m;col++)fprintf(ptr,fmt,x[row+col]);
        fprintf(ptr,"\n");
    }
    fclose(ptr);
    return 0;
}



#define ADD 0
#define SUB 1
#define MUL 2
#define DIV 3

int matk(int op,int rows,int cols,double k,double *x){
      if(k == 0.0 && op == DIV)return 1;
      if(cols == 0 || rows == 0)return 3;
      int i, n = rows*cols;
      switch(op){
        case ADD:for(i=0;i<n;i++)x[i] += k;break;
        case SUB:for(i=0;i<n;i++)x[i] -= k;break;
        case MUL:for(i=0;i<n;i++)x[i] *= k;break;
        case DIV:for(i=0;i<n;i++)x[i] /= k;break;
        default: return 2;
      }
      return 0;
}
//row-major indexing: element (row,col) of an array with 'cols' columns
#define index(cols,row,col) (((cols)*(row))+(col))
int main(){
    char fmt[20] = "%.4f   ";
    int row,col,k = 0;
    double *MYHEAP = new double[1000];
    double *a1,*a2;
    int ra1 = 10,ca1 = 1,na1 = ra1*ca1;  //10x1
    int ra2 = 4, ca2 = 3,na2 = ra2*ca2;  //4x3
    a1 = MYHEAP; //a1 base address
    a2 = a1 + na1; //a2 base address
    //load arrays
    for(row=0;row<na1;row++)a1[row] = 10.*k++;
    k = 0;
    for(row=0;row<ra2;row++)
         for(col=0;col<ca2;col++)a2[(ca2*row) + col] = k++;
        //equivalently: a2[index(ca2,row,col)] = k++;
    mprint(fmt,ra1,ca1,a1);
    mprint(fmt,ra2,ca2,a2);
    matk(ADD,ra2,ca2,3.,a2);
    mprint(fmt,ra2,ca2,a2);


    return 0;
}

For non-dynamic arrays: with two sequential arrays on the stack you can sequence through both with one pointer. Floating point numbers can be hacked the same way, but it's not recommended. (Strictly speaking, reading past the end of one array is undefined behavior; it merely happens to work when the compiler lays the two arrays out contiguously.)



Code:
#include <stdio.h>

int main(){
    int i,x[4],y[4],*p;
    p = &x[0];
    for(i=0;i<4;i++){
        x[i] = i+1;
        y[i] = (i+1)*10;
        }
    for(i=0;i<4;i++)printf("%d %d\n",x[i],y[i]);
    //p[4]..p[7] reach past x into y -- undefined behavior that
    //happens to work with a typical contiguous stack layout
    for(i=0;i<8;i++)printf("%d\n",p[i]);

return 0;
}


Applying a constant to an array is easy. You don't have to access an array by column and row.

Coming soon to a thread near you, inverting a matrix.
 
I once wrote some code that does matrix inverses on rational numbers.

In Python:
Semisimple-Lie-Algebras/SemisimpleLieAlgebras.py at master · lkpetrich/Semisimple-Lie-Algebras
Code:
# Do inverse with the Gauss-Jordan algorithm
# Starts with an integer matrix, and uses rational numbers
# (Python 2 code -- note xrange; Fraction comes from the fractions module)
from fractions import Fraction

def MatrixInverse(mat):
	# Set up the work matrix: originally (original,identity)
	# Transform into (identity,inverse)
	n = len(mat)
	workmat = [(2*n)*[Fraction(0)] for k in xrange(n)]
	for i in xrange(n):
		mrow = mat[i]
		wmrow = workmat[i]
		for j in xrange(n):
			wmrow[j] += int(mrow[j])
	for k in xrange(n):
		workmat[k][n+k] += 1
	
	# Do forward substitution
	for icol in xrange(n):
		# Necessary to exchange rows
		# to bring a nonzero value into position?
		# Return None if singular
		if workmat[icol][icol] == 0:
			ipvt = None
			for i in xrange(icol+1,n):
				if workmat[i][icol] != 0:
					ipvt = i
					break
			if ipvt == None: return None
			temp = workmat[icol]
			workmat[icol] = workmat[ipvt]
			workmat[ipvt] = temp
		# Make diagonal 1:
		wmicol = workmat[icol]
		dgvrecip = 1/wmicol[icol]
		for i in xrange(icol,2*n):
			wmicol[i] *= dgvrecip
		# Forward substitute:
		for i in xrange(icol+1,n):
			wmi = workmat[i]
			elimval = wmi[icol]
			for j in xrange(icol,2*n):
				wmi[j] -= elimval*wmicol[j]
	
	# Do back substitution
	for icol in xrange(n-1,0,-1):
		wmicol = workmat[icol]
		for i in xrange(icol):
			wmi = workmat[i]
			elimval = wmi[icol]
			for j in xrange(icol,2*n):
				wmi[j] -= elimval*wmicol[j]
	
	# Done!
	return [[workmat[i][n+j] for j in xrange(n)] for i in xrange(n)]
 
Here's a C++ version, as a template function:
Semisimple-Lie-Algebras/SemisimpleLieAlgebras.py at master · lkpetrich/Semisimple-Lie-Algebras
Code:
// Inverse of square matrix implemented as a template function
// Result's members are in class N, input's in class N0
// Returns whether the inversion could be performed:
// false for singular, non-square
template<class N, class N0> bool MatrixInverse(Matrix<N> &invmat, Matrix<N0> &mat)
{
	if (!mat.IsSquare()) return false;
	
	size_t n = mat.get_rows();
	Matrix<N> workmat(n,2*n);
	workmat.fill(0);
	
	for (size_t i=0; i<n; i++)
	{
		// Initial matrix in first block
		for (size_t j=0; j<n; j++)
			workmat(i,j) = mat(i,j);
		// Identity matrix in second block
		workmat(i,n+i) = 1;
	}
	
	// Index vector for pivoting
	vector<size_t> ixp(n);
	for (size_t i=0; i<n; i++) ixp[i] = i;
	
	// Do forward substitution
	for (size_t icol=0; icol<n; icol++)
	{
		if (workmat(ixp[icol],icol) == N(0))
		{
			bool PivotFound = false;
			size_t ipvt;
			for (size_t i=(icol+1); i<n; i++)
				if (workmat(ixp[i],icol) != N(0))
				{
					PivotFound = true;
					ipvt = i;
					break;
				}
			// Singular?
			if (!PivotFound) return false;
			swap(ixp[ipvt],ixp[icol]);
		}
		// Make diagonal 1
		N dgvrecip = N(1)/workmat(ixp[icol],icol);
		for (size_t i=0; i<2*n; i++)
			workmat(ixp[icol],i) *= dgvrecip;
		// Forward substitute
		for (size_t i=icol+1; i<n; i++)
		{
			N elimval = workmat(ixp[i],icol);
			for (size_t j=icol; j<2*n; j++)
				workmat(ixp[i],j) -= elimval*workmat(ixp[icol],j);
		}
	}
	
	// Do back substitution
	for (size_t iclr=0; iclr<n; iclr++)
	{
		size_t icol = (n-1) - iclr;
		for (size_t i=0; i<icol; i++)
		{
			N elimval = workmat(ixp[i],icol);
			for (size_t j=0; j<2*n; j++)
				workmat(ixp[i],j) -= elimval*workmat(ixp[icol],j);
		}
	}
	
	// Done!
	invmat.resize(n);
	for (size_t i=0; i<n; i++)
		for (size_t j=0; j<n; j++)
			invmat(i,j) = workmat(ixp[i],n+j);
	
	return true;
}
 
Semisimple-Lie-Algebras/LinearAlgebra.h at master · lkpetrich/Semisimple-Lie-Algebras · GitHub

Also has a template class for matrices and template functions for vectors and matrices.

I also have a template class for rational numbers:
Semisimple-Lie-Algebras/Fraction.h at master · lkpetrich/Semisimple-Lie-Algebras · GitHub

The template arg is assumed to be some integer-like class that supports Euclid's algorithm for the greatest common divisor.
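For reference, Euclid's algorithm in its simplest form -- a sketch; the actual template version lives in Fraction.h:

Code:
/* gcd(a, b) = gcd(b, a mod b), repeated until b reaches 0. */
long long gcd(long long a, long long b)
{
    while (b != 0) {
        long long t = a % b;
        a = b;
        b = t;
    }
    return a < 0 ? -a : a;
}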

For Python, however, I used a built-in module, "fractions", even though operations on it are much slower than operations on Python integers. BTW, Python integers are bignums.

The original is in Mathematica, and in that original, I used Mma-native matrix inversion and rational numbers.

lkpetrich/Semisimple-Lie-Algebras: Does "semisimple" Lie algebras, important for describing elementary-particle and space-time symmetries. Sets up algebras and representation basis sets in them. Finds products and powers of representations ("plethysms"), also subalgebras and representations in them ("branching rules"). Has several examples, like Grand Unified Theories.
 
Nicely done lpetrich, as always. I worked through multiplication and I am still wading through determinants.

Never had the time when I was working to explore math.
 
Here's an interesting type of algebraic entity: extension fields.

Imagine extending the rational numbers with sqrt(2): a = a0 + a1*sqrt(2)

One can do arithmetic on this extension, though division and ordering are rather tricky. Addition and subtraction are easy, and multiplication suggests some generalized form:

a*b = (a0*b0 + 2*a1*b1) + (a0*b1 + a1*b0)*sqrt(2)

Express a as a polynomial in x = sqrt(2): a = a0 + a1*x

Then a*b = the remainder of (simple a*b) when divided by x^2 - 2

That can be extended to other extension polynomials. Thus, if one wants to do the cube root of 2, one uses
a = a0 + a1*x + a2*x^2

and takes the remainder when dividing by x^3 - 2.

Division is more tricky. For this example, it is fairly easy:

For (a0 + a1*sqrt(2)) / (b0 + b1*sqrt(2)), multiply both the numerator and the denominator by (b0 - b1*sqrt(2)). This makes

(a0 + a1*sqrt(2)) * (b0 - b1*sqrt(2)) / (b0^2 - 2*b1^2)

For more general extension polynomials, this is more difficult.
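A minimal sketch of that arithmetic for the sqrt(2) extension, with the struct and function names invented here (doubles used for brevity; a real implementation would keep exact rationals):

Code:
#include <stdio.h>

/* Represents a0 + a1*sqrt(2). */
typedef struct { double a0, a1; } Ext2;

/* (a0 + a1*x)(b0 + b1*x) reduced mod (x^2 - 2). */
Ext2 ext2_mul(Ext2 a, Ext2 b)
{
    Ext2 r = { a.a0*b.a0 + 2.0*a.a1*b.a1, a.a0*b.a1 + a.a1*b.a0 };
    return r;
}

/* Multiply by the conjugate b0 - b1*x; the denominator is b0^2 - 2*b1^2. */
Ext2 ext2_div(Ext2 a, Ext2 b)
{
    double den = b.a0*b.a0 - 2.0*b.a1*b.a1;
    Ext2 r = { (a.a0*b.a0 - 2.0*a.a1*b.a1) / den,
               (a.a1*b.a0 - a.a0*b.a1) / den };
    return r;
}

int main(void)
{
    Ext2 a = {1, 1};             /* 1 + sqrt(2) */
    Ext2 p = ext2_mul(a, a);     /* (1 + sqrt(2))^2 = 3 + 2*sqrt(2) */
    printf("%g + %g*sqrt(2)\n", p.a0, p.a1);
    return 0;
}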
 
Put it all into a static library or dll and you have the beginnings of a native open source C++ math library.

The major math libraries seem to be legacy FORTRAN.
 
Nicely done lpetrich, as always. I worked through multiplication and I am still wading through determinants.
For finding matrix inverses, I used Gaussian elimination with pivoting. I started with (original matrix, identity matrix) and I ended up with (identity matrix, inverse matrix).

Pivoting is a way of avoiding dividing by zero or small numbers. Instead of selecting the next row as one proceeds, one looks for the best row for the next step and then makes it the next row.

An often-recommended algorithm, LU decomposition, is essentially Gaussian elimination with construction of a matrix in place of the original one for quick calculation of linear-equation solutions. Gauss/LU requires O(n^3) calculations for size n, but LU makes two triangular matrices that require only O(n^2) operations for a solution.

LU refers to lower-triangular and upper-triangular matrices, and these matrices are easy to invert.

But I didn't bother with LU, because I wanted less code complexity.

Turning to determinants, they are easy to compute with Gaussian elimination. Pivoting makes sign flips, so that is easy to take into account.
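A sketch of that computation (function name and row-major layout invented here): the determinant is the product of the pivots, with one sign flip per row swap.

Code:
#include <math.h>

/* Determinant by Gaussian elimination with partial pivoting (sketch).
   a is an n x n row-major matrix, destroyed in the process. */
double det_gauss(double *a, int n)
{
    double det = 1.0;
    int i, j, k;
    for (k = 0; k < n; k++) {
        /* pick the largest-magnitude pivot in column k */
        int p = k;
        for (i = k + 1; i < n; i++)
            if (fabs(a[i*n + k]) > fabs(a[p*n + k])) p = i;
        if (a[p*n + k] == 0.0) return 0.0;   /* singular */
        if (p != k) {                        /* row swap flips the sign */
            for (j = 0; j < n; j++) {
                double t = a[k*n + j]; a[k*n + j] = a[p*n + j]; a[p*n + j] = t;
            }
            det = -det;
        }
        det *= a[k*n + k];
        for (i = k + 1; i < n; i++) {
            double f = a[i*n + k] / a[k*n + k];
            for (j = k; j < n; j++) a[i*n + j] -= f * a[k*n + j];
        }
    }
    return det;
}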

Put it all into a static library or dll and you have the beginnings of a native open source C++ math library.

The major math libraries seem to be legacy FORTRAN.

I've found
Boost C++ Libraries
with
Boost Basic Linear Algebra - 1.82.0
among numerous other C++ libraries.

If you want to take your chances with Github's contributors, search GitHub
 