
The Programming Thread

Binary search is very useful and probably in much more common use than hash tables. BUT I cannot condone the ANSI standard bsearch() function.

When the key is not found in the searched table, it's not unusual to want the nearest element instead. Or even to create an entry for that new key. I don't insist that bsearch() be capable of creating a new entry. BUT it's already done the main task: locating where the new entry should go. However (to keep the interface very simple) bsearch() returns only a null pointer when no exact match is found. Insertion then requires you to do the equivalent of what bsearch() just did, but you'll need to "roll your own." And if you're rolling your own anyway, what good was bsearch()?

bsearch() itself is only a tiny bit of C source code, although it allows the same generality as qsort(). But you don't need that; start with a VERY trivial template and derive your own code for any particular table.
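
Here is the kind of trivial template I mean, as a minimal sketch (the function name and the int-array specialization are mine): a binary search that always returns the insertion point, so the caller can either use an exact match or insert at that slot.

Code:
#include <stddef.h>

/* Binary search specialized for a sorted int array.
   Returns the index of the first element >= key (the insertion point),
   which is the index of key itself when key is present.               */
size_t lower_bound_int(const int *table, size_t n, int key){
    size_t lo = 0, hi = n;              /* search the half-open range [lo, hi) */
    while(lo < hi){
        size_t mid = lo + (hi - lo)/2;
        if(table[mid] < key) lo = mid + 1;
        else                 hi = mid;
    }
    return lo;                          /* 0 <= result <= n */
}

/* usage sketch:
   size_t pos = lower_bound_int(tab, count, key);
   if(pos < count && tab[pos] == key)  ... exact match at pos ...
   else                                ... insert key at pos ...       */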

At least ANSI was smart enough to reject BSD's hsearch() which has a horrid interface.

Related to a sorted table for binary search (which uses time log_2(N) for look-up) is a digital search tree (e.g. a trie), where log_5(N) look-up time suffices if each node has fan-out of 5.
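
For concreteness, a toy sketch of such a node (my own layout, not from any library): fan-out 5, consuming the key one base-5 digit per level, so the depth grows like log base 5 of the key range.

Code:
#include <stddef.h>

#define FANOUT 5

/* One node of a toy digital search tree with fan-out 5. */
typedef struct dst_node {
    struct dst_node *child[FANOUT];
    int has_value;                      /* does a stored key end here? */
    int value;
} dst_node;

/* Follow the base-5 digits of key from the root; return NULL if absent. */
dst_node *dst_find(dst_node *root, unsigned key){
    dst_node *p = root;
    while(p){
        if(key == 0) return p->has_value ? p : NULL;
        p = p->child[key % FANOUT];     /* descend by the next base-5 digit */
        key /= FANOUT;
    }
    return NULL;
}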

AND one of the most elegant (and amazing?) data structures I have ever seen (and which is NOT found in Knuth) is a memory-saving method to store a digital search tree in a hash table. (If anyone gets this far and is seized by excitement in anticipation of the possible amazement, I may share some details via PM.)
 
After some more reading I am not done with random numbers yet.

I posted one of the tests, the Chi Squared test. In a uniform random sequence the number of occurrences of each number is counted and compared against the theoretical counts.

That does not tell you anything about how the numbers are ordered within the sequence. The counts of each number may be equal or close, yet the sequence may still not be random.

The sequential test looks at occurrences of subsequences, looking for data clumping.

With 200 million samples, a subsequence of up to 7 numbers can be tested against a uniform distribution over 0-10.

Above that, run time becomes a problem on my PC.

Both the C++ mt19937 and the URAND function have a maximum value of 0x7fffffff. Testing over the full integer range at once would be difficult.

The full test would require calculating the theoretical probabilities of a subsequence and doing a Chi Squared test. I just compared URAND to C++ mt19937.

So far the URAND algorithm seems to be a good replacement for rand() if you are using C. With C++ the <random> library functions are preferred.

There is one more test, the spectral test.

See Knuth Vol. 2, Seminumerical Algorithms, chapter 3, the empirical tests section. PDFs are online.

Code:
#include <chrono>
#include <random>
#include <iostream>
#include <cmath>

using namespace std;

void sequential_test(void){
    cout<<"sequential test"<<endl;
    unsigned seed = chrono::system_clock::now().time_since_epoch().count();
    mt19937 rand_gen(seed);
    uniform_int_distribution<int> dist (0,10);
    // constants for the URAND-style linear congruential generator
    unsigned long long rand_num = seed;
    unsigned long long m = 0x000fffffffffffff;
    unsigned long long mh = m/2;
    unsigned long long k1 = 2*ceil((mh)*(.5 - sqrt(3)/6.)) + 1;
    unsigned long long k2 = 8*ceil((mh)*atan(1.)/8.) + 5;

    int j,i, n = 200000000,urandcnt = 0,cppcnt = 0;
    int nseq = 3;                      // length of the target subsequence
    int seq[8] = {0,1,2,3,4,5,6,7};    // target subsequence (first nseq entries used)
    int cppflag,urandflag;
    int *cpp = new int[n];
    int *urand = new int[n];

    for(i=0;i<n;i++){
        urand[i] = rand_num%11;        // 0-10
        rand_num = rand_num*k2 + k1;
        cpp[i] = dist(rand_gen);
    }
    // count occurrences of the target subsequence in each stream
    for(i=0;i<n-nseq;i++){
        cppflag = 0;
        urandflag = 0;
        for(j=0;j<nseq;j++){
            if(seq[j] != urand[i+j])urandflag = 1;
            if(seq[j] != cpp[i+j])cppflag = 1;
        }
        if(cppflag == 0)cppcnt++;
        if(urandflag == 0)urandcnt++;
    }
    cout<<"cpp  "<<cppcnt<<endl;
    cout<<"urand  "<<urandcnt<<endl;
    delete [] cpp;
    delete [] urand;
}
 
Knuth's Shuffle
The Fisher–Yates shuffle is named after Ronald Fisher and Frank Yates, who first described it. It is also known as the Knuth shuffle after Donald Knuth.[2] A variant of the Fisher–Yates shuffle, known as Sattolo's algorithm, may be used to generate random cyclic permutations of length n instead of random permutations.

The modern version of the Fisher–Yates shuffle, designed for computer use, was introduced by Richard Durstenfeld in 1964[4] and popularized by Donald E. Knuth in The Art of Computer Programming as "Algorithm P (Shuffling)".[5] Neither Durstenfeld's article nor Knuth's first edition of The Art of Computer Programming acknowledged the work of Fisher and Yates; they may not have been aware of it.[citation needed] Subsequent editions of Knuth's The Art of Computer Programming mention Fisher and Yates' contribution.[6]
The algorithm described by Durstenfeld is more efficient than that given by Fisher and Yates: whereas a naïve computer implementation of Fisher and Yates' method would spend needless time counting the remaining numbers in step 3 above, Durstenfeld's solution is to move the "struck" numbers to the end of the list by swapping them with the last unstruck number at each iteration. This reduces the algorithm's time complexity to O ( n ) compared to O ( n 2 ) for the naïve implementation.[7] This change gives the following algorithm (for a zero-based array).
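
As an aside, the Sattolo variant mentioned above differs from the ordinary shuffle only in the range of the random index: each element is swapped with a strictly earlier position, which forces the result to be a single cycle. A minimal sketch of my own, using rand() just like the shuffle code below:

Code:
#include <stdlib.h>

// Sattolo's algorithm: produces a uniformly random cyclic permutation.
// Identical to the Fisher-Yates/Knuth shuffle except that the random
// index r is drawn from 0..i-2, never the current position i-1 itself.
void sattolo(int n,int *x){
    int i,r,swap;
    for(i=n;i>=2;i--){
        r = rand()%(i-1);        // 0 <= r <= i-2
        swap = x[i-1];
        x[i-1] = x[r];
        x[r] = swap;
    }
}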


Outside of new areas like AI it is hard to find a new way of doing something that is better than what has been done.

Ronald Fisher was born in 1890.




Code:
#include <iostream>
#include <cstdlib>

using namespace std;

void shuffle(int n,int *x){
    //Knuth's Shuffle in place
    int i,r,swap;
    for(i=n;i>=1;i--){
        r = rand()%i;  //rand() not the best choice
        swap = x[i-1];
        x[i-1] = x[r];
        x[r] = swap;
    }
}

int main(){
    const int n = 10;
    int i,x[n];
    for(i=0;i<n;i++)x[i] = i;
    shuffle(n,x);
    for(i=0;i<n;i++)cout<<i<<"  "<<x[i]<<endl;
    return 0;
}
 
The finish of my foray into random numbers. A reference to URAND in the Scilab documentation led me to the URAND paper and then to testing random number generators, or more generally random sequences from any source.

If an electromagnetic phenomenon detected from space looks like a series of integers, is it random?

Three of the test methods are frequency, sequential, and gap tests.

The frequency test counts the number of occurrences of each number, and a Chi Squared test compares the counts to what they should be for a perfect uniform distribution.

The frequency test by itself provides no information on the ordering within the sequence. For example, the sequence 000111222...999 will pass the frequency test but is not random.

That is where the sequential and gap tests come in. The sequential test as written counts the occurrences of each non-overlapping pair. It looks for non-random clumping of numbers. It can be extended to groupings of three or more.

The gap test evaluates the distances between occurrences of a single number in the sequence. It looks for a non-random bias.

Tests are applied to the C++ mt19937 generator as a benchmark for a quick check of other generators. More comprehensive testing requires many more iterations and time.

The C rand() function fails the frequency test as the number of iterations increases, but looks OK with the gap and sequential tests; it takes all three tests to judge a generator.

URAND has results similar to mt19937.


Code:
#include <random>
#include <iostream>
#include <cstdio>
#include <cmath>
#include <ctime>

using namespace std;

unsigned Seed = time(NULL);
mt19937 gen(Seed);

void int_array_cpp(int n,int lo,int hi,int *y){
    int i;
    uniform_int_distribution<int> dist (lo,hi);
    for(i=0;i<n;i++)y[i] = dist(gen);
}

void frequency_test(void){
    //ntrials power of 10  num power of 10
    cout<<"Frequency Test "<<endl;
    const int num = 100;
    int i,m,ntrials = 100000000;   // 10^8
    int *y = new int[ntrials];
    int_array_cpp(ntrials,0,num-1,y);
    int counts[num];
    for(i=0;i<num;i++)counts[i] = 0;
    double X=0;
    // count occurrences
    for(i=0;i<ntrials;i++)counts[y[i]]++;
    m = ntrials/num;
    // chi squared test
    for(i=0;i<num;i++)X += double(pow(counts[i]-m,2))/m;
    FILE *p = fopen("freq.dat","w");
    for(i=0;i<num;i++)fprintf(p,"%10d\n",counts[i]);
    fclose(p);
    printf("Chi Squared  %.3f \n",X);
    cout<<"ntrials  "<<ntrials<<endl;
    cout<<"m  "<<m<<endl;
    delete [] y;
}

void sequential_test(void){
    cout<<"sequential test"<<endl;
    int k,j,i,z,n = 100000000;
    int *yr = new int[n];
    int_array_cpp(n,0,9,yr);
    int counts[100];                  // counts for each of the 100 possible pairs
    for(i=0;i<100;i++)counts[i] = 0;
    int x[10]  = {0,1,2,3,4,5,6,7,8,9};
    int y[10]  = {0,1,2,3,4,5,6,7,8,9};

    // count non-overlapping pairs
    for(i=0;i<n-2;i+=2){
        z = 0;
        for(j=0;j<10;j++){
            for(k=0;k<10;k++){
                if(yr[i] == x[j] && yr[i+1] == y[k])counts[z]++;
                z++;
            }
        }
    }
    double mx = n/200,X=0;            // expected count per pair: (n/2)/100
    for(i=0;i<100;i++)X += double(pow(counts[i]-mx,2))/mx;
    printf("chi squared .05 critical value 123.225  X  %.3f\n",X);
    FILE *p = fopen("seq.dat","w");
    for(i=0;i<100;i++)fprintf(p,"%10d\n",counts[i]);
    fclose(p);
    delete [] yr;
}

void gap_test(void){
    // change num 0 to 9
    cout<<"gap test"<<endl;
    int i,n = 100000000;
    double aver;
    int *y = new int[n];
    int_array_cpp(n,0,9,y);
    int gap0 = 0,gap1,s = 0,k = 0;
    int delta,max = -1,min = 100000;
    int num = 0;                      // the digit whose gaps are measured
    for(i=0;i<n;i++)if(y[i] == num){gap0 = i;break;}
    gap1 = gap0;
    for(i=gap0+1;i<n;i++){
        if(y[i] == num){
            delta = i-gap1;
            s += delta;
            if(delta > max)max = delta;
            if(delta < min)min = delta;
            gap1 = i;
            k++;
        }
    }
    aver = double(s)/double(k);
    printf("num %d  min %d  max  %d aver gap  %.5f\n",num,min,max,aver);
    delete [] y;
}

int main(){
    gap_test();
    sequential_test();
    frequency_test();
    return 0;
}
 
From what I see on the net, this is the way to use the C++ random number generators. Making the generator and distribution objects static should prevent reseeding on successive calls.

If you are using C++ the rand() function should not be used.

The mt19937 generator has a period of 2^19937 - 1, virtually infinite for practical purposes.

Code:
#include <random>
#include <ctime>

using namespace std;

// Note: because the distributions are static, lo and hi are captured on the
// first call only; later calls with different bounds reuse the first range.

int unif_int(int lo,int hi){
    static mt19937 rand_gen(time(NULL));
    static uniform_int_distribution<int> dist (lo,hi);
    return dist(rand_gen);
}

void unif_int_array(int n,int lo,int hi,int *y){
    int i;
    static mt19937 rand_gen(time(NULL));
    static uniform_int_distribution<int> dist (lo,hi);
    for(i=0;i<n;i++)y[i] = dist(rand_gen);
}

double unif_float(int lo,int hi){
    static mt19937 rand_gen(time(NULL));
    static uniform_real_distribution<double> dist (lo,hi);
    return dist(rand_gen);
}

void unif_float_array(int n,int lo,int hi,double *y){
    int i;
    static mt19937 rand_gen(time(NULL));
    static uniform_real_distribution<double> dist (lo,hi);
    for(i=0;i<n;i++)y[i] = dist(rand_gen);
}

double unif_dec(void){
    //0 <= x <= 0.999999999767169
    static mt19937 rand_gen(time(NULL));
    return generate_canonical<double,1>(rand_gen);
}

void unif_dec_array(int n,double *y){
    // 0 <= x <= 0.999999999767169
    int i;
    static mt19937 rand_gen(time(NULL));
    for(i=0;i<n;i++)
        y[i] = generate_canonical<double,1>(rand_gen);
}
 
The mt19937 Mersenne Twister random number generator coded directly from the link. Thanks to whoever wrote it. I added the function to make it work.

It is based on an FSR, a feedback shift register. The technique is used in digital hardware to generate arbitrary sequences; in computer science terms it is a finite state machine, or finite automaton.

The major benefit is the extremely long period, effectively infinite for all practical purposes.
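
To illustrate the feedback-shift-register idea on its own, here is a minimal 16-bit Galois LFSR sketch (not the Mersenne Twister itself; the 0xB400 tap mask is the standard maximal-length example, giving a period of 2^16 - 1):

Code:
#include <stdint.h>
#include <stdio.h>

// Minimal 16-bit Galois linear feedback shift register.
// With tap mask 0xB400 the state cycles through all 65535 nonzero values.
uint16_t lfsr_step(uint16_t state){
    uint16_t lsb = state & 1u;           // bit that falls off the end
    state >>= 1;
    if (lsb) state ^= 0xB400u;           // feed it back into the tap positions
    return state;
}

int main(void){
    uint16_t s = 0xACE1u;                // any nonzero seed
    for (int i = 0; i < 8; i++){
        s = lfsr_step(s);
        printf("%04X\n", (unsigned)s);
    }
    return 0;
}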


The Association for Computing Machinery (ACM) is a place to look for algorithms. It has been around for a long time.



Code:
#include <stdint.h>
#include <time.h>    // for time(), used to seed in mt19937() below

#define n 624
#define m 397
#define w 32
#define r 31
#define UMASK (0xffffffffUL << r)
#define LMASK (0xffffffffUL >> (w-r))
#define a 0x9908b0dfUL
#define u 11
#define s 7
#define t 15
#define l 18
#define b 0x9d2c5680UL
#define c 0xefc60000UL
#define f 1812433253UL

typedef struct
{
    uint32_t state_array[n];         // the array for the state vector
    int state_index;                 // index into state vector array, 0 <= state_index <= n-1   always
} mt_state;

mt_state mts;


void initialize_state(mt_state* state, uint32_t seed)
{
    uint32_t* state_array = &(state->state_array[0]);

    state_array[0] = seed;                          // suggested initial seed = 19650218UL

    for (int i=1; i<n; i++)
    {
        seed = f * (seed ^ (seed >> (w-2))) + i;    // Knuth TAOCP Vol2. 3rd Ed. P.106 for multiplier.
        state_array[i] = seed;
    }

    state->state_index = 0;
}

uint32_t random_uint32(mt_state* state)
{
    uint32_t* state_array = &(state->state_array[0]);

    int k = state->state_index;      // point to current state location
                                     // 0 <= state_index <= n-1   always

//  int k = k - n;                   // point to state n iterations before
//  if (k < 0) k += n;               // modulo n circular indexing
                                     // the previous 2 lines actually do nothing
                                     //  for illustration only

    int j = k - (n-1);               // point to state n-1 iterations before
    if (j < 0) j += n;               // modulo n circular indexing

    uint32_t x = (state_array[k] & UMASK) | (state_array[j] & LMASK);

    uint32_t xA = x >> 1;
    if (x & 0x00000001UL) xA ^= a;

    j = k - (n-m);                   // point to state n-m iterations before
    if (j < 0) j += n;               // modulo n circular indexing

    x = state_array[j] ^ xA;         // compute next value in the state
    state_array[k++] = x;            // update new state value

    if (k >= n) k = 0;               // modulo n circular indexing
    state->state_index = k;

    uint32_t y = x ^ (x >> u);       // tempering
             y = y ^ ((y << s) & b);
             y = y ^ ((y << t) & c);
    uint32_t z = y ^ (y >> l);

    return z;
}

int mt19937(void){
    // 0 <= x <=  2,147,483,646
    static int init = 1;
    if(init){
        initialize_state(&mts, time(NULL));
        init = 0;
        }
    return random_uint32(&mts)%0x7fffffff;
}
 
Having nothing better to do I combined the two functions into one and replaced the structure with static variables for a clean function. The constants could be hard coded into the code.

The constants appear to be standard for the 32 bit version.

Code:
// Uses the #defines from the previous block (n, m, w, r, UMASK, LMASK,
// a, u, s, t, l, b, c, f) plus <stdint.h> and <time.h>.
int mt19937(void){
    static int init = 1;
    static uint32_t state_array[n];
    static int state_index = 0;
    if(init){
        unsigned int seed = time(NULL);
        state_array[0] = seed;     // suggested initial seed = 19650218UL
        for (int i=1; i<n; i++){
            seed = f * (seed ^ (seed >> (w-2))) + i;    // Knuth TAOCP Vol2. 3rd Ed. P.106 for multiplier.
            state_array[i] = seed;
        }
        init = 0;
    }
    int k = state_index;  // point to current state location
                                     // 0 <= state_index <= n-1   always
//  int k = k - n;              // point to state n iterations before
//  if (k < 0) k += n;        // modulo n circular indexing
                                     // the previous 2 lines actually do nothing
                                     //  for illustration only

    int j = k - (n-1);         // point to state n-1 iterations before
    if (j < 0) j += n;         // modulo n circular indexing
    uint32_t x = (state_array[k] & UMASK) | (state_array[j] & LMASK);
    uint32_t xA = x >> 1;
    if (x & 0x00000001UL) xA ^= a;
    j = k - (n-m);                           // point to state n-m iterations before
    if (j < 0) j += n;                       // modulo n circular indexing
    x = state_array[j] ^ xA;         // compute next value in the state
    state_array[k++] = x;            // update new state value
    if (k >= n) k = 0;                     // modulo n circular indexing
    state_index = k;
    uint32_t y = x ^ (x >> u);       // tempering
             y = y ^ ((y << s) & b);
             y = y ^ ((y << t) & c);
    uint32_t z = y ^ (y >> l);
    return z%0x7fffffff;    // 0 <= result <= 2,147,483,646, as in the two-function version

}
 
Most of my programming challenges are very far removed from most of the things discussed in this thread. In my world, I already have all of the data structures and sorting algorithms I need. My programming problems are on a higher level:

1. Understand the complex piece of software that we are modifying and the environment in which it runs.
2. Correctly change the software's behaviour to meet new requirements and fix newly-discovered bugs.
3. Make sure the code remains maintainable.

I generally don't find it too challenging to understand the software or make correct changes. I think the big challenge in a complex codebase is avoiding technical debt. Coding basically becomes a people problem: how do I change this so that we will be able to change it again later? Another related challenge is paying off technical debt. Changing even basic functionality can be a nightmare when the code is a big mountain of spaghetti.
 

Yea, this seems to be common experience in applications of scale. I don't see it much in my role as I'm using a scripting language for the most part, and occasionally updating fairly simple Java applications.

The challenge in my role is that my team is responsible for maintaining a whole whack of things that we didn't write or set up initially. In theory everything's simple to do, but there is so much to know, and we touch everything so irregularly that we don't really build the habits or muscle memory.

For example, earlier this year I had to jump into something and write some PL/SQL. I'd never written PL/SQL in my career, and probably won't do it again for a very long time. But out of nowhere I had to learn it, and write it, quickly. Stuff like this is a common experience.

But like 80% of what I do is the scripting language which I can do in my sleep.
 
I agree the main skill is not in technical details, it is learning to manage large complex problems.
 
Thirteen years in the industry and I still need to read up on outer joins.
Yeah, I have pretty much been able to avoid joins by properly setting things up in the first place. I'm more interested in a lot of simple data than a little bit of fancy data. I did not too long ago find a surprising bottleneck in the data access--looking up fields by name. Turns out my code was spending more time looking up those names than actually getting the data. Getting the index of the field at the start and retrieving by number is a lot faster. The profiler didn't distinguish between time in the library and actual SQL time; I only realized it when basically the same query in the console went much faster. And there's a name-to-index routine in the library that returns -1 on a failure rather than throwing--but it's private.
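
The pattern is roughly this, as a sketch with a made-up result-set type (not the actual library in question):

Code:
#include <map>
#include <string>
#include <vector>
#include <iostream>

// Toy stand-in for a query result: column names map to indexes,
// rows are accessed positionally.  For illustration only.
struct ResultSet {
    std::map<std::string,int> names;            // name -> column index
    std::vector<std::vector<double>> rows;
    int columnIndex(const std::string &n) const {
        auto it = names.find(n);
        return it == names.end() ? -1 : it->second;
    }
};

int main(){
    ResultSet rs;
    rs.names = {{"id",0},{"amount",1}};
    rs.rows  = {{1,10.5},{2,20.0},{3,4.25}};

    // Look the column name up once, before the loop...
    int idx = rs.columnIndex("amount");
    double total = 0;
    // ...then fetch by index for every row, instead of repeating
    // the name lookup on each access.
    for (const auto &row : rs.rows) total += row[idx];
    std::cout << "total = " << total << "\n";
    return 0;
}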
 

I don't tend to write a lot of SQL these days. The scripting language I use (Cerner CCL) is similar to SQL, but because it's a full reporting language I can usually avoid complex queries by decomposing them into simpler ones. You grab from a table or two, put it in the data structure, move on to the next table and add its data, etc.

Oddly enough, there's been a tendency in my team's history to go the opposite way and try to build massive queries that do a lot of things. For some reason they think this complexity is better? I've been trying to kill this anti-pattern in our junior devs.
 
Big factorials in C using __int128. Code compiled using gcc, ISO C++17.

I got the __int128 print function off the net; it uses recursion to print the digits.


Code:
#include <cstdio>

void print_int128(__int128 x) {
    //print __int128 to console
    if (x < 0) {putchar('-');x = -x;}
    if (x > 9) print_int128(x / 10);
    putchar(x % 10 + '0');
}//print_int128()


void save_int128(FILE *p,__int128 x) {
    //save __int128 to file
    char c;
    if (x < 0) {fprintf(p,"%c",'-');x = -x;}
    if (x > 9)save_int128(p,x / 10);
    c = x % 10 + '0';
    fprintf(p,"%c",c);
}//save_int128()

__int128 fact_128(int n){
    //max n __int128 33, long long 20, int 12
    //n! = 1*2*3.....
    if(n>33 || n<0)return -1;       // 34! overflows a signed __int128
    int i;
    __int128 f = 1;
    for(i=1;i<n;i++)f += (f * i);   // f = f*(i+1), so f ends up as n!
    return f;
}

void fact_128_test(void){
    int i, n =34;
    __int128  f;
    FILE *p = fopen("fact_128_table.txt","w");
    for(i=0;i<n;i++){
        f = fact_128(i);
        printf("%2d\t ",i);
        print_int128(f);
        fprintf(p,"%2d\t ",i);
        save_int128(p,f);
        printf("\n");
        fprintf(p,"\n");
     }
     fclose(p);
}

0 1
1 1
2 2
3 6
4 24
5 120
6 720
7 5040
8 40320
9 362880
10 3628800
11 39916800
12 479001600
13 6227020800
14 87178291200
15 1307674368000
16 20922789888000
17 355687428096000
18 6402373705728000
19 121645100408832000
20 2432902008176640000
21 51090942171709440000
22 1124000727777607680000
23 25852016738884976640000
24 620448401733239439360000
25 15511210043330985984000000
26 403291461126605635584000000
27 10888869450418352160768000000
28 304888344611713860501504000000
29 8841761993739701954543616000000
30 265252859812191058636308480000000
31 8222838654177922817725562880000000
32 263130836933693530167218012160000000
33 8683317618811886495518194401280000000
 
99% of my programming has been as purely 1-man projects. Most of these did NOT include source code as a paid deliverable, but when I did deliver source it was OK.

This 1-man coding has virtues and flaws. But that would be a discussion for another post.

When I DID work with code I hadn't written, I often lacked source; disassembled and patched the target using a 'Debug' facility; among the most challenging of these projects was designing and defeating copy-protection schemes.

There was just one time I worked closely with a team developing a software system; I would discuss it in a "Worst Software Design" thread if someone starts such a thread.

TRUE story follows.

I'll use aliases: it was Nazagam 'Mation and they were developing OIS (Oshkosh Information System) for the customer's factory in Oshkosh. Although there were many relations in a SQL-like base (RDBS), dozens of menus and queries available from each of the dozens of PCs attached to the OIS mainframe (a large Vax), and various other features, at its heart what this so-called "Information System" mainly did was just to record automated clicks from a factory floor! Period.

They started with ten coders or so, had been working for several months (delayed by bad decisions and customer changing specs), needed to hire contractors (including me) to try to propel their efforts.

To give a hint of the needless complexity
  • The Vax wasn't connected directly to the clickers on the factory floor. Those relays or servos fed to one of hundreds of "workstations" which fed to microcomputers from another vendor, Tennessee Micro (TMC). Both the workstations and those micros were themselves accumulating click counts and sending the summaries to the Vax in haphazard manner. I think TMC and Nazagam had BOTH bid on the Oshkosh system initially. Perhaps Oshkosh, unsure which to pick, picked BOTH and asked them to cooperate.
  • Although the counts had already been partly corrupted by the TMC interface, OIS was expected to provide Hourly and Shift Summaries. The Shift boundaries need not be on an Hourly boundary and could be changed retroactively by a plant executive.
  • The fine programmers at IIDB would probably use a log file to record these haphazard summaries. Updating the Click Totals and Hourly summaries -- big relations in the RDBS -- would be a key step, but no matter how many other SQL tables there were the individual raw count summaries would be maintained as a log. In the unfortunate special case where a shift boundary is changed retroactively, one pass through that log could create (e.g. using VERY simple SQL) all the corrected Shift Summaries.
    Right? Isn't that how you'd design it?
  • NO. That's not how it worked. Instead of a simple log (and associated totaling) each record from a TMC on the factory floor was reformatted and entered into a RDBS table. (The time stamp ensured key uniqueness.)
  • Does that seem trivial enough for an "Information System"?? Click counting, with SQL executions for Hourly and Shift Summaries being the main drain on computation resources. (There were other details, e.g. the software kept track by part number of the tool or die that generated each click.)
    The Hourly and Shift summary software was written by TWO different programmers, who each departed at deadline time, just as the customer's turning on the system was being celebrated. But the factory wanted to run THREE shifts per day, and the shift summary software took more than 8 hours per shift!! By that time I was a valued contractor and was asked to sort that out as well.
  • OIS ran under Unix, so simple shell scripts could be written to start up and drive OIS. The Oshkosh factory could use 1 or 2 Unix hackers to tune the scripts. BUT NO, instead Nazagam designed a horrid custom command language for the customer to use. It looked like JCL (do you young'uns know what that was?) and, IIUC, the employee or contractor who did that was in fact an old JCL guy!
  • A different Nazagam team had developed its own "O.S." that lay on top of Unix and did NOTHING as far as I could tell but was packaged with OIS by default! One thing this "OS" DID do was to allocate the full limit (maybe 8?) of shared-memories on its start-up, although it never used any of them. This was unacceptable for me when I found that shared memory would solve some of OIS's problems.
  • Somebody did agree to change, and to allocate and waste only 7 of the 8 such shared-memory structures.
  • Not long after OIS was finally running at Oshkosh, IBM made Oshkosh an offer they couldn't refuse. Oshkosh called Nazagam and told them the system had to be ported to an IBM mainframe instead of the Vax. (The IBM mainframe would run Unix.) The software manager called ME into her office to ask if $1 Million was the right price to ask Oshkosh for the porting effort. Minor deficiencies in IBM's C compiler seemed to be the major issue. One Million Dollars.
  • Remember this is Swammerdami, the autistic lone wolf she's talking to. I was probably planning my next vacation in Thailand. I SHOULD have said "I'll do the whole thing for a Quarter Million." Instead I just mumbled and nodded my head.

There's more to the story but you get the idea.
Does anyone know of an even worse software project? Tell us about it! (I suppose a new thread would be best.)
 
Big factorials in C using __int128. Code compiled using gcc, ISO C++17.

I got the __int128 print function off the net; it uses recursion to print the digits.
Which compiler supports 128-bit integers? I don't know of any hardware that does.

You might like this:

The GNU Multiple Precision Arithmetic Library:
The GNU MP Bignum Library, with documentation (GNU MP 6.3.0 manual)

It supports arbitrary-size integers, rationals built from them, and arbitrary-precision floating-point numbers.

The easiest way to install it is with some open-source package manager, and such package managers are available for all major desktop platforms.
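
For example, the factorials computed earlier with __int128 top out at 33!; with GMP there is no such limit. A small sketch, assuming the library is installed (compile with -lgmp):

Code:
#include <gmp.h>
#include <stdio.h>

int main(void){
    mpz_t f;
    mpz_init(f);
    mpz_fac_ui(f, 100);                 // 100! -- far beyond any fixed-width integer
    gmp_printf("100! = %Zd\n", f);
    mpz_clear(f);
    return 0;
}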
 
From the net: Clang and gcc support __int128; MS C++ does not.

Processor registers, going back to 8-bit processors, have features to support arithmetic in multiples of the word length. 64-bit floating point was done 8 bits at a time in software. Very slow.

To add two 128-bit integers, the lower 64 bits are added and the hardware generates a carry bit; the upper 64 bits are then added, adding in the carry bit (sketched in code at the end of this post).

In assembly language

ADD - add without carry
ADDC - add with carry.

binary addition
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0 carry 1

Same as base 10 addition. 9 + 9 = 8 carry 1.

Add two 4-bit numbers two bits at a time.

0011 + 0011 = 3 + 3 = 6

Add the lower two bits, 11 + 11:

  11
  11 +
------
1 10    result 10, carry 1

Then add the upper two bits plus the carry:

   1    carry
  00
  00 +
------
  01

Upper and lower halves concatenated: 01 10 = 0110 = 6.

Binary place values 0,1,2,4,8...
0000 = 0
0001 = 1
0010 = 2
0100 = 4
1000 = 8
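
Here is that idea as a sketch of my own in C: adding two 128-bit values held as pairs of 64-bit halves, where the carry out of the low half is detected by the unsigned wrap-around (the compiler does the equivalent for __int128).

Code:
#include <stdint.h>
#include <stdio.h>

// 128-bit value represented as two 64-bit halves.
typedef struct { uint64_t lo, hi; } u128;

u128 add128(u128 a, u128 b){
    u128 r;
    r.lo = a.lo + b.lo;                 // add the low halves (wraps mod 2^64)
    uint64_t carry = (r.lo < a.lo);     // wrap-around means a carry was generated
    r.hi = a.hi + b.hi + carry;         // add the high halves plus the carry
    return r;
}

int main(void){
    u128 a = {0xffffffffffffffffULL, 0};   // 2^64 - 1
    u128 b = {1, 0};
    u128 c = add128(a, b);                 // expect 2^64: lo = 0, hi = 1
    printf("hi %llu  lo %llu\n", (unsigned long long)c.hi, (unsigned long long)c.lo);
    return 0;
}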
 
I finally learned how to use Python classes as I would a C++ class.

Two ways to do it: internal or external variable initialization. The __init__ function is roughly analogous to the C++ constructor.

Code:
class X:
    def __init__(self):
       self.k = 0
    def test(self,y):
         print(self.k*y)

x = X()

class X:
    def __init__(self,k,j):
       self.k = k
       self.j = j
    def test(self,y):
         print(self.k*y)

x = X(2,3)  # initializes k to 2 and j to 3
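
For comparison, a rough C++ analogue of the second form, just as a sketch of how __init__ corresponds to a constructor:

Code:
#include <iostream>

// Rough C++ analogue of the second Python class above: the constructor
// plays the role of __init__, member variables the role of self.k, self.j.
class X {
public:
    X(int k, int j) : k(k), j(j) {}
    void test(int y){ std::cout << k * y << std::endl; }
private:
    int k, j;
};

int main(){
    X x(2,3);      // initializes k to 2 and j to 3
    x.test(4);     // prints 8
    return 0;
}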

Accessing arrays from a class.

import array as ar

class X:
    def __init__(self):
        self.y = []
    def test(self):
        for i in range(5): self.y[i] = i

t = ar.array("i",5*[0])
x = X()
x.y = t
x.test()

Ported a signal generator from C++ to Python. Sine, pulse, rectangular, triangle.

Code:
import array as ar
import numpy as np
import math as ma
import matplotlib.pyplot as plt

class SIG_GEN:
  
    def __init__(self): 
        self.a = 0
        self.f = 0
        self.ph = 0
        self.os = 0
        self.tlo = 0
        self.thi = 0
        self.vlo = 0
        self.vhi = 0
        self.tdelay = 0
        self.t = []
        self.y = []
      
    def make_sine(self):
        n = len(self.y)
        for i in range(n):
            self.y[i] = self.a*ma.sin(2*ma.pi*self.f*self.t[i] + self.ph) + self.os
 
    def make_pulse(self):
        n = len(self.y)
        for i in range(n):
            if self.t[i] >= self.tlo and self.t[i] <= self.thi:
                self.y[i] = self.vhi
            else:
                self.y[i] = self.vlo
  
  
    def make_rect(self):
        n = len(self.y)
        tcount = 0
        dt = max(self.t)/(n-1)
        i = 0
        tcount = 0

        while(1):
            if self.tdelay == 0:break
            if(i == n or tcount > self.tdelay):break
            self.y[i] = 0
            tcount += dt
            i += 1
          
        while(1):
            tcount = 0
            while(tcount <= self.thi):
                if(i == n):break
                self.y[i] = self.vhi
                tcount += dt
                i += 1
            tcount = 0
      
            while(tcount <= self.tlo):
                if(i == n):break
                self.y[i] = self.vlo
                tcount += dt
                i += 1
            if(i == n):break
          
    def make_tri(self):
        dt = self.t[1]-self.t[0]
        n1 = ma.floor(self.thi/dt)
        n2 = ma.floor(self.tlo/dt)
        dy1 = (self.vhi-self.vlo)/n1
        dy2 = (self.vhi-self.vlo)/n2
        i = 0
        while(1):
            tcount = 0
            ycount = self.vlo
            while(tcount <= self.thi):
                if i == n: break
                self.y[i] = ycount
                ycount += dy1
                tcount += dt
                i += 1
            
            tcount = 0
            ycount = self.vhi
            while(tcount <= self.tlo):
                if i == n: break
                self.y[i] = ycount
                ycount -= dy2
                tcount += dt
                i += 1
        
            if i == n: break
      
 
n = 2**16
t = ar.array("d",n*[0])
y = ar.array("d",n*[0])

dt = 1/(n-1)
for i in range(n):t[i] = i*dt

sg = SIG_GEN()
sg.y = y
sg.t = t

sg.a = 1
sg.f = 2
sg.ph = 0
sg.os = 0
#sg.make_sine()

sg.tlo = .2
sg.thi = .4
sg.vlo = 0
sg.vhi = 1
#sg.make_pulse()

sg.tlo = .2
sg.thi = .2
sg.vlo= 0
sg.vhi = 1
sg.tdelay = .4
#sg.make_rect()

sg.tlo = .1
sg.thi = .1
sg.vlo= -1
sg.vhi = 1
sg.make_tri()

plt.plot(t,y,linewidth=2.0,color="k")
plt.grid(color='k', linestyle='-', linewidth=1)
plt.show()

 
Made a mistake; when I cleaned up the code I did not fully retest.

In the triangle function, add a line right after the def line:


def make_tri(self):
    n = len(self.y)
 