
The Programming Thread

I get the impression that a lot of the most interesting programming happens on embedded systems, where resources are highly constrained and there is a big incentive to make programs as lean and efficient as possible.

I'm reminded of the fast inverse square root function popularised by John Carmack and Quake III. It's a pretty cool example of minimising execution time by settling for an approximate result, computed with a couple of nifty bit-level tricks.



C:
float Q_rsqrt( float number )
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = * ( long * ) &y;    // evil floating point bit level hacking
    i  = 0x5f3759df - ( i >> 1 );               // what the fuck?
    y  = * ( float * ) &i;
    y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
 // y  = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration,
                                              // this can be removed

    return y;
}

I appreciate creative problem solving.

It took three iterations of the refinement line to match the compiler library's sqrt function, so I'd call it a successive approximation algorithm.

There may be portability problems. With the way I have my compiler options set, it compiles with warnings about the pointer dereferencing. It may not compile on a strict-rules compiler like MS C++.

The function works with 32 bit ints and 32 bit floats. It fails with doubles, which have a different size and bit layout, so the pointer trick and the magic constant don't carry over. Possible portability problem.

I haven't reduced it to an equation. The right shift is a divide by 2 and I haven't figured out where the hex constant comes from. Something to chew on for a while.
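
On the portability point: the pointer cast is what trips the strict-aliasing rules, and the usual remedy is to copy the bits with memcpy instead. A minimal sketch of that variant, assuming a 32-bit IEEE 754 float (the name q_rsqrt_portable is just for illustration):

C++:
#include <cstdint>
#include <cstring>

// Same bit-level trick, but the float/int reinterpretation goes through
// memcpy instead of a pointer cast, so there is no aliasing violation.
float q_rsqrt_portable( float number )
{
    static_assert(sizeof(float) == sizeof(std::uint32_t), "assumes 32-bit float");

    float x2 = number * 0.5f;
    std::uint32_t i;
    std::memcpy(&i, &number, sizeof i);   // read the float's bit pattern
    i = 0x5f3759df - ( i >> 1 );          // same magic-constant step
    float y;
    std::memcpy(&y, &i, sizeof y);        // reinterpret the bits as a float
    y = y * ( 1.5f - ( x2 * y * y ) );    // one refinement iteration
    return y;
}

For what it's worth, the refinement line is the standard Newton-Raphson step for the reciprocal square root, which is why repeating it converges toward the library sqrt result.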
 
Seconded. I've encountered this as a quick approximation of the inverse square root that gives reasonable precision a lot faster than doing it the "correct" way. I didn't realize it converged on the right answer; I only knew it as a way to get into the ballpark fast--in many cases you don't need the exact value. This is especially true in games, where a couple of digits of precision very often do the job fine.
 
The hex number, aka magic number, appears to be empirical. I would never have figured it out from a cold start.

Thanks for the post bigfield.
 

In the mathematical subfields of numerical analysis and mathematical analysis, a trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions. For complex coefficients, there is no difference between such a function and a finite Fourier series.

Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are used also in the discrete Fourier transform.

The term trigonometric polynomial for the real-valued case can be seen as using the analogy: the functions sin(nx) and cos(nx) are similar to the monomial basis for polynomials. In the complex case the trigonometric polynomials are spanned by the positive and negative powers of e^(ix), Laurent polynomials in z under the change of variables z = e^(ix).
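
Spelled out, a real trigonometric polynomial of degree N is just a finite sum over that sine/cosine basis:

LaTeX:
T(x) = a_0 + \sum_{n=1}^{N} \bigl( a_n \cos(nx) + b_n \sin(nx) \bigr), \qquad a_n, b_n \in \mathbb{R}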

One version of a cosine polynomial. Good to 4 decimal places with 3 terms.



Code:
#include <stdio.h>
#include <math.h>

#define _PI 3.14159265358979323846

double cos_poly(double x){
// cos(x) = 1 - x^2/2 + x^4/24 - x^6/720
    double val = 0;
    val = 1 - (pow(x,2)/2.) + (pow(x,4)/24.) - (pow(x,6)/720.);
    return val;
}//cos_poly()

int main()
{
     double ang = _PI/4., val = 0, valc = 1.234;
     val = cos_poly(ang);
     //truncate or round to get rid of the noise
     val = val * 1000;
     val = trunc(val);
     //val = round(val);
     val = val/1000;
     valc = cos(ang);  //c lib cosine for comparison
     printf("angle  %2.10f\n", 180*ang/_PI );
     printf("cos lib  %1.10f  cos poly  %1.10f\n", valc, val);
     return 0;
}
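
As an aside, the same truncated series can be evaluated with Horner's rule, which avoids the repeated pow() calls; a minimal sketch (cos_horner is just an illustrative name):

C++:
#include <cstdio>
#include <cmath>

// cos(x) ~ 1 - x^2/2 + x^4/24 - x^6/720, rewritten in nested (Horner) form
double cos_horner(double x)
{
    double x2 = x * x;
    return 1.0 + x2 * (-1.0/2.0 + x2 * (1.0/24.0 + x2 * (-1.0/720.0)));
}

int main()
{
    double ang = 3.14159265358979323846 / 4.0;
    std::printf("cos lib %1.10f  cos horner %1.10f\n", std::cos(ang), cos_horner(ang));
    return 0;
}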
 
The most interesting aspect of my job is the business analysis side. The actual code I write is often trivial, but the business processes behind the code can be interesting.

I'd take an interesting business over an interesting tech stack.

I was hired onto a team that was already halfway through the app development process, and so much of the business logic code has already been done. I often have to spend several hours just reading code or doing DB queries etc. just to figure out what the code is doing before I change one or two lines.

I don't get to do much business analysis - we have dedicated BAs who do that part - but I enjoy the process of analysing the implementation to understand the hows and whys. And the BAs still have no idea how to actually code, so translating requirements into code is itself an interesting exercise.

I find it satisfying when I get to a ticket that requires me to write or rewrite a whole feature. I don't particularly like the languages that I have to work with, but coding is fundamentally an enjoyable craft for me.

A lot of the BAs I work with are clinical people (I work in a hospital) with minimal experience in IT, and I've worked as a Software Analyst for about five years. So it usually ends up being a joint effort translating business requirements into software. In some cases I end up teaching these people how to be better at their jobs, which can be fun. They usually know the clinical side a little better, I know the software side better.

One of the weirder aspects of my background is that I also have a Medical Science degree, which is oddly appropriate for the role; on rare occasions I've been able to contribute to the clinical side.

And yea, I also like the act of coding without regard to language. I'm a bit bored with the technologies I'm using and would prefer broader exposure, but learning more about the business as a whole keeps things interesting. And I've been coding in the same language, on the same system for nearly seven years, so at this point becoming more and more of an expert can be gratifying.
 
This morning I researched and implemented the Luhn algorithm to build out a barcode with a check-digit in a couple hours. Not overly difficult if you're an experienced programmer, but it makes for an interesting thought experiment to consider the proportion of the public who could pull that off.
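
For the curious, a minimal sketch of the Luhn check-digit calculation (the function name and the digits-only input format are assumptions for illustration):

C++:
#include <string>

// Luhn check digit for a string of decimal digits: starting from the
// rightmost payload digit, every second digit is doubled, and doubled
// values above 9 have 9 subtracted (same as summing their digits).
// The check digit is whatever makes the total a multiple of 10.
int luhnCheckDigit(const std::string& digits)
{
    int sum = 0;
    bool doubleIt = true;   // the rightmost payload digit gets doubled
    for (int i = static_cast<int>(digits.size()) - 1; i >= 0; --i)
    {
        int d = digits[i] - '0';
        if (doubleIt)
        {
            d *= 2;
            if (d > 9) d -= 9;
        }
        sum += d;
        doubleIt = !doubleIt;
    }
    return (10 - (sum % 10)) % 10;
}

Appending the returned digit to the payload gives a number whose Luhn sum is a multiple of 10, which is the property a validator checks.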

That's been a major learning point for me since entering the industry eight years ago - how few people have the capacity to build software, let alone be actually good at it. When I was coming out of college I was pretty stressed about finding a role; now I realize just how much power I had at the time, and still have.

The only problem going forward is going to be ageism.
 
Testing out AI generated code.... I used an example of FFT because my friend is really interested in that:


Note it keeps on giving different answers with different approaches...

in C++ create a program that uses FFT to double the pitch of a sound sample and implement the FFT functions using FFTW, implement sample loading and saving, and implement the pitch doubling

Here is an example of how you might implement a pitch doubling program using FFTW in C++:

C++:
#include <iostream>
#include <complex>
#include <fftw3.h>
#include <cmath>


const int FFT_SIZE = 1024;


int main()
{
    // Load the sound sample data into an array
    // TODO: Implement sample loading


    // Allocate memory for the FFT input and output arrays
    fftw_complex *in, *out;
    in = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * FFT_SIZE);
    out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * FFT_SIZE);


    // Create a plan for the FFT
    fftw_plan plan = fftw_plan_dft_1d(FFT_SIZE, in, out, FFTW_FORWARD, FFTW_ESTIMATE);


    // Copy the sound sample data into the FFT input array
    // TODO: Implement sample copying


    // Compute the FFT of the input data
    fftw_execute(plan);


    // Double the pitch by shifting the spectrum to the right
    for (int i = 0; i < FFT_SIZE/2; i++)
    {
        in[i] = out[i + FFT_SIZE/2];
    }


    // Compute the inverse FFT of the shifted spectrum
    fftw_plan inverse_plan = fftw_plan_dft_1d(FFT_SIZE, in, out, FFTW_BACKWARD, FFTW_ESTIMATE);
    fftw_execute(inverse_plan);


    // Save the modified sound sample
    // TODO: Implement sample saving


    // Clean up
    fftw_destroy_plan(plan);
    fftw_destroy_plan(inverse_plan);
    fftw_free(in);
    fftw_free(out);


    return 0;
}
This code implements the basic steps for using FFTW to double the pitch of a sound sample. You will need to fill in the details for loading and saving the sample, as well as copying the sample data into the FFT input array.

Note that this code is just an example, and there may be other ways to implement pitch doubling using FFTW in C++. If you have any questions or need more help, feel free to ask.
Another answer - but it was incomplete:
in C++ create a program that uses FFT to double the pitch of a sound sample and implement the FFT functions using FFTW, implement sample loading and saving, and implement the pitch doubling

To create a program that uses FFT to double the pitch of a sound sample in C++, you can use the FFTW library to implement the FFT functions. You can also use a library like libsndfile to handle loading and saving the sound sample. Here is an example of how you might implement this program:

C++:
#include <iostream>
#include <cmath>
#include <fftw3.h>
#include <sndfile.h>


// Constants for the FFT
const int FFT_SIZE = 4096;
const int FFT_HALF_SIZE = FFT_SIZE / 2;


int main() {
  // Load the sound sample
  SF_INFO info;
  SNDFILE *sndfile = sf_open("sample.wav", SFM_READ, &info);
  if (!sndfile) {
    std::cerr << "Failed to open sound file" << std::endl;
    return 1;
  }


  // Check that the sample is mono
  if (info.channels != 1) {
    std::cerr << "Sound file must be mono" << std::endl;
    return 1;
  }


  // Allocate memory for the sample
  float *sample = new float[info.frames];


  // Read the sample data
  sf_read_float(sndfile, sample, info.frames);
  sf_close(sndfile);


  // Allocate memory for the FFT input and output
  fftwf_complex *in = (fftwf_complex*) fftwf_malloc(sizeof(fftwf_complex) * FFT_SIZE);
  fftwf_complex *out = (fftwf_complex*) fftwf_malloc(sizeof(fftwf_complex) * FFT_SIZE);


  // Create the FFT plan
  fftwf_plan plan = fftwf_plan_dft_1d(FFT_SIZE, in, out, FFTW_FORWARD, FFTW_ESTIMATE);


  // Copy the sample data into the FFT input array, padding with zeros
  for (int i = 0; i < FFT_HALF_SIZE; i++) {
    in[i][0] = sample[i];
    in[i][1] = 0.0;
  }
  for (int i = FFT_HALF_SIZE; i < FFT_SIZE; i++) {
    in[i][0] = 0.0;
    in[i][1] = 0.0;
  }


  // Execute the FFT
  fftwf_execute(plan);


  // Shift the FFT output so that the 0th element is in the middle
  for (int i = 0; i < FFT_HALF_SIZE; i++) {
    float re = out[i][0];
    float im = out[i][1];
    out[i][0] = out[i + FFT_HALF_SIZE][0];
    out[i][1] = out[i + FFT_HALF_SIZE][1];
    out[i + FFT_HALF_SIZE][0] = re;
    out[i
(end of response)
Another partial answer:
in C++ create a program that uses FFT to double the pitch of a sound sample and implement the FFT functions using FFTW, implement sample loading and saving, and implement the pitch doubling all with separate functions

To create a program that uses FFT to double the pitch of a sound sample in C++, you will need to do the following:

  1. Install FFTW, a popular library for computing FFTs in C++. You can do this by downloading the library from its website and following the installation instructions provided.
  2. Define a function for loading an audio sample from a file. This function should take the file name as input and return the sample data as an array or vector of doubles.
  3. Define a function for saving an audio sample to a file. This function should take the sample data and the file name as input and write the sample data to the specified file in a format that can be played back by a media player or other audio software.
  4. Define a function for computing the FFT of an audio sample. This function should take the sample data as input and return the frequency-domain coefficients as a complex-valued array.
  5. Define a function for doubling the pitch of an audio sample. This function should take the frequency-domain coefficients as input and return the modified coefficients after scaling them by a factor of 2.
  6. Define a function for computing the inverse FFT of the modified frequency-domain coefficients. This function should take the modified coefficients as input and return the modified time-domain signal, which should have a higher pitch than the original sample.
  7. In your main function, use the above functions to load the audio sample, compute the FFT, double the pitch, compute the inverse FFT, and save the modified sample to a file.
Here is an example of what the code for this program might look like:
C++:
#include <fftw3.h>
#include <iostream>
#include <cstdio>


// Load the audio sample from a file
std::vector<double> loadAudioSample(const std::string& fileName) {
  // Open the file for reading
  std::FILE* file = std::fopen(fileName.c_str(), "rb");
  if (!file) {
    std::cerr << "Error: Failed to open file " << fileName << std::endl;
    return {};
  }


  // Read the sample data from the file
  std::vector<double> sample;
  while (!std::feof(file)) {
    double value;
    std::fread(&value, sizeof(double), 1, file);
    sample.push_back(value);
  }


  // Close the file and return the sample data
  std::fclose(file);
  return sample;
}


// Save the audio sample to a file
void saveAudioSample(const std::vector<double>& sample, const std::string& fileName) {
  // Open the file for writing
  std::FILE* file = std::fopen(fileName.c_str(), "wb");
  if (!file) {
    std::cerr << "Error: Failed to open file " << fileName << std::endl;
    return;
  }


  // Write the sample data to the file
  for (double value : sample) {
    std::fwrite(&value, sizeof(double), 1, file);
  }


  // Close the file
  std::fclose(file);
}


// Compute the FFT of the audio sample
std::vector<std::complex<double>> computeFF

It's not perfect but is very impressive....
 
I have a paid plan with the OpenAI Playground. I turned the randomness ("temperature") off and changed the maximum length to 4000.

in C++ create a program that uses FFT to double the pitch of a sound sample and implement the FFT functions using FFTW, implement sample loading and saving, and implement the pitch doubling all with separate functions

C++:
#include <iostream>
#include <fftw3.h>
#include <sndfile.h>


// Function to load a sound sample from a file
void loadSample(const char* filename, double*& sample, int& sampleSize)
{
    // Open the sound file
    SF_INFO info;
    SNDFILE* sndFile = sf_open(filename, SFM_READ, &info);
    if (sndFile == NULL)
    {
        std::cerr << "Error: could not open file " << filename << std::endl;
        return;
    }


    // Allocate memory for the sample
    sampleSize = info.frames * info.channels;
    sample = new double[sampleSize];


    // Read the sample from the file
    sf_read_double(sndFile, sample, sampleSize);


    // Close the sound file
    sf_close(sndFile);
}


// Function to save a sound sample to a file
void saveSample(const char* filename, double* sample, int sampleSize)
{
    // Open the sound file
    SF_INFO info;
    info.channels = 1;
    info.samplerate = 44100;
    info.format = SF_FORMAT_WAV | SF_FORMAT_PCM_16;
    SNDFILE* sndFile = sf_open(filename, SFM_WRITE, &info);
    if (sndFile == NULL)
    {
        std::cerr << "Error: could not open file " << filename << std::endl;
        return;
    }


    // Write the sample to the file
    sf_write_double(sndFile, sample, sampleSize);


    // Close the sound file
    sf_close(sndFile);
}


// Function to perform a forward FFT on a sound sample
void fft(double* sample, int sampleSize, fftw_complex*& fftOut)
{
    // Allocate memory for the FFT output
    fftOut = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * sampleSize);


    // Create a plan for the FFT
    fftw_plan plan = fftw_plan_dft_r2c_1d(sampleSize, sample, fftOut, FFTW_ESTIMATE);


    // Execute the FFT
    fftw_execute(plan);


    // Clean up
    fftw_destroy_plan(plan);
}


// Function to perform an inverse FFT on a sound sample
void ifft(fftw_complex* fftIn, int sampleSize, double*& sampleOut)
{
    // Allocate memory for the IFFT output
    sampleOut = new double[sampleSize];


    // Create a plan for the IFFT
    fftw_plan plan = fftw_plan_dft_c2r_1d(sampleSize, fftIn, sampleOut, FFTW_ESTIMATE);


    // Execute the IFFT
    fftw_execute(plan);


    // Clean up
    fftw_destroy_plan(plan);
}


// Function to double the pitch of a sound sample
void doublePitch(double* sample, int sampleSize, double*& sampleOut)
{
    // Allocate memory for the FFT output
    fftw_complex* fftOut;


    // Perform the FFT
    fft(sample, sampleSize, fftOut);


    // Double the frequency of the FFT output
    for (int i = 0; i < sampleSize / 2; i++)
    {
        fftOut[i][0] *= 2;
        fftOut[i][1] *= 2;
    }


    // Perform the IFFT
    ifft(fftOut, sampleSize, sampleOut);


    // Clean up
    fftw_free(fftOut);
}


int main()
{
    // Load the sound sample
    double* sample;
    int sampleSize;
    loadSample("sample.wav", sample, sampleSize);


    // Double the pitch of the sample
    double* sampleOut;
    doublePitch(sample, sampleSize, sampleOut);


    // Save the modified sample
    saveSample("sample_doubled.wav", sampleOut, sampleSize);


    // Clean up
    delete[] sample;
    delete[] sampleOut;


    return 0;
}

Very elegant....

So a key part of the process is to be very specific about what I want....
 
This morning I researched and implemented the Luhn algorithm to build out a barcode with a check-digit in a couple hours. Not overly difficult if you're an experienced programmer, but it makes for an interesting thought experiment to consider the proportion of the public who could pull that off.

That's been a major learning point for me since entering the industry eight years ago - how few people have the capacity to build software, let alone be actually good at it. When I was coming out of college I was pretty stressed about finding a role; now I realize just how much power I had at the time, and still have.

The only problem going forward is going to be ageism.
Keep in mind you get to be a coding god because of the people who design and make computers, the people on the assembly line who make the car you go to work in, the janitors who keep the toilets clean, and the people who keep you supplied with paper and pens.

When I got my first EE job I thought I had arrived, I was at the top of the food chain. Over time I understood I got to walk in the door and do a job I liked because of a lot of other people.

I never thought of myself as anyone special, just one of many. But then, I started at the bottom. I worked in regular manufacturing jobs in the 70s, became a technician, and eventually an engineer.

Before that I drove a cab, washed dishes, worked as a cook, and did day labor. I can relate to anyone. You might call that a philosophy.

Back circa 1980 I would have said compiler design and operating systems were the most challenging coding.

As the PC began to spread it was CAD tools: electrical simulators, 3D mechanical modeling tools, simulation tools like Matlab. The MS Office suite had a major impact in business.

Today I would say it is Amazon and Google.

The last company I did some work for was run by an entrepreneur who started a software company in the 80s. He taught himself software and developed a financial math tool that made him money. He had no math or science training; he went to a Christian Bible school.

That was about 10 years ago. At that time he had a high-level tool that took a scripted language, generated code, and compiled it for any major OS. He had one software engineer to manage it. That is the future of software.

Today some of the more complex coding would be fighter jet control systems and missile guidance.

To me, when somebody says coding it generally means apps: applying high-level canned constructs.

Two books I kept on the shelf at work were Knuth's Seminumerical Algorithms and Sorting and Searching. Probably a little behind today, but they contained all the general algorithms.
 
I don't think of myself as someone special as much, but I do think about the social side of programming quite a bit. It's interesting to think about.

In college I took part in a six term program. In the first term there might have been about 150 - 200 students. Of those only 56 graduated (highest drop out rate in the college), and of the 56 there might have been about 10 - 15 really solid developers. Only 15 sought after developers entering the industry from one school per year in a city of 500 000 people. Those are alarming numbers.

I work at a company that offers a competitive salary and a defined benefit pension plan, and when we put up a job application for developers we're lucky if we get even a few acceptable candidates.

A few years back a software meetup formed here, and a number of companies in the city barred their employees from going because they were afraid they'd get poached by another company.

It just goes to show that programming is a niche skill that most people just aren't built to do, and why we're in such high demand. And also that the industry is a true meritocracy - you really can't fake it.
 
So a key part of the process is to be very specific about what I want....

Computers are morons.

We have to learn programming languages to instruct these morons how to do things in very precise, unambiguous terms.

Each generation of programming language makes it easier to instruct computers in terms that humans find easy to synthesise and comprehend. In particular, modern languages/compilers are slightly better at interpreting what we are telling them to do without us needing to spell out every last detail, and we have a vast array of libraries that have already solved most fundamental and common problems.

But still, we have to invest a lot of time into learning how to speak computer. Personally I've invested countless hours learning to code. Languages, patterns, paradigms, library APIs etc. But writing code is just an implementation detail: it's not a fundamental part of our function, which is to analyse "business" requirements and produce solutions in the form of computer programs.

AI programming is possibly the next step in bringing programming closer to natural language. Programmers will still need to be able to express their program's behaviour in specific terms, but they will no longer need to read or write code. We may end up doing something like Behaviour Driven Design, but the AI writes the code to pass our tests.
 
Stack Overflow had to ban ChatGPT because it kept flooding questions with plausible but wrong answers.

I'd be a fool to say that AI will never have an impact on the act of writing code, but the problem for AI is that when writing code the output has to be perfect. I don't see much use for a program that I have to feed instructions to so it can write out simple programming statements that aren't always correct, and that I'll have to verify and correct anyway. Is there really much of an efficiency gain to be had?

ChatGPT can write a basic SQL statement, but can it write a complicated one accurately, 100% of the time, based on convoluted input? I'd say that is likely the hard problem AI developers need to solve, and it's not an easy one.

And then, like you mention, coding is literally the easiest part of a programming job, the hard part is liaising with the business to understand what to code. Are we really going to get to a point where we no longer need people with an in depth understanding of software to do that?
 
More about AI and programming:

It can also explain code: (sometimes can be wrong - but comments help)

# Python 3
Python:
def remove_common_prefix(x, prefix, ws_prefix):
    x["completion"] = x["completion"].str[len(prefix) :]
    if ws_prefix:
        # keep the single whitespace as prefix
        x["completion"] = " " + x["completion"]
return x

# Explanation of what the code does
# This code takes in a dataframe (x), a prefix (prefix) and a boolean (ws_prefix) as parameters.
# It then removes the prefix from the "completion" column of the dataframe and, if ws_prefix is True, adds a single whitespace as a prefix.
# Finally, it returns the modified dataframe.

# Explanation of what the code does that a child could understand
# This code takes a list of words (x) and removes a common prefix (prefix) from each word. It also has an option (ws_prefix) to keep a single whitespace as a prefix. After the prefix is removed, the code returns the list of words with the prefix removed.

and translate from one language to another....

##### Translate this function from Python into Haskell
### Python

Python:
def remove_common_prefix(x, prefix, ws_prefix):
    x["completion"] = x["completion"].str[len(prefix) :]
    if ws_prefix:
        # keep the single whitespace as prefix
        x["completion"] = " " + x["completion"]
return x

### Haskell


removeCommonPrefix :: String -> String -> Bool -> String
removeCommonPrefix x prefix ws_prefix =
if ws_prefix
then " " ++ (drop (length prefix) x)
else drop (length prefix) x

Fixing bugs:

##### Fix bugs in the below function

### Buggy Python
Python:
import Random
a = random.randint(1,12)
b = random.randint(1,12)
for i in range(10):
    question = "What is "+a+" x "+b+"? "
    answer = input(question)
    if answer = a*b
        print (Well done!)
    else:
        print("No.")

### Fixed Python
Python:
import random
a = random.randint(1,12)
b = random.randint(1,12)
for i in range(10):
    question = "What is "+str(a)+" x "+str(b)+"? "
    answer = input(question)
    if answer == str(a*b):
        print ("Well done!")
    else:
        print("No.")
 
Stack Overflow had to ban ChatGPT because it kept flooding questions with plausible but wrong answers.

I'd be a fool to say that AI will never have an impact on the act of writing code, but the problem for AI is that when writing code the output has to be perfect. I don't see much use for a program that I have to feed instructions to so it can write out simple programming statements that aren't always correct, and that I'll have to verify and correct anyway. Is there really much of an efficiency gain to be had?

ChatGPT can write a basic SQL statement, but can it write a complicated one accurately, 100% of the time, based on convoluted input? I'd say that is likely the hard problem AI developers need to solve, and it's not an easy one.

And then, like you mention, coding is literally the easiest part of a programming job, the hard part is liaising with the business to understand what to code. Are we really going to get to a point where we no longer need people with an in depth understanding of software to do that?
Programmers will need AI tools that are designed to produce code, not a general purpose chat bot.

I think it may be possible to use AI in a highly structured programming process such as BDD. The programmer specifies what behaviour the program has to produce and then the AI makes a program that passes the whole suite of tests. The programming language used by the human would only need to express requirements, which means it could potentially take on a form more closely resembling natural language grammar and look less like lambda calculus. In principle the AI would not be able to produce incorrect programs, because such a program would fail at least one test.
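
A toy illustration of that workflow, assuming plain assert-style checks stand in for a real BDD suite (the fizzbuzz function is purely hypothetical): the human writes only the behavioural checks, and a generated implementation is accepted only when every check passes.

C++:
#include <cassert>
#include <string>

// In the workflow described above, this body is the part the AI would
// generate; a hand-written version is included so the sketch actually runs.
std::string fizzbuzz(int n)
{
    if (n % 15 == 0) return "FizzBuzz";
    if (n % 3 == 0)  return "Fizz";
    if (n % 5 == 0)  return "Buzz";
    return std::to_string(n);
}

// The "specification": behaviour checks any generated implementation must pass.
int main()
{
    assert(fizzbuzz(1)  == "1");
    assert(fizzbuzz(3)  == "Fizz");
    assert(fizzbuzz(5)  == "Buzz");
    assert(fizzbuzz(15) == "FizzBuzz");
    return 0;
}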

Are we really going to get to a point where we no longer need people with an in depth understanding of software to do that?
Large parts of programming for me is using libraries via a public API, and I have no clue how many of those libraries actually work under the hood. I doubt there are many programmers out there building software without using libraries that are effectively black boxes of machine instructions.

I could see a lot of software development reach a point where programmers only make inscrutable black boxes of code, only knowable by feeding them inputs and examining the outputs.
 
I liked my work and I liked learning science, but at the end of the day it was a job with good pay, benefits, and working conditions.

There were engineers I knew who hated the job and thought it was awful work. Most of those with families and mortgages went on cruise control once they got enough experience.

I was rarely out of work, and at that not for long. I always had good medical insurance.

One of the important things for me was that I was usually able to work without anyone looking over my shoulder.
 
Large parts of programming for me is using libraries via a public API, and I have no clue how many of those libraries actually work under the hood.
This is why a computer is always going to struggle with understanding English language instructions.

I had to read that sentence twice to decide whether @bigfield is unaware of the detailed algorithms used by the libraries, or whether he is uncertain of the proportion of those libraries that actually work.

It's taken more than fifty years of intensive and immersive training to get my neural network to understand "plain English", and even after that effort, mistakes are commonplace.

I'm waiting for an AI that can identify ambiguity and request clarification, rather than jumping straight to a response.

A truly intelligent system sometimes says "Wait, what? Are you crazy??" in response to a question, rather than trying to answer it. Advanced intelligences also occasionally respond to questions with "Why should I care?".
 
Programmers will need AI tools that are designed to produce code, not a general purpose chat bot.
Well #229 is a good foundation...
Did you read post #235? The AI was very good at explaining code, translating code from one language to another and fixing bugs... plus it does it very quickly (unlike human programmers)

Being a general purpose chatbot allows this:
Write a story that includes code for a number guessing program

Once upon a time there was a young programmer who wanted to test his coding skills. He wanted to create a program that would allow someone to guess a random number between 1 and 10.

He began by creating a variable for the random number.

JavaScript:
var randomNumber = Math.floor(Math.random() * 10) + 1;

He then set up a loop that would run until the correct number was guessed.

JavaScript:
while (true) {
    // Prompt user for their guess
    var guess = prompt("Guess a number between 1 and 10");


    // Check if the input is a valid number
    if (isNaN(guess)) {
        alert("Please enter a valid number");
    }
    else {
        // Check if the guess is correct
        if (guess == randomNumber) {
            alert("Congrats! You guessed the right number!");
            break;
        }
        else {
            alert("Sorry, that wasn't the right number. Try again!");
        }
    }
}
The programmer was proud of his work and excited to see if it would work. He ran the program and was delighted to see that it worked perfectly. Now anyone could guess the random number and have fun doing it!
Or this:
Write javascript code that generates a random story:

JavaScript:
let characters = ['princess', 'witch', 'unicorn', 'dragon'];
let places = ['castle', 'forest', 'mountain', 'swamp'];

let randomCharacter = characters[Math.floor(Math.random() * characters.length)];
let randomPlace = places[Math.floor(Math.random() * places.length)];

let story = `Once upon a time, there was a ${randomCharacter} who lived in a ${randomPlace}.`;

console.log(story);
Being general purpose means that you can make it create code from just about any domain... what is the problem with that?
 
More things the AI can do:

Making major modifications to existing code:
JavaScript:
var randomNumber = 5;
var guess = prompt("Guess a number between 1 and 10");
if (guess == randomNumber) {
    alert("Congrats! You guessed the right number!");
}

Modify the previous code so that it makes you guess a number between 1 and 20 and says if the guess is too high or too low and it involves a loop and generates a random number each time:

JavaScript:
var randomNumber = Math.floor(Math.random() * 20) + 1;


while (true) {
    var guess = prompt("Guess a number between 1 and 20");
    if (guess == randomNumber) {
        alert("Congrats! You guessed the right number!");
        break;
    }
    else if (guess > randomNumber) {
        alert("Your guess is too high! Try again.");
    }
    else {
        alert("Your guess is too low! Try again.");
    }
}
Completing code:
Complete the following code using the translate function

public void MoveTowardsPlayer() {
Vector3 direction =
C#:
public void MoveTowardsPlayer()
{
     Vector3 direction =   player.transform.position - transform.position;
     transform.Translate(direction.normalized * speed * Time.deltaTime, Space.World);
}
 