
A Successor of C++?

Coding for reliability means adding code to detect faults and make an orderly exit if needed, instead of crashing without warning. That is the difference between coding for yourself and coding for others to use long after you have moved on to something else.
Yes, be sure to put in graceful degradation. For example, vet every input and put in fallbacks for nonsensical inputs.
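A minimal sketch in C of that kind of input vetting (the sensor limits and fallback value here are made up for illustration):

#include <stdio.h>

#define TEMP_MIN -40.0      /* hypothetical valid range */
#define TEMP_MAX 125.0
#define TEMP_DEFAULT 20.0   /* hypothetical safe fallback */

/* Vet a raw reading; fall back to a safe default on nonsense
   instead of letting garbage propagate and crash something later. */
double vet_temperature(double raw)
{
    if (raw < TEMP_MIN || raw > TEMP_MAX) {
        fprintf(stderr, "bad reading %f, using default\n", raw);
        return TEMP_DEFAULT;
    }
    return raw;
}

int main(void)
{
    printf("%f\n", vet_temperature(999.0));  /* falls back to 20.0 */
    return 0;
}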
It also involves running realistic test cases to try to make the system crash. Software is only as good as the test cases you run. Which is where OOP comes in: build and test individual objects before system integration.
OOP is only part of program organization. Before OOP was developed, there was structured programming, a way of hiding "go to".

BTW, I agree that inheritance makes callbacks easy. The C syntax for function pointers is rather grotesque.
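For anyone who hasn't suffered it, a minimal sketch of a C callback (the names are made up); the typedef is what keeps the declarations readable:

#include <stdio.h>

/* Pointer to a function taking an int and returning void.
   Without the typedef, a parameter of this type is declared as
   void (*handler)(int) -- the grotesque part. */
typedef void (*event_handler)(int code);

void on_error(int code) { printf("handling error %d\n", code); }

void register_handler(event_handler handler) { handler(42); }

int main(void)
{
    register_handler(on_error);  /* pass the function itself */
    return 0;
}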
 
Structured programming, learned it in a class.

1. Use of functions
2. Top down execution
3. Limit nested function calls
4. No jumps or go to in main code
5. Single exit points from functions
6. Self documenting code

The opposite of spaghetti code. There is always a way around having to use go to and jump in code.
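For instance, a retry that might be coded with a label and a jump restructures into an ordinary loop; a minimal sketch with a stand-in operation:

#include <stdio.h>

#define MAX_TRIES 3

int main(void)
{
    int tries, ok = 0;
    /* spaghetti version: retry: ... if (!ok) goto retry; */
    for (tries = 0; tries < MAX_TRIES && !ok; tries++)
        ok = (tries == 2);   /* stand-in for a real operation */
    printf(ok ? "succeeded\n" : "gave up\n");
    return 0;
}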

OOP may not be top down execution. You can have objects talking to each other asynchronously.

For me structured and OOP are two different ways of looking at things.
 
Or to put it more simply, in practical terms.


Polymorphism in C++ describes the ability of different objects to be accessed by a common interface.

I can see the utility in that, perhaps a form of reductionism.
List<Person*> people();
people.append(new Eunuch());//implements Person type
printf(people.get(0)->sex());//throws...
Why should this throw? Sex() isn't a null. (And does printing a null throw anyway?? I don't think we had exceptions the last time I wrote any C.)
It's a joke. In C++. Think it through.
Printing a null throws, because C strings are just char*. It will throw a null pointer access exception.
 
For me structured and OOP are two different ways of looking at things.
Structured programming is related to code structure, and I've seen various definitions of it. In summary:
  • Blocks of code. They can contain other blocks.
  • Selection of which blocks to execute: if-then-else, switch
  • Repetition: do, for, while
  • Functions: enter a block from another block, and exit back to just after where one departed
Structured programming is universal in high-level languages, and "go to" is often absent.

OOP is more related to data structures, where the structures include methods along with data.
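Even in plain C you can approximate that by bundling function pointers with the data; a rough sketch:

#include <stdio.h>

/* Data plus a "method" packaged together, OOP-style. */
struct counter {
    int value;
    void (*increment)(struct counter *self);
};

void counter_increment(struct counter *self) { self->value++; }

int main(void)
{
    struct counter c = { 0, counter_increment };
    c.increment(&c);          /* a "method" call on the object */
    printf("%d\n", c.value);  /* prints 1 */
    return 0;
}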
 
Aborer rule was never hard code constants into code, define it.
 
Aborer rule was never hard code constants into code, define it.
Aborer? Do you mean another?

I broadly agree about defining constants, especially if they are used repeatedly. But small ones like 0 or 1 or 2 likely don't need to be defined, but instead commented on.
 
Disclaimers:

Wasn't it the inventor of C++ himself who had some famous quote like "Using the C language you stub your toe (code a bug) often enough. Bugs in C++ are less common, but when they do happen it's like blowing your foot off."

I am of course aware of the 'call by name' implicit in C macros; it just seems clearer, even though the pre-processor always comes bundled with the compiler, to treat the post-processed language as "pure C."

[pointless anecdote] I happened to be working graveyard as a consultant to a major computer manufacturer when the clocks ticked over to March 1, 1989 in New Zealand and Australia. So by chance I got to respond first when complaints about some 'time_of_day()' routine started coming in! There was a macro with a '++', and the '++' was "accidentally" being performed twice. The macro was over a year old but happened to give proper results for leap years, so it didn't fail in 1988.[/anecdote]
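That is the classic macro double-evaluation trap. Obviously not the vendor's actual code, but a minimal sketch of the same failure mode:

#include <stdio.h>

/* The argument appears twice in the expansion, so a side effect
   like '++' in the argument happens twice. */
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
    int day = 29;
    int last = MAX(day++, 28);  /* day++ is evaluated twice */
    printf("last = %d (expected 29), day = %d (expected 30)\n", last, day);
    return 0;
}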

As I say, my efforts have been single-man, and often the deliverables were NOT source code or even executables. (Sometimes they were journal articles or patent applications.) In addition to C, I programmed extensively in, and enjoyed, various machine languages. Thus, my perspective is weird, and probably quite inappropriate for most software projects. (Some of my single-man efforts were non-trivial enough that details might let Google divulge my secret identity!)

As I said, I was involved with one project that had about 15 programmers. Use of C language was the least of their problems!
 
Aborer rule was never hard code constants into code, define it.
Aborer? Do you mean another?

I broadly agree about defining constants, especially if they are used repeatedly. But small ones like 0 or 1 or 2 likely don't need to be defined, but instead commented on.
Yeah, 1, 2, and 0 serve special uses in all kinds of math, and honestly, if they weren't used directly the way they are, the math gets much harder to understand.

Another really nice way to avoid unnamed constants is enumerations: pin the start of each range with an explicit value, put a sentinel on the end of the range, and only the "head" values ever need defining.
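A rough sketch of that, with hypothetical names: pin only the head of each range, let the compiler number the rest, and end each range with a sentinel you can range-check against.

#include <stdio.h>

enum fault_code {
    FAULT_NONE = 0,

    FAULT_IO_FIRST = 100,  /* head of the I/O range, pinned by hand */
    FAULT_IO_READ,         /* 101, numbered automatically */
    FAULT_IO_WRITE,        /* 102 */
    FAULT_IO_END,          /* sentinel: one past the last I/O fault */

    FAULT_ASCII_FIRST = 200,  /* head of the next range */
    FAULT_ASCII_RANGE,
    FAULT_ASCII_END           /* sentinel */
};

/* range check against the head and the sentinel */
int is_io_fault(enum fault_code f)
{
    return f > FAULT_IO_FIRST && f < FAULT_IO_END;
}

int main(void)
{
    printf("%d\n", is_io_fault(FAULT_IO_WRITE));  /* prints 1 */
    return 0;
}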
 
I once delivered some model code with

#define FIVE 5
#define THIRTY_2 (1 << (FIVE))

I used the macros in the access of bit arrays. In self-defense: I wanted to expose any dependence on the chunk size, though the chunk was most likely to stay 32 bits for the foreseeable future.

Sure, LOG2SIZNB and NUMBITS (or whatever) could be substituted for the FIVE, THIRTY_2. But that would have made the code far FAR tougher and more annoying to read.

(Had I upgraded to a longer word, I might have changed the macro names to SIX and SIXTY_4.)
 
Aborer rule was never hard code constants into code, define it.
Aborer? Do you mean another?

I broadly agree about defining constants, especially if they are used repeatedly. But small ones like 0 or 1 or 2 likely don't need to be defined, but instead commented on.
As I got into it, structured programming became a habit. Defining constants instead of hard coding was a good habit I just did without thinking, like brushing my teeth.

I was given an engineering title while a night student. I learned structured programming in a class and applied it.

I went through an evolution of trial and error like many probably did. I started coding by just writing code without any thought and ended up with spaghetti code. Structured programming was a natural progression.

I think K&R's book The C Programming Language was seminal. There should be a statue to those guys.
 
I once delivered some model code with

#define FIVE 5
#define THIRTY_2 (1 << (FIVE))
...
(Had I upgraded to a longer word, I might have changed the macro names to SIX and SIXTY_4.)
No, no, no. You're supposed to change the name from THIRTY_2 to SIXTY_FOUR. :biggrin:

I broadly agree about defining constants, especially if they are used repeatedly. But small ones like 0 or 1 or 2 likely don't need to be defined, but instead commented on.
Yeah, 1, 2, and 0 serve special uses in all kinds of math, and honestly, if they weren't used directly the way they are, the math gets much harder to understand.
 
I once delivered some model code with

#define FIVE 5
#define THIRTY_2 (1 << (FIVE))

I used the macros in the access of bit arrays. In self-defense: I wanted to expose any dependence on the chunk size, though the chunk was most likely to stay 32 bits for the foreseeable future.

Sure, LOG2SIZNB and NUMBITS (or whatever) could be substituted for the FIVE, THIRTY_2. But that would have made the code far FAR tougher and more annoying to read.

(Had I upgraded to a longer word, I might have changed the macro names to SIX and SIXTY_4.)
But the whole point is that by defining the constants in this way, you don't need to change anything in the code other than the initial define statement when the environment is upgraded. When you upgrade to 64-bit, you just change that one character, and everything else works itself out:

#define FIVE 6
#define THIRTY_2 (1 << (FIVE))


Now FIVE is a macro which returns a numerical value of 6, THIRTY_2 is a macro which returns a numerical value of 64, and the whole thing just works. And isn't even slightly tough or annoying to read. ;)
 
The idea is that you do not change the name, you change the number.

If you have a few thousand lines of code or even a few hundred the last thing you want to do is change every instance.



#define WORD_LENGTH 32

Change the number not the name.
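A minimal sketch of the payoff: every use goes through the name, so the one define is the only line that moves on a 64-bit upgrade.

#include <stdio.h>

#define WORD_LENGTH 32  /* change this single number for 64-bit */

int main(void)
{
    int bits[WORD_LENGTH];  /* sizes itself from the name */
    int i;
    for (i = 0; i < WORD_LENGTH; i++)
        bits[i] = 0;
    printf("word is %d bits\n", WORD_LENGTH);
    return 0;
}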

I usually had an include file with all the definitions and macros to get it out of the way.

I found early on trying to be too clever just made things worse.
 
Or to put it more simply, in practical terms.


Polymorphism in C++ describes the ability of different objects to be accessed by a common interface.

I can see the utility in that, perhaps a form of reductionism.
List<Person*> people();
people.append(new Eunuch());//implements Person type
printf(people.get(0)->sex());//throws...
Why should this throw? Sex() isn't a null. (And does printing a null throw anyway?? I don't think we had exceptions the last time I wrote any C.)
It's a joke. In C++. Think it through.
Printing a null throws, because C strings are just char*. It will throw a null pointer access exception.
So it doesn't have the sense to print "null" in this case? I guess it's C, sense shouldn't be expected.
 
Honestly,
Or to put it more simply, in practical terms.


Polymorphism in C++ describes the ability of different objects to be accessed by a common interface.

I can see the utility in that, perhaps a form of reductionism.
List<Person*> people();
people.append(new Eunuch());//implements Person type
printf(people.get(0)->sex());//throws...
Why should this throw? Sex() isn't a null. (And does printing a null throw anyway?? I don't think we had exceptions the last time I wrote any C.)
It's a joke. In C++. Think it through.
Printing a null throws, because C strings are just char*. It will throw a null pointer access exception.
So it doesn't have the sense to print "null" in this case? I guess it's C, sense shouldn't be expected.
Printf throws when it accesses a null. The point is that you provide it some real format string, a pointer, and it cannot be null. If it is, it is bad programming, and programmer error! I'm making a joke about eunuchs and possibly bad pointer programming.
 
I honestly wonder how some of you manage to find your way out of bed in the morning.
 
Is there anything wrong with the way the array s[] is defined?

NUL is the first ASCII control character, so it is numerically 0. One use for NUL is as a string terminator. EOF is used for detecting end of file.

Some editors will show non-printable characters. Sometimes, at least it used to be the case, they could get into the text where you did not want them.

C allows those pesky pointers to index beyond the end of an array. Sometimes it is prudent to check for a terminator. An ounce of prevention is worth a pound of cure.

#include <stdio.h>

#define BUF_MAX 80
#define ASCII_FAULT 2
#define NULL_FAULT 1
#define NO_FAULT 0
#define ASCII_LO 32
#define ASCII_HI 126

int null_check(char *str, int b_max){
    //scan the whole buffer for a null terminator
    int i = 0;
    for(i = 0; i < b_max; i++) if(str[i] == '\0') return(NO_FAULT);
    return(NULL_FAULT);
}

int null_check_end(char *str, int b_max){
    //alternative: check only the last slot for the terminator
    if(str[b_max - 1] == '\0') return(NO_FAULT);
    return(NULL_FAULT);
}

int ASCII_check(char *str, int b_max){
    //check for printable characters
    int i = 0;
    for(i = 0; i < b_max; i++) if(str[i] < ASCII_LO || str[i] > ASCII_HI) return(ASCII_FAULT);
    return(NO_FAULT);
}

int main()
{
    char s[3] = {'1','2','3'};
    int error = 0, n = 0;
    n = sizeof(s);
    printf("length %d\n", n);

    error = null_check(s, n);
    printf("null %d\n", error);

    error = ASCII_check(s, n);
    printf("ascii %d\n", error);
    return 0;
}
 
I think K&R's book The C Programming Language was seminal. There should be a statue to those guys.
Now that I have free time on the weekend I've decided to occupy myself by (re-learning) C by reading K&R. In my day job (and night job) I use higher-level scripting languages, so C is just a curiosity to me.

One thing that stands out to me is that even simple code examples in K&R are sometimes pretty hard to read, and I'm putting unnecessary mental effort into parsing it. In fact, a lot of the code that you folks write on here is hard to read, even with the indentation restored.

When K&R was written, there was no such thing as IntelliSense. Programmers in the '70s were motivated to give brief names to all of their symbols. Brief names are easier to type over and over again, but the reader has to do quite a bit more work remembering what all those single-letter variables and abbreviated constant names are doing. I don't see why you all persist with this habit in the age of code editors with tab-completion.

Sometimes I think programmers take perverse pride in obfuscating their own code.

"I need to write a function that checks if a string is null-terminated.

"Maybe I should call it 'int is_null_terminated_string(char *string, int max_length)'?

"Nah, let's throw in some vague names and weird abbreviations so other programmers have to read the docs to figure out what this function does and what arguments they need to provide.

"Let's go with 'int null_check(char *str,int b_max)' That way no-one else will know what it means and they'll have to read the explanatory comment I put directly underneath the function name."
 
I don't see why you all persist with this habit in the age of code editors with tab-completion
Mostly because a lot of C development still happens in notepad.exe.

Depending on how regulated the industry is, depending on how old the shop is, some shops may still be compiling Fortran in Borland.

Being able to expend the mental effort to understand what programmers tend to call things and why, and pick meaning from context is invaluable when dealing with ancient and very particular code bases.

Rest assured things are getting easier, but occasionally you will still have to wade through manually supplying basic library functions that were written in notepad because that's how the aircraft simulator sister company in England does it...
 
I don't see why you all persist with this habit in the age of code editors with tab-completion
Mostly because a lot of C development still happens in notepad.exe.

Depending on how regulated the industry is, depending on how old the shop is, some shops may still be compiling Fortran in Borland.

Being able to expend the mental effort to understand what programmers tend to call things and why, and pick meaning from context is invaluable when dealing with ancient and very particular code bases.

Rest assured things are getting easier, but occasionally you will still have to wade through manually supplying basic library functions that were written in notepad because that's how the aircraft simulator sister company in England does it...
That makes complete sense, and yet I'm still disgusted. :D
 