
Calculability

Jarhyn (Wizard)
Joined: Mar 29, 2010
Messages: 14,632
Gender: Androgyne; they/them
Basic Beliefs: Natural Philosophy, Game Theoretic Ethicist
So, a recent bit of discussion here lit a spark in my mind to investigate and ponder on the nature of calculability. Of course, I have read next to nothing on the topic, just scratched perhaps at a wiki article? I'm not even sure I went that far.

I'm a lazy sod, after all.

But on that subject, it struck me that for something to be a number, to be real, perhaps not in the "math" jargon but actually *real*, it must be able to be expressed in some way. Its declaration must be possible by some means, even if not through "calculation".

So, off the top of my head, I can only come up with one type of way a number may be actually real, and not be calculated; it must be measured. But measured numbers have a limit to their meaningfulness, insofar as quantum numbers go: there is a maximum specific meaningful precision to the speed of light. And even of G.

So, is there a set that can be described, but not calculated, that is not merely "measured" and thus finite in complexity given a finite reference frame?
 
When dealing with measured values there is no significance beyond the accuracy of the values.

You need enough buffer digits so you do not lose accuracy in a chain of calculations, if that is what you are getting at. Before computers, when doing hand calculations, you had to figure out how far to carry it out. Or how many digits of a constant like PI to use.
It almost sounds like you are talking about Turing Machines and computability.

From a Theory Of Computation class I took, it is said that if a problem can be solved it must be Turing computable, assuming no limits on memory and time. In modern computer terms: if you cannot code it as an algorithm on a computer, it is not solvable.

An infinite number of operations on an infinite set would not be Turing computable, as I see it.

Measurements can not be infinite. A ruler is limited by the number of atoms that can be chained together. Electronic measurements are ultimately limited by the quantized electron.

Numbers are the measurement. Philosophically I do not see numbers as real other than something in our brains. You do not need numbers for a ruler. Just an arbitrary length.
 
... there is a maximum specific meaningful precision to the speed of light. And even of G.

So, is there a set that can be described, but not calculated, that is not merely "measured" and thus finite in complexity given a finite reference frame?

I'm not clear on the question. I think you're distinguishing between "mathematical numbers" and (the ratios of) physical measurements. Is that correct?

Nitpick: The speed of light in SI meters per SI second is known exactly!
Système international (d'unités) said:
The metre (meter) is currently defined as the length of the path traveled by light in a vacuum in 1/299,792,458 of a second.

The second is defined as being equal to the time duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the fundamental unperturbed ground-state of the caesium-133 atom.

Until 2019 the SI kilogram was a hold-out, being defined as the mass of a certain platinum-iridium cylinder in a certain climate-controlled French vault (which was slowly evaporating!). But just as the meter is now defined by fixing the speed of light to be a specific finite constant, so the kilogram is now defined as whatever it needs to be to make Planck's Constant = 6.62607015×10⁻³⁴ kg m²s⁻¹ exactly.
 
...
Nitpick: The speed of light in SI meters per SI second is known exactly!
...

Sure. But how long is a metre?

It's impossible to measure the speed of light, because it's a defined quantity. And the same applies to the duration of a second. But as a result, it's entirely possible to determine a more accurate length for the metre.
 
C is measurable, as is anything, via an arbitrary set of reference points. We have SI and fundamental definitions.

All quantifiable measurements are relative to a standard. SI is of value because anyone anywhere can derive the reference standards independently. Mass was the last item.

No measurements are absolute and all references have a repeatability-error band. An inherent uncertainty.

There are no absolutes. If I measure 1kg of potatoes in a store it just means it equals the kg standard.

C is measured within SI. Any reference system will do.

Velocity is distance/time. A car can be measured going 20km/hour. Photons in light are traveling at C in m/s.
 
...
Nitpick: The speed of light in SI meters per SI second is known exactly!
Système international (d'unités) said:
The metre (meter) is currently defined as the length of the path traveled by light in a vacuum in 1/299,792,458 of a second.

The second is defined as being equal to the time duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the fundamental unperturbed ground-state of the caesium-133 atom.
...

The meter is known exactly in terms of the speed of light and the frequency of the cesium atom, which are both assumed to be constants. But the speed of light, like the cesium atom, simply is what it is.
 
...
C is measurable, as is anything, via an arbitrary set of reference points. We have SI and fundamental definitions.
...
C is measured within SI. Any reference system will do.
...

c is NOT measurable. c is exactly 299792458 m s⁻¹, and no measurement can ever change that.

The metre is measurable. If you determine how quickly photons cross a given space, then you are measuring the distance travelled, not the speed of the photons.

Your intuition here is leading you astray. It sounds nonsensical; But it is true nonetheless. That's why we have metrologists, rather than just asking an engineer for his opinion.
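
To see the distinction with a toy calculation (Python, with entirely made-up lab numbers): once c is defined, a time-of-flight experiment determines a length, while "measuring the speed" can only hand back the definition.

Code:
c = 299_792_458                  # m/s: exact by definition, not measured

# Hypothetical lab numbers: a light pulse is timed crossing a bar
# that a workshop ruler says is roughly one metre long.
measured_time = 3.3356e-9        # seconds (a made-up measurement)

length = c * measured_time       # the experiment determines the length...
print(f"{length:.7f} m")         # ≈ 0.9999877 m

speed = length / measured_time   # ..."measuring the speed" just returns
print(round(speed) == c)         # the definition: True, by construction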
 
...

The meter is known exactly in terms of the speed of light and the frequency of the cesium atom, which are both assumed to be constants. But the speed of light, like the cesium atom, simply is what it is.

The speed of light isn't assumed to be constant; It is defined as constant. Something that Einstein demonstrated to be a sensible thing to do.
 
From relativity, C will appear constant regardless of how you measure it.

Obviously not practical.
I start a sand clock and flash a light. Someone at a distance sees the light and flashes a light. When you see the light, you stop the sand clock. The distance is 100 arm lengths away, an arm length being the length of the arm of the king. The second is a measure of change, but so is a sand clock.

Count the grains of sand and you have a velocity in grains/arm length. Seconds based on an atomic clock are arbitrary, but convenient and reproducible.

The key to SI is being reproducible anywhere. Even Mars.

SI is harmonized or normalized such that all fundamental units and derived units are consistent. That includes mass, time, and distance. It was not always so.
 
From relativity, C will appear constant regardless of how you measure it.

Obviously not practical.
Just because you think that what you believe is obvious is no reason to accept that it is true. Indeed, it should be a warning flag to you that you're about to make a gross blunder into Dunning-Kruger territory.
I start a sand clock and flash a light. Someone at a distance sees the light and flashes a light. When you see the light, you stop the sand clock. The distance is 100 arm lengths away, an arm length being the length of the arm of the king. The second is a measure of change, but so is a sand clock.

Count the grains of sand and you have a velocity in grains/arm length. Seconds based on an atomic clock are arbitrary, but convenient and reproducible.
Wrong. The velocity is 299792458 m s⁻¹. What you have measured is the length of your arms.
The key to SI is being reproducible anywhere. Even Mars.
That's certainly one of its strengths.
SI is harmonized or normalized such that all fundamental units and derived units are consistent. That includes mass, time, and distance. It was not always so.
True. But completely irrelevant to your understandable rookie error in assuming that it's possible to measure the speed of light. It's not possible. The speed of light is 299792458 m s⁻¹ regardless of any measurement you make. Your experiment measures the length of your unit of distance (using a low-accuracy measurement of time, and so probably giving a very crude result).

Metrology is neither a democracy, nor a philosophical free-for-all. Opinion is irrelevant with regard to matters of fact.
 
The meter is known exactly in terms of the speed of light and the frequency of the cesium atom, which are both assumed to be constants. But the speed of light, like the cesium atom, simply is what it is.

The speed of light isn't assumed to be constant; It is defined as constant. Something that Einstein demonstrated to be a sensible thing to do.

Special theory of relativity:
In physics, the special theory of relativity, or special relativity for short, is a scientific theory regarding the relationship between space and time. In Albert Einstein's original treatment, the theory is based on two postulates:

The laws of physics are invariant (that is, identical) in all inertial frames of reference (that is, frames of reference with no acceleration).
The speed of light in vacuum is the same for all observers, regardless of the motion of the light source or observer.

An axiom, postulate or assumption is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments.
 
C is empirically demonstrated to be constant across inertial frames, with the caveat that the speed changes in something other than a perfect vacuum.

Set up a speed trap in a perfect vacuum and use a laser to send a pulse through the trap. Within experimental accuracy you will measure C.

Some decades back now the Italians set up a long open-air test to measure the speed of a particle thought to travel at C. After fixing a problem, they measured C.

If I measure a kg I get a kg. If I measure the speed of light in a vacuum I get C. The difference with C is that theoretically nothing goes faster than C.

Keep in mind that the second is based on counting particles. Not much different than sand particles falling in gravity in an hourglass sand clock.
 
...
The speed of light in vacuum is the same for all observers, regardless of the motion of the light source or observer.

An axiom, postulate or assumption is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments.

Yes, exactly. Einstein demonstrated that taking a constant speed of light as axiomatic leads to theoretical predictions that are in keeping with observed reality - thereby rendering it sensible to define it as a constant.
 
C is empirically demonstrated to be constant across inertial frames, with the caveat that the speed changes in something other than a perfect vacuum.
NO!!

c is defined to be constant (and defined to apply to photons in a vacuum; the speed that light travels in other conditions isn't c), allowing us to accurately measure the length of the metre regardless of the inertial frame in which our experiment takes place.
Set up a speed trap in a perfect vacuum and use a laser to send a pulse through the trap. Within experimental accuracy you will measure C.
c is 299792458 m s⁻¹. Accuracy of any equipment cannot change that; No experiment could ever, even in principle, result in a different value.
Some decades back now the Italians set up a long open-air test to measure the speed of a particle thought to travel at C. After fixing a problem, they measured C.

If I measure a kg I get a kg. If I measure the speed of light in a vacuum I get C. The difference with C is that theoretically nothing goes faster than C.

Keep in mind that the second is based on counting particles. Not much different than sand particles falling in gravity in an hourglass sand clock.
The second is defined in that way. And the number of cycles in a second (9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom), like the number of metres per second in c, cannot be discovered to be different from the definition by some more accurate measurement or other; It's impossible by definition.

Any experimental measurement in this arena therefore provides a more accurate length for the metre.

And by the way, in the SI system, capitalisation matters. C is the SI symbol for the coulomb, the unit of electric charge. The constant denoting the speed of electromagnetic radiation in a vacuum is c, which must be lower case, and is conventionally italicised.
 
Just because you think that what you believe is obvious is no reason to accept that it is true. Indeed, it should be a warning flag to you that you're about to make a gross blunder into Dunning-Kruger territory.
Words to live by.

Wrong. The velocity is 299792458 m s⁻¹. What you have measured is the length of your arms.
The key to SI is being reproducible anywhere. Even Mars.
That's certainly one of its strengths.
Stuff and nonsense. SI is still based on the Earth, just like it was back when a meter was a forty millionth of the circumference through Paris. "The second is the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom.", so the definition goes. Well, a cesium-133 atom radiating where? Radiating on the Earth. Cesium radiates faster on Mars -- it's not as deep in a gravity well.

True. But completely irrelevant to your understandable rookie error in assuming that it's possible to measure the speed of light. It's not possible. The speed of light is 299792458 m s⁻¹ regardless of any measurement you make.
Of course it's possible to measure the speed of light -- that 299792458 number wasn't made up by philosophers; it was somebody's measurement of the speed of light. A meter wasn't always defined as the distance light goes in 1/299792458 second and it probably won't always be. We stopped defining a second as 1/86400 of a mean solar day because the Earth's rotation varies too much. Well, how fast cesium radiates on the Earth varies with the Earth's orbit; eventually that will be found to be too unpredictable for accurate timekeeping. New units of time and distance will be adopted; whether those make the speed of light a defined constant or a measured ratio will depend on the new definitions; those will be chosen based on future measurement technologies.

Metrology is neither a democracy, nor a philosophical free-for-all. Opinion is irrelevant with regard to matters of fact.
Definitions of units are temporary conventions, not facts. They can even be democratic. If you have trouble measuring the speed of light in meters per second, you could always measure it in meters per day -- a day is democratically defined as 86400 seconds or 86401, depending on some Frenchmen's measurements. :)
 
Old buddy Bilby, anyone can bandy facts, especially in the Internet age.

If a measurement system is based on a standard, then when you measure the standard you had better get the value of the standard back. That applies to time, voltage, mass, or C.

You can get secondary-standard voltage cells from NIST in this country to test, say, a voltmeter. C is a standard; when using a test setup to measure the velocity of light, you get back the velocity of light, if it is in a vacuum.

In common usage, light is a general term used to refer to all electromagnetic radiation across the spectrum.

In electronics we measure the speed of light all the time, but it is at a speed lower than C in a vacuum. A digital pulse traveling on a conductor on a circuit board travels at a velocity relative to C based on the dielectric constant of the PCB material. When I use an oscilloscope to measure the timing, the speed of light, of a signal in the US, it is traceable to NIST through the calibration company. If I measure C in a vacuum using the scope, I will get C. C is measurable; it happens all the time.

You are making the mistake of arguing terms, not understanding how things actually work.

You are hard pressed to argue standards and test with me.

Are you familiar with laser gauges and interferometers? Radio altimeters? Laser range finding? A laser gauge counts wavelengths as a measure of distance. An interferometer measures surface flatness of a mirror, also based on fractions of wavelengths. I am well aware of light and measurement. Not just academic.
 
There are several issues that one ought to try to untangle.
  1. Accumulation of roundoff errors in calculations with floating-point numbers.
  2. Run time of algorithms
  3. Whether something is computable

About the first one, that can be serious in cases where one has to subtract two nearly equal numbers.

Consider (1+x) - 1
That is equal to x, but for |x| << 1, one loses a lot of trailing digits.

That case is easy to optimize, but there are more difficult cases, like:
  1. sqrt(x+1) - 1
  2. 1 - cos(x)
  3. x - sin(x)
There are some tricks that one can use to avoid subtracting two nearly equal quantities.

For the first one, multiply and divide by sqrt(x+1) + 1 and simplify the numerator:
x/(sqrt(x+1) + 1)

One can do likewise with the second one, using some trigonometric identities:
sin(x)^2 / (1 + cos(x))
Or one can use such identities more directly: 2*sin(x/2)^2

For the third one, I recall a trick that someone used in a similar sort of expression:

Numerically integrate (1 - cos(t)) over t from 0 to x; the integral is exactly x - sin(x), and the integrand can be computed with the stable forms above.
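
A quick numerical sketch of the cancellation problem, in Python (any language with double-precision floats would show the same thing):

Code:
import math

x = 1e-12   # |x| << 1, so sqrt(x+1) and cos(x) are both very close to 1

# Naive forms: subtracting nearly equal quantities loses trailing digits.
print(math.sqrt(x + 1) - 1)        # ≈ 5.0004e-13: only the first few digits are right
print(1 - math.cos(x))             # 0.0: every digit lost

# Rewritten forms that avoid the subtraction entirely.
print(x / (math.sqrt(x + 1) + 1))  # ≈ 5e-13, correct to full precision
print(2 * math.sin(x / 2) ** 2)    # ≈ 5e-25, correct to full precision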
 
One can do such calculations by brute force, by using a lot of trailing digits, but that can be time-consuming, especially if one uses more digits than what one's hardware supports.
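
A sketch of that brute-force route, using Python's standard decimal module (one arbitrary-precision option among many) to carry the extra buffer digits in software:

Code:
from decimal import Decimal, getcontext

getcontext().prec = 50     # carry 50 significant digits through the calculation

x = Decimal("1e-12")
print((x + 1).sqrt() - 1)  # ≈ 4.99999999999875E-13: the buffer digits absorb
                           # the cancellation, but each operation is far
                           # slower than a hardware float operation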


The second issue is computational efficiency. This encompasses both space and time, the amount of memory an algorithm needs and the number of CPU cycles it needs. That is a large subject, so I will consider a classic case: sorting.

Numerous algorithms have been devised for doing that, and they vary widely. The simplest sorting algorithms are bubblesort and similar algorithms. These are easy to code, and they run in time O(n^2) for n items to sort. The fastest ones are mergesort, heapsort, and quicksort, with runtimes around O(n*log(n)). Their code complexity is much greater, and they can be beaten by bubblesort for small numbers of items.

Sorting algorithms work by comparing items and exchanging them as needed. But for large items it may be easier to do an index sort, sorting a list of indices of the items. One may go even further and cache the sort-comparison results.
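
A sketch of the index-sort and key-caching ideas in Python (the same trick works in any language):

Code:
def sort_key(item):
    """Stand-in for an expensive comparison key (hypothetical)."""
    return item.lower()

records = ["banana", "Apple", "cherry"]   # stand-ins for large records

# Index sort: rearrange a small list of indices, not the big items.
order = sorted(range(len(records)), key=lambda i: sort_key(records[i]))
print(order)                              # [1, 0, 2]
print([records[i] for i in order])        # ['Apple', 'banana', 'cherry']

# Key caching: compute each key exactly once, not on every comparison.
keys = [sort_key(r) for r in records]
order = sorted(range(len(records)), key=keys.__getitem__)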


NP-completeness is related to that. P is the set of all problems that can be solved in polynomial time, with solution time O(n^a) for some a. NP is the set of all problems whose solutions can be verified in that time. A famous open problem is whether or not P = NP. Is everything in NP also in P? There are three options:
  • Yes, P = NP.
  • There is an existence proof that some NP members are not in P, without exhibiting any such problem.
  • There is at least one specific member of NP that is proven not to be in P.
We have yet to demonstrate any of these three, though in practice, no polynomial-time algorithm is known for some problems in NP.

A problem is NP-hard if every problem in NP can be reduced to it in polynomial time, and a problem is NP-complete if it is NP-hard and also in NP.
 List of NP-complete problems has a big list.
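
To make "verified in polynomial time" concrete, here is a minimal Python sketch for subset-sum, one of the problems on that list. Finding a subset that sums to the target may take exponential time, but checking a proposed subset (a certificate) is linear:

Code:
def verify_subset_sum(numbers, target, certificate):
    """Check a claimed solution (a list of indices) in linear time."""
    return (len(set(certificate)) == len(certificate)            # no index reused
            and all(0 <= i < len(numbers) for i in certificate)  # indices valid
            and sum(numbers[i] for i in certificate) == target)

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, [2, 4]))  # 4 + 5 == 9: True
print(verify_subset_sum(nums, 9, [0, 1]))  # 3 + 34 != 9: False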
 
I think what it comes down to is that to solve a problem, it has to be done in a finite number of steps.

If a cake recipe has an infinite number of steps, the cake can never be made.

In Theory Of Computation we started with logic trees and graphs. This led to classes of problems that can not be solved with logic alone. One example was parsing embedded parentheses in a compiler. It requires what CS calls a pushdown automaton: a finite automaton with a push-down stack.
 
I now turn to the third criterion. Is something computable?

Wikipedia has a lot of stuff on that:
Also on theory of functions:

Combinational logic - Memoryless system. It has no memory of any of its previous state. Boolean operations on bits, for instance.

Finite-state machine - CL system with a state variable. Multiple variables are equivalent to a single compound variable.

Pushdown automaton - FSM with a last-in-first-out (LIFO) memory, a stack.

Turing machine - FSM with a memory tape that can be moved in either direction.

What can one do?

A pushdown automaton can parse a Context-free language, one defined with a Context-free grammar. That kind of grammar is a set of production rules of the form (symbol) -> (string of symbols).
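
As a toy illustration of why the stack matters, here is a Python recognizer for balanced parentheses, a context-free language that no finite-state machine can handle, since it must remember an unbounded nesting depth:

Code:
def balanced(s):
    """Recognize balanced parentheses with a pushdown-style counter.

    The counter stands in for a stack that only ever holds one kind
    of symbol: increment is a push, decrement is a pop.
    """
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1           # push
        elif ch == ")":
            if depth == 0:       # pop from an empty stack: reject
                return False
            depth -= 1           # pop
    return depth == 0            # accept only with an empty stack

print(balanced("(()())"))   # True
print(balanced("(()"))      # False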

Such a grammar is often written in Backus–Naur form. I will illustrate with how one writes a positive integer in decimal form.

<nonzero-digit> ::= 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
<digit> ::= 0 | <nonzero-digit>
<positive-integer> ::= <nonzero-digit> | <positive-integer> <digit>

The ::= means "is defined as" and the | separates alternate possibilities.

I will continue for all integers.
<nonnegative-integer> ::= 0 | <positive-integer>
<negative-integer> ::= - <positive-integer>
<integer> ::= <nonnegative-integer> | <negative-integer>

If leading zeros and -0 are allowable, then
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
<nonnegative-integer> ::= <digit> | <nonnegative-integer> <digit>
<integer> ::= - <nonnegative-integer> | <nonnegative-integer>
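
A small hand-written Python recognizer for that last grammar (a sketch, not machine-generated from the BNF):

Code:
def is_integer(s):
    """Recognize <integer> ::= - <nonnegative-integer> | <nonnegative-integer>,
    where <nonnegative-integer> is one or more <digit>s (leading zeros allowed)."""
    if s.startswith("-"):
        s = s[1:]               # consume the optional minus sign
    return len(s) > 0 and all(c in "0123456789" for c in s)

for text in ["42", "-007", "0", "-", "12a"]:
    print(text, is_integer(text))   # True, True, True, False, False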
 