
Calculability

A primitive recursive function is one with a limited set of control-flow statements: if-then-else, and loops whose maximum number of iterations is known before the loop is entered.

Most familiar algorithms can be expressed as PRFs.

A PRF has the nice property that its runtime is guaranteed to be bounded.
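The bounded-loop style can be sketched in Python, whose for-range loop fixes its repeat count on entry; the function name here is just an example:

```python
# A sketch of the primitive-recursive style: every loop has its repeat
# count fixed before it starts (a Python for-range loop has that property).

def factorial(n):
    result = 1
    for i in range(1, n + 1):   # bound n + 1 is known on entry
        result *= i
    return result

print(factorial(5))   # 120
```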


A general recursive one relaxes the constraint on number of times to loop. It can be arbitrarily large, and its maximum need not be known before entering the loop.
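The unbounded loop can be sketched with a Python while loop; the Collatz iteration is a classic case where no iteration bound is known in advance:

```python
# In contrast, a general-recursive computation may loop with no bound
# known up front -- a while loop. Nobody has proved how many steps the
# Collatz iteration takes in general.

def collatz_steps(n):
    steps = 0
    while n != 1:               # no bound on iterations known on entry
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))   # 111
```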

General recursive functions are equivalent to Turing machines, and in turn, to the lambda calculus: doing everything with functions.

I will illustrate some operations in the lambda calculus, but using "normal" function notation.

Boolean operations:
true(x,y) = x
false(x,y) = y
ifelse(x,a,b) = x(a,b)
and(x,y) = x(y,x)
or(x,y) = x(x,y)
not(x,a,b) = x(b,a)

not(true,a,b) = true(b,a) = b = false(a,b)
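These boolean encodings can be sketched directly in Python (the trailing underscores just avoid clashing with Python keywords):

```python
# Church-style booleans as two-argument selector functions,
# mirroring the definitions above. An illustration, not a library.

def true_(x, y): return x
def false_(x, y): return y

def ifelse(b, a, c): return b(a, c)
def and_(x, y): return x(y, x)
def or_(x, y): return x(x, y)
def not_(x, a, b): return x(b, a)

# not(true, a, b) = true(b, a) = b, i.e. it behaves like false(a, b):
print(not_(true_, "a", "b"))           # b
print(and_(true_, false_)("a", "b"))   # false_ selects the second arg: b
```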

For numbers, one uses Peano's axioms, getting the nonnegative integers as 0 with an arbitrary number of successors of it.

num(0,f,x) = x
num(1,f,x) = f(x)
num(2,f,x) = f(f(x))
num(3,f,x) = f(f(f(x)))
...
successor(n,f,x) = f(n(f,x))

Arithmetic:
plus(m,n,f,x) = m(f,n(f,x))
times(m,n,f,x) = m(g,x), where g(y) = n(f,y)
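A minimal Python sketch of these numeral operations; num, to_int, and the inner helper are names of my own choosing:

```python
# Church numerals: num(n) builds the numeral (apply f to x n times);
# to_int recovers an ordinary integer by counting applications.

def num(n):
    def numeral(f, x):
        for _ in range(n):
            x = f(x)
        return x
    return numeral

def successor(n): return lambda f, x: f(n(f, x))
def plus(m, n): return lambda f, x: m(f, n(f, x))
def times(m, n): return lambda f, x: m(lambda y: n(f, y), x)

def to_int(n): return n(lambda k: k + 1, 0)

print(to_int(plus(num(2), num(3))))    # 5
print(to_int(times(num(2), num(3))))   # 6
```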

One can express recursion with something called the Y combinator.
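A sketch of that idea: in an eagerly-evaluated language like Python the Y combinator itself recurses forever, so this uses its call-by-value variant, the Z combinator:

```python
# The Z combinator, the call-by-value form of the Y combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined without naming itself:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))   # 120
```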
 
When a computer evaluates an expression, it pushes a tokenized form of the expression onto a stack and works through it with a sequence of pushes and pops.

You can find C code on the net for an RPN (Reverse Polish Notation) calculator.

A PC is a TM with limited memory. In Turing's day, paper tape was a storage medium, as in teletype machines, so it is logical that he framed his model in the terms of his time.

For a problem to chew on using a stack: evaluate a*(b + c)*((f + g)*d + e) using push and pop on a stack.

You can start with something small like a + b - c.

It starts with a text string. You parse the string, assign memory locations to the variables (it can be an array), then push the operation sequence onto the stack.

There is FIFO, first in first out, and LIFO, last in first out; a stack is LIFO.
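A sketch of the exercise: convert infix to RPN with the shunting-yard algorithm, then evaluate the RPN with push and pop on a stack. The variable names and values are just examples:

```python
# Infix -> RPN (shunting-yard), then RPN evaluation with an explicit stack.

def to_rpn(tokens):
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    out, ops = [], []
    for t in tokens:
        if t in prec:
            # pop operators of equal or higher precedence first
            while ops and ops[-1] != '(' and prec[ops[-1]] >= prec[t]:
                out.append(ops.pop())
            ops.append(t)
        elif t == '(':
            ops.append(t)
        elif t == ')':
            while ops[-1] != '(':
                out.append(ops.pop())
            ops.pop()                  # discard the '('
        else:
            out.append(t)              # variable name
    while ops:
        out.append(ops.pop())
    return out

def eval_rpn(rpn, env):
    stack = []
    for t in rpn:
        if t in '+-*/':
            b, a = stack.pop(), stack.pop()
            stack.append({'+': a + b, '-': a - b, '*': a * b, '/': a / b}[t])
        else:
            stack.append(env[t])       # look up the variable's value
    return stack.pop()

env = {'a': 2, 'b': 3, 'c': 4}
rpn = to_rpn(['a', '+', 'b', '-', 'c'])   # ['a', 'b', '+', 'c', '-']
print(eval_rpn(rpn, env))                 # 2 + 3 - 4 = 1
```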
 
The funny thing about computable numbers is that they are countable: every computer program can be expressed as a finite sequence of digits, i.e. a finite whole number. But the set of real numbers is uncountable. So the vast majority of numbers that exist are not only irrational, but something we can't even calculate in any meaningful way.

How does this relate to the question raised in the OP, that there could be numbers that we can "measure" even if we can't calculate them? That depends on the idea that the universe is continuous and not discrete. We don't know if that's the case.
 
The most capable of these systems are Turing machines, general recursive functions, and the lambda calculus.

A little less than a century ago, it was proved that these three kinds of systems are mathematically equivalent. What a Turing machine can do, a general recursive function can do, and the lambda calculus can do, and likewise for any permutation of these three.

That is also true of what they cannot do.
  • There is no way to construct a lambda-calculus function that successfully tests for whether any two lambda-calculus functions are equivalent.
  • There is no way to construct a Turing machine that successfully tests for whether any Turing machine will or will not halt.
The latter result has a certain practical application: in full generality, it is impossible to tell if a function will or will not exit.

As to what makes computer hardware or a programming language Turing-complete, it is these criteria:
  • One must access the contents of an arbitrary-length array
  • One must have arbitrary flow of control
Strictly speaking, Turing completeness means access to an infinite amount of resources, but that limitation is usually ignored, and "Turing complete" means in practice "Turing complete except for finiteness".
 
The first proposed Turing-complete computer was Charles Babbage's Analytical Engine, first described in 1837, long before Alan Turing himself was born.

As to the first Turing-complete computer that was built and run, that is apparently the ENIAC in 1946. But it was programmed with patch cords, and the first Turing-complete stored-program computer was likely the Manchester Mark I or the EDSAC in 1949.

The first electronic components used were vacuum tubes, sometimes whimsically named glassfets, from their working much like a field-effect transistor. The next step was transistors, solid-state electronic switches. Compared to them, vacuum tubes were large, power-hungry, and short-lived. But they had to be developed and manufactured in quantity, something that happened over the late 1940's and early 1950's. The first partially-transistorized computer was the TRADIC in 1954, and it was soon followed by all-transistor computers.


So far, I have only mentioned discrete components, but what if they could be combined and manufactured as parts of super components?

The first step in that direction was printed-circuit boards, which became common during WWII and very successful afterward. In many electronic devices, most of the wiring is printed on circuit boards.

But can one print more than wires? Like transistors. The invention of the integrated circuit is a complicated story, because there were several participants in several places, and because their invention required solving several technical problems. It wasn't some lone inventor getting some "Aha!" moment. But one of the first practical ones was built in 1960, and they soon came to be used in computers.

The first CPU on a chip was the Intel 4004 in 1971, followed by the Intel 8008 in 1972. Either the 4004 or the 8008 was likely the first Turing-complete chip.
 
Going from hardware to programming languages, assembly languages are Turing-complete because they closely follow CPU instructions and data layouts. So it's high-level ones that we ought to look at. Most of them are Turing-complete. If it's possible to do a "go to" jump or a "while true" loop, and also access arbitrary-sized arrays, then it's Turing-complete to within resource limitations.

Of widely-used programming languages, I can think of only a few that are not Turing-complete: HTML, CSS, and SQL.


Douglas Hofstadter's 1979 book "Gödel, Escher, Bach: an Eternal Golden Braid" discusses computability and a variety of related issues without getting very technical.

At one point, he introduces three toy programming languages. "BlooP, FlooP, and GlooP are not trolls, talking ducks, or the sounds made by a sinking ship -- they are three computer languages, each one with its own special purpose." (Bloop Floop And Gloop)
  • BLooP - primitive recursive - every loop has a maximum number of repeats specified before entering it
  • FLooP - general recursive - equivalent to general Turing machines and to the lambda calculus
  • GLooP - can compute anything
There are still problems FlooP cannot solve, and Hofstadter proposed a mythical language, GlooP, that could solve them. But then he concludes: In fact it is widely believed that there cannot be any more powerful language for describing calculations than languages that are equivalent to FlooP. This hypothesis was formulated in the 1930's by two people, independent of each other: Alan Turing ... and Alonzo Church, one of the eminent logicians of this century. It is called the "Church-Turing Thesis". If we accept this thesis, we have to conclude that GlooP is a myth -- there are no restrictions to remove in FlooP, no ways to increase its power by "unshackling" it, as we did BlooP.

At that site are possible GLooP functions:
  • Pimc - "Parallel Infinite MapCar" - apply a function to every member of a list
  • Pifl - "Parallel Infinite Filter" - select list members that give a true value for a function applied to them
  • Pire - "Parallel Infinite Reduce" - use a function to combine all the values
MapCar is a Lisp function that applies the function in its first arg to every member of the list in its second arg, like the "map" function of Python and Mathematica.
 
Here is how these three functions work, all written in Python:
Code:
def Pimc(f, xlist):
    # "Parallel Infinite MapCar"
    reslist = []
    for x in xlist:
        reslist.append(f(x))
    return reslist
# Python built-in: map(f, xlist)


def Pifl(f, xlist):
    # "Parallel Infinite Filter"
    reslist = []
    for x in xlist:
        if f(x): reslist.append(x)
    return reslist
# Python built-in: filter(f, xlist)


def Pire(f, xlist, resinit):
    # "Parallel Infinite Reduce"
    res = resinit
    for x in xlist:
        res = f(res, x)
    return res
# Python function: functools.reduce(f, xlist, resinit)
#
# Without the initial value: use the first list member as that value,
# then iterate over the rest of the list:
# functools.reduce(f, xlist) = functools.reduce(f, xlist[1:], xlist[0])
For example, one can do a Cartesian or outer product of two lists with:
Code:
import functools

def lsxsc(lst, scl): return list(map(lambda x: (x, scl), lst))
def lsxls(lst1, lst2): return list(map(lambda x: lsxsc(lst1, x), lst2))
def flatten(lst): return functools.reduce(lambda x, y: x + y, lst, [])
def flatouter(lst1, lst2): return flatten(lsxls(lst1, lst2))

# Gives the same pairs as itertools.product(lst1, lst2), though with the
# second list's index varying slowest

An interesting issue with the pi* functions is what they can compute if one restricts oneself to some cardinality of infinite sets, like countability.
 
The funny thing about computable numbers is that they are countable: every computer program can be expressed as a finite sequence of digits, i.e. a finite whole number. But the set of real numbers is uncountable. So the vast majority of numbers that exist are not only irrational, but something we can't even calculate in any meaningful way.

How does this relate to the question raised in the OP, that there could be numbers that we can "measure" even if we can't calculate them? That depends on the idea that the universe is continuous and not discrete. We don't know if that's the case.

I do not understand what you mean. How do you measure a number? A number is the measure of something.

Any measure is ultimately finite, limited by some form of quantization.

Numbers are a quantization. If you have a 32-bit fractional part, the resolution is 1/2^32.
If you calculate by hand to 3 decimal places, the resolution or quantization is .001.

Real numbers are not 'real' they are a useful theoretical abstraction.
 
The funny thing about computable numbers is that they are countable: every computer program can be expressed as a finite sequence of digits, i.e. a finite whole number. But the set of real numbers is uncountable. So the vast majority of numbers that exist are not only irrational, but something we can't even calculate in any meaningful way.

How does this relate to the question raised in the OP, that there could be numbers that we can "measure" even if we can't calculate them? That depends on the idea that the universe is continuous and not discrete. We don't know if that's the case.

I do not understand what you mean. How do you measure a number? A number is the measure of something.

Any measure is ultimately finite, limited by some form of quantization.

Numbers are a quantization. If you have a 32-bit fractional part, the resolution is 1/2^32.
If you calculate by hand to 3 decimal places, the resolution or quantization is .001.

Real numbers are not 'real' they are a useful theoretical abstraction.
If the universe were actually continuous, we might have a fundamental constant or even just a position of some particle that is not computable, but could be measurable to an arbitrary degree of accuracy. We don't actually have to do the measurement: we could refer to the quantity as an abstraction. "That particle's position over there".

Obviously if real numbers are just a convenient abstraction, and the universe is discrete instead of continuous, then this isn't possible.
 
The funny thing about computable numbers is that they are countable: every computer program can be expressed as a finite sequence of digits, i.e. a finite whole number. But the set of real numbers is uncountable. So the vast majority of numbers that exist are not only irrational, but something we can't even calculate in any meaningful way.
The computable numbers are those that can be approximated to arbitrary precision in a finite number of steps with a Turing machine. They include every (real) algebraic number, as well as all the (real) transcendental numbers that we typically work with, numbers like e and pi.
How does this relate to the question raised in the OP, that there could be numbers that we can "measure" even if we can't calculate them? That depends on the idea that the universe is continuous and not discrete. We don't know if that's the case.
The demonstration of their presence is an existence proof: it shows that uncomputable numbers exist without exhibiting any of them.

Some numbers are known to be uncomputable, however, like Chaitin's constant, or more precisely, Chaitin's family of constants.

Since the set of all Turing machines is countable, one can do a bijection between positive integers and Turing machines. Whether or not a Turing machine will halt can be represented as a bit, a binary digit.

Now construct a number between 0 and 1 whose nth binary digit is whether or not the Turing machine for n will halt.

That number is a member of Chaitin's family of constants; the others correspond to other orderings of the Turing machines. Since there is no Turing machine that takes n and emits whether or not machine n will halt, this number is uncomputable.
 
Words to live by.

Wrong. The velocity is 299792458 m/s. What you have measured is the length of your arms.
That's certainly one of its strengths.
Stuff and nonsense. SI is still based on the Earth, just like it was back when a meter was a forty millionth of the circumference through Paris. "The second is the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom.", so the definition goes. Well, a cesium-133 atom radiating where? Radiating on the Earth. Cesium radiates faster on Mars -- it's not as deep in a gravity well.

No, Cesium doesn't run faster on Mars. Mars is a different reference frame. Cesium vibrates and radiates and does all that stuff at exactly the same rate on Mars, in Mars time.

Remember no preferred reference frame?

True. But completely irrelevant to your understandable rookie error in assuming that it's possible to measure the speed of light. It's not possible. The speed of light is 299792458 m/s regardless of any measurement you make.
Of course it's possible to measure the speed of light -- that 299792458 number wasn't made up by philosophers; it was somebody's measurement of the speed of light. A meter wasn't always defined as the distance light goes in 1/299792458 second and it probably won't always be. We stopped defining a second as 1/86400 of a mean solar day because the Earth's rotation varies too much. Well, how fast cesium radiates on the Earth varies with the Earth's orbit; eventually that will be found to be too unpredictable for accurate timekeeping. New units of time and distance will be adopted; whether those make the speed of light a defined constant or a measured ratio will depend on the new definitions; those will be chosen based on future measurement technologies.

Metrology is neither a democracy, nor a philosophical free-for-all. Opinion is irrelevant with regard to matters of fact.
Definitions of units are temporary conventions, not facts. They can even be democratic. If you have trouble measuring the speed of light in meters per second, you could always measure it in meters per day -- a day is democratically defined as 86400 seconds or 86401, depending on some Frenchmen's measurements. :)

I believe the rules the Frenchmen have set out for themselves allow the day to be 86402 seconds should that be necessary, and explicitly prefer a June 30 or December 31 with 86402 seconds to any other date with 86401.

On a more serious note, as long as you define the day in terms of seconds, you're still not measuring c - you're doing a sanity check of your calculations. You would need to define the solar day independently, as the time it takes the sun from zenith to zenith in a specific location (and similarly the metre) before you could measure c. That'll buy you a hell of a lot of variation, from the annual based on Earth's varying velocity in its elliptical orbit, to the seasonal because falling leaves or snow change the mass distribution, to the decadal acceleration from melting glaciers, to the multi-million-year deceleration due to the moon's tidal forces.

It's obvious why we no longer base the second on the day. It's a convention, but it's not an arbitrary one. A panel of experts with late 20th century knowledge and technology available had to reach this or a very similar conclusion, or they would have failed at their job.
 
Now I know what a countable number is. Thanks.

I still don't see what the OP is about.
 
No, Cesium doesn't run faster on Mars. Mars is a different reference frame. Cesium vibrates and radiates and does all that stuff at exactly the same rate on Mars, in Mars time.

Remember no preferred reference frame?

True. But completely irrelevant to your understandable rookie error in assuming that it's possible to measure the speed of light. It's not possible. The speed of light is 299792458 m/s regardless of any measurement you make.
Of course it's possible to measure the speed of light -- that 299792458 number wasn't made up by philosophers; it was somebody's measurement of the speed of light. A meter wasn't always defined as the distance light goes in 1/299792458 second and it probably won't always be. We stopped defining a second as 1/86400 of a mean solar day because the Earth's rotation varies too much. Well, how fast cesium radiates on the Earth varies with the Earth's orbit; eventually that will be found to be too unpredictable for accurate timekeeping. New units of time and distance will be adopted; whether those make the speed of light a defined constant or a measured ratio will depend on the new definitions; those will be chosen based on future measurement technologies.

Metrology is neither a democracy, nor a philosophical free-for-all. Opinion is irrelevant with regard to matters of fact.
Definitions of units are temporary conventions, not facts. They can even be democratic. If you have trouble measuring the speed of light in meters per second, you could always measure it in meters per day -- a day is democratically defined as 86400 seconds or 86401, depending on some Frenchmen's measurements. :)

I believe the rules the Frenchmen have set out for themselves allow the day to be 86402 seconds should that be necessary, and explicitly prefer a June 30 or December 31 with 86402 seconds to any other date with 86401.

On a more serious note, as long as you define the day in terms of seconds, you're still not measuring c - you're doing a sanity check of your calculations. You would need to define the solar day independently, as the time it takes the sun from zenith to zenith in a specific location (and similarly the metre) before you could measure c. That'll buy you a hell of a lot of variation, from the annual based on Earth's varying velocity in its elliptical orbit, to the seasonal because falling leaves or snow change the mass distribution, to the decadal acceleration from melting glaciers, to the multi-million-year deceleration due to the moon's tidal forces.

It's obvious why we no longer base the second on the day. It's a convention, but it's not an arbitrary one. A panel of experts with late 20th century knowledge and technology available had to reach this or a very similar conclusion, or they would have failed at their job.

Incidentally, since Earth is rotating faster now than at any time in the last 50 years, and the current average day is in fact slightly shorter than 86400 seconds, they are deliberating whether we may soon need to introduce a negative leap second, giving us an 86399-second day.
 
No, Cesium doesn't run faster on Mars. Mars is a different reference frame. Cesium vibrates and radiates and does all that stuff at exactly the same rate on Mars, in Mars time.

Remember no preferred reference frame?
I remember no preferred inertial reference frame. Doesn't necessarily apply to accelerated reference frames -- that's the whole reason the "twin paradox" isn't a real paradox.

Definitions of units are temporary conventions, not facts. They can even be democratic. If you have trouble measuring the speed of light in meters per second, you could always measure it in meters per day -- a day is democratically defined as 86400 seconds or 86401, depending on some Frenchmen's measurements. :)

I believe the rules the Frenchmen have set out for themselves allow the day to be 86402 seconds should that be necessary, and explicitly prefer a June 30 or December 31 with 86402 seconds to any other date with 86401.

On a more serious note, as long as you define the day in terms of seconds, you're still not measuring c - you're doing a sanity check of your calculations. You would need to define the solar day independently, as the time it takes the sun from zenith to zenith in a specific location
Which is in essence exactly what those Frenchmen are doing, modulo some qualifiers about averaging over long periods and rounding to integers.

(and similarly the metre) before you could measure c.
And that too was done, back when c was measured; and it may be again, when the conventions change again. To propose that it's impossible to measure c is the same thing as proposing that back when a second was an 86400th of a mean solar day and a meter was the distance between two scratches on a metal bar, the physicists who identified 299792458 as the relevant ratio were actually measuring the inaccuracy in the positions of the scratches. It's ludicrous.

That'll buy you a hell of a lot of variation, from the annual based on Earth's varying velocity in its elliptical orbit, to the seasonal because falling leaves or snow change the mass distribution, to the decadal acceleration from melting glaciers, to the multi-million-year deceleration due to the moon's tidal forces.

It's obvious why we no longer base the second on the day. It's a convention, but it's not an arbitrary one. A panel of experts with late 20th century knowledge and technology available had to reach this or a very similar conclusion, or they would have failed at their job.
Well, sure; but that's just a parochial fact about the state of late 20th century knowledge and technology. When 25th century people are zipping around the solar system and relying on signals from the Interplanetary Positioning System to navigate their space ships, and the observers on French Callisto are tasked with making decisions about leap microseconds, which have to be subtracted from Coordinated Mars Time several times per hour to keep it in sync with Earth, the fact that from a certain philosophical viewpoint, cesium vibrates and radiates and does all that stuff at exactly the same rate on Mars is unlikely to be allowed to overrule user convenience. I expect we will either stop using cesium radiation to define seconds, or else we'll nail it to a particular reference frame. If the latter, it probably won't be Earth. It will either be the most stable one we can find, Neptune maybe, or else it will be a theoretical extrapolation, our best estimate for cesium radiating in the middle of some intergalactic void.
 
I remember no preferred inertial reference frame. Doesn't necessarily apply to accelerated reference frames -- that's the whole reason the "twin paradox" isn't a real paradox.

I believe the rules the Frenchmen have set out for themselves allow the day to be 86402 seconds should that be necessary, and explicitly prefer a June 30 or December 31 with 86402 seconds to any other date with 86401.

On a more serious note, as long as you define the day in terms of seconds, you're still not measuring c - you're doing a sanity check of your calculations. You would need to define the solar day independently, as the time it takes the sun from zenith to zenith in a specific location
Which is in essence exactly what those Frenchmen are doing, modulo some qualifiers about averaging over long periods and rounding to integers.

Integer what? Integer seconds I believe. So measuring c in meters per year, or miles per day, is only calibrating your calculator or doing a sanity check of your algorithm.

(and similarly the metre) before you could measure c.
And that too was done, back when c was measured; and it may be again, when the conventions change again. To propose that it's impossible to measure c is the same thing as proposing that back when a second was an 86400th of a mean solar day and a meter was the distance between two scratches on a metal bar, the physicists who identified 299792458 as the relevant ratio were actually measuring the inaccuracy in the positions of the scratches. It's ludicrous.

Only if you think that our "metre" and their "metre" refer to the same entity. I don't think they do. The metre, as defined today, didn't exist then. What existed were various other entities from which it collectively inherited its name and approximate value.

So, incidentally, the physicists who identified 299792458 as the relevant ratio were actually measuring in a different measuring system. Our meter can only be measured against c.

That'll buy you a hell of a lot of variation, from the annual based on Earth's varying velocity in its elliptical orbit, to the seasonal because falling leaves or snow change the mass distribution, to the decadal acceleration from melting glaciers, to the multi-million-year deceleration due to the moon's tidal forces.

It's obvious why we no longer base the second on the day. It's a convention, but it's not an arbitrary one. A panel of experts with late 20th century knowledge and technology available had to reach this or a very similar conclusion, or they would have failed at their job.
Well, sure; but that's just a parochial fact about the state of late 20th century knowledge and technology. When 25th century people are zipping around the solar system and relying on signals from the Interplanetary Positioning System to navigate their space ships, and the observers on French Callisto are tasked with making decisions about leap microseconds, which have to be subtracted from Coordinated Mars Time several times per hour to keep it in sync with Earth, the fact that from a certain philosophical viewpoint, cesium vibrates and radiates and does all that stuff at exactly the same rate on Mars is unlikely to be allowed to overrule user convenience. I expect we will either stop using cesium radiation to define seconds, or else we'll nail it to a particular reference frame. If the latter, it probably won't be Earth. It will either be the most stable one we can find, Neptune maybe, or else it will be a theoretical extrapolation, our best estimate for cesium radiating in the middle of some intergalactic void.

Yes, we might end up doing something like this. We also end up pretending the day has the same number of seconds throughout the year (and occasionally one more a few days after the solstices), which means solar noon fluctuates by what? - I think half an hour - in clock time. That doesn't make it real.
 
To propose that it's impossible to measure c is the same thing as proposing that back when a second was an 86400th of a mean solar day and a meter was the distance between two scratches on a metal bar, the physicists who identified 299792458 as the relevant ratio were actually measuring the inaccuracy in the positions of the scratches. It's ludicrous.

Only if you think that our "metre" and their "metre" refer to the same entity. I don't think they do. The metre, as defined today, didn't exist then. What existed were various other entities from which it collectively inherited its name and approximate value.
No, they don't refer to the same entity; but I don't think that's the right way to look at it. To measure the speed of light isn't to measure the ratio of "meters" to "seconds"; it's to measure the ratio of distance to time. When Romer measured the speed of light with only about 25% inaccuracy, the fact that he was doing it a hundred-odd years before meters were invented doesn't mean he was measuring a different phenomenon from the one Foucault measured in meters per second to within 1%.
 
I still don't see what the OP is about.
That's why I tried to break down the problem in an earlier post.

Floating-point roundoff errors? Algorithm runtime and memory consumption? Mathematical impossibility?

Another issue is hardware vs. software numbers. What can be directly represented with one's hardware? What instead needs several hardware numbers to represent it?

I'll first look at the hardware side. For the first two or three decades, computers used a variety of numbers of bits and data formats. But CPU-chip computers have been much more restricted, to power-of-2 data sizes, twos-complement binary integers, and IEEE 754 floating-point numbers.

On the software side, one can have many more digits than what one's hardware can represent, but one has to store those digits as separate hardware numbers. This is arbitrary-precision arithmetic or "bignum" arithmetic. Some programming languages handle integers as bignums, like Perl, Python, and Ruby. Computer-algebra software also automatically does bignum integers, software like Mathematica. One can also get bignums in add-on libraries, like GNU Multiple Precision, and Oracle includes its BigInteger library with Java.

One can also do bignum floats, but doing so requires setting some maximum number of digits. Mathematica can do bignum floats, for instance.
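A quick illustration in Python, whose integers are bignums by default; the standard library's decimal module is one way to get bignum floats at a chosen precision:

```python
# Python integers grow in digits rather than overflowing.
n = 2 ** 200
print(len(str(n)))     # 61 digits
print(n + 1 - n)       # 1 -- exact, no rounding

# Bignum floats need a chosen maximum precision:
from decimal import Decimal, getcontext
getcontext().prec = 50                    # 50 significant digits
print(Decimal(1) / Decimal(7))
```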
 
To propose that it's impossible to measure c is the same thing as proposing that back when a second was an 86400th of a mean solar day and a meter was the distance between two scratches on a metal bar, the physicists who identified 299792458 as the relevant ratio were actually measuring the inaccuracy in the positions of the scratches. It's ludicrous.

Only if you think that our "metre" and their "metre" refer to the same entity. I don't think they do. The metre, as defined today, didn't exist then. What existed were various other entities from which it collectively inherited its name and approximate value.
No, they don't refer to the same entity; but I don't think that's the right way to look at it. To measure the speed of light isn't to measure the ratio of "meters" to "seconds"; it's to measure the ratio of distance to time. When Romer measured the speed of light with only about 25% inaccuracy, the fact that he was doing it a hundred-odd years before meters were invented doesn't mean he was measuring a different phenomenon from the one Foucault measured in meters per second to within 1%.

Sure, you CAN measure the speed of light in earth orbit diameters per sidereal day. Just not in metres per second. And given that we have decided to use the metre as the base unit for length, doing so does in fact amount to measuring the Earth's orbit.
 
The resolution of n bits is 1/2^n if using binary.

It is essentially digitizing the number 1.

Software uses floating-point routines that operate at the bit level. PC processors have hardware floating-point units built in to speed up operations.

If you type in 1.234567, the fractional part can be exact only if it is a sum of the weighted binary fractions. For 8 bits the lsb is 1/2^8. The displayed decimal number is an interpretation of the underlying binary storage.

for 8 bits, 1/2^8 = .0039...

the digital value is

b7*(2^7)*lsb + b6*(2^6)*lsb + b5*(2^5)*lsb + b4*(2^4)*lsb + b3*(2^3)*lsb + b2*(2^2)*lsb + b1*(2^1)*lsb + b0*(2^0)*lsb, where each b is 0 or 1

Addition and subtraction are straightforward. Multiplication and division are more complicated.
10000000 + 01000000 = 2^7 * lsb + 2^6 * lsb = 0.75.

Usually double precision has an lsb small enough not to affect accuracy.
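A small Python sketch of this quantization, assuming an 8-bit fractional part; the function name is just an example:

```python
# An 8-bit fractional part has lsb = 1/2**8, and every stored value
# is an integer multiple of it.

BITS = 8
LSB = 1 / 2**BITS        # 0.00390625

def quantize(x):
    """Round x (in [0, 1)) to the nearest representable 8-bit fraction."""
    return round(x * 2**BITS) / 2**BITS

print(LSB)                  # 0.00390625
print(quantize(0.7))        # 0.69921875 (179/256, the nearest multiple)
print(0.5 + 0.25)           # 0.75 -- the bit-pattern sum 10000000 + 01000000
```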
 
The cardinality of a set is its number of members or elements. For finite sets, the cardinalities are positive integers, and for the empty set, 0. One can show that finite-set cardinalities obey Peano's axioms of arithmetic.

Cardinality of set A = card(A) = |A|
|{}| = 0
|{a}| = 1
|{a,b}| = 2
...

|union(A,B)| + |intersection(A,B)| = |A| + |B|

Outer products and functions
|{all of (a,b) for a in A and b in B}| = |A|*|B|
|{all n-tuples of elements of A}| = |A|^n
|{all functions of B to A}| = |A|^|B|

The power set of a set is the set of all subsets of a set.
|Power set of A| = 2^|A|
Each subset can be expressed as a membership predicate, a function that is either true or false.

Also interesting are permutations: how many self-bijections a set has. For set A, it is |A|!
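These identities can be spot-checked for small finite sets with Python's itertools; the sets A and B here are just examples:

```python
# Spot-checking the cardinality identities above.
from itertools import product, permutations, chain, combinations
import math

A = {'a', 'b', 'c'}
B = {'x', 'y'}

# |A x B| = |A| * |B|
assert len(list(product(A, B))) == len(A) * len(B)

# |{all n-tuples of elements of A}| = |A|^n
n = 2
assert len(list(product(A, repeat=n))) == len(A) ** n

# |power set of A| = 2^|A|
subsets = list(chain.from_iterable(combinations(A, k) for k in range(len(A) + 1)))
assert len(subsets) == 2 ** len(A)

# number of self-bijections = |A|!
assert len(list(permutations(A))) == math.factorial(len(A))
```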
 