• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

1 + 2 + 3 + 4 + .... = -1/12 ? (infinite series)

for 0.9999... at what decimal place would it round up or converge to 1? 0.555... to 0.6?

1/3 is exact. 0.3333... is not.


0.3333... does not exist; it is a useful concept. There is no possible physical representation of 0.3333.... There is for 1/3. 0.333... is an approximation to 1/3.

On a calculator take 1/0.3, 1/0.33 … until you use all the digits and see what happens.
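You can run that calculator experiment exactly, rather than within the limits of a calculator display. A minimal Python sketch using exact rationals:

```python
from fractions import Fraction

# Reciprocals of 0.3, 0.33, 0.333, ... computed exactly.
# Each truncation of 0.333... has a reciprocal a little above 3,
# and the excess shrinks by roughly a factor of 10 per extra digit.
for n in range(1, 8):
    truncated = Fraction(int("3" * n), 10 ** n)   # e.g. 333/1000
    recip = 1 / truncated                         # exactly 3 + 3/(10**n - 1)
    print(f"1/0.{'3' * n} = {float(recip):.10f}")
```

No finite truncation ever has reciprocal exactly 3; only the full repeating decimal, i.e. 1/3 itself, does.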
 
Poetry, inkblots, and modern art are subject to personal interpretation. "I see a fruit tree. You see female genitals. We're both right! Art is in the eye of the beholder."

It seems that some in this thread treat mathematical notation as inkblots, with differing personal interpretations equally valid! This is not the viewpoint of mathematicians!

What is " 0.33333.... "? A number? A series? A picture of female genitalia? A prescription for pushing buttons on your smartphone's calculator app? All interpretations are equally valid? :)

There is a certain real number, one of whose names is "1/3", which can also be denoted by its representation as a decimal fraction, namely " 0.3 " but where the underlining of the "3" should be an over-line instead. I (and some others) don't even know how to produce an over-lined "3" in Unicode so we write " 0.33333... " instead, which in context is known to denote that decimal fraction consisting of a decimal point followed by an endless sequence of 3's. Thus " 0.33333... " is just another synonym for " 1/3 ".

There is also an infinite series which can be written " .3 + .03 + .003 + .0003 + ... " That infinite series happens to have 1/3 as its (absolutely) convergent sum but it isn't actually a synonym of " 0.33333... " any more than "2+2" is a synonym of "4". " 2+2 " is an expression which can be evaluated to yield 4, just as the series " .3 + .03 + .003 + .0003 + ... " can be evaluated to yield 1/3 (or, equivalently, 0.33333...).
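The evaluation of that series can be checked with exact rational arithmetic. A sketch in Python, using the standard geometric-series formula a/(1-r):

```python
from fractions import Fraction

a, r = Fraction(3, 10), Fraction(1, 10)    # the series .3 + .03 + .003 + ...

limit = a / (1 - r)                        # sum of the infinite series
print("sum of series:", limit)             # 1/3

# Partial sums s_n = a*(1 - r**n)/(1 - r) close in on that limit:
for n in (1, 2, 5, 10):
    s_n = a * (1 - r ** n) / (1 - r)
    print(f"s_{n} = {s_n}   (gap to 1/3: {limit - s_n})")
```

The gap after n terms is exactly (1/3)/10^n, which vanishes only in the limit.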

Period. That's P.e.r.i.o.d. With a P.

Hope this helps, but that cause is looking increasingly doubtful. :)
 
In the physical sciences, the limit to the number of digits is currently 13, I believe. It is the number of digits in the most accurate measurements we take (i.e. calibration of all measurements from the primary constants, e.g. the speed of light).

So, in physics 0.99999999999999 (14 nines) is 1. Any calculation giving an answer with more than 13 digits must be rounded. Obviously plenty of measurements are far less precise than that. Heck, 0.99 might equal 1 sometimes. In fact the only string of nines that can't be rounded to 1 is a single nine (i.e. 0.9), as it has the same number of digits.
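That rounding claim is easy to illustrate (a Python sketch; the 13-digit cutoff is just the figure asserted above, not any standard):

```python
# Fourteen nines rounded to 13 decimal places: the final 9 carries
# all the way through, leaving exactly 1.0.
x = 0.99999999999999        # 14 nines
print(round(x, 13))         # 1.0

# A single nine rounded to one decimal place has nowhere to carry:
print(round(0.9, 1))        # 0.9
```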

It does make for an interesting philosophical question: in a mathematical system where 0.9999... is not 1, is it that 1/3 × 3 ≠ 1, or that 1/3 ≠ 0.3333...?
 
The practical problem is that all math operations are finite and truncated, whether performed digitally or by hand.
No, they aren't.

Mathematics is quite able to handle infinities.

Arithmetic might be limited in the way you assert, for a given definition of arithmetic; But mathematics certainly is not.
for 0.9999.. at what decimal place would it round up or converge to 1?
You wouldn't. It's only true of the infinite, i.e. unrounded, decimal.
1/3 is exact. 0.3333... is not.

Yes, it is. They are the same number expressed using two different notations.
 
1/3 + 1/3 + 1/3 = exactly 1.
Adding 0.333... + 0.333... + 0.333... for any finite number of digits will never give exactly 1.

1/0.333... for any number of digits will never equal exactly 3.
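Exact arithmetic makes the finite-digit claim precise, and also shows why it stops applying once the digits are infinite. A Python sketch:

```python
from fractions import Fraction

# Sum three copies of the n-digit truncation 0.333...3, exactly.
# The total is 0.999...9, which misses 1 by exactly 10**-n.
for n in (1, 3, 6, 12):
    third = Fraction(int("3" * n), 10 ** n)
    shortfall = 1 - 3 * third
    print(f"n={n:2d}: 3 x 0.{'3' * n} misses 1 by {shortfall}")

# The shortfall 10**-n shrinks to 0 as n grows without bound, which
# is why the infinite decimal 0.333... is exactly 1/3.
```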

People seem to focus on 0.999... Does 0.666... become 0.7?

The floating-point standard defines how conforming software rounds. 0.9999999999 out to the maximum number of digits could be rounded up to 1. Or the software could flag an overflow/underflow exception. In a spreadsheet do 1/3 and see what happens.
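The spreadsheet experiment can be reproduced with plain IEEE-754 doubles. A Python sketch of what actually gets stored for 1/3:

```python
from fractions import Fraction

x = 1 / 3
print(x)                              # shortest decimal that round-trips

# The stored double is the nearest binary fraction to 1/3, not 1/3:
print(Fraction(x))
print(Fraction(x) == Fraction(1, 3))  # False

# Yet multiplying back by 3 happens to round to exactly 1.0
# under round-to-nearest-even:
print(x * 3 == 1.0)                   # True
```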

In matrix operations like the Fast Fourier Transform, with a lot of repeated multiplication, how you round and truncate can matter.
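Error build-up from repeated operations is visible even without an FFT. A minimal Python sketch:

```python
import math

data = [0.1] * 1000

# Naive left-to-right addition rounds after every step, and the
# rounding errors drift the total away from 100.
naive = sum(data)
print(naive)                 # a hair below 100.0

# math.fsum keeps the running sum exact internally and rounds once:
exact = math.fsum(data)
print(exact)                 # 100.0
print(naive == exact)        # False
```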

The most sensitive instrument I used went down to femto. For me the largest number of decimal places was 6 in some cases. The number of decimal digits has to exceed the data by a few digits to preserve accuracy.
 
If you want to understand the issues with finite arithmetic, get a book on numerical analysis or google it.
 
1/3 + 1/3 + 1/3 = exactly 1.
Adding 0.333... + 0.333... + 0.333... for any finite number of digits will never give exactly 1.
WTF are you on about "any number of digits"??

The "..." tells you that there are INFINITE digits. Not any finite number, but infinity.

Sure, if there were any finite number of digits, you would be right. But there isn't, and you're not.
People seem to focus on 0.999... Does 0.666... become 0.7?
No. It's 2/3. And nothing 'becomes' anything; This isn't a mathematical operation, it's an identity.
The floating-point standard defines how conforming software rounds. 0.9999999999 out to the maximum number of digits could be rounded up to 1. Or the software could flag an overflow/underflow exception. In a spreadsheet do 1/3 and see what happens.
Rounding isn't relevant. Nothing is being rounded here; rounding is a way to reduce the number of digits to a defined (im)precision.

0.999... doesn't ROUND to 1, it IS 1.
In matrix operations like the Fast Fourier Transform, with a lot of repeated multiplication, how you round and truncate can matter.
But in the question "is 0.999... = 1?", rounding doesn't occur, and doesn't matter. The answer is simply "Yes".
The most sensitive instrument I used went down to femto. For me the largest number of decimal places was 6 in some cases. The number of decimal digits has to exceed the data by a few digits to preserve accuracy.
Nobody's talking about calculators, computers, or measuring instruments except you - and such tools are completely irrelevant to the question under discussion.

This is a question of mathematics. And it's an easy and well understood question that no mathematician has the slightest worry over, or doubt about. 0.999... = 1, just as the Roman numeral C = 100. They are different ways to write the exact same number. C doesn't ROUND to 100; or approximate 100; or become 100. It IS 100, just written in different notation.
 
If you want to understand the issues with finite arithmetic, get a book on numerical analysis or google it.

As I already mentioned, only you are talking about arithmetic. The conversation is about mathematics. And your confusion of the two different things is entirely your problem.

Nobody needs to learn anything from you, or from your irrelevant sources, because you are wrong, and they are not addressing the same question you are struggling with. You need to learn, but seem totally resistant to doing so. You should either try to learn from the very helpful people in this thread, or from any mathematician of your choice; or just stop trying to contribute to a discussion in which you are unqualified to participate.
 
If you want to understand the issues with finite arithmetic, get a book on numerical analysis or google it.

We seem to be on opposite sides of this "debate," so is this addressed at me? You might be startled to learn how much I know about numerical analysis and "finite arithmetic." For example: Kahan summation is an interesting topic. It's a technique (to increase precision) that breaks if the C compiler's optimizer is allowed to reassociate floating-point arithmetic, because the compensation term then simplifies algebraically to zero and gets deleted.
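For anyone curious, compensated (Kahan) summation is only a few lines. Sketched here in Python; in C the danger is an optimizer flag like `-ffast-math` that treats `(t - s) - y` as algebraically zero:

```python
import math

def kahan_sum(xs):
    """Sum floats while carrying each step's rounding error forward."""
    s = 0.0   # running sum
    c = 0.0   # running compensation (the accumulated error)
    for x in xs:
        y = x - c          # fold the previous step's error back in
        t = s + y          # low-order bits of y are lost here...
        c = (t - s) - y    # ...and recovered here (algebraically zero!)
        s = t
    return s

data = [0.1] * 1000
print(sum(data))           # naive sum drifts away from 100.0
print(kahan_sum(data))     # compensated sum stays within an ulp or two
print(math.fsum(data))     # exact-sum reference: 100.0
```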

The problem is:
. . . . . This has nothing to do with the 0.9999... confusion (let alone the thread's ostensible topic).



Returning to the thread's ostensible topic, let's write it in base-6: ζ(-1) = -0.03₆
so we needn't worry about the decimal "peculiarity" of ζ(-1) = -1/12 = -0.083333... . . . . . :)
 
Hardly a debate. More an informal lunchtime discussion to pass the time. A tossing about of ideas. Nothing serious, at least for me. There is no winning or losing here. Only food for thought, which is the benefit of the forum.

I am not going to drill down into it. The claim of equivalence 1+2+3+... = -1/12 does not hold up. As I showed, the equivalence does not work when applied.

On the face of it, a claim that a summation of positive integers turns negative probably violates the definitions of integers, counting, and addition.

I would assume the -1/12 has some meaning in a greater theoretical context which I am not interested in. Partly due to my vision limitations. Reading is hard.
 
Yes, the equation 1+2+3+4+... = -1/12 should be written with some sort of ℛ on top of the =, to show that this is Ramanujan summation, rather than ordinary summation. However, the YouTube videos in the OP do show that Ramanujan summation has many of the properties of ordinary summation, so it is still a remarkable result.
 
Or use functional notation: R(1+2+3+...) = -1/12
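For a taste of where the -1/12 can come from without Ramanujan's full machinery (a numerical sketch, not his construction): Abel-sum the alternating cousin 1 - 2 + 3 - 4 + ... = η(-1) = 1/4, then apply the standard identity η(s) = (1 - 2^(1-s))·ζ(s), which at s = -1 gives ζ(-1) = (1/4)/(1 - 4) = -1/12.

```python
# Abel summation: evaluate sum of (-1)**(n+1) * n * x**n for x just
# below 1. The closed form is x/(1+x)**2, which tends to 1/4 as x -> 1.
x = 0.99
eta = sum((-1) ** (n + 1) * n * x ** n for n in range(1, 5000))
print(eta)                        # close to 0.25

# eta(s) = (1 - 2**(1-s)) * zeta(s); at s = -1 the factor is 1 - 4 = -3:
zeta = eta / (1 - 2 ** 2)
print(zeta)                       # close to -1/12 = -0.08333...
```

Pushing x closer to 1 (with more terms) tightens both estimates.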
 