Sort of how 0.0000001 = 0
No, not like that.
This is why we can’t have nice things like dependable protection from fall damage while riding a boat in Minecraft.
If 0.999… < 1, then that must mean there’s an infinite amount of real numbers between 0.999… and 1. Can you name a single one of these?
Sure 0.999…95
Just kidding, the guy on the left is correct.
You got me
Meh, close enough.
I wish computers could calculate infinity
As long as you have it forget the previous digit, you can bring up a new digit infinitely
Computers can calculate infinite series as well as anyone else
Are we still doing this 0.999… thing? Why, is it that attractive?
People generally find it odd and unintuitive that it’s possible to use decimal notation to represent 1 as .9~ and so this particular thing will never go away. When I was in HS I wowed some of my teachers by doing proofs on the subject, and every so often I see it online. This will continue to be an interesting fact for as long as decimal is used as a canonical notation.
Welp, I see. Still, this is way too much of a recurring pattern.
The rules of decimal notation don’t support infinite decimals properly. In order for a 9 to roll over into a 10, the next smallest decimal place needs to roll over first, therefore an infinite string of anything will never resolve the needed discrete increment.
Thus, all arguments that 0.999… = 1 must use algebra, limits, or some other logic beyond decimal notation. I consider this a bug with decimals, and 0.999… = 1 to be a workaround.
don’t support infinite decimals properly
Please explain this in a way that makes sense to me (I’m an algebraist). I don’t know what it would mean for infinite decimals to be supported “properly” or “improperly”. Furthermore, I’m not aware of any arguments worth taking seriously that don’t use logic, so I’m wondering why that’s a criticism of the notation.
Decimal notation is a number system where fractions are accommodated with more numbers representing smaller, more precise parts. It is an extension of the place value system, where very large tallies can be expressed in a much simpler form.
One of the core rules of this system is how to handle values larger than the highest digit and lower than the smallest. If any place goes above 9, set that place to 0 and increment the next place by 1. If any place goes below 0, increment that place by 10 and decrement the next place by 1 (this operation uses a non-existent digit, which is also a common sticking point).
This is the decimal system as it is taught originally. One of the consequences of its rules is that each digit-wise operation must be performed in order, with a beginning and an end. Thus even getting a repeating decimal is going beyond the system. This is usually taught as special handling, and sometimes as baby’s first limit (each step down results in the same digit, thus it’s that digit all the way down).
The issue happens when digit-wise calculation is applied to infinite decimals. For most operations, it’s fine, but incrementing up can only begin if a digit goes beyond 9, which never happens in the case of 0.999… . Understanding how to resolve this requires ditching the digit-wise method, relearning decimals as a series of terms, and then learning about infinite series. It’s a much more robust and applicable method, but very different from how decimals are first taught.
Thus I say that the original digit-wise method of decimals has a bug in the case of incrementing infinite sequences. There’s really only one number where this is an issue, but telling people they’re wrong for using the tools as they’ve been taught isn’t helpful. Much better to say that the tool they’re using is limited in this way, and then show the more advanced method.
That’s how we teach Newtonian Gravity and then expand to Relativity. You aren’t wrong for applying newtonian gravity to mercury, but the tool you’re using is limited. All models are wrong, but some are useful.
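The digit-wise method being described can be sketched in a few lines (a hypothetical helper, not anything from the thread): digits are stored least-significant first, because the carry has to start at the smallest digit — exactly what an infinite tail of 9s never provides.

```python
def add_digitwise(a, b, base=10):
    """Grade-school addition on digit lists (least-significant digit first)."""
    result = []
    carry = 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        total = da + db + carry
        result.append(total % base)  # keep the digit in 0..base-1
        carry = total // base        # roll the excess into the next place
    if carry:
        result.append(carry)
    return result

# 999 + 1, written as digit lists -- the carry ripples up from the ones place:
print(add_digitwise([9, 9, 9], [1]))  # [0, 0, 0, 1], i.e. 1000
```

Note that the loop begins at index 0, the smallest place. With infinitely many 9s there is no index 0 analogue to start from, which is the limitation being described.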
Said a simpler way:
1/3= 0.333…
1/3 + 1/3 = 0.666… = 0.333… + 0.333…
1/3 + 1/3 + 1/3 = 1 = 0.333… + 0.333… + 0.333…
The quirk you mention about infinite decimals not incrementing properly can be seen by adding whole number fractions together.
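The fraction argument above can be checked with exact rational arithmetic — Python’s standard-library `fractions` module avoids decimal notation (and floats) entirely:

```python
from fractions import Fraction

third = Fraction(1, 3)              # the exact value that 0.333... denotes
print(third + third)                # 2/3, i.e. 0.666...
print(third + third + third == 1)  # True -- three thirds are exactly one
```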
I can’t help but notice you didn’t answer the question.
each digit-wise operation must be performed in order
I’m sure I don’t know what you mean by digit-wise operation, because my conceptualization of it renders this statement obviously false. For example, we could apply digit-wise modular addition base 10 to any pair of real numbers and the order we choose to perform this operation in won’t matter. I’m pretty sure you’re also not including standard multiplication and addition in your definition of “digit-wise” because we can construct algorithms that address many different orders of digits, meaning this statement would also then be false. In fact, as I lay here having just woken up, I’m having a difficult time figuring out an operation where the order that you address the digits in actually matters.
Later, you bring up “incrementing” which has no natural definition in a densely populated set. It seems to me that you came up with a function that relies on the notation we’re using (the decimal-increment function, let’s call it) rather than the emergent properties of the objects we’re working with, noticed that the function doesn’t cover the desired domain, and have decided that means the notation is somehow improper. Or maybe you’re saying that the reason it’s improper is because the advanced techniques for interacting with the system are dissimilar from the understanding imparted by the simple techniques.
In base 10, if we add 1 and 1, we get the next digit, 2.
In base 2, if we add 1 and 1 there is no 2, thus we increment the next place by 1 getting 10.
We can expand this to numbers with more digits, still in base 2: 111 (= 7) + 1 = 112 = 120 = 200 = 1000
In base 10, with A representing 10 in a single digit: 199 + 1 = 19A = 1A0 = 200
We could do this with larger carryover too: 999 + 111 = AAA = AB0 = B10 = 1110. Different orders are possible here: AAA = 10AA = 10B0 = 1110.
The “carry the 1” process only starts when a digit exceeds the existing digits. Thus 192 is not 2Z2, nor is 100 = A0. The whole point of carryover is to keep each digit within the 0-9 range. Furthermore, by only processing individual digits, we can’t start carryover in the middle of a chain. 999 doesn’t carry over to 100-1, and while 0.999 does equal 1 - 0.001, (1-0.001) isn’t a decimal digit. Thus we can’t know if any string of 9s will carry over until we find a digit that is already trying to be greater than 9.
This logic is how basic binary adders work, and some variation of this bitwise logic runs in every mechanical computer ever made. It works great with integers. It’s when we try to have infinite digits that this method falls apart, and then only in the case of infinite 9s. This is because a carry must start at the smallest digit, and a number with infinite decimals has no smallest digit.
Without changing this logic radically, you can’t fix this flaw. Computers use workarounds to speed up arithmetic functions, like carry-lookahead and carry-save, but they still require the smallest digit to be computed before the result of the operation can be known.
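The “A represents ten” trick from the examples above can be made concrete (a small sketch; the helper name is mine): temporarily allow digits above base−1, then resolve them by carrying from the smallest place upward.

```python
def normalize(digits, base=10):
    """Resolve digits that exceed base-1 by carrying, smallest place first.
    Digits are least-significant first; over-range values like 'A = ten'
    are allowed temporarily, mimicking the intermediate steps above."""
    digits = list(digits)
    i = 0
    while i < len(digits):
        if digits[i] >= base:
            carry = digits[i] // base
            digits[i] %= base
            if i + 1 == len(digits):
                digits.append(0)
            digits[i + 1] += carry
        i += 1
    return digits

# 999 + 111 digit-by-digit gives "AAA" = [10, 10, 10]; carrying resolves it:
print(normalize([10, 10, 10]))  # [0, 1, 1, 1], i.e. 1110
```

This resolves any finite string, but it still walks from the smallest digit up, so it has the same limitation with an infinite tail of 9s.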
If I remember, I’ll give a formal proof when I have time, so long as no one else has done so before me. Simply put, we’re not dealing with floats, and there are algorithms to add infinite decimals together from the ones place down using back-propagation. Disproving my statement is as simple as providing a pair of real numbers where doing this is impossible.
Are those algorithms taught to people in school?
Once again, I have no issue with the math. I just think the commonly taught system of decimal arithmetic is flawed at representing that math. This flaw is why people get hung up on 0.999… = 1.
Furthermore, I’m not aware of any arguments worth taking seriously that don’t use logic, so I’m wondering why that’s a criticism of the notation.
If you hear someone shout at a mob “mathematics is witchcraft, therefore, get the pitchforks” I very much recommend taking that argument seriously no matter the logical veracity.
Fair, but that still uses logic, it’s just using false premises. Also, more than the argument what I’d be taking seriously is the threat of imminent violence.
But is it a false premise? It certainly passes Occam’s razor: “They’re witches, they did it” is an eminently simple explanation.
By definition, mathematics isn’t witchcraft (most witches I know are pretty bad at math). Also, I think you need to look more deeply into Occam’s razor.
By definition, all sufficiently advanced mathematics is isomorphic to witchcraft. (*vaguely gestures at numerology as proof*). Also, Occam’s razor has never been robust against reductionism: if you are free to reduce “equal explanatory power” to arbitrarily small tunnel vision, every explanation becomes permissible, and taking the simplest of those probably doesn’t match the holistic view. Or, differently put: I think you need to look more broadly at Occam’s razor :)
i don’t think any number system can be safe from infinite digits. there’s bound to be some number for each one that has to be represented with them. it’s not intuitive, but that’s because infinity isn’t intuitive. that doesn’t mean there’s a problem there though. also the arguments are so simple i don’t understand why anyone would insist that there has to be a difference.
for me the simplest is:
1/3 = 0.333…
so
3×0.333… = 3×1/3
0.999… = 3/3
the problem is it makes my brain hurt
honestly that seems to be the only argument from the people who say it’s not equal. at least you’re honest about it.
by the way I’m not a mathematically adept person. I’m interested in math but i only understand the simpler things. which is fine. but i don’t go around arguing with people about advanced mathematics because I personally don’t get it.
the only reason I’m very confident about this issue is that you can see it’s equal with middle- or high-school level math, and that’s somehow still too much for people who are too confident about there being a magical, infinitely small number between 0.999… and 1.
to be clear I’m not arguing against you or disagreeing. the fraction thing demonstrates what you’re saying. It just really bothers me when I think about it, like my brain will not accept it even though it’s right in front of me; it’s almost like a physical sensation. I think that’s what cognitive dissonance is. Fortunately in the real world this has literally never come up so I don’t have to engage with it.
no, i know and understand what you mean. as i said in my original comment; it’s not intuitive. but if everything in life were intuitive there wouldn’t be mind blowing discoveries and revelations… and what kind of sad life is that?
And my argument is that 1/3 ≠ 0.333…
We’re taught about the decimal system by manipulating whole number representations of fractions, but when that method fails, we get told that we are wrong.
In chemistry, we’re taught about atoms by manipulating little rings of electrons, and when that system fails to explain bond angles and excitation, we’re told the model is wrong, but still useful.
This is my issue with the debate. Someone uses decimals as they were taught and everyone piles on saying they’re wrong instead of explaining the limitations of systems and why we still use them.
For the record, my favorite demonstration is using different bases.
In base 10: 1/3 ≈ 0.333…, and 0.333… × 3 = 0.999…
In base 12: 1/3 = 0.4, and 0.4 × 3 = 1
The issue only appears if you resort to infinite decimals. If you instead change your base, everything works fine. Of course the only base where every whole fraction fits nicely is unary, and there’s some very good reasons we don’t use tally marks much anymore, and it has nothing to do with math.
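The base comparison can be reproduced with ordinary long division (a small sketch; the function name is mine, nothing base-specific is assumed):

```python
def expand(numerator, denominator, base, places):
    """First `places` fractional digits of numerator/denominator in `base`,
    computed by long division on the remainder."""
    digits = []
    remainder = numerator % denominator
    for _ in range(places):
        remainder *= base
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

print(expand(1, 3, 10, 6))  # [3, 3, 3, 3, 3, 3] -- the 3s never stop
print(expand(1, 3, 12, 6))  # [4, 0, 0, 0, 0, 0] -- exactly 0.4 in base 12
```

In base 12 the remainder hits zero after one digit, so the expansion terminates; in base 10 the remainder cycles forever, which is where the repeating decimal comes from.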
you’re thinking about this backwards: the decimal notation isn’t something that’s natural, it’s just a way to represent numbers that we invented. 0.333… = 1/3 because that’s the way we decided to represent 1/3 in decimals. the problem here isn’t that 1 cannot be divided by 3 at all, it’s that 10 cannot be divided by 3 and give a whole number. and because we use the decimal system, we have to notate it using infinite repeating numbers but that doesn’t change the value of 1/3 or 10/3.
different bases don’t change the values either. 12 can be divided by 3 and give a whole number, so we don’t need infinite digits. but both 0.333… in decimal and 0.4 in base12 are still 1/3.
there’s no need to change the base. we know a third of one is a third and three thirds is one. how you notate it doesn’t change this at all.
I’m not saying that math works differently in different bases, I’m using different bases exactly because the values don’t change. Using different bases restates the equation without using repeating decimals, thus sidestepping the flaw altogether.
My whole point here is that the decimal system is flawed. It’s still useful, but trying to claim it is perfect leads to a conflict with reality. All models are wrong, but some are useful.
you said 1/3 ≠ 0.333… which is false. it is exactly equal. there’s no flaw; it’s a restriction in notation that is not unique to the decimal system. there’s no “conflict with reality”, whatever that means. this just sounds like not being able to wrap your head around the concept. but that doesn’t make it a flaw.
Let me restate: I am of the opinion that repeating decimals are imperfect representations of the values we use them to represent. This imperfection only matters in the case of 0.999… , but I still consider it a flaw.
I am also of the opinion that focusing on this flaw rather than the incorrectness of the person using it is a better method of teaching.
I accept that 1/3 is exactly equal to the value typically represented by 0.333… , however I do not agree that 0.333… is a perfect representation of that value. That is what I mean by 1/3 ≠ 0.333… , that repeating decimal is not exactly equal to that value.
deleted by creator
0.999… / 3 = 0.333… 1 / 3 = 0.333… Ergo 1 = 0.999…
(Or see algebraic proof by @Valthorn@feddit.nu)
If the difference between two numbers is so infinitesimally small that they are in essence mathematically equal, then I see no reason not to address them as such.
If you tried to make a plank of wood 0.999…m long (and had the tools to do so), you’d soon find out the universe won’t let you arbitrarily go on to infinity. You’d find that when you got to the Planck length, you’d have to either round up the previous digit, resolving to 1, or stop at the last 9.
Math doesn’t care about physical limitations like the planck length.
Any real world implementation of maths (such as the length of an object) would definitely be constricted to real world parameters, and the lowest length you can go to is the Planck length.
But that point wasn’t just to talk about a plank of wood, it was to show how little difference the infinite 9s in 0.999… make.
Afaik, the Planck Length is not a “real-world pixel” in the way that many people think it is. Two lengths can differ by an amount smaller than the Planck Length. The remarkable thing is that it’s impossible to measure anything smaller than that size, so you simply couldn’t tell those two lengths apart. This is also ignoring how you’d create an object with such a precisely defined length in the first place.
Anyways of course the theoretical world of mathematics doesn’t work when you attempt to recreate it in our physical reality, because our reality has fundamental limitations that you’re ignoring when you make that conversion that make the conversion invalid. See for example the Banach-Tarski paradox, which is utter nonsense in physical reality. It’s not a coincidence that that phenomenon also relies heavily on infinities.
In the 0.999… case, the infinite 9s make all the difference. That’s literally the whole point of having an infinite number of them. “Infinity” isn’t (usually) defined as a number; it’s more like a limit or a process. Any very high but finite number of 9s is not 1. There will always be a very small difference. But as soon as there are infinite 9s, that number is 1 (assuming you’re working in the standard mathematical model, of course).
You are right that there’s “something” left behind between 0.999… and 1. Imagine a number line between 0 and 1. Each 9 adds 90% of the remaining number line to the growing number 0.999… as it approaches one. If you pick any point on this number line, after some number of 9s it will be part of the 0.999… region, no matter how close to 1 it is… except for 1 itself. The exact point where 1 is will never be added to the 0.999… fraction. But let’s see how long that 0.999… region now is. It’s exactly 1 unit long, minus a single 0-dimensional point… so still 1-0=1 units long. If you took the 0.999… region and manually added the “1” point back to it, it would stay the exact same length. This is the difference that the infinite 9s make-- only with a truly infinite number of 9s can we find this property.
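The “any point below 1 eventually gets covered” claim can be checked with exact fractions (a sketch; the function name is mine):

```python
from fractions import Fraction

def nines_needed(x):
    """Smallest n such that 0.999...9 (n nines) >= x, for 0 <= x < 1."""
    n = 0
    partial = Fraction(0)
    while partial < x:
        n += 1
        partial += Fraction(9, 10**n)  # append one more 9
    return n

print(nines_needed(Fraction(1, 2)))             # 1  (0.9 already passes 0.5)
print(nines_needed(Fraction(999999, 1000000)))  # 6

# The point 1 itself is never reached by any finite partial sum,
# yet the limit of the partial sums is exactly 1.
```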
Except it isn’t infinitesimally smaller at all. 0.999… is exactly 1, not at all less than 1. That’s the power of infinity. If you wanted to make a wooden board exactly 0.999… m long, you would need to make a board exactly 1 m long (which presents its own challenges).
It is mathematically equal to one, but it isn’t physically one. If you wrote out 0.999… out to infinity, it’d never just suddenly round up to 1.
But the point I was trying to make is that I agree with the interpretation of the meme in that the above distinction literally doesn’t matter - you could use either in a calculation and the answer wouldn’t (or at least shouldn’t) change.
That’s pretty much the point I was trying to make in proving how little the difference makes in reality - that the universe wouldn’t let you explore the infinity between the two, so at some point you would have to round to 1m, or go to a number one Planck length below 1m.
It is physically equal to 1. Infinity goes on forever, and so there is no physical difference.
It’s not that it makes almost no difference. There is no difference because the values are identical. There is no infinity between the two values.
Again, if you started writing 0.999… on a piece of paper, it would never suddenly become 1, it would always be 0.999… - you know that to be true without even trying it.
The difference is virtually nonexistent, and that is what makes them mathematically equal, but there is a difference, otherwise there wouldn’t be an infinitely long string of 9s between the two.
Sure, but you’re conflating two things that aren’t the same. Until you’ve written infinitely many 9s, you haven’t written the number yet. Once you do, the number you will have written will be exactly the number 1, because they are exactly the same. The difference between all the nines you could write in one thousand lifetimes and 0.999… is like the difference between a cup of sand and all of spacetime.
Or think of it another way. Forget infinity for a moment. Think of 0.999… as all the nines. All of them contained in the number 1. There’s always one more, right? No, there isn’t, because 1 contains all of them. There are no more nines not included in the number 1. That’s why they are identical.
Even the hyperreal numbers *R, which include infinitesimals, define 1 = 0.999…
The only sources I trust are the ones that come from my dreams
Remember when US politicians argued about declaring Pi to be 3?
Would have been funny seeing the world go boink in about a week.
Some software can be pretty resilient. I ended up watching this video here recently about running doom using different values for the constant pi that was pretty nifty.
I prefer my pi to be in duodecimal anyway. 3.184809493B should get you to where you need to go.
deleted by creator
You didn’t even read the first paragraph of that article LMAO
0.9<overbar.> is literally equal to 1
0.9 is most definitely not equal to 1
Hence the overbar. Lemmy should support LaTeX for real though
Oh, that’s not even showing as a missing character, to me it just looks like 0.9
At least we agree 0.99… = 1
Oh lol its rendering as HTML for you.
There’s a Real Analysis proof for it and everything.
Basically boils down to
- If 0.(9) != 1 then there must be some value between 0.(9) and 1.
- We know such a number cannot exist, because for any given discrete value (say 0.999…9) there is a number (0.999…99) that is between that discrete value and 0.(9)
- Therefore, no value exists between 0.(9) and 1.
- So 0.(9) = 1
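The same conclusion also falls out of the geometric series formula (a standard derivation, not specific to this thread):

```latex
0.\overline{9}
  = \sum_{k=1}^{\infty} \frac{9}{10^{k}}
  = 9 \cdot \frac{1/10}{1 - 1/10}
  = 9 \cdot \frac{1}{9}
  = 1
```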
Even simpler: 1 = 3 * 1/3
1/3 = 0.333333…
1/3 + 1/3 + 1/3 = 0.99999999… = 1
Even simpler
0.99999999… = 1
But you’re just restating the premise here. You haven’t proven the two are equal.
1/3 = 0.333333…
This step
1/3 + 1/3 + 1/3 = 0.99999999…
And this step
Aren’t well-defined. You’re relying on division short-hand rather than a real proof.
ELI5
Mostly boils down to the pedantry of explaining why 1/3 = 0.(3) and what 0.(3) actually means.
That actually makes sense, thank you.
I can honestly say I learned something from the comment section. I was always taught the .9 repeating was not equal to 1 but separated by imaginary i … Or infinitely close to 1 without becoming 1.
Mathematics is built on axioms that have nothing to do with numbers yet. That means that things like decimal numbers need definitions. And in the definition of decimals it is literally included that if you have only nines from a certain point behind the dot onward, it is the same as increasing the digit in front of the first nine by one.
That’s not an axiom or definition, it’s a consequence of the axioms that define arithmetic and can therefore be proven.
There are versions of math where that isn’t true, with infinitesimals that are not equal to zero. So I think it is an axiom rather than a provable conclusion.
That’s not what “axiom” means
Those versions have different axioms from which different things can be proven, but we don’t define 0.9 repeating as 1
That’s not how it’s defined. 0.99… is the limit of a sequence, and it is precisely 1. 0.99… is the summation of an infinite number of terms, and we don’t know how to do that unless it is defined. (0.9 + 0.09 + 0.009 + …) It is defined as the limit of the partial sums 0.9, 0.99, 0.999, … The limit of this sequence is 1. Sorry if this came out rude. It is more of a general comment.
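The partial-sum definition described here can be made concrete with exact fractions; the gap to 1 after n terms is visibly 10⁻ⁿ:

```python
from fractions import Fraction

partial = Fraction(0)
for n in range(1, 6):
    partial += Fraction(9, 10**n)   # append the next 9 digit
    print(n, partial, 1 - partial)  # the gap shrinks by a factor of 10 each step

# After n terms the gap is exactly 1/10**n, so the limit of the sequence is 1.
```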
I study mathematics at university and I remember it being in the definition, but since it follows from the sum’s limit anyways it probably was just there for clarity’s sake. So I guess we’re both right…