D: Just focus on posts made by Legend, Blitz, Balance, and whizzball1. Those are the ones involved in the Grim incident. Source and Disdain are meaningless to you since they're just bantering.
Posting this here in case something goes wrong and/or for posterity's sake:
As a computer scientist, allow me to approach this problem from a logical point of view rather than an "infinity issue" point of view. To preface: you stated that virtually all mathematicians accept that 9.999... = 10, which is a fine statement, and it has support. At the same time, however, it is an argument from authority, which is useful when searching for sources but not when justifying a concept. So I will set aside how many people support it. (As an example of why I can do this, suppose every single mathematician said that 2+2=5 while every other rule of math remained the same; it would not matter how many supported it, and the assumption would lead to technological failures all throughout engineering. Support does not automatically make a concept correct.) To address the next point, you stated that because we define math, we have the authority to also define 9.999... = 10. Another fair statement; however, to reuse the previous counterexample, if we did define 2+2 to equal 5, we could make that work, but we would have to change the underlying rules of math as well (the simplest solution likely being swapping the characters used for the numerals 4 and 5). In the same way, we can only define 9.999... = 10 if it falls within the scope of the rules we have already defined. With those two issues addressed, it becomes imperative to prove that 9.999... = 10 rather than simply defining it as such. (As I have found throughout my experience, most often with people in physics fields, a property is often defined in a way that works even when it may not perfectly represent what is occurring on a physical, or in this case mathematical, level.) To address the first argument: I would disagree that 10/3 = 3.333..., and say instead that 3.333... is an approximation of 10/3 that is actually infinitesimally less than 10/3, which prevents it from becoming 9.999... when multiplied by 3. The real issue here lies in whether or not 1/∞ = 0.
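The approximation claim can at least be checked at finite stages. A minimal sketch in Python, using exact `Fraction` arithmetic (the helper `truncated_third` is my own illustration, not notation from the video): with n threes kept after the decimal point, multiplying by 3 gives 9.99...9, which falls short of 10 by exactly 1/10^n.

```python
from fractions import Fraction

def truncated_third(n):
    """10/3 truncated to n decimal places, i.e. 3.33...3 with n threes,
    held as an exact fraction: floor(10/3 * 10**n) / 10**n."""
    return Fraction(10 * 10**n // 3, 10**n)

for n in (1, 5, 20):
    t = truncated_third(n)
    product = 3 * t              # 9.99...9 with n nines
    shortfall = 10 - product     # exact gap from 10
    # Each finite truncation misses 10 by exactly one part in 10**n.
    assert shortfall == Fraction(1, 10**n)
```

Whether that shortfall survives "at infinity" is of course the whole dispute; the code only verifies the finite stages.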
I argue that mathematicians will say it does, not because of any inherent property of the value, but because for all intents and purposes it is. 1/∞ would be the smallest possible value, which cannot actually be reached, since any number with a final digit can be divided by 2 to give a smaller one. The fact that such a theoretical concept is being invoked is where several issues arise, and this seems to justify calling the value 0. However, I would also argue that, using those same mathematical rules, it is equal to a theoretical value rather than zero. 1/1 = 1, 1/2 = .5, 1/4 = .25, etc. A way to state this is that 1/n > 1/(n+1) and 1/n > 0 (and in this particular sequence each term is half the one before it, but that's irrelevant). Because of this, for any positive integer n, 1/n will still be greater than zero, and 1/(n+1) will be smaller still, all the way out toward infinity. This brings into play the concept of limits, in the same way that 1/2 + 1/4 + 1/8 + ... = 1. Yet any point along the graph formed by that summation will be less than 1, regardless of how small the difference may be, and the same question arises: whether 1/∞ = 0. Yes, you could argue that because this concept seems to be self-standing, unsupported by any previous concept in math, we are free to define the rule ourselves and declare it equal to 0. But by the same token, absolutely every point in the limit example will be less than one, as will every 1/n in my previous example. Since infinity is a theoretical concept that still obeys mathematical rules (infinity, despite being theoretical, does not suddenly become less than zero just because we define it as such), I argue that this does not lead to 1/∞ = 0, but rather to 1/∞ = .000...1, the smallest possible difference between two numbers, which would be another theoretical concept, similar to infinity.
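The partial-sum behavior described above can be checked directly. This sketch (using exact `Fraction` arithmetic, and necessarily checking only finitely many stages) confirms that every finite partial sum of 1/2 + 1/4 + 1/8 + ... is strictly less than 1, short by exactly the size of the last term added:

```python
from fractions import Fraction

# Partial sums of 1/2 + 1/4 + 1/8 + ...
total = Fraction(0)
for n in range(1, 51):
    total += Fraction(1, 2**n)
    assert total < 1                          # every finite stage falls short
    assert 1 - total == Fraction(1, 2**n)     # short by exactly the last term
```

The code says nothing about the limit itself, only that no finite stage ever reaches 1, which is the observation the argument leans on.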
Because infinity is the largest "number," this would be the smallest "number" greater than zero, one which cannot be further reduced (due to the infinite number of zeroes in 0.000...1) because ∞ + 1 = ∞. I would rather not get ahead of myself (the issue is addressed later), but to head off the argument that you can divide 0.000...1 by two to generate a smaller number: I argue that the result would be 0.000...5, which (due to the infinite number of zeroes) would be a larger value, showing that it is therefore impossible to divide the value by any other number and justifying it as the smallest. In addition, if it were equal to zero, then we run into this issue: 1/∞ = 0, yet (1/∞) * ∞ = 1 (since the denominator cancels itself). So the equation becomes (1/∞) * ∞ = 0 * ∞, which simplifies to 1 = 0, an untrue statement. Therefore, in the theoretical world of infinity, there must be a theoretical "infinitely small number" in order to follow both the laws of math and of logic (which is what I care about the most). If we define this theoretical infinitely small number as q, we can justify that 10 - q != 10, but rather 9.999..., since we previously showed that q cannot equal 0. With that justified, we move on to the second proof shown in the video: 10M = 99.999... and M = 9.999..., also a fair statement on its face. However, we are dealing with the theoretical quantity 10 - q. It cannot be classically multiplied by 10 to achieve a value with another set of infinite nines. I understand that ∞ + 1 = ∞, and I agree that this initially makes the step seem mathematically sound. However, the issue arises in adding the extra 9 to an infinite series of 9s. To show this, we can define 10M in a different way, as M + 90, or M + 10 * 9. After this, you subtract M and divide by 9, which is where the tricky part of the equation comes in and this justification falls apart. The expanded equation does not become (10M - M)/9, but rather (M + 10 * 9 - M)/9.
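The cancellation appealed to here is easy to verify at any finite n: 1/n is positive, yet (1/n) * n is exactly 1. A small sketch (finite n only; it does not, and cannot, test ∞ itself):

```python
from fractions import Fraction

# For every finite n, (1/n) * n recovers exactly 1, even as 1/n
# shrinks toward zero -- the cancellation the argument relies on.
for n in (10, 10**6, 10**100):
    tiny = Fraction(1, n)
    assert tiny > 0          # never actually zero at any finite stage
    assert tiny * n == 1     # the denominator cancels exactly
```

Whether this cancellation law carries over to n = ∞ is exactly the point in dispute; the finite cases are uncontroversial.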
But the issue arises here, because this is circular logic, a fallacy. The mathematician already assumes that 9.999... = 10 when it is multiplied by 10, because the equation simplifies to ((M - M) + (10 * 9))/9, which is equivalent to 0/9 + 90/9, equivalent to 0 + 10 = 10. The equation inherently assumes that 9.999... = 10 before it starts, and it assumes that an extra 9 can be appended to the infinite series of 9s without changing how the value interacts with classical mathematics, treating 9.999... * 10 as 99.999... when it really becomes 100 - 10q. Many of these "infinity tricks" can be found to violate inherent axioms of mathematics when applied to classical mathematics without taking into account the theoretical behavior that arises when an infinite value (expressed as a series of non-infinite numbers) has classical operations placed upon it. Infinite mathematics behaves as infinite mathematics does, and classical mathematics behaves as classical mathematics does, without any issue. But when the two are combined into one kind of value, as in (1 + 1 + 1 + ...) = ∞, then either there must be a new set of rules, or both sets must be used independently, or classical approaches must be applied without automatically assuming infinite approaches apply the same way, lest we arrive at a ludicrous and impossible conclusion (such as 1 = 0). And to address the final argument presented in the video: it is a rehashed version of the previous explanation, just expressed in fractional notation rather than decimal. If we take 10M and express it as a series, we arrive at 90 + 9 + 9/10 + 9/100 + ..., also a fine conclusion, and it appears to be true when expressing it as an independent value.
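The bookkeeping above can be replayed at finite truncations. Writing M_n = 10 - 10^-n for the value with n nines after the decimal point (a helper of my own, not notation from the video), the (10M - M)/9 manipulation returns M_n itself rather than 10 at every finite stage; the disputed step is what happens only in the infinite case.

```python
from fractions import Fraction

def M(n):
    """M truncated to n nines: 9.99...9 == 10 - 10**-n, exactly."""
    return 10 - Fraction(1, 10**n)

for n in (1, 5, 30):
    result = (10 * M(n) - M(n)) / 9
    assert result == M(n)     # the manipulation is an identity in M
    assert result != 10       # at no finite stage does it yield 10
```

At finite stages the algebra is an identity that hands back whatever M was; it only produces 10 once 10M is *declared* to equal 99.999..., which is the step being contested.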
But when taken as a modification of M (that is, of 10 - q), it becomes much trickier, because that infinite series adds a 9/∞ term under the assumption that it is equal to zero, when it was previously shown that it cannot be; it is instead nine times a theoretical infinitesimally small value, already shown not to equal zero without a paradox arising. In that case (if we say .000...1 = g), if g = 0, then g * 2 = 0, g * 3 = 0, and so on, but g * ∞ = 1, which is a violation of the logical pattern of the values preceding it. So as a result of this, 10 - q cannot equal 10: in the same way that 3.333... is a best estimate of 10/3 in decimal notation, 10 is a best estimate of 9.999..., which is in infinite notation (which I also realize is not how it is ubiquitously expressed in mathematics, but it is an easy way to illustrate the concept). So instead of 9.999... = 10 being considered true throughout virtually all of mathematics due to its inherent accuracy (or even as a rule defined by mathematicians), it was accepted because it was a significantly simpler way of expressing the value. 9.999... is an impossible value to achieve in both classical mathematics and in reality. Because of this, we cannot define 10 - q as equal to 10; it remains 10 - q, unchanged, as the simplest form of the value. Similar to another theoretical concept without a "real" counterpart, 10 - q cannot be simplified, in the same way that 10 - i cannot be simplified further: we reach a point where one set of mathematics (classical, in this case) interacts with a totally different set (imaginary, or infinite), and it just doesn't behave in the same manner. I apologize for the lengthy discussion, but I hope that you (or somebody else) would be willing to consider this take on the subject of infinity.
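The series form can likewise be checked stage by stage. With the terms through 9/10^m included, the exact partial sum of 90 + 9 + 9/10 + 9/100 + ... is 100 - 10^-m, so every finite stage falls short of 100 by precisely the tail not yet added. A sketch with exact arithmetic (finite stages only; the behavior of the full infinite series is the point under debate):

```python
from fractions import Fraction

# Partial sums of 90 + 9 + 9/10 + 9/100 + ...  After adding the
# 9/10**m term, the exact total is 100 - 10**-m.
total = Fraction(90)
for m in range(0, 31):
    total += Fraction(9, 10**m)
    assert 100 - total == Fraction(1, 10**m)   # short by exactly the tail
```

As before, the code verifies only the finite shortfalls; whether that shortfall "closes" in the limit is what the two sides disagree about.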