Octave Rounding and Evaluation Order



























In Octave I obtain



1 - 0.05 - 0.95 = 0


and



1 - 0.95 - 0.05 = 4.1633e-17


I understand that it is caused by the order of evaluation, combined with the approximate binary representations of
0.05 as 0.00(0011) (repeating)
and
0.95 as 0.11(1100) (repeating).
Could someone please give me the whole story, or point me to a link explaining it?
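Octave's doubles are IEEE 754 binary64, so the effect is reproducible in any language with 64-bit floats. A minimal Python sketch (the printed values come from the nearest representable doubles, not from anything Octave-specific):

```python
# Both expressions are mathematically zero, but IEEE 754 doubles
# round each intermediate result, and the rounding depends on order.
a = 1 - 0.05 - 0.95   # evaluates (1 - 0.05) first
b = 1 - 0.95 - 0.05   # evaluates (1 - 0.95) first

print(a)              # 0.0
print(b)              # ~4.1633e-17

# The doubles actually stored for the two literals:
print(f"{0.05:.20f}")  # 0.05000000000000000278
print(f"{0.95:.20f}")  # 0.94999999999999995559
```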



EDIT:
This question is not a duplicate of "Why is 24.0000 not equal to 24.0000 in MATLAB?", which others identified as a possible duplicate. That question deals with the rounded display of a number; this one asks for the details of the mechanism by which the order of execution of a calculation affects the precision of the result.










Tags: binary, octave, representation

asked Jan 18 at 21:21 by gciriani (edited 2 days ago)

  • Possible duplicate of Why is 24.0000 not equal to 24.0000 in MATLAB? – Cris Luengo, Jan 18 at 21:28

  • @Cris-Luengo, it is not. – gciriani, Jan 19 at 2:00

  • In that case, maybe you can expand on your question; it's not clear to me what you're asking if it's not described in the answer I linked. – Cris Luengo, Jan 19 at 5:14
















2 Answers
Matzeri's link to the definitive resource on floating-point arithmetic is indeed the answer to this question. However, for completeness:



octave:34> fprintf("%.80f\n%.80f\n", 0.95, 1 - 0.05)
0.94999999999999995559107901499373838305473327636718750000000000000000000000000000
0.94999999999999995559107901499373838305473327636718750000000000000000000000000000

octave:35> fprintf("%.80f\n%.80f\n", 0.05, 1 - 0.95)
0.05000000000000000277555756156289135105907917022705078125000000000000000000000000
0.05000000000000004440892098500626161694526672363281250000000000000000000000000000


In other words, 0.95 is harder to represent precisely in floating point, so any first step that involves 0.95 (either as an input or as an output) is necessarily less precise than one that only involves 0.05.



Therefore:



1 - 0.05 = 0.95 (imprecise, due to intrinsic floating-point representation)
(1 - 0.05) - 0.95 = exactly 0 (since both are represented identically imprecisely)

vs

1 - 0.95 = imprecise 0.05 (due to involvement of 0.95 in calculation)
(imprecise 0.05) - (precise 0.05) = not exactly 0 (due to difference in precisions)
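The same chain can be checked by inspecting the exact values of the stored doubles; a small Python sketch (`decimal.Decimal`, when constructed from a float, yields the exact value of the underlying double):

```python
from decimal import Decimal

# Exact values of the doubles nearest to the decimal literals.
print(Decimal(0.05))   # slightly ABOVE 0.05
print(Decimal(0.95))   # slightly BELOW 0.95

# 1 - 0.05 rounds to exactly the same double as the literal 0.95 ...
print((1 - 0.05) == 0.95)   # True
# ... but 1 - 0.95 does not round to the double stored for 0.05.
print((1 - 0.95) == 0.05)   # False
print(Decimal(1 - 0.95) - Decimal(0.05))  # the exact leftover, ~4.1633e-17
```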


HOWEVER. It should be pointed out that this difference in precision is well below the machine tolerance (as returned by eps: 2.2204e-16 on my machine). Therefore, for all practical purposes, 4.1633e-17 is 0. If the practical point here is testing whether the result of a calculation is effectively 0, then one should always take machine precision into account when dealing with floating-point calculations, or preferably find a way to reformulate the problem so that it avoids the need for equality testing altogether.
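As a sketch of such a tolerance test (Python here; `sys.float_info.epsilon` is the same 2.2204e-16 that Octave's eps returns):

```python
import sys

eps = sys.float_info.epsilon  # 2.220446049250313e-16, double-precision machine epsilon

def effectively_zero(x, tol=eps):
    """Treat any magnitude below the machine tolerance as zero."""
    return abs(x) < tol

# Both orders of evaluation now give the same verdict.
print(effectively_zero(1 - 0.05 - 0.95))  # True
print(effectively_zero(1 - 0.95 - 0.05))  # True: 4.1633e-17 < eps
```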






answered Jan 22 at 16:14 by Tasos Papastylianou (edited Jan 22 at 16:47)

  • I bumped and accepted your answer; you make a good argument. Matzeri's is waving "it must be here, it's complicated". He also didn't give a hint at why the order of execution would cause the disparity. I have a counterexample using your line of reasoning, so there might be something else. Your argument would conclude that in 1-.35-.65 the number .35 is more precisely represented than .65. num2hex(.35) = 3fd6666666666666, and num2hex(.65) = 3fe4cccccccccccd. The same periodic pattern, and in this case the two calculations give the same result, as 1-.35-.65 == 1-.65-.35 is true. – gciriani, Jan 22 at 21:02

  • It's not really a counterexample. 0.05 is not more precise because it's "smaller". It just happens to require less least-significant digits when represented as floating point. E.g. 0.25 is 'larger' than 0.05, but can be represented as a power of two exactly. Whereas 0.35 and 0.65 are equally imprecise. Having said that, you should still not rely on equality testing, even when you know you are using numbers that are 'equally imprecise'. It's simply not reliable. – Tasos Papastylianou, Jan 23 at 0:33

  • Tasos, I'm not sure what "require less least-significant digits" means. Both 0.35 and 0.05 have a periodic representation that in theory requires an infinite number of digits, as shown in their hexadecimal representation. It could be instead that in one case the rounding is done for both numbers in the same direction, and in the other case in opposite directions. I fully agree with you that equality testing should not be done between real numbers. – gciriani, Jan 23 at 15:57
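The 0.35/0.65 observation from the comments is easy to check directly; a small Python sketch (`float.hex` exposes the same significand that Octave's num2hex encodes). For this particular pair the two stored doubles happen to sum to exactly 1.0, which is why both evaluation orders cancel exactly:

```python
# Hex significands, matching the num2hex values quoted in the comments.
print((0.35).hex())  # 0x1.6666666666666p-2
print((0.65).hex())  # 0x1.4cccccccccccdp-1

# The stored doubles for 0.35 and 0.65 are exact binary complements of 1.0,
# so either subtraction order cancels without any rounding error.
print(0.35 + 0.65 == 1.0)                  # True for this pair
print(1 - 0.35 - 0.65 == 1 - 0.65 - 0.35)  # True: both are exactly 0.0
```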



















The full explanation:

What Every Computer Scientist Should Know About Floating-Point Arithmetic

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html






answered Jan 18 at 22:31 by matzeri


  • It doesn't really add much to what I wrote. – gciriani, Jan 18 at 22:47










