Octave Rounding and Evaluation Order
In Octave I obtain
1 - 0.05 - 0.95 = 0
and
1 - 0.95 - 0.05 = 4.1633e-17
I understand that it is caused by the order of evaluation combined with the approximate binary representations of 0.05 as 0.00(0011)... and 0.95 as 0.11(1100)..., where the parenthesized group of bits repeats.
Could someone please give me the whole story or show me a link explaining it?
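For reference, the effect is easy to reproduce and inspect (a minimal sketch, assuming standard IEEE 754 doubles as used by Octave; num2hex is a built-in that shows the raw bit encoding):

1 - 0.05 - 0.95    % ans = 0
1 - 0.95 - 0.05    % ans = 4.1633e-17
num2hex(0.05)      % ans = 3fa999999999999a  (0.00(0011)... rounds up, so the stored value is slightly above 0.05)
num2hex(0.95)      % ans = 3fee666666666666  (0.11(1100)... truncates, so the stored value is slightly below 0.95)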
EDIT: This question is not a duplicate of Why is 24.0000 not equal to 24.0000 in MATLAB?, which others identified as a possible duplicate. That question deals with the rounded representation of a number; this one asks for the details of the mechanism by which the order of execution of a calculation affects the precision of the result.
Tags: binary, octave, representation
– asked by gciriani, Jan 18 at 21:21
Possible duplicate of Why is 24.0000 not equal to 24.0000 in MATLAB? – Cris Luengo, Jan 18 at 21:28
@Cris-Luengo, it is not. – gciriani, Jan 19 at 2:00
In that case, maybe you can expand on your question; it's not clear to me what you're asking if it's not described in the answer I linked. – Cris Luengo, Jan 19 at 5:14
2 Answers
Matzeri's link to the definitive resource on floating-point arithmetic is indeed the answer to this question. However, for completeness:
octave:34> fprintf("%.80f\n%.80f\n", 0.95, 1 - 0.05)
0.94999999999999995559107901499373838305473327636718750000000000000000000000000000
0.94999999999999995559107901499373838305473327636718750000000000000000000000000000
octave:35> fprintf("%.80f\n%.80f\n", 0.05, 1 - 0.95)
0.05000000000000000277555756156289135105907917022705078125000000000000000000000000
0.05000000000000004440892098500626161694526672363281250000000000000000000000000000
In other words, 0.95 is harder to represent precisely in floating point, so any first step that involves 0.95 (either as an input or as an output) is necessarily less precise than one that only involves 0.05.
Therefore:
1 - 0.05 = 0.95 (imprecise, due to the intrinsic floating-point representation)
(1 - 0.05) - 0.95 = exactly 0 (since both sides are represented identically imprecisely)
vs
1 - 0.95 = imprecise 0.05 (due to the involvement of 0.95 in the calculation)
(imprecise 0.05) - (precise 0.05) = not exactly 0 (due to the difference in precision)
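The bit patterns bear this step-by-step reasoning out; a minimal sketch using the built-in num2hex, assuming standard IEEE 754 doubles:

num2hex(1 - 0.05)   % ans = 3fee666666666666
num2hex(0.95)       % ans = 3fee666666666666  -- bit-identical, so the second subtraction gives exactly 0
num2hex(1 - 0.95)   % ans = 3fa99999999999a0
num2hex(0.05)       % ans = 3fa999999999999a  -- 6 ulps apart: 6 * 2^-57 = 4.1633e-17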
HOWEVER, it should be pointed out that this difference in precision is well below the machine tolerance (as returned by eps: 2.2204e-16 on my machine). Therefore, for all practical purposes, 4.1633e-17 is 0. If the practical point here is testing whether the result of a calculation is effectively 0, then one should always take machine precision into account when dealing with floating-point calculations, or preferably find a way to reformulate the problem so that it avoids the need for equality testing altogether.
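In Octave such a tolerance test might look like this (a minimal sketch; eps is the built-in machine epsilon, and the choice of tolerance is an assumption to be adapted to the problem at hand):

x = 1 - 0.95 - 0.05;
x == 0         % ans = 0 (false): exact equality fails
abs(x) < eps   % ans = 1 (true): below machine tolerance, effectively zero
% More generally, compare with a relative tolerance, e.g.
% abs(a - b) <= k * eps * max(abs(a), abs(b)) for a small constant k.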
– Tasos Papastylianou, answered Jan 22 at 16:14 (edited Jan 22 at 16:47)
I bumped and accepted your answer; you make a good argument. Matzeri's answer just waves at "it must be in here, it's complicated", and he gave no hint as to why the order of execution would cause the disparity. However, I have a counterexample to your line of reasoning, so there might be something else going on. Your argument would conclude that in 1-.35-.65 the number .35 is more precisely represented than .65. Yet num2hex(.35) = 3fd6666666666666 and num2hex(.65) = 3fe4cccccccccccd show the same periodic pattern, and in this case the two calculations give the same result: 1-.35-.65 == 1-.65-.35 is true. – gciriani, Jan 22 at 21:02
It's not really a counterexample. 0.05 is not more precise because it's "smaller"; it just happens to require fewer least-significant digits when represented as floating point. E.g. 0.25 is 'larger' than 0.05, but can be represented exactly as a power of two. 0.35 and 0.65, on the other hand, are equally imprecise. Having said that, you should still not rely on equality testing, even when you know you are using numbers that are 'equally imprecise'. It's simply not reliable. – Tasos Papastylianou, Jan 23 at 0:33
Tasos, I'm not sure what "require fewer least-significant digits" means. Both 0.35 and 0.05 have a periodic representation that in theory requires an infinite number of digits, as shown in their hexadecimal representations. It could be instead that in one case the rounding is done for both numbers in the same direction, and in the other case the rounding is done in opposite directions. I fully agree with you that equality testing should not be done between real numbers. – gciriani, Jan 23 at 15:57
The full explanation: What Every Computer Scientist Should Know About Floating-Point Arithmetic, https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
– matzeri, answered Jan 18 at 22:31
It doesn't really add much to what I wrote. – gciriani, Jan 18 at 22:47