
Historically, finance has been done with integer multiples of some fraction of the currency. This allows for easy balancing of both sides of any transaction without things getting out of hand. It's common to see $0.001 as the minimal unit in the US, for example.

If you used a decimal fractional representation instead, then one transaction of higher precision would contaminate all accounts it touches, making printing exact balances a pain.
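To illustrate the integer approach, here is a minimal Java sketch that keeps every balance as a whole number of mills ($0.001), the minimal unit this thread uses as its example. The Ledger class and its method names are purely illustrative, not from any real library.

    // Sketch: balances as integer multiples of a minimal unit (mills = $0.001).
    public final class Ledger {
        // 1 dollar = 1000 mills; every amount is an exact integer count of mills.
        static final long MILLS_PER_DOLLAR = 1_000;

        // Both sides of a transaction move the same integer amount,
        // so debits and credits always balance exactly.
        static void transfer(long[] balances, int from, int to, long amountMills) {
            balances[from] -= amountMills;
            balances[to]   += amountMills;
        }

        // Formatting is trivial because the precision is fixed.
        // (Sign handling for small negative balances omitted for brevity.)
        static String format(long mills) {
            return String.format("$%d.%03d", mills / MILLS_PER_DOLLAR,
                                 Math.abs(mills % MILLS_PER_DOLLAR));
        }

        public static void main(String[] args) {
            long[] balances = {10_000, 0};           // $10.000 and $0.000
            transfer(balances, 0, 1, 2_500);         // move $2.500
            System.out.println(format(balances[0])); // $7.500
            System.out.println(format(balances[1])); // $2.500
        }
    }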



Talk me through this. As I understand it (and I must be wrong here), a decimal type is an abstraction around an arbitrary-precision integer that represents something like thousandths of a cent? I thought there was an IEEE standard for this, and that it was the standard for all financial transactions. Is that not what BigDecimal is?
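(For reference: Java's BigDecimal is an arbitrary-precision unscaled integer paired with a scale, while the IEEE standard here is presumably IEEE 754-2008's fixed-width decimal floating-point formats, which BigDecimal does not implement. A quick sketch of what BigDecimal actually stores, with values chosen to match the thread's example:)

    import java.math.BigDecimal;

    public class DecimalAnatomy {
        public static void main(String[] args) {
            // A BigDecimal is an arbitrary-precision integer (the "unscaled value")
            // plus an int scale: value = unscaledValue * 10^(-scale).
            BigDecimal balance = new BigDecimal("10.0000005");
            System.out.println(balance.unscaledValue()); // 100000005
            System.out.println(balance.scale());         // 7

            // Nothing in the type pins the scale to, say, 3 (mills): the scale
            // grows as needed, which is the "contamination" described above.
            System.out.println(balance.add(new BigDecimal("0.25"))); // 10.2500005
        }
    }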


So, we lop off some precision and call it correct because displaying extra decimal places would be a "pain"? Not sure I'm following your logic here; if my balance is $10.0000005, wouldn't it be wrong to change that to $10.000?

If you want to do this correctly, then there should never be a balance with any amount finer than the minimal unit. Then it doesn't matter whether you use a BigDecimal or an int.
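One way to enforce that invariant at the boundary, regardless of the underlying type, is to reject anything finer than the minimal unit. A hedged Java sketch, assuming the thread's $0.001 unit (scale 3); the helper name is hypothetical:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class MinimalUnit {
        // Reject any amount that isn't an exact multiple of the minimal unit.
        static BigDecimal requireMinimalUnit(BigDecimal amount) {
            // setScale with RoundingMode.UNNECESSARY throws ArithmeticException
            // if the amount has precision beyond the allowed scale.
            return amount.setScale(3, RoundingMode.UNNECESSARY);
        }

        public static void main(String[] args) {
            System.out.println(requireMinimalUnit(new BigDecimal("10.000")));     // fine
            System.out.println(requireMinimalUnit(new BigDecimal("10.0000005"))); // throws
        }
    }

With that check at the point where amounts enter the system, a balance like $10.0000005 can never arise in the first place, so the display question goes away.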



