
When I was writing trading systems on JVMs, we were much more worried about allocation costs and memory access patterns than we were about GC. The former affect the normal-case latency, while the latter affects the worst case. You still needed to think about and deal with the worst case, but as you say, making a system that doesn't GC often is pretty straightforward.

Now that I'm writing high-throughput systems in Go, I use many of the same techniques I did when writing low-latency systems on the JVM (arena allocation, memory locality, etc.). This is because the other two parts of memory management, allocation and access, continue to be major drivers of performance even though the deallocation step is fundamentally different.
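
To make that concrete, here is a minimal Go sketch of the kind of techniques I mean: a sync.Pool for reusing scratch buffers and a simple slice-backed arena that keeps fixed-size records contiguous. The names (Order, processMessage) and sizes are hypothetical, not from any real system; a production arena would handle growth and reset far more carefully.

    // Two allocation-avoidance patterns in Go: buffer reuse via sync.Pool,
    // and a slice-backed arena for fixed-size records.
    package main

    import (
        "fmt"
        "sync"
    )

    // bufPool recycles byte slices so the hot path doesn't allocate per message.
    var bufPool = sync.Pool{
        New: func() any {
            b := make([]byte, 0, 4096)
            return &b
        },
    }

    // processMessage is a hypothetical hot-path handler that borrows a buffer
    // from the pool instead of allocating a fresh one for every message.
    func processMessage(msg []byte) {
        bp := bufPool.Get().(*[]byte)
        buf := (*bp)[:0]          // reuse existing capacity, reset length
        buf = append(buf, msg...) // work on the data without a new allocation
        _ = buf
        *bp = buf
        bufPool.Put(bp)
    }

    // Order is a hypothetical fixed-size record.
    type Order struct {
        ID    uint64
        Price int64
        Qty   int32
    }

    // arena hands out Orders from one pre-allocated slice, keeping them
    // contiguous in memory (good locality) and amortizing allocation cost.
    type arena struct {
        orders []Order
        next   int
    }

    func newArena(n int) *arena { return &arena{orders: make([]Order, n)} }

    func (a *arena) alloc() *Order {
        if a.next == len(a.orders) {
            a.next = 0 // simplistic wraparound; a real arena would grow or batch-reset
        }
        o := &a.orders[a.next]
        a.next++
        return o
    }

    func main() {
        a := newArena(1024)
        for i := 0; i < 3; i++ {
            o := a.alloc()
            o.ID, o.Price, o.Qty = uint64(i), 100+int64(i), 10
            fmt.Println(*o)
        }
        processMessage([]byte("tick"))
    }

The point of both patterns is the same as on the JVM: the hot path does little or no allocation, so both the per-operation allocation cost and the pressure on the collector stay low, and related records sit next to each other in memory.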

That is to say, GC pause times are not the only thing that matters when it comes to memory management, and the tradeoff the current Go GC makes between deallocation and allocation is a relatively straightforward one.


