
That's not how databases are generally optimized. You're essentially doing random access, extracting a single tuple at a time, from a data structure optimized for locality. Think of it more like a memory-mapped region.
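To make that concrete, here's a rough Python sketch of the two access patterns (the `items` table and sizes are made up for illustration). The engine is built to answer the second form well; the first fights it:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")
    conn.executemany("INSERT INTO items VALUES (?, ?)",
                     [(i, f"v{i}") for i in range(10_000)])

    ids = list(range(0, 10_000, 20))

    # Random-access style: one query (and one round trip) per tuple.
    rows = [conn.execute("SELECT val FROM items WHERE id = ?",
                         (i,)).fetchone()
            for i in ids]

    # Set-based style: one query, so the engine can plan around
    # locality instead of doing a cold lookup per row.
    placeholders = ",".join("?" * len(ids))
    rows = conn.execute(
        f"SELECT val FROM items WHERE id IN ({placeholders})",
        ids).fetchall()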


Nowhere in that article was there anything about hardware-related issues. I believe what you're talking about has to do with how HDDs store DB files on disk, and how it's (much) faster to read such a disk sequentially than to seek for every record. Hence all sorts of optimizations that DBs do, like B+trees, etc. If your dataset fits in memory, it's a non-issue: the engine will never reach for the disk anyway. Even if it doesn't fit in memory (in which case I would start any optimization effort by adding more memory, if possible), it's much less of an issue with SSDs. But again, the article has a more science-y tone than an applied/practical one. In some scenarios (a large dataset on HDDs), random seeks may be an issue.
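If you want to see the seek penalty yourself, a rough sketch (file path and sizes are arbitrary) that reads the same blocks sequentially and then in random order:

    import os, random, time

    PATH = "scratch.bin"
    BLOCK = 4096
    BLOCKS = 25_000  # ~100 MB of data

    # Write a scratch file of random blocks.
    with open(PATH, "wb") as f:
        for _ in range(BLOCKS):
            f.write(os.urandom(BLOCK))

    with open(PATH, "rb") as f:
        t0 = time.perf_counter()
        while f.read(BLOCK):               # sequential: blocks in order
            pass
        seq = time.perf_counter() - t0

        t0 = time.perf_counter()
        for _ in range(BLOCKS):            # random: seek before each read
            f.seek(random.randrange(BLOCKS) * BLOCK)
            f.read(BLOCK)
        rnd = time.perf_counter() - t0

    print(f"sequential: {seq:.2f}s  random: {rnd:.2f}s")
    os.remove(PATH)

Caveat: right after writing, the file likely sits in the OS page cache, so both loops read from memory and the gap mostly vanishes - which is exactly the "fits in memory" point above. To measure the disk itself you'd need to drop the cache (or use a file larger than RAM); expect a large gap on an HDD and a much smaller one on an SSD.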




