
Yeah, now that most of us control the server, is this an option?


Yes. This is what I use on my hobby sites and home network. About a decade ago I was using 16, back when the default changed from 3 to 10. This is with fq_codel+cdg. Most prefer BBR for servers, but I have my own quirky use cases.

    ip route change local 127.0.0.0/8 dev lo initcwnd 128 initrwnd 128
    ip route | grep default | while read p; do ip route change $p initcwnd 32 initrwnd 32; done
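After applying changes like the ones above, it's worth confirming they actually took effect. A minimal sketch (the exact output format depends on your iproute2 version and routes):

    # Show routes and filter for the tuned window parameters;
    # tuned routes should list initcwnd/initrwnd values in their output
    ip route show table all | grep -E 'initcwnd|initrwnd'

Note that these settings do not survive a reboot on their own; they need to be reapplied by a boot script or your distribution's network configuration.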
I would suggest performing significant testing from high-latency connections before making changes on anything important, e.g. iperf3/nuttcp from a VM in another country. This would also be a good time to gather numbers for different congestion-control algorithms and default qdiscs, e.g. net.ipv4.tcp_congestion_control and net.core.default_qdisc [1]
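For the sysctl side, a rough sketch of switching to the fq_codel+cdg combination mentioned above (run as root; cdg ships as a module on most distributions and may need loading first):

    # List the algorithms the running kernel will accept
    sysctl net.ipv4.tcp_available_congestion_control

    # Load the cdg module if it isn't built in
    modprobe tcp_cdg

    # Apply for the current boot only
    sysctl -w net.core.default_qdisc=fq_codel
    sysctl -w net.ipv4.tcp_congestion_control=cdg

To persist across reboots, put the same two settings in a file under /etc/sysctl.d/. The default_qdisc change only affects interfaces (re)initialized afterwards, which is another reason to benchmark before and after rather than assume the new values are live.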

[Edit] I should also add that changing these values may require different methods depending on the distribution. [2]

[1] - https://www.kernel.org/doc/Documentation/sysctl/net.txt

[2] - https://serverfault.com/questions/546523/linux-initcwnd-and-...


Never tried it, but it's listed as an option at https://linux.die.net/man/8/ip


Seems fraught with potential issues. Can I do this just for nginx?


Possibly, but I've never tried this. One could create a separate routing table, match traffic to and from the nginx listening ports with packet-marking rules, and then apply the initcwnd and initrwnd changes only to routes in that table.
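A hypothetical, untested sketch of that idea using fwmark-based policy routing. Everything here is an assumption for illustration: port 443 as the nginx listener, table number 100, and the gateway/device on the default route:

    # Mark packets nginx sends from its listening port (443 assumed)
    iptables -t mangle -A OUTPUT -p tcp --sport 443 -j MARK --set-mark 0x1

    # Send marked traffic through a dedicated routing table (100 is arbitrary)
    ip rule add fwmark 0x1 table 100

    # Mirror your real default route into that table with larger windows
    # (gateway and device below are placeholders for your own defaults)
    ip route add default via 192.0.2.1 dev eth0 table 100 initcwnd 32 initrwnd 32

This would leave the main table untouched, so only nginx traffic sees the tuned windows; whether it behaves well in practice would need the same kind of high-latency testing suggested above.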



