Or read this blog post once and learn three options for running a binary with a non-default glibc:
# Set the dynamic loader path at link time
cc -o hello_c -Wl,--dynamic-linker=/tmp/sysroot/lib/ld-linux-x86-64.so.2 hello.c
# Invoke the alternate dynamic loader directly at run time
/tmp/sysroot/lib/ld-linux-x86-64.so.2 ./hello_c
# Rewrite the dynamic loader path in an existing binary
patchelf --set-interpreter /tmp/sysroot/lib/ld-linux-x86-64.so.2 ./hello_c
If you are designing the program yourself, you can use this trick to make your distribution more portable. You'll have to configure the linker to set RPATH as well, and modify your packaging scripts to also grab any .so files the binary depends on, like libm, libpthread, librt and so on. You'll also want to make sure no library relies on hardcoded data or config files with incompatible settings (like /etc/nsswitch.conf).
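A minimal sketch of what that link line could look like, assuming the bundled glibc and its companion libraries live under /tmp/sysroot/lib (the same illustrative sysroot path as above):

# point both the ELF interpreter and the library search path at the bundled sysroot
cc -o hello_c \
   -Wl,--dynamic-linker=/tmp/sysroot/lib/ld-linux-x86-64.so.2 \
   -Wl,-rpath,/tmp/sysroot/lib \
   hello.c

With -Wl,-rpath,'$ORIGIN/lib' the library lookup even becomes relocatable, though the interpreter path itself can't use $ORIGIN (the kernel resolves it literally), so a relocatable bundle still needs a patchelf step at install time.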
No public Linux distribution will ever accept packages built this way, but it would be a reasonable way to distribute your internal app to your non-homogeneous Linux fleet.
For complex third-party apps it's going to be harder: you'll want some sort of dependency-collector script that follows both load-time and run-time loading, and you'll also want to do something about data and config files. If there are hardcoded paths (as in Python or gcc), you will have to do something about them too.
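A rough sketch of the load-time half of such a collector, reusing the sysroot path from above (the strace line is one crude way to catch dlopen()ed libraries, which never appear in ldd output):

# copy every load-time dependency ldd can resolve into the bundle
ldd ./app | awk '/=> \// {print $3}' | xargs -I{} cp -v {} /tmp/sysroot/lib/
# run-time loading has to be observed: watch which .so files the process opens
strace -f -e trace=openat ./app 2>&1 | grep -o '/[^"]*\.so[^"]*' | sort -u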
That will basically be a custom, non-trivial effort for each program. Definitely possible, but also much more complex than sticking an "apt install" in a Dockerfile.
I have never needed to call `patchelf` for anything. If I saw someone putting `--dynamic-linker` in a call to a C compiler I would assume it's out of scope for me.
There's already like 100 tools I need to know for my job, I don't want low-level OS and C stuff to add another 50 or even another 20.
This is a little bit "Whatever the world was like when I was 20 is perfect, everything before that is too old, everything after that is too new", but, I'm definitely just reaching for Docker for this. Unless I'm running a GUI application or something else that's hard to containerize.
It's kinda funny because you sorta fall into the same trap you accuse the GP of.
Your "Whatever the world was like when I was 20..." quote kinda boils down to "I am fine with the tools I already know how to use and, and I don't want to learn something new".
And then you say... you already have your 100 tools and don't want to learn any others.
It's 3 different one-liners that all accomplish the same goal. The binary itself will tell you what libraries are needed, so I don't get your objections here.
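To illustrate that last point (standard binutils queries, not something from the post): the ELF dynamic section lists the required libraries, and the dynamic symbol table shows which glibc symbol versions the binary actually references:

# libraries the binary was linked against
readelf -d ./hello_c | grep NEEDED
# glibc symbol versions it references, i.e. the minimum glibc it needs
objdump -T ./hello_c | grep -o 'GLIBC_[0-9.]*' | sort -uV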
To be fair, it's not one line: before you get to use that one line, you have to build a whole other glibc as well. Which is often not a particularly fun process.
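For reference, the happy path is only a few commands (the glibc version and paths here are illustrative, and glibc refuses to build inside its own source tree, hence the separate build directory):

mkdir glibc-build && cd glibc-build
../glibc-2.38/configure --prefix=/tmp/sysroot
make -j"$(nproc)"
make install

The unfun part is everything around it: matching kernel headers, a new enough gcc and make, and the occasional toolchain-specific patch.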
It's not like containers are always easy and always work fine and never introduce problems of their own.
If I'm debugging something, the last thing I want to do is spin up a container, make sure I've mounted the paths I need inside it, and/or ferry things in and out of it.
I'd much rather just run the binary under a different interpreter.
Granted, this is only useful if I'm already building my own glibc for some reason. If I'm debugging a problem where someone tells me my app is broken on some version of some other distro, and I've narrowed the problem down to glibc, it's probably easier to spin up a container image of that distro release than to build my own glibc matching its version.
It depends "what" you are doing, or, more to the point, your existing knowledge. Somebody used to "low level" details (I wouldn't call it that) may find this solution simpler and faster than somebody used to containers.
Or just use Go or any other language which produces "really" - as in, no libc - statically linked executables.
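For example, assuming a trivial hello.go (disabling cgo is what removes the libc dependency, since the Go runtime then makes syscalls directly):

CGO_ENABLED=0 go build -o hello hello.go
file hello   # reports "statically linked"

The resulting binary runs on any Linux with a compatible kernel, regardless of the distribution's glibc.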
If you are, for example, distributing "native" (I hate that word) programs, this is a way to only need one version for all (well, almost ;) Linux distributions.
That is a wild conclusion to draw considering the previous paragraph. It's only cheaper and simpler if you value your time at 0.