I recently came across a shell script that tries to detect your OS architecture by running executables embedded within it. There’s one for i386 (x86 platforms) and a few for ARM variants.
test_i386() {
    cat << EOF | $_base64 > /tmp/archtest && chmod a+x /tmp/archtest
f0VMRgEBAQAAAAAAAAAAAAIAAwABAAAA5oAECDQAAACoEAAAAAAAADQAIAAEAC
AAAAAIAECACABAgUBwAAFAcAAAUAAAAAEAAAAQAAANwPAADcnwQI3J8ECDgAAA
. . .
AAAAAAAgAAAAAAAAAFYAAAABAAAAMAAAAAAAAAAUEAAAMgAAAAAAAAAAAAAAAQ
AwAAAAAAAAAAAAAARhAAAF8AAAAAAAAAAAAAAAEAAAAAAAAA
EOF
    /tmp/archtest > /dev/null 2>&1 && arch=i386
}
What is terrible (well, to me at least) is that these executables are huge:
$ ls -l *.bin
-rwxrwx--- 1 root root 4832 Jul 17 06:58 archtest-armv6.bin
-rwxrwx--- 1 root root 4820 Jul 17 06:59 archtest-armv7.bin
-rwxrwx--- 1 root root 4992 Jul 17 06:59 archtest-armv8.bin
-rwxrwx--- 1 root root 4824 Jul 17 06:57 archtest-x86.bin
For one, I’m not sure why inspecting /proc/cpuinfo or running uname -a wouldn’t be sufficient for their needs. And I don’t see why such large binaries are required.
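To illustrate, here is a minimal sketch of how the same detection could be done from uname -m alone, with no embedded binaries at all. The string patterns in the case arms are an assumption on my part about what the relevant kernels report; they are not taken from the original script.

```shell
# Sketch: detect the architecture from uname -m instead of running
# embedded test binaries. The machine-name patterns below are assumed
# values for the targets the script cares about.
arch=unknown
case "$(uname -m)" in
    i?86|x86_64)    arch=i386 ;;   # assumption: x86_64 kernels can run i386 binaries
    armv6*)         arch=armv6 ;;
    armv7*)         arch=armv7 ;;
    aarch64|armv8*) arch=armv8 ;;
esac
echo "$arch"
```

Of course, uname -m tells you what the kernel reports, not what the kernel will actually execute, which may be the one argument for a run-it-and-see probe.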
After all, all you need to check is that the binary executes successfully. Were they trying to test for the presence of a working libc? Nope, because the binaries are statically linked:
$ file ./archtest-x86.bin
./archtest-x86.bin: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, stripped
I think this just adds unnecessary bloat.
There are ways to make smaller binaries.
Now, I am not talking about crazy techniques like using assembly language instead of C, or making a weird ELF that might load only on Linux, but just using normal C and the standard gcc and binutils.
Let’s get started.