I'm looking for an ASLR-like mechanism on Linux that would let me benchmark a distributed application while accounting for incidental memory-layout changes. For background and motivation, see the Stabilizer paper.
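To make the gap concrete: stock ASLR rolls a new layout once, at exec(), and the layout is then fixed for the life of the process. A quick demo (code addresses only move between runs if the binary is built as a PIE, the default on most modern distros):

```c
/* Run this twice: the printed addresses differ between runs (ASLR),
 * but within a single process they never change. For long-running
 * benchmarks I want the layout to be re-rolled repeatedly at runtime,
 * which ASLR alone does not do. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int on_stack;
    void *on_heap = malloc(1);
    printf("code:  %p\nstack: %p\nheap:  %p\n",
           (void *)main, (void *)&on_stack, on_heap);
    free(on_heap);
    return 0;
}
```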
The goal is to recreate the behavior of Stabilizer, but in a distributed environment with a complex deployment. (As far as I can tell, that project is no longer maintained and never made it past the prototype phase.) In particular, the randomization should happen repeatedly at runtime, without invoking the program through a special binary or debugger. On the other hand, I assume full access to the source code and the ability to arbitrarily change/recompile the system under test.
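The crudest thing that fits those constraints is for the process to periodically re-exec itself so the kernel rolls a fresh ASLR layout. A minimal sketch of that strawman (the program name, the --resume flag, and the state-recovery story are all hypothetical; losing in-process state at every epoch is exactly why this is only a strawman):

```c
/* Strawman epoch-based re-randomization: re-exec the running binary
 * every N seconds so the kernel assigns a fresh ASLR layout. No
 * special launcher or debugger is needed, but all in-process state
 * is lost, and code addresses only move for PIE builds (-fPIE -pie). */
#define _GNU_SOURCE
#include <signal.h>
#include <unistd.h>

extern char **environ;

static char arg0[] = "app";       /* hypothetical program name */
static char arg1[] = "--resume";  /* hypothetical state-recovery flag */
static char *new_argv[] = { arg0, arg1, NULL };

static void reexec(int sig) {
    (void)sig;
    /* execve is async-signal-safe; /proc/self/exe is the running binary. */
    execve("/proc/self/exe", new_argv, environ);
}

int main(void) {
    signal(SIGALRM, reexec);
    alarm(300); /* re-roll the layout every 5 minutes (arbitrary) */
    for (;;) {
        /* ... real application work would go here ... */
        pause();
    }
}
```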
By "complex deployment", I mean that there may be different binaries running on different machines, possibly written in different languages. Think Java programs calling into JNI libraries (or other languages using FFI), "main" binaries communicating with sidecar services, etc. The key in all of these cases is that the native code (the target of randomization) is not manually invoked, but is somehow embedded by another program.
I'm only concerned with the randomization aspect (i.e., assume that metrics collection/reporting is handled externally). It's fine if the solution is system-specific (e.g., works only on Linux or only with C++ libraries), but ideally it would be a general pattern that can be applied "anywhere", regardless of the compiler/toolchain/OS.
Side note: layout effects are less of a concern in larger systems thanks to the extra sources of random noise (network, CPU temperature/throttling, IPC overheads, etc.). However, distributed applications are often deployed on "identical" machines with uniform environments, so there's still plenty of room for correlated performance impacts. The goal is just to debias the benchmarking process for decision-making.