My standard way of "smoke testing" Docker images is to run a verify script as the final build step that just checks that all the expected binaries are on the PATH, and tries to reference any expected environment variables while `-u` is set.
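A minimal sketch of what such a verify script could look like; the binary names and the `set -u` expansion trick are illustrative, not any particular image's real contents:

```shell
#!/bin/sh
# Sketch of a build-time smoke test. The binaries and variables checked
# here (sh, ls, env, HOME, PATH) are placeholders for whatever the image
# is actually expected to ship.
set -u  # referencing an unset environment variable becomes a hard error

missing=""
for bin in sh ls env; do
  command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
done

# Under -u, expanding an expected variable aborts the script if it is unset.
required_env="${HOME}${PATH}"

if [ -n "$missing" ]; then
  echo "missing binaries:$missing" >&2
  exit 1
fi
echo "smoke test passed"
```

Running this as the last `RUN` step means a missing binary or unset variable fails `docker build` itself.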
This feels to me like the wrong approach. You shouldn't be trying to test if a file exists or a command was run as a 'unit test' (as a sanity check, maybe, but not a test) - that's testing the implementation.
You should be testing for the desired behavior that having that command run or having that file there was supposed to achieve.
IMHO even the use of containers should probably be considered an implementation detail.
Just package the content into an RPM and let rpm perform these checks at installation time, as usual. The Dockerfile then reduces to a simple `yum install package` command.
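A sketch of that approach, assuming a hypothetical `mypackage` RPM (the base image and package name are placeholders):

```dockerfile
FROM centos:7
# rpm -V verifies installed files (digests, permissions, ownership)
# against the package manifest, so a broken payload fails the build here
# instead of surfacing later at runtime.
RUN yum install -y mypackage \
 && rpm -V mypackage \
 && yum clean all
```

The file-existence checks move into the RPM's manifest, where rpm already knows how to enforce them.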
This could be quite useful for defining the basic things a container should exhibit. A lot of the time you rely on certain files being present, or on a specific entrypoint-and-command combination, when running containers under orchestration with sidecars, volume mapping, etc. So this could help you define what is needed, and why, for others who will change your container build steps in the future.
It could also complement your Goss or InSpec integration tests quite nicely.
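For comparison, a Goss spec expressing the same kind of expectations might look like this (the paths and command are placeholders):

```yaml
# goss.yaml -- illustrative fragment, not a complete spec
file:
  /usr/bin/python3:
    exists: true
command:
  python3 --version:
    exit-status: 0
```

Goss runs these against a live container, which is exactly where build-time structure checks and runtime integration tests divide the work.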
That's awesome. One step closer to treating infrastructure exactly as code.
I'd imagine you could stop a build if the docker image generated doesn't have, say, a valid python installation because someone mistyped a command, and that doing so would be quicker than standing up the image and running an external test against it.
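A build-time check along those lines can be a single script invoked from a `RUN` step; this is a sketch, with Python standing in for whatever runtime the image is supposed to provide:

```shell
#!/bin/sh
# Fail fast inside the image build if the interpreter can't even start,
# rather than standing the image up and probing it externally.
py_status=0
python3 -c 'import sys; sys.exit(0)' 2>/dev/null || py_status=$?

if [ "$py_status" -ne 0 ]; then
  echo "broken python installation (exit $py_status)" >&2
  exit 1
fi
echo "python ok"
```

Because a non-zero exit aborts the `RUN` step, a mistyped install command shows up immediately in the build log.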
This could be very useful given that the number of Docker images keeps growing day by day.