In a side project I've been working on recently, I needed to write unit tests for some methods that return a non-deterministic random result.
In this case there was a method that would simulate an event happening based on some kind of probability factor, e.g. if the factor was 0.2 then the event would happen on average every fifth time the method was called. That is pretty hard to unit test directly, as a naive assertion that the event happened would fail 4/5 of the time. What I needed to test was that if I called the method 100 times with a factor of 0.2, the event would happen roughly 20 times. Of course, sometimes it would be 19 times, sometimes 21. How do you test that?
After lots of messing about trying to mock the random number generator and the random functions, I discovered a much simpler approach. The random number generator on a computer is never actually truly random. It is a pseudo-random number generator (PRNG), which is in turn seeded with some 'randomness', e.g. the timing of interrupts from the disk and network controllers. But given the same seed, the PRNG will always produce the same sequence of numbers when called.
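You can see that property for yourself in a few lines of Python:

```python
import random

# Seed the PRNG with a fixed value and draw three numbers...
random.seed(42)
first = [random.random() for _ in range(3)]

# ...then reseed with the same value and draw again.
random.seed(42)
second = [random.random() for _ in range(3)]

# The two sequences are identical, on every run of this script.
assert first == second
```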
So all I have to do is seed the PRNG with a known value before my test, and the test outcome will always be deterministic.
Below is an example in Python, but most other languages have a similar seed() function.
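Here is a minimal sketch of what that test can look like; `simulate_event` is a hypothetical stand-in for the probability-based method described above, not the actual code from my project:

```python
import random
import unittest


def simulate_event(factor):
    # Hypothetical stand-in for the method under test: returns True
    # with probability `factor`, using the global PRNG.
    return random.random() < factor


class SimulateEventTest(unittest.TestCase):
    def test_event_happens_roughly_20_times_in_100(self):
        # Seed the PRNG with a known value so the 'random' sequence,
        # and therefore the count below, is identical on every run.
        random.seed(42)
        count = sum(simulate_event(0.2) for _ in range(100))
        # Because the seed is fixed, this count never changes between
        # runs, so the test can never flake. A loose range is asserted
        # here since the exact count depends on the chosen seed; once
        # you know it, you could pin it down to the exact value.
        self.assertTrue(10 <= count <= 30)


if __name__ == "__main__":
    unittest.main()
```

Run the test a thousand times and it will pass (or fail) the same way every time, which is exactly what you want from a unit test.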