When a Sawppy community member stepped up and released a ROS Melodic rover software stack, I abandoned my own efforts since there was little point in duplicating work. But in addition to rover control, my project was also a test run for a few other ideas. I used a Jupyter notebook to help work through the math involved in rover geometry, and I started using a Python static analysis tool (wemake-python-styleguide) to keep my coding style consistent.

I also wanted to start writing a test suite in parallel with my code development. It's something I thought would have been useful on past projects, but I never gave it enough focus. Building a test suite robust enough to catch all the bugs always seemed intimidating, and it takes real effort just to climb the learning curve far enough to verify the most basic functionality. What would be the point of that? Surely basic functionality would have been verified before code is pushed to a GitHub repository.

Then I had the misfortune of wasting many hours on a different project, because another developer committed and pushed code without even verifying it was valid Python syntax. My idealism cost me: I spent far too long digging for another explanation, because "surely they at least ran their code." I was wrong. That experience taught me there is value in unit tests that verify even basic functionality.

So I brought up the Python unit test library documentation and started writing a few basic tests for the rover's Ackermann steering geometry calculation. The biggest hurdle was that binary floating point arithmetic is not precise enough for the normal equality comparison, and we don't need that much precision anyway. Calculating Sawppy steering geometry isn't like calculating an orbital trajectory for an actual mission to Mars. For production code on Python 3.5 onwards, there's math.isclose(), added as a result of PEP 485. And for the purposes of Python unit tests, we can use assertAlmostEqual(). How did I generate my test data? With my Jupyter notebook! It's a nice way to verify my wemake-compliant code generates the same output as the original calculations hashed out in the notebook.
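To sketch what those tests look like: the steering_angle() helper below is a hypothetical stand-in for the rover's actual geometry code (the real project's function names and values will differ), but the comparison tools are straight from the standard library.

```python
import math
import unittest


def steering_angle(wheel_x, wheel_y, turn_center_y):
    """Hypothetical stand-in for the rover's Ackermann geometry code:
    steering angle for a wheel at (wheel_x, wheel_y) when the rover
    turns about a point at turn_center_y on its Y axis."""
    return math.atan2(wheel_x, turn_center_y - wheel_y)


# In production code, math.isclose() (PEP 485, Python 3.5+) avoids the
# trap of exact binary floating point equality:
assert 0.1 + 0.2 != 0.3              # exact comparison fails
assert math.isclose(0.1 + 0.2, 0.3)  # tolerance-based comparison passes


class TestSteeringAngle(unittest.TestCase):
    def test_turn_about_known_center(self):
        # Expected value calculated separately, e.g. in a Jupyter notebook.
        expected = math.atan2(100, 150)
        # assertAlmostEqual compares to 7 decimal places by default,
        # which is plenty of precision for steering a rover model.
        self.assertAlmostEqual(steering_angle(100, 50, 200), expected)

    def test_very_large_radius_is_nearly_straight(self):
        # Turning about a point very far away should steer nearly straight.
        self.assertAlmostEqual(steering_angle(100, 50, 1e9), 0.0, places=5)


if __name__ == "__main__":
    unittest.main()
```

Running the file directly invokes unittest.main(), which executes every test method on the TestCase and reports any comparison that falls outside the tolerance.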

And finally, none of this does any good if it never gets executed. If someone is going to commit and push bad code they didn't even try to run, they're certainly not going to run the unit tests, either. What I need is to learn how to make a machine perform the verification for me.