After this spring's State of Simulation Testing survey, it became apparent that one of the biggest gaps in Simulation Testing was the lack of information and tutorials on getting started with the technique. To that end, we've created sim-template, a Simulant project template that captures many of the best practices around simulation testing and forms a solid base on which to build a suite of tests.
In this article, we'll walk you through how to use sim-template to create your own Simulant project, what's included in the suite, and how to use it.
Before you get started, there are a few prerequisite pieces of software you'll need installed on your system, namely:

- Leiningen
- Boot
- Datomic (Simulant records models, plans, and results in a Datomic database)
With the above installed, creating a new Simulant suite is as simple as:
$ lein new sim-template org.my-org/sample-sim
Once run, you're off to the races.
You may be wondering: why both Leiningen and Boot? It comes down to a mix of preference and capability (or rather, simplicity). First, some background: one of the larger driving goals Homegrown Labs has regarding Simulation Testing is to pull out much of the minutiae needed to set up and run a simulation test. To that end, we find Boot's task abstraction a much simpler, easier-to-use approach for extracting project tasks than Leiningen's plugin architecture. In a sim-template generated project, this means that all the lifecycle actions for managing a suite of tests are separate from the project itself, and laid out in a way that we find easy to maintain (see boot-sim).
On the broader note of preference, we find Boot's rationale a refreshing take on project management. As such, it is a secondary goal of ours to drive Boot adoption by using it in practice and suggesting others give it a try. We hope you feel the same, but in the event you don't, we'd be more than open to accepting improvements to sim-template that add Leiningen support.
If you're not intimately familiar with Simulant, the slew of files in your generated project will likely be bewildering. At a high level, Simulant achieves the flexibility it affords by breaking the traditional notion of a test into roughly four disparate pieces: behaviour modelling, test planning, sim invocation, and result validation. These phases act in concert to create and execute randomized behaviour against a target system.
Here's how each of the generated src/ files relates to those phases:

- model.clj - where potential behaviour in the sim is defined.
- test.clj - where a concrete plan of action is generated from the non-deterministic model.
- sim.clj - where a concrete test is executed against an actual system.
- actions.clj - where general, re-usable action behaviour is captured.
- actions/* - where specific actions to take against a target system are defined. (actions/sample.clj defines actions against a sample service, but in practice you will define your own namespaces to test real systems.)
- validations.clj - where previously run sims are validated for correctness, etc.
- repl.clj - a namespace for REPL-based interactions with a simulation.
- db.clj - a small namespace for managing database migrations.
- util.clj - utilities for capturing codebase information from the suite and target systems.
- resources/schema.edn - the database schema for Simulant's internal Datomic database; this is where you'll add new attributes when needed.
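When you do add attributes to resources/schema.edn, they follow Datomic's attribute-map form. As a purely hypothetical sketch (the :action/latency-ms attribute name is invented for illustration; follow the conventions already in your generated file):

```clojure
;; Hypothetical attribute addition to resources/schema.edn.
;; :action/latency-ms is an invented name for illustration only.
[{:db/ident       :action/latency-ms
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one
  :db/doc         "Wall-clock latency recorded for an action, in milliseconds."}]
```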
With the 10,000 ft. view out of the way, let's zoom in on specific phases to get a general overview of how they work, and how you can start building your own simulation tests into the project.
As hinted, the model phase of a simulation test captures the potential behaviour of agents acting in a sim. In the generated project, this is defined in generated-project.model/agent-behavior. This data structure represents the edges of a state-transition graph, along with a maximum delay per transition.
To construct your own models, I'd suggest sketching a state-transition diagram of your agents' behaviour first, then transcribing it to an edge-graph when it comes time for implementation. As an aside, you're not strictly limited to generating models via Markov chains, but they're generally a good starting point for most projects. Since it's data all the way down, you can perform any post-processing necessary to adjust behaviour for your own domain (often, in test).
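To make the edge-graph shape concrete, here is an entirely hypothetical model for a shopper-style agent, expressed as [from-state to-state max-delay-ms] tuples. The state names and exact tuple shape are assumptions for illustration; check the generated model.clj for the shape sim-template actually uses.

```clojure
;; Hypothetical agent-behavior edge-graph.
;; Each entry: [from-state to-state max-delay-ms].
(def agent-behavior
  [[:state/browsing    :state/browsing    5000]
   [:state/browsing    :state/add-to-cart 3000]
   [:state/add-to-cart :state/checkout    10000]
   [:state/checkout    :state/browsing    2000]])
```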
The test namespace of your generated project has the job of creating a concrete plan of action from your random models and other data generators. This strikes an ideal middle ground between stressful randomized behaviour and pre-determined test plans: the plans are random, and they're repeatable.
In practice, this generally means some housekeeping (setting up agents, domain objects, etc.), followed by generating action entities for each state transition your agents will undertake. This is accomplished via the actions-for multimethod, which dispatches based on target state. For the most part, these action entities are static, but if your own actions inject additional random data, this is where you should generate that data. You'll write one actions-for implementation per state in your own model (as well as a corresponding perform-action implementation, covered below).
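For a sense of the pattern, here is a minimal, self-contained sketch of an actions-for multimethod. The state names, entity keys, and transition tuple shape are all invented for illustration; the generated test.clj wires the real thing into Simulant's schema.

```clojure
;; Sketch: actions-for dispatches on the target state of a transition
;; and returns the action entities to transact for that step.
;; Keys and states are illustrative, not Simulant's exact schema.
(defmulti actions-for
  "Given an agent and a [from to at-ms] transition, return action entities."
  (fn [_agent [_from to _at-ms]] to))

(defmethod actions-for :state/add-to-cart
  [agent [_from _to at-ms]]
  [{:action/type  :action.type/add-to-cart
    :action/agent (:agent/id agent)
    :action/at-ms at-ms
    ;; actions that need extra random data generate it here, at plan time
    :action/sku   (rand-nth ["sku-1" "sku-2" "sku-3"])}])

(defmethod actions-for :default
  [agent [_from to at-ms]]
  [{:action/type  to
    :action/agent (:agent/id agent)
    :action/at-ms at-ms}])
```

Because the plan is plain data produced up front, the same generated entities can be replayed against the system run after run.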
The sim phase is where the rubber meets the road. In the sim namespace, you'll find a number of functions, all invoked by run-sim!, that organize and initiate running a test plan against a real target system. Most of these functions will remain static for the life of your project (library fodder, I know), but there are two in particular you'll interact with more often:
setup-actions is simple: it is where you require any action namespaces you write (a limitation of Clojure multimethod definitions); more on that shortly.
setup-system is notably empty in the generated project. Alas, it is the place where you should perform any system initialization your suite requires. Think agent accounts, data baselines, etc.
Now, for actions. In the test phase we generated action entities for a number of action types. In the sim phase, you must provide a perform-action implementation for each of those types. For the sample service, these are provided, but for your own actions, you'll need to write implementations that perform the action and keep records in a similar fashion (this could be its own article, of course).
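To illustrate the shape (and only the shape; this is not Simulant's exact API), here is a self-contained sketch: perform-action dispatches on :action/type, invokes the target system, and records a log entry. The action-log atom and the :log/* keys are stand-ins invented for this example.

```clojure
;; Stand-in for Simulant's action log: a plain atom of log-entry maps.
(def action-log (atom []))

(defmulti perform-action
  "Execute one action entity against the target system and record a log entry."
  :action/type)

(defmethod perform-action :action.type/add-to-cart
  [action]
  (let [start   (System/currentTimeMillis)
        ;; A real implementation would call the target system here
        ;; (e.g. an HTTP request); we fake a successful response.
        result  {:status 200}
        elapsed (- (System/currentTimeMillis) start)]
    (swap! action-log conj
           {:action/type (:action/type action)
            :log/status  (:status result)
            :log/elapsed elapsed})))
```

The important habit is that every implementation records what happened, so the validation phase has raw material to query.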
Finally, the validation phase is where you introspect on action log entries across your sim run to enforce feature and quality invariants. For many types of validations, this is as simple as querying for non-conforming log entries. For more involved validations, you'll find the pattern of querying for raw results, then aggregating and filtering to detect failures, to be the best approach. If you're not intimately familiar with Datomic's Datalog, I'd suggest working through Learn Datalog Today! to develop a good body of knowledge.
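As a sketch of the "query for non-conforming entries" style, a failed-action validation might be phrased along these lines. The :log/action and :log/status attribute names are hypothetical; substitute whatever attributes your own actions actually record.

```clojure
;; Hypothetical Datalog query: find action log entries whose
;; recorded status was not a success. Attribute names are illustrative.
'[:find ?action ?status
  :where
  [?log :log/action ?action]
  [?log :log/status ?status]
  [(not= 200 ?status)]]
```

An empty result set means the invariant held for the run.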
To put all of these phases together, you have two approaches. In experimentation and development, I'd suggest evaluating parts or all of the repl namespace in sequence. For long-term installations, I'd suggest using the commands provided by boot-sim to run through model creation, test creation, sim invocation, and validation, in turn.
All in all, we hope sim-template provides fertile ground for building your own simulation tests. Of course, this article barely scratches the surface of Simulation Testing. In the coming weeks and months, expect more articles, libraries, and tools diving into all of the minutiae of successful simulation testing.
If there are any particular areas that ail you, feel free to drop a line to firstname.lastname@example.org and we'd be happy to help.