Module eqc_statem

This module provides functions for testing operations with side-effects, which are specified via an abstract state machine.

Copyright © Quviq AB, 2004-2017

Version: 1.42.1

Description

This module provides functions for testing operations with side-effects, which are specified via an abstract state machine. The state machine is in turn specified by a client module (which implements the callbacks for eqc_statem). Given such a client, this module can generate and run command sequences, checking that all postconditions are satisfied, and shrinking failing sequences by discarding commands which do not contribute to the failure. Thus it can be used to find minimal command sequences which elicit an unexpected behaviour.

It can also generate parallel test cases from the same client module, which are used to test for race conditions.

Modules which use this one should -include_lib("eqc/include/eqc_statem.hrl") to import the functions that eqc_statem provides.

Since release 1.35 of QuickCheck, the documentation of this module has been updated to the grouped style of writing callbacks. In case you are confronted with a legacy state machine model that you need to understand, please read the old documentation.

Symbolic Commands

Generated test cases are lists of symbolic commands (command()), each of which binds a symbolic variable to the result of a symbolic function call (except, possibly, for a first command which initializes the state, see below). For example,
{set, {var, 1}, {call, erlang, whereis, [a]}}
is a command to set variable 1 to the result of calling erlang:whereis(a). The command generator is specified by COMMAND_command/1. QuickCheck recognizes a set of suffixes, such as _command, to enable compact model notation. For the above symbolic call, one would write the generator:
  whereis_command(_S) ->
    {call, erlang, whereis, [elements([a, b, c, d])]}.
  
Alternatively, one specifies only the argument generator (which in practice works best) and writes a wrapper for the function under test, for example:
  register_args(S) ->
    [elements([a, b, c, d]), pid(S)].
 
  register(Name, Pid) ->
    catch erlang:register(Name, Pid).
  
When a test case is run, symbolic variables (var()) are replaced by the values they were set to, and symbolic calls (call()) are performed. In addition, the postcondition of each command is checked. Running a list of commands produces a result which indicates whether any postcondition failed, whether any command raised an exception, or whether all commands and checks completed successfully.

It is very important to keep in mind the difference between symbolic calls and variables, which are used during test case generation, and the values they represent, which are computed during test execution. We refer to the latter as dynamic values. The reason we use symbolic representations (rather than just working with their dynamic values) is to enable us to display, save, analyze, and above all simplify test cases before they are run.

States

The client module defines an initial state in which test cases begin, and how each command changes that state. For example, if test cases spawn a number of processes, then the state might be a list of the pids that have been spawned. The state is used both during test case generation and during test execution. Obviously, at generation time the actual values returned by commands are not known--they must be represented symbolically. Thus during test generation a symbolic state is constructed---in this case the state might be a list of all earlier spawned processes:
[{var, 1}, {var, 2}, {var, 3}]
if the first three commands all spawned processes. The model for spawning a process needs to update the state; the _next suffix is used to compute the next model state:
  spawn_args(_S) -> [].
 
  spawn() ->
    erlang:spawn(timer, sleep, [5000]).
 
  spawn_next(S, Res, []) ->
    S#state{pids = S#state.pids ++ [Res]}.
  
During test execution the corresponding dynamic state is computed--in this case a list of three pids returned by the first three commands in the test case. Dynamic states always have the same structure as the corresponding symbolic states--the difference is just that symbolic variables and calls are replaced by their values.

Symbolic states are used to generate symbolic commands, or to decide whether a given symbolic command can be included in a test case. Dynamic states are used to check postconditions.
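
As a sketch (assuming the grouped _pre and _post suffixes, and a hypothetical regs field in the model state, neither of which appears in the examples above), the register command could be given a precondition, checked on the symbolic state, and a postcondition, checked against the dynamic result:
  %% Sketch only: the regs field of the model state is an assumption.
  register_pre(S, [Name, _Pid]) ->
    not lists:member(Name, S#state.regs).

  %% Checked against the dynamic result of the wrapped register/2 call.
  register_post(_S, [_Name, _Pid], Res) ->
    Res == true.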

It is not usually necessary to mirror the full state of the system under test in the test case state--there is no need to include more information than is needed to generate and run the command sequences we are interested in.

Callback Functions

Each command (or API function of the software under test) is specified by a number of functions, grouped together by name, which implement generation, preconditions, postconditions, state updates, and so on. These functions are named by appending a suffix (such as _command, _args, _pre, _next, or _post) to the name of the command itself. Apart from the COMMAND_args callback, each of these callbacks is optional; if one is not provided, a default is used.

In addition to the callbacks for a specific command, eqc_statem recognizes a few more general callbacks, such as initial_state/0, which defines the state in which every test case starts, and the optional invariant/1, which is checked after each command during sequential test execution.
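
For instance, a model using the record state from the spawn example above might define the following; the invariant shown here is purely illustrative:
  -record(state, {pids = []}).

  initial_state() ->
    #state{}.

  %% Optional invariant, checked on the dynamic state after each command
  %% of a sequential test. The specific check is an illustration only.
  invariant(S) ->
    lists:all(fun erlang:is_process_alive/1, S#state.pids).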

What Property Should We Test?

This module does not define any properties to test, it only provides functions to make defining such properties easy. A client module will normally contain a property resembling this one, which generates a command sequence using the client state machine, and then tests it:
prop_statem_correct() ->
    ?FORALL(Cmds, commands(client),
      begin {H, S, Res} = run_commands(Cmds),
            pretty_commands(client, Cmds, {H, S, Res},
                            Res == ok)
      end).

However, in any particular case we may wish to add a little to this basic form, for example to collect statistics, to clean up after test execution, or to print more information in the event of failure. It is to allow this flexibility that the properties to test are placed in the client module, rather than in this one.
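
For instance, here is a sketch of the same property extended with statistics collection (using command_names/1 and aggregate) and a hypothetical cleanup/0 helper run after each test:
prop_statem_correct_stats() ->
    ?FORALL(Cmds, commands(client),
      begin
        {H, S, Res} = run_commands(Cmds),
        cleanup(),   %% hypothetical helper that resets the system under test
        aggregate(command_names(Cmds),
                  pretty_commands(client, Cmds, {H, S, Res},
                                  Res == ok))
      end).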

Parallel Testing

Parallel test cases are generated using parallel_commands/1 instead of commands/1. Instead of just one list of commands, a parallel test case consists of a sequential prefix, followed by a list of concurrent tasks. The prefix and each task are lists of commands, just like a sequential test case. A parallel test case is run using run_parallel_commands/2, by first executing the prefix in the normal way, then executing the concurrent tasks in newly spawned processes. The test passes if every command behaves atomically--that is, if the results we actually see could have been produced by some serialization of the concurrent tasks.

Blocking operations can be specified using the optional _blocking callback.
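
As an illustration only (both the take command and the assumed _blocking signature of model state plus argument list are hypothetical, not taken from this documentation), a command could be flagged as potentially blocking like this:
  %% Sketch: take/0 may block whenever the (hypothetical) model queue is empty.
  take_blocking(S, []) ->
    S#state.queue =:= [].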

The properties for parallel testing are very similar to those for sequential testing: we just replace the commands/1 and run_commands/1 functions by their parallel versions. Often, though, the race conditions we are testing for only occur sometimes, and so we need to repeat each test several times using ?ALWAYS to be reasonably sure of provoking it. It is only really necessary to do this during shrinking (since otherwise shrinking is likely to stop before the test case is properly simplified). Using this idea, a property that tests each case once while initially searching for a failure, then ten times at each shrinking step, could be written using the parameter shrinking as

prop_atomic() ->
    ?LET(Shrinking, parameter(shrinking, false),
      ?FORALL(Cmds, parallel_commands(client),
        ?ALWAYS(if Shrinking -> 10; true -> 1 end,
          begin {Seq, Par, Res} = run_parallel_commands(Cmds),
                pretty_commands(client, Cmds, {Seq, Par, Res},
                                Res == ok)
          end))).
Note also that the results from run_parallel_commands/2 are a little different from those of run_commands.

One difference to be aware of is that postconditions and invariants are not checked during a parallel test, they are checked afterwards using the results collected from the concurrent tasks. This means that the postcondition and invariant callbacks cannot inspect the current state of the software under test, when they are used in parallel testing.

linearizable/1 can be used to insert calls to now() into the generated test case, giving more information to be used when deciding whether tests passed or failed.
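
A parallel property using it might look like this sketch, with client the same callback module as in the earlier examples:
prop_atomic_linearizable() ->
    ?FORALL(Cmds, parallel_commands(client),
      begin
        {Seq, Par, Res} = run_parallel_commands(linearizable(Cmds)),
        pretty_commands(client, Cmds, {Seq, Par, Res},
                        Res == ok)
      end).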

Data Types

bug()

bug() = {[generalised_commands()], eqc:counterexample()}

A counterexample together with a generalisation of the counterexample command sequence. Returned by more_bugs/3.

call()

call() = {call, module(), atom(), [expr()]} | {call, module(), atom(), [expr()], metadata()}

A symbolic function call: {call, M, F, Args} represents a call of function F in module M, with arguments Args. {call, M, F, Args, Meta} represents a call with metadata (which may be symbolic). Metadata does not affect the way the call is executed, but is passed (as part of the call) to all the eqc_statem call-backs that take a call as an argument.

command()

command() = {model, module()} | {init, symbolic_state()} | {set, var(), call()}

A symbolic command, which when run either performs a call and binds the result to a variable, or initialises the state of the test case. (The latter appears only when commands/2 is used to generate a command sequence starting in a state other than initial_state()).

dynamic_state()

dynamic_state() = any()

The type used by the client module to represent the state of a test case during test execution. It is the same as symbolic_state(), except that symbolic variables and calls are replaced by their values.

exit()

exit() = {'EXIT', term()}

The type of a caught exception.

expr()

expr() = term()

A symbolic expression, which is evaluated by replacing any symbolic variables (var()) or function calls (call()) in the term by their values.

generalised_commands()

abstract datatype: generalised_commands()

A generalised command sequence used by more_bugs/3 representing a set of command sequences that should be avoided during test case generation.

history()

history() = [history_entry()]

The history of a test execution, with one element for each command that was executed.

history_entry()

abstract datatype: history_entry()

A history entry storing information about the execution of a single command.

metadata()

metadata() = [{atom(), expr()}]

A property list, where each property is a tuple containing a tag and an expression.

parallel_test_case()

parallel_test_case() = {[command()], [[command()]]}

A sequential prefix, and a list of concurrent child tasks.

reason()

reason() = ok | initialization | {precondition, boolean()} | {postcondition, any()} | {invariant, any()} | {exception, exit()} | precondition_failed | {bad_dynamic_precondition, term()} | {bad_next_state, term()} | {bad_precondition, term()}

The reason execution of a command sequence terminated.

shape()

shape() = '?VAR' | '_' | term()

The shape of a command argument. Used by more_bugs/3 to generalise command sequences. If the shape is ?VAR (as a macro, not an atom), the concrete value of the argument is not important, only whether it appears in other places in the command sequence. If the shape is '_' the argument is ignored completely by more_bugs. ?VAR and '_' are allowed to appear as subterms of the shape.

symbolic_state()

symbolic_state() = any()

The type used by the client module to represent the state of a test case during test case generation.

var()

var() = {var, integer()}

A symbolic variable, which is replaced during test execution by the value bound by the corresponding command().

Function Index

apply/3 - Added by the QuickCheck framework to recognize the origin of the apply in test sequences.
call_features/1 - Returns a list of command features exercised in this test.
call_features/2 - Returns a list of features exercised by calls to F (or F/A, if the arity is also specified).
check_command_names/2 - Print distribution of commands and fail if some commands have not been executed.
command_names/1 - Returns a list of the command names used in Cmds.
commands/1 - Generates a list of commands, using the abstract state machine defined in module Mod.
commands/2 - Behaves like commands/1, but generates a list of commands starting in state S.
commands_length/1 - Returns the length of a command sequence Cmds.
conj/1 - Conjoin a list of booleans, returning true if they are all true.
eq/2 - Compare X and Y for equality, returning true if equal, and {X, '/=', Y} otherwise.
get_metadata/2 - Retrieve a field from the metadata component of a call.
history_command/1 - Get the command from a history entry.
history_features/1 - Get the features from a history entry.
history_result/1 - Get the result of the command from a history entry.
history_state/1 - Get the state before the execution of the command from a history entry.
linearizable/1 - Adds calls of eqc_statem:now() before and after each command in the parallel part of the test case, which enables us to observe the order in which calls are made.
more_bugs/1 - Test a state machine property repeatedly, avoiding already discovered bugs.
more_bugs/2 - Test a state machine property repeatedly, avoiding already discovered bugs.
more_bugs/3 - Test a state machine property repeatedly, to find up to N different bugs.
more_commands/2 - Increases the expected length of command sequences generated within Gen by a factor N.
now/0 - Used in combination with linearizable/1.
parallel_commands/1 - Generate a parallel test case from the callbacks in the client module Mod.
parallel_commands/2 - Behaves like parallel_commands/1, but generates a test case starting in the state S.
postconditions/2 - Given the values returned by a list of commands, checks that all pre- and postconditions are satisfied.
pretty_commands/4 - Pretty-prints the execution history of a failing test, showing the calls made, the actual arguments and results, and (optionally) the model states.
pretty_commands/5 - Like pretty_commands/4, but also takes the environment passed to run_commands/2 as an additional parameter.
print_bugs/1 - Print generalised bugs returned by more_bugs/3 in a readable way.
run_commands/1 - Runs a list of commands generated by some client module Mod.
run_commands/2 - Behaves like run_commands/1, but also takes an environment containing values for additional variables that may be referred to in test cases.
run_parallel_commands/1 - Runs a parallel test case, and returns the history of the prefix, each of the parallel tasks, and the overall result.
run_parallel_commands/2 - Runs a parallel test case, but also takes an environment, like run_commands/2.
show_states/1 - Causes a call of pretty_commands/4 or pretty_commands/5 in the property to display the test case states as well as arguments and results.
state_after/1 - Returns the symbolic state after a list of commands is run.
user_features/2 - Modifies the features stored in history() by applying the model callback Mod:user_features([feature()]) -> [feature()], if it is defined.
zip/2 - Zips two lists together, but accepts lists of different lengths, stopping when the shorter list stops.

Function Details

apply/3

apply(M, F, As) -> any()

Equivalent to erlang:apply(M, F, As).

Added by the QuickCheck framework to recognize the origin of the apply in test sequences.

call_features/1

call_features(H::history()) -> [any()]

Returns a list of command features exercised in this test.

call_features/2

call_features(F::atom() | {atom(), int()}, H::history()) -> [any()]

Returns a list of features exercised by calls to F (or F/A, if the arity is also specified).

check_command_names/2

check_command_names(Cmds::[command()], Prop::property()) -> property()

Print distribution of commands and fail if some commands have not been executed. Requires a grouped style state machine. See eqc_group_commands.
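
For example, it is typically wrapped around the rest of the property, as in this sketch (client is a state machine module in the grouped style):

 ?FORALL(Cmds, commands(client),
   begin
     {H, S, Res} = run_commands(Cmds),
     check_command_names(Cmds,
       pretty_commands(client, Cmds, {H, S, Res},
                       Res == ok))
   end)
 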

command_names/1

command_names(Cmds::[command()]) -> [{atom(), atom(), integer()}]

Returns a list of the command names used in Cmds. This function can be used in properties to measure the frequency with which each command actually occurs in the generated test cases, as follows:

 ?FORALL(Cmds, commands(...),
   begin
     {H, S, Res} = run_commands(Cmds),
     aggregate(command_names(Cmds),
               Res == ok)
   end)
 

commands/1

commands(Mod::atom()) -> gen([command()])

Generates a list of commands, using the abstract state machine defined in module Mod. The commands in the sequence are generated by Mod:command/1, starting in the state Mod:initial_state(), and tracking state changes using Mod:next_state/3. Commands are only included in the sequence if their precondition (given by Mod:precondition/2) is satisfied. Sequences are shrunk by discarding commands in such a way that preconditions always hold, and all variables are set before they are used.

commands/2

commands(Mod::atom(), S::symbolic_state()) -> gen([command()])

Behaves like commands/1, but generates a list of commands starting in state S. To ensure the correct state when the commands are run, an {init, S} command is added to the list.

commands_length/1

commands_length(Cmds::[command()]) -> integer()

Returns the length of a command sequence Cmds. This function can be used in properties to measure the length of different generated command sequences, as follows:

 ?FORALL(Cmds, commands(...),
   begin
     {H, S, Res} = run_commands(Cmds),
     measure(length, commands_length(Cmds),
               Res == ok)
   end)
 
For parallel command sequences it adds the lengths of the sequential prefix and the parallel tasks, so that you get the total number of commands generated for each test case.

conj/1

conj(Xs::[any()]) -> any()

Conjoin a list of booleans, returning true if they are all true. If there are any non-true values in the argument, they are returned in the result. Typically used in postconditions, to return a more informative reason for test failure.

eq/2

eq(X, Y) -> true | {X, '/=', Y}

Compares X and Y for equality, returning true if they are equal, and {X, '/=', Y} otherwise. Typically used in postconditions, since it results in a more informative counterexample.
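
For example, a postcondition might combine several checks with conj/1 and eq/2; in this sketch both the read command and the data field of the model state are hypothetical:

  read_post(S, [Key], Res) ->
    conj([eq(Res, proplists:get_value(Key, S#state.data)),
          eq(lists:keymember(Key, 1, S#state.data), true)]).
  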

get_metadata/2

get_metadata(Key::term(), Call::call()) -> false | {ok, term()}

Retrieve a field from the metadata component of a call. This function presupposes that the metadata is organized as a proplist of {Key, Value} pairs.
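
For example (a sketch; the origin tag and the metadata_example/0 function are arbitrary):

  metadata_example() ->
    Call = {call, erlang, whereis, [a], [{origin, docs}]},
    {ok, docs} = get_metadata(origin, Call),
    false = get_metadata(other, Call).
  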

history_command/1

history_command(Eqc_statem_history::history_entry()) -> command()

Get the command from a history entry.

history_features/1

history_features(Eqc_statem_history::history_entry()) -> [any()]

Get the features from a history entry.

history_result/1

history_result(Eqc_statem_history::history_entry()) -> any()

Get the result of the command from a history entry.

history_state/1

history_state(Eqc_statem_history::history_entry()) -> dynamic_state()

Get the state before the execution of the command from a history entry.

linearizable/1

linearizable(TestCase::parallel_test_case()) -> parallel_test_case()

Adds calls of eqc_statem:now() before and after each command in the parallel part of the test case, which enables us to observe the order in which calls are made. This gives us more information with which to determine whether a test passed or failed, which may improve the detection of races. On the other hand, calling now() involves a global synchronization, and so may make race conditions less likely to appear.

more_bugs/1

more_bugs(Prop::eqc:property()) -> eqc_suite:test_suite()

Equivalent to more_bugs(Prop, 20).

Test a state machine property repeatedly, avoiding already discovered bugs.

more_bugs/2

more_bugs(Prop::eqc:property(), N::non_neg_integer() | infinity) -> eqc_suite:test_suite()

Equivalent to more_bugs(Prop, N, []).

Test a state machine property repeatedly, avoiding already discovered bugs.

more_bugs/3

more_bugs(Prop::eqc:property(), N::non_neg_integer() | infinity, Bugs::eqc_suite:test_suite()) -> eqc_suite:test_suite()

Test a state machine property repeatedly, to find up to N different bugs. The arguments are:

Prop - The property to test.
N - The maximum number of bugs to find before stopping.
Bugs - Previously found bugs to avoid. This parameter should be the result of a previous call of more_bugs/3; these bugs are then avoided, so more_bugs/3 searches for other bugs instead.

more_bugs/3 keeps searching for more bugs until N bugs have been found, or the number of tests specified by eqc:numtests/2 have been run with no failure, or the time limit specified by eqc:testing_time/2 has been reached. Time limits apply to the total time spent running more_bugs/3, not the time spent searching for any individual bug.

There are a few mechanisms that control when a command sequence is considered to be "the same" as a previously found bug: inclusion, reordering, and abstraction.

Inclusion

A command sequence Cmds is considered an instance of a bug B if it contains all the commands (up to abstraction) of B in the same order. For example, the sequence [X, A, Y, B] is an instance of the bug [A, B], but not of [Y, A].

Reordering

When a counterexample of the property is found, more_bugs tries to discover reorderings of the command sequence that also provoke the bug. The type generalised_commands() represents a set of reorderings of a command sequence using the primitive Cmd | GCmds (pronounced Cmd anywhere in GCmds) to represent all possible ways of inserting Cmd somewhere in the (generalised) command sequence GCmds. A command sequence is an instance of a bug if it is an instance of any of the reorderings it represents.

For example, the generalised sequence
  X | A
      B
      C
  
represents the four sequences
  [X, A, B, C]
  [A, X, B, C]
  [A, B, X, C]
  [A, B, C, X]
  

Abstraction

In many cases the concrete values of the command arguments are not important for the behaviour of a system, only their identity. For example, the process registry does not care about which names you register processes under, but it does matter if you use the same name in different places (such as in a call to register followed by a call to whereis). To capture this, more_bugs treats command arguments as abstract variables, recording only whether two values are the same, but not their actual values. Thus, f(1), f(1) is considered the same bug as f(2), f(2), but f(1), f(2) is not an instance of either (the former are instances of the latter though).

To change this behaviour you can define a shape callback for a given command. This returns a shape() for each argument of the command, where ?VAR is the default behaviour and '_' means that more_bugs ignores the argument completely. If the concrete value is important the shape callback should return the actual value.

Example

Typical use:
  prop_ok() ->
    ?FORALL(Cmds, commands(?MODULE),
    begin
      HSR={_, _, Res} = run_commands(Cmds),
      pretty_commands(?MODULE, Cmds, HSR, Res == ok)
    end).
 
  bugs() -> eqc_statem:more_bugs(prop_ok()).
  

more_commands/2

more_commands(N::int(), Gen::gen(A)) -> gen(A)

Increases the expected length of command sequences generated within Gen by a factor N.
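
For example, to generate command sequences roughly three times longer than the default (a sketch, using the client module from earlier examples):

 ?FORALL(Cmds, more_commands(3, commands(client)),
   begin
     {H, S, Res} = run_commands(Cmds),
     pretty_commands(client, Cmds, {H, S, Res},
                     Res == ok)
   end)
 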

now/0

now() -> any()

Equivalent to os:timestamp().

Used in combination with linearizable/1.

parallel_commands/1

parallel_commands(Mod::atom()) -> gen(parallel_test_case())

Generate a parallel test case from the callbacks in the client module Mod. These test cases are used to test for race conditions that make the commands in the tests behave non-atomically.

parallel_commands/2

parallel_commands(Mod::atom(), S::symbolic_state()) -> gen(parallel_test_case())

Behaves like parallel_commands/1, but generates a test case starting in the state S.

postconditions/2

postconditions(Cmds::[command()], Vals::[term()]) -> boolean()

Given the values returned by a list of commands, checks that all pre- and postconditions are satisfied. Cmds is a list of commands generated from a state machine model, and Vals the list of values returned by running those commands. This function is useful when the list of commands cannot be run just by calling run_commands/1, for example because the commands represent calls to functions in a different programming language.

pretty_commands/4

pretty_commands(Mod::atom(), Cmds::[command()], HSRes::{history(), dynamic_state(), reason()}, P::property()) -> property()

Pretty-prints the execution history of a failing test, showing the calls made, the actual arguments and results, and (optionally) the model states. Like ?WHENFAIL, pretty_commands takes the rest of the property as its last argument, and constructs a new property that also pretty-prints. The argument Cmds should be the list of commands passed to run_commands/1, and HSRes should be its result. Alternatively (notwithstanding the type signature above) a parallel test case generated by parallel_commands/1, and the result of run_parallel_commands/1, can be passed instead.

The pretty-printing can be customized using eqc_gen:with_parameter/3 to set pretty-printing parameters, for example whether the test case states should be displayed (see show_states/1).

Note that these are parameters to the entire property, and can be specified when eqc:quickcheck/1 is called from the Erlang shell.

Alternatively, the simplest way to show the states when a property fails is to call

  eqc:quickcheck(eqc_statem:show_states(...property...)).
  
from the shell.

pretty_commands/5

pretty_commands(Mod::atom(), Cmds::[command()], HSRes::{history(), dynamic_state(), reason()}, Env::[{atom(), term()}], P::property()) -> property()

Like pretty_commands/4, but also takes the environment passed to run_commands/2 as an additional parameter.

print_bugs/1

print_bugs(X1::eqc_suite:test_suite()) -> ok

Print generalised bugs returned by more_bugs/3 in a readable way.

run_commands/1

run_commands(Cmds::[command()]) -> {history(), dynamic_state(), reason()}

Runs a list of commands generated by some client module Mod. Before each command is run, its precondition is checked by Mod:precondition/2, and after each command is executed, its postcondition is checked by Mod:postcondition/3. The result contains the history() of execution, the state after the last command that was executed successfully, and the reason() execution stopped.

Note: Command sequences generated with QuickCheck < 1.39 do not contain the name of the client module and must be run with run_commands/2.

run_commands/2

run_commands(Cmds::[command()], Env::[{atom(), term()}]) -> {history(), dynamic_state(), reason()}

Behaves like run_commands/1, but also takes an environment containing values for additional variables that may be referred to in test cases. For example, if Env is [{x, 32}], then {var, x} may appear in the commands, and will evaluate to 32. The variable names must be atoms (unlike generated variable names, which are numbers).

Note: For command sequences generated by QuickCheck < 1.39 you can also use this function as run_commands(Mod, Cmds) to specify the client module that generated the command sequence.
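
For example, here is a sketch of a property that makes {var, x} available in generated test cases and passes the same environment on to pretty_commands/5:

 prop_with_env() ->
   Env = [{x, 32}],
   ?FORALL(Cmds, commands(client),
     begin
       {H, S, Res} = run_commands(Cmds, Env),
       pretty_commands(client, Cmds, {H, S, Res}, Env,
                       Res == ok)
     end).
 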

run_parallel_commands/1

run_parallel_commands(Cmds::parallel_test_case()) -> {history(), [history()], reason()}

Runs a parallel test case, and returns the history of the prefix, each of the parallel tasks, and the overall result.

run_parallel_commands/2

run_parallel_commands(Cmds::parallel_test_case(), E::[{atom(), term()}]) -> {history(), [history()], reason()}

Runs a parallel test case, but also takes an environment, like run_commands/2.

show_states/1

show_states(Prop::property()) -> property()

Causes a call of pretty_commands/4 or pretty_commands/5 in the property to display the test case states as well as arguments and results.

state_after/1

state_after(Cmds::[command()]) -> symbolic_state()

Returns the symbolic state after a list of commands is run. The commands are not executed.

user_features/2

user_features(Mod::module(), H::history()) -> history()

Modifies the features stored in the history() by applying the model callback Mod:user_features([feature()]) -> [feature()], if it is defined.

zip/2

zip(Xs::[A], Ys::[B]) -> [{A, B}]

Zips two lists together, but accepts lists of different lengths, stopping when the shorter list stops. This is useful to zip together a list of commands with the history returned by run_commands/1, to display each command together with its result in the output from QuickCheck.
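
For example, a helper that pairs each command with the model state it ran in and the result it produced might look like this sketch:

 annotate(Cmds, H) ->
   [{Cmd, history_state(E), history_result(E)}
    || {Cmd, E} <- zip(Cmds, H)].
 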


Generated by EDoc, Sep 18 2017, 16:17:38.