Test
A module containing functions for creating and managing tests.
A test which has yet to be evaluated. When evaluated, it produces one
or more `Expectation`s.
Return a `Test` that evaluates a single `Expectation`.

    import Test exposing (test)
    import Expect

    test "the empty list has 0 length" <|
        \_ ->
            List.length []
                |> Expect.equal 0
Organizing Tests
Apply a description to a list of tests.

    import Test exposing (describe, test, fuzz)
    import Fuzz exposing (int)
    import Expect

    describe "List"
        [ describe "reverse"
            [ test "has no effect on an empty list" <|
                \_ ->
                    List.reverse []
                        |> Expect.equal []
            , fuzz int "has no effect on a one-item list" <|
                \num ->
                    List.reverse [ num ]
                        |> Expect.equal [ num ]
            ]
        ]
Passing an empty list will result in a failing test, because you either made a mistake or are creating a placeholder.
Run each of the given tests.
    concat [ testDecoder, testSorting ]
Returns a `Test` that is "TODO" (not yet implemented). These tests
always fail, but test runners will only include them in their output if there
are no other failures.
These tests aren't meant to be committed to version control. Instead, use them
when you're brainstorming lots of tests you'd like to write, but you can't
implement them all at once. When you replace `todo` with a real test, you'll be
able to see if it fails without clutter from tests still not implemented. But,
unlike leaving yourself comments, you'll be prompted to implement these tests
because your suite will fail.
    describe "a new thing"
        [ todo "does what is expected in the common case"
        , todo "correctly handles an edge case I just thought of"
        ]
This functionality is similar to "pending" tests in other frameworks, except that a TODO test is considered failing but a pending test often is not.
Returns a `Test` that gets skipped.

Calls to `skip` aren't meant to be committed to version control. Instead, use
it when you want to focus on getting a particular subset of your tests to
pass. If you use `skip`, your entire test suite will fail, even if
each of the individual tests passes. This is to help avoid accidentally
committing a `skip` to version control.

See also `only`. Note that `skip` takes precedence over `only`;
if you use a `skip` inside an `only`, it will still get skipped, and if you use
an `only` inside a `skip`, it will also get skipped.
    describe "List"
        [ skip <|
            describe "reverse"
                [ test "has no effect on an empty list" <|
                    \_ ->
                        List.reverse []
                            |> Expect.equal []
                , fuzz int "has no effect on a one-item list" <|
                    \num ->
                        List.reverse [ num ]
                            |> Expect.equal [ num ]
                ]
        , test "This is the only test that will get run; the other was skipped!" <|
            \_ ->
                List.length []
                    |> Expect.equal 0
        ]
Returns a `Test` that causes other tests to be skipped, and
only runs the given one.

Calls to `only` aren't meant to be committed to version control. Instead, use
them when you want to focus on getting a particular subset of your tests to pass.
If you use `only`, your entire test suite will fail, even if
each of the individual tests passes. This is to help avoid accidentally
committing an `only` to version control.

If you use `only` on multiple tests, only those tests will run. If you
put an `only` inside another `only`, only the outermost `only`
will affect which tests get run.

See also `skip`. Note that `skip` takes precedence over `only`;
if you use a `skip` inside an `only`, it will still get skipped, and if you use
an `only` inside a `skip`, it will also get skipped.
    describe "List"
        [ only <|
            describe "reverse"
                [ test "has no effect on an empty list" <|
                    \_ ->
                        List.reverse []
                            |> Expect.equal []
                , fuzz int "has no effect on a one-item list" <|
                    \num ->
                        List.reverse [ num ]
                            |> Expect.equal [ num ]
                ]
        , test "This will not get run, because of the `only` above!" <|
            \_ ->
                List.length []
                    |> Expect.equal 0
        ]
Fuzz Testing
Takes a function that produces a test and calls it several (usually 100) times, using a randomly-generated input
from a `Fuzzer` each time. This allows you to
test that a property that should always be true is indeed true under a wide variety of conditions. The function also
takes a string describing the test.
These are called "fuzz tests" because of the randomness. You may find them elsewhere called property-based tests, generative tests, or QuickCheck-style tests.
    import Test exposing (fuzz)
    import Fuzz exposing (list, int)
    import Expect

    fuzz (list int) "List.length should never be negative" <|
        -- This anonymous function will be run 100 times, each time with a
        -- randomly-generated fuzzList value.
        \fuzzList ->
            fuzzList
                |> List.length
                |> Expect.atLeast 0
Run a fuzz test using two random inputs.

This is a convenience function that lets you skip calling `Fuzz.pair`.

See `fuzzWith` for an example of writing this using tuples.

    import Test exposing (fuzz2)
    import Fuzz exposing (list, int)
    import Expect

    fuzz2 (list int) int "List.reverse never influences List.member" <|
        \nums target ->
            List.member target (List.reverse nums)
                |> Expect.equal (List.member target nums)
Run a fuzz test using three random inputs.

This is a convenience function that lets you skip calling `Fuzz.triple`.
Run a `fuzz` test with the given `FuzzOptions`.

Note that there is no `fuzzWith2`, but you can always pass more fuzz values in
using `Fuzz.pair` or `Fuzz.triple`,
for example like this:

    import Test exposing (fuzzWith, noDistribution)
    import Fuzz exposing (pair, list, int)
    import Expect

    fuzzWith { runs = 4200, distribution = noDistribution }
        (pair (list int) int)
        "List.reverse never influences List.member" <|
        \(nums, target) ->
            List.member target (List.reverse nums)
                |> Expect.equal (List.member target nums)
Options `fuzzWith` accepts.

runs

The number of times to run each fuzz test. (Default is 100.)

    import Test exposing (fuzzWith, noDistribution)
    import Fuzz exposing (list, int)
    import Expect

    fuzzWith { runs = 350, distribution = noDistribution }
        (list int)
        "List.length should never be negative" <|
        -- This anonymous function will be run 350 times, each time with a
        -- randomly-generated fuzzList value. (It will always be a list of ints
        -- because of (list int) above.)
        \fuzzList ->
            fuzzList
                |> List.length
                |> Expect.atLeast 0
distribution

A way to report/enforce a statistical distribution of your input values.
(Default is `noDistribution`.)

    import Test exposing (fuzzWith, expectDistribution)
    import Test.Distribution
    import Fuzz exposing (list, int)
    import Expect

    fuzzWith
        { runs = 350
        , distribution =
            expectDistribution
                [ ( Test.Distribution.zero, "empty", \xs -> List.length xs == 0 )
                , ( Test.Distribution.atLeast 10, "3+ items", \xs -> List.length xs >= 3 )
                ]
        }
        (list int)
        "Sum > Average"
    <|
        \xs ->
            List.sum xs
                -- `average` is a helper assumed to be defined elsewhere
                |> Expect.greaterThan (average xs)
With `Distribution` you can observe statistics about your fuzz test inputs and
assert that a given proportion of test cases belong to a given class.

- `noDistribution` opts out of these checks.
- `reportDistribution` will collect statistics and report them after the test runs (both when it passes and fails) and so is mostly useful as a temporary setting when creating your fuzzers and tests.
- `expectDistribution` will collect statistics, but only report them (and fail the test) if the `ExpectedDistribution` is not met. Handy for checking your fuzzers are giving interesting and relevant inputs to your tests.
    fuzzWith { runs = 10000, distribution = noDistribution }

    fuzzWith
        { runs = 10000
        , distribution =
            reportDistribution
                [ ( "fizz", \n -> (n |> modBy 3) == 0 )
                , ( "buzz", \n -> (n |> modBy 5) == 0 )
                , ( "even", \n -> (n |> modBy 2) == 0 )
                , ( "odd", \n -> (n |> modBy 2) == 1 )
                ]
        }

    fuzzWith
        { runs = 10000
        , distribution =
            expectDistribution
                [ ( Test.Distribution.atLeast 30, "fizz", \n -> (n |> modBy 3) == 0 )
                , ( Test.Distribution.atLeast 15, "buzz", \n -> (n |> modBy 5) == 0 )
                , ( Test.Distribution.moreThanZero, "fizz buzz", \n -> (n |> modBy 15) == 0 )
                , ( Test.Distribution.zero, "outside range", \n -> n < 1 || n > 20 )
                ]
        }
The `a` type variable in `Distribution a` is the same type as your fuzzed type.
For example, if you're fuzzing a String with `Fuzzer String` and want to see
distribution information for values produced by this fuzzer, you need to provide
`String -> Bool` functions to your `reportDistribution` or `expectDistribution` calls,
which will in turn produce a `Distribution String`.
Opts out of the test input distribution checking.
Collects statistics and reports them after the test runs (both when it passes and fails).
Collects statistics and makes sure the expected distribution is met.
Fails the test and reports the distribution if the expected distribution is not met.

Uses a statistical test to make sure the distribution check doesn't pass or fail
by accident (a flaky test). It will run more tests than specified with the
`runs` config option if needed.

This has the consequence of running more tests the closer your expected distribution is to the true distribution. You can thus minimize and speed up this "making sure" process by requesting a slightly lower percentage of your distribution than needed.

Currently the statistical test is tuned to allow a false positive/negative in 1 in every 10^9 tests.