pub trait Strategy: Debug {
type Tree: ValueTree<Value = Self::Value>;
type Value: Debug;
// Required method
fn new_tree(&self, runner: &mut TestRunner) -> NewTree<Self>;
// Provided methods
fn prop_map<O: Debug, F: Fn(Self::Value) -> O>(self, fun: F) -> Map<Self, F>
where Self: Sized { ... }
fn prop_map_into<O: Debug>(self) -> MapInto<Self, O>
where Self: Sized,
Self::Value: Into<O> { ... }
fn prop_perturb<O: Debug, F: Fn(Self::Value, TestRng) -> O>(
self,
fun: F,
) -> Perturb<Self, F>
where Self: Sized { ... }
fn prop_flat_map<S: Strategy, F: Fn(Self::Value) -> S>(
self,
fun: F,
) -> Flatten<Map<Self, F>>
where Self: Sized { ... }
fn prop_ind_flat_map<S: Strategy, F: Fn(Self::Value) -> S>(
self,
fun: F,
) -> IndFlatten<Map<Self, F>>
where Self: Sized { ... }
fn prop_ind_flat_map2<S: Strategy, F: Fn(Self::Value) -> S>(
self,
fun: F,
) -> IndFlattenMap<Self, F>
where Self: Sized { ... }
fn prop_filter<R: Into<Reason>, F: Fn(&Self::Value) -> bool>(
self,
whence: R,
fun: F,
) -> Filter<Self, F>
where Self: Sized { ... }
fn prop_filter_map<F: Fn(Self::Value) -> Option<O>, O: Debug>(
self,
whence: impl Into<Reason>,
fun: F,
) -> FilterMap<Self, F>
where Self: Sized { ... }
fn prop_union(self, other: Self) -> Union<Self>
where Self: Sized { ... }
fn prop_recursive<R: Strategy<Value = Self::Value> + 'static, F: Fn(BoxedStrategy<Self::Value>) -> R>(
self,
depth: u32,
desired_size: u32,
expected_branch_size: u32,
recurse: F,
) -> Recursive<Self::Value, F>
where Self: Sized + 'static { ... }
fn prop_shuffle(self) -> Shuffle<Self>
where Self: Sized,
Self::Value: Shuffleable { ... }
fn boxed(self) -> BoxedStrategy<Self::Value>
where Self: Sized + 'static { ... }
fn sboxed(self) -> SBoxedStrategy<Self::Value>
where Self: Sized + Send + Sync + 'static { ... }
fn no_shrink(self) -> NoShrink<Self>
where Self: Sized { ... }
}
A strategy for producing arbitrary values of a given type.
fmt::Debug is a hard requirement for all strategies currently due to prop_flat_map(). This constraint will be removed when specialisation becomes stable.
Required Associated Types§
type Tree: ValueTree<Value = Self::Value>
type Value: Debug
Required Methods§
fn new_tree(&self, runner: &mut TestRunner) -> NewTree<Self>
Generate a new value tree from the given runner.
This may fail if there are constraints on the generated value and the generator is unable to produce anything that satisfies them. Any failure is wrapped in TestError::Abort.

This method is generally expected to be deterministic. That is, given a TestRunner with its RNG in a particular state, this should produce an identical ValueTree every time. Non-deterministic strategies do not cause problems during normal operation, but they do break failure persistence, since it is implemented by simply saving the seed used to generate the test case.
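A minimal sketch of driving a strategy by hand (assuming a default TestRunner; illustrative only, not needed for normal proptest! usage):

use proptest::strategy::{Strategy, ValueTree};
use proptest::test_runner::TestRunner;

let mut runner = TestRunner::default();
// Ask an integer-range strategy for a fresh value tree; failure carries a Reason.
let tree = (0..100i32).new_tree(&mut runner).expect("failed to generate value");
// `current()` comes from the ValueTree trait.
println!("generated: {}", tree.current());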
Provided Methods§
fn prop_map<O: Debug, F: Fn(Self::Value) -> O>(self, fun: F) -> Map<Self, F>
where
    Self: Sized,
Returns a strategy which produces values transformed by the function fun.

There is no need (or possibility, for that matter) to define how the output is to be shrunken. Shrinking continues to take place in terms of the source value.

fun should be a deterministic function. That is, for a given input value, it should produce an equivalent output value on every call. Proptest assumes that it can call the function as many times as needed to generate as many identical values as needed. For this reason, F is Fn rather than FnMut.
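A minimal sketch of the combinator in use (assuming the standard prelude): generate decimal strings by mapping an integer range, so shrinking still operates on the underlying integer.

use proptest::prelude::*;

proptest! {
    #[test]
    fn decimal_roundtrips(s in (0u32..1000).prop_map(|n| n.to_string())) {
        // The string always parses back to the integer it came from.
        prop_assert_eq!(s.parse::<u32>().unwrap().to_string(), s);
    }
}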
fn prop_map_into<O: Debug>(self) -> MapInto<Self, O>
Returns a strategy which produces values of type O by transforming Self with Into<O>.

You should always prefer this operation instead of prop_map when you can, as it is both clearer and also currently more efficient.

There is no need (or possibility, for that matter) to define how the output is to be shrunken. Shrinking continues to take place in terms of the source value.
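For example (a sketch, assuming the standard prelude), a u8 strategy can be widened into u64 values while generation and shrinking stay on the u8 source:

use proptest::prelude::*;

// `u8: Into<u64>`, so the conversion needs no closure.
let widened = any::<u8>().prop_map_into::<u64>();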
fn prop_perturb<O: Debug, F: Fn(Self::Value, TestRng) -> O>(
    self,
    fun: F,
) -> Perturb<Self, F>
where
    Self: Sized,
Returns a strategy which produces values transformed by the function fun, which is additionally given a random number generator.

This is exactly like prop_map() except for the addition of the second argument to the function. This allows introducing chaotic variations to generated values that are not easily expressed otherwise, while allowing shrinking to proceed reasonably.

During shrinking, fun is always called with an identical random number generator, so if it is a pure function it will always perform the same perturbation.
§Example
// The prelude also gets us the `Rng` trait.
use proptest::prelude::*;

proptest! {
    #[test]
    fn test_something(a in (0i32..10).prop_perturb(
        // Perturb the integer `a` (range 0..10) to a pair of that
        // integer and another that's ± 10 of it.
        // Note that this particular case would be better implemented as
        // `(0i32..10, -10i32..10).prop_map(|(a, b)| (a, a + b))`
        // but is shown here for simplicity.
        |centre, rng| (centre, centre + rng.gen_range(-10..10))))
    {
        // Test stuff
    }
}
fn prop_flat_map<S: Strategy, F: Fn(Self::Value) -> S>(
    self,
    fun: F,
) -> Flatten<Map<Self, F>>
where
    Self: Sized,
Maps values produced by this strategy into new strategies and picks values from those strategies.
fun is used to transform the values produced by this strategy into other strategies. Values are then chosen from the derived strategies. Shrinking proceeds by shrinking individual values as well as shrinking the input used to generate the internal strategies.
§Shrinking
In the case of test failure, shrinking will not only shrink the output from the combinator itself, but also the input, i.e., the strategy used to generate the output itself. Doing this requires searching the new derived strategy for a new failing input. The combinator will generate up to Config::cases values for this search.

As a result, nested prop_flat_map/Flatten combinators risk exponential run time on this search for new failing values. To ensure that test failures occur within a reasonable amount of time, all of these combinators share a single “flat map regen” counter, and will stop generating new values if it exceeds Config::max_flat_map_regens.
§Example
Generate two integers, where the second is always less than the first, without using filtering:
use proptest::prelude::*;

proptest! {
    #[test]
    fn test_two(
        // Pick integers in the 1..65536 range, and derive a strategy
        // which emits a tuple of that integer and another one which is
        // some value less than it.
        (a, b) in (1..65536).prop_flat_map(|a| (Just(a), 0..a))
    ) {
        prop_assert!(b < a);
    }
}
§Choosing the right flat-map
Strategy has three “flat-map” combinators. They look very similar at first, and can be used to produce superficially identical test results. For example, the following three expressions all produce inputs which are 2-tuples (a, b) where the b component is less than a.
use proptest::prelude::*;
let flat_map = (1..10).prop_flat_map(|a| (Just(a), 0..a));
let ind_flat_map = (1..10).prop_ind_flat_map(|a| (Just(a), 0..a));
let ind_flat_map2 = (1..10).prop_ind_flat_map2(|a| 0..a);
The three do differ, however, in terms of how they shrink.

For flat_map, both a and b will shrink, and the invariant that b < a is maintained. This is a “dependent” or “higher-order” strategy in that it remembers that the strategy for choosing b is dependent on the value chosen for a.

For ind_flat_map, the invariant b < a is maintained, but only because a does not shrink. This is due to the fact that the dependency between the strategies is not tracked; a is simply seen as a constant.

Finally, for ind_flat_map2, the invariant b < a is not maintained, because a can shrink independently of b, again because the dependency between the two variables is not tracked, but in this case the derivation of a is still exposed to the shrinking system.
The use-cases for the independent flat-map variants are pretty narrow. For the majority of cases where invariants need to be maintained and you want all components to shrink, prop_flat_map is the way to go. prop_ind_flat_map makes the most sense when the input to the map function is not exposed in the output and shrinking across strategies is not expected to be useful. prop_ind_flat_map2 is useful for using related values as starting points while not constraining them to that relation.
fn prop_ind_flat_map<S: Strategy, F: Fn(Self::Value) -> S>(
    self,
    fun: F,
) -> IndFlatten<Map<Self, F>>
where
    Self: Sized,
Maps values produced by this strategy into new strategies and picks values from those strategies while considering the new strategies to be independent.
This is very similar to prop_flat_map(), but shrinking will not attempt to shrink the input that produces the derived strategies. This is appropriate for when the derived strategies already fully shrink in the desired way.

In most cases, you want prop_flat_map().

See prop_flat_map() for a more detailed explanation on how the three flat-map combinators differ.
fn prop_ind_flat_map2<S: Strategy, F: Fn(Self::Value) -> S>(
    self,
    fun: F,
) -> IndFlattenMap<Self, F>
where
    Self: Sized,
Similar to prop_ind_flat_map(), but produces 2-tuples with the input generated from self in slot 0 and the derived strategy in slot 1.

See prop_flat_map() for a more detailed explanation on how the three flat-map combinators differ.
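A small sketch (assuming the standard prelude): derive an index strategy from a length, keeping the length in slot 0 of the output tuple.

use proptest::prelude::*;

// Produces (len, index) pairs where index < len when generated;
// during shrinking, `len` can shrink independently of `index`,
// so the relation is not preserved.
let len_and_index = (1usize..100).prop_ind_flat_map2(|len| 0..len);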
fn prop_filter<R: Into<Reason>, F: Fn(&Self::Value) -> bool>(
    self,
    whence: R,
    fun: F,
) -> Filter<Self, F>
where
    Self: Sized,
Returns a strategy which only produces values accepted by fun.

This results in a very naïve form of rejection sampling and should only be used if (a) relatively few values will actually be rejected; (b) it isn’t easy to express what you want by using another strategy and/or map().

There are a lot of downsides to this form of filtering. It slows testing down, since values must be generated but then discarded. Proptest only allows a limited number of rejects this way (across the entire TestRunner). Rejection can interfere with shrinking; particularly, complex filters may largely or entirely prevent shrinking from substantially altering the original value.

Local rejection sampling is still preferable to rejecting the entire input to a test (via TestCaseError::Reject), however, and the default number of local rejections allowed is much higher than the number of whole-input rejections.

whence is used to record where and why the rejection occurred.
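A small sketch (assuming the standard prelude) in which only about one value in ten is rejected, so the filter stays cheap:

use proptest::prelude::*;

proptest! {
    #[test]
    fn skips_multiples_of_ten(
        n in (0i32..1000).prop_filter("multiples of 10 are excluded", |n| n % 10 != 0)
    ) {
        prop_assert!(n % 10 != 0);
    }
}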
fn prop_filter_map<F: Fn(Self::Value) -> Option<O>, O: Debug>(
    self,
    whence: impl Into<Reason>,
    fun: F,
) -> FilterMap<Self, F>
where
    Self: Sized,
Returns a strategy which only produces transformed values where fun returns Some(value) and rejects those where fun returns None.

Using this method is preferable to using .prop_map(..).prop_filter(..).

This results in a very naïve form of rejection sampling and should only be used if (a) relatively few values will actually be rejected; (b) it isn’t easy to express what you want by using another strategy and/or map().

There are a lot of downsides to this form of filtering. It slows testing down, since values must be generated but then discarded. Proptest only allows a limited number of rejects this way (across the entire TestRunner). Rejection can interfere with shrinking; particularly, complex filters may largely or entirely prevent shrinking from substantially altering the original value.

Local rejection sampling is still preferable to rejecting the entire input to a test (via TestCaseError::Reject), however, and the default number of local rejections allowed is much higher than the number of whole-input rejections.

whence is used to record where and why the rejection occurred.
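A small sketch (assuming the standard prelude): reject only the zero case and map everything else directly into NonZeroU32.

use std::num::NonZeroU32;
use proptest::prelude::*;

// `NonZeroU32::new` already returns an Option, so it doubles as the filter-map.
let nonzero = any::<u32>().prop_filter_map("zero is not a NonZeroU32", NonZeroU32::new);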
fn prop_union(self, other: Self) -> Union<Self>
where
    Self: Sized,
Returns a strategy which picks uniformly from self and other.

When shrinking, if a value from other was originally chosen but that value can be shrunken no further, it switches to a value from self and starts shrinking that.

Be aware that chaining prop_union calls will result in a very right-skewed distribution. If this is not what you want, you can call the .or() method on the Union to add more values to the same union, or directly call Union::new().

Both self and other must be of the same type. To combine heterogeneous strategies, call the boxed() method on both self and other to erase the type differences before calling prop_union().
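A small sketch (assuming the standard prelude) of keeping the distribution flat with .or() instead of chaining prop_union:

use proptest::prelude::*;

// All three ranges end up in one Union, so each is picked with equal weight.
let spread = (0i32..10).prop_union(1_000..1_010).or(1_000_000..1_000_010);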
fn prop_recursive<R: Strategy<Value = Self::Value> + 'static, F: Fn(BoxedStrategy<Self::Value>) -> R>(
    self,
    depth: u32,
    desired_size: u32,
    expected_branch_size: u32,
    recurse: F,
) -> Recursive<Self::Value, F>
where
    Self: Sized + 'static,
Generate a recursive structure with self items as leaves.

recurse is applied to various strategies that produce the same type as self with nesting depth n to create a strategy that produces the same type with nesting depth n+1. Generated structures will have a depth between 0 and depth and will usually have up to desired_size total elements, though they may have more. expected_branch_size gives the expected maximum size for any collection which may contain recursive elements and is used to control branch probability to achieve the desired size. Passing too small a value can result in trees vastly larger than desired.

Note that depth only counts branches; i.e., depth = 0 is a single leaf, and depth = 1 is a leaf or a branch containing only leaves.

In practice, generated values usually have a lower depth than depth (but depth is a hard limit) and are almost always under expected_branch_size (though it is not a hard limit), since the underlying code underestimates probabilities.

Shrinking shrinks both the inner values and attempts switching from recursive to non-recursive cases.
§Example
use std::collections::HashMap;
use proptest::prelude::*;

/// Define our own JSON AST type
#[derive(Debug, Clone)]
enum JsonNode {
    Null,
    Bool(bool),
    Number(f64),
    String(String),
    Array(Vec<JsonNode>),
    Map(HashMap<String, JsonNode>),
}

// Define a strategy for generating leaf nodes of the AST
let json_leaf = prop_oneof![
    Just(JsonNode::Null),
    prop::bool::ANY.prop_map(JsonNode::Bool),
    prop::num::f64::ANY.prop_map(JsonNode::Number),
    ".*".prop_map(JsonNode::String),
];

// Now define a strategy for a whole tree
let json_tree = json_leaf.prop_recursive(
    4,  // No more than 4 branch levels deep
    64, // Target around 64 total elements
    16, // Each collection is up to 16 elements long
    |element| prop_oneof![
        // NB `element` is an `Arc` and we'll need to reference it twice,
        // so we clone it the first time.
        prop::collection::vec(element.clone(), 0..16)
            .prop_map(JsonNode::Array),
        prop::collection::hash_map(".*", element, 0..16)
            .prop_map(JsonNode::Map)
    ]);
fn prop_shuffle(self) -> Shuffle<Self>
Shuffle the contents of the values produced by this strategy.
That is, this modifies a strategy producing a Vec, slice, etc, to shuffle the contents of that Vec/slice/etc.

Initially, the value is fully shuffled. During shrinking, the input value will initially be unchanged while the result will gradually be restored to its original order. Once de-shuffling either completes or is cancelled by calls to complicate() pinning it to a particular permutation, the inner value will be simplified.
§Example
use proptest::prelude::*;

static VALUES: &'static [u32] = &[0, 1, 2, 3, 4];

fn is_permutation(orig: &[u32], mut actual: Vec<u32>) -> bool {
    actual.sort();
    orig == &actual[..]
}

proptest! {
    #[test]
    fn test_is_permutation(
        ref perm in Just(VALUES.to_owned()).prop_shuffle()
    ) {
        assert!(is_permutation(VALUES, perm.clone()));
    }
}
fn boxed(self) -> BoxedStrategy<Self::Value>
where
    Self: Sized + 'static,
Erases the type of this Strategy so it can be passed around as a simple trait object.

See also sboxed() if this Strategy is Send and Sync and you want to preserve that information.

Strategies of this type afford cheap shallow cloning via reference counting by using an Arc internally.
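A sketch of the type-erasure pattern mentioned under prop_union (assuming the standard prelude): two strategies with different concrete types are boxed so they share the single type BoxedStrategy<i32>.

use proptest::prelude::*;

let negatives: BoxedStrategy<i32> = (-100i32..0).boxed();
let doubled: BoxedStrategy<i32> = (0i32..100).prop_map(|n| n * 2).boxed();
// Now both sides of the union have the same type.
let either = negatives.prop_union(doubled);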
fn sboxed(self) -> SBoxedStrategy<Self::Value>
Erases the type of this Strategy so it can be passed around as a simple trait object.

Unlike boxed(), this conversion retains the Send and Sync traits on the output.

Strategies of this type afford cheap shallow cloning via reference counting by using an Arc internally.
fn no_shrink(self) -> NoShrink<Self>
where
    Self: Sized,
Wraps this strategy to prevent values from being subject to shrinking.
Suppressing shrinking is useful when testing things like linear approximation functions. Ordinarily, proptest will tend to shrink the input to the function until the result is just barely outside the acceptable range, whereas the original input may have produced a result far outside of it. Since this makes it harder to see what the actual problem is, making the input NoShrink allows learning about inputs that produce more incorrect results.
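A small sketch (assuming the standard prelude): disable shrinking for a floating-point input so a failure would report the originally generated value rather than one shrunk to the edge of the tolerance.

use proptest::prelude::*;

proptest! {
    #[test]
    fn approximation_stays_close(x in (0.0f64..100.0).no_shrink()) {
        // Hypothetical check of an approximation against an exact value.
        let exact = x * x;
        let approx = x.powi(2);
        prop_assert!((approx - exact).abs() <= 1e-9);
    }
}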