package ppx_expect

Cram-like framework for OCaml

Sources

v0.16.1.tar.gz
md5=f2db0b9091a532f2f67a2bf0b8d8b268
sha512=b24df7db91ef0d72f3bf753b526b7a21c8fd06c8814367f4f06e2356bace7353891207dd5f4ccaa60542000fb2c6a179b38ffd9908803274c169e3b95a754ece

Description

Part of Jane Street's PPX rewriters collection.

Published: 11 Dec 2024

README

README.org

#+TITLE: expect-test - a Cram-like framework for OCaml

** Introduction

Expect-test is a framework for writing tests in OCaml, similar to [[https://bitheap.org/cram/][Cram]].
Expect-tests mimic the existing inline tests framework with the =let%expect_test= construct.
The body of an expect-test can contain output-generating code, interleaved with =%expect= extension
expressions to denote the expected output.

When run, these tests will pass iff the output matches what was expected. If a test fails, a
corrected file with the suffix ".corrected" will be produced with the actual output, and the
=inline_tests_runner= will output a diff.

Here is an example Expect-test program, say in =foo.ml=:

#+begin_src ocaml
open Core

let%expect_test "addition" =
  printf "%d" (1 + 2);
  [%expect {| 4 |}]
#+end_src

When the test is run (as part of =inline_tests_runner=), =foo.ml.corrected= will be produced with the
contents:

#+begin_src ocaml
open Core

let%expect_test "addition" =
  printf "%d" (1 + 2);
  [%expect {| 3 |}]
#+end_src

=inline_tests_runner= will also output the diff:

#+begin_src
---foo.ml
+++foo.ml.corrected
File "foo.ml", line 5, characters 0-1:
  open Core

  let%expect_test "addition" =
    printf "%d" (1 + 2);
-|  [%expect {| 4 |}]
+|  [%expect {| 3 |}]
#+end_src

Diffs will be shown in color if the =-use-color= flag is passed to the test runner executable.

** Expects reached from multiple places

An =[%expect]= can be encountered multiple times at runtime, e.g. in a
functor or in a function:

#+begin_src ocaml
let%expect_test _ =
  let f output =
    print_string output;
    [%expect {| hello world |}]
  in
  f "hello world";
  f "hello world";
;;
#+end_src

The =[%expect]= should capture the exact same output (i.e. up to string equality) at every
invocation. In particular, this does *not* work:

#+begin_src ocaml
let%expect_test _ =
  let f output =
    print_string output;
    [%expect {| \(foo\|bar\) (regexp) |}]
  in
  f "foo";
  f "bar";
;;
#+end_src

** Output matching

Matching is done on a line-by-line basis. If any output line fails to
match its expected output, the expected line is replaced with the
actual line in the final output.
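
For illustration, here is a sketch (not from the upstream test suite) of a
two-line expectation in which only the second line fails to match:

#+begin_src ocaml
let%expect_test "line by line" =
  print_endline "first line";
  print_endline "second line";
  [%expect {|
    first line
    second lime
  |}]
#+end_src

In the corrected file, =first line= is kept as written and only the
mismatched line is replaced with =second line=.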

*** Whitespace

Inside =%expect= nodes, whitespace around the pattern is ignored, and
the user is free to add any amount of it for formatting purposes. The
same goes for the actual output.

Ignoring surrounding whitespace makes it possible to write nicely
formatted expectations and to focus only on matching the bits that
matter.

To do this, ppx_expect strips patterns and outputs by taking the
smallest rectangle of text that contains the non-whitespace
material. All end-of-line whitespace is ignored as well. So, for
instance, the following three expectations are all equivalent:

#+begin_src ocaml
  print blah;
  [%expect {|
abc
defg
  hij|}]

  print blah;
  [%expect {|
                abc
                defg
                  hij
  |}]

  print blah;
  [%expect {|
    abc
    defg
      hij
  |}]
#+end_src

However, the last one is nicer to read.

For the rare cases where one does care about the exact output,
ppx_expect provides the =%expect_exact= extension point, which only
succeeds when the untouched output is exactly equal to the untouched
pattern.
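
As a minimal sketch, the following test would pass with =%expect_exact=
only because the pattern reproduces the output byte for byte, including
the two leading spaces and the absence of a trailing newline:

#+begin_src ocaml
let%expect_test "exact output" =
  print_string "  two leading spaces";
  [%expect_exact {|  two leading spaces|}]
#+end_src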

When producing a correction, ppx_expect tries to respect the
formatting of the original pattern as much as possible.

** Output capture

The extension point =[%expect.output]= returns a =string= with the output that
would have been matched had an =[%expect]= node been there instead.

An idiom for testing non-deterministic output is to capture the output using
=[%expect.output]= and either post-process it or inspect it manually, e.g.,

#+BEGIN_SRC ocaml
show_process ();
let pid_and_exit_status = [%expect.output] in
let exit_status = discard_pid pid_and_exit_status in
print_endline exit_status;
[%expect {| 1 |}]
#+END_SRC

This is preferred over output patterns (see below).
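
As a more self-contained sketch (assuming the =str= library is available
and linked), one might scrub a nondeterministic value from the captured
output before matching:

#+begin_src ocaml
let%expect_test "scrubbed output" =
  Printf.printf "pid=%d exited with status 0" 4242;
  let output = [%expect.output] in
  (* Replace the concrete pid with a stable placeholder before matching. *)
  let scrubbed =
    Str.global_replace (Str.regexp "pid=[0-9]+") "pid=PID" output
  in
  print_string scrubbed;
  [%expect {| pid=PID exited with status 0 |}]
#+end_src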

** Integration with Async, Lwt or other cooperative libraries

If you are writing expect tests for a system using Async, Lwt, or any
other cooperative threading library, you need some preparation so that
everything works well. For instance, you probably need to flush some
=stdout= channel. The expect test runtime takes care of flushing
=Stdlib.stdout=, but it doesn't know about =Async.Writer.stdout=,
=Lwt_io.stdout=, or anything else.

To deal with this, expect_test provides hooks in the form of a
configuration module, =Expect_test_config=. The default module in
scope defines no-op hooks that the user can override. =Async=
redefines this module, so when =Async= is opened you can write
Async-aware expect tests.

In addition to =Async.Expect_test_config=, there is an
alternative, =Async.Expect_test_config_with_unit_expect=.  That is
easier to use than =Async.Expect_test_config= because =[%expect]= has
type =unit= rather than =unit Deferred.t=.  So one can write:

#+begin_src ocaml
[%expect foo];
#+end_src

rather than:

#+begin_src ocaml
let%bind () = [%expect foo] in
#+end_src

=Expect_test_config_with_unit_expect= arrived in 2019-06.  We hope to
transition from =Expect_test_config= to
=Expect_test_config_with_unit_expect=, eventually renaming the latter
as the former.

*** Lwt

This is what you would need to write expect tests with Lwt:

#+begin_src ocaml
module Lwt_io_run = struct
  type 'a t = 'a Lwt.t
end

module Lwt_io_flush = struct
  type 'a t = 'a Lwt.t
  let return x = Lwt.return x
  let bind x ~f = Lwt.bind x f
  let to_run x = x
end

module Expect_test_config :
  Expect_test_config_types.S
    with module IO_run = Lwt_io_run
     and module IO_flush = Lwt_io_flush = struct
  module IO_run = Lwt_io_run
  module IO_flush = Lwt_io_flush
  let run x = Lwt_main.run (x ())
  let upon_unreleasable_issue = `CR
end
#+end_src

** Comparing Expect-test and unit testing (e.g. =let%test_unit=)

The simple example above can be easily represented as a unit test:

#+begin_src ocaml
let%test_unit "addition" = [%test_result: int] (1 + 2) ~expect:4
#+end_src

So, why would one use Expect-test rather than a unit test?  There are
several differences between the two approaches.

With a unit test, one must write code that explicitly checks that the
actual behavior agrees with the expected behavior.  =%test_result= is
often a convenient way of doing that, but even using that requires:

- creating a value to compare
- writing the type of that value
- having a comparison function on the value
- writing down the expected value

With Expect-test, we can simply add print statements whose output gives
insight into the behavior of the program, and blank =%expect=
attributes to collect the output.  We then run the program to see if
the output is acceptable, and if so, *replace* the original program
with its output.  E.g. we might first write our program like this:

#+begin_src ocaml
let%expect_test _ =
  printf "%d" (1 + 2);
  [%expect {||}]
#+end_src

The corrected file would contain:

#+begin_src ocaml
let%expect_test _ =
  printf "%d" (1 + 2);
  [%expect {| 3 |}]
#+end_src

With Expect-test, we only have to write code that prints things that we
care about.  We don't have to construct expected values or write code
to compare them.  We get comparison for free by using diff on the
output.  And a good diff (e.g. patdiff) can make understanding
differences between large outputs substantially easier, much easier
than typical unit-testing code that simply states that two values
aren't equal.

Once an Expect-test program produces the desired expected output and we
have replaced the original program with its output, we now
automatically have a regression test going forward.  Any undesired
change to the output will lead to a mismatch between the source
program and its output.

With Expect-test, the source program and its output are interleaved.  This
makes debugging easier, because we do not have to jump between source
and its output and try to line them up.  Furthermore, when there is a
mismatch, we can simply add print statements to the source program and
run it again.  This gives us source and output interleaved, with the
debug messages appearing in the right place.  We might even insert
additional empty =%expect= attributes to collect debug messages.
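
For instance, here is a sketch of what that can look like while chasing
down an unexpected value:

#+begin_src ocaml
let%expect_test "debugging a mismatch" =
  let x = 1 + 2 in
  (* Temporary debug print; the blank expect below collects its output,
     and the corrected file shows the captured value in place. *)
  Printf.printf "intermediate: %d\n" x;
  [%expect {||}];
  Printf.printf "final: %d\n" (x * 2);
  [%expect {| final: 6 |}]
#+end_src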

** Implementation

Every =%expect= node in an Expect-test program becomes a point at which
the program output is captured. Once the program terminates, the
captured outputs are matched against the expected outputs, and interleaved with
the original source code to produce the corrected file. Trailing output is appended in a
new =%expect= node.
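
For instance, in this sketch the final print has no =%expect= after it;
when the test runs, the corrected file ends with a new node holding the
trailing output:

#+begin_src ocaml
let%expect_test "trailing output" =
  print_string "first";
  [%expect {| first |}];
  print_string "trailing"
  (* The corrected file appends: [%expect {| trailing |}] *)
#+end_src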

** Build system integration

Follow the same rules as for [[https://github.com/janestreet/ppx_inline_test][ppx_inline_test]]. Just make sure to
include =ppx_expect.evaluator= as a dependency of the test runner. The
[[https://github.com/janestreet/jane-street-tests][Jane Street tests]] repository contains a few working examples using oasis.
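
With dune, a typical setup looks roughly like the following (the library
name =foo= is a placeholder); dune wires in the inline test runner
automatically:

#+begin_src
(library
 (name foo)
 (inline_tests)
 (preprocess (pps ppx_expect)))
#+end_src

Running =dune runtest= then executes the expect tests, and =dune promote=
replaces the source files with their corrected versions.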

Dependencies (8)

  1. re >= "1.8.0"
  2. ppxlib >= "0.28.0"
  3. dune >= "2.0.0"
  4. stdio >= "v0.16" & < "v0.17"
  5. ppx_inline_test >= "v0.16" & < "v0.17"
  6. ppx_here >= "v0.16" & < "v0.17"
  7. base >= "v0.16" & < "v0.17"
  8. ocaml >= "4.14.0"

Dev Dependencies

None

Used by (192)

  1. ansi-parse >= "0.4.0"
  2. api-watch
  3. arrayjit
  4. autofonce
  5. autofonce_config
  6. autofonce_core
  7. autofonce_lib
  8. autofonce_m4
  9. autofonce_misc
  10. autofonce_patch
  11. autofonce_share
  12. bio_io >= "0.2.1"
  13. bitpack_serializer
  14. bitwuzla
  15. bitwuzla-c
  16. bitwuzla-cxx
  17. camelot >= "1.3.0"
  18. charInfo_width
  19. combinaml
  20. combinat < "3.0"
  21. ctypes_stubs_js
  22. cudajit
  23. dap
  24. data-encoding >= "0.6"
  25. dkml-install-runner < "0.5.3"
  26. dream
  27. dream-pure
  28. drom
  29. drom_lib
  30. drom_toml
  31. dune-action-plugin
  32. electrod >= "0.1.6" & < "0.2.1"
  33. ez_cmdliner >= "0.2.0"
  34. ez_config >= "0.2.0"
  35. ez_file >= "0.2.0"
  36. ez_hash < "0.5.3"
  37. ez_opam_file
  38. ez_search
  39. ez_subst
  40. feather >= "0.2.0"
  41. fiat-p256 < "0.2.0"
  42. fiber >= "3.7.0"
  43. fiber-lwt
  44. GT >= "0.4.0" & < "0.5.0"
  45. gccjit
  46. graphv
  47. graphv_core
  48. graphv_core_lib
  49. graphv_font
  50. graphv_font_js
  51. graphv_font_stb_truetype
  52. graphv_gles2
  53. graphv_gles2_native
  54. graphv_gles2_native_impl
  55. graphv_webgl
  56. graphv_webgl_impl
  57. header-check
  58. hl_yaml
  59. http < "6.0.0"
  60. http-cookie >= "4.0.0"
  61. http-multipart-formdata >= "2.0.0"
  62. hyper
  63. imguiml
  64. influxdb >= "0.2.0"
  65. js_of_ocaml >= "3.10.0"
  66. js_of_ocaml-compiler >= "3.4.0"
  67. js_of_ocaml-lwt >= "3.10.0"
  68. js_of_ocaml-ocamlbuild >= "3.10.0" & < "5.0"
  69. js_of_ocaml-ppx >= "3.10.0"
  70. js_of_ocaml-ppx_deriving_json >= "3.10.0"
  71. js_of_ocaml-toplevel >= "3.10.0"
  72. js_of_ocaml-tyxml >= "3.10.0"
  73. kdl
  74. knights_tour
  75. kqueue >= "0.2.0"
  76. learn-ocaml >= "0.16.0"
  77. learn-ocaml-client >= "0.16.0"
  78. libbpf
  79. little_logger
  80. loga >= "0.0.5"
  81. lsp < "1.8.0" | >= "1.11.3" & < "1.20.0"
  82. m_tree
  83. merge-fmt >= "0.3"
  84. mlt_parser = "v0.16.0"
  85. module-graph
  86. neural_nets_lib
  87. nloge
  88. nsq >= "0.4.0" & < "0.5.2"
  89. OCanren-ppx >= "0.3.0~alpha1"
  90. ocaml-lsp-server >= "1.15.0-4.14" & < "1.20.0~5.3preview" | = "1.20.0-4.14" | >= "1.20.1-4.14"
  91. ocaml-protoc-plugin >= "1.0.0"
  92. ocluster >= "0.2"
  93. ocp-search
  94. ocplib_stuff >= "0.3.0"
  95. octez-libs
  96. octez-protocol-009-PsFLoren-libs
  97. octez-protocol-010-PtGRANAD-libs
  98. octez-protocol-011-PtHangz2-libs
  99. octez-protocol-012-Psithaca-libs
  100. octez-protocol-013-PtJakart-libs
  101. octez-protocol-014-PtKathma-libs
  102. octez-protocol-015-PtLimaPt-libs
  103. octez-protocol-016-PtMumbai-libs
  104. octez-protocol-017-PtNairob-libs
  105. octez-protocol-018-Proxford-libs
  106. octez-protocol-019-PtParisB-libs
  107. octez-protocol-020-PsParisC-libs
  108. octez-protocol-alpha-libs
  109. octez-shell-libs
  110. odate >= "0.6"
  111. odoc >= "2.0.0"
  112. odoc-parser
  113. omd >= "2.0.0~alpha3"
  114. opam-bin >= "0.9.5"
  115. opam-check-npm-deps
  116. opam_bin_lib >= "0.9.5"
  117. owork
  118. passage
  119. poll
  120. pp
  121. ppx_deriving_jsonschema >= "0.0.2"
  122. ppx_jane = "v0.16.0"
  123. ppx_minidebug
  124. ppx_protocol_conv_json >= "5.0.0"
  125. ppx_relit >= "0.2.0"
  126. ppx_ts
  127. psmt2-frontend >= "0.3.0"
  128. pvec
  129. pyml_bindgen
  130. pythonlib >= "v0.16.0"
  131. res_tailwindcss
  132. routes >= "2.0.0"
  133. safemoney >= "0.1.1"
  134. sarif
  135. sedlex >= "3.1"
  136. seqes < "0.2"
  137. solidity-alcotest
  138. solidity-common
  139. solidity-parser
  140. solidity-test
  141. solidity-typechecker
  142. spawn < "v0.9.0" | >= "v0.14.0"
  143. tdigest >= "2.2.0"
  144. tezos-benchmark
  145. tezos-client-009-PsFLoren >= "14.0"
  146. tezos-client-010-PtGRANAD >= "14.0"
  147. tezos-client-011-PtHangz2 >= "14.0"
  148. tezos-client-012-Psithaca >= "14.0"
  149. tezos-client-013-PtJakart >= "14.0"
  150. tezos-client-014-PtKathma
  151. tezos-client-015-PtLimaPt
  152. tezos-client-016-PtMumbai
  153. tezos-client-017-PtNairob
  154. tezos-client-alpha >= "14.0"
  155. tezos-injector-013-PtJakart
  156. tezos-injector-014-PtKathma
  157. tezos-injector-015-PtLimaPt
  158. tezos-injector-016-PtMumbai
  159. tezos-injector-alpha
  160. tezos-layer2-utils-016-PtMumbai
  161. tezos-layer2-utils-017-PtNairob
  162. tezos-micheline >= "14.0"
  163. tezos-shell >= "15.0"
  164. tezos-smart-rollup-016-PtMumbai
  165. tezos-smart-rollup-017-PtNairob
  166. tezos-smart-rollup-alpha
  167. tezos-smart-rollup-layer2-016-PtMumbai
  168. tezos-smart-rollup-layer2-017-PtNairob
  169. tezos-stdlib >= "14.0"
  170. tezos-tx-rollup-013-PtJakart
  171. tezos-tx-rollup-014-PtKathma
  172. tezos-tx-rollup-015-PtLimaPt
  173. tezos-tx-rollup-alpha
  174. toplevel_expect_test = "v0.16.0"
  175. torch < "v0.17.0"
  176. travesty >= "0.3.0" & < "0.6.0" | >= "0.7.0"
  177. um-abt
  178. wtr >= "2.0.0"
  179. wtr-ppx
  180. yocaml >= "2.0.0"
  181. yocaml_cmarkit
  182. yocaml_eio
  183. yocaml_git >= "2.0.0"
  184. yocaml_jingoo >= "2.0.0"
  185. yocaml_mustache >= "2.0.0"
  186. yocaml_omd
  187. yocaml_otoml
  188. yocaml_runtime
  189. yocaml_syndication >= "2.0.0"
  190. yocaml_unix >= "2.0.0"
  191. yocaml_yaml >= "2.0.0"
  192. zanuda

Conflicts (1)

  1. js_of_ocaml-compiler < "5.8"