package fehu
Reinforcement learning framework for OCaml
Module Fehu_envs
Built-in reinforcement learning environments.
This module provides a collection of ready-to-use environments for testing algorithms, learning the Fehu API, and benchmarking. All environments follow the standard Fehu.Env interface and are fully compatible with wrappers, vectorization, and training utilities.
Available Environments
- Random_walk: One-dimensional random walk with continuous state space
- Grid_world: Two-dimensional grid navigation with discrete states and obstacles
- Cartpole: Classic cart-pole balancing problem
- Mountain_car: Drive up a steep hill using momentum
Usage
Create an environment with a Rune RNG key:
let rng = Rune.Rng.create () in
let env = Fehu_envs.Random_walk.make ~rng () in
let obs, info = Fehu.Env.reset env ()
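A full interaction loop has the same shape. The sketch below is illustrative only: it assumes Fehu.Env.step returns a transition record with observation, reward, terminated, and truncated fields, and that a random action can be drawn with Fehu.Space.sample from Fehu.Env.action_space; treat these names as assumptions rather than confirmed API.
let rng = Rune.Rng.create () in
let env = Fehu_envs.Random_walk.make ~rng () in
let _obs, _info = Fehu.Env.reset env () in
(* Roll out one episode with random actions and accumulate the return.
   The step/transition field names and Space.sample are assumptions. *)
let rec rollout acc =
  let action = Fehu.Space.sample (Fehu.Env.action_space env) in
  let t = Fehu.Env.step env action in
  let acc = acc +. t.Fehu.Env.reward in
  if t.Fehu.Env.terminated || t.Fehu.Env.truncated then acc
  else rollout acc
in
Printf.printf "Episode return: %f\n" (rollout 0.)
If the actual transition type differs, only the field accesses inside rollout need to change.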
Environments support rendering for visualization:
let env = Fehu_envs.Grid_world.make ~rng () in
let obs, _ = Fehu.Env.reset env () in
match Fehu.Env.render env with
| Some output -> print_endline output
| None -> ()
Environment Selection Guide
Use Random_walk for:
- Testing continuous observation spaces
- Debugging value-based algorithms
- Quick prototyping with minimal complexity
Use Grid_world for (see the sketch after this list):
- Learning discrete state/action navigation
- Testing path planning or exploration strategies
- Demonstrating obstacle avoidance
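For example, to eyeball obstacle layouts before testing an exploration strategy, reset Grid_world a few times and print each rendering. This sketch uses only the make, reset, and render calls shown above.
let rng = Rune.Rng.create () in
let env = Fehu_envs.Grid_world.make ~rng () in
(* Reset a few episodes and print the rendered grid for each. *)
for episode = 1 to 3 do
  let _obs, _info = Fehu.Env.reset env () in
  Printf.printf "Episode %d\n" episode;
  (match Fehu.Env.render env with
   | Some output -> print_endline output
   | None -> ())
done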
Modules
- Random_walk: One-dimensional random walk environment.
- Grid_world: Two-dimensional grid world with goal and obstacles.
- Mountain_car: Mountain car environment; drive up a steep hill using momentum.
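Cartpole and Mountain_car are constructed in the same way as the environments shown earlier; the snippet below assumes they expose the same make ~rng () constructor as Random_walk and Grid_world.
let rng = Rune.Rng.create () in
(* Constructors assumed to mirror Random_walk.make and Grid_world.make above. *)
let cartpole = Fehu_envs.Cartpole.make ~rng () in
let mountain_car = Fehu_envs.Mountain_car.make ~rng () in
ignore (cartpole, mountain_car)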