Writing Planners

Using the PDDL.jl interface, it is straightforward to implement planning algorithms that solve problems in PDDL domains. Since all domain- and implementation-specific details are encapsulated by the interface, the same algorithm can operate across multiple domains, and even across multiple representations of the same domain (e.g. interpreted vs. compiled).
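
For instance, the following sketch (assuming a domain and problem have already been loaded, e.g. with PlanningDomains.jl as in the example at the end of this tutorial) exercises the same interface functions on both an interpreted and a compiled representation:

# Interface calls on the default interpreted representation
state = initstate(domain, problem)
acts = collect(available(domain, state))   # assumes at least one action is available
next = transition(domain, state, first(acts))

# The same calls work unchanged on a compiled representation
c_domain, c_state = compiled(domain, state)
c_acts = collect(available(c_domain, c_state))
c_next = transition(c_domain, c_state, first(c_acts))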

In this tutorial, we present two simple planners as examples: forward breadth-first search and backward breadth-first search.

Our first example is forward breadth-first search, shown below. The algorithm accepts a Domain and a Problem, constructs the initial state with the initstate function, and extracts the goal formula using PDDL.get_goal. It then searches the state space, iteratively expanding the successors of each state under every available action in breadth-first order:

function forward_bfs(domain::Domain, problem::Problem)
    # Initialize state and extract goal
    state = initstate(domain, problem)
    goal = PDDL.get_goal(problem)
    # Initialize search queue
    plan = []
    queue = [(state, plan)]
    while length(queue) > 0
        # Pop state and plan
        state, plan = popfirst!(queue)
        # Check if goal is satisfied
        if satisfy(domain, state, goal)
            # Return plan if goal is satisfied
            return plan
        end
        # Iterate over available actions and add successors to queue
        for action in available(domain, state)
            next_state = transition(domain, state, action)
            next_plan = [plan; action]
            push!(queue, (next_state, next_plan))
        end
    end
    # Return nothing upon failure
    return nothing
end

As can be seen, search proceeds by popping a state and its corresponding plan off the search queue at each iteration, then checking whether the state satisfies the goal using satisfy. If the goal is satisfied, the plan is returned. If not, the state is expanded by iterating over each available action and constructing the successor state for that action using the transition function. Each successor state and its corresponding plan are added to the queue. Search continues until either the queue is exhausted or the goal is satisfied.
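
Given a loaded domain and problem (for example, via load_domain and load_problem from PlanningDomains.jl, as in the Blocksworld example at the end of this tutorial), the planner can be called directly. A minimal usage sketch, assuming the problem is small enough for uninformed search to be feasible:

plan = forward_bfs(domain, problem)
if plan === nothing
    println("No plan found.")
else
    println("Found a plan with $(length(plan)) actions:")
    foreach(println, plan)
end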

Implementation Efficiency

While easy to understand, the implementation of breadth-first search presented here is memory-inefficient because it stores the full plan to each state as part of the search queue. More efficient implementations instead store a backpointer from each visited state to its parent (as in standard implementations of Dijkstra's algorithm) and reconstruct the plan once the goal is reached.
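
As an illustration, the following sketch (not part of the library) stores a backpointer from each visited state to its parent state and generating action, rather than a full plan per queue entry, and also skips states that have already been visited. It assumes that states can be used as dictionary keys, which holds for PDDL.jl state representations:

function forward_bfs_backpointers(domain::Domain, problem::Problem)
    init = initstate(domain, problem)
    goal = PDDL.get_goal(problem)
    # Map from each visited state to its (parent state, action) backpointer
    parents = Dict{Any,Any}(init => nothing)
    queue = [init]
    while !isempty(queue)
        state = popfirst!(queue)
        if satisfy(domain, state, goal)
            # Reconstruct plan by following backpointers to the initial state
            plan = []
            while parents[state] !== nothing
                state, action = parents[state]
                pushfirst!(plan, action)
            end
            return plan
        end
        for action in available(domain, state)
            next_state = transition(domain, state, action)
            haskey(parents, next_state) && continue  # skip visited states
            parents[next_state] = (state, action)
            push!(queue, next_state)
        end
    end
    # Return nothing upon failure
    return nothing
end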

PDDL.jl also supports planning via backward search, also known as regression search. Backward search operates by treating the goal condition as a partial or abstract state which only specifies that some predicates must be true. It then searches the space by considering all actions that could possibly achieve the current abstract state (called relevant actions), and inverting the semantics of each action (called regression). This results in a successor abstract state that represents the pre-image of the action: the set of all states that could have reached the current abstract state through that action.
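
To make these operations concrete, the relevant and regress primitives can be called directly on the abstract goal state. The following is an illustrative sketch, assuming a STRIPS-style domain and problem have been loaded:

# Abstract state representing the goal condition
goal_state = goalstate(domain, problem)
# Actions that could possibly achieve fluents in the abstract state
rel_acts = collect(relevant(domain, goal_state))
# Pre-image of the abstract state under one relevant action
# (assumes at least one relevant action exists)
pre_image = regress(domain, goal_state, first(rel_acts))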

A breadth-first version of backward search is shown below.

function backward_bfs(domain::Domain, problem::Problem)
    # Construct initial state and goal state
    init_state = initstate(domain, problem)
    state = goalstate(domain, problem)
    # Initialize search queue
    plan = []
    queue = [(state, plan)]
    while length(queue) > 0
        # Pop state and plan
        state, plan = popfirst!(queue)
        # Return plan if initial state implies the current abstract state
        if all(evaluate(domain, init_state, fluent) == val
               for (fluent, val) in PDDL.get_fluents(state))
            return plan
        end
        # Iterate over relevant actions and add pre-image to queue
        for action in relevant(domain, state)
            next_state = regress(domain, state, action)
            next_plan = [action; plan]
            push!(queue, (next_state, next_plan))
        end
    end
    # Return nothing upon failure
    return nothing
end

This algorithm is very similar to forward_bfs: it first constructs an initial state (using initstate) and an abstract goal state (using goalstate) from the domain and problem. It then searches in breadth-first order from the abstract goal state, iterating over actions that are relevant to achieving the current abstract state, computing the pre-image induced by each action using regress, and adding the resulting state to the queue. The search terminates when the initial state is found to lie in the pre-image of some action, i.e., when every fluent that holds in the pre-image also holds in the initial state.
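
As with forward_bfs, the planner can be called on any loaded domain and problem, provided the domain falls in the STRIPS fragment discussed below. The resulting plan is in execution order, so it can be checked against the goal. A minimal sketch:

plan = backward_bfs(domain, problem)
plan === nothing && error("No plan found.")
# Verify that executing the plan from the initial state achieves the goal
final_state = reduce((s, act) -> transition(domain, s, act), plan;
                     init = initstate(domain, problem))
@assert satisfy(domain, final_state, PDDL.get_goal(problem))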

Support for Regression Search

PDDL.jl currently only provides correct implementations of regression search operations (relevant and regress) for STRIPS-style domains. This means that regression search is not currently supported for domains with non-Boolean fluents, negative preconditions, disjunctive preconditions, quantified preconditions, or conditional effects.

Existing Planners

While PDDL.jl makes it relatively easy to implement planning algorithms from scratch, achieving good performance and (re)usability requires more careful design. As such, the PDDL.jl ecosystem also includes the SymbolicPlanners.jl library, which provides a wide array of planning algorithms and heuristics with performance comparable to other commonly used planning systems. Below, we show how to use SymbolicPlanners.jl to solve a Blocksworld problem via A* search with the additive heuristic:

using PDDL, PlanningDomains, SymbolicPlanners

# Load Blocksworld domain and problem
domain = load_domain(:blocksworld)
problem = load_problem(:blocksworld, "problem-4")
state = initstate(domain, problem)
goal = PDDL.get_goal(problem)

# Construct A* planner with h_add heuristic
planner = AStarPlanner(HAdd())

# Solve the problem using the planner
sol = planner(domain, state, goal)

We can check that the resulting solution achieves the goal as desired:

julia> goal
and(on(d, c), on(c, b), on(b, a), on(a, e))

julia> collect(sol)
10-element Vector{Any}:
 unstack(b, a)
 put-down(b)
 unstack(a, d)
 stack(a, e)
 pick-up(b)
 stack(b, a)
 pick-up(c)
 stack(c, b)
 pick-up(d)
 stack(d, c)

julia> satisfy(domain, sol.trajectory[end], goal)
true

For more information about the planners and heuristics provided by SymbolicPlanners.jl, consult the README.