Specifications

SymbolicPlanners.jl provides a composable library of problem specifications through the Specification interface, allowing users to define a range of planning problems from modular components.

SymbolicPlanners.Specification - Type
abstract type Specification

Abstract type for problem specifications, which can define goal predicates, action costs, reward functions, and other desired criteria for planning Solutions.

Specifications support the following interface for determining goal achievement, constraint violation, costs, rewards, and temporal discount factors.

SymbolicPlanners.is_goal - Function
is_goal(spec, domain, state, [action])

Check if state is a goal state according to the specification. If an action is provided, check if state is a goal state after executing action. For most specifications, action is ignored.
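
A minimal sketch of this interface, assuming a blocksworld domain and problem loaded via PDDL.jl (the file names are placeholders):

```julia
using PDDL, SymbolicPlanners

# Placeholder file names; substitute your own domain and problem.
domain = load_domain("blocksworld.pddl")
problem = load_problem("blocksworld-problem.pddl")
state = initstate(domain, problem)

spec = Specification(problem)  # construct a specification from the problem
is_goal(spec, domain, state)   # true if the initial state already satisfies the goal
```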

Goal Specifications

The most basic specifications are Goal specifications, which define shortest path problems.

SymbolicPlanners.Goal - Type
abstract type Goal <: Specification

Abstract type for goal-based specifications, which define a shortest path problem to a set of goal states. The set of goal states is typically defined by a list of terms that must hold true for the goal to be satisfied.

In the context of Markov decision processes, a goal state is a terminal state. If all actions also have positive cost (i.e. negative reward), this constitutes a stochastic shortest path problem.

The following Goal specifications are provided:

SymbolicPlanners.MinStepsGoal - Type
MinStepsGoal(terms)
MinStepsGoal(goal::Term)
MinStepsGoal(problem::Problem)

Goal specification where each step (i.e. action) has unit cost, and the goal formula is a conjunction of terms. Planners called with this specification will try to minimize the number of steps to the goal in the returned Solution.

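For instance, a unit-cost specification can be constructed from a conjunction of goal terms and passed to a forward planner (a sketch; the goal terms and choice of heuristic are illustrative):

```julia
using PDDL, SymbolicPlanners

goal = pddl"(and (on a b) (on b c))"  # illustrative blocksworld goal
spec = MinStepsGoal(goal)

# Any planner that accepts a Specification can be used, e.g. A* with the h-add heuristic.
planner = AStarPlanner(HAdd())
# sol = planner(domain, state, spec)  # domain and state loaded elsewhere
```
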
SymbolicPlanners.MinMetricGoal - Type
MinMetricGoal(terms, metric::Term)
MinMetricGoal(goal::Term, metric::Term)
MinMetricGoal(problem::Problem)

Goal specification where each step has a cost given by the difference in the value of a metric formula between the next state and the current state, and the goal formula is a conjunction of terms. Planners called with this specification will try to minimize the metric formula when solving for the goal.

SymbolicPlanners.MaxMetricGoal - Type
MaxMetricGoal(goals, metric::Term)
MaxMetricGoal(goal::Term, metric::Term)
MaxMetricGoal(problem::Problem)

Goal specification where each step has a reward given by the difference in the value of a metric formula between the next state and the current state, and the goal formula is a conjunction of terms. Planners called with this specification will try to maximize the metric formula when solving for the goal.

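For example, assuming the problem declares numeric fluents such as (total-cost) or (reward), the two metric specifications can be constructed as follows (a sketch; the fluent and goal terms are illustrative):

```julia
using PDDL, SymbolicPlanners

# Minimize the accumulated increase in (total-cost) while achieving the goal.
min_spec = MinMetricGoal(pddl"(and (on a b))", pddl"(total-cost)")

# Maximize the accumulated increase in (reward) while achieving the goal.
max_spec = MaxMetricGoal(pddl"(and (on a b))", pddl"(reward)")
```
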
It is also possible to plan to achieve a specific action using the ActionGoal specification:

SymbolicPlanners.ActionGoal - Type
ActionGoal(action::Term, [constraints, step_cost])

Goal specification which requires that action is executed as the final step. Optionally, object constraints can be specified. These are either static constraints on the action's variable parameters, or predicates that must hold in the final state. The cost of each step in the solution is step_cost, which defaults to 1.0.

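As a sketch (the action and constraint terms are illustrative):

```julia
using PDDL, SymbolicPlanners

# Require that the plan ends by executing (unstack a b).
spec = ActionGoal(pddl"(unstack a b)")

# With a variable parameter constrained by a predicate (illustrative):
spec = ActionGoal(pddl"(unstack ?b a)", [pddl"(clear ?b)"])
```
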
However, action goals are not currently supported by planners that make use of backward search, such as BackwardPlanner and BidirectionalPlanner.

Constraint Specifications

SymbolicPlanners.jl also provides limited support for planning under constraints. In particular, planning solutions can be constrained such that all traversed states satisfy certain predicates:

SymbolicPlanners.StateConstrainedGoal - Type
StateConstrainedGoal(goal::Goal, constraints::Vector{Term})

Goal specification with a list of constraints that must hold for every state. Planners that receive this specification are required to return plans or policies that ensure every visited state satisfies the constraints.

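As a sketch, a constrained specification wraps an existing goal (the goal and constraint terms are illustrative):

```julia
using PDDL, SymbolicPlanners

goal = MinStepsGoal(pddl"(on a b)")
# Require that block c stays clear in every visited state (illustrative constraint).
spec = StateConstrainedGoal(goal, [pddl"(clear c)"])
```
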
Support for more general temporal constraints may be provided in the future.

Action Costs

Many planning problems have action costs that are fixed (i.e. not state-dependent). Problems of this sort can be defined using the following specifications:

SymbolicPlanners.MinActionCosts - Type
MinActionCosts(terms, costs)
MinActionCosts(terms, actions, costs)

Goal specification where each action has a specific cost, and the goal formula is a conjunction of terms. Planners called with this specification will try to minimize the total action cost in the returned Solution.

Costs can be provided as a mapping from action names (specified as Symbols) to Reals, such that each lifted action has an associated cost. Alternatively, costs can be provided as a mapping from ground action Terms to Reals. A mapping can be provided directly as a NamedTuple or Dictionary, or as a list of actions and corresponding costs.

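Both forms of cost mapping can be sketched as follows (goal terms, action names, and costs are illustrative):

```julia
using PDDL, SymbolicPlanners

# Lifted costs, keyed by action name.
spec = MinActionCosts([pddl"(on a b)"],
                      (pickup=1.0, putdown=1.0, stack=2.0, unstack=2.0))

# Ground costs, keyed by action term.
spec = MinActionCosts([pddl"(on a b)"],
                      Dict(pddl"(pickup a)" => 1.0, pddl"(stack a b)" => 2.0))
```
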
SymbolicPlanners.ExtraActionCosts - Type
ExtraActionCosts(spec::Specification, costs)
ExtraActionCosts(spec::Specification, actions, costs)

Wrapper that adds action-specific costs to an underlying spec.

Costs can be provided as a mapping from action names (specified as Symbols) to Reals, such that each lifted action has an associated cost. Alternatively, costs can be provided as a mapping from ground action Terms to Reals. A mapping can be provided directly as a NamedTuple or Dictionary, or as a list of actions and corresponding costs.

SymbolicPlanners.MinPerAgentActionCosts - Type
MinPerAgentActionCosts(terms, costs, [agent_arg_idx=1])

Goal specification where each agent has separate action costs. Planners called with this specification will try to minimize the total action cost in the returned Solution.

Costs can be provided as a nested dictionary or named tuple, where the first level maps agent names (specified as Symbols or Consts) to the second level, which maps action names or ground action Terms to Reals.

The agent_arg_idx argument specifies the index of the agent argument in the action terms. By default, this is 1.

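The nested cost structure can be sketched as follows (agent names, action names, and costs are illustrative):

```julia
using PDDL, SymbolicPlanners

# Agent name => (action name => cost).
costs = Dict(
    :alice => Dict(:move => 1.0, :pickup => 2.0),
    :bob   => Dict(:move => 3.0, :pickup => 1.0)
)
spec = MinPerAgentActionCosts([pddl"(has alice gem)"], costs)
```
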
SymbolicPlanners.ExtraPerAgentActionCosts - Type
ExtraPerAgentActionCosts(spec::Specification, costs, [agent_arg_idx=1])

Wrapper that adds per-agent action costs to an underlying spec.

Costs can be provided as a nested dictionary or named tuple, where the first level maps agent names (specified as Symbols or Consts) to the second level, which maps action names or ground action Terms to Reals.

Interface methods are also provided to determine whether a specification defines static action costs.

Action costs can be inferred from domains and problems using the following utility function:

SymbolicPlanners.infer_action_costs - Function
infer_action_costs(domain, problem)

Infer fixed action costs for a domain and problem, returning nothing if unsuccessful.

infer_action_costs(
    domain::Domain, state::State, metric::Term,
    cost_fluents=PDDL.constituents(metric, domain),
    static_fluents=infer_static_fluents(domain)
)

Infer fixed action costs for a domain, initial state, and metric formula, returning nothing if unsuccessful.

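A minimal sketch of the first form (the file names are placeholders):

```julia
using PDDL, SymbolicPlanners

domain = load_domain("blocksworld.pddl")       # placeholder file names
problem = load_problem("blocksworld-problem.pddl")

costs = infer_action_costs(domain, problem)
costs === nothing && @warn "Could not infer fixed action costs."
```
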
Reward Specifications

SymbolicPlanners.jl also provides support for goal-based reward functions through the following specifications.

SymbolicPlanners.GoalReward - Type
GoalReward(terms, reward=1.0, discount=0.9)

Goal specification which returns a reward when all goal terms are achieved, along with a discount factor. Each action has zero cost.

SymbolicPlanners.MultiGoalReward - Type
MultiGoalReward(goals::Vector{Term}, rewards::Vector{Float64}, discount=1.0)

Goal specification where multiple goals have associated rewards. Achieving a goal delivers the associated reward. Each action has zero cost.

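Both reward specifications can be sketched as follows (goal terms, rewards, and discount factors are illustrative):

```julia
using PDDL, SymbolicPlanners

# A single goal worth reward 10.0, discounted by 0.95 per step.
spec = GoalReward(pddl"(on a b)", 10.0, 0.95)

# Two goals with different associated rewards.
spec = MultiGoalReward([pddl"(on a b)", pddl"(on b c)"], [5.0, 10.0], 0.95)
```
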
Temporal discounting of rewards is supported through the discount arguments of these specifications.

More general reward functions can also be defined using the MaxMetricGoal introduced earlier, by defining a metric fluent that corresponds to the total reward.

Reward-based specifications should generally be used with policy-based planning algorithms such as RTDP and MCTS.
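
A sketch of this pairing, assuming RTDP is constructed with its heuristic and rollout-count options (the keyword names and values shown are illustrative):

```julia
using PDDL, SymbolicPlanners

spec = GoalReward(pddl"(on a b)", 10.0, 0.95)   # illustrative reward specification
planner = RTDP(heuristic=HAdd(), n_rollouts=100)
# sol = planner(domain, state, spec)  # domain and state loaded elsewhere
```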