Specifications
SymbolicPlanners.jl provides a composable library of problem specifications through the Specification interface, allowing users to define a range of planning problems from modular components.
SymbolicPlanners.Specification — Type

abstract type Specification

Abstract type for problem specifications, which can define goal predicates, action costs, reward functions, and other desired criteria for planning Solutions.
Specifications support the following interface for determining goal achievement, constraint violation, costs, rewards, and temporal discount factors.
SymbolicPlanners.is_goal — Function

is_goal(spec, domain, state, [action])

Check if state is a goal state according to the specification. If an action is provided, check if state is a goal state after executing action. For most specifications, action is ignored.
SymbolicPlanners.is_violated — Function

is_violated(spec, domain, state)
Check if state violates specified constraints.
SymbolicPlanners.get_cost — Function

get_cost(spec, domain, s1, a, s2)
Returns the cost of going from state s1 to state s2 via action a.
SymbolicPlanners.get_reward — Function

get_reward(spec, domain, s1, a, s2)
Returns the reward of going from state s1 to state s2 via action a.
SymbolicPlanners.get_discount — Function

get_discount(spec)
Returns the reward discount factor.
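As a minimal usage sketch (the domain and problem file paths are placeholders, and the commented return values assume a plain unit-cost goal specification):

```julia
using PDDL, SymbolicPlanners

domain = load_domain("domain.pddl")      # placeholder paths
problem = load_problem("problem.pddl")
state = initstate(domain, problem)

spec = MinStepsGoal(problem)       # unit-cost goal specification
is_goal(spec, domain, state)       # true if `state` satisfies the goal
is_violated(spec, domain, state)   # false: no constraints are attached
get_discount(spec)                 # 1.0: no temporal discounting

# Costs and rewards are defined over transitions (s1, a, s2).
act = first(available(domain, state))
next = transition(domain, state, act)
get_cost(spec, domain, state, act, next)     # 1.0 for a unit-cost goal
get_reward(spec, domain, state, act, next)   # -1.0 (negative of the cost)
```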
Goal Specifications
The most basic specifications are Goal specifications, which define shortest path problems.
SymbolicPlanners.Goal — Type

abstract type Goal <: Specification

Abstract type for goal-based specifications, which define a shortest path problem to a set of goal states. The set of goal states is typically defined by a list of terms that must hold true for the goal to be satisfied.
In the context of Markov decision processes, a goal state is a terminal state. If all actions also have positive cost (i.e. negative reward), this constitutes a stochastic shortest path problem.
SymbolicPlanners.get_goal_terms — Function

get_goal_terms(spec)
Return goal terms.
SymbolicPlanners.set_goal_terms — Function

set_goal_terms(spec, terms)
Return a copy of the goal specification with updated goal terms.
The following Goal specifications are provided:
SymbolicPlanners.MinStepsGoal — Type

MinStepsGoal(terms)
MinStepsGoal(goal::Term)
MinStepsGoal(problem::Problem)

Goal specification where each step (i.e. action) has unit cost, and the goal formula is a conjunction of terms. Planners called with this specification will try to minimize the number of steps to the goal in the returned Solution.
SymbolicPlanners.MinMetricGoal — Type

MinMetricGoal(terms, metric::Term)
MinMetricGoal(goal::Term, metric::Term)
MinMetricGoal(problem::Problem)

Goal specification where each step has a cost specified by the difference in values of a metric formula between the next state and the current state, and the goal formula is a conjunction of terms. Planners called with this specification will try to minimize the metric formula when solving for the goal.
SymbolicPlanners.MaxMetricGoal — Type

MaxMetricGoal(goals, metric::Term)
MaxMetricGoal(goal::Term, metric::Term)
MaxMetricGoal(problem::Problem)

Goal specification where each step has a reward specified by the difference in values of a metric formula between the next state and the current state, and the goal formula is a conjunction of terms. Planners called with this specification will try to maximize the metric formula when solving for the goal.
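For example, in a Blocksworld-style domain (the predicate and fluent names below are illustrative, and `problem` is a parsed PDDL problem as in the earlier sketch):

```julia
using PDDL, SymbolicPlanners

# Unit-cost shortest-path problem to a conjunction of goal terms
spec = MinStepsGoal([pddl"(on a b)", pddl"(on b c)"])

# Minimize a numeric metric fluent such as (total-cost), assuming the
# domain declares it
spec = MinMetricGoal([pddl"(on a b)"], pddl"(total-cost)")

# Construct directly from a parsed PDDL problem, reusing its goal and metric
spec = MinMetricGoal(problem)
```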
It is also possible to plan to achieve a specific action using the ActionGoal specification:
SymbolicPlanners.ActionGoal — Type

ActionGoal(action::Term, [constraints, step_cost])

Goal specification which requires that action is executed as the final step. Optionally, object constraints can be specified. These are either static constraints on the action's variable parameters, or predicates that must hold in the final state. The cost of each step in the solution is step_cost, which defaults to 1.0.
However, action goals are not currently supported by planners that make use of backward search, such as BackwardPlanner and BidirectionalPlanner.
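For forward planners, an action goal might be constructed as follows (the action and predicate names are illustrative):

```julia
# Require the plan to end by unstacking block a from some block ?x,
# where ?x must additionally be clear in the final state.
spec = ActionGoal(pddl"(unstack a ?x)", [pddl"(clear ?x)"], 1.0)
```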
Constraint Specifications
SymbolicPlanners.jl also provides limited support for planning under constraints. In particular, planning solutions can be constrained such that all traversed states satisfy certain predicates:
SymbolicPlanners.StateConstrainedGoal — Type

StateConstrainedGoal(goal::Specification, constraints::Vector{Term})

Goal specification with a list of constraints that must hold for every state. Planners that receive this specification are required to return plans or policies that ensure every visited state satisfies the constraints.
Support for more general temporal constraints may be provided in the future.
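A brief sketch (predicate names are again illustrative): a constrained specification is built by wrapping an existing goal specification.

```julia
# Reach the goal while keeping block c clear in every visited state
goal = MinStepsGoal([pddl"(on a b)"])
spec = StateConstrainedGoal(goal, [pddl"(clear c)"])
```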
Action Costs
Many planning problems have action costs that are fixed (i.e. not state-dependent). Problems of this sort can be defined using the following specifications:
SymbolicPlanners.MinActionCosts — Type

MinActionCosts(terms, costs)
MinActionCosts(terms, actions, costs)

Goal specification where each action has a specific cost, and the goal formula is a conjunction of terms. Planners called with this specification will try to minimize the total action cost in the returned Solution.

Costs can be provided as a mapping from action names (specified as Symbols) to Reals, such that each lifted action has an associated cost. Alternatively, costs can be provided as a mapping from ground action Terms to Reals. A mapping can be provided directly as a NamedTuple or Dictionary, or as a list of actions and corresponding costs.
SymbolicPlanners.ExtraActionCosts — Type

ExtraActionCosts(spec::Specification, costs)
ExtraActionCosts(spec::Specification, actions, costs)

Wrapper that adds action-specific costs to an underlying spec.

Costs can be provided as a mapping from action names (specified as Symbols) to Reals, such that each lifted action has an associated cost. Alternatively, costs can be provided as a mapping from ground action Terms to Reals. A mapping can be provided directly as a NamedTuple or Dictionary, or as a list of actions and corresponding costs.
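For illustration (the action and fluent names below are assumptions rather than part of any bundled domain):

```julia
# Lifted costs: every ground instance of an action schema shares one cost
spec = MinActionCosts([pddl"(has gem1)"], (move=2.0, pickup=1.0))

# Equivalent form with parallel lists of action names and costs
spec = MinActionCosts([pddl"(has gem1)"], [:move, :pickup], [2.0, 1.0])

# Ground costs keyed by specific action terms, layered on top of another spec
costs = Dict(pddl"(move loc1 loc2)" => 2.0, pddl"(move loc2 loc3)" => 5.0)
spec = ExtraActionCosts(MinStepsGoal([pddl"(at loc3)"]), costs)
```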
SymbolicPlanners.MinPerAgentActionCosts — Type

MinPerAgentActionCosts(terms, costs, [agent_arg_idx=1])

Goal specification where each agent has separate action costs. Planners called with this specification will try to minimize the total action cost in the returned Solution.
Costs can be provided as a nested dictionary or named tuple, where the first level maps agent names (specified as Symbols or Consts) to the second level, which maps action names or ground action Terms to Reals.
The agent_arg_idx argument specifies the index of the agent argument in the action terms. By default, this is 1.
SymbolicPlanners.ExtraPerAgentActionCosts — Type

ExtraPerAgentActionCosts(spec::Specification, costs, [agent_arg_idx=1])

Wrapper that adds per-agent action costs to an underlying spec.
Costs can be provided as a nested dictionary or named tuple, where the first level maps agent names (specified as Symbols or Consts) to the second level, which maps action names or ground action Terms to Reals.
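A hedged sketch of a per-agent cost specification, assuming each action takes the acting agent as its first argument (e.g. (move alice loc1 loc2)), with illustrative agent and action names:

```julia
# First level maps agent names, second level maps action names to costs
costs = (alice = (move=1.0, pickup=2.0), bob = (move=3.0, pickup=1.0))
spec = MinPerAgentActionCosts([pddl"(has alice gem1)"], costs)
```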
We also introduce interface methods to determine if a specification provides static action costs:
SymbolicPlanners.has_action_cost — Function

has_action_cost(spec)
Returns whether a specification has action-specific costs.
SymbolicPlanners.get_action_cost — Function

get_action_cost(spec, action)

Returns the cost of action for specifications with fixed action costs.
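For instance, reusing the fixed-cost specification from above (action names remain illustrative, and the commented results assume lifted per-action costs):

```julia
spec = MinActionCosts([pddl"(has gem1)"], (move=2.0, pickup=1.0))
has_action_cost(spec)                          # true
get_action_cost(spec, pddl"(move loc1 loc2)")  # 2.0, looked up by action name

has_action_cost(MinStepsGoal([pddl"(has gem1)"]))  # false
```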
Action costs can be inferred from domains and problems using the following utility function:
SymbolicPlanners.infer_action_costs — Function

infer_action_costs(domain, problem)

Infer fixed action costs for a domain and problem, returning nothing if unsuccessful.

infer_action_costs(
    domain::Domain, state::State, metric::Term,
    cost_fluents=PDDL.constituents(metric, domain),
    static_fluents=infer_static_fluents(domain)
)

Infer fixed action costs for a domain, initial state, and metric formula, returning nothing if unsuccessful.
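A hedged usage sketch (domain and problem as in the earlier sketch), assuming the problem's metric is something like (minimize (total-cost)) with each action increasing total-cost by a constant; PDDL.get_goal and PDDL.flatten_conjs from PDDL.jl are used here to extract the goal terms:

```julia
costs = infer_action_costs(domain, problem)
if costs === nothing
    spec = MinStepsGoal(problem)   # fall back to unit costs
else
    goals = PDDL.flatten_conjs(PDDL.get_goal(problem))
    spec = MinActionCosts(goals, costs)
end
```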
Reward Specifications
SymbolicPlanners.jl also provides support for goal-based reward functions through the following specifications.
SymbolicPlanners.GoalReward — Type

GoalReward(terms, reward=1.0, discount=0.9)

Goal specification which returns a reward when all goal terms are achieved, along with a discount factor. Each action has zero cost.
SymbolicPlanners.BonusGoalReward — Type

BonusGoalReward(goal::Specification, reward=1.0, discount=0.9)

Wrapper around an existing goal Specification, which delivers additional reward upon reaching a goal.
SymbolicPlanners.MultiGoalReward — Type

MultiGoalReward(goals::Vector{Term}, rewards::Vector{Float64}, discount=1.0)

Goal specification where multiple goals have associated rewards. Achieving a goal delivers the associated reward. Each action has zero cost.
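For example (the goal fluents are illustrative):

```julia
# Reward of 5.0 once (has gem1) holds, with a discount factor of 0.95
spec = GoalReward([pddl"(has gem1)"], 5.0, 0.95)

# Different rewards for different goals, undiscounted
spec = MultiGoalReward([pddl"(has gem1)", pddl"(has gem2)"], [5.0, 10.0], 1.0)
```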
Temporal discounting of rewards is supported:
SymbolicPlanners.DiscountedReward — Type

DiscountedReward(spec::Specification, discount::Float64)

Discounts rewards of the underlying spec by a discount factor.
SymbolicPlanners.discounted — Function

discounted(spec, discount)
Discount the rewards or costs associated with spec by a discount factor.
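For example, any specification can be wrapped in a discount (the commented values assume an otherwise undiscounted spec):

```julia
spec = MinStepsGoal([pddl"(has gem1)"])
dspec = discounted(spec, 0.95)   # wraps `spec` in a DiscountedReward
get_discount(dspec)              # 0.95
```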
More general reward functions can also be defined using the MaxMetricGoal introduced earlier, by defining a metric fluent that corresponds to the total reward.
Reward-based specifications should generally be used with policy-based planning algorithms such as RTDP and MCTS.
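A sketch of such usage with the RTDP planner; the planner keyword arguments shown here are assumptions rather than a definitive reference, and `domain` and `state` are as in the earlier sketch:

```julia
using SymbolicPlanners

spec = GoalReward([pddl"(has gem1)"], 10.0, 0.95)
planner = RTDP(heuristic=GoalCountHeuristic(), n_rollouts=100)
sol = planner(domain, state, spec)   # returns a policy-based Solution
```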
Goal Simplification
It is generally useful to simplify and ground a goal specification in the initial state before planning, so as to reduce the runtime of checking whether a goal is satisfied (e.g. by removing universal or existential quantifiers). This can be achieved using the simplify_goal function:
SymbolicPlanners.simplify_goal — Function

simplify_goal(spec::Specification, domain::Domain, state::State; kwargs...)

Simplifies a goal spec with respect to a domain and initial state. Universal or existential conditions are expanded into conjunctions or disjunctions, and predicates with statically known truth values are pruned. The statics keyword can be specified to provide a list of static fluents.
To ensure a goal is simplified once before planning, and prevent further (potentially costly) attempts at simplification by a planner, the simplified function can be used to create a SimplifiedGoal.
SymbolicPlanners.SimplifiedGoal — Type

SimplifiedGoal(spec::Specification)

Wraps an existing spec and ensures that its goal terms are simplified, preventing repeated simplification on further calls to simplified.
SymbolicPlanners.simplified — Function

simplified(spec::Specification, domain::Domain, state::State)

Simplifies a spec with respect to a domain and initial state via simplify_goal, returning a SimplifiedGoal that prevents repeated simplification.
SymbolicPlanners.is_simplified — Function

is_simplified(spec)
Returns whether a specification has been simplified.
Preventing further simplification is useful when one planner calls another planner internally, or when multiple planner calls with the same specification are expected.
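For instance, a quantified goal (the block type is illustrative) can be simplified once up front and then passed to any planner, with `domain` and `state` as in the earlier sketch:

```julia
spec = MinStepsGoal([pddl"(forall (?b - block) (clear ?b))"])
sspec = simplified(spec, domain, state)   # expands the quantifier once
is_simplified(sspec)                      # true
sol = planner(domain, state, sspec)       # given some `planner`; no re-simplification
```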