
Welcome to the ThinkEngine Wiki!

ThinkEngine is a Unity asset that allows you to integrate automated reasoning modules into a video game, or into any other kind of software developed in Unity. ThinkEngine has been developed with the integration of declarative ASP modules in mind, but other types of automated reasoning can be wired in (e.g., PDDL), with an effort we are constantly working to make as light as possible.

How to cite ThinkEngine

If you want to mention the latest ThinkEngine:

D. Angilica, G. Ianni, F. Pacenza: Declarative AI design in Unity using Answer Set Programming. IEEE Conference on Games (CoG). To appear, 2022.

An early ThinkEngine version (older mapping scheme, no planning, far fewer features) was reported in:

D. Angilica, G. Ianni, F. Pacenza: Tight Integration of Rule-Based Tools in Game Development. AI*IA 2019: pp. 3-17.

The Team

Core:

  • Giovambattista Ianni (Project coordinator, tweaking and fiddling)
  • Denise Angilica (Code Maintainer)
  • Francesco Pacenza (Code and game development, ASP integration)

Past and present student collaborators:

  • Fabio Barrese (Code development, porting to macOS, showcase games development and upgrade)
  • Antonio Pantaleone Carito (Frogger showcase game)
  • Salvatore Laiso (benchmarking engine)
  • Giuseppe Beltrano (showcase game development)
  • Emmanuel Scarriglia (early version of the testing engine)

Contact us

Feel free to reach us at thinkengine @ unical.it

Sample Games

Usage

How to add ThinkEngine to an existing project

For GitHub users:

Download and extract the file ThinkEnginePlugin.zip in the root folder of your Unity project (the one containing the Assets, Library, Packages, etc. folders).
Make sure that the dlv2 binary in the "Assets/StreamingAssets/ThinkEngine/lib" folder has execute permissions.

For Unity Asset Store users:

After importing the ThinkEngine asset from the store, you need to download the "dlv-2.1.1" solver that suits your operating system (macOS or Windows) at the following link. You need to rename the file to "dlv2", keeping the original extension; finally, you can place the solver in the "Assets/StreamingAssets/ThinkEngine/lib" folder of your Unity project, making sure that it has execute permissions.

Note that the ThinkEngine.dll in the "Assets/Plugins/" folder will be used by Unity when building the game, while the one in the "Assets/Plugins/ThinkEngineDLL" folder will be used at design-time. The .meta files in the ThinkEngine .zip archive already specify that.

You can then use ThinkEngine for adding AI capabilities to your game objects. Among the available scripts you will now find the SensorConfiguration script, the ActuatorConfiguration script, the Reactive Brain script and the Planner Brain script.

Quick start

Scripts

The only scripting features you need to know about concern the ThinkEngineTrigger and the Action scripts. You can read about their usage in the respective sections: Programming custom triggers and Action (for Planner Brain only).

Available scripts for components

  • ThinkEngine works by wiring Brains to the game scene. There are two types of brains: Reactive brains and Planning brains;
  • Reactive brains work by wiring them to Sensors and Actuators;
  • How to wire Sensors: just add a Sensor Configuration script to a GameObject (or a Prefab) of choice; customize the Sensor Configuration by selecting which properties of the game object at hand need to be mapped to a sensor;
  • How to wire Actuators: just add an Actuator Configuration script to a GameObject (or a Prefab) of choice, and configure it by selecting which properties of the game object at hand need to be mapped to an actuator; one can also enforce a postcondition telling whether the action at hand must be executed (see below).
  • How to wire a Brain: select a GameObject or Prefab and add a Brain script (either Planner or Reactive);
  • In the Brain properties, you can now select as many SensorConfiguration(s) and ActuatorConfiguration(s) (the latter for Reactive Brains only) as you like;
  • If needed, one can generate a template of logical assertions showing how input sensor values and action values are represented in a Brain;
  • The Brain script will show you a predefined path in which to save one or more .asp files; the .asp files declaratively specify the decision-making logic of the brain at hand. All the .asp files matching the pattern in the ASP Files Pattern field will be executed together when a reasoning task is triggered;
  • In the Choose when to run the reasoner combo box, one can specify when a Brain activates, i.e. when reasoning is triggered (see below on how to program custom triggers).

You can take a look at the configuration demo available here (for Reactive Brains only).

A configuration demo for Planner Brains will be added soon.

Note: Prefabs should be placed in a folder located under "Assets/Resources/Prefabs". If you want to inspect the ASP facts files generated at run-time, check the Maintain fact file box in the Brain component and access the "ThinkEngineFacts" folder in your system's "Temp" folder by clicking the Open Input Folder button.

Programming custom triggers

The ThinkEngine generates a C# script, named "Trigger", in the folder "Assets/Scripts" (you can then move the script wherever you want). You can add to this script as many parameterless boolean functions as you want. When configuring Actuators and Brains, you will be provided with a drop-down list in which all those functions appear: you can choose one of them as the "reasoning task trigger" (for triggering a brain's reasoning tasks) or the "apply actuator trigger" (for triggering the actuators). Note that there is always a default trigger event: "When sensors are ready" for brains and "Always" for actuators.
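As a minimal sketch, a custom trigger could look as follows. The method name, the tags and the distance threshold are illustrative assumptions, and the class declaration should mirror whatever the generated file already contains; only the method is the addition:

```csharp
using UnityEngine;

// Hypothetical custom trigger: fire only when the player gets close to
// the enemy. Any parameterless boolean method added to the generated
// Trigger class shows up in the configuration drop-down lists.
public class Trigger : ScriptableObject
{
    public bool PlayerIsNearby()
    {
        GameObject player = GameObject.FindWithTag("Player"); // assumed tag
        GameObject enemy = GameObject.FindWithTag("Enemy");   // assumed tag
        return player != null && enemy != null &&
               Vector3.Distance(player.transform.position,
                                enemy.transform.position) < 5f; // assumed threshold
    }
}
```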

ThinkEngine under the hood, in detail

ThinkEngine's Architecture

The main components of the framework are the Brains.

There are two kinds of brain:

  • Reactive Brains
  • Planner Brains

Reactive Brains and Planner Brains differ in the way they "think". The reasoning activities of Reactive Brains generate reactive actions, which can have an immediate impact on the game scene, while the reasoning activities of Planner Brains generate Plans which, in the terminology of the ThinkEngine, are generic sets of actions to be executed in a programmable order. To better understand the ThinkEngine, let us consider an analogy with the real world. The main character of the following example is Bob.

Example: Bob is at home and wants to drink a cup of coffee. There is a bottle of water on the table (let's call it water_1) and another one on the chair (let's call it water_2); the coffee machine is on the sink. Bob's purpose is to make the coffee while walking as little as possible.

Bob knows his own position, the position of water_1, the position of water_2 and the position of the coffee machine. Bob's brain uses this information to find the best solution to the problem. In a Unity video game, we can see Bob as a game character and Bob's house as the game world, populated by game objects (e.g. water_1, water_2, the coffee machine). The positions of the objects are the information that sensors give to Bob's brain as input for reasoning. The output of the thinking can be a Plan, for example:

  1. take a step forward.
  2. take a step to the right.
  3. take a step to the right.
  4. take the bottle of water.
  5. make the coffee.

In this case, we can say that Bob's brain is a Planner Brain. But actions can also be reactive: for example, one can just decide to "take a step forward", and in this case we can say that Bob's brain is a Reactive Brain. Both Reactive Brains and Planner Brains use sensors to take information from the game world as input for their reasoning activities, but their output is different: the former generate reactive Actions, the latter produce Plans.

More details on brains

Reactive brains are associated with a number of sensors and actuators; planning brains have only sensors, as they impact the game scene when their generated plans are executed. Brains have one or more ASP files encoding a reasoning task. A triggering event defining when the task is to be run can also be defined. Each [Reactive|Planner] brain is coupled with an auxiliary thread running a Solver Executor. When a trigger condition is met, a Solver Executor instance requests the brain's sensors data and feeds them, together with the encoding file(s), to an ASP solver. Both sensors and actuators|plans are generated (at run-time) based on their configurations defined at design-time. Sensors read data from the game world within a coroutine of the Sensors Manager. The coroutine yields every X ms: in order to guarantee a constant hardware load, the ThinkEngine can adapt the value of X during the game depending on the game frame rate. The manager is also in charge of retrieving the sensors' ASP mappings and of returning these values to a requesting brain. Actuators are acted upon by the Actuators Manager, while Plans are executed by the Plans Scheduler, with the decisions coming from the reasoner layer.
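To make the adaptive polling idea concrete, here is a hedged sketch (not ThinkEngine's actual Sensors Manager code; the class name, interval bounds and adjustment factors are assumptions) of a coroutine whose interval stretches when the frame rate drops:

```csharp
using System.Collections;
using UnityEngine;

// Sketch of adaptive sensor polling: the interval ("X ms" in the text)
// grows when frames run slow, keeping the hardware load roughly constant.
public class AdaptiveSensorPollerSketch : MonoBehaviour
{
    float intervalSeconds = 0.05f;           // initial X (assumed value)
    const float targetFrameTime = 1f / 60f;  // assumed target frame rate

    IEnumerator Start()
    {
        while (true)
        {
            ReadSensors();
            // Frames slower than the target: poll less often (and vice versa).
            if (Time.smoothDeltaTime > targetFrameTime)
                intervalSeconds = Mathf.Min(intervalSeconds * 1.1f, 0.5f);
            else
                intervalSeconds = Mathf.Max(intervalSeconds * 0.9f, 0.01f);
            yield return new WaitForSeconds(intervalSeconds);
        }
    }

    void ReadSensors() { /* snapshot the mapped game-object properties */ }
}
```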

Reactive Brain features:

  • A Reactive Brain receives input sensor values (e.g. the position of game objects, the general game state, etc.).
  • After its reasoning task ends, a Reactive Brain returns as output actuator values which will change game object properties at runtime (e.g. the position of game objects).

Planner Brain features:

  • A Planner Brain receives input sensor values.
  • The outcome of the reasoning activity of a Planner Brain is a Plan.
  • A plan P is a list of actions [a1, ..., an] which are supposed to be executed in the given sequence.
  • Each action ai (1 ≤ i ≤ n) is equipped with a precondition function PCi().
  • The outcome of PCi() can be one of {ready, skip, wait, abort}, determining, respectively, whether ai is ready to be executed or must be skipped, waited on or aborted, in this latter case causing the whole plan to be aborted.
  • At a given iteration in the game loop, P is executable if there is a minimum j for which PCj() is either ready or wait, and there is no k < j for which PCk() = abort (see the sketch after this list).
  • The desired outcome of an action ai on the game scene is obtained by implementing the Doi() function, whereas the Donei() function is used to define when ai is completed; both the Doi() and Donei() functions are implemented within the respective action script.
  • Plans are associated with a priority value and, once generated by planning brains, they are submitted for execution to a scheduler.
  • Planning brains are grouped by game object. Each group has its own planning scheduler, which selects plans to be run, among those available, in order of priority.
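A minimal sketch of the executability check, assuming a State enumeration and the precondition outcomes collected into an array (names are illustrative, not ThinkEngine's API):

```csharp
enum State { READY, SKIP, WAIT, ABORT }

static class ExecutabilitySketch
{
    // P is executable if the first non-SKIP precondition is READY or WAIT;
    // an ABORT encountered before that makes the whole plan non-executable.
    public static bool IsExecutable(State[] preconditions)
    {
        foreach (State s in preconditions)
        {
            if (s == State.ABORT) return false;
            if (s == State.READY || s == State.WAIT) return true;
            // State.SKIP: keep scanning.
        }
        return false; // every action was skipped
    }
}
```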

Reasoning Layer

This layer is in charge of collecting, processing and executing reasoning jobs. A reasoning job J, consisting of an ASP specification S and a set of encoded sensor values F, is elaborated by an answer set solver, which produces decisions encoded in the form of answer sets. Two types of decisions can be produced: deliberative ones (i.e. plans) or reactive actions. These are dealt with, respectively, by the Planning Executors and the Reactive Executors, which in turn submit reasoning jobs to the ASP solver.
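For intuition, a reasoning job boils down to invoking the solver on the specification plus the sensor facts and reading the answer sets back. The sketch below is an assumption-laden illustration, not ThinkEngine's actual executor code; the paths and argument format are guesses, so refer to the dlv2 documentation for the exact invocation:

```csharp
using System.Diagnostics;

static class SolverExecutorSketch
{
    // Run the ASP solver on the specification S plus the sensor facts F
    // and capture the answer sets from standard output.
    public static string RunReasoningJob(string encodingPath, string factsPath)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "Assets/StreamingAssets/ThinkEngine/lib/dlv2",
            Arguments = $"\"{encodingPath}\" \"{factsPath}\"",
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (Process solver = Process.Start(psi))
        {
            string answerSets = solver.StandardOutput.ReadToEnd();
            solver.WaitForExit();
            return answerSets; // to be parsed into plans or reactive actions
        }
    }
}
```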

Information Passing Layer

This layer buffers data passing between the reasoning layer and the actual game state. Sensors correspond to parts of the game data structures which are visible from the reasoning layer. These are buffered in the sensor data store. On the other hand, actuators and plans data stores collect decisions taken by the reasoning layer and are used to modify the game state in the Unity run-time.

Reflection Layer

This layer is in charge of translating back and forth between object data structures and logical assertions. Both Sensors and Actuators need to be configured at design-time via a configuration component. Once the configuration has been saved, it can be associated with some Brain. A Reactive Brain is associated with some sensor and actuator configurations. A Planner Brain is associated with some sensor configurations and a Planner Scheduler. A [Reactive|Planner] Brain is associated with an ASP encoding file and a triggering condition for the reasoning task. It is worth noting that sensors, actuators and brains can be configured even on GameObjects that will only be instantiated at run-time. This is possible only for Prefabs listed in the Resources/Prefabs folder.

Sensors

Involved classes: SensorConfiguration, MonoBehaviourSensorsManager, Sensor

SensorConfiguration

When some information about a GameObject is needed as input facts for an ASP program, you need to add, at design-time, a SensorConfiguration component to the GameObject. Once you choose the name of the configuration (it has to be unique in the game; a default one is suggested), you can graphically explore the properties hierarchy of the GameObject and choose the properties you want to feed as input to a brain. For each property, the ThinkEngine stores the last 200 read values. While configuring the sensor, you can choose which aggregation function has to be applied when generating the logical assertion (e.g. min, max, avg, newest value, oldest value). As for complex data structures, lists and one- and two-dimensional arrays are currently supported. Dictionaries cannot currently be read by sensors.
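As a rough illustration of the 200-value window and the aggregation functions (a sketch under assumed names, not ThinkEngine's internal classes):

```csharp
using System.Collections.Generic;
using System.Linq;

// Per-property value history with the aggregation modes listed above.
class PropertyHistorySketch
{
    const int Capacity = 200;                        // window from the text
    readonly Queue<double> values = new Queue<double>();

    public void Push(double v)
    {
        if (values.Count == Capacity) values.Dequeue(); // drop the oldest
        values.Enqueue(v);
    }

    // Aggregation applied when the logical assertion is generated.
    public double Aggregate(string mode)
    {
        switch (mode)
        {
            case "min":    return values.Min();
            case "max":    return values.Max();
            case "avg":    return values.Average();
            case "oldest": return values.Peek();    // front of the queue
            default:       return values.Last();    // "newest value"
        }
    }
}
```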

MonoBehaviourSensorsManager

At design-time, when a sensor configuration is saved, a MonoBehaviourSensorsManager component is automatically added to the GameObject at hand. At run-time, it manages the actual instantiation of the sensors. For each configuration, a sensor is instantiated for each simple property and for each element of a complex data structure. During the game, if the size of a complex data structure increases, the manager instantiates as many sensors as there are new elements in the data structure; if it decreases, the exceeding sensors are deleted.

Sensor

Sensors are updated in a cascaded way: the SensorsManager notifies the appropriate MonoBehaviourSensorsManager, which in turn notifies the sensors instantiated until that moment. While updating, a sensor associated with an element of a complex data structure whose position exceeds the current size of the data structure itself is deleted.

Actuators (for Reactive Brain only)

Principal involved classes: ActuatorConfiguration, MonoBehaviourActuatorsManager, MonoBehaviourActuator

ActuatorConfiguration

When you want to change some property of a GameObject according to an answer set of an ASP program, you need to add, at design-time, an ActuatorConfiguration component to the GameObject. Once you choose the name of the configuration (it has to be unique in the game; a default one is suggested), you can graphically explore the properties hierarchy of the GameObject and choose the properties you want to manage with the reasoner. At the moment, only basic object properties are supported.

MonoBehaviourActuatorsManager

At design-time, when an actuator configuration is saved, a MonoBehaviourActuatorsManager component is automatically added to the GameObject. At run-time, it manages the actual instantiation of the actuators.

MonoBehaviourActuator

Actuators are implemented as MonoBehaviours. When an actuator is notified of the existence of an Answer Set coming from the brain to which the actuator is attached, it checks whether the Answer Set contains a literal matching its logical assertion mapping. If this is the case, and the trigger condition associated with the actuator is satisfied, it updates the value of the property to which it is attached.
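An illustrative sketch of that update logic (assumed names and a pre-mapped answer set; not the real ThinkEngine classes):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical actuator attached to a "speed" property.
public class SpeedActuatorSketch : MonoBehaviour
{
    public float speed;   // the property this actuator is attached to

    // Called when the brain's answer set arrives; here the literals have
    // already been mapped to property names and values.
    public void OnAnswerSet(Dictionary<string, float> mappedValues)
    {
        // "setSpeed" stands in for this actuator's assertion mapping.
        if (!mappedValues.TryGetValue("setSpeed", out float newValue))
            return;                 // no matching literal in the answer set
        if (!ApplyTrigger())        // the "apply actuator trigger"
            return;
        speed = newValue;           // update the attached property
    }

    bool ApplyTrigger() => true;    // default "Always" trigger
}
```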

Plan Schedulers (for Planner Brain only)

Principal involved classes: Scheduler, PlannerBrainsCoordinator, Plan, Action

PlannerBrainsCoordinator (for Planner Brain only)

At design-time, when a Planner Brain is added to a game object G, a PlannerBrainsCoordinator component is automatically added to G. At run-time, it keeps track of the last plan generated by each Planner Brain and decides which among all the plans has to be executed. Higher-priority plans (thus, higher-priority brains) are executed first: if a lower-priority plan is executing, it is aborted and the higher-priority one is executed.
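A compact sketch of that selection policy (a stand-in Plan type with assumed Priority and Abort members; illustrative only):

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal stand-in for the Plan type (not the real class).
class Plan
{
    public int Priority;
    public void Abort() { /* stop the executing coroutine */ }
}

static class CoordinatorSketch
{
    // Pick the highest-priority plan among the last plans generated by
    // this game object's Planner Brains, preempting a running
    // lower-priority plan.
    public static Plan ChoosePlan(IEnumerable<Plan> lastPlans, Plan running)
    {
        Plan best = lastPlans.OrderByDescending(p => p.Priority).FirstOrDefault();
        if (best != null && running != null && best.Priority > running.Priority)
            running.Abort();   // the lower-priority plan is aborted
        return best ?? running;
    }
}
```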

Scheduler

At design-time, when a Planner Brain is added to a game object G, a Scheduler component is automatically added to G. At run-time, during the Update step, it is provided with the plan to be executed. At this point, it starts the coroutine that actually executes the Actions contained in the Plan.

Plan

A Plan is generated at run-time when a Planner Brain PB parses an answer set coming from a Planner Executor. The PlannerBrainsCoordinator associated with the same game object as PB is then notified to replace the last plan associated with PB. Once a plan is chosen to be executed and the coroutine has been started, while there is at least one Action in the Plan, the following happens:

  1. the actions of the plan are scanned until a precondition of type READY, WAIT or ABORT is met;
  2. if a WAIT precondition is reached, the coroutine yields until the precondition changes its state, then repeats step 1;
  3. if an ABORT precondition is reached, the whole plan is aborted;
  4. if a READY precondition is reached, the Do() method of the corresponding Action is executed; the coroutine yields until the Done() function of the corresponding Action returns true.
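Putting the four steps together, the execution coroutine could be sketched as follows (assumed names; the Action members Prerequisite(), Do() and Done() and the State enum are those described in the Action section below, not the actual Scheduler code):

```csharp
using System.Collections;
using System.Collections.Generic;

public class PlanExecutionSketch
{
    // Illustrative plan-execution loop following steps 1-4 above.
    public IEnumerator ExecutePlan(List<Action> plan)
    {
        while (plan.Count > 0)
        {
            // Step 1: scan past SKIP preconditions.
            while (plan.Count > 0 && plan[0].Prerequisite() == State.SKIP)
                plan.RemoveAt(0);
            if (plan.Count == 0) yield break;

            State s = plan[0].Prerequisite();
            if (s == State.ABORT) yield break;   // step 3: abort the whole plan
            if (s == State.WAIT)                 // step 2: wait, then repeat step 1
            {
                yield return null;
                continue;
            }

            plan[0].Do();                        // step 4: READY, run the action
            while (!plan[0].Done())
                yield return null;               // yield until Done() is true
            plan.RemoveAt(0);
        }
    }
}
```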

Action

Actions contained in plans are instances of custom classes inheriting from the abstract class Action. The associated script can be placed in whatever folder of the project, since, at run-time, instances of these classes will be created by means of the ScriptableObject.CreateInstance(actionClass) function. Recall that:

  • the Prerequisite() function returns a State value that can be one among State.SKIP, State.WAIT, State.READY and State.ABORT, each one associated with the behaviour described above;
  • the Do() method is the one that actually applies changes to the game world;
  • the Done() function returns either true or false, depending on whether the Do() method has terminated. For instance, the Do() method could have started a coroutine, and the developer doesn't want to execute the next action of the plan until the coroutine reaches its last step.

Other than the functions exposed by the Action class, the actual implementation of an action can contain whatever is needed to perform the action. Properties of the class can have values assigned by the reasoning task (see the sketch after this list):

  • the applyAction(order,actionName) assertion states in which position, order, of the plan sequence the action named actionName must be executed;
  • the actionArgument(order,parameterName,parameterValue) assertion states the following: the property named parameterName of the action in position order of the plan at hand must be assigned the value parameterValue.
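For instance (a hedged sketch with hypothetical names; the actual abstract members of Action may be declared differently), with the assertions applyAction(1,"MoveTo") and actionArgument(1,"targetX",4), the reflection layer would create a MoveTo instance for position 1 of the plan and assign 4 to its targetX property:

```csharp
using UnityEngine;

// Hypothetical custom action. ThinkEngine instantiates it via
// ScriptableObject.CreateInstance and fills targetX from the
// actionArgument assertion.
public class MoveTo : Action
{
    public float targetX;                         // set by actionArgument

    public override State Prerequisite() => State.READY; // always ready

    public override void Do()
    {
        // Apply the change to the game world, e.g. move some object
        // towards targetX.
    }

    public override bool Done() => true;          // completes immediately
}
```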