Natural language processing for explainable satellite scheduling
Abstract
Facilitating the interaction between humans and Artificial Intelligence (AI) in automated systems is becoming central as the technology advances and is more widely adopted in practical applications. Mathematical programming scheduling techniques are a driving factor in assisting operators both on board the satellite, for autonomous decision making, and on the ground, for supporting mid-term operations scheduling. When communication to ground is limited, scheduling algorithms require a level of autonomy and robustness able to respond to issues arising on board the satellite in the absence of communication with a ground operator. Moreover, explanations must be generated alongside schedules for the operator to build and maintain trust in the autonomous system.
Explainable Artificial Intelligence (XAI) is an emerging topic in AI. Explanations are a necessary layer for effectively deploying trustworthy autonomous systems in practical applications. Operators may raise queries such as why, what, when, and how the scheduled actions were selected autonomously on board for a specific time. Explanations are provided based on the definition of the problem and its respective constraints.
Autonomous decision-making algorithms can be explained in several ways. Computational Argumentation (CA) and Natural Language Processing (NLP) are two such techniques, belonging to the domains of formal logic and machine learning respectively, that can be used to generate explanations and communicate them back to the user as textual output. An Argumentation Framework (AF) was created to assist in answering questions raised by the end user. At its lower level, the AF encodes all the necessary information on when conflicts may occur between actions, as well as the environmental conditions inhibiting the occurrence of actions within a schedule. This database of information is used to construct arguments in support or negation of user-submitted queries, or to provide an explanation of the complete derived schedule. NLP is then used as a bridge to communicate the relevant arguments to the user.
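The core mechanism of an AF of this kind can be illustrated with a minimal sketch. The argument names, attack relations, and the grounded-semantics computation below are hypothetical illustrations, not the paper's actual encoding:

```python
# Minimal sketch of an abstract Argumentation Framework (AF): arguments
# attack one another, and the grounded extension collects the arguments
# that survive once all of their attackers are defeated.

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers are all defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            # accept `a` once every one of its attackers is defeated
            if attackers <= defeated:
                accepted.add(a)
                changed = True
        # anything attacked by an accepted argument is defeated
        for (x, y) in attacks:
            if x in accepted and y not in defeated:
                defeated.add(y)
                changed = True
    return accepted

# Hypothetical scheduling example: an imaging action conflicts with a
# downlink, and an eclipse condition inhibits the imaging action.
args = {"image_A", "downlink_B", "eclipse"}
atts = {("image_A", "downlink_B"), ("eclipse", "image_A")}
print(grounded_extension(args, atts))  # accepts eclipse and downlink_B
```

Here the eclipse defeats the imaging action, which in turn reinstates the downlink: exactly the kind of conflict chain an operator might ask the system to explain.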
The queries received revolved around three main areas: the subject, the time of interest, and the intent. Following interpretation, the queries were mapped to the AF database, returning a list of conflicts, agreements, and neutral outcomes. The NLP method chosen for this architecture, GPT-3, was then used to deduce the answer to the query and justify it with a textual explanation.
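The query-mapping step described above can be sketched as a lookup against the AF database, grouped by outcome. The record schema, action names, and field choices below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative mapping of an interpreted operator query (subject, time,
# intent) onto a hypothetical AF database of conflict/agreement/neutral
# records between scheduled actions.

AF_DB = [
    # (action, time, relation, other_action) -- hypothetical records
    ("imaging", 10, "conflict", "downlink"),
    ("imaging", 10, "agreement", "battery_charge_ok"),
    ("calibration", 12, "neutral", "imaging"),
]

def answer_query(subject, time, intent):
    """Return AF records relevant to (subject, time), grouped by outcome."""
    outcome = {"conflict": [], "agreement": [], "neutral": []}
    for action, t, relation, other in AF_DB:
        if action == subject and t == time:
            outcome[relation].append(other)
    # `intent` (why/what/when/how) would steer how a language model such
    # as GPT-3 verbalises these records; here we just return them.
    return outcome

print(answer_query("imaging", 10, "why"))
# {'conflict': ['downlink'], 'agreement': ['battery_charge_ok'], 'neutral': []}
```

In the architecture described, the grouped records would be passed to the language model as context, so the textual explanation stays grounded in the AF rather than generated freely.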
| Original language | English |
|---|---|
| Pages | #349 |
| Number of pages | 14 |
| Publication status | Published - 6 Apr 2023 |
| Event | SPACEOPS 2023: The 17th International Conference on Space Operations, Dubai, United Arab Emirates, 6 Mar 2023 → 10 Mar 2023, https://spaceops2023.org/ |
Conference
| Conference | SPACEOPS 2023 |
|---|---|
| Abbreviated title | SPACEOPS 2023 |
| Country/Territory | United Arab Emirates |
| City | Dubai |
| Period | 6/03/23 → 10/03/23 |
| Internet address | |
Keywords
- Explainable Artificial Intelligence (XAI)
- Natural Language Processing (NLP)
- GPT-3
- satellite scheduling
- Abstract Argumentation (AA)
- Language Model (LM)
Projects

- Robust and Explainable Mission Planning and Scheduling (REMPS)
Riccardi, A. (Principal Investigator) & Cashmore, M. (Co-investigator)
1/11/20 → 30/04/25
Project: Research - Studentship
Research output
- Explaining AI decisions in autonomous satellite scheduling via computational argumentation
  Powell, C. & Riccardi, A., 17 Sept 2024. 6 p.
  Research output: Contribution to conference › Paper › peer-review. Open Access.