Intrinsic Error Evaluation during Human-Robot Interaction

Motivation
Human-Robot Interaction, as a concept, traces its inspiration back to science fiction, from Isaac Asimov’s “Three Laws of Robotics” to Marvel Comics’ robot healthcare companion Baymax [Bartneck, 2004]. The rapid strides in artificial intelligence and robotics over the last decade have helped turn fiction into reality. Today, we find more and more robots sharing space with humans and working together with them towards a common goal, and this forms the core of this young field of research.
In order to facilitate better interaction between humans and robots, human-in-the-loop learning is of great importance [José de Gea Fernández et al., 2017]. While there are many different approaches to achieve this, the use of human brain activity as a source of intrinsic feedback about the correctness of an interaction [Kim et al., 2017, 2020] or about the experienced task load [Wöhrle and Kirchner, 2014] is very promising. Retrieving such information from the electroencephalogram (EEG) of the interacting human provides insight into their mindset and their subjective satisfaction with the robot’s performance.
However, there are tremendous challenges to be addressed, such as the feasibility of recording and using EEG data under real-world conditions [Gramann et al., 2014; Protzak & Gramann, 2018; Fairclough & Lotte, 2020; Roy et al., 2022; Sadatnejad & Lotte, 2022], decoding the brain asynchronously [Lotte et al., 2018; Kim et al., 2023], and making use of human feedback in autonomous systems [Kirchner et al., 2019; Roy et al., 2020; Singh et al., 2022] to truly improve human-robot interaction [Roy et al., 2020]. To tackle these challenges [Kirchner et al., 2015], new machine learning approaches are needed [Appriou et al., 2021], alongside a holistic approach that combines real and artificial intelligence.
The Challenge
The main goal of the challenge is to develop new, competitive signal processing and/or machine learning (ML) approaches for the asynchronous detection of erroneous behavior based on single-trial EEG analysis. The competition is divided into two stages: an offline stage and an online stage.
For the offline stage, a pre-recorded single-trial brain activity (EEG) dataset has been provided in the form of labeled training data. To download the training data, follow this link: Zenodo Training Data Link. To download the test data, follow this link: Zenodo Test Data Link (publicly accessible on 29 May 2023). Furthermore, detailed information about the experimental setup, procedure, and dataset can be found in our data report paper on arXiv: (Link to the paper).
Note:
The above-mentioned paper describes the data recording for our complete study, in which we also recorded EMG data and some additional experimental conditions such as delayed ball squeeze and no ball squeeze. However, for the competition, only a subset of this dataset is made public. The complete dataset will be made available after the competition.
For the first phase of the offline stage, we have provided 8 labeled training sets for each of the 8 subjects.
The remaining 2 sets will be made available as unlabeled test data by 29 May 2023 as per the schedule.
Problem Statement:
The challenge is to train an ML model to detect the onset of the deliberately introduced errors and perform 10-fold cross-validation (within each subject) on the labeled training data. Furthermore, the participating teams are expected to validate the performance of their trained model on the unlabeled test data.
The provided EEG training data includes all the events recorded during the experiment in the marker file (.vmrk). However, the test data, provided during the next phase, will not include these events, in order to simulate the real online scenario that teams will face during the online stage of the competition.
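As a minimal sketch, assuming the recordings are in BrainVision format (as the .vmrk extension suggests) and that the MNE-Python library is used, the data and its event markers could be loaded as follows (the file name is a placeholder):

```python
# Minimal sketch: load one training set and its event markers with MNE-Python.
# BrainVision format (.vhdr/.eeg/.vmrk) is assumed; the file name is a placeholder.
import mne

raw = mne.io.read_raw_brainvision("subject01_set01.vhdr", preload=True)

# The .vmrk markers are exposed as annotations; convert them to an event array
# of shape (n_events, 3): [sample_index, 0, event_code].
events, event_id = mne.events_from_annotations(raw)
print(raw.info["sfreq"], event_id)

# Example: cut fixed-length epochs around every marker for later classification.
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)
```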
Registration:
All participating teams must register for the offline stage of the competition via ConfTool (ConfTool Link) before the registration deadline for the offline competition (12 June 2023, 23:59 AoE).
Note:
This registration helps us organize the database of participating teams. It should not be confused with the official IJCAI’23 registration (https://ijcai-23.org/registration/), which is mandatory if you wish to attend the conference.
- To organize the online stage of the competition, we want to know whether your team would be willing to travel to Macao (if selected among the top 10). It is therefore essential that you state your willingness in the registration form under User Comments (see the Sample Registration Page section).
- Similarly, each participating team is expected to choose a team name. This, too, has to be mentioned under the User Comments section of the registration form (see the Sample Registration Page section).
Submission Guidelines:
Submit a short paper of at most 2 pages, including only 1 table and 1 figure. This report shall describe your overall approach and your results for the training data in the form of a confusion matrix. In addition, the results section shall also contain the results of the test data validation (e.g., a table containing all the sample indices per set per subject). The paper shall be named paper_teamName.pdf (only PDF format is accepted).
In addition, each team is expected to submit a results folder named test_results_teamName. This folder shall contain one text (.txt) file per test set (16 files in total) with the sample indices (integers; comma-separated) of the detected error onsets. An example results folder with one example .txt file, together with a readme file named test_results_readme.txt, will be provided along with the test data on Zenodo as per the schedule. Please follow the exact format shown in the example text file.
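A minimal sketch of writing one such results file is given below; the folder name follows the scheme above, while the per-set file name and the indices are placeholders (take the exact naming and format from the example provided on Zenodo):

```python
# Minimal sketch: write the detected error-onset sample indices for one test set
# as a single line of comma-separated integers inside the results folder.
from pathlib import Path

team_name = "teamName"                                            # placeholder
detected_onsets = [10234, 45678, 81212, 120400, 160987, 201334]   # placeholder indices

out_dir = Path(f"test_results_{team_name}")
out_dir.mkdir(exist_ok=True)

# One .txt file per test set; take the exact file name from the example on Zenodo.
out_file = out_dir / "subject01_testset01.txt"                    # placeholder name
out_file.write_text(",".join(str(i) for i in sorted(detected_onsets)))
```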
We expect all teams to also provide access to their source code (preferably a GitHub repository) for evaluation purposes.
All documents should be submitted via the ConfTool (ConfTool Link) used for registration. The submission portal accepts only one .zip file; thus, the test_results_teamName folder as well as paper_teamName.pdf should be copied into a new folder named submission_teamName and compressed into a .zip file before submission.
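Assuming the folder layout described above, the submission archive could, for example, be assembled as follows (the team name and file names are placeholders):

```python
# Minimal sketch: assemble the single .zip submission from the report and results folder.
import shutil
from pathlib import Path

team_name = "teamName"                                   # placeholder
submission = Path(f"submission_{team_name}")
submission.mkdir(exist_ok=True)

shutil.copy(f"paper_{team_name}.pdf", submission)        # the 2-page report
shutil.copytree(f"test_results_{team_name}",
                submission / f"test_results_{team_name}",
                dirs_exist_ok=True)                      # the 16 result files

# Creates submission_teamName.zip in the current directory.
shutil.make_archive(str(submission), "zip", root_dir=".", base_dir=submission.name)
```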
Evaluation Metrics:
Training phase:
The training data (8 sets per subject) shall be evaluated by the participating teams using 10-fold cross-validation within each subject and the confusion matrix ([True Positive, False Negative; False Positive, True Negative]) has to be reported.
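One possible way to obtain the requested per-subject cross-validation results is sketched below with scikit-learn; the feature matrix X, the binary labels y (error vs. no error), and the linear SVM classifier are placeholders for the team's own preprocessing and model:

```python
# Minimal sketch: 10-fold cross-validation within one subject and the summed
# confusion matrix in the requested [[TP, FN], [FP, TN]] layout. X is a NumPy
# array of shape (n_trials, n_features) and y holds binary labels (1 = error,
# 0 = no error); both come from the team's own preprocessing pipeline.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cross_validate_subject(X, y, n_splits=10, seed=0):
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    cm = np.zeros((2, 2), dtype=int)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(X, y):
        clf.fit(X[train_idx], y[train_idx])
        y_pred = clf.predict(X[test_idx])
        # labels=[1, 0] orders rows/columns so that the result is [[TP, FN], [FP, TN]].
        cm += confusion_matrix(y[test_idx], y_pred, labels=[1, 0])
    return cm
```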
Testing phase – Scoring:
Each of the submitted sample indices (6 per set) will be converted to the corresponding time point in milliseconds and compared against the ground truth (reference), and the temporal error (difference) in milliseconds will be calculated, up to a maximum time limit (yet to be finalized, e.g., 3000 ms). If a detection falls outside this maximum limit, a penalty equal to the limit (e.g., 3000 ms) will be added to the team’s total. The error terms will be summed across all sets and all subjects to produce a grand total, which will be used to determine the winners.
Note:
A detection must not occur before the actual error was introduced. A penalty (equal to the maximum time limit) will be added to the team’s total for each such instance.
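For illustration, a minimal sketch of this scoring scheme is given below. The sampling rate is a placeholder, 3000 ms is the example limit mentioned above, and pairing detections with reference onsets in sorted order is our own assumption for this example:

```python
# Minimal sketch of the scoring described above. The sampling rate is a placeholder
# (take the real value from the data), 3000 ms is the example limit from the text,
# and pairing detections with reference onsets in sorted order is our assumption.
SFREQ_HZ = 500      # placeholder sampling rate
MAX_MS = 3000       # example maximum time limit

def score_set(detected_samples, reference_samples):
    """Sum of temporal errors (ms) for one test set (6 detections per set)."""
    total = 0.0
    for det, ref in zip(sorted(detected_samples), sorted(reference_samples)):
        diff_ms = (det - ref) / SFREQ_HZ * 1000.0
        if diff_ms < 0 or diff_ms > MAX_MS:   # detection before the error or beyond the limit
            total += MAX_MS                   # penalty equal to the maximum time limit
        else:
            total += diff_ms
    return total

# The grand total is the sum of score_set() over all sets of all subjects.
```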
Disclaimer!!
The early registration deadline for attending IJCAI’23 on-site is 19 June 2023, and the late registration deadline is 19 July 2023. The registration fee increases slightly after the early registration deadline (https://ijcai-23.org/registration/).
We will not be able to finish the offline stage of the competition before the early registration deadline, but we will provide the results before the late registration deadline so that the selected teams have enough time to decide and register for the conference. We want to make sure that all participating teams are aware of this.
A maximum of 10 teams with the best results from the offline stage will be selected for the online stage. Here, the teams will get the opportunity to test their approaches on our experimental setup in real time.
During the competition, unlabeled single-trial EEG data will be continuously streamed from an experimental session. The participating teams are expected to have a pre-trained model capable of detecting, from the streaming EEG data, the onset of the errors introduced into the orthosis movement.
Detailed information about the registration, general competition rules & guidelines, and evaluation metrics for the online stage will be made available soon!
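For orientation only, the sketch below shows one way a pre-trained model might consume a live EEG stream, assuming the data arrives via Lab Streaming Layer (pylsl); the actual streaming interface for the online stage has not been announced, and detect_error() stands in for the team's own model:

```python
# Heavily simplified sketch of asynchronous error detection on streaming EEG.
# A Lab Streaming Layer (LSL) stream read via pylsl is assumed, and detect_error()
# is a placeholder for the team's pre-trained model.
import numpy as np
from pylsl import StreamInlet, resolve_stream

def detect_error(window):
    """Placeholder: return True when an error onset is detected in the window."""
    return False

streams = resolve_stream("type", "EEG")      # find an EEG stream on the network
inlet = StreamInlet(streams[0])
sfreq = inlet.info().nominal_srate()

buffer = []
window_samples = int(1.0 * sfreq)            # sliding 1-second analysis window

while True:
    chunk, timestamps = inlet.pull_chunk(timeout=0.1)
    if chunk:
        buffer.extend(chunk)
        buffer = buffer[-window_samples:]    # keep only the most recent window
        if len(buffer) == window_samples and detect_error(np.asarray(buffer).T):
            print("Error onset detected at", timestamps[-1])
```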
Sample Registration Page

Important Links
Registration and Submission: ConfTool Link
Data Report paper on arXiv: Link to the paper.
Dataset:
- Training data: Zenodo Training Data Link.
- Test data: Zenodo Test Data Link
Important Dates / Schedule
19 May 2023 | Start of the offline stage of the competition & release of labeled training data |
29 May 2023 | Release of test data & additional information |
12 June 2023 | Registration Deadline for Offline Competition* |
25 June 2023 | End of the offline stage of the competition & Deadline for Submission* of results, source code and paper |
15 July 2023 | Announcement of selected teams for the online stage |
06 Aug 2023 | Registration Deadline* for the online stage of the competition |
XX Aug 2023 | IJCAI’23 online stage of the competition with real hardware (Exact date of competition will be announced soon!) |
Here, XX is either 22 Aug 2023 or 23 Aug 2023
* All the deadlines are at 23:59 Anywhere on Earth
Schedule for the competition day at IJCAI’23
08:30 – 09:00 | Keynote I: Embedded Brain Reading and Intrinsic Reinforcement Learning by Dr. Elsa Kirchner and Dr. Frank Kirchner |
09:00 – 09:30 | Keynote II: Mobile Brain/Body Imaging by Dr. Klaus Gramann |
09:30 – 10:40 | Presentation of selected papers (15 min. each) |
10:40 – 11:30 | Poster Presentation / Coffee Break |
11:30 – 12:10 | Keynote III: Feature Extraction & ML methods for Cognitive & Affective State Estimation by Dr. Raphaëlle Roy and Dr. Fabien Lotte |
12:10 – 12:30 | Award Ceremony for Winners of the offline stage of the competition |
12:30 – 14:00 | Conference Lunch |
14:00 – 14:30 | Introduction to the IntEr-HRI scenario for the online stage of the competition |
14:30 – 17:30 | IntEr-HRI online stage of the competition with real hardware / Hackathon |
17:30 – 18:00 | Award Ceremony for the online stage of the competition and Concluding Remarks |
Keynote Speakers
Dr. rer. nat. Elsa Andrea Kirchner’s multiple award-winning research builds on a highly interdisciplinary education in neuroscience, computer science, robotics, and psychology. She has authored more than 95 publications in international journals and conference volumes, as well as nine book chapters. In the spirit of transferring research and development, she is, among other things, a founding member of the DLR Space Management network Space2Health, founded in 2020 and funded by the BMWK. Furthermore, from 2018 to August 2022 she was a member of Germany’s Platform for Artificial Intelligence in Working Group 6 ‘Health Care, Medical Technology, Care’. In September 2022, she assumed co-leadership of Working Group 7 ‘Learning Robotics Systems’ within this network.
Prof. Dr. Dr. h.c. Frank Kirchner is the Executive Director of the German Research Center for Artificial Intelligence (DFKI) in Bremen and is responsible for the Robotics Innovation Center, one of the largest centers for AI and robotics in Europe. Founded in 2006 as the DFKI Laboratory, it builds on the basic research of the Robotics Working Group headed by Kirchner at the University of Bremen, where he has held the Chair of Robotics in the Department of Mathematics and Computer Science since 2002. He is one of the leading experts in the field of biologically inspired behavior and motion sequences of highly redundant, multifunctional robot systems and in machine learning for robot control.
Dr. Klaus Gramann received his Ph.D. in psychology from RWTH Aachen, Aachen, Germany. He was a postdoc at LMU Munich, Germany, and at the Swartz Center for Computational Neuroscience, University of California San Diego. After working as a visiting professor at the National Chiao Tung University, Hsinchu, Taiwan, and the University of Osnabrück, Germany, he became the Chair of Biopsychology and Neuroergonomics at the Technical University of Berlin, Germany, in 2012. He has been a Professor with the University of Technology Sydney, Australia, and is an International Scholar at the University of California San Diego. His research covers the neural foundations of cognitive processes with a focus on the brain dynamics of embodied cognition. He directs the Berlin Mobile Brain/Body Imaging Labs (BeMoBIL), which focus on imaging human brain dynamics in actively behaving participants.
Dr. Raphaëlle N. Roy (PhD, Habil.) is Associate Professor of neuroergonomics and physiological computing at ISAE-SUPAERO, University of Toulouse, France. She leads interdisciplinary research at the crossroads of cognitive science, neuroscience, machine learning, and human-machine interaction (HMI). Her main research focus is to investigate how to better characterize operators’ mental state in order to enhance HMI and improve safety and performance. To this end, she develops methods to extract and classify relevant features from physiological data. Co-founder of the French BCI association, co-chair in the Artificial and Natural Intelligence Toulouse Institute (ANITI), and associate editor of the new Frontiers in Neuroergonomics journal, she has also recently published a public database for the passive BCI community and organized the first passive BCI competition.
Dr. Fabien Lotte obtained an M.Sc., an M.Eng. (2005), and a Ph.D. (2008) from INSA Rennes, and a Habilitation (HDR, 2016) from Univ. Bordeaux, all in computer science. His research focuses on the design, study, and application of Brain-Computer Interfaces (BCI). In 2009 and 2010, Fabien Lotte was a research fellow at the Institute for Infocomm Research in Singapore. From 2011 to 2019, he was a Research Scientist at Inria Bordeaux Sud-Ouest, France. Between October 2016 and January 2018, he was a visiting scientist at the RIKEN Brain Science Institute and the Tokyo University of Agriculture and Technology, both in Japan. Since October 2019, he has been a Research Director (DR2) at the Inria Centre at the University of Bordeaux. He is on the editorial boards of the journals Brain-Computer Interfaces (since 2016), Journal of Neural Engineering (since 2016), and IEEE Transactions on Biomedical Engineering (since 2021). He is also co-specialty chief editor of the section “Neurotechnologies and System Neuroergonomics” of the journal Frontiers in Neuroergonomics. He co-edited the books “Brain-Computer Interfaces 1: Foundations and Methods” and “Brain-Computer Interfaces 2: Technology and Applications” (2016) and the “Brain-Computer Interfaces Handbook: Technological and Theoretical Advances” (2018). In 2016, he was the recipient of an ERC Starting Grant to develop his research on BCI, and he was the laureate of the USERN Prize 2022 in Formal Science.
Prizes
Coming Soon!!!
Organising Committee


















Organising Institution(s)


References
- [Bartneck, 2004] Bartneck, Christoph. From Fiction to Science – A cultural reflection on social robotics. 2004.
- [Appriou et al., 2021] A. Appriou, L. Pillette, D. Trocellier, D. Dutartre, A. Cichocki, F. Lotte, “BioPyC, an Open-Source Python Toolbox for Offline Electroencephalographic and Physiological Signals Classification”. MDPI Sensors, vol. 21, no. 5740, 2021.
- [Fairclough & Lotte, 2020] S. Fairclough*, F. Lotte* (*: authors contributed equally), “Grand Challenges in Neurotechnology and System Neuroergonomics”, Frontiers in Neuroergonomics: Section Neurotechnology and Systems Neuroergonomics, 2020.
- [Gramann et al., 2014] Gramann, K., Ferris, D. P., Gwin, J., & Makeig, S. (2014). Imaging natural cognition in action. International Journal of Psychophysiology, 91(1), 22-29.
- [José de Gea Fernández et al., 2017] José de Gea Fernández, Dennis Mronga, Martin Günther, Tobias Knobloch, Malte Wirkus, Martin Schröer, Mathias Trampler, Stefan Stiene, Elsa Andrea Kirchner, Vinzenz Bargsten, Timo Bänziger, Johannes Teiwes, Thomas Krüger, Frank Kirchner. Multimodal Sensor-Based Whole-Body Control for Human-Robot Collaboration in Industrial Settings. In Robotics and Autonomous Systems, Elsevier, volume 94, pages 102-119, Aug/2017.
- [Kim et al., 2017] Su Kyoung Kim, Elsa Andrea Kirchner, Arne Stefes, Frank Kirchner. Intrinsic interactive reinforcement learning – Using error-related potentials for real world human-robot interaction. In Scientific Reports, Nature, volume 7: 17562, December 2017.
- [Kim et al., 2020] Su Kyoung Kim, Elsa Andrea Kirchner, Frank Kirchner. Flexible online adaptation of learning strategy using EEG-based reinforcement signals in real-world robotic applications. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA-2020), 31.5.-31.8.2020, Paris, IEEE, pages 4885-4891, 2020.
- [Kim et al., 2023] Su Kyoung Kim, Michael Maurus, Mathias Trampler, Marc Tabie, Elsa Andrea Kirchner. Asynchronous classification of error-related potentials in human-robot interaction. In Proceedings of HCI International 2023 Conference, Copenhagen, Denmark, 23-28 July 2023. Accepted.
- [Kirchner et al., 2019] Elsa Andrea Kirchner, Stephen Fairclough and Frank Kirchner. Embedded Multimodal Interfaces in Robotics: Applications, Future Trends, and Societal Implications. In The Handbook of Multimodal-Multisensor Interfaces, Morgan & Claypool Publishers, volume 3, chapter 13, pp. 523-576, 2019, ISBN: e-book: 978-1-97000-173-0, hardcover: 978-1-97000-175-4, paperback: 978-1-97000-172-3, ePub: 978-1-97000-174-7.
- [Kirchner et al., 2015] Elsa Andrea Kirchner, José de Gea Fernández, Peter Kampmann, Martin Schröer, Jan Hendrik Metzen, Frank Kirchner. In Formal Modeling and Verification of Cyber Physical Systems, Springer Heidelberg, pages 224-248, Sep/2015. ISBN: 978-3-658-09993-0.
- [Protzak & Gramann, 2018] Protzak, J., & Gramann, K. (2018). Investigating established EEG parameter during real-world driving. Frontiers in Psychology, 9, 2289.
- [Roy et al., 2020] Roy, R. N., Drougard, N., Gateau, T., Dehais, F., & Chanel, C. P., “How can physiological computing benefit human-robot interaction?”, Robotics, 9(4), 100, 2020.
- [Roy et al., 2022] R.N. Roy, M.F. Hinss, L. Darmet, S. Ladouce, E.S. Jahanpour, B. Somon, X. Xu, N. Drougard, F. Dehais, F. Lotte, “Retrospective on the first passive brain-computer interface competition on cross-session workload estimation”, Frontiers in Neuroergonomics: Neurotechnology and Systems Neuroergonomics, 2022.
- [Sadatnejad & Lotte 2022] K. Sadatnejad, F. Lotte, “Riemannian channel selection for BCI with between-session non-stationarity reduction capabilities“, IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2022
- [Singh et al., 2022] Singh, G., Roy, R. N., & Ponzoni Carvalho Chanel, C. “POMDP-based adaptive interaction through physiological computing”, HHAI, 2022.
- [Wöhrle and Kirchner, 2014] H. Wöhrle and E. A. Kirchner. Online Detection of P300 related Target Recognition Processes During a Demanding Teleoperation Task. In Proceedings of the International Conference on Physiological Computing Systems (PHYCS-14), 07.01.-09.01.2014, Lisbon, Scitepress Digital Library, January 2014.
Questions / Contact
For any questions or queries, please use this Contact Form.