IntEr-HRI Competition:

Intrinsic Error Evaluation during Human-Robot Interaction

IntEr-HRI

Motivation

Human-Robot Interaction, as a concept, can trace its inspiration to science fiction, from Isaac Asimov’s “Three Laws of Robotics” to Marvel Comics’ robot healthcare companion Baymax [Bartneck, 2004]. The rapid strides made in artificial intelligence and robotics over the last decade have helped turn fiction into reality. Today, we find more and more robots sharing space with humans and working together with them to achieve a common goal, which forms the core of this young field of research.

In order to facilitate better interaction between humans and robots, human-in-the-loop learning is of great importance [José de Gea Fernández et al., 2017]. While there are many different approaches to achieve this, the use of human brain activity as a source of intrinsic feedback about the correctness of an interaction [Kim et al., 2017, 2020] or the experienced task load [Wöhrle and Kirchner, 2014] is very promising. Retrieving such information from the electroencephalogram (EEG) data of the interacting human provides insights into their mindset and subjective satisfaction with the robot’s performance.

However, there are tremendous challenges to be addressed, such as the feasibility of recording and using EEG data under real-world conditions [Gramann et al., 2014, Protzak & Gramann, 2018, Fairclough & Lotte, 2020, Roy et al., 2022, Sadatnejad & Lotte, 2022], decoding the brain asynchronously [Lotte et al., 2018, Kim et al., 2023], and making use of human feedback in autonomous systems [Kirchner et al., 2019; Roy et al., 2020; Singh et al., 2022] to truly improve human-robot interaction [Roy et al., 2020]. To tackle these challenges [Kirchner et al., 2015], new machine learning approaches are needed [Appriou et al., 2021], alongside a holistic approach that combines real and artificial intelligence.

The Challenge

The main goal of the challenge is to develop new competitive signal processing and/or machine learning (ML) approaches for the asynchronous detection of erroneous behavior based on single-trial EEG analysis. The competition is divided into two stages: an offline stage and an online stage.

Offline Stage:

For this stage, a pre-recorded single-trial brain activity (EEG) dataset has been provided in the form of labeled training data. To download the training data, follow this link: Zenodo Training Data Link. To download the test data, follow this link: Zenodo Test Data Link (publicly accessible on 29 May 2023). Furthermore, detailed information about the experimental setup, procedure, and dataset can be found in our data report paper uploaded on arXiv: (Link to the paper).

Note:

  • The above-mentioned paper provides details about the data recording for our complete study, in which we also recorded EMG data and some additional experimental conditions such as delayed ball squeeze and no ball squeeze. However, for the competition, only a subset of this dataset has been made public. The complete dataset will be made available after the competition.

  • For the first phase of the offline stage, we have provided 8 labeled training sets from each of the 8 subjects.

  • The remaining 2 sets will be made available as unlabeled test data by 29 May 2023 as per the schedule.

Problem Statement:

The challenge is to train an ML model to detect the onset of the deliberately introduced errors and to perform 10-fold cross-validation (within each subject) on the labeled training data. Furthermore, the participating teams are expected to validate the performance of their trained model on the unlabeled test data.

The provided EEG training data includes all the events recorded during the experiment inside the marker file (.vmrk). However, the test data, provided during the next phase, will not include these events, in order to simulate the real online scenario that participants will face in the online stage of the competition.
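
The training data are in BrainVision format, where the .vmrk marker file accompanies the header (.vhdr) and raw signal files. As a minimal sketch (the file name below is hypothetical; use the names from the Zenodo archive), the recorded events could be extracted with MNE-Python as follows:

import mne

# Loading the .vhdr header also reads the .eeg signal and .vmrk marker files.
raw = mne.io.read_raw_brainvision("subject1_set1.vhdr", preload=True)

# The markers become annotations; convert them to an (n_events, 3) array
# whose first column holds the sample index of each event.
events, event_id = mne.events_from_annotations(raw)
print(event_id)        # mapping from marker description to integer code
print(events[:5, 0])   # sample indices of the first five events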

Registration:

All participating teams must register for the offline stage of the competition via ConfTool (ConfTool Link) before the registration deadline for the offline competition (12 June 2023, 23:59 AoE).

Note:

  • Only one registration per team is allowed; please provide the contact details of the contact person / team leader.
  • This registration helps us organize the database of participating teams. It should not be confused with the official IJCAI’23 registration (https://ijcai-23.org/registration/), which is mandatory if you wish to attend the conference.

  • To organize the online stage of the competition, we need to know whether your team would be willing to travel to Macao (if selected among the top 10). Hence, it is essential that you state your willingness in the registration form under User Comments (see the Sample Registration Page section).

  • Similarly, each participating team is expected to choose a team name. This, too, has to be mentioned under the User Comments section of the registration form.

Submission Guidelines:

  • Submit a short paper of at most 2 pages, including only 1 table and 1 figure. This report shall describe your overall approach and your results, in the form of a confusion matrix, on the training data. In addition, the results section shall also contain the results of the test-data validation (e.g., a table containing all the sample indices per set per subject). The paper shall be named paper_teamName.pdf (only PDF format is accepted).

  • Also, we expect each team to submit a results folder named test_results_teamName. This folder shall contain a text (.txt) file for each test set (16 files in total) with the sample indices (integer; comma-separated) of the detected error onsets; a minimal sketch of writing such files follows this list. An example results folder with one example .txt file will be provided along with the test data on Zenodo as per the schedule. Please make sure to follow the exact format shown in the example text file. Additionally, a readme file named test_results_readme.txt will be provided.

  • We expect all teams to also provide access to their source code (preferably a GitHub repository) for evaluation purposes.

  • All documents should be submitted via the ConfTool (ConfTool Link) used for registration. The submission portal only accepts one .zip file. Thus, the test_results_teamName folder and paper_teamName.pdf should be copied into a new folder named submission_teamName and compressed into a .zip file before submission.
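
To illustrate the expected format of the result files, here is a minimal sketch that writes one comma-separated index file per test set; the file names and index values below are hypothetical, and the example .txt file provided on Zenodo remains authoritative.

from pathlib import Path

# Hypothetical detections: one entry per test set (16 in total), each a
# list of detected error-onset sample indices (integers).
detections = {
    "subject1_set9.txt": [10240, 53210, 91005, 130552, 170811, 210003],
    "subject1_set10.txt": [9985, 50102, 88700, 128441, 169020, 208515],
    # ... entries for the remaining test sets ...
}

out_dir = Path("test_results_teamName")
out_dir.mkdir(exist_ok=True)
for fname, indices in detections.items():
    (out_dir / fname).write_text(",".join(str(i) for i in indices))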

Evaluation Metrics:

Training phase:

The training data (8 sets per subject) shall be evaluated by the participating teams using 10-fold cross-validation within each subject, and the confusion matrix ([True Positive, False Negative; False Positive, True Negative]) has to be reported.
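
The following is a minimal sketch of such a within-subject evaluation using scikit-learn; the epoch features (X), the labels (y), and the choice of classifier are placeholder assumptions, not part of the competition specification.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold, cross_val_predict

def evaluate_subject(X, y):
    # X: (n_epochs, n_features) features from one subject's 8 training sets
    # y: binary labels (1 = error epoch, 0 = no-error epoch)
    clf = LogisticRegression(max_iter=1000)  # placeholder classifier
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    y_pred = cross_val_predict(clf, X, y, cv=cv)
    # Report as [TP, FN; FP, TN], matching the convention above.
    tn, fp, fn, tp = confusion_matrix(y, y_pred, labels=[0, 1]).ravel()
    return np.array([[tp, fn], [fp, tn]])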

Testing phase – Scoring:

Each of the submitted sample indices (6 per set) will be converted to the corresponding time point in milliseconds and compared against the ground truth (reference), and the temporal error (difference) in milliseconds will be calculated, up to a maximum time-point limit (1000 ms). If a detection falls outside the maximum time-point limit, a penalty equal to that limit (i.e., 1000 ms) will be added to the team’s total. The error terms will be summed across all sets and all subjects to produce a grand total, which will be used to determine the winners.

Note:

A detection must not occur before the actual error was introduced. Hence, a penalty (the maximum time-point value) will be added to the team’s total for each such instance.
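
As an illustration of this scoring rule, here is a hedged sketch; the function and variable names are ours, and the organisers’ evaluation code remains authoritative.

MAX_MS = 1000  # maximum time-point limit, also used as the penalty

def temporal_error(pred_ms, truth_ms):
    diff = pred_ms - truth_ms
    # Detections before the error (diff < 0) or more than MAX_MS after it
    # incur the full penalty.
    return MAX_MS if diff < 0 or diff > MAX_MS else diff

# Example for one set: predicted vs. true error onsets in milliseconds.
preds = [10512, 42200, 70100]    # 12 ms late, 1800 ms late, 200 ms early
truths = [10500, 40400, 70300]
set_total = sum(temporal_error(p, t) for p, t in zip(preds, truths))
print(set_total)                 # 12 + 1000 + 1000 = 2012

The set totals are then summed over all sets and subjects to obtain the grand total (lower is better).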

Disclaimer:

The early registration deadline for attending IJCAI’23 on-site is 19 June 2023, and the late registration deadline is 19 July 2023. The registration fees increase slightly after the early registration deadline (https://ijcai-23.org/registration/).

We will not be able to finish the offline stage of the competition before the early registration deadline, but we will provide the results before the late registration deadline to ensure that the selected teams have enough time to decide and register for the conference. It is important to us that the participating teams are made aware of this.

Online Stage:

A maximum of 10 teams with the best results from the offline stage will be selected for this stage. Here, the teams will get an opportunity to test their approaches on our experimental setup in real time.

During the competition, unlabeled single-trial EEG data will be continuously streamed from an experimental session. The participating teams are expected to have a model capable of detecting the onset of the errors introduced in the orthosis movement from the streamed EEG data.

Location and Competition Format:

The online experiment will take place off-site on 9 August 2023, and the data will be streamed online so that all participating teams can access it remotely in real time. We strongly advise the participants of the online competition to ensure that they have a stable network connection on the day of the experimental session. The experimental protocol is designed in such a way that small latencies in the data transfer will not affect the performance or the quality of the participants’ results. All participants are advised to attend the dry-run session on 8 August 2023 to test the data-streaming pipeline before the competition day.

AQ59D will be the subject for the experiment. The goal of the online stage is to continuously detect the introduced errors during the operation of the active orthosis device (same experimental setup and procedure as in the offline stage). However, there will be 2 scenarios in which the performance of the competing teams will be evaluated:

  1. detection of errors with a direct response (pressing the air-filled balloon as soon as the error is felt)
  2. detection of errors with no response (the subject feels the errors but does not press the balloon)

Therefore, the experimental conditions of scenario (1) are equivalent to the conditions of the offline competition, whereas scenario (2) does not include explicit feedback by pressing the air-filled balloon. The online classification results of both scenarios will be evaluated to determine the winning team of the competition.

Remote Online Data Streaming Protocol:

  • Setting up a VPN Tunnel:

    In order to access the data stream, the qualified teams need to connect to our private network via a VPN tunnel. This can be done with WireGuard, a cross-platform VPN tool available for all major operating systems (Linux, Windows, and macOS). Each team will be provided with a unique configuration file (xyz.conf) needed to connect to our network.

    Installation and Procedure:

    • Linux

On Debian/Ubuntu-based distributions, the required packages can be installed using the following command:

sudo apt install wireguard-tools wireguard resolvconf

Next, navigate to the directory where you have saved the VPN config file (xyz.conf) and enter the following command to activate the VPN tunnel (the ./ prefix makes wg-quick treat the argument as a config-file path rather than an interface name):

wg-quick up ./xyz.conf

If you wish to deactivate the VPN tunnel, use the following command instead:

wg-quick down ./xyz.conf

    • Windows / macOS

For Windows and macOS, the WireGuard app provides an easy GUI to activate and deactivate the VPN tunnel. At the bottom of the dialog box there is an Add Tunnel button, where you can select the provided VPN config file (xyz.conf). Then simply click the Activate button to activate the VPN.

  • Lab Streaming Layer (LSL):

    We will be using LSL, coupled with the VPN tunnel, to stream the data online. The stream will consist of 65 channels (64 channels of EEG data chunks, with the 65th channel containing the markers). In parallel, we expect each team to send us the sample indices and time stamps, as and when they detect the introduced errors, as an IP request. Demo Python scripts for receiving and sending data will be provided to each team, along with a unique secret access code and an LSL config file (xyz.cfg).

    For more information regarding the Lab Streaming Layer, please refer to the User Guide.

    Installation:

    A few packages must be installed before you can use LSL.

    • Python interface for LSL (pylsl)

In order to install pylsl, use the following command:

pip install pylsl

    • LSL Library (liblsl)

In order to install liblsl, visit this site. Download and install the correct software package depending on your OS or Linux distribution/version.

N.B.: The sample code for reading the LSL stream is provided on our GitHub page along with a config file (LiveAmpIjcai.cfg) which provides the 64 channel names for the EEG stream and some metadata.
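
As an illustration (not the official demo script), receiving the stream with pylsl could look like the following minimal sketch; the stream type 'EEG' is an assumption, and the channel names and metadata come from LiveAmpIjcai.cfg.

from pylsl import StreamInlet, resolve_stream

streams = resolve_stream('type', 'EEG')  # assumed stream type; blocks until found
inlet = StreamInlet(streams[0])

while True:
    # chunk: list of 65-element samples (64 EEG channels + 1 marker channel)
    chunk, timestamps = inlet.pull_chunk(timeout=1.0, max_samples=32)
    for sample, ts in zip(chunk, timestamps):
        eeg, marker = sample[:64], sample[64]
        # ... feed eeg into the error-detection model and report detected
        # onsets via the provided sending script ...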

Evaluation Metrics:

There are 3 different metrics that will be used to determine the final performance score for each team.

  • Balanced Accuracy (BA_{cc})
  • Difference between the predicted error time and the ground truth (t_{err})
  • Computation time (t_{comp})

The final performance score will be a weighted average of the BA_{cc} and a consolidated time score (t_{score}), with 70% weight given to the BA_{cc}.

Calculating the Balanced Accuracy (BA_{cc}):

The 4 terms that constitute the confusion matrix are True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). Calculating these terms is essential to determine the Balanced Accuracy of the classifier models.

BA_{cc} = \frac{\left(\frac{n_{TP}}{n_{TP} \ + \ n_{FN}}\right) + \left(\frac{n_{TN}}{n_{TN} \ + \ n_{FP}}\right)}{2}

N.B.: The boundaries of each movement trial (flexion or extension) will be the basis for determining the above quantities. Any predictions before the start or after the end of a movement trial will be ignored for the error in question. This means that multiple false classifications within a movement trial are counted as a single FP. Furthermore, if there are multiple TPs and multiple FPs within a movement trial, they are counted as a single TP (the prediction nearest to the error is used for the calculations).
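
A hedged sketch of this per-trial collapsing rule and of the BA_{cc} formula is given below; trial boundaries, prediction times, and function names are illustrative assumptions, and the organisers’ evaluation code remains authoritative.

def classify_trial(preds_ms, start_ms, end_ms, error_ms=None):
    # Return 'TP', 'FN', 'FP', or 'TN' for one movement trial.
    # preds_ms: times (ms) at which the model flagged an error;
    # error_ms: ground-truth error onset inside the trial, or None.
    inside = [p for p in preds_ms if start_ms <= p <= end_ms]
    if error_ms is None:
        return 'FP' if inside else 'TN'  # repeated FPs count only once
    # Error trial: any prediction yields a single TP; the prediction nearest
    # to the error is the one later used for the time scores.
    return 'TP' if inside else 'FN'

def balanced_accuracy(n_tp, n_fn, n_fp, n_tn):
    return 0.5 * (n_tp / (n_tp + n_fn) + n_tn / (n_tn + n_fp))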

Calculating the time score (t_{score}):

For calculating the time score, two quantities are considered: t_{err} and t_{comp}.

    • t_{err}

This quantity represents the difference (in ms) between the predicted error index and the ground truth. This is exactly the same quantity that was used in the offline stage to calculate the performance score.

    • t_{comp}

This quantity represents the time taken by the trained classifier model to classify a sample as an error, i.e., the duration between the instant when a sample was received and the moment when it was classified as an error. It is included to encourage and reward real-time classification of the introduced errors, which is the main purpose of this competition. Thus, t_{comp} \in [0, 1000] ms would be the ideal case, but t_{comp} \in [0, 3000] ms is acceptable. If t_{comp} > 3000 ms, the classification no longer counts as a TP.

The calculation of the t_{score} is a 2-step process.

    • The first step involves calculating an individual time score (t_i) for each TP. For this, a logistic function is used, which starts from t_{err}. This means that

f(0) = t_{err}

f(3000) = 1000

Thus, for a TP, t_{err} is the starting score, and the t_{comp} aspect is introduced into this score via the logistic function. The X-axis of the logistic function is therefore t_{comp} (domain D = \{ x \mid x \in [0, 3000] \}) and the Y-axis is t_i (range R = \{ y \mid y \in [t_{err}, 1000] \}).

    • In the second step, all the individual time scores (one per TP) are summed up and normalised.

t_{score} = 1 - \frac{\sum\limits_{i=1}^{n_{TP}} t_i}{n_{TP} \cdot 1000}
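
A hedged sketch of this two-step computation follows; the exact steepness (k) and midpoint (x0) of the organisers’ logistic function are not published, so the values below are illustrative assumptions, rescaled so that f(0) = t_err and f(3000) = 1000 hold exactly.

import math

def individual_time_score(t_err, t_comp, k=0.004, x0=1500.0):
    # Map t_comp in [0, 3000] ms to t_i in [t_err, 1000]; k and x0 are
    # illustrative assumptions, not the official parameters.
    s = lambda x: 1.0 / (1.0 + math.exp(-k * (x - x0)))
    lo, hi = s(0.0), s(3000.0)
    frac = (s(t_comp) - lo) / (hi - lo)  # 0 at t_comp = 0, 1 at t_comp = 3000
    return t_err + (1000.0 - t_err) * frac

def time_score(individual_scores):
    # t_score = 1 - sum(t_i) / (n_TP * 1000), per the formula above
    n_tp = len(individual_scores)
    return 1.0 - sum(individual_scores) / (n_tp * 1000.0)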

Calculating the final performance score (FPS):

The final performance score of each team is the weighted average between BA_{cc} and t_{score}.

FPS = W \cdot BA_{cc} + (1 - W) \cdot t_{score},

with W = 0.7 and FPS \in [0, 1]

N.B.: In the online case, the team with the highest FPS will be the winner.

Keynotes:

As the competition has been shifted off-site, it has been decided, in consultation with the keynote speakers, to pre-record the keynotes and make them available around mid-July 2023 (see the Keynotes section for the embedded recordings). This should give the participating teams ample time to watch the keynotes and have an informed discussion with the speakers during the Q&A session that will be organized during the IJCAI’23 conference.

Our booth at IJCAI’23 Conference:

During the conference in Macao, a booth will be set up for 1.5 hours, during which the selected top-performing teams will present their approaches and results. Additionally, an open Q&A session with the keynote speakers will be organized, providing an opportunity to exchange ideas, address concerns, clarify doubts, and engage in scientific discussions with experts in the fields of AI, robotics, neuroscience, and HRI.

Results

We are delighted to announce the results of the offline stage of the competition. The top 3 teams will be notified via personalised emails disclosing their rankings, along with a detailed analysis of their submitted results in a .txt file. The email will also contain the reviewers’ comments, which should be incorporated into the 2-page paper; the final draft has to be submitted by 31 July 2023.

Given below are some useful general statistics about the performance of the top 3 teams:

We are delighted to announce that Yanzhao Pan from Team NeuroXR Explorers, affiliated with the Young Investigator Group – Intuitive XR at the Brandenburg University of Technology Cottbus-Senftenberg, has emerged as the overall winner of the competition. Given below are the evaluation metrics from his performance in the online stage of the competition.

Selected Papers (Offline Stage)

Team NeuroPrior AI (Winners – Offline Stage) – Paper Link

Team NeuroXR Explorers (Overall Winner) – Paper Link

ChocolaTeam – Paper Link

Important Links

Registration and Submission: ConfTool Link

Data Report Paper published in Frontiers in Human Neuroscience: Link to the paper.

Dataset:

Important Dates

Keynotes

Prof. Dr. rer. nat. Elsa Andrea Kirchner has been a professor at the University of Duisburg-Essen since 2021, where she heads the “Medical Technology Systems” department at the Faculty of Engineering. She also heads the “Intelligent Healthcare Systems” team at the Robotics Innovation Center of the German Research Center for Artificial Intelligence (DFKI) in Bremen, where she worked for many years. After her studies in biology, Elsa Kirchner laid the foundation of her interdisciplinary research in the areas of human-robot interaction, embedded brain reading, neurophysiological methods (especially EEG, EMG, and other physiological data as well as motion data), behavioral analysis in humans, learning in humans and in artificial agents, embedded AI, embodied AI, and hybrid AI with a research stay at the Department of Brain and Cognitive Sciences at MIT in Boston, USA, supported by the “Familie Klee Award”. Her PhD in computer science was recognized as one of the best of the year by the “Gesellschaft für Informatik” in 2014. She is the author of more than 100 publications in international journals and conferences and 9 book chapters. For research transfer, Elsa Andrea Kirchner is involved, among other things, as a founding member of the DLR space management network “Space2Health”. Furthermore, from 2018 to August 2022 she was a member of Germany’s Platform for Artificial Intelligence in Working Group 6 ‘Health Care, Medical Technology, Care’. In September 2022, she assumed co-leadership of Working Group 7 ‘Learning Robotics Systems’ within this network. In April 2023, she was appointed to the “Council for Technological Sovereignty” of the BMBF.

Prof. Dr. Dr. h.c. Frank Kirchner is the Executive Director of the German Research Center for Artificial Intelligence, Bremen, and is responsible for the Robotics Innovation Center, one of the largest centers for AI and Robotics in Europe. Founded in 2006 as the DFKI Laboratory, it builds on the basic research of the Robotics Working Group headed by Kirchner at the University of Bremen. There, Kirchner has held the Chair of Robotics in the Department of Mathematics and Computer Science since 2002. He is one of the leading experts in the field of biologically inspired behavior and motion sequences of highly redundant, multifunctional robot systems and machine learning for robot control.

Dr. Klaus Gramann received his Ph.D. in psychology from RWTH Aachen, Aachen, Germany. He was a postdoc at LMU Munich, Germany, and at the Swartz Center for Computational Neuroscience, University of California at San Diego. After working as a visiting professor at the National Chiao Tung University, Hsinchu, Taiwan, and the University of Osnabrück, Germany, he became the chair of Biopsychology and Neuroergonomics at the Technical University of Berlin, Germany in 2012. He has been a Professor with the University of Technology Sydney, Australia, and is an International Scholar at the University of California San Diego. His research covers the neural foundations of cognitive processes with a focus on the brain dynamics of embodied cognitive processes. He directs the Berlin Mobile Brain/Body Imaging Labs (BeMoBIL), which focus on imaging human brain dynamics in actively behaving participants.

Dr. Raphaëlle N. Roy (PhD, Habil.) is an Associate Professor of neuroergonomics and physiological computing at ISAE-SUPAERO, University of Toulouse, France. She leads interdisciplinary research at the crossroads of cognitive science, neuroscience, machine learning, and human-machine interaction (HMI). Her primary research focuses on how to better characterize operators’ mental state to enhance HMI and improve safety and performance. To this end, she develops methods to extract and classify relevant features from physiological data. Co-founder of the French BCI association, co-chair in the Artificial and Natural Intelligence Toulouse Institute (ANITI), and associate editor of the new Frontiers in Neuroergonomics journal, she has also recently published a public database for the passive BCI community and organized the first passive BCI competition.

Dr. Fabien Lotte obtained an M.Sc., an M.Eng. (2005), and a Ph.D. (2008) from INSA Rennes, and a Habilitation (HDR, 2016) from Univ. Bordeaux, all in computer science. His research focuses on the design, study, and application of Brain-Computer Interfaces (BCI). In 2009 and 2010, Fabien Lotte was a research fellow at the Institute for Infocomm Research in Singapore. From 2011 to 2019, he was a Research Scientist at Inria Bordeaux Sud-Ouest, France. Between October 2016 and January 2018, he was a visiting scientist at the RIKEN Brain Science Institute and the Tokyo University of Agriculture and Technology, both in Japan. Since October 2019, he has been a Research Director (DR2) at the Inria Centre at the University of Bordeaux. He is on the editorial boards of the journals Brain-Computer Interfaces (since 2016), Journal of Neural Engineering (since 2016), and IEEE Transactions on Biomedical Engineering (since 2021). He is also “co-specialty chief editor” of the section “Neurotechnologies and System Neuroergonomics” of the journal “Frontiers in Neuroergonomics”. He co-edited the books “Brain-Computer Interfaces 1: foundations and methods” and “Brain-Computer Interfaces 2: technology and applications” (2016) and the “Brain-Computer Interfaces Handbook: Technological and Theoretical Advances” (2018). In 2016, he was the recipient of an ERC Starting Grant to develop his research on BCI and was the laureate of the USERN Prize 2022 in Formal Science.

Prizes

There are multiple prizes planned for the winning team as follows:

  • Full waiver to publish in the “Frontiers in Neuroergonomics” journal

The editors of Frontiers in Neuroergonomics have agreed to provide the winning team a full waiver of the article processing charges, which amount to USD 2,080. The deadline for submission of the manuscript is 30 September 2023. The waiver is only valid until this deadline, and the submission falls under the Technology & Code paper type (A-Type Articles) within the Research Topic “Open Science to Support Replicability in Neuroergonomic Research”.

Organising Committee

Organising Institution(s)

Prize Sponsors

References

  • [Bartneck, 2004] Bartneck, Christoph. From Fiction to Science – A cultural reflection on social robotics. 2004.
  • [Appriou et al, 2021] A. Appriou, L. Pillette, D. Trocellier, D. Dutartre, A. Cichocki, F. Lotte, “BioPyC, an Open-Source Python Toolbox for Offline Electroencephalographic and Physiological Signals Classification”. MDPI Sensors, vol. 21, no. 5740, 2021. 
  • [Fairclough & Lotte 2020] S. Fairclough*, F. Lotte* (*: authors contributed equally), “Grand Challenges in Neurotechnology and System Neuroergonomics”, Frontiers in Neuroergonomics: Section Neurotechnology and Systems Neuroergonomics, 2020.
  • [Gramann et al., 2014] Gramann, K., Ferris, D. P., Gwin, J., & Makeig, S. (2014). Imaging natural cognition in action. International Journal of Psychophysiology, 91(1), 22-29. 
  • [José de Gea Fernández et al., 2017] José de Gea Fernández, Dennis Mronga, Martin Günther, Tobias Knobloch, Malte Wirkus, Martin Schröer, Mathias Trampler, Stefan Stiene, Elsa Andrea Kirchner, Vinzenz Bargsten, Timo Bänziger, Johannes Teiwes, Thomas Krüger, Frank Kirchner. Multimodal Sensor-Based Whole-Body Control for Human-Robot Collaboration in Industrial Settings. In Robotics and Autonomous Systems, Elsevier, volume 94, pages 102-119, Aug/2017. 
  • [Kim et al., 2017] Su Kyoung Kim, Elsa Andrea Kirchner, Arne Stefes, Frank Kirchner. Intrinsic interactive reinforcement learning – Using error-related potentials for real world human-robot interaction. In Scientific Reports, Nature, volume 7: 17562, December 2017.
  • [Kim et al., 2020] Su Kyoung Kim, Elsa Andrea Kirchner, Frank Kirchner. Flexible online adaptation of learning strategy using EEG-based reinforcement signals in real-world robotic applications. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA-2020), 31.5.-31.8.2020, Paris, IEEE, pages 4885-4891, 2020.
  • [Kim et al., 2023] Su Kyoung Kim, Michael Maurus, Mathias Trampler, Marc Tabie, Elsa Andrea Kirchner. Asynchronous classification of error-related potentials in human-robot interaction. In Proceedings of HCI International 2023 Conference, Copenhagen, Denmark, 23-28 July 2023. Accepted.  
  • [Kirchner et al., 2019] Elsa Andrea Kirchner, Stephen Fairclough and Frank Kirchner. Embedded Multimodal Interfaces in Robotics: Applications, Future Trends, and Societal Implications. In The Handbook of Multimodal-Multisensor Interfaces, Morgan & Claypool Publishers, volume 3, chapter 13, pp. 523-576, 2019, ISBN: e-book: 978-1-97000-173-0, hardcover: 978-1-97000-175-4, paperback: 978-1-97000-172-3, ePub: 978-1-97000-174-7. 
  • [Kirchner et al., 2015] Elsa Andrea Kirchner, José de Gea Fernández, Peter Kampmann, Martin Schröer, Jan Hendrik Metzen, Frank Kirchner. In Formal Modeling and Verification of Cyber Physical Systems, Springer Heidelberg, pages 224-248, Sep/2015. ISBN: 978-3-658-09993-0.
  • [Lotte et al., 2018] F. Lotte, L. Bougrain, A. Cichocki, M. Clerc, M. Congedo, A. Rakotomamonjy, F. Yger. A review of classification algorithms for EEG-based brain-computer interfaces: a 10-year update. Journal of Neural Engineering, 15(3): 031005, 2018.
  • [Protzak & Gramann, 2018] Protzak, J., & Gramann, K. (2018). Investigating established EEG parameter during real-world driving. Frontiers in Psychology, 9, 2289.
  • [Roy et al., 2020] Roy, R. N., Drougard, N., Gateau, T., Dehais, F., & Chanel, C. P. “How can physiological computing benefit human-robot interaction?”, Robotics, 9(4), 100, 2020.
  • [Roy et al., 2022] R.N. Roy, M.F. Hinss, L. Darmet, S. Ladouce, E.S. Jahanpour, B. Somon, X. Xu, N. Drougard, F. Dehais, F. Lotte, “Retrospective on the first passive brain-computer interface competition on cross-session workload estimation”, Frontiers in Neuroergonomics: Neurotechnology and Systems Neuroergonomics, 2022. 
  • [Sadatnejad & Lotte 2022] K. Sadatnejad, F. Lotte, “Riemannian channel selection for BCI with between-session non-stationarity reduction capabilities“, IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2022 
  • [Singh et al., 2022] Singh, G., Roy, R. N., & Ponzoni Carvalho Chanel, C. “POMDP-based adaptive interaction through physiological computing”, HHAI, 2022. 
  • [Wöhrle and Kirchner, 2014] H. Wöhrle and E. A. Kirchner. Online Detection of P300 related Target Recognition Processes During a Demanding Teleoperation Task. In Proceedings of the International Conference on Physiological Computing Systems (PhyCS-14), 07.01.-09.01.2014, Lisbon, SciTePress Digital Library, January 2014.

Questions / Contact

For any questions or queries, please use this Contact Form.