JOSHUA NEWN
I’m a passionate design researcher with extensive training in human-computer interaction, interaction design and user research. My current research focuses on designing, developing and evaluating human-centred AI systems for intelligence augmentation. I am currently a Lecturer (Assistant Professor) in Human-Centred Computing at the School of Computing Technologies at RMIT University, Australia.
I hold a PhD in Human-Computer Interaction from The University of Melbourne — supervised by Frank Vetere and Eduardo Velloso. Previously, I was affiliated with Lancaster University, where I developed further expertise in eye tracking and extended reality (XR) as part of Hans Gellersen’s ERC GEMINI project. Before that, I was a postdoctoral researcher at the AI and Autonomy Lab at The University of Melbourne, working on several human-AI teaming projects.
Teaching
I have extensive teaching experience in human-computer interaction, having taught subjects on human-centred design, emerging technologies, and evaluating user experience. Beyond delivering existing courses, I have developed new and engaging subject material, coordinated subject delivery, and explored novel teaching methods to foster an enriched learning environment.
I have had the privilege of supervising and mentoring students at both undergraduate and postgraduate levels. Through these collaborations, we have produced publications that combine my expertise in human sensing and user studies with their areas of specialisation, making valuable contributions to the AI and HCI fields.
Research
AI has the potential to complement and augment human intelligence and capabilities by offering the right information and guidance at the right time. However, current AI agents are limited in this capacity: they often lack the communication cues and corresponding models needed to proactively determine when their human counterpart requires support and what level of support is appropriate, which limits how effectively they can assist.
My research aims to develop new knowledge for building interactive AI-based systems that can identify opportunities for adaptive AI interventions using unobtrusive sensing technologies — a challenge at the core of augmented intelligence and of systems and services that are proactive and adaptive to human needs. I address two main challenges through design: (1) the extent to which we can identify opportunities for support from multimodal sensor data, and (2) designing AI agents that learn to provide contextually relevant interventions proactively. To achieve this, I leverage eye tracking, a rapidly emerging ubiquitous input modality that has made its way into extended reality (XR) headsets, as an informative modality for intervention prediction.
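To give a concrete flavour of the intervention-prediction step, the sketch below shows how window-level gaze features might feed a simple classifier that flags likely opportunities for support. It is an illustrative sketch only, not code from any of my published systems; the feature names, labels and synthetic data are hypothetical placeholders.

```python
# Illustrative sketch only: predicting intervention opportunities from
# window-level eye-tracking features. All features, labels and data below
# are hypothetical placeholders, not taken from any published system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Placeholder gaze features per time window:
# [mean fixation duration (ms), saccade rate (per s), pupil diameter (mm)]
X = rng.normal(loc=[250.0, 3.0, 3.5], scale=[60.0, 1.0, 0.4], size=(500, 3))
# Placeholder labels: 1 = the user likely needs support in this window, 0 = not.
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# With real (non-random) data, this report would indicate how reliably
# opportunities for support can be detected from gaze features alone.
print(classification_report(y_test, clf.predict(X_test)))
```

In practice, the interesting questions lie less in the classifier itself than in which multimodal features to extract, how to label genuine opportunities for support, and how an agent should act on the resulting predictions.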
Here’s a short research vision video I created during my PhD: youtu.be/4nb9ZT1hKSI
I am revamping my projects page into a portfolio format to better showcase project outcomes and impact. For the moment, here are links to projects I have worked on that have an online presence:
Publications
I actively publish in top-tier venues in Human-Computer Interaction (HCI) and Artificial Intelligence (AI). In HCI, fully refereed conference proceedings are the primary publication venues, alongside prestigious journals. Please refer to my Google Scholar page for up-to-date publications and metrics. All publications are publicly available through my ResearchGate profile.
Journal Articles
[J13] Understanding the Impact of the Reality-Virtuality Continuum on Visual Search Using Fixation-Related Potentials and Eye Tracking Features. Francesco Chiossi, Uwe Gruenefeld, Baosheng James Hou, Joshua Newn, Changkun Ou, Rulu Liao, Robin Welsch, and Sven Mayer. 2024. In: ACM PACM HCI (MHCI) 8 (Sept. 2024), pp. 281:1-281:33. DOI: 10.1145/3676528
[J12] HeadShift: Head Pointing with Dynamic Control-Display Gain. Haopeng Wang, Ludwig Sidenmark, Florian Weidner, Joshua Newn and Hans Gellersen. 2024. In: ACM TOCHI (Aug. 2024). DOI: 10.1145/3689434
[J11] GazeSwitch: Automatic Eye-Head Mode Switching for Optimised Hands-Free Pointing. Baosheng James Hou, Joshua Newn, Ludwig Sidenmark, Anam Ahmad Khan and Hans Gellersen. 2024. In: ACM PACM HCI (ETRA) 8 (June 2024), pp. 227:1-227:20. DOI: 10.1145/3655601
[J10] Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-mounted Displays. Ludwig Sidenmark, Franziska Prummer, Joshua Newn and Hans Gellersen. 2023. In: IEEE TVCG 29.11 (Nov. 2023), pp. 4740-4750. DOI: 10.1109/TVCG.2023.3320235
[J9] Examining and Promoting Explainable Recommendations for Personal Sensing Acceptance. Joshua Newn, Ryan M. Kelly, Simon D’Alfonso and Reeva Lederman. 2022. In: ACM IMWUT 6.3 (Sept. 2022), 133:1–133:27. DOI: 10.1145/3550297
[J8] GAVIN: Gaze-Assisted Voice-Based Implicit Note-Taking. Anam Ahmad Khan, Joshua Newn, Ryan M. Kelly, Namrata Srivastava, James Bailey and Eduardo Velloso. 2021. In: ACM TOCHI 28.4 (Aug. 2021), 26:1-26:32. DOI: 10.1145/3453988
[J7] Electronic Monitoring Systems for Hand Hygiene: Systematic Review of Technology. Chaofan Wang, Weiwei Jiang, Kangning Yang, Difeng Yu, Joshua Newn, Zhanna Sarsenbayeva, Jorge Goncalves and Vassilis Kostakos. 2021. In: J Med Internet Res 23.11 (Nov. 2021), e27880. DOI: 10.2196/27880
[J6] Combining Gaze and AI Planning for Online Human Intention Recognition. Ronal Singh, Tim Miller, Joshua Newn, Eduardo Velloso, Frank Vetere and Liz Sonenberg. 2020. In: Artificial Intelligence 284 (July 2020), 103275:1–103275:26. DOI: 10.1016/j.artint.2020.103275
[J5] Fully-Occluded Target Selection in Virtual Reality. Difeng Yu, Qiushi Zhou, Joshua Newn, Tilman Dingler, Eduardo Velloso and Jorge Goncalves. 2020. In: IEEE TVCG 26.12 (Dec. 2020), pp. 3402–3413. DOI: 10.1109/TVCG.2020.3023606 — Best Paper Nominee (Top 5% of Submissions)
[J4] Eyes-free Target Acquisition During Walking in Immersive Mixed Reality. Qiushi Zhou, Difeng Yu, Martin Reinoso, Joshua Newn, Jorge Goncalves and Eduardo Velloso. 2020. In: IEEE TVCG 26.12 (Dec. 2020), pp. 3423–3433. DOI: 10.1109/TVCG.2020.3023570
[J3] Classifying Attention Types with Thermal Imaging and Eye Tracking. Yomna Abdelrahman, Anam Ahmad Khan, Joshua Newn, Eduardo Velloso, Sherine Ashraf Safwat, James Bailey, Andreas Bulling, Frank Vetere and Albrecht Schmidt. 2019. In: ACM IMWUT 3.3 (Sept. 2019), 69:1–69:27. DOI: 10.1145/3351227
[J2] Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. Namrata Srivastava, Joshua Newn and Eduardo Velloso. 2018. In: ACM IMWUT 2.4 (Dec. 2018), 189:1–189:27. DOI: 10.1145/3287067
[J1] Motion Correlation: Selecting Objects by Matching Their Movement. Eduardo Velloso, Marcus Carter, Joshua Newn, Augusto Esteves, Christopher Clarke and Hans Gellersen. 2017. In: ACM TOCHI 24.3 (Apr. 2017), 22:1–22:35. DOI: 10.1145/3064937 — TOCHI Best Paper Award 2017
Conference Papers
[C22] Biofeedback-Driven Multiplayer Games: Leveraging Social Awareness and Physiological Signals for Play. Joshua Newn and Madison Klarkowski. In: CHI PLAY ’23 Companion. ACM, 2023.
[C21] Eye Expressions for Enhancing EOG-Based Interaction. Joshua Newn, Sophia Quesada, Baosheng James Hou, Anam Ahmad Khan, Florian Weidner and Hans Gellersen. In: INTERACT 2023. Springer, 2023.
[C20] Classifying Head Movements to Separate Head-Gaze and Head Gestures as Distinct Modes of Input. Baosheng James Hou, Joshua Newn, Ludwig Sidenmark, Anam Ahmad Khan, Per Bækgaard and Hans Gellersen. In: CHI ’23. ACM, 2023.
[C19] Vergence Matching: Gaze Selection in 3D based on Modulation of Target Distance from the Eyes. Ludwig Sidenmark, Christopher Clarke, Joshua Newn, Mathias N. Lystbæk, Ken Pfeuffer and Hans Gellersen. In: CHI ’23. ACM, 2023.
[C18] Speech-Augmented Cone-of-Vision for Exploratory Data Analysis. Riccardo Bovo, Daniele Giunchi, Ludwig Sidenmark, Joshua Newn, Hans Gellersen, Enrico Costanza and Thomas Heinis. In: CHI ’23. ACM, 2023.
[C17] To Type or To Speak? The Effect of Input Modality on Text Understanding During Note-taking. Anam Ahmad Khan, Sadia Nawaz, Joshua Newn, Ryan M. Kelly, Jason M. Lodge, James Bailey and Eduardo Velloso. In: CHI ’22. ACM, 2022.
[C16] Integrating Gaze and Speech for Enabling Implicit Interactions. Anam Ahmad Khan, Joshua Newn, James Bailey and Eduardo Velloso. In: CHI ’22. ACM, 2022.
[C15] Towards a Gaze-Informed Movement Intention Model for Robot-Assisted Upper-Limb Rehabilitation. Vincent Crocher, Ronal Singh, Joshua Newn and Denny Oetomo. In: EMBC ’21. IEEE, 2021.
[C14] Observing Multiplayer Boardgame Play at a Distance. Melissa J. Rogerson, Joshua Newn, Ronal Singh, Emma Baillie, Michael Papasimeon, Lyndon Benke and Tim Miller. In: CHI PLAY ’21 Extended Abstracts. ACM, 2021.
[C13] Are You with Me? Measurement of Learners’ Video-Watching Attention with Eye Tracking. Namrata Srivastava, Sadia Nawaz, Joshua Newn, Jason Lodge, Eduardo Velloso, Sarah M. Erfani, Dragan Gasevic and James Bailey. In: LAK21. ACM, 2021.
[C12] Faces of Focus: A Study on the Facial Cues of Attentional States. Ebrahim Babaei, Namrata Srivastava, Joshua Newn, Qiushi Zhou, Tilman Dingler and Eduardo Velloso. In: CHI ’20. ACM, 2020.
[C11] Frame Analysis of Voice Interaction Gameplay. Fraser Allison, Joshua Newn, Wally Smith, Martin Gibbs and Marcus Carter. In: CHI ’19. ACM, 2019.
[C10] Designing Interactions with Intention-Aware Gaze-Enabled Artificial Agents. Joshua Newn, Ronal Singh, Fraser Allison, Prashan Madumal, Eduardo Velloso and Frank Vetere. In: INTERACT 2019. Springer, 2019.
[C9] Combining Implicit Gaze and AI for Real-Time Intention Projection. Joshua Newn, Ronal Singh, Eduardo Velloso and Frank Vetere. In: UbiComp ’19 Adjunct. ACM, 2019.
[C8] Biometric Mirror: Exploring Ethical Opinions Towards Facial Analysis and Automated Decision-Making. Niels Wouters, Ryan M. Kelly, Eduardo Velloso, Katrin Wolf, Hasan Shahid Ferdous, Joshua Newn, Zaher Joukhadar and Frank Vetere. In: DIS ’19. ACM, 2019.
[C7] Cognitive Aid: Task Assistance Based On Mental Workload Estimation. Qiushi Zhou, Joshua Newn, Namrata Srivastava, Jorge Goncalves, Tilman Dingler and Eduardo Velloso. In: CHI ’19 Extended Abstracts. ACM, 2019.
[C6] Combining Planning with Gaze for Online Human Intention Recognition. Ronal Singh, Tim Miller, Joshua Newn, Liz Sonenberg, Eduardo Velloso and Frank Vetere. In: AAMAS ’18. IFAAMAS, 2018.
[C5] Looks Can Be Deceiving: Using Gaze Visualisation to Predict and Mislead Opponents in Strategic Gameplay. Joshua Newn, Fraser Allison, Eduardo Velloso and Frank Vetere. In: CHI ’18. ACM, 2018.
[C4] Evaluating Real-Time Gaze Representations to Infer Intentions in Competitive Turn-Based Strategy Games. Joshua Newn, Eduardo Velloso, Fraser Allison, Yomna Abdelrahman and Frank Vetere. In: CHI PLAY ’17. ACM, 2017.
[C3] Exploring the Effects of Gaze Awareness on Multiplayer Gameplay. Joshua Newn, Eduardo Velloso, Marcus Carter and Frank Vetere. In: CHI PLAY Companion ’16. ACM, 2016.
[C2] Multimodal Segmentation on a Large Interactive Tabletop: Extending Interaction on Horizontal Surfaces with Gaze. Joshua Newn, Eduardo Velloso, Marcus Carter and Frank Vetere. In: ISS ’16. ACM, 2016.
[C1] Remote Gaze and Gesture Tracking on the Microsoft Kinect: Investigating the Role of Feedback. Marcus Carter, Joshua Newn, Eduardo Velloso and Frank Vetere. In: OzCHI ’15. ACM, 2015.
Workshop Papers
Joshua Newn. Using Multimodal Sensing to Improve Awareness in Human-AI Interaction. In: CHI 2020 Workshop on Artificial Intelligence for HCI: A Modern Approach. Honolulu, USA, 2020.
Joshua Newn, Ronal Singh, Fraser Allison, Prashan Madumal, Eduardo Velloso and Frank Vetere. Nonverbal Communication in Human-AI Interaction: Opportunities & Challenges. In: INTERACT ’19 Workshop on Human(s) in the Loop Bringing AI & HCI Together. Paphos, Cyprus, 2019.
Joshua Newn, Benjamin Tag, Ronal Singh, Eduardo Velloso and Frank Vetere. AI-Mediated Gaze-Based Intention Recognition for Smart Eyewear: Opportunities & Challenges. In: UbiComp ’19 Adjunct (Third Workshop on Eye Wear Computing). London, United Kingdom, 2019.
Qiushi Zhou, Joshua Newn, Benjamin Tag, Hao-Ping Lee, Chaofan Wang and Eduardo Velloso. Ubiquitous Smart Eyewear Interactions using Implicit Sensing and Unobtrusive Information Output. In: UbiComp ’19 Adjunct (Third Workshop on Eye Wear Computing). London, United Kingdom, 2019.
Oludamilare Matthews, Zhanna Sarsenbayeva, Weiwei Jiang, Joshua Newn, Eduardo Velloso, Sarah Clinch and Jorge Goncalves. Inferring the Mood of a Community From Their Walking Speed: A Preliminary Study. In: UbiComp ’18 (Workshop on Mood Sensing In-The-Wild). Singapore, Singapore. ACM, 2018.
Joshua Newn, Ronal Singh, Prashan Madumal, Eduardo Velloso and Frank Vetere. Designing Explainable AI Interfaces through Interaction Design Technique. In: OzCHI 2018 Workshop on Interaction Design for Explainable AI. Melbourne, Australia, 2018.
Joshua Newn, Eduardo Velloso, Marcus Carter and Frank Vetere. Dynamically Exposing Gaze to Foster Playful Experiences in Multiplayer Gameplay. In: CHI PLAY 2016 Workshop on Designing for Emotional Complexity in Games: The Interplay of Positive and Negative Affect. Austin, TX, USA, 2016.
Workshop Proposals
Abdallah El Ali, Monica Perusquia-Hernandez, Mariam Hassib, Yomna Abdelrahman and Joshua Newn. MEEC: Second Workshop on Momentary Emotion Elicitation and Capture. In: CHI EA ’21. ACM, 2021.
Michael Lankes, Joshua Newn, Bernhard Maurer, Eduardo Velloso, Martin Dechant and Hans Gellersen. EyePlay Revisited: Past, Present and Future Challenges for Eye-Based Interaction in Games. In: CHI PLAY ’18 Extended Abstracts. ACM, 2018.
Prashan Madumal, Ronal Singh, Joshua Newn and Frank Vetere. Interaction Design for Explainable AI. In: OzCHI ’18. ACM, 2018.
Doctoral Consortia
Joshua Newn. Enabling Intent Recognition Through Gaze Awareness in User Interfaces. In: CHI EA ’18. ACM, 2018.
Joshua Newn. The Effect of Gaze on Gameplay in Co-located Multiplayer Gaming Environments. In: CIS 4th Annual Doctoral Colloquium, 66, 2016.
Hobbies
Beyond my academic pursuits, I enjoy long-distance hiking, photography, and travelling to explore diverse cultures and experiences.