JOSHUA NEWN

I’m a passionate design researcher with extensive training in human-computer interaction, interaction design, and user research. My current research focuses on designing, developing, and evaluating human-centred AI systems for intelligence augmentation.

I hold a PhD in Human-Computer Interaction from The University of Melbourne, where I was supervised by Frank Vetere and Eduardo Velloso. My most recent affiliation is with Lancaster University, where I developed further expertise in eye tracking and extended reality (XR) as part of Hans Gellersen’s ERC GEMINI project. Before that, I worked as a postdoctoral researcher on several human-AI teaming projects at the AI and Autonomy Lab at The University of Melbourne.

Teaching

I have extensive teaching experience in human-computer interaction, having taught subjects on human-centred design, emerging technologies, and user experience evaluation. Beyond delivering existing courses, I have developed new and engaging subject material, coordinated subject delivery, and explored novel teaching methods to foster an enriched learning environment.

I have had the privilege of supervising and mentoring students at both undergraduate and postgraduate levels. Through these collaborations, we have produced publications that combine my expertise in human sensing and user studies with their areas of specialisation, resulting in valuable contributions to both the AI and HCI fields.

Research

AI has the potential to complement and augment human intelligence and capabilities by offering the right information and guidance at the right time. However, current AI agents are limited in this capacity: they often lack the communication cues and corresponding models needed to proactively determine when their human counterpart requires support and what level of support is appropriate.

My research aims to develop new knowledge for building interactive AI-based systems that can identify opportunities for adaptive interventions using unobtrusive sensing technologies, a challenge at the core of augmented intelligence and of making systems and services proactive and adaptive to human needs. I address two main challenges through design: (1) the extent to which opportunities for support can be identified from multimodal sensor data, and (2) how AI agents can learn to provide contextually relevant interventions proactively. To achieve this, I leverage eye tracking, a rapidly emerging ubiquitous input modality that has made its way into extended reality (XR) headsets, as an informative signal for intervention prediction.

Here’s a short summary video I created for my PhD: youtu.be/4nb9ZT1hKSI

Publications

I actively publish in top-tier venues in Human-Computer Interaction (HCI) and Artificial Intelligence (AI). In HCI, fully refereed conference proceedings are the primary publication venues, alongside prestigious journals. Please refer to my Google Scholar page for up-to-date metrics.

  • [j10] Ludwig Sidenmark, Franziska Prummer, Joshua Newn and Hans Gellersen. 2023. Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-mounted Displays. In: IEEE TVCG.

    [j9] Joshua Newn, Ryan M. Kelly, Simon D’Alfonso and Reeva Lederman. 2022. Examining and Promoting Explainable Recommendations for Personal Sensing Acceptance. In: ACM IMWUT 6.3 (Sept. 2022), 133:1–133:27.

    [j8] Anam Ahmad Khan, Joshua Newn, Ryan M. Kelly, Namrata Srivastava, James Bailey and Eduardo Velloso. 2021. GAVIN: Gaze-Assisted Voice-Based Implicit Note-Taking. In: ACM TOCHI 28.4 (Aug. 2021), 26:1–26:32.

    [j7] Chaofan Wang, Weiwei Jiang, Kangning Yang, Difeng Yu, Joshua Newn, Zhanna Sarsenbayeva, Jorge Goncalves and Vassilis Kostakos. 2021. Electronic Monitoring Systems for Hand Hygiene: Systematic Review of Technology. In: J Med Internet Res 23.11 (Nov. 2021), e27880.

    [j6] Ronal Singh, Tim Miller, Joshua Newn, Eduardo Velloso, Frank Vetere and Liz Sonenberg. 2020. Combining Gaze and AI Planning for Online Human Intention Recognition. In: Artificial Intelligence 284 (July 2020), 103275:1–103275:26.

    [j5] Difeng Yu, Qiushi Zhou, Joshua Newn, Tilman Dingler, Eduardo Velloso and Jorge Goncalves. 2020. Fully-Occluded Target Selection in Virtual Reality. In: IEEE TVCG 26.12 (Dec. 2020), pp. 3402–3413. — Best Paper Nominee (Top 5% of Submissions)

    [j4] Qiushi Zhou, Difeng Yu, Martin Reinoso, Joshua Newn, Jorge Goncalves and Eduardo Velloso. 2020. Eyes-free Target Acquisition During Walking in Immersive Mixed Reality. In: IEEE TVCG 26.12 (Dec. 2020), pp. 3423–3433.

    [j3] Yomna Abdelrahman, Anam Ahmad Khan, Joshua Newn, Eduardo Velloso, Sherine Ashraf Safwat, James Bailey, Andreas Bulling, Frank Vetere and Albrecht Schmidt. 2019. Classifying Attention Types with Thermal Imaging and Eye Tracking. In: ACM IMWUT 3.3 (Sept. 2019), 69:1–69:27.

    [j2] Namrata Srivastava, Joshua Newn and Eduardo Velloso. 2018. Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. In: ACM IMWUT 2.4 (Dec. 2018), 189:1–189:27.

    [j1] Eduardo Velloso, Marcus Carter, Joshua Newn, Augusto Esteves, Christopher Clarke and Hans Gellersen. 2017. Motion Correlation: Selecting Objects by Matching Their Movement. In: ACM TOCHI 24.3 (Apr. 2017), 22:1–22:35. — TOCHI Best Paper Award 2017

  • [c22] Joshua Newn and Madison Klarkowski. Biofeedback-Driven Multiplayer Games: Leveraging Social Awareness and Physiological Signals for Play. In: CHI PLAY ’23 Companion. ACM, 2023.

    [c21] Joshua Newn, Sophia Quesada, Baosheng James Hou, Anam Ahmad Khan, Florian Weidner and Hans Gellersen. Eye Expressions for Enhancing EOG-Based Interaction. In: INTERACT 2023. Springer, 2023.

    [c20] Baosheng James Hou, Joshua Newn, Ludwig Sidenmark, Anam Ahmad Khan, Per Bækgaard and Hans Gellersen. Classifying Head Movements to Separate Head-Gaze and Head Gestures as Distinct Modes of Input. In: CHI ’23. ACM, 2023.

    [c19] Ludwig Sidenmark, Christopher Clarke, Joshua Newn, Mathias N. Lystbæk, Ken Pfeuffer and Hans Gellersen. Vergence Matching: Gaze Selection in 3D based on Modulation of Target Distance from the Eyes. In: CHI ’23. ACM, 2023.

    [c18] Riccardo Bovo, Daniele Giunchi, Ludwig Sidenmark, Joshua Newn, Hans Gellersen, Enrico Costanza and Thomas Heinis. Speech-Augmented Cone-of-Vision for Exploratory Data Analysis. In: CHI ’23. ACM, 2023.

    [c17] Anam Ahmad Khan, Sadia Nawaz, Joshua Newn, Ryan M. Kelly, Jason M. Lodge, James Bailey and Eduardo Velloso. To Type or To Speak? The Effect of Input Modality on Text Understanding During Note-taking. In: CHI ’22. ACM, 2022.

    [c16] Anam Ahmad Khan, Joshua Newn, James Bailey and Eduardo Velloso. Integrating Gaze and Speech for Enabling Implicit Interactions. In: CHI ’22. ACM, 2022.

    [c15] Vincent Crocher, Ronal Singh, Joshua Newn and Denny Oetomo. Towards a Gaze-Informed Movement Intention Model for Robot-Assisted Upper-Limb Rehabilitation. In: EMBC ’21. IEEE, 2021.

    [c14] Melissa J. Rogerson, Joshua Newn, Ronal Singh, Emma Baillie, Michael Papasimeon, Lyndon Benke and Tim Miller. Observing Multiplayer Boardgame Play at a Distance. In: CHI PLAY ’21 Extended Abstracts. ACM, 2021.

    [c13] Namrata Srivastava, Sadia Nawaz, Joshua Newn, Jason Lodge, Eduardo Velloso, Sarah M. Erfani, Dragan Gasevic and James Bailey. Are You with Me? Measurement of Learners’ Video-Watching Attention with Eye Tracking. In: LAK21. ACM, 2021.

    [c12] Ebrahim Babaei, Namrata Srivastava, Joshua Newn, Qiushi Zhou, Tilman Dingler and Eduardo Velloso. Faces of Focus: A Study on the Facial Cues of Attentional States. In: CHI ’20. ACM, 2020.

    [c11] Fraser Allison, Joshua Newn, Wally Smith, Martin Gibbs and Marcus Carter. Frame Analysis of Voice Interaction Gameplay. In: CHI ’19. ACM, 2019.

    [c10] Joshua Newn, Ronal Singh, Fraser Allison, Prashan Madumal, Eduardo Velloso and Frank Vetere. Designing Interactions with Intention-Aware Gaze-Enabled Artificial Agents. In: INTERACT 2019. Springer, 2019.

    [c9] Joshua Newn, Ronal Singh, Eduardo Velloso and Frank Vetere. Combining Implicit Gaze and AI for Real-Time Intention Projection. In: UbiComp ’19 Adjunct. ACM, 2019.

    [c8] Niels Wouters, Ryan M. Kelly, Eduardo Velloso, Katrin Wolf, Hasan Shahid Ferdous, Joshua Newn, Zaher Joukhadar and Frank Vetere. Biometric Mirror: Exploring Ethical Opinions Towards Facial Analysis and Automated Decision-Making. In: DIS ’19. ACM, 2019.

    [c7] Qiushi Zhou, Joshua Newn, Namrata Srivastava, Jorge Goncalves, Tilman Dingler and Eduardo Velloso. Cognitive Aid: Task Assistance Based On Mental Workload Estimation. In: CHI ’19 Extended Abstracts. ACM, 2019.

    [c6] Ronal Singh, Tim Miller, Joshua Newn, Liz Sonenberg, Eduardo Velloso and Frank Vetere. Combining Planning with Gaze for Online Human Intention Recognition. In: AAMAS ’18. IFAAMAS, 2018.

    [c5] Joshua Newn, Fraser Allison, Eduardo Velloso and Frank Vetere. Looks Can Be Deceiving: Using Gaze Visualisation to Predict and Mislead Opponents in Strategic Gameplay. In: CHI ’18. ACM, 2018.

    [c4] Joshua Newn, Eduardo Velloso, Fraser Allison, Yomna Abdelrahman and Frank Vetere. Evaluating Real-Time Gaze Representations to Infer Intentions in Competitive Turn-Based Strategy Games. In: CHI PLAY ’17. ACM, 2017.

    [c3] Joshua Newn, Eduardo Velloso, Marcus Carter and Frank Vetere. Exploring the Effects of Gaze Awareness on Multiplayer Gameplay. In: CHI PLAY Companion ’16. ACM, 2016.

    [c2] Joshua Newn, Eduardo Velloso, Marcus Carter and Frank Vetere. Multimodal Segmentation on a Large Interactive Tabletop: Extending Interaction on Horizontal Surfaces with Gaze. In: ISS ’16. ACM, 2016.

    [c1] Marcus Carter, Joshua Newn, Eduardo Velloso and Frank Vetere. Remote Gaze and Gesture Tracking on the Microsoft Kinect: Investigating the Role of Feedback. In: OzCHI ’15. ACM, 2015.

  • Joshua Newn. Using Multimodal Sensing to Improve Awareness in Human-AI Interaction. In: CHI 2020 Workshop on Artificial Intelligence for HCI: A Modern Approach. Honolulu, USA, 2020.

    Joshua Newn, Ronal Singh, Fraser Allison, Prashan Madumal, Eduardo Velloso and Frank Vetere. Nonverbal Communication in Human-AI Interaction: Opportunities & Challenges. In: INTERACT ’19 Workshop on Human(s) in the Loop: Bringing AI & HCI Together. Paphos, Cyprus, 2019.

    Joshua Newn, Benjamin Tag, Ronal Singh, Eduardo Velloso and Frank Vetere. AI-Mediated Gaze-Based Intention Recognition for Smart Eyewear: Opportunities & Challenges. In: UbiComp ’19 Adjunct (Third Workshop on Eye Wear Computing). London, United Kingdom, 2019.

    Qiushi Zhou, Joshua Newn, Benjamin Tag, Hao-Ping Lee, Chaofan Wang and Eduardo Velloso. Ubiquitous Smart Eyewear Interactions using Implicit Sensing and Unobtrusive Information Output. In: UbiComp ’19 Adjunct (Third Workshop on Eye Wear Computing). London, United Kingdom, 2019.

    Oludamilare Matthews, Zhanna Sarsenbayeva, Weiwei Jiang, Joshua Newn, Eduardo Velloso, Sarah Clinch and Jorge Goncalves. Inferring the Mood of a Community From Their Walking Speed: A Preliminary Study. In: UbiComp ’18 (Workshop on Mood Sensing In-The-Wild). Singapore, Singapore. ACM, 2018.

    Joshua Newn, Ronal Singh, Prashan Madumal, Eduardo Velloso and Frank Vetere. Designing Explainable AI Interfaces through Interaction Design Technique. In: OzCHI 2018 Workshop on Interaction Design for Explainable AI. Melbourne, Australia, 2018.

    Joshua Newn, Eduardo Velloso, Marcus Carter and Frank Vetere. Dynamically Exposing Gaze to Foster Playful Experiences in Multiplayer Gameplay. In: CHI PLAY 2016 Workshop on Designing for Emotional Complexity in Games: The Interplay of Positive and Negative Affect. Austin, TX, USA, 2016.

  • Abdallah El Ali, Monica Perusquia-Hernandez, Mariam Hassib, Yomna Abdelrahman and Joshua Newn. MEEC: Second Workshop on Momentary Emotion Elicitation and Capture. In: CHI EA ’21. ACM, 2021.

    Michael Lankes, Joshua Newn, Bernhard Maurer, Eduardo Velloso, Martin Dechant and Hans Gellersen. EyePlay Revisited: Past, Present and Future Challenges for Eye-Based Interaction in Games. In: CHI PLAY ’18 Extended Abstracts. ACM, 2018.

    Prashan Madumal, Ronal Singh, Joshua Newn and Frank Vetere. Interaction Design for Explainable AI. In: OzCHI ’18. ACM, 2018.

  • Joshua Newn. Enabling Intent Recognition Through Gaze Awareness in User Interfaces. In: CHI EA ’18. ACM, 2018.

    Joshua Newn. The Effect of Gaze on Gameplay in Co-located Multiplayer Gaming Environments. In: CIS 4th Annual Doctoral Colloquium, 66, 2016.

Hobbies

Beyond my academic pursuits, I enjoy long-distance hiking, photography, and travel that lets me explore diverse cultures and experiences.