Project Activity 1 – Enhancing Emotion Recognition in Autism Spectrum Disorder through Integrated Machine Learning Applications and Foundation Models (led by Dr. Resendiz, Dr. Valles, and Dr. Liu)
Children with autism spectrum disorder (ASD) face challenges related to social interaction, communication, coping with change, and repetitive behaviors. They may fail to notice social cues, misread body language, and show reduced empathy and emotion recognition. Historically, emotion recognition in ASD has been studied primarily through facial expressions, revealing significant impairment. However, recent work has underscored the limitation of focusing solely on facial expressions without considering bodily cues. This activity aims to integrate bodily, speech, and facial expressions into a machine learning app to enhance emotion recognition, an effort made more urgent by the significant increase in ASD prevalence over the past two decades. While preliminary studies have shown the potential benefits of machine learning apps for children’s emotion recognition abilities, current apps lack emotional awareness and fail to fully meet the needs of children with ASD.
Our approach involves developing an app suite with wireless sensors to capture body, speech, and facial expression data. This technology offers personalized support for children with ASD and their caregivers, helping them better understand and manage emotions and interpret nonverbal cues [9-15]. We are incorporating large language models (LLMs) and foundation models (FMs) to refine the app’s ability to analyze and respond to the nuanced emotional states of children with ASD, training on extensive datasets so that feedback is accurate and relevant. This project extends into applied and translational research in collaboration with the Texas A&M University-San Antonio Institute for Autism and Related Disorders.
In this project, there are three learning objectives:
- Develop an integrated EmotionRecognition app that extracts facial and body expression data over time and integrates these with audio signals for a comprehensive approach, using multiple deep learning models to classify speech, body gestures, and facial expressions (see the sketch after this list);
- Enhance EmotionRecognition by segmenting signals into emotional activities and classifying them into predefined categories, incorporating LLMs and FMs to improve contextual understanding, and
- Develop and utilize advanced analytical tools leveraging LLMs and FMs to enhance nuanced emotion recognition in ASD, providing more personalized and accurate feedback.
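To make the first objective concrete, the sketch below shows one way the three modality streams could be fused in PyTorch. The encoder sizes, the five-emotion label set, and the late-fusion design are illustrative assumptions, not the project’s actual architecture.

```python
# A minimal late-fusion sketch in PyTorch. Feature dimensions (facial
# landmarks, body joints, MFCCs) and the emotion label set are assumed
# for illustration only.
import torch
import torch.nn as nn

EMOTIONS = ["happy", "sad", "angry", "fearful", "neutral"]  # assumed label set

class ModalityEncoder(nn.Module):
    """Stand-in encoder: maps one modality's feature vector to an embedding."""
    def __init__(self, in_dim: int, emb_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class FusionClassifier(nn.Module):
    """Concatenates face, body, and speech embeddings, then classifies emotion."""
    def __init__(self, face_dim=136, body_dim=75, speech_dim=40, emb_dim=64):
        super().__init__()
        self.face = ModalityEncoder(face_dim, emb_dim)
        self.body = ModalityEncoder(body_dim, emb_dim)
        self.speech = ModalityEncoder(speech_dim, emb_dim)
        self.head = nn.Linear(3 * emb_dim, len(EMOTIONS))
    def forward(self, face_x, body_x, speech_x):
        fused = torch.cat([self.face(face_x), self.body(body_x),
                           self.speech(speech_x)], dim=-1)
        return self.head(fused)  # logits over emotion categories

# Example with random stand-in features (a batch of 8 time windows):
model = FusionClassifier()
logits = model(torch.randn(8, 136), torch.randn(8, 75), torch.randn(8, 40))
print(logits.argmax(dim=-1))  # predicted emotion index per window
```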
Project Activity 2 – Smart city firefighting autonomous data collection (led by Dr. Valles)
The High-Performing Engineering (HiPE) research group at Texas State University is pioneering intelligent systems to enhance safety during emergency scenarios, particularly firefighting. These systems combine AI, robotics, IoT, and encryption solutions in the group’s rover and drone prototypes. The project extends smart city concepts to dangerous environments such as burning residential buildings. Its enhanced focus now includes large language models (LLMs) that contextualize real-time environmental data produced by deep learning models, which are crucial for person detection and the audio triangulation of screams. By integrating data from temperature forecasting models, LiDAR, and other sensors, the project aims to develop a comprehensive understanding of dynamic and hazardous environments. The project will also implement autonomous navigation technologies to self-deploy rovers and drones, further advancing emergency response capabilities. These autonomous units are tasked with inspecting the interior and perimeter of a burning building and are equipped with self-preservation protocols to extend their operational lifespan under extreme conditions.
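As an illustration of the audio triangulation mentioned above, the following is a minimal sketch of time-difference-of-arrival (TDOA) localization in Python. The three-microphone layout and the brute-force grid search are illustrative assumptions; a deployed system would use calibrated arrays and a faster solver.

```python
# A minimal TDOA sound-localization sketch, assuming three microphones at
# known 2D positions. Layout and grid-search method are illustrative.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at ~20 °C

def locate_source(mic_pos, tdoa, bounds=(-10, 10), step=0.05):
    """Grid-search the 2D point whose predicted TDOAs (relative to mic 0)
    best match the measured ones, in a least-squares sense."""
    xs = np.arange(bounds[0], bounds[1], step)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            p = np.array([x, y])
            dists = np.linalg.norm(mic_pos - p, axis=1)
            pred = (dists[1:] - dists[0]) / SPEED_OF_SOUND
            err = np.sum((pred - tdoa) ** 2)
            if err < best_err:
                best, best_err = p, err
    return best

mics = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # assumed layout (m)
true_src = np.array([3.0, 4.0])
d = np.linalg.norm(mics - true_src, axis=1)
measured_tdoa = (d[1:] - d[0]) / SPEED_OF_SOUND  # what onset detection would yield
print(locate_source(mics, measured_tdoa))  # ~[3.0, 4.0]
```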
In this project, undergraduate students will:
- Integrate and interpret data from various sources, including audio sensors, LiDAR, temperature models, and visual feeds, to improve situational awareness and decision-making in emergencies;
- Develop skills in programming and deploying autonomous drones and rovers for safe navigation in hazardous environments;
- Implement LLMs for real-time data analysis to aid in critical decision-making, such as identifying safe paths and detecting individuals in need (see the prompt-building sketch after this list), and
- Develop robust communication networks to enhance the coordination and efficiency of autonomous units.
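As a hint of what the LLM objective could look like in code, here is a minimal sketch that packages fused detector outputs into a prompt for an LLM. The sensor fields and the `query_llm` helper are hypothetical placeholders, not part of the HiPE prototypes.

```python
# A minimal sketch of contextualizing fused sensor data for an LLM. The
# sensor fields below and the `query_llm` helper are hypothetical; any
# chat-completion API could stand in for it.
import json

def build_situation_prompt(readings: dict) -> str:
    """Turn one fused snapshot of detector outputs into a natural-language
    query asking the model for a prioritized action recommendation."""
    return (
        "You are assisting an autonomous firefighting rover.\n"
        f"Current fused sensor snapshot:\n{json.dumps(readings, indent=2)}\n"
        "Identify the most urgent hazard, whether a person may be present, "
        "and recommend the next navigation action in one sentence."
    )

snapshot = {
    "temperature_c_forecast": [310, 342, 395],       # from the forecasting model
    "person_detector": {"detected": True, "confidence": 0.81},
    "scream_bearing_deg": 47,                         # from audio triangulation
    "lidar_clear_paths": ["north corridor", "west door"],
}
print(build_situation_prompt(snapshot))
# response = query_llm(prompt)  # hypothetical call to a chat-completion API
```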
Project Activity 3 – Advanced Object Recognition and Tracking using Computer Vision (led by Dr. Aslan)
Traditional object recognition and tracking systems often rely on predefined templates and rigid algorithms that can fail in dynamic, real-world environments. These systems typically require substantial manual calibration and are ill-equipped to handle variations in object appearance, occlusions, and background clutter. In addition, they often lack meaningful feedback and performance analytics, hindering their ability to adapt and improve over time. This research project proposes immersive computer vision experiences that leverage deep learning tools to enhance object recognition and tracking accuracy. By incorporating real-time feedback mechanisms and adaptive learning, the system can significantly improve its tracking performance and reliability in applications such as autonomous vehicles, surveillance, and augmented reality (AR). The core tasks of this project are to develop a deep learning-based object recognition system and to implement robust tracking algorithms.
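As one concrete example of the tracking side, the sketch below implements greedy IoU-based detection-to-track association, a common building block of tracking-by-detection pipelines. The detector producing the boxes, the (x1, y1, x2, y2) box format, and the matching threshold are assumptions for illustration.

```python
# A minimal sketch of detection-to-track association via intersection over
# union (IoU). The upstream detector is assumed; boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def update_tracks(tracks, detections, iou_thresh=0.3):
    """Greedily match each existing track to its best-overlapping detection;
    unmatched detections start new tracks."""
    next_id = max(tracks, default=-1) + 1
    unmatched = list(detections)
    for tid, box in list(tracks.items()):
        if not unmatched:
            break
        best = max(unmatched, key=lambda d: iou(box, d))
        if iou(box, best) >= iou_thresh:
            tracks[tid] = best
            unmatched.remove(best)
    for det in unmatched:
        tracks[next_id] = det
        next_id += 1
    return tracks

tracks = {}
tracks = update_tracks(tracks, [(10, 10, 50, 50)])  # frame 1: starts track 0
tracks = update_tracks(tracks, [(14, 12, 54, 52), (200, 200, 240, 240)])  # frame 2
print(tracks)  # track 0 follows the moving box; track 1 is the new object
```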
In this project, undergraduate students will learn to:
- Develop deep learning models using popular frameworks such as TensorFlow and PyTorch, and integrate these models into the object recognition system, ensuring seamless operation in real-time scenarios;
- Design and implement tracking algorithms that leverage deep learning and predictive modeling, focusing on making these algorithms robust to challenges such as occlusions and rapid object movements, and
- Build infrastructure to support real-time object recognition and tracking, and develop tools for performance analytics, enabling continuous monitoring and improvement of the system.
Project Activity 4 – Using AI and IoT to Build Smart Homes for Individuals with Autism Spectrum Disorder (led by Dr. Liu & Dr. Carvalho)
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by difficulties in social interaction and communication and by repetitive behaviors. Individuals with ASD often face challenges in adapting to their living environment, including managing sensory stimuli and maintaining routines. This project aims to develop IoT- and AI-driven smart home systems for individuals with ASD. These systems will utilize advanced IoT and AI algorithms to create personalized environments that optimize comfort, safety, health, and well-being for individuals with ASD.
The key features of our smart home system include sensory environment control, routine management, and personalized support. The system will include sensors that monitor lighting, temperature, and noise levels. AI algorithms will analyze this sensory data in real time and adjust environmental settings to create a comfortable and calming atmosphere for individuals with ASD. The system will also incorporate AI-powered scheduling tools to help individuals with ASD manage their daily routines more effectively, providing visual and auditory prompts that remind users of scheduled activities and tasks and help them maintain a sense of predictability and control.
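A minimal sketch of that sensory-control loop appears below. The comfort thresholds, sensor fields, and actuator commands are illustrative assumptions; a real system would learn per-user preferences rather than apply fixed rules.

```python
# A minimal rule-based sketch of the sensory-control loop. Thresholds,
# comfort ranges, and device commands are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ComfortProfile:
    max_noise_db: float = 55.0          # assumed per-user tolerance
    max_lux: float = 300.0
    temp_range_c: tuple = (21.0, 24.0)

def adjust_environment(sensors: dict, profile: ComfortProfile) -> list[str]:
    """Compare one sensor snapshot to the user's comfort profile and return
    the actuator commands the smart home should issue."""
    actions = []
    if sensors["noise_db"] > profile.max_noise_db:
        actions.append("enable white-noise masking / close window")
    if sensors["lux"] > profile.max_lux:
        actions.append("dim lights to soft warm setting")
    lo, hi = profile.temp_range_c
    if not lo <= sensors["temp_c"] <= hi:
        actions.append(f"set thermostat toward {(lo + hi) / 2:.1f} C")
    return actions

snapshot = {"noise_db": 62.0, "lux": 450.0, "temp_c": 26.5}
print(adjust_environment(snapshot, ComfortProfile()))
```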
In this project, there are five learning objectives:
- Develop 3D Modeling – Students will use 3D modeling software to simulate smart home settings;
- Program Avatars – Students will program virtual avatars to act within these environments;
- Implement AI Algorithms – They will apply AI algorithms to analyze sensory data from the smart home environment;
- Collect and Analyze Data – Students will collect and analyze movement data from individuals with ASD, and
- Evaluate System Performance – They will assess how effectively the AI-driven features create supportive environments and provide personalized support for individuals with ASD.
Project Activity 5 – Enhancing adaptive sports participation through technology-assisted physiological assessment for individuals with spinal cord injury (led by Dr. Farrell)
Spinal cord injury (SCI) is commonly recognized for its devastating impact on motor, sensory, and autonomic function. As a result of these impairments and of physical inactivity, cardiovascular disease (CVD) has emerged as a leading cause of death for individuals with an SCI, who face threefold to fivefold greater odds of CVD than persons without disabilities. Thus, reducing the risk of secondary comorbidities such as CVD requires increased engagement in physical activity, and participation in activities such as adaptive sports can help facilitate this. For persons with SCI, participation in competitive sports has been promoted as a therapeutic intervention for improving both physical and psychological components of health. Adaptive sports were conceived to create such opportunities for those with physical disabilities, including SCI.
Adaptive sports are sports that have been modified to allow people with physical disabilities to participate; wheelchair racing and wheelchair basketball are the most prominent among them. However, clinicians currently lack guidance when recommending adaptive sports, for example on which sports best fit an individual’s physical capabilities and physiological conditioning for safe participation. Integrating technologies such as biosensors and machine learning with physiological testing can help develop that guidance, allowing clinicians to promote an active lifestyle through safe engagement with adaptive sports for this population.
In this project, there are five learning objectives:
- Collect biosensor data (e.g., accelerations, decelerations, distance covered) from individuals participating in adaptive sports (e.g., wheelchair basketball and racing);
- Collect physiological conditioning data (e.g., maximal oxygen consumption and muscular strength) for adaptive sports participants;
- Analyze which biosensor and physiological data most significantly correlate with adaptive sports performance;
- Develop a machine learning algorithm enabling clinicians to input patient characteristics and determine the most feasible adaptive sport (see the sketch after this list), and
- Track data on adverse events related to adaptive sports participation based on machine learning recommendations.
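As a sketch of the fourth objective, the example below trains a scikit-learn classifier that maps clinician-entered patient characteristics to a recommended sport. The feature set, sport labels, and synthetic training data are all illustrative assumptions for demonstration only.

```python
# A minimal sketch of a clinician-facing recommendation model in
# scikit-learn. Features, labels, and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed features per patient: [VO2max (ml/kg/min), grip strength (kg),
# injury level (numeric code), years since injury]
rng = np.random.default_rng(0)
X = rng.normal(loc=[20, 35, 5, 6], scale=[5, 10, 2, 4], size=(200, 4))
sports = np.array(["wheelchair basketball", "wheelchair racing", "adaptive rowing"])
y = sports[rng.integers(0, 3, size=200)]  # placeholder labels, not real outcomes

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_patient = [[24.0, 42.0, 6.0, 3.0]]  # clinician-entered characteristics
probs = model.predict_proba(new_patient)[0]
for sport, p in sorted(zip(model.classes_, probs), key=lambda t: -t[1]):
    print(f"{sport}: {p:.2f}")
```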
Project Activity 6 – Enhancing chronic ankle instability diagnosis with machine learning algorithms using biomechanics and patient-reported outcome features (led by Dr. Koldenhoven Rolfe)
Chronic ankle instability (CAI) is a common lower extremity condition characterized by recurrent ankle sprains and persistent symptoms such as pain, swelling, and feelings of the ankle giving way. Individuals with CAI often exhibit faulty gait biomechanics and poorer patient-reported outcomes. Lateral ankle sprains (LAS) impose significant physical and financial burdens and have an estimated recurrence rate of 70%. CAI is highly heterogeneous, complicating its diagnosis, which relies on patient-reported outcome measures and ankle sprain history. Altered movement patterns are among the most commonly observed deficits associated with CAI and can provide critical insights into the condition.
This project aims to enhance and validate an approach for diagnosing CAI by analyzing gait biomechanics features. Specifically, we will measure the location of the center of pressure during stance and ankle inversion angles throughout the gait cycle in individuals with and without CAI. These biomechanical features will be used to train machine learning models to improve CAI diagnosis accuracy. The project will provide valuable research experience for undergraduate students, equipping them with biomechanics, data analysis, and machine learning skills. The Clinical Biomechanics and Exercise Physiology Laboratory research group has established gait analysis protocols and begun initial data collection involving patients with CAI. Students will assist in conducting detailed gait analyses of individuals with and without CAI, using motion capture cameras, force plates embedded in a treadmill, and plantar pressure insoles to collect ankle inversion angles and the location of the center of pressure. Students will then develop and train machine learning models to classify whether individuals have CAI.
In this project, there are three learning objectives:
- Assess biomechanics during walking for individuals with and without CAI;
- Determine the most relevant variables for differentiating between individuals with and without CAI using machine learning techniques, and
- Develop machine learning models using programming languages such as Python or MATLAB (a starting-point sketch in Python follows this list).
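The sketch below illustrates what the final objective could look like in Python with scikit-learn. The two gait features and the synthetic data stand in for real motion-capture and insole measurements and are assumptions for illustration.

```python
# A minimal sketch of the CAI classification step; the gait features and
# synthetic data are placeholders for real laboratory measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Assumed features per stance phase: mean center-of-pressure lateral offset
# (mm) and peak ankle inversion angle (deg).
rng = np.random.default_rng(1)
healthy = rng.normal(loc=[0.0, 8.0], scale=[2.0, 3.0], size=(60, 2))
cai = rng.normal(loc=[4.0, 14.0], scale=[2.5, 3.5], size=(60, 2))
X = np.vstack([healthy, cai])
y = np.array([0] * 60 + [1] * 60)  # 0 = control, 1 = CAI

clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```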
Project Activity 7 – Utilizing AI to support conversation in individuals with ASD (led by Dr. de la Cruz)
Individuals with autism face challenges with social communication that can make it difficult for them to express themselves vocally; about a quarter of individuals diagnosed with ASD do not develop functional language or are considered minimally verbal. This project will utilize AI to assist people with ASD in having conversations about their daily lives. By taking pictures and short videos of their daily activities, participants will create a library of visual content that the AI can use to generate personalized narratives. These narratives will be tailored to the individual’s specific communication level and preferences, allowing them to share their experiences in a comfortable and familiar way. The AI will combine natural language processing (NLP) and computer vision to analyze the visual content, identify the key elements of the participant’s daily activities, and generate a narrative that describes the day’s events in a way the individual with ASD understands, enabling them to communicate about the activities of their daily lives. By making it easier to share experiences, this tool aims to improve quality of life for individuals with ASD.
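To make the pipeline concrete, here is a minimal sketch that captions each photo with an off-the-shelf vision-language model and stitches the captions into a simple narrative. The specific Hugging Face model and the template-based narrative are illustrative choices, far simpler than the personalized generation the project describes.

```python
# A minimal captioning-to-narrative sketch. The model named below is one
# publicly available captioner chosen for illustration; the template-based
# narrative is a simplistic stand-in for personalized generation.
from transformers import pipeline  # pip install transformers pillow torch

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def daily_narrative(image_paths: list, user_name: str) -> str:
    """Caption each photo from the day, then stitch the captions into a
    simple first-person narrative at a basic communication level."""
    events = []
    for path in image_paths:
        caption = captioner(path)[0]["generated_text"]
        events.append(caption.strip().rstrip("."))
    sentences = [f"First, {events[0]}."] if events else []
    sentences += [f"Then, {e}." for e in events[1:]]
    return f"{user_name}'s day: " + " ".join(sentences)

# Example usage with photos taken by the participant:
# print(daily_narrative(["breakfast.jpg", "park.jpg"], "Alex"))
```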
In this project, undergraduate students will learn to:
- Integrate natural language processing (NLP) and computer vision technologies to process pictures and videos.
- Generate narratives that adapt to an individual’s specific communication level and preferences.
- Write and refine code that combines visual content analysis and narrative generation to support individuals with ASD.
Sponsored by the National Science Foundation 2022-2028