Projects

Project Activity 1 – Measuring the effects of a machine learning app on emotion recognition ability in autism spectrum disorder (led by Dr. Valles, Dr. Resendiz, and Dr. Liu)

Children with autism spectrum disorder (ASD) experience difficulties with social interaction, communication, and coping with change, and often display repetitive behaviors. Social and emotional barriers are a particular concern for children with ASD. For example, children with ASD often exhibit socially awkward behavior (e.g., failing to notice social cues or misreading others’ body language), difficulty initiating or carrying on conversations, reduced empathy, and decreased emotion recognition. Emotion recognition in ASD has traditionally been studied using facial expressions, and those studies suggest impairment in this domain. However, recent research has shown that understanding emotion recognition is limited without also analyzing bodily expression. For this reason, we will incorporate both bodily gestures and facial expressions into a machine-learning app. Since ASD prevalence has been steadily increasing over the past two decades, it is essential to examine whether a machine learning app can improve emotion recognition in children with ASD.

In this project, there are three learning objectives:

  1. Develop a Python application that extracts facial and body expression data from the camera over time and provides basic visualizations. The app will require the integration of multiple deep-learning models for speech and body-gesture classification;
  2. Develop a machine learning-based app (EmotionRecognition) and analyze the data by segmenting the continuously collected signals into different activities and classifying the emotional expressions into a predefined set of categories (e.g., happiness, sadness, fear, disgust, anger, contempt, and surprise), as sketched after this list; and
  3. Utilize existing tools established by the PI and develop new emotion analysis tools to improve emotion recognition ability in individuals with ASD.
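
A minimal sketch of the camera-based facial-emotion step follows. It assumes a hypothetical pretrained Keras model (emotion_cnn.h5) trained on 48x48 grayscale faces with the seven categories above, and uses OpenCV's bundled Haar cascade for face detection; the actual app would integrate additional deep-learning models for speech and body gestures.

```python
# Minimal sketch: classify facial emotion from webcam frames.
# "emotion_cnn.h5" is a hypothetical pretrained model (48x48 grayscale input).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["happiness", "sadness", "fear", "disgust", "anger", "contempt", "surprise"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("emotion_cnn.h5")  # hypothetical pretrained model

cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("EmotionRecognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```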

Training:

  1. Review the fundamentals of machine learning concepts for emotion recognition, the documentation of the available datasets, and previously developed code; and
  2. Work with speech audio for emotion-speech recognition (a minimal feature-extraction sketch follows this list).
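
As a starting point for the speech-audio work, the sketch below extracts MFCC features from a recording and summarizes them into a fixed-length vector; the file name clip.wav is a placeholder, and the resulting vector would feed one of the previously developed emotion classifiers.

```python
# Minimal sketch: extract MFCC features from a speech clip for emotion classification.
# "clip.wav" is a hypothetical recording.
import librosa
import numpy as np

audio, sr = librosa.load("clip.wav", sr=16000)          # resample to 16 kHz
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=40)  # 40 x n_frames matrix

# Summarize each coefficient over time so every clip yields a fixed-length vector.
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(features.shape)  # (80,)
```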

Project Activity 2 – Smart city firefighting autonomous data collection (led by Dr. Valles)

Smart city concepts utilize technological advances to implement intelligent solutions for emergency scenarios. First-response services are among the areas that can leverage these technologies. Firefighting presents engineering challenges due to high temperatures, toxic and potentially fatal environments, and other factors that constrain design. The High-Performing Engineering (HiPE) research group at Texas State University has been working on solutions that use data, machine learning, and the Internet of Things to enhance the efficiency and safety of firefighters and other emergency responders during rescue efforts. Autonomous units are equipped with many sensors and peripherals to collect data while firefighters are getting ready upon arrival. The challenge is to create a dynamic data-capturing environment that can provide essential information in real time for better planning and monitoring of an emergency site.

Current efforts include detecting people screaming with machine learning and forecasting temperature propagation for the safety of responders and autonomous units. Triangulating a scream can also be life-saving, helping locate a person and characterize the environmental conditions they face during the fire emergency. Multiple autonomous units will also help divide the data-collection workload and reduce latency. The units are presently designed for single-story buildings, and new designs are needed for multi-story and tall buildings. The communication and orchestration of the units are another engineering aspect that will need to be addressed to ensure that areas are not double-covered, dangerous temperatures are avoided, coverage remains reliable across multiple floors, and data is transmitted quickly.
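To illustrate the scream-triangulation idea, the sketch below estimates the time difference of arrival (TDOA) of a sound between two synchronized microphones with cross-correlation; the signals here are synthetic placeholders, and a real system would combine several microphone pairs to localize the source.

```python
# Minimal sketch: estimate the time difference of arrival (TDOA) of a scream
# between two synchronized microphones via cross-correlation. Combining the
# TDOAs from several microphone pairs allows the source to be triangulated.
import numpy as np
from scipy.signal import correlate

def estimate_tdoa(mic_a, mic_b, sample_rate):
    """Return the delay (seconds) of mic_b relative to mic_a."""
    corr = correlate(mic_b, mic_a, mode="full")
    lag = np.argmax(corr) - (len(mic_a) - 1)  # lag in samples
    return lag / sample_rate

# Synthetic example: the same burst arrives 5 ms later at the second microphone.
fs = 16000
t = np.arange(fs) / fs
burst = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 900 * t)
delayed = np.roll(burst, int(0.005 * fs))
print(f"estimated delay: {estimate_tdoa(burst, delayed, fs) * 1000:.1f} ms")
```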

In this project, undergraduate students will:

  1. Develop an interface to display the results of machine learning decisions and forecasts, mapping, and real-time environmental data;
  2. Use machine learning to plan the orchestration of multiple autonomous units for deployment, data collection, and retrieval back to base;
  3. Develop a communication network backbone for multiple-floor data collection (a minimal telemetry sketch follows this list); and
  4. Plan an embedded-system design for autonomous units operating across multiple floors.
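
As one possible shape for the data-collection and communication pieces above, the sketch below has an autonomous unit stream sensor readings to a base station over UDP; the base-station address, port, and payload fields are illustrative assumptions rather than a fixed design.

```python
# Minimal sketch: an autonomous unit streams environmental readings to the base
# station over UDP so data from multiple units and floors can be aggregated.
# The base-station address, port, and payload fields are illustrative assumptions.
import json
import socket
import time

UNIT_ID = "unit-03"                    # hypothetical unit identifier
BASE_STATION = ("192.168.1.10", 5005)  # hypothetical base-station address

def read_sensors():
    # Placeholder: replace with real temperature / gas / position readings.
    return {"temperature_c": 41.7, "co_ppm": 12.0, "floor": 2, "x": 4.1, "y": 9.8}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    while True:
        payload = {"unit": UNIT_ID, "time": time.time(), **read_sensors()}
        sock.sendto(json.dumps(payload).encode("utf-8"), BASE_STATION)
        time.sleep(1.0)                # one reading per second
finally:
    sock.close()
```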


Project Activity 3 – Augmented reality handwashing tool for children with ASD (led by Dr. Aslan)

Most handwashing training still follows traditional exposition-style learning, providing theory and examples without developing the required muscle memory. One way to overcome this problem is to increase the number of live training events with augmented visual interaction that draws children’s focus to the task. However, live training has several limitations, including high cost, geographic barriers, limited feedback about performance, and the need to interrupt regular activities to deliver the training. Augmented Reality (AR) is a technological foundation for transforming the training sector. One of the main drawbacks of existing training is that it is still in the “analog” phase: it usually requires the physical presence of users in the training field, and the trainer cannot access the performance analytics of the trainee. In addition, 2D presentations and video training do not help the trainee gain valuable skills, including physical or muscle memory. This research project proposes immersive, AR-based handwashing training experiences for children with ASD. Teaching handwashing hygiene to children with ASD will help them develop better sanitation habits through muscle memory and encouragement from the AR environment.

The objective is to help children with ASD interact with an AR activity over a sink so that they properly wash their hands with soap for 20 seconds. The encouragement can be displayed in different formats, for example, an avatar celebrating, confetti falling from the ceiling, or a trophy of achievement each time all tasks are completed. Developing and integrating the AR experience will require several phases: identifying physical objects in the environment and activating the training sequences once the child with ASD puts on the AR headset for training.

In this project, undergraduate students will learn to:

  1. Develop the code and integrate avatars that provide AR interactions for running water, applying soap, and drying hands;
  2. Develop recognition of sink environments, using deep learning on the server side, so that the AR program can recognize the sink and trigger the training sequence (see the sketch after this list); and
  3. Integrate the infrastructure of computers and networking to provide high-bandwidth capture for real-time handwashing training.
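
One way the server-side sink recognition could be wired up is sketched below with Flask and a hypothetical pretrained binary classifier (sink_detector.h5); the AR headset would post camera frames to this endpoint and start the handwashing sequence when a sink is detected.

```python
# Minimal sketch: server-side sink recognition. The AR headset posts a camera
# frame; the server runs a hypothetical pretrained classifier ("sink_detector.h5",
# 224x224 RGB input, sigmoid output) and replies whether to start the sequence.
import io
import numpy as np
from PIL import Image
from flask import Flask, jsonify, request
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model("sink_detector.h5")  # hypothetical pretrained model

@app.route("/detect_sink", methods=["POST"])
def detect_sink():
    image = Image.open(io.BytesIO(request.files["frame"].read())).convert("RGB")
    x = np.asarray(image.resize((224, 224)), dtype="float32")[None] / 255.0
    p_sink = float(model.predict(x, verbose=0)[0][0])
    return jsonify({"sink_detected": p_sink > 0.5, "confidence": p_sink})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```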

Project Activity 4 – Virtual reality and machine learning for movement assessment in autism spectrum disorder (led by Dr. Liu, Dr. Aslan, and Dr. Valles)

Researchers will assess motor skill development in children using qualitative and quantitative approaches. It has been reported that children with disabilities, including autism spectrum disorder (ASD), sometimes use a movement strategy that is not congruent with the task required during the assessment. Traditional ASD assessment occurs in settings that lack ecological validity, so results do not mirror performance in real life. New research on movement analysis based on video recordings and automatic tagging has emerged to overcome issues related to traditional movement assessment. Virtual reality (VR) can provide haptic feedback and capture the user’s consequent reactions, which makes it promising for assessment, training, and treatment. A multimodal Virtual Environment (VE), consisting of a simulated gymnasium, will be developed to assess the motor behavior of children with ASD.

The undergraduate students will code the avatars to perform the following actions: (1) in the virtual environment, a young male avatar will appear from the left side of the surface, walk to the middle of the scene, and wave three times; (2) a young female avatar will appear in the center of the scene and walk to the right, where she repeats the three waves before disappearing, and this sequence will be repeated three times; and (3) in the visual-auditory stimuli condition, the same avatars will appear in the same order from the same directions and dance to an animated disco song three times.

In this project, there are four learning objectives:

  1. Use 3D modeling in Unity to create a multimodal VE consisting of a simulated gymnasium;
  2. Program a male avatar to walk and wave in the VE;
  3. Program a female avatar to walk and wave in the VE; and
  4. Program avatars to dance in the visual-auditory environment.

Training:

  1. Collect movement data during the visual stimuli condition;
  2. Collect movement data during the visual-auditory stimuli condition; and
  3. Apply a set of machine learning models to analyze whether the frequency of movement can discriminate between children with ASD and typically developing (TD) children (a minimal classification sketch follows this list).
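
A minimal sketch of that classification step is given below, assuming a hypothetical feature table movement_features.csv with one row per child, movement-frequency feature columns, and a "group" label; it compares a few scikit-learn classifiers with cross-validation.

```python
# Minimal sketch: test whether movement-frequency features can discriminate
# children with ASD from typically developing (TD) children. Assumes a
# hypothetical file "movement_features.csv" with one row per child, feature
# columns, and a "group" column containing "ASD" or "TD".
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

data = pd.read_csv("movement_features.csv")
X = data.drop(columns=["group"])
y = data["group"]

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC()),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```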

Project Activity 5 – The development of a disability assessment tool using biosensors, virtual reality, and machine learning for persons with multiple sclerosis (led by Dr. Farrell)

Undergraduate students will:

  1. Collect biosensor data from persons with multiple sclerosis (MS) while they perform a series of tasks designed to reflect activities of daily living (e.g., walking, climbing stairs, lifting and lowering objects);
  2. Use machine learning to analyze which movement patterns and functions have the most significant impact on disability (see the sketch after this list);
  3. Integrate virtual reality to reflect the tasks selected by machine learning in the disability assessment tool; and
  4. Compare the new disability assessment tool to the established Expanded Disability Status Scale (EDSS).
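
A minimal sketch of the machine-learning analysis in step 2 follows, assuming a hypothetical table biosensor_tasks.csv with one row per participant, biosensor-derived feature columns, and an EDSS score; permutation importance is used to rank which movement features matter most for predicting disability.

```python
# Minimal sketch: rank which biosensor-derived movement features contribute most
# to predicting disability (EDSS score). Assumes a hypothetical file
# "biosensor_tasks.csv" with one row per participant, feature columns, and an
# "edss" column holding the clinician-rated score.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = pd.read_csv("biosensor_tasks.csv")
X = data.drop(columns=["edss"])
y = data["edss"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# Permutation importance: how much held-out error grows when a feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
ranking = pd.Series(result.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranking.head(10))
```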

Dr. Farrell will educate students on the pathophysiology of MS, including the development of both cognitive and physical disability. As a certified EDSS rater, Dr. Farrell will also train undergraduate students on how the EDSS is administered and allow them to observe its administration during data collection.

In this project, there are three learning objectives:

  1. Understand the strengths and limitations of the EDSS;
  2. Identify technologically innovative methods to address the current limitations of the EDSS while building upon its strengths; and
  3. Develop appropriate methodologies to test the newly developed disability assessment tool.

Training:

  1. Collect biosensor data during a series of tasks that reflect the activities of daily living for persons with MS; and
  2. Analyze the data and determine which biosensor-related outcomes and tasks have the highest correlation with overall disability (see the correlation sketch below).
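
The correlation analysis could start from something like the sketch below, reusing the same hypothetical biosensor_tasks.csv layout and ranking each outcome by its Spearman correlation with the EDSS score.

```python
# Minimal sketch: rank biosensor-related outcomes by their Spearman correlation
# with overall disability (EDSS). Reuses the same hypothetical
# "biosensor_tasks.csv" layout: one row per participant, outcome columns, "edss".
import pandas as pd
from scipy.stats import spearmanr

data = pd.read_csv("biosensor_tasks.csv")
edss = data["edss"]

rows = []
for column in data.columns.drop("edss"):
    rho, p_value = spearmanr(data[column], edss)
    rows.append({"outcome": column, "spearman_rho": rho, "p_value": p_value})

ranking = pd.DataFrame(rows).sort_values("spearman_rho", key=lambda s: s.abs(), ascending=False)
print(ranking.head(10))
```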

The students will then work with Dr. Farrell to design the framework (i.e., which biosensor outcomes and tasks) to be included in the virtual reality disability assessment. Students will observe and assist with Dr. Farrell’s administration of the EDSS and the newly developed virtual reality assessment over multiple trials. Scores will be collected to determine the validity and reliability of the new assessment. Students will be evaluated at the end of the summer by presenting their findings (with local clinicians specializing in treating persons with MS) and demonstrating the newly developed assessment tool.


Project Activity 6 – Enhancing early diagnosis of autism with machine learning algorithms using postural control features (led by Dr. Rolfe)

Children with ASD often exhibit impaired communication and interaction with others, restricted interests, and repetitive behavior patterns. In addition, motor behavior deficits have been reported in children with ASD, including abnormalities in motor coordination, poor performance in functional motor tasks, and decreased postural stability.

Because there is no direct clinical test for ASD, diagnosis is often based on assessments of various behavioral and motor symptoms by experienced psychologists, pediatricians, or neurologists. Motor deficits are among the earliest signs that can be used to detect ASD. Therefore, our goal is to provide an efficient and valid approach for enhancing the early diagnosis of ASD using postural control features. The Center of Pressure (COP, the point of application of the ground reaction force) during quiet standing will be measured for children with ASD, children with developmental delay (without ASD), and children with typical development, and the measurements will be used to train several machine learning models.

These machine learning classifiers will automatically classify the postural control patterns of the three groups. The machine learning approach has great potential to detect postural control features in children with ASD, leading to a robust screening tool for the early diagnosis of ASD.

In this project, there are three learning objectives:

  1. Assess postural stability for individuals with ASD;
  2. Determine the most relevant variables to differentiate between individuals with and without ASD; and
  3. Develop machine learning classifiers using programming environments such as MATLAB or Python.

Training:

  • Collect Center of Pressure (COP) data during quiet standing using a portable force platform;
  • Compute magnitude and complexity measures of the COP to assess postural sway (a minimal sketch follows this list); and
  • Develop and validate an automated identification of ASD postural control patterns using supervised machine learning classifiers.
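
A minimal sketch of the sway computations follows, assuming cop_x and cop_y are NumPy arrays (in cm) recorded at 100 Hz by the portable force platform; path length and RMS sway capture magnitude, and sample entropy is used here as one possible complexity measure.

```python
# Minimal sketch: postural-sway magnitude and complexity measures from a COP
# recording. Assumes cop_x and cop_y are NumPy arrays (in cm) from the portable
# force platform; the pairwise-distance approach to sample entropy below is
# only practical for short recordings.
import numpy as np

def sway_path_length(cop_x, cop_y):
    """Total distance travelled by the COP (cm)."""
    return float(np.sum(np.hypot(np.diff(cop_x), np.diff(cop_y))))

def rms_sway(signal):
    """Root-mean-square distance from the mean COP position (cm)."""
    return float(np.sqrt(np.mean((signal - np.mean(signal)) ** 2)))

def sample_entropy(signal, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal (higher values = less regular sway)."""
    x = np.asarray(signal, dtype=float)
    r = r_factor * np.std(x)

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dists = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return np.sum(dists <= r) - len(templates)  # exclude self-matches

    return float(-np.log(match_count(m + 1) / match_count(m)))

# Example with the hypothetical recordings:
# print(sway_path_length(cop_x, cop_y), rms_sway(cop_x), sample_entropy(cop_x))
```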


Sponsored by the National Science Foundation, 2022-2024.