Simulating Sound Encoding in Active Contexts: Applying the Hidden Markov Model to Frequency Following Responses
Welcome to my poster presentation!
My name is Lucy Core and I am an undergraduate student at McGill University in the Honours Cognitive Science program. I have been working with the Zatorre Lab at the Montreal Neurological Institute since September 2019. Under the supervision of Dr. Robert Zatorre, I have completed an independent research project on the influence of auditory perception on motor learning. This summer, I expanded my knowledge of auditory cognitive neuroscience and computer science by using a machine learning algorithm to classify frequency following responses.
To view the abstract and my poster, please click the "Presentation" button.
I am looking forward to answering your questions about my poster via Zoom on Tuesday, August 11th, 2020 from 3:30 to 4:45 PM Eastern Time.
****PLEASE NOTE****
Because Zoom meetings have a 40-minute time limit and the poster session is 75 minutes long, I have two different links depending on the time.
The green button that says "Zoom Link" will bring you to the link for Part 1 of the meeting. See below for links for both Part 1 and Part 2.
Zoom Link for Part 1 (3:30pm - 4:10pm):
Lucy Core is inviting you to a scheduled Zoom meeting.
Topic: NSERC-CREATE Poster - Part 1
Time: Aug 11, 2020 03:30 PM Eastern Time (US and Canada)
Join Zoom Meeting
https://us04web.zoom.us/j/75776139403?pwd=MDhPVVBSdCtYa2ppNjh4NC8xQWZ4U…
Meeting ID: 757 7613 9403
Passcode: 5XeND8
Zoom Link for Part 2 (4:10pm - 4:45pm):
Topic: NSERC-CREATE Poster - Part 2
Time: Aug 11, 2020 04:10 PM Eastern Time (US and Canada)
Join Zoom Meeting
https://us04web.zoom.us/j/76864984724?pwd=QjF3aWlNcEljekZSZHdXQzBXbXdmd…
Meeting ID: 768 6498 4724
Passcode: 8Cxqse
Simulating Sound Encoding in Active Contexts: Applying the Hidden Markov Model to Frequency Following Responses
The frequency following response (FFR) is an auditory neural signal whose nonlinear features capture the periodicity of complex sounds, such as speech and music. Previous work by Behroozmand et al. (2016) has shown that FFRs recorded in active conditions, in which an individual is producing a sound, have greater amplitudes than FFRs recorded in passive conditions, in which individuals are listening to sounds. In this project, we used an existing data set recorded in a passive listening condition in which participants listened to the speech syllable /da/ or the piano tone G2 presented at the same frequency (Coffey et al., 2017). To simulate active FFRs, we multiplied the passive FFR amplitudes by a single value drawn from a random normal distribution with the mean amplitude enhancement and standard deviation reported by Behroozmand et al. (2016). We then used a machine learning classifier, the Hidden Markov Model (HMM), to determine whether it could accurately classify FFRs in the passive vs. simulated active conditions. We tested the HMM using 7 different sample sizes for both the speech and piano FFR data. The results showed that the HMM could accurately classify FFR trials at sample sizes of 500, 1000, and 1500 for the speech data and 1500 and 2000 for the piano data. These results suggest that the HMM is a promising classifier for FFR data and help identify the HMM's optimal parameters.
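The simulation step described in the abstract can be sketched as follows. This is an illustrative example only, not the project's actual code: the FFR array is synthetic placeholder data (the real data set comes from Coffey et al., 2017), and the gain mean and standard deviation shown here are hypothetical stand-ins for the enhancement values reported by Behroozmand et al. (2016).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic passive FFR data: rows are trials, columns are time samples.
# (Placeholder for the real passive-condition recordings.)
passive_ffr = rng.standard_normal((500, 1024))

# Hypothetical enhancement parameters; the actual mean amplitude
# enhancement and standard deviation come from Behroozmand et al. (2016).
mean_gain = 1.3
sd_gain = 0.1

# Draw a single gain value from a normal distribution and scale every
# passive trial by it, simulating the active-condition amplitude increase.
gain = rng.normal(loc=mean_gain, scale=sd_gain)
simulated_active_ffr = passive_ffr * gain
```

The scaled trials could then be pooled with the passive trials and passed to an HMM classifier (e.g., a Gaussian HMM) at different sample sizes, as described above.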