Elise A. Piazza

How does neural coupling drive successful communication and learning?

I use functional near-infrared spectroscopy (fNIRS), which measures local changes in blood oxygenation and is much less susceptible to motion artifacts than fMRI, to investigate communication during live, naturalistic interactions. In a recent study, we simultaneously recorded brain activity from adult-infant dyads while they played, sang, and read a story in real time. We found significant neural coupling within each dyad during direct interaction, with activation in the infant's prefrontal cortex (PFC) slightly preceding and driving similar activation in the adult brain. We also found that both brains continuously tracked the moment-to-moment fluctuations of communicative behaviors (mutual gaze, smiling, and speech prosody).
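The lead-lag structure of this kind of coupling can be probed by correlating the two brains' time series at a range of temporal offsets and finding the lag at which coupling peaks. The sketch below is my own minimal illustration of that idea with synthetic data, not the study's actual fNIRS analysis pipeline:

```python
import numpy as np

def lagged_coupling(infant, adult, max_lag):
    """Pearson correlation between two time series at each temporal offset.
    A positive lag means the infant signal leads the adult signal."""
    corrs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = infant[:-lag], adult[lag:]
        elif lag < 0:
            a, b = infant[-lag:], adult[:lag]
        else:
            a, b = infant, adult
        corrs[lag] = np.corrcoef(a, b)[0, 1]
    return corrs

# Toy data: the "adult" series is a noisy copy of the "infant" series,
# delayed by 2 samples, so coupling should peak at lag = +2.
rng = np.random.default_rng(0)
infant = rng.standard_normal(500)
adult = np.roll(infant, 2) + 0.1 * rng.standard_normal(500)
coupling = lagged_coupling(infant, adult, max_lag=5)
peak_lag = max(coupling, key=coupling.get)
```

A peak at a positive lag is the signature described above: fluctuations in the infant's signal precede matching fluctuations in the adult's.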

How do speakers adapt their voices to meet audience demands?

We recorded mothers' natural speech while they interacted with their infants and with adult experimenters and measured their vocal timbre using a time-averaged summary statistic (mel-frequency cepstral coefficients, or MFCCs) that broadly represents the spectral envelope of speech.

Using an SVM classifier, we found that mothers consistently shift their unique vocal "fingerprint" between adult-directed speech and infant-directed speech in a way that is highly consistent across 10 diverse languages from around the world.
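The logic of this analysis can be sketched in a few lines: summarize each recording with a time-averaged spectral statistic, then classify held-out recordings as adult- or infant-directed. The toy version below is my own illustration, not the study's code; it substitutes an FFT-based log-spectral envelope for MFCCs, a nearest-centroid rule for the SVM, and synthetic signals for real speech:

```python
import numpy as np

def spectral_summary(signal, frame=256):
    """Time-averaged log-magnitude spectrum: a crude stand-in for mean MFCCs."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    return np.log(spectra + 1e-8).mean(axis=0)   # average over time

# Toy "recordings": the two registers differ in a single spectral parameter
rng = np.random.default_rng(1)
def make(kind, n=20):
    f = 0.05 if kind == "ads" else 0.02          # hypothetical spectral shift
    t = np.arange(4096)
    sigs = [np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(4096)
            for _ in range(n)]
    return np.array([spectral_summary(s) for s in sigs])

ads, ids_ = make("ads"), make("ids")
centroids = {"ads": ads.mean(axis=0), "ids": ids_.mean(axis=0)}

def classify(feat):
    """Nearest-centroid stand-in for the SVM decision rule."""
    return min(centroids, key=lambda k: np.linalg.norm(feat - centroids[k]))

# A new recording from the "ads" distribution is classified by its summary
probe = np.sin(2 * np.pi * 0.05 * np.arange(4096)) + 0.3 * rng.standard_normal(4096)
label = classify(spectral_summary(probe))
```

In the actual study, MFCC summaries and an SVM play the roles of `spectral_summary` and `classify`, and the classifier's success across speakers of 10 languages is what establishes the cross-linguistic consistency of the shift.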

These findings show that timbre is a pervasive, cross-linguistic property of communicative shifts and could improve speech recognition technology designed to compare infants' linguistic input across different cultural environments.

How do listeners efficiently process complex natural sounds?

Statistical summary is a perceptual mechanism for compressing complex sensory information into concise, "gist" representations. This phenomenon had been widely studied in vision, so we extended it to the auditory domain to understand how listeners capture the essence of complex sounds as they unfold over time. We found that listeners encode the average pitch of a tone sequence while losing information about the individual tones, indicating that they transform local acoustic information into a simpler, global summary representation.
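The computational appeal of this compression, that averaging trades local detail for a robust gist, can be shown with a toy simulation (an illustration of the principle, not the experiment's design): even when the memory trace of each individual tone is noisy, the sequence's average pitch survives, because independent errors cancel.

```python
import numpy as np

rng = np.random.default_rng(2)
sequence = rng.uniform(60, 72, size=32)         # tone pitches (MIDI note numbers)
memory = sequence + rng.normal(0, 2, size=32)   # noisy trace of each individual tone

# Individual tones are retained poorly...
per_tone_error = np.abs(memory - sequence).mean()

# ...but the global summary survives: independent per-tone errors cancel
mean_error = abs(memory.mean() - sequence.mean())
```

With 2 semitones of noise on every tone, the error on the sequence's mean is much smaller than the typical per-tone error, which is exactly what makes a summary statistic an efficient code for the sequence.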

In follow-up work, we are exploring how statistical summary flexibly adapts to different categories of natural sounds (speech, music) as well as how infants extract auditory summary statistics in real time.

Humans use summary statistics to perceive auditory sequences.
Piazza, Sweeny, Wessel, Silver, & Whitney. Psychological Science, 2013.

Press: UC Berkeley press release | Science Today interview

How do we perceptually recalibrate to new auditory environments?

Adaptation is a critical perceptual phenomenon that enhances processing by recalibrating the brain's response to the current environment. How do our perceptual systems adjust to the timbre (i.e., "tone color" or overall quality) of natural sounds, such as the buzz of a muted trumpet or Billie Holiday's raspy voice?

We report rapid, widespread perceptual adaptation to the timbre of a variety of highly natural sounds (musical instruments, speech, animal calls, natural textures) that survives pitch changes present in the natural environment. Our results point to timbre as a high-level, configural property of sounds, processed similarly to faces in vision.



Rapid adaptation to the timbre of natural sounds.
Piazza, Theunissen, Wessel, & Whitney. Scientific Reports, 2018.


How do musicians' brains sync up during live performance?

Music offers a rich window into human interaction, and my research links the perception and production of music within naturalistic, dyadic contexts.

I am currently developing tools (a combination of fNIRS recordings of live interactions and more constrained fMRI experiments using highly realistic, non-ferromagnetic instruments) to study musical communication in real-life environments.

(Photo of my chamber trio in 2013)


How does the brain select what we consciously perceive when the world is ambiguous?

My dissertation used binocular rivalry (a bistable phenomenon that occurs when two conflicting images are presented separately to the two eyes, resulting in perceptual alternation between the images) as a model of visual ambiguity to study the effects of various factors on conscious awareness.

In one study, we found that asymmetry between the two cerebral hemispheres impacts our conscious perception during rivalry. Our results indicate that conscious representations differ across the visual field and that these differences persist for a long time (>30 seconds) after the onset of a stimulus. In a follow-up study, we found that this hemispheric filtering is based on a relative comparison of available spatial frequencies in the current environment.

In another set of studies, we investigated how prediction informs perception. First, we showed that people are more likely to perceive a given image during binocular rivalry when that image matches the prediction of a recently viewed stream of rotating gratings. More recently, we have found that arbitrary associations between sounds and images (established during a brief, passive statistical learning period) bias what we see during rivalry.


Persistent hemispheric differences in the perceptual selection of spatial frequencies.
Piazza & Silver. Journal of Cognitive Neuroscience, 2014.

Relative spatial frequency processing drives hemispheric asymmetry in conscious awareness.
Piazza & Silver. Frontiers in Psychology, 2017.

Predictive context influences perceptual selection during binocular rivalry.
Denison, Piazza, & Silver. Frontiers in Human Neuroscience, 2011.

Rapid cross-modal statistical learning influences visual perceptual selection.
Piazza, Denison, & Silver. Journal of Vision, 2018.