Familiarity in music has been reported as an important factor modulating emotional and hedonic responses in the brain. Familiarity and repetition may increase the liking of a piece of music, thus inducing positive emotions. Neuroimaging studies have focused on identifying the brain regions involved in the processing of familiar and unfamiliar musical stimuli. However, the use of different modalities and experimental designs has led to discrepant results, and it is not clear which areas of the brain are most reliably engaged when listening to familiar and unfamiliar musical excerpts. In the present study, we conducted a systematic review across three databases (Medline, PsycINFO, and Embase) using the keywords (recognition OR familiar OR familiarity OR exposure effect OR repetition) AND (music OR song) AND (brain OR brains OR neuroimaging OR functional Magnetic Resonance Imaging OR Positron Emission Tomography OR Electroencephalography OR Event Related Potential OR Magnetoencephalography). Of the 704 titles identified, 23 neuroimaging studies met our inclusion criteria for the systematic review. After removing studies providing insufficient information or contrasts, 11 studies (involving 212 participants) qualified for the meta-analysis using the activation likelihood estimation (ALE) approach. Our results did not reveal peak activations that were consistently significant across the included studies. Using a less conservative approach (p < 0.001, uncorrected for multiple comparisons), we found that the left superior frontal gyrus, the ventral lateral (VL) nucleus of the left thalamus, and the left medial surface of the superior frontal gyrus had the highest likelihood of being activated by familiar music. On the other hand, the left insula and the right anterior cingulate cortex had the highest likelihood of being activated by unfamiliar music. We had expected limbic structures to emerge as the top clusters during familiar music listening.
Instead, familiar music elicited a predominantly motor pattern of activation. This could reflect audio-motor synchronization to the rhythm, which is more engaging for familiar tunes, and/or a sing-along response in one's mind, anticipating the melodic and harmonic progressions, rhythms, timbres, and lyric events of the familiar songs. These data highlight the need for larger neuroimaging studies to clarify the neural correlates of music familiarity.
Music is ubiquitous in human culture and has been present since prehistorical times (Conard et al., 2009). Music does not appear to have a survival value, yet most of the current literature has pinpointed it as a fundamental aspect of human life, describing it as a “universal reward” (Trehub et al., 2005). People often value music for the emotions it generates (Juslin and Laukka, 2004; Brattico and Pearce, 2013), and listening to music can help to regulate mood and increase well-being (Hills and Argyle, 1998; Kawakami et al., 2014). This might explain the use of music in people’s everyday lives (Schäfer and Sedlmeier, 2010).
Familiarity or repeated exposure in music has been reported as an important factor modulating emotional and hedonic responses in the brain (Pereira et al., 2011). The familiarity principle, also known as the “mere exposure effect,” was first described by Zajonc (1968). It is a psychological phenomenon which suggests that the more exposed we are to someone or something, the more we like it. Repetition in music can be of different types: within a piece, across pieces, or across multiple hearings (Margulis, 2013). Both familiarity and repetition may increase the liking of a piece of music, thus inducing positive emotions (Witvliet and Vrana, 2007; Omar Ali and Peynircioglu, 2010).
Long before its description in 1968, the phenomenon of familiarity had been known by social psychologists and applied to the music field (King and Prior, 2013). The first person to document it was Meyer in 1903. He presented his subjects with a dozen repetitions of unfamiliar music that he had composed. After listening to the last repetition, most subjects asserted that “the aesthetic effect was improved by hearing the music repeatedly” (Meyer, 1903). Moreover, Meyer showed that melodies which ended on the frequency ratio symbol 2 (the Lipps-Meyer Law) were preferred to all other melodies. However, this law was later disputed by Paul Farnsworth, his student, who argued that interval ending preferences could be altered by training. Therefore, repetition and familiarity with a specific ratio ending could increase preference for that specific ending. This effect, explaining the perception of musical closure, was called the “habit principle” (Farnsworth, 1926). Overall, it seems familiarity deepens the understanding of music and engagement with music listening (King and Prior, 2013).
However, according to numerous studies, the relationship between exposure and enjoyment is non-linear, following an inverted-U shape preference response. Repeated exposure to music can increase pleasure (“hedonic value”) for a certain period, but ultimately gives rise to increasing displeasure (Jakobovits, 1966; Berlyne, 1971; Szpunar et al., 2004; Schellenberg, 2008).
There are different explanations for the inverted U-shape preference response. One is the perceptual fluency model (Bornstein and D’Agostino, 1994), which holds that people incorrectly assume that the facilitated processing of a familiar stimulus is associated with some positive attribute of the stimulus itself. However, as listeners become consciously aware of this processing fluency, they attribute the effect to repeated exposure rather than to the stimulus, and pleasure decreases. Another explanation, proposed by Berlyne (1971), states that the inverted U reflects the “interaction of two opposing impulses:” the ascending part arises from an evolutionarily conditioned preference for the familiar (positive learned safety effect), and the subsequent decline of the U reflects a drive for novelty seeking (aversion to boredom). Moreover, the complexity of the stimulus also influences the timescale of the satiation effect. According to Szpunar et al. (2004), despite initial increases in liking, after the stimulus complexity has been absorbed, boredom intercedes, and satiation reduces likability.
Peretz et al. reported that familiarity is best conceptualized as an “implicit memory phenomenon,” in which previous experience aids the performance of a task without conscious awareness of these previous episodes (Peretz et al., 1998). The ability to recognize familiar melodies appeared to be dependent on the integrity of pitch and rhythm perception. Of these two factors, pitch is thought to play a more important role (Hébert and Peretz, 1997). The authors noted that “although the mere exposure effect is simple to define and to reproduce experimentally, it is more complicated to explain.”
Familiarity is a complex subject, and the neural mechanisms underlying this memory phenomenon in music listening remain unclear and inconsistent across studies. Some authors define familiarity as a semantic memory process, a form of declarative knowledge (e.g., words, colors, faces, or music) acquired over a lifetime. Musical semantic memory is defined as the long-term storage of songs or musical excerpts, which enables us to have a strong feeling of familiarity when we listen to music (Groussard et al., 2010a). Brain lesion studies showed that musical semantic memory appears to involve both hemispheres; however, the integrity of the left hemisphere is critical, suggesting a functional asymmetry favoring the left hemisphere for semantic memory (Platel et al., 2003). Neuroimaging studies featuring musical semantic memory have reported the involvement of the anterior part of the temporal lobes, either in the left hemisphere or bilaterally, and the activation of the left inferior frontal gyrus (Brodmann area (BA) 47) (Plailly et al., 2007). Groussard and her co-workers also found activation of the superior temporal gyri (BA 22). The right superior temporal gyrus is mostly involved in the retrieval of perceptual memory traces (information about rhythm and pitch), which are useful for deciding whether or not a melody is familiar. The left superior temporal gyrus seems to be involved in distinguishing between familiar and unfamiliar melodies (Groussard et al., 2010a).
Plailly et al. (2007) also addressed the neural correlates of familiarity and its multimodal nature by using odors and musical excerpts as stimuli. These were used to investigate the feelings of familiarity and unfamiliarity. Results for the feeling of familiarity indicated a bimodal activation pattern in the left hemisphere, specifically the superior and inferior frontal gyri, the precuneus, the angular gyrus, the parahippocampal gyrus, and the hippocampus. On the other hand, the feeling of unfamiliarity (impression of novelty) of odors and music was related to the activation of the right anterior insula (Plailly et al., 2007). Janata (2009) studied the neural correlates of music-evoked autobiographical memories in healthy individuals and those with Alzheimer's disease. His findings showed that familiar songs from our own past can trigger emotionally salient episodic memories and that this process is mediated by the medial prefrontal cortex (MPFC). In the same study, hearing familiar songs also activated the pre-supplementary motor area (SMA), left inferior frontal gyrus, bilateral thalamus, and the right cerebellar hemisphere (Janata, 2009).
Brain imaging studies in the neurobiology of reward during music listening demonstrated the involvement of mesolimbic striatal areas, especially the nucleus accumbens (NAcc) in the ventral striatum. This structure is connected with subcortical limbic areas such as the amygdala and hippocampus, insula and anterior cingulate cortex, and also integrated with cortical areas including the orbital cortex and ventromedial prefrontal cortex. These limbic and paralimbic structures are considered the core structures of emotional and reward processing (Koelsch, 2010; Salimpoor et al., 2013; Zatorre and Salimpoor, 2013). Recently, Pereira et al. (2011) investigated familiarity and music preference effects in determining the emotional involvement of the listeners and showed that familiarity with the music contributed more to the recruitment of the limbic and reward centers of the brain.
Electroencephalography (EEG) is another neuroimaging technique that enables researchers to examine the brain’s response to stimuli. It provides a real-time picture of neural activity, recording how it varies millisecond by millisecond. Time-locked EEG activity, or event-related potentials (ERPs), are small voltages generated in brain structures in response to specific sensory, cognitive, or motor events (Luck, 2005). With regard to auditory stimuli—and, more specifically, to music listening and recognition—the N1, P200, P300, and N400 waves have been found to be particularly important. N1, a negative component found 80–110 ms after stimulus onset, is thought to represent the detection of a sound and its features, as well as detection of change of any kind (pitch, loudness, source location, etc.) (Näätänen and Picton, 1987; Seppänen et al., 2012). It originates in the temporal lobe, predominantly in or near the primary auditory cortex, suggesting that it is involved in early phases of information processing (Hyde, 1997). Secondly, P2 is a positive component that arises 160–200 ms after the onset of the stimulus (Seppänen et al., 2012) and is localized in the parieto-occipital region (Rozynski and Chen, 2015). It is involved in the evaluation and classification of the stimulus (Seppänen et al., 2012) as well as other related cognitive processes, such as working memory and semantic processing (Freunberger et al., 2007). P3, instead, is considered to be more related to selective attention and information processing, such as recognition and memory processes. It is traditionally divided into P3a, arising in the frontal region, and P3b, arising in the temporal and parietal regions; it appears 300–400 ms after the stimulus and lasts 300–600 ms (Patel and Azzam, 2005). However, its timing can vary widely, so it is often described as the late positive complex (LPC), a definition which also includes later deflections, such as P500 and P600 (Finnigan et al., 2002).
Finally, N400 arises 200–600 ms after the stimulus, but its anatomical localization has not been well defined since it does not seem to be related to a specific mental operation only. Indeed, it seems to be connected to the processing of meaning at all levels, since it is influenced by factors acting both at lower and at higher levels of these cognitive processes (Kutas and Federmeier, 2011).
Advances in brain imaging techniques have facilitated the examination of music familiarity processing in the human brain. Nevertheless, the use of different modalities and experimental designs has led to differing results. Over the years, studies have used varying music stimuli such as melodies, songs with and without lyrics, with diverse acoustic complexity. Due to this heterogeneity, it is not clear which areas are most reliably engaged when listening to familiar and unfamiliar songs and melodies.
To our knowledge, no systematic review or meta-analysis has been conducted to resolve the inconsistencies in the literature. The present study systematically reviews the existing literature to establish the neural correlates of music familiarity in healthy populations using different neuroimaging methods, including fMRI, PET, EEG, ERP, and MEG. Finally, we used the activation likelihood estimation (ALE) method (Eickhoff et al., 2009) to conduct a series of coordinate-based meta-analyses for fMRI and PET studies. We expected to find brain areas related to emotion or reward as the most active regions when listening to familiar music, as familiarity is positively correlated with likeability and pleasure, at least up to a certain number of exposures.[…]
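The core idea of the ALE method can be illustrated with a minimal sketch: each reported activation focus is modeled as a 3D Gaussian probability distribution, each study's foci are combined into a modeled activation (MA) map, and the ALE score at each voxel is the probability that at least one study activated it. The grid size, coordinates, and fixed kernel width below are illustrative assumptions only; real implementations (e.g., GingerALE, following Eickhoff et al., 2009) derive the kernel width from each study's sample size and assess significance with permutation-based thresholding, steps omitted here.

```python
import numpy as np

def gaussian_kernel_map(shape, focus, sigma):
    """Model one activation focus as a 3D Gaussian probability map.

    Uses the normalized Gaussian density, so per-voxel values stay well
    below 1 and can be treated as activation probabilities.
    """
    grid = np.indices(shape)  # shape (3, x, y, z)
    d2 = sum((g - c) ** 2 for g, c in zip(grid, focus))
    coef = (2 * np.pi * sigma ** 2) ** -1.5
    return coef * np.exp(-d2 / (2 * sigma ** 2))

def modeled_activation(shape, foci, sigma):
    """Per-study modeled activation (MA) map: voxel-wise max over the
    study's foci, so nearby foci from one study are not double-counted."""
    maps = [gaussian_kernel_map(shape, f, sigma) for f in foci]
    return np.max(maps, axis=0)

def ale_map(shape, studies, sigma=2.0):
    """ALE score per voxel: probability that at least one study reports
    activation there, i.e. 1 minus the product of (1 - MA_i)."""
    ale = np.ones(shape)
    for foci in studies:
        ale *= 1.0 - modeled_activation(shape, foci, sigma)
    return 1.0 - ale

# Toy example (hypothetical coordinates): two studies sharing one focus.
shape = (20, 20, 20)
studies = [[(10, 10, 10)], [(10, 10, 10), (3, 3, 3)]]
ale = ale_map(shape, studies)
# The shared focus accumulates evidence from both studies, so it receives
# the highest ALE score on the grid.
```

Convergence across experiments, not activation strength within any one of them, is what drives the score: a focus reported by both toy studies scores roughly twice as high as a focus reported by only one.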