
How previous experience shapes future affective subjective ratings: A follow-up study investigating implicit learning and cue ambiguity.

Del Popolo Cristaldi, F.; Buodo, G.; et al.
In: PLoS ONE, Vol. 19 (2024-02-09), Issue 2, e0297954

Abstract

People use their previous experience to predict future affective events. Since we live in ever-changing environments, affective predictions must generalize from past contexts (from which they may be implicitly learned) to new, potentially ambiguous contexts. This study investigated how past (un)certain relationships influence subjective experience following new ambiguous cues, and whether past relationships can be learned implicitly. Two S1-S2 paradigms were employed as learning and test phases in two experiments. S1s were colored circles, S2s negative or neutral affective pictures. Participants (Experiment 1 N = 121, Experiment 2 N = 116) were assigned to the certain (CG) or uncertain group (UG), and they were presented with 100% (CG) or 50% (UG) S1-S2 congruency during an uninstructed (Experiment 1) or implicit (Experiment 2) learning phase. During the test phase both groups were presented with a new 75% S1-S2 paradigm, and ambiguous (Experiment 1) or unambiguous (Experiment 2) S1s. Participants were asked to rate the expected valence of upcoming S2s (expectancy ratings), or their experienced valence and arousal (valence and arousal ratings). In Experiment 1 ambiguous cues elicited less negative expectancy ratings, and less unpleasant valence ratings, independently of prior experience. In Experiment 2, both groups showed similar expectancies, predicting upcoming pictures' valence according to the 75% contingencies of the test phase. Overall, we found that in the presence of ambiguous cues subjective affective experience is dampened, and that implicit previous experience does not emerge at the subjective level by significantly shaping reported affective experience.

Introduction

Emotions as predictions

People use previous experience to predict future affective events. For example, imagine that your neighbor has a medium-sized dog with brown spotted fur that always snarls at you, causing you fear and annoyance. Imagine now that you are having lunch with your best friend, who is introducing you to their recently adopted little dog. As the dog enters the room running briskly, you seem to see that its fur is brownish, and perhaps spotted. Does your past annoying experience with your neighbor's dog make you think that your best friend's new dog is extremely unfriendly, just because the two dogs look alike? When situations like this happen in real life, predictions play a crucial role. Based on your past experiences, your brain generalizes the concept of unfriendliness, developed from your neighbor's dog, to new, perceptually similar dogs, predisposing you to feel fear and annoyance. All these computations occur quickly and without conscious awareness. In other words, affective predictions (and the associated subjective experience) must generalize from the specific features of past contexts (from which they are implicitly learned) to new and potentially ambiguous contexts (i.e., contexts that resemble past ones but have different perceptual features) in order to be effective in promoting survival and allostatic balance with the environment [[1]]. However, the specific mechanisms by which this process occurs remain unclear.

According to predictive models of emotion [[3]], affective predictions are constructed along three distinct neurocomputational stages: prediction generation, in which prior experience is combined with present information to construct affective predictions; prediction implementation, in which predictions are used to pre-arrange the best action plans to deal with the expected situation; and prediction updating, in which current environmental inputs are compared with predictions, and in case of mismatch the unexpected information (encoded as a prediction error) acts as feedback to adjust subsequent predictions. Predictive models are assumed to be (i) probabilistic, since they encode the statistical regularities within the observed inputs [[1], [5]]; (ii) generative, because they generalize across sensory modalities, contexts, and time [[1]]; and (iii) implicit, since they act mostly outside of awareness, potentially emerging into it only in case of a prediction violation [[6]]. It follows that human brains do not merely react to affective stimuli at the time of their occurrence; rather, as a spontaneous activity, they are constantly engaged in generating, implementing, and updating predictions in the service of allostasis [[7]].

Prediction construction is assumed to represent the core mechanism on which the brain relies to provide the body with optimal resources for growth, adaptation to the environment, and survival [[1], [5], [7]–[9]]. Previous experience (i.e., knowledge derived from the extraction of statistical regularities of stimuli occurrence in the past) plays a pivotal role in the construction of new affective predictions: it interacts with momentary information (e.g., physical properties of the environment, contingencies experienced between present stimuli) in constraining and refining the pool of information used to generate predictions [[1], [3], [5]]. However, predictive processing's assumptions about (i) whether contingency learning (i.e., extracting statistical regularities of stimuli co-occurrence) may develop implicitly and (ii) whether such learning generalizes to new, and potentially ambiguous, contexts are to date mainly based on empirical evidence from cognitive domains such as visual perception [[10]–[12]] or motor control [[13]]. Little experimental evidence about the application of these assumptions to the affective domain has been collected so far.

To better understand how affective predictions are related to subjectively experienced affective states, predictive models of emotion [[3]] have reconceptualized subjective affective experience within their framework. Subjectively experienced affective states have been defined as the representation of valence (i.e., pleasantness/unpleasantness), arousal (i.e., activation/calm) or even discrete emotions (e.g., anger, fear) in subjective awareness, on which an individual can verbally self-report [[15]]. Predictive models of emotion [[3]] assume that affective experience derives from the brain's spontaneous predictive activity, and have redefined it as the active process of making meaning of present stimuli by predicting and categorizing them on the basis of past experiences [[3]]. When this process draws on conceptual emotion knowledge, the resulting predictive models and their associated affective responses can be subjectively experienced as emotions [[3], [16]]. It follows that subjective affective experience may be sensitive to statistical regularities experienced in the past. For example, the more reliable the regularities, the more expectancies about the valence of upcoming stimuli will draw on them [[18]]. However, it is still unknown whether previously learned probabilities may influence subjective affective experience when facing new, potentially ambiguous cues (i.e., ambiguous signals preceding the occurrence of an affective stimulus). Also, it is not clear if affective environmental contingencies might be inferred from past experience at an implicit level (i.e., without explicit awareness, and without focusing attention on contingencies themselves at the time of their occurrence), and if this may ultimately shape subjective affective experience during future predictions. The present research aimed to test whether these two crucial assumptions of predictive processing models apply to the affective domain by employing a methodological replication approach. We drew on our previous work [[18]] to investigate (i) if being exposed to past (un)certain contingencies might influence future subjective affective ratings to new and potentially ambiguous cues (i.e., cues with different perceptual properties from the ones previously experienced); and (ii) if people are able to implicitly extract (un)certain probabilistic information, and use it later to subjectively predict new affective events.

A novel experimental paradigm

To study the effects of previous experience on affective predictions we recently developed a novel experimental paradigm [[18]], which integrates the logic of traditional emotional S1-S2 paradigms (see [[19]] for a review) with an uninstructed learning component. In this novel paradigm two separate emotional S1-S2 paradigms are employed as a learning and test phase, respectively. As in typical emotional S1-S2 paradigms, a sequential presentation of two stimuli is implemented (both in the learning and test phases): the S1 (or cue) is a symbolic stimulus (manipulated at two levels, i.e., red or blue circles) preceding the occurrence of an affective stimulus, while the S2 (or target) is an emotional stimulus (manipulated at two levels, i.e., negative or neutral affective pictures). The sequence of events in the S1-S2 paradigm allows us to target the three stages of affective prediction construction: the S1 reflects the generation stage, the inter-stimulus interval (ISI) between S1 and S2 the implementation stage, and the S2 the updating stage [[20]]. The contribution of our new paradigm is that it manipulates actual previous experience through uninstructed certain vs. uncertain probabilistic contingencies between S1 and S2, experienced during a separate learning phase. This approach is markedly different from extant S1-S2 paradigms, which manipulate explicitly labeled probabilistic information while participants perform the task.

During the learning phase of this paradigm, previous experience is manipulated between subjects by dividing participants into two experimental groups: the certain group (CG) and the uncertain group (UG). According to their group, they are presented with a 100% (CG) or 50% (UG) S1-S2 congruency, namely the probabilistic ratio between S1 color and S2 valence they are exposed to. During the test phase, all participants are then presented with a new S1-S2 paradigm with a fixed 75% S1-S2 congruency (see [[18]] and below for a detailed explanation of the choice to use the 75% S1-S2 ratio in the test phase). In the test phase, they are asked to rate their subjective affective experience in terms of either the expected valence of upcoming S2s (expectancy ratings), or the experienced valence and arousal to S2s (valence and arousal ratings). Participants are left uninstructed about the probabilistic ratios they are exposed to during the whole paradigm.

Inconsistent evidence on subjective affective experience

Extant literature has collected inconsistent evidence with regard to the effect of (un)certain stimulus predictability on subjective affective ratings. As for expectancy measures related to the generation-implementation stages, our previous study [[18]] showed that experiencing certain contingencies (100%) during the learning phase subsequently elicited more extreme expectancy ratings (i.e., participants predicted the valence of future stimuli according to previously learned contingencies), and this effect generalized from the visual to the auditory sensory modality. Other studies implementing typical S1-S2 paradigms found negatively-biased expectancies (i.e., an overestimation of negative S2s occurrence) in the uncertain (50%) condition [[21]–[25]]; and that ambiguous cues (i.e., cues with uninstructed 50% predictive value) elicited less negative expectancy ratings than unambiguous cues [[26]].

Regarding valence and arousal ratings measured during the updating stage, we did not find any effect of previous experience [[18]], consistent with some studies implementing typical S1-S2 paradigms that found no effect of current (un)certainty on valence ratings [[21]–[24], [27]]. Other traditional S1-S2 studies, instead, showed a more intense subjective experience either in the certain (100%) [[27], [29]–[32]] or in the uncertain (50%) condition [[33]], and also that ambiguous cues elicited more unpleasant mood ratings [[26]]. Another study [[23]] compared an explicit anticipation condition (in which participants were asked for expectancy ratings) with an implicit anticipation condition (in which participants were asked to perform a target detection task): no effects of cue predictive meaning (100% vs. 50%) emerged on accuracy in the target detection task, while faster reaction times (RTs) were found in the certain condition, and no significant results were found on S2-valence ratings.

A follow-up research

Overall, evidence on the effects of past knowledge on subjective affective experience remains fragmentary, with our previous work [[18]] suggesting that a reliable (i.e., certain) previous experience affects future expectancies, and S1-S2 studies suggesting that either certain [[27], [29]–[32]] or uncertain [[21]–[26], [30], [33]] contingencies can lead to an intensification of the related affective experience. Moreover, little is known yet about the potential effects of cue ambiguity on new affective predictions, or about the actual possibility of inferring affective environmental contingencies implicitly. Both these factors are nonetheless crucial for constructing efficient affective predictions. In fact, since our environments are characterized by frequent changes, affective predictions must be flexible in adapting to new, and potentially ambiguous, contextual features [[3], [6], [8]]. Moreover, in daily life people should be able to spontaneously learn contingencies from the environment, and to use them as priors for subsequent affective predictions, even in the absence of explicit instructions and/or awareness of the contingencies themselves [[6]]. However, despite the importance of both of these aspects for constructing efficient predictions, solid experimental evidence on how they may influence subjective affective experience is still lacking.

Based on this, in the present research we implemented two follow-up experiments (pre-registered on the Open Science Framework—OSF; Experiment 1: https://osf.io/gdr3b/, Experiment 2: https://osf.io/z5esb/), in which we modified some features of our former paradigm [[18]]. In particular, in order to investigate the construction of new affective predictions as a function of cue ambiguity, in Experiment 1 we introduced ambiguous cues in the test phase. Here, unbeknownst to participants, we presented two new reddish and bluish S1 colors (i.e., coral and turquoise, here defined as ambiguous) in addition to those already presented during the learning phase (i.e., red and blue, here defined as unambiguous). Further, to test if the probabilistic information available in the environment can be extracted and learned at an implicit level (i.e., not explicitly focusing attention on the probabilistic relationships between stimuli), in Experiment 2 we engaged participants in a distracting task (i.e., a parity judgment task) during the learning phase.

According to predictive models of emotion [[3]], we hypothesized that participants would generalize previously learned contingencies to new ambiguous cues in Experiment 1, and that they would infer affective environmental contingencies without explicitly focusing attention (and use them later to predict future stimuli) in Experiment 2.

Experiment 1

Method

Experiment 1 investigated whether (un)certain past experience might influence future subjective affective ratings as a function of cue ambiguity. We pre-registered the study on Open Science Framework (OSF) (https://osf.io/gdr3b/).

According to predictive models of emotion [[3]], we formulated the hypothesis (H1) that participants exposed to reliable contingencies in the learning phase (i.e., the CG) would generalize the learned information to the ambiguous cues of the test phase, thus showing more negative expectancy ratings after the cues that are more perceptually similar to those previously paired with negative pictures. As no extant study has directly investigated the interaction between past (un)certainty and cue ambiguity, we tested whether cue predictive meaning modulated expectancies as an exploratory analysis. Moreover, according to previous literature [[26]] and to what was specified in the pre-registration, we expected that ambiguous cues would elicit (H2) weaker generalization effects on expectancy ratings as compared to unambiguous cues and (H3) more unpleasant valence ratings. We also tested (H4) whether cue ambiguity modulated arousal ratings as a function of previous experience.

Participants

We used the provider platform Prolific (Prolific, Oxford, UK; http://www.prolific.co/) to recruit 125 adult participants in August 2021. Participants were screened for vision difficulties, including color blindness. Researchers had access only to the following participants' personal information: Prolific IDs, age, gender, country of birth, country of residence, nationality, employment status and languages spoken. Thus, no identifying information was provided or collected. We estimated the required sample size through an a priori pre-registered (https://osf.io/gdr3b/) simulation-based power analysis for generalized linear mixed-effects models (GLMMs) (R package: simr; [[34]]). We estimated parameters from data of our previous study (Experiment 1, N = 185 [[17]]). Pre-registered exclusion criteria were the following: scoring lower than 75% accuracy on attention check items (see below) (N of discarded participants = 0), and reporting technical issues in more than 25% of the experimental trials (N = 1). We also excluded data from 3 participants because of data collection failure.
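As an illustration, a simulation-based power analysis of this kind can be set up with simr roughly as follows. This is a minimal sketch, not the pre-registered script (available on OSF): the data frame d_prev, the column names, the simplified model formula, and the candidate sample size are assumptions.

```r
# Minimal sketch of a simulation-based power analysis with simr; 'd_prev',
# the column names, and the simplified model are assumptions, not the
# pre-registered script.
library(lme4)
library(simr)

# Fit a reduced model to data from the previous study (expectancy ratings,
# group x cue design, random slope for cue within participants).
m_prev <- lmer(expectancy ~ group * cue + (1 + cue | id), data = d_prev)

# Extend the design to a candidate sample size and estimate power for the
# first fixed effect via Monte Carlo simulation.
m_ext <- extend(m_prev, along = "id", n = 120)
powerSim(m_ext, nsim = 1000)
```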

The final sample included 121 participants (58 males, age: M = 25.03, SD = 7.67, range = 18–58; CG N = 57, UG N = 52). All participants gave their written informed consent before starting the experiment, and were paid £1.93 for their participation. All experimental procedures were conducted in accordance with the Declaration of Helsinki, and approved by the local Ethical Committee (protocol no. 4177).

Stimulus material and procedure

We employed two emotional S1-S2 paradigms as learning and test phase, respectively (as in [[18]]). In both phases, unambiguous S1s were red and blue 1-cm-diameter circles. In the test phase only, new ambiguous S1s were introduced unbeknownst to participants: same-sized reddish and bluish (i.e., coral- and turquoise-colored) circles. In both the learning and test phases, S2s were colored 800 × 600 px pictures from the Nencki Affective Picture System (NAPS) [[35]], whose valence was manipulated at two levels: negative (Neg; e.g., dead animals, injured people, human threat/war scenes) vs. neutral (Neu; e.g., urban or natural landscapes, people jogging or sitting, animals standing still) (see S1 Table for a list of NAPS pictures employed as S2s). Negative and neutral pictures did not differ in luminance, contrast, complexity and color space indices (see S2 Table), and each picture content (i.e., animals, faces, landscapes, objects, people) was equally represented within each valence level. In attention check trials, a 1-cm yellow circle was used as S1, while an 800 × 600 px picture of a black-stripe pattern displayed on a transparent background was used as S2. The expected valence of upcoming S2s (expectancy ratings) and subjective affective responses to S2s (valence and arousal ratings) were assessed through three distinct Visual Analogue Scales (VASs) ranging from 0% to 100%. In the expectancy VAS, 0% corresponded to "I definitely expect to see a neutral picture", 50% represented not knowing what to expect, and 100% corresponded to "I definitely expect to see a negative picture". In the valence VAS, 0% represented "very negative" valence, 50% "neutral" valence, and 100% "very positive" valence. In the arousal VAS, 0% meant "relaxed", 50% represented an intermediate level of activation, and 100% meant "aroused".

We ran the experiment online, through OpenSesame [[36]] and the JATOS hosting server [[37]]. Participants were asked to run the study on a computer, to sit alone in a silent and private room, and to avoid distractions and interruptions, in order to ensure optimal conditions for participation. They were also asked to ensure that no one else could see their screen during the experiment, given the emotionally salient material involved.

Before the learning phase, participants were randomly assigned to the certain group (CG) or the uncertain group (UG). At the beginning of the learning phase, participants received the following instructions: they were asked to look at the screen and pay attention to the relationship between S1 color and S2 valence, and to press the 'spacebar' as fast as they could each time they saw a yellow circle (attention check trials). A practice session of 4 trials followed the instructions: here, participants received feedback on their performance on one attention check trial. After the practice, the learning session started. In each trial, the S1 was presented first for 250 msec on a gray background. This was followed by a fixed interstimulus interval (ISI) of 1000 msec, in which the screen remained gray. Then, the S2 was presented for 1000 msec. A white fixation cross was displayed in the center of the screen during the inter-trial interval (ITI), whose length randomly varied between 800 and 1200 msec. The total number of learning trials was 40, presented in random sequence. During this phase, CG participants were exposed to a certain predictive relationship between S1 and S2: each S1 color (i.e., red and blue) was paired with the same S2 valence in 100% of the trials (100% S1-S2 congruency). UG participants, instead, were presented with an uncertain relationship: each S1 color was paired with negative S2s in 50% of the trials, and with neutral S2s in the other 50% (50% S1-S2 congruency). Color-valence pairings were counterbalanced between subjects, and participants were left uninstructed about the S1-S2 predictive ratios.

After the learning phase, a 1-minute interval followed in which participants were asked to wait and relax. Then, both groups were introduced to the same test phase. Instructions of the test phase were to look at the screen and try to predict S2 valence based on S1 color. Participants were told that in some trials they would be asked their expectancy (i.e., to rate how much they expected to see a negative picture after the S1) on a 0–100% scale; while in other trials they would be asked their subjective valence and arousal to the S2, on a 0–100% scale for each dimension. They were asked to give either their expectancy ratings during the ISI, or their valence and arousal ratings right after the S2. VASs response times were self-paced, and the two rating conditions were balanced for the number of trials (50%:50%) and randomly delivered. We presented the expectancy and the valence/arousal ratings in different trials to prevent the two kinds of ratings from influencing each other. Participants were lastly reminded to press the 'spacebar' as fast as they could each time they saw the yellow circle (attention check). After the instructions, participants performed a practice session of 4 trials, in which they were trained to give their ratings, and received performance feedback on a single attention check. Then the test session started, with the same trial structure and timing as the learning phase, and a total number of 80 trials. The order of the trials was randomized. Cue color was manipulated within participants at four levels: red, blue, coral, and turquoise. Test trials in which we employed the same S1 colors as the learning phase were considered unambiguous (N = 40), whereas test trials in which we employed the new reddish and bluish colors were considered ambiguous (N = 40). Cue ambiguity, thus, included two levels: unambiguous (red and blue S1s) vs. ambiguous (coral and turquoise S1s). Participants were not warned about the exposure to new colors. In the test phase, S1 color was moderately predictive of S2 valence, since S1-S2 congruency was fixed at 75% (same S1 color-S2 valence pairings in 75% of the trials). Color-valence pairings were counterbalanced between subjects, and participants were left uninstructed about the S1-S2 predictive ratio. Notably, the new 75% ratio implied that CG participants moved from a reliable predictive relationship (100%, experienced during the learning phase), to a new, more uncertain context (in which the previously learned predictive models were sometimes violated). UG participants, instead, moved from an unreliable predictive relationship (50%), to a more predictive context. Moreover, the 75% is equidistant from the predictive ratios experienced by the CG and the UG (it lies in the middle between 100% and 50%; see [[18]] for further details on this choice). Fig 1A shows a schematic representation of the experimental paradigm.
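To make the contingency manipulation concrete, the trial lists for the two phases could be generated along the following lines. This is an illustrative sketch (the experiment itself was implemented in OpenSesame); function and variable names are assumptions, and the ambiguous/unambiguous color split of the Experiment 1 test phase is omitted for brevity.

```r
# Illustrative sketch of S1-S2 trial-list generation with exact congruency
# ratios; not the original OpenSesame implementation.
make_trials <- function(n_trials, congruency) {
  per_cue     <- n_trials / 2
  n_congruent <- round(per_cue * congruency)
  one_cue <- function(cue, paired, other) {
    data.frame(cue = cue,
               s2  = c(rep(paired, n_congruent),
                       rep(other,  per_cue - n_congruent)))
  }
  # Each cue is paired with "its" valence on the congruent trials and with the
  # opposite valence on the remaining ones; trial order is then randomized.
  trials <- rbind(one_cue("cue_neg", "negative", "neutral"),
                  one_cue("cue_neu", "neutral",  "negative"))
  trials[sample(nrow(trials)), ]
}

learning_cg <- make_trials(40, 1.00)  # certain group: 100% S1-S2 congruency
learning_ug <- make_trials(40, 0.50)  # uncertain group: 50% congruency
test_phase  <- make_trials(80, 0.75)  # test phase (both groups): 75% congruency
```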

Fig 1. a) Example sequence of events and their duration for a trial of Experiment 1, according to the phase (learning, test), and the group (CG, UG). During the learning phase participants experienced a 100% (CG) vs. 50% (UG) affective contingency between S1 color (red or blue) and S2 valence (Neg or Neu), according to group assignment. During the test phase all participants were presented with unambiguous (i.e., red or blue) and ambiguous (i.e., coral or turquoise) S1s, and the S1-S2 affective contingency was fixed at 75%. Participants were asked to answer the VASs either during the ISI for half of the trials (expectancy ratings), or right after the S2 for the other half of the trials (valence and arousal ratings). Response times were self-paced. b) Example sequence of events and their duration for a trial of Experiment 2, according to the phase (learning, test), and the group (CG, UG). During the learning phase participants experienced a 100% (CG) vs. 50% (UG) affective contingency between S1 color (red or blue) and S2 valence (Neg or Neu), according to group assignment. Then, after the S2, they were presented with the parity judgment task. During the test phase the S1-S2 affective contingency was fixed at 75%. Participants were asked to answer the VASs either during the ISI for half of the trials (expectancy ratings), or right after the S2 for the other half of the trials (valence and arousal ratings). Response times were self-paced. ISI = inter-stimulus interval, ITI = inter-trial interval, VAS = visual analogue scale. The text and the pictures are not drawn to scale.

After the test phase, participants were redirected to a Qualtrics survey (Qualtrics, Provo, UT; http://www.qualtrics.com/), in which they were asked to provide demographic information (age, gender) and to complete some mood and trait measures. In particular, participants completed the Intolerance of Uncertainty Scale (IUS-12) [[38]] and the Depression, Anxiety and Stress Scale (DASS-21) [[39]], as both intolerance of uncertainty and negative affectivity are known to be potential moderating factors of affective predictions [[40]–[44]]. Also, participants were asked a forced-choice question about whether and in how many trials they had experienced any technical issues with the Internet connection and/or with picture uploading (response options: "No, everything worked fine!", "Yes, in less than 25% of the trials", "Yes, between 25% and 50% of the trials", "Yes, between 50% and 75% of the trials", "Yes, in more than 75% of the trials"). At the end of the survey, participants were thanked, and redirected to Prolific to receive their payment. The experiment lasted about 30 minutes.

Data analysis

The study had a 2 (group, between-subjects: CG vs. UG) × 2 (cue ambiguity, within-subjects: ambiguous vs. unambiguous) × 2 (S2 valence, within-subjects: Neg vs. Neu) mixed design. The analysis plan was pre-registered on OSF (https://osf.io/gdr3b/).

As pre-registered, univariate outliers (i.e., expectancy ratings) were detected through Median Absolute Deviation values (MAD > 3), and multivariate outliers (i.e., valence and arousal ratings) through the Mahalanobis-Minimum Covariance Determinant (MMCD, breakdown point 0.25) (R package: Routliers; [[45]]). We identified 12 univariate outliers, and removed them from data analysis. We also identified 21 multivariate outliers. However, from the visual inspection of their ratings, they emerged as "potentially interesting outliers" (see [[45]]), since they showed only a slightly different relationship between valence and arousal ratings as compared to other participants. Thus, for this reason and given that none of them significantly impacted the models' estimates (as assessed through Cook's distance, see below), we chose to keep their data in the analysis. Overall, data from 109 participants were included in the analyses.
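For illustration, the pre-registered outlier screening can be reproduced with the Routliers package roughly as follows; the data frame of per-participant mean ratings (d) and its column names are assumptions, and the shared OSF script remains the reference.

```r
# Sketch of the pre-registered outlier screening (Routliers); 'd' and its
# columns are assumed per-participant mean ratings.
library(Routliers)

# Univariate outliers on expectancy ratings: Median Absolute Deviation, MAD > 3.
mad_out <- outliers_mad(x = d$expectancy, threshold = 3)
mad_out   # printing the object lists the flagged values

# Multivariate outliers on valence and arousal ratings: Mahalanobis distances
# based on the Minimum Covariance Determinant (breakdown point 0.25, i.e. h = 0.75).
mcd_out <- outliers_mcd(x = data.frame(d$valence, d$arousal), h = 0.75)
mcd_out
```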

To test our hypotheses (H1, H2, H3, H4), we fitted the following linear mixed-effects models (LMMs) for each dependent variable (DV) (R package: lme4; [[46]]):

  • expectancy ratings (H1): group, cue color and their interaction as fixed factors, and random slopes for cue color within participants;
  • expectancy ratings (H2): group, cue ambiguity and their interaction as fixed factors, random slopes for cue ambiguity within participant;
  • valence ratings (H3): group, cue ambiguity, S2 valence and their interaction as fixed factors, random slopes for S2 valence within participant;
  • arousal ratings (H4): group, cue ambiguity, S2 valence and their interaction as fixed factors, random slopes for S2 valence within participant.
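In lme4 syntax, the four models listed above correspond roughly to the following calls. This is a sketch under assumed column names (id, group, cue_color, cue_ambiguity, s2_valence); the pre-registered analysis script shared on OSF is authoritative.

```r
# Sketch of the four confirmatory LMMs; column names are assumptions.
library(lme4)

m_h1 <- lmer(expectancy ~ group * cue_color +
               (1 + cue_color | id), data = d_test)           # H1
m_h2 <- lmer(expectancy ~ group * cue_ambiguity +
               (1 + cue_ambiguity | id), data = d_test)        # H2
m_h3 <- lmer(valence ~ group * cue_ambiguity * s2_valence +
               (1 + s2_valence | id), data = d_test)           # H3
m_h4 <- lmer(arousal ~ group * cue_ambiguity * s2_valence +
               (1 + s2_valence | id), data = d_test)           # H4
```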

The factor "cue ambiguity" codes for the ambiguity of the cue (within-subjects: ambiguous—that is, coral or turquoise S1 color vs. unambiguous—that is, red or blue S1 color). In the analysis script (shared in OSF https://osf.io/gdr3b/) this factor was called "cue". Here, for the sake of clarity, we renamed it as "cue ambiguity". The factor "cue color" codes for the predictive meaning of the cue according to the specific color-valence pairings presented to participants in the test phase (within-subjects: red and coral—that is, a cue that preceded negative stimuli in the test phase vs. blue and turquoise—that is, a cue that preceded neutral stimuli in the test phase). In the analysis script (shared in OSF https://osf.io/gdr3b/) this factor was called "S1 color". Here, for the sake of clarity, we renamed it as "cue color". Please note that to compensate for the counterbalancing of S1 color–S2 valence pairings, in the analysis we re-coded as red all the unambiguous cues preceding negative pictures, as coral all the ambiguous cues preceding negative pictures, as blue all the unambiguous cues preceding neutral pictures, and as turquoise all the ambiguous cues preceding neutral pictures, irrespective of the actual color of the cue (see [[18]] for further details on the cue factor).

The other pre-registered exploratory analyses regarding the effects of time, S2 congruency (expected vs. unexpected S2s), IUS, and DASS-21 questionnaires are reported in S7–S8 Tables. Since the results of these analyses are mainly null, or redundant with respect to both the confirmatory models and the results of our previous work [[18]], these analyses will not be discussed further in the present manuscript. As pre-registered, for each model we evaluated influential cases through Cook's distance (>1). No influential cases emerged. As pre-registered, model effects were tested by means of F-tests and p-values, calculated via Satterthwaite's degrees of freedom method (α =.05, R package: lmerTest; [[47]]). For each model we reported the estimated parameters with 95% confidence intervals (CI), and marginal and conditional R2 (estimated as in [[48]]).
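As an indication of how these quantities can be obtained in R, one possible workflow is sketched below, assuming the models above are re-fitted with lmerTest loaded (so that Satterthwaite degrees of freedom are available). Whether MuMIn::r.squaredGLMM matches the exact R2 approach of [[48]] is an assumption.

```r
# Sketch of the effect tests and fit indices (assumes m_h1 from the sketch
# above, re-fitted after loading lmerTest so that anova() returns
# Satterthwaite-based F-tests and p-values).
library(lmerTest)
library(MuMIn)   # r.squaredGLMM(): one common way to get marginal/conditional R2

m_h1 <- lmer(expectancy ~ group * cue_color + (1 + cue_color | id), data = d_test)

anova(m_h1)                      # F-tests, Satterthwaite degrees of freedom
confint(m_h1, method = "Wald")   # 95% CIs for the fixed-effect estimates
r.squaredGLMM(m_h1)              # marginal and conditional R2
```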

Results

Descriptive statistics are provided in Table 1.


Table 1 Descriptive statistics of Experiment 1.

Group | Cue ambiguity | S2 valence | Expectancy M (SD) | Valence M (SD) | Arousal M (SD)
CG | Ambiguous | Neg | 52.59 (28.74) | 35.51 (29.41) | 54.98 (25.63)
CG | Ambiguous | Neu | 48.23 (28.64) | 56.31 (29.9) | 42.1 (26.49)
CG | Unambiguous | Neg | 57.18 (30.62) | 31.62 (27.36) | 58.01 (25.07)
CG | Unambiguous | Neu | 48.95 (30.5) | 56.36 (28.47) | 41.17 (26.12)
UG | Ambiguous | Neg | 52.74 (25.38) | 35.56 (27.26) | 55.08 (23.16)
UG | Ambiguous | Neu | 48.76 (26.4) | 54.07 (26.47) | 44.89 (23.64)
UG | Unambiguous | Neg | 54.03 (26) | 32.96 (25.96) | 57.99 (22.84)
UG | Unambiguous | Neu | 50.13 (25.88) | 55.73 (24.99) | 43.16 (22.74)

1 For each group (CG, UG) and experimental condition (ambiguous/unambiguous cue, negative/neutral S2 valence) we report the mean (M) and standard deviation (SD) of expectancy, valence and arousal ratings.

All the models are summarized in Table 2 and Fig 2. For the expectancy model testing H1 (R2 marginal = 0.190, R2 conditional = 0.452), we found a main effect of cue color (F(3, 107) = 33.24, p <.001), better specified by a significant interaction between group and cue color (F(3, 107) = 4.05, p =.009). Post-hoc contrasts confirmed our previous results [[18]], showing that participants in the CG reported significantly more extreme expectancy ratings as compared with participants in the UG, but only in the case of unambiguous cues. Indeed, the CG showed more negative expectancy ratings than the UG after red cues (i.e., unambiguous cues preceding negative pictures; CG—UG = 6.78, SE = 3.05, t(107) = 2.22, p =.028), but not after coral cues (i.e., ambiguous cues preceding negative pictures; CG—UG = 5.19, SE = 2.68, t(107) = 1.93, p =.056); and it showed less negative expectancy ratings than the UG after blue cues (i.e., unambiguous cues preceding neutral pictures; CG—UG = -10.07, SE = 3.10, t(107) = -3.25, p =.002), but not after turquoise cues (i.e., ambiguous cues preceding neutral pictures; CG—UG = -3.46, SE = 3.31, t(107) = -1.05, p =.297). Thus, contrary to H1, we did not find evidence for a strong generalization of the learned predictive meaning of cues to new, ambiguous ones. For the expectancy model testing H2 (R2 marginal = 0.006, R2 conditional = 0.056), we found a main effect of cue ambiguity (F(1, 107) = 13.19, p <.001): ambiguous cues elicited less negative expectancy ratings than unambiguous ones (ambiguous—unambiguous = -4.08, t(107) = -3.63, p <.001, 95% CI = [-6.31, -1.85]). Thus, the expectancy model supports H2, suggesting a reduced generalization on expectancy ratings to ambiguous cues within both groups.

Fig 2. Points represent the mean estimated value for each participant and condition.


Table 2 Results of LMMs on expectancy, valence and arousal ratings in Experiment 1.

Parameter | Estimate | SE | Statistic | df | p | 95% CI

Expectancy (H1)
Intercept | 52.17 | 0.61 | 85.91 | 107.01 | < 0.001 | [50.97, 53.38]
UG–CG | 0.39 | 1.21 | 0.32 | 107.01 | 0.746 | [-2.01, 2.80]
Coral–Blue | 19.00 | 2.39 | 7.96 | 107.00 | < 0.001 | [14.27, 23.72]
Red–Blue | 25.91 | 2.76 | 9.38 | 107.21 | < 0.001 | [20.43, 31.38]
Turquoise–Blue | -1.25 | 1.50 | -0.83 | 107.05 | 0.407 | [-4.23, 1.73]
UG–CG x Coral–Blue | -15.26 | 4.77 | -3.20 | 107.00 | 0.002 | [-24.72, -5.80]
UG–CG x Red–Blue | -16.85 | 5.53 | -3.05 | 107.21 | 0.003 | [-27.81, -5.90]
UG–CG x Turquoise–Blue | -6.61 | 3.01 | -2.20 | 107.05 | 0.03 | [-12.58, -0.65]
σ ID | 5.40
σ Coral–Blue | 23.06
σ Red–Blue | 27.09
σ Turquoise–Blue | 12.59
σ residual | 20.82

Expectancy (H2)
Intercept | 52.17 | 0.61 | 85.91 | 107.00 | < 0.001 | [50.97, 53.38]
UG–CG | -0.39 | 1.21 | -0.32 | 107.00 | 0.746 | [-2.80, 2.01]
Amb–Unamb | -4.08 | 1.12 | -3.63 | 107.00 | < 0.001 | [-6.31, -1.85]
group x cue | 2.51 | 2.25 | 1.12 | 107.00 | 0.266 | [-1.94, 6.97]
σ ID | 4.62
σ cue | 7.89
σ residual | 27.27

Valence (H3)
Intercept | 44.67 | 0.49 | 90.57 | 107.53 | < 0.001 | [43.70, 45.65]
UG–CG | 0.98 | 0.99 | 0.99 | 107.53 | 0.322 | [-0.97, 2.94]
Amb–Unamb | 1.40 | 0.49 | 2.87 | 4138.00 | 0.004 | [0.45, 2.36]
neg–neu | -46.35 | 1.47 | -31.58 | 107.23 | < 0.001 | [-49.26, -43.44]
group x cue | 0.85 | 0.98 | 0.87 | 4138.00 | 0.383 | [-1.06, 2.77]
valence x group | -5.50 | 2.94 | -1.87 | 107.23 | 0.064 | [-11.32, 0.32]
cue x valence | -2.67 | 0.98 | -2.73 | 4138.00 | 0.006 | [-4.58, -0.75]
group x cue x valence | -2.40 | 1.95 | -1.23 | 4138.00 | 0.219 | [-6.23, 1.43]
σ ID | 4.47
σ valence | 14.43
σ residual | 16.03

Arousal (H4)
Intercept | 49.59 | 0.86 | 57.89 | 107.19 | < 0.001 | [47.89, 51.29]
UG–CG | -1.62 | 1.71 | -0.95 | 107.19 | 0.346 | [-5.02, 1.78]
Amb–Unamb | -1.52 | 0.51 | -2.97 | 4138.00 | 0.003 | [-2.53, -0.52]
neg–neu | 28.71 | 1.83 | 15.68 | 107.17 | < 0.001 | [25.08, 32.34]
group x cue | -0.35 | 1.02 | -0.34 | 4138.00 | 0.733 | [-2.36, 1.66]
valence x group | 4.67 | 3.66 | 1.27 | 107.17 | 0.205 | [-2.59, 11.93]
cue x valence | -1.66 | 1.02 | -1.62 | 4138.00 | 0.105 | [-3.67, 0.35]
group x cue x valence | 1.69 | 2.05 | 0.83 | 4138.00 | 0.409 | [-2.32, 5.70]
σ ID | 8.53
σ valence | 18.34
σ residual | 16.80

2 For each model, we reported the unstandardized regression coefficients, standard errors (SE), 95% confidence intervals (CI), and the associated t-test.

For the valence model testing H3 (R2 marginal = 0.621, R2 conditional = 0.704), we found a main effect of cue ambiguity (F(1, 4138) = 8.26, p =.004), and a main effect of S2 valence (F(1, 107) = 997.35, p <.001), better specified by a significant interaction between cue ambiguity and S2 valence (F(1, 4138) = 7.45, p =.006). In particular, we found evidence for less unpleasant valence ratings to neutral pictures presented after ambiguous cues as compared to unambiguous ones (Neu: ambiguous vs. unambiguous = 2.74, t(4138) = 3.96, p <.001, 95% CI = [1.38, 4.09]), while valence ratings to negative pictures were not affected by cue ambiguity (Neg: ambiguous–unambiguous = 0.071, t(4138) = 0.103, p =.918, 95% CI = [-1.28, 1.43]). Thus, cue ambiguity actually modulated valence ratings in both groups, but in the direction opposite to H3, with ambiguous cues leading subsequent neutral pictures to elicit less unpleasant subjective ratings.

For the arousal model testing H4 (R2 marginal = 0.325, R2 conditional = 0.566), we found a main effect of S2 valence (F(1, 107) = 245.75, p <.001), with negative pictures eliciting higher arousal ratings than neutral pictures (Neg-Neu = 28.7, SE = 1.83, t(107) = 15.68, p <.001). We also found a main effect of cue ambiguity (F(1, 4138) = 8.85, p =.003): ambiguous cues elicited lower arousal ratings than unambiguous cues (ambiguous–unambiguous = -1.52, SE = 0.51, t(4138) = -2.98, p =.003).

Discussion

The findings obtained in Experiment 1 suggest that no strong generalization of subjective expectancies from the specific features of the learning phase to the new ambiguous features of the test phase emerged. Indeed, when testing the effects of cue predictive meaning (i.e., cue color) on expectancy ratings, we replicated our previous results [[18]] showing that participants in the CG reported more extreme expectancies than the UG, but only for unambiguous cues. This may seem in contrast with predictive models of emotion [[3]] (see H1), since one may think that an efficient predictive model must be generalizable across different contexts. However, recent contributions [[49]–[51]] have highlighted how affective predictions are inherently and indissolubly context-sensitive, being tied to the specific situation in which they develop. Thus, as an alternative explanation consistent with previous literature [[26]], it may be that the novelty of the stimuli employed as ambiguous cues in the testing phase (namely, current information) prevails over past information in shaping subjective affective ratings. Moreover, cue ambiguity per se (regardless of cue predictive meaning) did not interact with the uncertainty of previous experience in shaping subjective affective ratings. In fact, no significant group by cue interaction was found in any of the models. We found instead that ambiguous cues elicited less negative expectancy ratings in the generation-implementation stage, less unpleasant valence to neutral stimuli and overall lower arousal in the updating stage, independently of previous learning. Thus, cue ambiguity appears to elicit a dampened subjective affective experience at all stages.

With regard to subjective expectancy (i.e., expectancy ratings), these results are consistent with our hypothesis (see H2, [[26]]); with regard to subjective reactions to new stimuli (i.e., valence and arousal ratings), they are instead partially in contrast (see H3, [[26]]). It must be noted, however, that in our paradigm we manipulated previous experience in a separate learning phase and asked for a trial-by-trial rating of experienced valence (and arousal) to S2s, whereas Chen and Lovibond [[26]] did not employ a learning paradigm and asked for a post-experiment mood rating. Thus, the different nature of the paradigm and of the ratings requested from participants may account for the opposing effects found. Moreover, another S1-S2 study [[52]] manipulating the ambiguity of the target affective pictures found that ambiguous pictures were rated as less unpleasant and less arousing than unambiguous pictures, consistent with what we found following ambiguous cues. These effects can be explained in light of the time-related distinction between ambiguity and uncertainty (as proposed in [[53]]). Ambiguity, on the one hand, refers to a static feature of the here and now, embedded in the present moment: an ambiguous situation or stimulus is characterized by novelty, unpredictability and also uncertainty [[53]]. Uncertainty, on the other hand, refers to a future-oriented feature, characterized by unpredictability but not necessarily also by ambiguity [[53]]. In Experiment 1, we manipulated uncertainty during the learning phase (where participants were not asked for any rating), and ambiguity during the test phase (where we asked for subjective affective ratings). As a consequence, and in line with the results of our previous research [[18]], it might be that current (i.e., ambiguous vs. unambiguous) information prevailed over past (i.e., certain vs. uncertain) information in shaping subjective affective experience. Moreover, it is interesting to note that the role of cue ambiguity in dampening subjective valence was exclusively expressed with regard to neutral stimuli. This may be due to a ceiling effect in valence ratings elicited by negative pictures, which may have masked any modulating contribution of cue ambiguity. Neutral pictures, instead, elicited more variable valence ratings, allowing more subtle modulations by cue ambiguity to emerge statistically.

Experiment 2

Method

Experiment 2 investigated whether experiencing implicit certain vs. uncertain probabilistic relationships between stimuli might influence subjective ratings to future affective predictions. More specifically, to ensure an implicit exposure to the probabilistic relationship between S1 and S2, during the learning phase we engaged participants in a distracting task (a parity judgment task, see Stimulus material and procedure below for details). Then, we tested if participants were able to implicitly extract the (un)certain probabilistic information available during the learning phase, and to use it in the test phase to rate the expected valence of new affective events, or the subjective valence and arousal to new affective stimuli.

We pre-registered the study on OSF (https://osf.io/z5esb/). We performed confirmatory analyses to test seven pre-registered hypotheses. The first two hypotheses concerned behavioral performance on the parity judgment task. We hypothesized to find (H1a) faster RTs in the CG as compared to the UG, and (H1b) no differences in accuracy between the groups [[22]]. This would be in line with the predictive models' assumption that the two contingencies (100% and 50%) can be experienced implicitly without any significant difference between them [[6]]. The remaining hypotheses regarded the test phase. As a third hypothesis (H2a), we expected to find more negative expectancy ratings in the UG as compared to the CG [[21], [24], [55]]. The fourth hypothesis (H2b), opposed to H2a, was that participants in the CG would show more negative expectancy ratings after the cues which were previously paired with a negative picture during the learning phase [[18]]. This hypothesis is coherent with predictive models of emotion [[3]], according to which we can expect that contingencies of the learning phase would be implicitly inferred. The fifth hypothesis (H3a) was that participants in the UG would show higher arousal and more unpleasant valence ratings to S2s than participants in the CG [[33]]. The sixth hypothesis (H3b), opposed to H3a, was that participants in the CG would show higher arousal and more unpleasant valence ratings to S2s, as compared to participants in the UG [[27], [29]–[32]]. The last hypothesis (H3c), opposed to both H3a and H3b, was to find only a main effect of S2 valence, with significantly higher arousal and more unpleasant valence ratings to negative S2s independently of the experimental group [[18], [23]].

Participants

We computed the required sample size through an a priori pre-registered (https://osf.io/z5esb/) power analysis for GLMMs (see the Participants section of Experiment 1), estimating parameters from pilot data (N = 18). We recruited 125 adult participants through the provider platform Prolific (Prolific, Oxford, UK; http://www.prolific.co/) in August 2021. Participants were screened for vision difficulties, including color blindness. Researchers had access only to the following participants' personal information: Prolific IDs, age, gender, country of birth, country of residence, nationality, employment status and languages spoken. Thus, no identifying information was provided or collected. To be included in Experiment 2, participants must not have taken part in Experiment 1. Data from 9 participants were discarded according to the pre-registered exclusion criteria: scoring lower than -2 SD from the mean accuracy in the parity judgment task (N = 7), reporting internet/uploading issues in more than 25% of the experimental trials (N = 2), or reporting having noticed the exact probabilistic relationship between S1 color and S2 valence during the learning phase (N = 0). The final sample included 116 participants (58 males, age: M = 25.06, SD = 7.63, range = 18–55; CG N = 55, UG N = 54). All participants gave their written informed consent before starting the experiment, and were paid £2.13 for their participation. All experimental procedures were conducted in accordance with the Declaration of Helsinki, and approved by the local Ethical Committee (protocol no. 4177).

Stimulus material and procedure

Materials and procedures were the same as in Experiment 1 (see Stimulus Material and Procedure above), but in Experiment 2 we employed only unambiguous (i.e., red and blue) cues. During the learning phase a parity judgment task was introduced. Single digits from 1 to 9 were employed as stimuli. Participants were informed that they would see a sequence of stimuli on the screen: a colored circle, followed by a picture, and then a number. They were instructed to look at the screen and judge whether the number was odd or even, by pressing the 'Z' or 'M' keys. Response keys were counterbalanced between subjects. Instructions were followed by a practice session of 3 trials, in which participants were trained to give their parity judgments and received feedback on their performance. Then, the learning session started, with the same trial structure, timing, and number of trials as in Experiment 1. In each trial, after the S2, a second ISI with a random duration between 500 and 800 msec followed, in which the screen remained gray. Then, a random digit between 1 and 9 was presented in the center of the screen for 200 msec, and participants had up to 1500 msec to judge the digit's parity by pressing the 'Z' or 'M' keys. Participants were left uninstructed about the S1-S2 ratios (100% for the CG, 50% for the UG). Fig 1B shows a schematic representation of the experimental paradigm.

At the end of the test phase, participants were directed to the Qualtrics survey (see Stimulus Material and Procedure above). Here, we added an open question as a manipulation check: participants were asked whether they caught any relationship between the color of the circle and the affective valence of the pictures during the learning phase. Participants were then redirected back to Prolific to receive their payment. The experiment lasted about 30 minutes.

Data analysis

The study had a 2 (group, between-subjects: CG vs. UG) × 2 (S2 valence, within-subjects: Neg vs. Neu) mixed design. The analysis plan was pre-registered on OSF (https://osf.io/z5esb/).

During data pre-processing, participants' verbatim responses to the manipulation check question (see Stimulus Material and Procedure above) were qualitatively analyzed by the experimenter, and coded as "yes" or "no" according to their content. No response was coded as "yes", indicating that no participant reported having noticed the probabilistic relationship between S1 and S2.

Following the pre-registered procedures for outlier detection and management, we detected 5 univariate outliers (MAD > 3) that were excluded from data analysis. We also detected 36 multivariate outliers (MMCD, breakdown point 0.25). From the visual inspection of their ratings it emerged that 2 of them had reversed the scales' poles ("error outliers", see [[45]]), and they were therefore removed from data analysis. The remaining multivariate outliers showed only a slightly different relationship between valence and arousal ratings as compared to other participants, thus we chose to keep them in the data analysis ("potentially interesting outliers", see [[45]]), since none of them impacted the models' estimates (as assessed through Cook's distance). Overall, data from 109 participants were included in the analyses.

Before performing the analyses, we pre-processed RTs to the parity judgment task according to the following pre-registered steps: (i) trimming RTs outside the 100–1500 msec range [[57]]; (ii) discarding RTs of incorrect trials; (iii) adjusting RTs for the speed-accuracy trade-off by means of the Inverse Efficiency Score (IES) transformation [[58]]; (iv) log-transforming IES to account for their skewed distribution [[57], [59]].
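A minimal dplyr sketch of these four steps could look as follows, assuming a trial-level data frame rt_data with columns id, rt (in msec), and correct (0/1); the aggregation level (here, per participant) is also an assumption, and the pre-registered script on OSF is authoritative.

```r
# Sketch of the pre-registered RT pre-processing; IES = mean correct RT divided
# by the proportion of correct responses, then log-transformed.
library(dplyr)

ies_by_id <- rt_data %>%
  filter(rt >= 100, rt <= 1500) %>%            # (i) trim implausible RTs
  group_by(id) %>%
  summarise(
    prop_correct = mean(correct),
    mean_rt      = mean(rt[correct == 1]),     # (ii) correct trials only
    ies          = mean_rt / prop_correct,     # (iii) inverse efficiency score
    log_ies      = log(ies)                    # (iv) log-transform
  )
```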

In order to test our a-priori hypotheses (H1a and H1b on parity judgment task; H2a, H2b, H3a, H3b and H3c on test phase ratings), for each DV we fitted the following (G)LMMs (R package: lme4; [[46]]):

  • log-transformed IES (H1a): group as fixed factor, random intercept for participant;
  • accuracy (H1b): logistic regression with group as fixed factor, random intercept for participant;
  • expectancy ratings (H2a, H2b): group, cue (within-subjects: cueneg vs. cueneu) and their interaction as fixed factors, random slopes for cue within participant;
  • valence and arousal ratings (H3a, H3b, H3c): group, S2 valence and their interaction as fixed factors, random slopes for S2 valence within participant.
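In lme4 syntax, the models listed above correspond roughly to the following calls (column names and data frames are assumptions; IES values are assumed to be available at a finer grain than one per participant, e.g. per block, so that a random intercept is identifiable):

```r
# Sketch of the Experiment 2 confirmatory (G)LMMs; data frames and column
# names are assumptions, the shared OSF script is authoritative.
library(lme4)

m_ies <- lmer(log_ies ~ group + (1 | id), data = d_ies)                    # H1a
m_acc <- glmer(correct ~ group + (1 | id), family = binomial,
               data = d_learn)                                             # H1b
m_exp <- lmer(expectancy ~ group * cue + (1 + cue | id), data = d_test)    # H2a, H2b
m_val <- lmer(valence ~ group * s2_valence + (1 + s2_valence | id),
              data = d_test)                                               # H3a-c
m_aro <- lmer(arousal ~ group * s2_valence + (1 + s2_valence | id),
              data = d_test)                                               # H3a-c

# GLMM effects via Type II analysis of deviance (Wald chi-square tests).
car::Anova(m_acc, type = "II")
```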

The factor "cue" codes for the predictive meaning of the cue according to the specific color-valence pairings experienced by participants in the test phase (within-subjects: cueneg−that is, a cue that preceded negative stimuli in the test phase vs. cueneu−that is, a cue that preceded neutral stimuli in the test phase). In the analysis script (shared on OSF https://osf.io/z5esb/) this factor was called "S1 color". Here, for the sake of clarity, we renamed it as "cue" (see [[18]] and above for further details on the cue factor, and on how we compensated for the counterbalancing of S1 color–S2 valence pairings in the analysis).

The pre-registered exploratory analyses regarding the effects of time, S2 congruency (expected vs. unexpected S2s), IUS, and DASS-21 questionnaires are reported in S3–S6 Tables. Since the results of these analyses are mainly null, or redundant with respect to both the confirmatory models reported below and the results of our previous work [[18]], these analyses will not be discussed further in the present manuscript.

The criteria for identifying influential cases for each model, as well as the methods used to test model effects, were pre-registered. For each model no influential cases emerged, as evaluated through Cook's distance (>1). LMM effects were tested by means of F-tests and p-values, calculated via Satterthwaite's degrees of freedom method (α =.05, R package: lmerTest; [[47]]), while GLMM effects were evaluated through Type II Analysis of Deviance (R package: car; [[60]]). For each model we reported the estimated parameters with 95% CI, and marginal and conditional R2 (estimated as in [[48]]).

Results

Descriptive statistics are provided in Table 3.


Table 3 Descriptive statistics of Experiment 2.

Group | Cue | S2 valence | Expectancy M (SD) | Valence M (SD) | Arousal M (SD)
CG | cueneg | Neg | 56.78 (25.98) | 32.56 (28.44) | 56.64 (25.02)
CG | cueneg | Neu | 57.93 (25.2) | 58.51 (28.48) | 39.35 (25.95)
CG | cueneu | Neg | 48.65 (26.38) | 36.21 (28.24) | 55.4 (24.91)
CG | cueneu | Neu | 49.34 (25.73) | 56.28 (28.23) | 41.3 (24.76)
UG | cueneg | Neg | 57.1 (29) | 33.23 (28.84) | 56.16 (25.89)
UG | cueneg | Neu | 58.08 (28.8) | 59.38 (29.49) | 39.89 (25.55)
UG | cueneu | Neg | 48.44 (30.67) | 35.87 (28.22) | 54.2 (25.53)
UG | cueneu | Neu | 49.8 (29.35) | 58.3 (28.4) | 40.65 (25.02)

3 For each group (CG, UG) and experimental condition (cueneg/cueneu, negative/neutral S2 valence) we report the mean (M) and standard deviation (SD) of expectancy, valence and arousal ratings.

All the models are summarized in Table 4 and Fig 3. For the IES model testing H1a (R2 marginal = 0.003, R2 conditional = 0.968), we did not find any effect of group (F(1, 105) = 0.33, p =.565). Thus, the IES model does not support the hypothesis of faster RTs in the CG as compared to the UG (H1a). We found the same result for the accuracy model testing H1b (R2 marginal = 0.009, R2 conditional = 0.486), with no significant effect of group (χ2 = 1.71, p =.191). Therefore, we confirmed our hypothesis (H1b) of not finding group differences in accuracy scores on the parity judgment task.

Fig 3. Points represent the mean estimated value for each participant and condition.


Table 4 Results of confirmatory (G)LMMs on IES, accuracy, expectancy, valence and arousal ratings in Experiment 2.

Parameter | Estimate | SE | Statistic | df | p | 95% CI

IES (H1a)
Intercept | 6.61 | 0.08 | 84.06 | 105.24 | < 0.001 | [6.46, 6.77]
UG–CG | -0.09 | 0.16 | -0.58 | 105.24 | 0.565 | [-0.40, 0.22]
σ ID | 0.82
σ residual | 0.15

Accuracy (H1b)
Intercept | 2.35 | 0.18 | 12.95 | | < 0.001 | [2.00, 2.71]
UG–CG | 0.47 | 0.36 | 1.31 | | 0.191 | [-0.23, 1.18]
σ ID | 1.75

Expectancy (H2a, H2b)
Intercept | 53.36 | 0.85 | 62.75 | 107.01 | < 0.001 | [51.68, 55.05]
UG–CG | -0.17 | 1.70 | -0.10 | 107.01 | 0.922 | [-3.54, 3.20]
cueneg–cueneu | 14.55 | 1.96 | 7.42 | 107.00 | < 0.001 | [10.66, 18.44]
cue x group | -0.24 | 3.92 | -0.06 | 107.00 | 0.952 | [-8.02, 7.54]
σ ID | 8.03
σ cue | 19.04
σ residual | 23.92

Valence (H3a, H3b, H3c)
Intercept | 44.99 | 0.55 | 82.53 | 107.02 | < 0.001 | [43.91, 46.07]
UG–CG | 0.24 | 1.09 | 0.22 | 107.02 | 0.83 | [-1.93, 2.40]
Neg–Neu | -47.10 | 1.42 | -33.19 | 107.02 | < 0.001 | [-49.91, -44.28]
valence x group | 0.79 | 2.84 | 0.28 | 107.02 | 0.782 | [-4.84, 6.41]
σ ID | 4.98
σ valence | 13.75
σ residual | 17.45

Arousal (H3a, H3b, H3c)
Intercept | 48.95 | 0.76 | 64.77 | 106.96 | < 0.001 | [47.45, 50.45]
UG–CG | -0.49 | 1.51 | -0.33 | 106.96 | 0.745 | [-3.49, 2.50]
Neg–Neu | 30.86 | 1.57 | 19.60 | 107.01 | < 0.001 | [27.74, 33.98]
valence x group | 1.47 | 3.15 | 0.47 | 107.01 | 0.641 | [-4.77, 7.72]
σ ID | 7.36
σ valence | 15.42
σ residual | 18.03

4 For each model we reported the estimates (unstandardized regression coefficients for LMMs, odds ratio for GLMMs), SE, 95% CI, and the associated statistics (t-test for LMMs, χ2 for GLMMs).

For the expectancy model testing H2a vs. H2b (R2 marginal = 0.068, R2 conditional = 0.267), we only found a main effect of cue (F(1, 107) = 54.98, p <.001): in both groups, participants showed significantly more negative expectancy ratings after the cueneg (i.e., cues preceding negative pictures in the test phase) than after the cueneu (i.e., cues preceding neutral pictures in the test phase) (cueneg vs. cueneu = 14.5, SE = 1.96, t(107) = 7.42, p <.001). Thus, the expectancy model does not support either of our two hypotheses (H2a, H2b).

For both valence (R2 marginal = 0.596, R2 conditional = 0.673) and arousal (R2 marginal = 0.352, R2 conditional = 0.52) models testing H3a vs. H3b vs. H3c, we found a main effect of S2 valence, with both groups reporting significantly greater unpleasantness (F(1, 107) = 1101.78, p <.001; Neg vs. Neu = -47.1, SE = 1.42, t(107) = -33.19, p <.001) and higher arousal (F(1, 107) = 384.03, p <.001; Neg vs. Neu = 30.9, SE = 1.57, t(107) = 19.6, p <.001) towards negative pictures. Thus, valence and arousal models support the hypothesis of no group differences in S2-ratings (H3c).

Discussion

Experiment 2 replicated our previous findings [[18]] for valence and arousal, but not for expectancy ratings. Indeed, both groups showed similar expectancies, predicting upcoming pictures' valence according to the 75% contingencies of the test phase rather than to previously (and implicitly) experienced contingencies. Thus, the results did not support the hypothesis that probabilistic information can be implicitly extracted and subsequently used to modulate subjective affective ratings, as would be expected according to predictive models of emotion [[3], [6]]. They are however consistent with alternative explanations derived from some recent evidence [[61]], according to which awareness seems to be a necessary precursor for learning, especially in the case of repetition learning.

Remarkably, we can reasonably exclude any potential difference in the way the two groups experienced the implicit contingencies during the learning phase. In fact, no significant group differences emerged on the parity judgment task, as measured by RTs (see H1a) and accuracy (see H1b), suggesting that the distracting task acted similarly in the two groups. Demonstrating this is crucial, as judgments about low levels of contingency (e.g., 50%) are more difficult than judgments about high levels of contingency (e.g., 100%) [[62]], and this could have exerted (but did not exert) confounding effects on how contingency learning occurred in the two groups.

General discussion

The studies reported in the present paper attempted to shed further light on the construction of subjective affective experience as conceived within the predictive framework [[3]]. Indeed, little is known about how subjective experience is shaped by new and potentially ambiguous environmental cues, or whether it can be influenced by previous implicit learning, despite both being considered crucial factors for constructing efficient predictions [[1]].

Regarding cue ambiguity (Experiment 1), we found that ambiguous cues elicited a dampened subjective affective experience at all prediction stages, independently of previous learning. Interestingly, when presented with unambiguous cues, participants formerly exposed to certain contingencies (i.e., the CG, who could construct a reliable prediction) showed more extreme expectancy ratings than participants exposed to uncertainty (i.e., the UG). This fully replicates our previous results [[18]], and may be due to a pre-activation of the expected affective experience, as suggested by predictive models of emotion [[3]]. When faced with ambiguous cues, by contrast, participants were unable to implement a reliable prediction and coherently pre-activate the associated subjective experience, which was therefore dampened.

Concerning previous implicit experience (Experiment 2), we did not find any evidence that subjective affective ratings are shaped by the (un)certainty of previous implicit learning. Indeed, all participants reported similar subjective experience. This is consistent with recent findings on repetition learning, which provide strong evidence that awareness is a necessary precursor for learning [[61]], and suggests that affective stimuli call for a prompt response primarily driven by a quick bottom-up evaluation of present inputs rather than by top-down predictions based on past information. However, this does not necessarily exclude that an implicit extraction of statistical regularities from task contingencies occurred in some way (e.g., at a covert level, measurable as patterns of neural/psychophysiological activity). Indeed, there is a body of evidence suggesting that implicit learning may potentially occur across all statistical learning paradigms, and that these two learning processes are associated with the activation of partially overlapping neural networks [[63]]. In spite of this, our results reasonably suggest that, without any explicit focus of attention on previous experience, its effect does not emerge at the subjective level by significantly shaping the reported affective experience. Future investigations should clarify whether implicit learning can shape the construction of affective predictions at other levels.

Overall, our results advance the understanding of the mechanisms underlying subjective affective experience as constructed from prior knowledge. Crucially, we fully replicated our previous results [[18]], collecting additional evidence that subjective expectancies are sensitive to uninstructed (but not implicit) statistical regularities experienced in the past. With this, we further supported predictive models of emotion [[3]]: certain past learning experiences are used to construct highly reliable predictions, thus leading to a coherent pre-activation of the associated subjective experience. However, our results suggest some caution when applying predictive processing principles to the affective domain, especially with regard to the updating stage. Indeed, we found no evidence of a generalization effect to new and potentially ambiguous stimuli, nor evidence of implicit learning effects. Predictive processing theories, by contrast, assume that prediction updating should be modulated by stimulus predictability [[11], [13], [20], [65]]. Nonetheless, recent studies have consistently found that affective processing during the updating stage is not (or only weakly) influenced by stimulus predictability, both at the subjective [[18]] and at the neural level [[20], [41]]. Thus, when it comes to updating, more weight seems to be given to the affective nature of the stimuli themselves and to the contextual information available in the current moment than to top-down predictions based on past experiences. This may be due to the evolutionary relevance of affective stimuli, in that a quick bottom-up evaluation of present inputs may play a more efficient role in promoting survival (and may thus be prioritized).

As a broader implication of our study, and consistent with predictive models of emotion [[3]], we argue that a paradigm shift is needed in the study of emotions: our findings further support the idea that emotions cannot be studied independently of the specific context in which they are constructed [[3], [49], [66]]. Indeed, we found that previous experience failed to generalize to new contexts in an experimental setting that is rather artificial compared to everyday life and that relies on simple, symbolic stimuli. This highlights the importance of taking context carefully into account when studying affective processes in complex situations and/or beyond each specific experimental setup.

Notwithstanding these contributions, some limitations of our research are worth mentioning. First, we did not assess individual differences in cognitive processes that may influence affective predictions (e.g., attention, memory), so we cannot exclude that these processes exerted an influence. However, we can exclude confounding effects of mood and personality traits known to modulate affective predictions (i.e., intolerance of uncertainty and negative affectivity, which we measured and controlled for in our analyses; see S5–S6, S9 and S10 Tables). Also, there are some constraints on the generality of our findings [[67]], since our sample only included WEIRD participants; the lack of participants from other cultural groups prevents us from ascertaining whether our findings generalize more broadly. Second, our paradigm remains quite artificial with respect to real-life situations, in which people often experience multimodal and dynamic affective stimuli. Third, previous experience was manipulated only in terms of (un)certain probabilistic relationships between stimuli (i.e., S1-S2 congruency). However, other characteristics of previous experience, such as the frequency of exposure or the familiarity with the physical environment in which the stimuli are embedded, may be of interest too. Fourth, we manipulated ambiguity only with respect to cues, but ambiguity of targets can also have a great impact on subjective affective experience. Last, we focused only on subjective experience as overtly rated by participants (since we were not able to collect covert, e.g., psychophysiological, measures due to COVID-19 pandemic restrictions), but more subtle modulations may be experimentally captured by also measuring covert processing indices. We thus encourage future studies to complement our results by integrating both subjective and objective measures of affective processing.

Supporting information

S1 Table

List of NAPS picture names used as S2s in Experiment 1 and 2, sorted by valence (Neg = negative, Neu = neutral).

(PDF)

S2 Table

Means (M), standard deviations (SD), and results of two-tailed t-tests assuming unequal variance in luminance, contrast, complexity indices (i.e., JPEG size, entropy), and color space indices (i.e., LABL, LABA, LABB), referred to negative (Neg) and neutral (Neu) NAPS pictures employed as S2s in Experiment 1 and 2.

(PDF)

S3 Table

Pre-registered exploratory models on Block effect in Experiment 1.

(PDF)

S4 Table

Pre-registered exploratory models on S2 Congruency effect in Experiment 1.

(PDF)

S5 Table

Pre-registered exploratory models on Intolerance of Uncertainty Scale (IUS) effect in Experiment 1.

(PDF)

S6 Table

Pre-registered exploratory models on Depression, Anxiety and Stress Scale (DASS-21) effect in Experiment 1.

(PDF)

S7 Table

Pre-registered exploratory models on Block effect in Experiment 2.

(PDF)

S8 Table

Pre-registered exploratory models on S2 Congruency effect in Experiment 2.

(PDF)

S9 Table

Pre-registered exploratory models on Intolerance of Uncertainty Scale (IUS) effect in Experiment 2.

(PDF)

S10 Table

Pre-registered exploratory models on Depression, Anxiety and Stress Scale (DASS-21) effect in Experiment 2.

(PDF)

We kindly thank Filippo Carnovalini for his valuable help with programming the experiments.

Footnotes

1 The authors have declared that no competing interests exist.

References

1. Friston K. The free-energy principle: A unified brain theory? Nature Reviews Neuroscience. 2010;11(2):127–38. doi: 10.1038/nrn2787
2. Shipp S. Neural elements for predictive coding. Frontiers in Psychology. 2016;7:1792. doi: 10.3389/fpsyg.2016.01792
3. Barrett LF. The theory of constructed emotion: an active inference account of interoception and categorization. Social Cognitive and Affective Neuroscience. 2017;12(1):1–23. doi: 10.1093/scan/nsw154
4. Seth AK, Friston KJ. Active interoceptive inference and the emotional brain. Philosophical Transactions of the Royal Society B: Biological Sciences. 2016;371(1708):20160007. doi: 10.1098/rstb.2016.0007
5. Clark A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences. 2013;36(3):181–204. doi: 10.1017/S0140525X12000477
6. Bar M. The proactive brain: memory for predictions. Philosophical Transactions of the Royal Society B: Biological Sciences. 2009;364(1521):1235–43. doi: 10.1098/rstb.2008.0310
7. Pezzulo G, Zorzi M, Corbetta M. The secret life of predictive brains: what's spontaneous activity for? Trends in Cognitive Sciences. 2021;25(9):730–43.
8. Sterling P. Allostasis: A model of predictive regulation. Physiology & Behavior. 2012;106(1):5–15.
9. Sterling P, Laughlin S. Principles of neural design. MIT Press; 2015.
10. Alink A, Schwiedrzik CM, Kohler A, Singer W, Muckli L. Stimulus predictability reduces responses in primary visual cortex. Journal of Neuroscience. 2010;30(8):2960–6. doi: 10.1523/JNEUROSCI.3730-10.2010
11. Kok P, Rahnev D, Jehee JFM, Lau HC, de Lange FP. Attention reverses the effect of prediction in silencing sensory signals. Cerebral Cortex. 2012;22(9):2197–206.
12. Stefanics G, Heinzle J, Horváth AA, Stephan KE. Visual mismatch and predictive coding: A computational single-trial ERP study. Journal of Neuroscience. 2018;38(16):4020–30. doi: 10.1523/JNEUROSCI.3365-17.2018
13. Duma GM, Granziol U, Mento G. Should I stay or should I go? How local-global implicit temporal expectancy shapes proactive motor control: An hdEEG study. NeuroImage. 2020;220:117071. doi: 10.1016/j.neuroimage.2020.117071
14. Mento G, Vallesi A. Spatiotemporally dissociable neural signatures for generating and updating expectation over time in children: A High Density-ERP study. Developmental Cognitive Neuroscience. 2016;19:98–106. doi: 10.1016/j.dcn.2016.02.008
15. LeDoux JE, Hofmann SG. The subjective experience of emotion: a fearful view. Current Opinion in Behavioral Sciences. 2018;19:67–72.
16. Barrett LF, Wilson-Mendenhall CD, Barsalou LW. The conceptual act theory: A roadmap. In: Barrett LF, Russell JA, editors. The psychological construction of emotion. New York, NY: Guilford Press; 2014. p. 83–110.
17. Seth AK. Interoceptive inference, emotion, and the embodied self. Trends in Cognitive Sciences. 2013;17(11):565–73. doi: 10.1016/j.tics.2013.09.007
18. Del Popolo Cristaldi F, Gambarota F, Oosterwijk S. Does your past define you? The role of previous visual experience in subjective reactions to new affective pictures and sounds. Emotion. 2023;23(5):1317–33. doi: 10.1037/emo0001168
19. Mercado F, Hinojosa JA, Peñacoba C, Carretié L. The emotional S1-S2 paradigm for exploring brain mechanisms underlying affective modulation of expectancy. In: Brain Mapping Research Developments. Hauppauge, NY: Nova Science Publishers; 2008. p. 197–209.
20. Del Popolo Cristaldi F, Mento G, Buodo G, Sarlo M. What's next? Neural correlates of emotional predictions: A high-density EEG investigation. Brain and Cognition. 2021;150:105708. doi: 10.1016/j.bandc.2021.105708
21. Dieterich R, Endrass T, Kathmann N. Uncertainty is associated with increased selective attention and sustained stimulus processing. Cognitive, Affective, & Behavioral Neuroscience. 2016;16(3):447–56. doi: 10.3758/s13415-016-0405-8
22. Grupe DW, Nitschke JB. Uncertainty is associated with biased expectancies and heightened responses to aversion. Emotion. 2011;11(2):413–24. doi: 10.1037/a0022583
23. Lin H, Liang J, Jin H, Zhao D. Differential effects of uncertainty on LPP responses to emotional events during explicit and implicit anticipation. International Journal of Psychophysiology. 2018;129:41–51. doi: 10.1016/j.ijpsycho.2018.04.012
24. Qiao Z, Geng H, Wang Y, Li X. Anticipation of uncertain threat modulates subsequent affective responses and covariation bias. Frontiers in Psychology. 2018;9:2547. doi: 10.3389/fpsyg.2018.02547
25. Sarinopoulos I, Grupe DW, Mackiewicz KL, Herrington JD, Lor M, Steege EE, et al. Uncertainty during anticipation modulates neural responses to aversion in human insula and amygdala. Cerebral Cortex. 2010;20(4):929–40. doi: 10.1093/cercor/bhp155
26. Chen JTH, Lovibond PF. Intolerance of uncertainty is associated with increased threat appraisal and negative affect under ambiguity but not uncertainty. Behavior Therapy. 2016;47(1):42–53. doi: 10.1016/j.beth.2015.09.004
27. Bermpohl F, Pascual-Leone A, Amedi A, Merabet LB, Fregni F, Gaab N, et al. Attentional modulation of emotional stimulus processing: An fMRI study using emotional expectancy. Human Brain Mapping. 2006;27(8):662–77. doi: 10.1002/hbm.20209
28. Greenberg T, Carlson JM, Rubin D, Cha J, Mujica-Parodi L. Anticipation of high arousal aversive and positive movie clips engages common and distinct neural substrates. Social Cognitive and Affective Neuroscience. 2015;10(4):605–11. doi: 10.1093/scan/nsu091
29. Johnen AK, Harrison NR. The effects of valid and invalid expectations about stimulus valence on behavioural and electrophysiological responses to emotional pictures. International Journal of Psychophysiology. 2019;144:47–55. doi: 10.1016/j.ijpsycho.2019.08.002
30. Lin H, Liang J, Liu T, Liang Z, Jin H. Cue valence influences the effects of cue uncertainty on ERP responses to emotional events. Frontiers in Human Neuroscience. 2020;14:140. doi: 10.3389/fnhum.2020.00140
31. Lin H, Jin H, Liang J, Yin R, Liu T, Wang Y. Effects of uncertainty on ERPs to emotional pictures depend on emotional valence. Frontiers in Psychology. 2015;6:1927. doi: 10.3389/fpsyg.2015.01927
32. Lin H, Xiang J, Li S, Liang J, Jin H. Anticipation of negative pictures enhances the P2 and P3 in their later recognition. Frontiers in Human Neuroscience. 2015;9:646. doi: 10.3389/fnhum.2015.00646
33. Lin H, Xiang J, Li S, Liang J, Zhao D, Yin D, et al. Cued uncertainty modulates later recognition of emotional pictures: An ERP study. International Journal of Psychophysiology. 2017;116:68–76. doi: 10.1016/j.ijpsycho.2017.03.004
34. Green P, MacLeod CJ. SIMR: an R package for power analysis of generalized linear mixed models by simulation. Methods in Ecology and Evolution. 2016;7(4):493–8.
35. Marchewka A, Żurawski Ł, Jednoróg K, Grabowska A. The Nencki Affective Picture System (NAPS): Introduction to a novel, standardized, wide-range, high-quality, realistic picture database. Behavior Research Methods. 2014;46(2):596–610. doi: 10.3758/s13428-013-0379-1
36. Mathôt S, Schreij D, Theeuwes J. OpenSesame: An open-source, graphical experiment builder for the social sciences. Behavior Research Methods. 2012;44(2):314–24. doi: 10.3758/s13428-011-0168-7
37. Lange K, Kühn S, Filevich E. "Just Another Tool for Online Studies" (JATOS): An easy solution for setup and management of web servers supporting online studies. PLOS ONE. 2015;10(6):e0130834.
38. Carleton RN, Norton MAPJ, Asmundson GJG. Fearing the unknown: A short version of the Intolerance of Uncertainty Scale. Journal of Anxiety Disorders. 2007;21(1):105–17. doi: 10.1016/j.janxdis.2006.03.014
39. Lovibond PF, Lovibond SH. The structure of negative emotional states: Comparison of the Depression Anxiety Stress Scales (DASS) with the Beck Depression and Anxiety Inventories. Behaviour Research and Therapy. 1995;33(3):335–43. doi: 10.1016/0005-7967(94)00075-u
40. Del Popolo Cristaldi F, Buodo G, Duma GM, Sarlo M, Mento G. Unbalanced functional connectivity at rest affects the ERP correlates of affective prediction in high intolerance of uncertainty individuals: A high density EEG investigation. International Journal of Psychophysiology. 2022;178:22–33. doi: 10.1016/j.ijpsycho.2022.06.006
41. Del Popolo Cristaldi F, Mento G, Sarlo M, Buodo G. Dealing with uncertainty: A high-density EEG investigation on how intolerance of uncertainty affects emotional predictions. PLOS ONE. 2021;16(7):e0254045. doi: 10.1371/journal.pone.0254045
42. Gole M, Schäfer A, Schienle A. Event-related potentials during exposure to aversion and its anticipation: The moderating effect of intolerance of uncertainty. Neuroscience Letters. 2012;507(2):112–7. doi: 10.1016/j.neulet.2011.11.054
43. Buehler R, McFarland C, Spyropoulos V, Lam KCH. Motivated prediction of future feelings: Effects of negative mood and mood orientation on affective forecasts. Personality and Social Psychology Bulletin. 2007;33(9):1265–78. doi: 10.1177/0146167207303014
44. Brühl AB, Viebke MC, Baumgartner T, Kaffenberger T, Herwig U. Neural correlates of personality dimensions and affective measures during the anticipation of emotional stimuli. Brain Imaging and Behavior. 2011;5:86–96. doi: 10.1007/s11682-011-9114-7
45. Leys C, Delacre M, Mora YL, Lakens D, Ley C. How to classify, detect, and manage univariate and multivariate outliers, with emphasis on pre-registration. International Review of Social Psychology. 2019;32(1). Available from: http://www.rips-irsp.com/articles/10.5334/irsp.289/
46. Bates D, Mächler M, Bolker BM, Walker SC. Fitting linear mixed-effects models using lme4. Journal of Statistical Software. 2015;67(1).
47. Kuznetsova A, Brockhoff PB, Christensen RHB. lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software. 2017;82(13):1–26.
48. Nakagawa S, Johnson PCD, Schielzeth H. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded. Journal of The Royal Society Interface. 2017;14(134):20170213. doi: 10.1098/rsif.2017.0213
49. Barrett LF. Context reconsidered: Complex signal ensembles, relational meaning, and population thinking in psychological science. American Psychologist. 2022;77(8):894. doi: 10.1037/amp0001054
50. Barrett LF, Satpute AB. Historical pitfalls and new directions in the neuroscience of emotion. Neuroscience Letters. 2019;693:9–18. doi: 10.1016/j.neulet.2017.07.045
51. Oosterwijk S, Mackey S, Wilson-Mendenhall C, Winkielman P, Paulus MP. Concepts in context: Processing mental state concepts with internal or external focus involves different neural systems. Social Neuroscience. 2015;10(3):294–307. doi: 10.1080/17470919.2014.998840
52. Kirschner H, Hilbert K, Hoyer J, Lueken U, Beesdo-Baum K. Psychophysiological reactivity during uncertainty and ambiguity processing in high and low worriers. Journal of Behavior Therapy and Experimental Psychiatry. 2016;50:97–105. doi: 10.1016/j.jbtep.2015.06.001
53. Grenier S, Barrette AM, Ladouceur R. Intolerance of uncertainty and intolerance of ambiguity: Similarities and differences. Personality and Individual Differences. 2005;39(3):593–600.
54. Carleton RN. The intolerance of uncertainty construct in the context of anxiety disorders: theoretical and practical perspectives. Expert Review of Neurotherapeutics. 2012;12(8):937–47. doi: 10.1586/ern.12.82
55. Herwig U, Kaffenberger T, Baumgartner T, Jäncke L. Neural correlates of a "pessimistic" attitude when anticipating events of unknown emotional valence. NeuroImage. 2007;34(2):848–58. doi: 10.1016/j.neuroimage.2006.09.035
56. Schumacher S, Herwig U, Baur V, Mueller-Pfeiffer C, Martin-Soelch C, Rufer M, et al. Psychophysiological responses during the anticipation of emotional pictures. Journal of Psychophysiology. 2015;29(1):13–9.
57. Ratcliff R. Methods for dealing with reaction time outliers. Psychological Bulletin. 1993;114(3):510. doi: 10.1037/0033-2909.114.3.510
58. Vandierendonck A. A comparison of methods to combine speed and accuracy measures of performance: A rejoinder on the binning procedure. Behavior Research Methods. 2017;49(2):653–73. doi: 10.3758/s13428-016-0721-5
59. Wilcox R, Peterson TJ, McNitt-Gray JL. Data analyses when sample sizes are small: Modern advances for dealing with outliers, skewed distributions, and heteroscedasticity. Journal of Applied Biomechanics. 2018;34(4):258–61. doi: 10.1123/jab.2017-0269
60. Fox J, Weisberg S. An R companion to applied regression. Third edition. Thousand Oaks, CA: Sage; 2019.
61. Musfeld P, Souza AS, Oberauer K. Repetition learning is neither a continuous nor an implicit process. Proceedings of the National Academy of Sciences. 2023;120(16):e2218042120.
62. Clark SC, Benassi VA. Judgment of contingency: Contrast and assimilation, displacement of judgments, and self-efficacy. Social Behavior & Personality: an international journal. 1997;25(2):183.
63. Batterink LJ, Paller KA, Reber PJ. Understanding the neural bases of implicit and statistical learning. Topics in Cognitive Science. 2019;11(3):482–503. doi: 10.1111/tops.12420
64. Kim R, Seitz A, Feenstra H, Shams L. Testing assumptions of statistical learning: Is it long-term and implicit? Neuroscience Letters. 2009;461(2):145–9. doi: 10.1016/j.neulet.2009.06.030
65. Chennu S, Noreika V, Gueorguiev D, Blenkmann A, Kochen S, Ibáñez A, et al. Expectation and attention in hierarchical auditory prediction. Journal of Neuroscience. 2013;33(27):11194–205. doi: 10.1523/JNEUROSCI.0114-13.2013
66. Barrett LF, Mesquita B, Gendron M. Context in emotion perception. Current Directions in Psychological Science. 2011;20(5):286–90.
67. Simons DJ, Shoda Y, Lindsay DS. Constraints on Generality (COG): A proposed addition to all empirical papers. Perspectives on Psychological Science. 2017;12(6):1123–8. doi: 10.1177/1745691617708630

By Fiorella Del Popolo Cristaldi; Giulia Buodo; Filippo Gambarota; Suzanne Oosterwijk and Giovanni Mento
