
The effects of distractors on brightness perception based on a spiking network

Weisi Liu and Xinsheng Liu
Scientific Reports, Vol. 13 (2023), Issue 1, pp. 1-15

Abstract

Visual perception can be modified by the surrounding context. In particular, experimental observations have demonstrated that visual perception and primary visual cortical responses can be modified by properties of surrounding distractors. However, the underlying mechanism remains unclear. To simulate primary visual cortical activities, we design a k-winner-take-all (k-WTA) spiking network whose responses are generated through probabilistic inference. In simulations, images with the same target and various surrounding distractors serve as stimuli. Distractors are designed with multiple varying properties, including the luminance, the size and the distance to the target. Simulations for each varying property are performed with the other properties fixed. Each property can modify second-layer neural responses and interactions in the network. For the same target in the designed images, the modified network responses simulate distinct brightness perception consistent with experimental observations. Our model provides a possible explanation of how surrounding distractors modify primary visual cortical responses to induce various brightness perception of a given target.

Introduction

Brightness perception is a fundamental function of the primary visual cortex and has been explored in previous studies[1]–[5]. Brightness perception of a given stimulus has been demonstrated to depend on the surrounding context[1],[6]. In particular, it is a common situation that the surrounding context consists of distractors that can modify visual perception of the given stimulus[7]–[10]. However, the underlying mechanism of how distractors modulate visual cortical responses to generate different brightness perception of the same stimulus remains unclear.

In brightness perception of the same stimulus, neural responses are modulated by the context around the stimulus. Contextual modulation is a common phenomenon in visual perception, including brightness perception[2],[6],[11]–[16]. The perceived brightness of a stimulus is affected by its surrounding context[2],[6]. A recent experimental study has suggested that the illusory brightness perception known as simultaneous brightness contrast is associated with low-level visual processing prior to binocular fusion[2]. In experiments with complex stimuli, brightness perception of the target is strongly influenced by perceptual organization and is considered to be related to the high-level visual system[6]. A previous study on brightness perception found that primary visual cortical neurons accomplish spatial integration of contextual information rather than responding strictly to retinal illumination[3]. In extracellular recordings made in the retina, the lateral geniculate nucleus and the striate cortex, neural responses in the striate cortex were found to be correlated with brightness perception under all the designed conditions[4]. Beyond brightness perception, contextual modulation also occurs in other visual experiments[11],[12]. In experiments with oriented flanking stimuli, short-range connections within local neural circuitry have been found to mediate contextual modulation of neural responses to angular visual features[11]. The varying relative contrast of stimuli in neural receptive fields can cause contextual effects and affect neural stimulus selectivity[12]. Contextual modulation also appears in visual experiments on figure-ground segregation, the shape aftereffect, neural sensitivity to naturalistic images and the recognition of associated objects[13]–[16]. In particular, properties of distractors in the surrounding context have been demonstrated experimentally to influence neural responses and visual perception[17],[18]. For example, the size of distractors, the contrast between distractors and the stimulus, as well as the distance from distractors to the stimulus, affect visual perception[7]–[9]. Through analyses of visual neural responses and image contrast, Rodriguez and Granger presented a generalized formulation of visual contrast energy and provided an explanatory framework for target identification performance when distractors have varying flanking distance and number[7]. In experiments with designed flanking stimuli, Levi and Carney found that bigger flanks lead to weaker crowding[9]. Recently, with primary visual cortical responses of macaque monkeys to oriented stimuli, the influence of the distance between distractors and the visual stimulus has been explained as a kind of spatial contextual effect on primary visual cortical activities[8]. In this way, the effects of surrounding distractors on visual perception might be interpreted as a kind of contextual modulation of primary visual cortical responses.

The primary visual cortex is a basic biological structure associated with a broad range of visual research topics, including sparse responses, object recognition, contextual modulation and brightness perception[2],[4],[11]–[13],[19]–[28]. Unsupervised learning within the primary visual cortex has been considered widely in neuroscience research, including the emergence of visual misperception[29]–[32]. Under contextual modulation, both neural responses and neural interactions in the primary visual cortex can be modified[11],[13],[33]. Based on experimental observations, various models have been proposed to explore visual perception. Batard and Bertalmío improved an image processing technique based on differential geometry with properties of the human visual system and covariant derivatives, applying it to the exploration of brightness perception and color image correction[5]. Neural networks have served as effective models in exploring visual recognition and perception[32],[34]–[36]. Considering interactions between boundary and feature signals in brightness perception, Pessoa, Mingolla and Neumann developed a neural network that implements boundary computations and provides a new interpretation of feature signals through the explicit representation of contrast-driven and luminance-driven information[35]. To simulate how the visual cortex combines binocular information, Grossberg and Kelly designed a binocular neural network that computes image contrasts in each monocular channel and fuses these discounted monocular signals with nonlinear excitatory and inhibitory signals to represent binocular brightness perception[36]. Considering the influence of neural interactions, both contextual modulation and brightness perception in the visual cortex can be explored through neural networks[37],[38]. To explore the influence of distractors on brightness perception, contextual modulation of primary visual cortical neural responses and interactions is a possible factor.

Probabilistic inference provides a feasible method for exploring the mechanisms of visual perception and neural coding[39]–[44]. Murray presented a probabilistic graphical model with assumptions about lighting and reflectance to infer globally optimal interpretations of images, exhibiting partial lightness constancy, contrast, glow, and articulation effects[39]. To estimate brightness perception of images with spatial variation considered, Allred and Brainard developed a Bayesian algorithm whose prior distributions allow spatial variation in both illumination and surface reflectance, and explored changes in image luminance from the aspect of spatial location[44]. Based on probabilistic inference, winner-take-all (WTA) networks have been designed for research on visual cortical computational functions[45]–[48]. For simplification, these WTA networks are designed with a two-layer structure[45]–[47]. With neural spiking sequences generated, the plastic weights in these WTA networks follow the Hebbian spike-timing-dependent plasticity (STDP) learning rule, a kind of unsupervised learning[49]. With Hebbian STDP, primary visual cortical sparse responses can be simulated by the WTA network[28],[46]. The temporal structure of neural spike trains can contribute to information transmission compared with rate-based neural coding[50]. The WTA networks consist of excitatory pyramidal neurons with lateral inhibition, following neural microcircuits observed in layers 2/3[51]. Moreover, the k-winner-take-all (k-WTA) network can simulate the simultaneous activities of multiple neurons observed in experiments[45]. Constructing k-WTA spiking networks, our previous studies have explored primary visual cortical response variability and illusory stereo perception[52],[53]. Yet, these WTA networks have not considered the influence of distractors on cortical neural responses and brightness perception.

In this paper, a plastic two-layer k-WTA spiking network is constructed to explore how varying properties of distractors induce contextual modulation of neural responses to generate different brightness perception of the given stimulus. Connective weights follow the STDP learning rule. Afferent neurons in the first layer of the network transform visual information into feedforward Poisson spike trains towards the second layer. The second layer contains inter-connected excitatory and inhibitory neurons. Both excitatory and inhibitory neurons in the second layer generate responses stochastically. The network recognizes outside images depending on second-layer excitatory responses. In simulations, visual images are designed with the given target stimulus and various surrounding distractors. Distractors are designed with different grayscale values, sizes and distances to the target. The influence of each varying property is explored through simulations with the other properties fixed. Modulations of neural responses and lateral interactions induced by each property are measured over simulations. Brightness perception is simulated with responses from the network. Our simulations show that, for the designed stimuli, varying properties of distractors can modulate second-layer neural responses and lateral interactions, inducing different brightness perception of the same target.

This article is structured as follows. In Materials and methods, the network and the corresponding probabilistic model are introduced. In Results, images with various distractors serve as visual stimuli. For the designed network, neural response modulations and brightness perception induced by distractors are investigated. With the same target stimulus and the same background, varying distractors can induce modulations of second-layer excitatory neural spiking responses and interactions, leading to different brightness perception. Finally, a conclusion is given.

Materials and methods

The structure of the spiking network

In this subsection, the construction of our k-WTA network was introduced (Fig. 1). The network was designed following the basic neural circuit in layer 2/3 for simulating primary visual cortical neural activities[45],[51]. The network contained N first-layer afferent excitatory neurons, K second-layer excitatory neurons and J second-layer inhibitory neurons. In the second layer, inter-connections among neurons were designed according to the connection probabilities reported in the previous experiment and study[45],[51]. With these connection probabilities, the structure of the network accorded with the neural circuit observed in layer 2/3. The influences of both neural interactions and connective plasticity on visual perception were considered in the network[54],[55].

Figure 1. The construction of the designed k-WTA spiking network. Solid circles indicated excitatory neurons. Solid triangles indicated inhibitory neurons. The network contained first-layer afferent excitatory neurons, second-layer excitatory neurons and second-layer inhibitory neurons. Lines with arrows indicated excitatory connections. Lines with squares indicated inhibitory connections. Plastic connections were represented by thick lines. Fixed connections were represented by thin lines. The dotted circular region within the bottom image indicated the afferent neural receptive field. Afferent neurons were induced by stimuli within receptive fields to generate Poisson spikes.

In Fig. 1, the black circle and triangle served as example neurons, while the gray ones indicated the other neurons. The black lines represented the synaptic connections from the example neurons, while the gray lines indicated the other connections. The excitatory and inhibitory connective weights were represented by arrows and squares, respectively. The thin lines indicated the fixed weights. The plastic excitatory connective weights were expressed by the thick lines with arrows.

Each second-layer excitatory neuron received excitatory inputs from both first-layer afferent neurons and other second-layer lateral excitatory neurons. Matrices $W \in \mathbb{R}^{K\times N}$ and $V \in \mathbb{R}^{K\times K}$ were the excitatory feedforward and lateral connective weights, respectively. Each second-layer excitatory neuron had random connections from a subset of second-layer inhibitory neurons. With $p_{EI}=0.6$ as the connection probability, each excitatory neuron was randomly decided to be connected by an inhibitory neuron or not. With the inputs mentioned above, the kth second-layer excitatory neuron had the temporal membrane potential at the t th timestep as:

1   $u_{t,k}^{z} = \sum_{n} w_{kn}\,\tilde{x}_{t,n} + \sum_{k'} v_{kk'}\,\tilde{z}_{t,k'} - \sum_{j \in J_k} v^{EI}\,\tilde{y}_{t,j} + b_k,$

where $u_{t,k}^{z}$ was the temporal neural membrane potential, and $w_{kn}\tilde{x}_{t,n}$ and $v_{kk'}\tilde{z}_{t,k'}$ determined the values of the excitatory postsynaptic potentials (EPSPs) induced by the n th first-layer afferent neuron and the k′ th second-layer excitatory neuron, respectively. $w_{kn}$ was the excitatory feedforward weight from the n th neuron in the first layer. $v_{kk'}$ was the excitatory lateral weight from the k′ th second-layer excitatory neuron. The feedforward weights, as well as the lateral weights between excitatory neurons in the second layer, were plastic and limited to (0,1). The excitability of the kth second-layer excitatory neuron was controlled by the parameter $b_k$. In our simulations, $b_k$ was sampled from the uniform distribution on (0,1). Indices of second-layer inhibitory neurons projecting to the kth second-layer excitatory neuron were collected into a set $J_k$. At the t th timestep, the jth inhibitory neuron in $J_k$ induced the inhibitory postsynaptic potential (IPSP) $v^{EI}\tilde{y}_{t,j}$. Lateral connective weights projecting to excitatory neurons from inhibitory neurons were fixed in our network and expressed as a common parameter $v^{EI}$. Because excitatory weights were limited to (0,1), $v^{EI}$ was set equal to the mean excitatory connective strength of 0.5 in our simulations. With neural spike trains from the corresponding neurons, $\tilde{x}_{t,n}$, $\tilde{z}_{t,k'}$ and $\tilde{y}_{t,j}$ were the synaptic traces at the t th timestep. If the k′th second-layer excitatory neuron generated spikes at $t_{k'}^{1}, t_{k'}^{2}, t_{k'}^{3},\ldots$, its temporal synaptic trace was given as:

2   $\tilde{z}_{t,k'} = \sum_{g}\varepsilon\!\left(t-t_{k'}^{g}\right), \qquad \varepsilon\!\left(t-t_{k'}^{g}\right) = \begin{cases}\exp\!\left(-\dfrac{t-t_{k'}^{g}}{\tau_f}\right)-\exp\!\left(-\dfrac{t-t_{k'}^{g}}{\tau_r}\right), & \text{if } t \ge t_{k'}^{g},\\[4pt] 0, & \text{else,}\end{cases}$

where $\varepsilon(\cdot)$ was the synaptic response kernel and $t_{k'}^{g}$ was the generation time of the g th neural spike. The rise-time constant and the fall-time constant in the synaptic response kernel were set as $\tau_r=1$ timestep and $\tau_f=10$ timesteps[45]. With the common kernel $\varepsilon(\cdot)$, $\tilde{x}_{t,n}$ and $\tilde{y}_{t,j}$ in Eq. (1) could be expressed similarly.
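As a minimal sketch of how the trace in Eq. (2) could be evaluated in a simulation (the function name, array handling and example spike times below are our own illustrative choices, not the authors' code):

```python
import numpy as np

TAU_R, TAU_F = 1.0, 10.0  # rise and fall time constants in timesteps (1 timestep = 1 ms)

def synaptic_trace(spike_times, t):
    """Double-exponential synaptic trace of Eq. (2), summed over past spikes of one neuron."""
    s = t - np.asarray(spike_times, dtype=float)
    s = s[s >= 0.0]                                   # the kernel is zero before each spike
    return float(np.sum(np.exp(-s / TAU_F) - np.exp(-s / TAU_R)))

# Example: a neuron that spiked at timesteps 3 and 8, trace read out at t = 10
print(synaptic_trace([3, 8], t=10))
```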

At timestep t, whether a neuron generated a spike or not was represented by another variable, the temporal neural active state. In this paper, 1 ms in experiments was represented by one timestep in simulations. It was assumed in the simulations that each neuron could generate at most one spike during each timestep. Under this assumption, the kth excitatory neural active state at t was defined as a variable $z_{t,k}\in\{0,1\}$. Depending exponentially on the temporal neural membrane potential, the distribution of $z_{t,k}$ could be expressed as:

3   $p\!\left(z_{t,k}\mid u_{t,k}^{z};\Theta\right) = \dfrac{\exp\!\left(u_{t,k}^{z}\, z_{t,k}\right)}{\sum_{z'_{t,k}\in\{0,1\}}\exp\!\left(u_{t,k}^{z}\, z'_{t,k}\right)},$

where $z_{t,k}$ was the neural active state at t and $u_{t,k}^{z}$ was the temporal neural membrane potential in Eq. (1). For normalization, the variable $z'_{t,k}$ in the denominator took all the possible values of the neural active state. At each timestep t, a sample was generated from the distribution in Eq. (3) as the value of $z_{t,k}$. $z_{t,k}=1$ indicated that the k th excitatory neuron emitted a spike at t, and $z_{t,k}=0$ indicated the silence of this neuron. If the k th excitatory neuron generated a spike, it entered a refractory period of 5 timesteps and was set to be silent[56]. Neural active states reflected the neural spiking responses generated by the network.
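For illustration, a minimal sketch of this sampling step with the refractory rule, assuming the membrane potential is already given; the helper name and the fixed example potential are our own, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
REFRACTORY = 5                       # timesteps of silence after a spike, as in the text

def sample_excitatory_state(u, refractory_left):
    """Sample z in {0, 1} from the exponential-of-potential distribution in Eq. (3)."""
    if refractory_left > 0:
        return 0                     # a refractory neuron stays silent
    p_spike = np.exp(u) / (1.0 + np.exp(u))   # exp(u*1) / (exp(u*0) + exp(u*1))
    return int(rng.random() < p_spike)

# Example: one neuron simulated for a few timesteps with a fixed membrane potential
refractory = 0
for t in range(10):
    z = sample_excitatory_state(u=0.3, refractory_left=refractory)
    refractory = REFRACTORY if z == 1 else max(refractory - 1, 0)
```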

The network could simulate identification of stimuli at each timestep t by comparing the vector of second-layer excitatory neural responses $z_t=(z_{t,1},\ldots,z_{t,K})$ with the clustering sets[53]. If there were S stimuli in simulations, the network identified the temporal stimulus through its temporal action $a_t\in\{1,2,\ldots,S\}$. With the s th outside stimulus presented, the identification from the network was correct if $a_t=s$ and incorrect otherwise. The network received the temporal reward $r_t=1$ for a correct identification and $r_t=0$ for an incorrect identification. From previous correct identifications of each stimulus, excitatory neural responses in the second layer were collected into the clustering set for subsequent comparisons and identifications. For S stimuli, the clustering sets were denoted $C=\{c_s, s=1,\ldots,S\}$, with the matrix $c_s$ containing previous neural responses to the s th stimulus. The network received temporal rewards from identifications to update both clustering sets and plastic connective weights, as introduced in the next subsection and the supplementary material.

In the second layer, the inhibitory neural spikes were generated stochastically. With the frequency-current (f-I) curve, the instantaneous firing rate of the jth second-layer inhibitory neuron was given as[45]:

4   $\rho_{t,j}^{y} = \sigma_{\mathrm{rect}}\!\left(u_{t,j}^{y}\right), \qquad u_{t,j}^{y} = \sum_{k\in\varphi_j} v^{IE}\,\tilde{z}_{t,k} - \sum_{j'\in\varsigma_j} v^{II}\,\tilde{y}_{t,j'},$

where $u_{t,j}^{y}$ was the membrane potential of the jth lateral inhibitory neuron at t and $\rho_{t,j}^{y}$ was the temporal neural firing rate. $\sigma_{\mathrm{rect}}(\cdot)$ was the linear rectifying function: for $u_{t,j}^{y}\ge 0$, $\sigma_{\mathrm{rect}}(u_{t,j}^{y})=u_{t,j}^{y}$; otherwise, $\sigma_{\mathrm{rect}}(u_{t,j}^{y})=0$. With $p_{IE}=0.575$ ($p_{II}=0.55$) as the connection probability, each excitatory (inhibitory) neuron in the second layer was decided to have a lateral connection to the given inhibitory neuron or not. $\varphi_j$ ($\varsigma_j$) contained the indices of the second-layer excitatory (inhibitory) neurons connected to the jth second-layer inhibitory neuron. Through stochastic decisions of second-layer lateral connections, the network could have a structure consistent with the experimental observations[51]. Second-layer excitatory (inhibitory) neurons connected to each inhibitory neuron with the common lateral connective weight $v^{IE}$ ($v^{II}$). As with $v^{EI}$ in Eq. (1), $v^{IE}=0.5$ ($v^{II}=0.5$) in our simulations. In Eq. (4), the temporal synaptic trace of the kth lateral excitatory neuron (the j′th lateral inhibitory neuron) was expressed as $\tilde{z}_{t,k}$ ($\tilde{y}_{t,j'}$). With the temporal neural firing rate $\rho_{t,j}^{y}$, Poisson spikes of the jth second-layer inhibitory neuron were generated. An absolute refractory period of 3 timesteps was set for inhibitory neurons[45]. The temporal active state of the jth second-layer inhibitory neuron was expressed as $y_{t,j}\in\{0,1\}$.
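A minimal sketch of one inhibitory timestep under Eq. (4) is given below; the Bernoulli approximation of the Poisson draw within a 1-ms timestep, the index sets and the example trace values are our own assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
V_IE = V_II = 0.5                    # fixed lateral weights onto inhibitory neurons

def inhibitory_spike(exc_traces, inh_traces, exc_idx, inh_idx):
    """One timestep of the jth inhibitory neuron following Eq. (4)."""
    u = V_IE * exc_traces[exc_idx].sum() - V_II * inh_traces[inh_idx].sum()
    rate = max(u, 0.0)               # linear rectification sigma_rect
    # With at most one spike per 1-ms timestep, a Poisson process with this
    # instantaneous rate is approximated by a Bernoulli draw.
    return int(rng.random() < 1.0 - np.exp(-rate))

# Example with hypothetical traces and connectivity index sets
z_traces = np.array([0.2, 0.7, 0.1, 0.5])
y_traces = np.array([0.3, 0.4])
spike = inhibitory_spike(z_traces, y_traces, exc_idx=[0, 1, 3], inh_idx=[1])
```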

    Through Poisson spikes, the afferent neurons transmitted visual stimuli towards the second-layer network. For the visual images in simulations, the stimulus projected into the receptive field could induce afferent neural responses. Various kinds of Gaussian filters had been applied to explore brightness perception[57],[58]. In this paper, the receptive field of the nth afferent neuron was modeled with the Difference-of-Gaussians filter[59]:

5   $f_n(\mathbf{x},\mathbf{x}_n) = g_n^{c}(\mathbf{x},\mathbf{x}_n) - \phi\cdot g_n^{s}(\mathbf{x},\mathbf{x}_n), \qquad g_n^{c}(\mathbf{x},\mathbf{x}_n) = \exp\!\left(-\dfrac{(x_1-x_{n1})^2+(x_2-x_{n2})^2}{\sigma_{nc}^{2}}\right), \qquad g_n^{s}(\mathbf{x},\mathbf{x}_n) = \exp\!\left(-\dfrac{(x_1-x_{n1})^2+(x_2-x_{n2})^2}{\sigma_{ns}^{2}}\right),$

where $\mathbf{x}=(x_1,x_2)$ was a point in the image and $\mathbf{x}_n=(x_{n1},x_{n2})$ was the center of the n th afferent receptive field. The nth afferent neural receptive field included the center and surround Gaussian functions $g_n^{c}(\cdot)$ and $g_n^{s}(\cdot)$. The two Gaussian functions had spatial radii $\sigma_{nc}$ and $\sigma_{ns}$. The ratio between the two Gaussian functions was described by the parameter $\phi$. Parameters in the DoG functions were generated from the distributions in[59]. With a spacing distance of 2 pixels, the afferent neural receptive fields were positioned on a grid to cover the image cooperatively[60].

Visual images in our simulations were 60×60-pixel squares with different distractors, which are introduced in Results in detail. With the designed spacing distance between afferent receptive fields, there were 900 afferent neurons in the first-layer network. The second-layer network consisted of 100 excitatory neurons and 50 inhibitory neurons.
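The following sketch illustrates how DoG receptive fields of Eq. (5) could be tiled over a 60×60-pixel image to drive the 900 afferent neurons; the specific values of sigma_c, sigma_s and phi are placeholders (the paper samples them from the distributions in[59]), and the function names are our own:

```python
import numpy as np

IMAGE_SIZE, SPACING = 60, 2          # 60x60-pixel images, 2-pixel grid spacing

def dog_filter(center, sigma_c, sigma_s, phi, size=IMAGE_SIZE):
    """Difference-of-Gaussians receptive field of Eq. (5) evaluated on the image grid."""
    x1, x2 = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    d2 = (x1 - center[0]) ** 2 + (x2 - center[1]) ** 2
    return np.exp(-d2 / sigma_c ** 2) - phi * np.exp(-d2 / sigma_s ** 2)

# Tiling receptive-field centers every 2 pixels gives 30 x 30 = 900 afferent neurons.
centers = [(i, j) for i in range(0, IMAGE_SIZE, SPACING)
                  for j in range(0, IMAGE_SIZE, SPACING)]

# Drive of each afferent neuron: projection of the image onto its receptive field,
# which would then set the rate of its Poisson spike train.
image = np.full((IMAGE_SIZE, IMAGE_SIZE), 0.2)
image[20:40, 20:40] = 0.5            # 20x20-pixel gray target on a darker background
drives = [np.sum(image * dog_filter(c, sigma_c=2.0, sigma_s=4.0, phi=0.9))
          for c in centers]
```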

    Probabilistic model and unsupervised identifications of images

Through probabilistic inference, the network generated neural responses and identified the outside images. The probabilistic model and unsupervised identifications were similar to those in our previous study[53]. A detailed introduction is presented in the supplementary material.

In simulations, the image served as the visual stimulus $Sti$. The network simulated neural responses in the second layer through a Hidden Markov model (HMM)[46]. At timestep t, the temporal observed pattern $o_t$ was the vector of instantaneous feedforward input traces $\tilde{x}_t=(\tilde{x}_{t,1},\ldots,\tilde{x}_{t,N})^{T}$. Second-layer neural responses $z_t, y_t$ and lateral synaptic traces $\tilde{z}_t, \tilde{y}_t$ were included in the temporal hidden state $h_t$. Plastic parameters in the model were collected as $\Theta=\{W,V,C\}$. $W\in\mathbb{R}^{K\times N}$ and $V\in\mathbb{R}^{K\times K}$ were the matrices of plastic connections. Clustering sets for the stimuli were denoted $C=\{c_s, s=1,\ldots,S\}$.

For timestep t, $O_t=(o_1,o_2,\ldots,o_t)$ contained the occurred observations and $H_{t-1}=(h_1,h_2,\ldots,h_{t-1})$ contained the hidden states up to timestep t-1. The stochastic dynamics of the k-WTA network implemented a forward sampling process and sampled a new hidden state $h_t$ forward in time based on $O_t$ and $H_{t-1}$[46]. In the sampling of $h_t$, excitatory and inhibitory neural responses in the second layer were generated by Eqs. (3) and (4), as introduced in detail in our previous study[52]. At each timestep, the network generated an action $a_t$ to identify stimuli with second-layer excitatory neural spikes and obtained a temporal reward $r_t\in\{0,1\}$.

Biologically, primary visual cortical neural responses are modified by dopaminergic rewards[61]. A previous WTA network considered reward-modulated Hebbian learning[62]. In this paper, modulations of plastic weights at each timestep depended on the temporal reward $r_t$. $r_t=1$ at timestep t indicated that the network made a correct identification. Then, plastic weights were updated at t according to the generated neural responses. Up to t, the sequences of observed variables and hidden variables were denoted $O_t=(o_1,o_2,\ldots,o_t)$ and $H_t=(h_1,h_2,\ldots,h_t)$. For $r_t=1$, the connective modulations were influenced by sub-sequences of the dynamics with lengths $T\in\{1,\ldots,t\}$. Up to t, the T-step sub-sequences of observed variables and hidden variables were denoted $O_T=(o_{t-T+1},\ldots,o_t)$ and $H_T=(h_{t-T+1},\ldots,h_t)$. With the discount factor $\gamma\in(0,1)$, the contribution of this pair of T-step sub-sequences to $r_t$ was given as $\alpha_T=(1-\gamma)\gamma^{T-1}$. With all the sub-sequences of neural dynamics considered, the likelihood function $\mathcal{L}(\Theta)$ was expressed as[62]:

6   $\mathcal{L}(\Theta) = \sum_{\tau=1}^{t} \alpha_{\tau}\cdot\left\langle r_t \cdot \log p\!\left(O_{\tau}, H_{\tau}, Sti;\Theta\right)\right\rangle_{p\left(O_{\tau}, H_{\tau}, Sti, r_t\right)}.$

The joint distributions $p(O_{\tau}, H_{\tau}, Sti, r_t)$ and $p(O_{\tau}, H_{\tau}, Sti;\Theta)$ could be factorized under the assumptions of the Hidden Markov model:

7   $\begin{aligned} p(O_{\tau}, H_{\tau}, Sti, r_t) &= p(r_t\mid H_{\tau})\cdot p(H_{\tau}\mid O_{\tau})\cdot p(O_{\tau}\mid Sti)\cdot p(Sti),\\ p(O_{\tau}, H_{\tau}, Sti;\Theta) &= p(Sti\mid O_{\tau}, H_{\tau};\Theta)\, p(O_{\tau}, H_{\tau};\Theta) = p(Sti\mid O_{\tau}, H_{\tau};\Theta)\cdot \prod_{t'=t-\tau+1}^{t} p(o_{t'}\mid h_{t'};\Theta)\, p(h_{t'}\mid h_{t'-1};\Theta),\\ p(O_{\tau}, H_{\tau};\Theta) &= \prod_{t'=t-\tau+1}^{t} p(o_{t'}\mid h_{t'};\Theta)\, p(h_{t'}\mid h_{t'-1};\Theta). \end{aligned}$

With the factorization in Eq. (7), the likelihood function $\mathcal{L}(\Theta)$ had the equivalent representation:

8   $\begin{aligned} \mathcal{L}(\Theta) &= \sum_{\tau=1}^{t} \alpha_{\tau}\cdot\left\langle r_t\cdot \log\!\left( p(Sti\mid O_{\tau}, H_{\tau};\Theta)\cdot \prod_{t'=t-\tau+1}^{t} p(o_{t'}\mid h_{t'};\Theta)\, p(h_{t'}\mid h_{t'-1};\Theta)\right)\right\rangle_{p\left(O_{\tau}, H_{\tau}, Sti, r_t\right)}\\ &= \sum_{\tau=1}^{t} \alpha_{\tau}\cdot\left\langle r_t\cdot \log p(Sti\mid O_{\tau}, H_{\tau};\Theta) + r_t\cdot \sum_{t'=t-\tau+1}^{t}\log\!\left( p(o_{t'}\mid h_{t'};\Theta)\, p(h_{t'}\mid h_{t'-1};\Theta)\right)\right\rangle_{p\left(O_{\tau}, H_{\tau}, Sti, r_t\right)}. \end{aligned}$

For the likelihood function $\mathcal{L}(\Theta)$ in Eq. (8), the network updated plastic connective weights through an online variant of the Expectation-Maximization algorithm. In this paper, the E-step estimated the expectation with a single sample of neural responses[46]. With a single sample from $p(O_{\tau}, H_{\tau}, Sti, r_t)$, $\mathcal{L}(\Theta)$ was rearranged and approximated as:

9   $\mathcal{L}(\Theta) \approx \sum_{\tau=1}^{t} \alpha_{\tau}\cdot r_t\cdot\left(\log p(Sti\mid O_{\tau}, H_{\tau};\Theta) + \sum_{t'=t-\tau+1}^{t}\log\!\left(p(o_{t'}\mid h_{t'};\Theta)\, p(h_{t'}\mid h_{t'-1};\Theta)\right)\right).$

Then, with directions given by the partial derivatives of $\mathcal{L}(\Theta)$, the network updated its plastic weights in the M-step[63]. In this paper, the directions for updating $w_{kn}$ and $v_{kk'}$ were given as[52],[53]:

10   $\begin{aligned} \frac{\partial}{\partial w_{kn}}\mathcal{L}(\Theta) &\propto r_t\sum_{t'=1}^{t} \gamma^{\,t-t'}\cdot z_{t',k}\cdot\left(\tilde{x}_{t',n} - 1 - \frac{1}{w_{kn}} + \frac{1}{\exp(w_{kn})-1}\right),\\ \frac{\partial}{\partial v_{kk'}}\mathcal{L}(\Theta) &\propto r_t\sum_{t'=1}^{t} \gamma^{\,t-t'}\cdot z_{t',k}\cdot\left(\tilde{z}_{t',k'} - 1 - \frac{1}{v_{kk'}} + \frac{1}{\exp(v_{kk'})-1}\right). \end{aligned}$

The modification of each weight depended on the pre- and postsynaptic responses, as well as the temporal reward $r_t$. $r_t$ controlled the connective modifications: connective weights were updated only after a correct identification. In simulations, the discount factor was $\gamma=0.9$.
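A minimal sketch of such a reward-gated update for a single weight is given below. It follows the reconstructed form of Eq. (10) shown above; the learning rate, the example spike and trace values, and the clipping scheme for keeping weights in (0, 1) are our own assumptions rather than the authors' implementation:

```python
import numpy as np

GAMMA = 0.9                          # discount factor from the text
ETA = 0.01                           # hypothetical learning rate, not given in the text

def weight_gradient(w, post_spikes, pre_traces, r_t):
    """Reward-gated gradient direction for one weight, following Eq. (10).

    post_spikes[t'] is z_{t',k} in {0, 1}; pre_traces[t'] is the presynaptic trace
    (x or z) at t'. When r_t = 0 the whole update vanishes, so weights change
    only after a correct identification.
    """
    T = len(post_spikes)
    discounts = GAMMA ** (T - 1 - np.arange(T))        # gamma^(t - t')
    local = post_spikes * (pre_traces - 1.0 - 1.0 / w + 1.0 / (np.exp(w) - 1.0))
    return r_t * float(np.sum(discounts * local))

w = 0.4
grad = weight_gradient(w, post_spikes=np.array([0, 1, 0, 1]),
                       pre_traces=np.array([0.1, 0.8, 0.2, 0.6]), r_t=1)
w = float(np.clip(w + ETA * grad, 1e-3, 1.0 - 1e-3))   # keep the weight inside (0, 1)
```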

To identify the outside stimuli, the network applied an unsupervised method[64]. The detailed generation of $a_t$ and $r_t$ was introduced in our previous study[53]; a brief description is given here. Previous neural responses had been identified and collected into clustering sets for later identifications. For the temporal second-layer excitatory neural spikes $z_t$ and the parameter $\alpha$, we calculated energy distances between $z_t$ and the clustering sets $c_s, s=1,\ldots,S$ to estimate the parameters $e_{accept}(\alpha,c_s,t), s=1,\ldots,S$ through R re-samples[64]. For $c_s, s=1,\ldots,S$, a common parameter was set as $e_{accept}(\alpha,t)=\min\{e_{accept}(\alpha,c_s,t), s=1,\ldots,S\}$. The likelihoods between $z_t$ and the clustering sets could be quantified by the probabilities $p(\alpha,c_s,t)=p\!\left(e(\alpha,c_s,t)\le e_{accept}(\alpha,t)\right), s=1,\ldots,S$.

For the outside stimulus, $a_t$ at timestep t served as the temporal identification. The distribution of $a_t$ was given as:

11   $p(a_t=s) = \dfrac{\exp\!\left(p(\alpha,c_s,t)\right)}{\sum_{s'}\exp\!\left(p(\alpha,c_{s'},t)\right)}.$

If $a_t$ took a value equal to the serial number of the presented stimulus, the temporal identification was correct. The clustering set was updated in a First-In-First-Out (FIFO) manner at each timestep with a correct identification. In each update, $z_t$ was added to the end of the corresponding clustering set. After updating, the clustering set deleted redundant components from its beginning if its size exceeded $n_{cluster}$. Details of updating the clustering sets were introduced in our previous study[53]. In our simulations, $n_{cluster}=20$, $\alpha=0.05$ and $R=50$.
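A minimal sketch of the softmax identification in Eq. (11) together with the FIFO update of a clustering set is shown below; the score values, the reward check and the function names are illustrative assumptions, not the authors' code:

```python
import numpy as np

N_CLUSTER = 20                       # maximum size of each clustering set
rng = np.random.default_rng(2)

def sample_identification(scores):
    """Sample a_t from the softmax over per-stimulus scores p(alpha, c_s, t), Eq. (11)."""
    p = np.exp(scores) / np.sum(np.exp(scores))
    return int(rng.choice(len(scores), p=p))

def update_cluster(cluster, z_t):
    """FIFO update of one clustering set after a correct identification."""
    cluster.append(z_t)
    if len(cluster) > N_CLUSTER:
        cluster.pop(0)               # delete the oldest stored response
    return cluster

# Illustrative scores for S = 4 stimuli; suppose stimulus 2 (index 1) is presented
scores = np.array([0.1, 0.6, 0.2, 0.3])
a_t = sample_identification(scores)
r_t = int(a_t == 1)                  # reward 1 only for a correct identification
```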

    Results

Similar to the stimuli in the previous study[7], gray square images with different contexts are designed in simulations. This section explores how distractors modify brightness perception of the target stimulus.

    The phenomenon of simultaneous brightness contrast

This subsection tests whether our network can simulate distinct brightness perception of the same stimulus upon opposite backgrounds. In simulations, two sets of images are designed as visual stimuli.

In the first set, two images are designed by combining a grey square with a darker and a lighter background, respectively[65]. Each image is a 60×60-pixel square. The target stimulus is a 20×20-pixel square in the center of each image (Fig. 2A). In the second set, 60×60-pixel square images have combined backgrounds. A 20×20-pixel target stimulus belongs to different parts of the images (Fig. 2F). In both sets of images, the grey target stimulus has a grayscale value of 0.5, the darker background has a grayscale value of 0.2 and the lighter background has a grayscale value of 0.8.
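As a minimal sketch, the first set of stimuli could be generated as below; the second set with combined backgrounds would be built analogously. The function name and defaults are our own illustrative choices:

```python
import numpy as np

def make_stimulus(background, target=0.5, size=60, target_size=20):
    """A 60x60-pixel image with a centered gray target square on a uniform background."""
    img = np.full((size, size), float(background))
    lo = (size - target_size) // 2
    img[lo:lo + target_size, lo:lo + target_size] = target
    return img

sti_1 = make_stimulus(background=0.2)   # gray target on the darker background
sti_2 = make_stimulus(background=0.8)   # same gray target on the lighter background
```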

Figure 2. Perception of simultaneous brightness contrast from the network. (A) Images used as the first set of visual stimuli in simulations. (B) Averaged brightness perception of target squares with neural responses from our network. Perception is averaged over simulations, timesteps and pixels. Perception of the same gray square is lighter upon the darker background. (C) The histogram of contextual modulation indices (CMIs) about excitatory neural responses to Sti. 1 and Sti. 2. A non-zero CMI indicates contextual modulation of neural responses. (D) Difference of excitatory neural cross-correlations to Sti. 1 and Sti. 2. A non-zero difference indicates contextual modulation of interactions between paired neurons. (E) Temporal excitatory neural firing rates to Sti. 1 and Sti. 2 averaged over simulations. Color of each block indicates neural responding strength. Different images can induce intense responses of different subsets of neurons. (F) Images used as the second set of visual stimuli in simulations. (G) Averaged brightness perception of target squares. (H) The histogram of CMIs about excitatory neural responses to Sti. 3 and Sti. 4. (I) Difference of second-layer excitatory neural cross-correlations to Sti. 3 and Sti. 4. (J) Temporal second-layer excitatory neural firing rate to Sti. 3 and Sti. 4 averaged over simulations. (K) Images used as the third set of visual stimuli. (L) Averaged brightness perception of the target from both the neural network and the anchoring function. With the background becoming lighter, varying trends of estimated brightness perception in subplots are similar.

    For stimuli in Fig. 2A, visual images are encoded by afferent neural spikes. In each learning simulation, two images are rearranged into a random sequence. In 200 learning simulations, connective weights and clustering sets are updated as introduced in Materials and Methods. Initial values of plastic feedforward and lateral connective weights are sampled independently from a uniform distribution of (0.001, 1). During modifications, weights are limited in (0, 1). The testing phase consists of 100 simulations with weights and clustering sets fixed. A sequence of images is presented as in Fig. 2A. Neural responses over simulations are measured to explore contextual modulation induced by distractors. For images in Fig. 2F, our network is trained and tested in the same way. Besides, learning simulations and testing simulations in all the following subsections are designed similarly.

    Brightness perception of the target is simulated through reconstructions of images. Over testing simulations, a point (x1,x2) in each image is reconstructed with averaged neural responses as:

12   $\begin{aligned} I^{rec}(x_1,x_2) &= \overline{\sum_{k,n}\, \bar{z}_k\cdot w_{kn}\cdot \bar{y}_n\cdot I_n^{rec}(x_1,x_2)},\\ I_n^{rec}(x_1,x_2) &= a_n^{rec}\cdot f_n(\mathbf{x},\mathbf{x}_n),\\ a_n^{rec} &= \sum_{(x_1,x_2)} I(x_1,x_2)\cdot f_n(\mathbf{x},\mathbf{x}_n), \end{aligned}$

where $\mathbf{x}=(x_1,x_2)$ is a point in the image and $f_n(\cdot)$ is the Difference-of-Gaussians filter in Eq. (5) with center $\mathbf{x}_n=(x_{n1},x_{n2})$. $w_{kn}$ is the modified feedforward weight. $\bar{z}_k$ and $\bar{y}_n$ are the averaged spiking rates of the corresponding neurons over timesteps and simulations. $I(x_1,x_2)$ is the gray-scale value of the point $(x_1,x_2)$, and $I^{rec}(x_1,x_2)$ is the reconstructed gray-scale value at $(x_1,x_2)$. The reconstruction in Eq. (12) serves as the averaged brightness perception. Reconstructions of the two images are normalized before comparison: reconstructed gray-scale values of all the points in the two images are collected into one set, and the mean value and standard deviation of this set serve as common parameters for the normalization. In this way, perception of the same target upon different contexts can be simulated and compared. For the first set, the target in Sti. 1 appears lighter (Fig. 2B). For the second set, the target in Sti. 4 appears lighter (Fig. 2G). The simulated brightness perception has the same relationship as that observed in experiments[65].
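The sketch below shows one plausible reading of the reconstruction and the shared normalization; the way the averaged responses and weights are combined over neurons is our interpretation of Eq. (12), and the array shapes and names are assumptions rather than the authors' code:

```python
import numpy as np

def reconstruct(image, filters, W, z_mean, y_mean):
    """One plausible reading of the averaged reconstruction in Eq. (12).

    filters: (N, 60, 60) array of DoG receptive fields f_n; W: learned feedforward
    weights (K, N); z_mean, y_mean: mean spiking rates of second-layer excitatory
    and first-layer afferent neurons over timesteps and simulations.
    """
    a_rec = np.tensordot(filters, image, axes=([1, 2], [0, 1]))    # a_n^rec
    I_n = a_rec[:, None, None] * filters                           # I_n^rec(x1, x2)
    weights = (z_mean[:, None] * W * y_mean[None, :]).mean(axis=0) # average over k
    return np.tensordot(weights, I_n, axes=(0, 0))

def normalize_together(rec_a, rec_b):
    """Z-score two reconstructions with a shared mean and standard deviation."""
    pool = np.concatenate([rec_a.ravel(), rec_b.ravel()])
    mu, sd = pool.mean(), pool.std()
    return (rec_a - mu) / sd, (rec_b - mu) / sd
```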

The contextual modulation index (CMI) and the cross-correlation are used to quantify contextual modifications of excitatory neural responses and excitatory neural lateral interactions, respectively. For instance, the CMI of the k th second-layer excitatory neuron to Sti. 1 and Sti. 2 is calculated as[37]:

13   $CMI_k = \dfrac{r_k^{1} - r_k^{2}}{r_k^{1} + r_k^{2}},$

where $r_k^{1}$ and $r_k^{2}$ are the spike counts of the k th excitatory neuron to Sti. 1 and Sti. 2, respectively. If $r_k^{1}=r_k^{2}$, the responses of this neuron to the two stimuli are the same and $CMI_k=0$. If $r_k^{1}\neq r_k^{2}$, the two stimuli induce different responses of this neuron and $CMI_k\neq 0$. In this way, a non-zero CMI indicates contextual modulation of neural responses. The CMI in Eq. (13) is limited to [-1,1] by the denominator. With an absolute value closer to 1, a CMI reflects larger contextual modulation induced by different stimuli. For a neural population, contextual modulation of neural responses can be reflected by the histogram of CMIs. For second-layer excitatory neurons, the non-zero CMIs in Fig. 2C,H indicate that our network can simulate contextual modulation of neural responses induced by different backgrounds.
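A short sketch of the population CMI computation of Eq. (13); the handling of neurons that are silent under both stimuli and the example counts are our own choices:

```python
import numpy as np

def cmi(counts_1, counts_2):
    """Contextual modulation index of Eq. (13) for each second-layer excitatory neuron."""
    r1 = np.asarray(counts_1, dtype=float)
    r2 = np.asarray(counts_2, dtype=float)
    total = r1 + r2
    # A neuron silent under both stimuli has an undefined CMI; report 0 here.
    return np.where(total > 0, (r1 - r2) / np.where(total > 0, total, 1.0), 0.0)

# Example: spike counts of five neurons under Sti. 1 and Sti. 2
print(cmi([12, 8, 5, 0, 7], [9, 8, 11, 0, 2]))
```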

The cross-correlation of each pair of excitatory neurons in the second layer quantifies their interactions[55]. Cross-correlations are calculated with a time lag of 0. To reflect influences on neural responding cross-correlations, differences between the neural responding cross-correlations to each set of images are calculated. For a pair of excitatory neurons, a non-zero difference of neural cross-correlation indicates that the context can induce a modulation of their interactions. As shown in Fig. 2D,I, our network can simulate contextual modulation of neural interactions.
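As a sketch, the zero-lag interaction matrices and their difference could be computed as below; using Pearson-normalized correlation at lag 0 is one common choice and an assumption here, and the random spike trains are placeholders for the recorded responses:

```python
import numpy as np

def zero_lag_crosscorr(spikes):
    """Pairwise correlation of binary spike trains (neurons x timesteps) at time lag 0."""
    return np.corrcoef(spikes)

# Hypothetical spike trains of 100 excitatory neurons over 500 timesteps per condition
rng = np.random.default_rng(3)
spikes_sti1 = rng.integers(0, 2, size=(100, 500))
spikes_sti2 = rng.integers(0, 2, size=(100, 500))
diff = zero_lag_crosscorr(spikes_sti1) - zero_lag_crosscorr(spikes_sti2)
```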

Our spiking network can simulate sparse neural responses to images[45],[46]. For the two sets of designed images in Fig. 2A,F, temporal averaged firing rates of second-layer excitatory neurons are calculated over testing simulations. As shown in Fig. 2E,J, neural responding strengths are expressed by the varying color. For convenience of observation, the sequence of neurons is re-arranged according to their responding strengths to each set of images. At each timestep, a darker rectangular block indicates a stronger responding strength of a neuron. The white blocks stand for the minimum responding strength of 0. To an image, some excitatory neurons emit spikes intensely while other excitatory neurons generate few spikes. Different images can induce intense neural responses of different subsets of second-layer excitatory neurons. After learning, our network can simulate sparse and distinct responses to different images.

We also provide brief simulations of anchoring[66]. Images in Fig. 2K serve as another set of visual stimuli. The grey target stimulus has a grayscale value of 0.5. The darker background has a grayscale value of 0.2. The other, lighter backgrounds have grayscale values of 0.8, 0.9 and 1, respectively. Brightness perception simulated from the network and from the anchoring function in the previous study[66] is shown in Fig. 2L. With the background becoming lighter, the varying trends of brightness perception in the subplots are similar. Our network can provide simulations of the basic phenomenon of anchoring.

    Simulations in this subsection show that our network can reflect contextual modulation of neural responses and interactions, simulating opposite brightness perception of the same target.

    Modulations and perception induced by distances between distractors and targets

A recent study has explored illusory brightness perception induced by different orders of presentation of two stimuli[67]. In this paper, additional gray-scale squares are designed around the target and serve as distractors. The aim is to explore how distractors modify brightness perception of the given target.

In this subsection, distractors are designed to have different distances to the target in the visual images. In our simulations, the gray patch lies upon different contexts in Fig. 3A. Each image is a 60×60-pixel square. The 20×20-pixel grey target has a grayscale value of 0.5, the darker background has a grayscale value of 0.2 and the lighter background has a grayscale value of 0.8. The 10×10-pixel distractors have a common grayscale value of 1. Distractors in Sti. 3 are located along the boundaries of the image. Distractors in Sti. 4 are moved closer to the target, with horizontal and vertical moving steps of 5 pixels.

Figure 3. Brightness perception induced by distances between distractors and targets. (A) Images used as visual stimuli in our simulations. (B) Averaged brightness perception of target squares. For designed stimuli, distractors with a shorter distance induce darker brightness perception of the target. (C) The histogram of excitatory neural CMIs to stimuli. A non-zero CMI indicates the neural responding modification induced by the varying distance between distractors and the target. For a neural population, the histogram of CMIs reflects contextual modulation of neural responses. For designed stimuli, distractors with a shorter distance can induce larger neural responding modulations. (D) Difference of excitatory neural responding cross-correlation to stimuli. A non-zero difference indicates the modulation of neural interactions induced by the varying distance. With simulations of Sti. 1 for comparison, different amplitudes of neural interactive modification reflect influences of the varying distance.

With the images in Fig. 3A, 200 learning simulations and 100 testing simulations are designed for our network. Over testing simulations, the reconstruction of an image is given by Eq. (12). The averaged perception of the target is calculated and shown in Fig. 3B. Brightness perception of the target stimulus is darker when the distance from the distractors to the target is shorter. The varying perception can reflect the influence of the distance between distractors and the target. The contextual modulation index (CMI) and the cross-correlations are used to quantify influences on the excitatory neural responses and lateral interactions, respectively. To calculate the modulations induced by different distances, neural responses to Sti. 1 are collected for comparison. Non-zero CMIs in the histograms reflect the induced responding modifications of second-layer excitatory neurons (Fig. 3C). When distractors become closer to the target, more CMIs have absolute values close to 1. This indicates that, for the designed images, a shorter distance between distractors and the target can induce larger neural responding modulations.

    Differences between neural responding cross-correlations to Sti. 1 and other stimuli are calculated. As shown in Fig. 3D, a non-zero difference of neural cross-correlation indicates the influence of the varying distance on neural interactions. With simulations of Sti. 1 for comparison, different amplitudes of neural interactive modification reflect influences of distances between distractors and the target.

Simulations show that our network can reflect contextual modification induced by distances between distractors and the target. Modified neural activities simulate different perception of the same target. For the designed images in Fig. 3A, a shorter distance makes our network simulate darker perception of the same target.

    Modulations and perception induced by grayscale values of distractors

In this subsection, distractors are designed to have different grayscale values. Four 60×60-pixel images are designed as in Fig. 4A. The target and backgrounds are the same as those in Fig. 3A. The 10×10-pixel distractors in Sti. 3 have a grayscale value of 0.9. The distractors in Sti. 4 have a grayscale value of 1.

Figure 4. Brightness perception induced by grayscale values of distractors. (A) Images used as visual stimuli in our simulations. (B) Averaged brightness perception of target squares. For designed stimuli, the target with lighter distractors appears to be darker. (C) The histogram of CMIs of excitatory neurons to images. Non-zero CMIs indicate neural responding modifications induced by grayscale values of distractors. For designed stimuli, lighter distractors can induce larger neural responding modulations. (D) Differences of excitatory neural responding cross-correlation to stimuli. For each pair of neurons, amplitudes of non-zero differences reflect influences of the varying grayscale value of distractors.

200 learning simulations and 100 testing simulations are designed for the network. Over testing simulations, brightness perception of the target is simulated and shown in Fig. 4B. Our simulations show that the target with lighter distractors appears darker. With Sti. 1 for comparison, CMIs induced by the other three stimuli are measured for each neuron. The neural responding modulations induced by different distractors are shown in Fig. 4C. Simulations show that, for the designed images in Fig. 4A, lighter distractors can induce larger neural responding modulations. The network can reflect modifications of excitatory neural responses induced by grayscale values of distractors. Differences of neural responding cross-correlations are calculated and shown in Fig. 4D. A non-zero difference of neural cross-correlation shows how grayscale values of distractors affect neural interactions. Compared to simulations of Sti. 1, the interactive modulations between each pair of neurons induced by the stimuli are reflected by the amplitudes of the non-zero differences.

    This subsection explores how grayscale values of distractors modify the neural activities of the network. With modulated responses, our network can simulate distinguishing brightness perception induced by grayscale values of distractors. For designed images in Fig. 4A, lighter distractors make our network simulate darker perception of the target.

    Modulations and perception induced by sizes of distractors

    In this subsection, distractors are designed to have different sizes. The 60×60 -pixel images in Fig. 5A have the same target and backgrounds as those in Fig. 3A. With a grayscale value of 1, square distractors have different sizes. In our simulations, Sti. 3 has 5×5 -pixel distractors while Sti. 4 has 10×10 -pixel distractors.

Figure 5. Brightness perception induced by sizes of distractors. (A) Images used as visual stimuli in our simulations. (B) Averaged brightness perception of target squares. For designed stimuli, perception of the same gray square is darker with bigger distractors. (C) The histogram of CMIs to stimuli. For designed stimuli, larger distractors could induce larger neural responding modulations. (D) Differences of excitatory neural responding cross-correlation to stimuli. A non-zero difference indicates the modulation of neural interactions induced by the varying size of distractors.

With the stimuli in Fig. 5A, the learning phase and the testing phase are designed. Over testing simulations, reconstructions of the target given by Eq. (12) serve as brightness perception. As shown in Fig. 5B, for the designed images in simulations, the stimulus with bigger distractors appears darker. With Sti. 1 for comparison, CMIs and differences of neural responding cross-correlations are measured for the other three stimuli. The varying size of distractors can induce neural responding modulations (Fig. 5C). Distractors with larger sizes can lead to larger modulations of neural responses. A non-zero difference of neural cross-correlation indicates contextual modulation of neural interactions induced by the varying size (Fig. 5D).

    Neural responses and interactions modified by sizes of distractors are measured and reflected from the network. With modulated responses, our network can simulate distinguishing brightness perception induced by the varying size. For designed images in Fig. 5A, bigger distractors make our network simulate darker perception of the target.

    Conclusion

This paper has provided a probabilistic framework to explore how distractors modify primary visual cortical responses and induce different brightness perception of the same stimulus. A recent experimental study has demonstrated that the phenomenon of simultaneous brightness contrast is associated with primary visual cortical neural responses[2]. Besides, contextual effects are also associated with the primary visual cortex[13],[33],[38]. While brightness perception has been studied for a long time, how distractors modify primary visual cortical neural responses to induce different brightness perception of the same stimulus is still not clear. To explore the corresponding mechanism, we design a stochastic spiking network with plastic connections in this study. Visual images are designed to control the varying properties of distractors, excluding undesired factors from the simulations.

    Our network is constructed with two layers. With neural receptive fields as Difference-of-Gaussians filters, first-layer afferent neurons generate Poisson spiking responses to received stimuli. Through forward sampling in the Hidden Markov Model, second-layer excitatory and inhibitory neurons generate their spiking responses and communicate with each other. With multi-dimensional excitatory neural spiking responses in the second layer, the network identifies the presented stimulus and receives rewards which control connective modulations.

    Applications of afferent receptive fields in this paper remove the strict limitation on the number of neurons in previous WTA networks while simulating sparse neural responses[45],[46]. Compared to neural populational coding with Gaussian functions[58], this model considers neural interactions and synaptic plasticity in the primary visual cortex. In this way, besides neural responses, the plasticity-based influence is also considered to explore illusory brightness perception induced by distractors[54]. Compared to the feedforward network model for visual perception with distractors, our recurrent network considers influences of neural lateral interactions[68]. In biology, the dopaminergic reward has been found to participate in the synaptic plasticity in the primary visual cortex[61]. Compared to previous networks on brightness perception[5],[35],[36],[57], this model provides unsupervised identifications of stimuli and considers influences of rewards on learning. This unsupervised method only depends on generated neural spikes which are easy to obtain from simulations. Without limiting the dimension of neural responses, the unsupervised identification can provide online rewards to control connective modifications.

    This paper explores how properties of distractors modulate neural responses and induce different brightness perception of the given target. For images as visual stimuli in simulations, distractors are designed with the varying grayscale value, the size and the distance to the target. Over simulations, both brightness perception of the target stimulus and neural responding modulations are measured. Our network can reflect modulations on both neural responses and interactions induced by each property of distractors, simulating different brightness perception with modulated responses.

    Recent experimental observations have localized brightness perception to a site preceding binocular fusion[2]. Following the associated biological structure, our network can simulate both brightness perception and contextual modifications at the same time, providing a theoretical framework based on probabilistic inference to explore how distractors modulate neural responses and lead to different brightness perception of the same target. Our model provides an alternative method to explore brightness perception from contextual modification of primary visual cortical neural responses and interactions.

    Acknowledgements

This work was supported by the National Natural Science Foundation of China (61374183, 51535005), the National Key Research and Development Program of China (2019YFA0705400), the Research Fund of the State Key Laboratory of Mechanics and Control of Mechanical Structures (MCMS-I-0419K01), the Fundamental Research Funds for the Central Universities (NJ2020003, NZ2020001), and a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.

    Author contributions

    W.L. and X.L. developed the design of the study, performed analyses, discussed the results and contributed to the text of the manuscript.

    Data availability

    Supporting codes and data will be made available upon request to the corresponding author.

    Competing interests

    The authors declare no competing interests.

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1038/s41598-023-28326-4.

    Publisher's note

    Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. White M. The effect of the nature of the surround on the perceived lightness of grey bars within square-wave test gratings. Perception. 1981; 10: 215-230. doi: 10.1068/p100215
2. Sinha P, Crucilla S, Gandhi T, Rose D, Singh A, Ganesh S, Mathur U, Bex P. Mechanisms underlying simultaneous brightness contrast: Early and innate. Vision Res. 2020; 173: 41-49. doi: 10.1016/j.visres.2020.04.012
3. Rossi AF, Rittenhouse CD, Paradiso MA. The representation of brightness in primary visual cortex. Science. 1996; 273: 1104-1107. doi: 10.1126/science.273.5278.1104
4. Rossi AF, Paradiso MA. Neural correlates of perceived brightness in the retina, lateral geniculate nucleus, and striate cortex. J. Neurosci. 1999; 19: 6145-6156. doi: 10.1523/JNEUROSCI.19-14-06145.1999
5. Batard T, Bertalmío M. A geometric model of brightness perception and its application to color images correction. J. Math. Imaging Vis. 2018; 60: 849-881. doi: 10.1007/s10851-018-0792-2
6. Adelson EH. Perceptual organization and the judgment of brightness. Science. 1993; 262: 2042-2044. doi: 10.1126/science.8266102
7. Rodriguez A, Granger R. On the contrast dependence of crowding. J. Vis. 2021; 21: 4. doi: 10.1167/jov.21.1.4
8. Henry CA, Kohn A. Spatial contextual effects in primary visual cortex limit feature representation under crowding. Nat. Commun. 2020; 11: 1687. doi: 10.1038/s41467-020-15386-7
9. Levi DM, Carney T. Crowding in peripheral vision: Why bigger is better. Curr. Biol. 2009; 19: 1988-1993. doi: 10.1016/j.cub.2009.09.056
10. Chicherov V, Plomp G, Herzog MH. Neural correlates of visual crowding. Neuroimage. 2014; 93: 23-31. doi: 10.1016/j.neuroimage.2014.02.021
11. Das A, Gilbert CD. Topography of contextual modulations mediated by short-range interactions in primary visual cortex. Nature. 1999; 399: 655-661. doi: 10.1038/21371
12. Levitt JB, Lund JS. Contrast dependence of contextual effects in primate visual cortex. Nature. 1997; 387: 73-76. doi: 10.1038/387073a0
13. Rossi AF, Desimone R, Ungerleider LG. Contextual modulation in primary visual cortex of macaques. J. Neurosci. 2001; 21: 1698-1709. doi: 10.1523/JNEUROSCI.21-05-01698.2001
14. Gheorghiu E, Kingdom FAA. Dynamics of contextual modulation of perceived shape in human vision. Sci. Rep. 2017; 7: 43274. doi: 10.1038/srep43274
15. Ziemba CM, Freeman J, Simoncelli EP, Movshon JA. Contextual modulation of sensitivity to naturalistic image structure in macaque V2. J. Neurophysiol. 2018; 120: 409-420. doi: 10.1152/jn.00900.2017
16. Quek GL, Peelen MV. Contextual and spatial associations between objects interactively modulate visual processing. Cereb. Cortex. 2020; 30: 6391-6404. doi: 10.1093/cercor/bhaa197
17. Pelli DG, Tillman KA. The uncrowded window of object recognition. Nat. Neurosci. 2008; 11: 1129-1135. doi: 10.1038/nn.2187
18. Whitney D, Levi DM. Visual crowding: A fundamental limit on conscious perception and object recognition. Trends Cogn. Sci. 2011; 15: 160-168. doi: 10.1016/j.tics.2011.02.005
19. Ozeki H, Sadakane O, Akasaki T, Naito T, Shimegi S, Sato H. Relationship between excitation and inhibition underlying size tuning and contextual response modulation in the cat primary visual cortex. J. Neurosci. 2004; 24: 1428-1438. doi: 10.1523/JNEUROSCI.3852-03.2004
20. Franceschiello B, Sarti A, Citti G. A neuromathematical model for geometrical optical illusions. J. Math. Imaging Vis. 2018; 60: 94-108. doi: 10.1007/s10851-017-0740-6
21. Mahmoodi S. Linear neural circuitry model for visual receptive fields. J. Math. Imaging Vis. 2016; 54: 138-161. doi: 10.1007/s10851-015-0594-8
22. Baspinar E, Citti G, Sarti A. A geometric model of multi-scale orientation preference maps via Gabor functions. J. Math. Imaging Vis. 2018; 60: 900-912. doi: 10.1007/s10851-018-0803-3
23. Adjamian P, Holliday IE, Barnes GR, Hillebrand A, Hadjipapas A, Singh KD. Induced visual illusions and gamma oscillations in human primary visual cortex. Eur. J. Neurosci. 2004; 20: 587-592. doi: 10.1111/j.1460-9568.2004.03495.x
24. King JL, Crowder NA. Adaptation to stimulus orientation in mouse primary visual cortex. Eur. J. Neurosci. 2018; 47: 346-357. doi: 10.1111/ejn.13830
25. Bharmauria V, Bachatene L, Cattan S, Brodeur S, Chanauria N, Rouat J, Molotchnikoff S. Network-selectivity and stimulus-discrimination in the primary visual cortex: Cell-assembly dynamics. Eur. J. Neurosci. 2016; 43: 204-219. doi: 10.1111/ejn.13101
26. Dai J, Wang Y. Contrast coding in the primary visual cortex depends on temporal contexts. Eur. J. Neurosci. 2018; 47: 947-958. doi: 10.1111/ejn.13893
27. Ghodrati M, Alwis DS, Price NSC. Orientation selectivity in rat primary visual cortex emerges earlier with low-contrast and high-luminance stimuli. Eur. J. Neurosci. 2016; 44: 2759-2773. doi: 10.1111/ejn.13379
28. Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature. 1996; 381: 607-609. doi: 10.1038/381607a0
29. Storrs KR, Anderson BL, Fleming RW. Unsupervised learning predicts human perception and misperception of gloss. Nat. Hum. Behav. 2021; 5: 1402-1417. doi: 10.1038/s41562-021-01097-6
30. Metzger A, Toscani M. Unsupervised learning of haptic material properties. eLife. 2022; 11: 64876. doi: 10.7554/eLife.64876
31. Fleming RW, Storrs KR. Learning to see stuff. Curr. Opin. Behav. Sci. 2019; 30: 100-108. doi: 10.1016/j.cobeha.2019.07.004
32. Flachot A, Gegenfurtner KR. Color for object recognition: Hue and chroma sensitivity in the deep features of convolutional neural networks. Vision Res. 2021; 182: 89-100. doi: 10.1016/j.visres.2020.09.010
33. Zipser K, Lamme VAF, Schiller PH. Contextual modulation in primary visual cortex. J. Neurosci. 1996; 16: 7376-7389. doi: 10.1523/JNEUROSCI.16-22-07376.1996
34. Kerr D, McGinnity TM, Coleman S, Clogenson M. A biologically inspired spiking model of visual processing for image feature detection. Neurocomputing. 2015; 158: 268-280. doi: 10.1016/j.neucom.2015.01.011
35. Pessoa L, Mingolla E, Neumann H. A contrast- and luminance-driven multiscale network model of brightness perception. Vision Res. 1995; 35: 2201-2223. doi: 10.1016/0042-6989(94)00313-0
36. Grossberg S, Kelly F. Neural dynamics of binocular brightness perception. Vision Res. 1999; 39: 3796-3816. doi: 10.1016/S0042-6989(99)00095-4
37. Keller AJ, Dipoppa M, Roth MM, Caudill MS, Ingrosso A, Miller KD, Scanziani M. A disinhibitory circuit for contextual modulation in primary visual cortex. Neuron. 2020; 108: 1181-1193.e8. doi: 10.1016/j.neuron.2020.11.013
38. Ursino M, La Cara GE. A model of contextual interactions and contour detection in primary visual cortex. Neural Netw. 2004; 17: 719-735. doi: 10.1016/j.neunet.2004.03.007
39. Murray RF. A model of lightness perception guided by probabilistic assumptions about lighting and reflectance. J. Vis. 2020; 20: 28. doi: 10.1167/jov.20.7.28
40. Orbán G, Berkes P, Fiser J, Lengyel M. Neural variability and sampling-based probabilistic representations in the visual cortex. Neuron. 2016; 92: 530-543. doi: 10.1016/j.neuron.2016.09.038
41. Zemel RS. Probabilistic interpretation of population codes. Neural Comput. 1998; 10: 403-430. doi: 10.1162/089976698300017818
42. Lloyd K, Leslie DS. Context-dependent decision-making: A simple Bayesian model. J. R. Soc. Interface. 2013; 10: 20130069.
43. Ye R, Liu X. How the known reference weakens the visual oblique effect: A Bayesian account of cognitive improvement by cue influence. Sci. Rep. 2020; 10: 20269. doi: 10.1038/s41598-020-76911-8
44. Allred SR, Brainard DH. A Bayesian model of lightness perception that incorporates spatial variation in the illumination. J. Vis. 2013; 13: 18. doi: 10.1167/13.7.18
45. Jonke Z, Legenstein R, Habenschuss S, Maass W. Feedback inhibition shapes emergent computational properties of cortical microcircuit motifs. J. Neurosci. 2017; 37: 8511-8523. doi: 10.1523/JNEUROSCI.2078-16.2017
46. Kappel D, Nessler B, Maass W. STDP installs in winner-take-all circuits an online approximation to hidden Markov model learning. PLoS Comput. Biol. 2014; 10: e1003511. doi: 10.1371/journal.pcbi.1003511
47. Klampfl S, Maass W. Emergence of dynamic memory traces in cortical microcircuit models through STDP. J. Neurosci. 2013; 33: 11515-11529. doi: 10.1523/JNEUROSCI.5044-12.2013
48. Abadi AK, Yahya K, Amini M, Friston K, Heinke D. Excitatory versus inhibitory feedback in Bayesian formulations of scene construction. J. R. Soc. Interface. 2019; 16: 20180344. doi: 10.1098/rsif.2018.0344
49. van Rossum MC, Bi GQ, Turrigiano GG. Stable Hebbian learning from spike timing-dependent plasticity. J. Neurosci. 2000; 20: 8812-8821. doi: 10.1523/JNEUROSCI.20-23-08812.2000
50. Van Rullen R, Thorpe SJ. Rate coding versus temporal order coding: What the retinal ganglion cells tell the visual cortex. Neural Comput. 2001; 13: 1255-1283. doi: 10.1162/08997660152002852
51. Avermann M, Tomm C, Mateo C, Gerstner W, Petersen CCH. Microcircuits of excitatory and inhibitory neurons in layer 2/3 of mouse barrel cortex. J. Neurophysiol. 2012; 107: 3116-3134. doi: 10.1152/jn.00917.2011
52. Liu W, Liu X. The effects of eye movements on the visual cortical responding variability based on a spiking network. Neurocomputing. 2021; 436: 58-73. doi: 10.1016/j.neucom.2021.01.013
53. Liu W, Liu X. Depth perception with interocular blur differences based on a spiking network. IEEE Access. 2022; 10: 11957-11978. doi: 10.1109/ACCESS.2022.3142044
54. Hussain Z, Webb BS, Astle AT, McGraw PV. Perceptual learning reduces crowding in amblyopia and in the normal periphery. J. Neurosci. 2012; 32: 474-480. doi: 10.1523/JNEUROSCI.3845-11.2012
55. Hata Y, Tsumoto T, Sato H, Tamura H. Horizontal interactions between visual cortical neurones studied by cross-correlation analysis in the cat. J. Physiol. 1991; 441: 593-614. doi: 10.1113/jphysiol.1991.sp018769
56. Masquelier T, Guyonneau R, Thorpe SJ. Competitive STDP-based spike pattern learning. Neural Comput. 2009; 12: 1259-1276. doi: 10.1162/neco.2008.06-08-804
57. Ding J, Levi DM. Binocular combination of luminance profiles. J. Vis. 2017; 17: 4. doi: 10.1167/17.13.4
58. Blakeslee B, Cope D, McCourt ME. The Oriented Difference of Gaussians (ODOG) model of brightness perception: Overview and executable Mathematica notebooks. Behav. Res. Methods. 2016; 48: 306-312. doi: 10.3758/s13428-015-0573-4
59. Benardete EA, Kaplan E. The receptive field of the primate P retinal ganglion cell, I: Linear dynamics. Vis. Neurosci. 1997; 14: 169-185. doi: 10.1017/S0952523800008853
60. Segal IY, Giladi C, Gedalin M, Rucci M, Ben-tov M, Kushinsky Y, Mokeichev A, Segev R. Decorrelation of retinal response to natural scenes by fixational eye movements. PNAS. 2015; 112: 3110-3115. doi: 10.1073/pnas.1412059112
61. Arsenault JT, Nelissen K, Jarraya B, Vanduffel W. Dopaminergic reward signals selectively decrease fMRI activity in primate visual cortex. Neuron. 2013; 77: 1174-1186. doi: 10.1016/j.neuron.2013.01.008
62. Rueckert E, Kappel D, Tanneberg D, Pecevski D, Peters J. Recurrent spiking networks solve planning tasks. Sci. Rep. 2016; 6: 21142. doi: 10.1038/srep21142
63. Legenstein R, Jonke Z, Habenschuss S, Maass W. A probabilistic model for learning in cortical microcircuit motifs with data-based divisive inhibition. arXiv:1707.05182v1 (2017).
64. Heinerman J, Haasdijk E, Eiben AE. Unsupervised identification and recognition of situations for high-dimensional sensori-motor streams. Neurocomputing. 2017; 262: 90-107. doi: 10.1016/j.neucom.2017.02.090
65. Agostini T, Proffitt DR. Perceptual organization evokes simultaneous lightness contrast. Perception. 1993; 22: 263-272. doi: 10.1068/p220263
66. Economou E, Zdravkovic S, Gilchrist A. Anchoring versus spatial filtering accounts of simultaneous lightness contrast. J. Vis. 2007; 7: 2. doi: 10.1167/7.12.2
67. Zhou H, Davidson M, Kok P, McCurdy LY, de Lange FP, Lau H, Sandberg K. Spatiotemporal dynamics of brightness coding in human visual cortex revealed by the temporal context effect. Neuroimage. 2020; 205: 116277. doi: 10.1016/j.neuroimage.2019.116277
68. Chen N, Bao P, Tjan BS. Contextual-dependent attention effect on crowded orientation signals in human visual cortex. J. Neurosci. 2018; 38: 8433-8440. doi: 10.1523/JNEUROSCI.0805-18.2018

