|Proposers||Andreas Galka (The Institute of Statistical Mathematics) / Xiaochuan Pan (Tamagawa University)|
In contemporary neuroscience research, extensive spatiotemporal data sets are recorded, reflecting the electromagnetic, metabolic, or chemical processes in neural assemblies, from the level of single cells to the entire brain. Such data sets pose new challenges for quantitative analysis. We will discuss how state-space modelling can be applied to this situation, with particular emphasis on two aspects, namely dynamical source estimation and independent component analysis.
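As background for the talk, the core idea of state-space modelling can be illustrated with a minimal linear Gaussian model and a Kalman filter: a hidden "source" state evolves over time and is estimated from noisy measurements. All matrices and parameter values below are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of state-space estimation: a hidden source x_t evolves
# with linear dynamics, we observe a noisy y_t, and a Kalman filter
# recursively recovers x_t. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

a = 0.95   # state transition coefficient (assumed dynamics)
c = 1.0    # observation coefficient
q = 0.01   # process noise variance
r = 0.1    # observation noise variance

# Simulate a latent source and its noisy measurement
T = 200
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
    y[t] = c * x[t] + rng.normal(0.0, np.sqrt(r))

# Kalman filter: predict, then correct with the new observation
x_hat = np.zeros(T)
P = 1.0  # state estimate variance
for t in range(1, T):
    x_pred = a * x_hat[t - 1]
    P_pred = a * P * a + q
    K = P_pred * c / (c * P_pred * c + r)       # Kalman gain
    x_hat[t] = x_pred + K * (y[t] - c * x_pred)
    P = (1.0 - K * c) * P_pred

# Filtering should reduce error relative to the raw observations
err_raw = float(np.mean((y - x) ** 2))
err_filt = float(np.mean((x_hat - x) ** 2))
```

The same predict/correct structure underlies dynamical source estimation from spatiotemporal recordings, where the state and observation become vectors and matrices.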
Prefrontal neurons are involved in predicting reward through the integration of learned associations.
To adapt to a changing environment, it is important for animals not only to behave on the basis of previously acquired experiences but also to accommodate novel situations by generating appropriate predictions from learned knowledge. Some psychological experiments suggest that animals have this ability. The purpose of this study is to examine the neuronal mechanism involved in predicting reward based on the integration of learned associations. Two monkeys (Macaca fuscata) were trained to perform a sequential association task with asymmetric reward. In this task, the first cue, A1 (or A2), was presented briefly. After a delay, the second cues, B1 and B2, were shown and the monkey had to make a saccadic eye movement to B1 (or B2). Then the third cues, C1 and C2, were displayed and the monkey had to select C1 (or C2). The two correct association chains were A1->B1->C1 and A2->B2->C2. The asymmetric reward rule was introduced block by block in random order: in one block the A1-chain was rewarded while the A2-chain was not, and in the other block vice versa. At the beginning of each block, reward instruction trials were inserted to indicate which group would be rewarded in the following sequence task, by pairing C (C1 or C2) with the reward. The monkeys were also trained with two different stimulus orders (B->C->A and C->A->B). Behavioral results indicate that the monkeys could predict reward from the first cues in the sequence task. Out of 337 neurons recorded from the lateral PFC, 30% showed reward-related activity in the first cue period. There were two types of reward-related cells: reward type (R type) and stimulus-reward type (SR type). R type cells predicted reward independently of the visual stimuli, whereas SR type cells predicted reward only when a preferred stimulus was presented as the first cue. Interestingly, this preference was based not on the visual properties of the stimulus but on the stimulus group (e.g., a neuron that prefers A1 also prefers B1 and C1 rather than B2 and C2).
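The chain-and-block structure of the task can be sketched as follows. The function and variable names (`run_trial`, `run_block`, `CHAINS`) and the simplified reward logic are illustrative assumptions, not code or procedures from the study.

```python
# Hypothetical sketch of the sequential association task: two learned
# chains (A1->B1->C1 and A2->B2->C2) and a block-wise asymmetric reward
# rule in which only one chain is rewarded per block.
import random

CHAINS = {"A1": ["A1", "B1", "C1"], "A2": ["A2", "B2", "C2"]}


def run_trial(first_cue, rewarded_chain):
    """Follow the correct chain for the first cue; reward depends on the block rule."""
    chain = CHAINS[first_cue]
    rewarded = (first_cue == rewarded_chain)
    return chain, rewarded


def run_block(rewarded_chain, n_trials=8):
    """One block: the asymmetric rule fixes which chain earns reward throughout."""
    results = []
    for _ in range(n_trials):
        first_cue = random.choice(["A1", "A2"])
        results.append(run_trial(first_cue, rewarded_chain))
    return results


# In one block the A1-chain is rewarded; in the other, the A2-chain.
block = run_block("A1")
```

The behavioral finding that monkeys predicted reward from the first cue corresponds, in this sketch, to knowing `rewarded` as soon as `first_cue` appears, before the rest of the chain is shown.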
In the third experiment, new stimuli were learned to be associated with B1 and B2. Both the monkeys and the R type cells could predict reward from these new stimuli on their first presentation. These results suggest that prefrontal neurons are involved in predicting reward through the integration of learned associations.