URL: https://neuroblogsdaily.com/
Submission: On February 07 via api from US — Scanned from NL
Press "Enter" to skip to content * * NeuroBlogs Daily February 7, 2024 Open Access Brain Science, Lectures, & Podcasts open menu Back * Science * Lectures * Blog * Humanities * French & Spanish SCIENCE Do Love You Me? Failure to Notice Word Transpositions is Induced by Parallel Word Processing Recent research has shown that readers may to fail notice word transpositions during reading (e.g., the transposition of “fail” and “to” in this sentence). Although this transposed word (TW) phenomenon was initially taken as evidence that readers process multiple words in parallel, several studies now show that TW-effects may also occur when words are presented one-by-one. Critically however, in the majority of studies TW-effects are weaker in serial presentation. Here we argue that while word position coding may to some extent proceed post-lexically (allowing TW-effects to occur despite seeing words one-by-one), stronger TW-effects in parallel presentation nonetheless evidence a degree of parallel word processing. We additionally report an experiment wherein a sample of Dutch participants (N = 34) made grammaticality judgments about 4-word TW sentences (e.g., ‘the was man here’, ‘the went dog away’) and ungrammatical control sentences (‘the man dog here’, ‘the was went away’), whereby the four words were presented either serially or in parallel. Ungrammaticality was decidedly more difficult to notice in the TW condition, but only when words were presented in parallel. No effects were observed in the serial presentation whatsoever. The present results bolster the notion that word order is encoded with a degree of flexibility, and further provide straightforward evidence for parallel word processing during reading. Learning Through Prediction: A Case of Verb Bias Learning Linguistic prediction, which emerges from acquired knowledge, is a pervasive process in language comprehension. In language acquisition theories, prediction has also been suggested as a key factor driving the implicit learning process. However, how prediction develops as learning unfolds and how it, in turn, drives the learning process remains unclear. This study examines the relationship between prediction and learning, with a focus on three key questions: (1) whether learning leads to prediction, (2) whether prediction motivates learning, and (3) whether individuals’ prediction skills are stable across tasks. We first replicated the malleability of verb bias in adults (Ryskin et al., 2017) and their ability to predict using verb semantics (Nation et al., 2003). Beyond replications, our results revealed that learners who successfully updated their verb biases showed a higher proportion of first fixation to the instruments than to the animals upon hearing an instrument-trained verb, indicating that individuals’ verb bias predictions were modulated by the success of learning, and they were able to use the newly learned verb bias knowledge to generate anticipatory eye movements after training. To understand whether prediction might in turn motivate learning, we found that the more divergent learners’ initial verb bias knowledge was from the received training type, the greater the learning effects occurred, linking prediction errors to learning outcomes. Finally, adults’ ability to predict linguistic items based on verb information remained stable across language tasks. Taken together, these results elucidate the dynamic interplay between prediction and learning, providing empirical support for prediction-based learning frameworks. 
Conveying and Detecting Listening During Live Conversation

Across all domains of human social life, positive perceptions of conversational listening (i.e., feeling heard) predict well-being, professional success, and interpersonal flourishing. However, a fundamental question remains: Are perceptions of listening accurate? Prior research has not empirically tested the extent to which humans can detect others’ cognitive engagement (attentiveness) during live conversation. Across five studies (total N = 1,225), using a combination of correlational and experimental methods, we find that perceivers struggle to distinguish between attentive and inattentive conversational listening. Though people’s listening fluctuated naturally throughout their conversations (people’s minds wandered away from the conversation 24% of the time), they were able to adjust their listening in line with instructions and incentives—by listening attentively, listening inattentively, or dividing their attention—and their conversation partners struggled to detect these differences. Specifically, speakers consistently overestimated their conversation partners’ attentiveness—often believing their partners were listening when they were not. Our results suggest this overestimation is (at least partly) due to the largely indistinguishable behavior of inattentive and attentive listeners. It appears that people can (and do) divide their attention during conversation and successfully feign attentiveness. Overestimating others’ attentiveness extended to third-party observers who were not immersed in the conversation, listeners who looked back on their own listening, and people interacting with partners who could not hear their words (but were incentivized to act like they could). Our work calls for a reexamination of a fundamental social behavior—listening—and underscores the distinction between feeling heard and being heard during live conversation.

Trained recurrent neural networks develop phase-locked limit cycles in a working memory task

Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or ‘frame of reference’. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: What kind of neural dynamics support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked to produce an oscillation such that the phase difference between the reference and output oscillation maintains the identity of transient stimuli. We found that networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of the attractor dynamics depends on both reference oscillation amplitude and frequency, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: one that generates an oscillation and one that implements a coupling function between the internal oscillation and the external reference.
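As a concrete illustration of the task structure described above (not the study's code), the sketch below builds one trial: a reference oscillation plus a transient one-hot stimulus cue as input, and a target output oscillation whose phase offset from the reference encodes the stimulus identity. The frequency, trial length, cue timing, and stimulus-to-phase mapping are assumed values.

```python
import numpy as np

# Minimal sketch of the phase-coding working-memory task described above:
# reference oscillation in, transient stimulus cue in, phase-shifted oscillation out.
# All parameter values here are illustrative choices, not those used in the study.

def make_trial(stim_id, n_stimuli=4, freq_hz=4.0, dt=0.01, t_total=3.0,
               cue_onset=0.5, cue_dur=0.2):
    t = np.arange(0.0, t_total, dt)
    reference = np.sin(2 * np.pi * freq_hz * t)          # external reference oscillation

    cue = np.zeros((len(t), n_stimuli))                   # transient one-hot stimulus cue
    cue[(t >= cue_onset) & (t < cue_onset + cue_dur), stim_id] = 1.0

    # Each stimulus identity is assigned a fixed phase offset; the network's target
    # is an oscillation shifted by that phase relative to the reference.
    phase = 2 * np.pi * stim_id / n_stimuli
    target = np.sin(2 * np.pi * freq_hz * t - phase)

    inputs = np.column_stack([reference, cue])            # shape (time, 1 + n_stimuli)
    return inputs, target

inputs, target = make_trial(stim_id=2)
print(inputs.shape, target.shape)   # (300, 5) (300,)
```

Any RNN trained on such trials must hold the cued identity after the cue ends, which is exactly the memory demand the abstract describes.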
Neural correlates of object identity and reward outcome in the sensory cortical-hippocampal hierarchy: coding of motivational information in perirhinal cortex

Neural circuits support behavioral adaptations by integrating sensory and motor information with reward and error-driven learning signals, but it remains poorly understood how these signals are distributed across different levels of the corticohippocampal hierarchy. We trained rats on a multisensory object-recognition task and compared visual and tactile responses of simultaneously recorded neuronal ensembles in somatosensory cortex, secondary visual cortex, perirhinal cortex, and hippocampus. The sensory regions primarily represented unisensory information, whereas hippocampus was modulated by both vision and touch. Surprisingly, the sensory cortices and the hippocampus coded object-specific information, whereas the perirhinal cortex did not. Instead, perirhinal cortical neurons signaled trial outcome upon reward-based feedback: the majority of outcome-related perirhinal cells responded to a negative outcome (reward omission), whereas a minority coded a positive outcome (reward delivery). Our results highlight a distributed neural coding of multisensory variables in the cortico-hippocampal hierarchy. Notably, the perirhinal cortex emerges as a crucial region for conveying motivational outcomes, whereas distinct functions related to object identity are observed in the sensory cortices and hippocampus.

GABAergic regulation of striatal spiny projection neurons depends upon their activity state

Synaptic transmission mediated by GABAA receptors (GABAARs) in adult, principal striatal spiny projection neurons (SPNs) can suppress ongoing spiking, but its effect on synaptic integration at subthreshold membrane potentials is less well characterized, particularly near the resting down-state. To fill this gap, a combination of molecular, optogenetic, optical, and electrophysiological approaches was used to study SPNs in mouse ex vivo brain slices, and computational tools were used to model somatodendritic synaptic integration. In perforated patch recordings, activation of GABAARs, either by uncaging of GABA or by optogenetic stimulation of GABAergic synapses, evoked currents with a reversal potential near −60 mV in both juvenile and adult SPNs. Transcriptomic analysis and pharmacological work suggested that this relatively positive GABAAR reversal potential was not attributable to NKCC1 expression, but rather to HCO3− permeability. Regardless, from down-state potentials, optogenetic activation of dendritic GABAergic synapses depolarized SPNs. This GABAAR-mediated depolarization summed with trailing ionotropic glutamate receptor (iGluR) stimulation, promoting dendritic spikes and increasing somatic depolarization. Simulations revealed that a diffuse dendritic GABAergic input to SPNs effectively enhanced the response to dendritic iGluR signaling and promoted dendritic spikes. Taken together, our results demonstrate that GABAARs can work in concert with iGluRs to excite adult SPNs when they are in the resting down-state, suggesting that their inhibitory role is limited to brief periods near spike threshold. This state dependence calls for a reformulation of the role of intrastriatal GABAergic circuits.
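The state dependence summarized above follows directly from a reversal potential near −60 mV. The back-of-the-envelope sketch below works through the GABA-A driving force at a down-state versus a near-threshold membrane potential; the conductance and membrane-potential values are illustrative assumptions, not measurements from the study.

```python
# Back-of-the-envelope sketch of the state dependence described above: with a
# GABA-A reversal potential near -60 mV (as reported for SPNs), the same synaptic
# conductance drives an inward (depolarizing) current from the resting down-state
# but an outward current near spike threshold. Values below are assumptions.

E_GABA = -60.0   # mV, GABA-A reversal potential reported above
g_GABA = 2.0     # nS, illustrative synaptic conductance

for label, v_m in [("resting down-state", -85.0), ("near spike threshold", -45.0)]:
    i_gaba = g_GABA * (v_m - E_GABA)   # pA; negative = inward current, i.e. depolarizing
    effect = "depolarizing" if i_gaba < 0 else "outward, opposes further depolarization"
    print(f"{label}: I_GABA = {i_gaba:+.0f} pA -> {effect}")
# resting down-state:   2 * (-85 + 60) = -50 pA (depolarizing)
# near spike threshold: 2 * (-45 + 60) = +30 pA (outward)
```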
The functional role of spatial anisotropies in ensemble perception

The human brain can rapidly represent sets of similar stimuli by their ensemble summary statistics, like the average orientation or size. Classic models assume that ensemble statistics are computed by integrating all elements with equal weight. Challenging this view, here we show that ensemble statistics are estimated by combining parafoveal and foveal statistics in proportion to their reliability. In a series of experiments, observers reproduced the average orientation of an ensemble of stimuli under varying levels of visual uncertainty.

Computational characterization of the role of an attention schema in controlling visuospatial attention

How does the brain control attention? The Attention Schema Theory suggests that the brain constructs an internal model of attention for its control. However, it remains unclear under which circumstances an attention schema is computationally useful, and whether it can emerge in a learning system without hard-wiring it. To address these questions, we trained a reinforcement learning agent with attention to track and catch a ball in a noisy environment. Crucially, the agent had additional neural resources that it could freely use. We asked under which conditions these additional resources develop an attention schema to track attention. We found that the more uncertain the agent was about the location of its attentional state, the more it benefited from these additional resources, which developed an attention schema. Together, these results indicate that an attention schema emerges in simple learning systems where attention is both important and difficult to track.

* NTS 278: Lori Holt PhD on Categorical Conception of Speech Sounds
* Cortex Cast: Dr. Julia Harris on Sleep & Scents
* Huberman Lab: Dr. Kay Tye on The Biology of Social Interactions and Emotions
* JNP Micro Podcasts: Visual Strategy and Force-Steadiness in Older Adults
* BigBrains: What our hands reveal about our thoughts, with Susan Goldin-Meadow (Ep. 128)
* PBtS 746: Dr. Eric Skaar: Investigating the Intersection of Nutrition and Bacterial Infection and Pathogenesis

NeuroBlogs Daily
Open Access Brain Science, Lectures, & Podcasts
created by https://andyadkins.com

Blogged
* Polyphenols, Mitochondria, Glial Cells & Neurodegenerative Pathway Studies (draft)
* Week in Review
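For the ensemble-perception abstract above, combining parafoveal and foveal statistics "in proportion to their reliability" corresponds to standard inverse-variance weighting. The sketch below is a minimal illustration under that assumption; the orientation means and noise variances are invented, not the paper's data.

```python
# Minimal sketch of reliability-weighted (inverse-variance) combination, as in the
# ensemble-perception abstract above. Means and variances are invented for illustration.

def combine(mean_fovea, var_fovea, mean_parafovea, var_parafovea):
    """Weight each source by its reliability (inverse variance) and average."""
    w_fovea = (1.0 / var_fovea) / (1.0 / var_fovea + 1.0 / var_parafovea)
    return w_fovea * mean_fovea + (1.0 - w_fovea) * mean_parafovea

# When the foveal estimate is four times as reliable, it receives weight 0.8,
# so the combined estimate (12 deg) sits much closer to the foveal value.
print(combine(mean_fovea=10.0, var_fovea=4.0, mean_parafovea=20.0, var_parafovea=16.0))  # 12.0
```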
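For the attention-schema abstract above, the sketch below lays out the kind of noisy ball-catching task it describes: the agent observes the ball reliably only inside a movable attention window, and the attention state itself is perturbed by noise, which is exactly what additional "schema" resources would need to track. The environment class, grid size, noise level, and reward are assumptions for illustration, not the study's implementation.

```python
import numpy as np

# Illustrative environment for a noisy attention-gated ball-catching task
# (an assumed setup in the spirit of the study, not its actual code).

class NoisyBallCatchEnv:
    def __init__(self, width=10, height=10, attn_noise=1.0, rng=None):
        self.width, self.height = width, height
        self.attn_noise = attn_noise
        self.rng = np.random.default_rng() if rng is None else rng

    def reset(self):
        self.ball_x = int(self.rng.integers(self.width))
        self.ball_y = self.height - 1
        self.paddle_x = self.width // 2
        self.attn_x = self.width // 2
        return self._observe()

    def step(self, paddle_move, attn_move):
        # Attention moves where commanded, plus noise -- so its true location is
        # uncertain and is what an internal attention schema would need to track.
        self.attn_x = int(np.clip(self.attn_x + attn_move
                                  + self.rng.normal(0, self.attn_noise), 0, self.width - 1))
        self.paddle_x = int(np.clip(self.paddle_x + paddle_move, 0, self.width - 1))
        self.ball_y -= 1
        done = self.ball_y == 0
        reward = float(done and self.ball_x == self.paddle_x)
        return self._observe(), reward, done

    def _observe(self):
        # The ball position is reported reliably only inside the attention window.
        if abs(self.ball_x - self.attn_x) <= 1:
            return self.ball_x, self.ball_y
        return None, self.ball_y   # ball location unavailable outside the window

env = NoisyBallCatchEnv()
obs = env.reset()
obs, reward, done = env.step(paddle_move=1, attn_move=-1)
```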