Past Seminars

Archive of past seminars.


2025

2025/12/10 Wed 14:30-16:30 (JST)

Revisiting Sperry in the Age of Connectomics: Genetic Rules of Brain-Wide Wiring

Nagoya University

Abstract: Understanding how brain-wide neural circuits are genetically organized remains one of the fundamental challenges in neuroscience. Roger Sperry's classical chemoaffinity theory proposed that molecular gradients provide positional cues for axonal wiring, yet its application has been largely limited to localized sensory systems. Here, we introduce SPERRFY (Spatial Positional Encoding for Reconstructing Rules of axonal Fiber connectivitY), a data-driven framework to examine whether Sperry's concept can be generalized to the entire brain. By integrating mesoscale connectomic data with spatial transcriptomic maps from the Allen Mouse Brain Atlas, SPERRFY applies canonical correlation analysis (CCA) to identify latent correlated structures between gene-expression and connectivity spaces, thereby inferring positional gradients that may reflect molecular constraints underlying long-range axonal organization. This framework bridges molecular and anatomical levels of organization, providing a quantitative basis for reinterpreting Sperry's chemoaffinity theory in the context of whole-brain connectomics.
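
For readers curious about the statistical core of this approach, the snippet below is a minimal sketch of canonical correlation analysis between a gene-expression matrix and a connectivity matrix defined over the same brain locations. It uses scikit-learn and invented synthetic data, not the SPERRFY code or the Allen Atlas; all sizes and the shared latent "gradients" are assumptions made for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_locations, n_genes, n_targets = 500, 40, 30

# Toy assumption: a few shared latent "positional" gradients drive both modalities
latent = rng.normal(size=(n_locations, 3))
gene_expression = latent @ rng.normal(size=(3, n_genes)) + 0.5 * rng.normal(size=(n_locations, n_genes))
connectivity = latent @ rng.normal(size=(3, n_targets)) + 0.5 * rng.normal(size=(n_locations, n_targets))

cca = CCA(n_components=3)
cca.fit(gene_expression, connectivity)
gene_scores, conn_scores = cca.transform(gene_expression, connectivity)

# Canonical correlations: how strongly each inferred gradient links the two spaces
for k in range(3):
    r = np.corrcoef(gene_scores[:, k], conn_scores[:, k])[0, 1]
    print(f"canonical component {k}: r = {r:.2f}")
```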

2025/10/7 Tue 13:00-14:30 (JST)

Behavioural and neural mechanism of social metacognition in predicting others' performance

RIKEN CBS

Abstract: When we collaborate with others to tackle new challenges, we anticipate the roles that each team member will play, consider our own role based on these predictions, and adjust our behaviour accordingly. Because each member differs in experience and skills, it is necessary to flexibly adapt the strategy used for prediction. For example, when interacting with beginners who have less career experience, we can make reasonable predictions about their performance by projecting and adjusting our own introspection ('social metacognition'). In contrast, the same strategy cannot be applied to experts with more extensive experience. In this talk, I will introduce our latest research addressing this issue. Specifically, we found that the anterior lateral prefrontal cortex (area 47) is engaged when predicting the thoughts of beginners through social metacognition, whereas the temporoparietal junction (TPJ) is recruited when predicting the thoughts of experts based on heuristics. The lecture will begin with an overview of the foundations of metacognition and then present the studies that led to these findings.

2025/8/27 Wed 15:00-16:30 (JST)

Dissecting the contribution to perceptual decisions of encoding and readout of neural information

University Medical Center Hamburg-Eppendorf

Abstract: Perceptual decisions require that neural populations encode information about the sensory environment and that other downstream populations read it out to inform behavioral outputs. Here we present our computational work on methods that individuate and tease apart the contributions of these two neural operations to the formation of perceptual decisions. We exemplify these methods with the study of several datasets from sensory and parietal cortices, and we discuss their implications for understanding the emergent computations of neural population codes.
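
As a toy illustration of why encoding and readout need to be teased apart (synthetic data and invented parameters, not the speaker's method or datasets): in the sketch below, every neuron encodes the stimulus equally well, but the behavioural readout weights only half of them, so stimulus information and choice-related signals dissociate across the population.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons = 5000, 20

stimulus = rng.integers(0, 2, n_trials)                                   # binary stimulus
responses = stimulus[:, None] + rng.normal(0, 1, (n_trials, n_neurons))   # every neuron encodes it

w_read = np.zeros(n_neurons)
w_read[: n_neurons // 2] = 1.0                                            # readout uses only the first half
decision_var = responses @ w_read + rng.normal(0, 1, n_trials)
choice = (decision_var > w_read.sum() / 2).astype(float)

# Per-neuron stimulus encoding (d') versus correlation with the behavioural choice
d_prime = responses[stimulus == 1].mean(0) - responses[stimulus == 0].mean(0)
choice_corr = np.array([np.corrcoef(responses[:, i], choice)[0, 1] for i in range(n_neurons)])

half = n_neurons // 2
print("mean d'          (read-out / ignored):", d_prime[:half].mean().round(2), d_prime[half:].mean().round(2))
print("mean choice corr (read-out / ignored):", choice_corr[:half].mean().round(2), choice_corr[half:].mean().round(2))
```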

2025/6/13 Fri 10:30-12:00 (JST)

Integrating multimodal data for bio-realistic simulations of brain circuits

Allen Institute

Abstract: A central question in neuroscience is how the structure of the brain determines its activity and function. To explore this systematically, we develop large-scale bio-realistic simulations of brain circuits, which integrate a broad array of experimental data: the distribution and morpho-electric properties of different neuron types; connection probabilities, synaptic weights, axonal delays, and dendritic targeting rules; and a representation of inputs into the simulated circuits from other parts of the brain. We will discuss this approach focusing on the 230,000-neuron model of mouse primary visual cortex (area V1). Simulations of neural activity in the model match experimental recordings in vivo on a number of metrics, such as firing rates, direction selectivity, and others. Applications include the following problems of broad interest: understanding how the architecture of a brain circuit gives rise to the observed functional activity; learning of behavioral and computational tasks in biological and artificial networks; and generation of the extracellular electric potential due to synaptic activity in the cortex. The model is shared freely with the community via brain-map.org, as are the datasets it is based on.
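
The sketch below is a vastly simplified toy, not the Allen V1 model described in the abstract (which integrates measured cell types, connectivity rules, and external inputs, and is distributed via brain-map.org): a random network of leaky integrate-and-fire neurons in plain NumPy, shown only to illustrate how circuit structure translates into firing-rate statistics. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_conn = 1000, 0.1                    # neurons and connection probability
tau, v_th, v_reset = 20.0, 20.0, 0.0     # membrane time constant (ms), threshold, reset (mV)
dt, t_sim = 0.1, 1000.0                  # integration step and duration (ms)

# Random connectivity: 80% excitatory, 20% inhibitory (stronger, negative) neurons
J = (rng.random((n, n)) < p_conn) * 0.1            # excitatory weight 0.1 mV
J[:, int(0.8 * n):] *= -4.0                        # columns of inhibitory presynaptic neurons
np.fill_diagonal(J, 0.0)

v = rng.uniform(0.0, v_th, n)                      # random initial membrane potentials
spike_count = np.zeros(n)

for _ in range(int(t_sim / dt)):
    spikes = v >= v_th
    spike_count += spikes
    v[spikes] = v_reset
    external = 1.5 * dt + 0.5 * np.sqrt(dt) * rng.normal(size=n)   # noisy external drive (mV)
    v = v + dt * (-v / tau) + J @ spikes + external

rates = spike_count / (t_sim / 1000.0)             # firing rate in Hz
print(f"mean rate: {rates.mean():.1f} Hz (excitatory {rates[:800].mean():.1f}, inhibitory {rates[800:].mean():.1f})")
```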

2025/3/27 Thu 17:30-19:00 (JST)

A computational approach to evaluate how molecular mechanisms impact large-scale brain activity

NeuroPSI

Abstract: This seminar will present the authors' recent preprint on a computational approach to evaluate how molecular mechanisms impact large-scale brain activity.

2025/2/25 Tue 17:00-18:30 (JST)

EBRAINS Seminar D: High Performance Computing and Co-Design

Jan Bjaalie
Ekaterina Zossimova (Co-Design & Science Support)
Lena Oden (EBRAINS Base infrastructure and HPC services)

Topics: Co-Design & Science Support, EBRAINS Base infrastructure and HPC services

2025/2/18 Tue 17:00-18:30 (JST)

EBRAINS Seminar C: Research Infrastructure and Education

Wouter Klijn (EBRAINS-RI architecture)
Franziska Vogel (EBRAINS Education)

Topics: EBRAINS-RI architecture, multiple scales of complexity, EBRAINS Education and events for early career researchers

2025/2/17 Mon 10:30- (JST)

9th Seminar: High-dimensional interpretable factor analysis via penalization

Kyushu University

Place: Zoom (the Zoom link will be provided upon registration)

Abstract: Factor analysis is a statistical method for identifying latent factors from the correlation structures of high-dimensional data. It was originally developed for applications in social and behavioral sciences but has since been applied to various research fields, including the natural sciences. An advantage of factor analysis is that it leads to interpretable latent factors, enabling applications such as the identification of active brain regions in neuroscience. In this study, we propose a penalized maximum likelihood estimation method aimed at enhancing the interpretability of latent factors. In particular, the Prenet (Product-based elastic net) penalization allows for the estimation of a perfect simple structure, a desirable characteristic in the factor analysis literature. The usefulness of the proposed method is investigated through real data analyses. Finally, we discuss potential extensions and applications of the proposed method in neuroscience.
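
As a small, self-contained illustration of interpretable factor analysis (synthetic data; scikit-learn does not implement the Prenet penalty, so a varimax rotation is used here only as a stand-in for recovering a simple loading structure):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_samples, n_features, n_factors = 1000, 12, 3

# Ground-truth loadings with a perfect simple structure (each variable loads on one factor)
true_loadings = np.zeros((n_features, n_factors))
for j in range(n_features):
    true_loadings[j, j % n_factors] = 0.9

factors = rng.normal(size=(n_samples, n_factors))
X = factors @ true_loadings.T + 0.3 * rng.normal(size=(n_samples, n_features))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(X)
print(np.round(fa.components_.T, 2))   # estimated loadings: one dominant factor per variable
```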

2025/1/28 Tue 17:00- (JST)

Collaborative Seminar with EBRAINS

Susanne Kunkel, Thorsten Hater, and others
EBRAINS Research Infrastructure

Date & Contents: 1/28 17:00- (JST)

  • 17:00 - 17:30 (Susanne Kunkel): The NEST ecosystem: A key enabler of efficient brain-scale spiking network simulation and sustainable neuroscience research
  • 17:30 - 18:00 (Thorsten Hater): Multiscale Simulations of Full Brain Models using Arbor and TVB
  • 18:00 - 18:30 (Various): Flashlight talks on developing integrated EBRAINS-RI workflows:
      · Sharon Yates: QUINT workflow
      · Giulia De Bonis: CobraWAP workflow
      · Wouter Klijn: Virtual Brain Twin workflow

The NEST ecosystem: A key enabler of efficient brain-scale spiking network simulation and sustainable neuroscience research

NEST [2] is a powerful simulation engine that has evolved with the neuroscience community over a quarter-century. The simulator has been continuously advanced and extended to tackle new scientific questions and to push the boundaries of large-scale brain simulation at the resolution of single neurons and synapses. The code is extremely scalable, and recent technological advancements also target GPU-based systems [3]. The graphical frontend NEST Desktop [4] and the modelling language NESTML [5] make the NEST ecosystem easily accessible to new users.

The NEST ecosystem is open source and developed under the umbrella of the NEST Initiative [1]. This community organization promotes the use of standard high-quality tools in neuroscience and related fields such as artificial intelligence and neurorobotics, thus fostering sustainable research. In the EBRAINS research infrastructure, NEST provides key functionality for research into the dynamics, structure, and function of brain-scale spiking networks [6].
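
For orientation, here is a minimal PyNEST sketch (assuming NEST 3.x; the model choice and all parameter values are illustrative, not taken from the talk): a single Poisson-driven integrate-and-fire neuron whose spikes are recorded.

```python
import nest

nest.ResetKernel()

neuron = nest.Create("iaf_psc_alpha", params={"I_e": 200.0})       # LIF neuron with a bias current
noise = nest.Create("poisson_generator", params={"rate": 8000.0})  # external Poisson drive (Hz)
recorder = nest.Create("spike_recorder")

nest.Connect(noise, neuron, syn_spec={"weight": 1.0, "delay": 1.0})
nest.Connect(neuron, recorder)

nest.Simulate(1000.0)                                              # 1 s of biological time
print("spike times (ms):", recorder.get("events")["times"])
```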

Multiscale Simulations of Full Brain Models using Arbor and TVB

J. Courson, T. Manos (ETIS Lab, ENSEA, CNRS, UMR8051, CY Cergy-Paris University)
S. Diaz, T. Hater, H. Lu, M. v.d.Vlag (Forschungszentrum Jülich)

Abstract: Simulating full-brain dynamics at neuron resolution is a challenge still far out of the reach of computational neuroscience. Neural mass models, as implemented in The Virtual Brain (TVB) [^tvb], capture whole-brain dynamics on a coarse or fine grid at the brain-area level and have made personalized medicine and clinical interventions feasible [^tvb-epilepsy]. At the other end of the spectrum, computational tools like Arbor [^arb] enable us to model neurons with full morphological details, but with only fractions of the required counts of neurons and synapses. In this study, we present an attempt to combine TVB and Arbor in a practical approach: simultaneously simulating a given brain region at the microscopic level while neural mass models provide a salient approximation of the neural activity of the remaining brain areas. We build a scalable co-simulation workflow by leveraging the interfaces provided by both Arbor and TVB to communicate with other external simulators. The resulting framework is flexible, as it employs existing models available in both simulators with only minor changes. Additionally, synaptic connections bridging models between TVB and Arbor are specified. This feature allows for swapping dynamics on a per-region (TVB) or per-cell (Arbor) basis without redefining the workflow. Thus, different parts of the brain can be modeled and simulated in detail without sweeping changes to the overall model. We show here some preliminary applications of this Arbor-TVB framework and workflow to simulate the dynamics of seizure propagation.
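
To make the coupling idea concrete, the following is a purely schematic sketch of such a co-simulation loop in plain Python. The two step functions are hypothetical stand-ins, not the real Arbor or TVB interfaces, and every number is invented; the point is only to show how one detailed region and the neural-mass rest of the brain exchange activity at a fixed interval.

```python
import numpy as np

rng = np.random.default_rng(0)
DT, T_TOTAL = 1.0, 1000.0        # exchange interval and total time (ms), illustrative
N_REGIONS = 84                   # number of neural-mass regions (illustrative)

def step_neural_masses(state, detailed_rate, dt):
    """Hypothetical stand-in for one TVB update of all mass models,
    with the detailed region's firing rate fed in as one node's activity."""
    coupling = 0.1 * (state.mean() - state) + 0.05 * detailed_rate
    return state + dt * (-state + np.tanh(coupling))            # toy relaxation dynamics

def step_detailed_region(afferent_drive, dt):
    """Hypothetical stand-in for one Arbor update of the spiking region,
    returning its population firing rate given the mass-model drive."""
    return max(0.0, 5.0 + afferent_drive + 0.01 * rng.normal())  # toy baseline rate plus drive

state, rate = np.zeros(N_REGIONS), 0.0
for _ in range(int(T_TOTAL / DT)):
    drive = float(state.mean())                    # mass activity -> detailed region input
    rate = step_detailed_region(drive, DT)         # "Arbor" side
    state = step_neural_masses(state, rate, DT)    # "TVB" side

print("final mean regional activity:", round(float(state.mean()), 3))
```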

References:

1. Abi Akar, N., et al. (2019). A Morphologically-Detailed Neural Network Simulation Library for Contemporary High-Performance Computing Architectures.
2. Jirsa, V.K., et al. (2017). The Virtual Epileptic Patient: Individualized whole-brain models of epilepsy spread.
3. Sanz Leon, P., et al. (2013). The Virtual Brain: a simulator of primate brain network dynamics.
4. Depannemaecker, D., et al. (2022). A unified physiological framework of transitions between seizures, sustained ictal activity and depolarization block at the single neuron level.

2025/1/16 Thu 13:00- (JST)

3rd Digital Brain Tutorial

Kyushu University

Place: Zoom (the Zoom link will be provided upon registration)

Abstract: Japan has long maintained a high standard in the mathematical sciences. Amid the ongoing digital transformation of society, however, there is now a pressing need to build a new platform for mathematical collaboration that responds to challenges from industry, other scientific fields, and society, and that co-creates value with them. To answer this call as a whole mathematics community and to build an all-Japan platform for realizing integrated knowledge, the Mathematics for Industry Platform (MfIP) was launched in October 2023 with the participation of 17 institutions across Japan related to mathematics and the mathematical sciences. As part of MfIP's activities, we have been developing collaborations with the neuroscience community, including a collaboration-exploration workshop (December 28, 2023, Nihonbashi Life Science Building) and the Digital Brain Seminar/Workshop (September 19-21, 2024, Nihonbashi Life Science Building). In this tutorial, aiming to deepen this collaboration further, we will introduce MfIP's activities and the research seeds of the mathematicians affiliated with its partner institutions.

2025/1/16 Thu 9:00- (JST)

Collaborative Seminar with Allen Institute: Science at the Allen Institute for Neural Dynamics (AIND)

Karel Svoboda
Allen Institute for Neural Dynamics

Place: Zoom (the Zoom link will be provided upon registration)

Abstract: Our goal is to discover how the brain produces our emotions, memories, and actions. Answers will be in terms of neural activity in defined neuron types interacting across the whole brain and body. To advance our goals, we are developing next-generation neurotechnologies. We are also committed to Open Science: knowledge, data, and tools will be widely shared, to facilitate science elsewhere and to support the development of therapies for brain disorders. I will describe how AIND scientists and engineers organize for team science. I will then discuss a few recent neurotechnological and scientific advances.

2025/1/7 Tue 15:00-16:30 (JST)

Emergence of Cognitive Functions in Natural and Artificial Neural Networks

KAIST

Abstract: How do the diverse functions of the brain originate? Understanding the developmental mechanisms that underlie brain functions is a fundamental question in neuroscience, with significant implications for research on artificial neural networks. This talk will introduce principles related to these developmental mechanisms, which differ notably from the data-driven learning paradigms predominantly used in AI. I will present our recent findings demonstrating that early functional circuits and cognitive functions in the brain can emerge spontaneously, even in the complete absence of training. Using a biologically inspired neural network model, I will first show how regularly structured functional maps can arise from simple local interactions between individual cells. I will discuss how evolutionary variations in physical parameters may lead to the development of distinct functional circuitry in the brain. Next, I will demonstrate that higher cognitive functions, such as visual quantity estimation and primitive object detection, can also emerge spontaneously in untrained neural networks. I will argue that random feedforward connections in early circuits may be sufficient to initiate functional circuits. These findings suggest that early visual functions can emerge from the statistical properties of bottom-up projections in hierarchical neural networks, providing insight into the origins of primitive functions in the brain.
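
A toy numerical illustration of this theme (written for this page, not the speaker's model): units in a randomly weighted, completely untrained feedforward layer already show orientation-selective responses to grating stimuli, i.e. stimulus tuning can exist before any learning. All stimuli and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
size, n_units = 16, 200

def grating(theta, size=size):
    """A simple oriented sinusoidal grating image."""
    y, x = np.mgrid[0:size, 0:size] / size
    return np.sin(2 * np.pi * 4 * (x * np.cos(theta) + y * np.sin(theta)))

orientations = np.linspace(0, np.pi, 12, endpoint=False)
stimuli = np.stack([grating(t).ravel() for t in orientations])       # (12, 256)

W = rng.normal(0, 1, (n_units, size * size))                         # random, untrained weights
responses = np.maximum(W @ stimuli.T, 0)                             # ReLU responses, (units, orientations)

# Orientation selectivity index per unit: (max - min) / (max + min)
r_max, r_min = responses.max(1), responses.min(1)
osi = (r_max - r_min) / (r_max + r_min + 1e-9)
print(f"median orientation selectivity of untrained units: {np.median(osi):.2f}")
```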

2024

2024/12/3 Tue 17:00-18:30 JST, 9:00 - 10:30 CET

Introduction to the EBRAINS-RI – An overview of the Infrastructure, Data sharing and Brain Atlas

Katrin Amunts, Trygve Leergaard, Oliver Schmid
EBRAINS Research Infrastructure

Speakers and Titles:

  • Prof. Dr. med. Katrin Amunts (30 Min.): EBRAINS – concepts, services and applications
  • Prof. Trygve Leergaard (30 Min.): Brain Atlases
  • Oliver Schmid (30 Min.): The EBRAINS Knowledge Graph - a scientific metadata management solution

Abstract: This seminar, as part of an ongoing series, will introduce the EBRAINS-RI, featuring a presentation by Prof. Katrin Amunts, co-CEO of EBRAINS. The session will also provide an overview of two key components of the infrastructure: the EBRAINS Knowledge Graph (KG) and the EBRAINS Brain Atlases, presented by Oliver Schmid and Trygve Leergaard, respectively. The EBRAINS Knowledge Graph is the metadata management system that underpins EBRAINS' Data and Knowledge Services. It offers essential tools and services designed to make neuroscientific data, models, and software FAIR (Findable, Accessible, Interoperable, and Reusable). EBRAINS Brain Atlases provide comprehensive maps of brain regions defined based on structure, function, and neural connections. As spatial reference systems for neuroscience, they are essential for understanding the complexity of the healthy brain, studying brain disorders, and developing new treatments.

2024/11/12 Tue 18:00-19:30 (JST)

Virtual Brain Twins in Medicine

Institut de Neurosciences des Systèmes

Abstract: In the past twenty years, we have made significant progress in creating digital models of an individual’s brain, so-called virtual brain twins. By combining brain imaging data with mathematical models, we can predict outcomes more accurately than using each method separately. Our approach has helped us understand normal brain states, their operation, and conditions like healthy aging, dementia and epilepsy. Using a combination of computational modeling and dynamical systems analysis, we provide a mechanistic description of the formation of the resting-state manifold via the network connectivity. We demonstrate that the symmetry breaking by the connectivity creates a characteristic flow on the manifold, which produces the major data features across scales and imaging modalities. These include spontaneous high amplitude co-activations, neuronal cascades, spectral cortical gradients, multistability, and characteristic functional connectivity dynamics. When aggregated across cortical hierarchies, these match the profiles from empirical data and explain features of the brain’s microstate organization. The digital brain twin augments the value of empirical data by completing missing data, allowing clinical hypothesis testing and optimizing treatment strategies for the individual patient. Virtual Brain Twins are part of the European infrastructure called EBRAINS, which supports researchers worldwide in digital neuroscience.
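
As a toy illustration of the idea that structural connectivity shapes collective dynamics (plain NumPy with an invented random "connectome"; this is not TVB code and none of the parameters come from the talk): phase oscillators coupled through a connectivity matrix, with a simple synchrony measure as the readout.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 68                                                     # number of regions (illustrative)
C = rng.random((n, n)) * (rng.random((n, n)) < 0.2)        # sparse random "connectome"
np.fill_diagonal(C, 0.0)

omega = 2 * np.pi * rng.normal(10.0, 1.0, n)               # natural frequencies (~10 Hz)
theta = rng.uniform(0, 2 * np.pi, n)
k, dt = 0.5, 1e-3                                          # coupling strength and step (s)

order = []
for _ in range(5000):
    # Kuramoto-style coupling: each region is pulled by its connected neighbours
    coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + k * coupling)
    order.append(abs(np.exp(1j * theta).mean()))           # Kuramoto order parameter

print("mean synchrony over the last second:", round(float(np.mean(order[-1000:])), 3))
```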

2024/10/22 Tue 13:00-14:00 (JST)

Hands-on tutorial for OptiNiSt

Yukako Yamane
OIST

Abstract: OptiNiSt (https://optinist.readthedocs.io/) is an open-source software tool that helps you build calcium imaging data analysis pipelines by comparing and combining multiple tools through a graphical user interface, and produce data-processing scripts. To promote this tool, developed by the Brain/MINDS project for reproducibility and standardization of neural data analysis, we will hold a hands-on tutorial session. You can use your own laptop to experience how to build a data analysis pipeline. We recommend downloading the Docker image of OptiNiSt according to the instructions (https://github.com/oist/optinist_tutorial_preparation).

2024/9/30 Mon 13:00-14:00 (JST)

Tutorial on NIfTI Files, 3D Slicer and Image Registration using ANTs

Rui Gong
ExCELLS, NINS

Abstract: In this tutorial, we will go over how to use 3D Slicer to load and view NIfTI data, a brain atlas, and labels. We will use an ex vivo mouse brain dataset as an example, perform image registration to a reference template (Turone Mouse Brain Template) space, and then apply the inverse transforms to the atlas to view it in the individual space.
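
For reference, the steps described above can be scripted along these lines with NiBabel and ANTsPy (file names here are placeholders and the registration settings are illustrative; the tutorial itself uses 3D Slicer interactively and may configure ANTs differently):

```python
import ants
import nibabel as nib

# 1. Load and inspect a NIfTI volume
img = nib.load("mouse_brain.nii.gz")
print(img.shape, img.affine)

# 2. Register the individual brain to the template space
fixed = ants.image_read("turone_template.nii.gz")     # reference template
moving = ants.image_read("mouse_brain.nii.gz")        # individual ex vivo brain
reg = ants.registration(fixed=fixed, moving=moving, type_of_transform="SyN")

# 3. Apply the inverse transforms to bring the atlas labels into individual space
atlas = ants.image_read("atlas_labels.nii.gz")
labels_in_native = ants.apply_transforms(
    fixed=moving, moving=atlas,
    transformlist=reg["invtransforms"],
    interpolator="nearestNeighbor",                   # keep integer label values
)
ants.image_write(labels_in_native, "atlas_in_native_space.nii.gz")
```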

2024/9/19-21

1st Digital Brain Workshop

Kyushu University Nihonbashi Satellite
on-site only

Event: The first Digital Brain Workshop was held as an on-site event at Kyushu University Nihonbashi Satellite.

2024/9/6 Fri 10:00-18:00 (JST)

1st Digital Brain Tutorial: MRI Data Usage & Analysis

International Brain Protocol

Topic: Tutorial on MRI data usage and analysis for the International Brain Protocol.
Format: Hybrid (Registration necessary).

2024/7/4 Thu 13:30-17:00 (JST)

5th Seminar: Data sharing and standardization in human neuroscience

Franco Pestilli
University of Texas
Jean-Baptiste Poline
McGill University

Chair: Saori C Tanaka (ATR)

Place: Hybrid: ATR + Zoom

Main conference room in B1F, ATR, Kyoto
https://www.atr.jp/map_etc/access_e.html

13:30 - 14:30: Franco Pestilli (University of Texas)

Title: Putting brain data and cloud technology to good use

Neuroscience research has expanded dramatically over the past 30 years by advancing standardization and tool development to support rigor and transparency. Consequently, the complexity of the data pipeline has also increased, hindering access to FAIR (Findable, Accessible, Interoperable, and Reusable) data analysis for portions of the worldwide research community. I will present brainlife.io, a platform developed to reduce these burdens and democratize modern neuroscience research across institutions and career levels. Using community software and hardware infrastructure, the platform provides open-source data standardization, management, visualization, and processing, and simplifies the data pipeline. brainlife.io automatically tracks the provenance history of thousands of data objects, supporting simplicity, efficiency, and transparency in neuroscience research. Here, brainlife.io's technology and data services are described and evaluated for validity, reliability, reproducibility, replicability, and scientific utility. Using data from 4 modalities and 3,200 participants, we demonstrate that brainlife.io's services produce outputs that adhere to best practices in modern neuroscience research.

14:45 - 15:45: Jean-Baptiste Poline (McGill University)

Title: Changing the landscape of datasharing in brain research with standardized distributed infrastructures: a Neurobagel journey

Data sharing in human neuroimaging remains a critical component of many research projects, in particular when machine learning models for predicting diagnosis or disease progression need to be fitted or tested on data from different cohorts. Data sharing remains very hard for clinical researchers who i) don't always have the technical resources, ii) face ethical and legal barriers, and iii) have institutional incentives to keep the data local. We propose a change in paradigm for neuroimaging data sharing. Current solutions are often centralized and need complex data sharing research agreements and adaptation to legal frameworks, for instance the GDPR in Europe. We present Neurobagel, a distributed data sharing solution based on neuroimaging and clinical standards that enables search of participant data across the world. Our current implementation already includes more than 8 nodes and data from over 30,000 participants (healthy or patients).

16:00 - 17:00: Panel Discussion

Chair: Saori C Tanaka, ATR

2024/6/13 Thu 12:00-13:00 (JST)

4th Seminar: Creating neuromorphic artificial intelligence using reverse engineering of generative models

Takuya Isomura
RIKEN Center for Brain Science

Abstract: Empirical applications of the free-energy principle at the cellular and synaptic levels are not straightforward because they entail a commitment to a particular process theory (i.e., neuronal basis). To address this issue, we developed a reverse engineering technique that links quantities in neuronal networks to those in Bayesian inference and showed that any canonical neural network—whose activity and plasticity minimise a shared Helmholtz energy—can be cast as performing variational Bayesian inference. By combining with an in vitro causal inference paradigm, we experimentally validated the free-energy principle by showing its ability to predict the quantitative self-organisation of in vitro neural networks. We have recently begun to apply this technique to neural activity of zebrafish and rodents to reverse engineer their generative models. The virtues of the reverse engineering are that, when provided with initial empirical data, it enables the systematic identification of a generative model employed by the biological system. The reconstructed generative model yields a neuromorphic artificial intelligence that performs Bayesian inference. This further enables the quantitative predictions of subsequent self-organisation and learning in the system.
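
A minimal worked example of the variational idea behind this line of work (a generic textbook-style toy, not the speaker's reverse-engineering scheme): for one binary hidden state and one observation, minimising the variational free energy F(q) = E_q[ln q(s) - ln p(o, s)] over the approximate posterior q recovers the exact Bayesian posterior.

```python
import numpy as np

prior = np.array([0.5, 0.5])                    # p(s)
likelihood = np.array([[0.9, 0.2],              # p(o=0 | s)
                       [0.1, 0.8]])             # p(o=1 | s)
o = 1                                           # observed outcome

def free_energy(q):
    joint = likelihood[o] * prior               # p(o, s)
    return np.sum(q * (np.log(q) - np.log(joint)))

# Grid search over beliefs q(s=0) in (0, 1)
qs = np.linspace(0.01, 0.99, 99)
F = [free_energy(np.array([q, 1 - q])) for q in qs]
q_star = qs[int(np.argmin(F))]

exact_posterior = likelihood[o] * prior / np.sum(likelihood[o] * prior)
print("free-energy minimum q(s=0):", round(float(q_star), 2))
print("exact posterior p(s=0|o):  ", round(float(exact_posterior[0]), 2))
```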

[Figure: Reverse engineering of generative models]

2024/5/10 Fri 13:00-14:30 (JST)

Creating bridges between the digital and physical realms with 3D vision

Diego Thomas
Kyushu University

Abstract: Our modern societies strongly rely on machines to survive. These machines treat information in the digital world but interact with humans in the physical world. To bridge the gap between the digital and physical realms, we have dedicated our efforts to the creation of new AI-based 3D vision models. Our research has been centered on the task of capturing and modeling the human body, as this is fundamental for enhancing human-machine interactions. Leveraging generative AI, which learns from vast collections of images and videos, we have been able to gain insights into human body shapes, deformations, and semantic interactions within various scenes. This research represents a significant step toward the development of Large 3D Vision Models, which are essential for advancing machines toward greater autonomy and intelligence. In this talk, I will present the latest innovations in the creation of digital and autonomous human avatars, showcasing how these advancements are shaping the future of human-machine interactions.

2024/4/22 Mon 13:00-15:40 (JST)

2nd Seminar: Multiple Talks Session

Ken Nakae, Hiromichi Tsukada, Hiroshi Ishii, Keiichi Ueda, Yoshitaro Tanaka

Place: Zoom

13:00-13:30 Ken Nakae (ExCELLS, NINS)

How to use the Brain/MINDS data portal.

📄 Presentation Slide

13:30-14:00 Hiromichi Tsukada (CMSAI, Chubu Univ)

Connectome-based modeling using marmoset MRI and gene expression data

14:10-14:40 Hiroshi Ishii (Research Institute for Electronic Science, Hokkaido University)

Pattern formation in mathematical models including neuronal interaction effects

14:40-15:10 Keiichi Ueda (Faculty of Science, Academic Assembly, University of Toyama)

Decentralized distributed parameter tuning model for coupled oscillator systems

15:10-15:40 Yoshitaro Tanaka (School of Systems Information Science, Future University Hakodate)

Proposal of a mathematical model of a reservoir computing using the diffusive chemical reaction

2024/4/1 Mon 15:00-17:30 (JST)

1st Seminar: Learning of hidden principled structures behind observations / What is the Digital Brain

Takeru Miyato
University of Tübingen
Kenji Doya
OIST

15:00 - 16:30

1st Speaker: Takeru Miyato (University of Tübingen)

Place: Hybrid: Kyoto University + Zoom

328, South, Research Bldg. No 8, Kyoto University

Campus Map

Title: Learning of hidden principled structures behind observations

Abstract: Recent advancements in AI have utilized extensive data sets to train models capable of understanding and generating observations across various modalities. Notably, Large Language Models (LLMs) based on transformer architectures excel in natural language processing, often outperforming the average human. This success demonstrates AI's ability to grasp the complex structures underlying human language. However, a significant challenge remains: AI models do not possess a clear method to communicate the "generalizable structures"—the underlying patterns and principles—they learned from data. Unlike humans, who can innovate language through the creation of new terms and grammatical rules (as exemplified by the development of mathematical languages to elegantly describe physical phenomena), AI lacks a mechanism to explicitly reveal the structures and principles it has learned. The core issue stems from AI's inability to "invent" its own languages or to recognize and express the implicit structures acquired through training. By developing AI systems capable of not only learning from data but also articulating and refining the implicitly learned structures, we would mark a significant leap towards machines that can think, learn, and create in ways more akin to human cognition. In this talk, I will present our recent work aiming at enabling machines to explicitly learn the hidden principled mechanisms behind real-world observations, such as disentangled and equivariant structures. Additionally, I will discuss an ongoing project focusing on neural synchrony-based object discovery, inspired by the temporal coding hypothesis in neuroscience.

16:30 - 17:30

2nd Speaker: Kenji Doya (Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University)

Place: Same (Speaker is online)

Title: What is the Digital Brain of Brain/MINDS 2.0


Abstract: Following Japan's Brain/MINDS project since 2014, the next project, nicknamed Brain/MINDS 2.0, is starting this year. A remarkable feature of this new project is that the "digital brain" plays a central role in integrating brain data at multiple scales from multiple species and in understanding brain functions and neuropsychiatric disorders. But what exactly is the digital brain? As a member of the project's core organization in charge of digital brain development, I will present the current design of the digital brain, what kind of data, technologies, and infrastructures are required, and what outcomes are expected. A call for specific research proposals is now open till April 10th (AMED Call). We hope this talk will help you plan your proposal to be best connected to the digital brain.