Dreams, Brain and Machine Learning: From Dream Theory to Dream Data Science


Summary

From Iwanami Data Science Vol. 6.

Although there appears to be no relationship between dreams one has while sleeping and data science, dreams have been one of the sources of ideas in the development of machine learning and brain theory. Recently, it has become possible to analyze (decode) the content of dreams from brain activity patterns during sleep through data analysis using machine learning. In this section, we trace the footsteps of dream research that led to dream decoding.

Dream Physiology

Dreams are defined here roughly as “the phenomenon of consciousness that occurs during sleep.” Although there are attempts to define and quantify consciousness from the state of the brain and its systems, notably Tononi’s Integrated Information Theory (IIT), at least at present we have no choice but to treat a person’s subjective report as the “ground truth” of his or her state of consciousness. Until the discovery of rapid eye movement (REM) during sleep in 1953, psychoanalytic approaches based on subjective reports were the mainstay of dream research.

When EEG is measured during sleep, the frequency, amplitude, and waveform of the EEG change with the depth of sleep. Once sleep with rapid eye movement (REM sleep) was discovered, polysomnography, which simultaneously records EEG, eye movements, and electromyography, became mandatory for uniform classification of sleep stages. This led to the classification of wakefulness and sleep into wakefulness, non-REM sleep, and REM sleep, with non-REM sleep further divided into four stages.

In a 1957 paper, Dement and Kleitman concluded that dreams occur only in REM sleep, based on the results of an experiment in which subjects were awakened from both non-REM and REM sleep and asked whether they had dream experiences. However, subsequent studies have shown that dreams are also reported from non-REM sleep, albeit with relatively low frequency, and in recent years, Tononi and many other researchers believe that sleep stages and dreams should be treated as separate events. However, the dogma that “dreams = REM sleep” persists even today.

An interesting link to dreams is the “replay” of neural activity that Wilson et al. discovered in the rat hippocampus in 1994. In this phenomenon, neural activity resembling that during a task is replayed during subsequent sleep or rest, and replay is thought to contribute to memory consolidation and related processes.

When we ask whether replay can be called a dream, it is difficult to confirm whether a conscious phenomenon is occurring, since rats cannot give subjective reports in the first place. On the other hand, there is no harm in assuming that the brain is spontaneously active and performs functions such as memory consolidation without bringing them to conscious awareness. Replay is most often observed during non-REM sleep, so it is not consistent with the dogma that dreams = REM sleep. Although popular accounts sometimes assume that “dreams = replay,” this equation is also not straightforward.

Dream Functions and Freud’s Dream Theory

What are dreams for? When discussing the function and purpose of dreams, the influence of Freud’s dream theory cannot be ignored. Freud developed psychoanalysis based on the theory that thoughts and feelings that are not consciously accessible (“the unconscious”) guide our behavior. He believed that dreams are cryptic manifestations of repressed unconscious desires and intentions, and that by deciphering their meaning, we can deal with problems of the mind. Many scientists, however, have dismissed this theory as having no empirical basis and being unfalsifiable.

Hobson et al. have criticized Freud’s dream theory as unscientific and worthless since 1977, when they proposed the “activation-synthesis hypothesis” of dreams. According to this hypothesis, signals from the brainstem during REM sleep randomly activate the cerebral cortex and limbic system, and the cortex synthesizes these random activations into a dream scenario that makes sense.

On the other hand, Solms, a neurologist and psychoanalyst, criticizes Hobson for equating REM sleep with dreaming, citing the facts that dream reports are obtained even during non-REM sleep and that the relationship between dreaming and brainstem activity is unclear. Rather, brain activity during dreaming is characterized by activation of the cortical and limbic regions associated with emotion and perception, and by decreased activity in the prefrontal cortex, which controls thought and behavior.

His own research has also shown that complete loss of dreaming due to brain damage arises only from certain lesions of the frontal and parietal lobes, and does not occur with brainstem damage. Solms argues that the activation of regions associated with emotion and imagery during dreaming, together with the reduced activity of regions associated with conscious thought and judgment, may be consistent with Freud’s dream theory, at least at a rough level.

Whether one takes Freud’s dream theory literally or not, there is a widely shared recognition that dreams have some adaptive significance, and various functions have been proposed, including simulation of crisis situations, consolidation and organization of memories, and neutralization of emotional responses. Hobson, too, does not see the same significance in the content of dreams as Freud did, but as will be discussed below, he has recently proposed a hypothesis of an adaptive function for the brain processes that produce dreams.

Dreams and “generative” models

In 2015, Google researchers unveiled “Google Deep Dream,” a technique that uses deep neural networks (DNNs) to generate nightmarish images. The DNN used in Deep Dream is a feed-forward network with only forward connections, but since then networks that aim to generate images and sounds, such as Generative Adversarial Networks (GANs), have been developed more and more. Dreams, for their part, are “generated” by spontaneous brain activity in the absence of sensory input from the outside world. It is a natural direction to associate dreams with neural networks when the networks perform generation rather than recognition of input patterns.

The “Wake-Sleep” algorithm proposed by Hinton et al. in 1995 explicitly uses this “dream = generation” analogy. Here, we assume a stochastic network with “recognition” connections from the input layer to the output layer and “generative” connections from the output layer to the input layer.

While the connection weights are learned from real input signals during the wake phase, during the sleep phase learning uses virtual input patterns (“dreams”) generated through the generative model from random activity in the output layer. Offline learning with such “dreams” allows efficient representations to be acquired and improves recognition while “awake.” Hobson’s activation-synthesis hypothesis is clearly behind this idea.
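The wake-sleep scheme can be sketched as a toy two-layer Helmholtz machine. The sizes, learning rate, and random binary “sensory” data below are illustrative assumptions, not from the original paper; the update rules follow the description above (the wake phase trains the generative connections on real inputs, the sleep phase trains the recognition connections on “dreamed” inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 8, 3, 0.1          # made-up layer sizes and learning rate

R = rng.normal(0, 0.1, (n_hid, n_vis))   # recognition connections (bottom-up)
G = rng.normal(0, 0.1, (n_vis, n_hid))   # generative connections (top-down)
b_h = np.zeros(n_hid)                    # generative prior bias on the hidden layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

# Toy binary "sensory input" presented while awake
data = rng.integers(0, 2, (200, n_vis)).astype(float)

for x in data:
    # Wake phase: recognize a real input, train the generative connections
    h = sample(sigmoid(R @ x))
    G += lr * np.outer(x - sigmoid(G @ h), h)

    # Sleep phase: "dream" a fantasy input from the prior,
    # train the recognition connections on it
    h_dream = sample(sigmoid(b_h))
    x_dream = sample(sigmoid(G @ h_dream))
    R += lr * np.outer(h_dream - sigmoid(R @ x_dream), x_dream)
```

The key point of the analogy is visible in the sleep phase: the recognition pathway is trained entirely on self-generated “dream” patterns, with no external input.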

More recently, Hobson himself, in collaboration with Friston, has attempted to formulate the function of dreams using machine learning models. Friston is prominent in the development of analytical methods for brain imaging, and the formulation is based on the generative models and free-energy minimization principles also used by Hinton et al. above. In a 2014 paper, Hobson and Friston postulate that dreams generate a “virtual reality,” and that learning with it adjusts the complexity of the brain’s model of the external world, allowing more efficient prediction upon awakening; they discuss the relationship to various physiological phenomena associated with dreams.

Dream Content and the Brain

The discussion thus far has implicitly assumed that dreams correspond to brain activity during sleep. However, this correspondence is not self-evident. A dream experience cannot be reported while it is occurring; it can only be recounted retrospectively after waking. For this reason, skepticism has been raised on the basic point of whether dream experiences really occur during sleep or are made up after awakening.

One line of evidence supporting dreaming during sleep can be found in sleep behavior disorders: during REM sleep, the body is normally immobile because muscle tone, including that of the antigravity muscles, is suppressed. However, if this suppression fails to operate due to an abnormality near the brainstem, the center of REM sleep, the patient may act out what he or she sees in dreams (REM sleep behavior disorder). A superficially similar behavior is sleepwalking (somnambulism), which occurs during non-REM sleep, but sleepwalkers are said to often have no memory of dreaming.

The most straightforward way to settle this issue, however, would be to examine whether brain activity patterns during sleep can be used to infer the content of dream reports after awakening.

Brain Decoding

Kamitani et al. at ATR have been developing methods to decode mental images from patterns of human brain activity. This research stems from the idea of using machine learning to analyze measurements of human brain activity made by functional magnetic resonance imaging (fMRI).

In conventional fMRI studies, the method is to identify brain regions where fMRI signal intensity differs depending on the sensory stimulus or task. The intensity of each individual pixel (voxel) is regressed with a linear model consisting of stimulus or task variables and their coefficients, and if a coefficient is statistically significantly different from zero, that pixel is “colored” on the image; this procedure is repeated for all pixels (the general linear model, GLM). Since a single brain image contains several hundred thousand pixels, this method has various problems; for example, naive statistical testing is prone to false positives, coloring many thousands of pixels even in noise-only data.
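The mass-univariate GLM procedure and its false-positive problem can be illustrated with a toy simulation. The block design, voxel count, and p < 0.05 threshold below are illustrative assumptions; regressing pure-noise “voxels” on a stimulus regressor and thresholding each coefficient still flags roughly 5% of them:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_scans, n_voxels = 100, 5000
stimulus = np.tile([0.0] * 10 + [1.0] * 10, 5)        # toy on/off block design
design = np.column_stack([np.ones(n_scans), stimulus])  # intercept + stimulus regressor

# Pure-noise data: no voxel truly responds to the stimulus
Y = rng.normal(size=(n_scans, n_voxels))

# Voxel-wise least-squares fit of the linear model (one regression per voxel)
beta, _, _, _ = np.linalg.lstsq(design, Y, rcond=None)
resid = Y - design @ beta
dof = n_scans - design.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof

# t-test on the stimulus coefficient for every voxel
se = np.sqrt(sigma2 * np.linalg.inv(design.T @ design)[1, 1])
t = beta[1] / se
p = 2 * stats.t.sf(np.abs(t), dof)

# Voxels that would be "colored" at an uncorrected threshold
n_false_positives = int((p < 0.05).sum())   # roughly 5% of 5000, despite pure noise
```

This is why uncorrected voxel-wise thresholds are considered unreliable and multiple-comparison correction is standard in fMRI analysis.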

Above all, the methodology of treating individual voxel values like answers to a questionnaire and forcibly applying psychology’s “stimulus (S) → response (R)” scheme to brain image analysis has many problems: the brain functions as a network in the first place, and freer modeling using “patterns” of voxel values should be possible. I feel strongly uncomfortable with such a methodology.

In contrast, instead of GLM, the basic analysis of fMRI, one can build on the 2001 paper by Haxby et al., which showed that when subjects viewed several different object images, such as faces and houses, the correlation between voxel patterns for the same object was higher than that between voxel patterns for different objects. Based on that paper, our approach is to use machine learning to extract the information contained in the patterns of a large number of voxels.

This makes it possible to decode information thought to be represented in brain structures finer than fMRI voxels (columnar structures), such as the orientation of lines (vertical, horizontal, diagonal, etc.) in a stimulus image, from patterns of slight changes in voxel values. Support vector machines (SVMs) were initially used as the machine learning method, but in recent years Bayesian linear models that impose strong sparsity over the input-voxel dimension have been used. In general, the number of features (voxels) in brain image data is much larger than the number of observations (brain images or experimental blocks), so regularization and dimensionality reduction are necessary.
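A minimal sketch of this pattern-decoding setup, using scikit-learn on simulated data: the voxel counts and effect size are made up, and an L1-penalized logistic regression stands in for the sparse Bayesian linear models mentioned above (it is a different, simpler sparsity-inducing model, chosen only for illustration):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_blocks, n_voxels, n_informative = 60, 1000, 20

# Toy data: two stimulus orientations; only a few voxels carry a weak signal,
# mimicking sub-voxel columnar information spread across a voxel pattern.
y = np.repeat([0, 1], n_blocks // 2)
X = rng.normal(size=(n_blocks, n_voxels))
X[:, :n_informative] += 1.0 * y[:, None]

# Far more features (voxels) than observations (blocks): regularization matters
svm_acc = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5).mean()
sparse_acc = cross_val_score(
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0), X, y, cv=5
).mean()
```

Both decoders recover the stimulus class well above chance from the voxel pattern, even though no single voxel is strongly informative on its own.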

With this approach, “decoding” has become established as a method of brain image analysis, and the expression “brain decoding” is now commonly used. It is based on an information-theoretic view that regards brain activity patterns as “codes” representing stimuli and mental and bodily states.

This is not a claim that the brain itself uses this code, since the brain presumably does not treat voxels as units of information. Rather, it is a methodology that asks, from the standpoint of an external observer, whether patterns of measured brain signals in a specific area can be converted into information about stimuli or states of mind and body.

Kamitani et al. also propose a method for decoding subjective mental states. First, a machine learning model (decoder) is trained on the brain activity patterns evoked when stimulus images are shown. By applying the decoder to brain activity in a subjective state, for example while a certain image is being pictured in the mind, the imagined shape can be predicted.

This can be regarded as a type of “transfer learning” in machine learning. A decoder could in principle be trained on brain activity in the subjective state itself, but what the subject is actually doing at that time cannot be confirmed objectively. Using brain activity in response to stimulus images ensures some objectivity, and this method is called “neural mind reading.”
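The transfer step can be sketched with simulated data in which imagery evokes the same voxel templates as perception but with a weaker gain. All numbers, and the use of plain logistic regression as the decoder, are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 500

# Shared voxel-pattern "templates" for two shapes, assumed common to
# perception and imagery (the premise behind neural mind reading)
templates = rng.normal(size=(2, n_voxels))

def make_runs(n, gain, noise):
    """Simulate n trials: template of the trial's shape, scaled and noised."""
    y = rng.integers(0, 2, n)
    X = gain * templates[y] + noise * rng.normal(size=(n, n_voxels))
    return X, y

X_seen, y_seen = make_runs(n_trials, gain=1.0, noise=1.0)   # viewing the images
X_imag, y_imag = make_runs(n_trials, gain=0.4, noise=1.0)   # imagining them

# Train on objectively controlled perception data,
# then transfer to the subjective imagery data
decoder = LogisticRegression(max_iter=1000).fit(X_seen, y_seen)
transfer_acc = decoder.score(X_imag, y_imag)
```

Because the imagery patterns share their spatial structure with the perception patterns, the perception-trained decoder transfers, which is exactly the property the dream experiments below rely on.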

Dream Brain Measurement Experiment

Neural mind reading is possible because envisioning an image in the mind evokes brain activity patterns similar to those that occur when actually seeing it. If the visual content of dreams is likewise represented by brain activity patterns similar to those of waking perception, then it should be possible to decode dreams in the same way.

However, there are various difficulties in conducting this experiment. One research challenge is how to collect a large amount of dream data. Since REM sleep usually first appears more than an hour after sleep onset, targeting REM sleep is inefficient for data collection (MRI equipment costs 100,000 yen per hour). Therefore, the initial study targeted dreams that occur within a few minutes of falling asleep, corresponding to sleep stage 1 or 2. In stage 1 sleep, rapid eye movements are not observed, but the EEG closely resembles that of REM sleep, and dream reports appear relatively frequently.

Subjects actually sleep in the MRI scanner while wearing an electrode cap for EEG measurement, and fMRI is recorded while the sleep state is determined from the EEG in real time. EEG is the standard method for scoring sleep stages, but it alone does not reveal the content of dreams.

On the other hand, simply looking at fMRI images by eye does not even tell whether a person is asleep or awake. We awakened subjects when sleep EEG patterns known to be related to dreaming appeared (2 to 3 minutes after sleep onset) and asked them to freely report, for about 30 seconds, the content of the dream they had been having just before. After the report, subjects were asked to fall asleep again, and this procedure was repeated until at least 200 reports containing visual dream content had been collected from each subject.

Perhaps thanks to the periodic noise and the enclosed environment, subjects became accustomed to the loud MRI scanner and were often able to fall asleep again within 5 minutes of reporting the dream content.

Dream Decoding

The content of dream reports varies widely. To convert such unstructured dream-report data into a structured form, we first extracted nouns from the report sentences, mapped them onto WordNet, a linguistic database with a semantic hierarchy, and classified them into about 20 major object categories (cars, men, letters, etc.).

The content of each dream report was thus represented by a 20-element vector indicating the presence or absence of each category. We also collected images corresponding to the major object categories using ImageNet, an image database whose structure corresponds to WordNet, and in an experiment on a different day measured the same subject’s brain activity while viewing these images. From these data a machine learning model was trained, yielding a decoder that outputs a score (a continuous value) indicating the presence or absence of each object category.
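The report-to-vector step might look like the following sketch. The tiny hypernym table is a hypothetical stand-in for WordNet’s hierarchy, and only 3 of the roughly 20 base categories are shown; noun extraction from the report sentences is assumed to have already happened:

```python
# Hypothetical mini-taxonomy standing in for WordNet hypernym lookups;
# the real study mapped report nouns onto ~20 base-level WordNet categories.
HYPERNYMS = {
    "sedan": "car", "taxi": "car", "car": "car",
    "man": "man", "boy": "man",
    "letter": "letter", "word": "letter",
}
CATEGORIES = ["car", "man", "letter"]   # 3 of ~20 categories, for illustration

def report_to_vector(nouns):
    """Binary presence/absence vector over the base object categories."""
    present = {HYPERNYMS[n] for n in nouns if n in HYPERNYMS}
    return [1 if c in present else 0 for c in CATEGORIES]

# Nouns extracted from one dream report; "tree" is outside the mini-taxonomy
vec = report_to_vector(["taxi", "boy", "tree"])   # [1, 1, 0]
```

Each such vector becomes the label for the fMRI pattern recorded just before that awakening, which is what lets the image-trained decoder be scored against dream content.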

When the decoder constructed in this way was applied to the fMRI patterns just before awakening, it was found that for about one-third of the object categories, whether the object appeared in the dream could be discriminated at a statistically significant level. When the timing of the input fMRI pattern was varied, the decoder’s output scores correlated with the dream reports up to about 20 seconds before awakening. The scores fluctuate before that as well, which may reflect dream content that was forgotten upon awakening.

This study was published in 2013 as “Neural decoding of visual imagery during sleep.” It provides evidence that dreams are experiences that occur during sleep. However, dream reports cannot be completely equated with dream experiences, and the measurement and analysis of brain activity during REM sleep remain challenges for the future. In fact, the technical crux of this research was the (pre)processing of unstructured data using WordNet and ImageNet; the dream decoding was realized with their support.
