In Texas, they’ve connected a ChatGPT-style text generator to the brain

29. 04. 2023 | Michal Krcmar

Japanese researchers recently connected the brain to an AI that draws out thoughts as images. In Texas, they have now connected the brain to a text generator of the kind we know from ChatGPT. It can write summaries of what we’re thinking about.

Do you feel that the pace of commercial AI chatbot development is too frenetic and that we should slow it down while there’s still time? Then you’d better not find out what’s happening in labs around the world right now and what will be commonplace relatively soon.

It’s not even two months since we wrote about research out of Japan that used AI to convert extremely complex functional magnetic resonance imaging (fMRI) output into images and video frames. At the time, the researchers used essentially the same technology that underpins the popular AI image generators Midjourney and Stable Diffusion.

In short, the Japanese researchers managed to render quite realistically what a person is looking at, by remotely tracking chemical changes in the brain. Today’s younger generations might therefore live to see a time when we can similarly display, for example, what a bedridden person is thinking about, or what we ourselves dreamt last night.

What if we connected ChatGPT to the brain?

A few weeks later, the research from Osaka University was followed up in the United States, and it went much further. Jerry Tang and his supervisor Alex Huth from the University of Texas at Austin have published a final paper in the journal Nature Neuroscience describing the reading and interpretation of human thoughts using a language transformer. Simply put, it’s as if they had grafted onto your brain the same kind of text generator that powers ChatGPT!

While the journal paper itself is not publicly available, the university has also posted it in an external repository, and a preliminary report from last autumn is freely available on bioRxiv, the public preprint repository for biology research.

It should be noted that almost anything can be posted on bioRxiv and the more general arXiv without proper peer review, so it is sometimes utter nonsense, but making it into the renowned Nature family of journals is something else entirely.

What is a transformer

Transformer is a general AI architecture first described by Google Brain researchers in 2017. But it was OpenAI above all who ended up skimming the cream, building their large language models of the GPT family (short for Generative Pre-trained Transformer) on top of it.

In a nutshell, it’s artificial intelligence that learns to look for relationships in consecutive (sequential) data. It started with text and sound (a stream of consecutive letters and tones), but the field has grown enormously over the last two years, and there are now transformers for image recognition and other domains as well.
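If you want to see the principle in action, here is a tiny NumPy sketch of the attention step that sits at the heart of every transformer: each token looks at every other token and blends in what it finds relevant. The weight matrices are random rather than learned, so it’s purely an illustration of the mechanism, not anything resembling the models used in Texas.

```python
# A toy, self-contained sketch of scaled dot-product self-attention,
# the core operation of a transformer. Weights are random here, purely
# for illustration; real models like GPT stack many learned layers.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) array of token embeddings."""
    q = x @ w_q  # queries: what each token is looking for
    k = x @ w_k  # keys: what each token offers to the others
    v = x @ w_v  # values: the information actually passed along
    scores = q @ k.T / np.sqrt(k.shape[-1])  # pairwise relevance of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ v  # each token becomes a mix of the tokens it attends to

rng = np.random.default_rng(0)
d = 16
tokens = rng.normal(size=(5, d))  # five tokens of a made-up sequence
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(tokens, w_q, w_k, w_v).shape)  # (5, 16)
```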

Many experts believe that this family of AI architectures is where the near future lies across the board, as they achieve much higher performance than previous technologies.

Hemodynamic response

So what did they actually do in Texas? They simply strapped a few volunteers onto the bed of a massive MRI machine, the kind we know from hospitals, and played podcasts to them for long hours. The volunteers listened while the researchers recorded gigabytes of data about what was going on in their heads at that very moment.

Functional magnetic resonance imaging shows what’s going on in the brain in different ways, but one of the most typical is the BOLD (blood-oxygen-level-dependent imaging) technique, which is what they chose in Texas.

It actually sounds a bit like science fiction, because BOLD does not directly measure the aggregated electrical activity of neurons, as EEG does, but the correlated hemodynamic response. What the hell is that? Well, our neurons run on sugar and oxygen, as I’m sure you all know, but they have no fuel tank of their own.

So as soon as you engage in some intellectually demanding activity – opening Live.cz on your mobile phone, for example – your body starts pumping fresh oxygenated blood and nutrients into your brain, just as it does during a Sunday bike ride. Think a lot, think well and often, and, with a bit of exaggeration, you’ll lose weight.

The bottom line is that we can monitor this sudden change in the oxygenation of the brain’s bloodstream, and BOLD fMRI is one way to do it. It’s actually very similar to trying to gauge what’s going on in your community (a parallel to the brain) by monitoring the rate of rotation of the electric meter at each house (a parallel to the hemodynamic response of different parts of the brain).
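Just to show how indirect and sluggish this messenger really is, here is a small toy simulation: a one-second burst of neural activity is convolved with the textbook double-gamma approximation of the hemodynamic response function, and the resulting BOLD curve peaks several seconds after the activity itself. The numbers are generic defaults, nothing taken from the Texas experiment.

```python
# Toy illustration: the BOLD signal lags neural activity by several seconds.
# Uses the common double-gamma approximation of the hemodynamic response
# function (HRF); all parameters are generic defaults, not from the study.
import numpy as np
from scipy.stats import gamma

dt = 0.1                 # time step in seconds
t = np.arange(0, 30, dt)

# Canonical double-gamma HRF: an early peak followed by a small undershoot.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.max()

# One second of "neural activity" starting at t = 2 s.
activity = np.zeros_like(t)
activity[(t >= 2) & (t < 3)] = 1.0

# What the scanner sees is (roughly) the activity convolved with the HRF.
bold = np.convolve(activity, hrf)[: len(t)] * dt

print(f"neural activity starts at {t[activity.argmax()]:.1f} s")
print(f"BOLD response peaks at   {t[bold.argmax()]:.1f} s")  # several seconds later
```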

Hours of listening to podcasts inside the fMRI machine

But back to Texas. The unsuspecting volunteers listened to podcasts for hours and hours, the scientists stored vast amounts of fMRI data, and then they mapped that information onto a language transformer (a podcast is, after all, spoken word: a sequential stream of words).

Once they had created such a neural language model for each of the volunteers, they could flip the whole operation around, and the transformer started generating text instead of learning: text with a rough description of what was going on in the brain.
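The pipeline described in the paper is considerably more elaborate (it searches over many candidate word sequences at once), but the basic trick can be sketched roughly like this: a language model proposes possible continuations of the text, a per-volunteer encoding model predicts what BOLD activity each continuation should evoke, and the continuation whose prediction best matches the recorded scan wins. Everything in the sketch below, from the function names to the correlation scoring, is a simplified stand-in rather than the authors’ implementation.

```python
# A heavily simplified sketch of the decoding idea: a language model proposes
# candidate continuations, a per-volunteer "encoding model" predicts the fMRI
# response each candidate should evoke, and the best-matching candidate wins.
# All names and details are illustrative stand-ins, not the authors' code.
import numpy as np

def decode_step(text_so_far, recorded_bold, propose, embed, encoding_weights,
                n_candidates=32):
    """Pick the continuation whose predicted brain response best fits the scan."""
    best, best_score = "", -np.inf
    for cand in propose(text_so_far, n_candidates):   # e.g. sampled from a GPT-style LM
        features = embed(text_so_far + " " + cand)    # semantic features of the text
        predicted_bold = features @ encoding_weights  # linear encoding model
        score = np.corrcoef(predicted_bold, recorded_bold)[0, 1]
        if score > best_score:
            best, best_score = cand, score
    return best

def decode(recorded_bold_sequence, propose, embed, encoding_weights):
    text = ""
    for recorded_bold in recorded_bold_sequence:      # one chunk of the scan at a time
        word = decode_step(text, recorded_bold, propose, embed, encoding_weights)
        text = (text + " " + word).strip()
    return text

# Toy demo with made-up stand-ins for the language model and feature extractor.
rng = np.random.default_rng(0)
vocab = ["rain", "sun", "dog", "walk", "home"]
propose = lambda text, n: list(rng.choice(vocab, size=n))  # fake proposal step
embed = lambda text: rng.normal(size=10)                   # fake semantic features
encoding_weights = rng.normal(size=(10, 50))               # fake per-subject weights
scans = [rng.normal(size=50) for _ in range(3)]            # fake recorded BOLD chunks
print(decode(scans, propose, embed, encoding_weights))
```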

The decoder writes summaries of what we’re thinking right now.
Because we simultaneously imagine what we hear as we listen, engaging the whole spectrum of the mind’s domains, including the visual ones, the semantic decoder from Austin captures this complex picture in its entirety. As a result, it responds not only to speech, but also to visual perception and mere imagination.

When volunteers watched a short video without an audio track, that was enough for the semantic decoder to start writing a summary of what was actually happening in the video. And if they just thought about something, the decoder worked the same way.

A quick video tour of what a semantic decoder from Texas can do (without audio):

Stephen Hawking

If science continues to progress at this pace, Tang and Huth promise that we could see, for example, a device that writes out the thoughts of a person who is confined to a wheelchair and bed but whose mind remains perfectly lucid.

Imagine, for example, Stephen Hawking presenting his research and ideas in just this way, without the laborious preparation that preceded every sentence from his voice synthesiser. He would simply think of what he wanted to say to the world, and it would happen.

There have been many experiments with converting thoughts to text, but they either worked with a vocabulary limited to a few words or required a brain implant. Thanks to transformer-based AI, however, the semantic decoder draws on a very rich vocabulary built from many hours of listening to podcasts, and it works even without a hole in the head.

fNIRS instead of MRI

The technical weakness is the fMRI itself. The MRI machine isn’t some little helmet you recharge on your bedside table, but a multi-ton monster.

But even here things look promising, as other academic teams are working hard to develop and explore the possibilities of sensing neural activity with fNIRS (functional near-infrared spectroscopy), a technique whose output is relatively similar to BOLD.

Functional near-infrared spectroscopy differs in that it monitors the hemodynamic response to neuronal activity by scanning the surface of the head.

It actually works similarly to the oximeters in our smartwatches, which measure blood oxygenation simply by shining light on the skin and measuring how much of it is reflected by the blood in tiny capillaries. The blood pigment hemoglobin absorbs near-infrared light by far the most strongly.
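Under the hood, fNIRS devices typically turn the attenuation of light at two near-infrared wavelengths into changes in oxygenated and deoxygenated hemoglobin using the so-called modified Beer-Lambert law. The little calculation below shows the principle with completely made-up numbers.

```python
# Toy illustration of how fNIRS turns light attenuation into hemoglobin
# concentration changes via the modified Beer-Lambert law. Every number here
# (extinction coefficients, path length, attenuations) is made up purely for
# illustration; real devices use calibrated, wavelength-specific values.
import numpy as np

# Extinction coefficients [1/(mM*cm)] of oxy- (HbO) and deoxy- (HbR) hemoglobin
# at two near-infrared wavelengths (illustrative values only).
E = np.array([
    [1.5, 3.8],   # ~760 nm: [HbO, HbR] -- deoxyhemoglobin dominates
    [2.5, 1.8],   # ~850 nm: [HbO, HbR] -- oxyhemoglobin dominates
])

path_length = 3.0 * 6.0   # source-detector distance [cm] * differential pathlength factor

# Measured changes in optical attenuation at the two wavelengths (made up).
delta_A = np.array([0.010, 0.014])

# Modified Beer-Lambert law: delta_A = (E @ delta_c) * path_length,
# so the two concentration changes fall out of a 2x2 linear system.
delta_c = np.linalg.solve(E * path_length, delta_A)
print(f"change in HbO: {delta_c[0] * 1000:+.3f} uM")
print(f"change in HbR: {delta_c[1] * 1000:+.3f} uM")
```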

At the same time, however, surface monitoring of the hemodynamic response is inherently very slow, comes with enormous latency, and its information value is further diluted by the surrounding tissue. But even this could be helped over time by AI, which is the best tool humanity has yet invented for finding hidden patterns in data.

Should we be afraid?

No, at least not for now. Tang and Huth reassure the public that even if, in the coming decades, we manage to replace the multi-ton fMRI machine with a chip inside a wireless headset, the element of free will remains key.

Their technology is passive: it sends nothing to the brain and only responds to what we think. So once they instructed the volunteers to think about something other than what they were looking at, it naturally stopped working. The cognitive activity produced a stronger response in the brain than the visual perception, the text decoder got lost in the noise and started writing nonsense. So, for now, everything works only if we ourselves want it to.

ChatGPT was science fiction a year ago, too.

And secondly, we are (so far) a long way from the vision of such a reading device working with a single, universally trained model. Every brain is unique, so a model trained on one volunteer could not be used to decode the brain of another participant in the experiment. At least for now.

However, at the current pace of basic and applied AI research, things may be different in just a few years. After all, even ChatGPT can pull off stunts today that, as recently as last October, we could only dream of; today they’re reality. So the rhetorical question is what will be reality on May 2, 2024. What do you think?

