The Qualia Factory

Yesterday was a big day for Prophetic and the future we are aiming to build. Until now, tFUS was already inducing extraordinary states of consciousness, yet these results were accomplished with single-element transducers and the basic statistical methods that have dominated neuroscience for decades. Despite the 20 years that tFUS has been used for neuromodulation, and the concurrent advancement of multi-element transducers, state-of-the-art transducers have been running into software limitations. These software roadblocks have limited the control and steering capabilities of large-element-count transducers. Prophetic's closed-loop system, powered by our ultrasonic transformer architecture, is the missing ingredient needed to bring this extraordinary technology to the world. We are not only able to create precise steering instructions; we can also go one step further and create acoustic holograms: three-dimensional acoustic shapes of cortical subregions.
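
To make the steering idea concrete, here is a minimal phased-array sketch. It is our own illustration, not Prophetic's software: given the 3-D positions of the transducer elements and one or more target points, each element's firing phase is obtained by phase-conjugating the spherical waves that would arrive from the targets, so the emitted waves add constructively there. The array layout, frequency, and speed of sound below are placeholder assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 1500.0  # m/s, rough value for soft tissue (assumption)


def steering_phases(element_positions, target_points, freq_hz=500e3):
    """Per-element phase delays that focus a multi-element transducer.

    A textbook phase-conjugation sketch: each target point contributes a
    conjugated spherical wave at every element, and the phase of the summed
    field becomes that element's firing phase. With one target this is simple
    focusing; with many targets it approximates a crude acoustic "hologram".
    """
    k = 2.0 * np.pi * freq_hz / SPEED_OF_SOUND           # wavenumber
    elems = np.asarray(element_positions, dtype=float)    # (N, 3) metres
    targets = np.asarray(target_points, dtype=float)      # (M, 3) metres

    # Distance from every element to every target point: shape (N, M).
    dists = np.linalg.norm(elems[:, None, :] - targets[None, :, :], axis=-1)

    # Superpose conjugated spherical waves (amplitude ~ 1/r) and keep the phase.
    field = np.sum(np.exp(-1j * k * dists) / np.maximum(dists, 1e-6), axis=1)
    return np.angle(field)  # radians, one phase per element


# Example: a 16x16 planar array focusing on two points 60 mm away.
xs = np.linspace(-0.02, 0.02, 16)
grid = np.array([[x, y, 0.0] for x in xs for y in xs])
phases = steering_phases(grid, [[0.0, 0.005, 0.06], [0.0, -0.005, 0.06]])
print(phases.shape)  # (256,)
```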

An acoustic hologram of the Prophetic logo, generated in simulation software.

To build this system, we created a process and operation that is just as innovative as the system itself. The Model T was a massive leap forward for transportation, but it was the assembly line that enabled it, and the assembly line would go on to transform manufacturing as a whole. That is why we wanted to publish this post alongside the demonstration of the closed-loop system: to explain that we are not just innovating on technology, but also on the process of building and fine-tuning ML models for neurotech. We call this process the Qualia Factory.

Qualia are subjective conscious experiences; a quale might be what it is like to see the color blue. Qualia Factories represent the ability to generate conscious experiences at scale. 

The Qualia Factory is composed of three core components: data acquisition, model training, and experience generation. Below, we explain each of these components and chart a vision for a future where Qualia Factories are scaled up. With this architecture, we will be able to produce refined experiences for everyone at scale. In practice, we are talking about a process that feeds neuroimaging data into ML models and outputs targeted tFUS, which in turn produces conscious experiences.
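
As a rough illustration of how those three components compose, here is a toy sketch. Every name and type below is a placeholder of our own, not Prophetic's interface, and the real stages run on very different timescales and hardware.

```python
from typing import Callable, Sequence, Tuple

# Placeholder type aliases; purely illustrative.
EEG = Sequence[float]                  # one EEG window
FMRI = Sequence[float]                 # one fMRI volume, flattened
Stimulus = Tuple[float, float, float]  # a tFUS focal point (x, y, z), simplified


def qualia_factory(acquire: Callable[[], Sequence[Tuple[EEG, FMRI]]],
                   train: Callable[..., Callable[[EEG], Stimulus]],
                   live_eeg: EEG) -> Stimulus:
    """Toy composition of the three components named above."""
    dataset = acquire()      # 1. data acquisition: simultaneous EEG-fMRI recordings
    model = train(dataset)   # 2. model training: the multi-modal transformer
    return model(live_eeg)   # 3. experience generation: EEG in, targeted tFUS out
```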

Data Acquisition

Neural data is unique from both a spatial and a temporal perspective. This uniqueness hindered the use of machine learning techniques in neurotech for years, until the creation of neural transformers in 2022. Additionally, different forms of neural data have different qualities: fMRI is extremely spatially rich, for example, while EEG is extremely temporally rich. Thus, to create models that can approximate reality, one needs a multi-modal transformer that uses both. This multi-modal transformer trains on both EEG and fMRI data, which is made possible by collecting a simultaneous EEG-fMRI dataset on active lucid dreams and modifying an archetypal transformer architecture to tokenize elements of each data type. This neuroimaging technique is rare because it requires a specific setup and non-ferromagnetic EEG sensors, which are more expensive than the commodity ferromagnetic EEG systems found in most neuroscience studies.
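
To illustrate what tokenizing each data type separately could look like, here is a minimal multi-modal tokenizer sketch in PyTorch. It is our own illustration, not Morpheus-1's architecture: EEG is sliced into short temporal windows per channel, fMRI volumes into spatial patches, and both are projected into a shared embedding space with a learned modality tag. All sizes are arbitrary placeholders.

```python
import torch
import torch.nn as nn


class MultiModalTokenizer(nn.Module):
    """Illustrative tokenizer for simultaneous EEG-fMRI (not Morpheus-1's).

    EEG is temporally rich, so it is cut into short time windows per channel;
    fMRI is spatially rich, so each volume is cut into 3-D patches. Both are
    projected into a shared embedding space so one transformer can attend
    over the mixed token sequence.
    """

    def __init__(self, eeg_window=128, fmri_patch_voxels=512, d_model=256):
        super().__init__()
        self.eeg_window = eeg_window
        self.eeg_proj = nn.Linear(eeg_window, d_model)
        self.fmri_proj = nn.Linear(fmri_patch_voxels, d_model)
        # Learned tags so the model can tell the two modalities apart.
        self.modality_tag = nn.Embedding(2, d_model)

    def forward(self, eeg, fmri_patches):
        # eeg: (batch, channels, time) -> non-overlapping windows per channel.
        b, c, t = eeg.shape
        windows = eeg[..., : t - t % self.eeg_window]
        windows = windows.reshape(b, c, -1, self.eeg_window)    # (b, c, n, w)
        eeg_tok = self.eeg_proj(windows).flatten(1, 2)          # (b, c*n, d)
        eeg_tok = eeg_tok + self.modality_tag.weight[0]

        # fmri_patches: (batch, n_patches, voxels_per_patch) -> one token each.
        fmri_tok = self.fmri_proj(fmri_patches) + self.modality_tag.weight[1]

        return torch.cat([eeg_tok, fmri_tok], dim=1)            # mixed sequence


tokens = MultiModalTokenizer()(torch.randn(2, 64, 512), torch.randn(2, 32, 512))
print(tokens.shape)  # (2, 288, 256): 256 EEG window tokens + 32 fMRI patch tokens
```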

A simultaneous EEG and fMRI setup

With our collaborators at the Donders Institute, we are building such a dataset while also using open-source datasets covering other brain states. Again, the key input to our Qualia Factory is a simultaneous EEG-fMRI dataset, so one of Prophetic's key goals in 2024 will be to secure the equipment necessary to control and scale up our internal neuroimaging data acquisition. This will allow us not only to improve our yield of lucid-dreaming data, but also to aggregate data on other brain states of interest, which we can launch as experiences alongside lucid dreaming.

Model Training

This data collection schema is designed to train a transformer model. Trained on EEG/fMRI data, yet operating much like transformers such as GPT, Morpheus-1 takes EEG data as input and outputs steering instructions for tFUS. To provide some context on the significance of this transformer, let's frame it within our overall architecture (sketched in code after this list):

  1. Put on the Halo.

  2. Within the app, select the mental states/qualia of your choosing.

  3. The app prompts the transformer with an EEG sequence derived from the chosen qualia.

  4. The transformer output provides coordinates and shapes that operate as spatial instructions for our multi-element transducers on The Halo. 

  5. These instructions inform the transducer configuration so we can ultrasonically stimulate the output regions, thus inducing the brain-state that the EEG prompt was initially derived from. 

  6. Finally, the EEG on our headband serves as the prompt for the transformer model to output the next sequence of spatial instructions, creating a stabilizing feedback loop.
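
As a rough sketch of steps 3 through 6, the loop below shows how an EEG prompt, the transformer's spatial instructions, and the headband's EEG readings could chain into a stabilizing feedback loop. Every name here (SteeringInstruction, model.predict, the halo methods) is a placeholder we invented for illustration, not Prophetic's actual API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SteeringInstruction:
    """Hypothetical stand-in for the "coordinates and shapes" in step 4."""
    focus_mm: tuple      # (x, y, z) target in head coordinates
    shape: str           # named cortical subregion / acoustic shape
    duration_ms: float


def closed_loop(model, halo, target_state: str, max_steps: int = 100):
    """Sketch of steps 3-6: prompt, steer, stimulate, re-read EEG, repeat."""
    eeg = halo.eeg_prompt_for(target_state)              # step 3: initial EEG prompt
    for _ in range(max_steps):
        instructions: List[SteeringInstruction] = model.predict(eeg)   # step 4
        halo.stimulate(instructions)                      # step 5: targeted tFUS
        eeg = halo.read_eeg()                             # step 6: new prompt closes the loop
        if halo.state_matches(eeg, target_state):         # stabilized on the chosen qualia
            break
```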


Introducing Morpheus-1


Experience Generation

As our demo shows, our ultrasonic transformers take EEG/fMRI data in and output the spatial instructions for our multi-element transducers on The Halo, inducing a given brain-state. Additionally, the EEG on our headband serves as the context and prompt for the model to output the next sequence of spatial instructions.

The EEG data gathered from our users' Halos can be sent back to us via our app, which allows us to improve our models in the same way a social media feed becomes more interesting as it is tailored to an individual. We call this form of reinforcement learning Qualia Reinforcement Learning (QRL), as it is the actual validation of the user's conscious experience that allows us to discard bad tokens while scaling effective ones. Thus, QRL allows us to improve not only the tokens from the encoder block but also those from the decoder block, without the need for more fMRI data. This means that as the number of Halo users increases, we will be able to improve our models at an ever-increasing rate. As we test our technical prototype in the first half of 2024, we will provide a demonstration of this novel form of reinforcement learning.
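
Prophetic has not published the details of QRL, but the description above maps naturally onto reward-weighted sequence learning. The sketch below assumes the user's validation of an experience is reduced to a scalar reward per session and used, REINFORCE-style, to up-weight the steering tokens that preceded a validated experience and down-weight the rest; all names and shapes are our own placeholders.

```python
import torch


def qrl_loss(token_logprobs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """REINFORCE-style sketch of how QRL might work (assumption, not the real method).

    token_logprobs: (batch, seq_len) log-probabilities of the emitted steering tokens
    rewards:        (batch,) e.g. +1 if the user validated the experience, -1 otherwise
    """
    # Center rewards as a crude baseline to reduce gradient variance.
    advantage = rewards - rewards.mean()
    # Effective token sequences get reinforced; bad ones get suppressed.
    return -(advantage[:, None] * token_logprobs).sum(dim=1).mean()


# Example: two sessions, one validated by the user and one not.
logp = torch.randn(2, 5, requires_grad=True)   # stand-in per-token log-probs
loss = qrl_loss(logp, torch.tensor([1.0, -1.0]))
loss.backward()
```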

The Mass Production of Conscious Experiences

When Henry Ford introduced the assembly line, he not only produced a product that changed the world, but also an operating system that powered the second industrial revolution. People moved to urban centers to produce goods of all types that they, in turn, could actually afford to purchase. We believe the Qualia Factory could have a similar impact on society. An entire economy can be built around neuroimaging people who can enter extraordinary conscious states. Moreover, users who discover and do extraordinary things in brain-states like lucid dreams can develop followings, much as influencers do today on social media. And finally, we can create various incentives for our users to send us their EEG data to continuously improve our models, and thus our tFUS targeting sequences.

Mass production, enabled by the assembly line, created the world of material abundance that we enjoy today. Our aim is for Qualia Factories to mass-produce extraordinary conscious experiences that create an abundance not of material, but of qualia.

We will have our first fully functional Qualia Factory online in Q4 2024.
