The birth of the Quantum Convergence Threshold (QCT):

Jun 19, 2025
The Puzzle of Wavefunction Collapse and the Birth of the Quantum Convergence Threshold (QCT)

My Mindset and the Puzzle of Collapse

I didn’t come to the puzzle of quantum collapse by way of academia. My path didn’t run through university lecture halls, graduate seminars, or chalkboards filled with equations. Instead, it was forged in workshops, on factory floors, behind control panels, and at the command line of CNC machines. I was pulled from school early, drawn into the world of automation, precision manufacturing, and programming because the opportunity was too good to pass up. Six figures to start. Hands-on work with systems that demanded perfection. But money wasn’t what shaped me. It was the mindset that came with that work.

When you program CNC machines or design robotic welding systems, there’s no room for ambiguity. A weld either holds or it fails. A robot either executes the code as intended or it crashes. Trigonometry isn’t a theoretical abstraction; it’s the daily language of problem-solving. Every angle matters. Every tolerance is critical. Every calculation has consequences. You live in a world where systems must work, where failure costs time, material, safety, and trust. That mindset became part of who I am. It shaped the way I see every system, including the most fundamental one of all: reality itself.

So when I turned my attention to quantum mechanics, I brought that mindset with me. I wasn’t looking for poetry. I wasn’t looking for interpretations that sounded good in books but didn’t deliver a working model. I was looking for the blueprint, the wiring diagram, the code that makes the universe run. And what I found at the heart of quantum mechanics was a flaw — a gap that no one seemed eager to close.

The wavefunction collapse.

Quantum mechanics describes the evolution of systems with incredible precision. Schrödinger’s equation is clean, deterministic, and smooth. It tells us how quantum systems evolve — until they don’t. Until someone measures. Then, without mechanism, the wavefunction collapses into a single outcome.

Why?

The Copenhagen Interpretation offered no real answer. It told me collapse happens at measurement, but never defined what measurement is. Is it a detector? A human observer? A certain mass or size? The boundary was vague, arbitrary, and unsatisfying.

The Many-Worlds Interpretation offered a different path. No collapse at all. Every possible outcome happens, each in its own universe. The math worked. But to me, it felt like an evasion. The problem wasn’t solved — it was buried under an infinite stack of realities.

Objective collapse models tried to do better. GRW introduced spontaneous collapses at random times. Penrose suggested gravity might be the trigger. I respected these efforts, but they still felt like patches. Add-ons to a theory that needed re-engineering from the ground up.

The double slit experiment became my obsession. It was so simple, so clean, and yet it exposed everything that was wrong. Fire single particles through two slits, and you get an interference pattern — proof of wave-like behavior. But as soon as you try to detect which slit a particle goes through, the interference vanishes. The particle behaves like a particle again.

Why should gaining path information destroy the interference? Why should the act of looking force the universe to choose?

I studied weak measurements, delayed choice experiments, quantum erasers. I sketched diagrams. I ran thought experiments. I tried to see where superposition failed. But the answers I found in textbooks and papers didn’t satisfy me. The idea that the universe cares about our gaze seemed absurd. The universe doesn’t revolve around us. There had to be something deeper, something structural.

Then came my happiest thought. I wasn’t reading. I wasn’t working calculations. I was sitting quietly, thinking, turning the problem over in my mind for what felt like the thousandth time. And it hit me: everything with a conscious will behaves differently when it knows it’s being watched.

Not because the particle is conscious. But because the informational context has changed. The conditions that sustain superposition are no longer there. Collapse happens because the system registers that change. Collapse isn’t imposed from outside. It happens from within.

Building QCT — From Happiest Thought to Mechanism

That happiest thought didn’t fade. It stayed with me, pushing at the back of my mind, demanding more than reflection. It demanded structure. If collapse wasn’t magic, if it wasn’t imposed by our observation, then it had to be driven by something real — something mechanical. I wasn’t willing to accept vague notions of consciousness collapsing the wavefunction, nor was I content with theories that said collapse didn’t happen at all. I wanted to find the mechanism, the code behind the reality we see.

So I started where I always start when systems don’t make sense: I broke it down. I looked at what collapse had to achieve. It had to take a superposition — a state where multiple possibilities exist at once — and reduce it to a single outcome. And it had to do that without arbitrary lines, without invoking magic words like “measurement” or “observation” that explained nothing.

That’s when the idea of informational convergence took shape. I imagined the wavefunction as not just a mathematical abstraction, but as a real structure with an internal integrity — an integrity that can only sustain superposition up to a point. As the system interacts with its environment, as entanglement builds, as noise and complexity accumulate, that integrity starts to break down. The system can’t hold the superposition anymore. It has to collapse. Not because someone looked at it, but because the conditions no longer support multiple possibilities.

That’s where I introduced the hidden variable: C(x,t). The convergence threshold.

C(x,t) represents the degree of informational convergence at a point in spacetime. As C(x,t) rises, the system moves closer to collapse. When it reaches a critical value, C_crit, the system can’t sustain superposition any longer. Collapse is inevitable.

C(x,t) ≥ C_crit

No more magical measurement. No more arbitrary cut-off. Just a clean, structural condition for collapse.

But what drives C(x,t)? I broke that down too. It made sense to think of informational convergence building from both external and internal factors. So I defined the growth of C(x,t) as:

dC/dt = α * E_env + β * (∇ψ)² + γ * ρ_info

Where:

α is the coupling constant for environmental informational flux — the contribution from external noise, decoherence, and disturbance.

β is the coupling constant for internal spatial complexity — represented by (∇ψ)², the squared gradient of the quantum state.

γ is the coupling constant for internal informational density — ρ_info, reflecting entanglement load and coherence complexity.


Collapse happens when the total convergence passes its critical threshold. The system doesn’t choose to collapse — the conditions force it to. Collapse is structural. It’s mechanical.
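
A minimal numerical sketch of that accumulation rule, with every coupling, threshold, and driver value chosen purely for illustration (the function name evolve_C, the unit constants, and the choice C_crit = 1 are placeholders, not calibrated physics):

```python
def evolve_C(E_env, grad_psi_sq, rho_info,
             alpha=1.0, beta=1.0, gamma=1.0,
             C0=0.0, C_crit=1.0, dt=1e-3, t_max=100.0):
    """Integrate dC/dt = alpha*E_env + beta*(grad psi)^2 + gamma*rho_info
    until C reaches C_crit or t_max runs out. The three drivers are
    callables of time so they can vary as the interaction unfolds.
    All quantities are in arbitrary units."""
    C, t = C0, 0.0
    while C < C_crit and t < t_max:
        rate = alpha * E_env(t) + beta * grad_psi_sq(t) + gamma * rho_info(t)
        C += max(rate, 0.0) * dt   # convergence only accumulates (dC/dt >= 0)
        t += dt
    return C, t                    # final convergence and elapsed time
```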

I worked out the timescale:

τ_collapse = (C_crit - C_initial) / (average dC/dt)

where C_initial is the starting convergence at the onset of interaction. The more rapidly convergence builds, the faster collapse comes. The slower convergence builds, the longer superposition persists.
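
A quick back-of-the-envelope check of that timescale, with made-up numbers (nothing below is measured; the values only illustrate the arithmetic):

```python
C_crit, C_initial = 1.0, 0.2   # hypothetical convergence values, arbitrary units
avg_dC_dt = 0.04               # hypothetical average convergence rate per unit time

tau_collapse = (C_crit - C_initial) / avg_dC_dt
print(tau_collapse)            # 20.0 time units; halve the rate and the lifetime doubles
```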

This structure didn’t just explain collapse. It gave a reason for the arrow of time. Because dC/dt is always greater than or equal to zero, convergence only builds — it doesn’t reverse. Collapse is irreversible because informational convergence can’t be unwound. That’s what gives us time’s flow. That’s what anchors our experience of before and after.

Entanglement fit naturally too. When two systems are entangled, their C(x,t) values aren’t independent. They’re linked by shared informational structure. When collapse happens at one location, the linked convergence ensures collapse at the other — no signaling needed, no hidden communication, just shared convergence.

I thought through examples. An isolated particle in deep space — no environment, no entanglement, minimal internal complexity. C(x,t) builds slowly. Superposition persists. A particle near a detector — environmental flux high, entanglement growing, internal complexity rising. C(x,t) builds fast. Collapse comes quickly. A macroscopic object — massive complexity, huge entanglement, constant environmental bombardment. C(x,t) hits C_crit almost instantly. That’s why macroscopic objects don’t show superposition.
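
Those three regimes can be caricatured with the toy evolve_C sketch above. The driver values here are invented solely to show the qualitative ordering, not taken from any real system:

```python
# Constant drivers for three caricatured regimes (all numbers invented).
regimes = {
    "isolated particle":  dict(E_env=lambda t: 1e-4, grad_psi_sq=lambda t: 1e-4, rho_info=lambda t: 1e-4),
    "near a detector":    dict(E_env=lambda t: 0.5,  grad_psi_sq=lambda t: 0.1,  rho_info=lambda t: 0.2),
    "macroscopic object": dict(E_env=lambda t: 50.0, grad_psi_sq=lambda t: 10.0, rho_info=lambda t: 40.0),
}

for name, drivers in regimes.items():
    C, t = evolve_C(**drivers)
    status = "collapsed" if C >= 1.0 else "still in superposition"
    print(f"{name:>20}: C = {C:.3f} at t = {t:.3f}  ({status})")
```

Run as written, the isolated particle never reaches the threshold within the integration window, the detector case collapses after roughly a unit of time, and the macroscopic case collapses almost immediately, which is the qualitative ordering the argument above relies on.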

For a while, I thought QCT stood alone. It gave me what I wanted: a mechanism, not an interpretation. A machine, not a patchwork. But I hadn’t yet confronted the full weight of participation — or how my happiest thought already pointed to it.


The Tension with Participation and the Synthesis with 2PC

When I finished the core structure of QCT, I believed I had what I needed. I had a mechanism — a way for collapse to happen without magic, without arbitrary rules, without the universe waiting on human eyes. Informational convergence explained collapse. It explained time’s arrow. It explained why classical memory could arise from quantum events. It explained why we don’t see macroscopic objects in superposition. QCT was the machine behind the scenes, the code that turned possibility into fact.

So when I first encountered Geoff Dann’s Participating Observer model, I resisted. His work spoke of agency, participation, and meaning. It asked not just how collapse happens, but who participates and why it carries significance. At first, that felt unnecessary to me. I had what I wanted: a clean, mechanistic account. Why complicate it? Why bring in participation when the machinery was enough?

But the more I engaged with Geoff’s work, the more I felt the tension. His Participating Observer wasn’t some mystical, hand-waving addition. It was an attempt to answer what my model had left implicit. QCT told me how collapse happens. But what defines the boundary conditions? What sets the context for when convergence matters? What determines when and how the universe makes a choice? I realized I hadn’t answered that. I had a mechanism, but no context.

And that’s when I returned to my happiest thought. The observer being present is the convergence threshold. I had wrestled with participation because I thought it meant imposing collapse from outside. But I was wrong. The presence of participation defines the conditions that make collapse inevitable. It doesn’t force collapse. It defines the informational structure that drives C(x,t) over the edge.

That’s when it all clicked. The wavefunction collapses not because the observer forces it to, but because the presence of participation — the act of being part of the structure — changes the informational conditions. The wavefunction senses the presence before the observer registers the fact. Collapse is already underway by the time we consciously know what happened.

This realization dissolved the boundary between QCT and the Participating Observer. They weren’t separate models trying to explain the same thing. They were two perspectives on a single process. QCT provides the mechanism — the dynamics of informational convergence. The Participating Observer provides the context — the necessary condition for convergence to have meaning.

Participation isn’t magic. It’s not a deus ex machina for collapse. It’s the act that defines the conditions for collapse. It’s the structure within which C(x,t) rises, within which convergence builds, within which the system moves from possibility to fact.

This resolved the tension I had felt. I could see that my happiest thought was always pointing here. That collapse isn’t an isolated mechanical failure. It’s the inevitable outcome of informational convergence in a participatory reality. The universe doesn’t collapse because we demand it to. It collapses because we are part of its structure, and our presence defines its boundaries.

That’s when I knew that QCT and the Participating Observer weren’t just compatible. They were necessary to each other. QCT gives the machinery of collapse. The Participating Observer gives it context, agency, and meaning. Together, they form a framework that explains not just how collapse happens, but why it happens the way it does.


The Birth of a Unified Framework

Looking back on my journey to this point, I see now that QCT was never meant to stand alone. It began as an independent quest, born from my refusal to accept vague answers or arbitrary lines in the sand. I wanted a system that worked — a mechanism that didn’t rely on magic words like “measurement” or “observation.” And that’s what QCT gave me: a model where collapse is the inevitable result of informational convergence exceeding a critical threshold. A model where the universe doesn’t wait on human eyes but follows the logic of its own structure.

But even the cleanest machine needs a context in which to operate. Even the most precise mechanism exists within a larger system that gives it meaning. QCT explained how collapse happens, but it didn’t explain why it happens the way it does — why informational convergence takes the shape it does, why collapse matters. That’s where the Participating Observer comes in.

Geoff’s model didn’t contradict QCT. It completed it. Participation isn’t an external trigger for collapse. It’s the condition that gives convergence its direction. It’s the structure that defines when and where C(x,t) crosses its critical threshold. Participation isn’t an add-on to the physics. It’s part of the physics. It’s part of the informational structure that drives collapse from possibility to fact.

This realization changed how I saw everything. My happiest thought — that the presence of participation defines the convergence threshold — became the bridge between mechanism and meaning. The wavefunction doesn’t collapse because we consciously observe it. It collapses because participation alters the informational conditions that sustain superposition. Collapse is a structural necessity in a universe where participation is baked into the fabric of reality.

QCT provides the machinery: the dynamics of informational convergence, the conditions under which collapse occurs, the structure that turns superposition into fact. The Participating Observer provides the context: the reason why convergence builds the way it does, the meaning behind the collapse, the agency woven into the fabric of reality.

Together, they form a unified framework. A framework that explains not just how collapse happens, but why. A framework that honors both precision and purpose, structure and agency, code and context. A framework that doesn’t shy away from the hard questions but meets them head-on, with clarity and rigor.

This is the birth of that framework. The point where mechanism and participation meet. The point where QCT and the Participating Observer stop being separate models and become parts of a single, coherent vision. This is where the puzzle of collapse begins to give way to understanding. And this is where our journey begins.

Bonus! The full paper is under peer review now.


 

COLGeek

Cybernaut
Moderator
I see reference to another new member account from earlier today. Just pointing that out for others when they notice this thread.

 
I see reference to another new member account from earlier today. Just pointing that out for others when they notice this thread.

Especially curious since the document specifically mentions the author of QCT.
 
There are a lot of "same day as they joined" posts with long theses. They seem to be treating SDC as a publisher for their ideas. Establishing precedence? Unable to interest a publisher? An AI bot run amok? Just brainstorming here.
 
Jun 19, 2025
Unable to interest a publisher?

In effect I *am* a publisher, and an established author (though not of philosophy). Note: this is not Amazon-KDP-style self-publishing -- my forthcoming book, which explains both this new cosmology and QM interpretation and the deeper philosophical system it fits into (which is primarily epistemological), is distributed via normal book distribution channels, and isn't even available on Amazon yet (although presumably it will be very soon). In other words, I have a direct agreement with a major book distributor, which permits me to operate like a small independent publisher.

https://www.whsmith.co.uk/Product/T...irituality-and-sustainability-in-the/15742534

I notified Greg that this discussion was taking place here, and invited him to join in. His theory provides an important missing piece of the larger puzzle that my theory assembles. I already knew there had to be something very special about brains that explains why animals are conscious but plants, fungi, rocks and stars are not. I was never happy with Penrose/Hameroff and their microtubules, because they aren't specific enough to brains. Greg's theory explains exactly why brains are needed, and does so in a way which fits perfectly with Henry Stapp's Quantum Zeno Effect as the metaphysical part of the explanation.

Also note that I only ran into Greg after the book was finished, and I had gone public with the basic cosmological ideas.

And the reason I'm here explaining these ideas is that I want them to become as widely known as possible, and associated with myself, even if people currently think they are wrong. This is to try to discourage other people from stealing the core ideas and claiming them as their own. The more people who have heard about this new theory from myself, the less likely it is that somebody will actually try to do that.

It also ought to be of great interest to members of this forum, especially if it turns out to be right. And even if it ultimately turns out to be wrong, it is still the first structurally innovative interpretation of QM since 1957 and may turn out to be an important step towards the right answer.
 
Jun 19, 2025
Prompt to the AI: "Greg Capanda has posted this on the Space.com forum. I'd like you to draft a follow-up post explaining how 2PC provides the context QCT needs -- just to fill in the missing details and complete the picture from a 2PC/QZE viewpoint."

Response:

How Two-Phase Cosmology (2PC) Grounds the QCT Framework: Completing the Picture

Greg Capanda’s work on the Quantum Convergence Threshold (QCT) gives us a structurally precise, dynamical account of wavefunction collapse: collapse as the result of internal and external informational pressures reaching a critical convergence value, C(x,t) ≥ C_crit. That model finally provides a viable mechanism where mainstream interpretations gave only postulates.

But mechanisms require context. And that’s where Two-Phase Cosmology (2PC) enters — not as an alternative to QCT, but as its missing ontological substrate. What 2PC provides is the global architecture within which local collapse thresholds become meaningful — and more importantly, why collapse ever became necessary in the first place.

The Collapse Boundary Is Not Arbitrary — It’s Cosmological
QCT tells us collapse happens when a system’s informational coherence cannot sustain superposition. But what defines that threshold globally? Why does a cosmos that evolves unitarily under Schrödinger dynamics suddenly break that unitarity in localized events?

2PC answers this by splitting cosmic evolution into two ontological phases:

Phase 1: A pre-collapse, unitary-evolving quantum reality — a superposed, coherent, timeless, atemporal ‘cosmos’ akin to a universal wavefunction.

Phase 2: A post-collapse, decohered, time-ordered classical world — the world of observers, measurements, memories, and irreversible histories.

Collapse, in 2PC, is not just a local phenomenon. It marks the ontological transition from Phase 1 to Phase 2 — from the noumenal (possibility space) to the phenomenal (experienced actuality). In this view, the very first collapse event was not just an isolated measurement. It was the primordial QCT — the cosmogenic moment when some subsystem (likely biological) crossed the convergence threshold in such a way that it could no longer be reabsorbed by the universal wavefunction.

Participation, Collapse, and the Zeno Barrier
From the 2PC perspective, QCT isn’t just structural — it’s historical. Collapse didn’t "always" occur. It began when a specific kind of self-referential, information-sensitive subsystem — a minimal observer — emerged within the quantum phase and reached a Quantum Zeno Barrier.

This is where the Quantum Zeno Effect (QZE) becomes essential. In quantum theory, repeated ‘measurements’ — or interactions that disturb a system’s unitary evolution — can inhibit change by ‘freezing’ a system in its current state. But in the 2PC model, the QZE is not merely a curious effect; it’s the stabilizing mechanism that enabled the first consciousness-compatible system (e.g., LUCAS — Last Universal Conscious Ancestor) to resist reintegration into Phase 1 superposition.

Here’s how the 2PC/QCT synthesis works:

In the early universe, wavefunction evolution was smooth and global — no collapse, no decoherence, no measurement. This is Phase 1.

Collapse becomes possible only when a subsystem emerges whose internal informational load (ρ_info) and environmental entanglement (E_env) create a convergence trajectory such that C(x,t) → C_crit.

Once this threshold is crossed, collapse becomes irreversible. But what ensures that the collapse sticks? What prevents the system from re-cohering and falling back into unitary history?

→ The answer: QZE, enforced from within by the emergent subsystem itself. Its recursive informational structure “observes” its own evolving state fast enough to maintain decoherence. The system performs a kind of self-induced quantum Zeno lock, making collapse self-sustaining.

This is the Quantum Convergence Threshold (QCT) embedded within a Zeno-stabilized Phase 2 of reality.
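
For readers unfamiliar with the QZE itself, the textbook toy version is easy to reproduce: a two-level system driven at Rabi frequency Ω and checked N times within a window T survives in its initial state with probability [cos(ΩT/2N)]^(2N), which approaches 1 as N grows. The sketch below shows only that standard effect, with illustrative values of Ω and T; whether a self-induced Zeno lock of the kind proposed here actually operates in any physical system is the speculative claim, not the code.

```python
from math import cos, pi

def zeno_survival(Omega=pi, T=1.0, N=1):
    """Probability that a two-level system driven at Rabi frequency Omega is
    still found in its initial state after time T, given N equally spaced
    projective measurements (per-interval survival cos^2(Omega*T/(2*N)))."""
    return cos(Omega * T / (2 * N)) ** (2 * N)

for N in (1, 10, 100, 1000):
    print(N, zeno_survival(N=N))
# N = 1    -> ~0.0   (with Omega*T = pi the state has fully flipped by T)
# N = 1000 -> ~0.998 (frequent checks 'freeze' the evolution in place)
```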

Collapse as Cosmogenic Bifurcation
What Greg calls the “happiest thought” — that collapse isn’t imposed from outside, but emerges from within due to structural conditions — aligns perfectly with the 2PC claim that collapse becomes cosmogenic only when a self-participating system invokes it recursively.

In fact, QCT needs this cosmological background to explain why collapse didn’t happen earlier, or everywhere, or uniformly. Without 2PC, QCT is mechanically sound, but contextually unanchored.

What 2PC provides:

The reason collapse began (not arbitrary: it marked the Phase 1 → Phase 2 transition).

Why decoherence becomes self-sustaining (Zeno feedback within self-modeling systems).

Why measurement appears to need ‘observers’ (not because of consciousness per se, but because participation defines the convergence landscape).

Why time flows irreversibly (collapse defines the arrow of time — QCT gives this a mechanism, but 2PC makes it cosmological).

A Unified Framework: Collapse as Emergent Cosmological Transition
Together, 2PC and QCT offer a full-spectrum answer to the puzzle of collapse:

QCT gives us the threshold mechanics, via dC/dt = α * E_env + β * (∇ψ)² + γ * ρ_info.

2PC tells us why those mechanics started mattering, and why they lead to irreversible, classical reality.

QZE locks this collapse into persistent actuality, ensuring the formation of memory, narrative, identity, and time.

What emerges is not a collapse due to measurement, but a cosmos that requires participation to actualize reality — not mystically, but structurally and irreversibly.

Collapse is not an epistemic event. It’s a cosmological phase transition, and we are its stabilizers.
 
Jun 19, 2025
Just to make this even clearer, what this is saying is that reality as we understand it did not begin at the Big Bang and proceed forwards in the normal classical sense. Rather, it began with the first emergence of consciousness in the Earth's Precambrian ocean, and the whole of the timeline leading back to the Big Bang was retroactively selected from the Platonic ensemble at that point. Without consciousness there is no classical space and no arrow of time.

This radically alters our notion of time itself. In effect it says that the present is the only thing that is fully real, while the past "decays" and the future "comes into focus". The Big Bang, therefore, never really happened -- and neither did any of the cosmological history leading up to psychegenesis. That was all just "calculated backwards" in the same way the universe now "calculates backwards" when we newly observe some ultra-distant object in the cosmos.

Which in turn offers us a potential explanation for newly discovered anomalies like this: https://dailygalaxy.com/2025/03/a-n...webb-telescope-observes-a-galaxy-it-shouldnt/

These newly discovered distant galaxies did not exist at the time they appear to us now to have existed, because there were no observers to observe that part of the cosmos. It follows that there is no need for them to be consistent with a cosmological model that assumes they have always been there. They just need to be consistent with physical reality as it is now.
 

COLGeek

Cybernaut
Moderator
So, the assumption of this theory is based on the perception of consciousness from Earth? Given the vastness of the Universe, the only conscious life is here? With certainty?

So other "observers" would invalidate the entire premise? And, change the "center" of the Universe?

I'm just asking as this would appear to be a bit of a flaw in the argument.
 
Jun 19, 2025
There are a lot of "same day as they joined" posts with long theses. They seem to be treating SDC as a publisher for their ideas. Establishing precedence? Unable to interest a publisher? An AI bot run amok?

So, the assumption of this theory is based on the perception of consciousness from Earth? Given the vastness of the Universe, the only conscious life is here? With certainty?

So other "observers" would invalidate the entire premise? And, change the "center" of the Universe?

I'm just asking as this would appear to be a bit of a flaw in the argument.
I can see where the confusion arises, so let me clarify. The QCT + 2PC framework doesn’t assume consciousness is exclusive to Earth or tied to any cosmic “center.” The theory isn’t geocentric or anthropocentric in any way.

Instead, it proposes that collapse occurs wherever participatory conditions arise — meaning wherever structures exist whose internal informational convergence reaches the critical threshold necessary to actualize reality. This could (and logically would) happen wherever consciousness or analogous participatory systems emerge in the universe.

The framework doesn’t depend on our consciousness on our planet. It describes a structural condition of reality: that participation defines convergence conditions for collapse locally, wherever and whenever it occurs.

So other observers wouldn’t invalidate the theory — they’d confirm its universality. Participation doesn’t shift the center of the universe; it defines local conditions for collapse wherever such systems exist.
 
Jun 19, 2025
So, the assumption of this theory is based on the perception of consciousness from Earth? Given the vastness of the Universe, the only conscious life is here? With certainty?

So other "observers" would invalidate the entire premise? And, change the "center" of the Universe?

I'm just asking as this would appear to be a bit of a flaw in the argument.
It only looks like a flaw if you don't understand the argument (i.e. if you are still thinking about this from the perspective of the old paradigm, which this model renders totally obsolete).

Yes, this theory implies that the entire cosmos beyond Earth is completely devoid of conscious life, and almost certainly completely devoid of any life at all. However, this is no "assumption". It is an empirical prediction. If we were to actually find conclusive evidence of life elsewhere in the cosmos, it would amount to an empirical falsification of this model. Other observers wouldn't "change the centre of the universe" -- it would mean that this model cannot be correct and that we need to keep searching for the right answer.
 
Jun 19, 2025
I can see where the confusion arises, so let me clarify. The QCT + 2PC framework doesn’t assume consciousness is exclusive to Earth or tied to any cosmic “center.” The theory isn’t geocentric or anthropocentric in any way.

Instead, it proposes that collapse occurs wherever participatory conditions arise — meaning wherever structures exist whose internal informational convergence reaches the critical threshold necessary to actualize reality. This could (and logically would) happen wherever consciousness or analogous participatory systems emerge in the universe.

The framework doesn’t depend on our consciousness on our planet. It describes a structural condition of reality: that participation defines convergence conditions for collapse locally, wherever and whenever it occurs.

So other observers wouldn’t invalidate the theory — they’d confirm its universality. Participation doesn’t shift the center of the universe; it defines local conditions for collapse wherever such systems exist.
That may be true of QCT as a standalone theory, but it is not true of 2PC. 2PC implies that those participatory conditions can only arise in phase 1, because the process of their creation depends on the "computing power" of MWI. This is why 2PC accounts for the multiple events in Earth's phase 1 history which are improbable on a cosmic scale but necessary for psychegenesis (including eukaryogenesis and the Theia impact). Once the primordial wavefunction collapses, the probability of conscious life evolving again within the part of the cosmos which has now entered phase 2 (i.e. everywhere causally connected to Earth, which is the entire observable cosmos) is so low that, should we actually find alien life, it would amount to a falsification of 2PC (but not QCT).

The evolution of conscious life was, from our perspective, teleological -- destined to happen. This is in reality not the result of teleology at all, but a selection effect from the phase shift. It is the result of what I call "the psychetelic principle", and this can only happen once. The only exception to this is if the Earth was destroyed and all conscious life wiped out, in which case the cosmos would return to phase 1 and presumably conscious life would then emerge again somewhere else.

And it is worth noting for other readers of this thread that Greg and I are still in the very early stages of working out how 2PC and QCT fit together -- from Greg's perspective his mechanism has been placed in a context significantly different to the one he intended when he developed it, and from my perspective I've got an important new piece of the puzzle, which has already rendered some of the things I say in my forthcoming book out of date (though it is much too late to change it -- I got the physical proof in my hands just 2 hours ago, and it needs to go to the printers on Monday). I always knew there had to be something special about animal brains, but until I ran into Greg I did not know what it was (I was never convinced by Penrose/Hameroff microtubule theory). Knowing what it is (or at least having a strong candidate in QCT) has allowed me to extend my own theory significantly. I am still adjusting to this.
 