What if the key to understanding human consciousness lies in building it into AI? In this episode of High Agency, we sit down with Dr. Suzanne Gildert, Founder & CEO of Nirvanic Consciousness Technologies, to explore the cutting edge of AI, quantum physics, and the pursuit of machine consciousness. With a background in robotics, AGI, and quantum computing, Dr. Gildert has led breakthroughs at Sanctuary AI, Kindred AI, and D-Wave Systems. She holds 67+ patents and is now pioneering quantum consciousness technology at Nirvanic. This conversation challenges our assumptions about what intelligence is, where AI is headed, and how it could change our understanding of the human mind.
SUZANNE GILDERT
[00:00:00] I'm interested in consciousness in general. So applying it to AI is interesting because we could ask the question, would consciousness in AI improve it? Would it make it more performant? Would it make it safer? I think trying to understand consciousness by trying to build it into AI will get us closer to our mission of understanding what it means for humans to be conscious.
MO DHALIWAL
[00:00:25] Welcome to High Agency, igniting conversations with inspiring people leading transformative change. The development of artificial intelligence has been a huge breakthrough, and it's continuing to accelerate. While AI can now generate realistic text, analyze medical scans with uncanny precision, and even compose music, there's both skepticism and fantasy in the question of whether AI will ever understand or possess self-awareness. Recent trends show a clear trajectory toward AI systems that mimic the depth and complexity of human reasoning. Some researchers are experimenting with neural architectures inspired by the human brain, while others are embedding bio-drives into AI agents, programming evolutionary motivations that echo biological instincts. The global AI market is projected to skyrocket to over $2.5 trillion by 2032, fueled by advances in multimodal AI and breakthroughs in self-organizing networks. This massive global focus and perceived opportunity is driving investment into R&D at a geometric scale. Some experts predict that artificial general intelligence will be developed within the next decade, while others see its arrival in the next few years. Regardless, the race to develop systems capable of real-time learning, agency, and self-reflection is intensifying. But with progress comes complexity. A recent 14-point checklist designed to evaluate AI consciousness revealed that no current system satisfies more than a few criteria. And there's a contrary perspective that says the current path of AI isn't a linear march toward consciousness. In fact, the vast majority of AI development may yield powerful logicians that can analyze and synthesize vast quantities of data.
But the underlying approach may actually be an evolutionary dead end when it comes to developing machines that are self-aware. And should machines be self-aware? The pursuit of consciousness in AI isn't just a technical challenge. It's an ethical, philosophical, and societal one. From quantum computing innovations to debates on whether consciousness is inherently biological, this endeavor pushes the limits of what we thought possible, and of what we're ready to accept. Surfing the absolute tip of this edge and pushing its boundary is Dr. Suzanne Gildert. Suzanne is founder and CEO of Nirvanic, a new venture that is developing quantum consciousness technologies for AI. Previously, she co-founded Sanctuary AI in robotics, where, as CTO, she led the development of a humanoid robot platform and its software. Earlier, she co-founded Kindred AI and pioneered quantum computing applications at D-Wave Systems, including the creation of MaxCat, the first game played against a quantum computer. She holds 67 US patents ranging from robotics to quantum computing, and has a PhD in experimental physics from the University of Birmingham. How do you, Suzanne, see the future of what's happening with AI? We're just going to dive right in, if you don't mind.
SUZANNE GILDERT
[00:03:58] Yeah, well, thanks for having me, Mo. This is a really great opportunity. I think that AI in the future is going to be both amazing and everyday at the same time. If you look at the kinds of things we have now, like ChatGPT and the AIs that are assisting us every day, imagine having those things 10 years ago. It's just remarkable what we have now, and we're beginning to take it for granted. We're not stepping back and thinking about how marvelous it is. I use AI every day to assist me with things like programming, helping with documents, just strategizing. If I'd had this 10 years ago, I would have been mind-blown. But it's just another tool now that we have. So I think what we'll have in the future is going to be incredibly amazing, but we'll also take it for granted.
MO DHALIWAL
[00:04:55] Well, it's interesting how quickly humans can take things for granted, right? I remember having lunch with a tech startup founder, I think five or six years ago. Over lunch, we were casually having conversations around technology and gravitated towards AI. And we just took it as a given as we were talking that, oh yeah, AGI, it's not going to happen in our lifetimes, but maybe one day. And that was such a short time ago. Now the AI conversation around the tools has gotten quite mundane, right? It's AI everything, it's everywhere, so it's become as much of a cliché as it has been taken for granted, to an extent. But there's a lot of promise in this technology, there's a lot of hope, there's a lot of development. There's also some amount of consternation around what it means, both societally and ethically. What you're working on at Nirvanic right now is a very interesting aspect of the artificial intelligence conversation, and I hinted at some of this up front, but I want to get straight into Nirvanic and learn about what you're doing right now. The reason we're diving straight in is because I saw you speak at an event just a few short weeks ago, and what you had to say actually made my head explode. Because again, I had already thought AI is quite cliché: yes, we're going to keep developing it; it's going to get better and better at what it does. But you have this entirely, wildly different perspective, and I want to hear about that.
SUZANNE GILDERT
[00:06:34] Yeah, so Nirvanic's mission is to try and understand consciousness and then apply that to AI and to human wellness, so it's quite a broad mission. And really, the AI component, I think, is the first component. I'm interested in consciousness in general, so applying it to AI is interesting because we could ask the question: would consciousness in AI improve it? Would it make it more performant? Would it make it safer? I think trying to understand consciousness by trying to build it into AI will get us closer to our mission of understanding what it means for humans to be conscious.
MO DHALIWAL
[00:07:11] But there was something specific that you had shared. I mentioned some of this in the intro: there's skepticism, and there's this sort of fantastical hyperbole, I think, around what's actually happening with AI right now. The idea that ChatGPT is just going to get better and better until it comes alive is, I think, where some of that hyperbole is right now. But your field and your area of research is really going off on a different tangent. So, without getting into the mathematics of it, because I'll be fighting for my life in this conversation, could you share with us what's different about your research and your approach to this?
SUZANNE GILDERT
[00:07:53] Right. So let me take a step back and say that different people fall into different buckets here about what they think about consciousness and AI, which is what makes this really interesting, because there are really diverse viewpoints on this. I find there are four main types of people, or viewpoints, let's say. The first is that some people think AI and robots are already conscious. A lot of people believe that. I don't. Then, right at the other end of the scale, there are people who believe AI and robotics will never be conscious, no matter what we do. That's the other extreme. And in the middle, I find there are two interesting camps. There are people who believe that if we just keep building the kind of AI we've been building all along, and make the models bigger and better with more data and more training, consciousness will emerge on its own. And then the second, nuanced camp is the one I fall into, which is: I think we can make AI conscious, but we need a scientific breakthrough or a new kind of technology that we don't have yet. So making AI models just bigger and bigger will not make them conscious by default, but there are technologies we can build that will make them conscious. And the first one we're investigating at Nirvanic is quantum computing. There is a range of viewpoints on this again, but some scientists believe that consciousness requires a quantum component, and that there are quantum effects in our brain. So that's the hypothesis we're testing first.
MO DHALIWAL
[00:09:31] Yeah, that's pretty fascinating, because obviously a lot of the math our current AI systems are built on was theorized in the 50s and 60s, and I just learned this recently. And then some avenues of mathematics research actually led to AI winters a couple of times, all before the 2000s. So what I'm hearing in your perspective is that relying on current approaches for consciousness, relying on the current development of these LLMs and these large systems, might actually lead to a bit of a winter for consciousness in AI. And you're taking a completely separate approach into this quantum field. This isn't your first foray into anything quantum-related; that seems to be a recurring theme in your work and your research. I mentioned in the introduction that you were early on at D-Wave Systems in developing quantum computing. So can you, again, in a way that I can digest, explain what is so different about the field of quantum computing and the research that goes on there, and why you think it's necessary for consciousness?
SUZANNE GILDERT
[00:10:50] Right. Yeah. So there's a lot to unpack there, but I'll try and tackle it. Just to go back to the history: I actually started out in quantum physics. I did a PhD in quantum physics, and I was looking for some way to apply that PhD in industry to build something big. Quantum computing at the time was just taking off as an industry, so I moved into that field, and I worked on quantum computers for a long time. At that point, I'd actually heard of some of these quantum consciousness theories. Specifically, there are two scientists, Stuart Hameroff and Sir Roger Penrose, who has a Nobel Prize now. They came up with a theory called Orch-OR, which stands for orchestrated objective reduction. It's a bit of a technical term, but the idea is that our brain uses quantum information as well as classical information, or could use it in addition to classical information. If you think about classical information processing, this is the kind we're all familiar with on our laptops, on our PCs, on our phones. They move zeros and ones around. On or off. And they do computations. All of mathematics, all of logic, all of the information processing underlying every computer we use is based on that: zeros and ones. Quantum information is completely different. In a quantum system, you still have zeros and ones as your starting point, but a quantum system can put them into what's called a superposition of zero and one at the same time. And that sounds kind of weird. Imagine doing a mathematical computation, and then all your numbers became mixtures of different numbers all at the same time. It would be kind of confusing if you were doing algebra or something. But that's what quantum computing is all about, and people have designed algorithms around it.
So quantum algorithms have found ways to use this fact that numbers can be two things at the same time to gain a computing advantage. If something can be zero and one at the same time, you can search massively more potential answers, because now you're considering lots of different options all at the same time, not having to check every option one after another. So Roger Penrose and Stuart Hameroff and others in the field made this hypothesis that maybe our brain is doing this. Maybe our brain is taking advantage of this quantum computing effect to be able to compute faster, to compute more performantly. And this makes sense: if evolution could have found a way to use an extra computing boost, it probably would have discovered it. So the idea is that we evolved to harness quantum information in our brain to do faster and better computation, by being able to consider many more options all at the same time. And just as one last thing: how does this relate to consciousness at all? Because all I've been talking about so far is computing. The theory is that if the brain uses quantum information, it's putting all these different numbers into superpositions at the same time. Does that feel like something? This is where it gets a little bit philosophical and difficult to wrap your head around, and you kind of have to take a bit of a leap of faith. But the hypothesis is that when information goes into these weird quantum states, that's what it feels like to be something. That's what experience is. And that's why I don't think that classical AI will ever be truly experiencing anything.
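[Editor's note: the superposition idea described above can be sketched as a toy simulation in plain Python. This is purely illustrative; the representation and function names are the editor's, not anything from Nirvanic's research or the Orch-OR theory itself.]

```python
import math

# A single qubit's state is a pair of amplitudes (alpha, beta) for the
# classical values 0 and 1. Measurement probabilities are |alpha|^2 and
# |beta|^2, which always sum to 1.

def hadamard(state):
    """Apply the Hadamard gate: sends a definite 0 into an equal
    superposition of 0 and 1 (the 'both at the same time' state)."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

zero = (1.0, 0.0)            # a definite classical 0
superposed = hadamard(zero)  # now "zero and one at the same time"

p0 = abs(superposed[0]) ** 2  # probability of reading out 0
p1 = abs(superposed[1]) ** 2  # probability of reading out 1
print(p0, p1)  # an equal mix: 0.5 and 0.5

# The computational payoff Suzanne alludes to: describing n qubits takes
# 2**n amplitudes, so a quantum state implicitly carries exponentially
# many candidate answers at once.
for n in (1, 10, 30):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```

Note that the toy above only tracks one qubit explicitly; a classical simulation of many qubits would need that exponential number of amplitudes, which is exactly why quantum hardware is hypothesized to offer a boost.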
MO DHALIWAL
[00:14:25] Because it's foundationally on zeros and ones.
SUZANNE GILDERT
[00:14:27] Yeah, it's just using the zeros and ones. It's not putting them into these exotic quantum information states. So sorry, it gets a bit technical.
MO DHALIWAL
[00:14:38] No, no, that was fantastic. You paint an incredible picture. It just stretches the brain a little bit trying to hold on to it. It does.
SUZANNE GILDERT
[00:14:43] You have to use your quantum computing faculties.
MO DHALIWAL
[00:14:49] And so when I mentioned that this isn't your first foray into anything quantum-related, obviously your PhD is based in it. But you've had an interesting history with the companies that you've founded or co-founded and have been a part of, because everything seems to be not even on the edge, but so far beyond the edge of what is going on everywhere else in technology, right? Whether it was D-Wave or something else. And what's interesting to me is that there's an entrepreneur's journey in your journey as a scientist and a researcher who's looking for applications for these technologies, right? But what's it like working on a mission where you're not even sure what the outcome looks like, and you're trying to build teams, you're trying to attract investors around that, and you're trying to move towards this outcome that perhaps you can't even see? I mean, maybe you know exactly what it is and when it's going to arrive; I shouldn't make assumptions. But the perspective I have is that perhaps you're starting in on a journey and you're not exactly sure where it's going to take you, but you're trying to align people along the way.
SUZANNE GILDERT
[00:15:58] Yeah, that's right, I think. So I have a very weird history in business, because I'm an unusual breed of person who wants to take an idea that's way too early and actually commercialize it, turn it into a startup or a technology or a product. So 99.9% of people that I talk to, or that someone in this position talks to, will say, 'That's a crazy idea. That's never going to work. Why are you bothering to do that? That's too early. There are still unanswered scientific questions.' And I say, okay, that's all true. But there is a 0.001% chance that it could actually end up being a huge deal and your hypothesis is correct. And I feel like sometimes, if a scientific question is suddenly answered, it's then quite simple to quickly turn it into a technology. So I think that's the big bet: that you really want to discover something new and then be able to turn it into a technology or a product pretty quickly once you've confirmed your hypothesis. To me, that's the appeal of it, because I think of myself more as a scientist than as an entrepreneur. But it so happens I'm kind of in a crossover role of both. So I'm looking for these deep scientific ideas and questions where there are one or two really creative insights you could have that take something from the realm of science into the realm of engineering pretty quickly. That's what motivates me. That gives me passion, I guess. But it's hard. And I think the biggest piece of advice I'd give to someone with this same mindset is: be extremely honest and open with whoever you're talking to, whether it's an investor, an early hire, or just other people in the community. You have to keep saying all the time, 'This might not work. This is very risky. This has a very different risk profile to what people are normally seeing in deals.'
But there are investors out there, and collaborators and partners, that really do want to take a big risk or a big bet on something. You just have to look for those people.
MO DHALIWAL
[00:18:23] No, as you're talking, I'm realizing that you'd have to have a pretty deep well of optimism and resilience to do what you do.
SUZANNE GILDERT
[00:18:33] That's right. I think you have to ignore a lot of criticism, or maybe criticism is the wrong word. I'd say skepticism, because I think skepticism is healthy. And I honestly want people to be constantly critiquing my ideas or what I'm trying to build, but in a constructive way. I think that people who just dismiss something outright, without looking carefully at all the evidence, are wrong as well. But then I also think just being too optimistic and buying in completely is also wrong. So you need to be somewhere in the middle, where you have a healthy amount of skepticism, but also a healthy amount of optimism and hope.
MO DHALIWAL
[00:19:14] And look, you might consider yourself a scientist, but I think the fact that you want to turn it into a technology and look for applications means you've got a pretty strong entrepreneurship gene that is absolutely expressing itself. Because that's the entrepreneur's journey, right? To go out, try to solve a problem, and figure out how to build something valuable out of that.
SUZANNE GILDERT
[00:19:35] Well, I love thinking about this, because as a scientist you can get sucked into a kind of rabbit hole where you're working and working and working, you're being extremely scientific, and you're getting into a narrower and narrower field. It ends up, and I've actually had this happen to me, that there are only maybe five people in the whole world who understand what you're working on, what your research is, and why it's important. Everyone else has absolutely no idea what you're saying. So that's one of the reasons I love trying to turn scientific ideas into technologies, and then trying to find ways to communicate what that means for people and for society. I just think that's a way of getting cool ideas out to a much wider audience in a way that people really understand and resonate with.
MO DHALIWAL
[00:20:21] So from D-Wave, which was entirely quantum computing, if I'm right, and then Sanctuary, which was working on robotics, now we're into the phase of consciousness. Is that your next big problem?
SUZANNE GILDERT
[00:20:36] They're actually all related, as well. Really? Yeah. I wanted to point that out, because it sounds like a huge pivot between all these different areas, but really I'm on the same trajectory I've been on all along, which is to try and understand the human mind. One of the reasons I got into quantum computing early on was that I was interested in how the human brain worked, and at that time I just thought we were going to need really fast supercomputers to simulate the brain, so I thought quantum computers would be that. At that time, I hadn't really made the direct connection between quantum computing and consciousness. And then from there, I got really interested in AI and thought that if we want to build AI like the human mind, we're going to need human-like bodies to put that AI in, which means humanoid robots. So the left turn into AI and robotics was, again, just trying to find a way to understand the mind. And then it sort of came full circle: after trying to build AI and robotics for so long, I started realizing maybe there is a component in these so-called cognitive architectures that hasn't been looked at yet. So I wanted to go back and, with Nirvanic's work, try to fill that missing piece in AI. But it's been about understanding the human mind all along.
MO DHALIWAL
[00:21:57] Wow. Well, I kind of half-jokingly talked about how you're so far beyond even what we would consider to be the bleeding edge in technology. D-Wave holds a number of patents for quantum computing, Sanctuary for robotics. I just saw somewhere recently that Sanctuary was ranked, I think, fourth in the world, after Google, I think Boston Dynamics, and one other company that I can't remember. Right. And this Vancouver-based organization is incredible. So when did you arrive at, and this is classically the entrepreneur story, when did you arrive at the moment where you're like, no, I need to tackle consciousness in artificial intelligence, and we have to start Nirvanic?
SUZANNE GILDERT
[00:22:40] So yeah, really what occurred to me was a very simple observation that anyone can make; you don't need to be a technologist or a scientist or anything. I was looking around and thinking, why don't we have these sorts of robots everywhere at the moment? These humanoid robots, why don't we have them in our homes, helping us in hospitals?
MO DHALIWAL
[00:23:04] Well, Elon Musk's working on it, apparently. Yeah.
SUZANNE GILDERT
[00:23:07] I mean, well, there are a lot of companies working on it now, including Sanctuary. And I think what's happening is that the robots are starting to find their place. Almost everyone in this field is going into factory, automotive, and warehouse applications, which is great, and there's a huge market demand for that. So I think the way people are doing this, with standard robotics and classical AI solutions and training using teleoperation, is absolutely great, and it's going to be a really cool opportunity. But I think they're going to end up getting stuck in these repetitive types of tasks. And so I'm going to try and weave in why consciousness is important here. If you think about the way people work, we can do tasks or make decisions in one of two ways. We can do things unconsciously. Imagine you've been doing something for a long time; I usually use driving as an analogy. You've been driving for 20 years. You're not really aware of it when you're driving now. It happens subconsciously; we say we're on autopilot, right? So we can be making decisions in that mode, and it often happens when we're doing repetitive tasks. Or we can be making decisions consciously, where we're aware of what we're doing and things seem a lot more nuanced, and we need to use intuition, maybe in environments we're unfamiliar with. What I think is happening in AI at the moment is that it's focusing on that first bucket of things. It's building robotic systems and other types of agents that operate completely unconsciously. And that's fine, and it actually agrees with how we work, because they've seen thousands or millions of training examples already, so they've developed this ability to do things on autopilot, in this unconscious mode of operation. We're able to do a lot of things using that mode.
But if you take those same systems and put them into an environment they've never seen before, that they've never had any training data on, they suddenly have issues. What I think is missing in that situation is consciousness. So I think there will be a lot of applications for these unconscious AIs once they've been trained. But if you want to take those same AIs and put them in new situations, and have them learn on the fly with changing conditions and unpredictable things happening, you need a different type of technology. And I think it needs to be a consciousness technology.
MO DHALIWAL
[00:25:38] I really like the driving analogy a lot, because when I think back to when I first learned to drive, the metaphor works really well. At the time you just think you're being really self-conscious, which I guess is consciousness, but you're really paying attention to your body: where am I putting my hands, and the seatbelt, and you're looking at all the mirrors. There's this hyper-awareness and self-consciousness going on. And with enough repetition, it does become this programmed activity. Right. I can't remember the last time I sat in a car and ever thought about any of those things, because you get in and the only thing you're worried about is, 'I've got to do this today, and so-and-so is upset at me, and I've got to get here quickly.' These are the things that occupy you, but none of the functions actually matter anymore.
SUZANNE GILDERT
[00:26:23] That's right. That's right. And it's kind of interesting: once you start thinking about this, you'll notice yourself doing it. You'll notice when you're doing something consciously versus when you've sort of switched off. Often, you'll notice when you're doing something unconsciously because you notice yourself thinking about something else. So you'll be, say, doing the dishes, and you'll be thinking about your vacation, or about that meeting you have to attend tomorrow, and you'll suddenly realize you're doing the dishes without being consciously aware of doing the dishes. A lot of people practice mindfulness meditation as a way of deliberately trying to shift their conscious focus of attention. And people do this to prevent things like rumination, because you'll be consciously ruminating on something when you should be more aware of the current situation you're in.
MO DHALIWAL
[00:27:15] So I just had a business idea. I feel like as soon as you've cracked quantum consciousness, I'm going to start a meditation business for artificial intelligences, to teach them to be mindful. So, I want to talk about the entrepreneurial journey a little bit more, if you don't mind. And again, it's because I'm just remarking at how hard business can be in general, right? My own trajectory in life was that my first entrepreneurial activity was, weirdly, actually in the nonprofit space. I wanted to produce a festival, and I started doing that just because I wanted to see it in the world. I thought that was hard enough, because you're trying to raise money from sponsors and donors and government agencies that you're just selling an idea to. Later on, in comparison to that, starting an agency like Skyrocket was easier, because we're selling a service that people actually want; but it's still hard. You're still trying to align people around a vision. You're still trying to pull in different stakeholders to make something go. Any business, I would say, requires a high degree of resilience, adaptability, and of course optimism. And then we have the realms that you're talking about, where everything feels like not even a moonshot; it's like you're targeting a distant black hole and saying, 'we think we can make it there.' What has your experience been, even with team members, of bringing people around and saying, 'okay, let's work on the moonshot together,' and keeping them inspired and activated and involved in that journey?
SUZANNE GILDERT
[00:28:48] Yeah, absolutely. It's a really important question. I think about the alignment of mission and business in a deep-tech, moonshot-type company, a company where you want to take science and commercialize it. That's one thing to focus on, and it's hard. These two things will constantly try to pull apart from each other. So you just have to be aware that that's going to happen, and not get disenchanted when it starts to happen, because it will; you just have to find ways to pull them back together or patch it up. So I guess I could go over a couple of strategies I've come up with for this. One thing, and I'm talking now to anyone who wants to do this kind of thing, who wants to take a really big moonshot idea and make it happen in the world, is that you want to have a set of foundational documents. This is very practical advice now. I tend to call it a manifesto or something. The founder, or founders, or the core team who really believe in the mission need to produce a set of foundational documents that are sort of like the holy book of the mission, of what you want to do. And the more detail and points of interest and content there is in there, the better. Because remember what happens as your organization grows. Say you're five people. Then the people who truly believe in the mission get to talk every day to the person that's just been hired. So they get to, I don't want this to sound weird, but indoctrinate.
MO DHALIWAL
[00:30:27] A hundred percent. Yeah.
SUZANNE GILDERT
[00:30:28] Yeah. So you have to indoctrinate people into the cult of what you're doing; otherwise, it won't work. That's fine when you have five people, but when you have 50 people, you can no longer do that as a core early mission team of, say, two or three people. So you have to find a way of amplifying and scaling this mission culture within the organization. Having a set of foundational documents is great. You have to really look carefully at your onboarding technique: how are you onboarding people and immersing them in those ideas early on? Lately, I've been exploring new ways of using AI to do this. I've been looking at things like NotebookLM, where you can load in all your documentation, and then you can actually create AI tools to help onboard with the storytelling.
MO DHALIWAL
[00:31:19] Yeah.
SUZANNE GILDERT
[00:31:19] Yeah. So I'm working on a side project to help with the business, which I call the Oracle. Imagine you had an AI chatbot that could answer any question about the organization from the perspective of a mission-driven person. So when you hire a new person or bring on a co-op, they get to interact with this Oracle, and in doing that, you can scale yourself. So I think what I call internal marketing and communications is as important as external when you're running this kind of mission-driven organization. Yeah.
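The "Oracle" idea described above is essentially retrieval over a company's foundational documents. As a rough illustration only (not Nirvanic's actual implementation, which presumably uses an LLM with a tool like NotebookLM), here is a minimal sketch in which simple word-overlap scoring stands in for real semantic search, and the document snippets are hypothetical:

```python
# A minimal sketch of an "Oracle"-style onboarding assistant: given a
# question, return the most relevant passage from a set of foundational
# documents. Word-overlap scoring is a stand-in for real retrieval; a
# production system would use embeddings and an LLM to compose answers.

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(text.lower().split())

def answer(question: str, documents: list[str]) -> str:
    """Return the passage sharing the most word tokens with the question."""
    q = tokenize(question)
    return max(documents, key=lambda d: len(q & tokenize(d)))

# Hypothetical foundational-document snippets (a "manifesto" in miniature).
manifesto = [
    "Our mission is to understand consciousness and improve human flourishing.",
    "We balance deep science with commercial milestones.",
    "New team members are immersed in the mission from day one.",
]

print(answer("What is the mission of the company?", manifesto))
```

The point of the design is that the mission documents, not the chatbot, are the source of truth: a new hire's questions always route back to the founders' own words, which is how a small core team scales its "internal marketing."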
MO DHALIWAL
[00:31:58] Absolutely. I mean, it's great to hear that AI itself and these tools can actually be used to empower and enable that sort of work. The intent, though, knowing that it's necessary, and the focus on it, that's the most important part. In fact, just before you came on, I was chatting with Kraig Docherty, who is a people and culture person, very experienced, 20 years of this. And all he does is bring people together, align them around vision, mission, and values, help them understand the organization, and bring the stakeholders to a table, and that's for conventional business. And I'm sure that the more ambitious your vision, and perhaps the more grand and vast the idea, the more difficult it is to maintain that type of alignment and keep everybody together and moving in the same direction. Because correct me if I'm wrong, but my assumption would be that if it's something that is so vast and doesn't have a specific outcome yet: is there a worry in your mind, or have you experienced any sort of loss of urgency, where you have team members or stakeholders that feel, well, we're working on the impossible anyway, so who cares, we'll drift along and see if it happens?
SUZANNE GILDERT
[00:33:18] Yeah, a little bit of that. I'd say where that kind of thing manifests is in this constant worry of, 'Am I too early?' When you're doing a really deep tech, new idea, there's a concern that if you just hold back and wait, there will be more and more evidence or breakthroughs that happen independently, outside of your organization. So I wouldn't say it's a lack of motivational urgency, but there is a sense of, 'If we just wait, will there be a breakthrough that helps us a lot?' And I found this working in humanoid robotics for a long time: I got into it very early, and if I had just waited a couple more years, the cost of, say, robot motors and actuators would have fallen by three or four times. Once you've seen this happen enough times, you start to get into this mindset of, well, if I just wait a little bit, maybe there'll be a new piece of information, and maybe that'll help me more, and I will actually have saved time in the long run. So you have to carefully balance waiting for new evidence and new breakthroughs with just pushing forward as fast as you can.
MO DHALIWAL
[00:34:40] And you know, Nirvanic just came out of stealth mode recently, and you described that it's a small team purposefully, because you want to build a solid foundation of the science before you move on to the next phase. Given that, what are you excited about? What do you see happening out in the world that's really interesting to you, even beyond your own work, so that when it comes time to really rapidly grow or build your team, you're saying, okay, I'm glad this is happening? What are the things that perhaps you were waiting for a couple of years ago that you're seeing now and think you'll be able to leverage?
SUZANNE GILDERT
[00:35:15] Yeah. So one thing that happened over the last couple of years, which I guess is one of the reasons I decided to take the plunge into this, is that there's more and more evidence piling up for there potentially being quantum effects going on. Like I said, I knew about this theory 12 years ago. What I mean by this theory is the quantum consciousness idea: that there might be quantum effects going on in the brain. At that time I didn't see enough evidence for it, whereas now more and more scientific papers are coming out finding evidence of quantum physics going on in substructures inside the brain. So that's really interesting. And I think the second thing that's been happening is a shift in people's openness to different philosophical ideas. You might think, oh, how is business related to philosophy? Those two things are at opposite ends of the spectrum. But how we think about reality itself really influences what we can actually build in the world. I've seen some shifts in philosophy, from people mostly believing in what's called materialism, where everything is made of matter and atoms, with reductionist approaches to science, toward idealism, or what you might think of as almost a more spiritual approach to things: maybe there are interactions going on in the mind that we don't quite understand yet using currently known science, and there are ways in which our current science is incomplete that could mean we're looking at things entirely wrongly. There's just been more of an openness to that. So when I talk about some of these ideas now, it's much better received than it was a while ago. The philosophy shifting can actually influence business by making people more open-minded to new ideas.
MO DHALIWAL
[00:37:20] No, absolutely. What you're describing is the cultural shift that's taking place, right? And I talk about this all the time: whether it's business or technology or frankly anything, the strategy behind it typically governs maybe a dozen decisions that you're going to make. That's the strategy. But the culture influences a thousand unspoken contracts and decisions that might come up. So that 12-year cultural shift is now creating more availability for some of the ideas you're talking about.
SUZANNE GILDERT
[00:37:48] And I think there's an interesting link back to AI here, because AI is proliferating. It's infusing our lives; every day we're interacting with AI now. And I think the fact that it's grown so much has made us start thinking more about our own minds and who we are as humans. You could say that maybe 10 or 20 years ago, we were all focused a lot more on just doing well in life: running our businesses, material possessions, all these kinds of things.
MO DHALIWAL
[00:38:24] I was going to say the materiality and functions that we play.
SUZANNE GILDERT
[00:38:27] Whereas now, because AI is evolving so much, everyone is starting to think about what it means. It's starting to touch on our humanity, our connectedness with each other. It's starting to cause us to ask more questions about what it means to be human, what it means to be connected, what it means to interact with other beings, like AIs, that are not like us. So that's causing us to go back to those questions that philosophers were asking thousands of years ago. I think we were just a bit distracted from those questions for a while, with the industrial revolution and everything. We've been very obsessed with building technology for hundreds of years, and we've lost some of those deeper questions. But AI is forcing us to ask them now.
MO DHALIWAL
[00:39:20] Yeah. And as you were talking, I started thinking about the industrial revolution, because we're still in its halo effect, and it had the cultural impact of really mechanizing humanity. You know, it's a machine, and how do we fit as cogs into this machine? In fact, I'm not sure if this has changed now, but I know that as of a few years ago, the public education system in the UK still referred to teachers as 'supply,' because it was there to train people and put them into factories. That was the supply side of the balance sheet. Again, I'm not sure if it's changed, but that mechanizing of humanity is something we still live with, and it's probably going to be a legacy that persists for a while. But I know for me, it's been exciting to think about the idea, and I'm going to butcher this quote, but somebody said recently: I want AI to do the laundry and clean the dishes so I can spend more time being creative and making art; I don't want AI to be creative and make art so that I'm spending my time doing the dishes and the laundry. That's probably an oversimplification, but it's my perspective as well: I'm interested in seeing what kind of low-value human activities can be shunted away from me so that I can spend more time actually interacting with people and being human.
SUZANNE GILDERT
[00:40:45] Yeah. And again, there's a little bit of a trade-off here, because you might think, well, don't we just want purely unconscious AI, because then it will be doing all these routine activities that we don't particularly enjoy? And I actually agree with that. But what's going to happen if we take that stance is we're going to fill the world with robots. They're going to be amongst us everywhere, doing everything, but they will have no consciousness, no awareness. If you want to be poetic, they won't have a soul. And I think what will happen at that point is we won't be able to truly connect with them. So you have to ask yourself: do I want to be in a world where there are millions of robots everywhere doing everything, but I'm not connecting with these very human-looking things? Or do we want them to have the option to turn on and off a little bit of consciousness, so that if we do need to interact with them in a more human-like way, that option is available? It's really one of two worlds. And I honestly think that if we just fill the world with these unconscious machines, people will become increasingly isolated from one another. So I see conscious machines as a natural evolution from where we're going: we'll be interacting with these things on a daily basis, and we can choose to put them into unconscious, just-get-on-with-it mode. But if we want to connect and understand ourselves more, we can actually turn on a little bit of the self-awareness.
MO DHALIWAL
[00:42:24] Yeah, there's that human tendency to anthropomorphize things as well. But the flip side is that if we're in environments that are so mechanized, and we appreciate the fact that these are soulless machines, just mindless automatons, there's a potential for us to dehumanize them and therefore become a little more inhuman ourselves. It's a really interesting effect.
SUZANNE GILDERT
[00:42:54] It's kind of a subtle argument, but if you create something in the image of a person and then you don't give it all the aspects of a person, you start to dehumanize yourself, because we will be interacting with these things on a daily basis. It's a little bit of the same argument people have used about violent video games: if you're playing very human characters in a video game and you're inflicting violence on a lot of other characters, are you actually changing yourself and the way you relate to other human beings, because you're indulging in this violence all the time? There's a similar argument here. If you're just interacting with soulless machines and you get used to that, how is that going to carry across to your real human connections with other people? Will it affect and change them? And I think what we're doing with AI in the physical world now is taking inanimate matter and attempting to bring it to life. We're animating it. Like you said, we're going to be in environments where we're surrounded by machines that are moving and doing things all the time. We're bringing matter to life, but we're not imbuing it with a soul or a spirit. And I think that we should.
MO DHALIWAL
[00:44:12] Yeah. I see a potential there for humanity to actually lose some of its humanity if we have things, like you said, made in our image, but that we're regularly treating as inhuman and dehumanizing. The cultural impacts of artificial intelligence just over the past couple of years have been vast, and we're accelerating on that path. The impacts of some of what you're describing, I can't even fathom right now. I feel like civilization itself, what it means to be human, all of that is in a state of flux right now. So what is your vision of where your work culminates? What is your end goal? I'm not going to ask what Nirvanic's next scientific breakthrough is going to be and by when; I'm not an investor, but I'm sure your investors are asking. Right. What is your vision for the future? What does the world look like to you?
SUZANNE GILDERT
[00:45:16] Yeah. And again, I want to take this back to our mission statement and this mission-versus-business argument. The mission statement is really to understand consciousness and then to use that to improve human flourishing. The AI part is where I see the low-hanging fruit and the business alignment. So where I want to see this go in the long-term future is: okay, we've understood consciousness, and we've built it into AI in a way that's controllable and optional. I want to make this very clear: I don't think AI and robots should be conscious all the time. I think it should be something we can turn up and down. So we've done that, we've put it into AI, and that's helped our society. It's helped productivity, and it's helped people that maybe can't do tasks on their own, like elderly people. All that, I think, is going to be great. But where I see this being applied in the long term is actually to our own wellness and our own understanding of ourselves. This is going to be a little vague, because it's not a business model; it's a long-term vision and a mission. But I think understanding our own consciousness will help us understand each other more. And you could imagine, in the far future, technologies that actually allow us to modify our consciousness, or connect our consciousnesses, or expand them as well. So in the long-term future of this, I see the technologies being almost more applicable to people than to AIs.
MO DHALIWAL
[00:46:56] So, looking back on your work to date and the companies that you've helped start, if we were to backtrack a couple of decades and say, okay, Suzanne Gildert, 80% scientist, 20% entrepreneur, and that's just a ratio I'm assigning to you, what's the advice that you would give yourself? What are some things that you would do differently?
SUZANNE GILDERT
[00:47:24] I think I would not underestimate the importance of communicating an idea to lots of different people in a way that they could understand. Because, coming from a scientist's background, you tend to end up with a bit of this insular attitude of, oh, I'm just going to talk my science jargon, and if people don't get it, then that's their problem. I don't believe that now. I think you need to take any idea...
MO DHALIWAL
[00:47:51] Did you actually start off that way?
SUZANNE GILDERT
[00:47:52] Yeah, absolutely. I mean, when I was deep into the physics research, you'd go to conferences and everyone is talking in what sounds like an alien language, with formulas and equations and all these technical jargon terms. That's just how people talk. But you can't use that to explain what the impact of a scientific breakthrough might be on everyone's life. So I think you need to bridge that gap. And I would tell my younger self not to underestimate the difficulty of that process, and its importance as well.
MO DHALIWAL
[00:48:30] Well, I feel like you've already taken on that lesson and that advice, because you're doing a great job of communicating now. In fact, what I've seen in your public talks, and when you're sharing about Nirvanic and your mission, is highly accessible and incredibly impactful communication of your vision. So you're doing great with it now.
SUZANNE GILDERT
[00:48:50] Well, the cool thing about consciousness is that everyone, I assume, has it. So it's something most people are just generally very interested in. That's the way this super detailed, complicated science about quantum physics can still touch everyone's lives: it's impacting something that they experience every second of their waking life. So I think people should be naturally curious about consciousness and want to ask questions about it. And there are a lot of fiery debates about what it is, what the definition even is, whether it even exists; some people don't believe it exists, which is interesting. It's just a topic everyone's curious about and everyone has an opinion about. And I think we should try to capitalize on that, and use it as a way to get people interested in where we could take these technologies and what it might mean for AI.
MO DHALIWAL
[00:49:46] Yeah. Get them engaged and enhance that understanding. Yeah. Suzanne, if somebody wants to follow your work, learn about Nirvanic, everything that you have on the go, where should they go?
SUZANNE GILDERT
[00:49:56] Yeah, you can go to nirvanic.ai, our website, and there are links there to all of our social media. There's a Substack newsletter you can sign up for; you can find a link to that on our website. Just pop in your email and you'll get our newsletter. There are updates on Twitter, LinkedIn, and Bluesky as well, and we're experimenting with other new social platforms too. So yeah, search us up: Nirvanic AI or Nirvanic Consciousness.
MO DHALIWAL
[00:50:24] Sounds great. Well, thanks for coming on High Agency. Yeah.
SUZANNE GILDERT
[00:50:26] Thanks very much for having me. It's been great.
MO DHALIWAL
[00:50:29] Well, hopefully we've given you a lot to think about. That was High Agency. Like and subscribe, and we will see you next time.
Kraig Docherty is a strategic HR leader with over 20 years of experience in building and scaling high-performing teams across multiple industries, from technology and gaming to healthcare and logistics. As the founder of Why Talent, Kraig has been a trusted advisor to CEOs and founders at top companies like Electronic Arts, Activision Blizzard, and Indochino. He specializes in aligning people strategy with business objectives, focusing on transformative HR leadership and creating environments where teams can thrive.