
Henrik von Scheel on making people smarter, wealthier and healthier, biophysical data, resilient learning, and human evolution (AC Ep37)
Humans + AI
AI should make everyone smarter, wealthier, healthier
Henrik outlines his vision that AI must uplift all people without sacrificing privacy, freedom, or the environment.
“The center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment.”
–Henrik von Scheel
About Henrik von Scheel
Henrik von Scheel is Co-Founder of the advisory firm Strategic Intelligence, Chairman of the Climate Asset Trust, Vice Chairman of the Regulatory Intelligence Committee, and Professor of Strategy at the Arthur Lok Jack School of Business, among other roles. He is best known as the originator of Industry 4.0, and his work has earned many awards and extensive global recognition.
What you will learn
- Why human-centered AI is crucial for widespread societal prosperity
- The impact of AI hype cycles, media narratives, and the realities of technology adoption
- How equitable wealth distribution and capital allocation in AI can shape economic outcomes
- Risks around data ownership, privacy, and the importance of controlling your own data in the AI era
- Divergent approaches to AI regulation in the US, EU, and China, and the implications for global AI leadership
- The importance of trust calibration and intentional human-AI collaboration in practical applications
- How education and lifelong learning can be reshaped by AI to support individualized growth and mistake-enabled reasoning
- Opportunities for AI to amplify individual talents, address educational gaps, and enable more specialized and innovative skills
Episode Resources
Transcript
Ross Dawson: Henrik, it is wonderful to have you on the show.
Henrik von Scheel: Thank you very much for having me, Ross.
Ross Dawson: So I think we’re pretty aligned in believing that we need to approach AI from a human-centered perspective and how it can bring us prosperity. So I’d just love to start with, how do you think about how we should be thinking about AI?
Henrik von Scheel: Well, I think, like every technology that comes into play, it brings a lot of changes to us. But the center of any change that we're doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment. Technology is something we apply, but it's the strategy of how we adapt with it that makes the difference. It's never the technology itself.
So I’m excited. It’s one of the most exciting periods for the industry and for us as people.
Ross Dawson: There's a phrase I've heard you say more than once: AI should make us smarter, healthier, and wealthier. If that's the case, how do we frame it? How do we start on that journey?
Henrik von Scheel: So I think what people experience of AI today is mostly media hype: large language models, ChatGPT, and all of this, consumed through the media. There's a big hype around it, and I believe AI is about to crash fundamentally. But crashing in technology is not bad, right? There are a lot of promises, then an inability to deliver, and then it crashes. What you hear in the media today is very much driven by the story of companies raising funds, because it's so expensive, so they are promising the world of everything and nothing. The reality looks a little bit better.
The world they are presenting is one where you will be replaced, and you will be happy, and you'll be served by everything else. Somehow it will all work out; we don't know how, but it will work out. And that's not a real future.
The future must include everybody getting smarter, wealthier, and healthier. And when I say everybody, I don't mean only the people who already have money getting richer, or the middle class. Everybody in society should get smarter from AI. That means the things people need to learn, and the way human evolution works, should get better, and AI should make us healthier and wealthier people.
So it should not be that we sacrifice our freedom, our privacy, our environment, or anything else we put on the table just to get convenience back. We have made that exchange a couple of times, and it has not worked out well for humans. It's not a good trade for us, right?
Ross Dawson: Yeah, I love that. It's quite simple: you can say it, it's clear, it sounds good, and it is a really clear direction. But you're actually pointing in a couple of ways there to capital allocation. Obviously, if you're looking at the AI economic story, there's the diversion of capital from other places to AI model development, data centers, deployment, and so on. But when you say wealth here, it's also about the distribution of wealth: we're allocating capital to AI development, but the way AI is developed will also create wealth. There is real potential for productivity improvement.
But then it's about finding the mechanisms for allocating the wealth that is created. Let's call it equitably.
Henrik von Scheel: I'm a firm believer that this year, 35 to 45% of the money invested in AI will evaporate. The companies that have invested are the early adopters, so they're rushing into it; from a company perspective, you always adopt the best practices. But once you go beyond the hype, the performance curve and the adoption curve are low. For example, the simple version of AI is already there. You heard Deloitte and McKinsey talk 10 years ago about robotic process automation as God's gift to mankind in AI. Today you don't hear them talking about it, because you can download it for free. For HR, for forecasting, planning, budgeting, and so on, you can save 20 or 30%, and as an organization you can do it yourself: you download two or three models, you test them, and you run them. Good, okay, so that's when you apply best practices.
Then you have industry practices, like AI agents. When you have AI agents for manufacturing, for industrial sectors, for energy sectors, they are nothing other than workflow optimization. You use robotic process automation and put a visualization on it, so it's far more practical at that level, because you use the data organizations already have, along the process flow, on safety and security. It's very much down at the level where they can apply it and use it. But this version of large language models, where you have magic powder you spread over the organization and everything just works, is not really there.
Then there's the third leg that companies are quite aware of: Shadow AI. Shadow AI matters because AI is the biggest infringement on intellectual capital within organizations. The reason normal people are not allowed to look at pornography at work is cybersecurity; it's not that your boss doesn't like you looking at pornography. It's the same reason with AI. You should not be allowed to use the latest version of Copilot or large language models as a CFO or as a worker, because you're exporting your own information outside. Copilot takes a screenshot every five seconds for the large language models' learning. So from a corporate point of view, the first thing is to protect your own data so you can monetize it in the future.
From an economic point of view, if you go two or three steps behind this, you ask: okay, what makes sense here? There's something really, really strange in this. Australia was built by building railways; they take 100 years to build, and they also last 100 years. The infrastructure lasts, so there's a return on investment. You build streets, you build education systems; everything we build as humans, as a society, has a lasting element to it. Now we build data centers whose chips need to be replaced after three months, or six months. There's no sense in building data centers around the world to capture all the data, at a volume of hundreds of trillions of dollars, and then having to exchange the hardware every three to six months to maintain that data. And then you say, wow. And you fund that via license models for large language models; the data can never, in its entire life cycle, be worth that much.
So there's a very strange element here. To most of the entrepreneurs who build their solutions on large language models like Gemini and ChatGPT, you have to say: okay, you are building your solution on large language models, but you don't own the model. You don't own the data. You don't even own your own data. So what are you doing?
Ross Dawson: You have architectural choices, to a point, as to—
Henrik von Scheel: Those are architectural choices, but you are limiting yourself. The first thing you have to say is: if my value is customizing a solution, my value is actually the data. So you must have a way to keep and maintain the data yourself. We can take another call to discuss how you apply AI and what the future of AI looks like, because AI today is very much focused on language models, and language models are the most limited branch of AI science of all. They have the least data, but they're the ones we're most excited about, because they resemble something we do: our wording, our formation of words. It's recognition, and recognition is what we do.
I wanted to come back to the economy, right? The US economy has put all its chips on this. It's highly energy sensitive, and everything is running on that one track. However, the US dollar has a really, really bad track record. Three and a half years ago, there was a president in the US who was asleep at the wheel, and while he was sleeping, Saudi Arabia's Crown Prince MBS went in and did what is called the divorce of the petrodollar. Gold was linked to the US dollar, and the US dollar was linked to oil; that was the arrangement. It meant the US could print as much money as it wanted, and the rest of the world paid the dividend for it. It was the only country that could just print money. That brought the US to a point where, when the new president came into office, an accord was written, and it's very rare in the US that an accord is written. An accord is only written when the Federal Reserve goes into the president's office saying, guys, we're hitting the wall, we need to do something. And they wrote five plans for what they wanted to do. And here's the funny thing: when I mention them, you will recognize them very much.
Number one, bring back manufacturing. Number two, implement tariffs so they can pull back US dollars. Number three, implement stablecoins to pull back US dollars. Number four I actually forgot. And number five was that they want to go to war. Now they go to war, right? And they are going to war not for any reason other than that their economy is based on a war machine and the economy is becoming unstable. That's one of the main reasons. The US has put all its economic cards on AI, and from a country perspective, that's a very dangerous thing to do, because you need energy and you need data, and AI, from the US perspective, has become a defense mechanism.
When you look at the regulatory aspect of AI, Europe very much puts the human at the center: the human owns the data, teenagers up to 16 years old are protected, and you can work with data as an entrepreneur, but you have to coordinate how you protect and manage the data. You have to be transparent about how you use the data and how much data you use. The US is very different: red tape off, no regulations at all, full-blown power to the market. You are seen as a consumer, Ross, so all power goes to the guys who earn money to make more money. No protection of anything, including your data. That's the US version: literally no regulations, no red tape.
Ross Dawson: In a moment, I want to move on to human-AI collaboration. But just to round this out, you mentioned your prediction that 35 to 45% of the investment in AI is gone, which I think is very fair. Back when we were both speakers at the Future of Sex Summit in Dubai last year, I was on a panel where I was asked: is it boom or bust? And basically both, in the sense that the 35 to 45% is bust, but at the same time there are other parts of the market that can prosper. Of course, consolidation of the market means massive investments and, in some cases, massive losses, but there are still sectors where high value can be created. This goes back to your point that a lot of the center of gravity is still in the US. We are starting to see sovereign AI initiatives and other initiatives around the world, but those are often built on open-source foundation models. And the regulation, particularly in the EU, still produces a very differentiated AI landscape across the US, China, the EU, and some other players. So if we see boom and bust, that could be very much focused on the US, with the potential for other parts of the world to see more growth in AI.
Henrik von Scheel: So Ross, you’re using large language models, right?
Ross Dawson: Yes.
Henrik von Scheel: Do you have the feeling that, since last year, they are getting stronger or weaker?
Ross Dawson: They’re getting better.
Henrik von Scheel: My feeling is the opposite. My feeling is that they’re getting weaker and weaker, and that’s because part of the data —
Ross Dawson: In which context?
Henrik von Scheel: They're using old, old content; they've already used up the old content. So now you need to go to specialized sources, to public sources, to research data, you know. From a content perspective, it has become extremely weak. Over the last year, I've been extremely disappointed by large language models, very, very disappointed in terms of what they can deliver and what they do. Ask them whatever: ask about futurist predictions, or ask about Industry 5.0 or 5.6; whatever you ask, you get an answer, 110%. Take CBAM: there are 19 regulations under CBAM, and you ask, how many regulations are there? They will give you sometimes 19, sometimes 17, sometimes 23. They just make up stuff, and it gets worse and worse. So if the valid data is not strong enough, it becomes actually a very, very weak tool after all, right?
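A quick way to surface this failure mode, as a minimal sketch rather than anything either speaker describes using: ask a model the same factual question several times and treat disagreement between runs as a signal that it is guessing. The `ask_model` function below is a hypothetical stub standing in for a real LLM client; here it simply reproduces the erratic answers from the CBAM example.

```python
# A minimal self-consistency check (illustrative sketch only).
import random
from collections import Counter

def ask_model(question: str) -> str:
    # Hypothetical stub: a real implementation would call an LLM API
    # with `question`; here we mimic the erratic answers above.
    return random.choice(["19", "17", "23"])

def consistency_check(question: str, n: int = 5) -> tuple[str, float]:
    """Ask the same question n times; return the majority answer and
    the fraction of runs that agree with it."""
    answers = Counter(ask_model(question) for _ in range(n))
    answer, votes = answers.most_common(1)[0]
    return answer, votes / n

answer, agreement = consistency_check("How many regulations are there under CBAM?")
if agreement < 0.8:  # illustrative threshold, not a standard
    print(f"Low agreement ({agreement:.0%}); treat '{answer}' as unverified.")
```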
Ross Dawson: Are you using the top models from the frontier labs? Because they are very good.
Henrik von Scheel: Yeah, but then you have to have the paid model. And it's not like I'm really, really impressed by it. It's not kicking my bum to where I say, holy smokes. In the beginning, the first two years, you were surprised, right? So I have a little bit of the feeling that AI today is where email was in the beginning, before digitalization came. With email, we were all excited, but email created not less workload but more workload for us; it decreased our productivity. There's really good evidence of this.
Then look at digitalization, right? We were all excited because we could connect, we could talk to our friends, all of this. But where did WhatsApp Business end up? WhatsApp Business is no business, right? We are using it, but it decreases our productivity level far more. So today, with digitalization, we are becoming generalists: quick information, we know a little about something, but we don't really know anything. You can't put your finger on it and say it has really increased our innovation level. No. Has it really increased our research level? No. Has it really made us better human beings? No. I'm not negative about it; I'm just saying we have to be careful, because we have a knife and a hammer, and we shouldn't use the hammer for everything. And you put that really well: with any technology there's a hype, then it goes down and matures, and then the application ends up different from what you thought at the beginning. That's very relevant for AI. But you know, the big message today in AI is physical AI, right? What is physical AI?
Ross Dawson: Well, just going back to the earlier point, a lot of what I'm working on at the moment is the idea of appropriate trust. You trust the models enough, but not too much, so that if they are going to give you bad results, you're not relying on them, but if they are useful, you can use them. We have to continually calibrate for any particular model, and that's different in every particular context. This is essentially a skill or a capability: we need to know when and how to use models at any particular time, because they keep changing. That becomes the foundation of how we can trust them to the right degree; not too much, but enough that we can actually use them where they are useful. Which comes back to this frame of human-AI collaboration, which you've been doing a lot of work on. So if AI can be useful in some contexts, how is it that we can best build effective human-AI collaboration?
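One way to make that calibration concrete, as a minimal sketch under stated assumptions rather than a method either speaker endorses: keep a running ledger of human spot checks per model and task, and rely on a model only where its estimated reliability clears a task-specific bar. The class name, the Beta(1, 1) prior, and the 0.8 threshold are all illustrative assumptions.

```python
# A minimal sketch of trust calibration: track verified outcomes per
# (model, task) pair and derive a posterior trust estimate from them.
from collections import defaultdict

class TrustLedger:
    """Tracks human spot-check outcomes and yields a trust estimate."""

    def __init__(self, prior_correct: float = 1.0, prior_wrong: float = 1.0):
        # Beta(1, 1) prior: start agnostic about every model/task pair.
        self.prior = (prior_correct, prior_wrong)
        self.outcomes = defaultdict(lambda: [0, 0])  # [correct, wrong]

    def record(self, model: str, task: str, correct: bool) -> None:
        """Log the result of one human spot check."""
        self.outcomes[(model, task)][0 if correct else 1] += 1

    def trust(self, model: str, task: str) -> float:
        """Posterior mean probability that the next answer is correct."""
        correct, wrong = self.outcomes[(model, task)]
        a, b = self.prior
        return (correct + a) / (correct + wrong + a + b)

ledger = TrustLedger()
ledger.record("model-x", "regulatory-lookup", correct=False)
ledger.record("model-x", "regulatory-lookup", correct=False)
ledger.record("model-x", "summarization", correct=True)

# Rely on the model only where calibrated trust clears a task-specific bar.
if ledger.trust("model-x", "regulatory-lookup") < 0.8:
    print("Spot-check regulatory answers by hand; calibrated trust is low.")
```

The point is not the arithmetic but the habit: trust becomes a number you update per model and per context, rather than a general feeling about AI.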
Henrik von Scheel: I like this. Let's play a little bit, right? Human evolution starts with the birth certificate: we go to kindergarten, we go to school, and we learn differently. Everybody's an individual; we learn differently, right? It takes humans a long time to learn, to sense, to do all of this. And then you have AI, which is a supporting learning model for you to store information. But today you learn, and the model learns on you: you log in, and every time you learn, the model learns from you. That means all your information is captured there, right? So the next evolution of the model should keep the privacy of Ross throughout your years with large language models. You've studied Porter's models, you've studied this and that. If I ask you the next day about Porter's model, you may still forget it, but the machine should be able to help you learn, to adopt the skills in your daily life. So it cannot be machine knowledge owned somewhere else by a big company; it must be something attached to Ross throughout your life, so that you go from where you are now to somewhere else in five years. The knowledge you have searched for, gained, and adopted follows your life, right?
This is, for me, where the real AI revolution happens: in the bio revolution in 2030, because the biggest amount of data we have is biophysical data. The interconnection between our body, our biological systems, and our biophysical systems, how we eat food, what materials we interact with, all of that comes together there, and part of it is the knowledge center of you, Ross. So if you learn something, how does it follow your evolution? Do you learn the same way today as you learned 10 years ago?
Ross Dawson: And it’s a wonderful thing that we continue to learn and forget and evolve. We are the same person, sort of, but, you know, we are a different person at the same time.
Henrik von Scheel: Yesterday I was talking to a psychiatrist who studies human evolution, Trina Gondo, and I had an interesting discussion with her, because she says humans' learning capacity changes throughout their life. So imagine learning modules that can support us throughout our life: how conscious and focused we are on things, how much stress we can take, because stress levels differ too, how much breadth we are covering across our work and private life, and how we are set up in terms of our spiritual life. All of this has something to do with your learning, because it's your perspective that drives you. It's your values that drive you.
I actually developed a model with her of how the five aggregates of the brain work, to understand our human evolution. For the last eight months, I've been trying to map human evolution: to map how AI affects it, what we should regulate, how we should protect it, and how the human can monetize their own data, right? So just look at—
Ross Dawson: That's like the initiative by Doc Searls. There are a couple of really interesting initiatives. He worked originally on VRM, vendor relationship management, where you own your own data and can trade it effectively, and he is now instrumental in setting up an AI initiative around your personal AI, where you own the data, you own the systems, and you're able to evolve with them. There are some other interesting initiatives like this, but they are obviously very tiny compared with the way most people are using AI, essentially giving their data away to other people. But this is certainly part of the potential: to build the structures and architectures where we do own our data and our models, and control how they are used and what comes from them.
Henrik von Scheel: So let's go back to one element, right? Originally, Ross, you and everybody else who lives in a society made an agreement with the government, a social agreement. The social agreement is: you protect me, and I'm willing to pay tax somehow, right? So in reality, the government you made an agreement with should have the ability to protect you. However, in an AI model today, that's not possible, because if the government were to protect you from the very beginning, storing and maintaining your data, the amount of money needed just to maintain your data would be immense.
So we need to define a model where governments and the human being hold the data structure in co-ownership, like in a blockchain: you have a public and a private key, both parties hold the data, but the data is only unlocked when both agree. Why? Because there's a monetization model on your own data throughout your life. And when you die, your data passes on to your children, because that's your DNA data, your life history data, all of it. So there should be an ability to monetize it.
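As a minimal sketch of that dual-key idea, assuming a simple XOR secret split rather than the production-grade threshold cryptography a real system would need: the data is encrypted once, the key is split into two shares, the citizen holds one and the government the other, and the data unlocks only when both shares come together.

```python
# Illustrative co-ownership sketch: neither share alone reveals anything;
# decryption requires recombining both. (Assumes the `cryptography` package.)
import base64
import secrets
from cryptography.fernet import Fernet  # pip install cryptography

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a raw key into two XOR shares."""
    share_citizen = secrets.token_bytes(len(key))
    share_government = bytes(a ^ b for a, b in zip(key, share_citizen))
    return share_citizen, share_government

def combine_shares(share_a: bytes, share_b: bytes) -> bytes:
    """Recombine the two shares into the original key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

# Encrypt the citizen's data under a fresh key, then split the key.
key = Fernet.generate_key()                 # url-safe base64-encoded key
raw = base64.urlsafe_b64decode(key)         # the underlying 32 bytes
token = Fernet(key).encrypt(b"lifetime health and learning record")

citizen_share, government_share = split_key(raw)
del key, raw  # discard the whole key; only the two shares remain

# Unlocking works "both ways" only: both parties must contribute a share.
recombined = base64.urlsafe_b64encode(combine_shares(citizen_share, government_share))
print(Fernet(recombined).decrypt(token))
```

Inheritance, as Henrik describes it, would then amount to transferring the citizen's share to the children.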
The challenge we face is the cost of maintaining your data throughout your life. In the fourth industrial revolution, we're going through the bio revolution, then the consumer revolution, and then the fusion revolution. In the fusion revolution, the objective and the hope is that we find mechanisms for cheap energy, because the amount of energy we use today on data is literally crazy. It's utterly, utterly crazy; we should be ashamed of ourselves when we see it, and that's just for convenience.
So if we find a model for our government to do this, we should actually work on it. That's what I'm trying to look at. I also want to alert you to one interesting thing. My key field of study is patternicity with probabilities. When you look at trends that are coming, you look at probabilities, not ChatGPT stuff, right? And there's one trend that emerged last week that hadn't emerged before: the trend of anarchy in Europe. Anarchy is an interesting aspect, because anarchy is your distrust in the government. And anarchy comes down to a simple equation of 25%: if 25% of people in a country like Germany, the UK, or France reach that point, 25% is the tipping point for everybody. Petrol prices are too high, food is too expensive, they've been given too many promises that were never kept, and so they take power into their own hands.
When you look at it a little, you ask: but anarchy, is that something new? No, the US is living in anarchy today. Trump is the true version of anarchy. People distrust the government, so they chose him, and he, by all appearances, says: okay, I'm doing something very different, I'm giving all the power to the market. There's been no time in history when all the power resided in the market; Elon Musk, Amazon, Apple, all of them have literally all the power. It's totally, utterly crazy. This is the highest version of anarchy you can see in a country. And if we're not careful, it's spreading.
Why am I discussing this in the context of AI and the human? Because if the human is the centerpiece, the core element of human development is that we have safety, security, and trust. If trust is broken, anarchy emerges. And if anarchy emerges, AI can take on very different forms that, in scenario thinking, we don't want. But AI can also take a form in which it supports us in our evolution.
Ross Dawson: Well, let's go to education. You are a professor, an educator. You look at the future of education, and you alluded to it before. So in this world where AI is already significant and becoming more so, how do we reinvent education? How do we educate ourselves as individuals, as educational institutions, as a society? How do we shape the education we need for the exciting times to come?
Henrik von Scheel: I think one of our challenges with education is that once we go beyond eight years old, the key thing we're learning is reasoning, and our reasoning skills are learned by making mistakes, unfortunately. We never learn by being given an answer. If you study Porter's model on ChatGPT and get all the answers, and I ask you the next day, you haven't learned it if you haven't applied it. If you apply it, you will learn it. You make mistakes, and it's by making mistakes, by putting yourself into the content and working with it, that you learn. Human evolution in reasoning works through mistakes. So we need to find a very smart way for AI to support us in this mistake-driven learning phase, because that's the way we are built to learn, right?
Ross Dawson: And I think that's critical: as individuals, we need to understand that if we delegate our thinking to AI, it's not going to work; you're going to get dumber rather than smarter. But if we have the intent of using it to hone our thinking, to help us make mistakes, to act as a Socratic dialogue partner or whatever, we can do that, but it requires individual intent. So as educators, and also in organizations, which should be educational institutions in their own right because they are learning organizations, we need this framing of AI as a cognitive foil for us, as opposed to something we delegate our work to, which is never going to get us anywhere good.
Henrik von Scheel: And where do you think we can use it in education?
Ross Dawson: The good thing is personalized education, where I think there is definitely the ability to address where individuals are: their understanding, the metaphors that will be relevant to them, the frames for that. But it never has to take the form of giving the answer. So there's always this human complement: the educator needs to be inspiring, needs to help people find themselves, and has that relationship with them. It's this complement with the AI, which can guide people to specific lessons or frames or examples that resonate with them and assist them. So again, individuals need to understand this and shape it for themselves; I think we can present things in the right way. It's very much a human-plus-AI educational frame.
Henrik von Scheel: I think you're spot on. When you look at the five aggregates we have in human evolution and in the phases of education, the first is the sensory aggregate, our forming of ourselves to the outside world. It's shaped quite early on, until we are maybe 12 years old, but mostly in the first two years: our sight, our smell, how we hear, how we taste, how we feel, and how our balance works, we learn quite fast. This is what AI is focusing on in physical AI today: trying to move from a language-model point of view out into the physical world.
Then we have the cognitive version of us, the intellect, which is very different. The intellect is a version of awareness: how we comprehend things, how we understand things, how our knowledge is formed and expressed. It's communication, it's storytelling, it's our comprehension, our perspective, our reasoning, our awareness. These are never the same for any two people. I can have a room of 200 students and talk about the same element of Adam Smith's first principle, and they will all understand it differently because of their different backgrounds. So this part, cognitive understanding, the intellect, is far more complex.
Then you go to the versions of who we are as a person: our memories. Our memories are bound up with our emotions, which are a hugely important part of our learning, because memories have nothing to do with truth. Large language models always look for the truth, but in our own memories we lie to ourselves to keep our sanity. We are partly, not consciously but unconsciously, lying to ourselves because we view things from only one perspective. So our reflections on our memories, our impulses related to our memories, our conceptual things: all of these are our emotional elements, governing how strongly we can link to knowledge, how strongly we can see the future, and how we can see ourselves in the future.
When you look at the current crisis, memory bears on how resilient we are as people, how resilient we are in our learning, how comfortable we are with the unknown, how comfortable we are with learning. Then you have the next two. The fourth is our mental formation, our identity. This is the element we're trying to protect in digitalization: how we form our opinions, our insight, our resolution, our understanding of ourselves, and our retentiveness, who we are. All of these things are shaped as teenagers. We don't want this shaped in the social sphere; we want it to be a safe, secure element. This is the identity you form.
Then you have consciousness. Consciousness is a strange thing: you have two layers running in your education, the layers that run long term and the unconscious that actually takes the decisions, the analytical versions and the underlying elements. For example, why are you doing something? You come with purpose, with energy, with desire, or with willpower. You might say those are more etheric. No, they're not, because, Ross, you wake up every morning with a certain amount of energy that you can use over the next eight hours of work. If you spend the first four hours on email, you're spending your most precious willpower and energy right there. Take training: if you want to train in the evening, your willpower is lower, so you should train early in the morning. This willpower and energy is how we as humans, in our consciousness, become aware of things: what we focus on, we magnify. So these are the five aggregates from the learning perspective.
If we applied these, you and I, Ross, we could start an initiative to understand human evolution as we evolve. I'm nearly 60 years old now, and that means my concept and experience of life is different from when I was 30, or when I was 20. You cannot go to a young person of 15 and say: let me tell you about love, there are four different phases of love. They need to experience them themselves, because it's not my job to take that away from them. And it's not my job to tell a young man who wants to conquer and do things, who wants freedom, Generation X and all of this: easy, easy, easy. Life will let you know. When you fall in love and become a father, it changes you. Why? Because accountability moves into a man's focus, where before he was conquering. A man wants to be a caretaker of something, and that fulfills and magnifies him. And you might ask: is this part of the five aggregates? Very much so, because it's part of human evolution. Ross, you have experienced that in your life. So the question is: how do we connect that with our evolution and learning?
Ross Dawson: Yeah, I think that's a really important point around accountability: for ourselves, for those directly around us, and for the broader community. And that's this big humans-plus-AI frame. We're obviously just touching the surface of what we could dig into. But how can people find out more about your work, Henrik?
Henrik von Scheel: I'm a public figure. I do a lot of research projects with universities, I have a lot of PhD students, and I coach and support governments on policy initiatives. Currently, I'm focusing a lot on the Gulf regions: strategic briefings, crisis management, doing strategic, tactical, and operational scenarios for the short term and long term. But my passion is actually teaching, and that's a far more personal story.
People always see me as the Industry 4.0 originator, given everything I have accomplished. But my true story is actually quite different. When I was young, I was dyslexic; I'm actually doubly dyslexic, and I stuttered. I had a very, very difficult time in school. That's why I am a little bit passive-aggressive, because I'm always on the defensive; for many years I went through life as some sort of an outcast. In that phase, I had a very strong teacher who supported me, who used time and effort to see my skills, and he helped me overcome my dyslexia, which is not really true: you never overcome your dyslexia, you just get tools to work with it. Today I've written nine books, and five of them are bestsellers, but I cannot even read my own books aloud.
So what is the message? Every one of us is made different, and society is often built so that if you don't fit the frame, you're not part of the frame. But I think AI opens up something for us: the breadth of who we are as people is a beautiful thing. I cannot speak the way my good friend Tarek, who is also your friend, can; he's a gifted storyteller. My gift is that I can see patterns. So I believe that every human being should be able to see their superpower. Your gift, Ross, is a very different gift: you can gather communities, you can convey difficult things simply, and you have an ability to put the human into the future, where today everybody freaks out because they don't see themselves as part of it. So I think everybody has a future in that.
To answer your question, I'm quite a reachable person. I believe the future looks good for us, Ross. I believe this is the time for our educators to wake up from their long sleep. We need to evolve our teaching material. We need to evolve the way we learn and teach. We have terrible track records in how boys and girls evolve in their learning, and we're not doing anything about it. This is our chance with AI to change the learning mechanisms for boys and girls, for those like me who don't fit the templates, and for those with special needs. With AI, we have the ability to specialize ourselves in far more detail.
One of the challenges we have with education today, as you go from primary school to higher education and beyond, is that we have become generalists, and our generalism is actually inhibiting us from innovating. We're not meeting some of the core challenges we have in science today, and we need to push the boundaries of research to really become innovative. We need to push our boundaries in manufacturing, the energy sector, and so on, to specialize in special fields. When you look at engineering schools, they have become more and more generalist across six fields, when they should be producing specialists in their fields. So I think that's where we need to really push the boundaries.
Ross Dawson: Yeah, to your point, what I see as one of the ultimate possibilities of AI is that it amplifies our individuality, and that's an extraordinary possibility. So thank you so much for your time and your insights, Henrik. You're sharing some great work, and we'll share links to one of your research papers and the work you do in the show notes. Thank you.
Henrik von Scheel: Okay, thanks a lot. Good. Goodbye.


