Joel Hellermark was only 19 when he started Sana, an AI-powered knowledge platform he describes as the “Google for companies”. Today, at 26, the Swedish entrepreneur runs a 3 billion SEK company that is expanding to London and New York. As artificial intelligence enters a new arms race, with Microsoft and Google battling for pole position, the question remains: can Sana compete with Big Tech?
Words KONRAD OLSSON Photography ANDREAS ACKERUP
Despite his boyish appearance — the curly locks and tilted smile — Joel Hellermark does not beat around the bush. When asked about the potential of Sana, the brainchild of the 26-year-old entrepreneur, the answer is nothing less than world domination.
— The company that cracks knowledge could be one of the largest and most important companies on the planet, he says. That’s what we’re aiming to do.
We meet in the company’s new headquarters in Stockholm, a revamped former hair salon that has been turned into an office that can best be described as an Italian 1970s apartment crossed with a Japanese public library.
Designed by Ruxandra and Christian Halleröd, the sought-after architect-duo most famous for having created some of Acne Studios’ most boundary-breaking retail stores, the space is a next-level startup office. There are no Fatboys or ping-pong tables, but the environment is consciously casual, with big shared work desks and an extra-long kitchen table for family-style lunch gatherings.
— Every day, we cook food together. That’s a great time to have discussions between different teams. We wanted to maximise the likelihood of those discussions happening, like the engineer running into the designer.
It’s a bleak Wednesday at the end of February 2023 when Joel greets me in black jeans, a black polo, and black slippers. He has just got back from New York, where he has been leading job interviews for the company’s new North American office.
— Recruitment can take years to get right. Whenever we find someone who we think is a good fit, we can spend two years convincing them to join. You need to fight really hard for the best talent.
The timing for this interview couldn’t be better. Just a few months earlier, the company OpenAI released its chatbot ChatGPT, which got everyone talking about how artificial intelligence had finally become mainstream. ChatGPT stunned the world with its ability to assist people in a variety of tasks, from writing and editing to answering complex questions. The app became the fastest-growing in history, with 100 million active users after a mere two months of existence. It also threw Google, the world’s biggest search engine and a longtime leader in the AI space, into an existential crisis.
In March, it was announced that Microsoft, an OpenAI investor, was adding a layer of GPT-4 to its Microsoft 365 products, including Teams and PowerPoint. Shortly thereafter, Google made a similar play with its Workspace suite. This brings the application of artificial intelligence even closer to Sana’s wheelhouse.
It is through the prism of this context that one should view the promise of Sana.
— Sana is focused on applying machine learning to capture and organize an organisation’s knowledge, says Joel. You can think about it as a Google for companies.
If Sana is the Google for companies, what is to say that you’re not going to be the next Google?
— It’s a good question, he replies, with a huge laugh. Yeah, I think there’s definitely something there.
“Sana is focused on applying machine learning to capture and organize an organisation’s knowledge. You can think of it as a Google for companies.”
This might explain why some of the biggest investors and VC funds in the Nordics, and increasingly, I’m told, beyond, have bet more than €80 million on the company’s success.
The other explanation is, of course, the promise of Joel himself. His journey from teenage wonderboy to startup titan is well-documented.
Born in Kuala Lumpur. Brought up in Japan, Singapore, and Stockholm. His father was working as the Asia manager for the software company Intentia, and his mother was a guest lecturer at IBM.
At 13, he learned to code by taking remote classes at Stanford University.
At 14, he interned at the digital advertising agency Great Works under entrepreneur Ted Persson, who later, as a partner at EQT Ventures, would lead Sana’s series A investment round.
At 16, he started his first company, Sample, a video recommendation engine.
At 19, in the last year of high school, he started Sana, backed by angel investor and Scandinavian MIND Issue 1 cover star Sophia Bendz.
— It was an incredibly bold bet to invest in a 19-year-old that was looking to do some pretty risky research on machine learning, where the technology wasn’t proven, the market wasn’t proven, and I wasn’t proven. Not sure what got her to think beyond that, says Joel.
Bold or not, the bet paid off. Eight years later, the company is now valued at 2.8 billion SEK.
— Let’s go down to the tea room!
Joel guides me down to the basement of the company’s offices, where we sit down on floor-level chairs. Joel pours me a glass of water and opens a bottle of kombucha for himself.
Can you describe this place we’re in right now, why it looks the way it looks, and the thoughts behind creating it?
— From the beginning of Sana, the focus has been as much on the company and the organisation as the products. I’ve been obsessed with groups like Macintosh, Bell Labs, and Walt Disney. I wanted to understand how their environments shaped the way they worked.
— When we set out to design our offices, we wanted to design something that emphasised the level of care we put into the culture. There were a few different ingredients. One was the big kitchen. Then we have the big collaboration area, which is where the different teams can gather, discuss, and collaborate. The deeper into the office you go, the more silent it becomes, and you go into more traditional desk space.
— We were sitting in the tea room. It didn’t have a very high ceiling, so we were considering what to use the room for. We wanted to create a space where people could think big thoughts. We thought a tea room and that level of calmness were suitable for that. We host a lot of one-on-one meetings down here for that reason.
Were there other inspirations that you gave to Halleroed?
— I’ve spent a lot of time in Paul Fägerskiöld’s studio; he’s an artist and a friend. The first thing you run into is all of his books and references. I thought about how you can consistently get into that state of inspiration and energy and creativity. We also wanted it to feel like coming home, a place you’d enter and immediately swap your outdoor shoes for slippers. We wanted to make sure nothing reminded you of an office. Aesthetically, we wanted it to feel as if you had created a movie in the ’70s about the world 100 years from now, so retro-futuristic. We’re combining a lot of Italian design with Japanese influences. I think that’s all coming together in a quite fun way.
Was this a project that you have been longing for, or was it something that was a consequence of running a company?
— It’s definitely something I’ve been longing for. We’ve always been obsessed with the idea of gathering some of the most brilliant creative minds to create a new set of tools for humanity. And that begs the question: how does that intention manifest itself in physical space? I was really excited that we could convince Ruxandra and Christian Halleröd to work on it. They signed up on the premise of questioning what an office is.
How do you describe Sana to someone who doesn’t know what you do?
— Sana is an AI-powered learning platform that empowers organizations to find, share, and harness the knowledge they need to achieve their missions. It’s where teams can easily create knowledge, learn alone or in live groups, and get answers to any question.
— There are two major problems we’re trying to solve. The first is that institutional knowledge today is scattered across dozens of systems. It’s stuck in presentations, docs, PDFs, and Slack threads. The second problem is that knowledge is trapped in people’s heads. These two problems are slowing teams down and wasting precious time.
— To capture a company’s knowledge, we first have to index it. Our platform does this by connecting to all the apps a company uses. Think: Drive, Slack, Notion. Once these apps are connected to Sana, we can use AI to help every single employee find exactly what they need. This is where the search bar comes in. Say an employee needs a reminder of the company’s latest OKRs. All they need to do is type that question, and Sana will search through all of the content inside all of those apps and provide a relevant answer in natural language. This is how we capture the knowledge that’s already documented.
— To capture all the knowledge trapped in people’s heads, we need to make it really easy and enjoyable for people to pass on what they know. This is what our editor is for. You can use it to create anything from a quick doc to a formal learning course to an interactive live session. And the AI assistant is there to help you create an outline, brainstorm ideas, summarize your paragraphs, and more. If you decide to create a live group session, you can host that directly in Sana too—the video experience is built in. By offering all of this in one platform, we can accelerate a company’s ability to onboard new talent, ramp up sales teams, develop their leaders, and much more.
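For readers who want the mechanics, the retrieval flow Joel describes, indexing content from connected apps and matching a question against it, can be sketched in miniature. Everything below (the source names, the snippets, the bag-of-words scoring) is purely illustrative, not Sana’s actual code; production systems use learned neural embeddings and a language model to phrase the final answer:

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    # Crude bag-of-words "embedding"; real systems use learned vector embeddings.
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A pretend index of snippets pulled from connected apps (Drive, Slack, Notion).
index = {
    "drive/okrs.pdf": "Q1 OKRs: grow weekly active users and launch the new editor",
    "slack/#general": "lunch is served at noon at the long kitchen table",
    "notion/onboarding": "new hires complete the onboarding course in their first week",
}

def search(query: str) -> str:
    # Rank every indexed snippet by similarity to the query; a real system would
    # then feed the top snippets to a language model to answer in natural language.
    q = vectorize(query)
    return max(index, key=lambda doc: cosine(q, vectorize(index[doc])))

print(search("what are the company's latest OKRs?"))  # → drive/okrs.pdf
```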
How else are companies using Sana?
— It’s amazing to see how organizations are using Sana to fulfill their missions. For example, researchers at leading pharmaceutical company Merck are using Sana to learn how to use data to develop new medicines more effectively. Scaleups like Svea Solar are onboarding thousands of solar panel installers to accelerate the transition to renewable energy. During the pandemic, hospitals around the world used Sana to upskill 100,000 nurses on Covid-19 critical care practices.
What kind of opportunity did you see when creating this?
— The original question was: how can we apply machine learning to human learning? We’ve developed models that can gain a much more fundamental understanding of human language. That understanding enables us to create more personalised learning experiences than ever before. The thesis was that you could move from a medium where everyone consumes the exact same information in the exact same way to a medium where the content is dynamic based on what you already know, what you need to know, and how to teach you that most effectively. Just these past few years, we’ve seen significant breakthroughs in natural language processing that enable this transition.
We’re recording this at the end of February 2023, when there is this surge of articles, discussions, and chatter around artificial intelligence and ChatGPT. The technology that you are using, is this something that has been around for a while, or is it only now coming to fruition?
— In 2017, a paper called “Attention Is All You Need” was published, and it introduced the transformer architecture. This is what you use to train image-recognition models or text-generation models. It has evolved a bit since, but the underlying principles are still the same.
— What has shifted is our ability to bring a lot of computing power to the task. When we bring this level of compute, we can effectively train these models on the entire internet. When you do that, these models gain a very deep intuition for all natural-language tasks, and capabilities that go far beyond what we thought language models could have at this point.
— The underlying principle is very simple: what these models do is predict the next token in a sequence. A token can be a word or a pixel, and by training a model to predict the next token on the entire web, it can learn to solve most human tasks. Then you start questioning: what really is human intelligence? Are we doing something beyond predicting the next token in the sequence? When I’m talking now, I’m outputting the next word with the highest likelihood. It turned out we could, with that very simple principle, train extremely good machine learning models.
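The next-token principle can be shown at toy scale. The sketch below counts, for a dozen words of invented text, which word follows which, then predicts the most likely continuation. GPT-style models pursue the same objective with neural networks over trillions of subword tokens, but the training signal is the same:

```python
from collections import defaultdict, Counter

# A tiny invented "corpus"; real models train on a large slice of the web.
corpus = "the model predicts the next token and the next token follows the last token".split()

# Count, for every word, which word follows it: a bigram language model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word: str) -> str:
    # Output the most likely next token, the same objective GPT-style
    # models optimise, only at a vastly larger scale.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # → next  ("the next" occurs twice in the corpus)
```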
You use the word intuition. How do you define intuition in this context?
— When we intuit, we’re using high-dimensional reasoning, so it can be difficult to put into words exactly what’s leading up to that reasoning. I think historical, expert-based systems relied heavily on human labelling and rules. Those systems could work well in very narrow domains.
— Playing chess. In 1997, Deep Blue beat Kasparov. There, you had to put the best chess players together with the best computer scientists. They had to define a set of rules. Then they just used the computer to act on those rules. When it comes to human language, you need to develop models that can capture nuances in language. It would take us billions of lifetimes to define that set of rules.
— What you want to develop now are models that can learn those rules and represent them in high-dimensional ways. That goes beyond these single-dimensional rules or explanations that we could give them. They learn to develop these nuances by seeing trillions of examples.
We’re going to get back to AI. I want to talk about how you got into all of this.
— I got into programming when I was 13. I came across these online Stanford courses, and based on those, I learned to code. After a year, I wanted to put the skills into practice. I was fortunate to get to work at [the digital advertising agency] Great Works with Ted Persson. They were doing some of the most pioneering stuff back then, really pushing the state of the art in apps and interactive mobile experiences. I was excited to be part of that. A decade later, Ted ended up leading our series A.
”I got into programming when I was 13. I came across these online Stanford courses, and based on those, I learned to code.”
Going back to when you were 13 or 14, learning code, how did you envision the future back then?
— I was really interested in the idea of augmenting human intelligence. In the early days of computing, there were two leading schools of thought, led by Douglas Engelbart and Marvin Minsky. One school was the notion of artificial intelligence, where we would effectively replace human intelligence with AI. The other was augmenting human intelligence. During a 1950s encounter at MIT, Marvin Minsky declared: “We’re going to make machines intelligent. We are going to make them conscious!” To which Engelbart reportedly replied: “You’re going to do all that for the machines? What are you going to do for the people?” I was really obsessed with Engelbart’s notion of how we could augment humans to do things that extend their capabilities.
— As I was starting to write software, I thought it was incredibly powerful that you could write code once, execute that code indefinitely, and have hundreds of millions of people run this code on their computers, and, in turn, enhance their capabilities. Developing tools that could augment human intelligence became a big obsession. Somewhere around that time, I was reading a lot of biographies… I started learning from most of my mentors long after they were dead.
— I started realising that the tools that we leverage every day, and that we take for granted, didn’t just fall down from the sky. There was a group of people that came together to shape these tools. I was intrigued about whether I, at some point during my lifetime, would be able to crack something that was as important as those tools that I used every day to augment my capabilities.
Can you give an example of these mentors?
— Buckminster Fuller. I was fascinated by his idea of meta-problems. Meta-problems are problems that, if you solve them, have cascading effects. In other words, if you solve those problems, you solve everything else. Apparently, Fuller and Einstein went for a walk one day and concluded that knowledge and education were the greatest meta-problems. If you advance those, you advance everything else.
— When you go back in history, back to the printing press, or the Library of Alexandria, or whenever we’ve changed how people share and consume knowledge, that has had ripple effects across the whole of humanity. The notion of meta-problems is something I learned a lot from Bucky.
Bucky?
— Haha. He was called Bucky.
When did you read Buckminster Fuller?
— I was 14 or so. Then you have people like Edwin Land, all the way back to Michelangelo. What I found interesting about those characters was that their genius didn’t lie in a specific discipline, rather in finding intersections of different disciplines. They were polymaths in their truest sense. They would find overlaps between, in Edwin Land’s case, chemistry, business, design, and marketing.
— Throughout my childhood, I was curious about a lot of different areas. I didn’t want to pick one discipline. Instead, I wanted to keep these interdisciplinary interests, combining my love for neuroscience with design, machine learning, and company building. That notion of interdisciplinarity was another thing that I learned from these dead mentors.
You just said that you had a love for company building, and you mentioned that you were fascinated by the meta-problem of education. That all sounds like the perfect backstory to what you’re doing now with Sana. Was that really so when you were a teenager?
— What’s interesting about Sana is that it wasn’t really founded on a business idea. I wanted to build a place like Bell Labs, where we would gather the best scientists, designers, engineers, and marketers, and we would find intersections across all of these disciplines and build a generation of new tools that could empower people to learn more effectively. I wanted to work on machine learning. I wanted to work on human learning. That was really the beginning for Sana, much more so than that we had a specific idea. It was designed to be my life’s work, something that I could spend decades on. That’s why I think everything lines up so much towards it because it was very much a function of my interests.
You seem to be in the ultimate pole position right now to bring these ideas to fruition, because of where we are in technological maturity. Do you agree?
— 100%. I’ve been spending most of my time on this problem since 2015. Back then, the language model quality just wasn’t there. Now, the models are reaching the right level. Learning and knowledge have never been a bigger problem for society, with the half-life of skills now down to three years. We need efficient systems at scale to help us acquire new knowledge. Things are definitely lined up to realise that vision.
You started the company when you were 19. Did you have a sense then that now’s the time? That the technology was getting there?
— There was still some work to be done on the language models. In the first couple of years, we spent a lot on machine learning research. It wasn’t until 2020 that we launched our product. We knew that the models would reach this quality at some point. We wanted to advance the state-of-the-art.
You decided not to pursue education after high school but instead start the company. What was the reasoning behind that?
— I started Sana in the final year. My plan was to start university after the summer. Then we received our seed funding from Sophia Bendz. I knew that I eventually would want to build something like Sana, and I was fortunate to get started early.
What do you think Sophia saw in you when she gave you the funding?
— I don’t know actually. It was an incredibly bold bet to invest in a 19-year-old that was looking to do some pretty risky research on machine learning, where the technology wasn’t proven, the market wasn’t proven, and I wasn’t proven. Not sure what got her to think beyond that. [laughs]
I’m fascinated by the combination of machine learning and human learning. That’s a big part of the product and the vision, and it goes all the way back to what you talked about with augmented intelligence. It’s not like a new synthetic thing that will solve all our problems. It’s a combination of technology and humanity. Could you give any insight into how that process has developed?
— Between technology and humanity is the user experience. At Sana, we bring the Scandinavian design ethos to machine learning. By combining state-of-the-art machine learning with consumer-grade experiences we can enable people to interact with these large language models in very intuitive ways. It has been as much of a design challenge as it has been a technical challenge.
”It was an incredibly bold bet to invest in a 19-year-old that was looking to do some pretty risky research on machine learning”
— Take for example how AI helps you create content. Where should it appear? How exactly should it assist you? How do we make sure that it doesn’t get in the way? You want these experiences to be naively obvious. You shouldn’t even notice them. They should just be there, augmenting you and supporting you in your flow.
Can you define the design challenge to build something to be naively obvious?
— Back when Apple launched the Macintosh, they had this idea called HyperCard. These were interactive cards that you could assemble in whole new ways, and you could build out small apps using them. In Sana, we also want to go beyond text to what we call cards, where you can create much more interactive experiences and communicate your ideas in a more compelling way. Whether you’re creating a Q&A or a poll or an interactive graph with narration, you have these cards which enable you to create much richer, more engaging content. It took several decades to get those ideas right and to scale them up. That’s what I find fascinating now with Sana — that we’re working with ideas that computer scientists have been battling to get right over decades.
You need a pretty broad set of skills to be able to create this.
— Yes, it’s all about interdisciplinary talent. Organizing Genius [by Warren Bennis and Patricia Ward Biederman] has inspired me a lot here. The book talks about how they assembled groups of computer scientists, artists, and designers, all at the same time. At Sana, we don’t have a single specialist in-house. Our designers write code. Our customer success managers build dashboards from scratch. We want Sana to be a place where you can truly do your life’s work and explore all of your passions and interests.
I know you’re recruiting right now. Is that the pitch you’re saying?
— [Laughter] Yeah. Always recruiting. We don’t compromise on talent and have optimised for density over volume. Our interdisciplinary teams achieve a disproportionate impact relative to their size. Take our marketing team. It comprises three full-time people. This team runs all our performance marketing, social media, events, conferences, websites, messaging, and more. Our designer in the marketing team also codes the website. He works across all of the different areas. I’ve always wanted to build a company where people are surprised by how few we are.
We’ve talked about how the milieu, the actual space, fosters that. Now you’ve also talked about the importance of this interdisciplinary approach. Are there any other tactics or methods you use?
— We obsess a lot about getting to the global maximum. You can get to a local maximum by iterating in a certain direction, but it’s really difficult to get to the global maximum.
Define global maximum.
— A myriad of factors, ranging from loss aversion to bottlenecks, can lead us to get stuck at local optima that aren’t global optima; in other words, we’ve reached the top of a small hill and have nowhere to go but down. Thinking in terms of global and local maxima helps us identify whether we’re at a peak or whether there’s still potential to climb higher. It’s a mental model that helps you avoid incrementalism.
— In order to get to the global maximum, you need to be clear about when to explore something versus when to exploit it. This actually comes back to machine learning. In machine learning, you have algorithms that explore the problem space and try out new things, and algorithms that exploit, i.e. iterate on a specific idea. At Sana, we spend a lot of time exploring before we exploit. It’s very dangerous to get into exploitation before you’ve done the exploration, because then you’re very likely to end up in a local maximum.
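The local-versus-global distinction, and why exploration matters, can be made concrete with a toy hill-climb. This is an illustrative sketch only, not anything Sana has described using: pure exploitation (always step uphill) gets stuck on the first hill it finds, while exploring several starting points first reaches the tallest peak.

```python
# Toy landscape: heights along a line, with a small local peak (height 3)
# and a taller global peak (height 9).
landscape = [0, 1, 3, 1, 0, 2, 5, 9, 4]

def hill_climb(i: int) -> int:
    # Pure exploitation: keep stepping to the taller neighbour until stuck.
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbours, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return i  # no neighbour is higher: we are at a peak
        i = best

# Starting near the small hill, exploitation alone tops out at the local maximum.
print(landscape[hill_climb(1)])  # → 3
# Exploring many starting points before exploiting finds the global peak.
print(max(landscape[hill_climb(s)] for s in range(len(landscape))))  # → 9
```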
Fascinating. Talk about where you are right now in the company journey. You recently closed another round of funding. Are you moving toward a more global approach to the business?
— We’re launching in London and New York now and building out the teams there. We’ve raised over 80 million euros to execute that vision. We work with some of the world’s leading companies. Now, the next phase is to expand the team geographically and build more local presence in markets where we already have momentum.
Do you think you’ll take in more money eventually, or is this enough for now?
— We’ll most likely seek financing again, but we’re not in need of additional funding right now.
What do you attribute the interest from investors to? Is it the current business model, or is it the potential of your technology?
— I think it’s a combination of both. Ultimately, the company that cracks knowledge could be one of the largest and most important companies on the planet. That’s what we’re aiming to do. Now, we have proof points that we’re on track toward that vision.
Let’s speak more generally about where we are right now. There is a lot of chatter around whether or not Google is actually truly threatened. They’ve had such a dominant position for so many years. To me, this is the reason why this is such a big story right now. People feel like, ”OK, maybe I don’t need that interface. Maybe I don’t need to put in my query in that way and get 10 blue links where half of them are sponsored.” What OpenAI is doing, with the help of Microsoft, is envisioning a future where there is another type of interface, there is another type of answer I can get back. How would you describe this situation we’re in right now?
— We’re in a time where the paradigm is shifting and we can move beyond blue links to dynamically generated content. You can put in a query, and that system already has access to all of your knowledge, all of your company’s knowledge, and all of the world’s knowledge, and then it can dynamically generate content to answer your specific query. Whenever the medium or the paradigm changes to that extent, you have the classic innovator’s dilemma — where whoever benefited from the previous paradigm will struggle to shift to the new one.
They’re so tied to the current business model that it’s hard to pivot.
— Business model and user experience. You would effectively want a language model that had access to all of your knowledge. In order for that model to have access to all of that data, you need to entrust it with the data. I think people will struggle to trust some of these historical brands with that data. Equally, I think those brands will struggle to transition into a fundamentally new user experience that’s generative-first.
You’re smiling when you’re talking about this. It seems like you’re excited about this opportunity.
— I am. Just think about how human access to knowledge has evolved. We went from the Library of Alexandria to the printing press to the web. In the beginning, we just had Yahoo with links to the top websites. Then we had Google, which could organise these links and make them searchable in whole new ways. The next paradigm is generative content. We’re moving from creating static texts to dynamically generating content based on people’s exact needs. That’s a very exciting transition, which will empower people with the knowledge that they need to do their jobs more effectively.
You mentioned the privacy-first approach and new interfaces. Talk about the privacy part first, because that’s a huge issue in all technologies. Can you describe how you are working with it at Sana?
— Our biggest priority is making sure that raw data never touches our servers. We want to design experiences in a way where you can benefit from that level of personalisation, but without ever having to share your raw data. That is, I think, going to be an absolutely critical part of this.
— What’s going to be interesting is, if you’re an artist or a copywriter, you’ll want to teach these agents to basically augment your specific work. If you’re writing copy, you’ll want to fine-tune the models on all of your references. You’ll want it to learn your exact tone of voice. I think we’ll start moving to a world where we fine-tune these models extensively to our preferences. That’s going to allow them to write copy like us, paint like us, or think like us.
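As a hypothetical illustration of what “fine-tuning on your references” involves in practice: most fine-tuning APIs expect training data as prompt/completion pairs in JSON Lines format, one example per line. The file name and the example copy below are invented for illustration; the exact schema varies by provider.

```python
import json

# Invented examples of your own copy, paired with briefs: the kind of
# reference data you might fine-tune a model on to capture your tone of voice.
examples = [
    {"prompt": "Write a headline about our spring launch.",
     "completion": "Spring, but make it software."},
    {"prompt": "Write a one-line product teaser.",
     "completion": "Less busywork. More brainwork."},
]

# Serialise to JSON Lines: one JSON object per line, ready for upload
# to whichever fine-tuning service you use.
with open("tone_of_voice.jsonl", "w") as fh:
    for ex in examples:
        fh.write(json.dumps(ex) + "\n")

print(sum(1 for _ in open("tone_of_voice.jsonl")))  # → 2
```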
I guess the analogy is J.A.R.V.I.S. in Iron Man. Tony Stark has his own AI that’s tailored to him and knows everything about him and creates things that he wants. Are you saying that you are doing this for companies today?
— Exactly. We do that down to a personal level.
Oh. You can do it on a personal level as well?
— Yeah. We want you to feel like you have da Vinci in your pocket. You have this true polymath that has all human knowledge but also your knowledge and your company’s knowledge. It knows exactly what you know, can act on your behalf, and can use the same tools you use. I think that’s going to be very profound.
How far off are we from that happening, at the scale of every human?
— It usually takes decades for these types of advancements to be adopted, but if we’ve learned anything from history, it is that those cycles have gotten shorter and shorter. If you go from radio to TV to the mobile phone to Google to Facebook to ChatGPT, you’ll see that the adoption curves have drastically accelerated.
It’s exponential development.
— Humans struggle to reason exponentially. Intuitively, we reason linearly. If you reason exponentially and take ChatGPT as the starting point and extrapolate that another decade, it seems inevitable that we’ll get to tools that are unimaginably powerful.
Is the way we are interacting with computers part of this paradigm shift?
— We’re entering an era where user experiences can change drastically. One of the core ways we will interact with a computer is just going to be a text box where we prompt it to do actions. These models will be connected to all of the tools that we use day to day, and they will be able to use Figma, Salesforce, and Slack on your behalf. You will put into the text box what you want to get done. Then your model can act on your behalf in these different systems. You can also develop user experiences that anticipate what you need and get it done before you even ask for it.
For example, making a reservation at a restaurant.
— Or you’re in Figma and it has already written the copy for you and designed the button to match the rest of the design. You can just press tab. Or you’ve started writing this interview with me. It already has access to the recording from the interview. It’s seen half of your text. It picks up on your tone of voice, and then it matches the level of writing you’ve done for the first half and populates the second half. And then it gives you four different versions so you can select the one you like the best.
Now you’re scaring me, haha. Talk about where you see the human role when technologies are becoming so effective at execution. The dystopian way to see it is that I’m just there to pick the right version of my article about you. Another way to see it is I now have more time to focus on something else. What’s your approach there?
— I think humans will be the stochastic element.
— We are going to add randomness to these language models. When you’ve written that first section, you’ve written with a specific tone of voice that you’ve learned based on your preferences and experiences. You’ve prompted the model with this, and now it’s picked up on that. There are a lot of ways to design products. There are a lot of ways to build companies. Setting the principles will be for humans. Acting on those principles will more effectively be done by these models. Also, what we’re going to see over the next 10 years is that the cost of intelligence will go towards zero.
What do you mean by that? Computer power? How do you define intelligence in that context?
— Basically, the cost of every task that a knowledge worker does today will go towards zero. This means that you could basically turn on indefinite intelligence for a specific area. If there’s an area you want to research, write about, or develop within a company, you’ll have intelligence on tap to pursue that.
Intelligence on tap. People get weirded out by this. They get scared.
Do you think you can teach AI human values? Can it develop its own values?
— It can. That’s a big research direction right now. What will end up happening is that you teach your model your values. But it’s going to be very hard. As a society, it’s very hard for us to agree on a set of values. There will be different models with different values, and we will adapt our own models to our own values.
Do you think there’s a possibility for an AGI [artificial general intelligence]?
— If you extrapolate the current progress over the next few decades, it’s inevitable.
How would you define AGI?
— I would define it as effectively being able to solve all tasks human intelligence can solve, in a way that’s indistinguishable from human intelligence. It’s not necessarily always practical to develop an AGI either. For a lot of use cases, it makes much more sense to develop specific models for those use cases. There will be one model we can interact with and ask to accomplish tasks on our behalf, and then it can go on and do those tasks for us. It could effectively solve any task that a knowledge worker could solve today.
Do you believe there’s a scenario in which this AGI or this model will actually become sentient or have human-like emotions?
— If it acts sentient and acts like it has human-level emotions, I think it will be very hard to distinguish, and we might face the same philosophical issues.
Do you believe in God?
— Not really. There doesn’t seem to be anything that’s unique about human intelligence.
”There doesn’t seem to be anything unique about human intelligence. There’s nothing that can’t be solved with a sufficient amount of computing.”
What do you mean by not unique?
— There’s nothing that can’t be solved with a sufficient amount of effective computing. There’s something on a molecular level that we don’t understand yet that we need to get to. You could probably simulate a lot of those effects in simpler computational models so that you could get to that point. With relatively simple model architectures and a lot of training data, we can basically solve AGI.
Is this something you are concerned about? It’s been a debate in the past few years with people like Max Tegmark, Sam Harris, and even Elon Musk. Many have expressed worry about AGI.
— I’m concerned short term about its ability to exploit us for propaganda, or to spread falsehoods. We’ve seen the recommender systems of YouTube taking people down rabbit holes in incredibly compelling ways. I’m concerned by those short-term implications. Longer term, I’m concerned about our ability as a society to adapt to these things. Historically, when certain professions got automated, we shifted those to new areas. If this shift ends up happening in five years’ time, we will struggle to adapt fast enough.
Do you believe there will be an automation problem in certain fields?
— One study claimed that 30 per cent of jobs would be completely automated, and that for another 60 per cent of jobs, around 30 per cent of the tasks would be automated. I think in the shorter term we’ll see more automation, but these models will empower us to do much more with less, which will most likely affect unemployment. That shift worries me. We know that whenever we face mass unemployment, it has severe negative cascade effects throughout society.
This brings us back to the notion of meta-problems. You talked about finding solutions to meta-problems and that education was one of the solutions. Are there other meta-problems that you are worried about?
— I think AGI is another meta-problem. If you solve AGI, you can use that to solve all other problems as well.
You’re very optimistic about the future of human-AI interaction.
— Yeah. I think we can get to a stage where humans are augmented; where we have a society where we don’t have a universal basic income but more of a model of universal basic services. Where we can use our AI assistant to do more fulfilling things. That’s an incredibly inspiring future. There are going to be a lot of challenges along the way. It’s very important when you have these profoundly impactful technologies that you are not naive about their potential negative implications — that we work to address those proactively before they’ve done any damage.