All right, we will get started here. Thanks for joining this Fireside Chat that we're hosting with Andrew Ng. I'm Stephen Sheldon, and I'm an equity analyst at William Blair, covering vertical-focused software and services, including the education technology space. Andrew needs very little introduction as one of the most influential thought leaders around artificial intelligence and machine learning. He wears a lot of hats, I think as most know, and some that stand out include stints at Baidu and Google, leading various AI initiatives. He then founded DeepLearning.AI back in 2017. He's the managing general partner of the AI Fund that was started in 2018, and earlier this year was appointed to Amazon's board of directors.
And most relevant for this conversation, he's the co-founder and current chairman of Coursera, while also continuing to serve as a professor at Stanford, something he's been doing for over 20 years now. So we really appreciate him spending time with us today as we dig into his views on the future of AI, its potential impact on the broader tech landscape, along with the potential impact on the education ecosystem and Coursera's positioning within it. So thank you so much for the time today, Andrew.
Good to see you, and thanks everyone for dialing in to watch this.
Just to start, given your position and the visibility you have into the broader AI ecosystem, can you talk some about the more transformative applications of generative AI that you're seeing today, especially those that might not be as widely known? What are you seeing out there?
So one thing about that question is it's almost hard to pick. One of the things about generative AI is that it's a general-purpose technology, kind of like electricity. If I were to ask you, what is electricity good for? It's hard to answer that because it's so pervasive, and generative AI feels like that as well. And I think my team at AI Fund is finding applications in tons of industry verticals. So we're using it to process tricky legal documents, to deal with complex government compliance. It has obviously many applications in education and in edtech, and we're using it to assist in medical diagnosis. But I think a lot of these applications of generative AI are a little bit nascent, so maybe they aren't as widely known. Things like ChatGPT and Gemini and Claude are widely known. Some things like AI-assisted coding are really taking off. That's more widely known.
But I'm seeing so many green shoots in so many different industry sectors that are sometimes B2B rather than B2C, and so the word also doesn't get out as quickly. But I'm seeing many applications being built pretty inexpensively, frankly, that are starting to deliver ROI in a very wide range of industries.
I guess, when you think about it, what is surprising you the most about the way generative AI is being leveraged today across industries? Are things generally moving faster or slower than you would have expected two, three years ago? How fast are things moving?
It feels like the hype has been more than I would have expected, and I think that the progress feels good and steady and healthy, although for some parts of it, maybe there was more hype than substance, and it will take a little bit of time for everything to catch up. Maybe I want to share what I think might be the one common misconception about AI, which is that people still think it's very expensive to build with AI. This is how I think of the AI stack: at the lowest level is the semiconductor layer. So clearly, Nvidia is doing well, AMD has strong offerings, and so on. On top of that are the clouds, the large clouds, and then on top of that are the foundation model companies like OpenAI, Anthropic, and maybe Meta longer term.
It turns out that whenever there's a new technology, these technology providers, the semiconductor companies, the clouds, the foundation model trainers, tend to grab a lot of headlines. And that's fine. Nothing wrong with that. And it's true that at some of these layers, training large AI models does cost billions of dollars, so it feels very expensive. But almost by definition, there's one layer of the stack that has to do even better, which is the application layer. And because others have spent billions of dollars training these AI models, what is not as widely appreciated is that the cost of experimentation, of building a new application on top of a model that someone else spent hundreds of millions or billions of dollars to train, is much more capital efficient than most people realize.
So my team at AI Fund budgets $55,000 to get to a working prototype, often to tackle a major industry application. And, I don't know, people probably don't know this, but I personally still code. I don't have a lot of time to code, but I still do. And when I build prototypes, it often costs like tens of dollars in API calls to get a working prototype. And this is causing both large corporations as well as startups to have an innovation engine that can be operated very differently, very unlike anything I've ever seen before, because you can prototype and test and, frankly, fail very inexpensively. So I do see corporations now even explicitly taking a strategy where they say, you know what, let's build 20 things. It'll take us 10 days to build each of them.
If 18 of them fail, but two turn out to be valuable, that's great. This is actually changing the way that corporate innovation as well as startup innovation is being carried out.
Yeah, that's fascinating. Maybe just in that context, with the pace of innovation, which industry do you think could be the most impacted as we think about the next five to 10 years due to AI's proliferation? Are there some that are likely to be massively reshaped as we think about that time frame?
Yeah, I'm going to give the unsatisfying answer: I think it's all of them. There's a research study by some friends at the University of Pennsylvania and some of their collaborators at OpenAI that analyzed different types of work and their exposure to AI automation or augmentation. It turns out that whereas earlier waves of automation tended to affect routine, repetitive work (think factory automation or industrial automation), generative AI is much better at knowledge work. So with this wave of transformation, it tends to be the higher-wage jobs that have more tasks amenable to AI automation. So in terms of the industry sectors I think will adopt generative AI faster, it's more the knowledge work industries this time compared to the industrial automation types of sectors.
And then I think also the sectors that are more digital will tend to adopt things faster because of the DNA, the IT personnel. One interesting thing that's happened over the last decade is that almost all industries have become much more digital than they used to be. So 10 years ago, when the last wave of AI technology, deep learning, predictive AI, supervised learning, started to take off, there was a huge gap between the more digital industries, like financial services and maybe healthcare, and the more traditional industries, like natural resource extraction. But it feels like in the last decade, everyone's become more digital, and so between the leaders and the laggards, there's just less of a gap in pace.
But I do think the more digital industries, things like finance, some parts of healthcare, will probably still adopt faster than some of the more traditional industries with a lot of moving atoms around, much more than moving bits around.
Got it. And maybe shifting this to a more education-focused conversation, to a certain degree. There are predictions that education could be one of the industries most impacted. I guess, how are you thinking about it? Do you agree with that?
I think many of us, and I do too, feel like there could be a meaningful, exciting transformation of education coming because of generative AI. I don't feel like I know exactly what it will be yet, this coming transformation. But folks at Coursera, and more broadly, are certainly thinking hard and experimenting and prototyping. Maybe just to share a couple of things that Coursera has done that I think were actually really nice. One is Coursera Coach, kind of like an AI TA that chats with learners and gives custom responses. It really helps learners. The data seems to show that it's really good for learners to have the AI TA answer questions and help them do the learning.
I think for a lot of learners, if you are studying something and you get stuck, maybe you're trying to write a piece of code and you don't know how to move forward, you're actually stuck potentially for a long time until you find a human expert to get you unstuck. With the AI TA or assistance, Coursera Coach, you can get unstuck right away. Coursera also built a product called Course Builder, which I think is actually really nicely done, where a lot of Coursera's enterprise customers are very interested in, if there's a three-hour course, but you don't want a three-hour course for employees, can we cut it down to a one-hour course and maybe have the AI contextualize it to make it more relevant for a specific business? So I think taking content and building custom courses seems like an exciting step forward as well.
That's proving to be very useful right now. Having said that, I think these are just early designs of what will be possible, things that will be even more possible and even more exciting in the future. Mustafa, Coursera's CPTO, chief product and technology officer, really does a nice job running tons of product innovations, so I'm cautiously optimistic there'll be a lot of new things to be invented. I think it'll be an exciting sector. Oh, one thing about Coursera's DNA: it's very loyal to learners. From day one, we've always said, let's put learners first, so part of the team DNA is to stay connected to learners and really think through what learners want. I'm not worried about the demand for learning. I think humans want to be empowered and learn stuff.
So the societal demand for learning has been large and seems like it'll only continue to grow. And then I think our ability to invent new things to serve learners, it feels like it'll be an exciting next few years.
Yeah, that's helpful. Maybe stepping back: as you think about Coursera's opportunity here, what do you think differentiates Coursera's generative AI capabilities and offerings? And are there particular elements of how you design the content, the platform, the data, etc., that you think are uniquely valuable?
Content and platform, I'm not sure. I think Coursera's had a good product for a long time. But because Coursera has a relatively large user base, this does give Coursera the ability to see what a lot of learners are doing, as well as get detailed data about what a lot of learners are doing on the platform. And then when Coursera has product innovations, the platform also makes it relatively easy to scale them up very quickly to a lot of learners. I think Coursera has a reputation for being high quality educationally. And, by the way, it was interesting. I feel like I've chatted a lot with Coursera employees and Coursera partners.
One of the things I'm genuinely very grateful for is the degree to which a lot of the Coursera team, and also a lot of Coursera's content partners that teach on the Coursera platform, have that loyalty to learners, where people really are in it for the mission. And then one nice thing about the Coursera team is that the team has spent lots of time thinking about generative AI. Internally, I want to say pretty much everyone is very facile with using large language models, and so there is a lot of product innovation and thinking. I often chat with the team about some of these things too, about not just what's on the website today, but also a bunch of ideas being experimented with, product innovation to serve learners better, and then to try to take this to scale quickly on Coursera's platform.
Got it. Makes sense. And that's all really helpful. And I think the focus on the learner experience has been very evident. Maybe shifting to the demand environment: how do you think the expansion of AI could impact demand for Coursera's solutions over the coming years? Maybe from the perspective of both individual consumers trying to upskill themselves and learn to leverage AI, and at the enterprise level on the B2B side.
Yeah. So I think there's certainly been very significant demand from both consumers and enterprises in learning about AI. And in fact, AI technologies evolve rapidly enough that most organizations, frankly, even universities, don't have the resources or the capacity to hire experts fast enough to deliver the training. So frankly, many universities, even very good universities, just don't have enough professors to teach cutting-edge AI, because the field is evolving so quickly and the number of experts is still relatively small. So Coursera works with hundreds of content partners, including many of the world's leading experts, really some of the best companies in AI inventing the future technology, to get the most cutting-edge, relevant, high-quality, and technically accurate content to teach people in many different walks of life. I find that exciting.
And then I think maybe at some point, there will be a job disruption from AI, with more people needing to be reskilled. Honestly, it's hard to say how fast that will come. It doesn't feel like it's happening quickly. But on the demand to learn AI, one of my friends, Stanford Professor Kurt Langlotz, said basically, AI won't replace people, but people that use AI will replace people that don't. And I think generative AI has reached the point where every knowledge worker can get a meaningful productivity boost right now by using it, but one of the challenges is that many people will need a little bit of training to use it safely and effectively. So there is definitely significant demand there. Maybe just to share some stories that excite me.
On my team at DeepLearning.AI and AI Fund, I see a lot more people, for example, learning to code and going really deep in their usage of AI. For example, I have a marketer whose job is marketing, not software engineering. He's learned to code. So as a marketer, he writes code to scrape and download web pages and get marketing insights. I also have an investment professional on my team that uses AI to help him code, to try to customize legal contracts more efficiently. So I've seen many people whose job role is not software engineering nonetheless get a lot of value from using AI in a very deep way, to the point of, frankly, writing code.
One of the things that, on a personal level, I would love to see happen, though it will require a big society-wide transformation, is this: the world will be much richer if we can get everyone to not just be a user of AI, but be a builder with AI, to kind of take control of the technology and really learn it in a deep way, maybe to the point of coding. I'm seeing in my life enough non-software engineers that have learned enough about AI to use it really deeply and drive business results that I'm hoping this will be a broad trend we can push out across society.
Yeah, that's really interesting. You kind of brought it up, but I think there have been some estimates that 20% to 30% of the workforce might need to upskill or reskill or just generally be retrained as jobs get displaced to a certain degree by AI. I think you said you don't know when that'll happen. Do you think that will happen? And do you think that is an opportunity for Coursera?
So yeah, I've seen some of those studies saying 20% to 30% of the workforce. I think it depends on what we think of as upskilling. Actually, honestly, I'll tell you my gut: I was surprised it's only 20% to 30%. Not because I think 20% to 30% of jobs will go away, but because I would love to see reskilling of closer to 100% of the workforce. Maybe when the internet started to work and we got web search engines and so on, kind of everyone, at least all knowledge workers pretty much, had to learn to use web search. So today, I can't imagine hiring a marketer, an investor, an HR professional, whatever, that doesn't know how to do web search. So everyone had to use web search.
Pretty much everyone had to learn to use mobile phones, at least in knowledge work. And so a few years from now, actually, even today, I can't imagine hiring a marketer anymore that doesn't know generative AI to some extent. But having said this, this is not about all the marketers being laid off and needing to be reskilled for something completely different. I think people in their current job roles will be much more effective if they know AI than if they don't. And then I think there are some sectors where jobs are going away. One of the most heavily disrupted sectors is call centers, contact centers. I think there'll be a few other sectors like that, so governments, citizens, and maybe corporations have a responsibility to make sure people are well taken care of.
I think so far, the layoffs, fortunately, have been much lower than maybe some of the hype has been. But then I think when someone needs to switch or start a new career, edtech, Coursera, I think, is a wonderful tool for that. But then at least for the immediate future, I think almost everyone should learn to use AI to make themselves better and more effective in their personal and professional lives. I feel like the amount of work there that remains to be done in education just seems massive.
Makes sense. Yeah, I'm just hoping equity research stays relevant here for quite a while. But I think you're right. I think everyone's going to need to focus on how to leverage AI and make themselves more productive. And maybe shifting to more of an enterprise focus: you talked about the cyclicality of company spending, and companies have been spending less in recent years on learning and development. I think Coursera and some peers have been talking about stabilization there. What do you think it'll take for business spending to pick back up? And are companies starting to think, "I've got this workforce. I need to make them more productive. How do I train my workforce to leverage AI in specific tasks?" Are you starting to see that pick up at all?
So there are definitely a lot of companies thinking about how to train up their workforces on AI. In fact, people used to talk about digital transformation; I hear the term AI transformation more and more. And training is often a key piece of AI transformation. Maybe just to share very quickly some things I see. One of the challenges of the entire edtech sector is that education, I think, has a profound long-term ROI. I mean, when we train someone up and provide them new skills, they're materially more productive and more efficient. One of the challenges is that I don't think we're as good at measuring that ROI as I wish we were. And so the challenge of quantifying what I think is just a massively profound ROI makes some of the investments a little bit harder to justify.
And then I think the other interesting thing, really for all of the education sector, not even just edtech, is this: when you change someone's life with education, how does the provider of the service capture even a small slice of the massive amount of value that is generated? Fortunately, the value created is so massive that it's working out okay for the most part. But I think this is an interesting structural problem for education.
Got it. And maybe I think one thing that I've heard come up is just with the expansion of generative AI and content authoring tools as well, that companies could start to insource more of their learning content by leveraging generative AI and therefore rely less on external vendors like Coursera or, I guess, even academic institutions. So is that a risk you think about? What are you seeing from that whole kind of insourcing concept?
Yeah, I don't ever want to say something is absolutely not a risk, but part of me feels like, boy, I wish it were that easy. Coursera, DeepLearning.AI, and I personally have spent a lot of time experimenting with AI for content generation, and boy, I wish it were that easy. It turns out that at least right now, with the current state of the technology, it's still extremely difficult to generate high-quality content. Generating low-quality content is easy, but generating technically accurate, thoughtful, time-efficient content is very difficult. Some of my friends have built avatars of me, so it turns out you could do the computer graphics to make something that looks like me. You could do the voice cloning to make something that sounds a lot like me, enough to fool my parents sometimes even.
But to make it say the right words, to convey technically accurate as well as insightful content, that's incredibly difficult. So again, the Coursera team, and also some folks at DeepLearning.AI, are definitely working hard on that. And I think one thing that would be exciting: today, because content is so expensive to generate, we tend to have to generate content that is then served to a large audience. I think that as the technology improves, it will become economical to serve smaller and smaller, more and more niche audiences. And the ultimate niche would be an audience of one, where we can generate custom content just to teach one person. But I wish the technology were that easy. I think that's a very high technical barrier. But with Coursera's existing very high-quality content assets and the team, I feel like, well, I don't know.
It'll be exciting to see what we can do, and in fact, I think Course Builder is one foray into this, where we use basically high-quality content, but help customers mix and match it to form their own custom courses, so that works because you start from a very high base of very high-quality content, but even that was not easy, so I think there's actually deep tech to be built in the next several years on this.
Yeah, and I think that whole concept of personalized learning. I think people have a lot of different definitions of what that is and what that could look like, and it still seems like we're far away, but we might be getting closer, it seems like, with some of the generative AI capabilities, where content could be specifically tailored to one person. Maybe.
Just to share a quick story. It took us about a week to make something that looks like me and sounds like me, and then we spent six months trying to write code to generate words the way that I tend to say them. And that was incredibly difficult. Even after six months, I don't think we've got it yet. So it's actually a technical barrier, I think.
And maybe shifting then, that's kind of a good segue. As you've created courses, because you've added a lot of content on Coursera and through DeepLearning.AI, how have you leveraged AI tools for new courses at both Coursera and DeepLearning.AI?
I feel like, well, I personally use GenAI as a brainstorming partner. It's also a pretty good copy editor. And maybe that's what I do and what some of my collaborators do. As for Coursera, what I'm actually seeing, and I talk to a lot of people, and the Coursera team talks to a lot of people, is teams using GenAI for different stages of the content creation pipeline. What I'm seeing is that there is no standardization. Tons of different teams are trying lots of different things. So boy, I'm not sure how much of this is public.
Maybe I'll just say, the entire process runs from looking at the requirements (I want to teach this), to how you come up with a syllabus and align it with what employers are looking for, to generating the items and the content, to coming up with a high-level curriculum, to the detailed script, to proofing the script. There is a lot of manual work throughout this entire pipeline. I'm seeing lots of different people, Coursera team members and some of Coursera's partners, experimenting with tools at pretty much all sorts of different points along that spectrum. One of the things I've actually been doing, in some Coursera forums, is trying to actively stay in contact with this community, which is experimenting and doing very exciting work. Again, I'm excited about this.
I wish I had a more definitive answer, but it's very diverse experimentation, and no convergence yet on the recipe for how to do this.
Yep. That makes a lot of sense. Maybe shifting then to strategic partnerships. What is Coursera doing from a strategic partnership perspective to enhance AI capabilities? How important are these partnerships to Coursera's overall kind of business objectives? What are you seeing there?
So I think it's important. I think Coursera, being an educational platform, is privileged to be able to talk to almost anyone, right? What I'm seeing at DeepLearning.AI and at Coursera is that leading generative AI companies often welcome help to teach people how to use their tools. And so as a neutral educational platform, we can talk to almost anyone. This often gives us partners building all the leading technologies and also gives us visibility into what exactly is happening at many of these businesses. Certainly, when I work with many of these companies, they will share openly with us which of the things they release are really important and which ones don't quite work yet. And then sometimes we hear from these companies details of unreleased products.
So frankly, I actually know what some companies are going to release in the next few months because it's already running on my cell phone under NDA, so I can't share details, obviously. But I think the Coursera team, being connected, often has a very good view into the next couple of steps of what might be coming in generative AI. And then with our learner base, our knowledge of learners, the data, current content assets, and partner relationships, this lets us brainstorm and try to innovate to create even better learning experiences. So I do think that there's product innovation work to be done, and I'm excited about that work. I can't guarantee that we'll come up with the best product.
But I think we're certainly in a very good position, with the key assets, to keep on experimenting and inventing those next few products.
Well, that kind of brings up an interesting point: you're very well connected, and you get a lot of visibility. How closely are you taking the visibility that you have into the broader AI ecosystem and trying to pull that into Coursera's product, kind of helping Coursera navigate the changes that are coming? Even if you can't share specifics on what's going on, can you take some of that insight and help steer where Coursera is heading?
Sure. Yeah. So with the support of Jeff and Ken and our executive team, I've been spending quite a lot of time with some of Coursera's executives and engineering and product teams. So I definitely am working hard and trying to do what I can. Having said that, I don't want to make it too much about me. I think Coursera really has many very good engineers. Actually, one thing about Coursera: even the non-technical people, the whole company, use GenAI. So, I don't know, I was chatting with Marni, our chief content officer, a few weeks ago. And she was actually telling me how some of the stuff that she sent me, which I thought was very good, she said, "Oh, yeah, I had GenAI help me write that stuff to send to you, Andrew." I go, "Okay, cool.
That's good to know." But I think the whole Coursera team is actually, I think, certainly within the edtech sector, way above average, in my opinion, in terms of the sophistication and familiarity with generative AI and the ability to then marry emerging AI technology with the domain expertise of education and training.
And maybe then, you kind of brought up the whole talent concept. From your perspective, what is Coursera doing to attract and retain top talent, especially when you think about AI talent and machine learning talent? What are you guys doing to attract talent there?
I think many people have joined Coursera for the mission. When I chat with the Coursera team, one thing that kind of warms my heart is how mission-oriented the team has always been and still remains to this day. Even today, when I look at a lot of society's challenges, it's not that education is the panacea, but when you wake up in the morning and think, "What do I want to work on today to really move the needle for the world?", there are multiple things one could do, and education is certainly very high on the list. One thing that really gratifies me is how much of Coursera's team is in it, has always been in it, and continues to be in it for the mission.
And then I think there's the excitement of the technical and product innovation. I feel like I know a whole bunch of people at Coursera that still go to work every day excited to do that work. And maybe just to share one other common misconception about AI. I know that we read in the headlines, "Oh, an AI engineer makes $5 million," or whatever, right? And the salaries and so on seem very high. What I'm seeing in the market is that the AI foundation model training layer is hyper-competitive, with a number of large players that all feel like they can't afford to lose in terms of training the very large AI models, while there is a relatively small cohort of people that are highly skilled in training these AI models.
And so in the market for this layer of the AI stack, the foundation model layer, the number of people who know how to train these models is relatively small, and the number of very large companies that all feel like they can't afford to lose is large. And so the salaries for engineers that know how to do that have gone through the roof. But it turns out that if you focus on the application layer, it's not that salaries are not high, but they aren't through the roof the way that they are for the foundation model training layer. And so I think Coursera, with a U.S. and global workforce, has tons of highly skilled engineers that do really innovative work.
That's helpful. It makes a ton of sense. I'm going to ask about education, and then maybe shift to the future in an AI world. There's been a lot of discussion recently about changes for the Department of Education under the new administration. You're an educator yourself, and obviously you're very connected in the broader education ecosystem. How are you thinking about those changes and the potential impact on Coursera, and on the broader edtech landscape?
Yeah. I think we definitely think a lot about that. And for the next four years, I think there's uncertainty in what exactly will happen in the regulatory landscape in education in the United States. I think the change of administration in the United States will probably affect global institutions of higher education less, because the U.S. Department of Education primarily affects the United States. But focusing just on the United States: the edtech sector has had many wonderful players and then a handful of players that, frankly, did not really serve learners in the right way. So I hope that in the future, our society is able to continue to manage the potential negative impact of bad players to keep the education ecosystem trusted and healthy.
And then some things I wonder about affect Coursera less directly, but affect universities. Universities in the United States depend a lot on federal dollars for funding, both tuition dollars at the undergrad level and research dollars, from NIH and NSF funding, at a lot of institutions. So depending on whether that flow of funding goes up or down, it will have a very material impact on the health of a lot of American universities. How that changes will matter. And then because of tensions between the U.S. and China, the flow of students from China to the U.S. has also become much more challenging. That was a meaningful source of revenue, and I think it was actually a win-win for the students and for the American universities.
But as that becomes more challenging, we're already seeing universities in the United States having to lay off faculty and run really tight budgets. So as the edtech sector continues to go through challenges, what will happen? One thing Coursera has been trying to do to help universities: there's lots of demand for job-relevant skills content, learning GenAI and so on, and frankly, many wonderful universities with wonderful faculty just do not have enough AI professors to teach the cutting-edge AI content. In fact, if you're teaching, let's say, a psychology major that's going to graduate and go be a marketer, do you have a professor that can teach that psychology major to use generative AI, to set them up for success in the marketing job?
That's one of the challenges. And as universities face more pressure to ensure high employability of their graduates, I think with the university tenure system and academic governance, it has been difficult for a lot of universities to move fast enough to bring in the new types of knowledge that are needed for the workforce of the future. So one thing Coursera has done, I would say very successfully in some geographies, and it'd be exciting to keep growing this in the United States as well, is to help augment existing on-campus, full-credit types of programs with additional knowledge that sets up students going through academic programs for career success. And by the way, when I think about higher ed and the transformation through GenAI, there are maybe three quick pillars. One is university operations, where there's a lot of efficiency to be gained.
Second is pedagogy, things like Coursera Coach, AI TAs, and so on; I think Coursera has good technology there. And then the final one, the hardest one, is the future of work. How do you train someone to be ready, not just for the job that's available right now, which is already hard enough, but for whatever job will be there four years from now when they graduate? I hope Coursera, with its assets and reach and signals, can play a meaningful role there.
Well, that hits close to home. I've got very young kids right now, and I'm kind of thankful that they're not approaching that decision yet, because I have no clue what I would tell them would be the most impactful course of study as they move into the higher education ecosystem. To your point, you don't necessarily know what skills are going to be the most relevant. There are probably some base-level skills that will always be relevant, but it's tough to predict. I think it's gotten way tougher to predict what types of roles will be in demand even three or four years out.
How old are your kids?
I've got a seven-year-old, a four-year-old, and one just turning one.
Oh, wow. Congrats. Can I make a suggestion?
Sure.
As they get older, have them learn to code. I know people have wondered whether, with GenAI, maybe no one needs to code anymore; you just tell a computer what you want. But what I'm seeing now, and I think this will remain true for a long time, is that people that understand coding will be able to do much more with computers than people that don't. It turns out that when you tell a computer what to do, maybe it works some fraction of the time, but there's an important fraction of the time where it just doesn't work yet, and if you have that deeper understanding of how a computer works, you can break through the limits of just writing English or whatever language.
So for people that know how to code, it's not that you need to write code all the time, but they have that deeper understanding of the computer. I think one of the most important skills for the future will be the ability to tell a computer exactly what you want and get it to do that for you. And one of the most powerful ways to do that, now and for the foreseeable future, is to really go deep, learn coding, and take control of the computer and of AI and make them work for you, no matter what job role you end up in.
I love that. I will take that to heart with my kids. Maybe now shifting a little more to the future. You were talking about the different layers of the tech stack. As we think about tech stacks in the future, what's your view on software's role? This has become a bigger debate. There's a narrative out there that software could become less important over time, potentially disintermediated by AI, especially on the workflow and application side. But at the same time, the data captured within software when it serves as the system of record is, at least right now, arguably becoming more important. What's your view on the role of software and how that could evolve over the next five to 10 years?
Boy, I think software is going to be important for a long time. I just don't see how software could go away; AI is software, and AI sits on top of software. But here's an exciting trend: AI is making software developers much more efficient. I know there's a lot of hype about this, and I tend to say don't buy into the hype, but the hype about AI making software developers more productive and more efficient really is true. And as the cost of software goes down, one of the things I'm excited about is empowering every individual to write small software programs. I think that will let software address a lot more applications in the long tail.
Until recently, software has been so expensive to write that a small number of highly paid software engineers wrote a small number of applications, and everyone used the same applications. This is why everyone uses a handful of web search engines, a handful of whatever. It was just completely uneconomical for, say, a mom-and-pop store to hire a software engineer to write code to customize the LCD panel display outside their window. You just don't do that. But as the cost of software development drops, maybe the operators at that mom-and-pop store can use AI to help them code, while still learning a little bit of coding themselves. Then, if a mom-and-pop store wants to build a highly custom LCD display, let's empower them to do it.
Speaking for myself, I think I'm a decent machine learning engineer, but to be candid, I'm a mediocre front-end engineer and so-so on the back-end. I find that with generative AI, I end up writing more programs myself, kind of silly parent stuff. My daughter is obsessed with bunnies, so over the Thanksgiving holiday I coded up something to take a picture of her and replace the background with tons of bunnies. With generative AI, I can now be much more productive coding up things like this for the enjoyment of one human, my daughter. It's a very narrow use case, not very economic.
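[Editor's note: for readers curious what a toy program like that might look like, here is a minimal sketch in Python using the Pillow imaging library. It is not the actual code from the conversation. It assumes the photo of the subject has already been cut out from its original background, so that the photo's alpha channel marks the subject (real segmentation tools such as the rembg library can produce such a cutout); the function name and the tiling approach are illustrative.]

```python
from PIL import Image


def bunny_background(photo_rgba: Image.Image, bunny: Image.Image) -> Image.Image:
    """Composite a cutout photo over a background tiled with bunnies.

    `photo_rgba` must be an RGBA image whose alpha channel marks the
    subject; `bunny` is a smaller RGBA image tiled to fill the canvas.
    """
    width, height = photo_rgba.size
    background = Image.new("RGBA", (width, height))
    bunny_w, bunny_h = bunny.size

    # Tile the bunny image across the whole canvas.
    for x in range(0, width, bunny_w):
        for y in range(0, height, bunny_h):
            background.paste(bunny, (x, y))

    # Wherever the photo's alpha is opaque, the subject covers the bunnies.
    return Image.alpha_composite(background, photo_rgba)
```

Using `Image.alpha_composite` rather than a plain paste respects partial transparency at the subject's edges, so the cutout blends into the bunny-filled background instead of showing a hard outline.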
But I think as the cost of software development goes down, there'll be a lot more of these very long-tail types of applications that now make sense to write custom software for. But software will be important. But as this development cost comes down because of AI, I think we'll actually see a lot more of it.
Got it. Yeah, that makes sense. I know we're coming up on time. Maybe the regulatory framework: how do you think about regulatory frameworks and how they might need to evolve to strike the right balance between innovation and protecting the broader public as we think about generative AI?
So I would love to see more governments invest more in innovation and go for the upside. I think the upside of AI will be tremendous. Many governments are thinking about protecting the public from AI harms, which is fine too; we should think about that as well. But what took me by surprise over the last year and a half or two years was the intensity of the lobbying effort, usually by some large companies, in the, frankly, false name of AI safety, to try to pass regulations that stifle open-source software. Open-source software is when someone writes software or builds an AI model and releases it free on the internet for anyone to use.
It turns out that if you spend hundreds of millions of dollars training an AI model, it's really annoying if someone releases a similar model for free, because it really degrades the value of your investment. So what started to happen a couple of years ago was intense lobbying efforts in the U.S., in Europe, and in other places around the world, which claimed that AI is so dangerous that we must put complex regulatory burdens on it. I think this was an attempt, among other things, to stifle open-source software. I say this was in the false name of safety because AI is a general-purpose technology. There are AI risks, but they're more at the application layer. Technology like AI can be put into many applications. To make an analogy, an electric motor is a general-purpose tool.
You could put it into a blender, into an electric vehicle, into a dialysis machine, or into a smart bomb. If you ask the electric motor maker to guarantee their motor will never be used for anything dangerous, it puts them in an impossible position, because they can't control how someone else uses it. But it does make sense to look not at the technology, the motor, but at the applications: demand that blender manufacturers make sure blenders are safe, and go to the electric vehicle makers and demand that EVs are safe. I think we see the same thing with AI. The AI model is a technology, and what I saw in some of the regulatory capture attempts was trying to make AI model makers guarantee that no one will ever use their models in an unsafe way. But that's the wrong layer at which to regulate.
If you want to put AI in a medical device, well, let's make sure your medical device is safe. If you want to put AI in a form of political advertising, well, let's set standards on what's okay for political advertising and what's not. So I think regulators, by regulating the application rather than the technology, would do a better job protecting consumers without stifling innovation. Over the last year, I think we've beaten back a lot of the worst regulatory proposals, but these seem to keep popping up. So we have to remain vigilant to really allow AI to have its upside while also thinking about the risky applications.
Oh, one of the most disgusting applications I've ever heard of is non-consensual deepfake porn, where, for example, there have been highly inappropriate images of even minors, really affecting the mental health of some girls that were victims of this. That's just disgusting, and I'm really glad that the Senate unanimously passed rules to create liability for that. So governments are doing work to get rid of the bad applications, but we also have to be vigilant not to stifle the technology innovation.
Yep. Got it. I have this down as a quick hitter, though I don't think it actually is one. How far away are we from artificial general intelligence? What are the biggest challenges to getting there? And should we as a society want to get there?
So AGI, artificial general intelligence, is most widely defined as AI that could do any intellectual task that a human can. When we have AGI, we should have self-driving cars, because humans can drive, so AGI should be able to do that too. For that definition of AGI, I think we're many decades away, maybe even longer. I hope we'll get there in our lifetimes. The biggest barrier to getting there is that we just don't know how to do it. There is no roadmap; we still need technical innovations to get to AGI. Having said that, there have been alternative definitions of AGI that really lower the bar. One alternative definition has been AI that could do most economically useful work, or something like that.
I was chatting with one of my economist friends, and he pointed out: "Hey, Andrew, by that definition, 100 years ago most of the United States worked in agriculture, and we've certainly automated most of agriculture. So if only we had defined AGI that way 100 years ago, we would have gotten there like 30 years ago." So depending on how you define AGI, anywhere from 30 years ago to maybe 50 years from now, or anything in between, would be reasonable. But for AI that can do really any intellectual task a human can, I think we're many decades away. I hope we'll get there, because one of the most expensive things in the world today is intelligence.
If you want to hire a highly skilled doctor to help with a medical condition, or, frankly, to send your kids to a great school, it's really expensive today, because it costs a lot to train a smart, wise doctor. If we can make intelligence cheap, then we'll be able to give everyone access to such wonderful services. Imagine if everyone today could hire an army of smart, well-trained, well-informed staff to solve all sorts of problems for them. I think that would be wonderful. Today, there are some very wealthy people that can hire that army of staff, but if everyone could hire that army of staff, there would be so much abundance, and it would be so democratizing. So I would love to get to AGI. I wish I knew how. But I think we'll still need multiple technical breakthroughs before we know how to get there.
Got it. Well, this is the last question, and we've covered a lot. This one's more for me. If you knew what you know now about AI and the state of the education ecosystem, would you have steered Coursera in any different direction? If you think back to when you co-founded it in 2012, would you do anything differently with the asset?
Gosh, with the benefit of hindsight, boy, I personally made so many mistakes. So many things I would do differently. But maybe one thing I want to say: an exciting opportunity looking forward is product innovation. I think we're in a very good position, with a large user base, the trust of learners, and the trust of excellent content partners, to take advantage of the new and emerging AI technology to invent, or work with partners to create, really brand-new learning experiences that we can then take to scale across a pretty wide platform. That's one thing I'm actually really excited about. I tend to be very impatient as a person, so part of me always wishes everything was already done. But I feel like there's a lot of upside in the product innovation to invent even better learning experiences.
Sounds good. We will end it there. Thank you so much, everyone, for joining, and a huge thanks to Andrew for taking time out of his busy schedule for us today. I would highly recommend Andrew's content on Coursera; I'm currently taking his Generative AI for Everyone course and making my way through it now. Hope everyone enjoys the rest of their Wednesday, and please reach out with any follow-up questions. Thanks, everyone.
Thanks, Steve.
Thanks, Andrew.