All right, thank you everyone in the room and on the webcast for joining us. My name is Matthew Cost from the Morgan Stanley US Internet team. I am thrilled today to be joined by Dan Sturman and Mike Guthrie, the CTO and CFO of Roblox. Thank you so much for being with us.
Thank you.
I mean, as a programming note, the way that we're going to do this is Dan is going to run through a presentation that should be visible, you know, both in the room and on the webcast, and then after that, we'll move to an open Q&A. Dan, please take it away.
Okay, great. Again, yeah, I'm Dan Sturman. I'm the Chief Technology Officer at Roblox. What I'm going to take you through today is the Roblox view of how and where the current trends and excitement around generative AI and Large Language Models are impacting us. As I do this, I've interspersed a fair amount of history to give context to where we're going, because I think it makes it much more understandable: where we're at, what we're doing today, and where we will go in the future. I'll touch a little bit on some key elements of Roblox's business. I'm assuming most folks here know what we do, but I'm going to highlight the elements I think are most material to where this technology can take us, just so that we're all on the same page.
I'm also going to touch a little bit on the history of AI and why we're having this discussion now, and what's going on that has kind of led to now being the time we're having all this excitement, as opposed to a few years from now or a few years ago. I'm going to give you some examples of things we are doing and where we intend to go. I have even one demo that's a little bit lengthy, but it's about three minutes. I'll show you folks. It's a very early concept, but it helps illustrate where things are happening. With that, I just want to talk a little bit about our creators, because this is the backstory to all of this. Roblox has always been focused on its creator community. It's the lifeblood of what we do.
All our content is created by the community. We don't build it ourselves. With that, we have always invested in making creators' lives easier, so they can create content better, faster, with as low a skill level as is possible and reasonable. We have here an animated screenshot out of Roblox Studio. You can see fairly rich 3D manipulation techniques for building. You can see the ship, you can see the terrain, you can see the volcano going. Not shown here is the entire coding environment that's set up for both individuals and teams to be able to build and code in the system. This is Roblox Studio. We give this away for free. There's no money we make off this, and we do that because giving great tools to our creators is what enables them to create great experiences.
We have a very large team that focuses on these tools, and it's not just Studio; we have very rich and powerful back ends that enable things that would normally be very hard to be done very easily. Again, all in the service of the idea that the easier we make things for creators, the more great content they'll produce, the more that will be attractive to users, and that drives Roblox's success. There's another element to this behind the scenes, which is our scale. As you're all aware, we have about 66 million daily active users across the globe in a number of languages. Backing all this is what we like to think of as a 3D interactive cloud.
It's a set of computing resources that our creators are leveraging, in many ways similar to the way you might leverage a public cloud provider, except it's very specialized for delivering these interactive scenarios. We are different from the way you might think of an AWS in that, yes, our core data centers look more like what you might find in an AWS or a GCP or a Microsoft Azure environment, but we also have a real investment on the Edge. Why? Because we're talking about 3D interactive content at 60 frames per second. That Edge presence is really important. Right now, this gets us to over 100,000 servers. It's actually quite a large footprint.
There's another layer in this infrastructure as well that I want to talk about, which goes back to what I was saying before. We have APIs and systems that make it far, far easier to scale experiences. In Roblox, you are never worried about something that looks like a database. You're never worried about, heaven forbid, a container or how do I manage compute, how do I scale up a service? Roblox is doing all of that for you. Again, this is all in the spirit of how do we take friction away from our creators? All the creator knows, I've written this cool thing, and I press the Publish button, and next thing you know, we've scaled it to 1 million users for them, right, in real time. Roblox also isn't new to AI.
AI has been at the core of a lot of what we've done. I just want to call out a few examples that are actually fairly recent improvements we made to the platform. The first is auto translation. This is something that launched probably a year and a half ago. Translation is very important on the platform. If you are a new creator to the platform, or even an experienced one, most of our creators are not, in order to get global reach, going to go hire expensive translation teams to take their experience that was playing well in the United States and United Kingdom and Canada and put it in Japanese, for example. We do all the translation for them, and it's all automated.
Up until a few years ago, we were using publicly available, best in the industry translation tools, and we realized this just wasn't cutting it. It wasn't cutting it, not because the AI behind them wasn't good, it was because those tools are trained to do things like translate documents or translate emails, or translate search results or web pages. That is not what a Roblox experience looks like. This is a case where we got our own AI models. We used our own data set, which was all the experiences that had been translated and in many cases, been touched up by their creators or the community around those creations.
We use that to build a model, and we saw a dramatic improvement in what auto translation started to look like, which directly reflected in our growth numbers in a lot of these countries. We saw immediate return. If you only know Japanese and you're playing in Japan, you know, an experience that doesn't seem like the translations are all wonky is a lot more appealing. That was one example of where relatively recently, we've really invested in AI, and it was one of the first times we built an AI model, like, from scratch. A big area for us is around trust and safety, where AI has had a long history and is getting increasingly sophisticated. Text filtering, again, a few years ago, probably more like four years ago, we used to rely totally on industry-standard tools.
We realized that by building our own text filtering tools, we'd be able to use state-of-the-art AI techniques. We're able to look at the context around text filtering as opposed to individual words, which is where most of the tools in the industry have existed. It makes text filtering both safer and more precise: fewer false positives and also fewer false negatives. I mean, that's a win-win. Normally, you're trading those two off. The next item on the slide, content moderation: we have millions and millions of 3D models submitted to the system, many, many each day. That can't be something where you just rely on humans to see if something's okay. We have dramatically improved our systems around detecting a bad object; in some cases, the computer does a better job than humans.
People who are trying to subvert our moderation systems will do things like crank the alpha in an image way low, so you don't see the offensive symbol on a shirt, for example, until it ends up in a creation, and then with a simple tweak they take that alpha back up. Alpha is essentially a transparency parameter on an image. A computer doesn't care about the alpha; it just gets the bits and can learn about these things. We do a huge amount now of automatic moderation of 3D content. One of the trickiest things we've always been working on is what we call inappropriate or suboptimal experiences: people who build experiences around violence, or around gaming in a way that's not allowed.
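To make the alpha point concrete, here is a minimal, illustrative Python sketch (an assumption-laden toy, not Roblox's actual moderation pipeline) of why that trick does not fool an automated system: a classifier can simply be shown views of the texture with the alpha channel flattened or stripped before scoring.

```python
# Illustrative sketch only (not Roblox's actual moderation pipeline): the point is
# that a classifier can simply ignore or re-normalize the alpha channel, so an
# offensive decal hidden at alpha ~0.01 looks the same to the model as one at 1.0.
from PIL import Image
import numpy as np

def alpha_insensitive_views(path: str):
    """Return image views a moderation model could score, regardless of alpha tricks."""
    rgba = np.asarray(Image.open(path).convert("RGBA"), dtype=np.float32) / 255.0
    rgb, alpha = rgba[..., :3], rgba[..., 3:]

    # View 1: what a human sees in-game, the decal composited over a neutral gray shirt.
    background = np.full_like(rgb, 0.5)
    as_rendered = alpha * rgb + (1.0 - alpha) * background

    # View 2: raw color bits with alpha discarded entirely. Cranking alpha down
    # does nothing here, because the RGB payload is untouched.
    alpha_stripped = rgb

    # View 3: force every non-empty pixel to full opacity before compositing.
    boosted_alpha = (alpha > 0.0).astype(np.float32)
    alpha_boosted = boosted_alpha * rgb + (1.0 - boosted_alpha) * background

    return as_rendered, alpha_stripped, alpha_boosted

# A real system would feed each view to an image classifier and flag the asset
# if any view scores as policy-violating; the classifier itself is out of scope here.
```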
These creators are intentionally trying to subvert our rules and launch something on Roblox that should not be allowed. They can be tricky to catch, because a clever individual who's trying to be subversive can actually write code that says: if you're on my whitelist, you see one world; if you're not, you see a completely different world. How do we detect those? We've really upped our ML around that. We launched that at the end of last year, and it pretty much brought the number of inappropriate experiences we find on the platform down to zero. They've gotten a little bit smarter, though. There's a few that are popping through, but we just adjust our models, and those go away quickly.
I'm not going to say how we did that, because who knows who will hear this presentation, but it's really a use of much smarter signals and much smarter machine learning techniques to understand holistically what's going on with an experience. We've been doing AI for a while, and we've had an increasing investment over the past three years, I'd say, to really aim to be top of our game in the AI space. Now I come to generative AI and why it's important for Roblox. If there's only one slide to take away from this presentation, it would be this one, because this is the core of it. The rest is just details. If you look at what it means to create on Roblox, you kind of have to be able to do two things.
You have to be able to code, and you have to generally be able to obtain or create artwork in some way, because these are 3D graphical worlds with complex behavior. The behaviors are code, and the beauty of the world that you're in is the artwork. The reality is, as folks approach Roblox, it can be a lot of work to get good in both of these domains. In fact, most of our creators tend to approach the platform with a strength in one place or another. I'll use myself as an example. I'm the CTO of Roblox. I'm not bad at writing code. Right? I can kind of do that. My engineers might beg to differ because I spend a lot of time on management, but I'm not bad at writing code. I can certainly code up a Roblox script in Lua.
Now, I'm going to confess something here. I hope, Mike, this isn't material information, but the lowest academic grade I've ever gotten in my life was in a middle school art class. The class that everyone else can sleep through and get an A, I nearly failed, and I am that bad an artist, right? Approaching Roblox, when I get down to, Hey, I want to make it look pretty, I need to find a partner. I have no way out. Either I can use our creator marketplace and go find models other people have built, or I have to find someone with some artistic ability to work with me on this.
Now, I might be an extreme case, very strong in one and not the other, but what we think about with generative AI is that it's an opportunity to drop the barriers on both of these, where skill starts to get out of the way. What we want is for the genius idea someone has in their head to be very easy to bring to life as a 3D interactive experience. That's the end goal. There should be almost no skill needed at the end of the day, just the genius thought, and that's what should determine what a successful Roblox experience is. I'll show you some examples where we're tackling the art side and where we're tackling the coding side.
In both cases, we think we can bring our creators along in a way where it's easier and easier for them to be on the platform, and therefore broaden who is a creator, down to the point where a big initiative at Roblox is a drive towards every user being a creator. We move away from creators and users as two distinct pools, to where just using the platform is in and of itself some degree of creation, down to having creation experiences: experiences where you can build roller coasters, you can build haunted houses, you can build whatever you feel like building from within the experience. You're not having to drop into Roblox Studio to build these things.
Before I get into a few specific examples, I just want to now give a little bit of context, and I hope everyone can read the slide. The text is a little bit small. How did we get here? Why are we having this discussion now? There's two main technical timelines that I'm just going to call out. One is what I'm calling the theory timeline. It's a theory, really, of deep learning, of neural networks, of AI, that's had a lot of advances, and I'm going to go back to the 1960s. We could go back actually farther on this one to, like, the 1940s.
There's a systems timeline, which may be more familiar to all of you, which is just the rapid advancement of hardware capability, and how these two have come together in some pretty interesting ways relatively recently brings us to today. If you look at the theory of AI, 1965 is a good starting point. That's when the first deep learning models were talked about. All deep learning means is that rather than having a single layer of computational neurons, you're stacking them. You're having one feed into another. You're having multiple layers in your neural network. That's what deep means. It sounds super sci-fi, it sounds super sophisticated, but all deep really means is that you're stacking layers of these together, and that was first proposed pretty much in 1965, a fairly long time ago.
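As a tiny illustration of that point (purely a toy sketch, nothing from the talk), here is what "stacking layers" looks like in code: each layer is just a matrix multiply plus a nonlinearity, and "deep" only means composing several of them. The weights here are random; training, via backpropagation, is what would actually set them.

```python
# Minimal sketch of the "deep just means stacked layers" point: each layer is a
# matrix multiply plus a nonlinearity, and depth is how many of them you compose.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    return np.maximum(0.0, x @ w + b)   # one "layer": matrix math plus a ReLU

# Three stacked layers mapping a 4-dimensional input to a single output.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=(1, 4))
deep_output = layer(layer(layer(x, w1, b1), w2, b2), w3, b3)
print(deep_output)
```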
From there, the history of neural nets is kind of fraught with a little bit of trouble. There were two what are called AI winters. One from about the 1970s up into the mid-1980s, where folks said, Ah, this tech is going nowhere. Then they did it again from kind of the late 1980s up until the late 1990s, where, Ah, this tech is dead, and people explored other things. There were some stalwart researchers who kept going. We got techniques like backpropagation, how you train and set the weights on these neural networks. Different models evolved. Basically, the late 1990s, things start to get really exciting. People started to have some breakthroughs.
We started to see some of the things that we originally saw around, you know, identifying cats in YouTube videos and stuff like that, which was one of the first real exciting breakthroughs of neural nets coming back into play: manipulation of images, understanding of images, categorization of images, recommendation systems, which have pervaded our lives now, from Netflix to search results to whatever, all the way to things like diffusion models, like DALL-E. Diffusion models were the first models where, really, something was being created from what seemed like nothing. You would type in a text prompt, and you would get a cool image of, you know, a cat with a bazooka or something like that, out of nowhere. Meanwhile, we have the systems timeline. Going back to 1964, you have multiprocessor supercomputers, which Cray pioneered.
They also pioneered the idea of parallel data path processors in the 1980s. This kept growing. In the 1990s and 2000s, we started saying, hey, you don't need big, expensive machines like Crays anymore; we can start using commodity servers. Then, around 2004, I think it was really Google who pioneered the idea of really large scale-out, hyperscaler data centers with massive amounts of compute capacity and great networks, but where none of the individual computers were particularly special, right? That started to bring to bear this idea that we can throw a lot of computation at problems. Google originally did it to compute better search results. The amount of computation available for a particular sort of problem starts to get really, really big at this point.
We have the introduction of GPU hardware and the observation that, hey, GPUs, graphics processing units, aren't just for pictures. What's a GPU doing? It's doing matrix math really, really fast. What is a neural net? Oh, it's just matrix math, and the faster you can do it, the better you can train. So GPUs started being deployed into AI systems. All this kind of comes to a head in 2016 with Google Translate, where for the first time, deep neural networks outperformed all the prior art in natural language processing. They did it in a way that was so much simpler. There was no understanding of what the parts of the sentence were, what sort of vocabulary this was, what the phrases were. There was a huge amount of work that went into NLP before this time, and they just kind of took some neural networks.
They trained them on some sample data. It was incredibly simple and outperformed everything we'd seen in translation up to that point. That continued to advance into 2017. Let me add: the reason they were able to do that was taking the AI, but also taking the hyperscaler data centers and the compute capacity they had, and just throwing a lot of compute and a massive data set at the problem, and that was different. We can now deal with these massive data sets, and we found that bigger neural networks with big data sets give you really interesting results. They took that another step out of Google Research in 2017 with an ML model called a transformer. What was unique about transformers is they weren't actually architecturally better than what came before them on a case-by-case basis, but they scaled more easily.
They were built to actually be maybe a little bit worse, but the ability to throw the hyperscaler data centers at them, to scale them, was easier. They said, that's a trade-off that's worth it for us. It soon became worth it for everyone as this sort of compute became more available. You move forward, and you start to see things like GPT and Stable Diffusion, and the next thing you know, we're talking about ChatGPT, which is really just a combination of massive training sets, a lot of hardware in the training, and incredible results. That's what gets us to where we are today. You kind of see how these things come together and why we're at this moment, which I think is pretty special.
It's not rare in computing that all of a sudden we hit a point where computation lets us do things we never dreamt of doing before. In this case, it was advancement in both the theory and the computation in parallel. With that history lesson, let me move forward here. Where are we going with this at Roblox? I kind of touched on this before, but we're really imagining a world where anyone can create anything, things they never would have had the skill to build on their own before. What we have here is just a mock.
This is not a real demo, but I'll show you a demo that isn't too far from this, where someone would type in, Hey, I want a scene with a forest, a river, and a large rock in the middle. I think this goes to an extreme at some point. You take, you know, the setup chapters of your favorite piece of fiction, and then we're not that far away from being able to build a world around that. The example I like to use is the first chapter of The Hobbit, and you build the Shire, right? Before, that would have taken many, many person-hours of work, detail by artist work, and so on.
I expect that in moments when you want a professional result, your artists and coders will come in as a cleanup crew, you know, to refine things and customize them, put that unique twist on it. I don't think that's going away, but it's a massive accelerator. We already see this in the Roblox community, with creators using generative tools even just to give them inspiration, sometimes to bring assets along, but they almost always touch them up to put their own style on it. This is where we're headed, and we're not that far away from it. I mentioned we've been investing pretty heavily from a talent and technical capability point of view in AI. Even in generative AI, we've had a few published novel results. These are both efforts that we were part of.
We didn't do it by ourselves. We were part of a consortium, a group of authors working on this. I want to talk about StarCoder. StarCoder is a state-of-the-art open source LLM. The key here is open source. It was done as part of the BigCode initiative, which was a partnership between industry and academia. Some of the key academics on that are now working in Roblox Research. StarCoder was trained primarily on code, on coding examples, and a large part of the corpus was Roblox Lua. If you've all heard of GitHub Copilot, this is kind of driving towards an open-source version of that. I'll show you a demo of using this in just a little bit. Also, ControlNet.
This gets a little bit hard to explain technically. You can view ControlNet as the idea of using a small neural network to better control or fine-tune a larger neural network while it's running. You have these generative models, these diffusion models like DALL-E, that can create all sorts of images. Often, you want to control them in some way, whether it's the type of content you want to guide them toward, the style, possibly even the appropriateness. ControlNet is some work that Roblox Research recently published with some others on how you build a smaller network that's far easier to train and that can give you that control over the larger network. You can build these general-purpose networks and then fine-tune them without a massive retraining exercise.
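For readers who want the gist in code, here is a hedged, conceptual PyTorch sketch of that idea, with made-up module shapes rather than the published ControlNet architecture: freeze the large pretrained block, train a small side branch on the conditioning signal, and merge it back through a zero-initialized layer so the big model's behavior is unchanged until the small branch learns something.

```python
# Conceptual sketch of the ControlNet idea (shapes and modules are hypothetical,
# not the published architecture): freeze the big pretrained block, train a small
# side network on the conditioning signal, and merge it back through a
# zero-initialized layer so that, before any training, behavior is unchanged.
import torch
import torch.nn as nn

class ControlledBlock(nn.Module):
    def __init__(self, frozen_block: nn.Module, channels: int):
        super().__init__()
        self.frozen_block = frozen_block
        for p in self.frozen_block.parameters():
            p.requires_grad = False          # the large model is never retrained

        # Small trainable branch that looks at the control signal (e.g. an edge map).
        self.control_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.SiLU(),
        )
        # Zero-initialized projection: its output starts at exactly 0, so the
        # frozen model's behavior is preserved until the branch learns something.
        self.zero_proj = nn.Conv2d(channels, channels, 1)
        nn.init.zeros_(self.zero_proj.weight)
        nn.init.zeros_(self.zero_proj.bias)

    def forward(self, x, control):
        return self.frozen_block(x) + self.zero_proj(self.control_branch(control))

# Usage sketch: only the small branch's parameters go to the optimizer.
frozen = nn.Conv2d(8, 8, 3, padding=1)           # stand-in for a pretrained block
block = ControlledBlock(frozen, channels=8)
out = block(torch.randn(1, 8, 16, 16), torch.randn(1, 8, 16, 16))
```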
We've also been innovating on infrastructure. This is important because there are a lot of powerful things you can do with generative AI and Large Language Models, but they can also be computationally very expensive. It's important to be advancing both at the same time. One advantage we have at Roblox is we have control over our entire stack. We can use some third-party cloud technologies if we want to, but most of our computation is done in our own cloud. That could be in our core data centers; as I mentioned, those look pretty traditional. We also have a lot of edge capacity, which is very close to the user and carries a fair amount of compute.
Lastly, every time a user opens up the Roblox app, they're lending us a little bit of compute capacity at the time, which we can use for all sorts of things. It's primarily to run a game server, but we also have some examples of running a small AI model that's appropriate for, let's say, a mobile device, but does some things local to that user that benefit from latency and privacy advantages, and so on. We also have a lot of very unique data. We, of course, have the social graph, like any sort of social network might have, of who knows whom on Roblox. We've also recently launched Voice in the United States for older users. We're getting a lot of good voice data that helps us make Voice better.
That allows us to work on moderation of voice, possibly automatic translation, and so on. 3D objects: we have a huge library of 3D objects created by creators that we can use as we think about how we optimize AI models. And of course, code, like the example I gave you with StarCoder. We have a lot of code in the system, and what's interesting about our code repository is that all that code is in a single programming language oriented at building interactive 3D experiences. It's a very large collection of very single-purpose code, which is exactly the sort of thing we need to learn from if we want to make creators successful. Here's a little demo of facial animation on Roblox. This is something we've launched with Voice.
It's right now in limited launch because we still have some moderation tools to build around some parts of Voice. What you see here is we're using a camera, and we're capturing facial expression. There is a small model running on the machine that translates that into what we call bone movement on Roblox. In other words, we have what we call a dynamic head, such as the head you see here, and by the end of the year, every avatar head in Roblox will be a dynamic head. We have code to say, move the eyebrows. We're actually, in a sense, moving the muscles within an eyebrow, within the smile. There are many of these in a face.
It's not that we're just plastering different smiles on you; we're actually looking at the facial expression and translating it to fairly fine-grained detail. We can transmit that across Roblox, so that if you're in an experience with someone else, they can see your facial expressions, and, most relevant to where we're using this today, your avatar lip-syncs with your Voice stream. When you're talking on Voice, you don't just hear voice coming from someone spatially in a 3D environment, you can see who's talking. It really does add to the realism, so to speak, or the immersiveness of it. We run this model primarily on the local device you're using, your phone, for example. Of course, we train it in our cloud. We're using that full range of options across the board.
As these models get more expansive, a good example will be the voice moderation model. We expect a lot from the voice moderation model, and by the way, this is new. No one's really ever done real-time voice moderation before. We have our first running version of it. It will probably run in a few places. At the limit, it'll probably run on your device to protect you against the stuff you really don't want to hear. It'll probably run in our cloud on the voice streams. It may even run on the source device at some point, doing some prep work. We're careful about that in moderation. Generally, the source device is not the safest place to be running something you want to use to prevent that person from doing bad activity.
By and large, we can use the full gamut of our extended cloud to run these models. We're building infrastructure that allows our data scientists and ML engineers to build and deploy in the appropriate place without a lot of rework, in order to make this work well. Next slide. Getting to the art side, which, as you heard, is near and dear to my heart. About a year ago, we launched these new materials on Roblox. Traditionally, Roblox materials looked like probably what you would expect them to: basically a bitmap of color you could, in a sense, paint on any object. There have been a lot of advances in the world of graphics, and in particular this idea of what's called a PBR material, a physically based rendering material.
The idea behind a PBR material is that there's not just a layer of color; there are additional layers that describe how light bounces off the object. You can go from having a gray sword, for example, to a shiny Excalibur that glints in the sunlight based on where the sun is located in the world. This is a great improvement in the sort of quality you can have in a given world or experience, but they're hard to make. Building all these layers requires understanding the properties of how graphics works and how light might reflect, and so on. Some people could do it, but most of our creators, to take advantage of PBR materials, had to resort to googling PBR materials, finding examples that were open source on the web, and downloading them into Roblox.
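To illustrate what those extra layers buy you, here is a drastically simplified Python shading sketch (a toy model, not Roblox's renderer): the same base color produces either a dull surface or a tight glint depending on the metalness and roughness values, and in a real PBR material each of those parameters is a full texture layer rather than a single number.

```python
# Drastically simplified shading sketch (not Roblox's renderer): the extra PBR
# layers beyond plain color change how much and how tightly light reflects, which
# is what turns a flat gray sword into a glinting one.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(base_color, metalness, roughness, normal, light_dir, view_dir):
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    h = normalize(l + v)                               # half vector

    diffuse = base_color * max(np.dot(n, l), 0.0) * (1.0 - metalness)

    # Rougher surfaces get a broader, dimmer highlight; smoother ones get a tight glint.
    shininess = 2.0 / max(roughness ** 2, 1e-4)
    spec_color = base_color * metalness + (1.0 - metalness) * 0.04
    specular = spec_color * max(np.dot(n, h), 0.0) ** shininess

    return diffuse + specular

gray = np.array([0.5, 0.5, 0.5])
light = np.array([1.0, 1.0, 0.5])
view = np.array([0.0, 0.0, 1.0])
n = np.array([0.0, 0.0, 1.0])

dull_sword = shade(gray, metalness=0.0, roughness=0.9, normal=n, light_dir=light, view_dir=view)
shiny_sword = shade(gray, metalness=1.0, roughness=0.1, normal=n, light_dir=light, view_dir=view)
print(dull_sword, shiny_sword)
```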
Downloading open examples was a good first step, but in the early part of this year, we deployed a generative model that allows you to type in text. For example, the example I've done is hot, bubbly lava, and it will create a bunch of example materials that look like hot, bubbly lava. You can bring them into our material manager, and I can put them on whatever surface I want. We've taken something I never would have had the skill to do, building one of these materials, and now I can go build any sort of interesting material I want. I can look at a bunch of different versions. I can iterate through them. It has just dramatically dropped the skill level necessary to make a very realistic-looking Roblox experience.
We're doing the same for code. I apologize that this is a little bit hard to see when we get to the code example; I'll just walk you through what's going on. You can see this character was walking into these bubbles. They were changing color. They were popping. That gets coded here. What you can't quite see, in gray, is just a comment that says exactly that, in English. It generates the code for this automatically. That can then get loaded directly into the spheres you saw. You'll see it here. You just walk up to it. It changes red. It disappears, right? I mean, that's a simple behavior. The idea is to move towards this being what it takes to code. The benefit is, you always have the code when you're done.
We haven't dumbed down the coding language or anything like that, because we find that as experiences scale, people want to take and modify and tweak and go a level beyond where the AI might take them. It's all there. It's also great, by the way, because for a lot of our creators this is their first coding experience; it's a great way to learn, right? You type what you want, and you see how the system does it, and you can learn from that as you go, which makes you a stronger creator as you go down the path. This is something we've launched. We launched it with some open models, and then we trained them on our code. We're now bringing StarCoder into this, so it's kind of a wholly owned within Roblox sort of solution.
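As a rough sketch of what comment-to-code generation looks like with an open model, here is a minimal Python example using the Hugging Face transformers library; the checkpoint id, prompt format, and decoding settings are assumptions for illustration, not how the Studio assistant is actually wired up.

```python
# Minimal sketch of comment-to-code generation with an open code model.
# The model id, prompt format, and decoding settings here are assumptions for
# illustration, not how Roblox's Studio assistant is actually built.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="bigcode/starcoderbase",   # assumed checkpoint id; any code LLM works
)

# The "program" is just an English comment in Luau style; the model continues it.
prompt = """-- Roblox Luau
-- When a player touches this part, turn it red, wait half a second, then destroy it.
local part = script.Parent
"""

completion = generator(
    prompt,
    max_new_tokens=120,
    do_sample=False,
    return_full_text=False,
)[0]["generated_text"]

print(prompt + completion)
```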
We're going to roll a demo in a second, not quite yet. Let me first stress, this is really, really early. This is something one of my engineers hacked up over a weekend, but it shows the power of these generative techniques. It's a little bit long, bear with me; it's about three and a half minutes. It was so impressive that, knowing I was coming here to give this talk, I wanted you to be able to see it, because I think it does a good job of painting the future. I don't think the folks on the webcast will hear anything I voice over, so I'll try to minimize it. I didn't realize when we put this together that the webcast wouldn't be able to hear me.
There are one or two places where I clarify what's going on, but you'll get the general gist. Let's roll the demo. Okay. Look, it's still very early. Obviously, there's no code injected into that, but as I showed you before, that's not a far-away thing to start doing. You saw how it's voice-controlled. You can see how, in the not-so-distant future, you wouldn't even need something like Studio to do that. That could be somebody who's just in a Roblox experience, and anyone could be creating. I don't know about you, but it would have taken me a long time to build that out compared to what was done right there generatively, right? This is pretty exciting from my point of view on where the future can go.
The fact that we can even put this demo together today, there's a lot of sophistication there. You saw how the AI was almost coming back and saying: wait, is this what you mean? Is this what you're interested in? It's helping the creator think about things they may not even be thinking about on their own, because odds are the creator is not an expert on Roman history when they approach this sort of problem. They're just trying to make something that looks cool, right? I think there's a lot of opportunity here. Just to sum up... there we go. Okay, great. We really think this is an opportunity to accelerate creation with all these techniques, and that's really the goal.
The shorter we can make the path from I have a great idea to it's realized as a 3D experience, the better. We think there's an opportunity where we no longer have, you know, millions of creators and then tens or hundreds of millions of users; they're all one pool, so it's coming together. People are just having awesome ideas, possibly just for a small audience, maybe their friends, their coworkers, an advertising campaign they want to do, and they're able to bring them to life without going, oh, now I've got to learn how to code, or maybe I have to go out to a firm or something like that. It's just at everyone's fingertips, and we call this the democratization of creation.
The last thing I want to touch on, because this is important for Roblox: we always talk about respecting the community. We're starting this journey with a big focus on being thoughtful and ethical. For example, all the code samples we used for our training were already in the public domain. Everything we're doing from a creation point of view involves opt-in by our creators. We have found our creator community is really excited about this. Opt-in rates are very, very high, so we're not losing a lot there. And folks who think they have very special IP don't have to worry about it appearing in someone else's world because it got translated through a machine learning system at some point. I think the potential here is huge for Roblox.
We're incredibly excited and motivated by it. I hope what I talked through explained where we were coming from, how we got here, and where we're going with this technology. With that, thank you.
Great. Dan, thank you so much for that. There's a lot to go through. Before I give the people in the room a chance to ask some questions, I do want to follow up on some of the stuff we just saw. You addressed a lot of different tools and capabilities, some of which are farther in the future and some of which are nearer term. Maybe break down for us: what are the near-term focuses for the next year or two that we're going to see rolling out onto Roblox soon, and what is three years out or longer among the things that we just saw?
I think what you'll see in the next year or so is we're going to continue to push on in-Studio creation. We have other tools coming, maybe a text prompt for some early 3D images. A big one will be avatar creation. We are opening up the whole ecosystem so creators can come and bring any avatar body, any head. We can make it a dynamic head kind of automatically. If you bring a mesh of a body, we want to be able to do what's called auto-rigging: turn it from a 3D image into something that has working arms and legs and that you can put items on.
That's going to be a focus for this year, just trying to allow that level of creation on the platform. Then there's work you won't see, but it will impact you if you're on the platform: there's a big push on some of the things around further automating trust and safety, which is not generative AI, but it uses Large Language Models. It's the same technology, different applications. We're not generating things from a trust and safety point of view, but the power of these models allows us to understand context in a way that can make the entire experience much safer for everyone, and also reduce false positives, which are frustrating to creators when they get flagged for something that's totally legitimate. Going forward, though, I think the sky's the limit.
I talked about being able to drop The Hobbit into an experience and create the world around it. I think you'll see more and more creation move to users who will have these tools directly in their hands. They'll be creating experiences within experiences, things like being able to morph your avatar. Let's say I have a Wild West game, and I have an avatar that I've dressed up to look kind of punk. You don't want a punk character showing up in a Wild West game, but can we keep the spirit of the avatar I built and translate that into something that works with the creator's experience? I think that's an exciting area to go for all of this.
I could go on and on, but I think that's roughly the timeline. I think it's going to be, in many ways, incremental. It's not that I think next year will be dramatically different from this year. We're going to gain experience with this. We have a lot of experience with our data sets. Those data sets are going to get richer, and we'll just be able to crank out more and more interesting capabilities, mostly in making things safer and accelerating creators.
One follow-up. I mean, there's a lot of different functionalities and use cases that you touched on there. I would think it would, you know, take a lot of different tooling to accomplish that. How much of this is something that, you know, your intention as CTO is to build it from the ground up at Roblox, as opposed to partnering with other models that are out there?
Yeah. I think a few months ago, even just six months ago, I would have said, oh, it looks very hard to get to building these Large Language Models on your own. In the past six months, the open source community has kind of gone wild and shown that while they may not be exactly where the top models are, they're catching up pretty fast, right? That's not 100% surprising if you think about it, because compute is becoming more and more available. A few folks pool resources together, and you have a lot more of this large computation capacity. Techniques are getting better, which means you don't necessarily always need as much compute. Things are getting more efficient. That's just the history of computing, right? Going back even to the first relay-based machines.
Things are always getting faster, Moore's Law and all that. I think right now, what you'll see us probably doing is working closely with the open source community, probably taking those models and, because we have some really unique data sets that probably won't be fully in the open, using those to build some very special capabilities inside Roblox. We know that when you take from open source, it's generally good to give back something, so we'll probably be working with them in a collaborative manner as we move forward. We think we can get a lot of mileage out of that.
Great. Thank you. Happy to turn it over to the people in the room.
I think in two different places, you talked about AI accelerating 3D content creation and monetization. I think the creation part was really obvious. On the monetization part, can you just speak to that?
Yeah, that's great. What I really meant by that is, generally we've learned that when people can create better, you get better experiences, and that leads to accelerated monetization. I think there will be some interesting things as we get better at building more dynamic marketplaces. For example, we just released what we call UGC Limiteds on the platform. Before that, the only Limiteds could be made by Roblox. We've now opened that up to everyone. Getting the right economic models behind that gets pretty sophisticated, because it's much easier to duplicate a Gucci bag online than it is in the real world, and we all know it's pretty easy to do in the real world, right?
I think there are a lot of things around better detection of knockoffs that will accelerate that, make it more worth a creator's time to go do things, and even the pricing models themselves. We recently brought a few economists onto Roblox's team as we think about deeper systems there, and we're seeing the application of AI to do things like: this object is like that object. In the real world, I think we all know a purse when we look at something and say, that's a purse, right? That's a little bit harder to do in the 3D world. Doing better classification and understanding, and then understanding the markets behind it, will power the economy as well.
The biggest driver is just if I can create faster, if I can create better, and there's less friction, we're going to monetize better.
Maybe let's talk about what's table stakes versus what accelerates it. It feels like we're seeing a lot of these tools proliferate in the industry right now. There's a lot of focus on them and, you know, in the game industry and many other places. In order to stay on the path that Dave has talked about to get to a billion users, are these things that you need to do just to keep up with the Joneses, or are these things that will really move the needle in terms of growing the user base, you know, commercializing the ecosystem, et cetera?
Yeah. I think in general, this idea of dramatically dropping the skill level needed to be a creator is something that is relatively new. I mean, it really came with the move towards generative models. For me, it was when I saw DALL-E for the first time, in kind of like the summer of 2020, right? That is all, I think, new and something that is purely additive to everything we've been thinking about. It's something we've jumped on because I think it helps with all these other goals and will accelerate that growth. In terms of table stakes, there are not a lot of things in this space where I think, oh, we didn't know how we were going to get this done. But, for example, I mentioned voice.
Our ability to build voice moderation correctly now, because of this sort of technology, is far better. In building voice moderation, we are using a large language model for what's called synthetic data in training. Rather than having to have millions and millions of hours of well-classified and scored data (this is good voice, this is bad voice), we're able, in a sense, to generate enough data in a realistic enough way using these Large Language Models to train a model that can eventually beep you out when you're saying something you shouldn't say. Whether it's bad words, which is pretty easy, or bullying, explicit sexual content, anything like that, and doing it at the voice level: something that would have been very, very difficult to do before. It would have slowed us down on our march on voice. Now it's going to accelerate that.
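A hedged sketch of that synthetic-data idea in Python: a generative model produces labeled example utterances, and a much smaller, cheaper classifier is trained on them. The generation function below is a hand-written stand-in so the example runs end to end; none of this reflects Roblox's actual voice-safety pipeline.

```python
# Hedged sketch of the synthetic-data idea: use a generative model to produce
# labeled example utterances, then train a small, cheap classifier on them.
# `generate_examples` is a stand-in; in practice an LLM would produce thousands
# of varied snippets per label. Nothing here is Roblox's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def generate_examples(label: str, n: int) -> list[str]:
    """Stand-in for an LLM call: return n transcript snippets matching `label`."""
    seeds = {
        "ok": ["nice shot, want to team up?", "meet me at the spawn point"],
        "bullying": ["nobody wants you here, just quit", "you are the worst player ever"],
    }
    return (seeds[label] * n)[:n]

labels = ["ok", "bullying"]
texts, targets = [], []
for label in labels:
    for snippet in generate_examples(label, n=200):
        texts.append(snippet)
        targets.append(label)

# The deployed model can be far smaller than the LLM that produced the training
# data: here, just TF-IDF features plus logistic regression.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(texts, targets)

print(classifier.predict(["nobody wants you in this game, just quit"]))
```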
Mm.
Right? It's not so much table stakes or future, but I think it's all kind of blending a little bit, but it's just all an accelerant to where we want to go.
There are quite a few studios that have built their entire business around Roblox. You named some also, like in your partner program.
Yeah.
How are you kind of working with them, or trying to be collaborative in light of maybe there's more competition out there because you're democratizing creation tools for everyone, and it's not just?
Right.
Specialized skill necessarily anymore?
I think the studios that are working with us are bringing more to the game than just technical understanding of the platform. They get the user base, they get how the ecosystem works. Yes, they have skill, but I don't think what's distinguishing them is necessarily being the most technically skilled. I think these tools are going to accelerate them. In fact, this is where a lot of these ideas came from. A lot of the ideas around here, and the team's ideas, came from being at RDC two years ago, where we saw artists who were building things just for the avatar marketplace and how they were starting to use DALL-E as storyboards and idea boards and bringing that into items they were going to create, right? I think it's going to be very collaborative.
I think those folks are absolutely going to be successful. I think they're going to find they can run a lot faster as we deploy tools that enable the creation. Nowhere in here are we thinking we become the creators. We still think all content creation is by third parties on the platform. There might be more competition. I think competition has always been a factor. I mean, we had millions of creators already, so there's already a lot of competition. There's a class of folks who tend to bubble up, and sometimes they come, and sometimes they go. It's not always the same properties or studios that are doing well on the platform at any given point in time. There's fairly rapid turnover there. There's a core group that kind of get what this ecosystem is about and are thriving on the platform.
I think that's going to stay the same.
They have assets today that they carry with them: users, loyalty, and an understanding of what monetizes and what doesn't. This just accelerates their ability to grow. They'll be just as innovative, and they'll want to accelerate their leads to the extent possible. They want to continue to be innovative, just like any other company operating in a competitive environment. They'll use the lead that they have and try to extend it. Yeah.
As the bar for creation comes down, does this change how you guys think about your developer economics, and probably the unit economics of the business?
Yeah, I mean, I think we kind of think more about unit economics more on the user side than on the creator side, right?
Yeah.
What I think it does mean is that we're affected from a different place. Our recommendation systems, for example, have to rise to the challenge that there might be more and more bespoke creations: narrow-reaching, but incredibly compelling to a narrow audience. That's where I think it's going to go. Like, maybe a set of investors get together in a bar one night and put together an experience to collaborate going forward, right? That's not going to have broad appeal to one million users, but it might have a lot of appeal to that small group.
I think it's more that the systems around how we take content on the platform and put it in the right hands are where most of that pressure will be. You know, this goes to the infrastructure point I was making. We're always thinking about the costs of running these things at scale. We're thinking about the fact that if it's a tool that could be used by everyone, then we need to build it in a way where it makes economic sense for us for it to be used by everyone. Beyond that, I don't think we tend to think that much about unit economics.
I think what it'll do is accelerate great content. Whenever we accelerate great content, we see more engagement. Wherever we see high engagement, we see conversion. I think the place we'll really see it is twofold. One will be, again, more and more content, so that'll drive more users to become payers, because the content is better and it's worthy of payment. The second area, which is a little more on the cost side and probably a little more mundane, will be safety. The requirement to have as much manual moderation over time will be reduced. I also see a world where the creation is not just from developers: last year, about 100 brands built persistent experiences in Roblox.
That's a big number. This year, that number is growing. We made some announcements about a week ago. The faster brands can build persistent experiences in their voice, in their brand, the faster that will grow as well. I've got a firm belief that, as I've talked about before, that changes the unit economics of the business in a really favorable way. Those are probably the three things that jump out.
Mike, maybe I'll stay with you because Dan brought up the point about the philosophy at, you know, at the top of the company, potentially changing on working with open source and building as opposed to maybe licensing tools. Does that change the hiring plan or, you know, how much investment is required up front in order to make sure you have the resources to do that internally now? Is that maybe different than it was six or 12 months ago?
Yeah, it's something we're looking at, you know, very much in real time. As you know, we've always leaned towards innovation, investment in innovation, and investments in things that help grow the business at high rates. That's generally our bias. We've had periods of sustainably good margins and being self-sufficient, and we've had periods of very, very high margins and, recently, reinvestment in the business. It's something that we're talking about a lot. We really want to stay on the innovation edge and continue, because ultimately, we believe that's the biggest barrier that we can build in the business. I don't have anything to report today that says we're changing our cost structure as a result.
We're clearly very interested in the technology. We've been implementing it for years, but it's something we're definitely going through right now.
Hey, Dan, I know you guys have been doing AI for a long time, but can you talk about some of the ways in which you're managing the cost through this, like distillation, quantization, things you do with the models to better control the cost of them?
Yeah. We kind of realized early on that you can't just take a large language model, do something cool, chuck it into production, and smile; these models are expensive. There are some well-understood, but still emerging over the past few months, techniques around how you get an affordable outcome from these. One tends to be referred to as distillation, which is the idea that you're basically using a large model to train a much smaller model: you use the large model's ability to generate content and essentially get a distilled version of that model that's much smaller and much more efficient.
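Here is a generic knowledge-distillation sketch in PyTorch, illustrative only and not tied to any specific Roblox model: the large frozen teacher produces soft targets, a much smaller student is trained to match them, and the student is what actually ships.

```python
# Generic knowledge-distillation sketch (PyTorch), not any specific Roblox model:
# the big frozen teacher produces soft targets, and a much smaller student is
# trained to match them; the student is what actually gets deployed.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0

for step in range(100):
    x = torch.randn(32, 128)                      # in practice: real or generated inputs
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)

    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```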
Quantization is the idea that rather than everything being a 32-bit or 16-bit number in these neural networks, you can find ways, because you have all this data, to fine-tune that down and use fewer bits. Fewer bits means you need smaller machines to run it; more fits in memory or in cache or on a GPU, and so on. For anything we run at scale, we have a general rule that you're not going to just toss a raw LLM into production. We will have a few exceptions. We're looking at one application, for example, where we thought it could help on our creator portal, because the QPS there is very small, like less than one query per second.
In a case like that, there's no point going through all the optimization techniques, because even running the full LLM is computationally negligible. We will rightsize, but almost anything we're putting in front of our entire creator community at once, or particularly our user community at once, only deploys when there's a TCO argument to be met. You know, that slows down technical progress a little bit sometimes, but I think it's also good for the teams. It sharpens their game. They better understand what they're deploying. They're more focused on their objectives. It all works pretty well. A great example of that is where we are with voice moderation.
We got pretty quickly, in a matter of weeks, to the point where we had an LLM that could tell you, whoops, that was an inappropriate thing you just said. We now have something that runs fairly efficiently, right? That is starting to move towards deployment, but we had to take the extra month to get ourselves there, and that's the distilled and quantized model. These techniques will evolve both inside and outside of Roblox. The community is very excited about this approach. The reality is, for most things, you generally don't need the full power of an LLM for any given problem, particularly as we get multimodal; you're probably not using all those types of media at the same time. It's great that they're there.
They can do interesting things: in response to a text prompt, you get a picture or maybe a song or whatever it is. Our applications tend to be a little bit more focused, and when that's the case, we'll build models that represent that and do it efficiently. I guess one other thing I should call out, sorry, that I keep forgetting people do not know: a lot of this is also about the ability to do inference, which is the execution of the model in response to a query, on CPUs, not just GPUs, right? That's important for us to help scale. We run our own cloud. We have a lot of CPUs kind of sitting around, depending on where the sun is at a given moment, right?
There are parts of the world that are asleep on our edge and parts that are awake, and we're actively looking at how to use that; we've already deployed some of the first models on a CPU basis into this cloud to start to take advantage of it. There are cost advantages there. When you have a team that has experience building out global compute infrastructure, that is dialed in to hardware and TCO arguments and the trade-off between hardware and bandwidth and so on, it falls very naturally to us as a team to expand and think about how we make sure we get the most bang for the buck from AI algorithms.
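As one concrete illustration of making a model cheaper to serve on CPUs, here is a minimal PyTorch sketch using post-training dynamic quantization, which stores the large linear-layer weights as 8-bit integers; the model below is a stand-in, not one of the models discussed in the talk.

```python
# One common recipe for cheaper CPU inference: post-training dynamic quantization,
# which stores the big linear-layer weights as 8-bit integers. The model below is
# a stand-in, not one of the models discussed in the talk.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128)).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)      # same interface, smaller and faster on CPU
```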
Probably NVIDIA holders, you got to figure out a way to do training on CPUs, right?
I think there's work going on there, and we'll see. I mean, the difference is training tends to be a batch process, not a real-time process. I think there are differences in the architecture, but right now, a GPU is a pretty efficient way to do that. We're also being careful about how and when we use GPUs, and whether we pull them into our own data centers or not; what's the best way to model all that.
Related, going back to the question for a second. You're saying that next year is going to be dramatically different. Help us understand, is that difference going to pop more on the professionally developed sort of content side or on that longer tail of like novice developers sort of side?
Yeah, I think it hits both, but in different ways. The most obvious place is what you might call the novice side, or what it takes to not be a novice. Like you saw with the materials: all of a sudden, something made by someone who would have been a novice might not look like it was produced by a novice, because things look pretty awesome, right? Gameplay may get richer because coding speed goes up. You don't need as big a team to get things done. That's one advantage of Roblox; you don't need a huge team to do a lot of these things already.
Even at the top end, I think about some of the folks in our game fund and their access to PBR materials and how quickly they can move on that. These were never huge teams. They didn't want to be huge teams. We all know any organization feels less efficient as it grows, right? In terms of people. They're able to bring more to bear with fewer folks. Just before we came down here, I was messing around with FRONTLINES, which is one of the newer experiences. If you haven't checked it out and you want to understand where Roblox is going, check out FRONTLINES. I'm terrible at first-person shooters, almost as bad as I am at art.
Seeing how it plays and what the world looks like and everything is really kind of eye-opening. They're what I'll call, you know, a very professional team. Even they will benefit from all these sorts of tools. I think it hits both, but I think the democratization is going to be the biggest, maybe seismic change for the platform as a whole.
I have a question sort of on the competition. You have cited the big data set and your stack, and I'm just wondering, how does that help you utilize generative AI versus how other people can utilize it? You know, think about Fortnite and how they
Right.
utilize generative AI, can they, like, copy you or do the same thing as you do? How do you really, like, differentiate yourself?
Right. I think there are a few pieces to all of this. First of all, we've had creators on the platform. That's where our data sets are coming from, and that is certainly a leg up. I think it's more than that as well. We understand where creators are coming from. We've been working with them a long time. We don't produce any content ourselves, so we're not competing with our creators. We've done a lot of thinking about what an economy should look like. I think generative AI is a piece of it. I think we have some really unique data sets. I think we're well positioned to be a leader in things like real 3D object creation. I mean, when I say object, I don't just mean image.
I mean a car with wheels that spin and a steering wheel that controls it, and being able to understand that that's what a car conceptually is, as opposed to a photo of a car being translated into a 3D image of a car, which is an empty mesh with nothing inside; it's not a solid object. I think we're in a great position there. When you talk about competition, I think it's about accelerating a platform that already really gets creators. The fact that we already get creators, that we understand how they think and how they work, drives what sort of tools we think they need, and so on. That's the whole package of what makes Roblox successful.
When you and Mike and Dave and the strategy group sit around and talk about some of the things that worry you about this technology and where your own vulnerability is within the ecosystem, where is that? What is that?
Yeah, that's a great question. I think we worry a lot about a lot of things. It's what keeps us sharp. You know, we have a healthy dose of paranoia in the way we think about strategy. Obviously, we started this a while ago, but it was, you know, do we have the talent we need? I think we've done a great job improving that talent. I think some of the things we've done recently, we wouldn't have been able to do even a year ago, but there was a focus on understanding that we need this sort of talent. We started going and bringing those sorts of folks on board. That's going very well. We obviously want more, and, you know, there's going to be more of this done across the company as more teams want to do it.
From my point of view, when I think about, you know, running the organization that does the most recruiting in the company, what I think we've really dialed in are things like: how do we differentiate good from not-great talent? What are we looking for? How do we do interview loops? All that sort of stuff, and I think we've nailed that pretty well, but it's something we talk about. Watch the comp ranges and so on. We've got to make sure we get good talent here. The second big one is running this at scale. Yes, everyone's super excited. Every team in the company wants to go do something with this tech. Wait a sec. Let's talk about things like running on CPUs, distillation, quantization.
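To make one of those levers concrete, here is a minimal sketch of post-training dynamic quantization in PyTorch; the model, layer sizes, and input are purely illustrative stand-ins, not anything Roblox actually runs:

```python
# Hypothetical sketch: post-training dynamic quantization, one of the
# techniques mentioned for serving models cheaply on CPUs.
import torch
import torch.nn as nn

# Stand-in classifier; real production models are not public.
model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 2)).eval()

# Convert Linear layers to int8: weights are stored in int8 and activations
# are quantized on the fly, cutting memory and often speeding up CPU inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    scores = quantized(torch.randn(1, 768))
    print(scores)
```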
What are the tools and processes so that our company can scale without deploying, you know, an amount of compute that's just unrealistic and doesn't deliver value commensurate with it? And then just making sure we are staying in tune with where the technology is going. I mentioned we started Roblox Research just a few years ago. That was done with a stated purpose, not specifically for AI, but I said, hey, there's a bunch of open problems we're going to be going after, and we need to be well connected to where the state of the art in the world is on this. I've mentioned to some folks before, it's kind of interesting: 10 years ago, industry didn't really track what academia was doing in computing.
I know that, having come out of academia myself, it just wasn't really tracked the way it is, let's say, in medicine, where there's a very distinct pipeline from academic labs into biotech startups and into medicine we can all get at some point. That's changed. We've built up an organization whose job is in part to make sure we absolutely know what the best results out there are and then can bring them to bear, because we're solving a different problem. How do we cherry-pick from these state-of-the-art techniques and bring them to bear to solve a problem that maybe has never been solved before, right? We will have problems that have never been solved before. We are much more into 3D than others are.
We're much more into content creation of 3D than others are. We're doing things with facial expression in that realm. We can't wait for someone else to go solve these problems for us; we basically have to take the lead. Those are the primary things we worry about. There's also an aspect we think about on what I'll call the safety side. Obviously, society as a whole is worried about things like deepfakes, right? That's a little bit less of a concern for us, because I'm not sure what a deepfake means in the context of an avatar. An avatar is already kind of a fake.
It's a persona you put out there to be who you want to be, and I may want to look like you, and I might build an avatar that looks like you, and that is just kind of expected in the virtual worlds that we run. You know, we have to keep an eye on this: these sorts of technologies have helped our safety models a lot. Where might they hurt? Where might it get more adversarial? Where might, in a sense, the bad guys get more clever because of this? I think we're doing pretty well on that front. So far, we haven't seen any indicators of that being a big concern, but it's something we're always going to keep an eye on.
Hey, Joel, one thing. You know, the company is relentlessly focused on innovating and staying on the edge of innovation. You know, a year ago or 18 months ago, we were getting as much pressure as every company out there to cut back, cut heads, do whatever. We really didn't, right? We chose to continue to innovate, continue to invest, to be consistent through the cycles rather than sort of stopping and starting. We're also a much smaller company; you know, 2,500 people isn't that many folks. In some ways, it's not that we're not worried. I think we're always worried, but the default has been to push innovation at the edge as much as we possibly can. We have a very good unit-economic business model.
At times, it's had a certain margin structure. There have been periods of time when we've had massive margins, when the top line grew really fast during COVID, and throughout the last six quarters or so we've basically run the business at roughly neutral, cash flow neutral, basically. That's always been with an eye towards staying on the innovation treadmill, if you will. I think that's one of our biggest assets: for better or for worse, we believe that the minute we step off of that, almost any company is inviting obsolescence and new entrants to come in. It's the classic innovator's dilemma. The ones that stay on the innovation treadmill have the opportunity to stay on it, but that's really as much a mindset as anything.
Dave is just really relentless on recruiting and hiring the best talent to solve more and more problems. This is one of many moments over the 20 years of this business's evolution that has been an opportunity to keep investing and keep advancing the platform. There'll be others. I think that mindset is the thing that, you know, helps us guard against becoming obsolete and letting somebody else, you know, bubble up. Anytime you have this kind of new disruptive technology, it's exciting times.
Can you give us maybe a little bit more of a case study on the trust and safety side specifically? Maybe give us a sense of the size of the team there as of today. Obviously, you guys have rolled out some innovation here, and more is going to come out. Just the sort of level of dent you guys can make on that expense base, and then, even beyond trust and safety, sort of the room that you guys could have on just general product-driven gains across your entire
Yeah.
Product space, and how that sort of gives you room to do some of the things you're talking about. Just give us a little bit of maybe a case study on that.
Yeah. When you say trust and safety, there's kind of like, there's the engineers who build our trust and safety system.
Yeah.
There's a fairly large community of moderators and customer service agents, and then kind of the team that supports them, manages and trains them, and all that sort of stuff. I don't expect to see us getting smaller on the engineering side. Like, to my point, that's where the innovation is coming from. We see a lot of opportunity to apply this technology on the moderation side, making them more efficient, right? I mean, voice is a really good example: if you just give someone a two-minute voice clip, they can't be a very efficient moderator. If someone files an abuse report saying, this offended me, and you can automatically narrow it down to the eight seconds that was offensive, and explain why we think it's offensive, because our users don't always tell us, their abuse reports aren't necessarily high quality, and we say, we think there was bullying going on in this conversation, then that moderator
Yeah.
is going to be much more effective when you do that, right? Obviously, on the customer service side, it's not just us; everyone's going to get benefits from this there as well. I'm not going to comment on how small we think it will get, because there's this weird dynamic of, you know, there's more going on, and they're getting more efficient, and there are curves crossing. Overall, I think we see a lot of line of sight on both moderation quality and efficiency down the road, at least on a cost-to-serve basis, let's say a per-user-hour basis, right? It kind of has to.
If we go to a world where every user is a creator, I can tell you I'm not launching that feature until I have a moderation story that works; using the same moderation techniques we're using today will not work there. It's going to have to be automated, right? The moderator's role will change to handle either the most egregious cases or the cases where, you know, these neural networks always give you a probability. They don't give you yes or no answers; they give you probabilities. Maybe in a certain probability range, a human gets invoked or something like that, where we want someone to look at it. Humans aren't foolproof either, as we've learned, right? There's just a huge amount of opportunity, and it comes down to, I think, understanding the context.
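To make the probability-range idea concrete, here is a minimal sketch of confidence-based routing; the thresholds, labels, and classifier score are hypothetical illustrations, not Roblox's production moderation pipeline:

```python
# Hypothetical sketch: route flagged content on model confidence rather than
# a yes/no answer. High-confidence violations are auto-actioned, an uncertain
# middle band goes to a human moderator, and low scores pass through.
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.95   # assumed: near-certain violation
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: uncertain band worth a human look

@dataclass
class ModerationResult:
    action: str          # "remove", "human_review", or "allow"
    score: float
    reason: str

def route(violation_probability: float) -> ModerationResult:
    """Classifiers return probabilities; decide the action from the score."""
    if violation_probability >= AUTO_ACTION_THRESHOLD:
        return ModerationResult("remove", violation_probability, "high-confidence violation")
    if violation_probability >= HUMAN_REVIEW_THRESHOLD:
        return ModerationResult("human_review", violation_probability, "uncertain, needs a moderator")
    return ModerationResult("allow", violation_probability, "below review threshold")

print(route(0.72))  # lands in the human-review band
```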
There's also an interesting ability for us to change our trust and safety rules more dynamically. We ran some experiments just using a standard conversational LLM, conversationally building a set of moderation rules with it around text, just as an example, and seeing how well it was able to learn them. As we move to different communities where trust and safety standards may be different, how you moderate an under-13 community in the United States is going to be different from how you might moderate a 25-plus community in, I don't know, pick a country, Israel, right? Or something like that. Those are going to be different. The ability to take policy and translate it kind of automatically into implementation is extremely attractive as well.
It might be something that allows us to be much more dynamic and have maybe an order of magnitude or two more distinct communities than we thought we would as we started thinking about differentiation between these groups.
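As one way to picture the policy-to-implementation idea, here is a minimal sketch that prompts an off-the-shelf LLM with a written community policy; the model name, policy text, and output format are assumptions for illustration, and any real deployment would need evaluation, logging, and human oversight:

```python
# Hypothetical sketch: turn a written, community-specific policy into a
# text moderation check by prompting an LLM. Not Roblox's actual system.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

POLICY_UNDER_13 = (
    "Community policy (under-13, illustrative): no profanity, no sharing of "
    "personal contact information, no bullying or exclusionary language."
)

def check_message(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration only
        messages=[
            {"role": "system",
             "content": ("You are a content moderator. Apply this policy:\n"
                         f"{POLICY_UNDER_13}\n"
                         "Reply with ALLOW or FLAG followed by a one-sentence reason.")},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

print(check_message("hey what's your phone number, I'll text you after school"))
```

Swapping in a different policy string is what makes the moderation behavior change per community, which is the dynamism described above.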
All of our moderation costs are embedded in the infrastructure and trust and safety line item on our P&L. I think that's been running at about 17% to 18% of bookings. The infra piece of that is the bigger of the two.
Yeah.
The savings there will be on, you know, the smaller number, but it'll be meaningful cost savings. Whether we reinvest that or what we do with the savings, you know, we'll see.
Yeah.
Yeah.
Just, is that something that switches on overnight, or how long is the development and improvement process before you start seeing benefits?
It's gradual.
Yeah, it doesn't switch on overnight.
Yeah.
Yeah.
Two use cases for AI that feel commercially important, and you talked about one of them in the shareholder letter, I think at the beginning of 2022, are discovery and personalization. I guess, can we talk about where we are on the journey with discovery? Just getting the right experience
Yeah.
in front of the right user at the right time, which feels important to get them to spend, or at the very least to spend time. I'll follow up on personalization.
Yeah, I think it's a somewhat different space than some of these others, because getting discovery right is a little bit more of a slog.
Mm-hmm.
Like, the techniques are well known. It's a matter of applying them better and better and getting better signals. I think we've made some incredible progress just in the past six months. Discovery is getting better, particularly when we think about aging up and the sorts of signals we're giving folks there. Cold start has gotten a lot better. That's always a hard one: you have no data on the individual except what they tell you when they join, and you're trying to give them a reasonable recommendation. There are things we're learning, like, if you make friends early, bring more social signals in, for example. I think that's going well, but I don't think there's a quick seismic event around personalization and discovery.
It's an integral part of what we're doing as we imagine more and more bespoke experiences and being able to support that.
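As an illustration of how social signals might be blended into cold-start recommendations like the ones described above, here is a minimal sketch; the weights, features, and scoring form are assumptions for illustration, not Roblox's ranking model:

```python
# Hypothetical sketch: blend a content-relevance score with a simple social
# signal (friends currently playing) when ranking experiences for a new user.
from dataclasses import dataclass

@dataclass
class Candidate:
    experience_id: str
    content_score: float   # relevance from content/metadata signals, in 0..1
    friends_playing: int   # how many of the user's friends are active in it

def blended_score(c: Candidate, w_content: float = 0.6, w_social: float = 0.4) -> float:
    # Squash the friend count so the social term also stays in 0..1.
    social = min(c.friends_playing / 5.0, 1.0)
    return w_content * c.content_score + w_social * social

candidates = [
    Candidate("obstacle-course", 0.72, friends_playing=0),
    Candidate("western-rpg", 0.55, friends_playing=3),
]
for c in sorted(candidates, key=blended_score, reverse=True):
    print(c.experience_id, round(blended_score(c), 3))
```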
Mm-hmm. On the personalization front, and this is probably years into the future, but I'm curious. I can imagine playing FRONTLINES, and it doesn't seem too far-fetched that if Roblox knows a lot about me and the cosmetics I've bought in the past, it could, in fact, not only get the right offer in front of me, but perhaps generate the cosmetic that would appeal to me while I'm playing that game, using some of these generative AI tools combined with the data that the platform has on me and my interests. I guess, how far away are we from tools like that, or anything else you would add that could drive monetization through better personalization of the experience?
Yeah, no, I think that's a very good question. I can't say, like, a date when we'll have that. I think it also comes back to how we enable our creators to get access to a set of tools, which may be first party from us or may be third party, that help them do that in a way where we're being safe and careful with the data behind it. The scenario you described, I think, would absolutely be a tool a creator decides to bring into their experience, to help understand what someone might want to purchase and, you know, make a better recommendation from a purchase point of view. That will also come...
We're starting to enable, and we've had examples of this, bringing the marketplace into the experience as well. That's something that's been done with a lot of the brands work we've done and definitely all the music work we've done. You can go get concert swag at any, you know, music concert on Roblox; that launched right around COVID and was our first ability to do that. I think there are some real opportunities to kind of sub-personalize the experience for someone and to pick up signals about what they're interested in based on, you know, other behavior they've had on the platform. I think there's also an interesting opportunity.
One of the unique challenges for us on personalization is we get fewer data points than you do in, say, web search, right? Because you're in an immersive experience, you hop between experiences less often. We're starting to look at what signals we can pick up in the way you are in an experience, how you behave in an experience, that can inform this. Like, you may be in an experience, but what are you really enjoying doing in that experience? Can we pick up on that? Can we understand it and take that back into personalization, so we get more signals per hour, so to speak, than we typically do if we just say, oh, these are the things you've played in the past month, right? We see some real opportunity to start to get those kinds of micro signals.
Just to go back to the question on discovery, one possible conclusion is that the determinant for success for developers is less about sort of their ability to create great content and more about distribution, right? How do they get their experience in front of the right people and effectively monetize it? Can you just share more about how you're solving that problem for them?
Yeah.
As every user becomes a creator, distribution may become...
Like, there's only so much screen real estate, right? I do think there are two areas that tie into this, and I should have mentioned one of these in the prior question, but it gives me an opportunity to hit on it now. One is really thinking about personal recommendations, much more personalized, kind of much smaller in scope. Your results will look very different from mine and so on. I think that's how we kind of get around the fact that an experience might not get as much broad-based exposure, but it'll get to the right folks, and it'll vary from person to person. The other one I forgot to mention: search is an integral part of this.
We've made some advances, and this is using some of the technology that's starting to emerge now, toward much more semantic-type search. For example, if I have an experience called The OK Corral, when someone searches for Western, understanding that that's what it is, without the creator having to say, oh, this is a Western, and being able to bring that up as an experience I might be interested in. We've made some really positive advances in search, and we'll continue to work on it; that's another way people find what they're excited about. I think the third thing that's underneath all this is the social network, and we're really working on enriching the social network. What are your friends doing? What are they up to? Because generally, on Roblox, people want to do these experiences together, not by themselves.
Understanding where your friends may be in an experience right now, or something like that, can really drive a lot of this as well, and feeding that into the recommendation system with much heavier weight.
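To illustrate the semantic-search idea mentioned above (matching a query like "Western" to an experience titled "The OK Corral" without explicit tags), here is a minimal sketch using an open-source text-embedding model; the model choice and titles are illustrative assumptions, not Roblox's search stack:

```python
# Hypothetical sketch: rank experience titles against a query by embedding
# similarity rather than keyword overlap.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source text encoder

experience_titles = ["The OK Corral", "Galaxy Racer", "Haunted Manor Tycoon"]
title_embeddings = model.encode(experience_titles, convert_to_tensor=True)

query_embedding = model.encode("Western", convert_to_tensor=True)
scores = util.cos_sim(query_embedding, title_embeddings)[0]

# Rank titles by semantic similarity to the query.
for title, score in sorted(zip(experience_titles, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {title}")
```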
Dan, you and I were talking earlier about sort of your, you know, your North Star, your Hobbit story. Can you share that Hobbit vision you have with this group? Like, just technologically, what are the hardest parts of doing what you want to do with The Hobbit?
Right.
We can understand, like, what has to happen for that to happen.
Yeah. This is that demo I showed at the end on stories. What I challenged the team with is: I want to be able to take the first chapter of The Hobbit, which is mostly descriptive of what the Shire looks like, if you've read it, drop that into Roblox, and have an entire world pop up, not just with terrain that looks like the Shire, but with hobbits wandering around, and they probably have chickens and goats, because you can pick up that it's a rural society, and they probably have some carts. Stuff is just happening automatically, and it's all been built and coded, and it's live, right? There's a lot that has to happen to get us there.
I think the first step is going to be kind of terrain building, which I think is not quite line of sight, but, you know, something that is not that far away. How do you start, though, to understand what a hobbit is and create it and animate it and give it realistic behavior? That could feel like it's very far away. You can start to see some of these techniques where generative systems will recognize patterns, and they pull from even just a set of scripts that they know are available to start to automate these things. Eventually, they'll be generating those scripts themselves. I don't know the exact timeline, but what I do know about this space is that everything seems to be happening faster than we thought it could. I was amazed at that demo at the end, that an engineer put it together that quickly.
More than the fact that it was interactive, it was asking questions back. It kind of understood the capabilities behind Roblox Studio without a lot of programming behind that. I think this stuff is going to happen a lot faster than we all expect, and we'll be able to do that sort of scenario. Think what that opens up. It can then change how authors engage with the platform; a set of creators who probably haven't approached the platform before can maybe start to approach it in some meaningful way.
That gets to the constraints. What other constraints do you see there? Is it the amount of imagery that's out there, the access to the images?
I think a lot of core science is the constraint. Like, I don't think we've developed the machine learning techniques that can do that yet, right? They're going to have to get better. Look, capital is always a constraint in the sense that these problems are easier to solve with more compute horsepower than with less, right? But even given infinite compute horsepower, I probably couldn't solve that problem today, right? I think there's science that has to happen in order to get us there. I think the scientific community, both in industry and academia, is so engaged. We're not the only company ramping that up and being more aggressive in how we think about it. I'm optimistic it's going to get better.
It's talent.
Yeah, at the end of the day, this all comes down to talent.
One of the things, you know, is that everything we talked about is tangible, something we can see today. Sometimes, with technology innovation, it's the imagination of what hasn't been yet. If you look out 10 years, what's the vision then?
I'm not into making 10-year bets at this point, because I think everything's changing so quickly. Even five years is very hard. The space is moving so fast. I mean, I would not have imagined where we are now five years ago, right? I saw other things that were leading up to it, right? Some aspects I saw. Like, five years ago, I was looking at what it takes to do a better job of summarizing documents; it was just a side hobby project of mine. It was hard, and it wasn't very good, and the techniques weren't really there, and now they're absolutely there. Take any large document, drop it into ChatGPT, and ask it to summarize for you.
It does a very good job of that summary, right? I'm not going to try to predict 10 years out, I don't think. What I will say is what's exciting about this: I kind of view generative techniques as on par with giving automated machinery to farmers. Like, before, all they had was oxen and a plow they could drive behind them, and then you get a tractor. Think about what that's been able to do to make food production much more productive, and how you can approach it, and the economics of the whole thing. I view this technology as the first real lift for creators, right? I think we all know, because all of you probably know more economics than I do, the internet had a weird thing: it didn't really raise productivity. I think this one's absolutely going to raise productivity.
I think it's a different sort of technology. I'm really excited about where that can take us. I mean, we're in a creator business, so I'm really lucky. I wish I could say I saw this revolution coming and that's why I joined Roblox. I joined Roblox because, just in general, it's a very innovative company, and I was excited to join it. What was coming, in general, was this focus on creators, and that's exactly where we are. I feel like we're at ground zero for the acceleration of the creator, not just on Roblox, but kind of worldwide.
You talked about democratization of things. What is the ultimate entry barrier, then, in your mind?
Genius. I think that's what it's going to come down to, right? Like, I don't know if any of you ever go to art galleries or art museums. Sometimes it's the technique of the artist, but more often than not with modern art, it's the thought they had, right? I mean, we've all heard people criticize modern art: my five-year-old could have done that. Not really, because there are the methods and the genius behind what they're doing, and I think that's what we're distilling towards. What's going to differentiate content is going to be the genius, versus what I'll call the almost artificial barrier of skill.
Aside from content creation, can you use the technology to help developers better monetize their experiences?
I expect we'll find a way. I think we touched on this a little bit before, but specific ideas aren't popping into my head directly, except that I know we can use it to build smoother economic systems. One thing we try to do at Roblox on the economy is: how do we build the simplest system that leads to complex emergent behavior, right? Which is kind of how our real-world economy works. It's kind of what keeps all of you in business, actually. I think there's a real opportunity to do that. That said, as ML gets better, the ability for creators to more easily understand who their demographic is, their behavior, what they're looking for, and to be more in touch will definitely lead to better monetization.
I think that's something that's not new to ML, but with these general techniques and the amount of power behind them, that technology too will get better. There's no question in my mind that we will be able to do that. Conversion rates will go up. Yeah. The technology will enable creators to figure out what's the most engaging thing for the user and how to convert them. There's no doubt in my mind.
In the past, you've talked about being a Rule of 40 company many times. It sounds like there's a benefit on the top-line side and also potentially on the bottom line as well, because of the cost side. Do these gen AI tools, at the end of the day, make you more comfortable in that position longer term?
Well, the fastest way to get to being a Rule of 40 company is to grow very fast. Obviously, when the top line absorbs cost, you're just better off in general. Given our business is entirely based on creators making great content that's appealing to users, this makes me, yes, much more optimistic about our ability to do that on a sustained basis, for sure.
One quick one, just financially. There's been weird seasonality the past few years in Q3; things like last year, back to school inflected. Was there something in particular that drove that, given, you know, Q3 was a lot stronger sequentially than we've seen in the past?
September inflected in what way?
Well, Q3 as a whole was up, like, 9% sequentially on bookings, versus the prior few years before that, where we didn't see that seasonality. Trying to understand.
You're saying September was 9% ahead of August?
Sorry, back to school, within all of Q3.
Okay. I'll have to go back and look at the numbers. I don't remember an inflection in back to school, but it could be a whole host of reasons; it could be the comparison to 2021. Our Q3 usually has, you know, a very strong July and August for obvious reasons, and in September things slow down again. I don't see any change in that. Whether the exact sequential percentages or year-over-year percentages are changing at all, I'm not sure. I'd have to go back and look at last year to get to your question.
Yeah. Historically, one of the advantages of Roblox is that it offers, you know, one of the easier languages to create in and to code in before people start off. With, like, developer gen AI tools, it kind of makes it easy to create in any environment now, so Lua no longer necessarily remains the easiest one to create in. You know, it seems like that could be a competitive barrier going away. How do you think about that?
You're right. Lua has a few properties. One, it's easy to get started in. It also has enough oomph and enough features that you can be sophisticated in it. And it also has a really compact runtime, which is key for us running on a wide range of devices right now. But if you look at the demo I gave before, I do think we're going to be in a world where people will want to go back and fine-tune their code, no matter what's generated, and tweak it. If you go to a world where you're not writing the code but you're reading it more often, having a language that is easier to understand actually accelerates that.
The thing about it, I'm not sure if you've ever coded for a living, but it's very hard sometimes to understand code that you didn't write. You know, try grading papers sometimes. You're like: this works, but I don't know how this student got here. You're just trying to understand what they did. I think having a language that is straightforward and pretty easy to understand helps with that process. The AI might have written this code, and I want to go tweak this one spot; how quickly can I understand what happened here so I can go tweak it? You're right, at some point, it's possible programming languages just completely drop out of the picture.
I still think we'll then benefit from the fact that we have, for example, a very compact runtime that we can run on any device and easily port to new devices and all that sort of stuff. I think we're a ways away from code dropping out altogether; again, it's more like I have a tractor, not a pair of oxen, when I do this.
We have a developer community that is fairly large. It's been built over a really long period of time. There's an audience on Roblox that they're building for, and now we're adding these tools and capabilities to their tool set. It's not as if we're staying static; you can't create a creator community out of thin air, right? Our creator community has been large and long tail, and now we're going to add capabilities. We're not standing still, right? We're going to continue to advance that creator community, and we'll have more and more creators who can build on our platform. Our platform has a pretty long track record of growing a user base and aging up, you know, across the world.
The community has seen its earnings grow at really high rates over a long period of time. Adding this on top takes our community to a different place. We're not going to sit around and be static while other people try to use this dislocation in technology to create their own community. That's great; it's evidence of a great model. But our model is not standing still.
Last week, you guys made some announcements at Cannes, I think, with enhancements to the immersive ad platform.
Yeah.
Can you elaborate on some of these changes? Then just maybe thinking further down the line, how could Gen AI be a tie-in to the ad platform? Have you thought about, you know, implications there?
Yeah. Go ahead, Dan.
I was just going to say, first of all, one thing you have to remember about ads on Roblox is that the focus is around things like brand experiences, and how do you get a brand experience? Well, you have to create it. Everything we've been saying about creators, first and foremost, absolutely applies in that domain, particularly if you argue brand experiences may change more often and may be less persistent. I don't know this yet; we're, you know, still trying to figure out what they want. But the classic one is: oh, I want to create a Super Bowl experience, right? A Super Bowl experience is not very relevant in July to the typical user; it's relevant when the Super Bowl is happening.
I think with brands and so on, being able to create experiences as fast as you might create an ad campaign seems pretty exciting and compelling. I'm not sure if you wanted to add anything?
No. Yeah. About 100 brands built persistent experiences last year. A barrier to there being hundreds more is the ability to build experiences. Anything that reduces that hurdle, some of that can get done inside the brands themselves, some of that can be done within their ad agencies, and some of that can be done with developers on our platform who are working for hire with some of these brands. All of that speaks to faster creation and ability for brands to leverage the platform more quickly. You know, 100 last year is great. This year will be more than 100. We're moving in a healthy direction in general with brands.
This is about brands and agencies and developers partnering with us and saying: we're in on this platform, we're testing the platform, we're really excited about what it can do for advertising and for creating experiences where we see enormous amounts of engagement for brands. That's really what this was all about. There were some real, you know, commitments of people to work on Roblox. It's an exciting start to what we've been trying to do.
Can you comment a little bit on the engagement and the fill rate of those 100 branded experiences, sort of where that sits now?
Yeah.
I just called it three, four, five months ago, and sort of how we should think about-
Just starting. Yeah, I don't have any data to share with you today.
Of all the things you showed today, in a presentation like that, what changed the most from what you would have shown in November of last year?
Well, the only thing that we knew we had last year was the facial expression capture, and that was something we had actually been working on. That came out of an acquisition we did a few years ago, Loom.ai.
Mm-hmm.
Everything else wouldn't have existed. I don't think we'd even been trying to do it. Yeah.
Yeah.
All right. I think that's everything in the room. Maybe we'll wrap the webcast there. Dan and Mike, thank you so much for being with us.
Thanks for having us.
Thank you.