GitLab Inc. (GTLB)

Fireside Chat

May 3, 2023

Brian Robins
CFO, GitLab

Thanks for joining us today. We appreciate everyone's time and interest in learning more about GitLab's strategy regarding AI. The format of today's event will be a fireside chat where I will ask Sid a number of questions regarding how GitLab is incorporating AI into our DevSecOps platform. We will also show product demos of some of GitLab's current functionality. Please note we will be opening up the call towards the end for panelist questions. To ask a question, please use the chat feature and post your questions directly to IR questions using the dropdown menu. We will not be discussing any Q1 financial performance questions today. Before we begin, I'll cover the safe harbor statement. During this conference call, we may make forward-looking statements within the meaning of the federal securities laws.

These statements involve assumptions and are subject to known and unknown risks and uncertainties that could cause actual results to differ materially from those discussed or anticipated. For a complete discussion of the risks associated with these forward-looking statements and our business, please refer to our SEC filings, including our most recent quarterly report on Form 10-Q and our most recent annual report on Form 10-K. Our forward-looking statements are based upon information currently available to us. We caution you not to place undue reliance on forward-looking statements, and we undertake no duty or obligation to update or revise any forward-looking statement or to report any future events or circumstances or to reflect the occurrence of unanticipated events. Additionally, this presentation contains information related to upcoming features and functionality.

It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes. Just like with all projects, the items mentioned during the presentation are subject to change or delay, and the development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab. I'd also like to note that a replay of today's call will be posted on ir.gitlab.com. With that, let's kick it off. Sid, thanks for joining us today. The investor community is excited to hear about our vision and capabilities with AI. Let's start by having you share our overall AI vision.

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Thanks, Brian. AI is a big industry shift. It's gonna make it much, much faster to use GitLab. You can expect from us tens and tens of AI features throughout the application. It's not just about developing or coding. It's about the entire DevSecOps workflow. It's also about planning, building, securing, deploying, monitoring, and analyzing. Apart from making GitLab itself better, it's also about enabling our customers to use AI in their applications with ModelOps. They have to be able to apply AI to make their applications better all within GitLab. Last but not least, it's also about how we do it. Our customers expect a privacy first approach from us, where their intellectual property is secured.

Brian Robins
CFO, GitLab

That's great, Sid. With this vision in mind, let's now move to capabilities. I know the team has been innovating rapidly on new AI capabilities across the span of the software development lifecycle. Can you share with us the new capabilities available to customers today?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Yeah, for sure. Let's first look at the features that we have to help developers. We have Code Suggestions in beta in VS Code. Code Suggestions gives you the code that you need to write as you're writing it. In practice, developers spend less than a third of their time writing code, and we've had customers like a large insurance company we work with, where developers spend less than 10% of their time writing code. It's really important to empower developers in other ways as well. Our first AI feature was actually Suggested Reviewers. We acquired UnReview in 2021 for it. What it does is assign your code to the right reviewer. If you get that wrong, it can cost days. We found it was much more effective.

Even for ourselves, where we thought we were pretty good at this, the AI was much better. Since launching in general availability a month ago, Suggested Reviewers has been used over 100,000 times. Another feature for developers is summarizing merge request changes. If you get a merge request and wonder, "Hey, what does this do?" AI can help answer that. Last but not least, a feature that we also have available is Summarize My Code Review Changes. If you critique someone's code, it can give you a summary of the things you said about it. Let's look at a demo of the above.

Taylor McCaslin
Group Manager of Product for Data Science, GitLab

Hi, everyone. My name is Taylor McCaslin, and I'm the Group Manager of Product for Data Science at GitLab. Today, I'm gonna showcase some AI-assisted technologies we're integrating into GitLab to support developers during the software development lifecycle. Let's start with writing code. Code Suggestions allow developers to write code more efficiently by receiving suggestions as they type. It helps improve developer productivity, focus, and innovation. This works in VS Code using the GitLab Workflow extension. Let's take a look. Code Suggestions can quickly complete common tasks like importing Python packages. It can also help you complete functions and then use those functions as you're writing code. Here, we're defining a first and last name and then defining a full name. We can use those defined functions in a user form. Next, Code Suggestions can also be used to fill in relevant content based on context.
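The name-functions demo described above might look roughly like the following Python. The function names and values here are illustrative reconstructions of what a developer would start typing and Code Suggestions could complete, not the actual demo code.

```python
# Illustrative reconstruction of the demo: small helper functions a developer
# starts typing, which Code Suggestions could complete and then reuse.
def get_first_name():
    return "Ada"


def get_last_name():
    return "Lovelace"


def get_full_name():
    # Combines the two helpers, as in the demo's "full name" step.
    return f"{get_first_name()} {get_last_name()}"


def build_user_form():
    # Uses the defined functions in a simple user form, per the demo.
    return {"full_name": get_full_name()}


print(build_user_form())
```

The point of the demo is that each of these bodies is the kind of boilerplate the assistant proposes inline as you type the function signature.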

Code Suggestions can also be used to leverage common API interfaces that help developers get quickly started with new APIs and writing common boilerplate code that can be extended for custom functionality. Here, we're taking the FastAPI, creating a "Hello, world" instance, and then creating some examples of how you can use data within this new FastAPI example. We can also use it to recommend boilerplate code, like connecting to a MySQL database. Once a developer is finished writing code, they then push up that code to GitLab via a merge request and it's time to get some reviewers. Suggested Reviewers helps customers have faster and higher quality code reviews by automatically finding the right people to review merge requests.

This leverages a novel machine learning algorithm that analyzes the specific source code changes in a merge request and suggests code reviewers with contextual knowledge of those changes who are likely to be able to provide a code review. Let's take a look at how this works in a real merge request. Once a developer opens a merge request with their new source code changes, it's common to request a code review. In this merge request, I have a documentation change, but haven't set a reviewer. In the code reviewers dropdown, I can receive suggestions. I see two recommendations already who are perfect for this code change, Mon, who's my engineering manager, and Amy, who's one of our technical writers. Amy specifically is a code owner who can actually merge this change for me.

This is great because it helps me respect this repository's governance rules around who can merge changes. Now that we've selected a reviewer, let's ensure that that reviewer has the appropriate context to be able to provide a high-quality review. Summarize merge request changes. When a developer creates a new merge request, it's common to not take the time to write a detailed description of the changes. In the worst case, this can lead to blank merge request descriptions that leave reviewers in the dark about what a change is intended to do. Summarize MR Changes helps the author of MRs efficiently communicate what their code change does. This helps reviewers easily understand the change and begin their reviews faster. Let's take a look at a merge request. Here we have some simple changes, but a blank description. Let's leverage the GitLab action to summarize this diff.

This leverages a large language model to analyze the changes in this merge request and publish a comment summarizing the changes this merge request implements. Now that we have a merge request summary, let's now get some feedback from a reviewer. Summarize my merge request review. Merge requests make it easy for developers to get feedback on their code with reviews. However, reviewers can frequently create many small comments across many files, which can be hard to understand. GitLab can help reviewers summarize all their changes, enabling the original developer to more efficiently understand the reviewer's requested changes and implement them faster. Let's jump into an active merge request where a reviewer is actively adding comments for the original author. Once a reviewer's done, they can click Finish review and choose the quick action to summarize their merge request.

This will highlight all of the various feedback items in a single summary comment. The original author can quickly take this reviewer's feedback and iterate on their merge request.

Brian Robins
CFO, GitLab

It's great to see our AI capabilities that benefit developers. As you mentioned, one of GitLab's differentiators is that we help everyone involved in the software development life cycle. Let's share other AI capabilities that are available to customers now and help security and operations personas.

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

I'm super excited about the features we have for security and operations. The first feature that we have out there is Explain this vulnerability. GitLab helps you recognize which vulnerabilities might exist in your code. It can now say not just, "Hey, you have this vulnerability," but also, "How does that vulnerability work?" On top of that, how do you remediate it? All are driven by AI. Another feature we have is generating tests in merge requests. Suppose you fix that vulnerability; you wanna make sure that it never comes back again. You want a regression test. AI can help you do that. Last but not least, Explain this code. For example, if you're in operations, you're frequently dealing with reviewing new code, like what does this do? What does this cause? The AI can help you interpret the code you're seeing. Let's look at a demo.

Taylor McCaslin
Group Manager of Product for Data Science, GitLab

Now let's look at some features that support security and operations users. Explain this vulnerability. GitLab already has security scanning built in, which can help developers detect security vulnerabilities in code they write. However, it can be difficult for developers to understand these vulnerabilities, especially if they aren't trained in cybersecurity or haven't encountered a specific type of vulnerability before. Explain this vulnerability helps developers to understand a detected vulnerability, learn about why it's a problem, and even receive tips on how to resolve it. Let's look at a specific vulnerability. Here I have a static analysis vulnerability, and I can see the prompt to Explain this vulnerability and learn how to mitigate it with AI. Here we can see I'm missing a user instruction in my Docker file. I get an explanation of this vulnerability and how it's exploited. I also am recommended a fix.

Now I've got everything I need to quickly go and resolve this vulnerability. Now that we've resolved the vulnerability, let's Generate tests to ensure that our code does what we expect. Generate tests in merge requests. When developers push changes to a new merge request, it's common for them to iterate continuously. A common problem with new code is it frequently doesn't have tests associated with it. This makes it hard to review code as there's no way to easily check if the code does what it's expected to do. With AI, we can take code changes and suggest test files to help both the original developer and reviewers ensure code works as expected. In a merge request, developers can choose the dropdown associated with a file and click Generate tests. This will use AI to generate test files.

Developers can then take these generated tests and add them to the merge request with subsequent commits.

Brian Robins
CFO, GitLab

That's great, Sid. What other capabilities apart from DevSecOps are available to our customers that benefit everyone involved in the software development life cycle?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Yeah. The software development life cycle is more than developing, securing, and operating. It's planning things together. It's looking at the entire value stream. Here are a couple of exciting features we have available right now. One is Issue Comment Summaries. For example, if you're discussing something and there have been tons of comments, maybe 50 comments, who's gonna read all that? Nobody has the time. The AI has the time to do that. It can read them and summarize them so everyone's on the same page again. Another feature we have is GitLab Duo Chat. If you wanna ask a question about GitLab, you used to have to look in the docs yourself; now GitLab Duo Chat can do that for you. Last but not least is Value Stream Forecasting. GitLab is especially good in value stream analysis because everything is in a single application.

With the forecasting, the AI can make a prediction about how many deployments you're gonna do. We're gonna expand on this, but we're very happy with what's already in there. Let's look at a demo.

Taylor McCaslin
Group Manager of Product for Data Science, GitLab

Now that we've generated tests, let's go back and look at our code base and see what we might do next. Explain this code. It's common during development to encounter code that you're not familiar with but need to understand to continue your task. GitLab now allows users on code views to receive an AI-generated explanation of how a code block functions. This enables anyone interacting with source code to quickly onboard to a new code base, but also to uplevel their skills and understanding as they encounter code they're not familiar with. When viewing code on GitLab, simply highlight a code block that you don't understand and click the question mark. An AI-generated summary will explain what this code does and allow you to quickly understand what's happening in that selected code block. Let's look at some features that benefit anybody who uses GitLab.

Issue Comment Summaries. In large software organizations, it's common for there to be many ideas tracked in issues. Issues can accumulate many comments over time, or particularly exciting ideas can generate lots of comments back and forth as various people provide inputs on how to solve an issue. These comments can quickly become overwhelming and hard to follow. GitLab now offers a simple way to quickly summarize issues with many comments. Here I have an issue that's seven years old, and it has a lot of comments. It actually takes a long time to just scroll to the bottom of the page. Now we can choose the quick GitLab action to summarize comments. This will use AI to summarize all of the comments above and give me a quick, easy to understand summary of all the content so that I can get to action faster.

Once I understand what I wanna do to solve an issue, it might be common for me then to have questions about how to move forward next. GitLab Duo Chat. During the process of creating software, it's common to have questions about how to accomplish something, especially if you're a new developer. This can frequently require asking a colleague, Googling the question, or scouring documentation to find an answer. With AI, we can make this as simple as asking a question in chat. Using a large language model trained on GitLab documentation, we can answer queries and point users to relevant documentation and tutorials. Let's take a look at what this looks like in the UI. In the bottom left, click the help icon and click Ask GitLab Duo Chat. Let's ask about enabling security scanning in GitLab.

GitLab Duo Chat analyzes our documentation and provides a natural-language answer to our query, and even links relevant sources that include documentation and tutorials. Now that we know that our users are productive, we should think about measuring that productivity. Value stream forecasting. GitLab Value Stream Analytics enables decision-makers to identify trends, patterns, and opportunities for digital transformation using metadata from all the deployment activities across GitLab. This historical data can also be used to predict what to expect in the future, allowing you to make critical decisions for planning and staffing. In a repository, click CI/CD analytics under the Analyze tab. Choose Deployment Frequency and enable the Forecast option. You'll see a forecasted trend based on historical data, allowing you to predict what to expect in the future.
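GitLab's actual forecasting model isn't described in this demo. As a stand-in, a minimal least-squares trend extrapolation over historical deployment counts illustrates the general idea; the function and data below are hypothetical, not GitLab's implementation.

```python
def linear_forecast(history, periods_ahead):
    """Fit a least-squares line to evenly spaced historical values and
    extrapolate it forward. Purely illustrative, not GitLab's model."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
        sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    # Project the fitted line over the next `periods_ahead` periods.
    return [intercept + slope * (n + k) for k in range(periods_ahead)]


# Hypothetical weekly deployment counts and a three-week forecast:
weekly_deploys = [10, 12, 11, 14, 15, 17]
print(linear_forecast(weekly_deploys, 3))
```

Real deployment data is noisier and seasonal, which is why a product feature would typically use a more robust time-series model, but the input (history) and output (a forecasted trend) are the same shape.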

Brian Robins
CFO, GitLab

Sid, can you share some of our upcoming AI features that are on the roadmap?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Yeah. We're working on a ton of new features. I'm gonna name some things that we hope to release the majority of in the coming weeks. One of them is automated commit messages. In Git, in version control, you continually say, "Okay, this is the code I'm changing, and this is about what it does." The AI is really good at writing a summary for that. Another thing is a conversational interface for editing .gitlab-ci.yml. That's for the verify stage, how you test your code. That's a file that you need to edit. The AI can learn from other files and make suggestions on how you edit that. Spelling control. It might be pretty mundane, but it's super helpful, and for issue descriptions, comments, et cetera, it's gonna make you sound a lot smarter.
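To make the automated commit-message idea concrete, here is a hedged sketch of how a diff could be turned into a language-model prompt. The prompt wording and the `ask_llm` call are invented placeholders; GitLab's actual prompts and APIs are not described here.

```python
def build_commit_message_prompt(diff: str) -> str:
    # Hypothetical prompt construction; this only illustrates the general
    # shape of the feature (diff in, one-line commit message out).
    return (
        "Summarize the following code change as a one-line Git commit "
        "message in the imperative mood:\n\n" + diff
    )


diff = """\
-    return name
+    return name.strip().title()
"""
prompt = build_commit_message_prompt(diff)
# The prompt would then go to a language model, for example:
# message = ask_llm(prompt)  # ask_llm is a placeholder, not a real API
```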

We're also gonna have a natural language assistant for creating Git commands in the command line interface, the CLI of GitLab. If you're in that CLI, it's gonna be way easier to write a command. We're gonna have vulnerability report summaries and suggested action mode plans. These are just some of the ideas. We're looking at many more, but we hope to release some of these in the coming weeks.

Brian Robins
CFO, GitLab

Thanks, Sid. We're hearing from customers, especially at the enterprise level, about the importance of IP protection. It's a key part of the vision that you just talked about. Let's dig deeper on what we mean by being privacy first in AI.

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

GitLab is trusted by more than 50% of the Fortune 100 to secure and protect their most valuable assets. We believe that enterprises, especially the heavily regulated ones, need to know their intellectual property is secured. We're focused on a privacy-first approach. The protection of what our customers trust us with is at the forefront of how we will apply AI. For many features that interact with customer source code, like Code Suggestions and Explain this vulnerability, we use models that reside completely within the GitLab cloud infrastructure to help safeguard customer intellectual property.

Brian Robins
CFO, GitLab

Yesterday, we announced that we have partnered with Google Cloud. Can you please share with everyone a bit more about that partnership?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

We're super excited to expand our long-time partnership with Google. GitLab can now tune Google's foundational models with our own data and leverage these great LLMs to deliver new generative AI-powered experiences. It allows us to control the data. We have data isolation, protection, sovereignty, and compliance. All customer intellectual property and source code stays within GitLab's cloud infrastructure. We're open to partnering with multiple hyperclouds and third-party AI services to make sure that our functionality will stay best in class and privacy first.

Brian Robins
CFO, GitLab

Let's put this in the context of our business and the market. We know that AI will change the nature of how people work and collaborate to develop, secure, and operate software. Can you share how we think about the impact of AI on the total addressable market and our business?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Yeah. We believe that AI will increase the total addressable market for multiple reasons. First, in the code creation process, we're seeing new personas get into the code creation process. We sometimes call these junior or citizen developers. They're using the platform to contribute. The developer TAM, who's considered a developer, is expanding. For example, our Code Suggestions feature makes it easier to code. Second, with features like Explain this vulnerability, we're expanding who can help with securing software. It's gonna expand that persona as well. Third, we're adding ModelOps capabilities to the DevSecOps platform. That will invite data science teams as new personas to use GitLab, the DevSecOps platform. I think that's really exciting. Finally, we see market interest.

As AI makes every individual part bigger, there's more demand for a single application that doesn't have these integration points between applications that slow everything down, and we believe that will expand the DevSecOps platform market.

Brian Robins
CFO, GitLab

Sid, you just mentioned ModelOps as one of the new key areas that we believe will expand our total addressable market. Can you expand upon this?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Brian, I absolutely can. We see ModelOps as a big opportunity for GitLab. ModelOps is a combination of DataOps and MLOps. It's everything you need to do to add AI to your application. At GitLab, we're adding AI to our own application. But with GitLab, customers are creating applications, and they now need to add AI to those. We want to help them on that road. We already started building this functionality in 2021 in collaboration with the wider community. That led to GitLab CI/CD runner support for NVIDIA GPUs. More recently, we added the capability to link MLflow experiments with GitLab experiments. Later this year, we plan to introduce a model registry, allowing our customers to store, version, deploy, and track the health of their AI and ML models natively within GitLab. We want to help our customers be more productive.

Every significant application is going to have both code and AI, and we want both managed with GitLab.

Brian Robins
CFO, GitLab

Thanks, Sid. All these capabilities you just saw are currently available to customers, and you'll continue to see our fast pace of innovation with new AI capabilities. I encourage you to follow along through the GitLab blog, where you can see all the new capabilities as we launch them. Would you like to close this out, Sid?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Thank you all for joining us today. AI is a major shift for our industry. GitLab has evolved. It's gone from a dev platform to a DevSecOps platform to now an AI-enabled DevSecOps platform that includes ModelOps. AI is gonna make it faster to develop, secure, and operate software. Our vision expands beyond just code creation to encompass the full software development life cycle, including planning, securing, deploying, monitoring, and governing. As AI speeds up different parts of an application, the power of a single application like GitLab to speed up the overall cycle time truly shines. I believe that AI will expand our total addressable market, bringing more personas into the mix. We're grateful that more than 50% of the Fortune 100 trust GitLab as their DevSecOps platform. We're now very happy to answer your questions about AI.

Brian Robins
CFO, GitLab

At this time, we'd like to open the meeting up to a broader Q&A from analysts. To ask a question, please use the chat feature and pose your question directly to IR questions. Thank you. With that, we'll take the first question. Thanks. The first question is from Sterling Auty from MoffettNathanson.

Sterling Auty
Senior Managing Director, MoffettNathanson

Thanks. Hi, guys. Really appreciate you guys doing this. What I'm really wondering is, I wanna go deeper on that Google partnership. In particular, you know, it almost seems like there's just a natural pathway here in terms of your partnership with Google, making it a GitLab Google versus Microsoft, you know, battle, you know, in DevOps moving forward. You know, how would you see that playing out? In particular, kinda curious which LLMs are actually being incorporated. How much of what you've done is the Microsoft solution versus Google? Because you did mention kinda doing best of breed, you know, from that perspective.

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Yeah. Thanks for that. I think the battle is GitLab versus GitHub. In that battle, we have twice the number of features available to customers to date, if you compare it to what GitHub has available to customers, plus what they have announced. We're very excited about the deepening Google partnership, but GitLab AI uses multiple hyperclouds and third-party AI services, as long as they meet our privacy-first requirements. For example, we're using OpenAI. Regarding the LLMs, we're using the state-of-the-art LLMs that you would expect. It seems that the bigger, the better, and we wanna make sure that GitLab customers get the absolute best.

Brian Robins
CFO, GitLab

Next question will be from Rob at Piper Sandler. Rob, you're on mute.

Rob Owens
Managing Director and Senior Research Analyst, Piper Sandler

It's only been a couple of years. Why not still be on mute? I think my question's really for Sterling because he always seems to be in a car. I wonder if he has an office or not, but I'll take that one offline with him. I think more broadly as we contemplate just monetization and where this fits, will everything be available in premium and ultimate? And if not, how are you gonna make a distinction between which tier to include these things in? Thanks.

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Thanks, Rob. A great question. Yeah, we're gonna be looking at our buyer-based open core model, and we're gonna be looking at the cost of providing the features to date and how those costs are projected to evolve. That might result in having features available to everyone, restricting it to a certain tier, Premium or Ultimate, or even charging for features separately.

Brian Robins
CFO, GitLab

Thanks, Rob. Next question will go to Joel from Truist.

Joel Fishbein
Managing Director, Truist

Hey, thanks for taking my question again. Thanks for doing this. I have a follow-up to Sterling's question. First of all, can you give us a timeline, a more specific timeline, Sid, for when you think some of these features will be out? You said over the next several, you know, weeks. The second thing, question I had as a follow-up to Sterling is, any more color on expanded partnerships? I mean, you guys pride yourselves on being sort of the, you know, agnostic, you know, and allow people to use the LLMs that they want. Just curious, you know, where you... If you'd give us any more color on any of the other partnerships you guys are working on?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Thanks for that. Most of what you saw today is available to customers as experimental features. This wasn't what we're gonna do; this is what we're doing. Now, specifically, Suggested Reviewers is generally available. Code Suggestions is in beta, and we did discuss a few features that are on the planned roadmap. We wanted to show you what we have today. There's a lot more coming. It's gonna be tens and tens of features throughout the lifecycle. If you think about where we get the features, some of them are with our own models, some of them are with the hyperclouds, and we specifically like it where we can run it within our own infrastructure, and that's what the Google partnership makes possible.

We're also using third-party AI services. I mentioned OpenAI and other players in that market, Anthropic, and there's a couple of other ones. We're looking at all of them, and we wanna use kind of the best fit for every particular feature.

Brian Robins
CFO, GitLab

Thanks, Joel. We'll now go to Michael from KeyBanc.

Michael Turits
Managing Director and Software Investment Banking of KBCM Technology Group, KeyBanc Capital Markets

Hey, everybody. Good morning. Thanks a lot for doing this. This is great. Regarding the TAM, I thought, Sid, that was a very helpful thought process regarding the number of new personas, and I've always felt that there were lots of new personas that could use different features of your product set. That's useful. If I just try to isolate it to the impact of GenAI on the addressable market for pipelining per se. In other words, if it becomes easier to write code, is there therefore more demand for pipelining because there's a lot more lower-quality code out there? Or does...

Is GenAI so good that it can help us create higher quality code such that there's less demand for pipelining, and pipelining is in the end really a big part of your core business?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

That's a great question. I think I see a couple of trends that could lead to more pipelining. More code, more pipelining. I think easier to write tests, more pipelining. I think more, kind of more improvement, more innovation. The more you change, the more you wanna test whether things are still working. The only thing I can see making a difference is AI driving which tests you run. One thing we have in GitLab and we're working on is code intelligence. Run the right test at the right time. That is something that can drive down the total amount of pipelining because you're gonna run only the tests that matter. That might be a downwards trend. All in all, I think we're seeing human work being replaced by computers, and it involves more and more compute.

I think the compute trend is up and to the right, and I don't think any amount of code intelligence or test intelligence is gonna stop that.

Brian Robins
CFO, GitLab

We'll now go to Matt from RBC.

Matt Hedberg
Managing Director and Software Research Analyst, RBC Capital Markets

Great, guys. Thanks for taking my questions. Maybe a two pointer here. Have you seen an increase in top of funnel interest given all of your focus on Generative AI? Second, do you think there could be a consumption element to pricing at some point?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Yeah. I think a lot of what we do with generative AI is pretty recent. We really changed how we allocate our resources. I think now if you look at our development, about a quarter of our efforts are in generative AI. I think it's a pretty recent development. We still have to see a big impact from that. Yeah, I do think some consumption pricing is appropriate. Some of the features will just be included as part of the product and may be available to everyone. Some of it will be tiered, and some of it will be charged separately, some on a user basis, like per user per month, and some on a consumption basis, where you pay for the compute or specifically for that feature in the form of tokens.

Brian Robins
CFO, GitLab

We'll next go with Koji from Bank of America.

Koji Ikeda
Director of Enterprise Software Equity Research, Bank of America

Hey, guys. Thanks for taking the question. Sid, I wanted to ask you a question on the example that you had with the Hello World coding example. It's a question on risks. In your view, what are some of the risks of, you know, using something like an auto-coding Copilot, whatever it may be, to just say, "Hey, write me the code for this Hello World example," and then starting with that versus, you know, starting with a blank sheet and using the Code Suggestions that GitLab has today? Maybe help us walk through, you know, what in your view are some of the risks with starting with code just made up right up front.

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

I think the risks are similar if you kind of work with somebody else's code. Frequently, if I'm a developer and I have to work with somebody else's code, it's faster because the code's already written, but it's harder to understand. AI is gonna make us faster. It's gonna allow for us to do more work in less time, but it's gonna require us to be critical of what is in there and to have more judgment of what works and what doesn't. I think it's a lower bar if you write it yourself than if you have to review somebody else's code. It's intellectually tougher. I think there's a certain amount of upskilling going on here.

Brian Robins
CFO, GitLab

We'll now go to Derrick from Cowen.

Derrick Wood
Managing Director, Cowen

Great. Thanks, guys. Lots of announcements, and thanks for walking through all the new technologies you're working on. I think it'd be helpful to understand, you know, just how you'd compare and contrast what you guys are working on versus what GitHub has rolled out, and maybe give us some differentiators that you're looking for out there. I know it seems like you've stressed that you're running this in your own cloud infrastructure, and that may be different from more of a public infrastructure out of GitHub. I would love to hear kind of a compare and contrast in terms of what you guys are working on.

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Yeah. Thanks for that. I think if you compare GitLab versus GitHub, I think GitHub is a dev platform where GitLab is much more a DevSecOps platform. If you look at generative AI, you see the same thing, where GitHub is very focused on everything that has to do with development and coding, and we have to do that, but we also wanna do security. We also wanna do operations. We also wanna do planning and value stream management across the entire life cycle. I think that's coming out with the generative AI features too.

Brian Robins
CFO, GitLab

We'll now move to Kash from Goldman Sachs.

Kash Rangan
Managing Director, Goldman Sachs

All right. Thank you so much, Sid. Good to see you. Good to see that you're keeping a good energy level. Brian, thanks for organizing it. Sorry for my throat here. Two things I was curious to get your take on. One is, you have the hypothesis that this is gonna lead to more code generation, more personas, et cetera. Can you talk to perhaps any customer anecdotes where that hypothesis has actually been proven out? Because the general prevailing view on Wall Street, maybe this is wrong, is that there's gonna be a contraction of the TAM. You're talking about an expansion of the TAM. There's a general view that it's gonna lead to a contraction of the developer opportunity, because it's just a lot more efficient.

I can spend 20%, 30% less time, so why would I not need fewer developers? Maybe that's incorrect; if you could just debunk the hypothesis based on some customer case studies or whatnot, if you have them. Secondly, when you boil it all down, what would be the net critical differentiators versus Microsoft GitHub Copilot? The market seems to be obsessed with the first mover advantage. Help us debunk the myth that you are lagging behind the first mover advantage that Microsoft appears to have. Thank you so much.

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Yeah. Thanks for that. What we're seeing at customers is that, because of generative AI, it's easier for people to start participating. That's a trend. The second thing we're seeing is that every significant application now has both code and AI models, and you need to manage both, and that's becoming a bigger problem. We already have functionality to run experiments in MLflow and in GitLab, and I'm super excited about the model registry coming up and having controls around that. You wanna make sure that you version that right, you don't have regressions, you don't have discrimination going on, and I think we can have a huge role there. I think with the large models, we're seeing that it's really important to have, like, a really big training set and a really big training run.

It doesn't make sense for 1,000 companies to make their own big models. You're seeing a kind of a flight to, like, a few companies in the world who can do that well. I think we're very lucky to be able to partner with Google, who has expertise in this, has run large models for a long time, has written Code Suggestions for a long time. Being able to partner with that, I think we have a really good feeling about being able to offer our customers something that is best in class. We're not just dependent on that. Like, our Code Suggestions is based on nine different models. It's not that you can have any one thing that solves everything.

It's a question of scope and size, and, I think we're doing a really good job on making sure we have something that's high quality there.

Brian Robins
CFO, GitLab

We'll now go to Ryan MacWilliams from Barclays.

Ryan MacWilliams
Software Equity Research Analyst, Barclays

Hey, guys. You know, just as we've talked to, you know, enterprises about how they think about incorporating large language models into their DevOps process, it seems like there's still some hesitancy around using large language models. One, from the sense of, like, will it copy my proprietary code? Then two, you know, if it's pulling from the broader internet, how do we know that what I'm incorporating with a large language model is secure? Like, how can GitLab be different in helping enterprises get more comfortable at securing the code output from large language models? Thanks.

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

It's super important. We wanna make sure that it's privacy first, that people know that what they are doing is not gonna expose their code to others. We wanna make sure that when it involves customers' code, like Code Suggestions, it's run on GitLab infrastructure, we control everything, and the output we produce can't be shared with other customers. We have a lot of trust from our customers. We are the enterprise solution, the enterprise standard, and we incorporate that in the work we do here. We diligently vet everything and give customers options to use experimental features or not, and to use third-party AI services or not, even if it doesn't involve their code.

Brian Robins
CFO, GitLab

We'll now move on to Karl from UBS.

Karl Keirstead
Managing Director of Software Equity Research, UBS

Okay, great. Thank you. Maybe Brian, if we could go back to monetization, maybe a couple for you. When do you think you'll announce specifically your monetization plan? Have you baked anything into your fiscal 2024 guidance for this technology? Is there any framework you can provide to try to size what the revenue impact might be in fiscal 2025? Then a quickie for Sid. Sid, you framed this as really GitLab versus GitHub. Obviously, Amazon threw its hat in the ring a couple weeks ago with CodeWhisperer. Do you mind just sharing your framework on how to think about Amazon's entry into this auto-programming space? Thank you.

Brian Robins
CFO, GitLab

Sid, would you like to go first?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Yeah. I think it's super cool that we have more and more companies kind of making code suggestion technologies. I think that, if we look at the platform market, the dev platform market, GitHub and GitLab are by far the biggest contenders. That's where we're focusing our competitive attention. Brian?

Brian Robins
CFO, GitLab

Thanks, Karl. On the financial questions, the purpose of today's call was really to go through the features and what we've been doing with generative AI and to show some demos. At a later time, we will go through what the impact will be to outer years' revenue as well as the cost model. We aren't updating, you know, any of the guidance today.

We'll now go to Pinjalim from JP Morgan.

Pinjalim Bora
Executive Director and Equity Research of Enterprise Software, JPMorgan

Hey, thank you. Thank you for doing this. Very helpful. Quick question. What portion of the capabilities that you're talking about today are available for self-managed customers? Is there any additional work to be done by the self-managed customers to train these models on premises?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Thanks for that. That's a great question. Everything we showed today assumes SaaS, and most of it is available only on gitlab.com. For self-managed, some of it will never be available. Some of it will be available by proxying it to gitlab.com, which we call GitLab Plus, and some of it will be able to be run locally. How that shakes out per feature, we don't know yet. We're focusing our innovation on the SaaS services today, and over time, some of it might trickle down to the self-managed ones.

Brian Robins
CFO, GitLab

We'll next go to Nick from Scotiabank.

Nick Altmann
Director of US Software Equity Research, Scotiabank

Awesome. Thanks, guys. See if I can get my video working. There we go. Just kind of building on Matt's question on the consumption side of things and then Karl's question on the monetization side of things, where do you guys sort of expect the monetization to show up first? Is it more in the, what I would call the variable components, the CI/CD minutes, the compute, the storage? Or is it more on kinda what you were talking about earlier, Sid, around expanding the TAM to sort of different personas?

Just as a follow-up, on the monetization opportunity, is the bulk of it in some of the variable components, CI/CD, storage, et cetera, or are there sort of incremental consumption components that you guys kinda plan on adding to the pricing plan that you think will be a little bit more meaningful?

Brian Robins
CFO, GitLab

Yeah. Thanks, Nick. You know, same answer I gave Karl. You know, we just finished our quarter, we have our earnings call coming up shortly. You know, we'll provide guidance on that call. Main purpose of today was just to go through the features and talk about, you know, everything that we've released to our customers. The consumption component and the price and the monetization, we'll go through at a later point.

We'll next move to Mike from Needham.

Mike Cikos
Senior Analyst of Infrastructure and Analytics and Security Technology, Needham & Company

Hey, guys. Thanks for getting me on here, and thanks for doing this. Just two quick points that I wanted to touch on. The first: can you give us an indication, I know Suggested Reviewers has been out there for a couple of months now and Code Suggestions is still in public beta, but what has been the customer adoption or attach rate? As well, how can we think about the success of the AI, as in, what's the likelihood of the customer actually adopting the recommendation that the AI engine is making, as a proof point or evidence that the engine is actually delivering the actionable outcome that the customer is looking for? That's the first question. I did have just one follow-up after.

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Yeah. Thanks, Mike. For Suggested Reviewers, we're not charging separately for that, so there's nothing to report there. We did have hundreds of thousands of suggestions given, and anecdotally, people report it saves them a whole bunch of time. Actually, internally, we thought we were really good about assigning reviewers, but the AI turned out to be much better still. It's been a big success here. What's the second part of your question? Sorry, can you help me?

Mike Cikos
Senior Analyst of Infrastructure and Analytics and Security Technology, Needham & Company

I was looking for what's the adoption rate been like from customers, the success rate? The second point that I wanted to touch on, I know we were talking about things like comment summarization, right? A lot of these new features are in experimentation mode. My thought is that experimentation is still behind, like, beta and then eventual launch, right? How do we think about the progression of when these features are actually launched for customers?

Sid Sijbrandij
Co-Founder, Executive Chairman, and CEO, GitLab

Yep. To answer the last thing first: certainly, most of what we showed today is in experimentation. The next step is beta, and the step after that is general availability. It's on the order of weeks that we hope to progress these features to that. About the success rate, I don't have any data on that. It would be different data; for example, Code Suggestions uses nine different models, so we would look at it per model. Anecdotally, people perceive the suggestions to be of high quality.

Mike Cikos
Senior Analyst of Infrastructure and Analytics and Security Technology, Needham & Company

Terrific. Thank you, guys.

Brian Robins
CFO, GitLab

I don't see any more hands raised. I wanna thank everyone again for joining us today. We know that everyone's schedules are extremely busy. We appreciate your interest in GitLab. We look forward to speaking with you again after our upcoming earnings call. Thank you so much. Have a great day.
