Excellent. Thank you to everyone for joining us as part of the Needham Growth Conference. My name is Mike Cikos. I'm the Lead Analyst here covering cybersecurity and infrastructure software. As part of the conference, I'm pleased to say that we have the management team here from JFrog: CEO Shlomi Ben Haim, as well as CFO Ed Grabscheid. For this fireside, we have 40 minutes set aside. We'll be running through some prepared questions on my side, but sorry if my eyes are darting around. I have a separate screen here. If the audience has questions, please send them in, and we'll make sure we get to them while we have Shlomi and Ed on the line. And Ed and Shlomi, thanks again for the time here. We were just chatting before we hopped on the fireside.
They're currently at an offsite, and it's rather late on their side too, so we really appreciate the participation. Thank you. For investors who are newer to the story or maybe revisiting JFrog, just to start, set the table for the crowd, maybe, Shlomi or Ed, can you just provide a quick overview of the company's history, the value proposition that you're currently delivering to customers?
Yes. Well, why don't I start? Mike, first of all, it's a pleasure meeting you again. We enjoy these conferences a lot, so thank you for having us. And greetings from Israel. We are here for a management offsite, so we're very happy to get on the line and share our stories, especially as we get ready for another wonderful year. We founded JFrog a long time ago, 15 years ago, as the company that manages all the primary assets of the software supply chain. Basically, what that means is that no developer on the planet builds software from scratch anymore. You bring software packages from public hubs — like Docker, Java, or C++ packages, or models from Hugging Face. They are all binaries.
We provide the platform that hosts them for you, manages them, stores them, secures them, and distributes them. Basically, everything that comes into the software supply chain or goes out to your production environment, your runtime environment, is powered by the JFrog Platform. So you can look at it as the power grid of the software supply chain, focused on the assets you might call artifacts, software packages, or binaries. It's a universal, hybrid solution, serving today over 6,500 customers, including the majority of the Fortune 100. We are honored to have all of them in our portfolio, building the company together.
Great. And if I shift over to Ed for a second, just as a dust-off here — it feels like ancient history as we're going into the new year, but let's cycle back to Q3 for a second. It was a great quarter from an execution standpoint, right? But what were some of the underlying drivers that helped define the success and strength you saw in that quarter?
Yeah, well, great to see you, Mike, and thank you again for having us. I always enjoy speaking with you, and I'm certainly happy to speak about Q3. It was a phenomenal quarter across the board for all the growth drivers. I'll start with the cloud. We delivered 50% growth in the cloud. This was driven again by usage over our minimum commitments, but we also saw some expansion with our customers with larger commitments. From an enterprise perspective, we had really strong execution. We added 10 new customers over $1 million in the quarter and 25 on a year-over-year basis, so 54% growth in those $1 million customers. Not to mention what we do in the $100,000-to-$1 million cohort — quite a few adds there as well, over 150 adds in that cohort on a year-over-year basis, delivering 16% year-over-year growth.
And then E+: we see many customers now adopting the full platform. In E+, we now have 56% of our revenue coming from that full-platform subscription, with 39% year-over-year growth. So, execution across the board on all the key growth drivers. Not to mention we delivered very strong free cash flow as well as operating margin. So we continue to remain disciplined and very efficient in our business as we grow.
We'll unpack that — great overview. If I just come back to cloud for a second, right? It's accelerated throughout calendar 2025, based largely on consumption above commitments, and the migrations have been on pause now for some time. What has been the source of that improved demand, or consumption, on behalf of customers throughout the year?
So, Mike, what we are seeing is across all clouds, by the way — AWS, Microsoft Azure, GCP. JFrog is a multi-cloud solution, and also a hybrid solution that provides an on-prem option if needed. What we saw in the cloud specifically is that more developers are investing more time in adopting AI technologies. And the way to sense that is to look at the artifacts related to what we might call the AI world: Docker, npm, PyPI, Conda. These are the packages you will usually use when implementing AI technologies in your software supply chain. Bringing them from public hubs into Artifactory, or pushing and distributing them to your on-prem environment — that's the data transfer. Our cloud model is consumption-based on data transfer. And storing them in Artifactory is another element of consumption, which grows as well.
Customers, especially the annual-contract and enterprise customers, are committed to a certain quota based on what they predict they will need. Every usage over the commitment triggers another discussion about a possible upgrade or update. We were very pleased to see this growing over the past three quarters, and this is what we reported in Q3 as well.
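The commitment mechanics described here — a committed quota for data transfer and storage, with usage above quota triggering an upgrade discussion — can be sketched as a toy model. This is purely illustrative; the field names, quota figures, and units are hypothetical and not JFrog's actual billing logic.

```python
from dataclasses import dataclass

@dataclass
class CloudCommitment:
    """Illustrative annual-contract quota (hypothetical numbers and units)."""
    transfer_gb_quota: float   # committed data transfer, in GB
    storage_gb_quota: float    # committed storage, in GB

def usage_over_commitment(commitment, transfer_gb_used, storage_gb_used):
    """Return the overage per consumption dimension; any positive value
    is what would trigger the upgrade/update discussion described above."""
    return {
        "transfer_gb_over": max(0.0, transfer_gb_used - commitment.transfer_gb_quota),
        "storage_gb_over": max(0.0, storage_gb_used - commitment.storage_gb_quota),
    }

# Example: a customer committed to 10 TB transfer / 5 TB storage, but
# AI-related packages (Docker, npm, PyPI, Conda) pushed usage higher.
c = CloudCommitment(transfer_gb_quota=10_000, storage_gb_quota=5_000)
over = usage_over_commitment(c, transfer_gb_used=13_500, storage_gb_used=5_200)
```

In this sketch, both dimensions grow independently, which matches the observation that data transfer and storage are separate elements of consumption.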
Just to press on that demand for a second — it sounds like the demand we're seeing, from where we sit today, is durable, right? And I'm curious: are you seeing anything that gives you confidence, or moves you one way or another, on a potential change in the migration activity itself? Again, that has not been a tailwind for you — it's been neutral for what feels like two, maybe three years. But is there any movement there?
Yeah. So I'll tell you my observation — our observation — on what we see in the market. It used to be front and center for every CIO to speak about cloud migration: whether it's happening in the organization or not, and if so, what the path is and what needs to be done to implement cloud technologies in the enterprise. What we are seeing in 2025, mainly, is that the majority of the discussions with our customers — the CIOs and CISOs of the organization — are really about AI adoption. This is happening rapidly. There is a race to adopt AI and become more mature, but it doesn't come with AI implementation and adoption only. It comes with a very strong demand for the trust, security, and control that JFrog provides as part of the platform as well.
Ed also mentioned the performance of Enterprise Plus, a subscription that covers the entire platform. We see growth there, and it's less a yes-or-no discussion about migrating to the cloud. Actually, we called it out, I think, already in Q2, when our customers started to tell us that they are building something that will fit their purpose. Maybe it will be cloud, maybe multi-cloud, and maybe hybrid — not just cloud. We see them all very focused on implementing AI as part of their annual plan. Moving forward, when we ask them about the plan for 2026, it looks like this is going to be their front and center as well. Now, cloud migration might be part of it, but it's no longer a discussion just by itself.
It's coming with the new technology and the disruption in the market.
Just on the AI element as well, since we're already talking about it — can you help us think about the intensity of data transfer when we start thinking about AI, like an AI-powered application, or when you start embedding those GenAI technologies? Is the uplift to that data-transfer intensity material?
Yeah. So let's go down to the basics — I know there's a lot of AI fluff out there, because, you know, everybody wants to speak about AI, so just to be super authentic: AI generates more code, because every developer is now powered by a coding assistant that writes source code for her or for him. And that means that every time you compile this code, you create more binaries. More binaries, more JFrog. This is what we do. This is our forte. Aside from that, you bring AI packages from outside, whether they come from open source Hugging Face models, or from Docker containers when you ship AI, or when you create AI with Python scripts from the world of the PyPI hub.
Whether you bring it from outside, using JFrog as your proxy to cache those public hubs and bring in the open source artifacts of AI, or you create code — more code now, a thousand times faster, with Copilot or Cursor or Claude or the other tools that are creating code faster in the world of AI — once it's compiled, it becomes an artifact, it becomes a binary, and then it's stored and managed in JFrog, secured by JFrog, and distributed by JFrog.
Okay. And for the cloud again, I'm trying to think through the consumption element. When customers are engaging with your cloud, is most of the demand you're seeing coming from steady growth or expansion of existing projects, or are you really seeing an explosion of new projects coming online? How would you split the drivers behind that underlying consumption?
So — and I think you know us very well, Mike — we are very conservative with how we guide and how we project the future. We remove almost all the big projects that are being discussed with us, and de-risk them, in order to stay focused on the commitment of the customer. But what we are seeing is actually the result of two behaviors. One, the creation of more code creates more artifacts, and this is more business for JFrog. That might come from existing customers or from experiments with AI technology within certain organizations.
The second thing we see is that the automation of AI in the world of DevOps — AI for DevOps, AI for DevSecOps — generates a lot of traction, data transfer, and storage in terms of consumption, because now you have to handle a completely different volume when a machine is creating it for you. So we see both new experiments and the migration of legacy software into the AI world. It's yet too early to say that this is how it's going to be, but our customers understand that the implementation of AI has to come with efficiency and security. Without them, it's not going to happen, and they are investing a lot in setting up the system and the infrastructure for it.
Sorry to jump around, but maybe a go-to-market question here. I know, Ed, you were talking about those large commitments earlier. Has something changed, whether it's the process or the technologies you've enabled within your go-to-market organization, to go out and execute against those larger commitments? I ask just because the cadence has been so strong of late, and I'm wondering what you've done internally to help execute against that.
Yeah.
You want to go first, Ed?
Yeah, let me go ahead and take that, and then you can feel free to jump in. Well, first of all, the investments that we've made, Mike, were not something that was just done in 2025. Those investments have been made for many years prior. We brought in a security overlay team to educate our sales team, which wasn't necessarily versed in security or didn't have the contacts there. So that was step number one. Step number two, then, was to start incentivizing the sales team with targets focused specifically on security. Once you start to introduce those products to the market and know that they're sellable, you have to be able to target those and incentivize the sales team. And that's what we've done. We've been very effective with building the targets and incentivizing the team.
So there was nobody in JFrog who did not have a target, and they couldn't reach 100% of their commissions if they didn't sell a certain amount of security. So clearly there was a strategy and there was an execution, and the two came together and have done very well. This is why you're seeing the results. Some of the larger deals that we've talked about come with security, and there's motivation internally to close those deals. You have execution from a product, you have demand coming from the customer, and you have a sales team that has executed and converted the demand in the pipeline.
And if I may add to it, Mike — a go-to-market structure is not something you change in a day. We decided something like three years ago that we were going upmarket, that we were going to serve the enterprise. And JFrog was born as part of the community, as an open source solution. It was heavy lifting. We invested a lot on all fronts. When you serve the biggest enterprises in the world, you'd better come with a customer success team, with solution engineers and architects who know how to speak and think enterprise. You'd better come with a platform and not just a tool — not a point solution, but a holistic platform that covers all the needs, not only of the CIO, but also of the CISO and IT operations.
And when a technology like AI emerged, the solution had to fit what they will need in the future, not only what serves their legacy. So we invested in the go-to-market — in professional services, solution engineers, customer success, support, and obviously salespeople and field CISOs and field CTOs in different regions — to make sure that once this opportunity had not just emerged but matured, JFrog would be able to perform as we performed in the past three quarters.
If we start looking at product — I know you're talking about the platform here and the expansion, or breadth, you offer today. Let's tackle security, because we were talking about that overlay, right? When you are engaging with a customer, do you find an increasing volume of new logos actually landing with both Artifactory and your security offering, or does it remain mostly a cross-sell motion, given those 6,500-plus customers? How does the go-to-market, or the sales rep, engage a customer to educate them on that security and walk through the consolidation play you're offering?
Yeah, I'll start by saying that, first of all, we needed to ask ourselves authentically, internally: what is the value that we give to our customers? And the value to a developer who pays $1,000 or $2,000 a year is completely different from the value to an enterprise that pays $10 million a year. You have to build not only the right product for it, but also the right go-to-market and support system that comes with it and onboards our customers. So we invested a lot in, first of all, understanding that and putting different customers into different categories.
Some of our customers, as we spoke about at the end of 2024 — we just had to know that they would become self-service customers, and that we had to focus on the enterprise. This schizophrenia of enterprise and very small logos just doesn't work. It would stop us from performing not only on the business level, but also on the product level, because you don't build the same product for both groups. For example, a very small logo will care less about security. A very big enterprise would not even start the discussion with you if you come with yesterday's world of point-solution security. I think we invested in parallel on both sides, and we chose the enterprise as the road we should take.
From a competition standpoint — or, I guess, that consolidation — who is it you're competing against more frequently on security? Have competitive win rates been relatively consistent? Are they potentially improving given the maturity of your offering?
I'll try to be as focused as I can, but as you probably understand, as we expand our DevOps platform to DevOps and security, then DevOps, security, and governance, and then DevOps, security, governance, and MLOps, the list of competitors is growing, obviously — mainly in the world of DevSecOps, all the point solutions. We see them everywhere, on almost every presale call. Point solutions have no room to grow. CIOs and CISOs are looking for a holistic solution. They are looking for the fundamental part of the platform, which is the system of record. This is what JFrog provides them: the system of record for all binaries, the system of record for all software packages, the single source of truth. So if this is not protected, then what's the point? Now, all the point solutions are integrated with JFrog.
They built the integrations themselves, because they know that without access to Artifactory, the repository, they are blind. They cannot scan anything. They cannot provide the value to their customers. So when you ask a customer, would you prefer to take the repository, plus the security, plus the distribution piece, plus the governance piece from one vendor? The answer today is yes, whereas maybe a few years ago, best of breed was a bit more popular. So our competitors are mainly the point solutions in the world of security, and they are being displaced quite rapidly, I would say.
And then from an environment standpoint, if I'm thinking about demand here — there have obviously been some high-profile vulnerabilities in the news more recently. How much of a tailwind is that for you when going to market, driving cross-sell, and raising awareness for the product offering?
This is a tricky question, because the last incident happened in Q4, and we are not here to speak about Q4. The npm attack hit the world — Shai-Hulud hit the world — and a lot of CISOs woke up one morning understanding that their software supply chain and their system of record were not protected at all, even though they had more than five different point-solution tools. Our customers sent us emails: thank you, you saved my day, or you saved my Thanksgiving holiday. So I know that we built something that addressed a specific pain, and the pain was very much aligned with the value that we bring. What is it, in one word?
In the world of software, the only place the hacker, the attacker, will wait for you is your runtime environment. They don't have access to your offices. So they're waiting at the production environment, and they will attack from there. And what do you have in your production environment? Artifacts and binaries — the assets that JFrog manages for you. If they have access from your runtime environment to your repository, this is bad, because they got into your organization through a backdoor that they left in whatever public hub we spoke about. JFrog built JFrog Curation as passport control from the get-go: you cannot get in if you don't pass the company policy check.
JFrog Advanced Security and JFrog Xray scan your repository and the entire software supply chain, and JFrog Runtime secures your runtime and provides you with full visibility and traceability all the way back to the repository. We provide full, holistic software supply chain security — from a firewall before your software supply chain to a tool in your runtime environment — to make sure the binaries are protected 360°. So what we hear from the market is that a lot of point solutions just failed to protect them. And even worse, they alerted so many times with false positives that developers started to look at them as overhead. And then something like Shai-Hulud happened. So we have to think with the end in mind. The end in mind is how the hacker thinks, not how the developer thinks.
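The "passport control" idea — admitting a package into the repository only if it passes every company policy check — can be sketched as a simple gate. This is a toy illustration only: the field names, thresholds, and policy structure are hypothetical, not JFrog Curation's actual API.

```python
def passes_curation_policy(pkg, policy):
    """Toy 'passport control': admit a package only if it satisfies
    every company policy check before it enters the repository."""
    if pkg["max_cve_severity"] >= policy["block_severity_at_or_above"]:
        return False                    # known vulnerability is too severe
    if pkg["license"] not in policy["allowed_licenses"]:
        return False                    # license not on the approved list
    if pkg["flagged_malicious"]:
        return False                    # e.g. a Shai-Hulud-style worm payload
    return True

# Hypothetical company policy.
policy = {
    "block_severity_at_or_above": 9.0,  # CVSS-style score threshold
    "allowed_licenses": {"MIT", "Apache-2.0", "BSD-3-Clause"},
}

# A clean package gets in; a compromised one is stopped at the door,
# before it can ever reach the runtime environment.
good = {"name": "left-pad", "license": "MIT",
        "max_cve_severity": 0.0, "flagged_malicious": False}
worm = {"name": "compromised-pkg", "license": "MIT",
        "max_cve_severity": 9.8, "flagged_malicious": True}
```

The design point is the ordering: the gate sits in front of the repository, so a malicious package is rejected before it becomes a binary your production environment can pull.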
And all of the point solutions that we remember are point solutions that were built for the developer. If you don't have the hacker mindset, you cannot protect the developer. And this is the approach we took, plus the fact that we wanted to have a holistic solution. And, you know, touch wood, it worked very well. You should wait for the Q4 numbers, and we will share more.
Sounds good. One other element too, an important element of the story here is the partnership that you guys have with GitHub. Maybe as a reminder to the audience, can you walk us through how that partnership first came to be and then how it's evolved over time?
So we built a wonderful partnership with GitHub. JFrog and GitHub grew together from the world of CI/CD and DevOps and, you know, developers as the kings of the software supply chain. But there was one thing: even after we went public, we kept hearing people ask, what's the difference between source code and a binary? And when you tell a common person like myself that source code is the software you write in English, and once compiled it becomes a binary, it doesn't say a lot. When we built the integration with GitHub — as you remember, Mike, we announced the first integration in 2024 — the discussion on the customer side stopped.
They knew that GitHub is the best-of-breed platform for source code and JFrog is the best-of-breed solution for artifacts, for binaries, and we started to build with this understanding. So, for example, in security, there are some tools that protect your binaries and other tools that protect your source code. This definition was very helpful, because if you are a CIO, obviously you want to take the two number ones, and it was an easy bundle for them to take JFrog and GitHub. And when we needed to go not to the CISO or the CIO but to the developer, we at JFrog provided all the findings on a single pane of glass for the developer to see in GitHub, because they are not coming to Artifactory or to the JFrog Platform.
They are using their GitHub. So we provided all the information over there. Later on, we built the integration with Copilot. So it's not only that we find your vulnerabilities — we also have a way to code-fix your vulnerabilities with one click of a button, instead of the entire flow of replacing a package when you find a vulnerability or a malicious package. And later on, we started to build the relationship around artifact management across two different platforms that offer one platform experience for the developer.
So I think that, overall, the integration with GitHub — and we were also honored to be named their number one partner of the year — the traction around it, and the consolidation of two platforms into one experience changed a lot in the narrative of what we hear from our customers and prospects.
If I shift over to another part of the platform — MLOps is something you've spoken about as well, right? I'm trying to get a gauge: where are we, and when should we be thinking about the maturity of the JFrog MLOps offering and scaling it up? I know you obviously have the incentives for the sales force to sell security to hit quota. Should we be thinking you can run, or should be running, a similar playbook behind MLOps, or is it still too early at this point?
That's a wonderful question. The maturity of the MLOps capabilities in the JFrog Platform — that's done. We acquired Qwak AI in the summer of 2024. We worked for almost a year to integrate it completely into the platform, and it's now available; some of our customers are using it, and the capability is there. But the question is, do we think the market is mature enough to go full-blown machine learning operations? And the answer here is not yet — not yet. MLOps is becoming mature, and we hear it from our customers. We see the adoption in the market. We see how it has shifted.
When we ask our customers, "Okay, so what prevents you from taking it all the way to production?" we hear that basically the gap is in the security and the trust they have in the process. When we build a solution for MCP, whether it's an agent or a developer interacting with the JFrog Platform, that helps. When we scan your models, store your models, and serve as your system of record, that helps. When we provide you with a solution that not only runs experiments on models but also distributes the model to production, that helps. Slowly we are building the full suite of MLOps capabilities.
I think that this entire practice, as happened with DevOps 15 years ago, is becoming more and more mature, and MLOps is becoming more like the CI/CD of machine learning. I think it's much broader now. Some of these capabilities, as we speak, are being built at JFrog to reinforce the solution.
It'd be interesting to hear how customers are organized around MLOps, right? I know, with security as an example, JFrog needed to take a step back and think about: all right, security taps a different persona, a different budget; it speaks a different language, traditionally a bit more of a seat-based pricing model. How is MLOps set up within some of these enterprise organizations? Is it its own sleeve, or does it still tap predominantly into that DevOps budget where you guys have always been?
So, there are different types of customers that we meet. There are customers that will incubate the experience of using and adopting MLOps, and these are small accounts that might use the JFrog monthly solution in the cloud. And there are the enterprises that are going about adopting MLOps and AI a bit more strategically. What we are starting to see is that a lot of them are looking at a center of excellence that provides it as a service to the entire organization. So the persona is still the person who provides you with all the system-of-record services, whether you use it for Java packages or Hugging Face models or Docker containers. It sits under the CIO organization, and the CIO is also the budget owner. We are starting to speak with new power users, but that's not the persona.
The persona, for me, is the one who will maintain, pay, subscribe, expand, and renew. But we are starting to see power users coming from different disciplines. For example, data scientists. Data scientists are not personas we used to work with in the past. Now they're using Conda, they're using CRAN — packages that are more for data scientists — and of course Python. We are starting to see more security personas, but different ones. There is the AppSec guy, the immediate suspect who will speak with JFrog, but we are starting to see risk experts in the organization who are also looking at governance: if we automate it all with agents and machines, what type of governance and compliance can we build on top of it?
The third group is a group that is being introduced to us by partnerships. With GitHub, it's easy — developers are developers, whether they use JFrog or GitHub. But when we built the integration with ServiceNow and announced it, we got introduced to product managers, because these are the main personas who will use the ServiceNow flow. When we built the integration and partnership with NVIDIA, there was a completely different category of users — those building NIMs as the models they will use in the enterprise to perform better on the hardware. So the list is growing, but the one who holds the checkbook is still either the CIO or the CISO.
For the product — I'm trying to think — it's in the field currently, and you have customers using it. Is there any commonality among the customers using the solution, those early adopters? Do they have a similar complexion? And the follow-up to that is: what has the initial feedback been like since putting the product out in the field?
Yeah — 100% of them, 100% of them, will start by saying, if it's not secure, we cannot use it. And security in this domain is something where, if someone tells you they have all the answers, that's probably not an accurate answer, to be polite. We are learning what the risks are in the world of AI: what shadow AI means; how well you can scan a package today and say there might be an IP violation, or a license violation, or a vulnerability, or, God forbid, a malicious package. How do you even know that your team used AI? Because everybody can now use Copilot — going home and using Copilot to build better code.
Providing a security solution that sees the full threat holistically is top of mind for all of these users — more advanced users and early beginners alike. They start with this question: can we trust this to be part of our software supply chain solution?
And I'm seeing we have a couple of questions coming in. We probably have time for maybe two or three more at this point, so let's get to them. One on MLOps — and I guess we should just take a step back and calibrate. The question goes along the lines of: how is your MLOps solution different from what your platform does today with ML models imported from Hugging Face? So maybe we could take a step back and address: what is your MLOps solution actually doing for the client here?
Yeah. So, listen, the first integration that we built with Hugging Face was about how we can import the open source AI models for you from a public repository like Hugging Face — in a secure way, according to your policy, and in the most efficient way for you to manage and provide services internally in the organization. That was the first thing. Artifactory brought it and stored it, Xray scanned it — easy. Then our customers started to tell us: but listen, once we build with these models and create our own models, we want to run experiments, we want to run deployments, and we want a full 360° experience with the model lifecycle. This is why we acquired Qwak AI, and this is the implementation of the practices we expanded the platform with.
So basically, if you use JFrog today from the get-go — bringing the model from Hugging Face, storing it in Artifactory, scanning it with Xray, experimenting with it and distributing it with JFrog ML — you have a full solution that oversees the entire model lifecycle.
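The lifecycle just described — bring the model in, store it, scan it, then distribute it — is, at its core, an ordered pipeline in which a model advances one stage at a time and scanning gates distribution. A minimal sketch, with stage names of my own choosing rather than JFrog's terminology:

```python
# Minimal sketch of the model lifecycle described above: a model moves
# through fixed stages in order, and the scan result gates distribution.
STAGES = ["fetched", "stored", "scanned", "distributed"]

def advance(model):
    """Promote a model to the next lifecycle stage; a model that
    failed its scan is never promoted to 'distributed'."""
    i = STAGES.index(model["stage"])
    if i + 1 >= len(STAGES):
        raise ValueError("model already distributed")
    if STAGES[i + 1] == "distributed" and not model["scan_passed"]:
        raise ValueError("cannot distribute a model that failed scanning")
    model["stage"] = STAGES[i + 1]
    return model

# Walk a hypothetical model through the full lifecycle.
m = {"name": "example-model", "stage": "fetched", "scan_passed": True}
advance(m)   # fetched -> stored
advance(m)   # stored -> scanned
advance(m)   # scanned -> distributed
```

The one design point worth noting is that the scan gate sits immediately before distribution, mirroring the "secure before it reaches production" ordering of the described flow.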
And if I shift over — a question for Ed here, more to tie in the operating margins and the growth you have: how do you go through that balancing act between growth and profitability? And in tandem with that, could you touch on capital allocation?
Sure. So first off, you know, part of our DNA has always been to be very disciplined in the way we spend and to be highly efficient. You see this in the expansion of the margins over the last three years — over 1,600 basis points of margin expansion. We continue to look at what is going to drive top-line growth, and we're going to invest in those areas and continue to double down on them, but we're going to do it in a very smart way and make sure we remain efficient. So we'll continue to invest in strategic growth drivers — like security, and like capabilities for the cloud to continue to drive growth there — and we'll invest in R&D and sales and marketing to help fuel that.
From a capital allocation perspective, we continue to see strong free cash flow. We always look at opportunities in the M&A market — what's available, and what is going to fill gaps in what we have today in our products — and if those available companies or startups have a reasonable valuation, it might be something we would explore.
In this domain, first of all, the free cash flow performance is the result of discipline. It's discipline in collection; it's discipline in the relationship with the customer — the renewals and, at the end of it, the invoice that gets approved. And Ed's team did a phenomenal job there; you see the numbers. But when we look at inorganic growth, Mike, when we allocate this money, it's usually subject to two things: how fast we can get to market if we buy something from outside and expand our platform, and how it will change or expand the value we provide to our customers. And we all understand that in today's market, it's about the output and the value you give to your customers, and how fast you implement that.
We did quite well buying several companies, and, as Ed mentioned, we are looking at different targets to expand and reinforce our platform solution.
All right. And I think that's all we have time for, so we'll leave it there. Thank you again to everyone for joining and thank you to the JFrog management team. Thanks, Shlomi. Thanks, Ed.
Thank you. May the frog be with you.