Earnings Labs

Datadog, Inc. (DDOG)

Q4 2025 Earnings Call · Tue, Feb 10, 2026

$129.96 (-0.95%)


Stock performance: Same-Day -1.80% · 1 Week -6.08% · 1 Month -3.97% · vs S&P +0.34%

Transcript

Operator

Operator

Good day, and welcome to Datadog's Q4 2025 earnings conference call. [Operator Instructions] As a reminder, this call may be recorded. I would now like to turn the call over to Yuka Broderick, Senior Vice President of Investor Relations. Please go ahead.

Yuka Broderick

Senior Vice President, Investor Relations

Thank you, Michelle. Good morning, and thank you for joining us to review Datadog's fourth quarter 2025 financial results, which we announced in our press release issued this morning. Joining me on the call today are Olivier Pomel, Datadog's Co-Founder and CEO; and David Obstler, Datadog's CFO. During this call, we will make forward-looking statements, including statements related to our future financial performance, our outlook for the first quarter and fiscal year 2026 and related notes and assumptions, our product capabilities and our ability to capitalize on market opportunities. The words anticipate, believe, continue, estimate, expect, intend, will and similar expressions are intended to identify forward-looking statements or similar indications of future expectations. These statements reflect our views today and are subject to a variety of risks and uncertainties that could cause actual results to differ materially. For a discussion of the material risks and other important factors that could affect our actual results, please refer to our Form 10-Q for the quarter ended September 30, 2025. Additional information will be made available in our upcoming Form 10-K for the fiscal year ended December 31, 2025, and other filings with the SEC. This information is also available on the Investor Relations section of our website, along with a replay of this call. We will discuss non-GAAP financial measures, which are reconciled to their most directly comparable GAAP financial measures in the tables in our earnings release, which is available at investors.datadoghq.com. With that, I'd like to turn the call over to Olivier.

Olivier Pomel

Co-Founder & Chief Executive Officer

Thanks, Yuka, and thank you all for joining us this morning to go over what was a very strong Q4 and overall, a really productive 2025. Let me begin with this quarter's business drivers. We continue to see broad-based positive trends in the demand environment. With the ongoing momentum of cloud migration, we experienced strength across our business, across our product lines and across our diverse customer base. We saw a continued acceleration of our revenue growth. This acceleration was driven, in large part, by the inflection of our broad-based business outside of the AI-native group of customers we discussed in the past. And we also continue to see very high growth within this AI-native customer group as they go into production and grow in users, tokens and new products. Our go-to-market teams executed to a record $1.63 billion in bookings, up 37% year-over-year. This included some of the largest deals we have ever made. We signed 18 deals over $10 million in TCV this quarter, of which 2 were over $100 million and 1 was an 8-figure land with a leading AI model company. Finally, churn has remained low, with gross revenue retention stable in the mid- to high 90s, highlighting the mission-critical nature of our platform for our customers. Regarding our Q4 financial performance and key metrics, revenue was $953 million, an increase of 29% year-over-year and above the high end of our guidance range. We ended Q4 with about 32,700 customers, up from about 30,000 a year ago. We also ended Q4 with about 4,310 customers with an ARR of $100,000 or more, up from about 3,610 a year ago. These customers generated about 90% of our ARR. And we generated free cash flow of $291 million with a free cash flow margin of 31%. Turning to…

David Obstler

Chief Financial Officer

Thanks, Olivier. Our Q4 revenue was $953 million, up 29% year-over-year and up 8% quarter-over-quarter. Now to dive into some of the drivers of our Q4 revenue growth. First, overall, we saw robust sequential usage growth from existing customers in Q4. Revenue growth accelerated with our broad base of customers, excluding the AI natives, to 23% year-over-year, up from 20% in Q3. We saw strong growth across our customer base with broad-based strength across customer size, spending bands and industries. And we have seen this trend of accelerated revenue growth continue in January. Meanwhile, we are seeing continued strong adoption amongst AI-native customers with growth that significantly outpaces the rest of the business. We see more AI-native customers using Datadog, with about 650 customers in this group. And we are seeing these customers grow with us, including 19 customers spending $1 million or more annually with Datadog. Among our AI customers are the largest companies in this space, as today 14 of the top 20 AI-native companies are Datadog customers. Next, we also saw continued strength from new customer contribution. Our new logo bookings were very strong again this quarter, our go-to-market teams converted a record number of new logos, and average new logo land sizes continue to grow strongly. Regarding retention metrics, our trailing 12-month net revenue retention percentage was about 120%, similar to last quarter, and our trailing 12-month gross revenue retention percentage remains in the mid- to high 90s.

And now moving on to our financial results. First, billings were $1.21 billion, up 34% year-over-year. Remaining performance obligations, or RPO, was $3.46 billion, up 52% year-over-year. And current RPO growth was about 40% year-over-year. RPO duration increased year-over-year as the mix of multiyear deals increased in Q4. We continue to believe revenue is a…

Operator

Operator

[Operator Instructions] Our first question comes from Sanjit Singh with Morgan Stanley.

Sanjit Singh

Analyst · Morgan Stanley.

Congrats on a strong close of the year and a successful 2025. Olivier, I wanted to get your updated views in terms of where observability is headed. In the context of a lot of advancements when it comes to agentic frameworks, agentic deployments, the stuff that we've seen from Anthropic and new frontier models from OpenAI, just in terms of what this means for observability as a category, defensibility of it in terms of whether customers can use these tools to build homegrown solutions for observability? So just give your latest comments on defensibility of the category, and how Datadog may potentially have to evolve in this new sort of agentic era?

Olivier Pomel

Co-Founder & Chief Executive Officer

Yes. I mean, look, there's a few different ways to look at it. One is there's going to be many more applications than there were before. Like people are building much more and they are building much faster. We covered that in previous calls, but we think that this is nothing but an acceleration of the increase in productivity for developers in general, so you can build a lot faster. As a result, you create a lot more complexity because you build more than you can understand at any point in time. And you move a lot of the value from the act of writing the code, which now you actually don't do yourself anymore, to validating, testing, making sure it works in production, making sure it's safe, making sure it interacts well with the rest of the world, with end users, making sure it does what it's supposed to do for the business, which is what we do with observability. So we see a lot more volume there, and we see that as what we do basically where observability can help. The other part that's interesting is that a lot more happens within these agents and these applications. And a lot of what we do as humans now starts to look like observability. Basically, we're trying to understand what the machine does. We're trying to make sure it's aligned with us. We're trying to make sure the output is what we expected when we started, and that we didn't break anything. And so we think it's going to bring observability more widely into domains that it didn't necessarily cover before. So we think that these are accelerants, and we -- I mean, obviously, we have a [ horse ] in this one, but we think that observability and the contact between the code, the applications and the real world and production environment and real users and the real business is the most interesting, the most important part of the whole AI development life cycle today.

Sanjit Singh

Analyst · Morgan Stanley.

And maybe just one follow-up on that line of thinking. In a world where there's a greater mix between human SREs and agentic SREs, is there any sort of evolution that we need to think about in terms of whether it's UI or how workflows work in observability and how maybe Datadog sort of tries to align with that evolution that's likely to come in the next couple of years?

Olivier Pomel

Co-Founder & Chief Executive Officer

Yes, there's going to be an evolution, that's certain. There's going to be a lot more automation. We see it today -- all the signs we see point to everything moving faster, more data and more interactions, more systems, more releases, more breakage, more resolutions of those breakages, more bugs, more vulnerabilities, everything. So we see an acceleration there. At the end of the day, the humans will still have some form of UI to interact with all that. And a lot of the interaction will be automated by agents. So we're building the products to satisfy both conditions. So we have a lot of UIs, and we are able to present the humans with UIs that represent how the world works, what their options are, give them familiar ways to go through problems and to model the world. And we also are exposing a lot of our functionality to agents directly. We mentioned on the call, we have an MCP server that is currently in preview and that is really seeing explosive growth of usage from our customers. And so it's a very likely future that part of our functionality is delivered to agents through MCP servers or the likes. Part of our functionality is directly implemented by our own agents, and part of our functionality is delivered to humans with UIs.
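As a rough illustration of that last point -- functionality exposed to agents through an MCP server rather than a human-facing UI -- here is a minimal, generic sketch using the open-source MCP Python SDK. The server name, the tool, and the stubbed data are hypothetical; this is not Datadog's actual MCP server or API.

```python
# Illustrative only: a generic MCP server exposing one observability-style tool
# to an AI agent, using the open-source MCP Python SDK (FastMCP). The tool name
# and the hard-coded counts are hypothetical, not Datadog's MCP server.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("observability-demo")

@mcp.tool()
def error_rate(service: str, window_minutes: int = 15) -> float:
    """Return the fraction of failed requests for `service` over the window."""
    # A real implementation would query a metrics backend; stubbed here.
    fake_counts = {"checkout": (12, 4800), "search": (3, 9100)}
    errors, total = fake_counts.get(service, (0, 1))
    return errors / total

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable agent can call it directly.
    mcp.run()
```

An MCP-capable agent connecting to this process could then call `error_rate` as a tool, which is the kind of agent-facing delivery described above.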

Operator

Operator

Our next question comes from Raimo Lenschow with Barclays.

Raimo Lenschow

Analyst · Barclays.

Congrats from me as well. Staying on a little bit on that AI theme, Olivier, the 8-figure deal for a model company is really exciting. I assume they try to do it with some open source tooling, et cetera, but -- and actually went from like almost paying not a lot of money to paying you more money. What drove that thinking? What do you think what they saw that kind of convinced them to do that? And it's now the second one after the other very big model provider. So clearly, that whole debate in the market between, oh, you can do that on the cheap somewhere is not kind of quite valid. Could you speak to that, please?

Olivier Pomel

Co-Founder & Chief Executive Officer

I mean the situation is just very similar to every single customer we land. Every customer we land has some -- has had some homegrown. They have some open source. They might still run some open source; that's typically what we see everywhere. The idea that it's cheaper to do it yourself is usually not the case. Your engineers typically are very well compensated and a big part of the spend in these companies. Their velocity is what gates just about anything else in the business. And so usually, when we come in, when customers start engaging with us, we can very quickly show value that way. So it's not any different from what we see with any other customer. And also within the AI cohort, it's not original at all -- the AI cohort in general is a who's who of the companies that are growing very fast and that are shaping the world in AI, and they're all adopting our product for the same reasons, sometimes at different volumes because those companies have different scales, but the logic is the same.

Operator

Operator

Our next question comes from Gabriela Borges with Goldman Sachs.

Gabriela Borges

Analyst · Goldman Sachs.

Congratulations on the quarter. Oli, I wanted to follow up on Sanjit's question on how to think about where the line is between what an LLM can do longer term and the domain experience that you have in observability? If I think about some of Anthropic's recent announcements, they're talking about LLMs as a broader anomaly detection type tool, for example, on the security vulnerability management side. How do you think about the limiting factor to using LLMs as an anomaly detection tool that could potentially take share from observability over time in the category? And how do you think about the moat that Datadog has that offers customers a better solution relative to where the road map in LLMs can go long term?

Olivier Pomel

Co-Founder & Chief Executive Officer

Yes. So that's a very good question. We definitely see that LLMs are getting better and better, and we'll bet on them getting significantly better every few months as we've seen over the past couple of years. And as a result, they are very, very good at looking at broad sets of data. So if you feed a lot of data to an LLM and ask for an analysis, you're very likely to get something that is very good and that is going to get even better. So when you think of what we have that is fundamentally our moat here, there's 2 parts. One is how we are able to assemble that context so we can feed it into those intelligence engines. That's how we aggregate all the data we get, we parse out the dependencies, we understand how everything fits together, and we can feed that into the LLM. That's in part what we do today; for example, we expose these kinds of functionality behind our MCP server. And so customers can recombine that in different ways using different intelligence tools. But the other part, where we think the world is going for observability, is that right now the [ SDLC ] is accelerating a lot, but it's still somewhat slow. And so it's okay to have incidents and run post-hoc analysis on those incidents and maybe use some outside tooling product. Where the world is going is you're going to have many more changes, many more things. You cannot actually afford to have incidents to look at for everything that's happening in your system. So you need to be proactive. You'll need to run analysis in stream as all the data flows through; you'll need to run detection and resolution before you actually have outages materialize. And for that, you'll need to be embedded into the data plane, which is what we run. And you also need to be able to run specialized models that can act on that data, as opposed to just taking everything and summarizing everything after the [ fact ], and 15 minutes later. And that's what we're uniquely positioned to do. We are building that. We're not quite there yet, but we think that a few years from now, that's what the world is going to run, and that's what makes us significantly different in terms of how we can apply anomaly detection, intelligence and preemptive resolution into our systems.
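To make the in-stream versus post-hoc distinction concrete, here is a minimal sketch of "running analysis in stream": each point is judged against a rolling baseline the moment it arrives, rather than being collected and summarized after an incident. This is an illustrative toy, not Datadog's implementation; the window size, threshold, and sample data are arbitrary assumptions.

```python
# Illustrative sketch of in-stream detection: each data point is evaluated as it
# arrives against a rolling baseline. Window, threshold, and data are made up.
from collections import deque
from statistics import mean, pstdev

class RollingAnomalyDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.values = deque(maxlen=window)  # keep only recent points; O(1) memory
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous versus the recent window."""
        anomalous = False
        if len(self.values) >= 10:  # need a minimal baseline before judging
            mu, sigma = mean(self.values), pstdev(self.values)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.values.append(value)
        return anomalous

# Example: latency samples flow through; the spike at the end is flagged immediately.
detector = RollingAnomalyDetector()
stream = [102, 99, 101, 98, 103, 100, 97, 102, 99, 101, 100, 350]
flags = [detector.observe(x) for x in stream]
print(flags[-1])  # True: the 350 ms sample stands out from the rolling baseline
```

A batch, post-hoc approach would instead gather the same points and ask for an analysis after the fact; the in-stream version passes judgment on each point as it flows through.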

Gabriela Borges

Analyst · Goldman Sachs.

That makes a lot of sense. My follow-up...

Olivier Pomel

Co-Founder & Chief Executive Officer

By the way, the data points we're talking about are very real time, and they are many orders of magnitude larger in terms of data flows, data volumes than what you typically feed into an LLM. So it's a bit of a different problem to solve.

Gabriela Borges

Analyst · Goldman Sachs.

Yes. Super interesting. My follow-up for both you, Oli and David, you've mentioned a couple of times now some of the conversations you have with customers about value creation within the Datadog platform. Tell us a little bit about how some of those conversations evolve when the customer sees that in order to do observability for more AI usage, perhaps that Datadog bill is going up. What are some of the steps that you can take to make sure the customer still feels like they're getting a ton of value out of the Datadog platform?

Olivier Pomel

Co-Founder & Chief Executive Officer

Well, there's a few things. I mean, first, again, the rule of software always applies. There are only 2 reasons people buy your product: to make more money or to save money. So whatever you do, when customers use a new product, they need to see a cost savings somewhere or they need to see that they're going to get to customers they wouldn't get to otherwise. So we have to prove that. We always prove that. Any time a customer buys a product, that's what is happening behind the scenes. And in general, when customers add to our platform as opposed to bringing another vendor in or another product in, they also spend less by doing it on our platform.

Operator

Operator

Our next question comes from Ittai Kidron with Oppenheimer & Company.

Ittai Kidron

Analyst · Oppenheimer & Company.

Congrats, quite an impressive finish for the year. David, I wanted to dig in a little bit into your '26 guide. I just want to make sure I understand some of your assumptions. So maybe you could talk about the level of conservatism that you've built into the guide for the year? And also, you've talked about at least 20% growth for the core, excluding the largest customer, but what is it that we should assume for the large customer? And now when you look at the AI cohort, excluding this large customer, are there any concentrations evolving over there given your strong success there?

David Obstler

Chief Financial Officer

Yes. There are 3 questions in there. The first is overall on guidance, except what we're going to speak about next, we took the same approach as we looked at the organic growth rates and the attach rates and then the logo accumulation rates and discounted that. So for the overall business, which is quite diversified, we talked about diversification by industry, by geography, by SMB, mid-market and enterprise, we took the same approach. We noted that with the guidance being 18% to 20% and the non-AI or heavily diversified business being 20% plus, that would imply that the growth rate of that core business assumed in the guidance is higher than the growth rate of the large customer. It doesn't mean the large customer is growing any which way. It's just that in our consumption model, we essentially don't control that. And so we took a very conservative assumption there. And the last point, I think you mentioned is the highly diversified. We said 650 names in the AI is quite diversified, essentially would be very similar to our overall business, which we have a range of customers, but not the concentration level. And what we're seeing there is significant growth. But like our overall distributed customer base, a growth and then potentially some working on how the product is being used, but nothing out of the ordinary relative to the overall customer base in the very diversified AI set of customers outside the largest customer. Hopefully that's helpful.

Ittai Kidron

Analyst · Oppenheimer & Company.

Okay. That's great. Yes. And can you give us the percent of revenue of the AI cohort this quarter?

David Obstler

Chief Financial Officer

We didn't -- have not put it in there.

Operator

Operator

Our next question comes from Todd Coupland with CIBC.

Thomas Ingham

Analyst · CIBC.

I wanted to ask you about competition and how the LLM rise is impacting share shifts. Just talk about that and how Datadog will be impacted?

Olivier Pomel

Co-Founder & Chief Executive Officer

Yes. I mean, there hasn't been -- in the market with customers, there hasn't been any particular change in competition, in that we see the same kind of folks and the positions are relatively similar. And we are pulling away. We're taking share from anybody who has scale. And I know there's been noise. There were a couple of M&A deals that came up, and we got some questions about that. The companies in there were not particularly winning companies, nothing that we saw in deals, nothing that had a large market impact. And so we don't see that as changing the competitive dynamics for us in the near future. We also know that competing in observability is a very, very full-time job. It's a very innovative market. And we know exactly what it is we had to do and have to do to keep pulling away the way we are. And so we're very confident in our approach and what we're going to do in the future there. With the rise of LLMs, there's clearly more functionality to build and there are new ways to serve customers. We mentioned our LLM Observability product. There are a few other products in the market for that. I think it's still very early for that part of the market, and that market is still relatively undifferentiated in terms of the kinds of products they are, but we expect that to shake out more into the future. We think, in the end, there's no reason to have observability for your LLM that is different from the rest of your system, in great part because your LLMs don't work in isolation. The way they implement their smarts is by using tools, and the tools are your applications -- your existing applications or new applications you build for that purpose. And so you need everything to be integrated in production, and we think we stand on a very strong footing there.

Operator

Operator

Our next question comes from Mark Murphy with JPMorgan.

Mark Murphy

Analyst · JPMorgan.

Olivier, Amazon is targeting $200 billion in CapEx this year. If you include Microsoft and Google, that CapEx is going to exceed $500 billion this year for the big 3 hyperscalers, it's growing 40% to 60%. I'm wondering if you've collected enough signal from the last couple of years of CapEx, that trend to estimate how much of that is training related and when it might convert to inferencing where Datadog might be required? In other words, are you looking at this wave of CapEx and able to say it's going to create a predictable ramp in your LLM observability revenue? Maybe what inning of that are we in? And then I have a follow-up.

Olivier Pomel

Co-Founder & Chief Executive Officer

I think it's a bit too reductive to peg that on LLM observability. I think it points to way more applications, way more intelligence, way more of everything into the future. Now it's kind of hard to directly map the CapEx from those companies into what part of the infrastructure is actually going to be used to deliver value 2 or 3 or 4 years from now. So I think we'll have to see what the conversion rate is on that. But look, it definitely points to very, very, very large increases in the complexity of the systems, the number of systems and the reach of the systems in the economy. And so we think it's going to be of great help to our business, let's put it this way.

Mark Murphy

Analyst · JPMorgan.

Yes. Great help. Okay. And then as a quick follow-up, there is an expectation developing that OpenAI is going to have a very strong competitor, which is Anthropic kind of closing the gap, producing nearly as much revenue as OpenAI in the next 1 to 2 years. You mentioned an 8-figure land with an AI model company. I'm wondering, if we step back, do you see an opportunity to diversify that AI customer concentration, whether sometimes it might be a direct customer relationship there? Or it could be some of the products like Claude Code being adopted globally, just kind of creating more surface area to drive business to Datadog. Can you comment on maybe what is happening there among the larger AI providers or whether you can diversify that out?

Olivier Pomel

Co-Founder & Chief Executive Officer

Yes. I mean, look, we've never been -- we're not built as a business to be concentrated on a couple of customers. That's not how we've become successful. That's probably not how we'll be successful in the long term. So yes, I mean, at the end of the day, it would be irrational for customers -- for all customers in the AI cohort not to use our product. So we have some great successes with the customers currently in that cohort. We see more. By the way, we have more inbound there and more customers talking to us, even from the largest, hyperscaler-level AI labs. And we expect to drive more business there in the future. I think there's no question about that.

David Obstler

Chief Financial Officer

And you're seeing that in some of the metrics we've been giving in terms of the number of AI-native customers, the size of some of these customers. So to echo what Oli said, we are essentially selling to many of the largest players, which results in greater size of the cohort and more diversification.

Operator

Operator

Our next question comes from Matt Hedberg with RBC.

Matthew Hedberg

Analyst · RBC.

Congrats from me as well. David, a question for you. Your prior investments are clearly paying off with another quarter of acceleration, and it seems like you're going to continue to invest in front of the future opportunity. I think op margins are down maybe 100 basis points on your initial guide. I'm curious if you can comment on gross margin expectations this year, and how you also might realize incremental OpEx synergies by using even more AI internally?

David Obstler

Chief Financial Officer

Yes. On the gross margin, I think what we said is plus or minus the 80% mark. When we see opportunities for efficiency, we've been quite good at being able to harvest them. At the same time, we want to make sure we're investing in the platform. So I think where we are today is very much in line with what we said we're targeting. There may be opportunities longer term, but we also are trying to balance those opportunities with investment in the platform. And in terms of AI, to date, we are using it in our internal operations. So far, the first signs of what we're seeing are productivity and adoption. We will continue to update everybody as we see opportunities in terms of the cost structure. Oli, anything else you want to go over?

Olivier Pomel

Co-Founder & Chief Executive Officer

Yes. I mean, look, the expectation in the short- to mid-term anyway should be that we keep investing heavily in R&D. We're getting a lot -- we see great productivity gains with AI there, but at this point, it helps us build more, faster, and get to solve more problems for our customers. And we're very busy adopting AI across the organization.

Operator

Operator

Our next question comes from Koji Ikeda with Bank of America.

Koji Ikeda

Analyst · Bank of America.

Olivier, maybe a question for you. A year ago, you talked about how -- while some customers do want to take observability in-house, it's really a cultural choice. It may not be rational unless you have tremendous scale, access to talent and growth is not limited by innovation bandwidth, which most companies do not. And so it is a year later, and it does seem like the industry and the ecosystem and everything has changed quite a bit. So I was hoping to get your updated views on these thoughts, if it has changed at all over the past year and why?

Olivier Pomel

Co-Founder & Chief Executive Officer

No. I mean, look, it's something that happens sometimes, but it's a small minority of the cases. Like the general notion is customers start with some homegrown or attempts to do things themselves, then they move to our product, then they scale with our products. Sometimes they optimize a little bit along the way, but the general notion is they do more and more with us. They rely on us for more of their -- solving more of their problems, and they outsource the problem and increasingly the outcomes to us. So I don't think that's changing. Look, we'll still see customers here and there that choose to in-source it and do it themselves, again, largely for cultural reasons. I would say, economically or from a focus perspective, it doesn't make sense for the vast majority of companies. And we even see teams at hyperscalers that have all the tooling in the world, all the money in the world, all the know-how in the world and that still choose to use our product because it gives them a more direct path to solving their problems.

Operator

Operator

And our next question comes from Peter Weed with Bernstein Research. Our next question comes from Brad Reback with Stifel.

Brad Reback

Analyst · Stifel.

Oli, the sustained acceleration in the core business is pretty impressive. Obviously, you all have invested very aggressively in go-to-market over the last kind of 18 to 24 months. Can you give us a sense of where you are in that productivity curve? And if there's additional meaningful gains, you think? Or is it incremental? And maybe where you see additional investments in the next 12 to 18 months?

Olivier Pomel

Co-Founder & Chief Executive Officer

Yes. I mean we feel good about the productivity. I think the main driver for us in the future is we still need to scale, and we're still scaling the go-to-market team. We're not at the scale we need to be in every single market segment we need to be in the world right now. And so we keep scaling there. So the focus now is not necessarily to improve productivity, it's to scale while maintaining productivity. And of course, there are so many, many things we can do. Even though we love our performance, there's always a bunch of things that could be better, territories that could be better, productivity that could be better, things like that. So we have tons of things we want to do, tons of things we want to fix, tons of things we want to improve. But overall, we feel good about what happened. We feel good about scaling, and you should expect more scaling for us on the go-to-market side in the year to come.

Operator

Operator

Our next question comes from Howard Ma with Guggenheim.

Howard Ma

Analyst · Guggenheim.

I have one for Olivier. The core APM product is growing in the mid-30% range. That is pretty impressive, and I think better than maybe a lot of us expected. The question is, is that a reacceleration? And is the growth driven by AI-native companies that are using Datadog's real user monitoring and other DEM features, as opposed to core enterprise customers that are building more applications?

Olivier Pomel

Co-Founder & Chief Executive Officer

Yes. I mean, look, APM, in general, has always been a bit of a steady Eddie in terms of growth; it's a product that takes a little bit longer to deploy than others because it sits further into the applications. And so it takes a little bit longer to penetrate within the customer environment. That being said, there are a number of different things we did that helped with the growth there. One is we invested a lot in actually making that onboarding and deployment a lot simpler and faster. So we think we have the best in the market for that, and it shows. Second, we invested a lot in the digital experience side of it. It's very differentiated, something our customers love, and it's driving a lot of adoption of the broader APM suite, and we expect to see more of that in the future. And third, we made investments in go-to-market. We cover the market better. And so we're getting more looks at more deals in more parts of the world. And so all of that combined helps that product reaccelerate growth quite a bit. And so we feel actually very, very good about it, which is why we keep investing. Overall, we still only have a small part of the pure APM market. That market is sized at about $10 billion, including DEM, but the broader market is larger. And so we think there's a lot more we can do there.

David Obstler

Chief Financial Officer

I want to add, as Oli just mentioned, that we're not fully penetrated across our customer base, and therefore, we're continuing to consolidate onto our platform. So we have quite a number of wins where we already have other products. We already have infra and logs, and we're consolidating APM.

Howard Ma

Analyst · Guggenheim.

David, as a follow-up for you on margin, are the large AI-native customers significantly dilutive to gross margin? And when you think about the initial 2026 margin guide, how much of that reflects potentially lower gross margin tied to those customers versus incremental investments?

David Obstler

Chief Financial Officer

On a weighted average, they're not. As we've always said, for larger customers, it isn't about the AI-natives or non-AI-natives, it has to do with the size of the customer. We have a highly differentiated -- diversified customer base. So I would say we're essentially expecting a similar type of discount structure in terms of size of customer as we have going forward. And there are consistent ongoing investments in our gross margin, including data centers and development of the platform. So I think it's more or less what we've seen over the past couple of years, not really affected by AI or non-AI native.

Operator

Operator

Our next question comes from Peter Weed with Bernstein Research.

Peter Weed

Analyst · Bernstein Research.

Can you hear me this time?

Olivier Pomel

Co-Founder & Chief Executive Officer

Yes.

Peter Weed

Analyst · Bernstein Research.

Yes, apologies for the last time. Great quarter. Looking forward, I think one of your most interesting exciting opportunities really is around Bits AI and I'd love to hear kind of like how you think that opportunity shapes up? Like how do you get paid the fair value for the productivity you're bringing to the SRE and the broader operations team and really how you see competition playing out in that space because obviously, we've seen start-ups coming in. There's questions about Anthropic and where they want to go. How does Datadog really capture this value and protect it for the business?

Olivier Pomel

Co-Founder & Chief Executive Officer

Yes. I mean, look, the way we currently sell a lot of these products is you show the difference in time spent. When the alternative is you try and solve a problem yourself, and you have an outage, and you start a bridge, and you have 20 people on the bridge, and they look for 3 hours for the root cause, and you wake up people in the middle of the night for that -- it's very expensive. It takes a lot of time. There's a lot of customer impact because the outages are long. And if the alternative is in 5 minutes you have the answer, you only get 3 people looking that are the right folks, and you have a fix within 10 minutes -- shorter impact on the customer, many, many fewer folks internally involved, lower cost. So it's fairly easy to make that case. And so that's how we sell the value there.

Longer term, as I was saying earlier, right now the state-of-the-art for incident resolution is post-hoc. You have an incident and you look into it. You diagnose it and then you resolve it. So yes, maybe you can cut the customer impact from 1 hour to 15 minutes. But you still have an issue, you still have impact, you still distract the team, you still have teammates working on that. I think longer term, what's going to happen is the systems will get in front of issues. They will auto-diagnose issues. They will help pre-mitigate or pre-remediate potential issues. And for that, the analysis will have to be run in stream, which is a very different thing. You can massage data and give it to an LLM for post-hoc analysis, and a lot of the value is going to be in gathering the data, but you also have quite a bit of value in the smarts that are done in the back end by the LLM for that. And that's something that is done by the Anthropics, the OpenAIs of the world today. I think as you look at being in-stream, looking at 3, 4, 5 orders of magnitude more data, looking at the data in real time, and passing judgment in real time on what's normal, what's anomalous and what might be going wrong, doing that hundreds, thousands, millions of times per second, I think that's what is going to be our advantage and where it's going to be much harder for others to compete, especially general-purpose AI platforms.
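For rough scale, the incident example above works out as follows (a back-of-envelope illustration using only the figures cited in the answer, and assuming the 3 responders are involved for roughly 15 minutes end to end):

$$20 \text{ people} \times 3\,\text{h} = 60 \text{ person-hours} \quad \text{vs.} \quad 3 \text{ people} \times 0.25\,\text{h} = 0.75 \text{ person-hours},$$

roughly an 80x reduction in engineering time, on top of the outage itself shrinking from hours to minutes.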

Operator

Operator

Our next question comes from Brent Thill with Jefferies.

Brent Thill

Analyst · Jefferies.

David, I think many gravitate back to that mid-20% margin you put up a couple of years ago. And I know the last couple of years, including the guide are looking at low 20%. Can you talk to maybe your true north, how you're thinking about that, obviously, growth being #1, but how you're thinking about the framework on the bottom line?

David Obstler

Chief Financial Officer

Yes, the framework is we try to plan with more conservative revenues, understanding that if the revenues exceed the targets that we give, it's difficult in the short term to invest incrementally. So what we're trying to do is invest first in the revenue growth and then layer in additional investment as we see -- if we see revenue in excess of target. So generally, it reflects the continued investment, which we think is paying off, both in the platform and R&D as well as in go-to-market, including AI. And then as we've seen over the years in our beat and raise, we've tended to have some of that flow through into the margin line and then re-up again for the next phase of growth.

Brent Thill

Analyst · Jefferies.

And any big changes in the go-to-market or big investments you need to make, David, this year to address what's happened in the AI cohort or not?

David Obstler

Chief Financial Officer

We're continuing. It's very similar to what we're doing, which is to try to work with clients to prove value over time; that manifests itself in our account management and our CSM teams as well as our enterprise teams. So no, I think for this year, we are looking at capacity growth, including geographic, deepening the ways we interact with customers, expanding channels, very much similar to what we've done in the previous years.

Olivier Pomel

Co-Founder & Chief Executive Officer

That's going to be it for today. So on that, I'd like to thank all of you for listening to this call, and I think we'll meet many of you on Thursday for our Investor Day. So thank you all. Bye.

David Obstler

Chief Financial Officer

Thank you.

Operator

Operator

Thank you for your participation. You may now disconnect. Everyone, have a great day.