
Hewlett Packard Enterprise Company (HPE)

Q1 2024 Earnings Call · Thu, Feb 29, 2024


Transcript

Operator

Good afternoon, and welcome to the First Quarter Fiscal 2024 Hewlett Packard Enterprise Earnings Conference Call. My name is Gary, and I'll be your conference moderator for today's call. At this time, all participants will be in listen-only mode. We will be facilitating a question-and-answer session towards the end of the conference. [Operator Instructions] As a reminder, this conference is being recorded for replay purposes. I would now like to turn the presentation over to your host for today's call, Ms. Shannon Cross, Senior Vice President, Chief Strategy Officer and Investor Relations. Please proceed.

Shannon Cross

Senior Vice President, Chief Strategy Officer and Investor Relations

Good afternoon. I'd like to welcome you to our fiscal 2024 first quarter earnings conference call with Antonio Neri, HPE's President and Chief Executive Officer; and Marie Myers, HPE's Chief Financial Officer. After 25 years on Wall Street and over 20 years covering HP, I'm very excited to join HPE as Chief Strategy Officer. I look forward to working with Jeff Kvaal and the rest of the IR team, and to seeing many of you in the months ahead.

Before handing the call to Antonio, let me remind you that this call is being webcast. A replay of the webcast will be available shortly after the call concludes. We have posted the press release and the slide presentation accompanying the release on our HPE Investor Relations web page. Elements of the financial information referenced on this call are forward-looking and are based on our best view of the world and our businesses as we see them today. HPE assumes no obligation and does not intend to update any such forward-looking statements. We also note that the financial information discussed on this call reflects estimates based on information available at this time and could differ materially from the amounts ultimately reported in HPE's quarterly report on Form 10-Q for the fiscal quarter ended January 31, 2024. For more detailed information, please see the disclaimers in the earnings materials relating to forward-looking statements that involve risks, uncertainties and assumptions. Please refer to HPE's filings with the SEC for a discussion of these risks. For financial information we have expressed on a non-GAAP basis, we have provided reconciliations to the comparable GAAP information on our website. Please refer to the tables and slide presentation accompanying today's earnings release on our website for details. Throughout this conference call, all revenue growth rates, unless otherwise noted, are presented on a year-over-year basis and adjusted to exclude the impact of currency.

Finally, Antonio and Marie will reference our earnings presentation in their prepared comments. With that, let me turn it over to Antonio.

Antonio Neri

President and Chief Executive Officer

Thank you, Shannon. Good afternoon, and thank you for joining us today. In the first quarter, we are proud to have outpaced our profitability expectations while advancing our long-term strategy. We also continued to scale our recurring revenue, achieving the second highest year-over-year growth rate since we started tracking ARR in late 2019. This is a promising indicator for our ongoing portfolio shift to higher-margin revenues. But overall, Q1 revenue performance did not meet our expectations.

During this call, I will address three key points. First, I will touch on our revenue, which was lower than expected in large part because networking demand softened industry-wide and because the timing of several large GPU acceptances shifted. Additionally, we did not have the GPU supply we wanted, curtailing our revenue upside. Second, I will address how we are streamlining our reporting segments, accelerating a new specialized sales model, managing our spending and reinforcing execution discipline. Third and most importantly, I will discuss the progress we're making in executing our long-term strategy, in which we remain confident.

Let me first address revenue in the quarter. Similar to peers in the market, we saw campus networking product demand weaken, and the decline later in the quarter was greater than expected. This was a large headwind relative to our expectations. Customers are taking longer to digest prior orders than we had anticipated, which partially offset the benefit of our backlog entering the quarter. Europe and Asia were areas of relative softness. We expect weakness in the networking market to persist, which is likely to impact revenue through fiscal year 2024. That said, we anticipate some improvement late in fiscal year 2024 as inventory clears and we ramp into the purchasing season for state and local education customers in the United States. AI server…

Marie Myers

Chief Financial Officer

Thank you, Antonio. I'm pleased to be with you today on my first earnings call as HPE's CFO. I've long admired HPE's impressive transformation, and there has never been a more exciting time to be part of this company. We have a growing addressable market, a proven strategy and a differentiated portfolio that is levered to long-term market trends around networking, hybrid cloud and AI. I believe we have a significant opportunity ahead of us, and I'm excited to partner with Antonio and the rest of the outstanding HPE team to capitalize on it and drive value for our shareholders.

As Antonio mentioned, we have much to be proud of. Financial highlights in the quarter included record gross margins and expense discipline, which helped lift non-GAAP EPS to the high end of our guidance range. Demand for our traditional server and storage products has stabilized. Demand for our HPE GreenLake offerings was evident in healthy ARR growth, and demand for our AI systems remains robust. However, demand in Intelligent Edge did soften due to customer digestion of strong product shipments in fiscal year '23, which is lasting longer than we initially anticipated and is the primary reason Q1 revenue came in below our expectations. GPU availability and deal timing also contributed. We are taking swift action to address these headwinds by curtailing costs and driving efficiencies across the business.

With that, let's take a closer look at the details of the quarter. Revenue fell 14% year-over-year in constant currency to $6.8 billion. Please recall that we had significant backlog consumption in Q1 '23, particularly in traditional servers and storage. Backlog has now largely normalized across our business, with the exception of our APU products. We have strong momentum in HPE GreenLake: ARR exceeded $1.4 billion in Q1. Storage and…

Operator

We will now begin the question-and-answer session. [Operator Instructions] The first question today is from Meta Marshall with Morgan Stanley. Please go ahead.

Meta Marshall

Analyst

Great, thanks. Maybe on the GPU delays that you're seeing as far as acceptances. You identified that power and some of these other things were the conditions behind the delays. But what are you seeing in terms of how long those delays, and the acceptances, are going to take?

Antonio Neri

President and Chief Executive Officer

Well, thank you, Meta. As I said in my prepared remarks, we had a couple of deals that slipped from Q1 into future quarters because customers are taking a little bit longer to prepare the data center space and get the power and cooling ready. Obviously, those deals will come as we complete those installations. And then on the GPU side, we continue to experience a tight environment, although we are seeing some improvements. We have a lot of GPUs that we have already built, but customers will take time to accept those systems. The reality is that we need more supply against the backlog we announced today, which was $3 billion at the end of Q1. So that's what we see today. As we go forward, we expect that improvement to happen, and that's why we are confident in the conversion of the GPU orders into revenue as we go along, not just because of GPU availability but also because of the acceptances.

Shannon Cross

Senior Vice President, Chief Strategy Officer and Investor Relations

Great, thank you very much, Meta. Gary, can we have the next question?

Operator

And the next question is from Amit Daryanani with Evercore. Please go ahead.

Amit Daryanani

Analyst

Thanks for taking my question. I guess, Antonio, if I look at the revenue shortfall in the January quarter, how much of that do you think is because your customers are pushing out their delivery schedules since they don't have power, versus you just not having enough GPUs, if you were to think about those two buckets? And then on the full year guide, perhaps I didn't appreciate this, but can you talk about what you're expecting the networking segment, Intelligent Edge, to do within the zero to 2% guide right now? Thank you.

Marie Myers

Chief Financial Officer

Hey, Amit, good afternoon. Nice to hear from you. I'll answer the first part of your question, around the revenue. In the first quarter, with respect to what drove the $300 million on the revenue, it was mostly networking. We did have one deal that moved out; I think Antonio mentioned that in his prepared remarks. In terms of how we're thinking about the second half of the guide, we are expecting a very strong second half, predominantly driven by AI systems revenue. We expect networking to be slightly more favorable in the back half, but we really see the trough for networking in Q2, Amit.

Shannon Cross

Senior Vice President, Chief Strategy Officer and Investor Relations

Great, Gary. Thank you, Amit. Can we have the next question?

Operator

And the next question is from Simon Leopold with Raymond James. Please go ahead.

Simon Leopold

Analyst

Thanks for taking the question. I wanted to see if we could drill down a little bit in terms of understanding what's changed in the Intelligent Edge versus 90 days ago. And two elements are crossing my mind. One is really around the pending Juniper deal, whether that's influencing customers to maybe hold off purchases because of the uncertainties that might be affecting their decision-making as to what happens after you're combined. And the other part is just wondering if there's inventory that's been sitting in the channel, why didn't you know about it or why didn't you see it? Just trying to get an understanding of sort of what you've learned over these last 90 days. Thank you.

Antonio Neri

President and Chief Executive Officer

Thank you, Simon. First of all, we saw an acceleration of the demand softness at the end of Q1, really in January, perhaps as people came back from the holidays, but the reality is we saw that as a headwind to our revenue in Q1. I have to say, we do not have a channel inventory problem. Actually, we are in a great position, particularly with enterprise customers and the enterprise product; we do not have that issue at this point in time. What we do see is customers taking longer to install the product we already shipped to them before they eventually go through the next cycle. And that's why, as Marie and I said, we're going to start seeing a slight improvement in the back half, with Q2 being the trough. Part of the back half is also the traditional buying season in the United States with state and local education. The pipeline is very good. We have not lost one single deal that I can point to, whether because of the slowdown, customers deferring, or the announcement of the acquisition of Juniper.

Marie Myers

Chief Financial Officer

And maybe, Simon, just to add to Antonio's comments, the only place where we saw a slightly elevated pocket of inventory was in SMB, which is a pretty small part of our business.

Antonio Neri

President and Chief Executive Officer

Yeah.

Shannon Cross

Senior Vice President, Chief Strategy Officer and Investor Relations

Thank you, Simon. Gary, can we have the next question?

Operator

And the next question is from Toni Sacconaghi with Bernstein. Please go ahead.

Toni Sacconaghi

Analyst

Yes, thank you. Sorry, I just have one clarification and a question. Marie, on the [Indiscernible], has the backlog drawdown contributed, I think, mid-single digits to Intelligent Edge, or something more broad than that? Can you just clarify exactly what the backlog contribution was? And I suppose there's none going forward. And then on my question: you sound pretty excited about sequential growth over the course of the year in both servers and storage. Maybe you can elaborate on why you see that. On servers, do you see sequential growth in traditional, non-accelerated servers [Indiscernible]? Thank you.

Marie Myers

Chief Financial Officer

Hi, Toni. Good afternoon. Maybe I'll take the first part of the question, and I'll turn to Antonio for the second part. Look, in terms of the backlog, we really don't disclose the backlog on Edge. But what we've said is that we've seen our backlog revert to normalized levels, with the exception, obviously, of our APU or AI systems. So that's how we're thinking about the backlog. In terms of context and commentary, you've seen that the market has definitely softened across the industry, and as Antonio said earlier, we saw that late in the quarter. That's how we characterize networking demand. We do expect the trough in Q2 and the back half to be slightly more favorable. So that's how we're thinking about the networking market playing out for the year. I'll turn to Antonio to comment on servers.

Antonio Neri

President and Chief Executive Officer

Yeah, Toni. Thank you for the question. On the server side, we see signs of stabilization now; it has been a couple of quarters of sequential order improvement. But the reality is that, as we said in our opening remarks, we continue to see the mix shift in traditional servers, as you call them, to Gen11. By the end of the year, we should be approximately 60% of the way there. Those servers come with a different set of structural configurations and pricing, which is higher. At the same time, we're going to see cost inflation. We believe that's going to be the case, and therefore we have to eventually pass those costs along as well. But the number of units has been very stable or slightly improving. That's an important indicator because ultimately it also drives the attach rate of our Operational Services, which in the quarter was very, very good. So that's why we are confident in sequential improvement from here on. And then on the storage side, AI is going to be a pull-through demand driver for us. We introduced a new offer specifically for file. And remember that HPE Alletra continues to grow from here on. A portion of that HPE Alletra revenue is also in the ARR, because the software is now completely disaggregated from the solution itself, which means you have the CapEx portion of the revenue recognized in quarter and the subscription part of the software amortized over the period of the contract. So that's why ARR is growing: because of the subscription in networking, which was up significantly, the storage, and obviously AI now also contributing to the ARR as well.

Shannon Cross

Senior Vice President, Chief Strategy Officer and Investor Relations

Thank you, Toni. Gary, can we have the next question?

Operator

The next question is from Aaron Rakers with Wells Fargo. Please go ahead.

Aaron Rakers

Analyst

Yeah, thanks for taking the question. I wanted to ask about the server market. Maybe two parts. I guess when we look at some of your peers, I mean, it seems to be that the lead times have improved on some of the GPUs, particularly the H100. I'd be curious of kind of like can you talk a little bit about what you've seen on lead times there in terms of your ability to deliver on some of this backlog? How that's changed over the course of this last quarter? And then any thoughts on traditional server recovery? How do we think about the pace of that embedded in your expectations looking through this year?

Antonio Neri

President and Chief Executive Officer

Yeah, sure. I think I covered the larger part of that question in answering Toni's question about sequential improvement in traditional servers, which are still very CPU-centric. On a combined server basis, 25% of the total volume is now APUs, of which GPUs are obviously the biggest portion. So we expect that sequential improvement driven by recovery in unit demand and then, obviously, the shift to Gen11, which is important in this transition. On GPU lead times, they have come down but are still elevated; we're talking about 20-plus weeks of lead time at least. And then it's going to be a combination of multiple types of GPUs, because there is still demand for the generation prior to the H100. Obviously, the majority of the demand today is on the H100. And going forward, we're going to have Grace Hopper, the H200 and others, including the MI300X and the like. The difference for us is that because we have a unique networking interconnect fabric, we can support all of them. That's an important differentiation that I think everyone needs to keep in mind, because while a lot of the volume today is NVIDIA, on the supercomputing side, which is also an AI business, by the way, we support all three of them. And so that gives us the optionality to convert the orders that we have, and the future orders we see in the pipeline, with a little bit more flexibility, I will say.

Shannon Cross

Senior Vice President, Chief Strategy Officer and Investor Relations

Thank you, Aaron. Gary, can we have the next question?

Operator

The next question is from Wamsi Mohan with Bank of America. Please go ahead.

Wamsi Mohan

Analyst

Yes. Thank you so much. You said some of your demand in AI systems is coming in via GreenLake. Can you help us understand the linkage between your view of AI revenue and ARR growth?

Antonio Neri

President and Chief Executive Officer

Yeah, Wamsi. I can start, and Marie, feel free to add. The fact of the matter is that when you look at that $4 billion in cumulative orders, a significant portion is going to go through HPE GreenLake. If you recall, last year I announced that a hyperscaler placed an order with us, and that order is going through the GreenLake platform. So that's why you see a portion of the AI GPU orders flowing over time through the ARR, which is fine. Ultimately, that gives us the ability to attach other services, which is important to remember here, because when it goes through HPE GreenLake, in many cases we are actually running those systems for the customer. It's not just shipping the system to the customer. We actually put it in a location where we have our data center footprint with our cooling and power, and then we attach our services, which are the run time plus other things we do. The growth in ARR is also important because it drives margin expansion and accretion over time. So that's what's going on, in addition to the fact that we have now crossed 31,000 customers on the HPE GreenLake platform. To put it in context, Wamsi, that's almost 3,000 customers in one quarter, up 8% quarter-over-quarter. And everything we do from a software perspective is now a subscription. Whether you sell HPE ProLiant Gen11 or, let's say, an AI-optimized server, the software to connect the server actually runs through GreenLake. Obviously, that software runs through GreenLake. A lot of the Aruba software, including Aruba Central, is a subscription, and now you have AI as well.

Marie Myers

Chief Financial Officer

And maybe, Wamsi, I'll just put a couple of numbers around the APU, or AI system, orders that we saw, too. We ended the quarter with $3 billion in backlog, so we nearly tripled year-on-year. In terms of the link back to ARR, we shipped around $400 million in revenue, but we had incremental revenue that went into ARR. That underscores the growth that we saw in ARR and expect to see going forward as well, Wamsi.

Shannon Cross

Senior Vice President, Chief Strategy Officer and Investor Relations

Thank you, Wamsi. Gary, can we have the next question?

Operator

The next question is from Samik Chatterjee with JPMorgan. Please go ahead.

Samik Chatterjee

Analyst

Hi, thanks for taking my question. I guess, Antonio, you referenced the increase in AI demand that you see related to inferencing workloads on the enterprise side. Can you maybe talk a bit about how these deployments look different from what you've been doing on the AI training side, perhaps with some hyperscalers? And given the lead times, is it fair to estimate that this demand will materialize in revenue more in fiscal '25? Thank you.

Antonio Neri

President and Chief Executive Officer

Yeah. No, thank you. That's an excellent question. I spoke about the AI life cycle: training, tuning and inferencing. The training side has been more focused on the hyperscalers, Tier 2 and Tier 3 type providers, or companies that are well funded to build these large language models. But when you look at the enterprise, most enterprises are going to take a model and fine-tune it, giving the model context with their data. And that can happen in multiple locations, right? It can happen in their data centers, or potentially in a colocation facility, or in some cases in the public cloud, but we see more focus on where they can control the data in a secure environment. And then the inferencing side can happen in a data center or in a public cloud once the models are trained, but also at the edge. In fact, we showcased a lot of inferencing use cases at the edge of the network at Mobile World Congress. Think about use cases like the Coles supermarkets, right? There is a lot of data in the video footage captured in the stores. That video footage needs to be inferenced right there, at that given moment, with zero latency, in order to deliver the outcome. There are similar cases in manufacturing and the like. And in fact, one of the use cases we saw for inferencing is a large bank that is now doing some fine-tuning and inferencing for risk management and other things. So I will say we are just getting into it. I will say the growth will happen in the second half and into '25. Definitely, the lead times will play a role. But I'm very encouraged about the momentum we see and the opportunity we have, also with the combination of Juniper, because most of this inferencing requires the network connectivity to deliver it. And that to me is one of the reasons why we went ahead with that acquisition.

Shannon Cross

Senior Vice President, Chief Strategy Officer and Investor Relations

Thank you, Samik. Gary, can we have the next question?

Operator

The next question is from Tim Long with Barclays. Please go ahead.

Tim Long

Analyst

Thank you. Just another one on the AI server side. Can you talk a little bit about how you're thinking about profitability for the business as we get more accelerated compute in your servers? And can you break that down between, say, the Cray business and standard compute? Is there going to be more of a margin gap between those two businesses when moving from more traditional to accelerated? Thank you.

Antonio Neri

President and Chief Executive Officer

Sure, I can start. I will say, listen, if you look at the server segment results we just published, we delivered very strong performance. We are in the target range we committed to a while back of 11% to 13%. The fact that we're bringing these businesses together gives us the flexibility and the opportunity to maximize the blended margin as we go forward. To give a reference, when you sell an EX system, generally it's a liquid-cooled system that tends to gravitate to the supercomputing side or large AI clusters of thousands of GPUs. An EX system today supports up to 80,000 GPUs in one system, and that's because of our interconnect fabric, HPE Slingshot. In fact, some of those systems have 80,000 GPUs and maybe 40,000 CPUs in one system. But then you have other customers at maybe 2,000 or 4,000 GPUs. And depending on which location they pick and whether they need liquid cooling, we deploy those. Now, generally speaking, the Cray XD platform is the one that has the density and is more oriented toward, and able to mix, many different configurations, and that's where the vast majority of the action is today in AI. ProLiant Gen11 is actually used more for inferencing, or in some areas for fine-tuning as well. So we have the flexibility to meet all those demands with our unique IP. And on top of that, we layer our machine learning development environment. In fact, there are customers that come to us just for the MLDE environment; later on, we pull through the server. Now, on AUPs, I will tell you that when you sell an XD, the AUP can be 20 times the value of a traditional server with CPUs. And for EX, it can be up to 35 times. So as we go forward, the ability to optimize margin through the configs and attach the services, whether it's our data center services plus the software and the Operational Services, allows us to really drive the best outcome for our shareholders.

Shannon Cross

Senior Vice President, Chief Strategy Officer and Investor Relations

Thank you, Tim. Gary, we'll take one final question.

Operator

And that final question will come from Lou Miscioscia from Daiwa Capital Markets. Please go ahead.

Louis Miscioscia

Analyst

Hey, thank you for taking my question. Antonio, since you're talking a lot about data centers, I'm wondering whether the vast majority of x86 applications are going to start to shift over to being accelerated with GPUs, given the concern about Moore's Law coming to an end? And what I'm asking about is not really inference and not training; these are just normal applications, the same way architectures shifted years and years ago from IBM mainframes or PA-RISC to x86, and eventually to cloud. Do you think that's going to shift over to running on GPUs?

Antonio Neri

President and Chief Executive Officer

Well, thank you for the question. I think we need to understand there are two worlds that will coexist. There is the cloud-native world. Think about the cloud-native world, where you have thousands and thousands of applications running on thousands and thousands of servers, and they share everything. That architecture will exist for a long, long time because it's cost efficient, and those applications were designed for that type of environment, having moved from the traditional monolithic approach to a more cloud-centric approach. And then you have these AI applications, where you may have one application, only one, running on thousands and thousands of servers with accelerated compute. It's a little bit far-fetched to say everything is going to move there. I argue that you will have inferencing solutions for which a CPU will be just fine. Think about your phone, right? The phone will have, at some point, the ability to manage a large language model of, let's say, 20 billion or 30 billion parameters, or the PC, maybe in the 80 billion to 100 billion parameter range. But when you go higher than that, you potentially need a server at the edge, and what I'll call an 8-way GPU system will be the right way to go. So I argue there will be a mix in this transition for a long period of time. Not everything will go to a GPU. It also depends on how these large language models and all the AI applications get constructed. Now, you made another interesting point, which I want to make sure all of you remember. We, as a company, now have two public instances of AI, powered with renewable energy, where we are supporting some of these customers, including a hyperscaler, and, going forward, enterprise customers, because they don't have the space, the cooling or the understanding of how to run these systems at scale. That's a unique differentiation Hewlett Packard Enterprise has, in addition to building systems and shipping them.

And I think that's an opportunity for us, because it will drive stickiness to our HPE GreenLake platform, which will drive recurring revenues as well as better attach of software and services down the road. And Juniper will play a huge role in that environment.

Shannon Cross

Senior Vice President, Chief Strategy Officer and Investor Relations

Thank you, Lou. Let me now turn it back to Antonio for concluding remarks.

Antonio Neri

President and Chief Executive Officer

Well, thank you, Shannon. And thank you, everyone. I know you have been covering multiple calls today and it's late on the East Coast, but I will leave you with a few comments. Number one, we have the right strategy and the right team at the right time. This quarter was obviously a little bit mixed because of the revenue. But remember, a lot of revenue also went through the ARR, so we need to understand that implication going forward. I'm very confident about the future. The moves we have made and continue to make, including the Juniper acquisition, will allow us to participate in this inflection point with unique IP. Everybody is obviously focused on this AI momentum on the server side, but you need more than servers. AI will drive the need for more ports, which means you need more networking bandwidth. That's for sure. Also, let's not forget, we need to do this responsibly. One of the things I'm really proud of about our company is the commitment to social responsibility: doing all of this while addressing sustainability and the ethical challenges and responsibility around AI. News came out just two weeks ago that HPE was ranked number one in the JUST Capital ranking, something to be proud of alongside shareholder value and all of that. We have to take some actions here. We are really focused on strong execution and discipline, something we have shown now for six-plus years. And that's why I'm confident in the adjusted guidance Marie and I provided. As we get into '25, obviously with the pending acquisition, I feel HPE will be in an even stronger position as we get through 2024. So thank you for your time, and I hope to connect with you soon.

Operator

Ladies and gentlemen, this concludes our call for today. Thank you. You may now disconnect.