
Penguin Solutions, Inc. (PENG)

Q2 2026 Earnings Call · Wed, Apr 1, 2026

Transcript

Operator

Operator

Ladies and gentlemen, thank you for joining us, and welcome to Penguin Solutions Second Quarter Fiscal 2026 Earnings Call. [Operator Instructions] I will now hand the conference over to Suzanne Schmidt, Investor Relations. Suzanne, please go ahead.

Suzanne Schmidt

Investor Relations

Thank you, operator. Good afternoon, and thank you for joining us on today's earnings conference call and webcast to discuss Penguin Solutions Second Quarter fiscal 2026 results. On the call today are Kash Shaikh, Chief Executive Officer; and Nate Olmstead, Chief Financial Officer. You can find the accompanying slide presentation and press release for this call on the Investor Relations section of our website. We encourage you to go to the site throughout the quarter for the most current information on the company. I would also like to remind everyone to read the note on the use of forward-looking statements that is included in the press release and the earnings call presentation. Please note that during this conference call, the company will make projections and forward-looking statements, including, but not limited to, statements about the market demand, technology shifts, industry trends and the company's growth trajectory and financial outlook, business plans and strategy, including investment plans, product development and road map, anticipated sales, orders, revenue and customer growth and diversification and existing and potential strategic agreements and collaborations. Forward-looking statements are based on current beliefs and assumptions and are not guarantees of future performance and are subject to risks and uncertainties, including, without limitation, the risks and uncertainties reflected in the press release and the earnings call presentation filed today as well as in the company's most recent annual and quarterly reports. The forward-looking statements are representative only as of the date they are made, and except as required by applicable law, we assume no responsibility to publicly update or revise any forward-looking statements. We will also discuss both GAAP and non-GAAP financial measures. Non-GAAP measures should not be considered in isolation from, as a substitute for or superior to our GAAP results. 
We encourage you to consider all measures when analyzing our performance. A reconciliation of the GAAP to non-GAAP measures is included in today's press release and accompanying slide presentation. And with that, let me now turn the call over to Kash Shaikh, CEO. Kash?

Kash Shaikh

Chief Executive Officer

Good afternoon. Thank you for joining our second quarter FY '26 earnings call. This is my first earnings call as CEO of Penguin Solutions, and I'm excited to step into this role. I want to start by thanking Mark Adams for his leadership and for the strong foundation he built. Since joining in early February, I've spent significant time with customers, partners and our teams around the world. I've witnessed the strength of the company, both in our technology and our customer relationships. What is clear is this: AI is moving from experimentation to production, with workloads increasingly shifting towards real-time inference. We are already seeing this translate into customer demand beyond hyperscale, across enterprise, neocloud and sovereign AI markets. We expect this transition to expand our addressable market and drive increased demand for integrated AI infrastructure, where Penguin is already winning. We see this firsthand in the breadth of our deployments, from a sovereign AI factory, Haein, in South Korea, to enterprise voice AI with Deepgram, to large-scale research systems with Georgia Tech, along with a growing pipeline across all 3 market segments. What makes this opportunity so significant is that the architecture of AI is also changing. Model training was largely compute bound; inference powering agentic AI is memory bound and latency sensitive. We believe this is driving a rearchitecture of the data center across compute, memory, interconnect and software. We also see AI driving memory demand, not only for the high-bandwidth memory, or HBM, used with GPUs or other accelerators, but also for general-purpose memory. General-purpose compute wraps around every GPU build-out, and whether it's reinforcement learning pipelines or inference serving, that workload runs on processors backed by significant memory content across the entire system. So while memory markets are cyclical, we believe AI is adding…

Nate Olmstead

Chief Financial Officer

Thanks, Kash. I will focus my remarks on our non-GAAP results, which are reconciled to GAAP in our earnings release tables and in the investor materials available on our website. With that, let me now turn to our second quarter results. In the quarter, total Penguin Solutions net sales were $343 million, down 6% year-over-year. Non-GAAP gross margin came in at 31.2%, which was up 0.4 percentage points versus Q2 last year. Non-GAAP operating margin was 13.2%, down 0.2 percentage points versus last year, and non-GAAP diluted earnings per share were $0.52, flat year-over-year. In the second quarter of fiscal 2026, our overall services net sales totaled $64 million, up 1% versus the prior year. Product net sales were $279 million in the quarter, down 8% versus the prior year. Net sales by business segment were as follows: In Advanced Computing, Q2 net sales were $116 million, which was 34% of total company net sales and down 42% year-over-year. This sales decline reflects both the ongoing wind down of our Penguin Edge business and hyperscale hardware sales in Q2 last year, which did not recur in Q2 this year. Drilling down deeper into our advanced computing results, our non-hyperscale AI/HPC net sales were down 35% year-over-year in the quarter, but up 50% for the first half of the year. Given the project nature of the business, where sales can be lumpy from one quarter to the next, we believe looking at the multi-quarter trend is a helpful way to evaluate the growth in this portion of our business. In addition to solid first half growth in our non-hyperscale AI/HPC business, we continue to make good progress on diversifying our net sales to new customer segments. For the first half of the year, the non-hyperscale AI/HPC business represented more than…
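The segment figures quoted above are internally consistent; a quick sanity check, using only the dollar amounts and percentages stated on the call (the script itself is illustrative, not part of the company's materials):

```python
# Consistency check of the reported Q2 FY'26 figures (all amounts in $M, non-GAAP).
total_net_sales = 343      # total Penguin Solutions net sales
services = 64              # services net sales
products = 279             # product net sales
advanced_computing = 116   # Advanced Computing segment net sales

# Services plus product net sales should equal the total.
assert services + products == total_net_sales

# Advanced Computing's share of total net sales was reported as 34%.
share = advanced_computing / total_net_sales
print(f"Advanced Computing share: {share:.1%}")  # ~33.8%, rounds to the stated 34%
```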

Operator

Operator

[Operator Instructions] Your first question comes from the line of Katherine Murphy from Goldman Sachs.

Katherine Campagna

Analyst

I'll ask about the raised Memory segment outlook for 65% to 75% growth. How much of this is from increased favorable pricing versus demand for new product categories? And as a follow-up, how should we think about the impacts to the operating margin outlook for this segment and the investments that need to be made into new technologies like CXL and photonic memory appliances?

Nate Olmstead

Chief Financial Officer

Kath, it's Nate. So on the memory outlook, listen, we're really pleased with the demand that we're seeing as well as the favorability that we see in the pricing environment. I would say for the increase that we're seeing in the second half, that's majority pricing, but demand is also very strong across telco and networking, and AI-driven demand is just very strong. In fact, getting to the high end of that outlook really just comes down to our ability to secure materials, which is really the only inhibitor we see right now to raising that outlook here in the second half. So we're chasing materials. We're using the balance sheet to strategically purchase ahead where we can, but the demand is very strong in memory. In terms of the investments, we've reflected them in the outlook. So I kept the OpEx for the year at $250 million, plus or minus $5 million. We're balancing the portfolio, as we always do, to look for opportunities to accelerate our investments in innovation in AI or in the memory solutions that we've been talking about. But that's all included in the outlook. I expect the operating margins for memory to remain pretty healthy in the back half of the year. I do expect some pressure on gross margins in AI as we see a higher mix of new hardware shipments in the second half, as well as factoring in some of the higher memory input costs that we have in that business.

Operator

Operator

Sorry, your next question comes from the line of Brian Chin. We're experiencing some mild technical difficulties. My apologies. Your next question comes from the line of Brian Chin from Stifel.

Brian Chin

Analyst

Maybe first question, I guess, in Advanced Computing, what changed that caused you to lower the midpoint of your prior guidance to the new range you've communicated? And can you describe how booked you are to that midpoint of that new range?

Kash Shaikh

Chief Executive Officer

So one of the main factors is the lag between our bookings and the revenue. Our revenue lags about 3 to 6 months from the time of the bookings, driven primarily by the timing of deployments and, in some cases, material availability. And given where we are in terms of our fiscal year, we have 5 months remaining. So going forward, most of the bookings that we are expecting may not materialize into revenue in the second half of this fiscal year, but we believe they will have a positive impact, obviously, going into the first half of the next fiscal year. So that's one of the reasons we are lowering the guidance for Advanced Computing: the timing of deployments. But we are seeing strong momentum in our pipeline as well as our bookings. Bookings grew very significantly in Q2 for the non-hyperscale AI/HPC business, which is very strategic for us, and we are encouraged to see the progress. We closed 5 new logos in AI/HPC in Q2, and in the first half, that takes the total to 7 new logos as compared to 3 new logos last year. So we are very confident in our ability to execute. The main issue at this point is timing.

Brian Chin

Analyst

Okay. Yes. I appreciate that, Kash. And it sounds like you're pretty well booked into the fiscal second half lowered outlook and that some of these new bookings are more kind of beyond a 6-month window. Also thinking about growth in the business, obviously, there's that sort of headwind that you helped to clarify in terms of reduction in hardware revenue to the new hyperscaler, the wind down of Penguin Edge. And so 30 percentage point impact, if we kind of net that against the guidance, maybe 10% growth for this year, net of that in that segment. So moving forward, as you survey the business and you haven't been in the role that long, and you think about what that sort of apples-to-apples growth rate was or is tracking to for this fiscal year, how are you thinking about sort of target growth rates for the advanced computing business moving forward?

Kash Shaikh

Chief Executive Officer

So overall, let me give you a data point. In the first half of this fiscal year, our net sales grew about 50% year-over-year for the non-hyperscale AI/HPC business, representing 40% of the overall mix of Advanced Computing, which is almost 2x what we closed last fiscal year. So the growth is substantial in terms of both the bookings and the revenue that we see, and we expect that to continue as we keep closing bookings and converting the pipeline. We see a strong pipeline across all 3 segments that I mentioned: enterprise on-prem AI deployments, significant activity with sovereign AI customers, as well as neocloud customers.

Operator

Operator

Your next question comes from the line of Matthew Calitri from Needham & Company.

Matthew Calitri

Analyst

Matt Calitri here from Needham. Do the new memory launches mark a shift in strategy on that front? Just curious because in the past, the company has talked kind of more about the niche parts of the integrated memory business and noted it's early on things like the CXL front. But now it sounds like memory is expected to be a larger driver as part of this AI factory platform. So just wondering if anything has changed there. And what gives you confidence there's durable demand here?

Kash Shaikh

Chief Executive Officer

Yes, it is a part of our strategy. The MemoryAI appliances that we launched about a month ago, starting with GTC, are part of us investing more in our AI factory platform strategy. There are 6 elements to this strategy, and MemoryAI is one of the strategic elements. It is very timely if you look at how AI is transitioning from model training to inference. In workloads focused on inference, memory becomes an increased requirement because of lower latency as well as the larger context sizes needed for inference powering agentic AI. So this is very strategic for our business. In fact, we are leading the market in this area, taking advantage of our unique position at the intersection of memory and AI infrastructure, and combining that deep understanding with architecture, we introduced the MemoryAI KV cache server as one of the products in the MemoryAI line. We are working on other products, and we will continue to invest, in fact invest more, in this area to take advantage of the market opportunity, because the timing is perfect and we have leadership in the MemoryAI line of products. To give you a proof point: one of the new logos we acquired is a Tier 1 financial institution. Not only are we deploying AI infrastructure, an AI factory deployment, for them, they also purchased our CXL-based KV cache server. That is a proof point that as customers transition from training, bring AI on-premise into their factories, and focus on inference powering agentic AI, this is very strategic for us, and the timing is just right. So we expect to see this demand, and we plan to continue to invest in this area.

Matthew Calitri

Analyst

Awesome. That's great to hear. And then, Nate, with a new CEO in the seat and some moving pieces around sales cycles and supply chain, did you change the guidance philosophy at all or embed any additional conservatism? Any color on the puts and takes there would be helpful.

Nate Olmstead

Chief Financial Officer

Yes. Matt, no, no change in the philosophy. Kash and I are very quickly aligned, I think, on how we think about tracking the business and looking at things. And in fact, I think with our new CRO, who came in a couple of quarters ago, he's done a nice job of adding some more rigor to the planning process in our AI business and just improving the visibility there a little bit. But it's a challenging environment from a supply chain standpoint, and we've, of course, got a lot of experience managing supply chain in our memory business. And I think that's an advantage for us in an environment like this.

Operator

Operator

Your next question comes from the line of Samik Chatterjee from JPMorgan.

Manmohanpreet Singh

Analyst

This is MP on behalf of Samik Chatterjee. So my first question is, I just wanted to double-click on your Advanced Computing guidance. You mentioned a lag of 3 to 6 months for the revenue you will book in your second half. But was there a change observed in the bookings you did in the first quarter, or any change relative to what you were expecting to do in 2Q? And I have a follow-up as well.

Nate Olmstead

Chief Financial Officer

Yes. MP, I think bookings were strong in Q2, really good growth sequentially and year-over-year. I do think that the deployment cycle has lengthened a little bit with some of the supply constraints, in particular, on memory, things have gotten a little bit longer. But we're really pleased with the 5 new logos. And I think demand is good. We're seeing good strength in the pipeline, and it's also diversifying nicely across the non-hyperscale segments such as enterprise and neocloud and sovereign. So I think we feel really good about the demand. I think this is just an issue of a little bit of timing as we can convert bookings into revenue.

Manmohanpreet Singh

Analyst

Okay. And my second question would also be on Advanced Computing and your AI factory-related business. With NVIDIA coming up with their own reference designs for factory-level solutions, how does that play relative to you? Is that a tailwind for you, or is that a headwind? Can you please help us understand?

Kash Shaikh

Chief Executive Officer

Yes, we believe this is an advantage for us. We work very, very closely with NVIDIA, including on some of the wins that I mentioned; for example, NVIDIA worked very closely with us on the recent Tier 1 financial institution win, which also included our MemoryAI product in the transaction. We are working with NVIDIA, leveraging their reference designs, combining them with our AI factory platform and complementing NVIDIA's NVI, as an example, to provide a full stack to our customers. So their blueprints are more complementary to our AI factory platform and the components that make it up. We are actually quite excited about those blueprints and are working very closely with NVIDIA to capture the opportunities. Especially as NVIDIA is increasingly focused on enterprise, it aligns with our strategy and go-to-market.

Operator

Operator

Your next question comes from the line of Ananda Baruah from Loop Capital.

Ananda Baruah

Analyst

A couple, if I could. Kash, and maybe Nate as well, earlier remarks were that you're seeing increased momentum across neocloud, sovereign and enterprise. And you mentioned 1 of the 2 of the new wins. Do you have -- and I think, Kash, you had mentioned you've made some specific or at least general inferencing remarks, including around agentic. Do you have any specific context you can give us around what your customers are telling you their thrust in inferencing is right now and maybe the degree to which agentic is showing up there. Like we just want to get a sense of what the customer activity tone is like behaviorally, say, over the last 90 to 180 days. Do you have anything there you can share with us to make it a little bit more experiential for us? And then I have a quick follow-up too.

Kash Shaikh

Chief Executive Officer

Sure. We believe we are early in the adoption of inference with these customers, but it is increasingly being deployed: as customers move towards agentic AI, inference provides the opportunity to power it. And when you think about inference, I'll give you an example of why the architecture is changing and why memory is becoming increasingly critical in inference as compared to model training. Let's say you are writing a book and you have to write a new sentence. Without memory as a supporting component, you would have to reread the entire book before writing the next sentence. In inference, you are inferring over a lot of data you already have. If you have a component where the book you have written so far is stored, then before writing a new sentence, you don't have to reread the book. That's how it is changing for enterprises and the other segments. We see customers already deploying it, and the architecture is changing, which is why we not only have the opportunity and advantage to provide them our AI infrastructure as well as services; increasingly, we are also seeing demand for our MemoryAI portfolio. As they deploy AI infrastructure and, increasingly, inference, they need products like that to provide the memory component for inference, so that LLM responses can be much faster than they would be otherwise.
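The book analogy above maps onto a key-value (KV) cache: without one, every new token reprocesses the full history; with one, each token's state is computed once and reused. A toy sketch of that cost difference, not tied to any particular product or framework (all function names here are illustrative):

```python
def process_token(token):
    # Stand-in for the expensive per-token computation in a model.
    return hash(token) % 1000

def generate_without_cache(history):
    # No cache: at every step, all prior tokens are reprocessed, so total
    # work grows quadratically with sequence length ("rereading the book").
    work = 0
    for step in range(1, len(history) + 1):
        for tok in history[:step]:
            process_token(tok)
            work += 1
    return work

def generate_with_cache(history):
    # KV cache: each token's state is computed once and stored, so total
    # work grows linearly with sequence length.
    cache = {}
    work = 0
    for i, tok in enumerate(history):
        if i not in cache:
            cache[i] = process_token(tok)
            work += 1
    return work

tokens = ["token"] * 32
print(generate_without_cache(tokens))  # 528 units of work (32 * 33 / 2)
print(generate_with_cache(tokens))     # 32 units of work
```

The quadratic-versus-linear gap is why, in the inference-heavy workloads described on the call, memory capacity for storing cached state becomes a first-order constraint.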

Ananda Baruah

Analyst

I got it. That's helpful. And just one last quick follow-up, I'm mindful of the time here in case there's anybody behind me. The CXL product, to the earlier question, it sounds like you guys are a little bit more enthusiastic about the CXL sleeve today than you were maybe 90 days ago; you have the new products out at GTC. Is that an accurate statement? Maybe it's because of these new products, and certainly some of the NVIDIA announcements at CES as well. But are you expecting a little bit more revenue a little bit sooner, CXL-wise, than maybe you were 90 days ago? And then a quick second part to that: do you need photonics to work before you really get CXL amplification? Like do you need CPO or photonics to work before you can really amplify CXL and scale out -- or scale up? That's it for me.

Kash Shaikh

Chief Executive Officer

Yes. Let me address your CXL question first. I think CXL adoption is timely given the transition to inference, because, as I mentioned, with inference you need increased memory for faster LLM responses. What CXL, Compute Express Link, provides is the ability to share memory between GPUs and CPUs. It allows memory pooling, which is an advantage in inference workloads. So while CXL has obviously been available for the last few quarters, the adoption of inference is now driving the adoption of CXL. The transaction I mentioned, where we received an order, was actually with an enterprise generative AI company working on inference workloads. So you can imagine CXL cards make sense for them, because those workloads need increased memory, and the memory pooling capabilities CXL provides between GPUs and CPUs are an advantage for those kinds of customers. Then, in terms of the photonic memory appliance we are working on through our partnership with Celestial AI, which is now obviously Marvell: that provides increased capability, because when you have photonic connectivity, you have increased capacity to share memory. So it takes things to the next level. However, CXL in itself is an advantage; we can take it to the next level with the photonic appliance. There is another element, the MemoryAI KV Cache server that I mentioned, which essentially provides much more responsiveness for larger context workloads, again used in inference. So you can think of inference as having various requirements related to memory, the types of workloads it runs, and latency. These components, CXL, the CXL-based KV Cache server, which provides faster responses and larger context sizes, and, taking it to the next level, photonic memory, address various use cases for inference.
As inference goes mainstream, we will have the advantage of this portfolio helping with those various use cases.
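As a rough illustration of the memory pooling and tiering idea described above, here is a hypothetical sketch of a key-value store that keeps hot entries in fast local memory and spills the rest to a larger shared pool (the kind of role a CXL-attached tier could play). All class and method names are illustrative assumptions, not any vendor's API:

```python
class TieredKVStore:
    """Toy two-tier cache: small fast local tier, larger shared pool tier."""

    def __init__(self, local_capacity):
        self.local = {}                    # fast, small (e.g., local DRAM)
        self.pool = {}                     # larger, shared (e.g., pooled tier)
        self.local_capacity = local_capacity

    def put(self, key, value):
        if len(self.local) < self.local_capacity:
            self.local[key] = value        # keep hot entries close to compute
        else:
            self.pool[key] = value         # spill to the pooled tier

    def get(self, key):
        if key in self.local:
            return self.local[key], "local"
        if key in self.pool:
            return self.pool[key], "pool"
        return None, "miss"                # a miss would force recomputation

store = TieredKVStore(local_capacity=2)
for i in range(4):
    store.put(i, f"kv-{i}")
print(store.get(0))  # ('kv-0', 'local')
print(store.get(3))  # ('kv-3', 'pool')
```

The design point this sketch captures: a pool hit is slower than local memory but far cheaper than a miss, which is why pooled capacity helps latency-sensitive inference even when it is not the fastest tier.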

Operator

Operator

Your next question comes from the line of Kevin Cassidy from Rosenblatt.

Kevin Cassidy

Analyst

Just on the gross margin for memory: your gross margin was up in the quarter, and memory revenue was up strongly. I just want to understand what the dynamics were there.

Nate Olmstead

Chief Financial Officer

Yes, sure, Kevin. We saw a little favorability in memory margins. Some of that is mix, a little bit stronger demand in flash actually, which is a little bit higher margin product for us within the portfolio. And then also some of the pricing increases, we were able to capture a little bit of margin upside on that just based on the timing of our inventory purchases relative to the timing of shipments and sales to customers.

Kevin Cassidy

Analyst

Okay. So you kind of -- as you look out to the second half of the year, you see that catching up to the price increases compared to...

Nate Olmstead

Chief Financial Officer

Yes. So as the price increases slow, right, if that's an assumption that you use, that price increases are going to slow, then we would expect to see less margin favorability from that, because there would be less of a price variation between the time we purchase inventory and the time we sell to customers. But we have been using the balance sheet to try to secure inventory where we can. It's a tight market, so it's not unlimited supply. But where we can, we're using the balance sheet to try to gain a little bit of an advantage.

Kevin Cassidy

Analyst

Okay. And maybe just as we're talking about memory, as you get to these CXL systems, would you expect that's going to be higher margin than the module business?

Nate Olmstead

Chief Financial Officer

Yes, we do. It's really a solution. It's got software aspects to it, some good differentiation on the hardware as well. So I see that as a nice margin opportunity for us down the road.

Operator

Operator

At this time, there are no further questions. I will now hand the call over to Kash Shaikh, CEO, for closing remarks.

Kash Shaikh

Chief Executive Officer

Thank you, operator. We see AI shifting towards inference, with demand expanding beyond hyperscale to enterprise, neocloud and sovereign AI customers. We are still early in this transition, but the combination of our customer demand, product innovation and bookings momentum gives us confidence in the path ahead. We believe we are well positioned at the intersection of AI compute infrastructure and memory, and we are making good progress diversifying our customer base. My focus is on strong execution across product innovation, customer engagement and diversification, disciplined capital allocation, and investment in our AI/HPC business to support long-term growth. We look forward to updating you on our progress.

Operator

Operator

This concludes today's call. Thank you for attending. You may now disconnect.