Yes, and I made some comments on that, Amit. So I do appreciate the question. The one thing that's a little bit different in today's prepared comments is that we've typically talked to you about design win momentum, and that's continued. But we did give you more highlights about where revenue goes from here. We told you all year, as we were seeing the ramp in AI, that we were going to do about $200 million of revenue this year in AI applications, and certainly 60% of that would be in the back half. That really hasn't changed.
But when we look at the design momentum that we have, and also the expectations we're hearing from our customers, like I said on the call, we expect that $200 million to essentially double next year to $400 million. And we actually see a path that could get up to $1 billion a year -- a few years after the [ math ].
So we're actually seeing the traction with the design wins, and our teams and the investments we're making to ramp it, both in engineering and operations, are in place to drive it. And like you said, when you service this area, there's an ecosystem here. When you look at our engagements in that ecosystem, they're with hyperscale customers, some of whom are developing their own AI solutions. We also have to work closely with semiconductor companies, including both the processor companies and the other semi players that make accelerator chips and other silicon solutions. So when you look at it, we have to play with everybody in that ecosystem, and our teams are doing a nice job.
And just as important, as you work with them, is how you get onto reference designs, which are really ready-to-deploy offerings that allow further cloud customer deployments. And our sales are across the entire ecosystem, not with one player. So I like the breadth that we have, and that's really driving the momentum that we have.
And from a product perspective, it starts with the socket right up against the GPU. You can have things that are on the board. And then you also get into the cable backplane, where it's very important to avoid latency and keep the high speed going, to make sure that cluster can run at the speeds it needs to, to do the [ LLMs ].
So net-net, it's pretty broad in the products we play in. And in many cases, it's really what we did in the cloud, moved up to the next level of performance for this application.