Yeah, good question, Craig. Thanks for the question. So, indeed, fidelity has always been the challenge for superconducting quantum processors, and we continue to improve it. On the other metrics, scalability and particularly gate speed, as we said, we are in the 60 to 80 nanosecond range, and that's very good for practical uses of quantum computing. Our partners, whether it's the DOE, the DOD, the U.K. government's national labs, or other national labs around the world, are very pleased with the progress they see, both in fidelity and in gate speed, and particularly gate speed because, as you all know, CPUs and GPUs run very fast. So, in a real practical sense, to keep up with CPU and GPU clock speeds, you need to be in the nanosecond range.

Some other modalities, although they start with better fidelities because they are dealing with purer atoms or ions, are three to four orders of magnitude slower. We are talking 1,000 to 10,000 times slower; they're dealing with hundreds of microseconds per gate. For some applications that may not be an issue, but for most practical applications the speed of a computer obviously matters. I mean, nobody is ever going to say speed doesn't matter in computation. So, it's critical that we continue to improve our speed from the 60 to 80 nanoseconds we have right now, and other modalities certainly have a tall task ahead of them to bring their gate speeds to where we are today. But overall, the combination of fidelity, gate speed, and the other metrics makes many of our current customers, the national labs, believe that we are indeed approaching what we call the narrow quantum advantage, and then the broad quantum advantage, era in the next couple of years. Hopefully, that answers your question.
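[Editor's note: as a rough illustration of the gate-speed comparison above, here is a minimal Python sketch that checks the "1,000 to 10,000 times slower" claim. The 60-80 nanosecond superconducting range is from the call; the 100-600 microsecond range for the slower modalities is an illustrative assumption standing in for "hundreds of microseconds", not a figure quoted by the speaker.]

```python
# Back-of-the-envelope check of the gate-speed ratios discussed above.
# Superconducting range (60-80 ns) is from the call; the slower-modality
# range (100-600 microseconds) is an assumed stand-in for "hundreds of
# microseconds".

superconducting_gate_s = (60e-9, 80e-9)    # 60-80 nanoseconds
slower_modality_gate_s = (100e-6, 600e-6)  # assumed: hundreds of microseconds

# Smallest gap: slowest superconducting gate vs. fastest other-modality gate.
min_ratio = slower_modality_gate_s[0] / superconducting_gate_s[1]
# Largest gap: fastest superconducting gate vs. slowest other-modality gate.
max_ratio = slower_modality_gate_s[1] / superconducting_gate_s[0]

print(f"Other modalities are roughly {min_ratio:,.0f}x to {max_ratio:,.0f}x slower")
# -> roughly 1,250x to 10,000x, i.e. three to four orders of magnitude
```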