Yes, we expect to continue to do well in data centers. If you look at the background of what's happening, we know that Moore's Law has ended. Demand for computing continues to grow, and more and more of the data center is running machine learning algorithms, which are computationally really intensive. Yet the only way to increase computational resources is to buy more computers, buy more CPUs, because each one of those CPUs isn't getting much faster. As a result, data center CapEx would have to go up. The alternative, which is the one we offer, and one of the reasons why adoption of NVIDIA's accelerated computing platform is growing so fast, is an approach that provides a path forward beyond Moore's Law.

There are several things we did this last quarter that I think are really fantastic. The first is the introduction of a new accelerated computing platform called RAPIDS. As you know very well, although the vast majority of the industry today is super excited about deep learning, deep learning as a method for artificial intelligence is very data-intensive. In areas where there is a lot of domain expertise, whether in retail, financial services, health care, or logistics, the amount of data that has to be fused together to train a model is quite high, and the approach using traditional machine learning is quite successful. That had never been accelerated before. We worked with the open source community over the course of the last several years to pull together an entire stack: it starts from Apache Arrow, adds the Dask parallel distributed computing engine, and then layers our CUDA libraries and all of our algorithms on top of that. We now have an accelerated machine learning platform.
That's a brand-new platform, and the excitement around it is really quite incredible. The second thing is the Turing architecture, which allows us to do film rendering in a much, much more affordable way than Moore's Law would have allowed. And the third, which we just announced recently, is our first Turing-based T4 Cloud GPU. Along with all of the software stack we've put on top of it, Kubernetes, Docker, the TensorRT inference engine, and our second-generation Tensor Core AI accelerators, all of that together has created a lot of excitement in the data center. So I'm expecting our data center business to continue to do quite well.