Analyst · Bank of America. Please go ahead.

Ashutosh Kulkarni
Yes, great question. I'll respond to both of them in order.

So in terms of the product features and the monetization: the vector search functionality itself is in all editions, but the machine learning, how you actually do the ingestion and creation of all of the vectors, all of that functionality, the ELSER model that we announced, the hybrid search capabilities that we announced, all of those are in the premium editions. So for all practical purposes, you need the premium editions to be able to build generative AI applications on our platform. That's the monetization model.

Now, in terms of the differentiation, I can break it down for you in a few ways. When you think about the emerging AI stack, like I mentioned, there are two things you're definitely going to need. The first is an LLM, a large language model. The second is some system that will allow you to provide the relevant context, and this is the most important piece, because no business wants to ship all of its private data to the LLM. More importantly, the LLM won't even be able to use everything, because these models are not built on having everything in real time. Your private data is constantly changing; it's moving in real time. So the key is to provide, for any given interaction with the large language model, just the relevant context.

And for this, at times you need the vector search functionality; at times you need the textual search functionality; and more recently, what people are discovering is that you actually need a combination of the two in many, many different cases. And then, for practical use cases, as you're building these applications, you need to be able to incorporate things like filtering and aggregation of these results.
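[Editor's note: the retrieval pattern described above, filter the private corpus, blend vector and keyword relevance, and send only the top hits to the LLM as context, can be sketched in a few lines. This is a minimal, self-contained illustration, not Elastic's implementation; the corpus, the two-dimensional vectors, and the scoring weights are all hypothetical.]

```python
import math

# Toy corpus: each doc has a dense vector (assumed precomputed by an
# embedding model), raw text, and metadata used for filtering.
DOCS = [
    {"id": 1, "vec": [0.9, 0.1], "text": "reset your account password", "dept": "support"},
    {"id": 2, "vec": [0.8, 0.2], "text": "password policy for accounts", "dept": "security"},
    {"id": 3, "vec": [0.1, 0.9], "text": "quarterly revenue report", "dept": "finance"},
]

def cosine(a, b):
    # Cosine similarity between two dense vectors (the "vector search" side).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def keyword_score(query_terms, text):
    # Fraction of query terms present in the text (the "textual search" side).
    words = set(text.split())
    return sum(t in words for t in query_terms) / len(query_terms)

def hybrid_search(query_vec, query_terms, dept=None, alpha=0.5, k=2):
    # Filter first, then blend vector and keyword scores into one ranking.
    candidates = [d for d in DOCS if dept is None or d["dept"] == dept]
    scored = [
        (alpha * cosine(query_vec, d["vec"])
         + (1 - alpha) * keyword_score(query_terms, d["text"]), d)
        for d in candidates
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for _, d in scored[:k]]

def build_prompt(question, context_docs):
    # Only the retrieved snippets are sent to the LLM, never the whole corpus.
    context = "\n".join(d["text"] for d in context_docs)
    return f"Context:\n{context}\n\nQuestion: {question}"

hits = hybrid_search([0.9, 0.1], ["password", "reset"], dept="support")
prompt = build_prompt("How do I reset my password?", hits)
```

In a production system the filter, the vector match, and the text match would run inside the search engine in a single query rather than in application code.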
So for that reason, if all you have is a vector database, you still need to combine it with some technology like Elastic's to bring all of this together. In our case, we've built everything in one consistent platform. The APIs are consistent with each other, and you can take advantage of all of these capabilities, including the ability to bring in external models directly from Hugging Face, or any PyTorch model, and run them on Elastic. So it's just a much more complete and much more capable solution than anything that's out there on the market. And that's the reason why we feel so confident about the unique opportunity that we have to be a real winner in the space of generative AI for enterprises.
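[Editor's note: the "one consistent platform" point maps to issuing a single search request that carries the vector (kNN) clause, the textual clause, a filter, and an aggregation together. The sketch below follows the shape of the Elasticsearch 8.x `_search` body, which supports a top-level `knn` section alongside `query` and `aggs`; the index and field names ("embedding", "body", "dept") are hypothetical, and the request is shown as a plain dict rather than sent to a cluster.]

```python
# One search body combining vector, text, filter, and aggregation.
search_body = {
    "knn": {                           # vector side of the hybrid query
        "field": "embedding",
        "query_vector": [0.9, 0.1],    # produced by an embedding model
        "k": 10,
        "num_candidates": 100,
        "filter": {"term": {"dept": "support"}},
    },
    "query": {                         # lexical side of the hybrid query
        "bool": {
            "must": {"match": {"body": "reset password"}},
            "filter": {"term": {"dept": "support"}},
        }
    },
    "aggs": {                          # aggregate over the combined results
        "by_dept": {"terms": {"field": "dept"}}
    },
    "size": 5,
}
```

With a standalone vector database, the filter and aggregation steps would typically have to be stitched together across two systems in application code.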