EPISODE 1870 [INTRODUCTION]

[0:00:01] ANNOUNCER: A common challenge in data-rich organizations is that critical context about the data is often hard to capture and even harder to keep up to date. As more people across the organization use data and data models get more complex, simply finding the right data set can be slow and create bottlenecks. Select Star is a data discovery and metadata platform that builds a continuously updated knowledge graph of an organization's data by analyzing both its structure and how it's actually used. It enriches data with context such as popularity, lineage, and semantic models, making it easier for AI and teams to discover, trust, and use the right data. These enriched metadata layers are also highly valuable for large language models, significantly improving the accuracy of generated SQL queries. Shinji Kim is the founder and CEO of Select Star, and she joined Sean Falconer to discuss solving metadata curation challenges, managing data context at scale, using LLMs for SQL generation, emerging trends in metadata management, and more. This episode is hosted by Sean Falconer. Check the show notes for more information on Sean's work and where to find him.

[INTERVIEW]

[0:01:29] SF: Shinji, welcome to the show.

[0:01:31] SK: Thanks, Sean. Great to be here.

[0:01:33] SF: Yeah, I probably should've said, welcome back, since you've been here before, though it's been a couple of years.

[0:01:37] SK: Yeah. More than three years ago, to introduce Select Star. But I'm really excited to be back, and Software Engineering Daily has also been morphing and changing a lot.

[0:01:50] SF: Yeah. Well, it's been three years. Why don't you catch us up? Three years, especially in the world of tech, the world of startups, and now what's increasingly becoming the world of AI, is a lot of time. A lot can happen in three years. What's happening with Select Star today? Maybe go back even to the beginning. What's the story behind where you guys started and where you are today?

[0:02:11] SK: Amazing. Sure. Yes, so much has changed. I started Select Star five years ago after noticing time and time again that a lot of enterprises collect, store, and process data, but when they try to use the data, it takes days or weeks to find the right data and actually use it properly. You have to rely on outdated documentation. Usually, you need to just find somebody else and rely on tribal knowledge to understand how to use the data. This is something that I saw firsthand at Akamai, when I was running the product for their IoT data processing, partnering with consumer electronics and automotive enterprises building their next consumer applications. They were looking to pull a lot more telematics data. From an enterprise perspective especially, this was an issue, and hence there are solutions, like traditional enterprise data catalogs, that are trying to solve it. At the same time, I noticed that there was a lot more demand around this as more companies adopted the "modern data stack" of cloud data warehouses and built their data lakes on the cloud with Snowflake and Databricks. Data discovery, finding and understanding data, has become a much wider issue in organizations. That's what Select Star is really focused on.
We provide a very easy-to-use UI, and now an MCP server, APIs, a Chrome extension, and a Slack app, all the different places where end users, whether you are a data scientist, data analyst, software engineer, or product manager, whenever you have to touch or see data, or data products, can easily access the context about that data: documentation about the data, where the data came from, who else is using it inside the company, and what other data assets, or analyses, are already attached, or have been built on top of it. I would say, we are almost drawing a knowledge graph for you of how your data assets are connected and utilized inside the organization today. That's the core of what we do.

[0:04:34] SF: Why do you think so much of this metadata has historically been tribal knowledge? Why haven't we been focused on capturing that as part of the data we collect? We built so much technology for actually collecting data, but then this stuff about why the data exists, how it relates to everything else, we've historically just relied on communicating within the company, asking people why it is this way, rather than encapsulating that in some piece of technology.

[0:05:05] SK: Yeah. I mean, great question. I can just go back to the fact that no one likes documentation, especially, I think, developers. Most databases do not have table or column comments, and the context just follows the code. A lot of data tables have descriptive names, but today, the proliferation of data models, and how easy it is to transform and build your own data models, also adds to that. Continuing to write manual documentation doesn't scale, and it's always treated as an afterthought. In the beginning, when you are starting from scratch, you will have entity relationship diagrams as part of modeling the data. But afterwards, as more people build different types of domain models on top of the data, it gets lost very quickly. Now, as for metadata collection, a lot of companies have their own internal tools where they just refer to the information schema. That's where most of the metadata resides, and what most data catalogs really depend on. The core of what we focus on at Select Star, and what more modern systems focus on, is really what happens in between the data assets. Who is accessing the data? Which queries are accessing this data, and how are they accessing it? This is the activity information that, I would say, if you can parse through it and look at it in aggregate, gives you an analysis of metadata that's very valuable. That is the full system that we built around. For any data warehouse that we connect to, we will parse through all of the activity logs, or SQL query logs, to understand how the data is actually being created, all the way to where it's being used, and also how it's accessed. What types of select queries are coming in? Which applications are querying the data? How often is it being queried, and by how many unique users, over a given time period? That helps us understand the trends of data usage as well. These are the parts that, I would say, haven't been looked at as much. But as there are more consumers of data, and more usage of data, there is more need to understand this.
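To make the query-log analysis concrete, here is a rough sketch of what mining table usage out of a warehouse's query history might look like. It uses the open-source sqlglot parser; the log format and the aggregation are illustrative assumptions, not Select Star's actual pipeline.

```python
# Rough sketch: mine table-usage signals out of a warehouse query log.
# Illustrative only; not Select Star's implementation.
from collections import defaultdict

import sqlglot
from sqlglot import exp

# Hypothetical log entries: (user, sql_text) pairs pulled from the
# warehouse's query history (e.g., Snowflake's QUERY_HISTORY view).
query_log = [
    ("alice", "SELECT o.id, c.name FROM orders o JOIN customers c ON o.customer_id = c.id"),
    ("bob", "SELECT COUNT(*) FROM orders WHERE created_at > '2024-01-01'"),
]

table_users = defaultdict(set)   # table name -> unique users
table_counts = defaultdict(int)  # table name -> query count

for user, sql in query_log:
    tree = sqlglot.parse_one(sql, read="snowflake")
    # Every table referenced anywhere in the query counts as a usage.
    for table in tree.find_all(exp.Table):
        table_users[table.name].add(user)
        table_counts[table.name] += 1

for name, count in sorted(table_counts.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {count} queries, {len(table_users[name])} unique users")
```

Aggregated over months of real logs, counts like these become the popularity and join-pattern signals discussed throughout the episode.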
I think a big part of it is that a lot of companies have now moved to this model. It's easier than ever to have all of your data in one place, in your data lake or data warehouse, or to create a data mart system. There are so many connectors for all the different SaaS tools and business tools that can bring that underlying system-of-record data into one place, so that you can join it and then model on top of it. It comes from multiple places. But in the past, when we relied primarily on relational data warehouses, or Hadoop-based systems, there were far fewer direct consumers of data. This is probably why metadata hasn't been a main highlight that people were looking into.

[0:08:42] SF: Yeah. I think your explanation of no one liking documentation is a good one. Even if someone starts out documenting these things, it's inevitable that it gets stale over time. Every company has the best intentions with a lot of this stuff, even when it comes to code, how certain functionality works, internal tools. We all have internal wikis with documentation that's multiple years out of date. You described looking at the activity log. From the activity log, are you reverse engineering what the relationships are between the data, based on how queries are run against it?

[0:09:22] SK: Yeah. Basically, the way that we look at the metadata is that we look at each of the queries coming through and attach them to each mentioned asset, and then we run a separate analysis on top, in terms of how often this has happened, or how many unique users ran it. Did I answer your question?

[0:09:48] SF: Yeah. It sounded to me like you're inspecting how individuals within the organization, or applications that use the data, are actually utilizing it, to figure out what the actual knowledge graph behind the data is. How are different concepts related, based on the query execution?

[0:10:08] SK: Yeah. We can see a certain amount of information about the user. We may see the user name, but we may not know who that user is, or which team they belong to, and so on. That would come from other places, whether we connect to Active Directory, or have our customers group their users, and so forth. The main piece of how we put together this knowledge graph comes primarily from tracking the usage. Which tables are joined together? What do the join conditions look like? What do the most used to least used tables, and within those tables, columns, look like? This actually gets a lot more interesting when you connect it to other applications, like BI tools. For Power BI, Tableau, or Looker, for this sales dashboard that a lot of people are relying on, which are the specific fields and tables that really power it, and how is each KPI actually defined and calculated? There are multiple levels of insight that you can get. We see it in three levels. Once we connect and ingest the metadata and query logs, there is the first layer, which is the core metadata: just the physical asset names, descriptions, and the operational metadata, like how big the table is, or when it was last updated. Then on top of that, there is the second level of usage and behavior signals. This would include things like popularity: how widely is this being used and trusted?
This would also include entity relationships and lineage. Where does the data come from? Where does it go? How are the data models related to one another? What are the common queries and joins related to this asset? Then there is the third level, which will primarily be driven by the users, but which we help automate, and which is mostly around business context and semantics. This would be something like collections: if you were to group assets for a certain business domain, what would that look like? Any tags that we can infer, or actually put in, so that you can govern the data. Business glossaries and metrics definitions, a lot of this is what we see as part of the metadata context that you can put on top of physical assets, so that you have much richer context whenever you're trying to leverage the data you have access to.

[0:12:54] SF: For things like popularity and usage metrics, how are those used, and what is the value of tracking those for an organization that's using Select Star?

[0:13:06] SK: Usually, the most popular, or most interesting, use cases we see leverage both popularity and lineage. The first thing I can think of is that there's always a lot more added benefit when customers start to realize what the right data to use is, because it's not just about semantic relevance, it's the trust score. If I'm looking for data related to active users, or our sales regions, the data that you want to use would need to be data that other people are also using. Popularity really comes in handy for that. If you also leverage lineage with that, then you can see what impact the data has in other parts of the system, or across the system. This is also interesting when you are thinking about the cost of running data infrastructure. We have a number of customers that have saved costs on their cloud bill, primarily their warehouse bill, by looking at popularity. Meaning, they noticed that a lot of models, or tables, that they thought were being used actually weren't. They either weren't being queried, or they were there to load reports in the BI system, but the BI dashboards weren't actually being viewed by the end business users. Combining both lineage and popularity gives you a big understanding of the cost implications.

[0:14:43] SF: Okay. What are some of the other use cases, not necessarily restricted to popularity and usage, but for Select Star in general?

[0:14:51] SK: I would say, the number one use case for us always comes from data discovery. This is something that also correlates really well with how our customers are using Select Star with their AI agents and doing data work with AI. It's really about providing the right types of results when you are trying to, let's say, build a new model, edit a SQL query, or do exploration around data. The popularity score allows the agents to find and use the right tables and columns. It also provides example queries that are relevant, so that the AI agents can build queries that are a lot more accurate. I would say, this use case flowed naturally out of, and is the next generation of, what our end users used to do.
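A toy sketch of the lineage-plus-popularity cost idea Shinji describes above: flag tables that nobody queries directly and whose downstream dashboards nobody views. The data structures are invented for illustration and are not Select Star's API.

```python
# Illustrative sketch: flag cold assets by combining lineage and
# popularity, in the spirit of the warehouse cost use case above.

# Downstream lineage: table -> dashboards built on it (hypothetical).
lineage = {
    "daily_sales_agg": ["sales_dashboard"],
    "legacy_churn_model": ["old_churn_report"],
}

# Usage signals over the last 90 days (hypothetical).
table_query_counts = {"daily_sales_agg": 412, "legacy_churn_model": 0}
dashboard_views = {"sales_dashboard": 180, "old_churn_report": 0}

def is_cold(table: str) -> bool:
    """A table is 'cold' if nothing queries it directly and none of
    its downstream dashboards were viewed in the window."""
    if table_query_counts.get(table, 0) > 0:
        return False
    return all(dashboard_views.get(d, 0) == 0 for d in lineage.get(table, []))

candidates = [t for t in lineage if is_cold(t)]
print("Candidates to deprecate (save warehouse spend):", candidates)
# -> ['legacy_churn_model']
```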
Our end users on data teams used to come to the Select Star UI to find and understand data, so that they could query those data tables directly, or build dashboards. Now, the new use case we are seeing is their agents and AI tools using our MCP server to find data and create queries and model modifications directly.

[0:16:16] SF: Okay. Going back to the original problem we were talking about, the fact that people haven't historically had a good way of capturing all this metadata and relationship information, or at least keeping it up to date. Maybe, historically, we've been able to get away with that in some capacity. But now we're entering a world where people want AI agents that can leverage the huge amount of data they're collecting, and part of effectively leveraging that data is being able to understand it, which gets back to the knowledge graph and the metadata associated with it. Does that elevate these pain points to a place where this isn't just a minor annoyance anymore? Is this actually something where a company is essentially not going to be able to move forward and leverage all the great innovations happening in AI until it finally solves this fundamental problem?

[0:17:14] SK: Yeah. The way that I see this relating to AI is "hydrating AI with enterprise data," trying to use AI on top of your own data. Today, the way this has worked in POC environments has always been by putting in very specific schema information, query examples, synonyms, and so on, almost like building your own semantic layer in order for AI to work well. Or actually training the model with just that data specifically. This is an approach that, I would say, gets you to 90%, but is very hard to scale. Without a metadata platform that will continuously evaluate the schema and popularity and lineage and everything else, there's going to be the manual work of a human needing to figure out which parts of the metadata the AI should primarily use. I would say, this is starting to become a lot more important, and a lot more companies are starting to look for ways to actually scale this part.

[0:18:28] SF: Can you explain what a semantic layer is and what the components of it are?

[0:18:33] SK: Sure. A semantic layer is usually a separate layer. I think the definition is starting to get blurred now, but usually a semantic layer contains an explanation of the data model, laid out as a logical data model. It will describe which tables and columns should be part of the logical data model, and how those fields make up a metrics definition: which are the dimensions, measures, and facts, and how it should all be put together, and hence used, by an AI, or by any tool that supports a semantic layer. The biggest difference, I guess, or the reason why people have a semantic layer on top of their physical data layer, is just so that they can separate out what is considered verified, or certified, data sets that should be and can be used by their business users, or for reporting purposes. Semantic layers and semantic modeling have gotten a lot more interest recently, because that itself can really provide the certification to the AI, and the AI can just follow those definitions.
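To make the semantic layer discussion concrete, here is one minimal, hypothetical shape a metric definition in a semantic model might take. Real semantic layers (dbt's semantic layer, LookML, Power BI semantic models) each have their own richer formats; this is only a sketch of the idea.

```python
# A minimal, hypothetical semantic-model definition: a logical model
# over a physical table, with verified dimensions and measures that an
# AI (or BI tool) should follow instead of inventing its own logic.
semantic_model = {
    "model": "orders",                  # logical model name
    "table": "analytics.fct_orders",    # physical table it maps to
    "dimensions": [
        {"name": "order_date", "column": "created_at", "type": "time"},
        {"name": "region", "column": "sales_region", "type": "categorical"},
    ],
    "measures": [
        # The certified definition, so the AI doesn't invent its own.
        {"name": "active_users",
         "sql": "COUNT(DISTINCT customer_id)",
         "description": "Unique customers with at least one order in the period."},
    ],
    "verified": True,  # certified for business users and AI agents
}
```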
This is a piece where we've noticed we can build a lot more automation, just by scanning what is being used by your BI dashboards. For example, if you have a Power BI dashboard and you have a data set, or semantic model, defined within Power BI, we can map the lineage for the fields and the calculations that you might have defined within the BI tool, and then translate that into a SQL model that your AI can also use to query. The semantic model, and the semantic layer generally, is focused on defining how metric X should be calculated and what that definition looks like. That used to be seen as a way to consolidate, or govern, the metrics calculations when you're connecting multiple different tools together. Today, from what I'm seeing, the use case is the AI really using that definition to make the queries, instead of trying to come up with its own definition for querying the data.

[0:21:12] SF: For construction of the semantic models and the data lineage and, ultimately, the knowledge graph, are you leveraging AI internally to automate some of that?

[0:21:22] SK: Yeah. We are using a number of different models for generating the queries, and also validating the queries, related to the semantic model. There's also the formatting of the files, whether that's Markdown or YAML, in order to make this integratable with other systems as well. The core part, where that data comes from when we are defining the logical tables, or fields, or verified queries, comes more from Select Star's metadata infrastructure, which is something that we've built over the last five years.

[0:22:02] SF: Is there, I guess, danger with, or consequences to, using AI to automate some of the construction of this, and then AI ultimately relying on that construction to deliver some value? Some AI system is going to leverage what Select Star provides in order to, say, understand the underlying data better. But since AI is used to construct that, there could be some risk that it's not 100% accurate. Does that create a situation where you get, I don't know, a cascading set of inaccuracies that compound each other?

[0:22:35] SK: I think that's an interesting question. Every time we're generating a semantic model, or any metrics definition, for a customer, this is a part where we will have the user verify it. It can exist and may be used by AI agents readily, but we highly recommend our users take a look and actually validate the model. Then the other side of this that I think is really important is evaluation. For the business, when someone is considering building any text-to-SQL bot, or agent, having the set of business questions that are likely to be asked, and having all the definitions be correct, matters. You're trying to build a product, and you do need a set of tests to go along with it. To answer that, yeah, I don't think it's something that you should 100% trust. The way we see this really helping customers is that you can kickstart the journey and focus on the important part, which is testing and iterating, rather than trying to manually create the YAML files, pick the tables, and figure out which tables and columns make sense and what their relationships should look like.
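A small sketch of the evaluation idea Shinji describes: a suite of likely business questions with known-good answers, run against whichever text-to-SQL agent is under test. `generate_sql` and `run_query` are hypothetical stand-ins for your own agent and warehouse client.

```python
# Sketch of a text-to-SQL evaluation harness: business questions with
# expected results, scored against the agent under test.
eval_cases = [
    {"question": "How many active users did we have last month?",
     "expected": [(4213,)]},
    {"question": "Total revenue by sales region in 2024?",
     "expected": [("EMEA", 1_200_000), ("AMER", 2_400_000)]},
]

def run_evals(generate_sql, run_query) -> float:
    """Return the pass rate of the agent over the eval suite."""
    passed = 0
    for case in eval_cases:
        sql = generate_sql(case["question"])  # agent under test
        result = run_query(sql)               # executed in the warehouse
        if result == case["expected"]:
            passed += 1
        else:
            print(f"FAIL: {case['question']!r}\n  got {result}")
    return passed / len(eval_cases)
```

The point is the shape, not the numbers: generated semantic models and definitions get validated by humans once, then guarded continuously by tests like these as the data changes.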
A lot of the time, we see companies going back to doing a ton of data modeling on top of their data mart, and a big part of that is almost rewriting what they already have implemented in other systems, in BI.

[0:24:10] SF: Yeah. It's more a way to speed up the process, like human in the loop. A human is still there to be involved, but you can automate a significant amount of the manual work.

[0:24:20] SK: Yeah. That's the first benefit, I would say, of starting with this approach. Then the second benefit, which we're working on, is that because we are tracking the underlying metadata, when there are changes, such as new calculations being added on the BI front, or underlying tables going missing, things like that, these are operational issues, and keeping these semantic models up to date with the current data model is another piece that will make the semantic model scale with usage.

[0:24:54] SF: You mentioned MCP earlier. Can you talk a little bit about what you're doing, what your MCP server does, and how people use it?

[0:25:00] SK: Yeah. Our MCP server today is more of an interface to Select Star. We have, I think, four or five tools today. One is for searching the metadata. The second one is for getting asset details. The third one is getting lineage and traversing the lineage. On just searching the metadata, I ran a test a while back. I wanted to understand what our customer distribution looked like. I asked my Claude Desktop, which was connected to our MCP server. It started by getting all the metadata, which covered more than 200 or 300 different tables. From there, using Select Star's popularity score and other relevancy metrics, it narrowed that down to 20. Then from there, it picked the tables and columns it would use to create a query, and it executed the query to get the result. So far, I've just talked about the search metadata front. Every time there is a table, it uses the get asset details MCP tool, which returns all the information about that table, including the descriptions, example queries and joins, and when it was last updated. Those are all things Claude was looking at to decide, and it also used those examples to actually put together the query. Then the other tools, like getting lineage and walking through the lineage, are usually used for checking impact. If I were to update my dbt model, or SQL query, while it's doing that, it can check whether there will be any downstream impact from changing column names, or dropping a column. It can also bring back a list of users, or owners, that may be impacted and need to be notified of that change. Those are some of the areas where we've seen a lot of our customers using the Select Star MCP server, with Claude, or Cursor, or the different IDEs they use for AI work.

[0:27:18] SF: Right. This gives them an interface to speak natural language, but still be able to interface with the data?

[0:27:23] SK: That's right. We are hearing that this has been a really great addition, because they've been using the dbt, or Snowflake MCP servers, or their own homegrown MCP server, to just execute queries, or to grab the schema metadata. The schema metadata alone does not produce the queries that they want. The accuracy only really came after Select Star started providing this direction: popularity score, lineage, example queries, all the documentation, and so on.
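For readers unfamiliar with MCP, here is a toy server exposing tools shaped like the ones described (search, asset details, lineage), built with the official `mcp` Python SDK's FastMCP helper. The tool names, signatures, and in-memory catalog are hypothetical; this is not Select Star's actual server.

```python
# Toy MCP server with catalog-style tools, mirroring the episode's
# description. Hypothetical names and data; not Select Star's server.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("toy-data-catalog")

CATALOG = {
    "analytics.fct_orders": {
        "description": "One row per completed order.",
        "popularity": 0.92,
        "example_queries": ["SELECT region, SUM(amount) FROM analytics.fct_orders GROUP BY 1"],
        "downstream": ["sales_dashboard"],
    },
}

@mcp.tool()
def search_metadata(keyword: str) -> list[str]:
    """Return asset names matching a keyword, most popular first."""
    hits = [name for name in CATALOG if keyword.lower() in name.lower()]
    return sorted(hits, key=lambda name: -CATALOG[name]["popularity"])

@mcp.tool()
def get_asset_details(asset: str) -> dict:
    """Descriptions, popularity, and example queries for one asset."""
    return CATALOG.get(asset, {})

@mcp.tool()
def get_lineage(asset: str) -> list[str]:
    """Downstream assets affected if this one changes."""
    return CATALOG.get(asset, {}).get("downstream", [])

if __name__ == "__main__":
    mcp.run()  # serves over stdio for clients like Claude Desktop
```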
[0:27:59] SF: Can you share anything around the accuracy boost that you get from using this approach, versus only having the barebones schemas?

[0:28:06] SK: I would say, this is not something that we have a scientific measure for, other than the anecdotes and the numerous customer interviews that we've done. We've been watching customers in terms of how they've been using it. It's more like, if you start using it, you never go back to how things were before the MCP server, is basically what we've seen.

[0:28:27] SF: Why is it that natural language to SQL against real-world databases is so difficult?

[0:28:36] SK: Well, I think that's a really good question, and there are multiple reasons why. If I think about language models directly, we are only at this point because the foundation models have been trained on the whole world's data, all the books and written literature, and every single one of them is almost just an example of how language has been used. It's not because you're training the system with instructions for how to speak a language. It's not because you put in a rule for how this should work. It really comes from having a lot of data, or examples, of how things have been used. I think this is why example queries come in as one of the parts that make query accuracy much higher. I think the other part is just that data models get bigger. If we're talking about a relational database where everything is completely normalized and all the names of columns and tables are very accurate, then it might be easy enough to get accurate SQL. I mean, this is why I think we are starting to get really high marks on Spider, or any of the industry benchmarks for this. It's only on real-world data, beyond the benchmarks, that it actually fails. That really comes from real-world data being a lot messier. There are a lot of similar-looking tables and columns, and similar ways they are being used. There are also second- and third-level calculations and metrics built on top, which you can easily find in a lot of organizations. A lot of that contributes to complexity that makes it easier for LLMs to hallucinate than to actually generate the seemingly easy queries. I think it fails because of that.

[0:30:42] SF: Yeah, I think that makes sense. I always say that foundation models are really, really smart about general information, but they're really dumb when it comes to a company's specific information, because they never trained on it. Getting value out of them for specific tasks is all about how you correctly contextualize the prompt. If you're doing natural language to SQL generation against complex data models that exist in your warehouse, or your lakehouse, or something like that, then you need the correct contextualization of that data. Essentially, how do you encapsulate the tribal knowledge that people have within the company? If you can't feed that into the model, then there's not really a way for the model to accurately run a reasonably complex query against it.

[0:31:27] SK: Yeah. I think that's really well put. Everyone says context is king. For data, how you structure context that's actually relevant for SQL generation and analysis of data, I think, has a particular flavor to it.
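One way to picture the "contextualize the prompt" point: rank candidate tables by a popularity signal, attach real example queries, and only then ask the model for SQL. The function and metadata shape below are illustrative assumptions, not a prescribed recipe.

```python
# Sketch: assemble a popularity-ranked, example-rich prompt for SQL
# generation, instead of dumping the entire schema on the model.
def build_prompt(question: str, metadata: dict, top_k: int = 5) -> str:
    # Keep only the most-used tables; usage is the trust signal.
    candidates = sorted(metadata.items(),
                        key=lambda kv: -kv[1]["popularity"])[:top_k]
    context_lines = []
    for name, meta in candidates:
        context_lines.append(f"Table {name}: {meta['description']}")
        for q in meta.get("example_queries", [])[:2]:
            context_lines.append(f"  example: {q}")
    return (
        "You write SQL against the tables below. Prefer the listed "
        "tables and mimic the example queries.\n\n"
        + "\n".join(context_lines)
        + f"\n\nQuestion: {question}\nSQL:"
    )
```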
I think that is primarily what we've been focused on, because we understand that something like popularity, or lineage, has very specific implications for how the data should be retrieved, or what type of impact it will have on the use of the data.

[0:32:07] SF: Have you thought about extending any of this approach? It sounds like, if I'm using Snowflake, or something like that, then I can run Select Star against my Snowflake. I can go through some process to make sure that what it produces is accurate. Then I can start to use something like Claude Desktop with your MCP server to explore that data in a natural language way. What about situations where I might want to pull data from other types of systems, not necessarily the warehouse? I might want to talk to, I don't know, a SaaS API, or maybe even a transactional database. Is there potentially a role for this approach to extend beyond just understanding the warehouse data?

[0:32:51] SK: Yeah, for sure. There are now different ETL systems that we connect to, as well as applications that we're starting to connect to. I see that as we add more integrations, it's not just data warehouse queries. In the future, I think we'll be able to start generating dashboards in Power BI and Tableau once they have their own MCP servers, for example. That is the future that we see.

[0:33:19] SF: With some of the stuff that you're doing around the MCP server, given that you're primarily serving metadata, do you need to be concerned about what a specific user is accessing? Or is that more of a security requirement on wherever they ultimately execute that query, because that's where the actual data lives?

[0:33:41] SK: Yeah, that's an interesting question. Right now, it really comes down to the end-user role where the query gets executed. We do have policy-based access control support, so that you can limit the user to querying, or even just looking up metadata within, a certain set of schemas and tables, or a logical grouping that you may have. In terms of the actual query execution, we're a little bit decoupled, in that we're leaving that to the data warehouse user, because that's where the query gets executed. We will generate the query, and you can limit the query to only access certain parts. From the security perspective of end-user querying, this is something we offload to the data warehouse side today.

[0:34:35] SF: Then, are you offloading the context optimization problems to the engineer that's building this application? Because I would think that some of this metadata could get pretty big, to where it starts to eat up a reasonable amount of the context window. How does that optimization work?

[0:34:54] SK: If the engineer is using the MCP server, we control it on our end. When you say context optimization, it really comes down to what context we are exposing. We have our own embeddings that we use for Ask AI, which is Select Star's AI assistant. But that whole context isn't something we necessarily expose fully to the developer. We do this through the MCP server. What I'm saying is, we don't put the full embedding of raw metadata out for AI agents to use, but rather have the MCP server provide that information upon request by the agent, right? Right now, there isn't really a challenge with fitting into the context window.
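A minimal sketch of the policy-based access control idea on the metadata side: filter which assets a role can even see, while actual query execution stays governed by the warehouse role, as described above. The policy shape is hypothetical.

```python
# Sketch: policy-based filtering of metadata visibility by role.
policies = {
    # role -> schemas whose metadata the role may look up
    "marketing_analyst": {"analytics", "marketing"},
    "finance_analyst": {"analytics", "finance"},
}

def visible_assets(role: str, assets: list[str]) -> list[str]:
    """Return only the assets whose schema the role is allowed to see."""
    allowed = policies.get(role, set())
    return [a for a in assets if a.split(".")[0] in allowed]

assets = ["analytics.fct_orders", "finance.gl_entries", "hr.salaries"]
print(visible_assets("marketing_analyst", assets))
# -> ['analytics.fct_orders']  (finance.* and hr.* stay hidden)
```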
I think the piece that might actually be interesting to you in this regard is semantic model generation. We now have a way to summarize and build a semantic model for a customer, so that the customer can basically take that and feed it into their AI application. I would say, we haven't gotten to a point where it is so large that it doesn't fit into the context window, but it's fairly early days. We've just been testing this with a number of customers, and haven't really run into that issue.

[0:36:20] SF: Great. In terms of what's next for you guys, where's your focus? Is there anything you can talk about in terms of the challenges that you're working on now, or things that you have coming out relatively soon?

[0:36:32] SK: Sure. Yeah. First and foremost, the semantic model is a big part. We have seen that it really helps text-to-SQL approaches, getting the LLM to speak the language of the business and answer business questions really well. We are looking at ways to make this more general and available to more customers. That's one part. The other part is having Select Star's Ask AI use that model and query the data directly for end users, so that more users can just ask questions of their data, and about their data, and get answers right away. Last but not least, we have different agent workflows coming up that really help build the business context metadata more automatically. We already have ways that we're starting to do a lot of auto-documentation of data assets, but the things coming up that we're looking at would be tagging the data assets and assigning ownership, or propagating documentation in different ways, which we already do. Putting that into the hands of an agent that maintains the governance is really the direction we're heading today.

[0:37:53] SF: What are your thoughts on where some of the value of metadata is going? Historically, we've put a lot of value in the data that a business collects. When it came to databases and warehouses, there was tight coupling between the compute and the storage. Then eventually, we separated those things. Now, we have these open table formats, like Iceberg and Delta tables. We're getting to a place where that data might actually exist in some cloud storage bucket that's outside of where the actual compute runs, and people want to own the compute. There's not as much value attributed to the hosting of the data itself. Now, one of the things I'm seeing happening in the industry is that a lot of the big data players really want to own the catalog and the metadata. Is the new oil of data, especially in the world of AI, really all about the metadata?

[0:38:53] SK: I would say, metadata is the map of where the data is, and that's why it is being looked at now. It tells you what exists and what's really important. From the cloud providers' perspective, I think it's really about expanding more capabilities under the same umbrella. That's why the cataloging, or these metadata features, are being introduced by larger players as well. Nonetheless, if you have the map, you can actually leverage it for operational purposes, like automating impact analysis on, let's say, a PR, or letting the downstream users know what's going to change, or even using something like popularity to have AI agents write the correct query and pick the right columns and tables.
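A brief sketch of the lineage-driven impact analysis mentioned above: walk downstream from a changed asset and collect the owners who should be notified. The graph and owners are made up for illustration.

```python
# Sketch: breadth-first walk of a lineage graph for impact analysis,
# e.g., triggered by a PR that changes a source table.
from collections import deque

downstream = {
    "raw.orders": ["analytics.fct_orders"],
    "analytics.fct_orders": ["sales_dashboard", "finance.revenue_model"],
}
owners = {"sales_dashboard": "bi-team@example.com",
          "finance.revenue_model": "finance-eng@example.com"}

def impacted(changed: str) -> list[str]:
    """Return every asset downstream of the changed one."""
    seen, queue = [], deque([changed])
    while queue:
        for child in downstream.get(queue.popleft(), []):
            if child not in seen:
                seen.append(child)
                queue.append(child)
    return seen

for asset in impacted("raw.orders"):
    print(asset, "->", owners.get(asset, "no owner on file"))
```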
There's just a ton of different things you can add when you have this context. I think this is why a big focus is starting to be put on metadata, and also on high-quality metadata.

[0:40:02] SF: Yeah, that's great. I wrote an article recently comparing the semantic web to the world of large language models, and used this analogy: things like the semantic web, ontologies, SPARQL, and everything that came from that world were essentially an architect's view of the world, where we would predefine these things and architect the structure of the web. Foundation models, by contrast, are much more of an explorer. They don't have that predefined structure. They're just going out, stumbling around, and figuring out, based on patterns of behavior and the way that we write, what the associations between these things are. Ultimately, to make them more useful and accurate, and to prevent things like hallucinations, they need a map. That map can be things like ontologies, or in the context of what we're talking about, metadata and the semantic layer. It all comes together eventually.

[0:41:01] SK: Yup, exactly. Having that up-to-date knowledge graph of the data is really the key to making it accurate, and also to scaling across multiple use cases as the data changes underneath.

[0:41:16] SF: Yeah, absolutely. Shinji, I want to thank you so much for your time, and for coming back. I really enjoyed this.

[0:41:22] SK: Thanks so much, Sean.

[0:41:24] SF: Cheers.

[END]