[00:00:00] SF: Tyler, welcome to the show. [00:00:01] TW: Thanks, Sean. Really appreciate you having me on. [00:00:04] SF: Yes, thanks so much for being here. I know we had some trouble scheduling it, such is the way sometimes in life when you have a couple of busy people, it can be hard to coordinate schedules. But why don't we start with some basics and have you introduce yourself? Who are you? What do you do? How did you get to where you are today? [00:00:04] TW: Sure, happy to do that. I'm Tyler Wells, and I'm the Co-Founder and CTO of Propel. Propel has been around for a little over two years. Myself and my co-founders started it after I spent seven and a half years prior to that at Twilio. When I joined, we were probably under 200 employees. When I left, we were probably over 5,000. I did everything there from starting the video organization and building that out. I opened up one of our first remote offices in Mountain View, California. I opened up an office in Madrid. Then, my final year plus there, I started the SRE organization under platform engineering. So, I moved out of the voice and video unit that I had been in that whole time and moved over to platform. Prior to that, before Twilio, I started another company called Ubix. That one didn't go so well. I ended up leaving, luckily, to go to Twilio. And then before that, I spent a bunch of time at Skype. During my time at Skype, I really got my first real experience of what true scale is. I got to lead and build the Facebook Video Calling that was powered by Skype. This goes way back before the years of WebRTC. And so, we had this plugin that you would have to download and install. The first day we launched was to 13 million people. By the time we got that fully out there, it was in the hundreds of millions of people that had been using it. 
I’ve been in software now for north of 25 years and very happy to be here and get an opportunity to talk about what we're building at Propel. [00:02:03] SF: Awesome. Yes. So, you were at Twilio, you said, from essentially around 200 employees to over 5,000. That must have been an incredible journey. You mentioned seeing scale in the days at Skype, but you must have seen massive growth and scale at Twilio during that time as well, along with some really interesting problems that you had to figure out how to solve. [00:02:27] TW: Yes. I mean, there were incredible amounts of scale. I remember joining in 2013, and this was a vastly different company than anything I'd ever been a part of. I was always used to shipping software that people downloaded and installed on their machines, and you got this immediate feedback loop. But at Twilio, you're building this platform, and your platform is being consumed by developers in all sorts of interesting ways, and growth doesn't happen overnight. But when it does, it can be wildly fantastic. I think one time we had Justin Bieber tweet out one of our Twilio numbers for something, and it was just absolute chaos. I got to see scale in a very different way, because now that scale was horizontal, across a bunch of different dimensions. But scale nonetheless. Like you said, there were definitely plenty of interesting problems, and from those early days of 2013, there was a lot of growing up that we had to do in order to meet the demands of our enterprise customers and the people that were building on top of us. [00:03:33] SF: Yes, absolutely. I feel like even today, Twilio is still changing. It feels to me, as an outsider anyway, like it has transformed a little bit from being very developer focused, from a messaging standpoint, to a little bit more enterprisey, a little bit more sales led. 
But that's kind of just something that I – a pattern that I've kind of noticed in the last couple of years, which is interesting. [00:03:59] TW: Yes. It feels like the nature of the public company beast, right? Everything from the ground up. Our grassroots were developer led and developer focused. Jeff was a developer, still is a developer. He would do live coding on stage, and so there was always a huge focus on delivering and ensuring that we were able to wow our developer community. But I think as you start to get bigger, that has to change somewhat, now that you're responsible to the public markets. And as you start to deal more with the enterprise level, and that level of scale, things like compliance start to come in and you end up growing up, right? In a sense. [00:04:44] SF: Yes, absolutely. There's pros and cons with that, and it's part of the growing up experience of any company, and certainly for any CEO, leader, or founder of an organization, like Jeff. One of the things that I know you worked on while you were at Twilio was customer facing analytics. That's something that I really want to focus some of our conversation on today. So, maybe a good place to start, just to set the stage here: what is customer facing analytics? How is that different from other types of analytics that people might be used to thinking about? [00:05:24] TW: Yes, absolutely. I mean, I think prior to us building out our customer analytics story, which from a product standpoint we called Insights, the analytics largely was internal, or the vast majority of it. So, if you think of the traditional or classic sense of analytics, it's business intelligence. I'm landing data in a warehouse, I'm landing data in a data lake, I'm using something like a Tableau or a Looker. I'm answering lots of questions from marketing, for sales, maybe for engineering, depending on the level of sophistication or what they need, and that's it. 
Largely, that data stays inside. With customer facing analytics – so, when I say stays inside, it turns into a report, maybe it turns into a dashboard that somebody from marketing can go and look at, but that data is not going in front of your customers. That data is siloed internally and is not going anywhere else. What we started seeing was the data that we were using to help troubleshoot and solve customer problems, specifically at the time for our voice client, and the voice client was a WebRTC client that had distribution globally and was used in call centers. Customers would write in, and they would say, “Hey, I've got this call identifier, or call SID, how do we figure out what's going on?” We would then take that, run it through our internal tooling and our internal analytics infrastructure, and we would respond with a bunch of data that says, like, “Hey, here's what happened. Halfway through the call, you got a bunch of jitter and that call dropped.” Or, “Halfway through the call, they unplugged their microphone, and that's why they couldn't hear anything.” That turnaround time was laborious. It was also costly. So, we came up with the idea that we should put this data in front of our customers. We should empower our customers with that data. And how do we do that? Well, at the time, a lot of that data was landing in Redshift. We used to kind of joke internally that, “Hey, that's where the data went to die.” You would see it land in there, and it would never come out again. Because think about trying to access that data and build an application on top of that. It was governed by a completely different group inside of Twilio. 
It was in our data engineering team and was treated very differently. That data was predominantly used by internal folks from finance, from marketing, from everything else. Well, then all of a sudden, we come along, and we're like, “Hey, we want to expose this to our customers.” Largely, that answer is going to be no. So, just to make sure I answer your question, customer facing analytics is that data that you want to put in front of your customers, to empower them, to give them insights about how the platform or the product you're providing to them is performing, so they can understand: is it doing what it's supposed to do? Am I getting the right amount of calls? Are my calls dropping? Why are my calls dropping? I think a great early example of what that looked like was the like button, or the like counter, in Facebook. That was customer facing analytics at pretty massive scale. It was an in-product analytic, but it was a very early version of, “Hey, here's something that's very simple, but it's providing insight to me of how often my post was liked.” Think of the same thing that you get with LinkedIn. LinkedIn has a ton of customer facing analytics as well. [00:08:56] SF: Yes. In this case, you were trying to essentially empower the customers to solve some of their own problems, rather than calling up customer support and essentially having them run these queries or access these dashboards and then tell them what was wrong. Imagine the SLA on that. That's probably not great, and it costs a lot of money just from paying people to do something that a customer would be able to solve themselves. How is the challenge of exposing that information externally to a customer different from building those internal dashboards? What are some of the challenges you faced? 
You mentioned essentially that, one, the data is kind of siloed off in Redshift and owned by a different team. But what are some of the engineering challenges or product challenges that you had to overcome in order to build something like that? [00:09:46] TW: Yes. So, if I think about prior to us building our stack, or I'll say our services, the SLA for running a query to find out information about a specific call could be in minutes. Sometimes it might be seconds. If you're going to build a customer facing data app or customer facing analytics, minutes, or even seconds, that responsiveness is not going to work. Your customers are going to leave. They're going to say, “This doesn't work for me. I can't just sit around. That lag time is just horrible.” One of the first things we had to solve was, how do we take all of this data that we're collecting, that we can now utilize for troubleshooting, and make it fast? How do we make it responsive? How do we put an API on top of it? How do we enable our front-end developers to not have to go write SQL, or put SQL in a middle tier or anything else like that, but have an API that they can interact with, to query that data, and get the responsiveness that we need in order to satisfy something that's a data application? Because your customers are just not going to stand for it, if it's taking minutes to run something when they're asking a simple question like, “Hey, here's a call SID, tell me what happened here.” [00:10:57] SF: Yes. And I imagine there's also some real time access to the data, where potentially, internally, you're okay with the data being an hour behind what's actually happening live. But as a customer, I want to be able to access this stuff in real time. So, there's also the challenge of essentially, how old is the data? 
I want to be able to access it now, versus a 30-minute delay or 60-minute delay that's maybe okay for internal people. [00:11:26] TW: Yes. I mean, imagine you're running a giant call center, and your giant call center is built on top of Twilio, and you've got that client embedded across 100 desktops in that call center, and something starts going sideways. You can't wait an hour to get the answer in terms of what's happening, right? You've got to get almost immediate or near real time access to that data to figure out, is it me? Or is it Twilio? That was something we had to make sure that we could deliver upon. So, in order for us to do that, we could not use something like a Redshift. That's just not going to fly. We couldn't use something like Looker. We're not going to embed Looker or Tableau into our console. One, it doesn't look anything like Twilio, and two, the responsiveness of that app or that embedded iFrame is going to be pretty horrible. So, we had to start building that infrastructure from the ground up, because there was really nothing else for us to utilize that we'd found that was commercially available or off the shelf. We had to stitch all of that stuff together. [00:12:27] SF: Where did you start with building that infrastructure? What was the makeup of the team when you first started this project? [00:12:35] TW: Yes. We started first with collection. We started to instrument all of the JavaScript clients that our customers would embed in their apps to send telemetry data back to Twilio. We had collection points essentially all over the world that would gather that data. That data would land on a bus like Kinesis, and then we would send that data, at that time, into something like S3, which eventually made its way into Redshift. What we had to do was break into that pipeline, right? We didn't change the front end. 
That stayed all the same: the collection, everything else like that, hit Kinesis. But then we added an additional consumer of that data, and that additional consumer started going into Elasticsearch. Only problem was, we didn't have any people that had ever operated or built Elasticsearch. So, long story short, the data engineering team wasn't able to help us out. What I ended up having to do was build more or less a kind of pseudo data engineering team inside of my product engineering teams, and those product engineering teams were running and operating the entire video, or in this case, the client infrastructure. I had to pull some people off there and say, “Okay, we've got to figure this out. How are we going to do it?” So, they came up with the idea of, “Hey, let's run Elasticsearch.” We had a little bake off, trying to do this with Spark. That really wasn't quite where we needed it to be. So, we figured out that we could run these Elasticsearch clusters, and we could actually start to land that data and ask questions about it to help satisfy debugging of some of our customer issues. That was just the first part. At that point in time, we hadn't even really thought about Insights yet. So, we were doing a lot of this upfront work to help us satisfy those inbound support questions before we even thought about building out Insights. As I'd said earlier, once our customers started coming to us more frequently, we started seeing the cost and the overhead of answering those questions. It was like, “Okay, we've got to build that product.” Now, I've got only like two people running this infrastructure. We ended up hiring probably another three or four, plus a product manager. 
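The pipeline change Tyler describes earlier in this answer, leaving the producers and the existing S3/Redshift path untouched and simply attaching an additional consumer to the stream, is a classic fan-out pattern. Here is a minimal sketch of that idea in plain Python; this is an illustrative simulation, not Twilio's actual code, and the record fields and consumer names are invented.

```python
# Sketch of stream fan-out: the telemetry stream keeps its original
# consumer (standing in for the S3 -> Redshift path) and gains a second,
# independent one (standing in for the Elasticsearch indexer) without
# any change to the producers.

def fan_out(records, consumers):
    """Deliver every record to every registered consumer independently."""
    for record in records:
        for consume in consumers:
            consume(record)

warehouse_batch = []   # hypothetical stand-in for the warehouse landing zone
search_index = {}      # hypothetical stand-in for the search index

def land_in_warehouse(record):
    warehouse_batch.append(record)

def index_for_search(record):
    # Key by call SID so a support query can look up a single call fast.
    search_index[record["call_sid"]] = record

telemetry = [
    {"call_sid": "CA123", "jitter_ms": 80},
    {"call_sid": "CA456", "jitter_ms": 5},
]

fan_out(telemetry, [land_in_warehouse, index_for_search])
```

The point of the pattern is that each consumer is independent: adding the search indexer did not require touching the existing warehouse path.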
So now I'm probably up to six people to say, “Okay, we've got this data, we've got to continue to operate this inside of my organization, and now bring it all the way out to those customer facing analytics in the form of dashboards and insights that we can deliver.” That took, essentially, a fully staffed team and a number of months, if not a year, to get from inception all the way to delivery. [00:15:08] SF: Wow. What was the challenge or limitation you ran into with Spark? [00:15:14] TW: I think at the time, it was really hard to operate. Spark was still somewhat in its infancy. It hadn't been around all that long, so we're talking probably like 2015, somewhere in there. I think what they ended up finding was that Elasticsearch gave us the query ability and the responsiveness that we wanted, without having to spin up and run these Spark jobs. We ended up bringing Spark on later on. Some of the other folks came in, and what they would use Spark for was to preprocess data and then land it into another Elasticsearch index. When we needed to preprocess things and say, “Hey, I want to window my data by day, by hour, by week, by month,” they were using Spark to do a lot of that processing, and then dropping that back into an Elasticsearch index. But in those early days, we found that Elasticsearch was really giving us the responsiveness we wanted. The API was pretty easy for us to use. We could write that mid-tier layer that would connect to Elasticsearch and translate REST API calls into those Elasticsearch queries, and it gave us the responsiveness that we wanted. [00:16:28] SF: You mentioned that this took a multi-person engineering team probably a full year to completely develop: the infrastructure, the data management, and then the actual UI for bringing up these dashboards as something that a customer can interact with. 
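The preprocessing job Tyler describes, windowing raw events by day, hour, week, or month and landing the rollup in its own index, was done in Spark at Twilio. As a rough sketch of the same rollup logic in plain Python (not Spark, and with invented field names), it amounts to bucketing events by window and pre-aggregating:

```python
from collections import defaultdict
from datetime import datetime

def rollup_by_day(events):
    """Aggregate raw call events into per-day counts and average jitter,
    the kind of pre-aggregated rollup a batch job would land in a
    separate, query-optimized index."""
    buckets = defaultdict(lambda: {"calls": 0, "jitter_total": 0.0})
    for e in events:
        day = datetime.fromisoformat(e["ts"]).date().isoformat()
        buckets[day]["calls"] += 1
        buckets[day]["jitter_total"] += e["jitter_ms"]
    return {
        day: {"calls": b["calls"],
              "avg_jitter_ms": b["jitter_total"] / b["calls"]}
        for day, b in buckets.items()
    }

events = [
    {"ts": "2015-06-01T10:00:00", "jitter_ms": 40.0},
    {"ts": "2015-06-01T11:30:00", "jitter_ms": 20.0},
    {"ts": "2015-06-02T09:15:00", "jitter_ms": 10.0},
]
daily = rollup_by_day(events)
```

Serving dashboards from pre-aggregated windows like this, rather than scanning raw events per query, is what buys the sub-second responsiveness a customer facing product needs.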
That’s a significant investment for a project like this. Was the commitment from Twilio, or the thought process there, that they really believed this was something that would bring a tremendous amount of value to customers, so it was worth putting that level of investment into something like this? [00:17:03] TW: In the beginning, I would say we kind of went rogue and just snuck it in, because we needed to better operate our systems, and we needed to better operate and troubleshoot when our customers were having issues. I'd faced similar problems back at Skype and had done it a different way. We ended up burning up Postgres databases in a major way, just because of the amount of data that we had, and we never got the responsiveness we wanted. So, having learned from that, it was like, “Okay, I have to have this infrastructure in place in order to troubleshoot a global network.” In the beginning, I just brought it in. I was like, “I can't operate without this.” So, we brought it in under the cover of darkness and said, “Hey, we've got these magical tools that let us answer questions when our customers are having problems.” Then what ended up happening is I started seeing Kibana dashboards or Kibana graphs landing inside of Slack channels, and those Slack channels were customer support channels. Then, people started asking questions like, “Well, why are we taking snapshots of Kibana dashboards and sending them to support, and support is just turning around and sending that off to the customer? Shouldn't that just be in the hands of the customers?” Then it becomes, okay, this can vastly reduce our support overhead. This can allow our customers to better understand their usage and how the platform is performing for them. 
It created stickiness, and once we got a few customers out there on it, they gained confidence in the platform and its reliability, so that they could expand their offerings. It also didn't hurt when Zendesk, one of our largest customers, came to us and said, “If you don't figure this out, we're going to leave you.” We just had to figure it out. Then, it became this thing of, okay, great, we've got client insights. Now, let's turn that into voice insights. And then as soon as our customers started seeing that, it was like, “Well, why don't you have a messaging insights? Where's my chat insights?” So, all the teams, at some point, were like, we have to build an insights product, but there was no platform for that. Of course, they showed up in my organization, like, “Hey, can we follow your lead? Or can we jump on your infrastructure?” We ended up letting one team do that, and we finally had to tell them, “Look, you're killing us. We're having to operate these systems at our scale and your scale, and we can't do it. So, either you've got to go run your own cluster, or you've got to give us a bunch of people.” They chose to go run their own cluster and more or less carbon copy that infrastructure, but now under their own organization. [00:19:44] SF: Yes, makes sense. So, it sounds like you had some strong customer signals that this was something that would actually be adopted and used. Then, of course, you had the additional pressure from Zendesk that they were going to leave if you didn't figure this out – [00:19:59] TW: That was pretty helpful, yes. [00:20:00] SF: Yes, exactly. Strong motivator. [00:20:02] TW: Strong motivator to put it on your roadmap and deliver it, right? [00:20:07] SF: Yes. So, building a lot of this stuff, figuring out this stuff from scratch, essentially. 
And also, dealing with probably significant scale, even at Twilio at that point. What are some of the lessons that you learned from an engineering perspective with building this thing from scratch? [00:20:27] TW: Lesson one, I would say: this stuff is hard. This stuff is hard. If it's not your core competency, it's going to be even harder. It also becomes hard from an organizational standpoint, because you end up building this data engineering team inside of your product engineering teams. I wouldn't say they're necessarily fundamentally at odds with each other, but it's a different skill set that I have to now bring in, and it's like, okay, how do I get these folks to cohesively exist and deliver these back end analytical products, but at the same time put that data in the hands of customers? That was challenging, because I'm going to my management and saying, “Hey, I need engineers that can run Elasticsearch.” It's like, but you're running the voice client, or you're running video. What does that have to do with WebRTC? Or what does that have to do with communications? It's like, well, it does, because that's all the infrastructure that now powers our Insights, and that Insights is giving us that stickiness. So, there were definitely some organizational challenges there. I think the other part is thinking about data products. So, when you're thinking about data products, if I run a query and say, “I want to understand something that's broken down by week,” and I just get back the results, should I return null if that week's empty? Or should I do something called zero filling? In the case of Propel, we do that now. It's like, well, I don't want my front-end developers thinking about that. I want the API to handle all of that stuff. Again, it's a different mindset that we have to bring into an organization that wasn't necessarily designed or set up for that. That definitely posed some challenges. 
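The zero filling Tyler mentions is simple but easy to get wrong: instead of omitting empty windows or returning null, the API emits an explicit 0 for every window in the requested range, so chart code never has to special-case gaps. A minimal sketch in Python (window and field names are made up, and this is not Propel's actual implementation):

```python
from datetime import date, timedelta

def zero_fill(series, start, end, step_days=7):
    """Return one data point per window between start and end inclusive,
    substituting 0 for windows that have no data, so the front end can
    plot the result directly without handling missing buckets."""
    out = []
    current = start
    while current <= end:
        key = current.isoformat()
        out.append({"window": key, "value": series.get(key, 0)})
        current += timedelta(days=step_days)
    return out

# Weekly counts with one empty week in the middle.
sparse = {"2024-01-01": 12, "2024-01-15": 7}
filled = zero_fill(sparse, date(2024, 1, 1), date(2024, 1, 15))
```

Doing this server-side, as Tyler argues, keeps the contract clean: every front-end consumer gets a dense, uniformly spaced series instead of each one re-implementing gap handling.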
Big lesson learned: if you're going to do this at scale, do it once and do it right, and make a platform out of it. At one point, we had talked about doing that at Twilio. But I just think the appetite wasn't necessarily there at the time, and we ended up having to staff these mini data engineering teams inside of all of the big product organizations, at, I would say, tremendous expense. So, lesson learned: if this stuff is not core to your business, either find something, somebody, someplace, or some product that's going to do it for you, or make the investment, like a LinkedIn or a Facebook, and go build it for real. SF: Yeah. So, do you think that people end up building these bespoke solutions because initially they either underestimate how hard this is going to be, or the complexity, or the early day requirements feel fairly simple? So they start out with, “Okay, well, we only have a couple of requirements, and maybe we don't have huge scale issues, so we can do something fairly simple.” Then, of course, the business scales, the requirements increase, and now they're kind of always going back and reworking it. Then suddenly, you have, like you said, a 30-person team working on this thing that is not even core to your business. [00:23:34] TW: Yes. So, let's just go two years ago, two and a half years ago. I'd say it starts like this. I built my company, I'm getting a little – I have some customers. My customers want access to some data. I'm just going to go query a production database, because what can happen? So, somebody goes and starts querying the production database, they start running analytical workloads against it. They take something like Chart.js or whatever else, they slap it into their console, and you've got your stage zero of customer facing analytics, and that works okay for a while. 
But as soon as you start to gain any level of growth or traction, running analytical workloads against your transactional databases, I mean, what can possibly go wrong there? Next thing you know, somebody goes and runs a big query. It's unconstrained, like, hey, I want to see this analysis from now until the beginning of time, and it just knocks over your transactional database, and now your entire product is down and you're in a really bad place. Once they realize that, or maybe they've done this before so they don't do that, they start setting up their pipelines, and the pipelines are great, kind of similar to what we did. I'm collecting data from a bunch of places, it's going on Kafka, it's going on Kinesis. Maybe I'm landing it in my warehouse, and I'm doing my analytics on it. Awesome. BI is happy. Marketing is happy. Sales is happy. Now, I want to deliver that product. I sit down and look at that and I'm like, “Okay, so we're going to connect to my warehouse.” Okay, I'm going to connect to my warehouse, I'm going to create that application service, I'm going to put an API on top of that. I've got to deal with pagination. I've got to deal with API access control. I've got to deal with performance. You can see, it starts to grow in scope very, very fast. Oh, I've got to go hire a product manager for this. You start to go, “Oh, man, this is increasing in complexity, and it's increasing in the expense and investment that I have to make.” Then the third stage is what we're trying to solve: why should this not be a platform? Why should this not exist as something where you don't have to do all of that stuff, where you don't have to make that heavy lifting investment? 
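The "from now until the beginning of time" failure mode Tyler describes at the top of this answer is commonly guarded against at the application layer, before a query ever reaches the database. A minimal sketch of such a guard (the specific limits here are arbitrary, chosen purely for illustration):

```python
from datetime import datetime, timedelta

MAX_RANGE = timedelta(days=90)   # arbitrary cap for illustration
MAX_ROWS = 10_000                # arbitrary cap for illustration

def validate_analytics_query(start, end, limit):
    """Reject unbounded analytical queries that could knock over a
    transactional database also serving production traffic."""
    if start is None or end is None:
        raise ValueError("an explicit time range is required")
    if end - start > MAX_RANGE:
        raise ValueError(f"time range exceeds {MAX_RANGE.days} days")
    if limit is None or limit > MAX_ROWS:
        raise ValueError(f"row limit must be set and at most {MAX_ROWS}")
    return True

# A bounded query passes; an all-of-history query is rejected.
validate_analytics_query(datetime(2024, 1, 1), datetime(2024, 2, 1), 500)
```

Guards like this only contain the damage, though; the real fix, as the conversation goes on to say, is moving analytical load off the transactional database entirely.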
Why not come to something that is going to allow me to utilize my data engineering team for what they're good at, and use my product engineering team for what they're good at, empower them with an API between the two, create this very nice contract, and then allow them to deliver what they need from a customer facing analytics standpoint? Of course, another route that I've seen people take is, well, I'll just do embedded Looker. I'll do embedded Tableau. Maybe that's the better third option, right? So, I'm going to say, I've got it in my warehouse, I will just take that little constrained iFrame that doesn't look like my console, drop it in there, and see what it does for me. What we have customers coming to us with is, okay, this doesn't work. This does not meet – my product managers are mad. My customers are mad. I need to replace this with something else. [00:26:37] SF: Yes. I faced this exact same issue during my time at Google, with essentially people wanting to take shortcuts and embedding Data Studio into the console. And you just have a not great experience from a customer's perspective. It doesn't feel – [00:26:54] TW: It doesn't feel native. [00:26:55] SF: Yes, it doesn't feel native, essentially, because of limitations of the platform. So, given you had all this experience and pain and suffering going through this process at Twilio, and that's led you to found Propel, I think this is a good place to start to dig into the work that you're doing there. What is Propel? And how does it help companies solve some of these problems? [00:27:22] TW: Yes. Propel largely is what we wish we had when we were building Insights, when we were building client insights and voice insights. For us, we view it as a data applications platform. 
So, this is an API that sits on top of your data, whether that data exists in Snowflake, whether it exists in BigQuery, whether it exists in S3, or whether it's coming to us in the form of webhooks. It's an API on top of that data that empowers your developers to quickly and easily build customer facing analytics into their applications. [00:28:02] SF: Obviously, you worked on this problem during your time at Twilio, but in order to go from that to founding a company, how did you validate that this idea was something that people were actually willing to pay for a solution to? Because I think one of the first challenges with building any developer facing tool is, can we actually get developers or organizations to pay for this thing, versus them going off and trying to build it themselves? [00:28:28] TW: Yes. So, the first one was, when Zendesk told us that and we ended up building it and delivering it, they then turned around and said, “How the hell did you guys do this? We need to do this, too.” That was point number one. Then, just after my co-founder, Nico, left Twilio, I was actually still there. He started making the rounds with other founders, other CEOs, other CTOs, pretty much calling on everybody and anybody that he could, presenting the problem statement of, “Hey, how do you deliver customer facing analytics? Do you know what customer facing analytics are? Do you need these things?” Largely what he got back, I would say, probably from everyone, was: we need it. It keeps getting kicked down the road. It's on our roadmap. It's been three quarters and we've never delivered it. We can't make the investment. We don't have the team to do it. How do we do this? How do you do it at Twilio? That largely was the catalyst for him to say, “Okay, this is real. This needs to be a company.” We kind of knew that, I would say, inherently from having solved this problem at Twilio. 
But getting that next level of validation through interactions, through Q&A with people that are actually facing this problem in the real world with their own companies, was obviously largely validating, and it resonated pretty heavily with our investors as well. That paved the path for us to go and do it. [00:29:57] SF: Yes. It feels to me, and feel free to correct me if I’m wrong, but it feels like this is kind of a new category of product. I would imagine the sort of main competitor you're going up against in a deal is, well, we're going to go build something ourselves. But so – [00:30:14] TW: I agree, totally. I mean, I 100% agree, because everybody in the beginning would ask, “Okay, is your main competitor Tableau, Looker, the embedded solutions?” My response is largely no. My main competitor is building it yourself. It's always the build versus buy. Because as I start to talk through and look at what the Lookers and the Tableaus of the world solve, I think it's different. It's purpose built for a different audience and a different consumer. What we're trying to build is a different category all on its own. That is, you know, enabling those developer teams, those product engineering teams, to have ownership in the data that they're producing, and get it out of those silos and into the hands of their customers where it's needed. [00:31:02] SF: Yes, absolutely. I see it as a completely different product myself. So, given that that is the case, where it's certainly a new category, are you facing, essentially, an educational challenge as well, to teach the market that something like this exists? [00:31:20] TW: Absolutely, 100%. Our go to market strategy is largely a ton of content and education around the space. A lot of content around, here's how to solve this problem. Here's our point of view of how to solve this problem and how to solve it the right way. Here's why build is very expensive. Trust us. 
We've done it. We've had to beg for the headcount and beg for the budget to do these things before. I think it's called the Challenger Sale, where you're out there having to do a lot of education at first, and that's fine. When you're trying to build a new category, that takes a lot of time and effort. It's not something that just happens overnight, and it's a lot of hard work to create that mindset shift in the community of developers from, "Hey, I'm just going to go build it," to, "Oh, this is a much better solution. This is a much easier solution. This actually frees me up to do more inside of my organization against my backlog, because now I put my data here, and my data is still being governed and managed by my engineering team. They're still doing the same type of work they would be doing anyway. But now I can actually ship all these cool products off of it." [00:32:33] SF: Yes. So, it sounds like to use Propel, it's not necessarily build versus buy. It's build versus buy-and-maybe-build-a-little-bit. It's a platform and API. So, can you walk me through how I go about actually getting started with Propel and integrating it into my existing infrastructure? [00:32:54] TW: Yes, absolutely. Obviously, the first thing is you've got to have data. I don't think there's any modern company today that doesn't have more data than they know what to do with. Then, we can start with the first integration point that we delivered, and that was for Snowflake customers. Snowflake today does not have an API over the data inside of Snowflake, right? You have to do a JDBC connection. You have to somehow connect to it and pull that data out. So, let's start with that. Let's say you're an existing organization today. You're landing your most valuable data inside of Snowflake, but you want to get that into the hands of your customers.
You would come to us, you would create an account, and you would create what we call a data source. That data source would then connect to your Snowflake. We would then bring that data into Propel, so that we can serve it fast, with high concurrency and reliability. Then, you automatically have that API on top of it. From that API, you can start to create things like metrics. You can start creating things like time series and counters. You can use things like our Reports API. So, you get this whole analytical toolbox that allows you to interact with that GraphQL API and build it directly into your console. Then, if you so choose, you can also say, "Hey, I want to use Propel's UI components. I don't want to write my own time series component. I don't want to write my own counter component." You can take our React components, embed those directly, and you've got all the visualizations handled for you. It all starts with the data, right? Again, Snowflake was first and foremost; that's what we started with. But say you're landing that data in S3 in Parquet format, we can pull that in as well. Say you're generating a bunch of events. Say all your data is going into Dynamo, in the form of these nice JSON events. Well, you can take something like AWS Glue, transform that into Parquet, drop it in S3, and we'll pick it up. Maybe you're using something like Kafka or Kinesis as your event bus, or AWS EventBridge. You can have that fire off to our webhook, we can consume it, and all the same APIs are there for you. [00:35:06] SF: Can you build custom connectors as well? [00:35:10] TW: Not yet. Not yet. We chose Snowflake for the ecosystem; there was definitely a lot of momentum around Snowflake.
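The flow Tyler describes (connect a data source, let it sync, then query metrics over a GraphQL API) might look roughly like this from the application side. This is a minimal sketch under assumed names: the endpoint URL, the `timeSeries` field, and the metric identifier are illustrative assumptions, not Propel's documented schema.

```typescript
// Minimal sketch of querying a time-series metric over a GraphQL API.
// The endpoint, field names, and metric name below are illustrative
// assumptions, not Propel's documented schema.
const TIME_SERIES_QUERY = `
  query RevenueOverTime($metric: String!, $granularity: String!) {
    timeSeries(metric: $metric, granularity: $granularity) {
      labels
      values
    }
  }
`;

async function fetchTimeSeries(
  metric: string,
  granularity: "DAY" | "HOUR"
): Promise<{ labels: string[]; values: number[] }> {
  const res = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // A real API would require an auth token here.
      Authorization: `Bearer ${process.env.API_TOKEN ?? ""}`,
    },
    body: JSON.stringify({
      query: TIME_SERIES_QUERY,
      variables: { metric, granularity },
    }),
  });
  const { data } = await res.json();
  return data.timeSeries;
}
```

The returned labels and values could then be fed into a charting component such as Chart.js, or skipped entirely in favor of pre-built React components, as Tyler mentions.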
We said, "Okay, we're going to build the first one." Maybe someday we'll get there, to where you could build your own connector. But you get so much flexibility with the webhook, and the webhook events are just JSON. We have found that a lot of our customers who understand event-driven architecture and are using events can't really do anything with that data, and don't want to go through the hassle of transforming it or doing anything else. It's like, all right, we can send it to Propel, and now we can actually build insights or analytics on top of it. [00:35:48] SF: If I'm using, say, the Snowflake connector, what is the latency between the data hitting my Snowflake instance and being available in Propel? [00:35:57] TW: Largely, that's dependent upon the size of your Snowflake warehouse. Let's say you've got a billion rows sitting in your Snowflake warehouse. The first time, we're going to pull all of that historical data, if you so choose, into Propel. Then, we're going to sync that data on a configurable interval. You can say, "Hey, I want this to sync every minute. I want it to sync every hour. I want it to sync every 24 hours." Largely, what we'll see is that the first hydration depends on the size of the data and the size of the warehouse. So, that can take anywhere from minutes to hours. Then, once you're constantly updating that data, you're talking minutes, depending on the number of rows. We have a customer that, I think, every hour is bringing in three to five million rows of data, and that takes on the order of minutes to get refreshed into Propel. [00:36:58] SF: Okay. I see. [00:36:59] TW: So, it's not like a Datadog. It's not that level of latency. There is definitely a little bit of delay.
You're not going to build your observability stack for the real-time operation of your services on top of this. This is going to be more for things where there can be a slight delay, because it doesn't necessarily need to be real time in nature. [00:37:29] SF: What if I've already built something in-house? How do I go about migrating it, or building everything onto Propel? I'm assuming that is a use case you must deal with as you bring on customers that have already built something in-house. [00:37:50] TW: Yes. The question is always, okay, where's your data? If you come to me and say, "Hey, my data's in Snowflake. My services are connecting directly to Snowflake. I'm tired of looking at what my bill is. How do I get this out of here?" They could just come to us, and we would ingest that data. They choose which tables they want, and then we bring that data into Propel. They've probably already got their Chart.js components that are rendering that data somehow. We would say, "Okay, you don't have to run that API layer anymore. You don't have to run that middle tier that manages the connectivity into Snowflake. You can get rid of that entire service. You can get rid of that PagerDuty schedule. You're going to swap in our GraphQL APIs." I think they're super developer-friendly. They make it very easy for you to query that data. You can run 50 queries at once in a single request, if you so choose. I think it becomes much easier. You get away from the REST APIs that we see a lot of people doing, where it's like, okay, I make one request, make another request, make another request. Well, what if I have to do that 25 times? I think we vastly simplify that infrastructure, and we vastly simplify the creation of their customer-facing analytics by swapping that out.
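The "50 queries at once" point is a general property of GraphQL: one document can carry many aliased fields, so a dashboard that would otherwise need N REST round-trips can be served by a single request. A sketch, with a hypothetical `counter` field and metric names:

```typescript
// Sketch: batching several analytics queries into one GraphQL document
// via aliases, instead of N sequential REST calls. The `counter` field
// and metric names are hypothetical, not a real schema.
const metrics = ["signups", "revenue", "activeUsers"];

// One alias per metric, e.g.  m0: counter(metric: "signups") { value }
const batchedQuery = `query Dashboard {
${metrics.map((m, i) => `  m${i}: counter(metric: "${m}") { value }`).join("\n")}
}`;

console.log(batchedQuery);
```

A GraphQL server resolves all of the aliased fields and returns them in a single response keyed by alias (`m0`, `m1`, `m2`), which is what lets one request replace the 25 sequential REST calls described above.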
[00:39:07] SF: What have been some of the biggest technical challenges that you've had to overcome, taking these lessons learned from Twilio and building a platform that essentially anybody can build on top of and integrate with? [00:39:21] TW: Let's see, learning Kubernetes. Learning Kubernetes under fire. We did not operate containers at Twilio, at least when I was there; containers were sort of on their way. We run our data tier and a number of other things inside of containers. That took us some time to figure out and really understand, and it's one of those things where I don't think you ever stop learning. Then, learning about all of the little things you need to do, from a data API perspective, to make your developers' lives easier. That was a lot of learning, and a lot of talking to developers as they started to interact with the API and said, "This is great. But man, it would be a lot easier if you did this." Then, obviously, always figuring out scale: how do I run a multi-tenant infrastructure and ensure that no one customer can take out the entire cluster? A lot of those lessons were hard-learned, or hard-fought, at Twilio, where we ran massive multi-tenant infrastructure, so we were able to bring those over and apply some of the same ideals and principles. But it's still kind of new tech for us, right? This wasn't communications tech. This was us running these different analytical data stores and trying to run them at scale. [00:40:46] SF: What about as a founder? I know this is not the first company you've founded. But what are some of the things that have perhaps surprised you, going from working at Twilio to now being the technical co-founder of Propel? [00:41:03] TW: I mean, having to do everything, in a sense, right? No one day is the same. Being at Twilio for seven and a half years, I feel like I was spoiled, because of the infrastructure and the support that we had in every aspect.
I'm sure Jeff and everybody went through that and put in the foundational work to ensure it was there for people to be successful. But coming in now, either purposefully or through some selective amnesia, you forget about everything that has to be done as a founder. For me, I like it. I think it keeps things very interesting. I can go from one meeting where we're discussing a highly technical implementation of something, or why something is not behaving the way we want, to talking about the language and the copy that we want to use on the first page of our website, to then hopping over to dealing with maybe a tax issue, because something got screwed up and I've got to deal with Gusto or HR systems. It's largely about remembering there's no one else to do it. You can't just go out and hire for it, because cash is a finite resource, and probably our scarcest resource, at least right now. So, you can either let it not get done, which could hurt or harm your company, or more importantly, one of your customers. Or you just get it done and figure out a way to do it. You don't have a choice. [00:42:45] SF: Yes. I was a founder of a company at one point, and I remember when I joined Google and people would complain about having to be on call one week a quarter or something like that, I was like, "I was on call for seven years." Because that's basically what you're taking on when you're a founder, especially in the early days, where you just don't have people to do everything. You're doing everything from writing the code, to doing the website copy, as you mentioned, to taking out the garbage, back when we used to have offices. [00:43:20] TW: The example I'd give is: I'm going to do everything from sell the product, to write the code, to put the desks together if I have to.
Obviously, now we're remote, so I don't have to put any desks together except for my own. But you're running the gamut of anything and everything that can be done. As a founder, at one point you will probably have done it. [00:43:39] SF: What's the makeup of the team at the moment? [00:43:43] TW: We are 100% remote. We are as far west as Los Angeles, which is where one of my co-founders, Nico, is, and as far east as Berlin, where my other co-founder, Mark, is. Then in between, we're in Colombia, Brazil, Colorado, New York, and Madrid. [00:44:02] SF: Wow. You've got a lot of time zones covered there. [00:44:04] TW: A lot of time zones. But one thing that's definitely nice is that, yes, I do carry a pager, like you said you did for seven years. But having a co-founder in Berlin gives me the follow-the-sun model pretty well. We split it into 12-hour chunks, which is nice. So, I'm not getting paged at two in the morning, which would be very reminiscent of my Twilio days. I'm very thankful for that. But I very much like the remote-first culture that we have today. We will see how long we can make it last. There's obviously a lot of discussion going on right now about whether remote is dead, whether remote is over. But I think at our scale today, we can continue to operate this way for quite some time. [00:44:49] SF: Fantastic. Then, as we start to wrap up, is there anything else that you'd like our audience to know? [00:44:56] TW: I think another thing to remember is that customer-facing analytics is more than just dashboards. Everybody gravitates there first, because that's the first thing they think about: "Oh, analytics, I've got to go build a dashboard." Maybe a dashboard is not the best thing to build, depending on what type of question you're trying to satisfy.
So, when I think about customer-facing analytics from the standpoint of Propel, since we are an API, we offer the flexibility to satisfy the tenet that customer-facing analytics are way more than dashboards. They could be in-product analytics. They could be data APIs. They could be alerts over your data, or data you embed to make decisions internally, or internal applications you build. Because it's an API, you get all of that flexibility, and that was the biggest reason why we ended up building it that way. [00:45:50] SF: Yes, I think that's a great point. It's back to what you said at the beginning: one of the examples of customer-facing analytics was the like button on Facebook, which is not a dashboard in the traditional – [00:46:04] TW: Not a dashboard at all. Not even close. But if you think of the engineering and everything that has to go on behind that in order to satisfy the requirements of that like button, it's a lot more than just thinking of a dashboard. It's a lot more than, "Hey, I'm querying a database." It's a lot more than, I'm just going to land this data someplace, stand up a couple of services, and magically it works. [00:46:29] SF: Yes, absolutely. Well, Tyler, thanks so much for being here. I really enjoyed reminiscing about your time at Google – not Google, sorry, at Twilio – and all the things that you built there, the lessons learned, and how you transferred that knowledge to Propel, which inspired you to create the company. It sounds like a really interesting platform that I'm definitely interested in checking out myself. [00:46:54] TW: Thank you so much. I really appreciate the time and getting to talk about it. It was a lot of fun. I hope more people think about us when they're thinking about their customer-facing analytics problems or building their data applications, and start to change that mindset away from building and come talk to us.
Thanks again. [00:47:16] SF: Absolutely. Thanks. Cheers. [00:47:19] TW: Cheers. [END]