Streaming Architecture with Ted Dunning

Streaming architecture defines how large volumes of data make their way through an organization. Data is created on a user’s smartphone or by a sensor inside a conveyor belt at a factory. That data is sent to a set of backend services that aggregate and organize it, making it available to business analysts, application developers, and machine learning algorithms.

The velocity at which data is created has led to widespread use of the “stream” abstraction: a never-ending, append-only array of data. To handle this volume, streams need to be buffered, batched, cached, MapReduced, machine-learned, and munged until they are in a state where they can provide value to the end user.
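To make the abstraction concrete, below is a minimal sketch of an append-only stream with offset-based consumers. All names here are illustrative, and the in-memory list stands in for the durable, partitioned log that real systems such as Apache Kafka or MapR Streams provide.

```python
# A minimal sketch of the "stream" abstraction: an append-only sequence
# that producers write to and consumers read from by offset. Names and
# the in-memory list are illustrative, not from any real streaming system.
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class Stream:
    """A never-ending, append-only array of records."""
    records: List[Any] = field(default_factory=list)

    def append(self, record: Any) -> int:
        """Producers only ever add to the end; nothing is updated in place."""
        self.records.append(record)
        return len(self.records) - 1  # offset of the new record

    def read(self, offset: int, max_records: int = 100) -> List[Any]:
        """Consumers read a batch starting at an offset they track themselves."""
        return self.records[offset:offset + max_records]


# Usage: a factory sensor produces readings; a consumer batches them.
stream = Stream()
for temp in (21.4, 21.9, 22.3):
    stream.append({"sensor": "belt-7", "temp_c": temp})

offset = 0
batch = stream.read(offset)
offset += len(batch)  # the consumer advances its own cursor
print(batch)
```

The key property the sketch illustrates is that the stream itself is never mutated from a reader’s point of view; each consumer only tracks how far it has read, which is what lets many independent readers (analysts, applications, learning algorithms) share the same data.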

There are numerous ways that data can travel this path, and in today’s episode we discuss the streaming systems, data lakes, and data warehouses that can be combined into an architecture built around streaming data. Ted Dunning is the chief application architect at MapR, and he joins the show to discuss the patterns that engineering teams are using to build modern streaming architectures. Full disclosure: MapR is a sponsor of Software Engineering Daily.

Meetups for Software Engineering Daily are being planned! Go to softwareengineeringdaily.com/meetup if you want to register for an upcoming Meetup. In March, I’ll be visiting Datadog in New York and HubSpot in Boston, and in April I’ll be at Telesign in LA.

Summer internship applications to Software Engineering Daily are also being accepted. If you are interested in working with us on the Software Engineering Daily open source project full-time this summer, send an application to internships@softwareengineeringdaily.com. We’d love to hear from you.

Transcript

Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily.

Sponsors


There’s a new open source project called Dremio that is designed to simplify analytics. It’s also designed to handle some of the hard work, like scaling the performance of analytical jobs. Dremio is the team behind Apache Arrow, a new standard for in-memory columnar data analytics. Arrow has been adopted across dozens of projects, like Pandas, to improve the performance of analytical workloads on CPUs and GPUs. It’s free and open source, designed for everyone, from your laptop to clusters of over 1,000 nodes. At dremio.com/sedaily you can find all the necessary resources to get started with Dremio for free. If you like it, be sure to tweet @dremiohq and let them know you heard about it from Software Engineering Daily. Thanks again to Dremio, and check out dremio.com/sedaily to learn more.



Azure Container Service simplifies the deployment, management and operations of Kubernetes. Eliminate the complicated planning and deployment of fully orchestrated containerized applications with Kubernetes. You can quickly provision clusters to be up and running in no time, while simplifying your monitoring and cluster management through auto upgrades and a built-in operations console. Avoid being locked into any one vendor or resource. You can continue to work with the tools you already know, such as Helm, and move applications to any Kubernetes deployment. Integrate with your choice of container registry, including Azure Container Registry. Also, quickly and efficiently scale to maximize your resource utilization without having to take your applications offline. Isolate your application from infrastructure failures and transparently scale the underlying infrastructure to meet growing demands—all while increasing the security, reliability, and availability of critical business workloads with Azure. Check out the Azure Container Service at aka.ms/sedaily.