Episode Summary: AWS Analysis with Corey Quinn
Amazon Web Services changed how software engineers work. Before AWS, it was common for startups to purchase their own physical servers. AWS made provisioning server resources as simple as making an API request, and it has gone on to create higher-level abstractions for building applications. In this episode of Software Engineering Daily, we talked to Corey Quinn, Chief Cloud Economist at The Duckbill Group.
Making technical choices can be overwhelming. As software developers, we have to make many choices from what seems like an unlimited set of options. We choose the programming language, libraries, compute, database, schema, and much more. AWS alone offers roughly 200 services from data centers across the globe.
When you start with AWS, it seems simple. But as your infrastructure grows, visibility becomes something you need to take very seriously.
“Large companies are generally used to the historical data centre model, where you would wind up building things on a capital expense basis. You would plan out your data centre build-outs, and it’s super hard for a single engineer to accidentally order $6 million worth of hardware without getting fired or arrested. The new model though is that someone can inadvertently spin up that level of resource and not only not be aware of it, but no one is aware of that, for in some cases, years at a time. It is not at all transparent what’s happening in your environment,” says Corey.
Organizations also lose money by not purchasing reserved instances: teams are convinced they are going to turn off that cluster next week, and then months go by and it never gets turned off.
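To make that waste concrete, here is a minimal back-of-the-envelope sketch. The hourly rates and instance count below are hypothetical placeholders, not real AWS prices; only the 730-hours-per-month approximation matches AWS's own billing convention.

```python
# Illustrative on-demand vs. reserved-instance cost comparison.
# The rates below are hypothetical, not actual AWS pricing.

HOURS_PER_MONTH = 730  # AWS's standard monthly-hours approximation

def monthly_cost(hourly_rate: float, instance_count: int) -> float:
    """Cost of running instance_count instances for a full month."""
    return hourly_rate * HOURS_PER_MONTH * instance_count

on_demand_rate = 0.10  # hypothetical on-demand $/hour per instance
reserved_rate = 0.06   # hypothetical 1-year reserved $/hour (~40% discount)
cluster_size = 20      # the "temporary" cluster nobody turned off

od = monthly_cost(on_demand_rate, cluster_size)
ri = monthly_cost(reserved_rate, cluster_size)
print(f"on-demand: ${od:,.2f}/mo, reserved: ${ri:,.2f}/mo, "
      f"overspend if it quietly runs for a year: ${(od - ri) * 12:,.2f}")
```

If the cluster really were gone next week, on-demand would be the right call; the trap is that a "temporary" workload billed at on-demand rates for a year forfeits the entire reserved discount.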
AWS is effectively a microservices-driven company, which means that it has very small teams, known as two-pizza teams, each working on individual projects. Looking at the organizational layout of AWS, Corey points out, “The feeling I guess is closest to a bunch of internal startups that are competing for funding, for mind share, and they go through iterative rounds until something winds up getting released. At the end of an entire laborious process of iteration going through series of fundings, their “exit” is when someone at AWS gives the service a stupid name and launches it to the public.”
Open Source is not a business model
What does an open-source project have to do in order to succeed as a product company that might be competing with Amazon’s much cheaper, easier-to-sell hosted product?
“I would say give up because an open-source project is not a business model. It’s a means of development. It’s a means of community engagement. It’s a way of solving technical challenges, but there’s an enormous difference between that and having a viable, functional, healthy business.” says Corey.
Primitive AWS Services
If you manage or use AWS systems, you likely need to know at least a little about all of these. Even if you don’t use them, you should know enough to make that choice intelligently.
IAM: Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely.
EC2: Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud.
AMIs: Amazon Machine Images are the templates (operating system plus installed software) from which EC2 instances are launched.
CLBs, ALBs, and NLBs: Classic Load Balancer is intended for applications built within the EC2-Classic network. AWS recommends Application Load Balancer for Layer 7 traffic and Network Load Balancer for Layer 4 traffic when using a Virtual Private Cloud (VPC).
Autoscaling: Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost.
EBS: Elastic Block Store (EBS) is an easy-to-use, high-performance, block-storage service.
Elastic IPs: Elastic IP address is a static IPv4 address designed for dynamic cloud computing.
S3: Simple Storage Service (S3) provides reliable, fast, and inexpensive object storage.
Route 53: DNS and domain registration
VPC: Virtual Private Cloud provides virtual networking and network security; every modern AWS account launches resources into a default VPC automatically.
CloudFront: A content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
CloudWatch: Provides data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.
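As a concrete illustration of the IAM entry above, here is a minimal sketch of a least-privilege policy granting read-only access to a single S3 bucket. The policy grammar (`Version`, `Statement`, `Effect`, `Action`, `Resource`) is standard IAM, but the bucket name `example-app-logs` is a made-up placeholder.

```python
import json

# A minimal least-privilege IAM policy: read-only access to one S3 bucket.
# "example-app-logs" is a hypothetical bucket name.
policy = {
    "Version": "2012-10-17",  # current IAM policy language version
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-logs",    # bucket ARN (for ListBucket)
                "arn:aws:s3:::example-app-logs/*",  # object ARNs (for GetObject)
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Note the split in `Resource`: `s3:ListBucket` acts on the bucket itself, while `s3:GetObject` acts on the objects inside it, so both ARN forms are needed.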
One of the choices you need to make when spinning up a server is the operating system. There are various Linux distros, including Amazon Linux. Should you deploy on Amazon Linux? AWS advocates its own Amazon Linux, which is derived from Red Hat Enterprise Linux (RHEL) and CentOS. It is widely used, heavily tested, and better supported in the unlikely event you have deeper concerns with the OS and virtualization on EC2. But overall, many companies do just fine using a standard, non-Amazon Linux distribution, such as Ubuntu or CentOS. Using a standard Linux distribution means you have a replicable environment should you use another hosting provider instead of (or in addition to) AWS.
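One way to keep that environment replicable is to have provisioning scripts check which distribution they are actually running on before proceeding. A stdlib-only sketch, assuming a standard `/etc/os-release` file (the set of "supported" IDs below is illustrative):

```python
from pathlib import Path

def detect_distro(path: str = "/etc/os-release") -> str:
    """Return the distro ID from os-release (e.g. 'ubuntu', 'amzn',
    'centos'), or 'unknown' if the file is missing or has no ID field."""
    try:
        for line in Path(path).read_text().splitlines():
            if line.startswith("ID="):
                return line.split("=", 1)[1].strip().strip('"')
    except OSError:
        pass
    return "unknown"

# A provisioning script might refuse to run on an unexpected base image;
# this supported set is just an example.
SUPPORTED = {"ubuntu", "centos", "amzn"}
distro = detect_distro()
print(f"detected distro: {distro}, supported: {distro in SUPPORTED}")
```

Failing fast on an unexpected base image is cheaper than debugging a deploy that assumed the wrong package manager.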
Last Week in AWS
Corey also publishes the Last Week in AWS newsletter. “Posts about AWS come out over sixty times a day. The signal-to-noise ratio is abysmal. I filter through it all to find the hidden gems and the stuff worth reading, and share it with you – minus the nonsense,” says Corey on his website.
Listen to the full Software Engineering Daily conversation: AWS Analysis with Corey Quinn.