re:Invent in Review: Adrian Cockcroft, Abby Fuller, and Deepak Singh Discuss AWS

At AWS re:Invent, the Software Engineering Daily team spoke with AWS technologists Adrian Cockcroft, Abby Fuller, and Deepak Singh.

In 2018, AWS released products across numerous verticals. Some expanded mature categories such as machine learning and “serverless” tooling; others pushed into newer fields such as robotics, satellite technology, and advanced virtualization.

From Lambda to Kubernetes: The Spectrum of Container Runtimes

AWS Lambda helped spark the serverless revolution.

With serverless, everything is an event. AWS services themselves communicate through events and let you orchestrate resources using Lambda functions.

Adrian Cockcroft illustrated how Lambda can drive an event-driven architecture: “Where do you run the code to clean up after a resource [such as an EC2 instance] has disappeared?” One way to handle this is to trigger a Lambda function that tidies up EBS volumes, IP addresses, and other leftover resources. The Lambda function is triggered by the EC2 decommissioning event.
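Here is a minimal sketch of that pattern in Python with boto3, assuming a CloudWatch Events (EventBridge) rule that forwards “EC2 Instance State-change Notification” events to the function; the tag key used to find orphaned volumes is purely illustrative:

```python
import boto3

ec2 = boto3.client("ec2")


def handler(event, context):
    """Tidy up after a terminated EC2 instance.

    Assumes a CloudWatch Events / EventBridge rule that matches
    "EC2 Instance State-change Notification" events and invokes this function.
    """
    detail = event.get("detail", {})
    if detail.get("state") != "terminated":
        return

    instance_id = detail["instance-id"]

    # Delete detached EBS volumes that were tagged as belonging to the
    # instance. The "owner-instance" tag key is illustrative -- your own
    # tagging scheme and cleanup targets will differ.
    volumes = ec2.describe_volumes(
        Filters=[
            {"Name": "tag:owner-instance", "Values": [instance_id]},
            {"Name": "status", "Values": ["available"]},
        ]
    )["Volumes"]
    for volume in volumes:
        ec2.delete_volume(VolumeId=volume["VolumeId"])
```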

For many users, AWS Fargate is the best way to run long-lived containers. Fargate containers are standalone: they don’t require you to manage a Kubernetes cluster or a fleet of EC2 instances. If you want to integrate with larger container installations on AWS, Fargate containers can talk to Amazon ECS or EKS. If you want to retain some platform agnosticism, Fargate containers can also talk to any Kubernetes cluster through the Virtual Kubelet.
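As a sketch of what “standalone” looks like in practice, the snippet below launches a single Fargate task with boto3; the cluster name, task definition, and network IDs are placeholders for resources you would have created already:

```python
import boto3

ecs = boto3.client("ecs")

# Run one standalone Fargate task -- no EC2 instances or Kubernetes cluster
# to manage. All names and IDs below are placeholders.
response = ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",
    taskDefinition="web-app:1",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```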

When we look back at AWS re:Invent 2018 in a decade, we will be surprised by how much code we still had to write. Serverless is in the early days of eroding the low-level technical pains of software engineering.

We see this continued march toward higher levels in the new integrations for Step Functions. Step Functions lets users create workflows that orchestrate activities across services like DynamoDB and SageMaker, presaging the higher level of abstraction that serverless technology moves us toward.
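As a rough illustration of those integrations, the sketch below defines a one-state workflow that writes directly to DynamoDB through a Step Functions service integration, with no Lambda function in between; the table name, role ARN, and field names are placeholders:

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# A minimal Amazon States Language definition that records a result straight
# into DynamoDB via the putItem service integration. Names are placeholders.
definition = {
    "StartAt": "RecordResult",
    "States": {
        "RecordResult": {
            "Type": "Task",
            "Resource": "arn:aws:states:::dynamodb:putItem",
            "Parameters": {
                "TableName": "experiment-results",
                "Item": {
                    "experimentId": {"S.$": "$.experimentId"},
                    "outcome": {"S.$": "$.outcome"},
                },
            },
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="record-experiment-result",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsDynamoDBRole",
)
```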

As Amazon CTO Werner Vogels has said, in the future all of our code will be business logic.

AWS FIRECRACKER

The VMs that power Lambda functions have distinct requirements: fast spin-up to minimize cold starts, and resource isolation to ensure security and avoid noisy-neighbor problems.

Firecracker is an open-source virtualization technology for running microVMs, and it is the underlying technology for Lambda and Fargate. Firecracker provides resource isolation and security for virtualized workloads with minimal overhead. It is written in Rust because of the language’s emphasis on speed and safety.

Before Firecracker was developed, every Fargate task ran on its own EC2 instance. Instances take time to boot, and a single task often left much of an instance’s capacity unused. AWS couldn’t co-deploy tasks on the same instance because of security and isolation concerns.

Singh summarizes, “Firecracker allows us to provide you the level of isolation that we believe meets the bar that customers should be at and meet their expectations but allowing us to be more efficient and provide fast boots. You can launch hundreds of them in one go.”
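To make that concrete, Firecracker is driven through a REST API served on a Unix domain socket. The sketch below uses only the Python standard library to configure and boot one microVM; the socket, kernel, and rootfs paths are placeholders, and the request bodies follow Firecracker’s getting-started documentation:

```python
import http.client
import json
import socket


class FirecrackerConnection(http.client.HTTPConnection):
    """HTTP connection over the Unix socket a Firecracker process exposes."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)


def api_put(conn, path, body):
    conn.request("PUT", path, json.dumps(body),
                 {"Content-Type": "application/json"})
    return conn.getresponse().read()


# Assumes a Firecracker process was started with
#   firecracker --api-sock /tmp/firecracker.socket
# and that kernel and rootfs images exist at the placeholder paths below.
conn = FirecrackerConnection("/tmp/firecracker.socket")

api_put(conn, "/machine-config", {"vcpu_count": 1, "mem_size_mib": 128})
api_put(conn, "/boot-source", {
    "kernel_image_path": "/images/vmlinux",
    "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
})
api_put(conn, "/drives/rootfs", {
    "drive_id": "rootfs",
    "path_on_host": "/images/rootfs.ext4",
    "is_root_device": True,
    "is_read_only": False,
})
api_put(conn, "/actions", {"action_type": "InstanceStart"})
```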

Since Firecracker is open source, an ecosystem could develop around it, with other cloud providers contributing to a more efficient serverless virtualization layer.

APP MESH, A SERVICE MESH FOR AWS

If you’ve got a microservices architecture with lots of services calling each other, you need a way to handle routing, circuit breaking, and policy management. A service mesh provides a data plane and a control plane for instrumenting and directing the flow of data within your distributed system.

As Cockcroft explained, “back when I was at Netflix, we did have a service mesh but we were doing everything in Java and everything was open-sourced in libraries. The concept of a service mesh is that these libraries have all the instrumentation and traffic routing as part of that service mesh.”

How tightly you tie yourself to a specific cloud provider is now a wide spectrum of choices, and that spectrum extends to the service mesh layer. If you want to run everything on AWS, you can now use App Mesh, an Envoy-based service mesh. If you would rather take the time to run your own service mesh, you can run Istio or Linkerd, even on Amazon EKS.
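For a feel of what App Mesh manages, the sketch below uses boto3 to define a weighted route that shifts a small share of traffic to a new version of a service; the mesh, router, and virtual node names are placeholders, and the request shape is an assumption based on the App Mesh CreateRoute API:

```python
import boto3

appmesh = boto3.client("appmesh")

# Shift 10% of traffic to a new version of a service by defining a weighted
# route between two virtual nodes. All names here are placeholders.
appmesh.create_route(
    meshName="demo-mesh",
    virtualRouterName="orders-router",
    routeName="orders-canary",
    spec={
        "httpRoute": {
            "match": {"prefix": "/"},
            "action": {
                "weightedTargets": [
                    {"virtualNode": "orders-v1", "weight": 90},
                    {"virtualNode": "orders-v2", "weight": 10},
                ]
            },
        }
    },
)
```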

On Software Engineering Daily, we have covered service mesh in previous episodes about Linkerd, Istio, and Envoy.

Machine Learning and Robotics

During re:Invent 2017, AWS launched SageMaker, a fully managed machine learning service. SageMaker gave developers and data scientists accessible, scalable AI tooling. Since then, AWS’s machine learning efforts have extended deeper into hardware investments.

One of the main projects Adrian Cockcroft has been involved with over the past year is AWS RoboMaker, a service for deploying intelligent robotics applications. AWS brings its AI and robotics work together in AWS DeepRacer, a platform for running reinforcement learning experiments and racing the resulting models in a fully autonomous 1/18th-scale car. AWS is now looking at bringing the technology into high schools and universities as a way to teach machine learning.

AWS MOVING FAST: STAY TUNED

It was a big year for AWS at re:Invent, with many new announcements. Videos of the talks from re:Invent are available to watch online.

What does AWS have in store for us in 2019? If you are interested in more news and highlights of the latest tech, follow Software Engineering Daily as we feature more episodes on serverless, machine learning, cloud computing, and IoT.

Check out our previous podcast episodes with Adrian Cockcroft and Deepak Singh.

Erika Hokanson

My passion is scaling creative solutions to help people through technology. I am currently the Director of Operations and Sales at Software Engineering Daily.
