My Connected Car goes Serverless with AWS IoT

Connected everything is the buzz these days, and with it comes a variety of use cases, some of which will likely have a significant impact on our daily lives. The connected car is certainly one application of the Internet of Things (IoT) that many are watching, curious about how it will work. Meeting the system needs for this requires significant compute and storage resources that can scale instantly, to support the data exchanged between vehicles and the rest of the connected world. This blog post presents an example of how specific services from Amazon Web Services (AWS) can be assembled into a serverless solution.

Serverless refers to managed services that run your application code securely in virtual containers and consume whatever amount of resources is needed on demand. When the load decreases, those resources are released for someone else to use until you need them again. This way you have the compute and storage capacity you need, when you need it, and pay only for what you consume. The billing increment for compute resources (CPU and memory) with AWS Lambda is 100 milliseconds, so costs track actual usage very closely. Storage costs with DynamoDB follow a similar concept: you pay only for how much data you store and how fast you want to read and write it.

The goal for my project (Car2Cloud) is to connect my vehicle to AWS cloud services to track driving characteristics and potential vehicle faults.

The high-level requirements are to:

  • Access public cloud (AWS) connectivity
  • Build on a fully serverless architecture
  • Automate data collection and reporting

The diagram below shows the high-level architecture for my prototype: a small computer in the car collects and reports data to the AWS IoT service, which stores the data in DynamoDB. A simple static web site hosted in S3 accesses the data to provide visualizations.

The “device” computer is a Raspberry Pi 3 with a Bluetooth OBD-II interface and a USB GPS module. The board’s built-in Wi-Fi establishes internet connectivity through my home network, and an optional smartphone hotspot connection enables real-time reporting and tracking.

The headless device runs a Python program on startup that polls the vehicle’s engine control unit (ECU) for various data. Python-OBD is the library used to communicate with the Bluetooth dongle. Each vehicle supports a variable set of parameter IDs (PIDs), but basic ones include RPM, speed, etc. I set up polling every 500 milliseconds and combine that data with the GPS position information to form a complete JSON message (see the code sketch after the list below). Since the ECU doesn’t provide data when the vehicle is off, the ECU communication state is recorded. Different vehicle and network states are supported, which include:

  • Vehicle on, network connected (driving somewhere with a hotspot)
  • Vehicle on, network disconnected (driving somewhere)
  • Vehicle off, network connected (parked at home)
  • Vehicle off, network disconnected (parked somewhere away from home)
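
Here is a minimal sketch of the polling loop. The python-OBD calls are the real library API; the GPS helper, field names, and topic layout are stand-ins for my actual code, which parses NMEA sentences from the USB module.

```python
import json
import time

import obd  # python-OBD, talks to the Bluetooth ELM327 dongle

# python-OBD scans serial ports and connects automatically
connection = obd.OBD()

def read_gps():
    """Placeholder: the real code reads the USB GPS module
    (e.g., NMEA sentences via gpsd or pynmea2)."""
    return {"lat": 0.0, "lon": 0.0}

def build_message():
    rpm = connection.query(obd.commands.RPM)
    speed = connection.query(obd.commands.SPEED)
    pos = read_gps()
    return {
        "timestamp": int(time.time() * 1000),
        "ecu_connected": connection.is_connected(),  # False when vehicle is off
        "rpm": None if rpm.is_null() else rpm.value.magnitude,
        "speed": None if speed.is_null() else speed.value.magnitude,
        "lat": pos["lat"],
        "lon": pos["lon"],
    }

while True:
    print(json.dumps(build_message()))
    time.sleep(0.5)  # poll every 500 milliseconds
```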

JSON messages are sent to the IoT service over the MQTT protocol using the AWS IoT Device SDK once the device is authenticated. A very handy feature of the SDK’s client connection is that it buffers messages when network connectivity is lost. The offline queue can be configured with a fixed size (dropping the oldest or newest messages when full) or as unlimited. With this feature, queued messages are automatically sent to the cloud when network connectivity is restored.
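
A minimal connect-and-publish sketch using the AWS IoT Device SDK for Python follows; the endpoint, certificate paths, client ID, topic name, and sample payload are placeholders for my own values.

```python
import json

from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

# Placeholder client ID, endpoint, and credential file paths
client = AWSIoTMQTTClient("car2cloud-pi")
client.configureEndpoint("xxxxxxxx.iot.us-east-1.amazonaws.com", 8883)
client.configureCredentials("root-CA.crt", "private.pem.key", "certificate.pem.crt")

# Queue an unlimited number of messages while offline (-1),
# draining at 2 messages per second once reconnected
client.configureOfflinePublishQueueing(-1)
client.configureDrainingFrequency(2)
client.configureConnectDisconnectTimeout(10)
client.configureMQTTOperationTimeout(5)

client.connect()
client.publish("car2cloud/telemetry", json.dumps({"rpm": 850, "speed": 0}), 1)  # QoS 1
```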

Setting up the IoT service involves registering the device, generating security credentials, and applying a policy. The certificate is installed on the device and used when establishing a connection with the AWS IoT service. Note that there are best practices for automated device registration when running a system at full scale.
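
For a one-off prototype the console works fine, but the same registration can be scripted. A sketch using Boto3 follows; the thing name and policy name are my own labels, and the policy is assumed to already exist.

```python
import boto3

iot = boto3.client("iot")

# Create the thing and an active certificate with a fresh key pair
iot.create_thing(thingName="car2cloud-pi")
cert = iot.create_keys_and_certificate(setAsActive=True)

# Attach an existing IoT policy to the certificate, then bind it to the thing
iot.attach_policy(policyName="Car2CloudPolicy", target=cert["certificateArn"])
iot.attach_thing_principal(thingName="car2cloud-pi", principal=cert["certificateArn"])

# The private key is returned only once: save it (and the cert) now
with open("private.pem.key", "w") as f:
    f.write(cert["keyPair"]["PrivateKey"])
with open("certificate.pem.crt", "w") as f:
    f.write(cert["certificatePem"])
```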

Message data received in the cloud is filtered and passed along to other AWS services for storage and business logic. In my prototype I separate the attributes of the JSON message and store them as separate columns within a DynamoDB table for trend analysis and visualization. I also evaluate certain data to detect conditions and record those in another table for use in raising alerts. The diagram below shows details of how the IoT service is configured to support these needs.

The configuration adds an IoT service rule that selects all messages and forks them to two different actions. The first action automatically separates the message attributes and stores them in a designated DynamoDB table.
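
The rule itself is a one-line SQL statement plus two actions. A sketch of creating it with Boto3 follows; the rule name, topic, table name, and ARNs are placeholders.

```python
import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="Car2CloudRule",
    topicRulePayload={
        # Select every message published on the telemetry topic
        "sql": "SELECT * FROM 'car2cloud/telemetry'",
        "actions": [
            # Action 1: write each message attribute as a column in DynamoDB
            {"dynamoDBv2": {
                "roleArn": "arn:aws:iam::123456789012:role/Car2CloudIoTRole",
                "putItem": {"tableName": "Car2CloudTelemetry"},
            }},
            # Action 2: invoke the Lambda function that detects events
            {"lambda": {
                "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:Car2CloudEvents",
            }},
        ],
    },
)
```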

The action separates the data attributes out of the message and stores them as columns in the table. DynamoDB is a highly scalable NoSQL database service that integrates easily with most other AWS services. Because it is schemaless, columns appear automatically based on the data received; no configuration is needed to add columns in advance of loading data. This supports the need for flexibility when connecting many different kinds of vehicles, each of which supports a different list of OBD-II PID values.

The second action invokes an AWS Lambda function that evaluates each message received and applies logic to detect conditions, which are recorded as events in a second DynamoDB table. Like the other services described so far, Lambda is another serverless way to run stateless functions in the cloud. The function is implemented in Python, using the Boto3 library to access the AWS SDK. Its “event” parameter contains the JSON message from the IoT service; the function looks for conditions, translates them into event topics I’ve defined, and stores those in the second DynamoDB table.
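
A minimal sketch of such a handler follows; the table name, field names, and thresholds are illustrative, not my production values.

```python
import os
import time

import boto3

dynamodb = boto3.resource("dynamodb")
events_table = dynamodb.Table(os.environ.get("EVENTS_TABLE", "Car2CloudEvents"))

def lambda_handler(event, context):
    # 'event' is the JSON telemetry message forwarded by the IoT rule
    detected = []
    if event.get("fuel_level") is not None and event["fuel_level"] < 10:
        detected.append("FUEL_LOW")        # hypothetical threshold: < 10%
    if event.get("speed") is not None and event["speed"] > 120:
        detected.append("SPEED_HIGH")      # hypothetical threshold: > 120 km/h

    # Record each detected condition as an event row
    for topic in detected:
        events_table.put_item(Item={
            "deviceId": event.get("deviceId", "car2cloud-pi"),
            "timestamp": event.get("timestamp", int(time.time() * 1000)),
            "eventTopic": topic,
        })
    return {"events": detected}
```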

I also publish a custom metric to AWS CloudWatch, where an alarm is configured to send an email notification via an AWS SNS topic. In this simple example, the custom metric records a value of 0 or 1 representing the fuel level state. When the fuel level goes low, an email is sent.
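
Publishing the metric is a single call from the Lambda function; the namespace and metric name here are my own labels.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def report_fuel_state(fuel_low: bool) -> None:
    # 1 when fuel is low, 0 otherwise; a CloudWatch alarm on this
    # metric triggers the SNS email notification
    cloudwatch.put_metric_data(
        Namespace="Car2Cloud",
        MetricData=[{
            "MetricName": "FuelLevelLow",
            "Value": 1.0 if fuel_low else 0.0,
            "Unit": "None",
        }],
    )
```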

Visualization is accomplished using the AWS S3 service and the static web site feature that can be enabled on a bucket. S3 provides a fast, reliable way to store files whose access can be opened to anyone, and it is another serverless component of the solution. To make the content dynamic, JavaScript on the page connects to DynamoDB using an AWS Cognito unauthenticated identity pool. The IAM policy associated with the identity pool is restricted to read access on only the table that is needed.

The chart visualization uses the Chart.js library, set up to show the last two days of data received. Vehicle speed and RPM are displayed and updated on refresh.

Location is shown using the Mapbox GL library for the same data as the chart. Latitude and longitude data is converted to GeoJSON and then dynamically loaded on the map.
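
The conversion itself is straightforward. The site does it in browser JavaScript, but the same structure sketched in Python looks like this (field names match the telemetry message above):

```python
def to_geojson(rows):
    """Convert telemetry rows with 'lat'/'lon' keys into a GeoJSON
    FeatureCollection that Mapbox GL can load as a source."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                # GeoJSON coordinate order is [longitude, latitude]
                "geometry": {"type": "Point", "coordinates": [r["lon"], r["lat"]]},
                "properties": {"timestamp": r.get("timestamp")},
            }
            for r in rows
        ],
    }
```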

Events detected by the Lambda function and stored in DynamoDB are retrieved and displayed for review.

In summary, the prototype works very well and is an example of how serverless computing is accomplished in the cloud. I pay only for the volume of IoT messages that are sent. I have access to a storage database that can scale to an effectively unlimited size, but I pay only for the data I store in it. I can apply business logic and rules in parallel as data arrives and pay only for the time my code runs. I pay only for the custom metrics I choose to create and for the corresponding notifications that are sent. The web site costs only what it takes to store the HTML files in the cloud. These types of services provide opportunities to drive down operating expenses for compute and storage while improving the availability of your applications, with the added benefit of decoupling and simplifying your architecture to support future growth in your business.
