In this chapter, you will learn about edge solution concepts such as gateways and how AWS IoT Greengrass serves as a powerful edge appliance for interacting with physical interfaces and leaf devices. The goal of this chapter is to start building proficiency in using IoT Greengrass to write and deploy software components. This material is foundational to many of the book's hands-on projects and to understanding how we build solutions for the edge.
We will introduce you to the different protocols that IoT Greengrass can support out of the box and discuss commonly used protocols when building edge solutions. Additionally, we will review several security best practices for you to learn how to keep your edge workloads protected from threats and vulnerabilities. The chapter concludes with a hands-on activity to connect your first two device capabilities as components, whether using actual hardware or a pair of simulators.
In this chapter, we're going to cover the following main topics:
To complete the hands-on exercises in this chapter, you will need to have completed the steps in Chapter 2, Foundations of Edge Workloads such that your edge device has been set up with the IoT Greengrass Core software running and the greengrass-cli component installed.
For ease of use, you will want to clone the chapter's resources from the book's GitHub repository if you haven't already done so. A step included in the Connecting your first device – sensing at the edge section enables you to clone the repository at https://github.com/PacktPublishing/Intelligent-Workloads-at-the-Edge/tree/main/chapter3. You can perform this step now if you would like to browse the resources in advance:
git clone https://github.com/PacktPublishing/Intelligent-Workloads-at-the-Edge-
As a reminder, the hands-on steps for this book were authored with a Raspberry Pi and Sense HAT expansion board in mind. For those of you using other Linux-based systems for the edge device, alternate technical resources are included in the GitHub repository with guidance on how to substitute them.
Solutions built for the edge take on many shapes and sizes. The number of distinct devices included in a solution ranges from one to many. The network layout, compute resources, and budget allowed will drive your architectural and implementation decisions. In an edge machine learning (ML) solution, we should consider the requirements for running ML models. ML models work more accurately when they are custom built for a specific instance of a device, as opposed to one model supporting many physical instances of the same device. This means that as the number of devices supported by an edge ML workload grows, so too will the number of ML models and compute resources required at the edge. There are four topologies to consider when architecting an edge ML solution: star, bus, tree, and hybrid. Here is a description of each of them:
There are two additional patterns that are common when studying network topologies, that is, the mesh and ring topologies:
These patterns emphasize decentralization where nodes connect to each other directly. While these patterns have their time and place in the broader spectrum of IoT solutions, they are infrequently used in edge ML solutions where a gateway or hub device and cloud service are often best practices or outright requirements.
When deciding on a particular topology for your solution architecture, start by considering whether all devices at the edge are weighted equally or whether they will communicate with a central node such as a gateway. A consumer product design for an edge ML solution tends to use the star pattern when thinking about the edge in isolation. A good edge solution should be able to operate in its star pattern even when any external link to a larger tree or hybrid topology is severed. We use the star pattern to implement the HBS product since the hub device will run any and all ML runtime workloads that we require.
IoT Greengrass is designed to run as the hub of a star implementation and plug into a larger tree or hybrid topology connecting to the AWS cloud. Solution architects can choose how much or how little compute work is performed by the gateway device running IoT Greengrass. In the next section, we will review the protocols used to exchange messages at the edge and between the edge and cloud.
Protocols define the specifications for exchanging messages within an edge solution: the format of each message, how it is serialized over the wire, and the networking protocols used for communicating between two actors in the solution. Standards and protocols help us architect within best practices and enable quick decision-making when selecting implementations. Before diving into the common protocols used in edge solutions, let's first review two architectural standards used in information technology and operations technology to understand where IoT Greengrass fits into them. Using these as a baseline will help set the context for the protocols used and how messages traverse these models in an edge solution.
The Open Systems Interconnection (OSI) model defines a stack of seven layers of network communications, describing the purpose and media used to exchange information between devices at each layer. At the top of the stack is layer seven, the application layer, where high-level APIs and transfer protocols are defined. At the bottom is layer one, the physical layer, where digital bits are transmitted over physical media using electricity and radio signals. The following is a diagram of the OSI model and shows how IoT Greengrass fits in with individual layers:
Here, you can observe that our runtime orchestrator, IoT Greengrass, operates from layer four to layer seven. There are high-level applications and transfer protocols used in the IoT Greengrass Core software to exchange application messages with the AWS cloud and local devices using protocols such as HTTPS and MQTT. Additionally, libraries bundled in the core software are responsible for the transport layer communications in the TCP/IP stack, but then further transmission throughout the OSI model is handed off to the host operating system.
Note that while the IoT Greengrass Core software operates from layer four to layer seven, the software components deployed to your edge solution might reach all the way down to layer one. For example, any sensors or actuators physically connected to the IoT Greengrass device could be accessed with code running in a component, usually with a low-level library API.
American National Standards Institute/International Society of Automation standard 95 (ANSI/ISA-95) defines how to relate the interfaces between enterprise and control systems. This standard is used in enterprise and industrial solution architectures. It describes another layered hierarchy; this one addresses the level at which systems are controlled and suggests the time scale at which decisions must be made. The following diagram presents another frame of reference for how IoT Greengrass and an edge ML solution fit into a holistic picture:
Here, you can observe that IoT Greengrass primarily fits in layer three, the Monitoring and Supervising layer of control systems, to facilitate the upward aggregation of device telemetry, the downward distribution of commands, and some decision-making in a supervisory capacity. IoT Greengrass is useful in any kind of edge solution, whether in consumer-grade products or in the operation of industrial machinery. While our HBS product example is not an industrial use case, the same pattern applies in that our hub device performs as a gateway for sensor monitoring equipment.
Now that you have a framework regarding how IoT Greengrass fits into these hierarchies, we can review common protocols that are used at the relevant layers.
Examples of application layer communications include requesting data from an API, publishing sensor telemetry, or sending a command to a device. This kind of data is relevant to the solution itself and the business logic in service of your solution's outcomes. For example, none of the other layers of the OSI model, such as the transport layer or the network layer, make decisions in the event that a deployed sensor is reporting the ambient temperature at 22°C. Only the running applications of your solution can use this data and send or receive that data by interacting with the application layer.
To communicate between the edge and the cloud, the most popular application layer protocol is HTTPS for request-response interactions. IoT Greengrass uses HTTPS to interact with AWS cloud services for the purposes of fetching metadata and downloading resources for your components, such as the component recipe and artifacts such as your code and trained ML models. Additionally, your custom components running at the edge might use HTTPS to interact with other AWS services, on-premises systems, and the APIs of other remote servers.
To exchange messages between the edge and the cloud, and within the edge solution, bi-directional messaging protocols (also called publish-subscribe or pubsub) are commonly used, such as MQTT or AMQP. The benefits of these kinds of protocols are listed as follows:
IoT Greengrass uses the MQTT protocol to open connections to the AWS IoT Core service in a client-broker model in order to pass messages from local devices up to the cloud, receive commands from the cloud and relay them to local devices, and synchronize the state after a period of disconnection. Additionally, IoT Greengrass can serve as the broker to other local devices that connect to it via MQTT. The following is a diagram of an IoT Greengrass device, such as the HBS hub device, acting as both the client and the broker:
Next up are the message format protocols that describe the way data is structured as it is sent over the application layer protocols.
If a messaging protocol such as MQTT specifies how connections are established and how messages are exchanged, a message format protocol specifies what the shape and content of an exchanged message are. You can consider a telephone as an analogy. The telephone handset represents how speech is sent in both directions, but the language being spoken by the participants at both ends must be understood in order for that speech to make sense! In this analogy, MQTT represents the telephone itself (abstracting away the public telephone exchange network), and the message format protocol is the language being spoken by the people on either end.
For any two participants exchanging data with each other, that data is either transmitted as binary or it will go through a process of serialization and deserialization. Common message format protocols used in edge solutions include JavaScript Object Notation (JSON), Google Protocol Buffers (protobuf), and Binary JSON (BSON). These formats make it easier for devices, edge components, and cloud solutions to interoperate. This is especially important in an architecture that is inclusive of mixed programming languages. The message format is a means of abstraction that is key to architecting solutions. By using a serializable message format protocol, the following diagram shows how a component written in Python can exchange messages with a component written in Java:
You could send all messages as binary data, but you would end up with an overhead in each recipient that would need to figure out what to do with that data or enact strict conventions for what can be sent. For example, a sensor device that only ever sends a numerical measurement in degrees centigrade could just send the value as binary data. If that system never changes, there's limited value to adding notation and serializing it. The recipient on the other end can be hardcoded to know what to do with it, thus saving overhead on metadata, structure, and bandwidth. This works for rigid, static systems and for cases where transmission costs must be the top priority for optimization.
Unstructured data such as images, video, and audio is commonly sent as binary payloads, with an accompanying header indicating what kind of data it is. In an HTTP request, the Content-Type header includes a MIME type such as text/html or video/mp4. This header tells the recipient how to process the binary payload of that message.
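As a quick illustration of this mapping between file content and Content-Type values, Python's standard library can look up the conventional MIME type for a file name:

```python
import mimetypes

# Map file names to the MIME types a sender would place in the
# Content-Type header, so the recipient knows how to handle the bytes.
for name in ("index.html", "clip.mp4", "photo.jpg"):
    mime_type, _encoding = mimetypes.guess_type(name)
    print(name, "->", mime_type)
# index.html -> text/html
# clip.mp4 -> video/mp4
# photo.jpg -> image/jpeg
```

A device sending a camera frame, for example, would pair the binary payload with image/jpeg so the recipient does not need to guess at the contents.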
IoT Greengrass provides interprocess communication (IPC) functionality that components use to exchange messages with one another; it supports the JSON message format along with the raw binary format. In this chapter, you will build two components that use IPC to pass JSON messages from one component to the other.
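The serialization round trip that makes this language-independent can be sketched in a few lines of Python (the field names mirror the sensor message built later in this chapter; the values here are made up):

```python
import json
import time

# A Python component serializes its reading to a JSON string...
reading = {
    "timestamp": round(time.time(), 4),
    "device_id": "hvac",
    "temperature": 22.5,
    "humidity": 41.2,
}
payload = json.dumps(reading)

# ...and any consumer -- a Java component, a cloud service -- can
# deserialize the same bytes back into its own native types.
decoded = json.loads(payload)
print(decoded["temperature"])  # 22.5
```

Because both ends agree on JSON as the message format protocol, neither needs to know what language the other is written in.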
Note
Since IoT Greengrass does not prescribe any particular protocol to interact with edge devices and systems, you can easily implement components that include libraries to interact with any device and any protocol.
The key takeaway regarding protocols is that we can use common protocols for the same, or similar, advantages as we use a good architecture pattern. They are battle-tested, well-documented, easy to implement, and prevent us from getting lost in the cycles of implementation details where our time would be better spent on delivering outcomes. In the next section, we will cover, at a high level, the security threats for an edge ML solution and some best practices and tools for mitigating them.
With IoT security being a hot topic and frequently making headlines, security in your edge ML solutions must be your top priority. Your leadership at HBS will never want to see their company or product name in the news for a story concerning a hacked device. Ultimately, security is about establishing and maintaining trust with your customer. You can use a threat modeling exercise such as STRIDE to analyze atomic actors in your edge system such as end devices, gateways, and software components to reason about worst-case scenarios and the minimum viable mitigation to prevent them. In this section, we will cover common security threats and the best practices for mitigating them.
Let's start with the terminal segment in our edge ML solution along with the appliance monitoring kit itself and its connection to the hub device. The worst-case scenario for this segment is that an unhealthy appliance is mistakenly reported as healthy. If a customer installs the product and it fails to do the one thing it advertises, this will lose all customer trust in the product. To mitigate this scenario, the sensor readings from the monitoring kit must be authentic. This means we must prevent the hub device from receiving false measurements from a spoofed kit.
Here, the best practice is to use some form of secret material that only the kit and the hub device understand. A secret can be a pre-shared key in a symmetric cryptographic model, or it could be a public and private key pair in an asymmetric cryptographic model. If the kit signs the measurements it sends with that secret, the hub device can verify that they could only have come from the kit it is paired with; encrypting them as well ensures that only the hub device can read them. Similarly, the monitoring kit would only act on messages, such as a request to update firmware, if those messages were signed by a secret it recognizes from the paired hub device.
A safe design pattern for our pairing process between the monitoring kit and hub device is to task the customer with a manual step, such as a physical button press. This is similar to the Wi-Fi pairing method called Wi-Fi Protected Setup (WPS). If the customer has to manually start the pairing process, this means it is harder for an attacker to initiate communication with either the kit or the hub. If an attacker has physical access to the customer's premises to initiate pairing with their own device, this would be a much larger security breach that compromises our future product.
IoT Greengrass provides a component called secret manager to help with this use case. The secret manager component can securely retrieve secret materials from the cloud through the AWS Secrets Manager service. You can build workflows into your edge solution, such as the monitoring kit pairing process, to establish a cryptographically verifiable relationship between your devices.
The following list of risks and mitigations focus on the gateway device itself, which runs the IoT Greengrass Core software:
Next, we will move on to the components that are running in your edge solution on the IoT Greengrass Core device:
This section covered a few high-risk security threats, the built-in mitigations provided by IoT Greengrass, and several best practices you can implement. Security at the edge is complex; use threat modeling to identify the worst-case scenarios and apply best practices to mitigate those threats. In the next section, you will continue your journey as the HBS IoT architect by connecting two devices using components that deliver a simple sensor-to-actuator flow.
In this section, you will deploy a new component that delivers the first sensing capability of your edge solution. In the context of our HBS appliance monitoring kit and hub device, this first component will represent the sensor of an appliance monitoring kit. The sensor reports to the hub device the measured temperature and humidity of an attached heating, ventilation, and air conditioning (HVAC) appliance. Sensor data will be written to a local topic using the IPC feature of IoT Greengrass. A later section will deploy another component that consumes this sensor data.
If you are using a Raspberry Pi and a Sense HAT for your edge device, the temperature and humidity measurements will be taken from the Sense HAT board. For any other project configuration, you will use a software data producer component to simulate new measurements. Component definitions for both paths are available in the GitHub repository, in the chapter3 folder.
Both versions of the component have been written for the Python 3 runtime and defined using Python virtual environments to isolate dependencies. You will deploy one or the other using greengrass-cli to create a new local deployment that merges with the component. This chapter covers steps regarding how to install the component that reads from and writes to the Sense HAT. Any procedural differences for the data producer and consumer components are covered in the GitHub repository's README.md file.
Installing this component is just like installing the com.hbs.hub.HelloWorld component from Chapter 2, Foundations of Edge Workloads. You will use the IoT Greengrass CLI to merge in a predefined component using the deployment command:
cd ~/ && git clone https://github.com/PacktPublishing/Intelligent-Workloads-at-the-Edge-.git
cd Intelligent-Workloads-at-the-Edge-/chapter3
sudo /greengrass/v2/bin/greengrass-cli deployment create --merge com.hbs.hub.ReadSenseHAT=1.0.0 --recipeDir recipes/ --artifactDir artifacts/
sudo tail -f /greengrass/v2/logs/greengrass.log
sudo /greengrass/v2/bin/greengrass-cli component list
Now that the component has been installed, let's review the component.
Let's review some interesting bits from this sensor component so that you have a better idea of what's going on. In this section, we will review a few highlights, starting with the recipe file.
In the com.hbs.hub.ReadSenseHAT-1.0.0.json section, we are using a new concept in the configuration called accessControl. This configuration defines the features of IoT Greengrass that the component is allowed to use. In this case, the recipe is defining a permission to use IPC and publish messages to any local topic. The operation is aws.greengrass#PublishToTopic, and the resource is the * wildcard, meaning the component is permitted to publish to any topic. In a later section, you will observe a different permission defined here to subscribe to IPC and receive the messages being published by this component. Here is the relevant section of the recipe file showing the accessControl configuration:
com.hbs.hub.ReadSenseHAT-1.0.0.json
"ComponentConfiguration": {
"DefaultConfiguration": {
"accessControl": {
"aws.greengrass.ipc.pubsub": {
"com.hbs.hub.ReadSenseHAT:pubsub:1": {
"policyDescription": "Allows publish operations on local IPC",
"operations": [
"aws.greengrass#PublishToTopic"
],
"resources": [
"*"
]
}
}
}
}
},
In the preceding JSON snippet, you can see that the default configuration for this component includes the accessControl key. The first child of accessControl is a key that is used to describe which system permission we are setting. In this scenario, the permission is for the aws.greengrass.ipc.pubsub system. The next child key is a unique policy ID that must be unique across all of your components. The best practice is to use the format of component name, system name or shorthand, and a counter, all joined by colon characters. The list of operations includes just one permission for publishing messages, but it could also include the operation for subscribing. Finally, the list of resources indicates the explicit list of topics permitted for the preceding operations. In this scenario, we use the * wildcard for simplicity; however, a better practice for observing the principle of least privilege is to exhaustively enumerate topics.
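For comparison, a tightened version of this policy under the principle of least privilege might enumerate the permitted topics explicitly instead of using the wildcard. The topic name below is hypothetical, not one defined by the chapter's components:

```json
"accessControl": {
  "aws.greengrass.ipc.pubsub": {
    "com.hbs.hub.ReadSenseHAT:pubsub:1": {
      "policyDescription": "Allows publishing only to the HVAC sensor topic",
      "operations": [
        "aws.greengrass#PublishToTopic"
      ],
      "resources": [
        "hbs/hub/sensors/hvac"
      ]
    }
  }
}
```

With this policy in place, an attempt by the component to publish on any other local topic would be denied by the IPC service.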
In the simple "Hello, world" component, there was just a single life cycle step to invoke the shell script at runtime. In this component, you can see that we are using two different life cycle steps: install and run. Each life cycle step is processed at a different event in the IoT Greengrass component life cycle:
Note
The IoT Greengrass Core software supports multiple life cycle events in order to provide flexible use of the component recipe model and component dependency graph. A complete definition of these life cycle events can be found in the References section, which is at the end of the chapter. The Run, Install, and Startup life cycle events are the most commonly used in component recipes.
Let's take a closer look at the life cycle steps of this recipe:
com.hbs.hub.ReadSenseHAT-1.0.0.json
"Lifecycle": {
"Install": {
"RequiresPrivilege": true,
"Script": "usermod -a -G i2c,input ggc_user && apt update && apt upgrade -y && apt install python3 libatlas-base-dev -y && python3 -m venv env && env/bin/python -m pip install -r {artifacts:path}/requirements.txt"
},
"Run": {
"Script": "env/bin/python {artifacts:path}/read_senseHAT.py"
}
}
In this recipe, we use the Install step to make system-level changes that require escalated permissions, such as making sure Python is installed. The Install step uses pip to install any Python libraries defined by the requirements.txt file in our component artifacts. Finally, the Run step invokes Python to start our program.
In this Python component, we are using a feature of Python 3 called virtual environments. A virtual environment allows you to specify an explicit version of the Python runtime to use when interpreting code. We use this to install any dependency libraries locally, so neither the dependencies nor runtime conflict with the system-level Python. This reinforces the best practice of applying isolation to our component. The relative invocation of env/bin/python is telling the script to use the virtual environment's version of Python instead of the one at the system level at /usr/bin/python.
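One way to see this isolation from inside your component code is to compare the interpreter prefixes, which differ when a virtual environment is active — a small sketch, not part of the chapter's components:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a virtual environment, sys.prefix points at the env
    # directory, while sys.base_prefix still points at the system
    # installation. Outside a venv, the two are identical.
    return sys.prefix != sys.base_prefix

print("virtual environment active:", in_virtualenv())
```

Running this with `env/bin/python` would report `True`, while `/usr/bin/python3` would report `False`, confirming which interpreter the Run script actually invoked.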
This component uses a requirements.txt file to store information about the Python packages used and the versions of those packages to install. The requirements file is stored as an artifact of the component, along with the Python code file. Since it is an artifact, the command to pip must use the {artifacts:path} variable provided by IoT Greengrass to locate this file on disk.
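As a sketch of what such a file contains, a requirements.txt for this component might pin the publicly available Sense HAT library — the exact package versions used by the book's repository may differ:

```text
# Hypothetical pinned dependencies for the sensor component
sense-hat>=2.2
```

Pinning versions here keeps deployments reproducible: every device that installs the component resolves the same dependencies.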
We could achieve even better isolation for our component in one of two ways:
Since this HBS project is a prototype and we are using a Raspberry Pi device that comes with Python 3 preinstalled, it is within acceptable risk to simply ensure Python 3 is installed. A comprehensive isolation approach with containers for every component would also work, but the lighter-weight approach with Python virtual environments is sufficient at this prototype stage.
The code that reads from your Sense HAT device uses the Sense HAT Python library to read values from the device files that the Unix kernel exposes as device interfaces. These device files, such as /dev/i2c-1 and /dev/input/event2, are restricted to system users in groups such as i2c and input. The root user has access to these devices on a Raspberry Pi, and so does the default pi user. Our default component user, ggc_user, is not in these groups; therefore, code running as ggc_user will not be able to read values from these device interfaces. There are three ways to resolve this issue, which are listed as follows:
The best practice is to update the groups that the ggc_user component user is in. This reduces how often we use privileged access in our IoT Greengrass components and maintains our isolation principle by bundling the requirement in our recipe file. Running the component as the pi user isn't bad; however, as a developer, you should not assume this user will exist on every device and have the necessary permissions just because they are operating system defaults. For convenience, here is another clip of the life cycle step that sets up our user permissions for ggc_user:
com.hbs.hub.ReadSenseHAT-1.0.0.json
"Lifecycle": {
"Install": {
"RequiresPrivilege": true,
"Script": "usermod -a -G i2c,input ggc_user && apt update && apt upgrade -y && apt install python3 libatlas-base-dev -y && python3 -m venv env && env/bin/python -m pip install -r {artifacts:path}/requirements.txt"
},
This covers the interesting new features used in the component recipe file. Next, let's take a deep dive into important bits of the code.
A critical part of monitoring your components is to log important events. These lines set up a logger object that you can use throughout your Python code. This should be standard in every application:
read_senseHAT.py
logger = logging.getLogger()
handler = logging.StreamHandler(sys.stdout)
logger.setLevel(logging.INFO)
logger.addHandler(handler)
When building Python applications for IoT Greengrass, you can copy lines such as these to bootstrap logging. Note that your logger will capture logs at the INFO level or a level that is higher in criticality. Debug logs will not be captured unless you set the level to logging.DEBUG. You might set different log levels depending on where you are in the development life cycle, such as DEBUG in beta and INFO in production. You could set the logging level as a variable with component-level configuration and override it for a given deployment.
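A sketch of that overridable-level idea follows; reading the level from an environment variable stands in for component configuration here, and the variable name is hypothetical:

```python
import logging
import os
import sys

# A deployment could override this (hypothetical) variable to switch
# the component to DEBUG without a code change.
level_name = os.environ.get("HBS_LOG_LEVEL", "INFO")

logger = logging.getLogger()
handler = logging.StreamHandler(sys.stdout)
# Fall back to INFO if an unknown level name is supplied.
logger.setLevel(getattr(logging, level_name, logging.INFO))
logger.addHandler(handler)

logger.debug("only visible when the level is DEBUG")
logger.info("log level is %s", logging.getLevelName(logger.level))
```

In a real component, the same value would more naturally come from the recipe's DefaultConfiguration and be merged at deployment time.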
Inside the build_message function is some simple code to initiate the Sense HAT interface and read values from its temperature and humidity sensors. These represent the values measured by our HBS appliance monitoring kit, attached to a fictional HVAC appliance:
read_senseHAT.py
sense = SenseHat()
message = {}
message['timestamp'] = float("%.4f" % (time.time()))
message['device_id'] = 'hvac'
message['temperature'] = sense.get_temperature()
message['humidity'] = sense.get_humidity()
This code builds up a new object, called message, and sets child properties equal to the values we're getting from the Sense HAT library. The code also sets a simple device ID string, and generates the current timestamp.
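For projects without a Sense HAT, a simulated stand-in for this logic might look like the following sketch — the function shape mirrors the snippet above, but the injected readings are made up rather than sampled from hardware:

```python
import time

def build_message(temperature: float, humidity: float) -> dict:
    """Build a sensor message in the same shape as the real component."""
    message = {}
    # Timestamp truncated to four decimal places, as in the chapter code.
    message['timestamp'] = float("%.4f" % time.time())
    message['device_id'] = 'hvac'
    message['temperature'] = temperature
    message['humidity'] = humidity
    return message

# Simulated values standing in for Sense HAT readings.
msg = build_message(22.5, 41.2)
print(msg)
```

Keeping the message shape identical between the real and simulated producers means the downstream consumer component works unchanged with either.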
Next, we will cover the key lines of code inside the publish_message function:
read_senseHAT.py
publish_message = PublishMessage()
publish_message.json_message = JsonMessage()
publish_message.json_message.message = message
request = PublishToTopicRequest()
request.topic = topic
request.publish_message = publish_message
operation = ipc_client.new_publish_to_topic()
operation.activate(request)
future = operation.get_response()
try:
future.result(TIMEOUT)
logger.info('published message, payload is: %s', request.publish_message)
except Exception as e:
logger.error('failed message publish: %s', e)
These lines of code prepare the message and the request that will be communicated to the IPC service of IoT Greengrass as a new publish operation. This code will look familiar in any later components that require you to publish messages to other components running on the HBS hub device.
Now that we have taken a tour of the sensor application source code, let's examine what values you are measuring in the log file.
To inspect the values that you are sampling from the sensor, you can tail the log file for this component. If you are using the ReadSenseHATSimulated component, make sure you inspect that log file instead.
Tail the log file:
sudo tail -f /greengrass/v2/logs/com.hbs.hub.ReadSenseHAT.log
2021-06-29T01:03:07.746Z [INFO] (Copier) com.hbs.hub.ReadSenseHAT: stdout. published message, payload is: PublishMessage(json_message=JsonMessage(message={'timestamp': 1624928587.6789, 'device_id': 'hvac', 'temperature': 44.34784698486328, 'humidity': 22.96312713623047})). {scriptName=services.com.hbs.hub.ReadSenseHAT.lifecycle.Run.Script, serviceName=com.hbs.hub.ReadSenseHAT, currentState=RUNNING}
You should observe new entries in the log file with the temperature and humidity measurements sampled. These values are being logged and also published over IPC to any other components that are listening for them. You don't have any other components listening on IPC yet, so now is a great time to move on to your second component.
The previously deployed component acts as a sensor to read values from a fictional appliance monitoring kit and publishes those values over IoT Greengrass IPC on a local topic. The next step is to create an actuator component that will respond to those published measurements and act upon them. Your actuator component will subscribe to the same local topic over IPC and render the sensor readings to the LED matrix of your Sense HAT board. For projects not using the Raspberry Pi with Sense HAT, the simulation actuator component will write measurements to a file as a proof of concept.
Similar to the previous installation, you will create a deployment that merges with the new component. Please refer to the earlier steps for the location of the source files and validation steps that the deployment concluded. For projects not using the Raspberry Pi with the Sense HAT module, you will deploy the com.hbs.hub.SimulatedActuator component instead.
Create a deployment to include the com.hbs.hub.WriteSenseHAT component:
sudo /greengrass/v2/bin/greengrass-cli deployment create --merge com.hbs.hub.WriteSenseHAT=1.0.0 --recipeDir recipes/ --artifactDir artifacts/
Once deployed, you should start seeing messages appear on the LED matrix in the format of t: 40.15 h:23.79. These are the temperature and humidity values reported by your sensor component. The following photograph shows a snapshot of the LED matrix scrolling through the output message:
If, at any point, you get tired of seeing the scrolling messages on your device, you can simply remove the com.hbs.hub.WriteSenseHAT component with a new deployment, as follows:
sudo /greengrass/v2/bin/greengrass-cli deployment create --remove com.hbs.hub.WriteSenseHAT
Read on to learn how this component is structured.
Let's review the interesting differences between this component and the sensor component.
Starting with the recipe file, there is only one key difference to observe. Here is a snippet of the recipe file that we'll review:
com.hbs.hub.WriteSenseHAT-1.0.0.json
"accessControl": {
"aws.greengrass.ipc.pubsub": {
"com.hbs.hub.WriteSenseHAT:pubsub:1": {
"policyDescription": "Allows subscribe operations on local IPC",
"operations": [
"aws.greengrass#SubscribeToTopic"
],
"resources": [
"*"
]
}
}
}
In the recipe for com.hbs.hub.WriteSenseHAT, the accessControl permission specifies a different operation, aws.greengrass#SubscribeToTopic. This is the inverse of what we defined in the sensor component (aws.greengrass#PublishToTopic). This permission allows the component to set up topic subscriptions on IPC and receive messages published by other IPC clients, such as the sensor component. The following diagram shows the contrast of IPC permissions between a publishing sensor and a subscribing actuator:
In addition to this, the resources list uses the * wildcard to indicate that the component can subscribe to any local topic. Following the principle of least privilege, a production solution would instead specify the explicit list of topics that the component is allowed to publish or subscribe to. Since this hub device is a prototype, the wildcard approach is acceptable. Each of the read and write components accepts arguments to override the local topic used, for your own experimentation (please check out the main() functions to learn more). Since any topic can be passed in as an override, this is another reason to use the wildcard resource with the component permissions. Recall that this is okay for developing and testing, but the best practice for production components is to exhaustively specify the permitted topics for publishing and subscribing.
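As an illustration of that least-privilege practice, a production recipe would pin the resources list to explicit topics. The topic name below is hypothetical, chosen only for the sketch; everything else mirrors the recipe shown earlier:

```json
"accessControl": {
  "aws.greengrass.ipc.pubsub": {
    "com.hbs.hub.WriteSenseHAT:pubsub:1": {
      "policyDescription": "Allows subscribing to the measurements topic only",
      "operations": [
        "aws.greengrass#SubscribeToTopic"
      ],
      "resources": [
        "hbs/hub/measurements"
      ]
    }
  }
}
```

With this change, a request to subscribe to any other topic would be denied by IPC authorization rather than silently allowed.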
The remainder of the recipe file is essentially the same, with differences simply in the naming of the component and the Python file to invoke in the Run script. Also, note that we add ggc_user to an additional user group; membership in the video group enables access to the LED matrix. Next, let's review the interesting new lines of code from this component's Python file.
The business logic for receiving messages over IPC and writing messages to the LED matrix is coded in scrolling_measurements.py. Here are a few important sections to familiarize yourself with:
scrolling_measurements.py
class StreamHandler(client.SubscribeToTopicStreamHandler):
    def __init__(self):
        super().__init__()

    def on_stream_event(self, event: SubscriptionResponseMessage) -> None:
        try:
            message = event.json_message.message
            logger.info('message received! %s', message)
            scroll_message('t: ' + str("%.2f" % message['temperature']))
            scroll_message('h: ' + str("%.2f" % message['humidity']))
        except:
            traceback.print_exc()
In this selection, you can observe that a new local class is defined, called StreamHandler. The StreamHandler class is responsible for implementing the behavior of the IPC client subscription callbacks, such as on_stream_event, on_stream_error, and on_stream_closed.
Since the sensor component is publishing messages in JSON format, you can see that it is easy to get the value of that payload with event.json_message.message. Following this, the on_stream_event handler retrieves the values for both the temperature and humidity keys and sends a string to the scroll_message function. Here is the code for that function:
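The formatting step inside on_stream_event can be sketched as a small, self-contained function. The helper name format_measurements is ours, not part of the component code, and the payload dict is a hypothetical example of what the sensor component publishes:

```python
def format_measurements(message):
    """Build the two strings that are scrolled across the LED matrix."""
    return ('t: ' + str("%.2f" % message['temperature']),
            'h: ' + str("%.2f" % message['humidity']))

# A hypothetical payload, as delivered in event.json_message.message:
print(format_measurements({'temperature': 40.153, 'humidity': 23.791}))
# → ('t: 40.15', 'h: 23.79')
```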
scrolling_measurements.py
def scroll_message(message):
    sense = SenseHat()
    sense.show_message(message)
That's it! You can see how easy it is to work with the Sense HAT library. The library provides the logic to turn the LED matrix into a scrolling wall of text. If scrolling a text message is too specific an action, there are additional functions in the library for more fine-grained control of the LED matrix: you might render a solid color, display a simple bitmap, or create an animation.
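A minimal sketch of that finer-grained control follows, assuming the standard sense_hat API. The fallback stub class is our own stand-in so the snippet also runs off-device, where the library and hardware are unavailable:

```python
# The real sense_hat library requires the hardware; this stub keeps the
# sketch runnable off-device and records the last call for inspection.
try:
    from sense_hat import SenseHat
except ImportError:
    class SenseHat:
        def clear(self, *color):
            self.last_call = ('clear', color)
        def set_pixel(self, x, y, r, g, b):
            self.last_call = ('set_pixel', x, y, r, g, b)

sense = SenseHat()
sense.clear(255, 0, 0)            # fill the 8x8 matrix with solid red
sense.set_pixel(7, 7, 0, 255, 0)  # light a single corner pixel green
```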
Note
In this pair of components, the messages transmitted over IPC use the JSON specification. This is a clean default for device software that can use JSON libraries since it reduces the code we have to write for serializing and deserializing messages over the wire. Additionally, using JSON payloads will reduce code for components that will exchange messages with the cloud via the AWS IoT Core service. This service also defaults to JSON payloads. However, both the IPC feature of IoT Greengrass and the AWS IoT Core service support sending messages with binary payloads.
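To illustrate why JSON is the low-friction default, here is a short sketch of serializing a measurement for the wire and decoding it back with the standard library. The binary alternative below is a hypothetical layout of our own invention; a real binary component would define and document its own format:

```python
import json
import struct

# Hypothetical measurement payload, like the one the sensor publishes.
measurement = {'temperature': 40.15, 'humidity': 23.79}

# JSON: one call each way, using only the standard library.
wire_bytes = json.dumps(measurement).encode('utf-8')
decoded = json.loads(wire_bytes)

# Binary: sender and receiver must agree on a custom byte layout,
# e.g. two unsigned shorts holding hundredths of a unit.
packed = struct.pack('>HH', round(40.15 * 100), round(23.79 * 100))
t, h = (v / 100 for v in struct.unpack('>HH', packed))
```

The binary form is more compact (4 bytes versus the JSON string), but every producer and consumer must share the packing scheme, which is exactly the code JSON lets us avoid writing.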
In the context of the HBS hub device and appliance monitoring kit, the Raspberry Pi and its Sense HAT board are pulling double duty when it comes to representing both devices in our prototype model. It would be impractical to ask customers to review scrolling text on a screen attached to either device. In reality, the solution would only notify customers of an important event and not signal each time the measurements are taken. However, this pattern of sensor and actuator communicating through a decoupled interface of IPC topics and messages illustrates a core concept that we will use throughout the rest of the edge solutions built in this book.
In this chapter, you learned about the topologies that are common in building edge ML solutions and how they relate to the constraints and requirements informing architectural decisions. You learned about the common protocols used in exchanging messages throughout the edge and to the cloud, and why those protocols are used today. You learned how to evaluate an edge ML solution for security risks and the best practices for mitigating those risks. Additionally, you delivered your first multi-component edge solution that maps sensor readings to an actuator using a decoupled interface.
Now that you understand the basics of building for the edge, the next chapter will introduce how to build and deploy for the edge using cloud services and a remote deployment tool. In addition to this, you will deploy your first ML component using a precompiled model.
Before moving on to the next chapter, test your knowledge by answering these questions.
The answers can be found at the end of the book:
Please refer to the following resources for additional information on the concepts discussed in this chapter: