A software platform's success is often defined by the ease with which engineers can build solutions using that platform. A genuine gripe among experienced engineers (such as yourself, of course!), and a common cause of an elongated Software Development Life Cycle (SDLC), is software that does not allow for compartmentalized development and testing.
Apart from support for a compartmentalized development platform, other factors that are extremely important for engineers are ease of tool navigation, support for logic constructs (if, switch, and so on), pre-packaged capability modules (built-in policies), the ability to create custom modules (user-defined policies), and ease of testing. API Connect (APIC) recognizes these developer demands and meets them by providing a comprehensive and feature-rich development toolset.
The toolset's main components are API Designer (Designer), a desktop-based API development tool; the Local Test Environment (LTE), a container-based runtime execution platform; and API Manager, a comprehensive cloud-based tool. API Manager, through its web interface, provides administration, configuration, development, and unit testing (based upon DataPower Gateway) capabilities. Designer and the LTE let you get started on your APIC journey much faster by removing dependencies on a centralized API Manager and DataPower Gateway.
The goal of this chapter is to take you on a journey through these tools, help you set up your localized development environment, and introduce you to the practical aspects of API development so that you can build and test great APIs. In this chapter, we will cover the following topics:
After you have finished going through each of the preceding topics, you should be comfortable with the following:
The steps discussed in this chapter are based on Windows 10 Pro, hence you will require Windows-specific software. The setup of the Designer and LTE components requires the following prerequisites:
toolkit-loopback-designer-windows_10.0.1.5.zip
apic-lte-images_10.0.1.5.tar.gz
apic-lte-win_10.0.1.5.exe
Note
When searching for the preceding files, you can omit the filename extensions.
You will use the downloaded prerequisites to set up the development environment on your workstation. But before that, you should understand the various development tools available as part of the APIC toolset. APIC provides development tools for various kinds of requirements and preferences. These tools can be broadly classified into two categories:
You will learn in detail about each of these categories and their installation procedures.
APIC development tools accommodate various developer preferences. These preferences could be development through a command-line environment, a desktop-based UI-driven tool, or a web-based tool. These accommodations are made with the support of the following toolset options:
You can see that APIC provides many development tools to cater to all kinds of developer preferences. The next section will take you through the installation of Designer on your workstation.
Before you start with the installation of the Designer component, ensure that you have downloaded the file toolkit-loopback-designer-windows_10.0.1.5.zip as specified in the Technical requirements section of this chapter. The following steps assume that you are carrying out the installation on a Windows platform. If your environment is either macOS- or Linux-based, please consult the IBM documentation for your environment-specific steps. You will probably have to grant execute (chmod +x) permissions to the apic file on a non-Windows environment:
apic.exe: Supports the CLI option discussed earlier
api_designer-win.exe: Supports the desktop-based UI-driven option
apic licenses --accept-license
api_designer-win.exe
You just completed the setup of the CLI toolkit and Designer. As was mentioned earlier, these two tools will help you build your APIs (by using Designer) and manage your cloud environment (by using the toolkit).
You now need a local API execution environment where APIs can be published and tested. The environment to publish and test the APIs is provided by the LTE component. It is discussed next.
One of the vital requirements of a development-friendly platform is its ability to support local/desktop-based testing. The APIC platform meets this critical requirement by providing a local runtime execution environment called LTE for you to publish and test your APIs. This platform is based upon the Docker container framework. It is worth mentioning that the API Manager web-based tool uses DataPower Gateway for API execution.
The containerized LTE platform comprises multiple components: an API management service, two DataPower gateways (API and v5 compatible), an authentication gateway, a local user registry, and a Postgres database. APIC does an excellent job of hiding the installation complexity of these many components and provides a straightforward installation path via a container deployment model. We will cover this installation in this section.
Before performing the installation of LTE on your local workstation, ensure that the following prerequisites are taken care of for a successful installation of LTE:
apic-lte-images_10.0.1.5.tar.gz: Container images for various LTE components.
apic-lte-win_10.0.1.5.exe: This is the LTE binary, which is required to execute the LTE commands. It is advisable to have this file's directory in your PATH variable.
Having ensured that the prerequisites are in place, go ahead and execute the following commands. These can be executed on the Windows command shell, Windows PowerShell, or a Linux terminal (if you are performing this setup on Linux):
docker version
docker run -d -p 80:80 docker/getting-started
docker load -i <images-tar-directory>apic-lte-images_10.0.1.5.tar.gz
Note
Ensure that the previous command successfully loads the images apiconnect/ibm-apiconnect-management-apim, apiconnect/ibm-apiconnect-management-lur, apiconnect/ibm-apiconnect-gateway-datapower-nonprod, apiconnect/ibm-apiconnect-management-juhu, postgres, and busybox.
These images are required for the LTE environment to function. If there are any errors in the loading of the container images, then you will need to resolve the errors before performing the rest of the steps.
docker image ls
apic-lte-win_10.0.1.5.exe status
This will get the following output:
apic-lte-win_10.0.1.5.exe start
Note
There are a number of switches that are applicable to the start command. Some of the main switches are discussed next:
--keep-config: By default, apic-lte-win_10.0.1.5 start, without the --keep-config switch, deletes all the previously published API/Product information from the configuration DB and re-initializes the LTE configuration. Essentially, LTE starts with an empty backend DB that does not contain the APIs and Products that may have been published during an earlier run of LTE. To retain the previously published configuration, use the --keep-config switch. When using --keep-config, any other switches specified for the start (for example, --datapower-api-gateway-enabled or --datapower-gateway-enabled) are ignored; instead, the same flags that were used during the earlier start are applied. The start command starts the DataPower API Gateway by default.
You will be using a number of the values that are displayed in Figure 4.3. Copy these values to a notepad, as they will be used later. There are two important values not displayed here: the DataPower web admin URLs for DataPower API Gateway and DataPower Gateway (v5 compatible). Sometimes, during your API testing and debugging, you might have to refer to the gateway logs; in such cases, having access to these URLs will be beneficial. These URLs are as follows:
You can log in using admin/admin credentials on both these URLs.
LTE verification will be discussed next.
It is important that you perform some basic verification of the LTE installation that you just completed:
apic-lte-win_10.0.1.5.exe status
docker logs -f -n 10 container-name
Replace container-name with the name of the container whose logs you want to monitor. Container names can be retrieved from the command (apic-lte-win_10.0.1.5.exe status) described in the previous step. They are provided in the Container column of Figure 4.4.
This concludes the verification of your LTE.
Now that you have installed the Designer and the LTE environments on your workstation, it is time for you to connect these two tools together. As you will recall, Designer helps you in the development of your APIs and the LTE provides an execution runtime for your APIs. You will see these tools in action.
Designer allows you to connect to multiple environments. You can connect Designer to the LTE or to the API Manager cloud environment. In all cases, you should maintain the version compatibility between Designer and the execution environment (the LTE or API Manager) being connected to. You will now connect Designer to the LTE:
The details of the key areas of the Designer are provided in the following table:
It is time to take a quick recap of the things you have accomplished thus far. By now, you have set up on your workstation the Designer component (for API development) and the LTE (for API execution). You have also connected Designer to your local LTE. Then you took a quick overview of Designer's main screen and understood the main navigation areas.
The Designer and LTE components provide a great way to perform localized API development. These two tools offer a perfect starting point for many developers. They should be sufficient for you to start API development. Yet, many developers choose to use a more comprehensive web-based API Manager tool for their API development needs. API Manager comes with multiple functionality enhancements, such as working with OAuth providers, user-authentication registries, and a unit test harness.
With Designer and the LTE now in place, it is time for you to start putting these tools to use. The following sections will expose you to the fundamentals of API development using the tools you have set up.
APIC supports OpenAPI design, multiple security standards, and various logic constructs, and has an extensive repository of Out-of-the-Box (OOTB) Policies. This really enables developers to build advanced APIs.
It is practically impossible for a single book to cover every feature and function provided by APIC and all the use cases that it can solve; however, the concepts discussed in the following sections should make it easier for you to undertake API development with much more confidence. In preparation for that journey, you will learn about the following in this part of the chapter:
Let's explore each topic in detail.
The OpenAPI Specification (OAS) is the grammar to the language of the API. Grammar is a sign of a language's maturity and advancement. Mature APIs use OAS grammar rules and syntaxes to describe themselves. Any API written following these grammar rules is called an open API.
In other words, an OpenAPI document is a document (or set of documents) that defines or describes an API using OAS. OAS specifies a standard format for describing a REST API interface in a vendor-neutral and language-neutral way. The OAS standard is necessary because it helps the API economy grow through increased participation by API consumers (after all, standards-driven communication leads to more participants).
OAS is the most widely adopted specification today when it comes to describing RESTful APIs. It is worth mentioning that OAS is not the only specification out there. Some notable mentions are RESTful API Modeling Language (RAML – https://raml.org/) and API Blueprint (https://apiblueprint.org/).
An API definition (or an OpenAPI document) is written in a file that can either be a JSON or YAML formatted document. The main sections of an OpenAPI document from an OAS 3.0 perspective are Info, Servers, Security (Scheme and Enforcement/Security), Paths, Operations, Parameters, and Responses.
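To make these sections concrete, here is a minimal, illustrative OAS 3.0 document in YAML; the title, server URL, and path are placeholders, not values from this chapter's examples:

```yaml
openapi: 3.0.3
info:                             # Info section
  title: Patient API
  version: 1.0.0
servers:                          # Servers section (placeholder URL)
  - url: https://api.example.com/v1
paths:                            # Paths section
  /patients/{patient-id}:
    get:                          # Operation
      parameters:                 # Parameters
        - name: patient-id
          in: path
          required: true
          schema:
            type: string
      responses:                  # Responses
        "200":
          description: A single patient record
```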
Security Definition and Security Enforcement:
Security Scheme: This is where you can specify the security implementation details such as authentication URLs, OAuth settings, and API keys' header names. It is worth highlighting that creating a scheme does not enforce that scheme on an API consumer. You can define multiple schemes in your API Proxy but have only one of those schemes enforced upon that proxy's consumers. The enforcement of the scheme is done by Security Enforcement.
Security Enforcement or simply called Security: This is where you enforce the Security Scheme on the API consumer.
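In OAS 3.0 terms, the distinction between the two looks like this; a sketch that assumes the apiKey scheme name clientID and the X-IBM-Client-Id header used later in this chapter:

```yaml
components:
  securitySchemes:      # Security Scheme: defines how security works...
    clientID:
      type: apiKey
      in: header
      name: X-IBM-Client-Id
security:               # ...but only Security Enforcement applies it to consumers
  - clientID: []
```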
At the time of writing this chapter (September 2021), OAS 3.1.0 is the latest release. The following are things that you should be aware of:
Since APIC supports OAS, any API definitions you build using OAS can be imported directly into APIC. You can view the OAS design of your API by clicking the Source icon on the Design tab. Refer to Figure 4.8:
You can update the source code within the source view or use the more user-friendly form view.
Note
There are two important distinctions to be made here:
API Proxy: An API proxy, in general, concerns itself with the implementation of an API definition in an API Management tool such as APIC. Though it is certainly possible to write an API proxy completely within APIC, without it having to connect to a backend service provider, you would typically implement an API proxy that proxies a backend service.
API: An API will mostly be related to the implementation of the service definition by a backend service provider, outside of the API management tool.
It is certainly possible for an API to deviate from the API definition that is proxied by an API Proxy. And that is where various Policies and logic constructs of APIC come into play. Policies and Logic constructs map the API definition of an API Proxy to the definition provided by the actual API.
You will now begin the process of creating an API Proxy using APIC.
The previous section was about understanding the API's grammar. This section will be about building the sentences and starting to put those grammar rules into practice. You will do that by creating an API Proxy using Designer and then test that on the LTE. It is worth mentioning that you can follow the steps provided in this section on the API Manager interface as well. Steps on the API Manager are quite similar to the steps provided for the LTE environment.
It is time for you to start with the API development:
You will be creating your first API utilizing an existing target service.
In the Select API type view, select the From target service option and click the Next button.
Figure 4.10 shows how APIC has organized OAS under various key sections. By clicking on each section of the Design tab and reviewing various parameters of each section, you will see how easy APIC makes it to create the OAS source, instead of typing it all into a file. Of course, you can click on the source view icon to review the OAS in a file format. As you click through the various sections under the Design tab, take time to correlate much of the information in these sections (General | Info, General | Schemes List, General | Security, Paths) with the OAS that you learned about in the OpenAPI design section of this chapter.
APIC provides comprehensive security coverage for APIs. As was discussed earlier, in the OpenAPI design section of this chapter, APIC supports multiple security schemes. You will be getting a comprehensive tour of APIC's API security features in Chapter 7, Securing APIs. For now, you will keep it simple and secure your API using the API key security scheme. Go ahead and review the security definitions of your API. Go to the Design tab | Security Schemes section. You will note that it already has an apiKey with the name clientID. This was created because you kept the option of Secure using Client ID selected in the earlier step. Go ahead and further strengthen your API's security by adding a definition for the client_secret type apiKey.
Note
The X-IBM-Client-Id and X-IBM-Client-Secret header names are not APIC specific. You can substitute these with any other names, for example, X-ABC-Client-Id or X-ABC-Client-Secret. It is still a good practice to use the standardized names of X-IBM-Client-Id and X-IBM-Client-Secret.
Now that you have the security schemes for client_id and client_secret, it is time to use them in your API's Security. You will remember from our earlier discussion that security scheme creation and security enforcement are separate steps. In the previous step, you created the security schemes; now you will apply them to your API's Security.
The invoke policy instructs APIC to proxy the API to the endpoint you provided earlier (https://stu3.test.pyrohealth.net/fhir/Patient/d75f81b6-66bc-4fc8-b2b4-0d193a1a92e0).
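In the API's source view, the assembly for such a proxy takes roughly the following shape. This is a sketch of the x-ibm-configuration OpenAPI extension; the exact fields that Designer generates may differ slightly:

```yaml
x-ibm-configuration:
  assembly:
    execute:
      - invoke:
          version: 2.0.0
          title: invoke
          target-url: https://stu3.test.pyrohealth.net/fhir/Patient/d75f81b6-66bc-4fc8-b2b4-0d193a1a92e0
```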
Congratulations! You have just created an API Proxy that has been deployed for testing.
Now that you have created your first API Proxy on APIC, it is time to run some tests and see this API Proxy in action.
There are multiple methods to test an API Proxy. You can either use external tools such as Postman, cURL, or SoapUI, or you can use the test capability provided by the APIC platform. Here, you will learn how to use the test facility in APIC so that you can understand the test suite capabilities of the platform.
Testing Note
A more comprehensive testing facility will be introduced in Chapter 13, Using Test and Monitor for Unit Tests, where you can build unit tests that can be executed in your DevOps pipelines.
Before you can test your API, it needs to be put online. To put your API online, simply click on the Test tab as shown in Figure 4.14:
You will notice in Figure 4.14 that the GET operation you created is already displayed. Some other interesting points in the URI are as follows:
Other features you will see are the headers that are already available. The generated client id and client secret that were provided when you started the LTE are automatically inserted. The APIm-Debug header is set to provide you with detailed debugging information.
It is time to test the API Proxy. Here's how:
You have completed the testing of your API Proxy. That was easy.
Note
The test capability you just completed only works on the DataPower API gateway service. This is the default gateway when you create your APIs. If you are supporting the previous version 5 edition of APIC and you want to specify DataPower Gateway (v5-compatible), go to the Gateway tab | Gateway and portal settings in the navigation pane of your API. Scroll to the Catalog Properties section and you will find the Gateway selection option. Choose datapower-gateway.
By default, when you test an API with the Test tab, a range of test parameters are pre-configured, for example, a default Product is automatically generated, to which the API is added, and that Product is published to the sandbox catalog. You can configure testing preferences such as the target Product and target Catalog. Some of these configuration options are not available in LTE though.
You should now feel sufficiently prepared to embark on the journey of enriching your API Proxy with Policies and Variables. Policies and Variables are the building blocks for adding functionality to your Proxies. These are often used to add custom logic, transformation, routing, logging, and security capabilities to the Proxies. Variables and Policies are important to understand so that you can use them to enhance an API Proxy's functionality. You will begin by learning about variables.
Like any mature application development framework, APIC supports various variable types to help manage and manipulate the data being passed through an API and to control the behavior of an API's Policies. There are two types of variables supported by APIC: Context variables and API properties.
These are the variables that are relevant during the context of an API call. Therefore, their scope is also limited to the context of a specific API call on the runtime gateway. There are several context variables available. You can refer to the complete list at https://www.ibm.com/docs/en/api-connect/10.0.1.x?topic=reference-api-connect-context-variables.
Context variables are grouped into the following categories:
Each category contains information about a particular aspect of an API call. Here are some of the most used categories:
Of course, you may not need to use all the different categories, but you should be aware of some of the more useful context variables. These are shown in Table 4.3.
You can read more about context categories and some associated variables of the message and request categories at https://www.ibm.com/docs/en/datapower-gateways/10.0.x?topic=object-context-api-gateway-services.
Now that you know some of the useful context variables, you should become familiar with how API properties are used.
API properties are used by the gateway to control the behavior of policies. One of the most common uses for custom API properties is to specify environment-specific URLs. In a typical APIC configuration, each catalog generally represents a deployment environment, for example, Development, Test, and Production.
Further, each catalog has a specific gateway URL assigned to all the APIs executing in its context. You can use API Properties to specify environment-specific backend URLs (for each catalog) in the API definition and then use that property (instead of hardcoding the URL) in the invoke policy.
This will result in a dynamic API configuration. You will soon build an example to get a better understanding of this. You will use the API Proxy that you created earlier to create a Catalog specific property, assign it a backend URL, and then use that property through inline referencing (more about this shortly) in the Invoke Policy:
You will notice there already is a target-url property defined in Figure 4.16. You will create a new one for demonstration purposes.
Inline referencing is a variable or property referencing technique using the $(variable) format. Here, variable is the name of a context variable or an API property that is utilized. Often this method is used to construct dynamic URLs in an invoke policy and for building switch conditions. You will see another example of an Inline referencing method when we introduce you to the switch policy later in this chapter.
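Outside of the gateway, the substitution that inline referencing performs can be sketched in a few lines of Python. This is an illustration of the $(variable) resolution idea only, not APIC code:

```python
import re

def resolve_inline_refs(template, context):
    """Replace every $(name) reference with its value from the context."""
    return re.sub(r"\$\(([^)]+)\)",
                  lambda match: str(context[match.group(1)]), template)

context = {
    "api.properties.target-url": "https://stu3.test.pyrohealth.net/fhir/Patient",
    "request.parameters.patient-id": "d75f81b6-66bc-4fc8-b2b4-0d193a1a92e0",
}
# Builds the full backend URL from the property and the path parameter
print(resolve_inline_refs(
    "$(api.properties.target-url)/$(request.parameters.patient-id)", context))
```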
You have learned about the concept of variables and API properties that are utilized in APIC. Many of these variables are system variables. They allow you to access meta-information about an API call, for example, API version, base path, request payload, and headers. You also reviewed the concept of API Properties that you can leverage to manage environment-specific configuration.
You have just seen an example of an Invoke policy using API Properties. There is more to learn about policies, so you will do that next.
What are policies? Policies are pieces of configuration that invoke a specific type of action (depending on the type of policy) on a message. There are Built-in policies that come pre-packaged in the APIC solution. They are divided into five categories:
These policies support your API use cases with data transformation, message routing, securing, validation, and logging. Essentially, these Policies provide building blocks to truly enhance your API's capability. For instance, you may use these Policies to Map (transformation), Invoke (message routing), Validate (schema validation), apply Client Security/Validate JWT (for security), Log (to control logging information to the analytics server), and Throw (for exception handling).
It is worth mentioning that Catch is not a policy. Catch is implemented by IBM extensions to OAS. You will be introduced to the interplay of the Throw policy and Catch IBM extension through an example later in this chapter.
You should review the documentation for details of all the different Built-in Policies: https://www.ibm.com/docs/en/api-connect/10.0.1.x?topic=constructs-built-in-policies.
Policies are gateway type-specific. All Built-in policies, except for the Validate Username token policy, are available as part of DataPower API Gateway. There are several policies that are not available for DataPower Gateway (v5 compatible). You can review the Gateway specificity of the policies at https://www.ibm.com/docs/en/api-connect/10.0.1.x?topic=constructs-built-in-policies.
A sample screenshot to show you how policies are organized in an API is shown in Figure 4.18:
As you can see, you can apply multiple policies to a single API. To help your understanding of policies, you will be introduced to a few of the policies and how they supplement your API. In this chapter, we are focusing on the Invoke policy. It is the most important policy so you will learn about it now.
The Invoke policy is one of the most heavily used policy types and probably the most feature-rich too. It can often be used (depending upon the nature of the backend service provider) by keeping its default settings and by only specifying the URL property. But other than its basic usage, the Invoke policy provides many advanced properties to control the execution of the backend service call. You will learn about some of these advanced properties by implementing some new behavior.
Within the Invoke policy, there are capabilities that instruct how the policy will behave on error conditions. A typical error condition is a backend service that is experiencing slow response times. It is possible that your proxy needs to respond to the consumer (with success or failure) within a defined Service-Level Agreement (SLA). If that SLA is shorter than the Invoke policy's default Timeout value of 60 seconds, you will need to lower the Timeout so that an error is raised before the SLA is breached. This scenario can be easily handled by leveraging the Timeout and Stop on error properties found within the Invoke policy setup. Here is a brief description of these properties:
When you click on Add catch, you have various catch conditions to choose from. When you choose one of the 11 catch types, it updates the Catches section with the selected catch type and allows you to configure how to handle the catch. This is shown in Figure 4.20:
You can add more catches to handle other conditions.
You will now build an example to see the Stop on error and Timeout properties in action:
var errorName = context.get('error.name');
var errorMessage = context.get('error.message');
var errorResponse = {
    "name": errorName,
    "message": errorMessage
};
context.message.header.set('Content-Type', 'application/json');
context.message.body.write(errorResponse);
Click Save to save your changes.
When you click on search errors …, a dropdown with error types is displayed as shown in Figure 4.22. Choose ConnectionError to match up with the catch you just configured.
This completes your configuration of handling ConnectionError when such an issue is encountered in the API Proxy. With your API now configured to handle the connection error issues, you will next do a temporary configuration to create this error condition. This error condition will be created by utilizing the Timeout property of the Invoke policy.
Figure 4.24 shows the ConnectionError response "Could not connect to endpoint" in JSON. You have successfully tested the Stop on error and Timeout properties within an Invoke policy.
Remember to change the patient-target-url property back to https://stu3.test.pyrohealth.net/fhir/Patient/ for future testing.
Next, you will learn about another important property of the Invoke policy that is frequently used, and that is changing the HTTP method of the outgoing request (to the backend service).
Typically, an API proxy operation's method should match the backend service's HTTP method. Sometimes you might come across a scenario where the backend service that you are proxying supports an HTTP method that is different from what is exposed by your API proxy. A typical example is exposing a REST API (GET) method that proxies a SOAP backend service. It is common for SOAP services to be exposed via a POST method irrespective of whether they fetch results (typically GET) or perform other updates (POST/PUT/DELETE). In such a case, you can use the HTTP Method property (Figure 4.25) of the Invoke policy to match the HTTP method that is supported by the backend service.
You may be wondering about the Keep option. Keep is the default value, and it means that the incoming HTTP method is carried through as the HTTP method to the backend service. You will learn more about these options in Chapter 5, Modernizing SOAP Services.
Often there are strict requirements around the header and parameters that can be passed to the backend service. The Invoke policy provides a couple of properties that can be used to meet such requirements. You'll see an example of this next.
The Header Control and Parameter Control properties control the request's headers and parameters that get passed to the backend service.
Often, you will have a requirement to block certain header values from being passed to the backend service called by your Invoke policy, for example, blocking the API request's X-IBM-Client-ID/X-IBM-Client-Secret values from being passed to the backend service. You can easily handle such a requirement using the Header Control property:
X-Environment header with a value of Skip Me
X-Book-Reader header with a value of Allow Me
This will allow you to see the entire flow from the initial request to the call to the backend and the resulting response.
This example demonstrated the use of the header control property of the Invoke policy and how it can be used to block certain headers from being passed to a backend service. You can use the same method to filter or allow query parameters using the Parameter Control property of the Invoke policy.
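The blocklist behavior of Header Control can be sketched as a simple filter. Again, this illustrates the concept rather than gateway code; since HTTP header names are case-insensitive, the sketch normalizes them before comparing:

```python
def filter_headers(headers, blocklist):
    """Drop any header whose name appears in the blocklist (case-insensitively)."""
    blocked = {name.lower() for name in blocklist}
    return {name: value for name, value in headers.items()
            if name.lower() not in blocked}

incoming = {
    "X-IBM-Client-Id": "abc123",
    "X-Environment": "Skip Me",
    "X-Book-Reader": "Allow Me",
}
# Only X-Book-Reader survives to reach the backend service
print(filter_headers(incoming, ["X-IBM-Client-Id", "X-Environment"]))
```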
Another important capability of the Invoke policy is its ability to utilize variables and properties inside its URL property. This allows for the dynamic building of the backend URL. You will review this capability through an example in the next section.
An Invoke policy's target URL is often built using variables and properties. Earlier, in the Create an API Proxy section, you created a proxy that utilized a static backend URL containing a hardcoded patient-id value (d75f81b6-66bc-4fc8-b2b4-0d193a1a92e0). You will now improve upon that proxy by removing the hardcoding of the patient-id value. Instead, you will now define patient-id as a parameter in your API's definition and pass the patient-id value with the request message. You will then access this patient-id parameter using the Inline Referencing technique and dynamically build the backend service URL of the Invoke policy using the captured value of patient-id.
Before diving into the example, it is important for you to understand key Parameter types. Parameters represent the values that come as part of an API's request. The following are the main types:
Now that you understand the key parameter types, it is time to create a new example of a REST API proxy:
A completed screen is shown in Figure 4.30. It is important to understand all the things that are highlighted in this figure. There is a lot happening here. It is important to note that the parameters Located In path are always Required. Another thing to note is that the parameter defined in the path's Name {patient-id} is inside curly braces and that it matches the value in the Name field.
Now you need to set the target-url property to the backend endpoint.
$(api.properties.target-url)/$(request.parameters.patient-id)
Note
You should understand the format of $(api.properties.target-url)/$(request.parameters.patient-id). This technique is called Inline referencing. You are using Inline referencing to access an API property (target-url) and a request Path parameter (patient-id) that you just defined, and combining the two values to construct the final backend target URL.
As you can see in Figure 4.31, the value of patient-id is required. Enter the value "d75f81b6-66bc-4fc8-b2b4-0d193a1a92e0" in the highlighted area. This value is the patient-id path parameter that you defined earlier.
The preceding steps demonstrated the method of using Inline Referencing to dynamically build a target URL. As mentioned earlier, Inline Referencing coupled with API properties is one of the most common methods of dynamically building the target URL. Make sure that you develop a good understanding of this technique.
Note: v10.0.1.5 enhancements
The Invoke policy has introduced the ability to reuse HTTP connections through the persistent connection property.
That was a comprehensive look at the Invoke policy and some of the crafty things you can do with it. We are sure that you can think of many more use cases. Some of those use cases might lead to you using other policies provided with APIC. One other important policy in the APIC arsenal is the switch policy. You will be introduced to this next.
You will make changes to the inline-access API that you recently created. You will use a Switch policy and make a determination based upon a header variable (Environment) to process different branches of the message processing flow. Begin by opening the inline-access API in Designer. Navigate to the Gateway tab to review the single invoke policy:
You will now see a switch policy on the pane and the property screen on the right with Case 0 set to a condition of true.
You will notice that the conditional logic for each case is built a bit differently, using inline references to the Environment request header. Notice the use of the $header('Environment') and message.headers.Environment expressions. $header is simply a functional shorthand for message.headers.name in standard JSONata notation. You can use either of these techniques.
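As a minimal sketch (in plain JavaScript rather than JSONata), the two condition styles evaluate identically. The $header function and the message object here are stand-ins that mimic the behavior described above, assuming the client sent the custom Environment header with the value Test:

```javascript
// Stand-in for the message context carrying the custom request header.
var message = { headers: { Environment: 'Test' } };

// Equivalent of the $header('Environment') = 'Test' condition
function $header(name) { return message.headers[name]; }
var caseViaFunction = ($header('Environment') === 'Test');

// Equivalent of the message.headers.Environment = 'Test' condition
var caseViaPath = (message.headers.Environment === 'Test');
// Both expressions select the same Switch case.
```

Whichever style you choose, keep it consistent across cases so the Switch policy remains easy to read.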
You will want to add a second Invoke policy to represent the Test environment.
You have configured your first switch policy. Your screen should look like Figure 4.38. This example assumes that the client application is passing a custom header, Environment, with the request.
Review the highlighted sections in Figure 4.38. You will notice that conditional logic for each case is built using inline references to the Environment request header.
It is time to test your API. You already know that the two Invoke policies point to the same service. So how will you know which branch your Switch policy took? You are going to learn how to use the Trace facility to answer this question.
You should get the same results as before. But since the Invokes looked identical, how do you know whether it really executed Case 1? The next step shows you how.
You have now learned another valuable technique for accessing Parameters and API properties through the Inline Referencing method, as well as how to take advantage of the built-in testing and tracing capabilities of API Connect.
The last Invoke policy feature you will explore is how to capture response objects and merge their information. Service chaining is a common pattern with APIs.
Under-fetching is a common symptom of services designed such that the consumer must make multiple calls to obtain the results they desire. Service chaining in APIC can handle such situations: you can build an API Proxy that acts as an aggregator of multiple backend services. You implement the aggregator API Proxy by chaining calls to the backend services and combining the response data from each of these services into a single response to the consumer. Another solution for under-fetching is GraphQL, which you will learn about in Chapter 9, Building a GraphQL API.
To build the aggregator response, you will use a combination of Invoke Policies and a GatewayScript Policy. For this use case, we will use the same backend service we have been using: https://stu3.test.pyrohealth.net/fhir/Patient/.
Previously, you have learned how to modify the titles of invoke policies, set path parameters for patient-id, and pass values to the Test capability of APIC. You will be doing that again with some slight modifications and the assistance of GatewayScript code. You can download patient-response.js from the GitHub site.
The approach you will take is to construct a new API definition and implement two Invoke policies that each return a different patient payload, followed by a GatewayScript policy that merges the responses of the two Invokes. The new skill you will learn is how to capture the response of each Invoke individually in a named response object variable. Once captured, you will filter specific fields from these responses and send a custom aggregated response back to the API consumer.
The patient-id values you will use are as follows:
Here are the steps you should follow:
A quick review of the GatewayScript code would be helpful. The code uses the context.get('response object name') method to retrieve the response objects you designated. It converts the response returned by the backend service to a JSON object using the JSON.parse() method. The converted JSON object can then be navigated using dot notation. For example, to access the family name of patient 1, you can use patient1Response.name[0].family. The aggregated response is stored in the patientResponse variable and is finally sent as the output to the API caller using the context.message.body.write method:
var patient1Response = JSON.parse(context.get('patient1-response.body'));
var patient2Response = JSON.parse(context.get('patient2-response.body'));
var patientResponse = {
    patients: [{
        "lastName": patient1Response.name[0].family,
        "firstName": patient1Response.name[0].given[0]
    },
    {
        "lastName": patient2Response.name[0].family,
        "firstName": patient2Response.name[0].given[0]
    }]
};
context.message.header.set('Content-Type', "application/json");
context.message.body.write(patientResponse);
You are now ready to test. Again, you will use the Test tab to execute the API.
Patient1 = 9df54eeb-a9ac-47ec-a8f6-eb51dd7eb957
Patient2 = 97e47441-b8b9-4705-bda7-248ae6ae2321
Click Send to test your API. Your results should appear as shown in Figure 4.43:
Your results should be as follows:
{"patients":[{"lastName":"Parks","firstName":"Christopher"},{"lastName":"van de Heuvel","firstName":"Pieter"}]}
Well done!
The previous example covered some important aspects of API development. You were introduced to the concept of service chaining and techniques to access an Invoke Policy's response objects in a GatewayScript policy to create an aggregated response. Capturing the backend service response in a Response object variable is just one of many advanced features provided by the Invoke Policy.
One of the most important aspects of any software development is to build software that handles exception conditions that arise during that software's execution. APIC's development framework also provides you with a catch IBM extension and a throw policy to handle exceptional scenarios. The following example will demonstrate the technique of managing such error conditions within an API Proxy.
Error handling covers both custom error conditions and pre-defined error conditions. Examples of custom error conditions are data validation faults (missing last-name, missing first-name, and so on) and business validation faults (typically arising from backend services, such as an account number not being available). You gained some experience in error handling earlier in the Invoke policy section.
Examples of pre-defined error conditions are ConnectionError, JavaScriptError, and TransformError. A complete list of pre-defined errors is available at https://www.ibm.com/docs/en/api-connect/10.0.1.x?topic=reference-error-cases-supported-by-assembly-catches. A Throw policy is available to raise an error during the execution of a message flow. Handling of the thrown error is done via an IBM Catch extension. This extension allows you to catch the thrown exception, build a processing flow to take remedial measures, and generate an appropriate error message for the API consumer. In this section, you will learn techniques to do the following:
You will use the inline-access API proxy that you built earlier to apply some error handling.
The Switch Policy has two execution paths. These two paths will be used to demonstrate the throwing of a custom error using the Throw Policy and the throwing of a pre-defined error in the GatewayScript Policy:
Error Name: UnsupportedEnvironment
Error Message: Set 'Environment' header to 'Test' because 'Production' environment is currently unsupported.
Note: v10.0.1.5 Enhancements
When configuring a throw policy, you can now specify the HTTP status code and reason phrase in the Error status code and Error status reason properties, respectively.
Review the following code. You will see that it checks for the length of the patient-id parameter value and throws an error based on the length of the patient-id value:
var patientid = context.get('request.parameters.patient-id.values[0]');
if (patientid.length < 36) {
    context.reject('ValidateError', 'Incorrect patient-id');
}
The code uses the context.get() method to access context variables. Parameters that you receive as part of the request are also considered context variables.
Inline Referencing and GatewayScript Referencing of Parameters/Variables
The techniques for accessing parameters via Inline Referencing and via context variables are slightly different. Using Inline Referencing, path, query, and header type parameters can all be accessed as request.parameters.[parameter-name]. But to access parameters in a GatewayScript policy, you must use request.headers.[header-name] for header type parameters and request.parameters.[parameter-name] for query/path type parameters.
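The contrast described above can be sketched as follows. The context object here is a stand-in that mimics the GatewayScript context.get() lookup paths; the stored values are the ones used in this chapter's examples, and the store itself is purely illustrative:

```javascript
// Hypothetical stand-in for the GatewayScript context, pre-loaded with
// the lookup paths described above.
var context = {
    get: function (path) {
        var store = {
            // path/query parameters live under request.parameters
            'request.parameters.patient-id.values[0]': 'd75f81b6-66bc-4fc8-b2b4-0d193a1a92e0',
            // header parameters live under request.headers
            'request.headers.Environment': 'Test'
        };
        return store[path];
    }
};

var patientId = context.get('request.parameters.patient-id.values[0]'); // path parameter
var environment = context.get('request.headers.Environment');           // header parameter
```

In Inline Referencing, by contrast, you would write $(request.parameters.patient-id) for all three parameter types, as shown earlier in this chapter.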
You can build error-specific execution paths or a single execution path to handle multiple errors. Notice that the UnsupportedEnvironment exception that you recently created is in the list of errors to catch. This is shown in Figure 4.47:
var errorName = context.get('error.name');
var errorMessage = context.get('error.message');
var errorResponse = {
    "name": errorName,
    "message": errorMessage
};
context.message.header.set('Content-Type', "application/json");
context.message.body.write(errorResponse);
The code demonstrates the method of accessing the error context variable in a GatewayScript policy. At execution time, when an error is encountered, the error.name and error.message variables contain the name and the message of the thrown error. The other critical element to notice is the use of the message.header.set and message.body.write methods to set the Content-Type header and the actual response, respectively. Note that message.body.write is not limited to generating custom API responses; you can also use it to create the message body for an Invoke policy's request.
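As a minimal sketch of that last point, the message object and the stand-in context below are hypothetical stubs mimicking the header.set/body.write behavior; they illustrate shaping a request payload for a downstream Invoke rather than an API response. The payload field is illustrative, not part of the FHIR backend:

```javascript
// Hypothetical stand-in for the APIC message context.
var message = { headers: {}, body: null };
var context = {
    message: {
        header: { set: function (name, value) { message.headers[name] = value; } },
        body: { write: function (payload) { message.body = payload; } }
    }
};

// Build a custom request body for the next Invoke in the assembly flow.
context.message.header.set('Content-Type', 'application/json');
context.message.body.write({ lastName: 'Parks' });
```

In a real assembly, a GatewayScript policy placed before an Invoke would use exactly this pair of calls to control what the backend receives.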
Your screen should look like Figure 4.48:
ValidateError testing: Add an Environment header and assign it a value, Test. Put in a random patient-id that is fewer than 36 characters long and see how the API behaves. You should see the error message you configured in your error routine, pidcheck.js.
The error message will be presented as follows:
{"name":"ValidateError","message":"Incorrect patient-id"}
UnsupportedEnvironment testing: In your test case, change the value of the Environment header to Production. You will remember that in the Production branch of your API proxy, you used a Throw policy to throw the UnsupportedEnvironment exception. Click Send to send the request. What do you observe? You should see an error message like the following:
{"name":"UnsupportedEnvironment","message":"Set 'Environment' header to 'Test' because 'Production' environment is currently unsupported."}
The preceding section demonstrated ways to handle API Proxy error conditions. You can build upon these examples to handle different error conditions, both custom and APIC runtime specific.
You have now built a solid foundation of various APIC development features. Soon you will discover many other rich features of the APIC framework, starting with its support for modernization patterns, building RESTful services using FHIR (a healthcare standard), API security, GraphQL, advanced transformations, and much more. This chapter was just the beginning of a rich journey that lies ahead of you. This chapter got you to "buckle up." Now is the time for you to "enjoy the ride."
You started this chapter by installing and configuring a local development (Designer) and a testing environment (LTE). After getting a brief introduction to the OpenAPI Specification (OAS), you put your local environment to good use by developing a simple API Proxy. This simple proxy creation exercise should have helped you get your feet wet and prepare you for a long swim in the pool of APIC.
This chapter then took you into a deep dive into the extensive development framework features provided by the APIC platform. You were introduced to many Policy types (Built-in and Custom) and Logic constructs, with a particular focus on the Invoke policy. With the introduction to variables, you learned about various context variables (request and message, especially) available to you for the fetching and manipulation of data flowing through the API. API properties and their usage in an Invoke Policy taught you methods for building environment-specific dynamic target URLs.
As you worked through many step-by-step examples, you were introduced to the vast capabilities of the Invoke policy. You covered applying blacklists and setting up error handling for timeouts. You even learned how to execute calls to multiple backend service providers as part of the same flow. You were also briefly introduced to GatewayScript (as part of numerous examples) and the Switch policy. The Switch policy introduction contained examples of building switch conditions using scripts and variables. These capabilities will be highly beneficial to you as you begin building more APIs with varying use cases. There was an extensive repository of hands-on examples that covered the essential topics of error handling, customizing an API's response, and advanced features of the Invoke policy.
This chapter also explored multiple testing techniques available to you as a developer to test your APIs. Speaking of testing, you learned how to use the built-in test capability of APIC that includes the powerful feature of tracing. API Connect provides a one-stop shop experience by keeping development, testing, and tracing proxies in a single toolset – an ask of so many engineers!
You have seen just how quickly you can take an existing endpoint and create, promote, and test an API proxy. And with all the available features, Policies/Variables/Referencing, you as a developer can truly build robust, complex, and flexible APIs. But with great power comes great responsibility. The idea of API development is rooted in speed, agility, and simplicity. On your API development journey, if you observe yourself using all these features in developing a single API, it might be an opportunity to reflect on the design of that API itself. Food for thought!
With all this information and a solid base, you are now ready to take it up a notch and start with the modernization journey. The next chapter will teach you the techniques to expose hidden IT services to the outside world.