Testing that API Gateway has been deployed correctly

We also want to be able to test that the serverless microservice API is working correctly after it has been deployed. I use a mix of Python and bash to make this easier.

A Python script called serverless-microservice-data-api/bash/apigateway-lambda-dynamodb/get_apigateway_endpoint.py is used first to query AWS API Gateway for the full endpoint; it prints the endpoint and exits with code 0 if it succeeds:

import argparse
import logging
import sys

import boto3

logging.getLogger('botocore').setLevel(logging.CRITICAL)

logger = logging.getLogger(__name__)
logging.basicConfig(format='%(asctime)s %(levelname)s %(name)-15s: %(lineno)d %(message)s',
                    level=logging.INFO)
logger.setLevel(logging.INFO)


def get_apigateway_names(endpoint_name):
    client = boto3.client(service_name='apigateway',
                          region_name='eu-west-1')
    apis = client.get_rest_apis()
    for api in apis['items']:
        if api['name'] == endpoint_name:
            api_id = api['id']
            region = 'eu-west-1'
            stage = 'Prod'
            resource = 'visits/324'
            # return f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}/{resource}"
            return "https://%s.execute-api.%s.amazonaws.com/%s/%s" \
                % (api_id, region, stage, resource)
    return None


def main():
    endpoint_name = "lambda-dynamo-xray"

    parser = argparse.ArgumentParser()
    parser.add_argument("-e", "--endpointname", type=str,
                        required=False, help="Name of the API Gateway endpoint")
    args = parser.parse_args()

    if args.endpointname is not None:
        endpoint_name = args.endpointname

    apigateway_endpoint = get_apigateway_names(endpoint_name)
    if apigateway_endpoint is not None:
        print(apigateway_endpoint)
        return 0
    return 1


if __name__ == '__main__':
    sys.exit(main())

Then we use a shell script to call the Python script. The Python script returns the API endpoint, which is then used in a curl command with a sample GET request. We then check whether we get a valid status code.

Here is the full script for serverless-microservice-data-api/bash/apigateway-lambda-dynamodb/curl-api-gateway.sh:

. ./common-variables.sh
endpoint="$(python get_apigateway_endpoint.py -e ${template})"
echo "${endpoint}"
status_code=$(curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET "${endpoint}")
echo "$status_code"
if echo "$status_code" | grep -q "HTTP/1.1 200 OK"; then
    echo "pass"
    exit 0
else
    exit 1
fi

Having these scripts set up in this way allows us to easily automate these integration tests.
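One caveat worth knowing: the exact-match grep on "HTTP/1.1 200 OK" is tied to that specific status line, and servers answering over HTTP/2 return a line like "HTTP/2 200" instead. A small sketch (not part of the book's scripts) of a version-agnostic check that accepts any 2xx status line:

```shell
#!/bin/sh
# check_ok: succeed if the first line of the curl -i output reports a
# 2xx status, whatever the HTTP version. Reads the headers on stdin.
check_ok() {
    head -n 1 | grep -Eq '^HTTP/[0-9.]+ +2[0-9][0-9]'
}

# Canned status lines in place of a live curl call:
printf 'HTTP/1.1 200 OK\r\n' | check_ok && echo "http1 pass"
printf 'HTTP/2 200\r\n' | check_ok && echo "http2 pass"
```

Piping `curl -i` output into such a helper keeps the pass/fail logic in one place and testable without a live endpoint.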

Functions as a Service (FaaS) is still a relatively new area, and there is ongoing debate about which types of integration tests should be used. One view is that the full suite of tests should run in a different AWS account, especially tests that write to or update a data store, such as POST or PUT requests.
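One lightweight way to point the same scripts at a separate test account is to select a named profile before running them; this sketch assumes a hypothetical profile called test-account in ~/.aws/credentials (boto3 and the AWS CLI both honor the AWS_PROFILE environment variable):

```shell
#!/bin/sh
# Sketch: run the deployment test against a separate test account by
# selecting a hypothetical named profile before invoking the scripts.
export AWS_PROFILE=test-account

# Guarded so this is a no-op where the script is not present.
if [ -f get_apigateway_endpoint.py ]; then
    python get_apigateway_endpoint.py -e lambda-dynamo-xray
fi
echo "using profile: ${AWS_PROFILE}"
```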

I've included --profile and aws_account_id if you want to do this. In addition, with API Gateway you can reuse the wide range of test suites that already exist for HTTP endpoints. Testing other AWS services' integration with Lambda, such as objects created in S3 triggering a Lambda function, needs a little more work and thought. In my view, serverless integration testing is still less mature, but I have already shown how it can be achieved: by invoking the Lambda function directly with the AWS CLI and a JSON event source payload, or by invoking the API Gateway endpoint directly with a curl command.
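As a recap of the direct-invocation route, here is a sketch of building a minimal API Gateway proxy-style event and passing it to the Lambda with the AWS CLI. The function name and event field values are illustrative, not the book's sample data:

```shell
#!/bin/sh
# Sketch: direct Lambda invocation with a JSON event source payload.
# Build a minimal API Gateway proxy-style event for GET /visits/{resourceId}.
cat > event.json <<'EOF'
{
  "httpMethod": "GET",
  "resource": "/visits/{resourceId}",
  "pathParameters": {"resourceId": "324"}
}
EOF

# Invoke only where the AWS CLI is installed and credentials are configured.
if command -v aws >/dev/null 2>&1; then
    aws lambda invoke --invocation-type RequestResponse \
        --function-name lambda-dynamo-data-api \
        --payload file://event.json \
        outfile.txt
fi
```

Note that with AWS CLI v2 you may also need --cli-binary-format raw-in-base64-out for a raw JSON payload; outfile.txt captures the function's response for inspection or assertions.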

Next we will look at how the SAM CLI can also be used for local testing.
