With the connections defined, we can start on the integration. Let's create a new integration using the Basic Map Data pattern. Complete the creation dialog with the following values:
| Property | Value |
| --- | --- |
| Integration Name | |
| Identifier | This will be proposed based on the connection name, and there is no need to change it unless you would like an alternative name. |
| Version | 01.00.0000 |
| Package Name | |
| Description | |
With the integration details provided, we can complete the dialog with the Create button, as usual. We will then be presented with the familiar canvas. Locate the AirportData_Ch9 connection and drop it onto the Trigger pad on the canvas. We can then configure the wizard with the following details:
| Tab | Question | Action |
| --- | --- | --- |
| Basic Info | What do you want to call your endpoint? | As with all our other examples, we are keeping things simple with |
| | What does this endpoint do? | |
| | Do you want to define a schema for this endpoint? | As we are only moving files from one location to another, we can set this to No. |
This will result in a screen that looks like this:
Before we move on to the next section using the usual Next button, let's quickly explore the meaning of the schema option. In the Orchestration pattern of integrations, you need to define a structure for the file content if you want to process it in a manner beyond simply moving the file around. We will look at how this works in more detail later in this chapter.
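To make the distinction concrete, here is a minimal Python sketch (the sample rows and column names are invented for illustration; the real file layout is defined elsewhere in the chapter). Without a schema, a file is just an opaque blob that can be copied; with one, its content becomes records that a mapping step can work on.

```python
import csv
import io

# Hypothetical sample of airport data -- purely illustrative.
raw = b"code,name\nLHR,Heathrow\nAMS,Schiphol\n"

# Without a schema the integration can only treat the file as bytes,
# which is all a simple move requires.
blob_copy = bytes(raw)

# With a schema (here, a CSV header) the same content becomes
# addressable records that downstream steps can map and transform.
records = list(csv.DictReader(io.StringIO(raw.decode("ascii"))))
print(records[0]["name"])  # -> Heathrow
```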
The next panel captures information about which files to process. Let's complete the dialog with the following details:
| Section | Field | Value |
| --- | --- | --- |
| Configure File Read | Select a Transfer Mode | The files we have provided are, as you may have noticed, textual. Ensure the ASCII option is selected. |
| | Specify an Input Directory | This needs to reflect the path once you have connected to the FTP server. In our example, you need to define it as |
| | Specify a Filename Pattern | We want to be able to pick up any CSV file, so use the pattern |
| | Maximum Files | This defines how many files can be processed in any single execution of the integration. You might wish to consider reducing this if the workload is going to be significant, as, depending on your processing, you might experience knock-on problems; for example, a slow processing cycle might result in connection timeouts with downstream processes. In this case, we are not going to be using large files, and are unlikely to create 100 files for transfer at once, so we can leave the figure unchanged. |
| | Chunk size | This is the smallest number of files to process in one go. When processing by downstream processes carries a very large overhead, you may choose to let the number of files build up before executing. As we want things to happen immediately, we should leave this unchanged at 1. |
| | Processing Delay | The amount of time required between each file being processed, expressed in seconds. We can leave this with the default value of 0. |
| | Delete files after successful retrieval? | This setting is self-explanatory. To make it easy for us to see when things have triggered properly, ensure this is set to Yes. |
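Taken together, these file-read settings describe a polling cycle: list the input directory, filter by the filename pattern, retrieve up to the maximum number of files, and delete each on success. As a rough sketch of that behavior (not how ICS itself is implemented), the following Python uses the standard ftplib; the host, credentials, and the `*.csv` pattern are placeholder assumptions, while `/tmp/source` matches the source folder used later in this chapter.

```python
import fnmatch
from ftplib import FTP

# Placeholder connection details -- substitute your own FTP server's.
HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"
INPUT_DIR = "/tmp/source"  # 'Specify an Input Directory'
PATTERN = "*.csv"          # 'Specify a Filename Pattern' (assumed value)
MAX_FILES = 100            # 'Maximum Files' per polling cycle

def poll_once():
    """One polling cycle: list, filter, download, delete on success."""
    with FTP(HOST) as ftp:
        ftp.login(USER, PASSWORD)
        ftp.cwd(INPUT_DIR)
        names = fnmatch.filter(ftp.nlst(), PATTERN)[:MAX_FILES]
        for name in names:
            with open(name, "wb") as local:
                ftp.retrbinary("RETR " + name, local.write)
            ftp.delete(name)  # 'Delete files after successful retrieval?'
        return names
```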
The configuration should now look something like this:
With the final step complete, clicking the Next button will take us to the Summary view, which should look something like this:
We can complete this phase with the Done button. We can now configure the target file system by dropping the same connection onto the canvas as the invoke, so that the integration will look like this:
As with the trigger, the invoke goes through a similar sequence of screens:
| Tab | Question | Action |
| --- | --- | --- |
| Basic Info | What do you want to call your endpoint? | As with all our other examples, we are keeping things simple with |
| | What does this endpoint do? | |
| | Do you want to define a schema for this endpoint? | As we are only moving files from one location to another, we can set this to No. |
| | Do you want to enable PGP security? | We can encrypt the file using PGP (Pretty Good Privacy). We will look at PGP later in the chapter, so for now let's set this to No. |
| Configure File Write | Select a Transfer Mode | Set this to ASCII, as we will be moving CSV files. |
| | Specify an Output Directory | Set the target to |
| | Specify a File Name Pattern | It is possible to define a file naming pattern, so that the name can include information such as a sequence number or a timestamp. To illustrate this, we can change the filename to |
| | Append to Existing File | Leave this unticked, as we want the same number of files in the target location as in the original source; this will make it easier to spot the changes working. |
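The effect of a file naming pattern can be sketched in Python. The template below (the `airports` stem, the timestamp format, and a four-digit sequence number) is an invented example to show the idea, not the wizard's actual token syntax.

```python
from datetime import datetime, timezone
from itertools import count

# Running sequence number, shared across calls.
_seq = count(1)

def target_name(stem="airports", ext="csv"):
    """Build a target file name carrying a UTC timestamp and a
    zero-padded sequence number, e.g. airports_20250101120000_0001.csv."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{stem}_{stamp}_{next(_seq):04d}.{ext}"

name = target_name()
```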
We should see a Summary screen that should match the following image:
We can complete this configuration with the Done button, as normal. With this, we have an integration defined that should detect and move the files. Note the completion status of the integration. At this stage, the tracking values need to be set; however, the only tracking attribute that can be used is the file name. If you open the Tracking screen, you will see the details already defined, with the options to modify them greyed out.
Close the Tracking screen, Save, and click on Exit Integration to take us back to the integrations list screen. Note how, next to the integration icon, there is a schedule symbol reflecting the status of the schedule for detecting files in the source location.
Click on the calendar icon and a new screen will be displayed that allows us to set when a scheduler will trigger the integration. It is possible to define a one-off event or a recurring event, as shown in the following screenshot:
To set the schedule as recurring, deselect the Frequency toggle. This will result in a drop-down menu being presented beside the Frequency toggle. Select one of the Days or Weeks menus; you will then be presented with the means to select a number of weeks or months. Return to the menu by clicking on the cross button at the end of the Frequency details, as can be seen in the following image:
Select Hours and Minutes and set the minutes to 11. We want the integration to fire regularly so that we can easily see it working; however, ICS prevents the scheduler from running a schedule any more often than once every 10 minutes. With the frequency set, we can configure when to start the schedule, and whether the scheduler should run indefinitely or until it reaches a specific end date. We should limit the run of the integration by defining an end date, so select a date and time and close with the OK button.
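The scheduler's behavior can be approximated in Python as follows. `MIN_INTERVAL_S` reflects the 10-minute floor just described; the injectable `clock` and `sleep` parameters are simply there to make the sketch testable, and are not part of any ICS API.

```python
import time

MIN_INTERVAL_S = 10 * 60  # ICS will not run a schedule more often than this

def run_schedule(job, interval_s, end_time, clock=time.time, sleep=time.sleep):
    """Fire `job` every `interval_s` seconds (floored at the platform
    minimum) until `end_time` (epoch seconds) is reached."""
    interval_s = max(interval_s, MIN_INTERVAL_S)  # enforce the 10-minute floor
    runs = 0
    while clock() < end_time:
        job()
        runs += 1
        sleep(interval_s)
    return runs
```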
With the scheduler configured, we can click on the Save button and then the Exit Scheduler button, which takes us back to a screen where we can review the schedule and, if the integration has been activated, start it.
Once the schedule has been defined, we start it by clicking on the Start Schedule button. The screen will change to show when the schedule will next be triggered. From this view, you also have several other operations available, including reviewing previous executions with the View Past Runs button, pausing the schedule (Pause button), and stopping the scheduled task with the Stop Schedule button. We are, however, going to let the schedule run.
With the scheduler running, if we reconnect to the FTP server with our FTP tool and look at the /tmp/source folder, we will see the number of files decreasing over time.
Once we have returned to the main integrations list, we can also trigger the integration manually whilst it is active. This can be done via the integration's drop-down menu on the main integrations list, which includes the option Submit Now. When clicked, it triggers the integration and displays a confirmation message, as follows:
The same Submit Now option is presented as a button on the scheduler screen.
If we choose to deactivate the integration, the schedule associated with it will be stopped automatically and will need to be restarted when the integration is reactivated.