When can an agent help?

There are several reasons why we may need an agent, spanning technical, security, commercial, and legal compliance considerations, and we will look at each of these in turn.

From a technical perspective, an agent may be needed for several reasons. Some connection techniques and technologies are sensitive (relatively speaking) to the time it takes for a call to be made and responded to. This call-and-response delay, or latency, exists because, regardless of how good the connections are, data takes time to travel, even over fiber optics. For example, light takes roughly 57ms to travel from London to Sydney even in a straight line through a vacuum, and longer still through fiber. In reality, we do not have fiber running from every location directly to every other location by the shortest path. Even the best parts of the Internet backbone are convoluted and involve moving between servers and network infrastructure as the data works its way across the world, and the data then has to pass through more infrastructure (firewalls and so on) once inside your network. You can see this by using a trace command against a remotely hosted server address (use the tracert command on Windows, or traceroute on Linux and macOS, followed by an address you know to be physically remote from you). You can see an example in the following screenshot, where we took the web address of the Oxley museum in Wellington, New Zealand (which is unlikely to use Internet acceleration techniques) and traced the steps to contact the website:

[Screenshot: tracert output tracing the route to the museum's website]

As you can see in the tracert example, several of the steps (2, 3, and 14) had to be retried several times before completing within the time the process allowed. It took nine hops before the connection left the UK and reached New York. After one third of a second we have only just made it to Australia (step 12); the final steps then traverse several service providers before reaching the host of the museum's website, which appears to be in Australia rather than New Zealand.

If you look at backbone latency statistics published by the likes of Verizon, Dotcom-Monitor, and WonderNetwork, you can see latency times exceeding 500ms and occasionally approaching one second between certain parts of the world.
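The physics floor beneath these figures can be sketched with a few lines of arithmetic. This is a back-of-the-envelope model only: the 17,000 km great-circle distance and the two-thirds fibre speed factor are rounded assumptions, and real routes are far less direct.

```python
# Back-of-the-envelope lower bound on latency imposed by physics.
# Light in optical fibre travels at roughly two-thirds of its vacuum speed.

SPEED_OF_LIGHT_KM_S = 299_792                      # vacuum
FIBRE_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3     # typical fibre propagation

def min_rtt_ms(distance_km: float) -> float:
    """Smallest possible round-trip time over a straight fibre run."""
    one_way_s = distance_km / FIBRE_SPEED_KM_S
    return one_way_s * 2 * 1000

# London to Sydney is roughly 17,000 km in a straight line.
print(f"London-Sydney minimum RTT: {min_rtt_ms(17_000):.0f} ms")
```

Even before any routers, firewalls, or indirect routing are involved, a round trip between distant continents costs well over 100ms, which is why chatty protocols suffer so badly over such distances.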

Tip

For more information about latency measures, you can refer to these sites:

Dotcom-monitor: https://www.dotcom-tools.com/internet-backbone-latency.aspx

Verizon: http://www.verizonenterprise.com/about/network/latency/

WonderNetwork: https://wondernetwork.com/pings

Going back to our trusty database, the reason it is sensitive to latency is that interaction with a database is not a single step, but a series of exchanges, a conversation if you like, between the database and the client. Whilst this conversation takes place, resources need to be dedicated to it, potentially blocking or limiting access to the data for other clients. If we have many slow conversations, then a lot of resources get tied up. The database's defense is to limit how long resources can be held. You could, of course, extend the amount of time allowed by the database, but you would soon see all sorts of knock-on effects.
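The cost of a chatty conversation can be illustrated numerically. This is a toy model, not a measurement: the number of round trips and the timing figures are purely illustrative assumptions.

```python
# Rough model of how network latency multiplies across a "chatty"
# database conversation: every request/response pair pays the full
# round-trip time, however small the actual payload is.

def conversation_time_ms(round_trips: int, rtt_ms: float,
                         server_work_ms: float = 2.0) -> float:
    """Total elapsed time for an exchange needing `round_trips`
    request/response pairs, each paying the round-trip time plus
    a little server-side work."""
    return round_trips * (rtt_ms + server_work_ms)

# Say a single operation involves connect, authenticate, prepare,
# execute, fetch, and commit steps -- roughly 10 round trips.
local = conversation_time_ms(round_trips=10, rtt_ms=1.0)      # agent on the LAN
remote = conversation_time_ms(round_trips=10, rtt_ms=300.0)   # across the world

print(f"local agent: {local:.0f} ms, remote client: {remote:.0f} ms")
```

The same ten exchanges that finish in tens of milliseconds on a local network tie up the database for several seconds when each round trip crosses the world, which is exactly the resource-holding problem described above.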

By deploying an agent locally, we can have the conversation with the database run very quickly, and the data flowing to or from the agent can then be passed on using less time-sensitive mechanisms, with techniques that allow us to share resources between multiple conversations.

[Diagram: logical view of a deployment with the agent in place]

In the previous diagram, we can see a logical representation of how the agent fits in. Traditionally, without the agent, the deployment would look more akin to the following diagram:

[Diagram: traditional deployment without an agent]

From a security perspective, having systems from outside your network start talking to secure databases is just too dangerous. It would be like letting anyone knock at your front door and then allowing them to inspect the contents of your safe. Do you trust these visitors? Are they from where they say they are from? Do they have the authority they claim to have? Even if you had an agreement that only visitors who claim to be from the local police station and are wearing a police uniform are allowed in, this would still be a risk, as you do not know whether someone has walked into that police station and borrowed a uniform. It is the IT security person's job to engage with and deal with these problems. So, how can we help?

As with using a browser, if we always start the conversation, then we can eliminate a range of these concerns, because we have chosen when, and with whom, we start a conversation. If we communicate with the outside world through common mechanisms, then the security team do not have to set up any special channels for us. Consider the effort needed to allow you to use the phone versus establishing a new, special courier service: all the security and checks have already been done for phones, but must be repeated for each special courier. Remember, every new mechanism for communicating outside our environment adds a new risk of error. Let's face it: software is complex and written by humans, so it is vulnerable to bugs that can be exploited, and setting up networks is done by people and is prone to error.
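A minimal sketch of this outbound-initiated pattern might look like the following. The endpoint URL and message format here are invented purely for illustration, and the HTTP call is injected as a plain function so the flow can be shown without a live connection; a real agent would supply an HTTPS GET in its place.

```python
# Sketch of the "agent starts the conversation" pattern: the agent
# polls the cloud service over ordinary outbound HTTPS, so no inbound
# firewall ports need opening.

import json
from typing import Callable, List

def poll_for_work(fetch: Callable[[str], str],
                  endpoint: str = "https://cloud.example.com/agent/queue") -> List[dict]:
    """Ask the cloud for any queued messages. `fetch` would normally
    perform an HTTPS GET; it is injected so the flow can be exercised
    without a live connection."""
    body = fetch(endpoint)      # outbound call, initiated by the agent
    return json.loads(body)     # messages the agent should process locally

# Simulate the cloud side queuing one request for on-premises data.
fake_cloud = lambda url: json.dumps([{"action": "getOrderStatus", "orderId": "42"}])
for message in poll_for_work(fake_cloud):
    print("processing locally:", message["action"])
```

Because every exchange begins with the agent calling out over standard web ports, the conversation looks to the firewall exactly like browser traffic, which is the point of the analogy above.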

In IT terms, this analogy means using web traffic and the security protocols available for it. Rather than a browser, we use an agent running within our environment.

[Figure: world map of country-specific positions on data security legislation]

Map source: US Department of Commerce and country-specific legislation © Forrester Research (All rights reserved).

The commercial and legal aspects are closely related to the security considerations. In many countries, the rules about where information is processed are interpreted very literally, to the point that presenting a view of the data through an application or browser means that processing is deemed to take place wherever the browser is active. So, as you can imagine, data flowing through a middleware environment, regardless of the quality of hosting and even if the data is completely transient, potentially means data moving between countries, and the legality or recognition of cross-border agreements is a quickly changing landscape (the previous map helps illustrate the diversity of positions on data security). As a result, particularly in business-to-business and business-to-government arrangements, where data compliance obligations are cascaded to third or fourth parties, it is often commercially easier to place very simple stipulations on vendors, such as requiring that the data may only reside, be processed, or be accessed in country X. When dealing with very sensitive data, such as clinical information, or data that could be considered sensitive to national security (and often it is not the data itself, but the existence of the data, that is sensitive), the drive to restrict data locations is at least desirable, if not necessary.

With such constraints on one side, and the cloud intended to remove physical considerations of where and how systems are deployed on the other, there is a clear conflict.

The final driver we need to consider is a blend of the commercial and the legacy. Whilst we are seeing the adoption of cloud in its many forms, if a company has invested in an on-premises system, the value of moving that solution to the cloud can be heavily influenced by investment cycles: if a company has acquired infrastructure in the last five years, there is a commercial incentive to ensure the investment is capitalized upon, and potentially to draw out the remaining value of the asset if possible. This often leads to the second consideration, which is that the system concerned is too delicate or too heavily modified, meaning that some combination of cost, effort, and risk (real or perceived) cannot justify doing anything other than leaving it in place.

As you can see from these challenges, there is a need to be able to interact with on-premises solutions in a manner where communication is driven from somewhere on-premises, and potentially to have the raw data processed on-premises as well.

Whilst this describes the underlying challenges that the use of an agent addresses, we can also look at things in a very practical way. The following table provides examples of agent usage patterns and why each is used (the use case):

Pattern: Synchronous request from cloud to on-premises to retrieve data
Use case: Getting the status of an order from E-Business Suite (EBS) in real time

Pattern: Cloud event triggers asynchronous message exchange with on-premises systems
Use case: Creation of an incident in RightNow causes creation of a service request in EBS

Pattern: On-premises event triggers asynchronous message exchange with the cloud
Use case: A service request update event results in asynchronous message-based synchronization with RightNow

Pattern: Synchronize data extracts between on-premises and cloud applications
Use case: EBS-based customer data synchronized with Human Capital Management (HCM)
