Chapter 9: Forging an Integrated Solution

In this chapter, you will continue learning about Salesforce-specific knowledge areas a CTA must master. Integration architecture is the fifth domain out of seven. You will cover the needed knowledge and then complete a hands-on exercise using a mini hypothetical scenario for each domain.

The enterprise landscape is continuously growing and becoming more complex. There is hardly an enterprise today that would use Salesforce in isolation from other systems. Distributed solutions are the default nowadays because integration today is much simpler than before. Yet, it is still the area that has the most significant impact on an implemented software solution’s success.

Integration is a complex domain but has a limited number of unique use cases. This is somewhat similar to the Identity and Access Management (IAM) concepts you covered in Chapter 4, Core Architectural Concepts: Identity and Access Management, where the concept itself was challenging and complex but the number of authentication flows was limited.

There is a limited number of Salesforce integration patterns. Understanding these patterns, including their pros and cons and when to use each of them, is vital to passing this domain. In this chapter, you will go through each of them and learn about their characteristics and typical use cases.

After you understand when and how to use each of these integration patterns, you will tackle a mini hypothetical scenario where you put that knowledge into action. In this chapter, you are going to cover the following main topics:

  • Understanding what you should be able to do as a Salesforce Integration Architect
  • Introducing the integration architecture domain mini hypothetical scenario: Packt Medical Equipment (PME)
  • Designing the enterprise integration interfaces to build your connected solution

Understanding What You Should Be Able to Do as a Salesforce Integration Architect

Integration architects are usually technically capable professionals with the ability to understand integrated data structures and their impact on the overall solution.

They recommend, design, and develop a suitable enterprise integration landscape that allows multiple systems to interact in a secure, scalable, and performant way.

Integration architects should be able to select and justify the correct integration pattern based on a given scenario. They should be able to create an integration strategy that considers the integrated applications’ trade-offs, limitations, and capabilities. A Salesforce Integration Architect should also be aware of the out-of-the-box capabilities and limitations of the platform.

Note

According to the Salesforce online documentation, a CTA candidate should be able to meet a specific set of objectives, which can be found at the following link: https://packt.link/ZAjmz.

By the end of this chapter, you should be able to do the following:

  • Recommend the right integration landscape
  • Determine the right enterprise integration architecture technology
  • Design your integration interface using the right integration pattern
  • Select and justify the right platform-specific integration capabilities

Now, it’s time to have a closer look at each of these objectives.

Recommend the Right Integration Landscape

Integration is a complex domain. Determining the components of an integrated solution includes asking the following:

  • How are they connected?
  • How do they securely interact?
  • What data is exchanged?

Answering these questions requires an in-depth understanding of integration principles.

In the integration domain, a lack of understanding of technology can easily lead to wasted efforts and fragile solutions.

For example, an integration architect might decide to transfer files across the integrated enterprise systems using a pub-sub approach, overlooking the impact of the file size on this approach. This could dramatically impact performance and stability.

Another similar example is when an integration architect decides to build an entire middleware on top of the core Salesforce cloud, introducing dozens of classes, objects, and components to create an orchestration engine linking several systems. In this case, the technology is used in the wrong context and creates a fragile solution that will inevitably collapse one day.

Failing to understand a technology’s details and why it has been created the way it has could lead you to make bad design decisions. Chapter 3, Core Architectural Concepts: Integration and Cryptography, helps you to understand the history behind each integration approach and technology, how they differ, and good use cases for each. This will help you determine the ideal landscape architecture for your integrated solution.

Determine the Right Enterprise Integration Architecture Technology

Technology is the next topic to consider once you have decided on your integration approach. You need to understand the different tools available in the enterprise landscape. There are several questions you need to consider, such as the following:

  • What is an Extract, Transform, Load (ETL) tool?
  • What is an Enterprise Service Bus (ESB)?
  • How do ETL and ESB differ from each other? And when should you pick one over the other?
  • What is a reverse proxy?
  • What is an API manager?
  • How can you connect a cloud application to an on-premises application?

You covered most of these in Chapter 3, Core Architectural Concepts: Integration and Cryptography.

It is recommended to get some hands-on experience with an ETL tool or an ESB, or both. This will help you understand the different approaches supported by some tools.

For example, to integrate with an on-premises/behind-a-firewall application, some tools offer the ability to install a secure client (also known as a secure agent). The secure client is a small-footprint application (provided by the same company offering the middleware) that can be installed on the secure intranet (behind the firewall). It can connect directly to a local database or API using specific ports. The secure client would then communicate with the cloud-based solution via other ports, such as 80 or 443, which are used for HTTP and HTTPS, respectively, and are usually permitted by the firewall. On some occasions, you might need to ask the firewall administrator to allow communications over specific ports for specific addresses, assuming the target application has a unique URI.

The following diagram illustrates how a secure client/secure agent works for a cloud-based ETL solution (which could slightly differ from one ETL product to another):


Figure 9.1 – A cloud-based middleware using a secure agent app

So, Figure 9.1 illustrates the following:

  1. The secure agent communicates directly with the cloud middleware, typically using port 443 or 80. These are outbound communications; therefore, there is no need to adjust or change the firewall rules. The agent downloads the definitions of the integration jobs it is supposed to execute and stores them locally (usually encrypted).
  2. Based on the downloaded job info, the secure agent executes the planned jobs.
  3. The data is extracted, transformed, and transferred directly to the target system (in Figure 9.1, Salesforce). Data is not staged in the middleware cloud (avoiding such staging is usually an essential requirement under data privacy regulations).
  4. Based on pre-defined schedules, the secure agent sends logging and monitoring data back to the cloud-based middleware.

Understanding these details will help you present the full end-to-end solution during the review board. Now, you will explore the next objective a CTA should meet in this domain.

Design Your Integration Interface Using the Right Integration Pattern

Integration patterns describe standard ways to integrate Salesforce with other systems. The principles themselves are not technology-/platform-specific, as they can be applied to any other technologies and platforms. You need to fully understand the purposes of these integration patterns, common use cases, and the Salesforce technologies used to implement them. While creating your end-to-end solution, you need to select which pattern to use based on the use case.

Note

You can find more details about the patterns at the following link: https://packt.link/bw3nY.

You will cover the following six patterns:

  • Remote process invocation (RPI): request and reply
  • RPI: fire and forget
  • Batch data synchronization
  • Remote Call-in
  • UI update based on changes in data
  • Data virtualization

Next, you will dive deeper into the details of these patterns and their characteristics.

Remote Process Invocation: Request and Reply

Here is some high-level info about this pattern:

  • Description: This pattern is used to invoke a remote web service and wait for its response. The system is in a blocking state during this waiting time. In a classic single-threaded desktop application, you will notice that the application is frozen while waiting for a response. This is not the case with web applications (such as Salesforce), but it is common to see a progress indicator spinning while waiting for a response.
  • Timing: This pattern is synchronous, sometimes referred to as real-time.
  • Operational layer: This pattern operates on both the data layer and the business logic layer, depending on the implementation. The name of this pattern could be confusing. RPI integrations operate on the business logic layer, such as invoking a remote weather forecasting web service, passing the location and date, and receiving the expected weather forecast for the given date. But in addition, this pattern can also be used for synchronous integrations on the data level, such as invoking a web service to copy a local record to a remote system and waiting until the transaction is committed and a confirmation is returned. The entire data insertion transaction here is considered an RPI.
  • Key use cases: Invoking a remote web service, executing an operation, and waiting for the result. This could include passing a data structure that gets copied to the remote system.
  • Relevant Salesforce features: Synchronous callouts, such as those invoked from a Visualforce page or a Lightning component, are a relevant feature. This does not include asynchronous callouts invoked using future methods or similar capabilities. Salesforce External Services called from Flows or Apex are also relevant to this pattern (a minimal sketch follows this list).
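To make this pattern concrete, here is a minimal Apex sketch of a synchronous (request and reply) callout. The named credential (Weather_Service), the endpoint path, and the response handling are illustrative assumptions rather than a prescribed implementation:

    public with sharing class WeatherForecastClient {
        public class WeatherServiceException extends Exception {}

        public static String getForecast(String location, Date forecastDate) {
            HttpRequest req = new HttpRequest();
            // Hypothetical named credential pointing at the remote weather service
            req.setEndpoint('callout:Weather_Service/forecast'
                + '?location=' + EncodingUtil.urlEncode(location, 'UTF-8')
                + '&date=' + String.valueOf(forecastDate));
            req.setMethod('GET');
            req.setTimeout(10000); // wait up to 10 seconds for the reply

            // The transaction blocks here until the reply arrives (or times out)
            HttpResponse res = new Http().send(req);
            if (res.getStatusCode() == 200) {
                return res.getBody();
            }
            throw new WeatherServiceException('Remote service returned ' + res.getStatus());
        }
    }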

Note

You can find more details at the following link: https://packt.link/B7jk7.

Remote Process Invocation: Fire and Forget

Here is some high-level info about this pattern:

  • Description: This pattern is used to invoke a remote web service and continue with other processes without waiting for a response. The system is not blocked from doing other operations. In a classic single-threaded desktop application, the application would invoke a remote web service and continue with other operations.
  • Timing: This pattern is asynchronous. In some cases, it can be considered near real-time.
  • Operational layer: This pattern operates on both the data layer and the business logic layer, similar to the RPI request and reply pattern.
  • Key use cases: Invoking a remote web service to execute an operation without waiting to know the result. This could include passing a data structure that gets copied to the remote system. This pattern can be modified to include a callback. In that case, Salesforce would expect the remote system to call back once done processing and sending the result. This is a common pattern due to its near real-time nature, agility, scalability, and small overhead.
  • Relevant Salesforce features: Several Salesforce features are relevant, such as outbound messages, platform events, change data capture (CDC), the Salesforce streaming API, callouts using future methods, callouts using queueable classes, email notifications, and push notifications.
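The following is a minimal Apex sketch of the fire-and-forget variant using a future callout; the named credential and payload shape are assumptions for illustration. The method runs after the calling transaction commits, so the caller never waits for the remote system:

    public with sharing class OrderNotifier {
        @future(callout=true)
        public static void notifyRemoteSystem(Id orderId) {
            HttpRequest req = new HttpRequest();
            // Hypothetical named credential for the remote fulfilment system
            req.setEndpoint('callout:Remote_Fulfilment/orders');
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serialize(new Map<String, Object>{ 'orderId' => orderId }));

            // The caller has already moved on; only basic logging happens here
            HttpResponse res = new Http().send(req);
            System.debug('Remote system responded with status ' + res.getStatusCode());
        }
    }

A trigger or flow would simply call OrderNotifier.notifyRemoteSystem(orderId) and continue; platform events or outbound messages achieve a similar effect declaratively.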

Note

You can find more details at the following link: https://packt.link/dl4Cl.

Batch Data Synchronization

Here is some high-level info about this pattern:

  • Description: As indicated by its name, this pattern is mainly used to copy a dataset from one system to another. The dataset usually contains a large batch of records to make the transfer more efficient.
  • Timing: This pattern is asynchronous, usually scheduled to run at a certain time and interval. Execution time is normally subject to the size of the copied dataset.
  • Operational layer: This pattern operates on the data layer.
  • Key use cases: Copy one dataset from one system to another or sync two datasets on two different systems. This pattern is common due to its efficiency, although it does not deliver the desired real-time/near real-time experience demanded by today’s applications. There are many cases where a delayed data sync is acceptable by the business, such as copying invoices from the invoicing system to the data warehouse every 12 hours.
  • Relevant Salesforce features: The Salesforce Bulk API is a relevant feature (as well as the REST, SOAP, and Graph APIs for smaller data volumes). Salesforce Batch Apex with callouts could be tempting, but it is a risky pattern that you are encouraged to avoid. Consider using middleware instead.

Note

You can find more details at the following link: https://packt.link/B5qRq.

Remote Call-in

Here is some high-level info about this pattern:

  • Description: This pattern is used when a remote system wants to invoke a process in Salesforce. From the perspective of the remote system, this will be one of the RPI patterns. From a Salesforce perspective, it is considered a Call-in.
  • Timing: This pattern could be synchronous or asynchronous, depending on the implementation (the external system calling into Salesforce could be waiting for a response or not).
  • Operational layer: This pattern operates on both the data layer and the business logic layer, depending on the implementation.
  • Key use cases: Invoking a standard Salesforce API to create, read, update, or delete (CRUD) data, or invoking a custom Salesforce web service (SOAP or REST), which returns a specific result.
  • Relevant Salesforce features: Several Salesforce features are relevant, such as the Salesforce REST API, the Salesforce SOAP API, the Salesforce Metadata API, the Salesforce Tooling API, the Salesforce Bulk API (for bulk operations), and Apex web services.

It is worth mentioning that Apex web services are particularly useful when you want to expose an atomic functionality that applies the all-or-none transaction principle, for example, to expose a web service that accepts a specific data structure as an input and generates an account, an opportunity, and opportunity line items, where a failure in creating any of these records results in rolling back the entire transaction. Such atomic transactions are essential when data integrity is a concern.
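The following is a minimal Apex sketch of such an atomic service. The URL mapping, request shape, and field defaults are illustrative assumptions; the savepoint provides the all-or-none behavior:

    @RestResource(urlMapping='/v1/dealRegistration')
    global with sharing class DealRegistrationService {
        global class DealRequest {
            global String accountName;
            global String opportunityName;
        }

        @HttpPost
        global static Id createDeal() {
            DealRequest dealReq = (DealRequest) JSON.deserialize(
                RestContext.request.requestBody.toString(), DealRequest.class);

            Savepoint sp = Database.setSavepoint();
            try {
                Account acc = new Account(Name = dealReq.accountName);
                insert acc;
                Opportunity opp = new Opportunity(
                    Name = dealReq.opportunityName,
                    AccountId = acc.Id,
                    StageName = 'Prospecting',
                    CloseDate = Date.today().addMonths(3));
                insert opp;
                // ... opportunity line items would be inserted here ...
                return opp.Id;
            } catch (Exception e) {
                // Any failure rolls everything back: all or none
                Database.rollback(sp);
                RestContext.response.statusCode = 400;
                return null;
            }
        }
    }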

You can also create atomic transactions by utilizing composite requests provided by the Salesforce standard REST API. In a composite request, several dependent requests are executed in a single call where the output of a request could be an input for a subsequent request. You can specify whether an encountered error in a subrequest would result in a complete rollback of the whole composite request or just the subrequests that depend on it.
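For illustration, the body of such a composite request could look like the following sketch (the API version, objects, and field values are assumptions). Setting allOrNone to true rolls back every subrequest if any one of them fails, and the @{newAccount.id} reference feeds the first subrequest's result into the second:

    POST /services/data/v58.0/composite
    {
      "allOrNone": true,
      "compositeRequest": [
        {
          "method": "POST",
          "url": "/services/data/v58.0/sobjects/Account",
          "referenceId": "newAccount",
          "body": { "Name": "Sample Distributor" }
        },
        {
          "method": "POST",
          "url": "/services/data/v58.0/sobjects/Opportunity",
          "referenceId": "newOpportunity",
          "body": {
            "Name": "Q1 Order",
            "AccountId": "@{newAccount.id}",
            "StageName": "Prospecting",
            "CloseDate": "2025-03-31"
          }
        }
      ]
    }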

Note

You can find more details at the following link: https://packt.link/K596j.

UI Update Based on Changes in Data

Here is some information about this pattern:

  • Description: This is a relatively recent pattern. You use it when you need to update the Salesforce UI based on some changes in Salesforce data, without the need to refresh the entire page.
  • Timing: This pattern is asynchronous.
  • Operational layer: This pattern operates on the UI layer.
  • Key use cases: Building a dynamic page that shows specific values and graphs based on Salesforce data, then updating certain elements of that page when the underlying data is changed.
  • Relevant Salesforce features: Several Salesforce features are relevant, such as the Salesforce Streaming API, push notifications, platform events, and CDC (which is technically based on platform events).
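On the server side, this pattern often boils down to publishing a platform event whenever the underlying data changes. The sketch below assumes a hypothetical Order_Status_Change__e platform event; a Lightning web component would subscribe to /event/Order_Status_Change__e through the lightning/empApi module and refresh only the affected page elements:

    public with sharing class OrderStatusPublisher {
        public static void publishStatusChange(Id orderId, String newStatus) {
            // Hypothetical platform event carrying the changed values
            Order_Status_Change__e evt = new Order_Status_Change__e(
                Order_Id__c = orderId,
                New_Status__c = newStatus);
            Database.SaveResult sr = EventBus.publish(evt);
            if (!sr.isSuccess()) {
                System.debug('Event publish failed: ' + sr.getErrors()[0].getMessage());
            }
        }
    }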

Note

You can find more details at the following link: https://packt.link/S4Ql1.

Data Virtualization

Here is some information about this pattern:

  • Description: This pattern is used to retrieve and display data stored on a remote system in Salesforce without persisting the data. This is very useful when there is no real need to keep the data in Salesforce. The data will be fetched on the fly and on demand and displayed to the user.
  • Timing: This pattern is synchronous. However, some techniques, such as lazy loading, could turn this into a special case of asynchronous communications.
  • Operational layer: This pattern operates on the UI layer.
  • Key use cases: Building UIs that utilize data hosted outside Salesforce, such as archived data. Mashups using remote services such as Google Maps are another valid use case.
  • Relevant Salesforce features: Custom Visualforce pages, Lightning components, Salesforce Connect, and Salesforce Canvas are all relevant features.
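As a small sketch of how this pattern looks with Salesforce Connect: once a remote OData table is mapped, it appears as an external object (with the __x suffix) and can be queried like any other sObject, with rows fetched from the remote system on demand. The object and field names below are hypothetical:

    // Nothing is persisted in Salesforce; each query reaches out to the
    // remote OData endpoint behind the external object
    String accountNumber = 'ACC-0001'; // sample key into the remote table
    List<Archived_Invoice__x> invoices = [
        SELECT Invoice_Number__c, Amount__c
        FROM Archived_Invoice__x
        WHERE Account_Number__c = :accountNumber
        LIMIT 50
    ];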

Note

You can find more details at the following link: https://packt.link/smgJb.

While designing an integration interface, you need to define the pattern and specify which Salesforce technology to use. This skill is the next thing you are expected to master in this domain.

Select and Justify the Right Platform-Specific Integration Capabilities

Salesforce comes with a set of out-of-the-box functionalities to facilitate the different integration patterns:

Capability

Description

REST API

Salesforce comes with an out-of-the-box REST API that can be used to interact with Salesforce data. REST APIs, in general, are easy to integrate with, especially from mobiles and JavaScript. The Salesforce REST API supports simple HTTP calls (GET, POST, PUT, PATCH, and DELETE).

SOAP API

The Salesforce out-of-the-box SOAP API can be used by consuming one of the two standard Web Services Description Language (WSDL) files. The Enterprise WSDL is strongly typed, which makes it challenging to maintain but easier to use if the data model rarely changes. On the other hand, the Partner WSDL is loosely typed, which makes it more dynamic and fit to be used by partners or organizations that experience frequent data model changes.

The SOAP API can be used to work with Salesforce data similar to the REST API. However, it is more challenging to implement for thin clients, such as mobile devices and JavaScript apps. Moreover, it tends to have a bigger payload and is therefore considered a bit slower than REST.

Bulk API

This API is designed to work with huge chunks of data. Some ETL tools are built to switch automatically to this API when they are used to load a huge amount of data into Salesforce.

Salesforce Connect

This is a paid feature that allows retrieving data on the fly from an external data source that supports the OData 2.0 or OData 4.0 standards. OData interfaces are REST-based, and you can invoke them using custom code. Salesforce Connect makes that process much more straightforward. External OData tables are automatically represented in Salesforce as External Objects, allowing this data to be linked to other Salesforce data and queried using regular SOQL statements.

Canvas

Salesforce Canvas is a paid feature that enables you to integrate remote web applications within Salesforce. This is typically a UI-level integration, but the Canvas SDK can add more advanced functionalities.

Chatter REST API

Chatter has its own REST API. This API can interact with objects such as Chatter feeds, recommendations, groups, topics, and followers.

Metadata API

This API is unlikely to be used in a data integration scenario. It is mostly used by release management tools. It is used to retrieve, create, update, deploy, or delete Salesforce metadata information, such as page layouts and user profiles.

Streaming API

The Streaming API is used to develop modern dynamic pages. It allows your pages/components to receive notifications when Salesforce data is changed, allowing you to update parts of your page/component accordingly. It utilizes the PushTopic functionality.

Tooling API

This API is designed to help with building custom development tools that interact with the Salesforce Platform.

Apex callouts

This functionality allows invoking a remote web service using Apex. Apex callouts could be synchronous or asynchronous. Asynchronous methods must have the @future annotation. You can also utilize the Queueable interface to develop Apex classes with asynchronous methods.

You can make long-running callouts using the continuation class.

Email

This is an old integration technology. However, it is still valid and has its use cases, for example, while integrating with legacy apps that do not support any more recent interfaces. You can utilize the InboundEmailHandler interface to develop custom Apex classes to handle inbound emails received by a Salesforce email service.

Outbound messages

This is a configurable outbound integration capability. It can contain data from one Salesforce object only and is sent using the SOAP standard. This means the consumer must be able to implement a SOAP WSDL. The sent outbound message can include a session ID, enabling the receiver to do a callback into Salesforce, using the session ID for authentication. Outbound messages have a 24-hour retry mechanism.

Platform events

Platform events are based on the pub-sub integration approach. You can define the structure of a platform event in a similar way as you do with custom objects. Platform events can be published using Apex, Flow, and Process Builder, and you can subscribe to them using the same tools in addition to CometD clients. Starting from the Winter ’23 release, you can also use the Pub/Sub API to publish and subscribe to platform events.

Since platform events can be easily subscribed to using different Salesforce features, they can be utilized to create applications based on event-driven architecture within the Salesforce Platform itself, in addition to passing data to external subscribers.

Change data capture (CDC)

CDC is a feature built on top of platform events, which means that they share a lot of characteristics. CDC publishes events representing changes to Salesforce records, such as record creation, updates, deletion, and undeletion.

External services

External services allow invoking remote web services using Salesforce flows and Apex.

Table 9.1: Out-of-the-box functionalities to facilitate different integration patterns

Now that you are familiar with this domain’s expectations, it’s time to work on a practical example where you’ll use this knowledge to design an integrated, secure, and scalable solution.

Introducing the Integration Architecture Domain Mini Hypothetical Scenario: Packt Medical Equipment

The following mini scenario describes a challenge with a particular client. The scenario is tuned to focus on challenges related to integration architecture specifically. But as you would expect, there are also requirements related to other domains, such as data and security.

Before you start, make yourself familiar with the six integration patterns mentioned earlier in the Design Your Integration Interface Using the Right Integration Pattern section. In this mini scenario, you will go through practical examples of some of these patterns. This chapter will focus on the logical thinking required to design an integration interface.

You are also advised to go through the scenario once to build an initial picture of the required solution. Then, go through the requirements and try to solve them yourself. Once you are done with your attempt, compare that with the suggested solution. Try to practice explaining the end-to-end solution to someone else, who would play the role of a panel judge, describing a particular integration interface.

Scenario

Packt Medical Equipment (PME) has been selling medical equipment to health providers worldwide for the past 75 years. PME currently sells more than 20,000 different device models from various brands. PME has experienced massive growth in recent years. They sold nearly 5 million devices last year, and they expect their sales to grow by 10% per year.

Current Situation

PME operates in multiple geographies. Each has a variety of CRM solutions tailored to working with distributors. PME would like to replace all the existing CRM solutions with a Salesforce-based solution. However, they would like to retain two existing systems as they believe they offer a valuable set of tailored services:

  • A centralized, browser-based Enterprise Resource Planning (ERP) system, used by the PME account managers to view the financial details of distributors they have a relationship with. The system is accessible only from within the company’s intranet.
  • A centralized inventory management system that holds data about the devices available at each location. It includes updated information about the arrival date and time of devices in transit to distributors. Moreover, the system allows for getting information about the devices at distributor locations.

Data is currently entered manually into the inventory management system. The system is not integrated with other systems and does not have any exposed APIs. The system utilizes an MS SQL database to store the data. The ERP system is also not integrated with other systems. However, it has a rich set of SOAP and REST APIs.

Users are required to authenticate to each of these systems before they can use them. Authentication is currently done separately as each of these systems stores its own user credentials.

The ordering process consists of three stages, namely negotiation, confirmation, and shipping. Orders are placed quarterly.

PME is looking to modernize its landscape and offer more standardized processes worldwide. Moreover, they are looking to increase the productivity of their users by avoiding double data entry as much as possible.

Requirements

PME shared the following requirements:

  • The PME security team has mandated the use of Single Sign-On (SSO) across all the systems used as part of the new solution, including the ERP and the inventory management system.
  • Device orders should originate from Salesforce.
  • Before the negotiations stage, PME should be able to set a maximum and minimum allocation for each distributor’s device type.
  • In the negotiations stage, the distributor should be able to place an order, which contains a list of the device models and quantities.
  • If each model’s requested quantity falls within the defined allocation per model, the order should be automatically confirmed.
  • If each model’s requested quantity falls outside the defined allocation per model, the order should be sent to the account manager for approval. The system should automatically retrieve the four values indicating the financial status of the distributor from the ERP.
  • The account manager should utilize the financial health indicator values, in addition to the historical orders for this distributor, to determine the right action.
  • When the order is about to be shipped, the inventory management system may create multiple shipments. For each shipment, a list of the devices and their unique manufacturer IDs is created. The shipments and their line items should be visible to the distributor’s fleet managers in Salesforce. The status of each line item should be in transit.
  • When the shipment is delivered, the distributor’s fleet managers are given three days to confirm the shipment’s receipt. They should confirm the status of each and every device identified by its unique manufacturer ID. The status of each line item should be available in stock.
  • The account manager should be notified if more than three days have passed without receiving a confirmation.
  • On a regular basis, the distributor’s fleet managers update each device’s inventory status identified by its manufacturer ID. They should indicate whether the device has been sold, returned, damaged, or is still available in stock. This information should also be updated in the inventory system.

PME is looking for your help to design a scalable integrated solution that meets their requirements and ambitious roadmap.

Designing the Enterprise Integration Interfaces to Build Your Connected Solution

Give yourself time to quickly skim through the scenario, understand the big picture, and develop some initial thoughts about the solution. This is an essential step for every scenario. Our approach is to incrementally solve the scenario. However, you still need to understand the big picture first and build some initial ideas, or you might risk losing a lot of time redesigning your solution.

Understanding the Current Situation

The first paragraph has some general information about PME. You learn from it that PME has some legacy CRM solutions and is looking to consolidate them using Salesforce. Moreover, you get to know that they would like to retain two systems: a legacy ERP system that is accessible only via the corporate intranet, and a centralized inventory management system.

Both systems are disconnected from any other system. The scenario did not exactly specify where they are hosted, but considering what you know about the ERP solution, it is fair to assume that it is hosted on-premises (or behind a VPN, but this is a less common scenario). It is also reasonable to assume that the inventory management system is also hosted on-premises.

Even at this stage, you need to start thinking: How can I integrate a cloud-based platform such as Salesforce with an on-premises-hosted application?

Note

If you are struggling to find the answer, just flip back to Chapter 3, Core Architectural Concepts: Integration and Cryptography, particularly the Discussing Different Integration Tools section, and make sure you are familiar with each of these tools.

In short, you need some sort of middleware to do outbound calls from Salesforce to an on-premises-hosted application. You do not know yet what kind of middleware you would need. You will find out more while going through the rest of the scenario.

For the time being, capture what you learned on a draft landscape diagram. Your diagram could look like the following:

This is the first draft of the landscape architecture diagram. Salesforce is connected to middleware, which in turn is connected to the ERP and the inventory management system. Another box, ‘Legacy CRM systems’, appears at the bottom.

Figure 9.2 – Landscape architecture (first draft)

PME wants to modernize its landscape with Salesforce. They also want to integrate it with their legacy applications, and they want that done in the right way to ensure future scalability and extensibility.

They shared a lengthy list of requirements, which you are going to cover next.

Diving into the Shared Requirements

Now, go through the requirements shared by PME, craft a proposed solution for each, and update your diagrams accordingly to help with the presentation. Start with the first requirement, which begins with the following line.

The PME security team has mandated the use of Single Sign-On across all the systems used as part of the new solution, including the ERP and the inventory management system.

You know that the ERP and inventory management systems have their own authentication capabilities. However, there is nothing in the scenario indicating that they do not support standards such as OAuth 2.0, OpenID Connect, or SAML 2.0. You can make that assumption, clearly communicate it, and base your solution on it.

You still need to define an identity provider that can work with all systems. Moreover, you need to explain which system will be holding the user credentials (the identity store). Missing the identification of the identity store is a common mistake that you should avoid.

There is a technique that you will learn later, in Chapter 11, Communicating and Socializing Your Solution, called question seeding. This, in short, is intentionally leaving some lengthy conversational areas unanswered to draw attention to them during the Q&A. It must be well understood and used carefully. When you use that technique, you already know the answer to something. You draw the attention of the judges to it but then leave it open, allowing them to ask questions about it during the Q&A.

When you leave some topics open because you are not aware of their importance or do not know the answer, this is more of an attempt to create a smokescreen and hope that the judges will not notice. This is a strategy that you should avoid at all costs.

When you are proposing a Single Sign-On strategy, you need to be crisp and clear about all of its areas unless you are intentionally leaving them open for further conversations. In this scenario, you have to explain your identity management strategy. Your proposed solution could be as follows:

I am assuming that both the ERP and the inventory management systems support the OpenID Connect standard. I propose introducing an identity provider and identity management tool such as Ping Identity to the landscape. Ping will utilize its identity store (PingDirectory) to maintain user credentials.

Currently, user credentials are hosted locally by ERP and inventory management. I propose migrating the user details from these systems to Ping.

I am assuming that it would not be possible to migrate the systems’ passwords because only their hashed values were stored. Therefore, we need to send emails to all the migrated customers asking them to set a new password. The email will contain a protected one-time link that allows them to set a new password using a Ping-hosted page.

Ping will handle provisioning and de-provisioning users to all linked service providers (SPs). I will utilize a federated ID to uniquely identify the customer across the systems. This could be the employee ID.

When the user is attempting to access the Salesforce instance using the unique MyDomain URL, the system will detect the authentication providers configured for that instance. And if Ping is the only authentication provider, the user will be automatically redirected to Ping’s login page. The authentication will take place using the OpenID Connect web server flow.

Should you stop there or carry on explaining exactly what that flow looks like? It is up to your strategy and how you are planning to utilize the presentation time. If you stopped there, you are effectively seeding a question. If the judges are interested in testing your knowledge about that flow, they will come back to it during the Q&A. If they are not (because they believe you already know what it takes to pass the IAM domain), then they will not.

This is the power of that technique. If used wisely, you can avoid wasting time explaining areas that you do not necessarily need to.

Are you surprised to see an IAM requirement in a mini scenario focusing on integration? You should not be by now. You learned that the domains are not isolated and are very much related to each other. Now, update your landscape architecture diagram, and move on to the next requirement.

Device orders should originate from Salesforce.

This is a short and direct requirement. The orders have to originate from Salesforce. However, that does not mean they will have to stay there for their entire life cycle.

A question that could come to your mind now is: which object can I use for this requirement? The obvious answer is the standard order object. However, do not let the word choices used in the scenario dictate your design decisions. If the scenario calls it an order, it does not mean it has to be translated to the standard order object. You can use the Opportunity object to represent an order in some use cases, mainly when there is a price negotiation process (quotes) or a need to forecast.

You could also decide to use a custom object if any of the standard objects cannot meet the requirements. For the time being, you do not have enough clarity on the situation to decide.

Before the negotiations stage, PME should be able to set a maximum and minimum allocation for each distributor’s device type.

How are you planning to organize and offer PME’s products to its distributors? One way is to utilize Pricebooks (the object is technically called Pricebook2). In this case, you can define maximum and minimum allocations as additional fields on PricebookEntry.

Remember that Pricebooks work with both the order and opportunity objects. You still have not decided which one to use, but while skimming the scenario, you found no requirement that indicates price negotiation or forecasting. You can start by assuming that you will use the order object and update that later if required.

Is this the only way to solve this requirement? You know the answer by now. There are multiple ways to solve a problem. Your solution should offer a valid and technically correct solution considering the assumptions you have made.

Note

Are the requirements in the scenario good enough to propose using B2B Commerce?

From a neutral opinion, no. But if you decide to propose it, be prepared to clearly explain how you plan to configure it to meet all required functionalities.

Your proposed solution could be the following:

I propose using standard Pricebook and PricebookEntry objects to offer products to different distributors. They will allow me to use a different price per distributor based on agreements. I also propose introducing four new fields to the PricebookEntry object to hold the maximum and minimum allowed quantities per year and per order.

Note

So far, you do not have any requirement indicating that PME’s distributors will have access to Salesforce. Therefore, you can assume that Pricebooks will continue to be accessible to all sales agents. To adjust this solution to work with partner communities, you need to update the organization-wide default (OWD) of the Pricebook object to no access or view only, then share the right Pricebook with the right set of users.

Update your data model diagram and move on to the next requirement.

In the negotiations stage, the distributor should be able to place an order, which contains a list of the device models and quantities.

You now have a requirement indicating the need for distributors to access Salesforce. The distributors will need the ability to create orders. They will also require a user license that supports record-sharing capabilities using manual sharing or sharing rules. Both Partner Community and Customer Community Plus provide these capabilities. However, it is common to utilize the Partner Community licenses with distributors (after all, they are business partners). You can assume that and adjust later if needed. Your proposed solution could be the following:

To grant the distributors access to Salesforce to place orders, I propose assigning a Partner Community license to their users. I will update the OWD value of the Pricebook object to no access. The admin will then manually share each distributor’s Pricebook with the right partner role for that distributor.

Update your landscape architecture, data model, role hierarchy, and actors and licenses diagrams. They could now look like this:

This is the second draft of the landscape architecture diagram. It is similar to Figure 9.2, except that the box labeled ‘Salesforce’ now contains ‘Partner Community’, and Salesforce is connected to the ERP and the inventory management system through Ping Identity.

Figure 9.3 – Landscape architecture (second draft)

Your data model diagram could look like the following:

This is the first draft of the data model diagram. It shows the ‘Contact’, ‘Account’, ‘Order’, ‘Product2’, ‘Pricebook2’, ‘PricebookEntry’, and ‘OrderItem’ objects and their relationships.

Figure 9.4 – Data model (first draft)

Your role hierarchy diagram could look like the following:

This is the first draft of the role hierarchy diagram. It shows the CEO role at the top, with ‘VP of Sales’ beneath it and multiple subordinate roles below that.

Figure 9.5 – Role hierarchy (first draft)

Your actors and licenses diagram could look like the following:

This is the first draft of the actors and licenses diagram. It lists the two actors, the account manager and the distributor, along with their functions.

Figure 9.6 – Actors and licenses (first draft)

Now, move on to the next requirement:

If each model’s requested quantity falls within the defined allocation per model, the order should be automatically confirmed.

To fully understand this requirement, you need to read the one after. At this stage, you should probably ask yourself: Is there an out-of-the-box functionality that enables me to check these four fields on the PricebookEntry object upon placing an order?

If each model’s requested quantity falls outside the defined allocation per model, the order should be sent to the account manager for approval.

You need to introduce a mechanism to check the four values on the PricebookEntry object and determine the order’s status. In a specific use case, you need to automatically submit the order for approval.

Remember that two of the fields you suggested introducing to the PricebookEntry object were annual minimum and maximum quantities. To check these values, you need to query all the specified distributor orders in the current year.

Your early design decisions affect the potential solutions available to you at this stage. If you used a different mechanism to introduce the quantity restrictions, you would have ended up with a different challenge at this stage. However, if your solution was technically correct, you are very likely to be able to work around its restrictions. If your earlier solution had serious drawbacks, you might find yourself too restricted at this stage and might need to reconsider your earlier options.

Your proposed solution could be the following:

I propose introducing a trigger and an Apex trigger handler class on the OrderItem object, which fires upon inserting or updating a record of that object. The trigger would then retrieve the four quantity restriction values from the related PricebookEntry record and compare them against the OrderItem record’s quantity. Moreover, the code would also query all OrderItem records placed by the same distributor for the same product in the current year and determine whether the distributor is still adhering to the annual restricted quantities.

Based on that, the Apex code would either set the order status to confirmed or pending approval, and will submit the record for approval if needed.
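A minimal sketch of that trigger and handler pair could look as follows. The restriction fields on PricebookEntry and the Needs_Approval__c flag on OrderItem are hypothetical names for the fields proposed earlier; only the per-order checks are shown, and the annual query and approval submission would follow the same shape (null checks omitted for brevity):

    trigger OrderItemTrigger on OrderItem (before insert, before update) {
        OrderItemTriggerHandler.checkAllocations(Trigger.new);
    }

    public with sharing class OrderItemTriggerHandler {
        public static void checkAllocations(List<OrderItem> items) {
            Set<Id> entryIds = new Set<Id>();
            for (OrderItem item : items) {
                entryIds.add(item.PricebookEntryId);
            }
            Map<Id, PricebookEntry> entries = new Map<Id, PricebookEntry>([
                SELECT Id, Max_Qty_Per_Order__c, Min_Qty_Per_Order__c
                FROM PricebookEntry
                WHERE Id IN :entryIds
            ]);
            for (OrderItem item : items) {
                PricebookEntry entry = entries.get(item.PricebookEntryId);
                if (entry != null &&
                    (item.Quantity > entry.Max_Qty_Per_Order__c ||
                     item.Quantity < entry.Min_Qty_Per_Order__c)) {
                    // Flag the line; after-save logic would set the order status
                    // to pending approval and submit the record for approval
                    item.Needs_Approval__c = true;
                }
            }
        }
    }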

The rest of the requirement indicates a need to retrieve four values from the ERP to determine the distributor’s financial health.

The system should automatically retrieve the four values indicating the financial status of the distributor from the ERP.

The scenario could give you the impression that this transaction should happen sequentially after submitting the record for approval. But the questions you should ask yourself are: At which integration level will this interface operate? Which pattern should be used? Do you need to copy the data from the ERP, or can it be retrieved on the fly? What is the timing of this interface?

Now that you understand this, check the next requirement to determine the right integration approach:

The account manager should utilize the financial health indicator values, in addition to the historical orders for this distributor, to determine the right action.

The data is retrieved to support the account manager in making a decision. Consider the following:

  • Timing: What is the ideal timing for this interface? The financial status of the distributor could change from one day to another, perhaps even faster, depending on how frequently this data is updated at the ERP. Ideally, this data should be as up to date as possible.
  • Need to host the data in Salesforce: Do you need to copy the data to Salesforce? What benefit would you gain from that? The data would be reportable, quicker to access, and subject to Salesforce’s user access control and data sharing and visibility. Moreover, you could trigger workflows and other automation based on data change. But do you need any of that in this requirement? Hosting the data in Salesforce has its challenges too. For example, you might find out that this information has to be encrypted at rest, which will open a host of other restrictions. The requirement is simpler than that. Try not to overcomplicate it for yourself. You only want to retrieve these values during the approval process to aid the account manager. This could take place on the fly. There is no need to copy the data or store it in Salesforce.
  • Determine the integration level and the pattern used: This is going to be easy from here onward. You are looking for a pattern that allows near real-time data retrieval, where the data is not stored in Salesforce but simply displayed at the UI level. Go through the patterns listed earlier in this chapter and see which one fits the requirement best.

Your proposed solution could be the following:

To fulfill this requirement, I considered different integration patterns. I believe that the right pattern to use is data virtualization, where the data is fetched on the fly and on demand and displayed only on the UI level. No data will be kept in Salesforce.

I will create a Lightning web component app or Visualforce page that initiates a callout to the middleware upon loading. I am going to add a hyperlink field to the standard approval page. Once the link is clicked, a new tab will open with the Visualforce page. The callout will be made to the middleware, which, in turn, will call the ERP system and retrieve the data. The data will then be displayed on the Visualforce page. I will use the continuation class in my Visualforce page controller to ensure an optimal user experience where the callout is executed in a background thread without blocking the main thread while waiting for the response.

I propose using MuleSoft as an integration middleware. It can consume the APIs exposed by the ERP, and it can handle exceptions and retry connecting if needed. I am assuming that the on-premises firewall will be configured to allow MuleSoft to communicate with these APIs.

I will utilize the named credentials functionality to authenticate from Salesforce to MuleSoft using a named principal. I am assuming that MuleSoft would authenticate to the ERP using simple authentication. The integration channel will be secured between Salesforce and MuleSoft using two-way TLS.

The page will also retrieve and display the distributor’s historical orders. The scenario did not specify the number of orders created every year by each distributor. I am assuming that to be low. I am assuming that there are no more than 100,000 orders generated every year and that PME would archive all orders over five years of age. The scenario did not specify an archiving requirement, so I will not proceed with further details, but I am happy to do so if required.

Once the order is approved, I will utilize a field update to update the status of the order to confirmed.
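A minimal sketch of the continuation-based controller described above could look as follows. The named credential (MuleSoft_ERP), the endpoint path, and the page wiring are assumptions:

    public with sharing class FinancialHealthController {
        public String indicators { get; set; }
        private String requestLabel;
        private Id accountId = ApexPages.currentPage().getParameters().get('accountId');

        // Action method: returns the Continuation so the callout runs on a
        // background thread instead of blocking the request thread
        public Object fetchIndicators() {
            Continuation cont = new Continuation(60); // timeout in seconds
            cont.continuationMethod = 'processResponse';

            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:MuleSoft_ERP/financial-health?accountId=' + accountId);
            req.setMethod('GET');
            this.requestLabel = cont.addHttpRequest(req);
            return cont;
        }

        // Callback: invoked once the middleware responds
        public Object processResponse() {
            HttpResponse res = Continuation.getResponse(this.requestLabel);
            this.indicators = res.getBody();
            return null; // re-render the page with the retrieved values
        }
    }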

That provides an end-to-end solution to this requirement. Do not try to over-complicate things for yourself. The scenario did not specify an archiving requirement. You can add some assumptions and let the judges know that you are willing to explain that if needed, but do not assume that you need to solve something that hasn’t been requested and therefore waste valuable time in your presentation.

Note

Remember to use your diagrams while explaining this solution, and any other solution, in fact. Your diagrams are tools to help you tell the end-to-end story.

Update your diagrams; you are going to need them to explain the solution to the judges. Now, move on to the next requirement:

When the order is about to be shipped, the inventory management system may create multiple shipments.

The questions you should ask yourself at this stage are the following:

  • Timing: What is the ideal timing for this interface? When is this data going to be required? The shipments and their line items should ideally be available for the fleet managers as soon as possible, but a slight delay of a few minutes is usually acceptable. Shipments take a long time; a delay of 10-15 minutes is hardly going to make a difference.
  • Need to host the data in Salesforce: Do you need to copy the data to Salesforce? The short answer so far is no. There is no benefit in doing so.
  • Integration level and pattern: What is the integration level? You are not sure yet whether this is going to be a UI-level integration or a data-level integration. It will not be a business process-level integration as there is no business process that you need to invoke on either end.

You need more details before you can come up with the right decision. This is applicable in real life as well. Do not rush to conclusions and assume that every integration is data synchronization. This is a very common mistake that you should avoid. Rushing to conclusions and skipping the organized way of design thinking usually leads to the wrong conclusions.

Take this case as an example. You can rush to the conclusion that the data has to be copied to Salesforce. But what would the justification be? Why can you not simply keep this data where it is and retrieve it on the fly (just like you did for the distributor’s financial health indicators)? If you do not have a solid technical reason, then your solution is missing something. Be prepared to be challenged on that during the review board (the judges will know that this is not well justified, so they will probe with questions).

Now, check the next requirement to further understand how this interface should work:

When the shipment is delivered, the distributor’s fleet managers are given three days to confirm the shipment’s receipt.

You are now learning about a new requirement. Depending on the received data, there is a need to invoke a process within Salesforce.

There are several ways to fulfill this requirement if the data is in Salesforce, such as utilizing time-based workflows, tasks, or scheduled batch jobs (to update the shipment line items with the number of days passed and trigger a notification accordingly). But there is hardly a way to do that if the data is not stored in Salesforce. You would need to modify the inventory management application to introduce this functionality. Updating such a legacy application is a very time-consuming task and typically results in sub-optimal functionality.

Now you have a solid justification for proposing copying the data over to Salesforce. You have an explicit requirement that cannot be easily fulfilled without hosting the data in Salesforce. Your proposed solution could be the following:

To fulfill this requirement, I considered different integration patterns. I believe that the right pattern to use is Batch data synchronization, where the data is copied from the inventory management system to Salesforce.

MuleSoft will be configured to run a scheduled Batch job every 15 minutes. The job will copy the new shipment and shipment line item records from the inventory system to Salesforce. Ideally, this job should fire whenever a new shipment record is created in the inventory management system. But considering that it is a legacy application, I assumed that it is incapable of notifying MuleSoft of this activity. I also assumed that a delay of 15 minutes is acceptable to the business. If not, the frequency of this job can be increased.

Once the records are created in Salesforce, their status will be set to in transit. The shipment records will be linked to the Order object using a master/detail relationship. This will ensure the records are visible to all the distributor users who have access to the order record.

I will use an autolaunched flow to create a task due in three days to update the shipment line item status. The task will be assigned to the distributor’s fleet manager’s queue.

This is an inbound interface to Salesforce. MuleSoft would authenticate to Salesforce using an OpenID Connect web server flow (for the first time, during setup) followed by the Refresh Token flow afterward using a named principal (integration user). This interface will be secured using two-way TLS.

I also assumed that MuleSoft would use its own connector to MS SQL to directly access the inventory management database. I assumed that the on-premises firewall would be configured to allow MuleSoft to access the MS SQL server database. This interface will be secured using one-way TLS. Simple authentication will be used.

I assumed that the number of shipments is relatively low. I assumed earlier that there are 100,000 orders created every year, so I am assuming two shipments per order, which makes 200,000 shipments per year. According to the scenario, PME sold 5 million devices last year, with 10% year-on-year growth. Assuming each device appears as one shipment line item, that accumulates to roughly 5,000,000 × (1.1^n − 1) / 0.1 line items after n years, or around 16.6 million records after just three years. This means that the shipment line item object could become a large data volume (LDV) object after a few years.

Therefore, I propose using a MuleSoft scheduled Batch job to delete from Salesforce the shipment records for devices that the distributors sold more than a year ago. This assumes that these values will already exist in the inventory management system, so they would not be completely lost after they get deleted from Salesforce. I have considered this as another integration interface in my diagram.

You might think that coming up with all these words during the presentation is too difficult. That is true if you are not utilizing your diagrams. Note that the diagrams will contain all the info about authentication and security. You are simply using your diagrams in your presentation to tell the end-to-end story.

Update your data model, landscape architecture, and list of integration interfaces. Then, move on to the next requirement:

The account manager should be notified if more than three days have passed without receiving a confirmation.

This is another requirement that can be fulfilled in multiple ways. Your proposed solution could be the following:

I will utilize standard reports and dashboards for this. I will create a report showing overdue tasks of a specific type. I am assuming that a report will be created for each account manager and will be scheduled to be sent daily. This will ensure the account manager is notified. I could have used time-based workflows as well, but I believe reports will be good enough.

On a regular basis, the distributor’s fleet managers update each device’s inventory status, identified by its manufacturer ID.

Most of this has been covered already in previous solutions. However, make sure you read the requirement fully. Do not risk skipping a requirement and therefore missing a valuable point. Your proposed solution could be the following:

The first part of the requirement is already fulfilled using the new custom objects that we introduced. The distributor’s fleet managers will be granted read/write access to these objects’ status fields using field-level security. Moreover, I already mentioned that the shipment object would be linked to the order using a master/detail relationship. This means the records will be visible to all partner users who have access to the order record.

To synchronize these values back to the inventory management system, I will utilize another MuleSoft scheduled Batch job. I am assuming that it will run every hour and that this delay is acceptable to the business. The interface will copy the changed shipment line item statuses back to the inventory management system. This will be another interface utilizing the Batch data synchronization pattern.

I thought of utilizing platform events or change data capture (CDC) to send the changed data from Salesforce to MuleSoft. But I preferred using scheduled batch to ensure the consistency of delivering all changes to the inventory management system, especially since there was no shared requirement indicating a need for near real-time data synchronization. This interface will also be easier to build, considering that we already have a similar interface operating in the opposite direction.

The same authorization and security mechanisms used by the shipment synchronization interface will be used by this one too.

That was the last shared PME requirement. You can update all the diagrams and see how they look.

Here is the landscape architecture diagram:

This is the final draft of the landscape architecture diagram. Salesforce is connected to MuleSoft (in the middle), which in turn is connected to the ERP and the inventory management system. The ERP and the inventory management system are also connected to Salesforce through Ping Identity.

Figure 9.7 – Landscape architecture (final)

Here are the integration interfaces:

This is the final draft of the integration interfaces table. It lists each interface’s code, source/destination, integration layer, integration pattern, description, and security and authentication details.

Figure 9.8 – Integration interfaces (final)

And the data model diagram looks like this:

This is the final draft of the data model diagram. It shows the ‘Contact’, ‘Account’, ‘Order’, ‘Shipment__c’, ‘Product2’, ‘Pricebook2’, ‘PricebookEntry’, ‘OrderItem’, and ‘Shipment_Line_Item__c’ objects and their relationships.

Figure 9.9 – Data model (final)

Note

There are no changes in the other diagrams.

That concludes the scenario. You will continue to see more integration-related requirements in the chapters to come. But you have now learned about the structured way to determine the right design for an integration interface.

In this scenario, you needed to connect to an on-premises-hosted application; therefore, it was easy to justify using middleware. However, integration middleware adds value in many other use cases. Please ensure you go through the details listed in Chapter 3, Core Architectural Concepts: Integration and Cryptography. A lot of technical debt could be created in the Salesforce Core Cloud in the absence of integration middleware.

Summary

In this chapter, you have dived into the details of the Salesforce integration architecture domain. You learned what a CTA is expected to cover and at what level of detail. You discovered the key Salesforce integration patterns and understood their importance and impact on the clarity of the designed solution.

You then tackled a mini hypothetical scenario that focused on integration architecture, solved it, and created some engaging presentation pitches. You learned the structured way of designing an integration interface and encountered some interesting design decisions. You came across multiple examples where you used diagrams to deliver a structured and easy-to-follow solution presentation. You learned how to avoid the negative impact of rushing to conclusions about the integration interfaces.

You also covered other topics in the mini scenario, including migrating users to new identity stores and dealing with partner users, and got a glimpse of a presentation technique known as question seeding.

You will now move on to the sixth domain you need to master, development life cycle and deployment planning.
