CHAPTER 4
Architecture Design Planning

This chapter continues the discussion in Chapter 2, “Configuration Manager Overview,” and Chapter 3, “Looking Inside Configuration Manager.” It covers the requirements gathering, planning, and operational transition required to deliver a successful System Center Configuration Manager (ConfigMgr) deployment and provides a starting point and overview for later chapters. The chapter discusses all areas required for a new deployment and contains helpful information for those upgrading to ConfigMgr Current Branch.

Developing the Solution Architecture

ConfigMgr Current Branch is designed to continue the flexibility and configurability of Microsoft’s flagship change and configuration management product, while dealing with a rapidly evolving world. Like any other technology, ConfigMgr must be delivered as a solution to your business and technical requirements. You must determine your organization’s goals, the current state of your environment, and current constraints of your information technology (IT) service delivery capability. You can then fit ConfigMgr into your solution rather than fitting your solution to technology. Figure 4.1 summarizes the planning and design process. Note that the “Test and Stabilize” phase in Figure 4.1 is not covered in this chapter, but testing should be included as part of a ConfigMgr deployment project to ensure technical readiness and that the design meets functional requirements.


FIGURE 4.1 Planning and design process overview.

Discovering Business Requirements

The most important requirements to consider are those of the business. These vary from one organization to another and are heavily dependent on the business’s industry alignment. For example, the needs of a university vary from those of a retail bank or a consumer packaged goods manufacturer. Attitudes to risk, willingness to adopt new ways of working, and overall views of technology also differ between industries.

It may be difficult to determine business requirements for a management product such as ConfigMgr. Business requirements are often IT or technical requirements masquerading as requirements for the business. For example, security requirements are not business requirements. While adhering to the compliance and regulatory environment in which the business operates is a business requirement and security requirements may underpin that, implementing security updates is not something the business may even care about.

Following are areas to explore with your business representatives as potential sources of requirements:

- User Experience: User experience is often overlooked in requirements gathering. There may be requirements around availability. For example, an emergency 911 call center might require that no more than 15% of users be offline at any time; for an investment bank, the requirement could be that trading desk devices are online during market hours plus one hour before and one hour after trading. Another requirement could concern prompting: whether users see minimal prompts or are always prompted, putting them in control. Formally tracking these requirements helps balance them against software update compliance and other requirements.

- Speed of Delivery or Availability of (Other) Services: Certain businesses require speed of delivery for various functions of ConfigMgr. A business relying heavily on field sales staff may require that operating system deployments complete in two hours to refresh a laptop or tablet, minimizing the time sales staff spend in the office away from customers. This objective may then be tied to requiring infrastructure in field offices capable of delivering images at appropriate speeds.

- Cost Controls: Business pressures may force IT to minimize capital expenditures or costs in general; factor this into the solution architecture as a business requirement. While these pressures are often mentioned during project discussions, you should call them out explicitly, along with the nature of the required cost controls. If reducing capital expenditures is a priority, hosting ConfigMgr in Microsoft Azure Infrastructure as a Service (IaaS) might be ideal.

Alternatively, the business might consider a special capital expenditure but want to avoid fixed or recurring operational expenditures. Perhaps your networks are charged based on bandwidth consumption (4G/3G) or are subject to bandwidth caps (satellite links). Understanding the business impact of exceeding those limits, including the cost of additional bandwidth and possible lost business due to an outage, is another important cost control requirement.

These constraints might also manifest as a requirement to enable line managers to manage software asset management expenditures; having ConfigMgr feed an asset management system could help meet that requirement.

- Compliance and Regulatory Issues: Most businesses are subject to some compliance or regulatory requirements, which might impact the ConfigMgr solution. Payment Card Industry (PCI) compliance comes into play in retail companies, banks, and other service organizations processing payment cards. PCI requires that you address security vulnerabilities either through vendor-supplied (for example, operating system) updates or compensating controls. Government and public-sector organizations may have regulatory requirements, such as those placed on local governments to access state/provincial or federal systems. These could include validation of security configuration and updates.

- Consumerization of IT: Over time, business users and the public have become increasingly tech savvy, and new employees' expectations of IT service are drastically different from what they once were. Depending on the industry of the business or parts of the business, there could be expectations of a highly user-centric service where end users "pull" services when required or convenient rather than having those services pushed on them. The business may need to have a consumer model of IT services to ensure that it can hire the best and brightest. If the competition provides a better IT service that enables an employee to be more productive, the employee may go there instead.

Discovering IT Requirements

After you capture business requirements, focus on IT requirements. These should include technical requirements such as delivering 98% patch compliance. They also should encompass service delivery or IT/information systems (IS) business requirements, such as minimizing the number of help desk tickets raised during deployment of a key line of business (LOB) application. IT requirements should be distinct from business requirements, although there may be overlap or a certain business requirement could lead to an IT requirement. As an example, PCI compliance as a business key requirement to continue allowing customers to buy your products with a credit card may lead to IT security requiring a 99.9% patch compliance level on servers. These could also be in direct conflict with one another—patching all systems in 48 hours while the business demands that no more than 5% of staff be unable to work at any one time due to IT processes.

Conflicting or complementary requirements are not issues; they are points of discussion in a design workshop. These key discussions should occur up front and should be ratified to ensure that the project continues smoothly through to delivery.

The following are some suggested areas to explore for requirements:

- IT Security: This may include health attestation prior to resource access (conditional access), update compliance requirements, antimalware, or security configuration management.

- Service Availability: This may include the availability of the solution itself as well as the need for the solution to not affect availability of other solutions (for example, do not take more than 50% of a cluster offline for patching).

- Cloud Consumption/Adoption: This is an IT requirement, but it is an important one. Cloud adoption tends to manifest itself in various ways. It includes consumption of services using an in-house private cloud or via private cloud service providers, as well as consumption of public cloud IaaS models such as Amazon Web Services (AWS) and Microsoft Azure (particularly with ConfigMgr Current Branch's support for Microsoft Azure). This is typically reflected in a requirement around hosting ConfigMgr infrastructure in a cloud-based (private or public) fabric.

- Desktop OS Supportability: A ConfigMgr deployment is often tied to an upgrade to the standard operating system (OS) offering. Previously this might have been moving from Windows XP supported by ConfigMgr 2007 to Windows 7 supported by ConfigMgr 2012. This also holds true with Windows 10, as Windows 10 Semi-Annual Channel after release 1511 requires use of ConfigMgr Current Branch. There is a saying that "if you want to go fast with Windows, you need to go fast with ConfigMgr." The platform being managed cannot outpace the product that manages it.

Assessing Your Environment

The last set of requirements are implicit environmental ones. These are the realities of delivering the solution, and they include the following:

- Organizational Structure: Is your IT organization centrally managed or delivered via a service provider model that requires centralized administrative oversight? Alternatively, is your business split into separate companies, each with its own discrete IT organization that requires complete autonomy?

- IT Service Delivery Processes: Does your IT organization have existing configuration management processes in place? Does that include a configuration management database (CMDB)? Are there change and release management processes in place? What information must be provided to those processes and associated systems?

- Service Level Agreements (SLAs): What SLAs have been agreed upon and must continue to be delivered? Is there scope to change them if needed? Are there underpinning agreements with associated teams, vendors, suppliers, or service providers?

- Dependent IT Teams: What teams will you depend on to deliver the solution and ultimately the service itself once operational? What teams are dependent on your solution and service?

- Datacenters and Server Infrastructure: Where are your datacenters or computing centers? Are there different classes of datacenters? What defines these classes? Are there known limitations within the datacenters (for example, limited network speed), or is the storage area network (SAN) running out of disk enclosure space?

- Virtualization and Cloud Computing: Is this used? If so, how is storage subsystem performance ensured for virtual machines (VMs)? Are there any limits on VM size in terms of memory or CPU?

- Operating Systems and Device Types: What supported operating systems are in use? What types of Windows devices are used, and what is the ratio of desktops, laptops, and tablets? Are mobile devices being managed? What platforms are used? What are your device usage scenarios? This might include kiosk usage, shared devices with one device for multiple users, personal devices, shift workers, and embedded systems.

- Network Topology: Does your network operate a hub-and-spoke model for wide area network (WAN) connectivity? Are there regional network hubs? Are there common contention points, meaning areas of the WAN where multiple point-to-point connections converge onto a single link that is a lower speed than the sum of the individual point-to-point links? How is Internet access provided to mobile devices? Are there Internet-connected offices? Chapter 5, "Network Design," discusses additional network planning considerations for ConfigMgr.

- Active Directory Configuration: What does your Active Directory Domain Services (ADDS) look like? Do you have a single forest with multiple domains? A single domain? Multiple forests? Are cross-forest trusts in place? Is Active Directory Certificate Services (ADCS) deployed to provide a public key infrastructure (PKI)? Is it an enterprise ADCS deployment?

- Enterprise Storage: Is there a centralized storage solution? What tiers of storage are provided in terms of capacity versus performance? What information does the storage team require to provision storage?

- Server Management and Monitoring: What backup solution is available for ConfigMgr? Does this include SQL Server backup capabilities? Is an enterprise monitoring solution available? Does that solution provide its own monitoring definition, or is an off-the-shelf definition available for ConfigMgr?

As part of the solution delivery, once the high- and low-level designs are in place, you may want to map the service in the context of the overall IT environment. Understand and diagram the dependent services, software, infrastructure, and teams that support them. At a basic level, when these underpinning components fail or are degraded, ConfigMgr as a service fails or is degraded. Also include the services and solutions that depend on ConfigMgr; that is, the services or solutions that will also fail if ConfigMgr fails or is degraded.

Defining these environmental requirements in advance ensures a smooth transition from project delivery into service delivery or production operations, with a clear set of roles and responsibilities for problem and incident management.

Envisioning the Solution and Scope of Delivery

The next element of delivery is packaging the requirements together in a vision and scope document. This document is a first attempt at an architecture and strategy and addresses the remainder of the design and planning phases. It should rationalize the requirements discussed in the design workshop and highlight those requiring additional discussion and investigation. For example, the document could establish key priorities and preferences for the solution, such as whether user-centric computing is a priority or whether minimal prompts and interaction are preferred.

Planning for Infrastructure Dependencies

This section looks at infrastructure dependencies for the ConfigMgr solution, specifically around Active Directory (AD). The section is an important early step in your architectural design, as you should understand what external dependencies exist prior to working on the solution. These constraints affect how you can meet the requirements. After establishing the dependencies, you can begin to look at how to architect the ConfigMgr solution, as discussed in the “Hierarchy Planning in ConfigMgr” section, later in this chapter. Chapter 5 provides information about network infrastructure dependencies.

ADDS Considerations

ADDS is required for ConfigMgr, which does not install unless the site server is a member of an AD domain. The following sections discuss other AD requirements.

Deciding Whether to Extend the AD Schema

For a new ConfigMgr deployment, you should decide whether to extend the AD schema. Chapter 3 discusses these schema changes. This chapter looks at reasons for making a substantive change to your AD forest(s). The decision workflow in Figure 4.2 summarizes the reasons to extend the AD schema.


FIGURE 4.2 AD schema decision workflow.

The schema extensions, which enable AD integration with ConfigMgr, allow clients to use a trusted source to look up information. When the clients are newly installed or there has been a significant servicing operation (usually recovery of a site), having this information can significantly ease deployment or recovery. The client may use AD in the following ways:

- Client Installation and Site Assignment: Clients can query ADDS to determine configurations specific to initial installation, such as log information, initial download cache size, and site assignment information.

NOTE: SITE ASSIGNMENT WITHOUT EXTENDING THE SCHEMA

If the schema is not extended, site assignment requires specifying a site code as a command-line parameter and supplying either a management point (MP) or a Domain Name System (DNS) domain name. Specifying the DNS domain name requires enabling publishing of site information to DNS in the site configuration.

You also cannot customize the command-line parameters for the Windows Server Update Services (WSUS)-based client installation method. This method is useful because it does not require pushing to clients; the client machines pull the ConfigMgr agent themselves.
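To illustrate the first option, a client installation that cannot rely on AD publishing might specify the site and MP directly on the ccmsetup command line. This is a sketch only; the site code PR1 and the server name are placeholders for your environment:

```
# Hypothetical example: assign the client to site PR1 and point it at a
# specific management point when site information is not published to AD.
ccmsetup.exe SMSSITECODE=PR1 SMSMP=mp01.contoso.com
```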

- Custom Port Configurations for Clients: Custom configurations can allow a client to obtain a port number from ADDS at installation time. If the port changes later, the client can find the new configuration in ADDS. Not publishing site information would require deploying a script to these devices to change their port configuration or reinstalling the client with a new port configuration.

- Client MP Key Exchange: This allows clients to obtain the site server's public key to confirm the signature on policies from the MP and occurs automatically during installation. However, when the site server's public key changes, such as when the site server is reinstalled, the client cannot verify the re-signed policies. This security feature prevents injection of policy from an untrusted source. Not publishing this information to ADDS means you must reinstall any clients installed before the key changed.

ConfigMgr site servers also use the AD schema extensions for content file-based replication key exchange, which allows site servers in a hierarchy to read public key information for the source site server that replicated content to it. If not published to ADDS, site public keys are manually exchanged using the preinst.exe (hierarchy maintenance) tool. This key is reset whenever a site is recovered (specifically, when the site server is reinstalled as part of a recovery operation), which means preinst.exe must be run to exchange the new keys in order for content to replicate.
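The manual key exchange might look like the following sketch. The switch names are those documented for the hierarchy maintenance tool, but exact file names and destination paths may vary by version, so verify against the product documentation:

```
# On the site server whose public key must be shared, export the key.
# /KEYFORPARENT writes a <sitecode>.CT4 file; /KEYFORCHILD writes <sitecode>.CT5.
preinst.exe /KEYFORPARENT

# Copy the resulting .CT4 file into the parent site server's
# <ConfigMgr install dir>\inboxes\hman.box folder, where hierarchy
# manager picks it up and registers the new public key.
```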

Multi-Forest and Workgroup Considerations

A ConfigMgr site can manage workgroup clients and clients in trusted and untrusted AD forests. By default, workgroup clients require manual approval in the console, as—unlike domain member computers—they do not have a computer account and cannot be authenticated by the MP. You can also configure clients to be approved without authentication, but this means potentially sensitive policies (such as task sequences with domain join credentials stored with reversible encryption) would be delivered to clients that cannot first be authenticated. In addition, if you plan to use packages and programs from a file share directly on a workgroup client, you must have one or more network access accounts (NAAs) configured. This is because the Local System security principal on a workgroup client, which is the context under which the ConfigMgr agent runs, would not have permissions to access Internet Information Services (IIS) or file shares on distribution points (DPs) that are domain member servers. The NAA allows this access.

For clients in other forests, client communications depend on whether site systems are installed in that forest or whether a cross-forest trust is in place. If neither is the case, the clients in the untrusted forest are effectively treated as workgroup clients, as the ConfigMgr site systems cannot authenticate the computer accounts. Authentication occurs using a specific subpath on the MP website, even though the rest of the MP website allows anonymous access. This is done to validate whether a client is from a trusted domain and thus known to AD. When a cross-forest trust is in place, clients' authentication to the site systems is routed through the trust to domain controllers (DCs) in the clients' forest.

If you cannot establish a trust, ConfigMgr supports placing site systems in a remote untrusted forest. This is possible for most site systems and all client-facing site systems.

Clients prefer talking to MPs and DPs in their own forest. You can install remote site systems in an untrusted forest to enable clients to authenticate against site systems in their own forest. The site server uses a site system connection account to connect to the remote site system in the untrusted forest; this domain account is in the untrusted forest. If the remote site system requires database access (for example, MP, Software Catalog, or Preboot eXecution Environment [PXE]-enabled DP), the remote site system must be provided with a site database access account, which would be created in the forest of the site server. This configuration allows the remote site system to authenticate to SQL Server and read information from the site database.

NOTE: UNTRUSTED FORESTS AND CONFIGMGR

AD forests are the security boundary for AD, completely isolating one forest from another. In and of itself, a trust does not breach that security boundary, as enterprise administrators in one forest cannot affect a trusted forest, and users from the trusted forest must still be granted permissions to a resource in order to access it.

Deploying a single ConfigMgr hierarchy may implicitly bridge a security boundary, depending on where the ConfigMgr agent is deployed in the untrusted forest. When the ConfigMgr agent is installed on DCs or most clients in a domain, it provides ConfigMgr with administrative control over systems where the agent is installed (using Local System privileges). This control implies a level of trust of the ConfigMgr administrators and domain admins of the AD domain to which the ConfigMgr site servers belong. If you are not able to create a cross-forest trust in your environment, consider the security implications of a single hierarchy between those two forests and determine whether creating the trust would be less of a security concern.

In addition to remote site systems, ConfigMgr can publish information to trusted and untrusted AD forests. Add the forest to your hierarchy from Administration -> Hierarchy Configuration, right-click Active Directory Forests, and choose Add Forest. Figure 4.3 shows the Add Forest dialog, which allows you to configure both forest discovery and publishing. For trusted forests, you can use the site server’s computer account to write to the trusted forest, assuming that the appropriate permissions are granted to the System Management container. You can also specify an account in the forest being configured for publishing or discovery. By configuring publishing to remote forests, you allow clients in those forests (both trusted and untrusted) to discover hierarchy resources.
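The same configuration can be scripted with the ConfigMgr PowerShell cmdlets. The following is a sketch under stated assumptions: the forest name and PR1 site code are placeholders, and cmdlet parameters may vary by ConfigMgr version, so confirm against the cmdlet reference:

```
# Load the ConfigMgr module from a console installation and switch to the
# site drive (PR1 is a placeholder site code).
Import-Module "$($ENV:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location PR1:

# Add a forest for discovery; publishing can then be enabled in the
# forest's properties, as in the Add Forest dialog.
New-CMActiveDirectoryForest -ForestFqdn "corp.fabrikam.com" -EnableDiscovery $true
```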

When deploying to a user collection or using user device affinity, ensure that AD User Discovery is configured. AD User Discovery is required for these two features to work, as it allows ConfigMgr to match up client- and server-side user information. This means you cannot use these features for users with devices in workgroups. You must use an LDAP query to discover users with devices in untrusted forests; the same applies to computers when using AD System Discovery. If you plan to use ConfigMgr's on-premise mobile device enrollment capabilities and have users in untrusted forests, configure an enrollment point in the user's forest to support this feature.


FIGURE 4.3 The Add Forest dialog.

TIP: AD DISCOVERY METHODS AND NAME RESOLUTION

AD User Discovery and AD System Discovery have several name resolution requirements when dealing with remote untrusted forests. One is the ability to resolve the DCs in that forest; in addition, for AD System Discovery, the site server must be able to resolve the IP address of the computer being discovered (that is, the client PC).

For each AD location, you can specify an account to perform the AD queries. For untrusted forests, this can be an account in that forest. However, if name resolution does not work, discovery fails even with a valid account, as the site server cannot find a DC to run the query against. DNS name resolution may not have been configured for the untrusted forest. In that case, configure DNS in the site server's domain to forward queries for the untrusted forest's domain to that forest's DNS servers, or configure the hosts file on the site server to resolve one of the DCs in the remote forest. The authors do not recommend using a hosts file, as it is a static mapping, must be manually maintained, and breaks if the IP address of the chosen DC changes.
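One way to provide that name resolution is a conditional forwarder on the DNS servers in the site server's domain. The following sketch uses the Windows Server DnsServer PowerShell module; the forest name and IP addresses are placeholders:

```
# Forward queries for the untrusted forest's namespace to that forest's
# DNS servers, so the site server can locate DCs and discovered computers.
Add-DnsServerConditionalForwarderZone -Name "untrusted.fabrikam.com" `
    -MasterServers 10.20.0.10, 10.20.0.11 -ReplicationScope "Forest"
```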

Installing the ConfigMgr Agent on Workgroup Clients

Workgroup clients cannot utilize the AD schema extensions or use group policy. However, these clients must be able to find and trust an MP. You can use the command line to configure workgroup clients to trust a specific MP and site; or you can publish a site’s MPs to DNS, enabling them to be located by clients. MP lookup occurs automatically when the Publish Selected Intranet Management Points in DNS check box is checked in the Management Point Component properties, which causes records to be published to the MP’s DNS server if it supports dynamic registration of DNS SRV records. An SRV record can be manually registered when dynamic publishing is not possible or if the workgroup client uses a different DNS infrastructure. For information on configuring these DNS records, see https://docs.microsoft.com/sccm/core/plan-design/hierarchy/understand-how-clients-find-site-resources-and-services.
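When dynamic registration is not possible, the SRV record can be created manually. The following dnscmd sketch assumes site code PR1, zone contoso.com, a DNS server named dns01, and an MP named mp01; verify the exact record format against the Microsoft documentation referenced above:

```
# Manually register the MP lookup SRV record (_mssms_mp_<sitecode>) in DNS.
# Priority 0, weight 0, port 80 (use 443 for HTTPS-only MPs).
dnscmd dns01.contoso.com /RecordAdd contoso.com _mssms_mp_PR1._tcp SRV 0 0 80 mp01.contoso.com
```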

To use DNS-based MP lookup, you must specify the DNSSUFFIX command-line property during client installation, which limits the client installation methods available in your design. If DNS cannot be configured (for example, if you have a client in the demilitarized zone [DMZ] of a network using an ISP's DNS servers), use the SMSMP command-line property during client installation to tell the client which MP to use for its initial connection.

Establishing client trust of the MP and site to which it is assigned occurs automatically when the client communicates to the MP during installation, as the MP returns a trusted root key for the hierarchy. The client can then verify the signatures on all policies subsequently sent to it from this MP or other MPs. The client implicitly trusts the first MP it communicates with when the AD schema is not extended or AD is not accessible (for example, workgroup clients). This may be undesirable in secure or high-risk environments; in this case, use the SMSROOTKEYPATH property to hard-code the key as part of the installation command-line.
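Putting these properties together, a workgroup client installation might look like the following sketch; the server name, site code, and file path are placeholders:

```
# Install the agent on a workgroup client: name the initial MP explicitly,
# assign the site, and hard-code the trusted root key rather than trusting
# the first MP contacted.
ccmsetup.exe /mp:mp01.contoso.com SMSSITECODE=PR1 SMSMP=mp01.contoso.com SMSROOTKEYPATH=C:\Staging\trustedrootkey.txt
```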

For more information on command-line options, see https://docs.microsoft.com/sccm/core/clients/deploy/about-client-installation-properties. For information on client installation and management, see Chapter 9, “Client Management.”

Installing the ConfigMgr Agent on Azure AD Join Systems

Windows 10 introduces a new computer membership option outside of workgroup or AD domain join, known as Azure AD Join (AADJ). Using AADJ is an alternative to joining a computer to on-premise AD. It is similar to Workplace Join in Windows 8.1 (called the Add a Work or School Account feature in Windows 10), except it replaces a workgroup or domain join and provides a method for the end user to directly log in to Azure AD rather than to a local computer user account or an on-premise domain user account. There are also differences in terms of single sign-on to Azure AD protected Software as a Service (SaaS) applications. AADJ is primarily designed for use with the built-in mobile device management (MDM) components of Windows 10 and, thus, management through Microsoft Intune.

You can install the ConfigMgr agent on an AADJ device. In this scenario and based on the current release of ConfigMgr when this book was published, the client should be treated as a workgroup client for the purposes of AD discovery methods, deployments, and the other capabilities listed in the “Installing the ConfigMgr Agent on Workgroup Clients” section, earlier in this chapter.

Active Directory Certificate Services Considerations

Certain ConfigMgr Current Branch features and capabilities require PKI-issued certificates. You may use any x.509 PKI implementation that supports version 3 certificates; however, Internet-based client management (IBCM) requires issuing certificates to all your client computers. For this reason, using a Microsoft enterprise PKI based on Windows Server ADCS is the simplest approach, as autoenrollment can be configured using AD group policy. Third-party certificate services and managed PKI offerings also support client computer-initiated certificates, often by mimicking ADCS certificate authorities.

The following ConfigMgr features depend on PKI-issued certificates:

- HTTPS Encryption and Authentication of Client Communication: See Chapter 9 for more information.

- Management of Client Devices on the Internet without Using a VPN: Client certificates are used to authenticate Internet-based clients when connecting to ConfigMgr without the use of a virtual private network (VPN). This applies to both IBCM and the cloud management gateway (CMG). As an alternative, if you have deployed Azure AD, clients can authenticate to CMG with their Azure identity as of ConfigMgr Current Branch 1706.

- On-Premise MDM: MDM supports Windows 10 and legacy embedded systems. It requires an enrollment certificate on each device for mutual authentication and SSL communication with the site systems.

- Certificate Deployment Profiles: These profiles require a PKI to issue certificates either via Simple Certificate Enrollment Protocol or distribution of a Public-Key Cryptography Standards #12 (PKCS #12) file. On Windows, a PKCS #12 file is also known as a Personal Information Exchange (PFX) file.

ConfigMgr also leverages various cryptographic functions to support various internal processes, including the following:

- Client Policy Signing: All client policies are signed by the site server. The self-generated key is created at site installation and re-created during a site recovery. The key is called the trusted root key; the public portion of the key is published to AD. See Chapter 3 for more details.

- Custom Update Signing: Custom software updates must be signed by a publisher. The certificate must be trusted for installation of updates by the Windows Update Agent (implemented using the Trusted Publisher certificate store); ConfigMgr clients require a valid digital certificate for installation.

- Inventory Signing: Clients by default sign their inventory and state messages with a self-signed certificate unless enabled for HTTPS, when they utilize the PKI-issued certificate used to communicate with the MP. Clients can be configured to encrypt their inventory and state messages; this is independent of encryption at the transport layer using HTTPS.

- Site-to-site Communication: Site servers use keys to ensure the integrity of intersite file replication. DRS uses certificates stored in SQL Server to authenticate SQL Server Service Broker (SSB) endpoints.

For a comprehensive list of cryptographic controls maintained by Microsoft, see https://docs.microsoft.com/sccm/protect/deploy-use/cryptographic-controls-technical-reference.

If you are planning to leverage HTTPS for client communication or Internet-based clients using IBCM, review Microsoft’s published list of required certificate properties at https://docs.microsoft.com/sccm/core/plan-design/security/plan-for-security#BKMK_PlanningForCertificates.
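When validating candidate certificates against that list, you can inspect an exported certificate locally. The following sketch uses certutil; the file path is a placeholder:

```
# Dump the certificate and confirm it carries the Client Authentication
# enhanced key usage (OID 1.3.6.1.5.5.7.3.2) required for client certificates.
certutil -dump C:\Staging\clientcert.cer | findstr /i "Client Authentication"
```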

It may be that your organization’s PKI was not designed to accommodate deploying client authentication certificates to a large number of client machines. Reasons may include the following:

images High-Assurance PKI: The PKI may be designed for high assurance. High-assurance PKI is designed to provide parties accepting certificates with a high level of confidence over identity validation that occurs as part of their issuance. These types of PKI may have a manual certificate approval process that requires a human approver to approve each certificate. This prevents the bulk issuance of certs to computer systems, as the same level of confidence and assurance cannot be provided, and manual overhead would be prohibitive.

images Manual Review of Certificate Requests: There may be operational processes that require manual review of all certificate requests. This would significantly hamper issuing one certificate per PC.

images Scalability of the PKI: The PKI may have been architected to support hundreds rather than thousands of requests. Windows systems running the ConfigMgr client not only need an initial certificate but must renew their certificates regularly.

images Costs of Certificate Issuance: The PKI may be provided as a managed service or outsourced, meaning there may be fixed contractual costs associated with certificate issuance and management. This would make use of the PKI very expensive and undermine any value from ConfigMgr features that rely on those certificates (especially ConfigMgr client certificates).

Consider deploying a low-assurance PKI designed specifically to issue bulk certificates for computer systems and integrated with AD. This PKI could be limited to only issuing client authentication certificates and could be automatically trusted only by internal AD domain-joined systems. This approach allows you to leverage the features of ConfigMgr that require PKI certificates (such as IBCM and HTTPS client communication) without impacting the existing PKI, reducing that PKI’s assurance level, and incurring per-certificate costs.

Hierarchy Planning in ConfigMgr

After establishing your objectives, constraints, and infrastructure prerequisites, you can start working on the ConfigMgr design tasks. The first design consideration should be determining how to structure your hierarchy. A hierarchy may be as simple as a single site with associated site systems or as complex as a central administration site (CAS) with over a dozen primary sites. ConfigMgr does not allow logically removing or moving primary sites within a hierarchy, so you should spend some time up front determining what structure meets your organization’s requirements. Do not configure a CAS simply because you might need it in the future; ConfigMgr allows you to add a CAS to a standalone primary site and add new primary sites.

Chapter 2 introduced the concept of ConfigMgr hierarchies. Sites in a hierarchy share replicated data, security policy, and administrator-created objects (software updates, boundaries, and so on). A single primary site can be a hierarchy, as it may have secondary sites underneath it. Within a hierarchy are certain site system roles that are hierarchywide and support hierarchywide functions, such as the service connection point (SCP).

ConfigMgr Current Branch supports migration from ConfigMgr 2007. Like ConfigMgr 2012, it cannot support 2007 sites in its hierarchy. ConfigMgr Current Branch also supports migration from ConfigMgr 2012 SP 2 or 2012 R2 SP 1. For these versions, it also supports in-place upgrading to Current Branch. It is more common to upgrade from these versions than to migrate to Current Branch. For more information on migration, see Chapter 7, “Upgrading and Migrating to ConfigMgr Current Branch.”

Microsoft regularly updates ConfigMgr Current Branch and currently plans to release updates three times a year. Each update is supported for 12 months, and the latest critical (but non-security) fixes are delivered only in the most recent update. This means you should take care in your design to keep your ConfigMgr hierarchies and site infrastructure as simple as possible, as the team operating ConfigMgr in your organization must update them regularly. This is similar to the approximately quarterly release cycle of cumulative updates in ConfigMgr 2012; while those cumulative updates did not determine supportability, Microsoft Support would likely request, as part of a support case, that you upgrade to the latest cumulative update.

About Configuration Manager Sites

Every site system and client is part of a site. Each site has a site server, a site database, an SMS provider, and an alphanumeric three-character site code. The site code must be unique throughout the hierarchy and across hierarchies where multiple hierarchies share the same AD forest. There are three different types of sites in ConfigMgr: the CAS, primary sites, and secondary sites. The next sections describe these different types of sites.

CAUTION: RESTRICTIONS AND REUSE OF SITE CODES

You should avoid certain names when selecting site codes. These include reserved names such as AUX, CON, NUL, PRN, and SMS. For more information and a list of reserved names, see https://msdn.microsoft.com/library/aa365247.aspx. You can also use WinObj from Windows Sysinternals (http://technet.microsoft.com/sysinternals) to view reserved names (listed under GLOBAL in WinObj).

In addition, you should avoid reusing site codes from decommissioned hierarchies, as doing so can lead to issues if references to the old site codes were not removed from AD, DNS, or WINS. Client-side troubleshooting can be complicated if a client tries to talk to a now-decommissioned site and the site code is reused.
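As a quick illustration, the site code rules above can be checked programmatically. The following Python sketch is hypothetical (it is not part of ConfigMgr) and uses an abbreviated reserved-name list; consult the MSDN link above for the complete set of reserved names:

```python
import re

# Reserved Windows device names that must not be used as site codes,
# plus SMS, as noted in the caution above (abbreviated list; see the
# MSDN link for the full set of reserved names).
RESERVED = {"AUX", "CON", "NUL", "PRN", "SMS", "COM1", "LPT1"}

def is_valid_site_code(code: str) -> bool:
    """Return True if the code is exactly three alphanumeric
    characters and is not a reserved name."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{3}", code)) and code.upper() not in RESERVED

print(is_valid_site_code("PR1"))   # True
print(is_valid_site_code("SMS"))   # False: reserved name
print(is_valid_site_code("LAB1"))  # False: four characters
```

A check like this is also a convenient place to enforce organizational rules, such as rejecting site codes previously used by decommissioned hierarchies.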

The Central Administration Site

The CAS acts as the replication hub for primary sites in a hierarchy and is required only when you have multiple primary sites. You can connect up to 25 primary sites to a single CAS, though in practice this number is much smaller in all but the largest ConfigMgr environments that have hundreds of thousands of clients. The CAS does not manage clients directly, clients cannot be assigned to it, and it cannot have secondary sites directly beneath it.

A CAS should not be used as a “future proofing” mechanism in a design unless there is clear data supporting a future increase in client counts, such as an impending merger or acquisition. You can always add a CAS to a single primary site later.

Primary Sites

Every ConfigMgr client is assigned to a primary site and receives policy from its assigned site. Primary sites are used to scale out client management: each child primary site can support up to 150,000 clients, and a standalone primary site can support up to 175,000. Microsoft regularly tests Configuration Manager and may revise these figures; for the latest supported client numbers, see https://docs.microsoft.com/sccm/core/plan-design/configs/supported-operating-systems-for-site-system-servers.

Secondary Sites

A secondary site is a form of proxy site for a primary site. Secondary sites are installed directly from the ConfigMgr console using the site server’s permissions and are also upgraded from the console. These sites have a database like the CAS and primary sites, but it is much smaller, given the relatively limited functionality this site type provides. It does mean, however, that data replication between the site databases must be factored into the decision about whether to deploy a secondary site. The secondary site database can be hosted on SQL Server or, more commonly, SQL Server Express, which the primary site server installs automatically during secondary site installation.

A secondary site installs an MP (termed a proxy MP) site system role and commonly also has DP and software update point (SUP) roles. These roles are installed to proxy client requests locally. The secondary site’s proxy MP uses the secondary site’s database and the Linked Server feature of SQL Server’s database engine to query the primary site’s database. The proxy MP can then cache client policy requests. The primary design capability a secondary site provides is the ability to host the SUP and allow software update metadata queries to occur locally, which is significant as the results from software update metadata queries may be up to tens of megabytes per client, depending on the OS and software installed.

Chapter 5 provides more information on determining when to opt for a secondary site over a DP.

Hierarchywide Site System Roles

Certain site systems provide services to the entire hierarchy. The following site system roles synchronize with Microsoft services on the Internet, and you can configure them at the top-level site in your hierarchy, either at the CAS or a standalone primary site:

images Asset Intelligence Synchronization Point: This site system role allows you to request software asset classification data that helps improve reporting on software assets in your environment.

images Endpoint Protection Point: This role uses the System Center Endpoint Protection (SCEP) installation to pull metadata that helps populate SCEP reports. It also defines the default Microsoft Active Protection Service (MAPS) participation level. MAPS allows the Endpoint Protection agent to send telemetry data on suspicious behavior to Microsoft and receive dynamic micro-definitions in response. For more information on SCEP, see Chapter 19, “Endpoint Protection.”

images Service Connection Point: The SCP role provides two key functions:

images For customers using Microsoft Intune integrated with ConfigMgr, the SCP provides the data channel to send and receive information from Intune. For more information on Microsoft Intune, see Chapter 16, “Integrating Intune Hybrid into Your Configuration Manager Environment,” and Chapter 17, “Managing Mobile Devices.”

images The SCP is used as the channel to obtain information from Microsoft regarding new releases of ConfigMgr, individual hotfixes/updates, and new features. It is also used to send telemetry information to Microsoft.

images Software Update Point (top-level): This SUP role pulls metadata from Microsoft Update, and all other SUPs in the hierarchy connect to this SUP to pull metadata. Chapter 15, “Managing Software Updates,” discusses the operation of SUPs in more detail.

You should assign these four site system roles to a server with good Internet connectivity. All four roles support communication via web proxy (including authentication) and do not require direct access to the Internet.

You may install other roles at multiple locations throughout the hierarchy. These hierarchywide roles do not have to be deployed at each primary site and can be installed centrally:

images Data Warehouse Service Point: This role underpins the data warehouse capabilities released in ConfigMgr Current Branch version 1706. The role installs and configures the data warehouse database. It also adds reports that surface the data stored in the data warehouse. The data warehouse stores up to three years’ worth of data, up to 2TB.

images Application Catalog Website Point: This role provides users with access to the software in the application catalog. This is used as a backup location in Current Branch, with the new Software Center (part of the ConfigMgr client) now the primary location for user self-service.

images Application Catalog Web Service Point: This role provides the middle-tier web services between the application catalog website point and the site database.

images Fallback Status Point: The Fallback Status Point (FSP) role provides a way for ConfigMgr clients to report communication failures with their assigned MP(s).

The last two roles are hierarchywide and may be deployed to the CAS as well as primary sites:

images System Health Validator Point: This role helps support Network Access Protection (NAP). NAP is still supported in Configuration Manager Current Branch, although it is deprecated in Windows Server.

images Reporting Services Point: The Reporting Service Point (RSP) role uses SQL Server Reporting Services (SSRS) to provide reports using data from the ConfigMgr site database. It is commonly installed at the top-level site (that is, the CAS or standalone primary site). You can deploy multiple RSPs within a single site to facilitate administrator access to reporting, allowing certain administrators to be granted additional control over SSRS custom reports. You can also deploy RSPs to lower-level primary sites, restricting access to client-generated data only available at that primary site’s site database (in which case all administrator-created objects are accessible). For more information on reporting, see Chapter 21, “Configuration Manager Reporting.”

Planning Your Hierarchy Structure

Similar to ConfigMgr 2012, Configuration Manager Current Branch allows a single site to span multiple geographic locations separated by WANs more efficiently than earlier versions of Configuration Manager. The “Planning for Content Management” section, later in this chapter, discusses content distribution, and Chapter 5 explains how to design for various network architectures. ConfigMgr sites no longer serve as boundaries for security, client settings, or network locations.

A well-designed ConfigMgr Current Branch hierarchy is likely to contain far fewer sites than ConfigMgr 2007 or even ConfigMgr 2012. Aim for a design that leverages sites to scale up management until client support numbers force adding more sites. The smaller and flatter a hierarchy, the less complex and easier it is to manage.

A CAS introduces inherent complexities into a design. These include lag in both the downward propagation of object creation and modification and the upward propagation of client information. There is up to a 5-minute lag on the creation or modification of an object (such as deploying an application or update) due to replication. It also means that changing a dynamic or static rule on a collection incurs a delay of up to 10 minutes: 5 minutes to replicate the new rule down and 5 minutes to replicate the resulting collection membership changes back up. Collection evaluation time on the primary site must also be factored into the end-to-end timing.
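The end-to-end timing for a collection rule change in a CAS hierarchy can be sketched as simple arithmetic. The 5-minute replication figures come from the text; the collection evaluation time is an illustrative assumption and varies per site:

```python
# Worst-case delay for a collection rule change in a CAS hierarchy:
# the rule replicates down to the primary site, the collection is
# evaluated there, and the membership change replicates back up.
DOWN_REPLICATION_MIN = 5   # CAS -> primary site (from the text)
UP_REPLICATION_MIN = 5     # primary site -> CAS (from the text)

def worst_case_delay(collection_eval_min: float) -> float:
    """Total minutes before the CAS sees the updated membership."""
    return DOWN_REPLICATION_MIN + collection_eval_min + UP_REPLICATION_MIN

print(worst_case_delay(2))  # 12 (assumes a 2-minute collection evaluation)
```

Even this simplified model makes clear why a standalone primary site, which has no CAS replication legs, responds faster to administrative changes.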

In addition to the delays introduced by replication, replication between the CAS and all primary sites must remain active and healthy, which introduces additional operational overhead. This differs from a hierarchy composed of a single standalone primary site with secondary sites, which, while still replicating to support content distribution and site status reporting, does not require the same level of replication. By default, an outage of more than five days means that replication must be reinitialized and all data resynchronized between the two sites experiencing the outage. See Chapter 3 for more details on replication.

Once you determine whether your hierarchy will consist of a CAS and child primary sites or a standalone primary site, you need to determine what the underlying site structure, if any, will look like. Many organizations choose to use a single primary site with remote DPs. However, there are reasons you might want to have multiple primary sites in your hierarchy, such as the following:

images A standalone primary site supports up to 175,000 clients, while a child primary site in a hierarchy supports only 150,000. If you anticipate having more than 175,000 clients, plan for multiple primary sites. Check the latest client support numbers at https://docs.microsoft.com/sccm/core/plan-design/configs/supported-operating-systems-for-site-system-servers. These numbers assume client defaults for policy polling, software update evaluation, and inventory reporting.

images Additional primary sites distribute client assignment across those sites, reducing the risks associated with the failure of a single primary site. However, these sites should not be used for disaster recovery (DR) or backup and recovery processes. For information on backup and recovery, see Chapter 24, “Backup, Recovery, and Maintenance.”

images You may choose to install an additional site to support Internet-based clients if there is a discrete group of Internet-only clients. The “Planning for Internet-Based Clients” section, later in this chapter, discusses single-site and multiple-site options to support Internet-based clients.

images You may choose to install an additional site to support Intune-managed MDM devices. Generally, this is required only when the total of Intune MDM and ConfigMgr agent-managed devices exceeds 175,000. This additional site has a very small site system footprint, with no need for MP, DP, SUP, or other roles that support the ConfigMgr agent. The “Planning for Mobile Device Management” section, later in this chapter, discusses this further from a design point of view. See Chapters 16 and 17 for additional details.

Planning Boundaries and Boundary Groups

ConfigMgr boundaries define network locations in which clients may reside. As discussed in Chapter 2, boundaries are defined at the hierarchy level and are globally replicated by the CAS to all primary sites in the hierarchy. Boundary groups aggregate boundaries for efficient management. A boundary may define a single network subnet; a boundary group may then represent a branch office. A boundary group could also represent a metropolitan area network or a region rather than a single building. Boundaries and boundary groups serve two key functions:

images Selection of Protected Site Systems: Site systems are associated with boundary groups; design your boundary groups to support this mapping function. These site systems are the MP, DP, and state migration point (SMP).

images Automatic Site Assignment: If using automatic site assignment, you must configure one or more boundary group(s) for automatic site assignment. During automatic site assignment, the client determines whether its current network location corresponds to a boundary configured for site assignment. If the client is within such a boundary, it assigns itself to the appropriate site; otherwise, automatic assignment fails. Automatic site assignment is no longer the default in Configuration Manager. Depending on the client installation method used and the number of primary sites, automatic site assignment may not be required. See Chapter 9 for more details on client installation.

Boundaries must be added to a boundary group before they can be used. Site assignment is configured on boundary groups rather than individual boundaries. Similarly, protected site systems are associated with boundary groups. The following boundary types are defined:

images Active Directory site

images Internet Protocol (IP) subnet

images IP range

images Internet Protocol version 6 (IPv6) prefix

These individual boundary types can be combined in a boundary group. All boundary types are determined by the client and then sent to the MP to locate content or site systems. The only exception is the IP range boundary type, where the client sends up its IP address, and the MP determines the IP range to which the client belongs.
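The matching logic described above can be illustrated with a simplified sketch. The boundary definitions, group names, and addresses below are hypothetical, and the real MP evaluation is considerably more involved; this only shows how the different boundary types are compared against what the client reports:

```python
import ipaddress

# Illustrative boundary definitions (hypothetical values). The client
# reports its AD site name and subnet ID directly; only for IP range
# boundaries does the server match against the client's IP address.
boundaries = [
    ("AD site", "HQ-Site", "HQ boundary group"),
    ("IP subnet", "10.10.10.0", "Branch boundary group"),
    ("IP range", ("192.168.50.1", "192.168.50.254"), "VPN boundary group"),
]

def matching_groups(ad_site: str, subnet_id: str, ip: str) -> list[str]:
    """Return the boundary groups whose boundaries match the client."""
    addr = ipaddress.ip_address(ip)
    groups = []
    for kind, value, group in boundaries:
        if kind == "AD site" and value == ad_site:
            groups.append(group)
        elif kind == "IP subnet" and value == subnet_id:
            groups.append(group)
        elif kind == "IP range":
            lo, hi = (ipaddress.ip_address(v) for v in value)
            if lo <= addr <= hi:
                groups.append(group)
    return groups

print(matching_groups("HQ-Site", "10.10.10.0", "10.10.10.100"))
# ['HQ boundary group', 'Branch boundary group']
```

Note that a client can match multiple boundary groups at once, which is why overlapping boundaries matter for content location, as discussed later in this section.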

NOTE: SUPERNETS AND CIDR

Supernets, subnets, and classless interdomain routing (CIDR) are subtly different. This is important to understand, as it causes AD site and IP subnet boundaries to not work correctly with the CIDR method commonly used in network administration today. CIDR uses variable-length subnet masking (VLSM) to provide more flexible addressing and simpler administration than the older Class A, B, and C IP subnets. However, network hosts (desktops or laptops running the ConfigMgr client, in the case of ConfigMgr) are unaware of CIDR, as only network routers or layer 3 switches understand it for the purpose of routing packets. A network host only needs to know whether to send a packet directly to the destination or to the network gateway, the first router on the route to the destination host.

For this reason, in combination with the fact that the ConfigMgr agent determines its subnet based on host OS information, IP subnets must be defined per client subnet and not based on the supernets used by network administrators to define routing within the network. Following is an example of how this works:

images A network team defines four buildings in the same region, using the supernet 10.10.8.0/22, and each building with a subnet from this list: 10.10.8.0/24, 10.10.9.0/24, 10.10.10.0/24, and 10.10.11.0/24.

images A client located in subnet 10.10.10.0/24 with IP address 10.10.10.100 sends up these two pieces of information to the MP.

images The subnet matched by the MP in the site database is 10.10.10.0/24 and not 10.10.8.0/22.

If subnet ID 10.10.8.0 is configured, this client does not fall into that subnet. This is because the client thinks it is in subnet ID 10.10.10.0 and knows nothing about supernet 10.10.8.0/22 or subnet 10.10.8.0 in this case. To support supernet 10.10.8.0/22 using IP subnets, you must define the following subnet IDs: 10.10.8.0, 10.10.9.0, 10.10.10.0, and 10.10.11.0. This configuration supports the individual clients in each subnet and the configuration of each underlying subnet by the network team.
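The subnet arithmetic in this example can be verified with Python’s standard `ipaddress` module. This sketch is only an illustration of the addressing logic, not of anything ConfigMgr itself does:

```python
import ipaddress

# The supernet used by the network team in the example above.
supernet = ipaddress.ip_network("10.10.8.0/22")

# Enumerate the per-building /24 subnets that must each be defined
# as an IP subnet boundary; clients only know their own /24.
client_subnets = list(supernet.subnets(new_prefix=24))
print([str(s) for s in client_subnets])
# ['10.10.8.0/24', '10.10.9.0/24', '10.10.10.0/24', '10.10.11.0/24']

# A client at 10.10.10.100/24 reports subnet ID 10.10.10.0, which
# does not equal the supernet's subnet ID of 10.10.8.0.
client = ipaddress.ip_interface("10.10.10.100/24")
print(str(client.network.network_address))  # 10.10.10.0

# Alternatively, a single IP range boundary spanning the whole
# supernet matches this client regardless of its subnet mask.
print(ipaddress.ip_address("10.10.10.100") in supernet)  # True
```

The last line hints at why a single IP range boundary can replace the four separate subnet boundaries, in line with the boundary minimization guidelines that follow.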

An additional constraint is that over the past decade, AD sites have converged into larger AD sites, covering more network locations. This has occurred as network links have improved while AD client-to-DC traffic has largely remained constant. ConfigMgr content delivery consumes more bandwidth than AD policy and authentication traffic. It is important to understand how your AD teams use AD sites and the AD site topology’s correlation to actual network links. You will want to ensure your AD team knows that your ConfigMgr design and operations depend on their configuration of AD sites.

Microsoft recommends that you create as few boundaries as possible to meet your requirements based on the constraints of your network topology. The following guidelines will help you consider how to minimize the overall number of boundaries in your site or hierarchy:

images Do not use a small IP range that matches one or a handful of IP subnets.

images Do use IP ranges to handle exceptions such as networks used for VPN connections.

images Do use single IP ranges to replace a large number of IP subnets or AD sites.

Boundary groups are used in content distribution to control the DPs from which a client retrieves content. Because boundaries are hierarchywide, DP boundaries are independent of sites, and a DP can be shared between sites. This allows you to optimize content delivery based on network considerations. When clients are not within the boundaries of a DP with the required content, they use the deployment option specified for slow or unreliable networks.

Chapter 5 discusses network considerations for the placement of protected site systems. Chapter 14, “Distributing and Deploying Applications and Packages,” discusses content deployment.

Overlapping boundaries are boundaries that include the same network locations. Overlapping boundaries were explicitly unsupported in Configuration Manager 2007; however, this is no longer the case:

images Automatic Site Assignment: Overlapping boundaries remain unsupported for automatic site assignment. If you use boundaries for automatic site assignment, plan and maintain boundaries that are appropriate to your network topology and do not overlap. Automatic site assignment can have unpredictable results when a client is located within the boundaries of more than one site.

images Content Location: Overlapping boundaries are supported for content distribution. For clients that fall into multiple boundary groups, the MP returns a complete list of all DPs associated with content requested by the client, based on all boundaries and boundary groups in which the client is located. The client then follows its normal DP location rules to select the best DP from that list.

Microsoft significantly modified boundary groups in the 1610 release, introducing the ability to define relationships between boundary groups. Each relationship can have a defined timeout before failover occurs. This allows for designs where one remote location can fail over to an intermediary or a regional datacenter prior to failover to the core datacenter where the site server is located. Alternatively, relationships can be built to fail over to another boundary group at the same physical location instead of traversing a WAN link. Microsoft also made changes in ConfigMgr Current Branch version 1706 to the fallback behavior of SUPs within and between boundary groups, allowing for more predictable behavior and, in the case of repeat failovers, a more aggressive failover cycle. The objective of these changes is to reduce the need to leverage network load balancing for SUPs.

For more information on the version 1610 and later boundary models (including version 1706 SUP failover changes), see https://docs.microsoft.com/sccm/core/servers/deploy/configure/define-site-boundaries-and-boundary-groups. For legacy information on ConfigMgr behavior prior to version 1610, refer to https://docs.microsoft.com/sccm/core/servers/deploy/configure/boundary-groups-for-1511-1602-and-1606.
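The version 1610 relationship model can be pictured as an ordered failover list. The boundary group names and timeout values below are hypothetical, and this is a deliberately simplified model of the behavior (real fallback also depends on content availability and role type):

```python
# Illustrative boundary group relationships (hypothetical names and
# timeouts). Each relationship from a boundary group carries a
# fallback timeout in minutes; clients fall back to neighbor groups
# in order of increasing timeout.
relationships = {
    "Branch Office": [
        ("Regional Datacenter", 30),   # fall back after 30 minutes
        ("Core Datacenter", 120),      # then after 120 minutes
    ],
}

def fallback_order(current_group: str) -> list[str]:
    """Neighbor boundary groups sorted by their fallback timeout."""
    neighbors = relationships.get(current_group, [])
    return [group for group, _timeout in sorted(neighbors, key=lambda r: r[1])]

print(fallback_order("Branch Office"))
# ['Regional Datacenter', 'Core Datacenter']
```

Modeling the relationships this way during design makes it easy to review, for each location, which site systems clients will reach and in what order when their preferred DPs or SUPs are unavailable.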

Site Planning for Configuration Manager

After determining the number of sites and their scope, the next step is to plan how to design each site. This is a significant element of the design. It involves determining the number of site systems to deploy and their hardware specifications. This phase tends to be the lengthiest part of a ConfigMgr design and may force some reevaluation of your overall site count and hierarchy structure.

Site Servers and Site Systems Planning

The site server and site system servers are the foundation of a hierarchy or standalone primary site. Chapter 2 introduced site system roles; this section helps you determine what site system roles are required and the server infrastructure necessary to deliver those roles. Site system roles may be hosted on the site server itself or remotely on another server. Following is a listing of key considerations for site system role placement:

images Network Topology: Place DPs at each physical site if the site spans a WAN link or is a large metropolitan campus with backbone LAN links (that act as points of congestion). This proximity to the clients may be ideal as it allows them to obtain content locally, albeit at increased server infrastructure costs. Chapter 5 discusses network considerations for DP placement. This should also factor in the bandwidth savings that client peer caching introduces (introduced with ConfigMgr Current Branch version 1610 and improved in version 1706). For more information, see https://docs.microsoft.com/sccm/core/plan-design/hierarchy/client-peer-cache.

images Security: Moving client-facing roles away from the site server allows you to move client network connections away from the site server as well. Client-facing roles include the MP, DP, SUP, and Application Catalog roles. Moving these roles also allows you to remove the need for IIS on the site server, which is more secure when supporting clients on untrusted networks. An untrusted network could be the Internet, a perimeter network, or a DMZ. For clients on the Internet, IBCM may warrant having duplicate sets of MPs, DPs, and SUPs just to support those clients, depending on a company’s internal security policies and risk assessments.

images Scalability: For large sites, moving site system roles off the site server and scaling out may be crucial to achieving software delivery requirements. The MP and SUP no longer support Windows Server network load balancing in ConfigMgr Current Branch via the console. (The SUP supports this via the SMS provider for scripting.) The ConfigMgr client can now automatically switch servers hosting those roles as new instances are added. The DPs continue to provide scalability as clients randomly select from the available list of DPs returned by their assigned MP. If a site needs to scale to the supported limit or close to that limit, multiple site systems (DP/MP/SUPs) are required.

images Management: If a separate team manages SQL Server, corporate policy often says that this team must manage all instances of SQL Server. This can mean that a remote site database must be deployed on an existing instance of SQL Server managed by that team. Note that the ConfigMgr site server’s computer account requires sysadmin rights to the SQL Server instance and local administrative rights to the Windows server where the SQL Server instance is running. Ensure that the SQL Server team is familiar with supporting the SSB and certificate-based authentication. These requirements may make the ConfigMgr site database out of scope for that team.

images Availability: The only supported way to increase ConfigMgr site availability is by adding additional client-facing site system servers (MP, DP, and SUP). For the database layer to be highly available, it must be deployed with a failover cluster or SQL Server Always On availability groups. Neither of these SQL Server high-availability topologies can be leveraged when the site server is colocated with SQL Server, as the site server does not support running on a clustered server.

images Performance: In general, the best performance in larger sites is achieved with dedicated site database servers whose hardware profiles are designed specifically with SQL Server performance in mind, including a large amount of system memory dedicated to SQL Server. SQL Server regularly polls the operating system to determine how much free memory is available, with the intention of preventing the OS from paging. If SQL Server is on a dedicated server, these checks allow it to consume the maximum amount of memory without impacting the OS. When SQL Server is colocated with the site server, it is important to set the SQL Server maximum memory to between 80% and 90% of the system memory (depending on the total amount of system memory). This helps prevent starving various other components, including Windows Management Instrumentation (WMI), the SMS Executive, IIS, and the file system cache, of memory.
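The 80% to 90% sizing guidance for a colocated SQL Server can be expressed as a simple calculation. The helper below is hypothetical (it is not a ConfigMgr or SQL Server tool) and just converts the rule of thumb into the megabyte value used by SQL Server’s max server memory setting:

```python
# Hypothetical helper reflecting the guidance above: when SQL Server
# is colocated with the site server, cap its maximum memory at
# 80-90% of total system memory, leaving the remainder for WMI, the
# SMS Executive, IIS, and the file system cache.
def sql_max_memory_mb(total_ram_gb: int, fraction: float = 0.85) -> int:
    """Suggested SQL Server max memory in MB for a colocated site server."""
    if not 0.80 <= fraction <= 0.90:
        raise ValueError("fraction should be between 0.80 and 0.90")
    return int(total_ram_gb * 1024 * fraction)

print(sql_max_memory_mb(64))  # 55705 MB for a 64GB site server
```

The exact fraction to choose within that band depends on the total amount of system memory; larger servers can generally run closer to 90%.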

The SMS provider role is considered a special case. As discussed in Chapter 3, this is a WMI provider that serves as an interface to the database for the ConfigMgr console, scripts, PowerShell cmdlets, third-party tools, and custom-built applications. The CAS and each primary site require one instance of the SMS provider, although they support additional instances. The decision to deploy additional SMS providers is primarily based on one of the following requirements:

images The ConfigMgr console has an increased level of availability: When there are multiple providers, the console nondeterministically selects a provider to use, enabling the site to sustain a single provider outage. While console errors would occur, a connection can eventually be made. This may be useful in an emergency.

images You need to support many ConfigMgr console connections: You can support an increased number of console connections by increasing site server resources or moving the SMS provider to a dedicated server.

The SMS provider is automatically installed on the site server during site setup. It can also be installed on the site database server or another server. You can change its location by rerunning setup on the site server. Setup is also used to add additional instances of the provider to a site. Following are requirements for a server to host an SMS provider instance:

images The provider must be installed on a server joined to an AD domain that has a two-way trust with the site database and site server’s domain.

images The server cannot host any other site system roles from another site.

images The server cannot host any other SMS providers for any site.

images The server must be running a version of Windows Server supported for a site server. At the time this book was published, this included Windows Server 2012 and later versions. Windows Server 2008/2008 R2 are deprecated and unsupported for all roles other than the state migration point and DP roles. Windows Server 2008 was deprecated in version 1511, and Windows Server 2008 R2 was deprecated in version 1702. ConfigMgr Current Branch version 1602 introduced support for in-place upgrade of the site server from Windows Server 2008 R2 to Windows Server 2012 R2. (For more information, see https://docs.microsoft.com/sccm/core/plan-design/changes/whats-new-in-version-1602#bkmk_UpgradeOS.) For the latest information on ConfigMgr removed and deprecated features, see https://docs.microsoft.com/sccm/core/plan-design/changes/removed-and-deprecated-features.

images The server must have the Windows Assessment and Deployment Kit (ADK) components installed. The components required on an SMS provider server are the same components selected when installing the ADK as a prerequisite for a site server.

Following are points to consider regarding a location for the SMS provider:

images Site Server: Using a site server is the simplest approach as there are no network connectivity issues. However, the server resources of the site server must be shared between the site server and the provider.

images Site Database Server: Placing the provider here may yield the best performance, as all provider-to-database communication occurs on the server. This option is not available if the site database is on a clustered instance of SQL Server. Note that placing the SMS provider here consumes server resources that would otherwise be dedicated to the SQL Server instance, which may complicate SQL Server performance troubleshooting.

images Any Other Server: This is the only option that allows you to increase availability of the SMS provider function, as discussed previously in this section. The server must have a high-speed network connection to the site server and site database server. This placement requires additional server hardware resources.

Capacity Planning for ConfigMgr Sites

This section focuses on the scalability of sites and site system roles, which plays a role in determining the topology of the hierarchy and of individual sites. Use this section in conjunction with the network guidance found in Chapter 5. Specific guidance is produced by Microsoft and updated regularly based on performance changes made to new releases of ConfigMgr Current Branch and availability of new hardware; see https://docs.microsoft.com/sccm/core/plan-design/configs/recommended-hardware for additional information. Following are certain guiding principles to consider when looking at performance of the site system roles:

images DPs: These servers require the ability to serve large amounts of data to clients via IIS and file shares, resulting in heavy disk read operations and the capability to quickly move data to clients over the network. Even when clients are on the same LAN, they can completely consume the available bandwidth of the server's storage and network subsystems. This is also true with a virtualized DP, as host storage and network infrastructures are often shared across VMs, with little to no isolation between VMs.

images MPs: These servers tend to be CPU-bound for calculation processes. The MP also requires a relatively quick storage subsystem in large environments, as client data is temporarily stored there before being sent to the site server for final processing.

images Site Server: The site server requires a large amount of CPU and memory resources, second only to the site database. The memory is necessary for the SMS provider and SMSExec process, the core Windows service of ConfigMgr. The CPU is used for processing required for discovery, hash calculations for content, client data, and general information processing. However, the most critical resource to a site server is the storage subsystem, which should support a large number of small random-write operations. These types of storage operations are the most difficult to handle on hard drives. RAID 10 (mirroring and striping) is often recommended to provide the best performance. In the largest ConfigMgr environments, storage controller bandwidth and caches are important considerations.

images Site Database: The site database should have the highest proportion of server resources allocated, including CPU, memory, and storage resources. To support the highest level of scalability, these can be four to six times the memory and twice the processing power of the site server. SQL Server best practices regarding storage also apply, including isolating data and log files from each other and splitting those files to allow SQL Server to perform parallel operations. Microsoft suggests that customers with large deployments use a remote site database instead of colocating it with the site server. This is a change from previous versions, where the guideline was to keep both roles on the same server, largely due to lower-capacity network links within datacenters at that time.

images SUPs: A SUP consumes the most server resources of the key client-facing site systems (DP, SUP, and MP). This can be twice the memory and processor resources of an MP. The SUP is essentially an IIS web service and background WSUS service.

TIP: TWEAKING THE IIS CONFIGURATION OF THE SUP

SUPs require modifications to the configuration of WsusPool, the IIS application pool used by WSUS. Find the settings by right-clicking WsusPool under Application Pools in IIS and selecting Advanced Settings. Within the Advanced Settings dialog, set the following:

images Double the value of (General) -> Queue Length, from 1000 to 2000.

images Quadruple Recycling -> Private Memory Limit (KB), from 1,843,200 to 7,372,800 or set to 0 (unlimited).

These settings allow the SUP’s WSUS components to meet the more complex nature of WSUS metadata queries used by the ConfigMgr software update feature. For the latest guidance, see https://docs.microsoft.com/sccm/core/plan-design/configs/recommended-hardware.
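The adjustments in this tip reduce to simple arithmetic; the following sketch computes the suggested overrides from the default values quoted above (the dictionary keys are illustrative names, not IIS property names):

```python
# Sketch of the WsusPool overrides described in the tip above.
# Defaults are the IIS values quoted in the text; apply the results
# via the Advanced Settings dialog on each SUP.

DEFAULT_QUEUE_LENGTH = 1000            # (General) -> Queue Length
DEFAULT_PRIVATE_MEMORY_KB = 1_843_200  # Recycling -> Private Memory Limit (KB)

def tuned_wsuspool_settings():
    """Return the suggested WsusPool overrides for a SUP."""
    return {
        "queueLength": DEFAULT_QUEUE_LENGTH * 2,                # double
        "privateMemoryLimitKB": DEFAULT_PRIVATE_MEMORY_KB * 4,  # quadruple
    }

print(tuned_wsuspool_settings())
```

Alternatively, set the private memory limit to 0 (unlimited), as noted above.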

Configuration Manager on Microsoft Azure

ConfigMgr Current Branch fully supports hosting ConfigMgr servers in Microsoft Azure IaaS VMs. When hosting servers in Azure, it is important to follow the TechNet documentation regarding server sizing (see the “Capacity Planning for ConfigMgr Sites” section, earlier in this chapter, and the “ConfigMgr Scalability Limits” section, later in this chapter). You should also follow Microsoft’s guidance regarding Azure storage, which provides limited throughput for standard disks. This can be anywhere from 300 to 500 I/O operations per second (IOPS) depending on VM size. For information about Azure storage scalability and performance, see https://azure.microsoft.com/documentation/articles/storage-scalability-targets/.

Azure Premium Storage provides increased storage throughput but with fixed disk sizes and higher cost. You could use the Storage Pools feature of Windows Server inside an Azure VM to combine multiple lower-cost P10 premium storage disks and provide increased storage performance. At the other extreme, you could combine multiple P30 disks, the highest specification of Premium Storage, to provide even higher levels of performance, especially for SQL Server. Premium Storage also requires specific types of Azure VMs. For information on Premium Storage, see https://azure.microsoft.com/documentation/articles/storage-premium-storage/.
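As a rough illustration of the pooling approach, the sketch below aggregates the performance of striped premium disks. The per-disk IOPS and throughput figures are assumptions based on Azure's published targets at the time of writing; confirm current numbers against the Azure storage documentation before sizing:

```python
# Rough sketch: aggregate performance of a Storage Pool built from
# multiple premium disks. Per-disk figures are assumptions (Azure's
# published targets change over time).

DISK_SPECS = {        # (IOPS, throughput MB/s) per disk -- assumed values
    "P10": (500, 100),
    "P30": (5000, 200),
}

def pooled_performance(disk_type, count):
    """Aggregate IOPS and MB/s for `count` striped disks of `disk_type`."""
    iops, mbps = DISK_SPECS[disk_type]
    return iops * count, mbps * count

# Four pooled P10 disks vs. a single P30:
print(pooled_performance("P10", 4))   # more throughput, fewer IOPS
print(pooled_performance("P30", 1))
```

This illustrates why the choice between many small disks and few large disks depends on whether your workload is IOPS-bound or throughput-bound.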

Not all processor resources are created equal in Azure. Azure VMs have different series, denoted by the alphabetic prefixes A, D, F, GS, and N. The size of a VM in a series is denoted by the numeric suffix (for example, A0, D13, F8, GS5, and N24). VMs supporting Premium Storage are denoted by S (for example, GS5, DS13). Finally, certain series of VMs have a second release; for example, D13_v2 is the second version of the D-series VM.

The series is important, as it alters the underlying processor type used by the host. For example, the Dv2, F, and G series and their xS Premium Storage counterparts all feature Intel Xeon E5-2673 v3 (Haswell) processors, which provide increased compute power over the A and D series VMs. For a complete list of the current Azure VM sizes, processor performance, and VM-level network bandwidth limits, see https://azure.microsoft.com/documentation/articles/virtual-machines-windows-sizes/. Network bandwidth is an important consideration when hosting a ConfigMgr infrastructure in Azure. Azure charges for outbound data when using the site-to-site VPN option to provide connectivity between on-premise networks and Azure virtual networks. The ConfigMgr site server and core roles often push data to clients and ConfigMgr servers in remote locations such as branch offices, which can result in a very large amount of outbound data (from Azure to on-premise). Azure site-to-site VPNs occur over an Internet link, which may not have the capacity to support a large amount of content replication.

Azure offers ExpressRoute as an alternative to using VPNs over the Internet. ExpressRoute, which is provided by your company's network service provider, supplies dedicated bandwidth, up to a 10Gbps fiber link between your corporate network and your Azure virtual networks. It includes unlimited inbound and outbound data transfer and can be critical to a successful ConfigMgr deployment on Azure. Note that ExpressRoute is not available in all Azure locations, and not all network service providers globally support ExpressRoute circuits. For information about ExpressRoute, including costs and availability, see https://azure.microsoft.com/services/expressroute/.

Ultimately, consider the option of using Azure in a similar manner to how you would consider hosting VMs in a service provider’s or outsourcer’s datacenter. Take into account the site-to-site connectivity between the two networks and ensure that your provider’s storage subsystem offers adequate performance and throughput for your ConfigMgr environment.

ConfigMgr Scalability Limits

Certain scalability requirements are crucial for determining when to add additional site systems and sites. Consider the following:

images Overall Client Limits: A standalone primary site supports up to 175,000 clients, a child primary site in a hierarchy supports up to 150,000 clients, and a hierarchy supports up to 1,025,000 devices. Those clients comprise the following device types, grouped based on their resource requirements:

images Devices running the ConfigMgr agent (Windows, Linux, or UNIX)

images Devices running the ConfigMgr device agent (Mac or Windows CE 7.0)

images Devices managed via the Microsoft Intune MDM channel (Windows, iOS, Android, or Mac)

images Devices managed via the on-premise MDM channel (Windows 10)

For a complete list of the device type limits for each type of site, see https://docs.microsoft.com/sccm/core/plan-design/configs/size-and-scale-numbers#bkmk_clientnumbers. Secondary sites support up to 15,000 devices running the ConfigMgr agent (Windows, Linux, or UNIX).

images Hierarchy Limits: A CAS can support up to 25 primary sites. Few environments, if any, hit this limit, and such a large number of primary sites should be avoided whenever possible due to the complexity of database replication with that many primary sites. Each primary site can have up to 250 secondary sites.

images MP Limits: Each MP can support 25,000 clients, and each primary site can support 15 MPs. MPs should not be installed across a WAN link from the primary site server or site database. A secondary site can have only one MP, which must be installed on the secondary site server.

images DP Limits: Each DP supports 4,000 client connections. Primary and secondary sites support 250 DPs. If a DP is configured as a pull-DP, an additional 2,000 pull-DPs can be added to the primary or secondary site. Those pull-DPs count against the 4,000 client connections of the DP they pull from. A primary site supports a combined total of 5,000 DPs across itself and all of its child secondary sites. Each DP can have a total of 10,000 applications or packages.

images SUP Limits: Each SUP supports up to 25,000 clients when installed on the primary site server. When deployed remotely from its primary site server and on dedicated hardware, a SUP supports up to 150,000 clients. While there is no documented maximum number of SUPs per site, scalability is rarely a concern: Because a single SUP can easily scale to the number of clients supported by a primary site, two SUPs are often all that is required to provide increased availability.

See https://docs.microsoft.com/sccm/core/plan-design/configs/size-and-scale-numbers for a complete list of the various limits. This list is regularly updated based on changes to ConfigMgr and further testing performed by the ConfigMgr product team.
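The published limits above translate directly into simple planning arithmetic. The following is a minimal sketch using only the numbers quoted in this section; confirm current figures against the size-and-scale page before relying on them:

```python
import math

# Sketch: planning arithmetic from the limits quoted in this section.
# Confirm current figures against Microsoft's size-and-scale page.

SITE_CLIENT_LIMITS = {
    "standalone_primary": 175_000,
    "child_primary": 150_000,
    "secondary": 15_000,
    "hierarchy": 1_025_000,
}

MP_CLIENT_LIMIT = 25_000  # clients per management point
DP_CLIENT_LIMIT = 4_000   # client connections per distribution point

def within_limit(site_type, clients):
    """True if a planned client count fits the published site limit."""
    return clients <= SITE_CLIENT_LIMITS[site_type]

def minimum_roles(clients):
    """Minimum client-facing role counts, with no redundancy headroom."""
    return {
        "MPs": math.ceil(clients / MP_CLIENT_LIMIT),
        "DPs": math.ceil(clients / DP_CLIENT_LIMIT),
    }

print(within_limit("standalone_primary", 160_000))  # True
print(minimum_roles(50_000))                        # {'MPs': 2, 'DPs': 13}
```

Note that these are bare minimums; availability requirements, discussed in the next section, typically add at least one additional MP.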

Meeting Availability Requirements

Availability in ConfigMgr has always posed challenges due to lack of support for clustering or other high-availability methods for the site server. However, the other site system roles provide multiple methods for increasing the availability of the site and the services it offers. This section provides a breakdown of key components and how their availability may be increased:

images Site Database: As with previous versions, ConfigMgr Current Branch provides support for Windows Failover Clustering for SQL Server. This helps to ensure that the various operations that rely on the site database can continue to function. ConfigMgr Current Branch version 1602 and later support SQL Server AlwaysOn availability groups, which allow two SQL Server instances to be highly available without using shared storage. An added benefit is that this removes the cluster's shared storage subsystem, a single point of failure where data corruption can impact availability. Having a highly available database helps ensure the following:

images The site’s MPs can continue to serve existing policy to clients. Clients can install software, run task sequences, and deploy software updates.

images The site’s SUPs can continue to serve metadata for software update scans and deployment evaluations for existing deployments to clients. This requires that the SUP’s WSUS database be stored on the highly available SQL Server instance.

images MPs: ConfigMgr Current Branch allows you to install multiple MPs in the same site, which enables clients to automatically fail over from one MP to another and is critical for enabling client functions to continue without impact. Failover is handled automatically by clients without requiring any load balancing solutions. It is important to deploy additional MPs to provide for availability in addition to scalability. For example, if you are going to support 50,000 clients in a site, you should deploy three MPs. Because the supported scalability limit for an MP is 25,000 clients, the additional MP allows a single MP failure to be handled automatically. MPs depend on the site database server to function, so this should be factored into their availability.

images SUPs: ConfigMgr Current Branch allows multiple SUPs to be installed in the same site, which enables clients to continue with software update processes in the event of a failure by a single SUP. Failover is handled automatically by clients, without requiring any load balancing solutions.

images DPs: ConfigMgr allows for multiple DPs in a single site. Clients select a DP nondeterministically after first grouping the available DPs as fast or slow. This helps ensure that content is available for the client to install software, software updates, and operating systems as they are deployed.

images SMS Provider: ConfigMgr allows for multiple SMS providers to be deployed in a single site. These do not provide instance failover or dynamic routing. Instead, all providers are tried nondeterministically, which means you may see errors in the ConfigMgr console when the console attempts to connect to a provider that is offline. The console will eventually reach an online provider. This allows you to avoid a complete console outage; instead, there is a degradation to the console experience in the event of an outage.
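The N+1 sizing described above for MPs can be sketched as follows, using the 25,000-client limit quoted earlier in this chapter:

```python
import math

# Sketch: MP count with N+1 headroom, so a single MP failure leaves
# enough capacity for all clients (limit quoted in this chapter).

MP_CLIENT_LIMIT = 25_000

def mps_with_failover(clients):
    """Minimum MPs plus one spare for automatic client failover."""
    return math.ceil(clients / MP_CLIENT_LIMIT) + 1

print(mps_with_failover(50_000))  # 3, matching the 50,000-client example
```

The same N+1 reasoning applies to SUPs, since client failover between SUPs is also automatic.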

As with the availability of any solution, consider your need for a highly available solution in the context of your business requirements. Do not invest in additional infrastructure and associated complexity if your target for speed of software delivery to end users does not warrant that level of availability.

Planning for Content Management

Content in the context of ConfigMgr refers to the files for applications, packages, software updates, and operating system deployments. One of ConfigMgr’s most important functions is its ability to efficiently deliver content to varied network locations. This section provides guidance on planning for content management. Chapter 5 discusses planning for content distribution.

Content distribution starts with the content source location(s). The site server pulls content to the content library on the site server itself and then distributes it to a set of DPs associated with the site. In the case of the content source location, you can choose to specify existing locations where source files are stored or establish a new source location. The choice largely depends on the integrity and ease of management of the existing locations. If these are not secured or are exceedingly complex, take this opportunity to establish a new unified location. Source file locations should be subject to additional technical and operational controls around changes to the sources. Anyone who can modify source files can potentially deliver content to client systems.

ConfigMgr Current Branch leverages the content library feature/architecture introduced in ConfigMgr 2012. This allows for file-level single-instance storage and minimizes duplicate distribution of source files between sites. The content library is stored in a custom format due to the single instancing used.

Following is a set of high-level planning elements to consider in your design for DPs:

images Add DPs for Redundancy: Deploy one or more DPs in the same network as your site server. Adding DPs provides a level of redundancy (see the “Meeting Availability Requirements” section, earlier in this chapter).

images Leverage BranchCache: BranchCache provides peer-to-peer distribution of content at locations with a single subnet, as ConfigMgr only supports BranchCache in distributed mode. Distributed mode uses subnet broadcast to find peer nodes, which means every subnet must have at least one download of the content. Using BranchCache also helps reduce the load on DPs by allowing clients to share content.

images Leverage Peer Cache: ConfigMgr's Peer Cache feature provides peer-to-peer caching inside the Windows Preinstallation Environment (WinPE) and, as of ConfigMgr Current Branch version 1610, for all client operations; it can be used to enable clients to share content for app deployment and software updates. Peer Cache has enjoyed significant improvements with each release of ConfigMgr Current Branch, and the aim is to reduce the need to deploy DPs at every branch office. See https://docs.microsoft.com/sccm/core/plan-design/hierarchy/client-peer-cache for more information.

images Consider DPs for Larger Remote Locations: Deploy protected DPs at larger remote network locations such as branch offices. Associate these DPs with the boundary groups containing boundaries of network locations where that DP is located. If a client is inside a boundary group served by protected DPs, it prefers those DPs first.

images Content for Internet Clients: If you support Internet-based clients, place HTTPS-enabled DPs in locations accessible to these clients. Consider leveraging the Cloud DP role in Microsoft Azure to reduce the Internet connectivity demands of content downloads from Internet clients, serving them from Azure datacenter(s), albeit at a charge per megabyte served and stored.

images Leverage Pull-DPs at Branch Offices: Pull-DPs offload content distribution processing from the site server, making them well suited to environments with many branch locations. Using pull-DPs also permits you to support a large number of locations without additional primary or secondary sites. Configure content replication based on your network topology. For example, if you have a single unified Multiprotocol Label Switching (MPLS) network, you may not need to chain content replication to regional or hub locations and instead can leverage the “flat” nature of your network topology.

images Use Distribution Point Groups (DPGs) to Simplify Content Distribution Administration: DPGs allow you to streamline targeting of similar DPs. For example, branch offices often have identical requirements for content. Group all your branch offices together to enable targeting them once rather than multiple times. DPGs can be used to group DPs logically and physically. You could leverage a DPG to identify DPs that serve a particular business unit. Keep in mind that when a DP is added to a group, it automatically receives all content assigned to the group.

images Use Prestaged Content for Sites with Very Slow Links: When WAN connectivity provides limited bandwidth, consider configuring the DP in that location to use prestaged content. This enables you to replicate content out of band of ConfigMgr, enabling you to use postal or package delivery services to distribute content in bulk. This does require additional administrative overhead and cost but can be key to enabling timely services to those locations.

See Chapter 5 for more detailed discussion of content distribution and network design planning. Chapter 14 discusses the operational elements of content management.

Planning for Client Deployment and Settings

The ConfigMgr client is delivered as a single client, with components enabled based on the settings defined by the assigned site. The client must be installed on systems that are to be managed. Installation often requires the discovery of client systems via a discovery method prior to deployment. Discovery can also be helpful in planning.

This section focuses on deploying the ConfigMgr client to Windows desktops and laptops. It does not cover the deployment of the Linux, UNIX, or Mac OS X clients or MDM capabilities provided by Microsoft Intune. For more information on MDM capabilities, see Chapter 16.

The client feature components you enable and their configuration directly affect the user experience, including performance, scalability, and security of the managed environment. This section provides an overview of the considerations for designing and planning around client settings. Chapter 9 provides additional detail related to client settings and their configuration.

Planning Client Discovery and Installation

Before using ConfigMgr to manage a system, you must install the ConfigMgr client, and often you must discover the client. This section introduces some basic considerations to include in planning and design. Chapter 9 provides more details on client deployment and configuring installation methods. Following are methods you can use to install the ConfigMgr client:

images Client Push Installation: This method involves using WMI, remote administration calls, and administrative file shares for the site server to install the ConfigMgr client to potential client systems and invoke the client installation process. Before you can push the client to a remote system, the system must first be discovered. You can enable client push installation on a sitewide basis or selectively install individual or groups of systems in collections. Client push installation has a number of configurable dependencies, and properties are defined sitewide. Client push allows you to control installation entirely within ConfigMgr, which may simplify administration if collaborating with AD administrators requires additional time or effort. Client push requires certain prerequisites, firewall exceptions, and the use of administrative rights, all of which make it less secure. This installation method supports workgroup clients if they are discoverable and you meet access and permissions requirements for those clients (that is, knowing a local administrator account on those devices and that the admin shares are accessible).

images SUP-Based Installation: This method involves using the SUPs throughout your hierarchy to install the client. SUP-based installation does not require discovering a system before installing the client on it. It is best to use group policy preferences (GPP) to set required WSUS client settings, as this allows the ConfigMgr agent to override those GPP settings. This method is a good choice if you already use WSUS for software updates. GPP settings for the WSUS client can be targeted using any controls available with group policy object (GPO) assignment and filtering (that is, organizational units [OUs], the Deny Apply Group Policy security permission, or WMI filtering). Bandwidth consumption can be minimized by using Background Intelligent Transfer Service (BITS), but you cannot control when the installation occurs, as it uses a WSUS update with a deadline in the past. You can also use this method with workgroup clients, but you must be able to remotely configure a client's Registry in order to define its WSUS server as one of the SUPs in its nearest site. Figure 4.4 shows how to configure SUP-based installation. Note that after checking the check box, you must use one of the previously mentioned methods to tell clients to use the SUP as their WSUS server.

FIGURE 4.4 Enabling SUP-based client installation.

images Group Policy Installation: This method involves using group policy software installation to invoke a special Windows Installer package designed for this installation method. Like SUP-based installation, this method also provides control over targeting, as it also leverages GPO assignment. Similarly, there are no controls over when installation occurs, as installation takes place only during device startup.

images Manual Installation: An administrator can log on to a system and manually run the CCMSetup.exe client installation program. This does not require prior discovery of the system, has few dependencies, and is a great way to install several test clients; however, it is not scalable.

images Logon/Startup Script Installation: It is possible to automate manual installation by scripting CCMSetup.exe to install the client. This provides an extremely high level of control because you use a custom-developed script to control CCMSetup.exe. There is limited control on when the logon or startup script is invoked, as it is tied to either user logon (in the case of a logon script) or system startup (if a startup script). The same targeting capabilities are available as with GPOs, discussed earlier in this section with SUP-based installation. You can use this method with workgroup machines by leveraging PsExec from Windows Sysinternals (http://technet.microsoft.com/sysinternals) or any method that allows remote execution of a script on a target system. There is no requirement for client discovery prior to using this installation method.

images Installation via Intune MDM-Managed Windows Devices: You can deploy the ConfigMgr client when Windows 10 devices are enrolled in MDM. This is particularly important if automatic MDM enrollment is configured in Azure AD as part of the join process. For more information on the automatic MDM process, see https://docs.microsoft.com/intune/windows-enroll#enable-windows-10-automatic-enrollment. This method helps ensure that devices can be joined to Azure AD as part of a user-initiated modern device provisioning in Windows 10 over the Internet, while maintaining existing management capabilities and methodologies. It is especially suited to use with the CMG configured with Azure AD authentication. For more information about this method, see https://docs.microsoft.com/sccm/core/clients/deploy/deploy-clients-to-windows-computers#how-to-install-clients-to-intune-mdm-managed-windows-devices. For more information on how to use Intune and ConfigMgr together to manage Windows 10 devices, see Appendix B, “Co-Managing Microsoft Intune and ConfigMgr.”

images Installation via Windows AutoPilot: Similar to the previous method, you can use Windows AutoPilot to automate the Azure AD join process. Automating this process triggers enrollment in Intune, which, using the previous bullet point's installation method, triggers installation of the ConfigMgr agent. This allows you to bring machines under management automatically, straight from the factory. For more information on Windows AutoPilot and its use with Intune, see https://docs.microsoft.com/intune/enrollment-autopilot.

images Upgrade Installation: You can use your existing software distribution infrastructure to upgrade the client. This requires an older version of the ConfigMgr client to be installed on the system and communicating with the site. This is useful if upgrading from ConfigMgr 2012/2012 R2.

Chapter 2 described available discovery methods. Two discovery methods are available to discover potential clients:

images Active Directory System Discovery: This method involves using Lightweight Directory Access Protocol (LDAP) to access AD to extract information about computers in the domain. It also uses DNS queries to resolve IP addresses. If you use this method of discovery, ensure that your AD database is well maintained and that obsolete computer accounts are regularly purged. Alternatively, you can use settings available in the Active Directory System Discovery Options tab to filter out computers that have not recently logged in (which occurs as part of system startup) or have not recently updated their computer password, which occurs implicitly every 30 days. Configure this method on each primary site that manages on-premise clients and needs to discover them. Determining whether discovery is required should be based on two considerations:

images Whether you are using a client installation method that requires systems first be discovered, such as client push installation.

images Whether you need to obtain information from AD computer objects to extend the ConfigMgr database. Where possible, scope AD System Discovery at each primary site to minimize unnecessary discovery.

images Network Discovery: This method involves using various network protocols to enumerate IP subnets and hosts, discussed in Chapter 5. The key network discovery method is Dynamic Host Configuration Protocol (DHCP), which is available if you have Microsoft DHCP servers. This method is particularly useful because it does not rely on pulling dynamic data from the network or individual systems; it pulls DHCP address lease information directly from specific DHCP servers you define, providing a more predictable method of network discovery. You can configure multiple DHCP servers, as shown in Figure 4.5.

FIGURE 4.5 DHCP server configuration in Network Discovery.

You can configure each discovery method at one or more sites in your hierarchy. When an object is discovered, the discovery method creates a data discovery record (DDR), which is placed in the auth\ddm.box inbox with basic information about the object. The DDR file is processed by the CAS or primary site generating the DDR, causing the information in the file to be inserted in the database and replicated throughout the hierarchy as global data.

ConfigMgr provides additional AD discovery methods for finding information about users and your environment:

images Active Directory Forest Discovery: This method involves obtaining information about AD sites and AD-defined subnets and creating IP range boundaries based on these subnets. This method is useful for small environments as it reduces manual configuration. For larger environments, review the “Planning Boundaries and Boundary Groups” section, earlier in this chapter.

images Active Directory Group Discovery: This method involves obtaining information about security and distribution groups. It appends group membership information, which becomes a string array property of the user object in the ConfigMgr database and can be used to target deployments using rule-based collections. You can also define one or more groups in AD Group Discovery, causing discovery of the members of those groups. This is useful when you cannot scope discovery by another method. In the case of security groups, the actual group is returned as a group object. These objects enable targeting of deployments, specifically software distribution and applications, to groups without requiring rule-based dynamic collection evaluation. Evaluation is performed on the client side, based on the user’s access token group membership, which creates a very efficient method of targeting software deployment via group membership. The user’s access token needs to be updated with each membership change, but this occurs on the user’s system based on logon/logoff or lock (Windows 8.1 and later).

▶ Active Directory User Discovery: This method involves obtaining information about users in AD.

All Active Directory discovery methods should run at the site with the best possible connectivity to a DC. Each has delta discovery capability, meaning it queries the DC only for changes that have occurred since the last delta discovery pass. Delta discovery makes the process extremely efficient and occurs every five minutes by default (compared to every seven days for full discovery). Avoid changing the full discovery cycle if possible, as a full pass extracts all records from AD that match the search criteria defined for that discovery method.

NOTE: DELTA DISCOVERY IS TIED TO A SPECIFIC DC

When delta discovery first runs, it locates a DC using the normal AD client application programming interfaces (APIs); this lookup of the nearest DC is based on the AD site of the site server. Delta discovery then runs the LDAP query for that discovery method against the DC and determines the highest usnChanged attribute value among the objects the query returns. That usnChanged value is then persisted.

The usnChanged attribute is specific to a given DC and tracks changes written to the AD database instance on that DC. As new changes are replicated in or made locally on the DC, the usnChanged attribute value is incremented. Subsequent executions of the delta discovery run the same LDAP query as before but only look for objects with a higher usnChanged value than the last delta discovery.

Because the usnChanged value is tracked per DC, if that DC goes offline, the next delta discovery pass becomes a full discovery, ensuring that all changes are captured and a new usnChanged baseline is recorded. In large environments where full discovery can take hours, arrange for the AD administrators to notify you of any outages—specifically outages in AD sites that have site servers performing AD discovery. You can then notify teams that depend on AD discovery for software distribution or other ConfigMgr functions.
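The high-water-mark mechanism described in this note can be sketched in a few lines. This is an illustrative simulation, not ConfigMgr source code; the object names and USN values are hypothetical, and a real implementation would issue LDAP queries against the DC.

```python
# Illustrative sketch of usnChanged-based delta discovery (hypothetical data):
# track the highest usnChanged seen, then query only for objects above it.

def delta_discover(directory, last_usn):
    """Return objects changed since last_usn plus the new high-water mark."""
    changed = [obj for obj in directory if obj["usnChanged"] > last_usn]
    new_usn = max((obj["usnChanged"] for obj in directory), default=last_usn)
    return changed, new_usn

# Simulated AD database instance on a single DC
dc_objects = [
    {"name": "WKS001", "usnChanged": 1001},
    {"name": "WKS002", "usnChanged": 1005},
    {"name": "WKS003", "usnChanged": 1012},
]

# Full discovery establishes the baseline; later passes return only changes.
all_objs, usn = delta_discover(dc_objects, last_usn=0)   # returns all 3 objects
dc_objects[1]["usnChanged"] = 1013                       # WKS002 is modified
changed, usn = delta_discover(dc_objects, last_usn=usn)  # returns only WKS002
```

Because the baseline USN is meaningful only on the DC that issued it, losing that DC forces the next pass back to a full query, exactly as the note describes.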

You can configure the AD User Discovery and AD System Discovery methods to discover any AD attributes of the discovered objects. As you plan your user-centric management, consider attributes that can help you deliver appropriate content to your users. You can also include AD extension attributes, which allow free-form strings to be written to computer objects in AD. This way you can easily add attributes to client systems in the ConfigMgr database without using a custom-developed discovery method.

Planning Your Client Settings

Client settings in ConfigMgr Current Branch are controlled by a default sitewide settings policy and optional custom client settings policies. Custom client settings policies are targeted to collections of users or systems, enabling you to control the behavior or experience of those systems or users.

Core Client Settings for Systems

The following settings areas affect core client behavior or critical elements of the user experience:

▶ Client Policy: These settings control how frequently the ConfigMgr client polls for machine policy, which is policy targeted to the system rather than to a user. Increasing the frequency at which clients poll for policy decreases the scalability of MPs and the site database. If you need to distribute policy immediately, consider using client notification to push policy retrieval requests to collections. Client policy settings also control whether the ConfigMgr client polls for user policy and whether user policy is triggered for Internet clients.

REAL WORLD: USING CLIENT NOTIFICATION

You might use client notification in an emergency security update distribution or Endpoint Protection/Defender definition update in response to zero-day malware, where the ability to trigger immediate policy retrieval is key to speedy security incident response. Using client notification means that policy retrieval can occur more quickly than even a 10-minute policy polling cycle, without incurring the constant scalability impact of a sixfold increase in the frequency at which clients poll the MP and database for policy. Client notification can also evaluate software updates or application deployments and trigger hardware and software inventory.
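The scalability trade-off behind the sidebar's "sixfold increase" is simple arithmetic. The sketch below uses hypothetical client counts purely for illustration; actual MP capacity depends on hardware and many other factors.

```python
# Back-of-the-envelope MP policy-polling load (hypothetical numbers):
# halving the interval doubles the request rate, so moving from the
# 60-minute default to 10 minutes multiplies the load by six.

def polls_per_hour(client_count, interval_minutes):
    # Each client polls the MP once per interval; load scales linearly
    # with client count and inversely with the interval length.
    return client_count * 60 // interval_minutes

default_load = polls_per_hour(100_000, 60)  # default 60-minute cycle
fast_load = polls_per_hour(100_000, 10)     # aggressive 10-minute cycle
```

Here 100,000 clients generate 100,000 policy requests per hour at the default interval but 600,000 per hour at a 10-minute interval, which is why client notification is preferable for urgent policy delivery.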

▶ Computer Agent: These core settings allow you to control most end-user experience elements of the ConfigMgr client, including end-user deployment deadline reminders, the organization name shown in Software Center, whether the new Current Branch Software Center is enabled, who has installation permissions on the system (all users, administrators, primary users, or no users), and whether notifications are displayed for new deployments.

This area also controls the Additional Software Manages the Deployment of Applications and Software Updates setting; if enabled, the client never triggers software update or application installs on its own. You can use the ConfigMgr SDK and client-side WMI calls to write scripts or applications that trigger application and/or update installations using custom logic or rules.

Another important setting, Disable Deadline Randomization, causes all schedules to trigger exactly when they are defined to occur, assuming that the client received the policy prior to the deadline. Leaving randomization enabled in large environments helps distribute the load of incoming state and status messages from clients, and it lets you decide on which systems deadline randomization is desired. Keep randomization enabled (set Disable Deadline Randomization to No) where clients share a storage system, such as Virtual Desktop Infrastructure (VDI) and server virtualization environments (Microsoft Hyper-V or VMware), to help ensure that storage I/O is randomized rather than all clients triggering their schedules at the same time. See the “Using a Simple Schedule Versus a Full Schedule” section, later in this chapter, for information on ConfigMgr client schedules.

▶ Computer Restart: The two settings in this section allow you to control the restart user experience, as shown in Figure 4.6:

▶ Display a temporary notification to the user controls the first notification the user receives of a pending restart. The user can close this dialog box; the default countdown is 90 minutes.

▶ Display a dialog box that the user cannot close controls the final countdown presented to the user prior to restart. The user cannot dismiss this dialog box; the default countdown is 15 minutes.

Depending on your user requirements, you may want to extend one or both of these time-outs to best suit your desired user experience.


FIGURE 4.6 Computer Restart section of the client settings.

▶ Hardware Inventory: These settings allow you to control whether hardware inventory is enabled and how it is configured. Hardware inventory occurs every seven days by default; running it more frequently has a negative impact on site performance, as there is more inventory to process and history to maintain. The Hardware Inventory section also allows you to control the hardware inventory classes (WMI classes and registry values) collected by the client. MIF file collection can be toggled selectively per client rather than across the entire hierarchy, as in ConfigMgr 2007 and earlier, allowing you to restrict MIF file collection to only those clients where administrators are trusted not to generate unneeded MIF files that affect the size and schema of the site database.

▶ Remote Tools: These settings allow you to control whether remote control is enabled and the level of user interaction required for remote control. Disabling the requirement for user permission prior to taking control of a device may raise privacy, regulatory, and legal concerns, depending on the local laws where the system resides. This section also allows you to configure Remote Assistance and Remote Desktop local policy settings; these are overridden by group policy for domain-joined systems.

▶ Software Deployment: This setting allows you to control how often application deployment reevaluation occurs; the default is seven days. ConfigMgr triggers deployment reevaluation to determine the current state of all required applications targeted to the system and user. The initial evaluation occurs immediately after the client receives policy; reevaluations occur thereafter and are designed to address scenarios where a process external to ConfigMgr changes the state of the system. For example, a user might uninstall or install software, or a group policy startup script may change the system.

▶ Software Inventory: This settings area allows you to control software inventory. By default, software inventory runs every seven days. You can control the types of files searched for and the folder paths where those searches occur. You can also trigger file collection and send files back to the ConfigMgr site for storage and later review, which is useful when you need to pull critical log files from client systems. These files should be small both for transmission and data storage reasons. Review these settings to ensure that user privacy and any local privacy laws are not violated.

Software inventory is actually file inventory. Add/Remove Programs (Programs and Features) data and installed MSI information are gathered by hardware inventory from the registry and WMI, and hardware inventory is far more efficient for both the client and the ConfigMgr site infrastructure. Reserve software inventory for tightly scoped, infrequent searches for specific files. By default, software inventory is enabled but has no rules defined, which means it is effectively disabled.

CAUTION: PERFORMANCE IMPACT OF SOFTWARE INVENTORY

Software inventory can have a large impact on site performance. The information is written to several very large tables in the site database. In larger environments, these tables can contain tens of millions of rows; because there is no logical place to partition the data, such as a WMI class or registry key, these tables cannot be split into separate tables as hardware inventory tables can.

▶ Software Metering: This settings area allows you to specify whether the software metering component is enabled and the frequency at which metering information is reported; by default, the client reports metering information every seven days. Metering rules are not configured here; they are defined in the console under Assets and Compliance -> Software Metering. Software metering is often confused with inventory, as the terms inventory and metering are sometimes used interchangeably. In ConfigMgr, metering is the act of measuring how often, for how long, and by whom software is run. Installations of software are best tracked via hardware inventory and asset intelligence or, less efficiently, via software inventory.

▶ Software Updates: This area allows you to define the configuration of the Software Update component of the ConfigMgr client. It controls the frequency of software update scans and deployment reevaluations.

Software update scans detect the compliance state for software update products and categories configured in the Software Update component settings of the site or hierarchy. Deployment reevaluation is only concerned with the installation and compliance information for updates deployed to the system rather than with all metadata stored on the SUP that applies to the system.

The other crucial setting in this area, When Any Software Update Deployment Deadline Is Reached, Install All Other Software Update Deployments, allows you to configure the ConfigMgr client to bring forward deadlines for updates scheduled to install in the future, minimizing the impact on end users by reducing the number of restarts. Enabling this setting generally results in a more positive experience for users.

Additional Client Settings for Systems

The following additional settings areas are available for systems (see Chapter 9):

▶ Background Intelligent Transfer

▶ Cloud Services

▶ Client Cache Settings

▶ Compliance Settings

▶ Endpoint Protection

▶ Enrollment

▶ Metered Internet Connections

▶ State Messaging

▶ User and Device Affinity

You can also define client settings that can be targeted to users. This allows additional flexibility over targeting systems for these settings areas. Only a subset of the available settings areas can be deployed to users—specifically Cloud Services, Enrollment, and User and Device Affinity.

Using a Simple Schedule Versus a Full Schedule

Schedules are defined throughout ConfigMgr for various recurring activities. A full (custom) schedule is represented as a recurrence pattern and an effective date on which the schedule starts; for example, the following is a default full schedule in the console: “Occurs every 7 days effective 2/1/1970 12:00 AM.” To define a simple schedule, select the Simple Schedule radio button. Simple schedules are useful because they are relative: they trigger when the defined interval has elapsed since the last time each client performed the activity. A full schedule causes client activity to coalesce, assuming that clients are online when the schedule is due to occur.

Use simple schedules wherever possible to help distribute client load, and combine them with setting Disable Deadline Randomization to No in the Computer Agent settings area to further distribute that load. In specific scenarios that require precision, use a full schedule and select Disable Deadline Randomization.
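The behavioral difference between the two schedule types can be sketched as follows. This is an illustrative model, not ConfigMgr's implementation; the dates, the 2-hour randomization window, and the client histories are hypothetical.

```python
import random
from datetime import datetime, timedelta

def next_run_simple(last_run, interval):
    # Simple schedule: relative to when *this* client last ran the activity,
    # so clients naturally drift apart and load is spread out.
    return last_run + interval

def next_run_full(effective, interval, now):
    # Full schedule: aligned to a fixed effective date, so all online clients
    # compute the same tick (e.g., "every 7 days effective 2/1/1970").
    ticks = (now - effective) // interval + 1
    return effective + ticks * interval

def randomized(deadline, window=timedelta(hours=2)):
    # With deadline randomization enabled, each client adds its own offset.
    return deadline + timedelta(seconds=random.uniform(0, window.total_seconds()))

week = timedelta(days=7)
effective = datetime(1970, 2, 1)
now = datetime(2024, 1, 10, 15, 30)

# Every client computes the same next full-schedule tick...
tick_a = next_run_full(effective, week, now)
tick_b = next_run_full(effective, week, now)

# ...while simple schedules differ per client based on each one's history.
client1 = next_run_simple(datetime(2024, 1, 9, 8, 0), week)
client2 = next_run_simple(datetime(2024, 1, 10, 14, 0), week)
```

The full-schedule ticks coincide for every client, which is exactly the coalescing behavior randomization is designed to offset; the simple-schedule results differ because each depends on a client's own last run.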

Defining the User Experience

A key element of a ConfigMgr design is determining how ConfigMgr interacts with end users. This is often forgotten in the rush to meet IT requirements such as application distribution SLAs and security requirements such as security update compliance. Some assume that ConfigMgr should not interact with end users and should remain in the background, with the best user experience being no user experience.

Getting feedback about your business users’ expectations of how their devices should be managed is critical to successful ConfigMgr implementation and operations. While this does not have to include direct engagement with end users, having stakeholders provide input and sign off on the user experience ensures business buy-in on how devices are managed.

User interaction is not necessarily the goal, and user choice and consent should never override the governance and policies that maintain the security and compliance of a user’s device. However, where there is no direct conflict between choice and security, allowing end users to decide, opting in or out of default behavior, can go a long way toward improving satisfaction with IT.

REAL WORLD: USER EXPERIENCE MODELING

Consider choosing whether a user should be prompted to install software updates or whether they should install silently. IT typically wants silent installation, without user prompting. The issue with a silent install is that software updates almost always require a restart, and users often request incredibly long restart countdowns. Even for updates that do not request a restart, installation may impact the device’s performance.

An alternative to silent forced installs and long restart counters is to allow a user to have a grace period prior to the deployment deadline, enabling the user to choose the most convenient time for servicing to occur on the device. This is a simple way to improve the end-user experience with ConfigMgr—as well as perceived control. Security is maintained as the deadline is still enforced, but the user could choose to opt in early. Users who do not want to receive a prompt can define business hours and allow ConfigMgr to automatically install outside those hours.

In some cases, the nature of the overall business or a specific department can help determine the optimal user experience. Consider a call center with shift workers, where the minimum number of desktops required to run a shift could be used to define maintenance windows in ConfigMgr. Such maintenance windows allow installations and restarts to occur without impacting the business function of the call center. Each shift could also be told which machines not to use because they will be restarted, and when those restarts will occur. Communication is critical, as otherwise the behavior may appear random to users.

ConfigMgr provides various powerful methods for configuring and controlling devices. Those capabilities should not be used to the detriment of the end-user experience. Ultimately, ConfigMgr should not get in the way of doing work; careful consideration during the design and planning phase can help mitigate the risk of impacting business departments.

Planning for External Device Management

This section discusses the components that deliver additional capabilities to a device management solution. These components do not in themselves provide service capability; they extend the reach and capability of ConfigMgr to provide those service capabilities to remote PCs and mobile devices. In the case of mobile devices, these capabilities can help consolidate systems into a single solution. Internet-based client management enables you to continue managing devices after they leave your organization and to manage devices that never enter your network. The following sections cover the high-level planning that goes into a Configuration Manager design. Details of these capabilities can be found in Chapters 9 and 16.

Planning for Internet-Based Clients

Internet-based client management (IBCM) provides a secure connection between the ConfigMgr client and the client-facing site systems (MP, DP, SUP, and so on) while the client is on the Internet. IBCM has been available since ConfigMgr 2007, and ConfigMgr Current Branch has fewer requirements for it than ConfigMgr 2007 did.

IBCM allows the ConfigMgr infrastructure to communicate with a client on an untrusted network. While its name implies that this is purely the Internet, IBCM has uses outside pure Internet-connected scenarios. It can support clients on untrusted networks, such as servers in a perimeter network or DMZ. This includes partner or cross-organization networks. IBCM can also support clients on networks where client authentication is not possible.

Figure 4.7 provides a high-level network topology view of a typical IBCM deployment. For the sake of clarity, the diagram does not show the following communication methods:

▶ Site server-to-site system file replication (TCP/445)

▶ Site server-to-site system role installation (TCP/445 and WMI over RPC)

▶ IBCM SUP to intranet SUP communication (TCP/8530, TCP/8531, TCP/443, or TCP/80, depending on WSUS and IIS installation configuration)

It also does not show IBCM site system to AD domain controller traffic.


FIGURE 4.7 IBCM high-level network architecture.

The network and site system topology required to support IBCM was often a limiting factor in its use. In ConfigMgr Current Branch version 1610, Microsoft introduced the CMG, which eliminates the need to deploy dedicated servers in your DMZ. The CMG is deployed as one or more Azure VMs (using Azure’s Platform as a Service [PaaS] model rather than its Infrastructure as a Service [IaaS] model). A CMG connection point role is deployed on-premise and communicates with the CMG VMs in Azure. You can create multiple CMGs for improved availability.

There is currently a list of unsupported features when using the CMG. For more information on the CMG, the latest information on supported scenarios, its costs, and Azure requirements, see https://docs.microsoft.com/sccm/core/clients/manage/plan-cloud-management-gateway.

IBCM Requirements

Client connections from the Internet or an untrusted network require a higher level of security than intranet-based clients. For this reason, IBCM requires client authentication certificates issued by a PKI. Using a PKI-issued certificate allows ConfigMgr to increase the security of the connection, as the default behavior of intranet-based ConfigMgr clients is to generate a self-signed certificate for authentication to the MP. In contrast, a PKI certificate is issued by an independent authority (the PKI) and helps ensure that the client is trustworthy. The issuing authority of the client authentication certificate must be trusted by the site system servers.

IBCM requires that client-facing site systems—such as the MP, DP, and SUP—be published on the Internet, which you can achieve by enabling existing site system servers to accept Internet connections. This requires that the site system be configured to use HTTPS for both Internet and intranet client communication.

Alternatively, in large and/or secure environments, you might deploy dedicated Internet-facing site systems that are configured to only support HTTPS and only accept connections from Internet-based clients; they would actively reject intranet-based clients. These site systems are often deployed into DMZs or perimeter networks. While the site systems must still be joined to an AD domain, that domain can be specifically created for the DMZ or perimeter network and potentially only to support those site systems. In addition, this domain/forest does not have to be trusted by the primary site’s domain/forest. You can configure the site server and site system to each use service accounts from the other server’s domain for communication.

As an additional security precaution, you could configure the site server to pull information from the Internet-facing site systems rather than having those systems push it. Core communication occurs over Server Message Block (SMB, TCP/445) and Remote Procedure Call (RPC, TCP/135 plus a random high TCP port). With the site server initiating outbound connections to the site system, you can configure intervening firewalls or router access control lists (ACLs) to allow only outbound connections from a high-trust network (your intranet) to a lower-trust network (your DMZ/perimeter network) and block inbound connections from the site system to the site server.

The site system’s HTTPS port (TCP/443 by default) must be open to the Internet or published via reverse proxy; exact publishing methods vary based on your organization’s network. Discuss the publishing methods available in your environment with your network team.

The Internet-based MP requires a read-only connection to the site database. This requires that a SQL Server connection port (TCP/1433 by default) be open inbound from your DMZ/perimeter network into your corporate network. If this is not acceptable to your network and/or security teams, you can deploy a replica database into the DMZ/perimeter network, although that increases costs. Alternatively, you can deploy a reverse proxy in the DMZ/perimeter network to proxy and perform SSL bridging while keeping the MP within your corporate network (or an intermediate network, if one is available).

The HTTPS connection’s SSL/TLS tunnel may be bridged or tunneled if published via a proxy; if the port is opened directly, the tunnel is established directly to the server. Microsoft recommends publishing IBCM systems via a reverse proxy or similar publishing solution and using TLS/SSL bridging rather than tunneling, because connections are then authenticated at the network edge before they reach the site system server, rather than being authenticated at the site system server itself. See https://docs.microsoft.com/sccm/core/clients/manage/plan-internet-based-client-management for additional information.

Client Roaming Behavior with IBCM

IBCM does not support roaming between primary sites. Networks providing Internet connectivity often use network address translation (NAT), a web proxy server (transparent or explicit), or other translation/proxy methods to protect client systems and reduce the number of IPv4 addresses used. This translation makes it impossible for the ConfigMgr client to determine the actual IP address or subnet making the network connection, which means the client cannot accurately determine its physical location or its closest primary site.

You must provision Internet-facing site systems at each primary site. Determine whether you can publish servers to the Internet from all your datacenters. If not, you could host each site’s Internet-facing site systems in a single location, but traffic would then have to traverse datacenter-to-datacenter WAN links to reach the site server and site database. Alternatively, you could use Microsoft Azure to host the Internet-facing site systems; this is an especially viable option when your organization has an ExpressRoute connection per datacenter. If you do not need multiple primary sites for scalability and you require IBCM, it might be simpler to design for a single primary site.

Using the CMG and Cloud DP in Place of IBCM

As discussed in the introduction to this section, ConfigMgr Current Branch version 1610 introduced the CMG, which vastly simplifies managing Internet-based ConfigMgr clients. The CMG can be combined with cloud DPs to provide an Azure-based footprint that serves your Internet-based clients, removing the need for on-premise servers in your DMZ and for open inbound ports on your network. It also eliminates the need to use your own Internet bandwidth to serve ConfigMgr content and policy. Conversely, it requires paying Microsoft both to host the needed PaaS VMs and for any outbound traffic from Azure to the Internet.

Prior to ConfigMgr Current Branch version 1706, to use the CMG, you had to issue client certificates to authenticate clients. In ConfigMgr Current Branch versions 1706 and later, you can configure the CMG to leverage Azure AD identity as an alternative to client certs for Windows 10 clients. This simplifies the prerequisites for utilizing the CMG, ultimately making the CMG the simplest ConfigMgr method for providing management to external clients.

Leveraging Azure AD for client authentication requires that your Windows 10 clients be Azure AD joined or domain joined and hybrid Azure AD joined. For more information on Azure AD Join and hybrid Azure AD Join, see https://docs.microsoft.com/azure/active-directory/device-management-introduction.

If you are upgrading an environment and IBCM is not deployed, using the CMG and cloud DPs can be a simple way to rapidly provide services to Internet-based clients. For more information, see the following sources:

▶ CMG: https://docs.microsoft.com/sccm/core/clients/manage/plan-cloud-management-gateway

▶ Using Azure AD with the CMG: https://docs.microsoft.com/sccm/core/clients/deploy/deploy-clients-cmg-azure

▶ Cloud DPs: https://docs.microsoft.com/sccm/core/plan-design/hierarchy/use-a-cloud-based-distribution-point

Planning for Mobile Device Management

MDM is becoming a crucial requirement for supporting mobility services. It is therefore important to determine whether you will include MDM in your ConfigMgr deployment design. In addition to its legacy MDM capabilities for Windows Embedded/CE, ConfigMgr has two key methods that support modern MDM:

▶ On-premise MDM for devices without an Internet connection: On-premise MDM was added to ConfigMgr to support customers with Windows 10 devices that lack an Internet connection. For Internet-connected devices, on-premise MDM provides a subset of the capabilities provided by Microsoft Intune and is not the Microsoft-recommended approach. Devices without Internet connectivity might include barcode scanners that run Windows 10 IoT Mobile Enterprise, which replaced Windows Embedded Handheld.

▶ Microsoft Intune integration in a “hybrid” deployment topology: ConfigMgr supports Microsoft Intune integration in a hybrid deployment topology. (See Chapter 16 for more details on setting up this topology.) With Intune hybrid, ConfigMgr is responsible for creating and deploying policy to users and devices, while Intune delivers mobile device discovery, inventory, compliance, and application installation status back to ConfigMgr. ConfigMgr supports managing Apple iOS, Google Android, and Microsoft Windows mobile devices (including Windows 10 PCs via the MDM channel). Using Intune also enables cloud-only features such as app protection policies (APP), formerly known as Intune mobile application management (MAM), which do not rely on device enrollment. APP is accessed via the Azure portal (https://portal.azure.com), under the Microsoft Intune resource type.

If you are planning to deploy Intune, a key consideration is whether to leverage Intune in a hybrid topology or keep it decoupled from ConfigMgr (known as Intune on Azure). This decision is crucial as it affects the speed at which you receive updates, with Intune standalone being updated monthly (at the time this book was published). With ConfigMgr Current Branch 1610, Microsoft has enabled customers to switch from Intune hybrid to Intune on Azure by removing the SCP. Currently, changing topologies also requires that all MDM configuration be reimplemented in Intune on Azure but does not require device reenrollment.

If you are upgrading from ConfigMgr 2012 or an older version of ConfigMgr Current Branch, you can find information on how to switch to Intune on Azure using the steps at https://docs.microsoft.com/sccm/mdm/deploy-use/change-mdm-authority. If you want to switch authorities but are not ready to move all users over at one time, Microsoft provides a mixed authority mode that allows you to transition user by user from hybrid to Intune on Azure. For details on how to use this method, see https://docs.microsoft.com/sccm/mdm/deploy-use/migrate-mixed-authority.

Microsoft recommends the Intune on Azure topology for Microsoft Intune implementations over hybrid with ConfigMgr. This is especially true as the new co-management capabilities in ConfigMgr Current Branch 1710 are not supported with hybrid (see Appendix B for more information on co-management). Microsoft has stated that it is committed to continuing to support customers on Intune hybrid. With that said, Microsoft actively asks customers to consider Intune on Azure first and provide feedback if they decide to use Intune hybrid. For a complete breakdown of the latest on choosing between hybrid and Intune on Azure, see https://docs.microsoft.com/sccm/mdm/understand/choose-between-standalone-intune-and-hybrid-mobile-device-management.

Planning for Continuous Updates

This section discusses the new servicing model in ConfigMgr Current Branch. It is important that the design and any associated deployment project consider this model as part of transitioning into operations. Understanding why Microsoft chose this approach along with strategies to manage changes helps ensure that your organization can get the most out of the new model. The model includes a completely rewritten updating process for ConfigMgr that automates replication and installation of updates.

Servicing and Updates in Current Branch

ConfigMgr Current Branch includes a new Updates and Servicing node that is available under the Cloud Services node of the Administration workspace. This is not only a notification area for new updates and releases; it is a front end to a completely rewritten servicing model for ConfigMgr that allows one-click upgrades of hierarchies with control over client behavior. This feature provides a quicker and more robust upgrade experience, which is important with the more frequent release cycle for ConfigMgr. Instead of waiting years for new features in a service pack or product release, a smaller set of updates is available multiple times a year. There are two primary reasons for the new servicing model:

- Windows 10 Servicing Model: Microsoft is constantly updating Windows 10, with feature updates currently released approximately twice a year. This speed of release is a massive change for enterprise IT. The primary OS that ConfigMgr manages is the Windows client OS. To manage an OS that is constantly updating, ConfigMgr itself must be updated at a similar frequency. There is little point in a management solution that cannot manage the latest capabilities of the devices it manages.

- Hybrid Intune MDM: Microsoft Intune is updated monthly, including both back-end service updates and, for Intune standalone, updates to the web console. For hybrid customers to access these changes, ConfigMgr must release both console and server code changes.

Updates to Current Branch are delta updates, which makes them different from service packs in previous versions. A ConfigMgr service pack was often the same size as the full product and frequently contained several years of SQL Server schema changes required to support the cumulative updates and any new features. Current Branch releases contain several months of changes, and only those since the last update. It therefore takes far less time to apply these releases.

In addition, Current Branch releases, unlike historic service packs and cumulative updates, are installed directly in the console. ConfigMgr is constantly polling Microsoft web services for new releases and automatically downloads them as they become available. Selecting Install from the Updates and Servicing node of the console causes the content to be replicated throughout the hierarchy. A prerequisite check runs; after each site reports that the prerequisite check has passed, the installation begins in top-down order from the CAS.
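The top-down flow just described (replicate content, run prerequisite checks, then install downward from the CAS) can be sketched as a breadth-first walk of the site hierarchy. The site codes below are hypothetical, and the real ordering is handled internally by the Updates and Servicing feature; this is only an illustration of the top-down principle:

```python
# Sketch of the top-down installation order described above: after content
# replication and prerequisite checks, installation proceeds from the CAS
# down through the hierarchy. Site codes here are hypothetical.
from collections import deque

def install_order(hierarchy, root):
    """Return sites in top-down (breadth-first) order, starting at the top site."""
    order, queue = [], deque([root])
    while queue:
        site = queue.popleft()
        order.append(site)
        queue.extend(hierarchy.get(site, []))
    return order

# A CAS with two primary sites; PR1 also has a secondary site.
hierarchy = {"CAS": ["PR1", "PR2"], "PR1": ["SEC"]}
print(install_order(hierarchy, "CAS"))  # ['CAS', 'PR1', 'PR2', 'SEC']
```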

Each release includes a list of new features available with that release. You can choose to make those features available throughout your hierarchy, enable only a subset of features, or not make new features available; this gives you control and additional testing time for new features, if required. Microsoft occasionally adds pre-release features, which are not meant for production usage but are features Microsoft believes require testing in production with a pilot to fully validate the feature. You do not have to enable these features or use them until they exit the pre-release stage. Usually the transition from pre-release to release occurs during the next release, though Intune features may be released in response to an Intune service change.

At the time this book was published, Configuration Manager Current Branch had had six major updates since its 1511 release in November 2015. These are listed in Table 4.1:

TABLE 4.1 ConfigMgr Current Branch Releases and Dates

ConfigMgr Current Branch 1602 (February 2016)

- First major update to Current Branch

ConfigMgr Current Branch 1606 (June/July 2016)

- Support for Windows 10 Anniversary Update (version 1607)

ConfigMgr Current Branch 1610 (October/November 2016)

- New boundary model

- Cloud management gateway

- Peer Cache

ConfigMgr Current Branch 1702 (March/April 2017)

- Windows 10 Creators Update support

- Office 365 ProPlus update management

- Conditional Access

- Enhancements to deployment management

- Android for Work support and other MDM hybrid parity features

ConfigMgr Current Branch 1706 (June/July 2017)

- Data warehouse feature

- Azure AD integration

- Peer Cache and SQL Server availability group enhancements

- Ability to run PowerShell scripts directly from the console

- Management of Microsoft Surface driver updates

- Integration with Windows Analytics

- Enhancements to Device Guard policies

- Intune hybrid MDM capability enhancements

ConfigMgr Current Branch 1710 (November/December 2017)

- Support for co-management of Windows 10 devices by both ConfigMgr and Microsoft Intune simultaneously (see Appendix B)

- Support for configuration of Windows Defender Application Control, Application Guard, and Exploit Guard

As Table 4.1 shows, Microsoft has followed a release schedule of approximately three times a year. It is critical that you have a plan for handling these updates as part of an operational procedure to get the most value out of your investment in ConfigMgr. Also keep in mind that Microsoft supports each release only for a single year. For example, version 1602 was not supported as of March 2017.
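The one-year support window can be expressed as simple date arithmetic. The sketch below assumes the 12-month window described above; the release dates used are illustrative, not authoritative, so always confirm against Microsoft's supported-versions documentation:

```python
# Sketch: approximate end-of-support calculation for Current Branch releases,
# assuming the 12-month support window described in the text. Release dates
# below are illustrative placeholders, not authoritative dates.
from datetime import date

SUPPORT_MONTHS = 12

def end_of_support(release_date):
    """Add SUPPORT_MONTHS to a release date (day-of-month clamping omitted)."""
    months = release_date.month - 1 + SUPPORT_MONTHS
    return date(release_date.year + months // 12, months % 12 + 1, release_date.day)

releases = {"1602": date(2016, 3, 1), "1710": date(2017, 11, 1)}
for version, released in releases.items():
    print(f"{version}: supported until approximately {end_of_support(released)}")
```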

In comparison, ConfigMgr 2012 R2 had a quarterly release cycle of cumulative updates. These updates were not required for supportability but were often recommended as part of standard troubleshooting by Microsoft Support. The installation of urgent hotfixes required installing the most recent cumulative update. These updates were released more frequently than Current Branch releases. Cumulative updates were more difficult to deploy as the updates had to be distributed and installed on each site server in order, and client updates had to be deployed. Generally, it should take less time overall to deploy a Current Branch release than a cumulative update. It is important to take all this into consideration as part of your operational plans for Current Branch.

Testing and Release Management of Current Branch Releases

Considering how to manage the regular release schedule of Current Branch is a crucial element of your ConfigMgr design. Previously, this planning could be left to operational teams, given its infrequency. Those teams would know that a service pack release meant that a 12-month countdown had started, giving them a year to upgrade before they lost support.

With ConfigMgr Current Branch, each release includes a one-year supportability countdown similar to a service pack, but these releases occur approximately three times a year. Working with your operational teams and making the operational procedures associated with servicing part of your handover to your operational teams is key to both keeping in line with Microsoft’s support policies and leveraging the most value out of ConfigMgr Current Branch (and potentially Windows 10 and/or Intune hybrid investments). Read Microsoft’s support policy for ConfigMgr Current Branch versions at https://docs.microsoft.com/sccm/core/servers/manage/current-branch-versions-supported.

The most common model for release and change management is a pair of test and production environments, where all changes go through testing and are then released to production. Some environments have multiple stages of test environments, referred to as development, staging, pre-production, integration, and user acceptance testing. Ultimately, ConfigMgr Current Branch releases should cascade from your test environment(s) to production like any other change.

It is also important to determine whether you will enable a feature as part of deploying a new update. Just because a feature exists does not mean you must enable it. It makes sense to get on the latest build as a first priority and then determine if and when you will make those features available, especially in a large organization where you operate the ConfigMgr infrastructure on behalf of other teams.

The Updates and Servicing node also allows you to configure a pre-production collection of clients that automatically receives the new ConfigMgr client. After performing any additional testing required by your change and release management processes, the updated client can be released to production clients (all clients not in the pre-production collection). ConfigMgr provides in-console monitoring for the entire process, including the server upgrade and client upgrade elements.

Microsoft often provides first-wave and general access for the same release, which means you can select the speed at which you receive access to a release. You could place your test environment on the first wave to provide additional time to test new releases. (The amount of time varies with every release.) The scripts that enable access to the first wave are published with the announcement of each release on Microsoft’s Enterprise Mobility and Security Blog, at https://cloudblogs.microsoft.com/enterprisemobility/?content-type=announcements&product=system-center-configuration-manager.

You may also want to consider ConfigMgr Current Branch Technical Preview. This is similar to the old open betas for ConfigMgr 2007 and 2012, except Technical Preview is a discrete branch of updates that are released as a perpetual 90-day evaluation (with each update resetting the timer). This is similar to the Windows Insider program for Windows 10. Updates are released monthly or every other month, and Technical Preview enables you to experiment with features before they are released and provide feedback to Microsoft through the Configuration Manager UserVoice page (https://configurationmanager.uservoice.com/). Sites that are installed from Technical Preview media are updated from the console directly, like sites installed from production media. It is not possible to move from production to Technical Preview or vice versa. Features included in Technical Preview may not be included in the next production release; something being included in Technical Preview does not imply a release schedule.

CAUTION: DO NOT USE TECHNICAL PREVIEW FOR PRODUCTION TESTING

Avoid using Technical Preview in test labs where you test production changes. Technical Preview is almost a completely isolated branch of releases, with features that may not be available for several public production releases. Deploying Technical Preview to the lab where you perform production testing would invalidate that environment, as it would no longer be representative of production.

It also would not make for a good test, as you would not be testing the same thing. If you do not have an environment available for Technical Preview, either run it on a single VM or skip using Technical Preview.

The following is an example of a release cycle plan for Current Branch:

- Include an agreed-upon and pre-planned process for handling Current Branch releases for your operational team once in production.

- Cascade ConfigMgr Current Branch releases through your test environments and into production.

- If you need to carefully manage your environment, do not enable new features immediately; follow change and release management processes for each feature.

- Keep in mind that you can use the pre-production clients feature to phase the rollout of the new ConfigMgr client included in each ConfigMgr Current Branch release. This allows you to perform production pilots, which can be useful even in test environments.

- Where possible, use Technical Preview to get continuous insight into what is coming in new releases. You will then have fewer surprises with each ConfigMgr Current Branch release.

Planning for Restorability and Recoverability

The following sections discuss planning for backup, restoration, and recovery, discussing the various supported backup methods. The sections explain how to build a design that includes restorability and recoverability in addition to availability. A design lending itself to restoration and/or recovery from a serious outage is as critical as one that is highly available. It is also important to plan for a supported restoration/recovery method, as you do not want to face limited support from Microsoft during an outage.

For information on configuring backups and performing restores, see Chapter 24.

Availability, Restorability, and Recoverability

It is important to differentiate between several similar and interlinked terms related to backup and recovery:

- Availability: Availability relates to how available the service or solution is and can be addressed both proactively and reactively. Proactive measures are architectural decisions designed to improve the fault tolerance of your ConfigMgr infrastructure, such as clustering SQL Server databases and deploying multiple management points (MPs). Reactive measures are what you can do to recover quickly from a fault to restore availability.

- Restorability: Restorability relates to the ability to restore stateful elements of the service from backup. Stateful is an important qualifier, as stateless elements of the service (such as an MP) can be rebuilt without data loss. Restorability is specifically called out because it differs from recoverability: you may be dealing with corruption or human error rather than rectifying a fault. The ability to restore is equally important to disaster recovery, but you are more likely to need to restore from backup than to perform a complete disaster recovery, as you need the ability to recover data whenever data corruption or human error occurs.

- Recoverability: Recoverability relates to the ability to recover from a disaster, such as the loss of a datacenter or another major event. Having a restorable service does not imply recoverability, but restorability is often critical to recoverability. Recoverability also implies having processes and procedures in place that allow someone unfamiliar with the service to recover it.

These three elements should be considered in your ConfigMgr design to ensure the following:

- Preventing or mitigating faults by planning for availability

- Restoring after corruption and human error by planning for backup and restorability of those backups

- Recovering from serious disasters by planning for recoverability

Do not neglect any element, or you risk being unprepared for a low-risk but high-impact disaster or a higher-risk but lower-impact human error.

Determining Your Recovery Time and Point Objectives

An important step in planning any recovery process is determining your recovery time objective (RTO) and recovery point objective (RPO). These objectives determine the requirements your design needs to meet. Your RPO defines how much data, measured in time, your organization is willing to lose as part of a recovery. For example, if backups occur only once a day, your RPO is up to a single day. If you only replicate backups to your alternate datacenter once a week, your DR RPO is one week. You should consider how much work you are willing to lose during a recovery. Perhaps for the duration of a critical global project that relies on ConfigMgr with globally distributed delegated administration, you need to run more than one SQL Server database backup a day to ensure that the work of multiple administrators is not lost in the event of an outage.
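The RPO reasoning above reduces to simple arithmetic: the data you stand to lose is bounded by the interval between backups and, for DR, by the additional interval between backup replications. A minimal sketch, with illustrative intervals:

```python
# Sketch: worst-case RPO arithmetic. Intervals are in hours; the values used
# below (daily backups, weekly replication to a DR site) mirror the examples
# in the text and are illustrative, not recommendations.

def worst_case_rpo(backup_interval_h, replication_interval_h=0):
    """Local data loss is bounded by the backup interval; for DR, add the
    interval between backup replications to the alternate datacenter."""
    local_rpo = backup_interval_h
    dr_rpo = backup_interval_h + replication_interval_h
    return local_rpo, dr_rpo

local, dr = worst_case_rpo(24, 7 * 24)  # daily backups, weekly replication
print(f"Local RPO: up to {local} hours; DR RPO: up to {dr} hours")
```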

Your RTO defines how quickly service can be restored. You may have different RTOs for normal faults (disk failure, human error) versus DR scenarios. In general, your RTO for normal faults is the shortest time possible, while a DR is necessarily more complex and often has a higher RTO.

Consider the business and IT importance and impact of ConfigMgr when determining RTO and RPO. Do not simply adopt the objectives used for your business-critical systems because they can be met; justify the objectives with empirical data to support the funding needed for backup, recovery testing, and other operational procedures.

ConfigMgr can be used to underpin recovery of other systems. For example, it might be used to build PCs shipped to an alternate site to recover a business-critical system. This constrains that business system's RTO, which cannot be shorter than ConfigMgr's RTO.

REAL WORLD: AVOID USING UNSUPPORTED BACKUP METHODS

Alternative DR and service continuity methods are often used to speed up server recovery, particularly with VMs. These methods utilize storage snapshots of the entire server and data replication to move the data between geographic sites. These methods are not supported by Microsoft’s Configuration Manager product team. This does not mean that they do not work but rather that they have not been tested. For this reason, it is always a good idea to use a supported backup method (see “Planning for Backup” in this chapter and Chapter 24).

There are good reasons to use supported methods: They have been extensively tested by Microsoft, and they are what Microsoft Support is familiar with. This is important for DR scenarios where the ConfigMgr administrator might not be available and your organization may be relying on someone unfamiliar with ConfigMgr to recover a system requiring Microsoft Support’s assistance. There are other considerations if you have a hierarchy, as your sites are replicating data sequentially at the database level. If any site reverts to an older version of the database, it is likely to break replication. In addition, you need to use supported backup methods to provide for scenarios where there is database corruption or human error (for example, a deletion of a large number of systems or objects from the database). Replicating server storage between geographic sites would just replicate an error in the database.

In some scenarios, using an unsupported backup or storage replication option may be unavoidable due to organizational standards or RTO requirements. In such situations, it is still a good idea to have a backup process to provide a safety net and address the concerns highlighted in the previous paragraph.

Planning for Backup

There are two main supported backup methods in ConfigMgr:

- ConfigMgr Backup: This backup is conducted by the site server and backs up the file system, Registry, and site database into a folder that can then be backed up normally using any file backup tool.

- SQL Server Backup: Support for SQL Server backup was introduced with ConfigMgr 2012. SQL Server backups are not specific to ConfigMgr and open up the ability to use any Microsoft-supported SQL Server backup method.

ConfigMgr backup is the most commonly used method and has the fewest dependencies on other technologies and the teams that run those technologies. No one needs to configure SQL Server to perform backups, and the Windows storage snapshots are handled by the site server. However, this method often results in very large backups, especially in environments with hundreds of thousands of clients, even though much of the information stored in the file system under the ConfigMgr installation folder is not critical to the recovery of the site.

SQL Server backup provides several key benefits over traditional ConfigMgr backup. It gives your SQL Server database administrators flexibility to choose the database backup model that suits them best. SQL Server backups can be heavily compressed in large environments, reducing the time to replicate the backup and allowing more backups to be maintained. You can also use features such as SQL Server Managed Backup if hosting your ConfigMgr infrastructure in Microsoft Azure. If you use SQL Server backup, you will lose any files being processed in the ConfigMgr inboxes at the time of a failure. In general, this is not a major concern unless you regularly have backlogs of files in those inboxes. (A backlog that persistent should be addressed separately anyway.)
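As a rough illustration of the compression benefit, the number of backups you can retain within a fixed storage budget scales with the compression ratio. The figures below are invented for illustration; measure your own database and backup sizes before planning retention:

```python
# Sketch: how many backups fit a fixed storage budget at different SQL Server
# compression ratios. All figures are invented for illustration only.

def retainable_backups(budget_gb, backup_gb, compression_ratio=1.0):
    """compression_ratio is the size-reduction factor: 4.0 means the
    compressed backup is one quarter of its uncompressed size."""
    return int(budget_gb // (backup_gb / compression_ratio))

budget_gb, backup_gb = 500, 120  # 500 GB of backup storage, 120 GB backup
print(retainable_backups(budget_gb, backup_gb))       # uncompressed
print(retainable_backups(budget_gb, backup_gb, 4.0))  # with 4:1 compression
```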

Certain items cannot be recovered using either method. These include certificates and passwords stored in the database, which are backed up by both ConfigMgr backup and SQL Server backup but are stored encrypted. The encryption key is not backed up for security reasons, which means any certificates and passwords must be reentered after the site is restored. Package and application source files are not backed up; however, if stored on a remote file server, that file server should have its own backup. If you are using Intune hybrid, the authors recommend opening a support case so that Microsoft Support can assist with recovering the site and ensure that the connectivity between your tenant and the restored site is healthy.

Summary

This chapter discussed the key elements of developing a ConfigMgr design. It covered requirements gathering and the scoping of the technical elements of a design. It also provided an overview of both client and server design, including site and site system scalability and placement, along with client discovery and settings. (For further details, see Chapter 9.) Content management and network design were also briefly covered; for a full discussion, see Chapters 14 and 5. The chapter also discussed how to maintain and manage ConfigMgr Current Branch, given the new release schedule and evergreen/always-updating servicing model. To gain a better understanding of co-management and how it helps modernize the management of Windows 10, see Appendix B. Follow up this chapter with the various in-depth chapters based on your requirements and planning for deploying ConfigMgr.
