Firewall Scenarios and Configuration

Properly configured firewalls and reverse proxy servers provide a strong first line of defense in your Office Communications Server infrastructure. The goal is to design a defense-in-depth strategy, in which the firewalls play the initial role. Regardless of how external communication reaches your internal deployment, the traffic must be confined so that only expected data flows of a predictable type are allowed into your domain and enterprise. This is the express purpose of the firewalls and reverse proxies.

A number of firewall configurations can be implemented. Two of these implementations will be discussed and detailed in the following sections:

  • Back-to-back firewall

  • Three-legged firewall

Back-to-Back Firewall

The back-to-back firewall, shown in Figure 4-27, is named for the (at least) two firewalls that make up the design. The outermost firewall provides the first level of security. It is configured to allow only traffic destined for the servers placed between the firewalls, in the network perimeter.

Figure 4-27. Back-to-back firewall design

For Office Communications Server, the Edge Servers and reverse proxies are placed in the network perimeter between the external and internal firewalls. Before you begin creating rules for the firewall, you should gather the specific ports, protocols, and direction of traffic for each server role that you intend to deploy. The set of rules that must be configured for the firewalls will vary based on the service that each component in a consolidated Edge Server will provide.
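One practical way to gather those requirements is to record them in a structured checklist before touching the firewalls. The following is a minimal sketch in Python, assuming a consolidated Edge Server and using the external-interface values from Tables 4-4 through 4-6 later in this section; substitute the values from your own deployment plan.

```python
# Hypothetical pre-deployment checklist of external-interface requirements for
# a consolidated Edge Server. Values mirror Tables 4-4 through 4-6 later in
# this section; adjust to match your own plan before creating firewall rules.
EDGE_ROLE_REQUIREMENTS = {
    "Access Edge": [
        {"direction": "inbound", "port": "5061", "proto": "TCP", "use": "SIP/MTLS"},
        {"direction": "inbound", "port": "443", "proto": "TCP", "use": "SIP/TLS"},
    ],
    "Web Conferencing Edge": [
        {"direction": "inbound", "port": "443", "proto": "TCP", "use": "PSOM/TLS"},
    ],
    "A/V Edge": [
        {"direction": "inbound", "port": "443", "proto": "TCP", "use": "STUN/TCP"},
        {"direction": "inbound", "port": "3478", "proto": "UDP", "use": "STUN/UDP"},
        {"direction": "inbound", "port": "50000-59999", "proto": "UDP", "use": "RTP/UDP"},
    ],
}

# Print the checklist, one requirement per line, grouped by server role.
for role, requirements in EDGE_ROLE_REQUIREMENTS.items():
    for req in requirements:
        print(f"{role}: {req['direction']} {req['port']}/{req['proto']} ({req['use']})")
```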

Three-Legged Firewall

The three-legged firewall is a configuration in which there is only one firewall, but there is still a need to create a port- and protocol-screened area to which data flow can be confined before being allowed into the internal network. Figure 4-28 shows a three-legged firewall configuration. The third interface on the firewall creates what is commonly referred to as a screened subnet.

Figure 4-28. Screened subnet configuration using one firewall to manage inbound and outbound traffic

A screened subnet is a separate subnet that has its own IP network address. Servers are configured to send and receive traffic on this screened subnet. The screening is the data flow control that the firewall imposes based on its rule configuration. For example, mail traffic over TCP port 25 would be forwarded only to the IP address of the mail server on the internal network. Note that in the case of an Edge Server, only the external interface is screened; the internal interface is connected to the internal subnet side of the firewall.

In the following discussions of port and protocol configuration, the diagrams and firewall rules assume a back-to-back firewall configuration. If your enterprise is using a single firewall with a screened subnet, only the external rules will apply because router rules on the internal side of the single firewall would segment the Edge Server.

Port and Protocol Configuration for Edge Servers

Office Communications Server 2007 R2 supports only a consolidated Edge Server configuration. This does not mean that there is only one set of ports and protocols to configure. A consolidated Edge Server hosts three server roles: the Access Edge Server, the Web Conferencing Edge Server, and the A/V Edge Server.

Because there are three roles on each Edge Server, you should assign three IP addresses to the external interface of the server or three virtual IPs to the hardware load balancer (HLB). This enables specific, narrowly scoped rules to be created to manage the data flow to each server role.

One configuration option to consider is to dedicate a subnet in your network perimeter to Office Communications Servers only. Set router rules so that no traffic is allowed onto the Office Communications Server subnet except traffic that is specifically destined for one of those servers or for the internal servers. This accomplishes isolation and prevents other traffic from outside your perimeter from affecting or attacking the Office Communications Servers.
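As a purely illustrative model of that isolation, the following Python sketch checks whether a destination address falls inside a dedicated perimeter subnet and, if so, permits it only when it targets a known Edge Server address. The subnet and addresses are made-up examples.

```python
import ipaddress

# Hypothetical perimeter subnet reserved for Office Communications Server
# roles only (example value; substitute your own addressing plan).
OCS_PERIMETER_SUBNET = ipaddress.ip_network("192.0.2.0/27")

# Example Edge Server addresses inside that subnet (placeholders).
KNOWN_EDGE_IPS = {"192.0.2.10", "192.0.2.11", "192.0.2.12"}

def router_permits(dst_ip: str) -> bool:
    """Model of the router rule described above: drop anything aimed at the
    OCS subnet unless it targets one of the known Edge Server addresses."""
    if ipaddress.ip_address(dst_ip) in OCS_PERIMETER_SUBNET:
        return dst_ip in KNOWN_EDGE_IPS
    return True  # traffic for other subnets is governed by other rules

print(router_permits("192.0.2.10"))  # True: a known Edge Server
print(router_permits("192.0.2.20"))  # False: unexpected host in the OCS subnet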

Examining Rules for Access Edge Servers

The Access Edge Server initiates most of the communication for the other server roles. A client contacts the Access Edge Server and requests connection to a given set of services. This makes the Access Edge Server a critical component in your environment.

The Access Edge Server handles SIP, a signaling protocol that carries messages requesting that specific actions be performed. For example, the Access Edge Server handles IM without assistance from any of the other server roles. However, if a Web conference is requested, the Access Edge Server must be involved in the setup and management of the Web conferencing for the duration of the conference.

Access Edge Servers require certificates for MTLS because they perform mutual authentication with the other servers they communicate with. This communication is established over TCP/5061, as shown in Figure 4-29.

Figure 4-29. SIP traffic through an Access Edge Server
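To make the mutual-authentication requirement concrete, the following Python sketch shows a TLS listener on TCP/5061 that refuses any peer that does not present a certificate issued by a trusted CA, which is the essence of MTLS. This is an illustration only, not code that Office Communications Server runs, and the certificate file names are placeholders.

```python
import socket
import ssl

# Illustrative MTLS listener: the server presents its own certificate and
# requires the connecting peer to present one as well. File names below are
# placeholders for certificates issued by your enterprise CA.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="edge-server.pem", keyfile="edge-server.key")
context.load_verify_locations(cafile="enterprise-ca.pem")
context.verify_mode = ssl.CERT_REQUIRED  # peer must present a trusted certificate

with socket.create_server(("0.0.0.0", 5061)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        # accept() performs the handshake; it raises an error if the peer
        # does not supply a certificate trusted by enterprise-ca.pem.
        conn, addr = tls_listener.accept()
        print("MTLS connection accepted from", addr)
        conn.close()
```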

For the firewalls, the rules would be configured to allow the settings in Table 4-4.

Table 4-4. Firewall Rules for the Access Edge Server

ACCESS EDGE SERVER SERVICE: RTCSRV

FIREWALL | DIRECTION | PORT/PROTOCOL
External | Inbound to external interface IP of Access Edge | 5061/TCP SIP/MTLS
Internal | Inbound to interface IP of FE/Director[1] | 5061/TCP SIP/MTLS
Internal | Outbound to internal interface IP of Access Edge | 5061/TCP SIP/MTLS
External | Outbound from external interface IP of Access Edge | 5061/TCP SIP/MTLS
External | Inbound to external interface IP of Access Edge | 443/TCP SIP/TLS

[1] A Director, although not required, is a recommended role that does pre-authentication of inbound SIP.
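The Table 4-4 rule set can also be expressed as data and checked programmatically, which is a useful way to validate a change request before it reaches the firewall team. The following Python sketch is a simplified model of rule evaluation, not a vendor configuration; the target names are placeholders.

```python
# Simplified model of the Table 4-4 rule set for the Access Edge Server
# (service: RTCSRV). Target names are placeholders for the actual interface
# IP addresses; this is a conceptual check, not a firewall configuration.
ACCESS_EDGE_RULES = [
    {"firewall": "external", "direction": "inbound",  "target": "access-edge-external-ip", "port": 5061, "proto": "TCP"},
    {"firewall": "internal", "direction": "inbound",  "target": "fe-or-director-ip",       "port": 5061, "proto": "TCP"},
    {"firewall": "internal", "direction": "outbound", "target": "access-edge-internal-ip", "port": 5061, "proto": "TCP"},
    {"firewall": "external", "direction": "outbound", "target": "access-edge-external-ip", "port": 5061, "proto": "TCP"},
    {"firewall": "external", "direction": "inbound",  "target": "access-edge-external-ip", "port": 443,  "proto": "TCP"},
]

def flow_allowed(firewall, direction, target, port, proto):
    """Return True when a proposed flow matches one of the configured rules."""
    return any(
        r["firewall"] == firewall and r["direction"] == direction
        and r["target"] == target and r["port"] == port and r["proto"] == proto
        for r in ACCESS_EDGE_RULES
    )

print(flow_allowed("external", "inbound", "access-edge-external-ip", 5061, "TCP"))  # True
print(flow_allowed("external", "inbound", "access-edge-external-ip", 5060, "TCP"))  # False: not in the table
```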

Examining Rules for Web Conferencing Edge Servers

Web Conferencing Edge Servers handle traffic for users outside your organization who are invited to or conduct conferences hosted on your internal Web Conferencing Servers. Specifically, these are remote, federated, or anonymous users. Internal users use the internal servers but can attend or conduct meetings with remote and federated users.

Persistent Shared Object Model (PSOM) traffic is used for Web conferences. The purpose of PSOM is to send data to and from Office Communicator/Live Meeting for the actual slide or multimedia information shown by the meeting presenter. PSOM uses port 8057 over TCP.

Authentication is performed between the server and clients by using TLS. Authentication between server roles is performed by using MTLS. A remote user who is a member of your domain uses NT LAN Manager (NTLM), and the credentials are authenticated by Active Directory. Users from a federated domain are authenticated by their own domain and allowed to interact with the Conferencing Server in your network because of the federation trust that is in place between your Office Communications Server infrastructure and the federated partner's. (For more on federation and how this trust is created and managed, refer to Chapter 8.) Anonymous attendees—people who are not members of your domain and are not members of a federated domain—authenticate by using digest authentication derived from the conference location and the unique conference key that is created for each Web conference.

An external Communicator client connects to the Edge Server by using PSOM over TCP port 443. A TLS connection is established between the Web Conferencing Edge Server and the client software. Figure 4-30 shows the data flow from the client to the Edge Server and from the internal Web Conferencing Server to the Edge Server.

Figure 4-30. Traffic to the Web Conferencing Edge Server

The flow of traffic and the rules on firewalls in your environment are shown in Table 4-5.

Table 4-5. Firewall Rules for Web Conferencing Edge Servers

WEB CONFERENCING EDGE SERVER SERVICE: RTCDATAPROXY

FIREWALL | DIRECTION AND RULE | PORT/PROTOCOL
External | Inbound to external interface IP of Web Conferencing Edge Server | 5061/TCP SIP/MTLS
Internal | Inbound to interface IP of Web Conferencing Server | 5061/TCP SIP/MTLS
Internal | Outbound to internal interface IP of Web Conferencing Edge Server | 8057/TCP PSOM/MTLS
External | Inbound to external interface IP of Web Conferencing Edge Server | 443/TCP PSOM/TLS
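The same simplified model used for the Access Edge rules can restate Table 4-5; the target names are again placeholders.

```python
# Table 4-5 restated in the same simplified form as the Access Edge sketch
# earlier in this section (service: RTCDATAPROXY). Target names are placeholders.
WEB_CONF_EDGE_RULES = [
    {"firewall": "external", "direction": "inbound",  "target": "webconf-edge-external-ip", "port": 5061, "proto": "TCP"},
    {"firewall": "internal", "direction": "inbound",  "target": "webconf-server-ip",        "port": 5061, "proto": "TCP"},
    {"firewall": "internal", "direction": "outbound", "target": "webconf-edge-internal-ip", "port": 8057, "proto": "TCP"},
    {"firewall": "external", "direction": "inbound",  "target": "webconf-edge-external-ip", "port": 443,  "proto": "TCP"},
]
```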

Examining Rules for A/V Edge Servers

The A/V Edge Server is unique because the requirements for this role are more complex than, and different from, those of the other two roles. It requires a publicly routable IP address.

The primary reason that the A/V Edge Server needs a publicly routable IP address is the nature of A/V streams: they are sensitive to latency and therefore cannot tolerate the overhead of mechanisms such as Network Address Translation (NAT), which is found in nearly all firewalls.

NAT was never intended as a security mechanism. Although it does obfuscate the actual address of the internal client or server, its real purpose is to enable a single public IP address to service thousands of users in a reserved IP address range (see Internet Engineering Task Force [IETF] RFC 1631). There is no NAT on the tunneled IP that directly exposes the A/V Edge Server. Internal clients communicating with external clients use their actual internal IP addresses. Initially, this might look like a security problem, but it is mitigated by the fact that the client and server establish a secure connection over TLS, so the connection is encrypted.

The specific reason that NAT cannot be used is better explained by the operation of Interactive Connectivity Establishment (ICE) and Simple Traversal of UDP through NAT (STUN). ICE and STUN rely on a public IP address to work properly. IETF RFC 3489 explains this in much greater detail but essentially states that STUN assumes the server exists on the public Internet. If the server is on a private address, the user may or may not be able to use the discovered address and ports to communicate with other users. Worse, there is no reliable way to detect a condition in which communication will fail.
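For reference, a STUN Binding Request is a small UDP message that asks a server on the public Internet to report the source address it saw. The following Python sketch builds an RFC 3489-style request and parses the MAPPED-ADDRESS attribute from the response; the server name is a placeholder, and the network call is left commented out.

```python
import os
import socket
import struct

# Placeholder; substitute a STUN server reachable from your network.
STUN_SERVER = ("stun.example.com", 3478)

def stun_mapped_address(server, timeout=2.0):
    """Send an RFC 3489-style Binding Request and return the reflexive
    (ip, port) reported in the MAPPED-ADDRESS attribute."""
    # Header: type=0x0001 (Binding Request), length=0, 128-bit transaction ID.
    transaction_id = os.urandom(16)
    request = struct.pack("!HH", 0x0001, 0) + transaction_id

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, server)
        data, _ = sock.recvfrom(2048)

    msg_type, msg_len = struct.unpack("!HH", data[:4])
    if msg_type != 0x0101 or data[4:20] != transaction_id:
        raise ValueError("not a Binding Response for our request")

    # Walk the TLV attributes looking for MAPPED-ADDRESS (type 0x0001).
    offset = 20
    while offset < 20 + msg_len:
        attr_type, attr_len = struct.unpack("!HH", data[offset:offset + 4])
        value = data[offset + 4:offset + 4 + attr_len]
        if attr_type == 0x0001:
            port = struct.unpack("!H", value[2:4])[0]
            ip = ".".join(str(b) for b in value[4:8])
            return ip, port
        offset += 4 + attr_len + (-attr_len % 4)  # attributes are 32-bit aligned
    raise ValueError("no MAPPED-ADDRESS attribute in response")

# ip, port = stun_mapped_address(STUN_SERVER)
# print("Server-reflexive address:", ip, port)
```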

Another area of uniqueness for the A/V Edge Server is the number of TCP and UDP ports that might optionally need to be open for it to work correctly in a federation scenario. By default, TCP and UDP ports 50,000 through 59,999 must be open for federation.

These 10,000 ports are required only if you are using the A/V or Application Sharing Server role with a federated partner. Remote users accessing the A/V Edge Server will not use this port range. The range of 10,000 UDP ports is associated only with RTCMEDIARELAY, the service that hosts A/V on the Edge Server. The A/V Edge Server service allocates ports at random only when they are assigned to incoming clients. These ports are then communicated to the client over SIP, notifying the client which ports to connect to. Although the firewall might allow access to these ports on the A/V Edge Server, the ports are not opened by the services on the A/V Edge Server until they are allocated and communicated to the client.
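A simplified model of that on-demand allocation, assuming nothing beyond what the paragraph above describes, might look like the following; the real service manages this internally and reports the chosen ports to the client over SIP.

```python
import random

MEDIA_PORT_RANGE = range(50000, 60000)  # the federation media range described above
allocated = set()

def allocate_media_port():
    """Pick an unused port from the media range, as a conceptual illustration
    of on-demand allocation; the real service handles this internally."""
    free = [p for p in MEDIA_PORT_RANGE if p not in allocated]
    if not free:
        raise RuntimeError("media port range exhausted")
    port = random.choice(free)
    allocated.add(port)
    return port  # in the real flow, this value is sent to the client over SIP

print(allocate_media_port())
```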

UDP is preferred over TCP for A/V Edge Server use, although application sharing always uses TCP. UDP is prioritized over TCP because UDP does not have the overhead that TCP does. UDP does not perform a handshake; it merely sends a packet and, without waiting for a response, sends the next. This is a much more efficient method for A/V traffic, where the client at the receiving end can compensate for packets that might be dropped but can do nothing about severe latency in the data stream that might be imposed by TCP overhead delays.
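The difference in overhead is visible at the socket level: a UDP datagram is sent with no connection setup, whereas TCP must complete its handshake before the first byte of media moves. The following Python comparison over the loopback interface is purely illustrative.

```python
import socket

# UDP: no handshake; a datagram is simply sent toward the receiver.
udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_rx.bind(("127.0.0.1", 0))
udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_tx.sendto(b"rtp-packet", udp_rx.getsockname())  # one send, no setup round trips
print(udp_rx.recvfrom(2048)[0])

# TCP: the three-way handshake must complete before the first byte of data
# moves, and lost segments are retransmitted, adding latency to a live stream.
tcp_rx = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_rx.bind(("127.0.0.1", 0))
tcp_rx.listen(1)
tcp_tx = socket.create_connection(tcp_rx.getsockname())  # handshake happens here
conn, _ = tcp_rx.accept()
tcp_tx.sendall(b"rtp-packet")
print(conn.recv(2048))

for s in (udp_rx, udp_tx, tcp_tx, conn, tcp_rx):
    s.close()
```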

Figure 4-31 shows the port/protocol traffic associated with the A/V Edge Server.

Figure 4-31. A/V Traffic in the typical Edge configuration

Table 4-6 defines the firewall rules that must be configured to ensure a properly operating A/V Edge Server.

Table 4-6. Firewall Rules for the A/V Edge Server

A/V EDGE SERVER SERVICES DATA: RTCMEDIARELAY
A/V EDGE SERVER SERVICES AUTHENTICATION: RTCMRAUTH

FIREWALL | DIRECTION AND RULE | PORT/PROTOCOL
Internal | Outbound to A/V Edge Server internal interface IP | 443/TCP STUN/TCP
Internal | Outbound to A/V Edge Server internal interface IP | 5062/TCP SIP/TLS
Internal | Outbound to A/V Edge Server internal interface IP | 3478/UDP STUN/UDP
External | Inbound to A/V Edge Server external interface IP | 443/TCP STUN/TCP
External | Inbound to A/V Edge Server external interface IP | 50,000-59,999/TCP RTP/TCP
External | Inbound to A/V Edge Server external interface IP | 3478/UDP STUN/UDP
External | Inbound to A/V Edge Server external interface IP | 50,000-59,999/UDP RTP/UDP
External | Outbound from A/V Edge Server external interface IP | 50,000-59,999/UDP RTP/UDP
[*]External | Outbound/inbound to/from A/V Edge Server external interface IP | 50,000-59,999/UDP RTP/UDP

[*] Only needed when federating with a partner using Office Communications Server 2007. Rules should be reconfigured when all partners have upgraded to Office Communications Server 2007 R2.

When initiating A/V communication, the client sends a series of SIP packets across the firewall to the Access Edge Server. This happens because of the way that ICE works. Each client allocates a port at the local endpoint (itself) and at the remote endpoint (the external edge of the A/V Edge Server). These allocated ports are called candidates. Because of the need to reduce latency as much as possible, each client needs both candidates. Once these candidates are acquired, they are sent to each of the other clients joining the same A/V conference. The candidates are communicated over the established SIP channel. When these candidates are received, each client builds a matrix of possible combinations. The clients each use the matrix of possible connections to send test packets to determine which port combination succeeds. ICE logic is used to accomplish this, and it prefers UDP over TCP. When the best path is determined, the clients proceed with the A/V conference.
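A conceptual sketch of that candidate-pairing step is shown below in Python: each side contributes local and relayed (A/V Edge) candidates, the matrix of combinations is built, and UDP pairings are tried before TCP. The addresses are placeholders, and this is a model of the logic, not the actual ICE implementation.

```python
from itertools import product

# Each client ends up with (at least) a local candidate and a candidate
# allocated on the external edge of the A/V Edge Server. Addresses are
# placeholders; transports are simplified to "UDP"/"TCP".
caller_candidates = [
    {"addr": "192.168.1.10:50020", "transport": "UDP", "type": "local"},
    {"addr": "203.0.113.5:52010",  "transport": "UDP", "type": "relay"},
    {"addr": "203.0.113.5:443",    "transport": "TCP", "type": "relay"},
]
callee_candidates = [
    {"addr": "10.0.0.7:50044",     "transport": "UDP", "type": "local"},
    {"addr": "198.51.100.9:53011", "transport": "UDP", "type": "relay"},
    {"addr": "198.51.100.9:443",   "transport": "TCP", "type": "relay"},
]

# Build the matrix of possible pairings (only matching transports can talk),
# then order it so UDP pairs are tested before TCP, as ICE logic prefers.
pairs = [
    (a, b)
    for a, b in product(caller_candidates, callee_candidates)
    if a["transport"] == b["transport"]
]
pairs.sort(key=lambda pair: 0 if pair[0]["transport"] == "UDP" else 1)

for a, b in pairs:
    print(f"test {a['transport']}: {a['addr']} <-> {b['addr']}")
# The first pairing whose test packets succeed carries the A/V conference.
```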

Examining Rules for the Web Components Server

Office Communications Server 2007 R2 uses a Web service managed by the Web Components Server role to enable users to join a Web conference session, upload and download documents during a Web conference, expand distribution groups (DGs), and download the Address Book when connecting externally. Office Communications Server also offers another server role, called Communicator Web Access, for users to sign in to Office Communications Server by using a Web browser.

Internal users can connect to both of these server roles directly. For remote users and anonymous users, these server roles must be accessible externally from the Internet. The recommended way to securely expose your Web Components Server is through an HTTPS reverse proxy.

Users connect directly to the reverse proxy over a secure connection (HTTPS) by using a published URL. The reverse proxy then proxies the client request over another HTTPS connection to the Web Components or Communicator Web Access server through a private URL. This is shown in Figure 4-32.

Figure 4-32. HTTPS reverse proxy

The reverse proxy is deployed in the perimeter network, whereas the Web Components and Communicator Web Access Servers are deployed on the internal network. Any reverse proxy can be used; however, Microsoft has tested only ISA Server 2006. For more information about how to securely publish Web applications to the Internet by using ISA Server, see the TechNet article at http://go.microsoft.com/fwlink/?LinkID=133670.
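The essential behavior of such a publishing rule can be illustrated in a few lines: a request for the published external URL is re-issued by the proxy against the corresponding private URL over a second HTTPS connection. The following Python sketch models only the URL translation, and the host names are placeholders.

```python
from urllib.parse import urlsplit, urlunsplit

# Placeholder names: the published (external) host and the private (internal)
# host of the Web Components / Communicator Web Access servers.
PUBLISHED_HOST = "meetings.contoso.example"
PRIVATE_HOST = "webcomponents.corp.contoso.example"

def to_private_url(published_url: str) -> str:
    """Rewrite a request for the published URL into the private URL that the
    reverse proxy forwards to over its own HTTPS connection."""
    parts = urlsplit(published_url)
    if parts.scheme != "https" or parts.hostname != PUBLISHED_HOST:
        raise ValueError("request does not match the published rule")
    return urlunsplit(("https", PRIVATE_HOST, parts.path, parts.query, parts.fragment))

print(to_private_url("https://meetings.contoso.example/conf/join?id=12345"))
```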
