Internet and Web Activities

The Internet was established so universities and government organizations could communicate more quickly and share information easily. It not only provided a different communication path but also opened the door to the possibility of mass communication, as well as a new and exciting mechanism that could provide layers of functionality and potential for individuals and businesses all around the world.

In its early days, communication on the Internet consisted mainly of nongraphical e-mail, news groups, and File Transfer Protocol (FTP) sites used to exchange files. When the Hypertext Markup Language (HTML) came to life, people were able to build websites that presented their concepts and ideas graphically. These sites provided static pages with a small amount of capability to accept information from Internet users through forms and scripts. When entrepreneurs realized the Internet was a new money-making forum for advertising and selling, sites became more abundant, web pages became more complex, and more products and services were offered. Companies started integrating this new communication mechanism into their business models.

A game of leapfrog began between telecommunication capabilities, Internet protocols, hardware platforms, and supporting applications. As HTML became more dynamic, web server applications were developed to manage these web pages and the back-end processes. This increased the need for more hard drive space, processing power, and memory accessible to the applications. Protocols evolved and matured to create a more stable and meaningful experience on the Internet. These protocols enabled confidential information to stay secret and provided the necessary level of integrity for data being transmitted. Web servers grew more powerful as they offered more functionality to users. As more and more sites were linked to each other, this layer of interconnected content on top of the Internet developed into the World Wide Web.

The Web is actually a layer that operates on top of the Internet. The Internet provides the hardware, platforms, and communication mechanisms, whereas the Web provides the abundant software capabilities and functionality. Figure 2-3 illustrates this difference.

Figure 2-3. There is a difference between the Internet and the World Wide Web. The Web is a layer that exists on top of the Internet.


As companies connected their networks to the Internet and brought their services to the Web, they connected to the world in an entirely new way. It is a great marketing tool for a business to enable thousands or millions of people to view its product line, understand its business objectives, and learn about the services it offers. However, this also opens the doors to others who are interested in finding out more about the company’s network topology and applications being used, accessing confidential information, and maybe causing some mayhem here and there in the process.

Offering services through the Internet is not the same as offering just another service to a customer base. It can be a powerful and useful move for a company, but if done haphazardly or in a manner that is not clearly thought out, implemented, and maintained, it could end up hurting a company or destroying it.

The decisions regarding which software to use, which hardware configurations to make, and which security measures to take to establish a presence on the Web depend on the company, its infrastructure, and the type of data it needs to protect. In the beginning, a web server was just another server on the Internet with a connection outside of the network. Static pages were used, and no real information came from the Internet to the company through this channel. As forms and Common Gateway Interface (CGI) scripts were developed to accept customer information, and as the Internet as a whole became more used and well known, web servers were slowly moved to demilitarized zones (DMZs), the name given to perimeter networks (see Figure 2-4). Unfortunately, many web servers today still live inside internal networks, exposing companies to a host of vulnerabilities.

Figure 2-4. Web servers were eventually moved from the internal network to the DMZ.


As web servers and applications evolved from just showing customers a home page and basic services to providing complete catalogs of products and accepting orders via the Internet, databases had to be brought into the picture. Web servers and databases lived on the same system, or two systems within the DMZ, and provided information to (and accepted information from) the world. This setup worked until more customers were able to access back-end data (within the database) and corrupt it accidentally or intentionally. Companies eventually realized there were not enough layers and protection mechanisms between the users on the Internet and the companies’ important data. Over time this has been improved upon by adding more layers of protective software.

Note

Today, most web-based activities are being carried out with web services with the use of XML, SOAP, and other types of technologies.


This quickly brings us to where we are today. More and more companies are going online and connecting their once closed (or semiclosed) environments to the Internet, which exposes them to threats, vulnerabilities, and problems they have not dealt with before (see Figure 2-5). If a company has static web pages, its web server and back-end needs are not half as complicated as those of companies that accept payments, offer services, or hold confidential customer information. Companies that take credit card numbers, allow customers to view their bank account information, and offer products and services over the Web can work in a two-tier or three-tier configuration.

Figure 2-5. Attackers have easy access if databases are directly connected to web servers with no protection mechanisms.


Two-Tier Architecture

A two-tier architecture includes a line of web servers that provide customers with a web-based interface and a back-end line of servers or databases that hold data and process the requests. Either the two tiers are within a DMZ, or the back-end database is protected by another firewall. Figure 2-6 shows a two-tier architecture.

Figure 2-6. A two-tier architecture consists of a server farm and back-end databases.


This architecture is fine for some environments, but for companies that hold bank account information, credit card numbers, or other sensitive data, a three-tier architecture is usually more appropriate. In the three-tier architecture, the first line consists of a server farm that presents web pages to customers and accepts requests. The farm is usually clustered and redundant, so it can handle a heavy load of connections and balance that load across servers.

The back-end tier is basically the same as in the two-tier setup, which has database(s) or host systems. This is where sensitive customer information is held and maintained. The middle tier, absent in the two-tier setup, provides the most interesting functionality. In many cases, this is where the business logic lives and the actual processing of data and requests happens. Figure 2-7 shows the three-tier architecture.

Figure 2-7. A three-tier architecture is composed of a front-end server farm, middle servers running middleware software, and back-end databases.


The middle tier consists of application servers running some type of middleware, which communicates with the web (presentation) tier and can either be customized for proprietary purposes and needs or act essentially as another layer of server farms built from off-the-shelf products. This layer takes the heavy processing tasks off the front-line servers and provides a layer of protection between the users on the Internet and the sensitive data held in the databases. The middleware is usually made up of components built with object-oriented languages. The objects are the entities that work as binary black boxes by taking in a request, retrieving the necessary information from the back-end servers, processing the data, and presenting it back to the requesting entity. Figure 2-8 illustrates how a component works as a black box.

Figure 2-8. Components take requests, pass them on, and process the answer.
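
To make the black-box idea concrete, the following is a minimal, hypothetical sketch in Python. The request format, the fetch_from_backend helper, and the business logic are invented for illustration only; they do not represent any particular middleware product's API.

from dataclasses import dataclass

@dataclass
class Request:
    user: str          # authenticated identity passed along from the web tier
    action: str        # e.g., "get_balance"
    account_id: str

@dataclass
class Response:
    status: str
    payload: dict

def fetch_from_backend(account_id: str) -> dict:
    # Stand-in for a call to the back-end database tier; in a real deployment
    # this call would go through a restricted database role, never a raw user.
    return {"account_id": account_id, "balance_cents": 105000}

def balance_component(request: Request) -> Response:
    # The requester never touches the database directly; the component checks
    # the request, retrieves only what it needs, applies the business logic,
    # and shapes the result before handing it back to the presentation tier.
    if request.action != "get_balance":
        return Response("rejected", {"reason": "unsupported action"})
    record = fetch_from_backend(request.account_id)
    dollars = record["balance_cents"] / 100
    return Response("ok", {"balance": f"${dollars:,.2f}"})

print(balance_component(Request("alice", "get_balance", "A-1001")))

The point of the shape, not the details: the web tier hands over a request and gets back a finished answer, while everything between those two steps stays hidden inside the component.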


The three-tier architecture offers many advantages. Security can be supplied in a more granular fashion if it is applied at different places in the tiers. The first firewall supports a particular security policy and provides the first line of defense. The first tier of web servers accepts only specific requests, can authorize individuals before accepting certain types of requests, and can dictate who gets to make requests to the next tiers. The middle tier can provide security at the component level, which can be very detailed and specific. No requests should be made from the Internet directly to the back-end databases. Instead, each request should have to pass through several intermediaries, each watching for specific security vulnerabilities and threats. The back-end databases are then acted upon by the components in the middle tier, not the users themselves.

The second firewall should support a different security policy. If an attacker gets through the first firewall, it makes no sense for the second firewall to have the same configurations and settings that were just defeated. This firewall should have different settings that are more restrictive, to attempt to stop a successful intruder at that particular stage.
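
The following sketch illustrates the idea with two invented rule sets, expressed as simple Python data rather than any vendor's firewall syntax. The outer firewall admits web traffic from anywhere, while the inner firewall admits only database traffic that originates from the middle tier; the zone names and port numbers are assumptions made for the example.

OUTER_FIREWALL = [
    # (source, destination, port, action) -- first match wins
    ("any",         "web_tier", 443,  "allow"),
    ("any",         "web_tier", 80,   "allow"),
    ("any",         "any",      None, "deny"),   # default deny
]

INNER_FIREWALL = [
    ("middle_tier", "db_tier",  1433, "allow"),  # e.g., a SQL Server port
    ("any",         "any",      None, "deny"),   # stricter default deny
]

def evaluate(rules, source, destination, port):
    # Return the action of the first rule that matches the connection.
    for src, dst, p, action in rules:
        if src in ("any", source) and dst in ("any", destination) and p in (None, port):
            return action
    return "deny"

# Traffic that clears the outer firewall is still blocked at the inner one
# unless it comes from the middle tier on the expected port.
print(evaluate(OUTER_FIREWALL, "internet", "web_tier", 443))      # allow
print(evaluate(INNER_FIREWALL, "internet", "db_tier", 1433))      # deny
print(evaluate(INNER_FIREWALL, "middle_tier", "db_tier", 1433))   # allow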

Database Roles

Many times, databases are configured to accept requests only from predefined roles, which ensures that if an intruder makes it all the way through the middleware and to the place that holds the goods, the intruder cannot make a request because she is not a member of one of the predefined roles. This scenario is shown in Figure 2-9.

Figure 2-9. This database accepts requests only from members of the operators, accounting, and administrators roles. Other paths are restricted.


All access attempts are first checked to make sure the requester is a member of a predefined and acceptable group. This means individuals cannot make direct requests to the database, and it is highly unlikely an attacker would be able to figure out the name of the group whose members are permitted to make requests to the database, much less add herself to the group. This is an example of another possible layer of protection available in a tiered approach to web-based operations.
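
As a rough illustration of that check, the sketch below uses the role names from Figure 2-9 but invents the callers and the membership lookup; a real deployment would rely on the database's own role catalog or a directory service rather than a hard-coded table.

ALLOWED_ROLES = {"operators", "accounting", "administrators"}

# Illustrative only: in practice this mapping lives in the database's role
# catalog or a directory service, not in application code.
ROLE_MEMBERSHIP = {
    "middleware_order_svc":   "operators",
    "middleware_billing_svc": "accounting",
    "dba_jane":               "administrators",
}

def authorize_db_request(caller: str) -> bool:
    # Reject any caller that is not a member of one of the predefined roles.
    return ROLE_MEMBERSHIP.get(caller) in ALLOWED_ROLES

print(authorize_db_request("middleware_order_svc"))   # True  - member of 'operators'
print(authorize_db_request("internet_user_42"))       # False - no role, request refused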

Caution

If group names are obvious or have not been changed from the defaults, extrapolating the group information from a network and making assumptions based on the names may be a trivial task. Naming conventions should be meaningless to outsiders and known only to internal security staff.


The discussion of Internet and web activities thus far has focused on architectural issues, giving you a broad overview of the network and how its large components are configured to secure it. However, security vulnerabilities usually are found in smaller components and configuration details that are easier to overlook. A great three-tier architecture can be set up by strategically placing firewalls, web servers, and databases to maximize their layers of functionality and security, but an attack can still take place at the protocol, component, or service level of an operating system or application. The types of attacks cover a wide range, from denial-of-service (DoS) attacks, spoofing, SQL injection, and buffer overflows to using an application's own functionality against itself.

In other words, a company could set up the right infrastructure, configure the necessary firewalls, disable unnecessary ports and services, and run its IDSs properly, yet still lose control of thousands or millions of credit card numbers to attackers because it failed to apply security patches.

This example shows that vulnerabilities can lie at a code level that many network administrators and security professionals are not necessarily aware of. The computer world usually has two main camps: infrastructure and programming. Security vulnerabilities lie in each camp and affect the other, so it’s wise to have a full understanding of an environment and how security breaches can take place through infrastructure and code-based means.

So where do the vulnerabilities lie in web-based activities?

  • Incorrect configurations at the firewall

  • Web servers that are not hardened or locked down and are open to attacks to the operating system or applications

  • Middle-tier servers that do not provide the combination of granular, detailed security controls necessary to access back-end databases in a controlled manner

  • Databases and back-end servers that accept requests from any source

  • Databases and back-end servers that are not protected by another layer of firewalls

  • Failure to have IDSs watch for suspicious activity

  • Failure to disable unnecessary protocols and services on computers

  • Failure to keep the computers patched and up-to-date

  • Failure to train developers on key security issues

  • Failure to sanitize data provided by clients through web forms (see the sketch following this list)
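
To illustrate the last item, here is a hedged Python sketch that validates a client-supplied value and then uses a parameterized query rather than string concatenation. The table, column, and sample data are invented, and the in-memory database exists only to keep the example self-contained.

import re
import sqlite3

def lookup_order(customer_email: str):
    # Validate the client-supplied value before it gets anywhere near the database.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", customer_email):
        raise ValueError("rejected: input does not look like an e-mail address")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer_email TEXT, item TEXT)")
    conn.execute("INSERT INTO orders VALUES ('a@example.com', 'book')")

    # Parameterized query: the driver treats the value strictly as data, so
    # input such as "' OR '1'='1" cannot change the structure of the query.
    return conn.execute(
        "SELECT item FROM orders WHERE customer_email = ?",
        (customer_email,),
    ).fetchall()

print(lookup_order("a@example.com"))        # [('book',)]
# print(lookup_order("x' OR '1'='1"))       # raises ValueError before any query runs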

The list is endless, but one last item is important to touch on because it does not get as much attention in security as it should: application and programming security. Security is usually thought of in terms of firewalls, IDSs, and port scanners. However, the vulnerabilities exploited are within the code of the operating systems and applications. If these problems did not exist in the programming code in the first place, there would be nothing to exploit and no real reason to have firewalls and IDSs.

Programming has usually been approached only in terms of how it can provide more functionality to the user, not in how it can protect the system it is installed upon or the data it holds and processes. Attacks and exploits that are taking place today were not even in the minds of the programmers while they were coding their programs a couple of years ago. Thus, they most likely did not think of coding differently and testing for these specific weaknesses.

The real security problems companies are dealing with are embedded within the products they purchase and install. Only recently have vendors started to take these issues seriously and think about how programming should be done differently. However, proper techniques and extensive testing add a lot of expense and delay to developing a product, and most vendors are not willing to take on those extra expenses and delays without seeing more profit in the end. They have developed the mindset that it is more profitable to get the product to market quickly and worry about patching problems later, and consumers for the most part have acquiesced to this system. It is really up to the consumer market to demand more-secure products and to buy only the products that have the necessary embedded protection mechanisms and methods. Until then, administrators will spend their days patching systems and applications and adjusting firewall and IDS configurations to thwart new vulnerabilities. They will need to continually update attack signatures, and thus the rat race of trying to outrun hackers will continue.
