Chapter 13. Java and Security

  • Safety in Java

  • The Java Security Model

  • Java Class Security

  • Encryption

  • Authentication

  • Secure Sockets Layer

  • The Government and Security

We have all heard that Java is a "secure" programming language. What exactly does that mean? In this chapter, we discuss the unique features of Java that make it the ideal choice for distributed network programming. Furthermore, we will discuss the nuances of the applet host security model, as well as how security is handled from within your Java applications.

We will also touch very briefly on Internet security and some of the alternatives you may want to explore in your own networked applications to make them safe for cross-network transmission. We begin our examination with the topic of cryptography. The primary goal of cryptography is to provide data privacy, but, as we will see, cryptography can also provide other essential security services, including nonrepudiation, data integrity, access control, and authentication. We will then look at the issues surrounding authentication, a security process that attempts to identify a participant (user, server, or applet) in a transaction.

Safety in Java

When we refer to Java as a safe language, we are referring to the fact that it is hard to "shoot yourself in the foot." There is no pointer arithmetic, no dangling-pointer corruption, and no chance of ending up in the dark spiral of C++ memory debugging. Make no mistake—Java is a powerful language, and you can still sit in an infinite loop. You can still freeze your Java code with thread deadlocks, and you can certainly try to access parts of an array that aren't really there. In short, Java is safe, but it isn't idiot-proof. The fact remains that, in order to screw up your Java programs, you still have to make a major effort.

Most Java programmers are pleased that Java has no pointers to memory locations. This makes program debugging much easier, and it also makes security verification possible. A pointer cannot be verified harmless at compile time: it can be loaded at runtime with a naughty address to poke a hole in the password file or branch to some code that sprinkles at-signs all over a disk. Without pointers, Java ensures that all mischief is confined to the downloaded applet running inside a Java Virtual Machine. Moreover, memory is not allocated until runtime, which prevents hackers from studying source code to take advantage of a memory layout that simply is not known at compile time. Attempts to read or write beyond the end of an array, for example, raise an ArrayIndexOutOfBoundsException. Had the C language had this feature (array bounds checking), the infamous Morris Internet worm would not have been able to overflow a buffer in the finger daemon (running with root privileges) and gain control of the machine.
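To make the bounds-checking point concrete, here is a minimal sketch (the class and method names are ours, not from the JDK): an out-of-range index raises an exception that the program can catch; it can never silently corrupt adjacent memory.

```java
public class BoundsDemo {
    // Java checks every array index at runtime; an out-of-bounds access
    // raises ArrayIndexOutOfBoundsException rather than trampling memory.
    public static boolean safeAccess(int[] data, int index) {
        try {
            int value = data[index];   // checked access
            return value == value;     // always true when the access succeeds
        } catch (ArrayIndexOutOfBoundsException e) {
            return false;              // caught and contained, not exploited
        }
    }

    public static void main(String[] args) {
        int[] buffer = new int[4];
        System.out.println(safeAccess(buffer, 2));  // prints "true"
        System.out.println(safeAccess(buffer, 9));  // prints "false"
    }
}
```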

Garbage collection, exceptions, and thread controls are part of Java no matter how you try to use it. But, security and safety are two entirely different things. Safety refers to protecting ourselves from our own misadventures. Security refers to protecting ourselves from other people's devices. Because Java objects are loaded dynamically, Java ensures that the objects are "trusted." Java's class security mechanism makes sure that your applications are using the proper objects and not an object that someone has slipped into the download stream to wreak havoc on your machine.

The Java Security Model

The Java security model has been a constantly evolving part of Java. In the JDK 1.0 model, the "sandbox" concept was introduced. In the sandbox model, all local code (JDK-provided code) was run as part of the Java Virtual Machine, and all code retrieved from remote servers (applets) was run in a "sandbox" area of the JVM that provided only a limited set of services. The reason for doing this was based on the fact that any remotely retrieved code could be hostile. To protect the local machine the sandbox provided only minimal access to the machine resources (Figure 13-1).

Figure 13-1. JDK 1.0 sandbox model.

The JDK 1.1 added to the JDK 1.0 security model the concept of "trusted applets," which could run with the same privileges with respect to the local host's system resources as local code. This was made possible by the advent of the Java Archive (JAR) file format and the inclusion of a correctly signed digital signature in the JAR file. Unsigned applets in JDK 1.1 still run in the sandbox (Figure 13-2).

Figure 13-2. JDK 1.1 security model.

The JDK 1.2 evolves the security model, redefining its goals to make it:

  1. Easy to use fine-grained access control

  2. Easy to configure security policy

  3. Easy to extend the access control structure

  4. Easy to extend security checks to Java applications as well as applets (Figure 13-3).

    Figure 13-3. JDK 1.2 security model.

Easy to Use Fine-Grained Access Control

Fine-grained security has always been a part of Java; the main problem was that the JDK 1.0 and 1.1 models made it extremely hard to use. To get the degree of control required, the SecurityManager and ClassLoader classes had to be subclassed and customized (not a task for the uninitiated or the faint of heart). This required quite a bit of programming and an in-depth knowledge of computer and Internet security.

Easy to Configure Security Policy

Because of the amount of code required to configure security policy with the earlier JDKs, a goal of JDK 1.2 was to let software developers and users easily configure the security policy via an external policy file built with either a text editor or a GUI tool.

Easy to Extend Access Control Structure

Extending the access control structure in JDK 1.1 required adding additional "check" methods to the SecurityManager class. The new model does not require new "check" methods in the SecurityManager; instead, the new architecture is based on permissions in the policy file. Each permission defines access to a system resource.

Easy to Extend Security Checks to Applications

In an effort to simplify things and have all code treated equally, the JDK 1.1 concept of "trusted" code was dumped in favor of a model where all code (local or remote) is treated equally, including JDK 1.1 trusted applets. It is for this reason that some JDK 1.1 applications and trusted applets will fail with security exceptions when run under the JDK 1.2 virtual machine.

Java Class Security

Java's security model is made up of three major pieces:

  • The Bytecode Verifier

  • The Class Loader

  • The Security Manager

The Bytecode Verifier

The designers of Java knew that applets could be downloaded over unsecured networks, so they included a bytecode verifier in the Java Virtual Machine's interpreter. It checks to make sure that memory addresses are not forged to access objects outside of the virtual machine, that applet objects are accessed according to their scope (public, private, and protected), and that strict runtime type enforcement is done both for object types and parameters passed with method invocations. The bytecode verifier does these checks after the bytecodes are downloaded but before they are executed. This means that only verified code is run on your machine; verified code runs faster because it does not need to perform these security checks during execution.

The Class Loader

Each imported class executes within its own name space. There is a single name space for built-in classes loaded from the local file system. Built-in classes can be trusted, and the class loader searches the local name space first. This prevents a downloaded class from being substituted for a built-in class. Also, the name space of the same server is searched before the class loader searches other name spaces. This prevents one server from spoofing a class from a different server. Note that this search order ensures that a built-in class will find another built-in class before it searches an imported name space. So, when classes are downloaded, the client's built-in classes are used because they are trusted (See Figure 13-4).

Figure 13-4. Downloaded Java objects use the local built-in classes rather than their own.
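The search order described above can be sketched with a class loader that always consults the built-in (parent) loader first; the class name and structure here are our own illustration, not the JDK's internal loader.

```java
// A minimal sketch of the delegation the text describes: built-in classes
// are always found first, so a downloaded class can never masquerade as a
// trusted class such as java.lang.String.
public class ParentFirstLoader extends ClassLoader {
    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        // 1. Delegate to the built-in/parent loader first.
        // 2. Only if that fails would a real applet loader search its own
        //    (remote) source; this sketch has no remote source.
        return super.loadClass(name, resolve);
    }

    public static void main(String[] args) throws Exception {
        ClassLoader loader = new ParentFirstLoader();
        // The trusted built-in class wins, exactly as Figure 13-4 shows.
        System.out.println(loader.loadClass("java.lang.String") == String.class);
    }
}
```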

The Security Manager

New to Java in the JDK 1.2 is the ability to define a security policy for each application, separately from the Java code, in a policy file. The policy defined in this external file is enforced at runtime by the Java security manager class. Java classes that could do things that might violate the security policy have been rewritten to check the defined policy, verifying that the application writer really wants to allow certain operations.
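As a sketch of what such a runtime check looks like from application code (the helper method is ours; when no security manager is installed, which is the default for applications, everything is allowed):

```java
import java.util.PropertyPermission;

public class PolicyCheckDemo {
    // Returns true when the current policy allows reading the given
    // system property; false when the security manager vetoes it.
    public static boolean canRead(String prop) {
        SecurityManager sm = System.getSecurityManager();
        try {
            if (sm != null) {
                // Throws SecurityException unless the policy grants it.
                sm.checkPermission(new PropertyPermission(prop, "read"));
            }
            return true;
        } catch (SecurityException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Run with -Djava.security.manager and a policy file to see a veto.
        System.out.println(canRead("java.version"));
    }
}
```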

Java 1.2 Security Policies

New to Java with the release of Java 1.2 is a methodology that provides a much finer-grained approach to the security of important system resources like the file system, sockets access, system properties, runtime facilities, and security facilities themselves. This is done by establishing security policies; when an application/applet/servlet is loaded, it is assigned a set of permissions that specify the level of access (read, write, connect,…) that the code has to specific resources. If code isn't specifically given permission to access something, it won't be able to. These sets of permissions are specified in an external text file called a policy file. Policy files can be created with a text editor or by using the policy tool that comes with the JDK.
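Permissions are objects, and each permission class knows whether it "implies" another; this is how a broad grant covers a specific request. A small sketch (the paths are hypothetical):

```java
import java.io.FilePermission;

public class ImpliesDemo {
    public static void main(String[] args) {
        // A recursive grant on /tmp ("-" means the directory and everything
        // below it) covers a narrower request for one file inside it.
        FilePermission granted = new FilePermission("/tmp/-", "read,write");
        FilePermission wanted  = new FilePermission("/tmp/data/log.txt", "read");
        System.out.println(granted.implies(wanted));  // prints "true"
    }
}
```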

For the sample code in this book, a policy file called "policy.all" is provided on the CD. This file will grant all permissions to everything (which is good for the purposes of this book but bad from the standpoint of production code deployment; code placed into a production environment should define only the permissions that it needs to run).

Policy Files

Policy files are made up of a set of "grant" statements that have the general form of:

grant [signedBy "signer names"] [, codeBase "URL"]
{
    permission permission_class_name ["target name"]
        [, "action"] [, signedBy "signer names"];
    permission ...;
};

where

  • signedBy—Indicates that this is signed code (as in a signed JAR file) and that signatures should be checked. This is used to verify that downloaded code is from a trusted source. This is an optional attribute; if it is absent, signature checking is skipped.

  • codeBase—A URL (usually either http:// or file://) of either a file or a directory to which the grant applies.

  • permission—The class that enforces the policy; the most commonly used are:

    • java.io.FilePermission—access to files

    • java.net.SocketPermission—access to sockets

    • java.lang.RuntimePermission—access to threads and system resources

    • java.util.PropertyPermission—access to properties

  • target—A path to the resource. This is optional and, if absent, refers to the current directory.

  • action—Operations allowed (read, write, execute, delete).

  • signedBy—Signers of the permission classes; if the signers can't be verified, the permission is ignored.
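Putting these pieces together, a small policy file following this grammar might look like the following (the host name and signer alias are placeholders for illustration):

```
grant signedBy "duke", codeBase "http://www.example.com/classes/*"
{
    // code signed by "duke" and loaded from this URL may read and write
    // files under /tmp and connect back to its origin on high ports
    permission java.io.FilePermission "/tmp/*", "read,write";
    permission java.net.SocketPermission "www.example.com:1024-", "connect";
};
```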

There are, by default, two policy files that establish the permissions that an application runs under—a system-wide policy file and an optional user (application) specific policy file. The system-wide policy file is kept in ${java.home}/lib/security/java.policy (java.home is a system property that contains the name of the directory in which the JDK is installed).

The default java.policy file follows. It grants all permissions to standard extensions, allows anyone to listen on unprivileged ports (1024 and above), and allows any code to read standard system properties that aren't considered sensitive.

grant codeBase "file:${java.home}/lib/ext/*" {
    permission java.security.AllPermission;
};

// default permissions granted to all domains

grant
{
   // Allows any thread to stop itself using the
   // java.lang.Thread.stop() method that takes no argument.
   // Note that this permission is granted by default only to remain
   // backwards compatible.
   // It is strongly recommended that you either remove this
   // permission from this policy file or further restrict it to code
   // sources that you specify, because Thread.stop() is potentially
   // unsafe. See "http://java.sun.com/notes" for more information.
   permission java.lang.RuntimePermission "stopThread";

   // allows anyone to listen on un-privileged ports
   permission java.net.SocketPermission "localhost:1024-", "listen";

   // "standard" properties that can be read by anyone
   permission java.util.PropertyPermission "java.version", "read";
   permission java.util.PropertyPermission "java.vendor", "read";
   permission java.util.PropertyPermission "java.vendor.url", "read";
   permission java.util.PropertyPermission "java.class.version", "read";
   permission java.util.PropertyPermission "os.name", "read";
   permission java.util.PropertyPermission "os.version", "read";
   permission java.util.PropertyPermission "os.arch", "read";
   permission java.util.PropertyPermission "file.separator", "read";
   permission java.util.PropertyPermission "path.separator", "read";
   permission java.util.PropertyPermission "line.separator", "read";
   permission java.util.PropertyPermission "java.specification.version", "read";
   permission java.util.PropertyPermission "java.specification.vendor", "read";
   permission java.util.PropertyPermission "java.specification.name", "read";
   permission java.util.PropertyPermission "java.vm.specification.version", "read";
   permission java.util.PropertyPermission "java.vm.specification.vendor", "read";
   permission java.util.PropertyPermission "java.vm.specification.name", "read";
   permission java.util.PropertyPermission "java.vm.version", "read";
   permission java.util.PropertyPermission "java.vm.vendor", "read";
   permission java.util.PropertyPermission "java.vm.name", "read";
};

User- or application-specific policy files are kept by default in ${user.home}/.java.policy (user.home is the system property that specifies the user's home directory).

Your overall security policy is created at runtime by first setting up the permissions in the system java.policy file and then adding the permissions found in the user policy file. To substitute your own policy, just set the java.security.policy property to the URL of the policy file to be used. The URL can be specified as:

  1. A fully qualified path to the file (including the file name).

    java -Djava.security.policy=c:\advjava\cd\rmi\stats1\policy.all rmi.Stats1.StatsServerImpl
    
  2. Any regular URL.

    java -Djava.security.policy=http://somehost/policy.all rmi.Stats1.StatsServerImpl
    
  3. The name of a file in the current directory.

    java -Djava.security.policy=policy.all rmi.Stats1.StatsServerImpl
    

The policy.all file we have been referring to follows:

// this policy file should only be used for testing and not deployed
grant
{
    permission java.security.AllPermission;
};

Security Tools

The JDK comes with several tools to help you manage the security of code that you write and wish to deploy:

  1. policytool—A Java application that comes with the JDK and that provides you with a GUI tool for creating and maintaining policy files.

  2. keytool—Used to create digital signatures and key pairs and to manage the keystore database.

  3. jarsigner—Allows the attaching of a digital signature to a JAR file.

For detailed instructions on these tools, refer to the JDK documentation and the security trail of the Java Tutorial at http://java.sun.com/docs/books/tutorial/security1.2/index.html.

Security Problems and Java Security Testing

Finally, the Java language has been thoroughly field-tested by high school and university students, college dropouts, and professional hackers lurking in the dark alleys of the World Wide Web. Each and every one of these creative minds was confident it could find a flaw in such a seemingly wide-open door to any system in the world! The most publicized security breaches happened early in Java's distribution, and all have been corrected in the current releases. It has been very quiet ever since. The flaws that were uncovered were implementation errors, not design problems. One group was able to insert its own class loader instead of the one loaded from a secure local file system. Clearly, all bets are off if an untrusted class loader that doesn't enforce the class search order we described earlier is used. Another implementation bug was exploited by using a bogus Domain Name Server in cahoots with an evil applet; to close that hole, Java 1.0.2 uses IP addresses instead of hostnames to enforce the network access security levels described earlier.

Details about these early security flaws and their corrections can be found at http://java.sun.com/sfaq.

Encryption

In this section, we describe some of the techniques commonly used to provide privacy during data exchanges between two parties. Data traveling through the Internet can be captured (and possibly modified) by a third party. Certainly, you do not want your credit card number to be revealed to a third party and you probably also want the merchandise you purchased to be delivered to your address and not to a different address inserted by a third party. Data encryption ensures that a third party will not be able to decipher any message sent between a client and a server.

A very simple algorithm used to scramble "sensitive" jokes on the Internet is called "rot13" because it rotates each character by 13 positions in the alphabet. That is, "a" is mapped to "n," "b" is mapped to "o," and so on. This algorithm also decrypts a message that was scrambled by it. This is adequate for its purpose: to protect people from reading a joke that they might feel is offensive. This is an example of symmetric key encryption, where both sides use the same key (13) to encrypt and decrypt a scrambled message (see Figure 13-5).

Figure 13-5. Symmetric key encryption decodes messages with a key on both the sending and receiving ends.
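The rot13 scheme described above is small enough to write out in full; note that applying the same method twice returns the original text, which is exactly the symmetric-key property Figure 13-5 illustrates (the class name is ours):

```java
public class Rot13 {
    // Rotates each letter by 13 positions; non-letters pass through.
    // Because 13 + 13 = 26, the same method both encrypts and decrypts.
    public static String rot13(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c >= 'a' && c <= 'z') {
                c = (char) ('a' + (c - 'a' + 13) % 26);
            } else if (c >= 'A' && c <= 'Z') {
                c = (char) ('A' + (c - 'A' + 13) % 26);
            }
            out.append(c);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(rot13("abc"));         // prints "nop"
        System.out.println(rot13(rot13("abc")));  // prints "abc"
    }
}
```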

In its most commonly used mode, the Data Encryption Standard (DES) uses a 56-bit key to scramble message blocks of 64 bits; in this form, DES encrypts large amounts of data relatively fast. DES is currently one of the encryption algorithms used by Secure Sockets Layer (SSL). Recent research has shown that 56-bit DES is becoming insufficient for providing robust encryption for security-sensitive applications. Many companies now use "triple DES," which encrypts each block of data three times with three different keys.
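A triple-DES round trip can be sketched with the Java Cryptography Extension, where the algorithm name is "DESede" (encrypt-decrypt-encrypt). This assumes a JDK with the JCE available; in the JDK 1.2 era the JCE was a separate download, while later JDKs bundle it.

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class TripleDesDemo {
    // Encrypts and then decrypts the input with a freshly generated
    // triple-DES key; returns the recovered plaintext bytes.
    public static byte[] roundTrip(byte[] plain) throws Exception {
        SecretKey key = KeyGenerator.getInstance("DESede").generateKey();
        Cipher cipher = Cipher.getInstance("DESede/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] scrambled = cipher.doFinal(plain);
        cipher.init(Cipher.DECRYPT_MODE, key);
        return cipher.doFinal(scrambled);
    }

    public static void main(String[] args) throws Exception {
        byte[] back = roundTrip("my credit card number".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(back, StandardCharsets.UTF_8));
    }
}
```

Both sides must hold the same key, which is exactly the key distribution problem discussed next.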

One problem with symmetric key algorithms such as DES is key distribution (i.e., how do I share the private key securely among the participants?).

Public key, or asymmetric, cryptography uses a pair of mathematically related keys for each user. Everyone can know a user's public key, but the private key must be kept secret. To send data to another user, the sender encrypts the data using the recipient's public key and sends the encrypted message to the recipient. The recipient decrypts the message using his or her private key. Because only the recipient knows the private key, data privacy is ensured. Asymmetric algorithms are inherently slower than their symmetric counterparts. The key distribution problem of symmetric algorithms is overcome through the use of public/private key pairs because the public key can be widely distributed without fear of compromise. There is still one problem with key management in public key encryption schemes. Namely, how do I know that the key I am using for Joe is really Joe's public key? It could be possible for a network interloper to substitute his or her public key for Joe's public key. A variety of trust models have arisen to combat this problem. For corporations, the most prevalent model is the hierarchical trust model, which relies on the use of digital certificates and certificate authorities to validate users' public keys.
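The encrypt-with-public, decrypt-with-private exchange can be sketched with RSA through the standard java.security and javax.crypto APIs (a minimal sketch; a real recipient would of course already hold the key pair rather than generate it on the spot):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class RsaDemo {
    // Seals a short message with the recipient's public key, then opens
    // it with the matching private key, returning the plaintext bytes.
    public static byte[] roundTrip(byte[] plain) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair recipient = kpg.generateKeyPair();

        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, recipient.getPublic());   // anyone may encrypt
        byte[] sealed = rsa.doFinal(plain);
        rsa.init(Cipher.DECRYPT_MODE, recipient.getPrivate());  // only the recipient decrypts
        return rsa.doFinal(sealed);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new String(roundTrip("hi Joe".getBytes("UTF-8")), "UTF-8"));
    }
}
```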

Real-world cryptographic implementations utilize a combination of public and private key encryption to provide not only data privacy but also nonrepudiation (via digital signatures), access control, and authentication. These solutions use the strengths of both public key (key distribution) and private key cryptography (speed); an example follows.

John creates a document and wants to send it to Mary. John first encrypts the document using a symmetric algorithm (like DES) and a randomly selected key. The randomly selected key is then encrypted using an asymmetric algorithm (like RSA) and Mary's public key. A message digest function (a one-way mathematical function, like MD5) is applied to the original document, producing a fixed-length message digest. This message digest is encrypted with an asymmetric encryption algorithm using John's private key, forming his digital signature. These three elements are then sent to Mary over some unsecured communications link. This is shown in Figure 13-6.

Figure 13-6. A combination of symmetric and asymmetric encryption.

The process of decrypting and verifying the encrypted document is shown in Figure 13-7 and goes something like this: Mary uses her private key to retrieve the random symmetric key used to encrypt the document. Because Mary is the only one who knows her private key, she is the only one who can open the "digital envelope," thus ensuring data privacy. The retrieved symmetric key is used to decrypt the document. Using the same message digest function as John, Mary produces a message digest for comparison to the one sent by John. Mary now uses John's public key and the asymmetric encryption (RSA) to retrieve the message digest sent with the document. By using John's public key to retrieve the message digest, Mary has also verified that the message was sent by John (i.e., retrieved his digital signature) because only John's private key could have been used to encrypt the message digest. The message digest sent with the document is compared with the one computed by Mary. This comparison ensures the data integrity. If the digests match, the document was unaltered during transmission.

Figure 13-7. Decryption of example.
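The digest-then-encrypt-with-private-key step is exactly what the java.security.Signature class bundles into one call; a sketch of John signing and Mary verifying follows (class and variable names are ours; "MD5withRSA" matches the MD5 digest used in the example, though modern code would prefer a SHA-2 digest):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    // John signs the document with his private key; Mary verifies the
    // signature with his public key, proving both origin and integrity.
    public static boolean signAndVerify(byte[] document) throws Exception {
        KeyPair john = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        Signature signer = Signature.getInstance("MD5withRSA");
        signer.initSign(john.getPrivate());     // only John can do this
        signer.update(document);
        byte[] signature = signer.sign();       // digest + private-key encrypt

        Signature verifier = Signature.getInstance("MD5withRSA");
        verifier.initVerify(john.getPublic());  // anyone can do this
        verifier.update(document);
        return verifier.verify(signature);      // false if doc or sig was altered
    }

    public static void main(String[] args) throws Exception {
        System.out.println(signAndVerify("the document".getBytes("UTF-8")));
    }
}
```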

Java Cryptography Extension (JCE)

The JCE provides a set of APIs that allow you to encrypt, decrypt, and password-protect information.

Authentication

In many applications, it is important to authenticate the identity of a client making a request for a service. Examples include banking, financial, real estate, medical records, and ISP (Internet Service Provider) applications. An ISP, for example, wants to ensure that Internet access is being provided to a paying customer and not the customer's housekeeper. The online stock trading application wants to make sure that it is the portfolio owner who is making trades.

The usual way to do this is to require an account number or customer name and a password. This is adequate for workstations, time-sharing systems, and client/server sessions such as calling Charles Schwab to manage your stock portfolio. In a distributed system, however, many different servers provide services. Instead of a single authentication to a single server or application, each server must authenticate each service request sent over the network.

One obvious requirement of such an authentication system is that it be transparent to the user. The user does not want to type in a password for each service each time it is requested. Another requirement is that it be available at all times because, if a server cannot authenticate a request, it will not provide the service. When the authentication service is unavailable, so are all the services that use it. A less obvious requirement is that authentication must be protected against capture and playback by another user on the network. Capture cannot be prevented on broadcast media such as an Ethernet cable, so the authentication procedure must be able to prevent a playback by an impostor.

Kerberos

One popular authentication system is Kerberos, which is named after a three-headed guard dog in Greek mythology. It depends on a third party that is trusted by both client and server (see Figure 13-8). Clients request a ticket from the third party. The ticket is encrypted using the server's secret password, so the server trusts the client when it can decrypt the ticket. The server's password is known only to itself and the third party. The third party knows everyone's password! This means that all systems are vulnerable if the trusted third party is compromised.

Figure 13-8. Servers can trust clients only if they can decrypt the ticket from the Kerberos server.

A well-known bank has two major data centers, one in San Francisco and the other in Los Angeles. Each center backs up its data at the other site. In this way, the bank can resume operation soon after serious damage to either data center. The Kerberos servers are replicated at both sites and kept behind "the glass wall." In fact, there is a sealed walkway with locked doors at both ends and a badge reader with a video camera in the middle. If your face doesn't resemble the one on the badge, you are not allowed into the room that houses the Kerberos servers; instead, two or more very large people will promptly escort you out of the building.

Including a timestamp in the ticket thwarts playback. That is, the Kerberos server encrypts the client's IP address, a session key, and a timestamp using the server's key. The client encrypts its service request message with the session key and sends it, along with the ticket, to the server. The server uses its key to decrypt the ticket. If the IP address in the ticket matches the IP address in the IP packet header and the timestamp is within a few milliseconds of the current time, then the server accepts the client's request. It uses the session key to unscramble the request and perform the service. It's as simple as that. Playback is impossible because the encrypted timestamp will have "timed out" before an impostor can capture and try to replay the request. Also, the IP address of the impostor will not match the IP address encrypted in the ticket.
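The two checks the server performs after decrypting the ticket can be sketched as plain logic (a hedged illustration of the comparisons described above, not a real Kerberos implementation; names and the skew window are our assumptions):

```java
public class TicketCheck {
    // Accept the request only if the IP address inside the decrypted
    // ticket matches the packet's source address AND the ticket's
    // timestamp is within the allowed clock skew of the current time.
    public static boolean accept(String ticketAddr, String packetAddr,
                                 long ticketTimeMillis, long nowMillis,
                                 long skewMillis) {
        boolean addressMatches = ticketAddr.equals(packetAddr); // thwarts impostors
        boolean fresh = Math.abs(nowMillis - ticketTimeMillis) <= skewMillis; // thwarts replay
        return addressMatches && fresh;
    }

    public static void main(String[] args) {
        System.out.println(accept("10.0.0.1", "10.0.0.1", 1000, 1002, 5));  // prints "true"
        System.out.println(accept("10.0.0.1", "10.0.0.2", 1000, 1002, 5));  // prints "false"
    }
}
```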

Digital Signatures and Public Key Encryption

The theory behind digital signatures and public key encryption is that in a given system every user has a pair of digital keys. In the case of the Web, the mere act of installing a browser on your system will generate the private and public keys to be used with that browser. If you have two browsers installed (e.g., Netscape and IE4), then you will have two sets of private and public keys, one set for each browser. The basic premise behind public key encryption is that, using some algorithm, you can use your private key to encrypt a message that can be decrypted only with your public key. If you carefully distribute your public key to the people you normally deal with, any time you send them a message they will be able to read it using your public key.

Secure Sockets Layer

By far the most widely used authentication and encryption on the Internet in general and on the Web specifically is Secure Sockets Layer. SSL can be used with any connection-based protocol. It's called a layer because we essentially insert an additional protocol layer between TCP and the Application layer of the TCP/IP stack (see Figure 13-9).

Figure 13-9. Secure Sockets Layer.

SSL adds the following features to the reliable stream provided by TCP/IP:

  1. Authentication and nonrepudiation of the server via digital signatures.

  2. Authentication and nonrepudiation of the client via digital signatures.

  3. An encrypted stream to provide privacy.

  4. Data integrity through message authentication codes.

Netscape Corporation designed SSL as a way of ensuring secure communications between its browser and server products. SSL has become the de facto standard for secure communications between Internet clients and servers.

For a look at the SSL v3.0 specification, see http://home.netscape.com/eng/ssl3/ssl-toc.html. Using SSL requires cooperation between the client browser and the server. At the server, a secure instance of the Web server must be running on the well-known port 443. (Some Web sites run both an insecure and a secure instance of the Web server on the same machine. The insecure instance listens on port 80, while the secure instance runs on port 443. Some sites run the insecure and secure instances on completely separate hosts.)

At the browser end, all references to the URLs of documents or applications must use the protocol https rather than http. As long as the protocol notation is https, the port defaults to 443 (the secure server port).

The attachment of an SSL client to an SSL server starts off with what is known as an SSL handshake. During the handshake, the client and server agree on the protocol version to be used, select the cryptographic algorithm they will use to protect data transmission, optionally authenticate one another, and use public-key encryption to generate shared secrets. After this has been done, the rest of the transmission takes place in an encrypted manner using the parameters selected during the handshake.
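From Java, the handshake just described is driven by the JSSE classes; a minimal client sketch follows (the host name is a placeholder; JSSE was a separate extension in the JDK 1.2 era and is bundled with later JDKs):

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SslClientSketch {
    public static void main(String[] args) throws Exception {
        String host = "www.example.com";  // hypothetical secure server
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            // Performs the protocol/cipher negotiation and server
            // authentication described above; throws on failure.
            socket.startHandshake();
            System.out.println("Negotiated cipher suite: "
                    + socket.getSession().getCipherSuite());
        }
        // Everything read or written on the socket after the handshake
        // is encrypted with the parameters selected during negotiation.
    }
}
```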

The Government and Security

The issue of security on our computers is greatly affected by the restrictions on security technology placed on a company by its home government. Because this is not by any stretch of the imagination a comprehensive text on security, we instead outline the two major controversies concerning government intervention in computer security. We attempt not to pass judgment on either the government or the security community; you can make that determination for yourself. Instead, in this section, we simply point out the two sides to the arguments of governmental control of security export and the government's right to possess keys to domestic security apparatuses.

Export Control

The United States government is extremely adamant about protecting against U.S. technology falling into nondomestic hands. Two of the more important regulations that are in place are the DoD International Traffic in Arms Regulation (ITAR) and the U.S. Department of Commerce Export Administration Regulations (EAR). Both sets of regulations concern the export of technology to foreign governments; ITAR primarily concerns U.S.-based defense contractors, and EAR applies to all commercial ventures that involve the sale and export of technology-related items to non-U.S. persons.

Because the Internet is a worldwide medium and social phenomenon, without boundaries and governments to hinder it, the government realizes that some form of security technology must be used to transmit information across national boundaries. Therefore, the U.S. government restricts the level of security found in certain products that are international in nature. For example, the Netscape browser has two versions. One is a U.S. domestic version with full browser security features. The other is an international version that implements the Secure Sockets Layer with weaker security. The international version may be exported outside the United States, whereas the domestic version may be used only within the United States.

Never mind the inability to actually protect against the dissemination of the more powerful security technology to international audiences, the United States simply makes the distinction. If Netscape were to blindly distribute the domestic version without making a statement such as "Domestic Use Only," they would be breaking the law. Is the law enforceable to end users? Probably not, but the law is there, written as plain as day, and should be followed by "morally upstanding citizens." For you, as application programmers, secure networked applications should follow the same kind of export controls if they are applicable.

The "Clipper" Controversy

Historically, the U.S. government has always known that there are ways for its citizens to keep information hidden from the government. In fact, the Fourth Amendment to the Constitution of the United States of America specifically outlines this right that all American citizens possess:

  • The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

But, over the years, a distinction has been made as to what is "unreasonable." The government, in interests of "national security," may, with permission from the Judicial branch, execute a search of one's property and possessions. How does this apply to the digital age?

The entire "Clipper chip" controversy centers around the government's willingness to publish an encryption algorithm for telephones, computer files, and any other form of communication. The transmissions would be encrypted and mathematically impossible to break. However, the government would always be able to have a "back door" to the encryption with its own special key. As outlined in the Fourth Amendment, the government may use the key only with a written warrant; nevertheless, the idea that "Big Brother" may be watching is enough to bring chills down the spines of some people.

Lost in the argument is the fact that there are several other encryption methods that could be used instead of Clipper (e.g., PGP), methods that are just as good and do not invite governmental interference. Clipper represents the belief that, in the end, the U.S. government, as well as the other governments of the entire world, has no idea how to protect itself in the digital age without sacrificing intellectual freedom.

Summary

Secure, networked transmissions are of the utmost importance to many people. If the Internet is truly to become the focus of all our communication in the next century, then we must all have confidence that no one can intercept and decode our innermost thoughts. Although we have very briefly outlined the concerns of the U.S. government, we hesitate to endorse or criticize any one position. In the end, the debate over the involvement of government authorities will be settled in another, more appropriate, forum. For now, as application programmers, you should be keenly aware of the position of your government, whatever it may be, on how you can send secure transmissions.

With this solid base of network programming underneath us, we must now make a decision about which alternative to choose. Each has its advantages and disadvantages, and we will discuss them in detail in the next chapter.
