Chapter 5. Alternate Methods of Identification

Because of the heavy overlap between mobile devices, desktop clients, and a new breed of connected hardware emerging from the Internet of Things, demand for a new class of authentication and authorization technology is on the rise. This chapter covers upcoming standards, such as FIDO, that span multiple form factors and scale beyond software-based authentication technology.

Device and Browser Fingerprinting

Alongside regular authentication and authorization scenarios, device and browser fingerprinting offers a more passive way to identify users across a large population. Applications like Am I Unique? are broadly available and leverage many factors to determine whether a user is unique.

When performing device and browser fingerprinting, the user is usually tested against some very general and broad factors—such as the device’s platform, the current browser, or whether cookies are enabled on the device—and then against more granular and subtle determinants, like the device’s resolution, time zone, the browser’s enabled plug-ins, or user agent. When Flash is enabled, services like Am I Unique? or Panopticlick are even able to obtain a list of currently installed fonts.

Eight factors can be concatenated to form a browser’s fingerprint (Table 5-1).

Table 5-1. Browser measurements to determine uniqueness

Variable            | Obtained through
User Agent          | HTTP
HTTP ACCEPT headers | HTTP
Cookies enabled     | HTTP
Screen resolution   | AJAX
Timezone            | AJAX
Browser plugins     | AJAX
System fonts        | Flash or Java applets, collected through AJAX
Supercookie test    | AJAX

Additional factors, such as the user’s geolocation, can be obtained through HTML5 if the user agrees to share them or by analyzing the user’s IP address (which does not require the user’s consent).
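
To make the factors in Table 5-1 more concrete, the following browser-side sketch gathers a few of the JavaScript-accessible ones and joins them into a single string. This is illustrative only; real fingerprinting services hash the result and combine it with many more signals, including the Flash- and supercookie-based checks shown in the table.

// Illustrative sketch only: collect a handful of the JavaScript-readable
// factors from Table 5-1 and join them into one string. A real service would
// hash this value and mix in many more signals.
function collectFingerprintFactors() {
    var pluginNames = Array.prototype.map.call(navigator.plugins, function (plugin) {
        return plugin.name;
    }).join(',');

    return [
        navigator.userAgent,                   // user agent
        navigator.cookieEnabled,               // cookies enabled
        screen.width + 'x' + screen.height,    // screen resolution
        new Date().getTimezoneOffset(),        // time zone offset in minutes
        pluginNames                            // browser plug-ins
    ].join('||');
}

console.log(collectFingerprintFactors());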

Panopticlick released a paper on browser uniqueness that is a worthwhile read and a great source for further information on this subject.1

Two-Factor Authentication and n-Factor Authentication

Because of the known weaknesses and issues that come along with basic authentication through passwords, the demand for more secure login methods is high. Two-factor authentication (2FA) relies on the addition of another token, such as a one-time password, which is consumed after usage and therefore prevents common security exploits, such as replay attacks. This section explains the basic concepts of both two-factor authentication and the upcoming n-factor authentication technologies.

n-Factor Authentication

n-factor authentication, also known as multifactor authentication (MFA), is based on the assumption that every individual can present three basic components:

  • Something you know

  • Something you have

  • Something you are

When examining these three requirements, you will quickly realize that they match concepts we have discussed before. Something you know is the most basic component and can be taken as a given: it can be as simple as a password or passphrase.

The second item on our list, something you have, aims at securing passwords or passphrases by adding another layer of protection. Popular examples are smart cards or RSA tokens, which are used for RSA’s SecurID network authentication technology. As of 2014, about 1.75 billion people worldwide used smartphones—a small and affordable piece of technology that can easily act as an additional physical layer in authentication and authorization technology.2 Because these devices can receive text messages and emails and can run authentication applications that generate one-time passwords, such as Google Authenticator and Authy, they allow people to secure existing logins.

Lastly, something you are focuses on the individual’s identity and adds a handful of new challenges that we discuss in the following section. The basic assumption here is that the usage of something intrinsic, such as the individual’s fingerprint, uniquely identifies the user among all users and therefore adds a third layer of security.

One-Time Passwords

One-time passwords, known as OTPs, have been positioned in the industry as a means to fight traditional password weaknesses and exploits. By being ever-changing and usable only once, they reduce an application’s attack surface drastically.

Currently, there are three ways to generate one-time passwords. The first implementation, time-synchronization, generates short-lived tokens. Popular two-factor authentication applications, such as Authy or Google Authenticator, use this method to generate OTPs.

Both the second and third implementations are based on mathematical algorithms and generate long-lived tokens. One approach generates each OTP from the previous password, which requires the passwords to be used in a predefined order. The other generates OTPs based on a random challenge.
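
To make the time-synchronized approach more concrete, here is a minimal sketch of TOTP-style code generation (in the spirit of RFC 4226/6238) using Node’s built-in crypto module. The shared secret, time step, and digit count are placeholder assumptions; production code should rely on a vetted library and a base32-encoded secret exchanged during enrollment.

// Minimal TOTP sketch using Node's crypto module. The secret below is a
// hypothetical placeholder; real deployments exchange a base32-encoded secret.
var crypto = require('crypto');

function generateTotp(secret, timeStepSeconds, digits) {
    // Number of completed time steps since the Unix epoch
    var counter = Math.floor(Date.now() / 1000 / timeStepSeconds);

    // Pack the counter into an 8-byte big-endian buffer
    var buffer = Buffer.alloc(8);
    buffer.writeUInt32BE(Math.floor(counter / 0x100000000), 0);
    buffer.writeUInt32BE(counter % 0x100000000, 4);

    // HMAC-SHA1 of the counter, keyed with the shared secret
    var hmac = crypto.createHmac('sha1', secret).update(buffer).digest();

    // Dynamic truncation as described in RFC 4226
    var offset = hmac[hmac.length - 1] & 0x0f;
    var code = ((hmac[offset] & 0x7f) << 24) |
               ((hmac[offset + 1] & 0xff) << 16) |
               ((hmac[offset + 2] & 0xff) << 8) |
               (hmac[offset + 3] & 0xff);

    // Reduce to the requested number of digits and left-pad with zeros
    var otp = (code % Math.pow(10, digits)).toString();
    while (otp.length < digits) {
        otp = '0' + otp;
    }
    return otp;
}

// Example: a 6-digit code that changes every 30 seconds
console.log(generateTotp('hypothetical-shared-secret', 30, 6));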

When not generated by a client-side application, OTPs can be delivered by either text message or email. The industry currently tends to favor text over email because it is broadly available: a phone number is considered unique across all users, and text-to-speech delivery can extend coverage to landline phones. Reasons to use email instead include the cost of sending text messages and the inability to check whether a text message arrived at its destination. Another issue with text messages is the weak (A5/x) or nonexistent encryption of the underlying cellular standards, which allows for man-in-the-middle attacks.3

On mobile devices, using emails to transport one-time passwords has one big advantage for the user experience: applications can automatically open and import the OTP, which heavily reduces friction and is used by companies like Slack (Figure 5-1). The key to automating this process is registering a custom URL handler (via Android’s application manifest4 or URL schemes on iOS5) that detects when URLs of a certain format are opened.

Figure 5-1. Slack’s mobile sign-in flow

This sign-in flow results in an email similar to the one shown in Figure 5-2.

Figure 5-2. Slack’s sign-in email for mobile devices

When analyzing the email’s source code, you’ll notice that a URL like the following is used: https://slack.com/z-app-211=9624547-19991285158-cJ1DJfifFa?s=slack. Because Slack clearly owns the authority over slack.com (and no other application should claim URIs containing this domain), no custom-scheme workaround like myapp://auth.com/ is needed. Clicking the “Sign in to Slack on your mobile device” button from within your mobile email client opens the Slack application and signs you in.

Note

Since version 6.0 (Marshmallow), Android enables declaring website associations.6 This mechanism helps to protect your native applications by preventing third-party applications from accessing information that is meant for internal consumption only.

Implementing Two-Factor Authentication with Authy

Now that you understand how OTPs work, let’s see how to implement these within our own apps and websites. For this example, we’re going to use a Twilio authentication service called Authy. Authy will allow us to do things that we’ll need for a 2FA system, such as the following:

  • Register/delete 2FA user accounts on our service.

  • Send SMS verification codes to those users.

  • Validate the verification codes after users enter them on the website.

With that said, our first task is to set up an application with Authy and get a key that we will use to verify our application against the service. We can do that by following these steps:

  1. Go to the Authy dashboard at https://dashboard.authy.com/signin.

  2. Sign in or register a new account.

  3. Click Access Authy Dashboard.

  4. Click Enable Two-Factor Authentication, and select your preferred verification method, which is required to create a new application.

  5. Click Create Your First App.

  6. Enter an application name on the form that pops up; then click Create. Follow the rest of the instructions to create the application.

Once the application dashboard comes up, at the top of the page you will see an information section, which includes your hidden production and sandbox keys. We’re going to be using our production key, so click the eye beside the hidden key to reveal it (Figure 5-3).

Figure 5-3. Authy key details

Take note of that key, because we’ll be using it in our Authy 2FA example.

Note

The complete sample code for the following Authy example is available at https://github.com/iddatasecuritybook/chapter5/tree/master/authy-2fa.

With our key in hand, let’s dive into a practical walk-through of how to implement 2FA using the service. First we need to install a few npm modules, specifically:

body-parser

For parsing JSON- and URL-encoded payloads, which Express no longer bundles as of version 4.0

authy

A helpful module for working with Authy functionality for users and tokens

We can pull down these packages via npm with the following terminal commands:

npm install body-parser --save
npm install authy --save

We can now create our .js file, and instantiate our packages and the body-parser functionality:

var express = require('express'),
    app = express(),
    bodyParser = require('body-parser'),
    authy = require('authy')('YOUR AUTHY PRODUCTION API KEY');

//to support JSON-encoded bodies
app.use(bodyParser.json());
//to support URL-encoded bodies
app.use(bodyParser.urlencoded({ extended: true }));

In the preceding code, we set up our express, body-parser, and authy variables. With the Authy instantiation, we pass in that Authy production key that we obtained when we created our application on the site. In the last two lines, we then set up body-parser to be able to handle JSON- and URL-encoded objects that we will need to parse from our requests later.

With Express, we can now set up a few routes in our application to handle POST requests to different endpoints for working with user setup and token verification. Let’s start with defining a route to handle user registration:

When should you register a new user with Authy?

Register a new user with Authy when that user creates an account with your site or service, as part of your regular registration process. Alongside the user information you already store for your site, you will also store the user ID that Authy provides during registration.

//route: register a new user via provided email and phone number
app.post('/register', function(req, res){
    var email = req.body.email;
    var number = req.body.number;

    authy.register_user(email, number, function (err, response){
        //expected response:
        //{ message: 'User created successfully.',
        //  user: { id: 16782433 },
        //  success: true }
        res.send(response);
    });
});

The preceding route will allow any POST request sent to the /register endpoint of the application to be handled. The endpoint is expecting two pieces of data to be sent to it:

email

The email of the user to be registered. This should match the email that was stored in your user records when you registered the individual.

number

The phone number of the user to be used for the SMS 2FA process.

With that information obtained, we then make a request to authy.register_user(…), passing in the email and number that we just pulled from the POST body. If all is successful, the return value (stored in response) should contain three pieces of data (a sketch of persisting the returned user ID follows the list):

message

The human-readable success message.

user

The user ID of the newly registered user. This should be stored in your user database for sending the 2FA requests.

success

A Boolean true/false indicating the success state.
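
Because the returned user ID is what ties your own account records to Authy, it is worth sketching how it might be persisted. The following is a hedged variation of the /register route; the users data-access object (shown here with a MongoDB-style update and $set) and the authyId field name are assumptions, not part of the sample application.

// Hypothetical variation of the /register route that persists the Authy user
// ID. The "users" object and the authyId field name are assumptions.
app.post('/register', function(req, res){
    var email = req.body.email;
    var number = req.body.number;

    authy.register_user(email, number, function (err, response){
        if (err || !response || response.success !== true) {
            return res.status(500).send({ error: 'Could not register user with Authy' });
        }

        // Store the Authy user ID alongside the existing account record so the
        // /sms and /verify routes can look it up later
        users.update({ email: email }, { $set: { authyId: response.user.id } },
            function (updateErr){
                if (updateErr) {
                    return res.status(500).send({ error: 'Could not store Authy user ID' });
                }
                res.send({ registered: true });
            });
    });
});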

Next on our list is to set up the ability to send SMS 2FA messages to a given user ID:

When should you send the SMS verification code?

SMS verification should be conducted during login. When a user enters her first set of credentials (typically username/password), you can then send the SMS message from Authy for a second level of authentication.

//route: send authy SMS message with verification code
app.post('/sms', function(req, res){
    var uid = req.body.uid;

    authy.request_sms(uid, function (err, response){
        //expected response:
        //{ success: true,
        //  message: 'SMS token was sent',
        //  cellphone: '+1-XX12362760' }
        res.send(response);
    });
});

This route will accept any POST request to the /sms endpoint, and will expect one piece of data to be POSTed:

uid

The user ID that was obtained from registering the user with Authy, during the last step.

Once we pull out that value, we can then make a request to authy.request_sms(…), passing along that UID and a callback. This will attempt to send an SMS verification code to the phone number that is registered for that given user during the registration step. In the response object (on success), we are expecting a few parameters:

success

A Boolean true/false indicating the success state

message

The human-readable success message

cellphone

The cell phone number that the SMS was transmitted to

At this point, the user has obtained a verification code. She will enter the code on your site, and you will need to verify that it is correct:

How and when should you validate a verification code?

When users are sent the SMS verification code during the login step (for second-factor verification), you should supply a way for them to enter, on your site, the code they see on their mobile device.

//route: verify a provided verification token against a given user
app.post('/verify', function(req, res){
    var uid = req.body.uid;
    var token = req.body.token;

    authy.verify(uid, token, function (err, response){
        //expected response:
        //{ message: 'Token is valid.',
        //  token: 'is valid',
        //  success: 'true' }
        res.send(response);
    });
});

This route will handle the verification step. It will accept a POST request to the /verify endpoint, and will expect two pieces of data in the POST body:

uid

The user ID that Authy provided during the registration step

token

The verification token that the user was sent via SMS during the last step

Once we obtain that information, we can then call authy.verify(…), passing in the UID, token, and a callback function. If the verification step completes successfully, we can expect three pieces of data to come back from the response:

message

The human-readable success message

token

Verification of whether the token is valid

success

A Boolean true/false indicating the success state

Once we verify the token is legitimate, we can then allow the user to enter the site, and the 2FA process is now complete.
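
In practice, you would not simply echo Authy’s response back to the client; you would gate login state on it. The following is a hedged variation of the /verify route, assuming a session middleware such as express-session is configured so that req.session exists; the twoFactorVerified flag name is an assumption.

// Hedged variation of the /verify route that gates login state on the result.
// Assumes a session middleware (e.g., express-session) so req.session exists.
app.post('/verify', function(req, res){
    var uid = req.body.uid;
    var token = req.body.token;

    authy.verify(uid, token, function (err, response){
        // Depending on the client library version, an invalid token may surface
        // as an error or as a non-"true" success value, so check both.
        if (err || !response || String(response.success) !== 'true') {
            return res.status(401).send({ error: 'Verification code is invalid' });
        }

        // Second factor passed: mark the session as fully authenticated
        req.session.twoFactorVerified = true;
        res.send({ message: 'Two-factor authentication complete' });
    });
});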

If a user deletes her account from our site, we need one last step. We want to ensure that we clean up all residual user information, including her Authy user registration data:

When should you delete users from Authy?

When users delete their account with your site or service, you should also clean up their information in Authy by deleting the registered account. The registration/deletion steps should be synced with your site/service registration and deletion steps.

//route: delete an existing user
app.post('/delete', function(req, res){
    var uid = req.body.uid;
    authy.delete_user(uid, function (err, response){
        //expected response:
        //{ message: 'User was added to remove.', success: true }
        res.send(response);
    });
});

This route will accept a POST request to the /delete endpoint, and expect one item in the POSTed data:

uid

The user ID that Authy provided during the registration step

When obtained, we then make a call to authy.delete_user(…), passing along the user ID and a callback. If the deletion is successful, we should see the following parameters come back in the response:

message

The human-readable success message

success

A Boolean true/false indicating the success state

Once done, the user has been removed from the Authy registration system. In our app sample, the last thing we need to do is start the server:

app.listen(process.env.PORT || 3000);

This will listen on the port provided by the environment (such as when running on Heroku) or on port 3000 otherwise. Once the server is up and running (assuming localhost port 3000 in this case), we can run some tests by sending POST requests from the terminal to each of the endpoints we set up.

First we issue a request to register a new user:

curl -H "Content-Type: application/json" -X POST -d \
'{"email":"[email protected]", "number":"18675309"}' http://localhost:3000/register

We send an HTTP POST request to the register endpoint, passing along an email and phone number in the POST body. The JSON response from that will give us the user ID for the newly registered person, which we will use for the next step.

The second step is to trigger the send of an SMS to the phone of that registered user:

curl -H "Content-Type: application/json" -X POST -d \
'{"uid":"16572253"}' http://localhost:3000/sms

From the registration request, we obtained a user ID from Authy. We send that UID through to the /sms endpoint, and the result should be a text message, containing a verification code, arriving at the registered phone number.

Next, we send the token we have in our SMS through for verification:

curl -H "Content-Type: application/json" -X POST -d \
'{"uid":"16572253", "token":"0512278"}' http://localhost:3000/verify

We send the user ID and token via a POST body to the /verify endpoint, which should provide us with a message stating that the token is valid, if the request was successful.

The last step is to clean up the user records by deleting the user we just created:

curl -H "Content-Type: application/json" -X POST -d \
'{"uid":"16572253"}' http://localhost:3000/delete

We send the user ID to the /delete endpoint, which, on success, will provide us with a success message response.

With all of that in place, we now have the structure to provide 2FA SMS token verification for our users.

Biometrics as Username Instead of Password

With the growing availability of fingerprint scanners on mobile devices, such as the iPhone device family and newer Android devices, more and more applications are trying to identify use cases that enhance the overall user experience. This surge in new technology has led to people wanting to use their fingerprint to replace password prompts on their phones.

From a logical standpoint, it might seem an easy choice to leverage biometrics to authorize access to applications, unlock a device’s screen, and much more, but this creates new issues. Passwords are traditionally vulnerable, as we discussed in Chapters 1 and 2, and can be leaked or exposed to third parties. When a simple password is compromised, we can simply change it and exchange it for a new, more secure one. Fingerprints introduce a whole new dimension of issues: human beings have at most ten fingers, and a fingerprint cannot simply be replaced once it has been exposed to the public.

Using Fingerprints as a Security Mechanism

The German Chaos Computer Club (CCC) managed to bypass the security mechanisms of Apple’s Touch ID in 2013. By replicating a fingerprint from a high-resolution photograph, the CCC tricked an iPhone 5s Touch ID sensor into unlocking the phone. In the exploit’s summary, the CCC strongly recommends against using fingerprints to secure anything.7

Tim Bray, co-inventor of XML and contributor to the IETF’s OAuth work, expressed his opinion about using fingerprint scanners and other biometric factors in a blog post8 that led to an interesting discussion with John Gruber, creator of Markdown.9 Gruber states that using a fingerprint is still better than using no security (like not locking your phone with a four-digit PIN or a passphrase) or weak security.

Considering the discussion between Bray and Gruber and the fact that the CCC managed to trick fingerprint scanners into unlocking devices, it might be wise to treat biometric factors less as a security mechanism and more as a mechanism to prove identity.

How to Rate Biometric Effectiveness

When handling biometric factors for authentication scenarios, the false-positive rate, also known as the false-acceptance rate, of the mechanism in use is critical. Google requires third-party manufacturers that want to implement fingerprint scanners for Android phones to use an implementation with a false-positive rate no higher than 0.002%.10 False rejection, another confounding factor, leads to user frustration and should be avoided; Google’s guidelines define a maximum false-rejection rate of 10% and a maximum latency between scan and action of 1 second. A third important criterion for securing fingerprint scanning is limiting the number of failed attempts before disabling fingerprint scanning: Apple allows three failed attempts on iOS devices before asking the user to unlock the phone differently, while Google defines a maximum of five tries before disabling fingerprint scanning for at least 30 seconds (per the manufacturer guidelines).

Face Recognition

Facial recognition uses either digital images or videos to identify people. The process extracts and processes a wide array of so-called landmarks and features in order to match profiles. Factors such as the relative position and size of those landmarks are normalized and compared using either geometric (comparing distinguishing features) or photometric (generating statistical values) approaches. Emerging three-dimensional recognition systems have proven to be less sensitive to changes in lighting and can improve recognition by scanning different angles (often at the same time, by stacking multiple sensors on the same chip).

Various banks, such as the national banks of Costa Rica and Ecuador, have announced that they will use facial recognition technology on mobile devices to secure access to banking accounts.11 Alipay, the Alibaba Group’s online payment platform, announced in November 2015 that it will roll out facial recognition to both iOS and Android devices.12 Both examples demonstrate that the finance industry is not fully committed to fingerprint technology and is evaluating other biometric factors on a broader scale.

Retina and Iris Scanning

In a similar fashion to face recognition, retina scans rely on identifying unique patterns. When a person’s eye is observed, its blood vessels can be analyzed to identify the user (Figure 5-4). Even identical twins do not share the same blood-vessel network and therefore cannot circumvent this security concept.13

While both retina and iris scanning use cameras to identify people, the key difference lies in the identification process itself. Whereas retina scans rely on light being absorbed by blood vessels in order to analyze a person’s retina, iris scanning takes an image of an eye that is then analyzed to identify structure. These images can be captured from a distance of 3 to 10 inches and therefore are considered less intrusive than retina scans, which require the user’s eye to be much closer to the scanning device. An iris is estimated to contain 266 unique spots that can be leveraged to determine uniqueness.14

Figure 5-4. Anatomy of an eye (illustration courtesy of the National Eye Institute)

While a person’s retina might change because of temporary or permanent conditions (like diabetes or high blood pressure), the iris supposedly stays the same between the birth and death of a human being.15

Vein Recognition

While fingerprints can be duplicated or otherwise obtained and then reused, a person’s veins are viable for authentication only as long as blood flows through the body. Fujitsu has deployed palm vein recognition across ATMs in Japan and has developed a method that uses the biometric details themselves to encrypt the dataset, thereby removing the need for separate encryption keys.16

Upcoming Standards

When analyzing the current authentication and authorization standards, it becomes quite apparent that the industry has not settled on a common standard. In this section, we present three currently viable contenders with very different focuses and industry backing: the FIDO Alliance, Oz, and the blockchain.

FIDO Alliance

The FIDO (Fast Identity Online) Alliance is an industry alliance of major contributors such as Google, BlackBerry, Microsoft, PayPal, and Lenovo. FIDO provides a scalable identity solution for multiple platforms and covers the three basic principles of authentication—something you have, something you know, something you are—by providing two scenarios: Universal Authentication Framework (UAF) and Universal 2nd Factor (U2F).

Both U2F and UAF are compatible with current federated identity services, such as OpenID and SAML, and with authorization protocols like OAuth.

UAF

UAF was designed with passwordless and multifactor authentication flows in mind. A trust relationship is established by leveraging local mechanisms, such as using microphone input, entering a PIN, or scanning a fingerprint. The beauty of the protocol is that various factors can be combined; this kind of security layering is a concept outlined in Chapter 6.

From a privacy perspective, the FIDO Alliance dictates that only the minimal data needed should be collected and that it be used only for FIDO purposes. User verification is handled locally on the device and does not convey any biometric details to third parties (Figure 5-5).

Figure 5-5. FIDO UAF authentication

U2F

While UAF combines various factors to provide a secure and passwordless solution, U2F augments an existing authentication implementation by adding a second factor. This second factor allows password requirements to be simplified down to a four-digit PIN and takes the form of a device that presents the second factor via USB or NFC. This piece of hardware is usable across all implementing online services, as long as the web browser supports the U2F protocol.

The devices are designed with mainstream adoption in mind: the design principles are minimal and allow for affordable hardware that can be distributed widely. From a security perspective, a secure key will be provided to manufacturers of secure elements and will change with every chipset batch.

U2F was designed with flexibility in mind: multiple people can share one device, and each person can use multiple devices to secure accounts across implementing sites (Figure 5-6).

Figure 5-6. FIDO U2F authentication

U2F utilizes a special registration and authentication message format to communicate with all supporting devices and browsers. Table 5-2 lists the requirements for the authentication message.

Table 5-2. Authentication message format

Parameter              | Description
Control byte           | 0x07 to check if the key handle was created for the provided application parameter, 0x03 if a real signature and user presence is required
Challenge              | SHA-256 hash of client data (stringified JSON)
Application            | SHA-256 hash of the application identifier
Key handle length byte | Defines the length of the following key handle
Key handle             | Provided by the relying party and obtained during registration

In case of successful authentication, the response contains a parameter that indicates user presence, a counter that increments with every successful authentication operation, and a signature computed over the following values (a verification sketch follows the list):

  • Application parameter (32 bytes)

  • User presence byte (1 byte)

  • Counter (4 bytes)

  • Challenge parameter (32 bytes)
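
As a rough illustration of how a relying party could check that signature, the sketch below reassembles the signed data and verifies the ECDSA (P-256, SHA-256) signature with Node’s crypto module. It assumes the device’s public key was converted to PEM form during registration and that all inputs arrive as Buffers; counter bookkeeping and attestation certificate checks are omitted.

// Rough relying-party sketch: rebuild the signed data from a U2F
// authentication response and verify the signature. Assumes publicKeyPem was
// derived from the registration data; all byte inputs are Buffers.
var crypto = require('crypto');

function verifyU2fSignature(applicationParameter, challengeParameter,
                            userPresence, counter, signature, publicKeyPem) {
    var counterBuffer = Buffer.alloc(4);
    counterBuffer.writeUInt32BE(counter, 0);

    // The device signs: application parameter || user presence byte ||
    // counter || challenge parameter
    var signedData = Buffer.concat([
        applicationParameter,        // 32 bytes: SHA-256 hash of the app identifier
        Buffer.from([userPresence]), // 1 byte: 0x01 when the user touched the device
        counterBuffer,               // 4 bytes: signature counter, big-endian
        challengeParameter           // 32 bytes: SHA-256 hash of the client data
    ]);

    // U2F devices sign with ECDSA over the P-256 curve using SHA-256
    return crypto.createVerify('SHA256')
        .update(signedData)
        .verify(publicKeyPem, signature);
}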

Oz

Eran Hammer, known for his contributions to both OAuth 1.0 and 2.0, published a web authorization framework called Oz in September 2015. This framework compiles industry best practices to provide not just a protocol but a concrete implementation that is opinionated about details such as client-side cryptography using HMAC.

This framework does not try to cover all platforms and form factors; rather, it aims to be a viable tool for JavaScript-based applications that want to implement secure authorization.

Oz provides an OAuth 1.0-esque authorization flow and is based on two existing solutions: Hawk, a client-server authorization protocol, and Iron, a tool for encrypting and verifying JavaScript objects. As opposed to OAuth, Oz does not try to handle user authentication; its sole purpose is handling application-to-server authorization scenarios. From an architecture standpoint, Oz is similar to a slimmed-down implementation of OAuth 2 enriched with security best practices.

The Blockchain

Developed to verify Bitcoin transactions, the blockchain is slowly becoming a powerful tool beyond the scope of cryptocurrency and the payment landscape. The idea behind using the blockchain for identity scenarios is simple: a user can store proof of certain attributes—such as first and family name, address, or date of birth—and make the cryptographic hash of these attributes available to anyone who is able to provide the user’s public key. This allows others to verify the information while its authenticity remains ensured. The interesting twist in this concept is the ability to decide which pieces of information to share.

Let’s use the example of a car accident. Somebody scratches our car and wants to provide important information such as insurance details, a contact name, and a phone number. We can only hope that the person gives us the correct details, because we cannot verify anything until it is probably too late. Utilizing the blockchain, we could exchange cryptographic hashes and verify all of the provided information on the spot.
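
As a purely illustrative sketch of the underlying idea (not ShoCard’s or any blockchain’s actual API), the following shows how an attribute can be committed to as a salted hash and later verified against a disclosed value. The attribute name, value, and salt are placeholder assumptions; a real system would anchor the hash in a blockchain transaction.

// Illustrative sketch only: commit to an attribute by publishing a salted
// hash, then verify a disclosed value against that commitment later.
var crypto = require('crypto');

function commitAttribute(name, value, salt) {
    // The salt prevents brute-forcing low-entropy attributes such as birth dates
    return crypto.createHash('sha256')
        .update(name + ':' + value + ':' + salt)
        .digest('hex');
}

// At registration time, only the commitment is made public
var commitment = commitAttribute('insurance_number', 'ABC-123456', 'random-salt');

// Later, the owner discloses value and salt; anyone can recompute and compare
var matches = commitAttribute('insurance_number', 'ABC-123456', 'random-salt') === commitment;
console.log(matches); // true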

A company called ShoCard tries to build upon this concept by providing a consumer-friendly mobile application. All information is stored in the public blockchain data layer and made accessible on demand.

Wrap Up

In this chapter, we explored upcoming standards and technologies that promise simpler authentication flows and better security. The following chapter provides an overview of currently available browser technology, Node modules, and integral server components that vastly enhance security and help us, the developers, build better applications.

1 http://panopticlick.eff.org/browser-uniqueness.pdf

2 http://www.emarketer.com/Article/Smartphone-Users-Worldwide-Will-Total-175-Billion-2014/1010536

3 http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2006/CS/CS-2006-07.pdf

4 http://stackoverflow.com/a/2448531/636579

5 http://code.tutsplus.com/tutorials/ios-sdk-working-with-url-schemes—mobile-6629

6 http://developer.android.com/training/app-links/index.html#web-assoc

7 http://www.ccc.de/en/updates/2013/ccc-breaks-apple-touchid

8 https://www.tbray.org/ongoing/When/201x/2013/09/22/The-Fingerprint-Hack

9 https://twitter.com/gruber/status/381857717406560257

10 https://static.googleusercontent.com/media/source.android.com/en//compatibility/android-cdd.pdf

11 http://www.biometricupdate.com/201601/facephi-facial-recognition-solution-to-authenticate-banco-nacional-of-costa-rica-clients

12 http://findbiometrics.com/alipay-facial-recognition-comes-to-ios-android-212227/

13 http://blog.m2sys.com/biometric-hardware/iris-recognition-vs-retina-scanning-what-are-the-differences/

14 http://www.globalsecurity.org/security/systems/biometrics-eye_scan.htm

15 http://blog.m2sys.com/biometric-hardware/iris-recognition-vs-retina-scanning-what-are-the-differences

16 http://www.biometricupdate.com/201510/fujitsu-laboratories-develops-method-to-convert-biometric-data-into-cryptographic-key
