CHAPTER 1

Introduction

1.1 WHAT IS A MOBILE WIRELESS APPLICATION?

I will start at the beginning with a working description of the term I will be using throughout the rest of this work: mobile wireless application.

Mobile refers to the intent: the devices are portable, often lightweight, and move around, often carried by their user. The devices are generally powered by a small battery, which implies a tradeoff between power, functionality, and battery life.

Wireless devices communicate with other devices without physical wires or cables.

Application refers to the software used by the user on the device. The application may be written to run on the mobile device or take advantage of existing software on the device such as a web browser.

The term encompasses applications for mobile phones and handheld devices that communicate over wireless networks. This book also covers testing of some aspects of the servers that support the mobile wireless applications.

Mobile wireless applications include communications over-the-air (OTA) between the mobile wireless device and servers. Connections are mainly over external mobile phone networks, although WiFi is an option on newer devices.

A brief introduction to mobile network history and terminology is available at http://umtsmon.sourceforge.net/docs/terminology.shtml.

1.2 CLASSIFICATIONS OF MOBILE WIRELESS APPLICATIONS

As I have gained experience with mobile wireless applications, I have come to identify ways to group similar types of application using an informal classification, which has helped me to understand their similarities and differences.

  • Client applications, split into two groups: native and portable;
  • Messaging applications; and
  • Browser applications, also split into two groups: markup and AJAX applications.

I will be using these terms throughout the rest of the document. I encourage you to adapt the classification scheme to suit your needs.

1.2.1 Client Applications

Client applications are installed on a mobile device and run on that device.

The application may be written to look and feel like a native application for specific phone models. A native application should behave and look like an integrated part of the installed phone software. Generally, custom compilers and tools are needed to build native software specifically for those phone models.

Portable applications are generally able to run with few changes across a wide range of phone models and manufacturers. The user interface is not as well integrated with any individual phone model and the software may not be able to take advantage of all the features provided by particular phone models.

1.2.2 Messaging Applications

Current messaging applications use SMS messages as the communications medium. Typically the user can use the standard “text messaging” feature provided with the phone.

A single SMS message contains between 70 and 160 characters depending on how the characters are encoded. The protocol has been extended to send longer messages (http://en.wikipedia.org/wiki/SMS).
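As a rough sketch of how those limits play out—assuming the common values of 160 characters for GSM 7-bit text and 70 for UCS-2, dropping to 153 and 67 once a concatenation header is added, and simplifying by treating any non-ASCII character as forcing UCS-2—a helper might estimate the number of segments a message needs:

```python
# Sketch: estimate SMS segment count. The limits below are the commonly
# quoted values; the ASCII test is a simplification, as the real GSM 03.38
# alphabet differs slightly from ASCII.
GSM_SINGLE, GSM_MULTI = 160, 153
UCS2_SINGLE, UCS2_MULTI = 70, 67

def needs_ucs2(text):
    # Simplification: any non-ASCII character forces UCS-2 encoding.
    return any(ord(c) > 127 for c in text)

def segment_count(text):
    if needs_ucs2(text):
        single, multi = UCS2_SINGLE, UCS2_MULTI
    else:
        single, multi = GSM_SINGLE, GSM_MULTI
    if len(text) <= single:
        return 1
    return -(-len(text) // multi)  # ceiling division across segments
```

For example, a 161-character ASCII message would need two segments under these assumptions.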

The servers need to receive and respond to SMS messages according to the relevant specifications. The messages are packed and need to be decoded before being used. Virtually every mobile phone includes full support for SMS messaging, and manufacturers provide SMS software libraries if you want to incorporate SMS communication into a custom application.
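The "packing" mentioned here refers to the GSM 7-bit encoding, where eight 7-bit characters are squeezed into seven octets. The sketch below handles only ASCII text and ignores the full GSM 03.38 alphabet and its escape sequences, but shows the bit manipulation involved:

```python
def pack_7bit(text):
    """Pack 7-bit character codes into octets, least significant bits first."""
    bits, nbits, out = 0, 0, bytearray()
    for c in text:
        bits |= (ord(c) & 0x7F) << nbits
        nbits += 7
        while nbits >= 8:
            out.append(bits & 0xFF)
            bits >>= 8
            nbits -= 8
    if nbits:
        out.append(bits & 0xFF)  # flush any remaining bits
    return bytes(out)

def unpack_7bit(data, count):
    """Recover `count` 7-bit characters from packed octets."""
    bits, nbits, chars = 0, 0, []
    for b in data:
        bits |= b << nbits
        nbits += 8
        while nbits >= 7 and len(chars) < count:
            chars.append(chr(bits & 0x7F))
            bits >>= 7
            nbits -= 7
    return "".join(chars)
```

Packing "hello" this way yields the five octets E8 32 9B FD 06, a classic worked example for the GSM encoding.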

1.2.3 Browser Applications

Browser applications are server-based applications that can be accessed through a web browser via a URL from a mobile device. There are a variety of web-based markup languages, dictated by the capabilities of the web browsers for different geographic regions, etc. Mobile web browsers are less flexible or capable than desktop web browsers, e.g., they are unlikely to support extensions and media players (such as Flash).

Markup applications are generated and run within the server. The client displays, or renders, the pages generated by the server and provides basic user-interaction. User input is sent by the browser to the server for processing.

Modern mobile web browsers are beginning to have support for AJAX applications—JavaScript that runs within the web browser on the client, and enables developers and designers to create richer applications. The JavaScript often modifies the page content within the browser, and interacts directly with the server.

1.2.4 The Supporting Servers

The servers include varying degrees of customization for mobile wireless applications. For instance, web servers can detect requests from mobile devices and tailor content accordingly. The customization helps to provide content specialized to mobile constraints, such as limited bandwidth and the small screens and fiddly keyboards on many devices.

For client applications, the servers tend to offer a message-based protocol. Some protocols are based on the ubiquitous HTTP web protocol. Others carry audio and video content (using protocols such as RTP), messages (using protocols such as RSS), etc.

Servers for browser applications need to provide content that meets the needs and limitations of the device's browser. They detect the device and browser by matching various protocol headers in the HTTP requests (e.g., the user-agent string) and endeavor to return appropriate content. The content may need to be pared down to meet limitations of size and complexity, and the format needs to match the markup language used by the browser (e.g., Wireless Markup Language [WML] for older phones).
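The header-matching step can be sketched as a simple lookup. The tokens and mapping below are purely illustrative; production servers typically consult a device database such as WURFL or DeviceAtlas rather than a handful of substring rules:

```python
# Sketch: choose a markup family from the User-Agent header.
# The substrings and targets are illustrative assumptions, not real rules.
RULES = [
    ("iPhone", "html"),
    ("Android", "html"),
    ("DoCoMo", "chtml"),     # i-mode devices
    ("Nokia", "xhtml-mp"),   # XHTML Mobile Profile
    ("UP.Browser", "wml"),   # older WAP browsers
]

def markup_for(user_agent):
    for token, markup in RULES:
        if token in user_agent:
            return markup
    return "html"  # fall back to full HTML for unrecognized browsers
```

A test suite can exercise the same lookup by replaying captured user-agent strings and checking which markup each device would be served.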

1.2.5 Things That Do Not Quite Fit

Some technologies do not quite fit my classifications. For instance, Multimedia Messaging Service (MMS) is another messaging service, supported directly by many smartphones; however, unlike SMS it uses HTTP requests and responses (http://en.wikipedia.org/wiki/Multimedia_Messaging_Service).

RSS "feeds" deliver messages and updates from a server to clients that subscribe to the feed. RSS is similar to a browser application, but uses another piece of software to display the content. There are several versions and interpretations of RSS (see http://en.wikipedia.org/wiki/Rss for more information).

1.3 CURRENTLY OUTSIDE THE SCOPE OF THIS BOOK

This book does not cover:

 

  • Testing of the physical devices, their operating system, or of the platform (except where it affects applications that run on that platform);
  • Automated test suites to certify the run time platform, such as Java Certification;
  • Testing the internals of the base stations and carrier networks. However, we touch on these topics where they can materially affect the performance of mobile wireless applications; and
  • Embedded devices or technologies I do not yet know about. However, many of the principles and techniques may be relevant.

 

If you would like to contribute ideas, experiences, and material please contact me—I would be happy to incorporate relevant work and acknowledge your contributions.

1.4 SCOPE OF MOBILE WIRELESS TEST AUTOMATION

My scope is fairly broad: it ranges from unit testing to system and field testing, and includes anything that helps to partially or fully automate the testing of mobile wireless applications.

1.5 CHALLENGES IN TESTING MOBILE WIRELESS APPLICATIONS

We face various challenges inherent to testing mobile wireless applications ranging from practical limitations, to tedious, mundane tasks, to understanding what factors and issues affect the results of our testing (Figure 1.1).

Trying to test using all possible devices is impractical. Trying to multiply that testing across the rest of the factors (e.g., network operator, different versions of the underlying software, etc.) exacerbates the problem. Even the first stage of configuring handsets—so we can run and test an application—is error-prone and time-consuming.

images

FIGURE 1.1: Testing challenges.

The software installed on phones is constrained by various tradeoffs and decisions made by the provider of the phone. The provider may have customized the software installed on the phone to change the default behavior. Neither the original software nor the changes are well documented.

Detecting rendering, or display, issues generally needs someone to look at the content on the screen. As the User Interface (UI) code can be up to half of all the application's code, and as the UI is such an important factor for most mobile applications, the need for human involvement needs to be factored into much of our test automation.

Some factors that could affect the test results may be outside our direct control. They may be hard even to identify, and therefore even harder to measure. When we do test, accurate test data may be hard to obtain, and there are numerous gaps and contradictions in the data we do have, which we need to sift through to determine the key issues and their likely impact.

Measuring performance of mobile applications is an imperfect art, and particularly error-prone when trying to obtain consistent, accurate results.
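One way to tame that error-proneness is to repeat each measurement several times and report a robust statistic such as the median, alongside the spread, rather than trusting a single run. A generic sketch (the operation being timed is a placeholder):

```python
import time
import statistics

def measure(operation, runs=5):
    """Time `operation` several times; return (median, min, max) in seconds.
    The median resists occasional outliers (GC pauses, network retries, etc.)
    better than a single reading or a plain average."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings), min(timings), max(timings)
```

Reporting the minimum and maximum alongside the median makes inconsistent runs visible instead of silently averaging them away.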

1.6 PROBLEM SPACE

The world of mobile phones is complex: the device is frequently provided by the network operator as part of a service. The software on these devices is often customized by the operator, and the changes may significantly affect the user interface and the functionality. For instance, some operators disabled the "Voice Over IP" (VoIP) feature on Nokia's very popular N95 handset to prevent their users from using this service.

There are hundreds of network operators, each with multiple Internet price plans; particular price plans offer or prohibit particular services and have specific network configurations.

There are also hundreds of models of handsets, each of which may have many variants—e.g., firmware from a particular network operator—giving thousands of possible variations.

Combinations of price plan, network configuration, and phone firmware may limit or even disable part or all of an application.

The choice of handset also affects the runtime environment (e.g., some support Java ME and others support C++ programs). The preinstalled web browser(s) on a handset also determine which markup language(s) the server needs to use. While modern devices often support relatively complete xHTML or HTML markup, older devices might use more limited markup languages such as WML, C-HTML, or i-mode.

When testing application software we need to consider:

 

  • The human languages (e.g., French, Kanji);
  • The locales (e.g., UK English, Australian, and US English), which affect things like formatting numbers and currency symbols;
  • Who pays for updates to be downloaded (users may be unwilling to pay to download updates OTA);
  • How the software is installed on the device (e.g., in terms of security permissions); and
  • The number of applications and versions you need to support in parallel.
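To illustrate the locale point: the same amount renders quite differently per locale. Rather than rely on system locale data (which varies by machine), the sketch below uses a tiny, hand-written table of conventions—real applications should use the platform's locale/i18n libraries, and real conventions also vary symbol placement and spacing:

```python
# Illustrative locale table: (currency symbol, thousands sep, decimal sep).
# These three entries are assumptions for demonstration only.
LOCALES = {
    "en_US": ("$", ",", "."),
    "en_GB": ("\u00a3", ",", "."),   # £
    "de_DE": ("\u20ac", ".", ","),   # € — note the swapped separators
}

def format_currency(amount, locale):
    symbol, thousands, decimal = LOCALES[locale]
    whole, cents = divmod(round(amount * 100), 100)
    grouped = f"{whole:,}".replace(",", thousands)
    return f"{symbol}{grouped}{decimal}{cents:02d}"
```

A test that only checks the US formatting would miss the swapped German separators entirely, which is exactly why locale belongs on the checklist above.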

 

Finally, there is the vital topic of what testing resources you have available to test each version and release of an application, and to decide how best to spend that time—e.g., which handsets from which network operators should you test with? We will cover the testing focus in more detail shortly.

The following diagram provides an overview of the problem domain.

images

1.6.1 Transcoding Web Content

Some content is unsuitable for devices—e.g., it may be too complex or contain images in a format not supported on a device. Google and other companies transcode content in order to make it more suitable for mobile devices. For example, Google's mobile search transcodes results by default for many mobile devices (and offers users the ability to view the original page if they prefer). Some carriers also transcode web content to do a similar job.

Here is a diagram of how a transcoder converts a graphical static web page to suit a generic web browser.

images

Essentially a transcoder acts as an intermediary which interprets the HTTP requests and processes the HTTP responses. In the requests it typically examines things like the user-agent string (to recognize which device is making the request); in the responses it examines things like the content type and content length to determine whether content should be converted on-the-fly (dynamically) or whether it is appropriate to pass it through unchanged.
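That decision logic can be sketched as a small function. The header names are standard HTTP, but the device tokens and the size threshold below are illustrative assumptions rather than any real transcoder's rules:

```python
# Illustrative tokens and threshold — not any particular transcoder's rules.
MOBILE_TOKENS = ("Nokia", "iPhone", "Android", "BlackBerry", "Opera Mini")
MAX_PASSTHROUGH_BYTES = 20_000  # assumed size limit for small-screen devices

def is_mobile(request_headers):
    ua = request_headers.get("User-Agent", "")
    return any(token in ua for token in MOBILE_TOKENS)

def should_transcode(request_headers, response_headers):
    """Decide whether to convert a response on-the-fly or pass it through."""
    if not is_mobile(request_headers):
        return False  # desktop browsers get the original content
    ctype = response_headers.get("Content-Type", "")
    length = int(response_headers.get("Content-Length", "0"))
    # Convert large HTML pages; pass small pages and non-HTML content unchanged.
    return ctype.startswith("text/html") and length > MAX_PASSTHROUGH_BYTES
```

When testing through a transcoder, functions like these are the behavior under test: the same page request should produce different responses depending on who appears to be asking.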

1.7 OUR TESTING FOCUS

Given the vast problem space and our typically severely constrained resources, we need to focus our testing if we are to be effective. When automation is used appropriately we can be significantly more effective and reduce the overall time needed to test each software release. Some types of applications can be automated relatively easily and successfully, while others are more challenging (e.g., client applications). Finally some aspects are better tested manually—e.g., to assess the rendering of the UI on actual devices.

For applications that run on a range of devices, where there are lots of variations between devices and where upgrades can be expensive or difficult, we first want to focus on finding and addressing problems that would prevent users from being able to use the application on their device. These problems include:

 

  • Finding incompatibilities ranging from not installing to poor rendering;
  • Discovering and working around limitations in the software on the device, including browser issues, J2ME bugs, etc.; and
  • Detecting content or behavior that may adversely affect the behavior of the device (e.g., where a large web page may not be shown at all on some devices).

 

Once we have tested for these issues the next step is determining whether users get the most technically suitable content for their device. For instance, some smartphones from Nokia support both Java ME and C++, and they may have several web browsers installed (e.g., one that supports xHTML and another that supports HTML). C++ applications tend to be faster and take better advantage of the features of the smartphone, but they have to be "trusted" both by the user and often by the network operator, who may prohibit unapproved software from being installed or used.

Pick a representative subset of the set of all the intended devices. I suggest you slice the set in various ways to increase the chances of finding meaningful bugs.

 

  • Pick some of the most popular models and for these pick a model with the most popular version of the manufacturer's firmware (another term for the preinstalled software). For example, a Nokia N95, an iPhone with version 2.1 of the operating system, and a T-Mobile G1 with Cupcake installed would represent a significant subset of the set of devices with capable web browsers.
  • Pick one model from a set of similar models—e.g., for the older Nokia Series 60 second edition devices, an N6680 or an N70 is a good representative of the rest of the range. They have similar web browsers and Java ME runtimes, and support the same C++ applications.

 

For all our applications we want our users to like using them. After all, unless we have a monopoly (e.g., for internal company applications), our users have plenty of alternatives available. Here we focus on:

 

  • Usability, the wow factor, etc.; and
  • Performance, which is an umbrella term that includes: a user's perception of responsiveness, client-side rendering, OTA transmission times, and server-side timings.

 

Test design helps us to increase the effectiveness of each test, and the test coverage, without testing every possible permutation! Thankfully we can adopt existing techniques and good practices from elsewhere in the software testing communities. For example, we can use combination testing techniques to select our test cases and use exploratory testing techniques to help guide our testing.

1.8 OUR GOALS WHEN TESTING

Testing our software is a “means to an end”—part of the journey rather than the ultimate goal. However, if we have clear, measurable goals then we can keep track of how well we are doing and whether our testing is useful for the applications we are testing.

Here are some of the goals I have used over the years; they may help you identify goals that suit you and your work.

 

  • To ensure we deliver attractive, easy-to-use, working applications for as many users as practical. Lots of happy, frequent users help show our software is successful and useful.
  • To have justified confidence in the quality of our software. Providing accurate information on the quality of software is an important aspect of software testing. When we test well, and communicate the results so other people understand the strengths, weaknesses, risks, etc., with releasing our application, there should be few surprises after deploying the software to our users. Ideally, most of the bugs would be found and fixed before the software is widely used.
  • Fast feedback to developers. Fast feedback helps them to fix the code while it is still “warm,” while they are still intimately familiar with it.
  • To quickly detect issues so they can be addressed. This is particularly relevant when the problem is related to external factors (e.g., an operator's network configuration or a specific handset model). Note: We tend to make changes to our software, as that is the fastest way to fix the issue from the user's perspective. We can then work with the relevant third parties to address the underlying issues in a more considered fashion.

 

For each of your goals, try to find ways to collect useful metrics (e.g., the number of bugs found in testing compared to the number reported by users). Dorothy Graham coined the term Defect Detection Percentage to measure these bugs—more information is available on her blog, http://dorothygraham.blogspot.com/.
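Defect Detection Percentage is straightforward to compute: the share of all known defects that testing, rather than users, found. As a sketch:

```python
def defect_detection_percentage(found_in_testing, found_by_users):
    """DDP = defects found by testing / total known defects * 100.
    A rising DDP over successive releases suggests testing is catching
    a larger share of the bugs before users do."""
    total = found_in_testing + found_by_users
    if total == 0:
        return None  # no data yet
    return 100.0 * found_in_testing / total
```

So a release where testing found 90 bugs and users later reported 10 more has a DDP of 90%. The denominator grows as users report bugs, so the figure is only meaningful some time after release.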

1.9 OUR OVERALL TESTING STRATEGY

Start by underpinning manual testing with automated testing. As manual testing is very time-consuming, and often limited to testing through the limited user-interface of the mobile device, find ways to automate the testing of parts of the application code and the system. For example, use automated unit tests to test the business logic and the communications libraries of the client application. And automate the testing of the client-server protocols and interfaces by using custom clients, or independent automated tests run from desktop computers, to send messages and inspect the received messages.

Rely on existing test automation tools and libraries if they exist: e.g., J2MEUnit for Java ME applications, and WebDriver for testing AJAX applications. In some cases there is no suitable tool, so you may decide to create one if you have the time, skills and resources. For instance we ended up creating JInjector so we could automate system testing and generate code coverage for our Java ME applications, and the IPhoneDriver for WebDriver. Both these tools are open-source and available free of charge. We have found “open-sourcing” our tools to be useful both for us and for the wider testing communities. We get their feedback and support, and they are able to use and extend our work.

Sometimes problems can be isolated and fixed sooner by splitting the code apart—e.g., by replacing the UI with a text equivalent (sometimes known as a "headless" version). While doing so may seem like extra work, debugging tools for mobile devices are generally less sophisticated than those for server code, so a headless version can make problems easier to isolate. Also, once you have a headless version, the tests should be able to run without (much) human involvement, unlike testing through the UI.

Consider reducing problems to their essential details, to divide-and-conquer issues. Note: mobile client code may be less elegant than equivalent server code, partly owing to restrictions imposed by the development platform and libraries, and partly because developers want to optimize to reduce size and increase speed of the application. Consider testing the servers in isolation, testing by using protocol emulators, testing locally on the device, etc.

Automate more of the build and deployment processes in order to accelerate and streamline the testing. Another benefit is that automated processes help to reduce the risk of human error in the deployment.

Seek ways to automate more of the end-to-end on-device testing, both to reduce the need for manual testing and to help identify device-specific issues cost-effectively.

Seek also to provide effective test output to reduce the effort required to identify and address problems. As mobile applications may have very restricted reporting capabilities—e.g., when running within the Java ME “sandbox”, where applications need express permission to write to the filesystem—consider writing the results from the client to a server using HTTP, MMS, or even SMS messages.

One of my strategies, and one reason this book was written, is to allow other teams and groups to automatically test their software so that I can then get out of the way and leave them to it!

Finally for this section, do not be afraid to seek some quick wins as well as trying to address longer-term automation goals. In terms of testing mobile applications, some seemingly simple tools can significantly improve our effectiveness. These tools include:

 

  • User-agent capture tools;
  • Using SMS messages to send test URLs and download links to devices;
  • Screen-capture tools; and
  • Using “contact-sheets” that collect many screenshots into a single display, which enable lots of screenshots to be reviewed quickly.

 

All these tools are covered later in this book. You are welcome to add to the list and tell me about your favorite tips and tools.

1.10 CORE CONCEPTS

There are some core concepts which underpin our approach to testing mobile wireless applications. Let us start with connectivity.

 

  • GSM and CDMA networks provide support for data.
  • Phones include a GPRS (etc.) modem which provides the underlying connection. Web browsers on the phones (or applications that use the internet for communications) use the data connection.
  • We can either use phones or dedicated modems (which are also known as data cards) to establish a similar connection for testing.
  • This material concentrates on HTTP connections that underpin the majority of connections between mobile phones and servers.

 

To help you understand how HTTP is used for connectivity, the first chapter on automation, "Testing techniques for markup applications," starts at a relatively basic level and builds the test code in small discrete steps until a basic test is created for a search page. Here is an overview of what that chapter includes:

 

  • Sending an HTTP request and receiving the response;
  • Analyzing the request and response;
  • Device emulation, starting by adding a user-agent setting and then adding more HTTP headers until we manage to convince the server that our tests should be treated as that device; and
  • HTTP + Device Emulation + Content Validation.
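Those steps can be previewed in miniature. The sketch below builds a request that presents itself as a handset browser (the header values are illustrative, not a real device's exact strings) and validates a canned response body, without touching the network:

```python
import urllib.request

# Headers a device-emulating test might send; the values are illustrative.
DEVICE_HEADERS = {
    "User-Agent": "NokiaN95/31.0.015; Series60/3.1 Profile/MIDP-2.0",
    "Accept": "application/xhtml+xml,text/html",
}

def build_request(url):
    return urllib.request.Request(url, headers=DEVICE_HEADERS)

def validate_search_page(body):
    """Content validation: check the markup contains the elements a
    search page must render on the device. Returns the missing checks."""
    checks = ["<form", 'name="q"', "</html>"]
    return [c for c in checks if c not in body]  # empty list == page looks OK

# To run against a live server:
#   urllib.request.urlopen(build_request("http://example.com/")).read()
```

The real chapter grows each of these pieces step by step; the point here is only that emulation is just headers, and validation is just assertions about the returned markup.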

 

Subsequent chapters include code snippets to highlight specific aspects or topics of test automation. A mix of programming languages are used, typically the same as would be used to develop the respective application.

Terms such as emulate and emulation are used throughout this material. Emulation means our software pretends to be the real device (e.g., our tests can send information which the server uses to determine whether the request is from a mobile device). Emulators are also provided by software vendors which behave sufficiently closely to real devices to enable our applications to run on our computers.

images
