Chapter 13

Gearing up for Your Product Launch: The Qualify Phase

IN THIS CHAPTER

  • Discovering how success is built on qualifying your product
  • Running a successful beta test program
  • Deciding to ship the product

You and your team have been working hard on creating a great product. You identified the target personas, chose the best opportunity, prioritized the features, made challenging trade-offs, and have been in development for quite some time. The next step in the Optimal Product Process is to prepare for running a beta program, getting the near-finished product into customers’ hands, and ensuring that it’s ready to be released to a wide range of customers.

Getting Up to Speed on the Qualify Phase

Many times, you may be tempted to skip or shortcut this phase because of schedule and time constraints, but don’t make this mistake. Use this phase to be absolutely sure the product meets the quality level you need. Then you can be certain that the features and benefits it provides are more than adequate to justify customers paying for it. Beta testing (along with your own internal quality assurance and testing efforts) lets you know that the product is ready to shine when released to customers — or that you have more work to do.

Tip: Alpha refers to the first phase of internal testing, after some of the features are in place and the quality assurance team can begin testing for problems and bugs. Beta refers to the phase when the product has all its promised features complete and the internal quality assurance team, along with external testers, is looking only for areas where the product doesn’t function correctly. Don’t start testing with external users during the alpha phase; the product will generally be too unreliable, and the testers will get frustrated.

Ensuring internal and external quality validation

As you go through development, your internal quality assurance team develops a testing plan and tests the product. As the product manager, you should read and approve the plan to make sure quality assurance is testing the most common customer scenarios. Make sure that the plan includes a rigorous variety of common scenarios and setups that the target customer is likely to have in their environment. For example, for a web software product, make sure your internal testing team tests on a variety of operating systems and browsers that are representative of what your customers use.

Rigorous internal testing will find many of the problems with your product. So why test externally at all? Because even with the best possible internal quality assurance engineers and testing, real-world customers will use the product in ways that internal testing can’t anticipate.

Figure 13-1 helps you determine how great a need your product has for a beta program by weighing the potential risk that post-release bugs pose to the customer against the difficulty of fixing them. For some products, such as consumer web software where the risk to the customer is low, you may want to run an open beta program for anyone. Google popularized this approach years ago as a way to release products without the company having to be responsible for bugs, problems, or things that didn’t work as advertised. In fact, many Google applications have never left beta.

© 2017, 280 Group LLC. All Rights Reserved.

FIGURE 13-1: Difficulty of change versus risk.

For an application where the risk to the customer is low or for products that can easily be changed and updated rapidly after customers begin using them, the strategy of running a large open beta for anyone can be viable. For products that are mission-critical for customers or for products that can’t easily be changed — such as hardware, physical products, financial services software, or other applications that deal with sensitive data — it may not be a viable approach.

Warning: For all types of products, the risk you run by not holding an adequate beta program is that you may damage your brand’s reputation with customers and the market if the product isn’t solid. Even if it’s a web-based application, customers who have a negative experience may never be willing to use it again. Too many companies have put out products too early without adequate beta validation, and they simply could never recover.

Creating a beta plan

The beta plan is critical: it forces you to plan well in advance and helps ensure that the beta program based on it achieves the desired results. The Product Management LifeCycle Toolkit (included as a free download with this book; see the Introduction, page 4, for details) includes a beta plan template you can use in creating your plan.

Beta programs take a lot of time and effort to execute effectively. Table 13-1 suggests a timeline for each part of your beta program. Plan on 9 to 12 weeks from when you start planning the program, and make sure you have an employee or contractor dedicated at least half-time to running it. Planning, recruiting customers, gathering feedback and suggestions, running an exit survey, and tallying the results to determine whether the product is ready to be released are all time-consuming yet critical activities that need to be covered.

TABLE 13-1 Beta Program Timeline

Task                                                          Amount of Time
Set goals, write plan, and sign off                           1 week
Recruit and receive applications                              3 weeks
Select, notify, and send agreement                            1 week
Run program                                                   3–6 weeks
Conduct exit survey, tally results, and write final report    1 week
TOTAL                                                         9–12 weeks

Dodging typical beta testing mistakes

The most common mistakes made in the beta program are as follows:

  • Not building enough time into the schedule to do adequate beta testing with real-world customers: Development almost always takes far longer than expected, and the company is anxious to release the product on time. As a result, the time set aside for beta testing is often reduced dramatically at the very end, resulting in an ineffective (sometimes nonexistent) beta program.
  • Choosing beta testing customers that don’t represent the actual customers and personas: Friends and family often beta test. Though these people can provide some good feedback, don’t assume that they’re representative of how your actual customers will use the product or of the kind of environment customers will use it in on a day-to-day basis.
  • Running a beta program without predefined goals and metrics: If you don’t establish the goals and metrics before running the program, you won’t be able to determine whether the product is actually ready to release. Having concrete metrics in place also makes it far easier to resist the pressure to cut corners at the very end of development.
  • Underestimating how difficult recruiting participants for the program will be: Many times, companies incorrectly believe that finding customers who are willing to spend time beta testing the product will be easy. In real life, finding people who are dedicated and willing to put in the time and effort required to provide useful feedback is often far more difficult.

Putting a Beta Program in Place

Once you have a comprehensive beta plan ready, you can execute your beta program. Running a thorough beta program requires a lot of time and effort, but the feedback you get from customers letting you know whether the product is ready for prime time makes it well worth the investment.

Setting appropriate goals

Make sure you define your goals upfront and early. What are you really trying to accomplish? Do you want a broad cross-section of customers to spend extensive amounts of time using the product to prove that it’s ready to ship? Or is your existing quality assurance (QA) work extensive, and you just need a few customers to validate readiness with some hands-on customer usage? Do you want to gather early feedback for the next version? Or do you just want to find a group of customers willing to talk to the press or provide you with quotes or testimonials?

Making your goals concrete

Remember: The earlier you set your goals and the more concrete they are, the better. When you don’t define success upfront, it’s difficult to create your overall plan and to know whether you have reached your goals. In addition, if the testing shows that the product isn’t ready to ship, you need concrete goals that allow you to push back on the team and delay the launch.

For example, your goal may be that a certain number of customers must have installed the product, used it successfully for N days (or N hours, or N times), and found no major crashing bugs. One goal should be that when you survey the beta testers at the end of the program, a certain percentage of them (for example, 90 percent) indicate they believe the product is ready to ship. Flip to the later section “Exit surveys” for details on the survey part of the process.

Other goals may include finding three to five customers who are willing to give you quotes and talk to the press if reporters ask for references. A real bonus is if a customer is willing to create a blog post for you or recommend the product on your website or in social media. You may want to set a goal that the number of bug reports must show a decreasing trend, down to an acceptable level, before the product can be declared ready to ship. Obviously, you can use a number of metrics.
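If you track these metrics as the program runs, the end-of-program check becomes mechanical. The following is a minimal sketch in Python; the function name, the 90 percent threshold, and the sample numbers are all hypothetical and shown only to illustrate the idea of agreeing on numeric goals upfront:

# Illustrative only: checks example ship-readiness goals against
# hypothetical beta-program data. The thresholds are assumptions, not rules.

def ready_to_ship(survey_responses, weekly_bug_counts,
                  ready_threshold=0.90, max_final_week_bugs=5):
    """survey_responses: list of booleans answering "Is the product ready to ship?"
    weekly_bug_counts: bugs reported per week, in chronological order."""
    pct_ready = sum(survey_responses) / len(survey_responses)
    bugs_trending_down = all(
        later <= earlier
        for earlier, later in zip(weekly_bug_counts, weekly_bug_counts[1:])
    )
    return (pct_ready >= ready_threshold
            and bugs_trending_down
            and weekly_bug_counts[-1] <= max_final_week_bugs)

# Example: 9 of 10 testers say "ready," and bug reports decline week over week.
print(ready_to_ship([True] * 9 + [False], [12, 8, 5, 3]))  # prints True

The point isn’t the code itself; it’s that every input to the ship decision is a number you agreed on before the program started.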

Recruiting participants

Your success at recruiting customers depends on a number of factors, including whether it is a brand new and unknown product or an upgrade from a previous version, how well known and prominent your company is, and how popular your products are. For example, Apple could probably find iPhone beta testers pretty easily. Other factors include how easy finding your target customers is and how much effort and time you’re asking them to put in.

If you have an enterprise software application from a new, completely unknown start-up company that will require the beta customers to go through a lengthy installation (or worse yet, installation may affect their other mission-critical systems), you may have a difficult time recruiting. On the other hand, if you have a consumer application that takes little time to install or try out and provides immediate benefits, you may find recruitment easier.

Depending on all these factors, you need to gauge what kind of recruiting program you need to do and how strong the incentives need to be (see the section “Incentives” later in this chapter). You may need to make personal phone calls and visits in order to convince customers to participate. Or you may be able to use email or even a form on your website.

Sources for recruiting include the following:

  • Current customers
  • Prospective customers that didn’t purchase before
  • Venture capitalists and investors who can refer you to representative customers
  • Your personal network (but be careful using friends and family; see the earlier section “Dodging typical beta testing mistakes”)
  • Your sales force and leads
  • Advertisements (online, in the local newspaper, and so on)

Incentives

In terms of incentives, you can offer participants many things. Certainly the “Help us improve the product” angle works to some degree. If you’re lucky and have a fanatical user base, you may actually have to fight off customers wanting to participate. You may also want to offer free or reduced pricing or upgrades. Another option is to have a contest to keep users motivated. For example, you can run a program called the “Great Bug Hunt” where for each valid bug submission the beta tester receives an entry into a drawing for a tablet or some other prize. This approach not only encourages testers to sign up but also gets them enthused to continue using the product throughout the course of the program.

Response rates

You need to contact customers to get them to participate. The response rate you get in terms of participation and actual usage for a beta program varies widely based on a number of factors, including these:

  • Popularity of the product
  • Whether the product is completely new and unproven
  • Who your company is (well known or unheard of)
  • How personal and compelling your recruiting approach is
  • How stringent you are at selecting customers that fit your profile
  • Whether your product will affect mission-critical systems of the customer
  • How much time and effort you’re asking the customer to put in

Just how many customers do you need to contact to get an adequate number that agree to be part of the program? More importantly, how many will actually end up testing the product and doing what they’ve promised for the duration of the program? To give you an idea of the range of actual participation you can expect, here are some numbers from actual beta programs that the 280 Group has run for clients.

For an existing (versus brand new) product that is very popular, is low-risk to install and use, and doesn’t take a lot of time and effort, you may be able to get away with contacting only 25 people initially. If your targeting was accurate, expect that 20 of the 25 would likely fit your criteria and 15 may sign up. Of these, you can expect 8 to 10 to actually use the product enough to give you some valid beta feedback. Your success rate is roughly 30 to 40 percent of the initial number contacted.

On the flip side, for a brand new unknown product from an unknown start-up that has a high risk and time commitment for installation, you may contact 100 customers initially, find 40 that are interested, have 20 sign up, and in the end have 5 to 8 that are actually active. And that’s a success rate of roughly 5 to 8 percent of the initial number contacted.
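If you want to work backward from the number of active testers you need, a quick funnel calculation helps you size the initial outreach. Here’s a minimal sketch, assuming hypothetical conversion rates at each stage; substitute your own history if you have it:

# Illustrative only: estimates how many customers to contact, given
# assumed conversion rates at each stage of the beta recruiting funnel.
import math

def contacts_needed(target_active_testers, fit_rate, signup_rate, active_rate):
    """Each rate is a fraction of the previous stage: for example, 0.8 of
    contacts fit the profile, 0.75 of those sign up, 0.6 of those stay active."""
    overall_yield = fit_rate * signup_rate * active_rate
    return math.ceil(target_active_testers / overall_yield)

# Close to the popular-product example above: to end up with about 9 active
# testers at a roughly 36 percent overall yield, contact about 25 customers.
print(contacts_needed(9, fit_rate=0.8, signup_rate=0.75, active_rate=0.6))  # prints 25

For the unknown-start-up example, plug in much lower rates and the required outreach quickly climbs past 100 contacts.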

Beta agreements

When you’ve got participants on board, you’ll want to have them sign a beta agreement. It doesn’t have to be a formal agreement, though recognize that in bigger companies (either yours or theirs) you may be forced to use an actual contract. Focus on clearly stating what the commitments and expectations are, including maintaining confidentiality until the product is released publicly. Make it as simple as possible. Include details such as the length of the program, incentives and rewards for participation, responsibilities of participants, expected amount of usage, and support that will be provided. Getting participants to actually sign an agreement makes their commitment much more real. After they’ve signed, you have a higher probability that they’ll actually use the product and provide feedback.

Kicking off the program

After you’ve lined up all of your participants and gotten them to sign a beta agreement, you’re ready to kick off the program. The most important thing at this point is to do everything you can to avoid a false start. If the participants have a bad first experience with the product, your chances of getting them to continue putting in effort are much lower.

To avoid a false start, make sure you’ve agreed with your team about what the criteria are for starting the program. For example, you may all agree that the program can’t begin until all fatal/crashing bugs have been fixed or until N users have been running the product internally with no major problems for at least a week. And you may agree not to start the program until all participants are signed up.

The other approach you may want to use is to deploy with one or two participants who are more technical and/or whom you know better than the others. You may even want to go to their company if they’re local and watch them get up and running so that you can figure out where any of the likely bumps in the road may be for other participants.

Tip: Make sure all components of the product are solid before deploying anything. Check and double-check the installers, prepare a FAQ document so that participants won’t have to contact you to get answers, and include documentation and help system content if at all possible.

After the program has started, communicating regularly with both the participants and your internal team is critical. For smaller programs, you should call the participants at least once a week to check in and make sure they’re using the product and having no issues. For larger programs, send regular email communications. Make sure you communicate about the overall status of the program, bug fixes/new versions that testers need to install, updated FAQs, and details about contests or incentives. If the participants hear from you regularly, they’ll be reassured and be much more likely to continue putting in time and effort to help you with the product.

Also make sure to communicate internally with your team. Provide a weekly status report, including number of bugs reported, usage by participants, whether you’re meeting the stated goals, and what you need from the team to continue to be successful. Keeping the team informed is crucial so that as the program gets ready to wrap up, everyone knows the status and no one experiences any surprises or disconnects about the results.

Exit surveys

When you’re ready to end the beta program, have the participants complete a short exit survey. You can send it via email or through an online survey tool such as SurveyMonkey or Google Forms. Ask the participants how much they used the product, what their overall impressions are, whether they believe the product is ready to ship, and what can be improved. Also ask them to rate the features of the product on a scale of one to five (five being the highest rating).

Tip: If you offer an incentive or a contest, make it a requirement for participants to complete the survey in order to qualify.

Final report

Take the results of the survey and deliver a final report to your team. The report should include a short testing summary, bug trend information, whether you met the original agreed-upon goals, and a summary of customer opinions and feedback. Deliver this report to the team prior to making the decision to ship; it’s a fact-based tool for making an informed choice.

Making the Decision to Ship the Product

You’ve set your goals, recruited participants, and run a thorough beta program. You have data in hand about how successfully customers have (or haven’t) been using the product and whether they recommend that it’s ready to release to the general public. Now you have to make the final decision about whether the product is ready.

This point can be a challenging time for the product manager for a variety of reasons. Your engineering team is tired and most likely wants to ship the product and take a short break, your management wants the additional revenues associated with the product, and your customers may be hounding you for it. Yet you still must carefully consider whether the product is ready because releasing a product that has severe deficiencies can not only kill the product’s revenues but also severely limit your career.

Some of the key factors to consider when making the decision include the following:

  • What percentage of the beta testers believe the product is ready to be released?
  • What’s your confidence level in how solid the product is? Are you 50 percent confident? 95 percent confident? Is either of these numbers acceptable?
  • What is the risk if you know there are problems, and how quickly can they be fixed?
  • Do you have the luxury of waiting, or will your competition steal too much market share from you if you wait?

The great (and also challenging) thing about being a product manager is that you’re where the buck stops. You own the decision about whether the product is ready to be released to customers. By all means, explain your decision to the development and management team. Don’t be afraid to make the right decision at this point. After all, your biggest responsibility is to make sure the product is as successful as possible.
