Chapter 12: A Target Trial Approach with Dynamic Treatment Regimes and Replicates Analyses

12.1 Introduction

12.2 Dynamic Treatment Regimes and Target Trial Emulation

12.2.1 Dynamic Treatment Regimes

12.2.2 Target Trial Emulation

12.3 Example: Target Trial Approach Applied to the Simulated REFLECTIONS Data

12.3.1 Study Question

12.3.2 Study Description and Data Overview

12.3.3 Target Trial Study Protocol

12.3.4 Generating New Data

12.3.5 Creating Weights

12.3.6 Base-Case Analysis

12.3.7 Selecting the Optimal Strategy

12.3.8 Sensitivity Analyses

12.4 Summary

References

12.1 Introduction

The comparative effectiveness of dynamic treatment regimes is often assessed based on observational longitudinal data. Dynamic treatment regimes are different from static regimes in that the treatment for a given patient is not fixed but potentially changes during the trial in response to the observed outcomes over time. An example of a dynamic regime that will be used later in this chapter is to “start opioid treatment when the patient first experiences a pain level of at least 6 on the BPI Pain scale.”

This chapter starts with the concept of a “target trial” (Hernán and Robins 2016), a hypothetical trial in which subjects are randomized to follow specified dynamic regimes of interest. Framing the analysis around a target trial helps avoid potential biases that arise when using longitudinal observational data, such as immortal time bias and time-dependent confounding. With the same causal estimand in mind, we then turn to the use of longitudinal real world data for estimating the causal treatment effect when some patients might not follow precisely the dynamic regimes of interest. One key step in the analysis is to censor each patient at the point when they deviate from a given regime and then use inverse probability of censoring weighting (IPCW) to estimate the expected outcomes if all patients (contrary to fact) had followed that regime. This is similar to the use of inverse probability of treatment weighting for estimating the marginal structural models discussed in Chapter 11. Different treatment regimes can be evaluated from the same observational data by creating multiple copies (replicates, or clones) of each patient, as many as there are treatment regimes of interest. Treatment comparison can then be done by estimating a single repeated measures model with IPCW.

Lastly, we use the longitudinal data of the REFLECTIONS study to demonstrate the design and analysis steps and provide the SAS code for each of these steps. This guidance includes phrasing the appropriate and precise research question, defining the corresponding inclusion criteria, defining treatment strategies and the treatment assignment, censoring due to protocol violation, estimating stabilized weights for artificial censoring and for loss to follow-up, and applying these weights to the outcome model. Also, we show the appropriate order of data manipulation and the statistical models for weight estimation and the estimation of the effect of various treatment regimes on the outcome. This chapter focuses on comparing several treatment strategies asking the question of when to initiate opioid treatment. “When” is defined by the time-varying pain intensity rather than by mere time.

To avoid confusion regarding the term “trial,” we need to differentiate between the actual REFLECTIONS study and the target trial. We use the term REFLECTIONS study when we address the study that has been performed in the real world and the related observed data set. We use the term target trial when we address the design and analysis plan of a hypothetical randomized controlled trial (RCT) that could be performed to answer our research question (in the SAS code abbreviated as “TT”).

12.2 Dynamic Treatment Regimes and Target Trial Emulation

In Chapter 11, we introduced marginal structural models (MSMs) to obtain an unbiased estimate of the causal effect of a static treatment regime (that is, immediate treatment versus no treatment) from observational longitudinal data with time-dependent confounding, under the assumption of no unmeasured confounding. In this chapter we introduce another method to draw causal conclusions from observational longitudinal data with time-dependent confounding. Specifically, we describe how to compare multiple dynamic treatment strategies. In our case study of the observational REFLECTIONS study, the dynamic treatment regimes are described by different “rules” or algorithms defining at which level of pain opioid treatment should be started to optimize the outcome. After providing a general definition and description of the terms “dynamic treatment regimes” and “target trial” and related conceptual aspects, we will illustrate the technical application to the REFLECTIONS study along with the SAS code.

12.2.1 Dynamic Treatment Regimes

The term “dynamic treatment regime” describes a treatment strategy (that is, rule or algorithm) that allows treatment to change over time based on decisions that depend on the evolving treatment and covariate history (Robins et al. 1986, 2004). Several options of dynamic treatment regimes exist. For instance, treatment might start, stop, or change based on well-defined disease- or treatment-specific covariates.

For assessing the effectiveness of dynamic treatment regimes, a common design approach is the Sequential Multiple Assignment Randomized Trial (SMART) design for randomized experiments (Chakraborty and Murphy 2014; Cheung 2015; Lavori and Dawson 2000, 2004, 2014; Liu et al. 2017; Murphy 2005; Nahum-Shani et al. 2019) while for analysis using observational data g-methods have been used (Cain et al. 2010, 2015; HIV-Causal Collaboration 2011; Sterne et al. 2009).

12.2.2 Target Trial Emulation

The target trial concept is a structured approach to estimating causal relationships (Cain 2010, 2015; Hernán and Robins 2016; Zhang et al. 2014; Garcia-Albeniz et al. 2015; Emilsson et al. 2018; Kuehne et al. 2019; Zhang et al. 2018; Hernán et al. 2016; Cain et al. 2011; Hernán and Robins 2017; Hernán et al. 2004). It is consistent with the formal counterfactual theory of causality and has been applied to derive causal effects from longitudinal observational data. It is especially useful for analyzing sequential treatment decisions. The principal idea of the target trial approach is to develop a study protocol for a hypothetical RCT before starting the analysis of the observational data. For this task, it is sometimes useful to set aside the observational data (with all their limitations), put yourself in the shoes of someone designing an RCT, and carefully prepare a protocol for that RCT. Only after this step should one confirm that the available data can support the protocol. This approach helps avoid or minimize biases such as immortal time bias, time-dependent confounding, and selection bias. Furthermore, designing a hypothetical target trial is an extremely helpful communication tool when working with physicians because they are usually very familiar with the design of RCTs.

The hypothetical target trial has the same components as a real trial. Each component of the target trial protocol needs to be carefully defined. The following paragraphs briefly describe each component and its specific issues (Hernan and Robins 2016). An application of these concepts using the REFLECTIONS study is provided in Section 12.3.

Research question

As in an RCT, the research question of the target trial should consist of a clear treatment or strategy comparison in a well-defined population with a well-defined outcome measure. This guards against vaguely defined comparisons such as “treatment taken at any time versus never taking treatment.”

Eligibility criteria

The population or cohort of interest needs to be well described (Hernán and Robins 2016, Lodi 2019, Schomocher 2019). As in an RCT, this description should include age, gender, ethnicity, disease, disease severity, and so on. However, the eligibility criteria also define the timing of intervention; when the research question concerns finding the best time of intervention, they define the first possible time of intervention. Hernán (2016) and others describe that aligning the start of follow-up, the specification of eligibility, and treatment assignment is important to avoid biases such as immortal time bias. The time of intervention should reflect a decision point for the physician. This might be the time of diagnosis, time of symptoms, specific biomarker thresholds, progression, and so on. These decision points should also be reflected in the eligibility criteria.

Technical note: The data might not provide the exact time point of crossing a biomarker threshold or experiencing progression. Here it is often sufficient to take the first time the data show those decision points because a physician might not see the patient exactly at the time a threshold is crossed. Hence, the decision point is even more accurately described by the first time data show progression than the exact time of progression.

Treatment Strategies

The definition of the treatment strategies is similar to RCTs. The dose, start, duration, and discontinuation need to be well-described. The available observational data might not provide exact dosing, though often algorithms to estimate the doses are available.

Randomized Assignment

By randomizing individuals to the treatment strategies to be compared, RCTs produce balance in pre-treatment covariates (measured and unmeasured) between the groups receiving the different treatments. As nothing other than randomization triggers which treatment strategy is assigned and the treatments are controlled, baseline confounding is avoided. The target trial concept suggests also randomly assigning individuals (that is, individuals of the data set) to the different treatment strategies. When analyzing observational data and actual randomization is not feasible, this idea is mimicked by “cloning” the individuals and assigning each “clone” to each potential treatment strategy. This is done by copying the data as many times as there are treatment strategies to be compared. Further discussion of the replicates approach is given throughout this chapter.
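The cloning step described above can be sketched as follows. This is a minimal Python illustration (the chapter's analyses use SAS); the data frame, its column names, and the subset of regime labels are hypothetical.

```python
import pandas as pd

# Hypothetical long-format data: one row per patient per visit.
# Column names are illustrative, not the actual REFLvert names.
long_df = pd.DataFrame({
    "id":    [1, 1, 2, 2],
    "visit": [1, 2, 1, 2],
    "pain":  [5, 7, 6, 6],
})

REGIMES = [5, 6, 7]  # pain thresholds defining each strategy (a subset)

def make_clones(df, regimes):
    """Copy the data once per treatment strategy and label each copy."""
    clones = []
    for r in regimes:
        copy = df.copy()
        copy["regime"] = r  # the strategy this clone is assigned to
        clones.append(copy)
    return pd.concat(clones, ignore_index=True)

cloned = make_clones(long_df, REGIMES)
# Each patient-visit row now appears once per regime: 4 rows x 3 regimes = 12.
```

In the full analysis, the copy would be made once per strategy of interest (seven times in the case study below), with the regime label driving the censoring logic for each clone.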

Follow-up

Follow-up starts at time zero and is defined for a specific time period. As described above, time zero, treatment allocation, and time of eligibility need to be aligned. End of follow-up is reached at the time the outcome occurs, at the administrative end of follow-up (defined as time zero plus the defined follow-up period), or at death, whichever occurs first.

Outcome and Estimand

The outcome should be clinically relevant, clearly defined, and available in the data set. Independent outcome validation is often desired as knowledge of the received treatment can influence the measurement of the outcome (measurement bias) if the outcome is not an objective outcome such as death. Often, the estimand of an RCT is based on a statistic such as the odds ratio (OR) or hazard ratio (HR) comparing the outcomes of the different treatment arms. In some cases, as in our target trial, the goal could be to find the treatment strategy that maximizes a positive outcome.

Causal Contrast

The causal effect of interest in the target trial with cloning (replicates) is the per-protocol effect (that is, the comparative effect of following the treatment strategies specified in the study protocol). The intention-to-treat effect (effect of assigning a particular treatment strategy) that is often the effect of interest in an RCT cannot be analyzed when clones have been assigned to each treatment arm.

In the per-protocol analysis, those subjects (or replicates) that do not follow the assigned treatment strategy are censored (termed “artificial censoring” or censoring “due to protocol” in this text). Some subjects might not follow the treatment from the beginning of the trial while other subjects might first follow the protocol and violate the protocol at a later time. Censoring occurs at the time of protocol violation. This protocol violation is based on treatment behavior and is therefore informative and needs to be adjusted for in the analytical process.
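The artificial censoring rule can be sketched as follows: a clone is censored at the first visit where the observed treatment differs from what its assigned strategy dictates. This is a hedged Python illustration with invented data and column names; whether the visit of the first deviation itself is retained is a convention that should match the analysis plan.

```python
import pandas as pd

# Illustrative clone-level data for one patient under one regime
# (threshold 6). Data and column names are invented for this sketch.
df = pd.DataFrame({
    "visit":     [1, 2, 3],
    "max_pain":  [5, 6, 7],   # running maximum pain category
    "on_opioid": [0, 0, 1],   # observed treatment at each visit
})
threshold = 6  # the pain level at which this regime starts treatment

# Treatment the regime dictates: treat once max pain has reached the threshold.
df["required"] = (df["max_pain"] >= threshold).astype(int)

# First visit where observed treatment deviates from the regime, if any.
deviates = df["on_opioid"] != df["required"]
if deviates.any():
    first_violation = df.loc[deviates, "visit"].min()
    # Keep rows up to and including the visit of the first deviation;
    # the clone contributes no person-time after that point.
    df = df[df["visit"] <= first_violation]
```

Here the patient should have started opioids at visit 2 (pain reached 6) but did not, so this clone is censored at visit 2.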

Analytical Approach

To estimate the per-protocol effect of the target trial, adjustment for baseline and post-randomization confounding is essential because protocol violations are typically associated with post-baseline factors. Hence, g-methods by Robins and others are required, and inverse probability of censoring weighting (IPCW) may be most appropriate (Cain et al. 2015; Emilsson et al. 2018; NICE 2011, 2012, 2013a, 2013b; Almirall et al. 2009; Bellamy et al. 2000; Brumback et al. 2004; Cole and Hernán 2008; Cole et al. 2005; Daniel et al. 2013; Faries and Kadziola 2010; Goldfeld 2014; Greenland and Brumback 2002; Hernán and Robins 2019; Hernán et al. 2005; Jonsson et al. 2014; Ko et al. 2003; Latimer 2012; Latimer and Abrams 2014; Latimer et al. 2014, 2015, 2016; Morden et al. 2011; Murray and Hernán 2018; Pearl 2010; Robins and Hernán 2009; Robins and Rotnitzky 2014; Robins and Finkelstein 2000; Snowden et al. 2011; Vansteelandt and Joffe 2014; Vansteelandt and Keiding 2011; Vansteelandt et al. 2009; Westreich et al. 2012). IPCW is an analytical method very similar to the IPTW discussed in Chapter 11. IPCW borrows information from comparable patients to account for the missing data that occur due to artificial censoring. This is accomplished by up-weighting the uncensored subjects who resemble those censored, using weights based on the inverse of the probability of remaining uncensored. This creates an unconfounded pseudo-population that can be analyzed using standard repeated measures statistics (Brumback 2004, Cole and Hernán 2008, Cole et al. 2005, Daniel 2013).

IPCW requires two steps. First, one needs to estimate the probability of not being censored at a given time. Second, these weights must be incorporated into the outcome model. To estimate the weights, we estimate the probability of not being censored. Because censoring is dependent on not complying with the treatment strategy of interest, such as starting treatment at a specific time, we can estimate the probability by estimating the probability of starting (or stopping) treatment at each time point based on time-varying and non-time-varying covariates. This is only done for individuals who are at risk for censoring. Thus, subject time points where that subject is already following the treatment or is already censored are omitted from the model. Stabilized weights are recommended to achieve greater efficiency and to minimize the magnitude of non-positivity bias (Cole and Hernán 2008, Hernán et al. 2002). The formula for stabilized weights is as follows:

SW(t) = ∏_{k=0}^{t} Pr[C(k) = 0 | C(k-1) = 0, V] / Pr[C(k) = 0 | C(k-1) = 0, L(k), V]

where C(k) represents the censoring status at time k (with C(k) = 0 meaning uncensored and C(k-1) = 0 denoting remaining uncensored through time k-1), V represents a vector of non-time-varying variables (baseline covariates), and L(k) represents a vector of time-varying covariates at time k.

In the second step, the outcome model is estimated using a repeated measurement model including the estimated weights. This model does not include time-varying covariates because the included weights create an unconfounded data set. However, the baseline covariates might be included as appropriate (Hernán et al. 2000).
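As a numerical illustration of the two-step construction, the following Python sketch builds stabilized weights from already-estimated visit-specific probabilities of remaining uncensored. In practice these probabilities come from the numerator and denominator censoring models described above; the values used here are invented.

```python
import numpy as np

# Visit-specific probabilities of remaining uncensored, assumed to have been
# produced by the two censoring models described in the text (values invented):
#   p_num[k]: numerator model, conditioning on baseline covariates V only
#   p_den[k]: denominator model, also conditioning on time-varying L(k)
p_num = np.array([0.95, 0.90, 0.92])
p_den = np.array([0.90, 0.85, 0.95])

# The stabilized weight at visit t is the cumulative product over k <= t
# of the numerator/denominator probability ratio.
sw = np.cumprod(p_num / p_den)

# These per-visit weights are then supplied to the weighted repeated
# measures outcome model (in SAS, for example, via a WEIGHT statement).
```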

Identifying the Best Potential Treatment Strategy

When comparing more than two treatment strategies, a natural goal would be to find the strategy where the positive outcome value is maximized. In the example of the REFLECTIONS study, this could be pain reduction or minimizing the negative impact of pain (Cain et al. 2011, Ray et al. 2010, Robins et al. 2008). To find the optimal outcome value, we fit a model in which the counterfactual outcome is a function of the treatment strategies.
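One simple way to operationalize this search is to fit a smooth curve of the estimated counterfactual outcome as a function of the threshold defining each strategy and take its maximizer. The Python sketch below uses invented effect estimates purely for illustration.

```python
import numpy as np

# Invented per-strategy effect estimates (mean pain reduction vs. no opioids;
# larger is better). In practice these come from the weighted outcome model.
thresholds = np.array([5, 6, 7, 8, 9, 10])
effect     = np.array([0.8, 1.1, 1.3, 1.2, 0.9, 0.5])

# Smooth the discrete strategies with a quadratic in the threshold and
# take the vertex of the fitted parabola as the estimated optimum.
b2, b1, b0 = np.polyfit(thresholds, effect, deg=2)
best_threshold = -b1 / (2 * b2)  # maximizer, valid because b2 < 0
```

With these invented numbers the fitted optimum lies near a threshold of 7; the actual case study estimates this from the weighted repeated measures model.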

Assumptions

The assumptions necessary for IPCW in MSMs include the same assumptions that are necessary for other observational analyses. (See Chapter 2.)

1. Exchangeability: Exchangeability indicates the well-known assumption of no unmeasured confounding.

2. Positivity: The average causal effect must be estimable in each subset of the population defined by the confounders.

3. Correct model specification: The model for estimating the weights as well as the outcome model must be specified correctly using valid data and assumptions.

The assumptions and robustness of the analyses should be tested in sensitivity analyses.

12.3 Example: Target Trial Approach Applied to the Simulated REFLECTIONS Data

12.3.1 Study Question

For the case study based on the REFLECTIONS data, we are interested in the question of when to start opioid treatment. The start of the treatment could be defined by the pain intensity or the time since study initiation. Clinically, it is more relevant to identify the pain level at which the opioid treatment should be initiated. Thus, in Section 12.3.3 below we follow the steps of the previous section and develop a target trial protocol to assess various dynamic treatment strategies based on the pain level of initiating opioid treatment using the REFLECTIONS study. Specifically, we are interested in assessing changes in BPI-Pain scores over a 12-month period between the following treatment strategies:

1. Start opioid treatment when first experiencing a pain level ≥ 4.5

2. Start opioid treatment when first experiencing a pain level ≥ 5.5

3. Start opioid treatment when first experiencing a pain level ≥ 6.5

4. Start opioid treatment when first experiencing a pain level ≥ 7.5

5. Start opioid treatment when first experiencing a pain level ≥ 8.5

6. Start opioid treatment when first experiencing a pain level ≥ 9.5

The comparator treatment strategy of interest is that of no opioid treatment.

Because we want to compare entire dynamic strategies consisting of different pain level thresholds for initiation of opioid treatment, we conceptualize the randomized trial shown in Figure 12.1. For this target trial, all patients are randomized at the beginning to one of several dynamic treatment strategies, and outcomes are followed over the subsequent one-year period.

Figure 12.1: Design of a Target Trial

Before developing the target trial protocol, in the next section we briefly review the key data aspects from the REFLECTIONS study relevant to our desired target trial.

12.3.2 Study Description and Data Overview

To illustrate the implementation of the target trial approach for performing a comparative analysis of multiple dynamic treatment regimes, we use the same simulated data based on the REFLECTIONS study used in Chapter 3 and also in Chapter 11. In the REFLECTIONS study, data on a variety of characteristics were collected through a physician survey and a patient visit form at baseline, and by computer-assisted telephone interviews at baseline and at one, three, six, and 12 months post-baseline. The outcome measure of interest for this analysis was the Brief Pain Inventory (BPI) scale, a measure of pain severity and interference, with higher scores indicating more pain.

At each visit, data were collected regarding whether the patient was on opioid treatment or not. At baseline, 24% of the 1,000 patients were prescribed opioids. This changed to 24.0%, 24.5%, 24.1%, and 24.7% of 1,000, 950, 888, and 773 patients at visits 2 through 5, respectively. The distribution of pain levels shows that all pain levels (0–10) are present and that opioids are used at each pain level, as shown in Table 12.1.

Table 12.1: Pain Categories (Based on BPI Pain Scale) by Opioid Use

Table of OPIyn (Opioid use) by Pain_category

OPIyn     0    1    2    3    4    5    6    7    8    9   10   Total
No        7   80  142  338  474  578  518  430  248  108   26    2949
Yes       1   12   36   98  168  220  271  238  157   66   32    1299
Total     8   92  178  436  642  798  789  668  405  174   58    4248

Frequency Missing = 363

Hence, we can compare all of the previously mentioned treatment strategies of interest in the target trial.

Time-varying data (collected at each visit) from REFLECTIONS included satisfaction with care and medication (SatisfCare, SatisfMed), pain (BPIPain, BPIInterf), depression (PHQ8), and treatment (OPIyn). Baseline data include the baseline values of the time-varying data mentioned above as well as the following:

● Age

● Gender

● Specialty of treating physician (Dr Specialty)

● Baseline body mass index (BMI)

● Baseline values of total scores for symptoms and functioning:

◦ BPI Pain Interference Score (BPIInterf_B)

◦ Anxiety (GAD7_B)

◦ Depression (PHQ8_B)

◦ Physical Symptoms (PhysicalSymp_B)

◦ Disability Severity (SDS_B)

◦ Insomnia (ISIX_B)

◦ Cognitive Functioning (CPFQ_B)

◦ Fibromyalgia Impact Questionnaire (FIQ_B)

◦ Multidimensional Fatigue Inventory (MFIpf_B)

The data set (REFLvert) is in the long format (one observation per patient per visit). This means that several rows (at most five) exist for each individual (ID). Each row contains the information from one visit, including the baseline variables. The variables of the data set are described in Table 12.2.

Table 12.2: Variables in the REFLECTIONS Data Set

Variable Name     Description
SubjID            Subject Number
Cohort            Cohort
Visit             Visit
Gender            Gender
Age               Age in years
BMI_B             BMI at Baseline
Race              Race
Insurance         Insurance
DrSpecialty       Doctor Specialty
Exercise          Exercise
InptHosp          Inpatient hospitalization in last 12 months
MissWorkOth       Other missed paid work to help your care in last 12 months
UnPdCaregiver     Have you used an unpaid caregiver in last 12 months
PdCaregiver       Have you hired a caregiver in last 12 months
Disability        Have you received disability income in last 12 months
SymDur            Duration (in years) of symptoms
DxDur             Time (in years) since initial Dx
TrtDur            Time (in years) since initial Trtmnt
SatisfCare_B      Satisfaction with Overall Fibro Treatment over past month
BPIPain_B         BPI Pain score at Baseline
BPIInterf_B       BPI Interference score at Baseline
PHQ8_B            PHQ8 total score at Baseline
PhysicalSymp_B    PHQ 15 total score at Baseline
FIQ_B             FIQ Total Score at Baseline
GAD7_B            GAD7 total score at Baseline
MFIpf_B           MFI Physical Fatigue at Baseline
MFImf_B           MFI Mental Fatigue at Baseline
CPFQ_B            CPFQ Total Score at Baseline
ISIX_B            ISIX total score at Baseline
SDS_B             SDS total score at Baseline
OPIyn             Opioid use
SatisfCare        Satisfaction with Overall Fibro Treatment
SatisfMed         Satisfaction with Prescribed Medication
PHQ8              PHQ8 total score
BPIPain           BPI Pain score
BPIInterf         BPI Interference score
BPIPain_LOCF      BPI Pain score LOCF
BPIInterf_LOCF    BPI Interference score LOCF

12.3.3 Target Trial Study Protocol

To ensure the correct implementation of the target trial, we first provide the protocol of the target trial and then describe the data analysis and the sensitivity analyses for our case study using the REFLECTIONS data. The protocol follows the steps for a target trial outlined in Section 12.2.2 and is described in brief.

Eligibility criteria

The same inclusion criteria as in the REFLECTIONS study are used with one additional criterion (Robinson et al. 2012, 2013). Patients are only included in the study when they have crossed the first potential treatment threshold, which is a pain score of at least 4.5 in this example (Hernán 2016, Hernán and Robins 2017).

Treatment strategies

The following dynamic treatment strategies are compared where the pain levels are rounded to the nearest integer. Each treatment strategy will be compared to the no treatment strategy.

1. Start opioid treatment when first experiencing a pain level ≥ 5 (Intervention)

2. Start opioid treatment when first experiencing a pain level ≥ 6 (Intervention)

3. Start opioid treatment when first experiencing a pain level ≥ 7 (Intervention)

4. Start opioid treatment when first experiencing a pain level ≥ 8 (Intervention)

5. Start opioid treatment when first experiencing a pain level ≥ 9 (Intervention)

6. Start opioid treatment when first experiencing a pain level ≥ 10 (Intervention)

7. No opioid treatment at any time (Comparator)

Because the strategies specify that the pain threshold is crossed for the first time, we assume that pain before entering the REFLECTIONS study was lower than the level measured at baseline.

For simplicity, we focus only on opioid initiation. Hence, we assume that once opioid treatment is started, the patient remains on opioid treatment. Table 12.3 shows the definition of each treatment strategy.

Table 12.3: Treatment Strategies of the REFLECTIONS Target Trial

Treatment strategy   Regime (new variable)   Value of variable OPI at pain level
                                             5    6    7    8    9    10
Pain ≥ 5             5                       1    1    1    1    1    1
Pain ≥ 6             6                       0    1    1    1    1    1
Pain ≥ 7             7                       0    0    1    1    1    1
Pain ≥ 8             8                       0    0    0    1    1    1
Pain ≥ 9             9                       0    0    0    0    1    1
Pain ≥ 10            10                      0    0    0    0    0    1
Never                11                      0    0    0    0    0    0
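The rule encoded in Table 12.3 reduces to a single comparison: under a given regime (the pain threshold at which treatment starts, with 11 standing for “never treat”), treatment is dictated exactly when the pain level has reached the threshold. A minimal Python sketch (the chapter's code is in SAS; the function name is hypothetical):

```python
# The rule behind Table 12.3: under regime r (the threshold at which opioid
# treatment starts; 11 stands for "never treat"), treatment is dictated
# exactly when the pain level has reached r.
def opi_required(regime, pain_level):
    """Return 1 if the strategy dictates opioid treatment, else 0."""
    return 1 if pain_level >= regime else 0

row = [opi_required(7, p) for p in range(5, 11)]
# Reproduces the "Pain >= 7" row of the table: [0, 0, 1, 1, 1, 1]
```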

Randomized Assignment

For this target trial, to mimic the concept of randomly assigning patients to each treatment strategy, we “clone” (replicate) each subject’s data in order to allocate each subject to each treatment strategy. That is, the data of each subject is copied seven times and allocated to each treatment arm.

Follow-up

The start of the target trial is the earliest time when treatment could potentially be started. In this example, the earliest treatment threshold is at pain level 5. Hence, the starting point for each individual in the target trial is the time when they first experience a pain level of 5. For our case study, we assume pain levels prior to the start of REFLECTIONS were < 5; thus, patients entering the REFLECTIONS study with a pain score of at least 5 start our target trial at visit 1. This is also the time of treatment allocation and eligibility. By aligning these time points, the target trial concept helps avoid immortal time bias (Hernán et al. 2016). The follow-up time of the REFLECTIONS study is 12 months, so the longest follow-up time is 12 months. However, as inclusion into the study depends on the pain level, the follow-up time for some individuals might be shorter. Patients who are lost to follow-up are censored at their last visit. Because leaving the study could be informative, inverse probability of censoring weighting (IPCW) will be applied. Censoring at the end of follow-up (after 12 months) is not informative and will not be adjusted for. For demonstration purposes, to have treatment visits at even intervals, our target trial assumes data collection every six months. Thus, we ignore the data from REFLECTIONS collected at other visits in this analysis.

Outcomes

The parameter of interest in the REFLECTIONS study and target trial is the pain (BPIPain) score, which is measured at each visit. As we are interested in changes over time, the primary outcome for the target trial is the change from baseline in BPI-Pain scores over a 12-month follow-up period. Certainly other outcomes could be considered such as the average rate of change in pain, the endpoint pain score, and so on.

Causal Contrast

Our effect measure is the absolute difference in pain relief for the intervention strategies compared to the strategy of no opioid treatment. We will apply the per-protocol analysis because, due to the replicates (cloning) approach, exactly the same individuals appear in each treatment arm. Of course, not all patients/replicates follow the protocol, and they are censored when they deviate from the assigned strategy. Because deviation from protocol depends on other patient characteristics, censoring is informative and will be adjusted for using IPCW.

12.3.4 Generating New Data

To demonstrate a replicates analysis of the target trial, we follow a structured programming approach and first generate all necessary variables and then conduct the analyses. As this target trial implies that the start of the trial does not depend on the visit number in the REFLECTIONS study but on the pain level, we want measurement intervals of equal length. Hence, we include only visits 1, 4, and 5 of the original REFLECTIONS data in the analyses, as shown in Program 12.1.

Program 12.1: Generate New Data

DATA TT31;

  set REFLvert;

  if BPIPain = . then delete;

  if visit=2 or visit = 3 then delete;

  if OPIyn="No" THEN OPI=0;

  if OPIyn="Yes" THEN OPI=1;

run;

Handling of Missing Data and Measurement Error

We assume that the data in the data set reflect the data available to the physician. Whenever data are missing, the physician would have used the data provided earlier. Hence, we apply the method of last value carried forward for any missing data, as shown in Program 12.2. These variables are satisfaction with care, satisfaction with medication, and pain. A few baseline values are also missing, and those values need to be estimated in order to be able to carry a value forward. For simplicity, we impute the mean value, knowing that more sophisticated methods exist.

Program 12.2: Method of Last Value for Missing Data

Proc means data= TT31;

  var SatisfCare;

  where Visit=1;

  title 'Mean baseline value of satisfaction with Care to use for missing values';

run;

Data TT31;

  set TT31;

  if SatisfCare=. and Visit=1 then SatisfCare=3;

run;

Data TT31;

  set TT31;

  by SubjID;

  retain Satisf_CF;

  if first.SubjID then Satisf_CF = SatisfCare;

  if SatisfCare ne . then Satisf_CF = SatisfCare;

  else if SatisfCare=. then SatisfCare=Satisf_CF;

  drop Satisf_CF;

run;

Proc means data= TT31;

  var SatisfMed;

  where Visit=1;

  title 'Mean baseline value of satisfaction with Medication to use for missing values';

run;

Data TT31;

  set TT31;

  if SatisfMed=. and Visit=1 then SatisfMed=3;

run;

Data TT31;

  set TT31;

  by SubjID;

  retain SatisfM_CF;

  if first.SubjID then SatisfM_CF = SatisfMed;

  if SatisfMed ne . then SatisfM_CF = SatisfMed;

  else if SatisfMed=. then SatisfMed=SatisfM_CF;

  drop SatisfM_CF;

run;
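For readers more familiar with other tools, the same baseline imputation and last-value-carried-forward logic that Program 12.2 implements with RETAIN can be sketched in Python as follows; the data values are invented and the column names mirror the SAS variables.

```python
import pandas as pd

# Invented long-format data; column names mirror the SAS variables.
df = pd.DataFrame({
    "SubjID":     [1, 1, 1, 2, 2],
    "Visit":      [1, 4, 5, 1, 4],
    "SatisfCare": [3.0, None, 4.0, None, 2.0],
})

# Impute missing baseline values with the (rounded) mean, as in the text.
base = df["Visit"] == 1
df.loc[base, "SatisfCare"] = df.loc[base, "SatisfCare"].fillna(3)

# Carry the last observed value forward within each subject (LOCF).
df["SatisfCare"] = df.groupby("SubjID")["SatisfCare"].ffill()
```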

Pain Categories

Treatment strategies are based on pain levels, so we create pain categories in Program 12.3 where the pain score is rounded to the nearest integer. For the analyses, we need several pain categories:

1. Baseline pain category

2. Current pain category at a given visit

3. Maximum pain category up to a given visit

4. Previous maximum pain category

Program 12.3: Create Pain Categories

DATA TT32;

  SET TT31;

  Pain_cat_B=.;

  if BPIPain_B<0.5 AND BPIPain_B >=0 then Pain_cat_B=0; else

  if BPIPain_B<1.5 AND BPIPain_B >=0.5 then Pain_cat_B=1; else

  if BPIPain_B<2.5 AND BPIPain_B >=1.5 then Pain_cat_B=2; else

  if BPIPain_B<3.5 AND BPIPain_B >=2.5 then Pain_cat_B=3; else

  if BPIPain_B<4.5 AND BPIPain_B >=3.5 then Pain_cat_B=4; else

  if BPIPain_B<5.5 AND BPIPain_B >=4.5 then Pain_cat_B=5; else

  if BPIPain_B<6.5 AND BPIPain_B >=5.5 then Pain_cat_B=6; else

  if BPIPain_B<7.5 AND BPIPain_B >=6.5 then Pain_cat_B=7; else

  if BPIPain_B<8.5 AND BPIPain_B >=7.5 then Pain_cat_B=8; else

  if BPIPain_B<9.5 AND BPIPain_B >=8.5 then Pain_cat_B=9; else

  if BPIPain_B<10.5 AND BPIPain_B >=9.5 then Pain_cat_B=10; else

  Pain_cat_B=.;

run;

DATA TT32;

  SET TT32;

  Pain_cat=.;

  if BPIPain<0.5 AND BPIPain >=0 then Pain_cat=0; else

  if BPIPain<1.5 AND BPIPain >=0.5 then Pain_cat=1; else

  if BPIPain<2.5 AND BPIPain >=1.5 then Pain_cat=2; else

  if BPIPain<3.5 AND BPIPain >=2.5 then Pain_cat=3; else

  if BPIPain<4.5 AND BPIPain >=3.5 then Pain_cat=4; else

  if BPIPain<5.5 AND BPIPain >=4.5 then Pain_cat=5; else

  if BPIPain<6.5 AND BPIPain >=5.5 then Pain_cat=6; else

  if BPIPain<7.5 AND BPIPain >=6.5 then Pain_cat=7; else

  if BPIPain<8.5 AND BPIPain >=7.5 then Pain_cat=8; else  

  if BPIPain<9.5 AND BPIPain >=8.5 then Pain_cat=9; else

  if BPIPain<10.5 AND BPIPain >=9.5 then Pain_cat=10; else

  Pain_cat=.;

run;

*** compute maximum pain category for each individual;

proc sort data= TT32;

  by SubjID Visit;

run;

DATA TT32;

  SET TT32;

  by SubjID;

  retain Pain_cat_max 0;

  if first.SubjID then Pain_cat_max=Pain_cat;

  if Pain_cat>Pain_cat_max then Pain_cat_max=Pain_cat;

run;

*** create previous maximum pain categories;

Data TT32;

  set TT32;

  by SubjID;

  pv_Pain_max=lag1(Pain_cat_max);

  if first.SubjID then do; pv_Pain_max=0;end;

run;
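The category and running-maximum logic of Programs 12.3 and the lagging step above can be sketched compactly. The following Python is only an illustration of the same rules (the book's code is SAS); the function name `pain_categories` and its inputs are hypothetical:

```python
# Sketch of the pain-category logic for one subject: round BPI pain to the
# nearest integer (x.5 rounds up, matching the SAS cut points), track the
# running maximum category, and lag that maximum by one visit (0 before
# the first visit).

def pain_categories(pain_scores):
    """Return per-visit lists: (category, running max, previous max)."""
    cats, run_max, prev_max = [], [], []
    current_max = 0
    for i, p in enumerate(pain_scores):
        cat = int(p + 0.5)                     # 0-10 scale; .5 rounds up
        prev_max.append(current_max if i > 0 else 0)
        current_max = cat if i == 0 else max(current_max, cat)
        cats.append(cat)
        run_max.append(current_max)
    return cats, run_max, prev_max
```

For example, scores of 6.4, 5.1, and 7.8 yield categories 6, 5, 8; running maxima 6, 6, 8; and previous maxima 0, 6, 6, mirroring Pain_cat, Pain_cat_max, and pv_Pain_max.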

Continuous Opioid Treatment

We assume that once opioid treatment is started, the patient remains on opioid treatment for the rest of the study. Hence, the treatment variable “OPIyn” will be 1 in each interval after the first treatment initiation, as shown in Program 12.4. Further, we need to know the opioid use at baseline as well as at the previous visit.

Program 12.4: Create Opioid Treatment Variables

proc sort data= TT32;

  by SubjID Visit;

run;

Data TT33;

  set TT32;

  by SubjID;

  retain Opi2;

  if first.SubjID then Opi2 = Opi;

  if Opi ne . then Opi2 = Opi;

  retain Opi_new 0;

  if first.SubjID then do; Opi_new=0;end;

  Opi_new=Opi_new+Opi2;

  if Opi_new > 1 then Opi_new=1;

  drop Opi2;

run;

*** Opioid status at baseline (visit1);

data TT33;

  set TT33;

  by SubjID;

  retain Opi_base 0;

  if first.SubjID then Opi_base = Opi_new;

run;

*** create variable pv_Opi=previous Opi;

Data TT33;

  set TT33;

  by SubjID;

  pv_Opi=lag1(Opi_new);

  if first.SubjID then do; pv_Opi=0; end;

run;
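The once-started-always-on logic of Program 12.4 can be sketched as follows. This Python is an illustrative stand-in for the SAS data steps; the function name `opioid_flags` is hypothetical:

```python
# Sketch of Program 12.4's logic for one subject: once opioid treatment
# starts it stays on (Opi_new); also derive the baseline status and the
# previous-visit status (pv_Opi, 0 at the first visit).

def opioid_flags(opi_raw):
    """Return (Opi_new per visit, baseline status, pv_Opi per visit)."""
    opi_new, pv_opi = [], []
    on = 0
    for o in opi_raw:
        pv_opi.append(opi_new[-1] if opi_new else 0)
        if o == 1:
            on = 1                      # treatment started; never reverts
        opi_new.append(on)
    return opi_new, opi_new[0], pv_opi
```

A raw sequence 0, 1, 0, 1 therefore becomes 0, 1, 1, 1: the gap at the third visit is treated as continued exposure.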

Defining Target Trial Baseline

We are comparing treatment strategies in which pain treatment is provided after first crossing a pain level of 5 or higher. Hence, we exclude individuals who never reach that pain level as well as the visits of each individual prior to reaching it. From a programming perspective, we delete visits where the maximum pain level up to that point in time is below 5. Once an individual becomes eligible (first pain level of 5 or higher), they are included in the target trial. This defines the baseline of the target trial, and Program 12.5 creates the baseline variables to match this new baseline, updating all baseline variables that are also time-varying, such as the BPI Pain score, the BPI Interference score, and opioid treatment.

Program 12.5: Create Baseline Variables

Data TT34;

  set TT33;

  if Pain_cat_max <5 then delete;

run;

proc sort data= TT34;

  by SubjID Visit;

run;

Data TT34;

  set TT34;

  by SubjID;

  retain Visit_TT; * this will be the new target trial visit number;

  if first.SubjID then Visit_TT = 0;

  Visit_TT=Visit_TT+1;

run;

Data TT34;

  set TT34;

  by SubjID;

  retain Opi_base_TT;

  if first.SubjID then Opi_base_TT = Opi_new;

run;

Data TT34;

  set TT34;

  by SubjID;

  retain pv_Opi_base_TT;

  if first.SubjID then pv_Opi_base_TT = pv_Opi;

  if pv_Opi_base_TT=1 then delete;

run;

Data TT34;

  set TT34;

  by SubjID;

  retain Pain_cat_b_TT;

  if first.SubjID then Pain_cat_b_TT = Pain_cat_max;

run;

Data TT34;

  set TT34;

  by SubjID;

  retain BPIPain_B_TT;

  if first.SubjID then BPIPain_B_TT = BPIPain;

run;

Data TT34;

  set TT34;

  by SubjID;  

  retain BPIInterf_B_TT;

  if first.SubjID then BPIInterf_B_TT = BPIInterf;

  if BPIInterf_B_TT=. then BPIInterf_B_TT=BPIInterf_B;

run;

Data TT34;

  set TT34;

  by SubjID;

  retain PHQ8_B_TT;

  if first.SubjID then PHQ8_B_TT = PHQ8;

  if PHQ8_B_TT=. then PHQ8_B_TT=PHQ8_B;

run;

proc means data=TT34 n nmiss mean std min p25 median p75 max;

  var PHQ8_B_TT BPIInterf_B_TT BPIPain_B_TT Pain_cat_b_TT pv_Opi_base_TT

      Opi_base_TT;

  title 'Summary Stats: new baseline variables';

run;

proc freq data=TT34;

  table Opi_base_TT*Opi_base;

  table Pain_cat_b_TT*Pain_cat_b;

  table pv_Opi_base_TT*pv_Opi;

  title 'baseline variables REFLECTION vs. target trial';

run;

proc freq data= TT34;

  table visit*visit_TT;

  title 'visit in target trial vs visit in Reflection study';

run;
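The eligibility and re-indexing logic of Program 12.5 amounts to filtering each subject's visits and renumbering them. A minimal Python sketch, with hypothetical dict records standing in for the SAS data set:

```python
# Sketch of the target-trial baseline step: keep only visits from the
# first time the running maximum pain category reaches 5, and assign new
# target-trial visit numbers 1, 2, ... to the remaining visits.

def target_trial_visits(visits):
    """visits: per-subject list of dicts holding 'pain_cat_max' etc."""
    eligible = [v for v in visits if v["pain_cat_max"] >= 5]
    return [dict(v, visit_tt=i + 1) for i, v in enumerate(eligible)]
```

The first retained visit becomes target-trial visit 1, and its time-varying values (pain, interference, opioid status) would serve as the new baseline variables.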

Visits / Time Intervals / Temporal Order

To compute the appropriate weights for each patient, one must first clarify the order in which the time-varying information is collected relative to the treatment decisions being made. The time-varying data are the data on satisfaction, pain, and treatment. We assume that the data on satisfaction and pain (SP) at a given visit influence the treatment (Tx) prescribed at that visit, and that the treatment prescribed at one visit affects satisfaction and pain at the next visit. Similar to the DAG approach of Chapter 2, this is shown in Figure 12.2 below.

Figure 12.2: Order of Information and Influence on Treatment Decisions and Corresponding Analytical Visits

Outcome Variable Pain Difference

To compute the change in outcome at each visit, we first create the variable “PainNV” = P(t+1), the pain score at the next visit, for each interval t so that all needed variables are available in each interval/visit (see Program 12.6). The difference between “PainNV” and the current pain score is the outcome because we are interested in the effect of this visit’s treatment on the future pain score.

Program 12.6: Compute Outcome Variable

proc sort data= TT34;

  by SubjID descending visit_TT;

run;

Data TT36;

  set TT34;

  PainNV=lag1(BPIPain);

  by SubjID;

  if first.SubjID then do; PainNV=.; end;

  Pain_diff=PainNV-BPIPain;

run;

proc means data=TT36 n nmiss mean ;

  class visit_TT ;

  var Pain_diff;

  title 'Pain difference by visit';

run;

proc sort data= TT36;

  by SubjID Visit;

run;

proc means data=TT36 n nmiss mean std min p25 median p75 max;

  class Opi_new pain_cat_max ;

  var Pain_diff BPIPain PainNV;

  title 'Summary of unadjusted outcomes by original Cohort';

run;
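The next-visit pairing in Program 12.6 (done there via a descending sort and `lag1`) can be sketched directly. This Python is illustrative only; `add_pain_outcome` is a hypothetical name:

```python
# Sketch of the outcome construction: pair each visit's pain with the next
# visit's pain (PainNV) and compute Pain_diff = PainNV - current pain.
# The subject's last visit has no next-visit outcome (missing in SAS).

def add_pain_outcome(pain_by_visit):
    """Return per-visit tuples (current pain, PainNV, Pain_diff)."""
    rows = []
    for i, p in enumerate(pain_by_visit):
        nxt = pain_by_visit[i + 1] if i + 1 < len(pain_by_visit) else None
        diff = None if nxt is None else nxt - p
        rows.append((p, nxt, diff))
    return rows
```

So a pain trajectory of 6.0, 5.0, 5.5 yields outcomes -1.0 and 0.5 for the first two visits and a missing outcome at the last.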

Variables Needed to Create Weights

To apply inverse probability of censoring weighting, we need to calculate the probability of being censored. This is determined by the probability of starting opioid treatment and by the probability of being lost to follow-up.

Predict the Probability of Starting Treatment

The protocol is very specific about when opioid treatment is to be received. When deviating from that protocol, the individual is censored. Hence, starting opioid therapy determines whether individuals are censored, and we need to predict the probability of starting opioid treatment for each patient at each visit. For stabilized weights, estimate this probability using two models:

1. Baseline model with only baseline parameters (for the numerator of the weight calculations).

2. Full model with baseline and time-varying parameters (for the denominator of the weight calculations).

Starting opioid treatment implies that opioids have not yet been taken. We therefore restrict our prediction function in Program 12.7 to individuals with pv_Opi=0.

Program 12.7: Predict Treatment Using Baseline and Time-Varying Variables

*** predict treatment using only baseline variables (for the numerator);

proc logistic data= TT36 descending;

  class SubjID VISIT(ref="1") Opi_new(ref="0") DrSpecialty Gender Race;

  model Opi_new = visit DrSpecialty BMI_B Gender Race Age BPIPain_B_TT

                  BPIInterf_B_TT PHQ8_B_TT PhysicalSymp_B FIQ_B GAD7_B

                  ISIX_B MFIpf_B CPFQ_B SDS_B;

  where pv_OPI=0;

  output out=est_Opi_b p=pOpi_base;

  title 'treatment prediction using only baseline variables';

run;  

proc sort data= est_Opi_b; by SubjID visit; run;

proc sort data= TT36; by SubjID visit; run;

data TT37;

  merge TT36 est_Opi_b; by SubjID visit; run;

proc sort data= TT37; by SubjID visit; run;

*** predict treatment using baseline and time-varying variables (for the denominator of the weights);

proc logistic data= TT37 descending;

class SubjID VISIT(ref="1") Opi_new(ref="0") DrSpecialty Gender Race;

model Opi_new = VISIT DrSpecialty BMI_B Gender Race Age BPIPain_B_TT BPIInterf_B_TT PHQ8_B_TT PhysicalSymp_B FIQ_B GAD7_B ISIX_B MFIpf_B CPFQ_B SDS_B

SatisfCare SatisfMed BPIPain;

where pv_OPI=0;

output out= est_Opi_b2 p= pOpi_full;

title 'treatment prediction using baseline and time-varying variables';

run;  

proc sort data= est_Opi_b2; by SubjID visit; run;

proc sort data= TT37; by SubjID visit; run;

data TT371;

  merge TT37 est_Opi_b2;

  by SubjID  visit; run;

proc sort data= TT371; by SubjID visit; run;

Predict the Probability of Being Lost to Follow-up at the Next Visit

Loss to follow-up describes situations where individuals who should have had another visit do not have one in the data set. Thus, the end of the trial is not loss to follow-up even though there is no further data collection. To compute weights that account for loss to follow-up, we need to create a variable indicating loss to follow-up at the next visit, given that the current visit is not the last visit of the trial (LFU=1 if the next visit qualifies as lost to follow-up; LFU=0 otherwise). We then predict this variable (LFU) for each patient at each visit.

To predict the probability of being lost to follow-up at the next visit, we create two models in Program 12.8:

1. Baseline model with only baseline parameters (for the numerator of the weight calculations).

2. Full model with baseline and time-varying parameters (for the denominator of the weight calculations).

Patients at the last visit are not at risk of being lost to follow-up in the future, so we fit the models using only visits that are not the last scheduled visit.

Program 12.8: Predict Loss to Follow-Up

* Define LFU;

proc sort data= TT371;

  by SubjID descending visit_TT;

run;

Data TT371;

  set TT371;

  LFU=0;

  by SubjID;

  if first.SubjID AND visit ne 5 then LFU=1;

run;

proc freq data= TT371;

  table LFU*visit_TT;

  title 'loss to follow up by visit';

run;

proc sort data= TT371;

  by SubjID visit_TT;

run;

* predict LFU using only baseline variables (for the numerator);

proc logistic data= TT371 descending;

  class SubjID Opi_new(ref="0") DrSpecialty Gender Race;

  model LFU = VISIT_TT DrSpecialty BMI_B Gender Race Age BPIPain_B_TT

              BPIInterf_B_TT PHQ8_B_TT PhysicalSymp_B FIQ_B GAD7_B ISIX_B

              MFIpf_B CPFQ_B SDS_B;

  where VISIT_TT ne 3;

  output out=est_LFU_b p=pLFU_base;

  title 'prediction of lost to follow up using only baseline variables';

run;  

proc sort data=est_LFU_b; by SubjID VISIT_TT; run;

proc sort data= TT371; by SubjID VISIT_TT; run;

data TT372;

  merge TT371 est_LFU_b; by SubjID  VISIT_TT;run;

proc sort data= TT372; by SubjID Visit; run;

*predict LFU using baseline and time-varying variables (for the denominator);

proc logistic data= TT372 descending;

  class SubjID Opi_new(ref="0") DrSpecialty Gender Race;

  model LFU = Opi_new pv_OPI VISIT_TT DrSpecialty BMI_B Gender Race Age

            BPIPain_B_TT BPIInterf_B_TT PHQ8_B_TT PhysicalSymp_B FIQ_B

            GAD7_B ISIX_B MFIpf_B CPFQ_B SDS_B SatisfCare SatisfMed

      BPIPain;

  where VISIT_TT ne 3;

  output out= est_LFU_b2 p= pLFU_full;

  title 'prediction of lost to follow up using baseline and time-varying variables';

run;  

proc sort data= est_LFU_b2; by SubjID VISIT_TT; run;

proc sort data= TT372; by SubjID VISIT_TT; run;

data TT372p;

  merge TT372 est_LFU_b2; by SubjID  VISIT_TT; run;

proc sort data= TT372p; by SubjID Visit; run;
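The LFU definition in Program 12.8 (set at a subject's last observed visit via a descending sort and `first.SubjID`) can be sketched as follows. This Python is illustrative only, with a hypothetical function name:

```python
# Sketch of the LFU flag: a subject is flagged as lost to follow-up at
# their final observed visit when that visit is not the last scheduled
# visit of the trial (i.e., the next scheduled visit is missing).

def lfu_flags(observed_visits, last_scheduled=5):
    """observed_visits: sorted visit numbers for one subject."""
    flags = [0] * len(observed_visits)
    if observed_visits and observed_visits[-1] != last_scheduled:
        flags[-1] = 1
    return flags
```

A subject observed at visits 1-3 of a 5-visit trial gets LFU=1 at visit 3, while a subject observed through visit 5 gets LFU=0 everywhere.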

Create Clones (Replicates)

To create clones, we copy the data from all patients seven times and create a variable “Regime” indicating the treatment strategy assigned to each subject clone. For the first set of copies, we set Regime=11, indicating the no-opioid-treatment arm. For the second set of copies, we set Regime=10, indicating the arm starting opioids at pain level 10, and so on down to the seventh set of copies, where Regime=5 indicates the arm starting opioids at pain level 5. (See Program 12.9.)

Table 12.4: Regime Defining Opioid Use in the Protocol in Cloned Individuals

Treatment strategy    Definition of new variable    Value of variable OPIyn at pain level
(Treat at)            (Regime)                       5    6    7    8    9    10

Pain ≥ 5                  5                          1    1    1    1    1    1
Pain ≥ 6                  6                          0    1    1    1    1    1
Pain ≥ 7                  7                          0    0    1    1    1    1
Pain ≥ 8                  8                          0    0    0    1    1    1
Pain ≥ 9                  9                          0    0    0    0    1    1
Pain ≥ 10                10                          0    0    0    0    0    1
Never                    11                          0    0    0    0    0    0

Program 12.9: Create Clones

data TT38;

set TT372p;

  array regime (7);

run;

data TT38;

  set TT38;

  regime1=1; output;

  regime2=1; output;

  regime3=1; output;

  regime4=1; output;

  regime5=1; output;

  regime6=1; output;

  regime7=1; output;

run;

data TT38;

  set TT38;

  array regime (7);

  * After the outputs above, copy k has regime1 through regimek all equal to 1;

  * Keep only the highest flag so each copy is tagged with exactly one regime;

  do i=1 to 6 ;

    if regime[i]=1 and regime[i+1]=1 then regime[i]=. ;

  end;

run;

data TT38;

  set TT38;

  if regime1=1 then regime=5; else

  if regime2=1 then regime=6; else

  if regime3=1 then regime=7; else

  if regime4=1 then regime=8; else

  if regime5=1 then regime=9; else

  if regime6=1 then regime=10; else

  if regime7=1 then regime=11;

  newid=SubjID||regime;

  drop regime1 regime2 regime3 regime4 regime5 regime6 regime7 i;

run;

proc sort data= TT38;by visit;

run;
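The cloning step in Program 12.9 simply replicates every record once per strategy and tags each replicate. A minimal Python sketch (illustrative; the record fields and `make_clones` name are hypothetical):

```python
# Sketch of the cloning step: one copy of each record per treatment
# strategy (regimes 5-10 = start opioids at that pain level; 11 = never),
# each tagged with the assigned regime and a clone id combining the
# subject id and the regime (the SAS variable "newid").

def make_clones(record, regimes=(5, 6, 7, 8, 9, 10, 11)):
    """Return one tagged copy of the record per treatment strategy."""
    return [dict(record, regime=r, newid=f"{record['subjid']}-{r}")
            for r in regimes]
```

Each subject-visit record thus appears seven times in the cloned data set, once under each strategy, before any censoring is applied.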

Censoring

Before determining censoring status for each replicate at each time point, omit individuals who do not follow the assigned regime from the very beginning, as shown in Program 12.10.

Program 12.10: Censor Cases

* Delete clones that do not follow the assigned strategy already at the first visit;

data TT39;

  set TT38;

  if Opi_base_TT=1 and Pain_cat_b_TT < regime  then delete;

  if Opi_base_TT=0 and Pain_cat_b_TT >= regime then delete ;

run;

proc freq data= TT39;

  table visit*regime/ nopercent norow nocol;

  where visit=1;

  title 'number of individuals starting given regimes';

run;

Censoring Due to Protocol Violation

Patient replicates are considered censored when they start treatment either too early or too late (only individuals not yet receiving opioids can be censored due to protocol violation). We create a variable “C” (initialized to missing) indicating censoring status due to protocol violation (0=no censoring, 1=censoring at the corresponding visit). When a protocol violation occurs, “C” is set to one at the corresponding visit and all following visits. The visits where the censoring variable “C” equals one are deleted from the analysis data set. Table 12.5 shows, for each treatment strategy (“regime”) and each pain level, the values of OPI_new for which the censoring variable “C” should be set equal to one.

Table 12.5: Conditions That Lead to Censoring Due to Protocol Violation

Treatment strategy    Regime    Censor at pain level … AND OPI_new =
(Treat at)                       5    6    7    8    9    10

Pain ≥ 5                 5       0
Pain ≥ 6                 6       1    0
Pain ≥ 7                 7       1    1    0
Pain ≥ 8                 8       1    1    1    0
Pain ≥ 9                 9       1    1    1    1    0
Pain ≥ 10               10       1    1    1    1    1    0
Never                   11       1    1    1    1    1    1

This can also be expressed as a rule using the threshold stored in the variable “regime.” Individuals are censored either when they cross the threshold at which opioid therapy should have been started without actually starting it, or when they start opioid therapy before reaching the threshold. Hence, in Program 12.11 the censoring variable “c” is set to one (indicating censoring) when

● regime ≤ Pain_cat_max AND OPI_new = 0, or

● regime > Pain_cat_max AND OPI_new = 1.

Program 12.11: Censor Cases Due to Protocol Violation

*** Remove visits for censoring due to protocol violation;

proc sort data= TT39; by newid Visit ; run;

data TT39;

  set TT39;

  c=.;

  elig_c=.;

  if Opi_base_TT ne 1 then elig_c=1;

  if Opi_new=0 and regime <= Pain_cat_max and elig_c=1 then c=1;

  if Opi_new=1 and regime > Pain_cat_max and elig_c=1 then c=1;

run;

data TT39;

  set TT39;

  by newid;

  retain cup;

  if first.newid then cup = .;

  if c ne . then cup = c;

  else if c=. then c=cup;

  drop cup;

  if c=. then c=0;

  if c=1 then delete;

run;

proc means data= TT39 n nmiss mean std min p25 median p75 max;

  class regime visit_TT;

  var pain_diff;

  title 'outcome summary of non-censored individuals by regime';

run;

proc freq data= TT39;

  table Opi_new*regime*Pain_cat_max;

  title 'actual treatment vs regime vs max pain score';

run;
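The censoring rule coded in Program 12.11 can be expressed as a single predicate. The following Python is an illustrative restatement (function name hypothetical), not a replacement for the SAS step:

```python
# Sketch of the artificial-censoring rule: clones already on opioids at
# target-trial baseline are never artificially censored (elig_c ne 1);
# otherwise a clone is censored when the assigned threshold was crossed
# without starting opioids, or opioids were started before the threshold.

def protocol_censored(regime, pain_cat_max, opi_new, opi_base_tt):
    """True when the clone deviates from its assigned strategy."""
    if opi_base_tt == 1:
        return False                      # not eligible for censoring
    if opi_new == 0 and regime <= pain_cat_max:
        return True                       # should have started, did not
    if opi_new == 1 and regime > pain_cat_max:
        return True                       # started treatment too early
    return False
```

For example, a clone assigned regime 7 whose maximum pain category has reached 8 is censored while untreated, but remains uncensored once on opioids.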

Individuals Complying with Strategies

Program 12.12 summarizes the number of subjects following each treatment strategy at each visit along with their unadjusted changes in pain scores. The output is displayed in Table 12.6.

Program 12.12: Summarizing Subjects Following Each Treatment Strategy

proc means data= TT39 n nmiss mean std min p25 median p75 max;

  class regime visit_TT;

  var pain_diff;

  where pain_diff ne .;

  title 'Observed Pain change in individuals complying to strategies';

run;

Table 12.6: Observed Pain Change in Individuals Complying to Strategies

Analysis Variable: Pain_diff

Regime  Visit  N Obs   Mean   Std Dev    Min   25th Pctl  Median  75th Pctl   Max
  5       1     190   -0.57     1.65   -6.00      -1.50    -0.50      0.50   4.00
  5       2     142   -0.13     1.63   -4.50      -1.25    -0.13      1.00   4.25
  6       1     314   -0.40     1.67   -6.00      -1.50    -0.25      0.75   3.75
  6       2     165    0.15     1.58   -3.50      -0.75     0.00      1.25   4.50
  7       1     395   -0.40     1.68   -6.00      -1.50    -0.25      0.75   3.75
  7       2     208    0.18     1.62   -4.25      -0.75     0.25      1.25   4.50
  8       1     467   -0.44     1.73   -6.00      -1.50    -0.50      0.75   3.75
  8       2     263    0.08     1.67   -4.25      -1.00     0.00      1.25   4.75
  9       1     500   -0.50     1.74   -6.00      -1.75    -0.50      0.75   3.75
  9       2     282    0.06     1.63   -4.75      -1.00     0.00      1.25   4.75
 10       1     507   -0.55     1.77   -6.00      -1.75    -0.50      0.75   3.75
 10       2     286    0.03     1.64   -4.75      -1.00     0.00      1.25   4.75
 11       1     510   -0.54     1.75   -5.75      -1.75    -0.50      0.75   3.75
 11       2     287    0.03     1.64   -4.75      -1.00     0.00      1.25   4.75

Table 12.7 lists the variables computed by the code thus far.

Table 12.7: List of Newly Created Variables

Variable Name     Description
OPI               Opioid treatment
Pain_cat_B        Categorical pain level at baseline
Pain_cat          Categorical pain level
Pain_cat_max      Maximal categorical pain level
pv_Pain_max       Maximal categorical pain level at the previous visit
Opi_new           Opioid treatment where starting treatment means being on treatment
Opi_base          Opioid treatment at baseline
pv_Opi            Opioid treatment at the previous visit
Visit_TT          Visit of the target trial
Opi_base_TT       Opioid treatment at target trial baseline
pv_Opi_base_TT    Opioid treatment prior to target trial
Pain_cat_b_TT     Pain category at target trial baseline
BPIPain_B_TT      Pain at target trial baseline
BPIInterf_B_TT    BPI Interference score at target trial baseline
PHQ8_B_TT         PHQ8 total score at target trial baseline
PainNV            Pain level at the following visit
Pain_diff         Pain change from current pain to next visit (6 months)
_LEVEL_           Response value
pOpi_base         Estimated probability of starting opioids (baseline model)
_LEVEL_2          Response value
pOpi_full         Estimated probability of starting opioids (full model)
LFU               Indicator of being lost to follow-up at the next visit
_LEVEL_3          Response value
pLFU_base         Estimated probability of being lost to follow-up at the next visit using the base model
_LEVEL_4          Response value
pLFU_full         Estimated probability of being lost to follow-up at the next visit using the full model
regime            Treatment strategy of the target trial
newid             Target trial ID indicating clone ID and allocated strategy
c                 Indicates censoring due to protocol violation (artificial censoring)
elig_c            Eligibility for artificial censoring

12.3.5 Creating Weights

Creating the weights for each patient at each visit involves several steps.

1. Compute the probability of not being censored due to protocol violation (artificial censoring)

a. for the numerator (using the baseline model)

b. for the denominator (using the full model)

2. Compute the probability of not being censored due to loss to follow-up

a. for the numerator (using the baseline model)

b. for the denominator (using the full model)

3. Compute the stabilized and unstabilized weights for protocol violation and for loss to follow-up. The computations are cumulative: the cumulative weight for visit n is the cumulative weight for visit n-1 times the weight for visit n.

4. Combine (multiply) the weights for protocol violation and loss to follow-up.

For step 3, let C(k)=0 indicate remaining uncensored through interval k. The formula for the unstabilized weight (USW) at visit t is

USW(t) = \prod_{k=0}^{t} \frac{1}{\Pr[C(k)=0 \mid C(k-1)=0, V, L(k)]}.

Similarly, the formula for the stabilized weight (SW) is

SW(t) = \prod_{k=0}^{t} \frac{\Pr[C(k)=0 \mid C(k-1)=0, V]}{\Pr[C(k)=0 \mid C(k-1)=0, V, L(k)]},

where V represents the vector of non-time-varying (baseline) confounders, and L(k) represents the vector of time-varying confounders at time k.

In this example, censoring could be due to protocol violation or loss to follow-up. Hence, the probability of not being censored is the product of the probability of not being censored due to protocol violation and the probability of not being censored due to loss to follow-up. (See Program 12.13.)

The probability of not being censored due to protocol violation (step 1) equals one when opioid treatment is already started at prior visits. Table 12.8 shows the probability of being uncensored when opioids were not started at prior visits.

Table 12.8: Probability of Being Uncensored (for Protocol Violations)

Treatment strategy    Regime    Probability of being uncensored at pain level
(Treat at)                       5         6         7         8         9         10

Pain ≥ 5                 5       pOpi      1         1         1         1         1
Pain ≥ 6                 6       1-pOpi    pOpi      1         1         1         1
Pain ≥ 7                 7       1-pOpi    1-pOpi    pOpi      1         1         1
Pain ≥ 8                 8       1-pOpi    1-pOpi    1-pOpi    pOpi      1         1
Pain ≥ 9                 9       1-pOpi    1-pOpi    1-pOpi    1-pOpi    pOpi      1
Pain ≥ 10               10       1-pOpi    1-pOpi    1-pOpi    1-pOpi    1-pOpi    pOpi
Never                   11       1-pOpi    1-pOpi    1-pOpi    1-pOpi    1-pOpi    1-pOpi

Program 12.13: Computing Patient Weights

***********

*Step 1: censoring due to protocol violation

***********;

data TT41;

  set TT39;  

  if pv_Opi=1 then den_weight_arti=1; else

  if regime = pain_cat_max then den_weight_arti = pOpi_full;

  if regime < pain_cat_max and regime > pv_Pain_max then den_weight_arti =

        pOpi_full;

  if regime < pain_cat_max and regime le pv_Pain_max then den_weight_arti =

        1;

  if regime > pain_cat_max then den_weight_arti = 1-pOpi_full;

  if pv_Opi=1 then num_weight_arti=1; else

  if regime = pain_cat_max then num_weight_arti = pOpi_base;

  if regime < pain_cat_max and regime > pv_Pain_max then num_weight_arti =

        pOpi_base;

  if regime < pain_cat_max and regime le pv_Pain_max then num_weight_arti =

        1;

  if regime > pain_cat_max then num_weight_arti = 1- pOpi_base;

run;

***********

Step 2: Lost to follow-up

***********;

data TT41;

  set TT41;

  if pLFU_full=. then den_weight_LFU=1;

    else den_weight_LFU= 1-pLFU_full;

  if pLFU_base=. then num_weight_LFU=1;

    else num_weight_LFU= 1-pLFU_base;

  if visit = 5 then do;

  den_weight_LFU=1;

  num_weight_LFU=1;

  end;

run;

***********

*Step 3a Cumulative weights: censoring due to protocol violation

***********;

data TT42;

  set TT41;

  by newid;

  retain dencum_arti;

  if first.newid then do; dencum_arti = 1 ; end;

  dencum_arti = dencum_arti * den_weight_arti;

  retain numcum_arti;

  if first.newid then do; numcum_arti = 1 ;end;

  numcum_arti = numcum_arti * num_weight_arti;

  unstw_arti= 1/ dencum_arti;

  stw_arti= numcum_arti/ dencum_arti;

run;

***********

*Step 3b Compute cumulative weights: due to loss to follow-up

***********;

data TT42;

  set TT42;

  by newid;

  retain dencum_LFU;

  if first.newid then do; dencum_LFU = 1 ;end;

  dencum_LFU = dencum_LFU * den_weight_LFU;

  retain numcum_LFU;

  if first.newid then do; numcum_LFU = 1 ;end;

  numcum_LFU = numcum_LFU * num_weight_LFU;

  unstw_LFU= 1/ dencum_LFU;

  stw_LFU= numcum_LFU/ dencum_LFU;

run;

***********

Step 4. Combine the weights for protocol violation and loss to follow-up

***********;

data TT44;

  set TT42;

  st_weight=stw_LFU*stw_arti;

  unst_weight=unstw_LFU*unstw_arti;

run;

proc univariate data=TT44;

  var unst_weight st_weight;

  histogram;

  title 'distribution of unstabilized and stabilized weights';

run;
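The cumulative products of steps 3 and 4 reduce to a running product per clone. The following Python sketch illustrates the arithmetic with hypothetical per-visit probabilities (the actual probabilities come from the logistic models above):

```python
# Sketch of the cumulative weight computation for one clone: given
# per-visit probabilities of remaining uncensored from the numerator
# (baseline-only) and denominator (full) models, return per-visit
# (unstabilized, stabilized) cumulative weights:
#   unstabilized = 1 / prod(denominator probs so far)
#   stabilized   = prod(numerator probs) / prod(denominator probs)

def cumulative_weights(num_probs, den_probs):
    """Return a list of (unstabilized, stabilized) weights per visit."""
    num_cum, den_cum, out = 1.0, 1.0, []
    for n, d in zip(num_probs, den_probs):
        num_cum *= n
        den_cum *= d
        out.append((1.0 / den_cum, num_cum / den_cum))
    return out
```

In the full analysis this product would combine both sources of censoring, since the probability of remaining uncensored is the product of the protocol-violation and loss-to-follow-up components.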

Figure 12.3: Distribution of Unstabilized Weights

Figure 12.3 displays the distribution of the unstabilized weights. Note that Program 12.13 computes both the standard unstabilized and stabilized weights. However, Cain et al. (2010) note that the standard stabilization approach used here is not appropriate for a replicates analysis of dynamic treatment regimes. Because there are no extreme weights, our analysis moves forward with the unstabilized weights. Despite the lack of extreme weights, for demonstration we also conduct a sensitivity analysis truncating the weights at the 5th and 95th percentiles. Cain et al. (2010) propose a new stabilized weight for dynamic treatment regime analyses in which the stabilization factor in the numerator is based on the probability of censoring under the dynamic strategy. However, little research has been done to establish best practices for weighting in this scenario, and readers are referred to the Appendix of their work.

12.3.6 Base-Case Analysis

The final step of the MSM analysis is to specify the outcome model including the estimated weights. In this case, it is a weighted repeated measures model (PROC GENMOD) estimating the time-dependent six-month change of the BPI pain score as shown in Program 12.14.

The model specifications are:

a. Outcome variable:

Pain change

b. Influential variables:

Baseline variables

Treatment strategies (regime)

Interaction with time

c. Weights:

Unstabilized weights (unst_weight)

Program 12.14: Base-Case Analysis

PROC GENMOD data = TT44;

  where visit_TT < 3 AND pain_diff ne .;

  class SubjID regime(ref="11") visit_tt;

  weight unst_weight;

  model Pain_diff = regime visit_tt regime*visit_tt;

  REPEATED SUBJECT = SubjID / TYPE=EXCH;

  LSMEANS regime visit_tt regime*visit_tt / pdiff;

  TITLE 'FINAL ANALYSIS MODEL: target trial';

run;    

As a reminder, this type of methodology is relatively new, and best practices are not well established. Cain et al. (2015) used a bootstrap estimate for the standard errors, and we recommend following their guidance rather than relying on the standard errors from the GENMOD procedure, in order to account for the replicates process. While Program 12.14 does not incorporate the bootstrap procedure, in order to keep the focus on the analytic steps, we also implemented bootstrap estimation of the standard errors (resampling from the original sample) and recommend that readers do the same. In the analyses below, the bootstrap standard errors for the estimated differences between regimes were slightly larger than those from the standard GENMOD output, though inferences remained unchanged.
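The subject-level bootstrap can be sketched in a few lines. This Python is a hypothetical illustration (the book's analysis is in SAS); `estimate_fn` stands in for the entire pipeline (cloning, weighting, and the GENMOD outcome model) run on one resample:

```python
import random
import statistics

# Sketch of a subject-level bootstrap for the replicates analysis: draw
# subjects with replacement from the ORIGINAL sample (before cloning),
# re-run the full analysis on each resample, and use the SD of the
# resulting estimates as the bootstrap standard error.

def bootstrap_se(subject_ids, estimate_fn, n_boot=1000, seed=1234):
    """Return the bootstrap SE of estimate_fn over subject resamples."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(subject_ids) for _ in subject_ids]
        estimates.append(estimate_fn(resample))
    return statistics.stdev(estimates)
```

Resampling at the subject level, rather than at the clone level, is what accounts for the dependence induced by the replicates.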

Tables 12.9 (main effects and interactions) and 12.10 (pairwise differences) and Figure 12.4 display the least squares means from the MSM analysis of the effect of the different treatment strategies on 6-month change scores. With Regime 11 (no treatment) as the reference, there were no statistically significant treatment differences for any of the other regimes. Compared to no treatment, only strategies 6–8 (starting opioid treatment at a pain level of 6, 7, or 8) yielded numerically greater pain relief. However, the differences between the dynamic treatment strategies were small and not statistically significant. Under all treatment strategies, pain increased slightly at the second post-baseline visit after an initial pain relief at the first.

Table 12.9: MSM Base Case Analysis Results (Least Squares Means): Six-Month Change in Pain Scores

(All rows are based on 1000 bootstrap samples; CI limits are bootstrap 95% confidence limits.)

Effect           regime  Visit_TT  Estimate  Bootstrap SE  Lower 95% CI  Upper 95% CI  p-Value
regime              5       _      -0.4417     0.063022      -0.56519      -0.31814     <.001
regime              6       _      -0.5013     0.072539      -0.64346      -0.35911     <.001
regime              7       _      -0.5044     0.075821      -0.65297      -0.35575     <.001
regime              8       _      -0.5107     0.069337      -0.6466       -0.3748      <.001
regime              9       _      -0.4533     0.05392       -0.55902      -0.34766     <.001
regime             10       _      -0.4631     0.051927      -0.56486      -0.36131     <.001
regime             11       _      -0.4929     0.052039      -0.59491      -0.39092     <.001
Visit_TT            _       1      -0.7723     0.07692       -0.92305      -0.62152     <.001
Visit_TT            _       2      -0.1898     0.097899      -0.3817        0.002065    0.053
regime*Visit_TT     5       1      -0.6593     0.137258      -0.92832      -0.39027     <.001
regime*Visit_TT     5       2      -0.2240     0.135921      -0.49044       0.042371    0.099
regime*Visit_TT     6       1      -0.7346     0.119357      -0.96856      -0.50067     <.001
regime*Visit_TT     6       2      -0.2680     0.154159      -0.57011       0.034194    0.082
regime*Visit_TT     7       1      -0.7428     0.097011      -0.93294      -0.55266     <.001
regime*Visit_TT     7       2      -0.2659     0.147396      -0.55482       0.022975    0.071
regime*Visit_TT     8       1      -0.7474     0.081513      -0.90722      -0.58768     <.001
regime*Visit_TT     8       2      -0.2740     0.138379      -0.54518      -0.00273     0.048
regime*Visit_TT     9       1      -0.8175     0.080678      -0.97568      -0.65942     <.001
regime*Visit_TT     9       2      -0.08913    0.111559      -0.30778       0.129528    0.424
regime*Visit_TT    10       1      -0.8471     0.074408      -0.99296      -0.70128     <.001
regime*Visit_TT    10       2      -0.07905    0.109113      -0.29291       0.134808    0.469
regime*Visit_TT    11       1      -0.8572     0.072761      -0.99977      -0.71455     <.001
regime*Visit_TT    11       2      -0.1287     0.11088       -0.346         0.088655    0.246

Table 12.10: MSM Base Case Analysis Results (Select Pairwise Differences): Six-Month Change in Pain Scores

(All rows are based on 1000 bootstrap samples; CI limits are bootstrap 95% confidence limits.)

Effect     regime  Visit_TT  _regime  _Visit_TT  Estimate   Bootstrap SE  Lower 95% CI  Upper 95% CI  p-Value
regime        5       _         6         _       0.05962      0.04468      -0.02796       0.14720     0.182
regime        5       _         7         _       0.06270      0.05880      -0.05255       0.17794     0.286
regime        5       _         8         _       0.06904      0.06551      -0.05937       0.19744     0.292
regime        5       _         9         _       0.01167      0.06499      -0.11571       0.13906     0.857
regime        5       _        10         _       0.02142      0.06471      -0.10542       0.14826     0.741
regime        5       _        11         _       0.05125      0.06501      -0.07616       0.17866     0.430
regime        6       _         7         _       0.003075     0.04853      -0.09204       0.09819     0.949
regime        6       _         8         _       0.009416     0.05847      -0.10518       0.12401     0.872
regime        6       _         9         _      -0.04795      0.06883      -0.18286       0.08696     0.486
regime        6       _        10         _      -0.03820      0.06902      -0.17348       0.09707     0.580
regime        6       _        11         _      -0.00837      0.07000      -0.14556       0.12882     0.905
regime        7       _         8         _       0.006340     0.04440      -0.08069       0.09337     0.886
regime        7       _         9         _      -0.05102      0.06169      -0.17194       0.06989     0.408
regime        7       _        10         _      -0.04128      0.06379      -0.16631       0.08376     0.518
regime        7       _        11         _      -0.01145      0.06472      -0.13830       0.11540     0.860
regime        8       _         9         _      -0.05736      0.04792      -0.15129       0.03656     0.231
regime        8       _        10         _      -0.04762      0.05011      -0.14583       0.05060     0.342
regime        8       _        11         _      -0.01779      0.05153      -0.11878       0.08320     0.730
regime        9       _        10         _       0.009747     0.02279      -0.03493       0.05442     0.669
regime        9       _        11         _       0.03958      0.02590      -0.01119       0.09034     0.126
regime       10       _        11         _       0.02983      0.01369       0.00301       0.05665     0.029
Visit_TT      _       1         _         2      -0.5825       0.14243      -0.86162      -0.30331     <.001

Figure 12.4: Least Squares Means from the MSM Analysis: Change in Pain Scores by Treatment Strategy (Regimes 5–11)

12.3.7 Selecting the Optimal Strategy

To find the optimal strategy, we use the MSM with regime as a continuous covariate, modeling pain change as a continuous function of regime: first a test of linear trend, then a quadratic model (Program 12.15). Tables 12.11 and 12.12 show that neither analysis demonstrates a significant association between regime and outcome. So, consistent with the pairwise analyses, there is no clear best dynamic treatment strategy and no evidence of a difference in outcomes between the various strategies.

Program 12.15: Linear and Quadratic Models

PROC GENMOD data = TT44;

  where visit_TT < 3 AND pain_diff ne .;

  class SubjID visit_tt(ref="1");

  weight unst_weight;

  model Pain_diff =  regime visit_tt;

  repeated SUBJECT = SubjID / TYPE=EXCH;

  title 'Smoothing: trend';

run;    

PROC GENMOD data = TT44;

  where visit_TT < 3 AND pain_diff ne .;

  class SubjID visit_tt(ref="1");

  weight unst_weight;

  model Pain_diff =  regime regime*regime visit_tt;

  repeated SUBJECT = SubjID / TYPE=EXCH;

  title ‘Smoothing: u shape’;

run;    

Table 12.11: Comparisons of Treatment Regimes: Linear Trend

| Parm | Level1 | Estimate | Number of Bootstrap Samples | Bootstrap Standard Error | Lower limit of 95% CI (bootstrap) | Upper limit of 95% CI (bootstrap) | p-Value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Intercept | | -0.7488 | 1000 | 0.13514 | -1.01372 | -0.48397 | <.001 |
| regime | | -0.0027 | 1000 | 0.011935 | -0.02607 | 0.02072 | 0.823 |
| Visit_TT | 2 | 0.5789 | 1000 | 0.142889 | 0.29885 | 0.858975 | <.001 |
| Visit_TT | 1 | 0.0000 | 1000 | 0 | 0 | 0 | . |

Table 12.12: Comparison of Regimes: Quadratic Model

| Parm | Level1 | Estimate | Number of Bootstrap Samples | Bootstrap Standard Error | Lower limit of 95% CI (bootstrap) | Upper limit of 95% CI (bootstrap) | p-Value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Intercept | | -0.5897 | 1000 | 0.29502 | -1.16795 | -0.01148 | 0.046 |
| regime | | -0.0445 | 1000 | 0.070366 | -0.18244 | 0.093392 | 0.527 |
| regime*regime | | 0.0026 | 1000 | 0.00422 | -0.0057 | 0.010837 | 0.543 |
| Visit_TT | 2 | 0.5792 | 1000 | 0.142782 | 0.299358 | 0.859066 | <.001 |
| Visit_TT | 1 | 0.0000 | 1000 | 0 | 0 | 0 | . |

12.3.8 Sensitivity Analyses

To assess the robustness of the results, several sensitivity analyses should be conducted, addressing the key assumptions and analysis planning decisions that were made. Because this analysis relies on inverse probability weighting, the potential for the results to be influenced by outlying weights should be investigated. We chose to truncate the unstabilized weights at the 5th and 95th percentiles. The coding is straightforward: obtain the observed percentiles with the UNIVARIATE or MEANS procedure, truncate the weights at those values, and rerun the same analytical code. Truncating the weights did not have any effect on the outcome (code and data not shown).
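As a sketch of this truncation step, the percentiles can be captured with PROC UNIVARIATE and applied in a DATA step. The data set and weight variable names (TT44, unst_weight) follow Program 12.15; wt_pctl, P_5, P_95, and trunc_weight are hypothetical names introduced here for illustration.

```sas
/* Obtain the observed 5th and 95th percentiles of the unstabilized weights */
proc univariate data = TT44 noprint;
  var unst_weight;
  output out = wt_pctl pctlpts = 5 95 pctlpre = P_;  /* creates P_5 and P_95 */
run;

/* Truncate weights falling outside these limits */
data TT44_trunc;
  if _n_ = 1 then set wt_pctl;       /* attach the percentiles to every record */
  set TT44;
  trunc_weight = min(max(unst_weight, P_5), P_95);
run;
```

Rerunning the PROC GENMOD analyses with data = TT44_trunc and weight trunc_weight then produces the sensitivity results.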

12.4 Summary

In this chapter we presented the theory of the target trial approach and demonstrated how it can be used to evaluate different dynamic treatment strategies with longitudinal observational data. Conceptualizing the study as a researcher would a randomized trial was shown to help avoid immortal time bias and time-dependent confounding. For the analysis comparing different treatment strategies, the concept of replication, or cloning, was introduced. This approach creates a copy (replicate) of each subject for each treatment regime of interest, and each replicate is censored at the first point at which it deviates from the assigned treatment strategy. Using weights based on the probability of deviating from the strategy, a marginal structural modeling approach is then applied for the analysis. With the simulated REFLECTIONS study, we demonstrated the development of a target trial protocol to evaluate dynamic strategies of starting opioids as soon as pain first reached various threshold levels, and we provided step-by-step instructions and SAS code for implementing the analysis. Results showed only small differences in pain scores between treatment strategies, with no clear evidence that any one strategy, early or delayed, is optimal.
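The cloning step described above can be sketched in a single DATA step. This is a simplified illustration, assuming one record per subject and visit, regimes indexed 5 through 11 as in the analyses, and hypothetical data set and variable names (patients, opioid_started, pain); the "start opioid when pain first reaches the threshold" rule is reduced here to a per-visit check.

```sas
/* Create one replicate (clone) of each subject record per regime of interest */
data replicates;
  set patients;                      /* one record per subject and visit */
  do regime = 5 to 11;               /* one clone per candidate regime   */
    /* A clone deviates when observed treatment disagrees with the regime
       rule at this visit: opioid started if and only if pain has reached
       the regime's threshold (a simplification of the "first reaches" rule) */
    deviated = (opioid_started ne (pain ge regime));
    output;
  end;
run;
```

In the full analysis, each replicate is censored at its first deviation, and inverse probability of censoring weights are then estimated and applied in the MSM analyses presented earlier in the chapter.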

References

Almirall D, Ten Have T, Murphy SA (2010). Structural Nested Mean Models for Assessing Time-Varying Effect Moderation. Biometrics 66(1): p. 131-9. Epub 2009 Apr 13 doi:10.1111/j.1541-0420.2009.01238.x.

Bellamy SL, Lin JY, Ten Have TR (2007). An introduction to causal modeling in clinical trials. Clin Trials 4(1): p. 58-73.

Brumback BA, et al. (2004). Sensitivity analyses for unmeasured confounding assuming a marginal structural model for repeated measures. Stat Med 23(5): p. 749-67.

Cain LE, et al. (2010). When to Start Treatment? A Systematic Approach to the Comparison of Dynamic Regimes Using Observational Data. The International Journal of Biostatistics 6(2): p. 18.

Cain LE, et al. (2011). When to initiate combined antiretroviral therapy to reduce mortality and AIDS-defining illness in HIV-infected persons in developed countries: an observational study. Ann Intern Med 154(8): p. 509-15.

Cain LE, et al. (2015). Using observational data to emulate a randomized trial of dynamic treatment-switching strategies: an application to antiretroviral therapy. Int J Epidemiol. 45(6):2038-2049.

Chakraborty B, Murphy SA (2014). Dynamic Treatment Regimes. Annu Rev Stat Appl. 1: p. 447-464.

Cheung YK, Chakraborty B, Davidson KW (2015). Sequential multiple assignment randomized trial (SMART) with adaptive randomization for quality improvement in depression treatment program. Biometrics 71(2): p. 450-9.

Cole SR, et al. (2005). Marginal structural models for estimating the effect of highly active antiretroviral therapy initiation on CD4 cell count. Am J Epidemiol 162(5): p. 471-8.

Cole SR, Hernán MA (2008). Constructing inverse probability weights for marginal structural models. American Journal of Epidemiology 168: p. 656-64.

Daniel RM, et al. (2013). Methods for dealing with time-dependent confounding. Stat Med 32(9): p. 1584-618.

Emilsson L, et al. (2018). Examining Bias in Studies of Statin Treatment and Survival in Patients With Cancer. JAMA Oncol 4(1): p. 63-70.

Faries DE, Kadziola ZA (2010). Analysis of longitudinal observational data using marginal structural models, in Analysis of Observational Health Care Data Using SAS. Cary, NC: SAS Institute Inc, p. 211-230.

Garcia-Albeniz X, et al. (2015). Immediate versus deferred initiation of androgen deprivation therapy in prostate cancer patients with PSA-only relapse. An observational follow-up study. Eur J Cancer 51(7): p. 817-24.

Goetghebeur E, Loeys T (2002). Beyond intention to treat. Epidemiol Rev 24(1): p. 85-90.

Goldfeld KS (2014). Twice-weighted multiple interval estimation of a marginal structural model to analyze cost-effectiveness. Statistics in Medicine 33(7): p. 1222-1241.

Greenland S, Brumback B (2002). An overview of relations among causal modelling methods. Int J Epidemiol 31(5): p. 1030-7.

Hernán MA, Brumback B, Robins JM (2000). Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. Epidemiology 11(5): p. 561-70.

Hernán MA, Brumback BA, Robins JM (2002). Estimating the causal effect of zidovudine on CD4 count with a marginal structural model for repeated measures. Stat Med 21(12): p. 1689-709.

Hernán MA, et al. (2005). Structural accelerated failure time models for survival analysis in studies with time-varying treatments. Pharmacoepidemiol Drug Saf. 14(7): p. 477-91.

Hernán MA, et al. (2016). Specifying a target trial prevents immortal time bias and other self-inflicted injuries in observational analyses. J Clin Epidemiol. 79: p. 70-75.

Hernán MA, Hernandez-Diaz S, Robins JM (2004). A structural approach to selection bias. Epidemiology 15(5): p. 615-25.

Hernán MA, Robins JM (2016). Using Big Data to Emulate a Target Trial When a Randomized Trial Is Not Available. Am J Epidemiol. 183(8): p. 758-64.

Hernán MA, Robins JM (2017). Per-Protocol Analyses of Pragmatic Trials. N Engl J Med 377(14): p. 1391-1398.

Hernán MA, Robins JM (2020). Causal Inference: What If. Boca Raton: Chapman & Hall/CRC.

HIV-CAUSAL Collaboration (2011). When to initiate combined antiretroviral therapy to reduce mortality and AIDS-defining illness in HIV-infected persons in developed countries: an observational study. Ann Intern Med 154(8): p. 509-15.

Jonsson L, et al. (2014). Analyzing overall survival in randomized controlled trials with crossover and implications for economic evaluation. Value Health 17(6): p. 707-13.

Ko H, Hogan JW, Mayer KH (2003). Estimating causal treatment effects from longitudinal HIV natural history studies using marginal structural models. Biometrics 59(1): p. 152-62.

Kuehne F, et al. (2019). Guidance for a causal comparative effectiveness analysis emulating a target trial based on big real world evidence: when to start statin treatment. J Comp Eff Res 8(12): p. 1013-1025.

Latimer N (2012). The role of treatment crossover adjustment methods in the context of economic evaluation. Health Economics and Decision Science, School of Health and Related Research, The University of Sheffield [submitted].

Latimer NR, Abrams KR (2014). NICE DSU technical support document 16: Adjusting survival time estimates in the presence of treatment switching. NICE.

Latimer NR, et al. (2014). Adjusting survival time estimates to account for treatment switching in randomized controlled trials--an economic evaluation context: methods, limitations, and recommendations. Med Decis Making 34(3): p. 387-402.

Latimer NR, et al. (2015). Adjusting for the Confounding Effects of Treatment Switching-The BREAK-3 Trial: Dabrafenib Versus Dacarbazine. Oncologist 20(7): p. 798-805.

Latimer NR, et al. (2016). Assessing methods for dealing with treatment switching in clinical trials: A follow-up simulation study. Stat Methods Med Res 27(3):765-784.

Lavori P, Dawson R (2000). A design for testing clinical strategies: Biased adaptive within-subject randomization. Journal of the Royal Statistical Society, Series A 163: p. 29-38.

Lavori P, Dawson R (2004). Dynamic treatment regimes: Practical design considerations. Clinical Trials 1: p. 9-20.

Lavori PW, Dawson R (2014). Introduction to dynamic treatment strategies and sequential multiple assignment randomization. Clin Trials 11(4): p. 393-399.

Liu Y, Wang Y, Zeng D (2017). Sequential multiple assignment randomization trials with enrichment design. Biometrics 73(2): p. 378-390.

Lodi S, et al. (2019). Effect Estimates in Randomized Trials and Observational Studies: Comparing Apples With Apples. Am J Epidemiol 188(8): p. 1569-1577.

Morden JP, et al. (2011). Assessing methods for dealing with treatment switching in randomised controlled trials: a simulation study. BMC Med Res Methodol 11: p. 4.

Murphy S (2005). An experimental design for the development of adaptive treatment strategies. Statistics in Medicine 24: p. 1455-1481.

Murray EJ, Hernán MA (2018). Improved adherence adjustment in the Coronary Drug Project. Trials 19(1): p. 158.

Nahum-Shani I, et al. (2019). SMART longitudinal analysis: A tutorial for using repeated outcome measures from SMART studies to compare adaptive interventions. Psychol Methods. doi: 10.1037/met0000219.

Pearl J (2010). An introduction to causal inference. Int J Biostat. 6(2): p. Article 7. doi: 10.2202/1557-4679.1203.

Ray M, et al. (2010). The effect of combined antiretroviral therapy on the overall mortality of HIV-infected individuals. AIDS 24(1): p. 123-37.

Robins J, Hernán M, Siebert U (2004). Chapter 28: Effects of Multiple Interventions, in Comparative Quantification of Health Risks: Global and Regional Burden of Disease Attributable to Selected Major Risk Factors, M. Ezzati, et al., Editors. Geneva: World Health Organization, p. 2191-2229.

Robins J, Hernán MA (2009). Advances in Longitudinal Data Analysis. Boca Raton: Chapman & Hall, CRC Press.

Robins J, Orellana L, Rotnitzky A (2008). Estimation and extrapolation of optimal treatment and testing strategies. Stat Med 27(23): p. 4678-721.

Robins J, Rotnitzky A (2014). Discussion of “Dynamic treatment regimes: Technical challenges and applications”. Elect J Stat 8: p. 1273-1289.

Robins JM (1986). A new approach to causal inference in mortality studies with a sustained exposure period-application to control of the healthy worker survivor effect. Math Model 7(9-12): p. 1393-1512.

Robins JM, Finkelstein DM (2000). Correcting for noncompliance and dependent censoring in an AIDS Clinical Trial with inverse probability of censoring weighted (IPCW) log-rank tests. Biometrics 56(3): p. 779-88.

Robinson RL, et al. (2012). Burden of illness and treatment patterns for patients with fibromyalgia. Pain Med 13(10): p. 1366-76.

Robinson RL, et al. (2013). Longitudinal observation of treatment patterns and outcomes for patients with fibromyalgia: 12-month findings from the reflections study. Pain Med 14(9): p. 1400-15.

Schomaker M, Kuhne F, Siebert U (2019). Re: “Effect Estimates in Randomized Trials and Observational Studies: Comparing Apples with Apples”. Am J Epidemiol. doi: 10.1093/aje/kwz194.

Snowden JM, Rose S, Mortimer KM (2011). Implementation of G-computation on a simulated data set: demonstration of a causal inference technique. Am J Epidemiol 173(7): p. 731-8.

Sterne JA, et al. (2009). Timing of initiation of antiretroviral therapy in AIDS-free HIV-1-infected patients: a collaborative analysis of 18 HIV cohort studies. Lancet 373(9672): p. 1352-63.

The National Institute for Health and Care Excellence (NICE) (2012). Vemurafenib for treating locally advanced or metastatic BRAF V600 mutation-positive malignant melanoma. NICE, p. 45.

The National Institute for Health and Care Excellence (NICE) (2013). Everolimus in combination with exemestane for treating advanced HER2-negative hormone-receptor-positive breast cancer after endocrine therapy. NICE, p. 62.

The National Institute for Health and Care Excellence (NICE) (2013). Crizotinib for previously treated non-small-cell lung cancer associated with an anaplastic lymphoma kinase fusion gene. NICE, p. 55.

The National Institute for Health and Care Excellence (NICE) (2011). Everolimus for the second-line treatment of advanced renal cell carcinoma. NICE, p. 46.

Vansteelandt S, et al. (2009). Marginal structural models for partial exposure regimes. Biostatistics 10(1): p. 46-59.

Vansteelandt S, Joffe M (2014). Structural Nested Models and G-estimation: The Partially Realized Promise. Statist Sci 29(4): p. 707-731.

Vansteelandt S, Keiding N (2011). Invited commentary: G-computation--lost in translation? Am J Epidemiol 173(7): p. 739-42.

Westreich D, et al. (2012). The parametric g-formula to estimate the effect of highly active antiretroviral therapy on incident AIDS or death. Stat Med 31(18): p. 2000-9.

Zhang Y, et al. (2014). Comparative effectiveness of two anemia management strategies for complex elderly dialysis patients. Med Care 52 Suppl 3: p. S132-9.

Zhang Y, et al. (2018). Comparing the Effectiveness of Dynamic Treatment Strategies Using Electronic Health Records: An Application of the Parametric g-Formula to Anemia Management Strategies. Health Serv Res. 53(3):1900-1918.
