Developing adaptive interventions for adolescent substance use treatment settings: protocol of an observational, mixed-methods project

Abstract

Background

Over 1.6 million adolescents in the United States meet criteria for substance use disorders (SUDs). While there are promising treatments for SUDs, adolescents respond to these treatments differentially in part based on the setting in which treatments are delivered. One way to address such individualized response to treatment is through the development of adaptive interventions (AIs): sequences of decision rules for altering treatment based on an individual’s needs. This protocol describes a project with the overarching goal of beginning the development of AIs that provide recommendations for altering the setting of an adolescent’s substance use treatment. This project has three discrete aims: (1) explore the views of various stakeholders (parents, providers, policymakers, and researchers) on deciding the setting of substance use treatment for an adolescent based on individualized need, (2) generate hypotheses concerning candidate AIs, and (3) compare the relative effectiveness among candidate AIs and non-adaptive interventions commonly used in everyday practice.

Methods

This project uses a mixed-methods approach. First, we will conduct an iterative stakeholder engagement process, using RAND’s ExpertLens online system, to assess the importance of considering specific individual needs and clinical outcomes when deciding the setting for an adolescent’s substance use treatment. Second, we will use results from the stakeholder engagement process to analyze an observational, longitudinal data set of 15,656 adolescents in substance use treatment, collected with the Global Appraisal of Individual Needs questionnaire and supported by the Substance Abuse and Mental Health Services Administration. We will use methods based on Q-learning regression to generate hypotheses about candidate AIs. Third, we will use robust statistical methods designed to handle case-mix adjustment on a large number of covariates (marginal structural modeling and inverse probability of treatment weights) to compare the relative effectiveness among candidate AIs and non-adaptive decision rules that are commonly used in everyday practice.

Discussion

This project begins filling a major gap in clinical and research efforts for adolescents in substance use treatment. Findings could be used to inform the further development and revision of influential multi-dimensional assessment and treatment planning tools, or lay the foundation for subsequent experiments to further develop or test AIs for treatment planning.

Background

Over 6% of all 12–17 year olds (or 1.6 million youth) meet criteria for a substance use disorder (SUD) [1]. SUDs can lead to immediate and long-term harms for adolescents, such as poor school performance, school drop-out and delinquency, future health problems, motor vehicle accidents, unintentional injuries, and suicide [2–5]. Although many adolescents with SUDs receiving treatment improve in the short-term, the vast majority continue to use or experience substance-related consequences within 6–12 months of discharge [7–11], while a large percentage drop out of treatment prior to finishing a particular treatment episode [12, 13].

Adolescents respond to treatment differentially in part based on the treatment setting: that is, whether care is received in outpatient, intensive outpatient, inpatient, or residential settings [6, 14–17]. A sequential, individualized approach to making decisions about substance use treatment aims to address heterogeneity in responses to treatment across adolescents as well as for the same adolescent over time [15, 18, 19]. In this approach, treatment providers make decisions about (1) what treatment to provide initially, (2) how best to monitor response to treatment, and (3) whether or how best to alter treatment [20–22]. There is currently little empirical research on which settings to offer treatment, at what stages in the clinical pathway, based on what kind of treatment response, measured using which variables. As a result, there is little guidance on such sequential decision-making. Thus, addiction science and clinical practice would benefit from empirically-based guidance for altering the treatment setting of adolescents with SUDs.

Adaptive interventions for adolescent substance use treatment settings

Adaptive interventions (AIs)—also known as adaptive treatment strategies, treatment algorithms, or dynamic treatment regimens—assist sequential, individualized decision-making by recommending whether, how, or when to alter treatment for an individual at critical decision points [23–25]. While AIs can be developed for clinical treatments or adherence interventions [26, 27], this project focuses specifically on developing AIs for adolescent substance use treatment settings: i.e., well-operationalized, empirically-supported recommendations for making individualized, sequential decisions about the setting in which an adolescent with an SUD should receive treatment, based on their individual needs. These AIs will be composed of (1) critical decision stages (i.e., 3-month intervals), (2) setting options, (3) tailoring variables and outcomes used to make decisions concerning which service to provide, and (4) replicable decision rules linking treatment setting recommendations, responses to tailoring variables, and desired outcomes. These AIs can assist families, providers, and policymakers in deciding on treatment settings for adolescent clients at multiple stages over time.

As an example, consider the AI shown in Fig. 1. There are two decision points: stage 1 concerns which service to recommend for months 0–3, and stage 2 concerns which service to recommend during months 3–6. In this example, there is only a single service option at stage 1, namely, outpatient services assigned at entry. In stage 2, there are two service options: discontinue outpatient or switch to intensive outpatient treatment. There are no tailoring variables at stage 1, while the tailoring variable at stage 2 is “responder status” during months 0–3 (e.g., abstinent or not during months 0–3). More complex AIs may include more than two stages, more intervention options at each stage, and multiple tailoring variables.

Fig. 1 Example of a 6-month, two-stage service-level adaptive intervention (AI) for adolescent marijuana users
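
To make the decision rule in Fig. 1 concrete, the sketch below expresses it as two small R functions. This is purely illustrative: the function and variable names (e.g., abstinent_months_0_to_3) are ours and do not correspond to GAIN fields or to the decision rules the project will ultimately estimate.

```r
# Illustrative sketch of the two-stage decision rule in Fig. 1.
# Names are ours and do not correspond to GAIN fields.

# Stage 1 (months 0-3): a single option, outpatient services assigned at entry.
stage1_recommendation <- function() {
  "outpatient"
}

# Stage 2 (months 3-6): tailored on responder status during months 0-3.
stage2_recommendation <- function(abstinent_months_0_to_3) {
  if (abstinent_months_0_to_3) {
    "discontinue outpatient"     # responders step down
  } else {
    "intensive outpatient"       # non-responders step up
  }
}

stage2_recommendation(abstinent_months_0_to_3 = FALSE)
#> [1] "intensive outpatient"
```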

Although there is research related to AIs for SUD treatment [15, 28–31], we are unaware of any research on recommendations for the setting of adolescent substance use treatment. The American Society of Addiction Medicine (ASAM) Criteria [32] are a major step forward in comprehensive, individualized treatment planning, although the research foundations for the ASAM Criteria are based largely on adult data [33–40]. Additional research is needed to generate hypotheses about, and to better understand the utility of, proposed AIs for adolescents. This project aims to develop and evaluate AIs that begin to provide much-needed empirical evidence addressing important questions such as: What tailoring variables and outcomes should be considered when deciding an adolescent’s treatment setting? What values on these tailoring variables signify response versus non-response? Are there any combinations of initial and subsequent treatment settings that work synergistically or antagonistically?

Methods for developing AIs using observational, practice-based data

There are several methods for making causal inferences about the effects of AIs that could be applied to observational, practice-based data [41, 42]. However, there have been few applications of such methods to complex, practice-based observational datasets relevant to adolescent SUDs. We suspect that available methods for identifying AIs have not been readily applied to observational data on adolescent SUDs in part because of a lack of theory and conceptual models to guide the development of AIs [54]. As such, the current study relies on a mixed-methods approach in which stakeholder engagement complements the statistical methods used to identify potentially meaningful AIs. In addition, an empirical study of the causal effects of AIs using observational study data requires particularly careful consideration of the potential impact of time-varying confounding bias [43–50]. In the sequential decision-making setting, confounders of the effect of subsequent treatment are often also outcomes of previous treatment. In such settings, the use of standard regression, or even naïve use of standard propensity score adjustments, may actually introduce more bias [51–53]. Appropriate and robust inverse-probability-of-treatment weighting methods can account for such confounders.

Objectives

This study protocol describes a project to begin the development of AIs for the settings of adolescent substance use treatment, based on adolescents’ evolving individual needs throughout the clinical pathway. This project has three discrete aims. First, we aim to explore the views of various stakeholders (parents, providers, policymakers, and researchers) on important factors to consider when deciding the setting of substance use treatment for adolescents. Second, we aim to generate hypotheses concerning candidate AIs via statistical analyses of observational, practice-based empirical data from adolescent substance use treatment settings and informed by our examination of stakeholder views. Third, we aim to compare the relative effectiveness among candidate AIs as well as non-adaptive interventions that are commonly used in everyday practice. The empirically-informed AIs we aim to produce can help guide future placement of adolescents with SUDs into the most appropriate treatment settings at the most appropriate times for their individual needs, as well as lay the foundations for future randomized trials to more rigorously evaluate or identify effective AIs [23, 55–63].

Methods

Global Appraisal of Individual Needs

The current project will utilize longitudinal observational data on 15,656 unique adolescents who were administered the Global Appraisal of Individual Needs (GAIN) [64]. The GAIN has eight sections assessing background and demographic characteristics, substance use, physical health, mental health, risk behaviors, and environmental, legal, and educational/vocational problem areas. Within each problem area, items assess problem characteristics, recency, severity, and service utilization. As such, longitudinal GAIN data give the current study an opportunity to identify and assess the relative effectiveness of candidate AIs. The data were routinely collected by Center for Substance Abuse Treatment (CSAT) discretionary grantees engaged in adolescent substance use program activities between 1997 and 2012. During this time, recipients of CSAT discretionary grants for adolescent substance use treatment research collected data on their clients using the GAIN. Adolescents were surveyed at four time points over the course of a year (namely, at intake and at 3, 6, and 12 months post-intake). Gender and race/ethnicity distributions closely mirrored those found in adolescent treatment samples nationally: 74% of admissions were male, 18% were non-Hispanic African American, and 35% were Hispanic/Latino. It is important to note that this dataset is not necessarily representative of all adolescent substance use treatment facilities operating today; we suspect these are better-performing facilities [65]. However, in the absence of a large nationally-representative dataset, this one is uniquely poised to address our project aims.

Aim 1. Identify key components of feasible AIs

Objectives

The first aim of this project is to examine the views of stakeholders (parents, clinicians, researchers, policymakers) on which tailoring variables and outcomes are most important to consider when deciding the setting of substance use treatment for adolescents. Results from this aim will inform the empirical data analyses for the remaining project aims.

Online Delphi process

Using an online system called ExpertLens [66], we will conduct an online Delphi process [67] in which we will ask stakeholders to rate the importance of individual needs contained within the GAIN as potential tailoring variables for treatment or key outcomes of interest. We will organize these individual needs according to the six dimensions of the ASAM Criteria: (1) acute intoxication and withdrawal potential; (2) biomedical conditions and complications; (3) emotional/behavioral/cognitive conditions and complications; (4) readiness to change; (5) relapse, continued use, or continued problem potential; and (6) recovery/living environment. We will also ask participants to rate the importance of clinical outcomes based on the National Outcome Measures (NOMs) from the Substance Abuse and Mental Health Services Administration (SAMHSA) [68].

Potential participants will be identified by first constructing a list based on published research, suggestions from known stakeholders in this area, and member lists of relevant societies and organizations. This will be followed by “snowball sampling” in which stakeholders can nominate further participants. To be eligible, participants must identify with one of the following four stakeholder groups: (1) parents of adolescents who have received substance use treatment; (2) providers of adolescent substance use treatment; (3) professionals involved in program planning at the clinic, health-system, state, or federal levels; or (4) researchers of adolescent substance use treatment. We aim to recruit at least 20–40 participants per stakeholder group. To improve recruitment and retention, participants will receive a $200 gift card for completing the process.

Our Delphi process will involve two separate rating rounds, with a round of online group discussion and feedback in between. We expect to keep each round open for 7–10 days and for each round to take participants about 1 h to complete. In Round One, participants will use a 9-point Likert scale to rate the importance of assessing each need or goal for deciding the appropriate setting (outpatient, intensive outpatient, residential, or inpatient) for an adolescent’s substance use treatment. A rating of 1 will correspond to “of lower importance” and a rating of 9 to “of higher importance.” Participants will also be able to comment on the rationale underpinning their ratings as well as suggest additional individual needs and treatment goals to rate in Round Three.

In Round Two, participants will see graphed results from Round One—including the median and frequencies for each item—as well as how their own rating compares to group ratings. They will also be shown determinations of the group’s agreement or disagreement on the importance of each item, as made with the inter-percentile range adjusted for symmetry (IPRAS) analysis technique from the RAND/UCLA Appropriateness Method [69]. If an item has disagreement, it is considered to have uncertain importance. If an item has agreement, the tertile in which the median importance rating falls will be used: a median score between 1 and 3 will indicate lower importance, between 4 and 6 moderate importance, and between 7 and 9 higher importance. These determinations will be summarized in user-friendly, color-coded text beneath each graph: green text indicating the group agreed the item has higher importance; yellow text, moderate importance; and red text, lower importance. Participant comments on each item from Round One will also be provided so participants can understand others’ rationales for their ratings. Participants will be asked to discuss these results in an anonymous online discussion forum to explore areas of agreement and disagreement. Rather than signing in simultaneously, participants will post comments at times convenient to their schedules, which is intended to increase engagement in discussion while preventing participant fatigue. Project team members will moderate all panels to promote participant engagement in discussions.
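
The classification logic described above can be summarized in a short R sketch. The agreement/disagreement determination in the study will use the IPRAS rule from the RAND/UCLA Appropriateness Method [69]; in this illustration it is simply passed in as a flag, and the ratings shown are hypothetical.

```r
# Sketch of the median-tertile importance classification described above.
# Agreement vs. disagreement is determined in the study with the IPRAS rule
# [69]; here it is supplied as a flag, and the ratings are hypothetical.
classify_importance <- function(ratings, agreement) {
  if (!agreement) return("uncertain importance")
  med <- median(ratings)
  if (med <= 3) "lower importance"
  else if (med <= 6) "moderate importance"
  else "higher importance"
}

classify_importance(ratings = c(7, 8, 8, 9, 6, 7), agreement = TRUE)
#> [1] "higher importance"
```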

In Round Three, participants will re-rate each item in light of the Round Two discussion of Round One results, as well as rate any new items proposed by participants in Round One. This process allows participants to revise their views and identify the items they deem most important. On the last page of the Round Three questionnaire, participants will be asked to provide input on the ExpertLens process. Anonymity of participant responses will be ensured through the use of generic usernames in the ExpertLens system (e.g., Participant 1, Participant 2). Individual responses will be known only to the ExpertLens moderators.

Upon completion of the Delphi process, a descriptive analysis of participant ratings will characterize the distribution of group responses from each round, estimate changes in group responses between rounds, and determine areas of agreement and disagreement. Those items reaching consensus for high importance (using the IPRAS method) in Round Three will be prioritized for inclusion in the decision rules to be developed and evaluated in later project stages. To interpret these ratings, a thematic analysis will be conducted by systematically coding all comments linked to each of the items, indexing codes into preliminary and inductively identified themes, charting and integrating themes across items, and relabeling the final themes as appropriate [70].

Aim 2. Empirically identify high-quality candidate AIs

Objective

The second aim of this project is to generate hypotheses about candidate AIs for deciding the setting for an adolescent’s substance use treatment using practice-based observational data on the GAIN from the CSAT dataset [64]. The development of candidate AIs will be informed by our stakeholder engagement and will consist of recommendations at two decision points: 0–3 months (stage 1 decision) and 3–6 months (stage 2 decision).

Settings of adolescent substance use treatment

The two stages of treatment (0–3 and 3–6 months) were determined by the GAIN observational study data collection protocol. For decisions about adolescent substance use treatment settings (i.e., residential vs. outpatient vs. intensive outpatient treatment), the 3-month time interval in the CSAT data is highly relevant, as participation in treatment services for at least 90 days is generally regarded as a best practice [6, 8, 71], and most evidence-based prevention/treatment programs for substance use disorders among adolescents are an average of 12 weeks long [10, 16, 72].

At each stage, there are four possible decisions regarding treatment setting: outpatient, intensive outpatient, residential/inpatient, or no treatment. Outpatient settings involve organized services for substance use that do not require an adolescent to be admitted to a residential program or hospital. Intensive outpatient settings provide more structure and support than regular outpatient services. Residential settings involve intensive, structured services for substance use that require an adolescent to be admitted but do not involve hospitalization. Inpatient settings involve intensive, structured services for substance use that require 24-h care in a safe and secure hospital unit. In our dataset, outpatient settings are the most common (see Table 1). In this study, inpatient and residential treatment settings are combined because they are asked about jointly in the GAIN survey.

Table 1 Number of youth in each treatment service

Adolescent participation in a treatment setting during each 90-day period (“stage”) will be assessed using a combination of treatment log data and adolescents’ self-reports of treatment experiences in the 90 days prior to a follow-up visit. Our treatment measures group adolescents into the four treatment settings based on whether or not an adolescent received any amount of treatment in the specific setting during the 90-day period in question. For example, to be in the residential group, an adolescent only needs to have received treatment in a residential setting for at least 1 day during the past 90 days. Adolescents will be coded as receiving “no treatment” if they reported no days of treatment in outpatient, intensive outpatient, or residential/inpatient settings during the past 90-day period. We will also conduct sensitivity analyses assigning adolescents to the treatment setting in which they spent the most days over the 90-day period.
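
As an illustration of the coding rules just described, the following R sketch implements the primary (“any day in a setting”) coding and the sensitivity (“most days”) coding for a single 90-day stage. The days_* arguments are hypothetical variable names, not GAIN item names.

```r
# Sketch of the treatment-setting coding rules for one 90-day stage.
# days_* are hypothetical names for treatment-log / self-report days in each setting.
code_any_setting <- function(days_op, days_iop, days_res) {
  flags <- c(outpatient            = days_op  >= 1,
             intensive_outpatient  = days_iop >= 1,
             residential_inpatient = days_res >= 1)
  c(flags, no_treatment = !any(flags))   # primary coding: any day in a setting counts
}

# Sensitivity analysis: assign the single setting with the most days.
code_modal_setting <- function(days_op, days_iop, days_res) {
  days <- c(outpatient = days_op, intensive_outpatient = days_iop,
            residential_inpatient = days_res)
  if (all(days == 0)) "no treatment" else names(which.max(days))
}

code_any_setting(days_op = 10, days_iop = 0, days_res = 1)
code_modal_setting(days_op = 10, days_iop = 0, days_res = 1)
#> [1] "outpatient"
```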

Candidate tailoring and confounding variables

Overall, the GAIN contains over 1000 items and 100 symptom-count, change-score, and service-utilization indices that could be used to define AIs and to adjust for confounding over time [73]. Moreover, because the GAIN was developed to assist clinicians with patient placement decisions, over 80 items were designed to operationalize the ASAM Criteria [74, 75]. Given the structure of the GAIN, there are hundreds of candidate tailoring variables at baseline that can be used to make the stage 1 services decision. At stage 2, the same baseline variables—as well as changes in these measures from baseline to the end of month 3 and the treatment assigned during the first 3 months—form a large list of possible tailoring variables from which to choose. Consequently, AIs generated from the GAIN can be more individually tailored than the template example provided in Fig. 1.

Outcomes

Outcomes of interest will be drawn from and informed by the set of “positive” measures selected by SAMHSA for the NOMs: i.e., no substance use, no SUD symptoms, no physical health problems, no mental health problems, no illegal activity, no justice system involvement, housed in the community, no family problems, vocationally engaged, and socially supported [76]. Each NOM will be turned into a “positive” binary indicator, and the count of positively endorsed NOMs will be used as our primary outcome variable. We will also carefully examine continuous versions of these outcomes.
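
A minimal R sketch of the primary outcome construction is shown below; the ten indicator names are hypothetical placeholders for the binary “positive” NOM measures listed above.

```r
# Sketch of the primary outcome: the count of positively endorsed NOMs at a
# given follow-up. The indicator names are hypothetical placeholders.
nom_items <- c("no_substance_use", "no_sud_symptoms", "no_physical_problems",
               "no_mental_problems", "no_illegal_activity", "no_justice_involvement",
               "housed_in_community", "no_family_problems", "vocationally_engaged",
               "socially_supported")

nom_count <- function(dat) rowSums(dat[, nom_items])

# Two hypothetical adolescents with random 0/1 endorsements.
set.seed(1)
toy <- as.data.frame(matrix(rbinom(2 * length(nom_items), 1, 0.6), nrow = 2,
                            dimnames = list(NULL, nom_items)))
nom_count(toy)
```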

Analytic plan

A number of state-of-the-art methods will be used to identify candidate AIs using the CSAT dataset as part of our Aim 2 efforts. All of the methods we propose to use are based on the principles of Q-learning (Q-L)—an idea borrowed from computer science that can be seen as an extension of moderated regression analysis to the sequential decision-making setting [18, 61]. Q-L uses a backward induction logic (dynamic programming [77]) that incorporates the effects of future decisions when evaluating present decisions. This ensures that the constructed AIs optimize outcomes over the short- and long-term, rather than selecting, at each stage, the setting that improves outcomes only in the short-term and thereby ignoring any potentially beneficial delayed effects of earlier decisions. In the Q-L regression analyses, the goal will be to find AIs that maximize the mean number of positively endorsed NOMs (primary outcome) and continuous versions of individual NOMs (secondary outcomes). We will carefully compare and contrast the findings from various methods for implementing Q-L with our dataset. Please see Additional file 1 for a technical overview of Q-L.
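
To illustrate the backward-induction logic of Q-L (see Additional file 1 for the full technical overview), the following R sketch fits a simplified two-stage Q-learning analysis to simulated data. The variables, effect sizes, and binary treatment coding are illustrative assumptions; the project’s analyses will involve multiple setting options and many more candidate tailoring variables.

```r
# Minimal two-stage Q-learning sketch on simulated data (not the CSAT data).
# a1 and a2 are binary setting choices coded -1/+1, s2 is a stage-2 tailoring
# variable (e.g., early response), and y is the end-of-stage-2 outcome
# (e.g., count of positively endorsed NOMs).
set.seed(1)
n  <- 2000
a1 <- sample(c(-1, 1), n, replace = TRUE)
s2 <- rnorm(n, mean = 0.3 * a1)                       # stage-2 status depends on a1
a2 <- sample(c(-1, 1), n, replace = TRUE)
y  <- 5 + 0.4 * a1 + 0.5 * s2 + a2 * (0.6 * s2 - 0.1) + rnorm(n)

# Stage 2: regress y on history, a2, and the a2-by-tailoring interaction.
q2 <- lm(y ~ a1 + s2 + a2 + a2:s2)

# Pseudo-outcome: the best achievable fitted stage-2 value for each adolescent.
dat     <- data.frame(a1, s2)
y_tilde <- pmax(predict(q2, newdata = transform(dat, a2 =  1)),
                predict(q2, newdata = transform(dat, a2 = -1)))

# Stage 1: regress the pseudo-outcome on a1, so the initial setting is judged
# assuming optimal subsequent decision-making (backward induction).
q1 <- lm(y_tilde ~ a1)
summary(q1)
```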

Data partitioning and power

We will partition the CSAT dataset (N = 15,656) into two datasets, each of which is representative of all participants in the CSAT dataset. One dataset will be used for estimating AIs (Aim 2; i.e., generating hypotheses about specific candidate AIs that might prove promising for adolescents) and the second for evaluation (see Aim 3 below). The purpose of data partitioning is to avoid “overfitting”: evaluating an identified AI with the same data used to identify it may overstate the AI’s usefulness. Since Aims 2 and 3 will use mutually exclusive, random subsamples of the complete CSAT dataset, power for both analyses is based on a raw sample size of approximately 7828 youth. Although a sample of 7828 youth is large, given the complexity of the AI estimation methods, it is difficult to judge the power of the analysis to identify meaningful tailoring variables or accurately distinguish among alternative treatment settings. Hence, the value of identified AIs will be assessed through out-of-sample evaluation in Aim 3. For Aim 3, assuming (conservatively) that a minimum of 500 youth in the data provide information about the effectiveness of a particular AI, and that weighting (discussed below) reduces this sample to an effective sample size of 250, we will be able to detect differences of at least 13 percentage points for binary outcomes and effect sizes of at least 0.25 for continuous outcomes, assuming a two-tailed hypothesis test and a type-I error rate of 0.05. Rates of missingness on items ranged from less than 1% to 27.1%, with a mean of 13.4% across items and follow-ups. Missingness will be handled explicitly with multiple imputation by chained equations [78], as implemented in the R package mice [79].
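
The sketch below illustrates, on a small simulated stand-in for the analytic file, the random partition into estimation (Aim 2) and evaluation (Aim 3) halves and the use of multiple imputation by chained equations [78] with the mice package [79]. The variable names, number of imputations, and imputation method shown are illustrative choices, not the study’s final specification.

```r
# Sketch of the data partition and chained-equations multiple imputation.
# 'gain' is a small simulated stand-in for the analytic CSAT file (the real
# dataset has 15,656 youth); all names and settings are illustrative.
library(mice)
set.seed(2017)

gain <- data.frame(age                = sample(12:17, 200, replace = TRUE),
                   days_outpatient_03 = rpois(200, 20),
                   noms_count_6mo     = rpois(200, 6))
gain$noms_count_6mo[sample(200, 30)] <- NA              # inject item missingness

# Mutually exclusive, random halves: one to identify AIs (Aim 2), one to
# evaluate them out of sample (Aim 3).
est_ids  <- sample(seq_len(nrow(gain)), size = floor(nrow(gain) / 2))
gain_est <- gain[est_ids, ]
gain_val <- gain[-est_ids, ]

# Multiple imputation by chained equations on the estimation half.
imp_est  <- mice(gain_est, m = 20, method = "pmm", seed = 2017, printFlag = FALSE)
imputed1 <- complete(imp_est, action = 1)               # one completed dataset
```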

Aim 3. Evaluate the relative effectiveness of candidate AIs

Objective

The third aim of this project is to evaluate the relative effectiveness of candidate AIs generated in Aim 2 by independently examining their causal effects on relevant clinical outcomes. In addition to comparing the relative effectiveness of candidate AIs with each other, we will also compare them against the following non-adaptive decision rules that are commonly used in everyday practice: (1) outpatient services for months 0–3 and then no treatment for months 3–6, (2) always outpatient services (i.e., outpatient for all of months 0–6), (3) always intensive outpatient, (4) always residential/inpatient, and (5) no treatment for all months. As with the Aim 2 analyses, the goal will be to identify the AIs that are most effective on the mean number of positively endorsed NOMs (primary outcome) and on continuous versions of individual NOMs (secondary outcomes).

Analytic plan

To compare the relative effectiveness of candidate AIs against each other and against non-adaptive decision rules, we will use marginal structural modeling (MSM): a class of causal longitudinal models for conceptualizing and estimating the causal effects of time-varying treatments [43, 80–84]. MSMs, when used together with inverse-probability-of-treatment (IPT) weighting [80, 83–86], can remove or greatly reduce the bias that time-varying confounders contribute to estimates of the causal effects of the AIs. Estimated IPT weights for each adolescent in the dataset at each decision stage reduce the compositional imbalance in confounders among adolescents who receive different sequences of treatments. We aim to generalize this analytic approach to allow for comparison between multiple AIs [87].
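
The following R sketch illustrates the core MSM/IPT-weighting logic on simulated data with two binary setting decisions: stage-specific treatment models yield stabilized weights, and a weighted outcome regression estimates the joint effect of the two decisions. The variables and models are illustrative assumptions; the project’s implementation will handle multiple setting options, a large set of confounders, and comparisons among full AIs [87].

```r
# Minimal marginal structural model sketch with stabilized IPT weights on
# simulated data (two binary setting decisions; names and effects illustrative).
library(survey)
set.seed(2)

n  <- 5000
x0 <- rnorm(n)                                   # baseline confounder
a1 <- rbinom(n, 1, plogis(0.5 * x0))             # stage-1 setting decision
x1 <- rnorm(n, 0.4 * a1 + 0.3 * x0)              # time-varying confounder, itself affected by a1
a2 <- rbinom(n, 1, plogis(0.7 * x1))             # stage-2 setting decision
y  <- 3 + 0.5 * a1 + 0.8 * a2 + 0.6 * x0 + 0.4 * x1 + rnorm(n)

# Stage-specific treatment models; stabilized weights multiply across stages.
p1   <- fitted(glm(a1 ~ x0,           family = binomial))
p2   <- fitted(glm(a2 ~ x0 + a1 + x1, family = binomial))
num1 <- fitted(glm(a1 ~ 1,            family = binomial))
num2 <- fitted(glm(a2 ~ a1,           family = binomial))

w <- (ifelse(a1 == 1, num1, 1 - num1) / ifelse(a1 == 1, p1, 1 - p1)) *
     (ifelse(a2 == 1, num2, 1 - num2) / ifelse(a2 == 1, p2, 1 - p2))

# Weighted outcome model for the joint causal effect of the two decisions.
des <- svydesign(ids = ~1, weights = ~w, data = data.frame(y, a1, a2, w))
summary(svyglm(y ~ a1 * a2, design = des))
```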

Discussion

This project aims to provide several innovative insights for the field of adolescent substance use treatment. First, despite the obvious need and early successful efforts toward treatment guidelines, little to no research is currently and explicitly devoted to the empirical construction of AIs for guiding the individualized selection of settings for adolescent substance use treatment. The field has recognized the need to move beyond research that informs how to match adolescents with SUDs to initial treatments (i.e., based solely on known baseline characteristics of the adolescent at treatment entry) to research that informs how to adapt and re-adapt subsequent treatment over time to the specific needs of the individual [15, 19, 25]. The overarching scientific goal of this project is to begin the development of such guidance via the explicit development of empirically-based AIs. Second, stakeholder input is a critical supplement to the development of AIs that are both effective and feasible in practice. Existing conceptual models or behavioral theories are by themselves informative yet insufficient for constructing the data analysis models needed to identify effective AIs. To our knowledge, this project will be the first to use a mixed-methods, iterative approach whereby input from providers, policy-makers, researchers, and parents will directly inform the statistical models used to learn about effective and realistic AIs. Lastly, we will extend and illustrate the use of modern statistical methods for constructing and evaluating AIs using practice-based observational data. Methodologists have developed rigorous and effective methods such as Q-L, decision lists, and MSM for constructing and evaluating AIs from observational study data [43, 45, 88]. Despite the power, elegance, and availability of such methods, they are not routinely employed in adolescent substance use research.

We note several important considerations at the outset of this project. First, the robustness of findings from Aim 1 will depend on the stakeholders we are able to recruit for the panel and retain through the final rating round. Second, Aims 2 and 3 rely on data collected in the CSAT dataset, which provides rather coarse information about the timing of treatments received during a particular 90-day window. Because it is impossible to disentangle the order of treatment and outcomes within a given 90-day period in our dataset, finer-grained decision stages cannot be considered. Although it would have been ideal to have more detailed data on the timing of treatment receipt versus outcomes, we will appropriately lag our data to address this issue. In addition, given the data available in the CSAT dataset, we are limited in this study to creating AIs for assignment to different treatment settings and cannot examine the specific clinical interventions (e.g., cognitive behavioral therapy, motivational enhancement therapy) received in those settings. Lastly, because this project uses observational rather than experimental data, results are better viewed as hypothesis-generating than as confirmatory analyses. As such, this project represents an important first step in a process that will itself be adapted over time.

In conclusion, this project will fill a major gap in adolescent addiction science and clinical practice. It expands upon the use of multi-dimensional assessments for substance use treatment planning by creating sequential, individualized decision rules for the setting of adolescent substance use treatment—supported by both stakeholder input and statistical evidence. Results from this project could be used to inform the further development and revision of proposed multi-dimensional assessment and treatment planning tools in the field of substance use, such as CONTINUUM, The ASAM Criteria Decision Engine™, and the GAIN Recommendation and Referral Summary. More immediately, identifying candidate AIs that show promise for more effectively guiding decisions to move adolescents between outpatient, intensive outpatient, residential, and inpatient settings will provide guidance to current practitioners and lay the foundation for subsequent experiments that can test candidate AIs in rigorous clinical trials.

Abbreviations

AI:

adaptive intervention

ASAM:

American Society of Addiction Medicine

CSAT:

Center for Substance Abuse Treatment

IPRAS:

inter-percentile range adjusted for symmetry

IPT:

inverse-probability of treatment weighting

MSM:

marginal structural modeling

NOM:

National Outcome Measure

Q-L:

Q-learning

SAMHSA:

Substance Abuse and Mental Health Services Administration

GAIN:

Global Appraisal of Individual Needs

SUD:

substance use disorder

References

  1. Substance Abuse and Mental Health Services Administration. Results from the 2012 National Survey on Drug Use and Health: Summary of National Findings. HHS Publication No. (SMA) 13-4795. In: NSDUH Series H-46. Rockville: Substance Abuse and Mental Health Services Administration; 2013.

  2. D’Amico EJ, et al. Processes linking adolescent problems to substance-use problems in late young adulthood. J Stud Alcohol. 2005;66(6):766–75.

  3. Hill KG, et al. Early adult outcomes of adolescent binge drinking: person- and variable-centered analyses of binge drinking trajectories. Alcohol Clin Exp Res. 2000;24(6):892–901.

  4. Tucker JS, et al. Predictors of attempted quitting and cessation among young adult smokers. Prev Med. 2005;41(2):554–61.

  5. Eaton DK, et al. Youth risk behavior surveillance —United States, 2011. In: Surveillance Summaries, vol. 61, no. 4; 2012.

  6. National Institute on Drug Abuse. Principles of adolescent substance use disorder treatment: a research-based guide. Rockville: National Institute on Drug Abuse; 2014.

  7. Dennis M, et al. The Cannabis Youth Treatment (CYT) Study: main findings from two randomized trials. J Subst Abuse Treat. 2004;27(3):197–213.

  8. Hser YI, et al. An evaluation of drug treatments for adolescents in 4 US cities. Arch Gen Psychiatry. 2001;58(7):689–95.

  9. Hunter SB, et al. The effectiveness of community-based delivery of an evidence-based treatment for adolescent substance use. J Subst Abuse Treat. 2012;43(2):211–20.

  10. Williams RJ, Chang SY. A comprehensive and comparative review of adolescent substance abuse treatment outcome. Clin Psychol Sci Pract. 2000;7(2):138–66.

  11. Salome HJ, et al. Estimating the client costs of addiction treatment: first findings from the client drug abuse treatment cost analysis program (Client DATCAP). Drug Alcohol Depend. 2003;71(2):195–206.

  12. Substance Abuse and Mental Health Services Administration. Treatment Episode Data Set (TEDS): 2013. Discharges from Substance Abuse Treatment Services. BHSIS Series S-86, HHS Publication No. (SMA) 16-4988. Rockville: Substance Abuse and Mental Health Services Administration; 2016.

  13. Wickizer T, et al. Completion rates of clients discharged from drug and alcohol treatment programs in Washington State. Am J Public Health. 1994;84(2):215–21.

  14. Godley MD, et al. A randomized trial of assertive continuing care and contingency management for adolescents with substance use disorders. J Consult Clin Psychol. 2014;82(1):40–51.

  15. McKay JR. Treating substance use disorders with adaptive continuing care. Washington: American Psychological Association; 2009. p. 277.

  16. Tanner-Smith EE, Wilson SJ, Lipsey MW. The comparative effectiveness of outpatient treatment for adolescent substance abuse: a meta-analysis. J Subst Abuse Treat. 2013;44(2):145–58.

  17. Tevyaw TO, Monti PM. Motivational enhancement and other brief interventions for adolescent substance abuse: foundations, applications and evaluations. Addiction. 2004;99(Suppl 2):63–75.

  18. Nahum-Shani I, et al. A SMART data analysis method for constructing adaptive treatment strategies for substance use disorders. Addiction. 2017;112(5):901–9.

  19. Morgenstern J, McKay JR. Rethinking the paradigms that inform behavioral treatment research for substance use disorders. Addiction. 2007;102(9):1377–89.

  20. Dennis ML, et al. The duration and correlates of addiction and treatment careers. J Subst Abuse Treat. 2005;28(Suppl 1):S51–62.

  21. Hser YI, et al. Drug treatment careers. A conceptual framework and existing research findings. J Subst Abuse Treat. 1997;14(6):543–58.

  22. Grella CE, Joshi V. Gender differences in drug treatment careers among clients in the national Drug Abuse Treatment Outcome Study. Am J Drug Alcohol Abuse. 1999;25(3):385–406.

  23. Almirall D, et al. Introduction to SMART designs for the development of adaptive interventions: with application to weight loss research. Transl Behav Med. 2014;4(3):260–74.

  24. Collins LM, Murphy SA, Bierman KL. A conceptual framework for adaptive preventive interventions. Prev Sci. 2004;5(3):185–96.

  25. Murphy SA, et al. Developing adaptive treatment strategies in substance abuse research. Drug Alcohol Depend. 2007;88(Suppl 2):S24–30.

  26. Kaminer Y. Adaptive treatment for adolescent cannabis use disorders. 2012. https://clinicaltrials.gov/ct2/show/NCT01656707.

  27. Budney AJ, Stanger C. Behavioral treatment of adolescent substance use. 2014. https://clinicaltrials.gov/ct2/show/NCT02063984.

  28. The Methodology Center: Projects Using SMARTs. [cited 2014 Oct 31]. http://methodology.psu.edu/ra/smart/projects.

  29. McKay JR, et al. Effect of patient choice in an adaptive sequential randomization trial of treatment for alcohol and cocaine dependence. J Consult Clin Psychol. 2014;83(6):1021–32.

  30. Petry NM, et al. A randomized trial adapting contingency management targets based on initial abstinence status of cocaine-dependent patients. J Consult Clin Psychol. 2012;80(2):276–85.

  31. Waldron HB, Turner CW. Evidence-based psychosocial treatments for adolescent substance abuse. J Clin Child Adolesc Psychol. 2008;37(1):238–61.

  32. American Society of Addiction Medicine. Crosswalk of the adolescent placement criteria. In: Shulman GD, editor. ASAM PPC-2R patient placement criteria for the treatment of substance-related disorders. Chevy Chase: Lippincott Williams & Wilkins; 2007.

  33. Baker SL, Gastfriend DR. Reliability of multidimensional substance abuse treatment matching. J Addict Dis. 2004;22(sup1):45–60.

  34. Deck D, et al. Impact of patient placement criteria on substance abuse treatment under the oregon health plan. J Addict Dis. 2004;22(sup1):27–44.

  35. Gastfriend DR, et al. New constructs and assessments for relapse and continued use potential in the ASAM patient placement criteria. J Addict Dis. 2004;22(sup1):95–111.

  36. Gastfriend DR, Mee-Lee D. The ASAM patient placement criteria. J Addict Dis. 2004;22(sup1):1–8.

  37. Levine HJ, et al. Determining service variations between and within ASAM levels of care. J Addict Dis. 2004;22(sup1):9–25.

  38. Minkoff K, et al. Development of service intensity criteria and program categories for individuals with co-occurring disorders. J Addict Dis. 2004;22(sup1):113–29.

  39. Sharon E, et al. Predictive validity of the ASAM patient placement criteria for hospital utilization. J Addict Dis. 2004;22(sup1):79–93.

  40. Staines G, et al. Convergent validity of the ASAM patient placement criteria using a standardized computer algorithm. J Addict Dis. 2004;22(sup1):61–77.

  41. Chakraborty B, Moodie E. Statistical methods for dynamic treatment regimes. New York: Springer; 2013.

  42. Kosorok MR, Moodie EE. Adaptive treatment strategies in practice: planning trials and analyzing data for personalized medicine. Philadelphia: Society for Industrial and Applied Mathematics; 2015.

  43. Hernan MA, Brumback B, Robins JM. Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. Epidemiology. 2000;11(5):561–70.

  44. Hernan MA, et al. Comparison of dynamic treatment regimes via inverse probability weighting. Basic Clin Pharmacol Toxicol. 2006;98(3):237–42.

  45. Murphy SA, et al. Marginal mean models for dynamic regimes. J Am Stat Assoc. 2001;96(456):1410–23.

  46. Pearl J. Graphs, causality, and structural equation models. Sociol Methods Res. 1998;27(2):226–84.

  47. Robins JM. A graphical approach to the identification and estimation of causal parameters in mortality studies with sustained exposure periods. J Chronic Dis. 1987;40(Suppl 2):139S–61S.

  48. Robins JM. Correcting for non-compliance in randomized trials using structural nested mean models. Commun Stat Theory Methods. 1994;23(8):2379–412.

  49. Robins JM. Causal inference from complex longitudinal data. In: Berkane M, editor. Latent variable modeling and applications to causality. Lecture Notes in Statistics. New York: Springer; 1997.

  50. Robins JM. Estimating causal effects of time-varying endogenous treatments by G-estimation of structural nested models. In: Berkane M, editor. Latent variable modeling and applications to causality. Lecture notes in statistics. New York: Springer; 1997. p. 69–117.

  51. Almirall D, et al. Subgroups analysis when treatment and moderators are time-varying. Prev Sci. 2013;14(2):169–78.

  52. Almirall D, Ten Have T, Murphy SA. Structural nested mean models for assessing time-varying effect moderation. Biometrics. 2010;66(1):131–9.

  53. Barber JS, Murphy SA, Verbitsky N. Adjusting for time-varying confounding in survival analysis. Sociol Methodol. 2004;34(1):163–92.

  54. Nahum-Shani I, Hekler EB, Spruijt-Metz D. Building health behavior models to guide the development of just-in-time adaptive interventions: a pragmatic framework. Health Psychol. 2015;34(Supplement):1209–19.

  55. Almirall D, et al. Designing a pilot sequential multiple assignment randomized trial for developing an adaptive treatment strategy. Stat Med. 2012;31(17):1887–902.

  56. Almirall D, et al. SMARTer discontinuation trial designs for developing an adaptive treatment strategy. J Child Adolesc Psychopharmacol. 2012;22(5):364–74.

  57. Collins LM, Nahum-Shani I, Almirall D. Optimization of behavioral dynamic treatment regimens based on the sequential, multiple assignment, randomized trial (SMART). Clin Trials. 2014;11(4):426–34.

  58. Lavori PW, Dawson R. A design for testing clinical strategies: biased adaptive within-subject randomization. J R Stat Soc Ser A Stat Soc. 2000;163(1):29–38.

  59. Lavori PW, Dawson R. Dynamic treatment regimes: practical design considerations. Clin Trials. 2004;1(1):9–20.

  60. Murphy SA. An experimental design for the development of adaptive treatment strategies. Stat Med. 2005;24(10):1455–81.

  61. Nahum-Shani I, et al. Q-learning: a data analysis method for constructing adaptive interventions. Psychol Methods. 2012;17(4):478–94.

  62. Thall PF, et al. Adaptive therapy for androgen-independent prostate cancer: a randomized selection trial of four regimens. J Natl Cancer Inst. 2007;99(21):1613–22.

  63. Thall PF, Millikan RE, Sung HG. Evaluating multiple treatment courses in clinical trials. Stat Med. 2000;19(8):1011–28.

  64. Dennis ML. Global Appraisal of Individual Needs (GAIN) Administration guide for the GAIN and related measures (Version 1299). Bloomington: Chestnut Health Systems; 1999.

  65. Hunter SB, et al. Assessing the generalizability of the CSAT-sponsored GAIN dataset: are the CSAT sites representative of adolescent treatment programs in the U.S.? J Subst Abuse Treat. 2014;46(2):238–43.

  66. Khodyakov D, et al. Conducting online expert panels: a feasibility and experimental replicability study. BMC Med Res Methodol. 2011;11:174.

  67. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000;32(4):1008–15.

  68. Substance Abuse and Mental Health Services Administration (SAMHSA). National Outcomes Measures (NOM). Rockville: SAMHSA, U.S. Department of Health and Human Services; 2011.

  69. Fitch K, et al. RAND, UCLA appropriateness method (RAM). Santa Monica: RAND Corporation; 2001.

  70. Bowling A. Research methods in health: investigating health and health services. New York: McGraw-Hill; 2009.

  71. National Institute on Drug Abuse. Principles of drug abuse treatment for criminal justice populations—a research-based guide. Bethesda: National Institutes of Health; 2006.

  72. Winters KC, Botzet AM, Fahnhorst T. Advances in adolescent substance abuse treatment. Curr Psychiatry Rep. 2011;13(5):416–21.

  73. Dennis ML, et al. Comparisons of adolescents and adults by ASAM profile using GAIN data from the Drug Outcome Monitoring Study (DOMS): preliminary data tables. 1999. http://www.chestnut.org/li/posters/asamprof.pdf.

  74. Mee-Lee D, American Society of Addiction Medicine. ASAM patient placement criteria for the treatment of substance-related disorders. Chevy Chase: American Society of Addiction Medicine; 2001.

  75. GAIN Coordinating Center. GAIN-I encyclopedia of supplemental documentation on scales and other calculated variables [Electronic version]. 2011 [cited 2014 Oct 31]. http://www.gaincc.org/_data/files/Psychometrics_and_Publications/Resources/GAIN-I_Scales_and_Variables.xls.

  76. Substance Abuse and Mental Health Services Administration. Data: NOMs 101: National Outcome Measures. [cited 2014 Oct 31]. http://media.samhsa.gov/co-occurring/topics/data/nom.aspx.

  77. Bellman RE. Dynamic programming. Dover books on computer science. Mineola: Dover Publications; 2003.

  78. Van Buuren S. Multiple imputation of discrete and continuous data by fully conditional specification. Stat Methods Med Res. 2007;16(3):219–42.

  79. Van Buuren S, Groothuis-Oudshoorn K. mice: Multivariate imputation by chained equations in R. J Stat Softw. 2011;45(3):1–67.

  80. Hernan MA, Brumback BA, Robins JM. Estimating the causal effect of zidovudine on CD4 count with a marginal structural model for repeated measures. Stat Med. 2002;21(12):1689–709.

  81. Robins JM. A new approach to causal inference in mortality studies with a sustained exposure period: application to the healthy worker survivor effect. Math Model. 1986;7:1393–512.

  82. Robins JM. Addendum to “A new approach to causal inference in mortality studies with sustained exposure periods-application to control of the healthy worker survivor effect.” Comput Math Appl. 1987;14(9–12):923–45.

  83. Robins JM. Association, causation, and marginal structural models. Synthese. 1999;121:151–79.

  84. Robins JM, Hernan MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology. 2000;11(5):550–60.

  85. Hernan MA, Brumback B, Robins JM. Marginal structural models to estimate the joint causal effect of nonrandomized treatments. J Am Stat Assoc. 2001;96(454):440–8.

  86. Robins JM. Marginal structural models versus structural nested models as tools for causal inference. In: Berry MEHD, editor. Statistical models in epidemiology: the environment and clinical trials. New York: Springer; 1999. p. 95–134.

  87. Orellana L, Rotnitzky A, Robins JM. Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, part I: main content. Int J Biostat. 2010;6(2), p. Article 8.

  88. Moodie EE, Chakraborty B, Kramer MS. Q-learning for estimating optimal dynamic treatment rules from observational data. Can J Stat. 2012;40(4):629–45.

Authors’ contributions

SG, DA, SBH, DFM, ERP, RR, and BAG conceptualized the project and obtained funding. SG drafted an initial version of this paper. All authors provided edits and contributed to all sections. All authors read and approved the final manuscript.

Acknowledgements

None.

Competing interests

SG’s spouse is a salaried employee of Eli Lilly and Company, and owns stock. SG has accompanied his spouse on company-sponsored travel. All other authors declare that they have no competing interests.

Availability of data and materials

Not applicable.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Not applicable.

Funding

This project is funded by Grant R01 DA015697-07 from the National Institute on Drug Abuse, “The Causal Effect of Community-Based Treatment for Youths” (PI: Beth Ann Griffin).

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Corresponding author

Correspondence to Beth Ann Griffin.

Additional file

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Grant, S., Agniel, D., Almirall, D. et al. Developing adaptive interventions for adolescent substance use treatment settings: protocol of an observational, mixed-methods project. Addict Sci Clin Pract 12, 35 (2017). https://doi.org/10.1186/s13722-017-0099-4
