Clearinghouse for Military Family Readiness, The Pennsylvania State University, 402 Marion Place, University Park, PA 16802, USA
School of Social and Behavioral Health Sciences, Oregon State University, 410 Waldo Hall, Corvallis, OR 97330, USA
Department of Agricultural Economics, Sociology and Education, The Pennsylvania State University, 107 Ferguson Bldg, University Park, PA 16802, USA
Keywords:
Common components analysis
Program evaluation
Common factors
Common elements
Separation from military service
Veteran reintegration
Veteran programs
Common Components Analysis (CCA) distills programs into their key elements by summarizing the results of program evaluations that used randomized control trials and demonstrated effectiveness in improving their intended outcome(s). This area of research has integrated and modified the existing CCA approach to provide a means of evaluating components of programs that lack a solid evidence base, across a variety of target outcomes. This adapted CCA approach (a) captures a variety of similar program characteristics to increase the quality of the comparison within components; (b) identifies components from four primary areas (i.e., content, process, barrier reduction, and sustainability) within specific programming domains (e.g., vocation, social); and (c) proposes future directions to test the extent to which the common components are associated with changes in intended program outcomes (e.g., employment, job retention). The purpose of this paper is to discuss the feasibility of this adapted CCA approach. To illustrate the utility of the technique, researchers applied CCA to two popular employment programs that target successful Veteran reintegration but have limited program evaluation: Hire Heroes USA and Hire Our Heroes. This adapted CCA could be applied in longitudinal research designs to identify all programs participants use and the most promising components of those programs as they relate to changes in outcomes.
In the past two decades, there has been a proliferation in the number of prevention and treatment programs offered in the United States. These programs address a variety of topics, including parenting, employment, substance use, and the prevention of HIV and other sexually transmitted diseases. In the United States, a great number of these active programs are funded by well-intentioned government and philanthropic agencies. For Veterans, more than 400,000 programs are available (Meyer, 2013). Some of these programs are costly to operate, yet most have no evidence of program impact. Organizations, taxpayers, donors, and participants who invest time, energy, or money into a program should have confidence that the program is achieving its desired outcome and, therefore, is worth continued investment. Unfortunately, because most programs are deployed rapidly under time and resource constraints, evaluation is often not a priority.
Program evaluation is critical to ensure programs work as intended and do not produce iatrogenic effects. The highest standard for program evaluation is a randomized control trial (RCT). However, these designs may also have limitations, including threats to external (e.g., the lack of generalizability) and internal validity (e.g., appropriateness of the control group, design contamination). For instance, participants initially assigned to a control group may choose to enroll in a comparable program in their community. The abundance of alternative programs from which participants can choose can interfere with random assignment. This post-randomization cross-contamination across programs makes uncovering significant differences more challenging and may dilute effect sizes. Therefore, a practical solution is needed that uses alternative analytic techniques to statistically replicate the strengths of RCTs (e.g., propensity score matching) and capture and evaluate the variety of programs available to participants.
This paper describes the feasibility of a coding approach to summarize program components and proposes future directions to examine the effectiveness of those components. Specifically, this approach builds on the Distillation and Matching Model (DMM) developed by Chorpita, Becker, and Daleiden (2007), a meta-analysis conducted by Kaminski, Valle, Filene, and Boyle (2008), and the theoretical framework developed by Rotheram-Borus et al. (2009), all of which aim to provide clinicians and communities with effective components for better choices. Chorpita and Daleiden (2009) suggested the Common Components Analysis (CCA) approach, which examines program manuals and empirical articles to distill the most frequently used program elements from programs demonstrated to be effective in RCTs. This traditional approach thus summarizes the similarities across evidence-based interventions. The CCA adaptation recommended here gathers common components from programs for which evaluation data are either weak or nonexistent and provides researchers with information about what potentially works. In the first phase of the adaptation of CCA, the primary goal was to identify common components. To accomplish this goal, a literature review was conducted to extract components from previous CCAs of child therapy, parent training, and HIV prevention programs and to modify the CCA for new outcomes. The utility of this adapted approach was then tested by coding several programs that support Veterans’ transition to civilian life within the employment domain, a programmatic area that could benefit from this type of methodology. A proposed future utilization of this adapted CCA approach is a prospective longitudinal study that includes analysis to determine which common components are related to changes in participant outcomes.
Toward the goal of improving program development and understanding what, if any, program components improve outcomes, this paper summarizes the theoretical and empirical backgrounds of the traditional CCA and then describes the adaptation of this approach, which we applied to assess common components within a number of program areas nominated by Veterans. Finally, future directions for capturing utilized programs and steps to validate this approach are examined.
The idea underlying CCA was first introduced by Rosenzweig (1936) and has since been referred to as common elements (Chorpita et al., 2007), common components (Kaminski et al., 2008), and common factors (Rotheram-Borus et al., 2009). The goal of CCA is to distill programs into their common elements and identify the extent to which these shared elements, versus the entire program, are associated with impacts on the intended outcomes (Chorpita & Daleiden, 2009). This approach facilitates choosing among programs that have been evaluated using RCTs with samples of a specific age, race, and/or gender. In addition, researchers can use the DMM to determine which programs have effective components to improve functioning in a population that has not yet been evaluated.
This manuscript focuses on the more recent work of Chorpita, Daleiden, and Weisz (2005, 2007). Chorpita et al. (2005) use the term common elements to describe the underlying similarities across content in effective clinical programs while controlling for participant characteristics. For scientific accuracy, there needs to be transparency when different programs share the same underlying strategies but use different names for the same characteristics or activities (Chorpita et al., 2005). Within a specific targeted diagnosis (e.g., depression, anxiety), Chorpita et al. (2005) coded for similarities within 49 protocols that demonstrated efficacy in treatment programs targeting children’s anxiety, social phobia, depression, and disruptive behavior. This work resulted in the identification of 26 practice elements. Among empirically validated treatments for children’s anxiety and depression, coding revealed that 70%–75% of evidence-based treatments used similar practice elements. Common practice elements of anxiety treatments include relaxation techniques, tangible rewards, self-monitoring, cognitive-coping, psychoeducation, and exposure. By contrast, children’s depression treatments had four common practice elements: activity scheduling, maintenance, skill building, and social skills training. Each of these elements was included in 50% of the programs. Chorpita et al. (2007) argue that identifying common elements will improve selection of programs with the content practitioners want to deliver based on their judgment and clinical theory when there is insufficient empirical support.
Kaminski et al. (2008) reasoned that understanding which common components produce behavior change and promote more successful outcomes is vital. Therefore, they conducted a meta-analysis to identify common components of parenting programs that seek to promote child adjustment. Their results suggested that there are two domains of active program ingredients: content and delivery. The reviewed parenting programs had common content components, including problem-solving, social skills promotion, and cognitive/academic skills. The meta-analysis revealed that the programs also shared several delivery components, including a curriculum or a manual, modeling, and practicing with the parent’s own child. Indeed, these findings align with other research that suggests that more active engagement with the content may be the most effective educational strategy to improve learning (Prince, 2004). This distinction between content and delivery may be meaningful for programs that deliver the same content, yet employ different delivery strategies.
Rotheram-Borus et al. (2009) proposed an alternative approach to understanding what the most effective programs with RCTs have in common. They referred to these commonalities as common factors. This method differs from its predecessors in two ways. First, the approach does not exclusively focus on the details of the elements that Chorpita argued are important, such as core features, specific messages, and techniques. Instead, Rotheram-Borus et al. (2009) take a broader approach by also assessing the programs’ effectiveness in removing barriers to participation and in using strategies to sustain the targeted behavior change within their specific domain. Second, program delivery practices were conceptualized as important tools to facilitate participation, to maintain program adherence (i.e., use of manuals), and to provide ongoing support. This broader conceptualization of what effective programs have in common involves identifying commonalities across program domains. Therefore, the adapted CCA includes the common factors approach as the framework for coding programs from different domains, which target specific content and barrier reduction components linked to changes in behavior.
The primary feature that differentiates this adapted CCA approach from previous studies is that it does not rely on programs empirically supported by an RCT. To date, common components have been identified for a limited number of programs with RCTs and only for programs in select domains, including child mental health treatments, parenting programs, and HIV prevention programs (Chorpita & Daleiden, 2009; Chorpita et al., 2013; Ingram et al., 2008; Kaminski et al., 2008; Rotheram-Borus et al., 2009). Given the rapid development of programs to meet urgent needs, many existing programs have no evaluation data, let alone rigorous evaluation and publication of the results. Not knowing which programs are effective is a significant limitation. However, the two phases of this adapted CCA approach can address this drawback. The first phase identifies and gathers the common components across programs within a particular domain, which is the primary focus of this article. The second phase transitions from a descriptive summary of common components to an analysis of the association between exposure to programs with shared common components and changes in targeted outcomes. In other words, this second phase examines which components yield changes in the desired outcomes. This proposed future direction suggests using a longitudinal sample and controlling for preexisting differences by applying a stronger analytic technique (i.e., propensity score analysis). This could help provide evidence to funders and encourage further investment in a stronger evaluation. Program developers can apply these findings to determine how well their programs incorporate the components most strongly related to changes in participant outcomes. To ensure this type of evaluation is viable, a description of the process used to develop the coding manual is needed.
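To make the proposed phase-two logic concrete, the toy simulation below sketches one propensity score technique (nearest-neighbor matching on an estimated enrollment propensity). This is a minimal, hypothetical illustration, not the authors' planned analysis; all data, variable names, and the single-covariate model are invented for the example.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def propensity_scores(x, t, lr=0.5, steps=2000):
    """Fit P(enroll = 1 | x) = sigmoid(b0 + b1 * x) by gradient ascent."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, ti in zip(x, t):
            r = ti - sigmoid(b0 + b1 * xi)  # residual for this participant
            g0 += r
            g1 += r * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return [sigmoid(b0 + b1 * xi) for xi in x]

# Simulated data: one preexisting covariate drives both program enrollment
# and the outcome, mimicking self-selection into a program.
x = [random.gauss(0, 1) for _ in range(300)]
enroll = [1 if random.random() < sigmoid(xi) else 0 for xi in x]
outcome = [2.0 * e + 1.0 * xi + random.gauss(0, 0.5) for xi, e in zip(x, enroll)]

scores = propensity_scores(x, enroll)
treated = [i for i, e in enumerate(enroll) if e == 1]
controls = [i for i, e in enumerate(enroll) if e == 0]

# One-to-one nearest-neighbor matching (with replacement) on the score.
effect = sum(
    outcome[i] - outcome[min(controls, key=lambda c: abs(scores[c] - scores[i]))]
    for i in treated
) / len(treated)
print(f"Matched estimate of the program effect: {effect:.2f}")
```

Because the simulated covariate inflates the naive enrolled-versus-not difference, the matched estimate recovers something close to the built-in effect of 2.0, illustrating how matching controls for preexisting differences when random assignment is not possible.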
A coding system for identifying common components was developed by (a) conducting a review of the program evaluation literature to identify key characteristics that would help determine the scientific rigor and quality of a program; (b) organizing the key structure of Rotheram-Borus et al.’s (2009) factors as the major common components themes; (c) incorporating content and process codes from previous common components empirical literature (Chorpita et al., 2013; Kaminski et al., 2008); and (d) establishing the feasibility of the coding system by training a team of four coders to apply a qualitative content analysis approach across a sample of programs that support Veterans and adding common components not mentioned in previous research. Using the qualitative software NVivo 11, independent raters coded programs by indicating whether each component was present or absent. Coders applied all codes to every program; therefore, codes were not mutually exclusive. Reliability among coders was established by checking coding consistency, discussing discrepancies, and coming to agreement on the final codes and definitions of each component. Hereafter, this adaptation is referred to as the common components approach.
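The present/absent coding and the consistency check among coders can be illustrated with a minimal sketch. The component names, codes, and percent-agreement statistic below are hypothetical examples, not the study's actual codes or its full reliability procedure (which also involved discussion and consensus).

```python
# Hypothetical sketch: two coders each mark a program's components as
# present (1) or absent (0); codes are not mutually exclusive.
coder_a = {"resume_writing": 1, "interview_skills": 1, "peer_support": 0, "referrals": 1}
coder_b = {"resume_writing": 1, "interview_skills": 0, "peer_support": 0, "referrals": 1}

def percent_agreement(a, b):
    """Share of components on which two coders assign the same 0/1 code."""
    shared = sorted(set(a) & set(b))
    matches = sum(a[c] == b[c] for c in shared)
    return matches / len(shared)

print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")  # 3 of 4 codes match
```

In practice, disagreements flagged this way (here, `interview_skills`) would be discussed until the coders reached consensus on the final code and its definition.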
Prior to identifying common components, basic background information, or program characteristics, should be collected. Kaminski et al. (2008) stated that program characteristics (e.g., funding, target audience) should be investigated to determine potential reasons for variability across effective programs. Chorpita et al. (2005) suggested that capturing combinations of program characteristics can help researchers determine who benefited differently from similar programs and can help match participants with the best treatment. Even programs with similar components may differ depending on the targeted domain, the characteristics of the target population, and previous program evaluation. Table 1 displays the characteristics of programs for Veterans during their transition from military service to civilian life. This coding system can also be adapted for other programs. To illustrate program characteristics, descriptions of the various characteristics examined are provided below, along with two examples of popular employment programs used by Veterans: Hire Heroes USA and Hire Our Heroes.
The research team added the first two program characteristics in Table 1, which refer to how the program is funded and what type of organization operates the program. Some programs are government-funded and operated. Other programs are developed by individuals or organizations that pass the costs of programming to consumers or acquire donations to help support program operations. These programs may be financially motivated to market their program to consumers (e.g., Botvin LifeSkills Training) or may be philanthropic non-profit ventures that support a special group (e.g., Hire Our Heroes). Donors often examine the fiscal responsibility of these programs. For example, recent news coverage of the Wounded Warrior Project (WWP) increased awareness regarding the need for oversight of financial management practices. The oversight provided by various charity watchdog groups exposed how expenditures of donor revenue (e.g., high administrative salaries, fundraising expenses) may not have aligned with the program’s commitment to support Veterans. The WWP responded to the criticism by offering explanations for the organization’s overall fiscal decisions (Nardizzi & Giordano, 2016); however, the negative press may have long-term adverse impacts on the program despite its continued offerings of innovative ways to assist wounded Veterans in need. Regarding funding, the two example programs, Hire Heroes USA and Hire Our Heroes, both accept donations and are non-profit organizations.
The third question in Table 1 examines the outcome that the program is designed to change. To allow for a variety of programs with varying targeted outcomes, 10 outcomes from Berglass and Harrell’s (2012) model of Veteran well-being were added. Berglass and Harrell focus on the challenges Veterans may face as they transition from military to civilian life. These challenges are associated with four domains (comprising 10 sub-domains): purpose (i.e., employment and education), material needs (i.e., legal, financial, and shelter), health (i.e., mental and physical health), and social/personal relationships (i.e., family, social, and spirituality). This comprehensive model of well-being was selected because it is theoretically and empirically driven and can be used to determine potential program benefits within each domain. Hire Heroes USA and Hire Our Heroes both target the employment domain of well-being.
The research team also included the fourth question in Table 1, which determines where a program falls within the Institute of Medicine (IOM) protractor framework (i.e., health promotion, universal prevention, selective/indicated, treatment, or maintenance) (National Research Council & Institute of Medicine, 2009). The IOM protractor was originally commissioned within a health or behavioral health domain and provides a clear demarcation of a program’s general goals. This characteristic is important because any program included in the analysis may serve very different populations; therefore, this characteristic will help control for variability in programs that target participants with different risk factors. That is, it will allow for comparisons across prevention programs separate from comparisons across treatment programs.
Within the IOM’s framework, it is key to consider risk and protective factors, such as how successfully an individual adapts to change and stressful transitions. This understanding of risk and protective factors will determine an individual’s needs and potential improvement from an intervention (National Research Council & Institute of Medicine, 2009). As the prevention continuum progresses from left to right, increases in individual risk are met with increases in systematic protective programming (Doyle & Peterson, 2005; Hawkins et al., 1992; Hawkins, Lishner, Catalano, & Howard, 1986).
Although the IOM protractor was developed to classify health and behavioral health programs, many targeted outcome domains, not just mental health, can be classified across most sections of the protractor. Knapp (2014) applied similar principles, categorizing prevention programs as universal, selective, and indicated within the domains of health, housing, relationships, and school or work. For example, a universal prevention program may increase the quantity of affordable housing by offering guaranteed home mortgage loans in order to reduce Veteran homelessness; a selective prevention program may provide housing support to Veterans who have mental health issues; and an indicated prevention program may offer recovery-based housing to individuals who are at risk for, but not diagnosed with, serious substance use problems. Treatment programs may be offered to Veterans who are already experiencing substance use problems. Maintenance programs include halfway houses where clients live after completing a treatment program to help them sustain their progress. Comparison within each IOM area will help determine whether the components are similar across prevention or treatment programs.
Therefore, this framework was used to extend the examination of transition and reintegration programs used by Veterans from the IOM’s health focus to other domains of well-being (e.g., purpose, material needs, and social domains). Our example programs, Hire Heroes USA and Hire Our Heroes, are universal prevention programs; they are offered to the general population of Veterans.
Table 1
Question and Response Options to Assess Program Characteristics.
| Characteristic | Question | Response Options |
|---|---|---|
| Funding | 1. How is the program funded? | Government; Donations; Own revenue |
| | 2. Who operates the program? | Government; Non-profit; For-profit; Unclear |
| Target outcome | 3. Target outcome? | Employment; Education; Financial; Legal; Housing; Physical health; Mental health; Family; Social networks; Community |
| Prevention framework | 4. What area of prevention? | Health promotion; Prevention, universal; Prevention, selective/indicated; Treatment; Maintenance/rehabilitation |
| Implementation quality | 5a. Is there a manual? | Yes/No |
| | 5b. Is the program guided by a theory? | Yes/No |
| | 5c. What procedures are used to ensure consistent program implementation? | Goal-oriented efforts; Ongoing facilitator/therapist training; Protocol adherence |
| | 5d. How is program success measured? | Passive assessment; Outcome analysis; Planned evaluation |
| Target audience | 6. Target audience? | Veterans only; Providers; Specific branch; Veterans with a disability; All civilians; Corporations; Women only |
| Size | 7. How many Veterans does the program serve? | ____ since year ____ |
| Geographical reach | 8. What geographic area does the program serve? | Local community; State; Regional; National |
| Duration | 9. What is the duration of the program? | One interaction; Structured number of sessions; Unlimited (variable) |
Question five in Table 1 helps capture program implementation quality with regard to protocol adherence and whether the program is theoretically driven. Rotheram-Borus et al. (2009) suggest that programs are effective if they are guided by a manual and theory, regardless of the specific theory. A program manual delineates both the program content and the implementation process and provides program developers and program evaluators with a means of assessing whether the program was delivered as intended. Thus, two sub-questions were added to determine if the program has (a) a manual to assist in program delivery and (b) a specific theory or logic model. Two additional questions on program implementation quality were adapted from Crowley (2010) to identify the (a) procedures for ensuring consistent program implementation and the (b) method for evaluating the program. The procedures for ensuring consistent program implementation include having clear program goals and expectations, providing facilitators with developer-led training in program curricula, and ensuring facilitators implement the program as intended. Also, the method for evaluating the program includes passive assessment (i.e., anecdotal success stories) and planned outcome evaluation (i.e., use of research designs that use comparison groups, longitudinal assessment, and statistical analysis).
Our example programs, Hire Heroes USA and Hire Our Heroes, do not have manuals, and it is unclear whether there are procedures to ensure the programs are implemented effectively. With regard to evaluation, both programs rely on passive assessment, or anecdotal success stories, as their evidence for success. However, Hire Heroes USA also uses outcome analysis to examine program impact and has documented confirmed hires (e.g., 6320 hires in 2016).
Understanding the extent to which programs are designed for similar or unique populations is an essential part of the CCA approach. Chorpita et al. (2005) and Kaminski et al. (2008) included key participant characteristics in their studies (i.e., age and gender of participants). Kaminski et al. (2008) stated that program characteristics, including characteristics of the individuals that the program serves, allow for investigating the variability across effective programs. As shown in Table 1, question 6, categories for special Veteran populations were identified to determine whether programs were only available to a single Service branch, women, or Veterans with disabilities. The program’s target audience can be used to understand underlying risk factors. Chorpita et al. (2005) suggest these target audience categorizations can help people select a program. Our example programs, Hire Heroes USA and Hire Our Heroes, are for all Veterans regardless of time since separation and also provide specific programming for corporations.
Other program characteristics added include basic structural descriptors of program size (i.e., how many Veterans the program serves), geographic reach (i.e., local, state, regional, or national), and program duration (i.e., one interaction, a specific number of sessions, or unlimited/variable). See Table 1, questions 7, 8, and 9, for a summary of these additional structural descriptors. These program characteristics help determine structural similarities across program components that may then be analyzed with respect to Veteran well-being. The example programs, Hire Heroes USA and Hire Our Heroes, are both offered nationally and are available to a Veteran for an unlimited number of interactions; that is, Veterans can use the program until gainful employment is achieved.
The next set of codes is used to distill programs into their common components and sub-components. Delineated below is the adapted common components approach. This approach uses the four types of components discussed by Rotheram-Borus et al. (2009) as common factors. It also adds details, or sub-components, from Chorpita and Kaminski, along with themes identified in a sample of programs that target Veterans.
Table 2
Content and Process Component Question and Response Options.
| Components | Question | Response Options |
|---|---|---|
| Content | 1. Program content: What skills are taught in the program? | Information; Problem-solving; Coping; Relaxation; Physical activity; Physical ability; Self-monitoring; Interpersonal skills; Other |
| Process | 2. Delivery method: How are the content and skills taught? | Self-paced; Direct instruction; Rehearsal, role-play, practice; Coaching/mentoring; Modeling; Interactive (w/o rrp); Peer support; Outdoor activity; Networking group |
| | 3. Delivery mode: What is the mode of delivery? | Online; Social media; In person, individual; In person, group; Phone; Mobile app |
| Barrier reduction | 4. Program access: Does the program provide access to the program? | Transportation; Child care; Teleconsultation; No |
| | 5. Tangible support: Does the program provide tangible support to help people achieve their goals? | Scholarship for education; Cash; Clothes; Food; Legal advice; Housing; Insurance; Other; No |
| | 6. Reduced barriers: Does the program reduce other barriers? | Reduce stigma; Increase motivation; No |
| Sustainability | 7. How does the program provide ongoing support? | Support groups; Referrals; Helpline; Merchandise; Other; None |

Note. Delivery methods will be captured for each content sub-component. The information content has expanded with the coding of each domain; employment information codes include interviewing techniques, resume writing, translating military to civilian careers, career planning/exploration, and job search techniques. w/o rrp = without rehearsal, role-playing, practice. Response options within each component = sub-components.
As outlined in Table 2, the proposed approach involves identifying four categories of common components. Within each component, there are several identified sub-components that could be common across programs. The common components include (a) content components, which describe what is taught or provided by the program (e.g., skills taught, knowledge/information, problem-solving, coping skills); (b) process components, which comprise the delivery mode (e.g., online, in person individual) and methods (e.g., self-paced, direct instruction, modeling, role-playing, practice); (c) barrier reduction components (e.g., providing monetary or other tangible support, providing transportation, reducing stigma to increase program utilization); and (d) sustainability components (e.g., ongoing social support groups, community support, referrals).
To facilitate understanding of how the common components approach was adapted, we continue our example of two popular programs used by Veterans that have not been evaluated by an RCT: Hire Heroes USA and Hire Our Heroes. The similarities and differences between these programs are highlighted within each section below and are summarized in Table 3.
This component examines the specific skills that an intervention targets to change behavior. Although alternative theories or different rationales may exist, similar core activities may be shared between programs. The specific content taught is coded regardless of the underlying theory. The original content components were drawn from previous studies on CCA. For example, Chorpita included content such as psychoeducation techniques from mental health therapeutic sessions, coping skills, problem-solving, and relaxation techniques. Because of the variation in the types of programs Veterans use, content themes were added during the coding process. Common content codes were added to capture the specific information the program provided (e.g., interviewing skills, resume writing, physical activity, physical ability, improvement of interpersonal skills). See Table 2 for a summary of response options.
The majority of programs and services that support Veterans do not have a strong empirical literature. Therefore, evidence-based programs cannot be used to identify potential common content components for the following sub-domains: employment and education; legal, financial, and shelter; mental and physical health; and family, social, and spiritual (Berglass & Harrell, 2012). The Philanthropy Roundtable (Meyer, 2013) provided a brief list of programs that share the goal of improving Veteran well-being. These programs provided services such as license transfer, online job databases, training, placement, job retention, educational expenses, and academic social support. The identification of common sub-components under content components has been exploratory and iterative in order to clarify definitions and remain informative. As coding has continued, the content components have expanded.
Table 3 provides a summary of the two example employment programs and their similarities and differences in common components. Specifically, both programs provide information about how to find employment: how to write a resume, how to have a successful job interview, how to search for jobs online, and how to translate military experience to civilian jobs; both also provide career planning and exploration content. However, Hire Our Heroes also provides entrepreneurship training, and Hire Heroes USA provides opportunities to network at career fairs.
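Once programs are coded this way, the overlap summarized in Table 3 reduces to simple set operations over each program's content codes. The sketch below uses the content components reported above for the two example programs; the short labels are our own illustrative names for those codes, not the study's official code book.

```python
# Content components coded for each program (labels are illustrative).
hire_heroes_usa = {
    "resume writing", "job interviewing", "online job search",
    "military-to-civilian translation", "career planning/exploration",
    "career fair networking",
}
hire_our_heroes = {
    "resume writing", "job interviewing", "online job search",
    "military-to-civilian translation", "career planning/exploration",
    "entrepreneurship training",
}

common = hire_heroes_usa & hire_our_heroes    # components shared by both programs
only_usa = hire_heroes_usa - hire_our_heroes  # unique to Hire Heroes USA
only_hoh = hire_our_heroes - hire_heroes_usa  # unique to Hire Our Heroes

print(f"Shared: {len(common)} components")
print(f"Unique to Hire Heroes USA: {only_usa}")
print(f"Unique to Hire Our Heroes: {only_hoh}")
```

Across many programs, the same intersection logic yields the frequency of each component, which is the descriptive summary that phase one of the adapted CCA produces.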
Process components include discrete techniques, strategies, or other delivery devices used to engage participants and transfer knowledge or teach skills (Kaminski et al., 2008; Rotheram-Borus et al., 2009). In other words, process components are the delivery method and mode used to share the content components. Kaminski et al.’s (2008) meta-analysis provided the initial variables for the delivery method, that is, how the skills are taught (e.g., modeling, homework, rehearsal, role-playing). Delivery methods that are typically present in parenting programs include skills training, modeling, rehearsal, and practice (Kaminski et al., 2008; Rotheram-Borus et al., 2009). The methods included in this coding scheme align with the basic findings from educational psychology that providing more opportunities for the participant to engage with content will increase knowledge acquisition and retention (Prince, 2004). For example, active engagement, in which the material is practiced and feedback is provided to the Veteran, may be more effective in changing behavior than self-paced learning, such as reading the material, or direct instruction in a classroom setting with little opportunity for feedback.
Although Kaminski et al. (2008) specifically excluded self-paced parenting materials, this coding scheme included self-paced materials as a delivery method in order to differentiate them from active engagement delivery methods. Other delivery method codes, such as coaching/mentoring (Chandler, Kram, & Yip, 2011; Ragins, Cotton, & Miller, 2000; Ragins & Kram, 2007; Steinberg & Foley, 1999) and networking, were also added after a review of the literature. Mentoring and networking components demonstrated positive outcomes when participants were transitioning into a civilian career. In addition, individuals were happier overall with their transition when they received both mentoring and networking than when exposed to other strategies (Baruch & Quick, 2007). Other delivery method codes added by the research team include informal peer support and web-based interactive tools.
The preliminary coding of programs determined the need to capture in-person versus web-based delivery. Therefore, a second sub-domain was added to capture the mode of delivery: online, in person in a group, in person with an individual, or by phone. Some online content may be specific to the target program, while other content may simply be a link to an external website. This type of linkage makes it critical to distinguish content specific to the target program; any content delivered through an external website was annotated only with the code “external link.”
The list of process components may grow as the research team investigates other employment programs and codes across the four domains of well-being. The coded examples in Table 3 have similar delivery modes: Hire Heroes USA and Hire Our Heroes are both delivered online, provide content for self-paced learning, and include web-based interactive job search tools. Both programs also deliver in-person individual services, such as individual mentoring, and both offer in-person group meetings by hosting workshops. For the delivery method, both programs offer direct instruction on how to get a job. Hire Our Heroes offers rehearsal, role-playing, and practice of skills, which includes individual coaching and individual mentoring. Hire Heroes USA offers opportunities for coaching and feedback after job interviews at its career days. Only Hire Our Heroes has online information about volunteering to be a mentor and offers how-to videos for individuals to review as needed.
Table 3
Comparison of Components between Hire Heroes USA and Hire Our Heroes Employment Programs.
| Components | Question | Hire Heroes USA | Hire Our Heroes |
|---|---|---|---|
| Content | 1. What skills are taught in the program? | How to practice a job interview; how to write a resume; how to translate military experience; job board, search engine; how to network at conferences; career planning, exploration | How to practice a job interview; how to write a resume; how to translate military experience; job board, search engine; career planning, exploration; entrepreneurship |
| Process | 2. Delivery method: How are the content and skills taught? | Self-paced; direct instruction; rehearsal, practice the job interview; mentoring/coaching; networking group | Self-paced; direct instruction; mentoring/coaching; volunteer to be a mentor; how-to videos |
| Process | 3. Delivery mode: What is the mode of delivery? | Web-based job searching tools; web pages w/o human interface for career planning, exploration; in-person individual; in-person group (e.g., workshops, career opportunity days); virtual job fairs | Web-based job searching tools; in-person individual; in-person group (e.g., Veteran Strong Transition Workshops & Seminars); virtual job fairs |
| Barrier reduction | 4. Program access: Does the program provide access to the program? 5. Tangible support: Does the program provide tangible support to help people achieve their goals? 6. Reduced barriers: Does the program reduce other barriers? | Free for Veterans; external links to training programs; no | Free for Veterans; external links to training programs, free airfare to attend job interviews; no |
| Sustainability | 7. How does the program provide ongoing support? | Ongoing coaching | Ongoing coaching |
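The descriptive comparison in Table 3 can also be expressed programmatically. The sketch below is a minimal, hypothetical illustration (the component labels are invented for this example and are not the authors' coding manual): each program is coded as a set of (category, component) pairs, and the common versus unique components fall out of simple set operations.

```python
# Hypothetical sketch of the descriptive CCA step: programs coded as sets of
# (category, component) labels. Labels below are illustrative only.
PROGRAMS = {
    "Hire Heroes USA": {
        ("content", "resume writing"),
        ("content", "interview practice"),
        ("process", "networking group"),
        ("barrier_reduction", "free for veterans"),
        ("sustainability", "ongoing coaching"),
    },
    "Hire Our Heroes": {
        ("content", "resume writing"),
        ("content", "interview practice"),
        ("content", "entrepreneurship"),
        ("barrier_reduction", "free for veterans"),
        ("sustainability", "ongoing coaching"),
    },
}

# Components shared by every coded program (the "common components").
common = set.intersection(*PROGRAMS.values())

# Components coded for one program but no other.
unique = {
    name: codes - set.union(*(c for n, c in PROGRAMS.items() if n != name))
    for name, codes in PROGRAMS.items()
}
```

Set intersection across all coded programs yields the common components, while subtracting the union of every other program's codes isolates what is unique to each program; the same structure extends to any number of nominated programs and domains.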
Effective programs not only address important content but also reduce barriers for the participant to access the program (Rotheram-Borus et al., 2009). The barrier reduction component is a category that includes providing access to the program (e.g., transportation, child care, insurance to provide access to health care), tangible support to help people achieve their goals (e.g., scholarship for education, food, licensing assistance), or strategies to reduce environmental barriers (e.g., stigma reduction, increasing motivation to change) (Rotheram-Borus et al., 2009).
Barrier reduction components were coded using three questions that correspond to the three primary barriers (i.e., access, tangible support, intrinsic); Table 2, question 6 addresses access to care, one of the three primary barriers. The barrier reduction components originally proposed by Rotheram-Borus et al. (2009) for programs targeting runaway youth included access to health care, condoms, and shelter.
Understanding barrier reduction to increase Veteran program utilization has been well-studied (Hoge et al., 2004; Maguen & Litz, 2006; Pietrzak et al., 2009; Vogt, 2011; Wright et al., 2009). Many Service members return to communities where Veterans Affairs (VA) health services are a long distance from their homes (Andersen, 1995); therefore, it is key to address logistical barriers to care. In order to address physical distance from a program site, programs have added barrier reduction strategies, such as offering teleconsultation or providing transportation or child care.
The second barrier reduction component, providing tangible support, can also be considered an access issue. However, what distinguishes tangible support from access is the end goal. Tangible supports may include, for example, financial resources that help a person achieve the goal of obtaining an education. Tangible supports do not teach a person how to meet his or her need; instead, they meet individuals’ needs directly. For example, cash assistance for food and housing grants reduce barriers by providing money to overcome hurdles associated with the ability to pay for necessities or self-improvement activities. However, cash assistance does not offer suggestions for improving nutritional intake or developing a budget that could ensure saving for a house or subsequent housing expenses. Veteran benefits often offer tangible supports to obtain a home loan or an education. Many Veteran programs also offer job placement, which is also considered a tangible support since it reduces the barrier related to finding a job.
The last barrier reduction component includes other intrinsic reasons that prevent an individual from improving. Increasing motivation to change was identified by Rotheram-Borus et al. (2009) as one of these components. The research team also added reducing stigma as a barrier reduction technique because research has identified stigma as a specific concern for Veterans. Research with Veterans suggests that reducing negative beliefs about using services, and concerns about what leadership and peers will think, predicts utilization of services (Stotzer, Whealin, & Darden, 2012). Programs that target reduction of these barriers within each domain of well-being may increase program utilization and directly improve overall functioning.
Both example programs are free to the Veteran, thereby providing easy access to the program, and both provide external links to education and training programs. However, only one of the two programs offered additional tangible support: Hire Our Heroes provides free airfare to job interviews for out-of-state jobs. This type of barrier reduction could increase the likelihood of a Veteran gaining employment.
The last common component that effective programs include is sustainability. The sustainability component involves strategies used to sustain knowledge and skills over time (Rotheram-Borus et al., 2009). For instance, research supports the long-term benefits of including a social relationship sub-component in substance abuse programs (e.g., 12-step help groups). Two of the sub-components included in the coding system, offering ongoing peer social support and providing community support (e.g., referrals for additional services), were based on examples from Rotheram-Borus et al. (2009). Kaminski et al. (2008) also identified supplemental services as part of the program components; however, the concept seems to fit better within the adapted CCA sustainability components, and the term “referrals” is used to designate this type of strategy.
Other sustainability components added include merchandise and helplines. Merchandising was included as a component because it may keep participants engaged with a program long after their own goals are achieved, which can sustain fundraising-dependent services and, in turn, help other Veterans. In addition, helplines can foster ongoing relationships between the Veteran and the program if additional challenges arise.
The sustainability component generalizes well to the Veteran population and to other populations. Specific sustainability sub-components that encourage ongoing peer relationships among Veterans may be more effective at keeping the Veteran engaged and improving his or her well-being than programs that fail to provide a social support component. Research suggests that cohesion and social networks are linked to health outcomes (Burrell, Briley, & Fortado, 2003; Costa & Kahn, 2010). Therefore, peer relationships are especially important for Service members and their families who belong to tight-knit communities, such as military units and installations.
Both of the example programs include sustainability components. Both programs provide ongoing coaching to assist Veterans after program completion. Hire Heroes USA also provides an outreach team that offers continued support until the Veteran gets a job or no longer needs assistance.
This paper described the feasibility of an adaptation of CCA and intends to improve researchers’ and program developers’ understanding of what components are being used across programs. In addition, the adaptation proposes an evaluation strategy that would help determine which common components impact the targeted outcome(s). This type of adaptation is needed for programs that have not been rigorously tested. This adapted approach integrates existing common components demonstrated in RCTs and added sub-components derived through qualitative coding of existing programs. Furthermore, due to the lack of rigorous RCTs, additional codes regarding the target population, quality of training, and implementation quality were included.
This adaptation of the CCA approach has many strengths. This new approach (a) builds upon the current literature of content, process, barrier reduction, and sustainability components found to be useful in previous studies across a variety of domains; (b) provides an ecologically valid approach that addresses gaps in existing program evaluation by examining what is actually being utilized (i.e., not the tightly controlled environment of an RCT); and (c) operates with little to no cost to the programs that are included in the analysis. However, describing the utilized programs is only the first step in a CCA. The next step is to validate which common components relate to effective changes in targeted outcomes. Therefore, this approach would need to include a longitudinal study in which a participant could nominate any number of programs he or she utilized within a specific time frame. The ability to nominate more than one program would allow a real-world, ecologically valid summary of the components that can relate to change.
This is significant as it will help identify programs that have similar components within and across domains of well-being, including specific content, process, barrier reduction, and sustainability components. Although the preferred research design of program evaluation is to include a control group, or at least a comparison group, there are limitations in using this design. For example, participants in the control group could choose comparable programs that may have similar goals, content, and delivery modes which could dilute or distort the estimates of program impact. This CCA adaptation does not include a control or comparison group. However, improvements to the longitudinal study can include more rigorous analytic techniques, such as matching participants with equivalent risk factors with participants who did not participate in any programs. The CCA can guide program directors and funders to maximize the impact of programs by focusing the need for more rigorous evaluation efforts on those programs that utilize the components that are most associated with improvements in well-being over time. In addition, this may assist program users (e.g., Veterans) in selecting programs with specific components and accessing the program or tangible support that could help them attain their goals.
The major limitation to understanding what programs work is a lack of program impact evidence. This CCA approach can be based on a systematic review of any number of nominated utilized programs. This review can identify the gaps in services and determine how effective these programs are at assisting Veterans. Within the domain of Veteran mental health, some of the strongest evidence-based strategies are used for the prevention and treatment of Post Traumatic Stress Disorder (PTSD). For example, approximately 94% of Veteran medical centers provide at least one evidence-based therapy (i.e., Prolonged Exposure [PE] or Cognitive Behavioral Therapy [CBT]), and 72% offer both (Wells et al., 2011). The Department of Veterans Affairs has recommended the use of these therapies at all medical centers. However, very few programs in other domains of well-being have as much evidence of impact. Toward the goal of making informed decisions about developing or revising existing programs, there needs to be a systematic investigation of what components are associated with indicators of Veteran well-being over time. In this era of tight budgets and fiscal constraints, knowing what programs promote short- and long-term successful reintegration into civilian life and effective adjustment in Veterans is essential. When budgets and time are severely constrained, one option may be to use specific components of programs.
Although this CCA summarizes what programs have in common, future directions should consider ways to address selection bias (i.e., what predicts program utilization) and examine how effective similar components are at improving well-being. For instance, programs that include many of the components related to behavioral change but lack an RCT may be strong candidates for a rigorous program evaluation effort. This stronger evaluation method can ensure that participants who selected a program have similar risk levels and may be the next step in validating CCA results. In addition, this CCA is an aggregate of similarities across programs and does not attempt a factorial intervention design. Factorial intervention designs explicitly test varying combinations of components in an experiment (Collins, Dziak, Kugler, & Trail, 2014). This type of experimentation is used to understand what is working within a program and what positive directions could be taken in the future.
Authors who intend to conduct an adapted CCA across other domains or populations can use the coding that was developed for Veteran programs. The larger framework of the coding manual can help guide common component identification for other programs. Currently, this coding strategy is being used with home visiting programs to determine what gaps exist in current programming and what methods could be used to address those gaps given other common components literature.
The first step in conducting this CCA is descriptive, as it involves identifying what programs have in common and which components are unique. This area of research is exploratory, and researchers can identify ways to build on the rubric provided here. The next step for the research team is to code a variety of programs that were nominated by Veterans, identify program sub-components for a given domain, and summarize common components across the various domains of well-being (e.g., employment, education, social). Results from these steps are only descriptive and do not necessarily explain which components are most effective in changing targeted outcomes.

Therefore, the next phase will include an analysis of how these components relate to changes in well-being over time. This longitudinal study is necessary to develop a prospective model of what risk and protective factors predict program utilization and which content, process, barrier reduction, and sustainability components are most effective in improving targeted well-being. To ensure equivalent comparison groups, a more rigorous analytic technique will be utilized. For example, a longitudinal study with propensity score matching and recruitment of the sample before separation from the military could strengthen conclusions about the benefits of common components. This planned longitudinal analysis can capture the variety of programs Veterans use across the duration of their transitions. This naturalistic longitudinal study has a strength over an RCT in that conducting the study in a real-world setting increases the generalizability of the findings, while propensity score matching can account for pre-existing risk and protective factors and thereby strengthen the quasi-experimental design.
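As a concrete illustration of the matching step described above, the following sketch uses hypothetical data and a deliberately simple greedy nearest-neighbor algorithm (not the authors' planned analysis): each program participant is paired with the non-participant whose estimated propensity score is closest, without replacement.

```python
# Hypothetical sketch: greedy nearest-neighbor matching on propensity scores,
# assuming the scores have already been estimated (e.g., by regressing program
# participation on pre-separation risk and protective factors).

def nearest_neighbor_match(treated, controls):
    """Pair each participant with the unmatched non-participant whose
    propensity score is closest (matching without replacement).
    `treated` and `controls` map participant id -> propensity score."""
    available = dict(controls)
    pairs = {}
    # Match highest-score participants first; they have the fewest candidates.
    for tid, score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - score))
        pairs[tid] = cid
        del available[cid]
    return pairs

pairs = nearest_neighbor_match(
    treated={"v1": 0.82, "v2": 0.40},
    controls={"c1": 0.79, "c2": 0.45, "c3": 0.10},
)
# pairs == {"v1": "c1", "v2": "c2"}
```

In practice, matching would typically add a caliper (a maximum allowable score distance) and use specialized statistical software rather than this greedy heuristic; the sketch only shows how matched comparison groups can be formed without random assignment.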
In addition, following a similar branching technique utilized by Chorpita et al. (2005) to test if specific program characteristics (i.e., for whom are particular components most effective) moderate the relationship between components and well-being could be informative. This type of information can be valuable to community leaders who decide what programs to offer participants. Also, this approach can be informative to government and privately funded program developers. Identification of common components can help program developers target specific activities that are associated with improved changes in well-being. Finally, unmet needs for Veteran programs may be identified through this process of CCA and could help guide future programming efforts.
This adapted CCA approach could be an innovative strategy for understanding the full scope of programs utilized. Within the area of Veteran programs, a scarcity of program evaluation exists, and no studies were located that summarize the common components of Veteran programs. Therefore, this approach will provide unique information to key stakeholders who are asking for more empirical support for the services they render. Furthermore, once this approach has been validated within this area of programming (i.e., Veteran programs), the analytic approach and coding system can be extended to other populations and used by other stakeholders, such as government and philanthropic agencies, without requiring similarly rigorous coding development and evaluation.
This work was supported by a seed grant provided by the Social Science Research Institute at The Pennsylvania State University. The authors wish to thank our colleagues from The Veteran Metrics Initiative for their support of this work (Bradford Booth, Laurel Copeland, Erin Finley, Cynthia Gilman, Chris Jamieson, Suzanne Lederer, William Skimmyhorn, Jackie Vandermeersch, and Dawne Vogt).
We would also like to recognize the assistance of team members at the Clearinghouse for Military Family Readiness at Penn State in the preparation of this manuscript. We thank Julia A. Bleser, MS, MSPH, Research & Evaluation Associate, and April Gunsallus, Doctoral Candidate, for their assistance with coding Veteran programs, and Elise Dreibelbis and Katie E. Davenport, MA, Research & Evaluation Associate, for editing this manuscript.
Baruch, Y., & Quick, J. C. (2007). Understanding second careers: Lessons from a study of U.S. Navy admirals. Human Resource Management, 46, 471–491. http://dx.doi.org/10.1002/hrm.20178.
Crowley, D. M. (2010). PROSPER community leadership interviews (CAPI). Programming knowledge coding manual. Unpublished manuscript.
Hoge, C. W., Castro, C. A., Messer, S. C., McGurk, D., Cotting, D. I., & Koffman, R. L. (2004). Combat duty in Iraq and Afghanistan, mental health problems, and barriers to care. New England Journal of Medicine, 351, 13–22. http://dx.doi.org/10.1056/NEJMoa040603.
Knapp, P. (2014, May 2). What could prevention encompass? [E-mail from Penny Knapp, M.D., pkknapp@ucdavis.edu].
Nardizzi, S., & Giordano, A. (2016). A further response to Senator Grassley’s questions about Wounded Warrior Project. Retrieved from http://www.thewoundedtruth.com/.
Wright, K. M., Cabrera, O. A., Bliese, P. D., Adler, A. B., Hoge, C. W., & Castro, C. A. (2009). Stigma and barriers to care in soldiers post combat. Psychological Services, 6, 108–116.
Nicole R. Morgan, Ph.D., is a Research and Evaluation Scientist for the Clearinghouse for Military Family Readiness at The Pennsylvania State University. She takes a developmental approach and applies advanced statistical analysis to evaluate programs designed to improve children’s kindergarten readiness skills and parent involvement, and to reduce child maltreatment and post-traumatic stress, along with other topics concerning the well-being of Veterans and their families. Additionally, she served 5 years in the United States Air Force as an Air Traffic Control Radar Specialist and Quality Control Specialist.
Kelly D. Davis is an assistant professor of Human Development and Family Sciences who studies the connection between work, family, and health for working families, particularly military families. She examines daily stress processes and transmission among family members and investigates interventions to improve each family members’ health and well-being, as well as family dynamics.
Cameron B. Richardson is a Research & Evaluation Scientist for the Clearinghouse for Military Family Readiness at the Pennsylvania State University. He received his Ph.D. in Human Development from the University of Maryland in 2011. His research interests include: (1) Moral development across the lifespan; (2) Program evaluation, both process and outcome; and, (3) Positive family and youth development. His current efforts focus on organizational capacity building, with an emphasis on providing organizations with practical, sustainable, and flexible solutions to everyday data management and data analysis challenges.
Daniel F. Perkins is a Professor of Family and Youth Resiliency and Policy at the Pennsylvania State University. He is Director of an applied research center, the Clearinghouse for Military Family Readiness at Penn State (http://www.militaryfamilies.psu.edu/). Dr. Perkins’ scholarship involves the integration of practice and research into three major foci: (1) Positive Youth Development, (2) Healthy Family Development; and (3) Community Collaborations. He has been designing and evaluating family and youth programs for 20 plus years. His current efforts involve transitioning of evidence-based programs to real-world settings. Since 2006, he has supported several prevention and early intervention projects in Ireland.