How to Write the Methods Section of a Foster Care Mental Health Research Proposal
A practical breakdown for social work students — from selecting a research design and recruiting your sample to specifying your intervention, operationalizing variables, and addressing threats to internal and external validity.
The Methods section of a research proposal is where most students stall. The literature review felt manageable — find sources, synthesize, cite. The specific aims required some thinking but followed a logical structure. Then the Methods section arrives and suddenly you are expected to describe, in precise detail, exactly how you plan to run a study you have never run before. Every decision has to be justified. Every measurement tool has to be named. Every threat to validity has to be anticipated. This guide walks through the entire Methods section for a foster care mental health study — the specific topic built from the COPES framework discussed below — and explains what to write, why each component matters, and the exact errors that lose points.
The COPES Framework — What It Is and How It Shapes Your Entire Proposal
Before you can write a Methods section, you need a sharply defined research question. Vague questions produce vague methods. The COPES model (Client-Oriented Practical Evidence Search) is a structured tool for building clinical and social work research questions that translate directly into study design. Each component maps onto a specific methodological decision you will make in the Methods section.
Your research question: “Among youths aged 15 to 24 years in foster care, does a six-month foster parent training program, compared to foster placements with no such program, improve mental health symptoms?” Every sub-section of the Methods needs to answer some version of this question operationally. The design answers how you will test it. The sampling answers who will be in the study. The measurement answers how you will know if symptoms improved.
Research consistently documents elevated rates of mental health disorders among foster youth. According to a systematic review published in Pediatrics (Turney & Wildeman, 2016), children in foster care experience mental health problems at rates 2–5 times higher than the general child population, with prevalence of PTSD, depression, and ADHD particularly high in the 15–24 age group. This age group is also at risk of aging out of the system without adequate transition support — making intervention studies in this cohort especially pressing for the field.
Your Background and Significance section (which precedes Methods) should synthesize evidence like this to establish the rationale. Your Methods section then describes how your proposed study will generate new evidence that addresses current gaps.
Choosing and Justifying Your Research Design
The research design is the structural framework of your study. It is the first sub-heading in the Methods section, and it sets up everything else. The design has to match your research question, your population, your timeline, and your ethical constraints. For a foster care mental health study comparing an intervention to a control condition, you are working in quasi-experimental territory — and you need to name that clearly and justify it.
Why Not an RCT?
A randomized controlled trial (RCT) would be the most internally valid design — randomly assigning youth to receive the foster parent training program or not. But in child welfare settings, random assignment raises ethical issues. Can you ethically withhold a potentially beneficial program from vulnerable youth? Agency partners may resist it. IRBs may require additional justification. Most social work research proposals at the graduate level justify a quasi-experimental design on exactly these grounds.
The Quasi-Experimental Pretest-Posttest Design
A pretest-posttest design with a comparison group is the most practical and defensible option. Measure mental health symptoms at baseline (before the training program begins), implement the program for six months, then measure again at post-test. The comparison group — youth in placements without the program — receives the same measurement at the same time points but no structured intervention. This allows you to compare change over time between groups without randomization.
Write this in your proposal in future tense (“This study will use…”) and include one sentence explicitly justifying the design choice: “A quasi-experimental pretest-posttest design with a comparison group will be used because random assignment to condition is ethically and practically unfeasible within the participating child welfare agencies.” That sentence does two things: names the design and addresses the most obvious critique before the reviewer raises it.
Students frequently write “mixed methods design” when they mean quantitative. Mixed methods specifically refers to studies combining quantitative data (surveys, scales, counts) with qualitative data (interviews, observations, focus groups). If your study only uses validated rating scales and no qualitative data collection, it is a quantitative quasi-experimental design. Calling it mixed methods incorrectly signals a misunderstanding of research design terminology and will cost you points.
Setting and Population
The Setting sub-heading describes where the study will take place and why that location is appropriate for your research question. Be specific. “The study will take place in foster care placements” is not enough. Name the type of agency, the geographic region if relevant to your proposal, and why this setting provides access to the target population.
For a foster care study, your setting would be child welfare agencies — public, private nonprofit, or both — that manage foster placements in a defined region. Describe whether the agencies are county-operated or contracted through the state. Note whether the setting includes kinship care (relatives as foster parents) or only non-relative foster homes, because this affects who your comparison group is and how the training program will be delivered.
Agency Type
Public child welfare agencies, private licensed foster care agencies, or county-contracted placement organizations. Name the type and describe their capacity and approximate number of active placements.
Geographic Scope
Specify whether this is a single-county, multi-county, or statewide study. Geographic scope affects external validity — findings from a single urban county may not generalize to rural placements.
Placement Type
Distinguish between non-relative foster homes, kinship care placements, and group homes. Your intervention (foster parent training) applies differently in each context, and the comparison condition may differ by placement type.
Sampling: Inclusion Criteria, Exclusion Criteria, and Sample Size
This is one of the most technically detailed parts of the Methods section — and one of the most commonly underdeveloped. You need three distinct components: who qualifies to be in the study (inclusion criteria), who does not (exclusion criteria), and how many participants you expect to recruit and why (projected sample size with justification).
Inclusion Criteria
List each criterion precisely and tie it to the research question: youth aged 15 to 24 years, currently residing in a foster placement managed by a participating agency, with consent and assent obtainable as described in the recruitment procedures. Exclusion criteria are not simply the mirror image of inclusion criteria; state them separately (for example, youth whose placement is scheduled to end before the six-month post-test and who therefore could not complete follow-up assessment).
Projected Sample Size
You cannot simply write “we plan to recruit 100 participants.” Sample size in a research proposal requires a power analysis — or at minimum, a reference to power analysis conventions for your effect size assumptions. A defensible statement reads: “An a priori power analysis assuming a medium effect size (Cohen’s d = 0.5), a two-tailed alpha of .05, and power of .80 indicates that approximately 64 participants per group (128 total) are required. To allow for an anticipated 20% attrition over the six-month study period, the study will recruit 160 participants (80 per condition).”
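The conventional figures in the sample statement above can be checked with a short calculation. This is an illustrative sketch using the standard normal approximation (Python standard library only); the inputs (d = 0.5, alpha = .05, power = .80, 20% attrition) are conventions, not values fixed by any particular assignment, and the approximation comes out one participant per group below the exact t-based figure of 64.

```python
# Illustrative a priori sample-size calculation (normal approximation).
# Assumed conventions, not values mandated by the proposal: Cohen's d = 0.5,
# two-tailed alpha = .05, power = .80, 20% expected attrition.
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of means."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # about 1.960
    z_beta = z.inv_cdf(power)            # about 0.842
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

per_group = n_per_group(0.5)              # 63; the exact t-based figure is 64
completers = 2 * per_group                # completers needed at post-test
recruit = ceil(completers / (1 - 0.20))   # inflate for 20% attrition
print(per_group, completers, recruit)
```

Whichever tool you use, report the effect size assumption, alpha, power, and the attrition allowance in the proposal itself, with a citation for the effect size (for example, a prior evaluation of the program model).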
Recruitment Procedures and Consent Process
The Recruitment sub-section describes, step by step, how you will identify and enroll participants. It needs to be specific enough that someone could replicate the process. “Participants will be recruited from foster care agencies” is not a recruitment procedure. A recruitment procedure tells you who does what, in what order, using what materials.
Agency Partnership
The research team will partner with two to three child welfare agencies in the study region. Agency directors and case supervisors will be contacted through an introductory letter describing the study. A memorandum of understanding (MOU) will be signed prior to recruitment. Agency caseworkers will serve as the primary point of contact for identifying eligible youth.
Caseworker Identification of Eligible Youth
Caseworkers will review their active caseloads and identify youth meeting the inclusion criteria. They will provide eligible youth and their foster parents with a brief study overview sheet describing the purpose, what participation involves, and how to contact the research team. Caseworkers will not disclose which youth decline — this protects privacy and avoids coercion through the caseworker relationship.
Initial Contact and Information Session
Interested youth and their foster parents will be invited to an information session (in-person or virtual) where a trained research assistant will explain the study in detail, answer questions, and provide the informed consent/assent documents. No consent will be obtained at this session — participants will be given at least 48 hours to review documents before signing.
Consent and Assent Process
Youth aged 18 and over will provide their own informed consent. Youth under 18 will provide written assent, with informed consent provided by their legal guardian or agency representative with guardianship authority. Both documents will be written at a 6th–8th grade reading level and reviewed with participants verbally. Signed consent and assent forms will be stored separately from study data in a locked file.
Baseline Assessment
Upon enrollment, each participant will complete the baseline assessment battery, including the primary mental health symptom measure (described in the Measurement section). Baseline data will be collected before any intervention begins. Participants in both the intervention and comparison groups will complete baseline assessments at the same time point.
Describing the Intervention — What Your Methods Must Include
The Intervention sub-section is where you describe what the experimental group actually receives — the foster parent training program — and what the comparison group receives instead. This section has to be specific. “A foster parent training program will be implemented” tells the reviewer nothing. You need to name the program model, describe its components, specify the duration and format, identify who delivers it, and describe what happens in the comparison condition.
Intervention Group: Foster Parent Training Program
Specify whether you are using an established evidence-based model (e.g., KEEP — Keeping Foster and Kinship Parents Trained and Supported; MTFC — Multidimensional Treatment Foster Care) or a newly developed curriculum. If using an existing model, cite the developers and published evaluations. Describe: number of sessions (e.g., 16 weekly group sessions of 90 minutes each), content per session (e.g., sessions 1–4: understanding trauma and its behavioral effects; sessions 5–8: de-escalation and emotional regulation strategies), delivery format (group-based, individual, hybrid, virtual), and facilitator qualifications (licensed clinical social worker, trained family support specialist).
Comparison Group: Treatment as Usual
Describe exactly what the comparison group receives. “No program” is usually a mischaracterization — most foster parents receive some form of baseline required training from their agencies. “Treatment as usual” (TAU) means participants continue receiving whatever services and support they would normally access. Describe what TAU typically looks like in your study setting: required hours of annual training, monthly caseworker contact, access to agency support services. This matters for interpreting your results — if TAU already includes significant caregiver support, your intervention’s effect size may be smaller than in a truly no-treatment comparison.
Using an established, validated program model instead of a newly designed curriculum significantly strengthens the feasibility and credibility of your proposal. KEEP (Keeping Foster and Kinship Parents Trained and Supported) is a widely studied group-based intervention for foster and kinship parents, with randomized trial evidence supporting reductions in child behavioral problems. For identifying validated programs, consult the California Evidence-Based Clearinghouse for Child Welfare (CEBC) or SAMHSA’s Evidence-Based Practices Resource Center, the successor to the now-retired National Registry of Evidence-Based Programs and Practices (NREPP); reviewers recognize these sources and expect graduate students to use them.
If your professor has not specified a program, naming KEEP or a similar clearinghouse-rated program and citing its published evidence gives your Methods section a level of credibility that a hypothetical “six-session workshop” cannot achieve.
Dependent Variable: Measuring Mental Health Symptoms
Your dependent variable is what you are measuring to determine if the intervention worked. The research question names it: “mental health symptoms.” That phrase alone is not a measurable variable. You need to operationally define it — specify exactly what mental health symptoms you are measuring, how you are measuring them, using what instrument, at what scale, and at what level of measurement.
Mental Health Symptom Severity
The primary dependent variable is the overall severity of mental health symptoms including internalizing (anxiety, depression, PTSD) and externalizing (aggression, conduct) behavioral indicators.
Total Score on the SDQ
Operationally defined as the total difficulties score on the Strengths and Difficulties Questionnaire (SDQ) — a validated 25-item measure with five subscales assessing emotional symptoms, conduct problems, hyperactivity, peer problems, and prosocial behavior.
Self-Report + Caregiver Report
Youth will complete the self-report SDQ (designed for respondents aged 11 and older). Foster parents will complete the parallel caregiver-report version. Both forms will be completed at baseline and at six-month post-test, and discrepancies between informants will be examined in analysis.
Likert-Type Total Score
Each of the 20 difficulties items is scored 0–2 (not true, somewhat true, certainly true); the five prosocial items are excluded from the total. The total difficulties score therefore ranges from 0 to 40, with higher scores indicating greater symptom severity. Published cut-off norms distinguish normal, borderline, and clinical ranges.
Continuous
The total SDQ difficulties score is treated as a continuous variable for analytic purposes, allowing parametric statistical comparisons (independent samples t-test, ANCOVA) between groups at post-test.
Psychometric Evidence
The SDQ has strong psychometric support across multiple countries and populations including foster care samples. Cronbach’s alpha values typically range from 0.73 to 0.88 for total score. Cite the original validation study (Goodman, 1997) and at least one study in a foster care sample.
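The operational definition above can be made concrete with a small scoring sketch. This is a hypothetical helper, not the official SDQ scoring routine: item-level reverse-scoring and the published clinical cut-offs must be taken from the official SDQ scoring instructions.

```python
# Hypothetical sketch of deriving the primary outcome from SDQ subscale
# totals. Not the official scoring routine: item reverse-scoring and the
# clinical cut-offs come from the published SDQ scoring instructions.

PROBLEM_SUBSCALES = ("emotional", "conduct", "hyperactivity", "peer")

def total_difficulties(subscales: dict) -> int:
    """Sum the four problem subscales (each 5 items scored 0-2, so 0-10
    per subscale); the prosocial subscale is tracked but never summed."""
    score = sum(subscales[name] for name in PROBLEM_SUBSCALES)
    if not 0 <= score <= 40:
        raise ValueError("subscale totals out of range")
    return score

baseline = {"emotional": 7, "conduct": 5, "hyperactivity": 6,
            "peer": 4, "prosocial": 8}
print(total_difficulties(baseline))  # 22 of a possible 40
```

Keeping the prosocial subscale out of the total, as the sketch does, is exactly the operational distinction the proposal needs to state explicitly.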
Independent Variable: The Foster Parent Training Program
The independent variable is the cause you are testing — the foster parent training program versus no program. Describe it with the same operational precision you applied to the dependent variable. The level of measurement for your independent variable is dichotomous (categorical with two levels): intervention group or comparison group.
Independent Variable Specification
| Element | Description |
|---|---|
| Variable Name | Foster parent training program participation |
| Operational Definition | Whether the foster parent of the enrolled youth participated in the structured six-month KEEP training program during the study period |
| How Measured | Program attendance records maintained by the facilitating agency; minimum attendance threshold defined as attending at least 12 of 16 sessions |
| Level of Measurement | Dichotomous/categorical: intervention group (1) or comparison group (0) |
| Fidelity Check | Facilitator session logs, random observation of 20% of sessions by a trained research assistant, post-session fidelity checklist |
The fidelity check is something many students omit. It matters because if the training program is not delivered as intended — inconsistent facilitators, shortened sessions, missing content — you cannot attribute outcome differences to the program. Describing your fidelity monitoring procedure shows reviewers that you understand the difference between studying the program as designed and studying whatever happens to occur in its name.
Analytic Procedures
The analysis section describes the statistical tests you will use to answer your research question. Match your analysis to your research design, your variables, and the level of measurement you described.
Primary Analysis: ANCOVA
An analysis of covariance (ANCOVA) comparing post-test SDQ scores between intervention and comparison groups, controlling for baseline SDQ scores. This is the appropriate test for a pretest-posttest design with a comparison group and a continuous outcome variable. Controlling for baseline scores accounts for any pre-existing group differences.
Secondary Analysis: Subscale Comparisons
Separate ANCOVAs for each of the five SDQ subscales (emotional symptoms, conduct problems, hyperactivity, peer problems, prosocial behavior) to determine which symptom domains show the greatest change. Apply Bonferroni correction for multiple comparisons.
Descriptive Statistics
Means, standard deviations, and frequency distributions for all demographic variables and outcome measures at baseline. Comparison of intervention and comparison groups on demographic variables to assess equivalence at baseline using chi-square (categorical) and t-tests (continuous).
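The baseline-equivalence checks described above can be sketched in a few lines with scipy. The group sizes and counts below are simulated purely for illustration, not drawn from any real study.

```python
# Sketch of the baseline-equivalence checks described above, on simulated
# data (group sizes and placement-type counts are invented for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tx_age = rng.integers(15, 25, size=60)   # intervention-group ages
cx_age = rng.integers(15, 25, size=60)   # comparison-group ages

# Continuous demographic (age): independent-samples t-test (Welch's)
t_stat, p_age = stats.ttest_ind(tx_age, cx_age, equal_var=False)

# Categorical demographic (placement type): chi-square test of independence
# rows = group; columns = non-relative foster home vs. kinship care
counts = np.array([[38, 22],
                   [41, 19]])
chi2, p_type, dof, expected = stats.chi2_contingency(counts)

# Nonsignificant differences (p > .05) support baseline equivalence
print(f"age: p = {p_age:.3f}; placement type: p = {p_type:.3f}")
```

If any demographic variable differs significantly at baseline, carry it into the primary model as an additional covariate rather than ignoring it.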
Attrition Analysis
Compare baseline characteristics of participants who complete the study versus those lost to follow-up. If significant differences exist between completers and non-completers, address this in the limitations section and consider intent-to-treat analysis.
State the software you will use (“All statistical analyses will be conducted using IBM SPSS Statistics, Version 29, or R version 4.3”) and the alpha level for significance testing (“Statistical significance will be set at p < .05 for all primary analyses”). These details signal to reviewers that you have thought through analysis at a practical level, not just conceptually.
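In practice the ANCOVA would be run in SPSS or R as stated above, but the underlying logic is worth seeing once: the coefficient on the group indicator, in a regression of post-test scores on group membership and baseline scores, is the baseline-adjusted treatment effect. The numpy sketch below uses simulated data with an invented 3-point symptom reduction, purely for illustration.

```python
# Numpy sketch of the ANCOVA logic on simulated data: regress post-test SDQ
# totals on a group indicator while controlling for baseline totals. The
# simulated 3-point reduction is invented purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 80
group = np.repeat([1.0, 0.0], n // 2)              # 1 = intervention, 0 = comparison
baseline = rng.normal(18, 5, size=n).clip(0, 40)   # baseline SDQ total difficulties
post = (baseline - 3.0 * group + rng.normal(0, 3, size=n)).clip(0, 40)

# Design matrix: intercept, baseline covariate, group indicator
X = np.column_stack([np.ones(n), baseline, group])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)
adjusted_effect = coef[2]   # baseline-adjusted group difference (near -3 here)

print(f"baseline-adjusted group effect: {adjusted_effect:.2f} SDQ points")
```

The same regression run once per subscale, evaluated against the Bonferroni-adjusted alpha of .05/5 = .01, implements the secondary analysis described above.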
Internal Validity — Threats and How to Address Them
Internal validity is the degree to which you can attribute changes in mental health symptoms to the foster parent training program rather than to something else. Your professor’s instructions are explicit: you must discuss threats to internal validity. These are not hypothetical concerns — they are standard critiques that any reviewer will raise, and addressing them in the proposal demonstrates methodological awareness.
Without random assignment, you cannot rule out that differences between the intervention and comparison groups at baseline explain the post-test differences. This is the central weakness of quasi-experimental designs, and your Methods section must address it directly. The ANCOVA approach (controlling for baseline scores) partially addresses it statistically. Describing your baseline equivalence check addresses it procedurally. Acknowledging it honestly in the Limitations section addresses it rhetorically. You need all three.
External Validity and Generalizability
External validity is about who your results apply to beyond your sample. A study with strong internal validity (you can be confident the program caused the improvement) can still have weak external validity (you cannot be confident the same result would occur with different youth, in different agencies, in different regions).
Geographic Generalizability
- Your sample comes from agencies in a specific region — findings may not generalize to rural foster care systems, states with different child welfare structures, or international contexts.
- Urban vs. rural placement dynamics differ significantly in terms of available services, caseworker caseloads, and cultural context of caregiving.
- Acknowledge this explicitly and recommend multi-site replication studies.
Population Generalizability
- Your age range (15–24) is specific. Results for these older adolescents and young adults may not apply to younger children in foster care, who have different developmental needs and symptom profiles.
- If your sample is predominantly one racial or ethnic group, discuss what that means for generalizability to other groups.
- Youth who consented to participate may be systematically different from those who declined.
Program Generalizability
- Results apply to the specific training program model used (e.g., KEEP). A different program with different content, duration, or facilitator qualifications might produce different results.
- The experience and training of the facilitators in your study may exceed what is typical in routine community implementation — a discrepancy known as the efficacy–effectiveness gap.
- Address how program fidelity monitoring in your study may produce results that are difficult to replicate in routine practice.
Temporal Generalizability
- Six months is a relatively short follow-up period. Whether symptom improvements are maintained at 12 or 18 months is unknown.
- Child welfare policy environments change — a study conducted during a period of system expansion may not generalize to a period of budget cuts or policy reversal.
- Recommend follow-up assessment periods in future research directions.
Ethical and Practical Limitations
Your proposal already identifies two practical limitations: data access constraints (incomplete agency records and low consent rates that shrink the usable sample) and consent challenges with minors. But the Methods section needs to go further. Ethical and practical limitations are not just honesty about what might go wrong — they demonstrate to reviewers that you understand the real-world complexity of conducting research with vulnerable youth populations.
Vulnerability of the Population
Foster youth are a legally protected vulnerable population under federal research regulations (45 CFR 46 Subpart D). Your IRB application will require additional justification for involving minors in foster care. The risk-benefit ratio must be explicitly analyzed — potential benefits of participation (access to services, contribution to knowledge) must outweigh risks (privacy breach, distress from mental health assessment). Your Methods must describe how you will monitor for and respond to participant distress during assessment.
Consent Complexity
Determining who has legal authority to consent for minors in foster care varies by state and by case. In some states, the child welfare agency holds guardianship; in others, biological parents retain legal rights even during placement. Your Methods must describe how you will navigate this — and what you will do if consent cannot be obtained from the appropriate legal authority despite the youth’s willingness to participate.
Placement Instability and Attrition
Foster placement changes are frequent, particularly in the 15–24 age group. A youth enrolled in your study may move to a new placement outside the study region, have their foster parent change, or age out of the foster care system during the six-month study period. Describe your protocol for managing each of these scenarios — will you follow youth to new placements? Will you substitute new participants? How will you handle incomplete data?
Confidentiality and Mandated Reporting
Mental health assessment data is sensitive. Your data storage plan (encrypted files, de-identified datasets, locked storage, limited access) must be described. Additionally — and this is frequently overlooked — research assistants collecting data are mandated reporters in most states. If a youth discloses abuse or neglect during data collection, the researcher has a legal reporting obligation that may affect the research relationship. Your Methods must acknowledge this and describe how participants will be informed of this limit to confidentiality prior to consent.
Resource and Feasibility Constraints
Foster parent training programs require trained facilitators, physical or virtual space, and ongoing quality monitoring. Describe the resources required to implement the intervention and how they will be funded or provided in collaboration with the agency partner. A Methods section that proposes a resource-intensive intervention without acknowledging feasibility constraints will be questioned by reviewers who know child welfare agencies operate under significant budget pressure.
The Control Group Ethics Problem — Address It Head-On
One of the most common ethical critiques of this type of study is: is it ethical to withhold a potentially beneficial training program from some foster parents (and by extension, some youth) in order to create a comparison condition? Your Methods section should address this directly. The standard justification is that at the time of the study, the program’s effectiveness has not yet been established for this population — which is precisely why the study is needed. If the program were already proven effective, it would be unethical to withhold it; because it has not yet been established, testing it rigorously is itself the ethical action. This is not a legal technicality — it is a genuine methodological position that you should be able to articulate. For support structuring your Methods section arguments at this level, our research paper writing services and academic writing services provide specialist guidance for graduate social work students.
What the Methods Section Is Really Testing
When a graduate school professor assigns a research proposal, they are not expecting you to design a study and run it. They are testing whether you understand how research design decisions connect to validity, ethics, and the quality of knowledge produced. A Methods section that names a design without justifying it, lists inclusion criteria without explaining their logic, or describes measurement without operationalizing variables has not demonstrated that understanding.
The foster care mental health topic in this proposal is one of the more demanding because it involves a vulnerable population, ethical complexity around randomization, genuine measurement challenges (mental health in a population that has every reason to be guarded with adults), and a six-month timeline that creates real attrition risk. Addressing all of that in 2–3 pages requires precision, not padding.
Write in future tense throughout. Be specific about every number — sample sizes, session counts, instrument items, alpha levels. Cite sources for your effect size assumptions, your measurement tool’s psychometrics, and your program model’s evidence base. And when you reach the Limitations section, do not soften the real weaknesses — name them clearly and describe how you have designed the study to minimize them. That is what scientific rigor looks like on paper.
For students who need structured support developing any section of this proposal — Specific Aims, Background and Significance, or the full Methods section — our research paper writing services, academic writing services, and critical thinking support are available with specialists familiar with social work research methodology and APA format requirements.