PHD Discussions

Ask, Learn and Accelerate in your PhD Research


How have you prepared your raw data for analysis? Consider your choices in coding open-ended responses, formatting scales, and labeling variables to ensure a seamless analytical process.

When justifying my sample size for committee review, I want to move beyond generic rules of thumb. My core design question is this: how do you quantitatively define the "minimum meaningful effect" for your research context? Is it a practical difference, a clinical threshold, a theoretical increment? And once defined, how does this specific value directly and mathematically dictate, through power analysis, the number of participants you need to recruit?

All Answers (1 Answer in All)

By Tanya Answered 2 months ago

This is the heart of robust design. I never use arbitrary conventions. First, you must define the "minimum meaningful effect" (MME) through the literature, pilot studies, or stakeholder input: it is the smallest difference that would have practical or theoretical significance in your field. This MME becomes the effect size in an a priori power analysis. I have seen studies fail because they could only detect unrealistically large effects. You input the MME, along with your desired power (e.g., 80%) and alpha level (e.g., .05), into the analysis. The output is your required sample size; it is a direct mathematical function of those three inputs. If you can't recruit that many participants, your study may be underpowered to detect what actually matters.
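To make the "direct mathematical function" concrete, here is a minimal sketch of an a priori power calculation for a two-sided, two-group comparison, using the standard normal approximation rather than any particular statistics package. The function name `required_n` and the example effect size of Cohen's d = 0.5 are illustrative assumptions, not values from the answer above; your own MME would take their place.

```python
from math import ceil
from statistics import NormalDist

def required_n(effect_size, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized effect size (the MME in SD units).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical MME expressed as Cohen's d = 0.5, alpha = .05, power = 80%
print(required_n(0.5))  # 63 per group under the normal approximation
```

Note how the relationship is monotonic: halving the smallest effect you care about roughly quadruples the required sample, which is why defining the MME honestly (rather than optimistically) is the step that determines feasibility. An exact t-based calculation (e.g., in G*Power or statsmodels) will give a slightly larger figure than this approximation.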
