How to choose a research design


The 4 main types of research design



The main goal of charity and ministry leaders is to make a difference through their programs. Likewise, the main goal of effective research design should ultimately be to improve the impact of those programs.

Choosing the right type of research design is therefore fundamental to the kind of improvement and information leaders are looking to gain. Whilst there are many specific and tailored designs to choose from, here are four main categories to consider.


1. Exploratory designs – for when you want to explore and understand beneficiary needs

Exploratory designs are all about discovering the needs and opinions of your target beneficiaries. Evidence from this can help you identify what services to provide and the best approaches to providing those services. It can also help you determine what outcomes will be appropriate for you to measure, and the best ways to measure them.

Exploratory designs can include the following:

  • Review of the current literature: What data and information already exists about the people you are serving? What interventions have been successful in the past, with whom, and under what circumstances? This may include demographic, educational, cultural or other information that can be obtained through your agency, collaborating agencies, or public resources (e.g. census data).

  • Collection of new data: This involves a needs assessment: gathering new data through surveys, focus groups or interviews with a cross-section of your target beneficiaries.

Example: A ministry wants to know the needs of their local community. They therefore conduct a needs assessment using a combination of census data and door-to-door interviews. The ministry learns that the local youth may benefit from classes to help them apply for jobs.

Click here to see how we can help you conduct a needs assessment.


2. Descriptive designs – for when you want to understand the output and functioning of your program

A descriptive design helps you understand whether your program is functioning as planned, provides you with feedback about the services you offer, determines whether your program is producing the types of outputs and outcomes you want, and helps clarify program processes, goals and objectives.

Descriptive designs can involve the following:

  • Analysis of existing data or information: This can answer questions such as: are we meeting our output targets, and who participates in program services?

  • Collection of program feedback / experience data: Gathering data about participants’ experiences of the program through open-ended surveys, interviews or focus groups with a randomly or systematically selected group of participants.

  • Preliminary outcome measures: Measuring participant outcomes both before and after the program will allow you to see whether your program is operating as intended.

  • Extended statistical analysis of data collected: Correlation or other types of statistical analysis of data collected from existing sources, surveys, interviews, or preliminary outcome measures can be conducted (usually by a professional evaluator or statistician) to help answer questions about your program’s participants, processes and outcomes.
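To make the idea of correlation analysis concrete, here is a minimal sketch in Python. The attendance and outcome figures are entirely made up for illustration; a real analysis would use your program’s own data and, usually, a statistician’s guidance on interpretation.

```python
# Hypothetical question: do participants who attend more sessions
# report higher outcome scores? (Illustrative data only.)

attendance = [2, 3, 4, 5, 6, 7, 8, 9]           # sessions attended
scores     = [40, 45, 55, 60, 65, 70, 78, 85]   # post-program outcome scores

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(attendance, scores)
print(f"Correlation between attendance and outcomes: r = {r:.2f}")
```

A value of r close to 1 suggests attendance and outcomes rise together; note that correlation alone does not prove the program caused the improvement, which is why impact evaluation designs (below) exist.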

Example: A charity wants to know how well their after-school learning programs are functioning. They conduct an end-of-program feedback survey with both teachers and students to learn what their experiences of the course have been.

Click here to see how we can help you conduct data analysis, or a descriptive evaluation.


3. Impact evaluation designs – for when you want to understand the outcomes and impact of your program

Impact evaluation designs provide evidence of the impact of your programs. These designs focus on revealing a causal or correlational relationship between your services and the changes in the lives of your beneficiaries. Depending on context, time and resource constraints, there are three main types of impact evaluation design.

  • Non-experimental design – designs that involve only your target beneficiary group. These either collect data before and after the program, or gather pre- and post-program data retrospectively, asking participants to reflect back on their situation.

  • Comparison group design – designs that compare outcomes between your target beneficiary group and other, similar groups.

  • Randomised controlled trial design – designs that randomise who receives your services and who doesn’t, and compare the outcomes between these two groups.
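The contrast between the first two designs above can be sketched with some simple arithmetic. The figures below are invented purely for illustration: a non-experimental design looks only at the change within the beneficiary group, while a comparison group design subtracts the change that happened anyway in a similar group that did not receive the program (a "difference in differences").

```python
# Hypothetical pre/post outcome scores (illustrative data only).
program_pre  = [50, 42, 61, 55, 47]   # beneficiary group, before
program_post = [68, 60, 75, 70, 63]   # beneficiary group, after
compare_pre  = [51, 44, 59, 53, 48]   # comparison group, before
compare_post = [55, 47, 63, 56, 52]   # comparison group, after

def mean(xs):
    return sum(xs) / len(xs)

# Non-experimental view: change within the beneficiary group alone.
program_change = mean(program_post) - mean(program_pre)

# Comparison-group view: subtract the change seen in the similar
# group that did not receive the program.
compare_change = mean(compare_post) - mean(compare_pre)
difference_in_differences = program_change - compare_change

print(f"Change in program group:    {program_change:.1f}")
print(f"Change in comparison group: {compare_change:.1f}")
print(f"Estimated program effect:   {difference_in_differences:.1f}")
```

In this made-up example the beneficiary group improves by 16.2 points, but because the comparison group also improves by 3.6 points, the more defensible estimate of the program's effect is the 12.6-point difference between the two.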

Example: A ministry school wants to know the long-term impact they are having upon the lives of their students. However, it isn’t possible to find a control group. They therefore conduct a retrospective pre-test/post-test survey with graduates, asking about their experiences both before and after attending the ministry school.

Click here to see how we can help you conduct an impact evaluation.


4. Case study designs – for when you want an in-depth understanding of how your program causes change

Finally, a case study design allows you to understand the causal pathway between your program and the resulting changes that occur. This enables you to understand not only if your program is making a difference, but crucially how this is happening.

Example: A charity wants to understand exactly why their program is causing specific changes. They therefore conduct a series of case studies, focusing on their success stories. This ‘success mapping’ enables them to understand exactly how they achieved these successes, and to recreate them in the future.

Whilst choosing a research design may seem complicated, it is a vital step in collecting the information you need to grow in impact. Please click here to book a free consultation.






Samuel Verbi

Samuel is the Co-Founder and Director of Evaluation at Eido. Prior to this, he gained four years of professional experience as a monitoring and evaluation freelancer and five years of research experience while completing his bachelor’s and master’s degrees in sociology.
