Evaluation methodology

To guide the evaluation, a methodology and evaluation matrix were developed based on the guidance provided by the Treasury Board Policy on Results (2016). The scope of the evaluation focused on relevance (continued need), design and delivery, and performance (effectiveness and efficiency), and included nine evaluation questions, which are listed in Appendix B.

An Evaluation Working Group (EWG), composed of Justice Canada representatives from the Official Languages Directorate and the IAID, provided advice during the design and implementation of the evaluation. The EWG coordinated access to data and provided feedback on evaluation products (i.e., the evaluation questions and indicators, preliminary findings and the draft evaluation report).

The evaluation used four data collection methods: a document and data review, a file review, key informant interviews, and case studies. GBA+ considerations were also included in the design, data collection, and reporting activities. Each of the methods is described below.

3.1 Document and Data Review

The review of relevant documents and data informed all of the evaluation issues and questions. It also offered a common information base for the other lines of evidence. The first step in the document and data review consisted of reviewing program-related documents to help address the evaluation questions related to continued relevance and performance. The types of documents reviewed included the following:

3.2 File Review

A review was conducted of Support Fund files to enable an assessment of some of the basic characteristics of all projects funded during the period covered by the evaluation. To this end, a profile of all targeted projects was prepared, based on the available performance data and information. This process specifically covered four fiscal years (2017-18 to 2020-21).

The relevant data and information (level of funding provided, type of funding, nature of the activities, targeted clientele, etc.) were entered into a spreadsheet, and summary tables were prepared and included in a separate technical report.

3.2.1 Key Informant Interviews

Semi-structured interviews with key informants contributed to an in-depth understanding of the Initiative and enabled the identification of successes, challenges, and potential solutions related to the Initiative. Key informants were also given an opportunity to corroborate, explain, or further elaborate on findings from other data sources, and provided important input into whether outcomes have or have not been achieved, and why.

The evaluation team collaborated with the EWG to identify key informants for this evaluation. In total, 36 interviews were conducted with 43 individuals (some interviews involved more than one participant), representing the following stakeholder groups:

The interviewees addressed issues related to the relevance of the Initiative, the extent to which the Initiative has achieved its expected results, the effectiveness and efficiency of the management structure, and the reporting and accountability processes.

3.3 Case Studies

A total of three case studies were conducted with a view to documenting the results of the Initiative and illustrating the various types of projects funded, along with best practices and lessons learned. Each case study involved a review of the relevant documentation as well as individual or group interviews. Where applicable, individuals interviewed as part of the general key informant interview process described in subsection 3.2.1 were asked additional questions relating specifically to a case study.

In consultation with the EWG, the following themes were covered by the case studies:

For each case study, a technical report was prepared covering the overall findings as they relate to the theme addressed and the contribution of the Initiative. A summary of each case study is included in Appendix C.

3.4 Consideration of Gender-Based Analysis Plus

The federal government’s Policy on Results and its associated Directive on Results set out the expectations related to GBA+ in the context of evaluation studies. First, they confirm that, in establishing their performance measurement strategies, program managers must include, where relevant, a GBA+ lens. They also identify, as a mandatory procedure, that all evaluations be planned to take into account, where relevant, GBA+ considerations.

This evaluation of the Initiative provided an opportunity to explore how a GBA+ lens may strengthen our understanding of the extent to which the program’s reach and benefits include diverse groups of women, men, and non-binary individuals, considering a range of identity factors such as language, regional distribution, race, age, disability, and education.

3.5 Constraints, Limitations and Mitigation

The main constraint on this evaluation was the nature and extent of the performance information collected for each funded project. To mitigate this issue, the evaluation assessed the results that emerged during the evaluation period, but also took into account activities and investments that occurred before the evaluation period whose associated benefits were experienced during it. In addition, while the performance data and information included some gaps, particularly for the case studies, interviews were conducted and other relevant information was integrated into the analysis.

In addition, the uncertainty surrounding the COVID-19 pandemic affected the data collection process. To mitigate this challenge, the period assigned for conducting interviews was extended, and a variety of means (phone, MS Teams, and Zoom) were used to accommodate key informants.