About the Author(s)


Lesedi S. Matlala
School of Public Management, Governance and Public Policy (SPMGPP), College of Business & Economics (CBE), University of Johannesburg, Johannesburg, South Africa

Diniko P. Setwaba
The National School of Government (NSG), Pretoria, South Africa

Citation


Matlala, L.S. & Setwaba, D.P., 2025, ‘Use of evaluation evidence in municipal decisions: The case of Tshwane’s indigent programme’, Journal of Local Government Research and Innovation 6(0), a278. https://doi.org/10.4102/jolgri.v6i0.278

Original Research

Use of evaluation evidence in municipal decisions: The case of Tshwane’s indigent programme

Lesedi S. Matlala, Diniko P. Setwaba

Received: 10 Mar. 2025; Accepted: 09 June 2025; Published: 08 Aug. 2025

Copyright: © 2025. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Evidence-based decision-making (EBDM) is a fundamental principle in governance, particularly in social welfare policies where programme effectiveness depends on rigorous data analysis. However, in many municipal contexts, including South Africa, decision-making relies heavily on administrative data and performance monitoring, with limited integration of evaluation evidence into policymaking.

Aim: This study examines the key factors influencing the use of evaluation evidence in decision-making for the indigent programme exit strategy in the City of Tshwane (CoT). It identifies systemic barriers to evidence use and explores how political dynamics, institutional structures and capacity constraints affect the uptake of evaluation findings in municipal governance.

Methods: Using a qualitative case study approach, data were collected through semi-structured interviews, document analysis and focus group discussions with municipal officials, political stakeholders and programme beneficiaries. This triangulated methodology provides a comprehensive understanding of how evaluation evidence, or the lack thereof, influences local policymaking.

Results: The study reveals that the absence of a structured evaluation culture, political interference and weak institutional frameworks limit the use of evaluation evidence. Municipal decision-makers often prefer performance monitoring over impact evaluations, resulting in policy inertia. Political actors selectively use evidence that aligns with electoral interests, while weak knowledge systems and limited technical capacity hinder the accessibility and application of evaluation findings.

Conclusion: To enhance programme effectiveness, municipalities must institutionalise evaluation, improve political commitment to evidence use, and develop robust knowledge management systems.

Contribution: This study advances understanding of the barriers to evidence-informed decision-making in South African municipalities and offers practical recommendations for reform.

Keywords: evidence-based decision-making; evaluation; evidence; indigent programme; social welfare; local governance; political influence; municipal decision-making.

Introduction

Background and rationale

Evidence-based decision-making (EBDM) has become a central principle in governance, ensuring that policies and programmes are informed by rigorous research and empirical analysis rather than anecdotal evidence or political considerations (Barends, Rousseau & Briner 2014; Cairney 2016; Weiss 1979). The use of evidence in decision-making is particularly critical in social welfare policies, where the effectiveness of interventions directly impacts vulnerable populations (Heinrich 2007; Nutley, Walter & Davies 2009). As a subset of policy-relevant data, evaluation evidence provides systematic insights into programme effectiveness, efficiency and sustainability, allowing policymakers to make informed adjustments that enhance service delivery and resource allocation (Boaz & Ashby 2003; Stewart et al. 2019). The importance of integrating evaluation evidence into governance is widely acknowledged in both academic literature and policy practice, with scholars emphasising that policy decisions should be guided by robust empirical research rather than institutional inertia or political expediency (Briner, Denyer & Rousseau 2009; Goldman & Pabari 2021). In this context, evaluations are essential for identifying gaps in programme implementation, assessing impact and ensuring accountability (Patton 2002; Reddy 2016). However, despite the theoretical emphasis on EBDM, the practical integration of evaluation findings into policymaking remains uneven, particularly at the municipal level, where administrative challenges, political influence and capacity constraints often impede the uptake of evidence (Arthur et al. 2023; Browne, Buke & Maruna 2017).

In municipal governance, particularly in developing contexts, the reliance on administrative data and performance monitoring frequently overshadows the use of evaluation findings (Cairney 2017; Matlala 2025; Porter & Goldman 2013). Municipal decision-makers often prioritise readily available performance reports and compliance-driven metrics, which focus on service delivery outputs rather than the broader impact of interventions (Bright et al. 2019; Stewart 2018). This preference for short-term indicators is further reinforced by bureaucratic structures emphasising immediate deliverables over long-term policy effectiveness (Arthur et al. 2023; Cassell, Cunliffe & Grandy 2018). As a result, evaluation evidence, which provides deeper insights into programme sustainability and effectiveness, is frequently underutilised in municipal decision-making (Boaz et al. 2003; Nutley et al. 2009). Political dynamics also play a crucial role in shaping how evidence is integrated into policy processes (Berman 2016; Boswell 2009). Political actors, including municipal councillors and executive committees, may selectively use evaluation findings to support pre-existing policy positions or electoral strategies while disregarding evidence that contradicts political priorities (Goldman & Pabari 2021; Matlala 2025; Weiss 1979). This selective engagement with evaluation findings weakens the credibility of evidence-based policy processes and contributes to the persistence of ineffective interventions (Reddy 2016; Tissington 2013). Furthermore, weak institutional mechanisms for knowledge management and limited technical expertise in conducting and interpreting evaluations constrain the systematic use of evaluation findings (Bryman 2012; Creswell & Creswell 2018). These challenges are particularly evident in social welfare programmes, where the sustainability of interventions often depends on continuous learning and adaptation based on empirical evidence (Patton 2002; Stewart et al. 2019).

Social welfare programmes are designed to support individuals and households experiencing economic hardship, ensuring access to basic services and opportunities for social and economic advancement (Arthur et al. 2023; Mkandawire 2005). These interventions typically include income support, food assistance, housing subsidies and access to health care and education, aiming to reduce poverty and promote social inclusion (Fuo 2020; Heinrich 2007). In the South African context, indigent programmes play a crucial role in addressing poverty by providing free or subsidised access to municipal services such as water, electricity and sanitation to households that meet specific income criteria (City of Tshwane [CoT] 2012; Leburu 2017). However, the long-term sustainability of these programmes depends on the ability of municipalities to implement effective exit strategies that enable beneficiaries to transition out of dependency and achieve financial independence (Managa 2012; Tissington 2013). Exit strategies in social welfare are intended to create pathways for economic self-sufficiency through employment opportunities, skills development and other support mechanisms (Pillay 2010; Stewart et al. 2019). The success of these strategies relies on a strong evidence base that can inform policy adjustments and ensure that interventions are tailored to the evolving needs of beneficiaries (Goldman & Pabari 2021; Porter & Goldman 2013). Despite the importance of evaluation in refining and improving exit strategies, municipalities often struggle to systematically integrate evaluation findings into their decision-making processes, leading to inefficiencies and prolonged dependence on social welfare services (Leburu 2017; Mashego 2015).

Integrating evaluation evidence into the decision-making process for indigent programme exit strategies remains an underexplored area in academic research and policy discourse (Heinrich 2007; Stewart et al. 2019). While numerous studies have examined the effectiveness of indigent policies, the role of evaluation in shaping exit strategies has received limited attention (Arthur et al. 2023; CoT 2015). In the South African context, where indigent policies are critical for providing essential services to low-income households, developing and implementing sustainable exit strategies are vital for ensuring the long-term success of these interventions (Mostert 2023; Ruiters 2018). However, municipalities often struggle to incorporate evaluation findings into these strategies, leading to inefficiencies and prolonged dependence on social welfare services (Managa 2012; Nutley et al. 2009). Despite the existence of national policy frameworks that promote the use of evidence in governance, the extent to which evaluation findings inform municipal decision-making remains inconsistent (Goldman & Pabari 2021; Stewart et al. 2019). There is a clear gap in understanding the factors that enable or hinder the use of evaluation evidence in shaping exit strategies for indigent programmes, particularly at the local government level, where resource constraints and political pressures often dictate policy decisions (Fuo 2020; Pabari & Porter 2013). This gap necessitates a focused examination of the institutional, political and technical factors influencing evidence use in municipal indigent programme management (Arthur et al. 2023; Heinrich 2007).

Although evaluation frameworks and national policies emphasise the importance of EBDM, municipalities continue to rely predominantly on administrative and performance data, with minimal engagement with evaluation evidence (Boaz et al. 2003; Browne et al. 2017). This reliance on short-term performance indicators rather than long-term impact assessments limits the effectiveness of social welfare programmes and reduces their capacity for sustainable poverty alleviation (Porter & Goldman 2013; Mostert 2023). The absence of institutional mechanisms to systematically integrate evaluation findings into municipal governance further exacerbates these challenges, making it difficult for policymakers to adopt evidence-based solutions to indigent programme management (Cairney 2017; Stewart et al. 2019). The study identifies factors that influence the use of evaluation evidence in municipal decision-making, focusing on the indigent programme exit strategy in the CoT. By exploring the institutional, political and technical constraints affecting the uptake of evaluation findings, the study seeks to provide insights into the broader challenges of evidence use in local governance (Heinrich 2007; Stewart et al. 2019).

The analysis is structured as follows. Firstly, the study reviews the existing literature on the use of evidence in social welfare policies, highlighting key debates on the role of evaluation in decision-making. Secondly, it discusses the theoretical framework, drawing on systems theory and the bounded rationality model to explain the complexities of evidence integration in municipal governance. The research methodology is then outlined, detailing the qualitative case study approach adopted for data collection and analysis. The findings section presents empirical insights into how evaluation evidence is used – or neglected – in the decision-making process for the indigent programme exit strategy in Tshwane. The discussion situates these findings within the broader policy context, examining the implications for evidence-based governance. Finally, the conclusion summarises the key findings and offers recommendations for enhancing the integration of evaluation evidence into municipal decision-making.

Literature review: Evidence use in social welfare policies

The use of evidence in social welfare policies is fundamental to ensuring that interventions are effective, sustainable and responsive to the needs of vulnerable populations. Evidence-based policymaking (EBPM) is grounded in the principle that decisions should be informed by robust empirical data rather than political agendas or anecdotal experiences (Nutley et al. 2009). Evidence used in social welfare spans various sources, including administrative records, programme monitoring data and formal evaluation studies. Each evidence type serves different functions in the policymaking cycle, from problem identification to policy formulation, implementation and impact assessment (Patel 2015). In well-developed welfare states such as those in Scandinavia, Canada and the United Kingdom, governments have institutionalised the use of evidence through dedicated research institutions and policy advisory bodies (Pawson 2006). These countries systematically integrate evaluations into welfare programme adjustments, ensuring that policies are driven by data on poverty alleviation, social protection and employment creation (Heinrich 2007). However, in developing countries like South Africa, the use of evaluation evidence in social welfare remains limited, with decision-makers often prioritising short-term political gains over long-term impact assessments (Goldman & Pabari 2021). The reliance on administrative records and routine monitoring data, rather than independent evaluations, results in poor programme adjustments, inefficient targeting of beneficiaries and prolonged social dependency cycles (Stewart et al. 2019). This section unpacks the different types of evidence used in social welfare policymaking, highlighting where, how and why evidence is applied in different governance contexts.

The first category of evidence used in social welfare policy is monitoring data, which provides real-time insights into programme implementation. Monitoring data tracks service delivery outputs, budget allocations and beneficiary enrolment trends, allowing policymakers to assess whether a programme is being implemented as planned (Browne et al. 2017). This type of evidence is commonly used in cash transfer programmes, food security interventions and housing subsidies, where governments need to track the distribution of resources and reach of beneficiaries (Patel 2015). In South Africa, for instance, the Social Relief of Distress (SRD) grant relies heavily on monitoring data to ensure that cash payments reach intended beneficiaries on time (World Bank 2018). However, monitoring data alone does not provide insights into whether programmes achieve their intended impact or address structural poverty in the long term (Heinrich 2007). Governments that rely solely on performance tracking and administrative compliance metrics may overlook critical issues related to programme efficiency, unintended consequences and the need for policy adaptation (Nutley et al. 2009). In Brazil, for example, the Bolsa Família Program initially emphasised coverage and disbursement rates as key indicators of success. However, subsequent impact evaluations revealed that cash transfers alone were insufficient to break intergenerational cycles of poverty without complementary education and health interventions (Fiszbein & Schady 2009). This led to policy refinements incorporating conditionalities on school attendance and health care visits, demonstrating how impact evaluations must complement monitoring data to guide effective decision-making (Boaz et al. 2019).

The second category of evidence in social welfare policymaking is administrative records, which include data on beneficiary demographics, service utilisation and programme costs (Stewart et al. 2019). Administrative data are widely used to inform eligibility criteria, track budgetary allocations and assess programme efficiency in welfare states (Patel 2015). In Canada, the government uses integrated administrative data systems to evaluate employment insurance schemes and social housing programmes, allowing policymakers to detect inefficiencies and adjust targeting mechanisms (Goldman & Pabari 2021). In South Africa, administrative records play a central role in programmes such as the Child Support Grant (CSG) and the Expanded Public Works Programme (EPWP), where beneficiary databases help determine who qualifies for assistance (Reddy 2016). However, a critical limitation of administrative records is that they often capture only surface-level trends, lacking deeper insights into programme effectiveness and beneficiary outcomes (Weiss 1979). For example, while South Africa’s Indigent Policy provides free basic services to millions of low-income households, reliance on administrative data alone has led to poor exit strategies, as policymakers fail to assess whether beneficiaries achieve economic independence over time (Mostert 2023). This highlights the need to integrate qualitative evidence and long-term impact assessments to ensure that programmes do not perpetuate dependency cycles (Cairney 2016).

Formal programme evaluations represent the third and most rigorous category of evidence in social welfare policy, as they assess causal relationships between interventions and social outcomes (Heinrich 2007). Unlike monitoring data and administrative records, evaluations employ experimental, quasi-experimental and qualitative research designs to determine whether policies achieve their intended impact (Nutley et al. 2009). In Mexico, the Progresa/Oportunidades conditional cash transfer programme was continuously evaluated using randomised controlled trials (RCTs), leading to policy refinements that improved school attendance and maternal health outcomes (Fiszbein & Schady 2009). Similarly, in Rwanda, the Vision 2020 Umurenge Programme (VUP) used a combination of longitudinal surveys and household impact assessments to guide policy adjustments, ensuring that cash transfers were effectively linked to income-generating activities (Stewart et al. 2019). These examples highlight how evaluations enable governments to identify policy gaps, refine implementation strategies and allocate resources more effectively (Boaz et al. 2019). In contrast, South Africa has struggled to institutionalise evaluation findings into social welfare policymaking, despite establishing the National Evaluation System (NES) (Goldman & Pabari 2021). Municipalities, in particular, rely on short-term compliance reports rather than independent evaluations, resulting in welfare policies that are static rather than adaptive (Reddy 2016). This lack of engagement with impact evaluations weakens policy learning and contributes to repeated inefficiencies in programme design and execution (Weiss 1979).

Beyond technical and institutional barriers, political dynamics significantly influence how evidence is used in social welfare policy. Political actors may manipulate, selectively apply or ignore evaluation findings based on electoral incentives rather than empirical insights (Cairney 2016). In many low- and middle-income countries, social welfare programmes serve political functions by reinforcing patronage networks, securing voter loyalty and shaping government legitimacy (Boswell 2009). In South Africa, for example, research has shown that municipal welfare programmes, such as the Indigent Policy, are often extended indefinitely because of political pressures, despite evaluations suggesting the need for graduation strategies to promote economic self-reliance (Mostert 2023). Similarly, in India’s rural employment guarantee scheme (MGNREGA), studies have revealed that local politicians manipulate beneficiary lists and programme implementation to gain electoral support rather than improve long-term economic opportunities (Heinrich 2007). These cases illustrate how evidence use in social welfare policy is often shaped by political considerations, which can either enhance or undermine the effectiveness of social interventions (Stewart et al. 2019). Overcoming these challenges requires strong institutional safeguards, independent evaluation mechanisms and greater civil society engagement in policy oversight (Boaz et al. 2019).

Despite the challenges of evidence use in social welfare policymaking, some governments have successfully integrated evaluation findings into policy adjustments, improving social outcomes. In Brazil, the government’s shift from universal food subsidies to targeted cash transfers was informed by a series of evaluation studies demonstrating that direct financial assistance was more effective in reducing poverty (Fiszbein & Schady 2009). Similarly, in Rwanda, sustained investment in evaluation capacity and knowledge systems has allowed the government to refine poverty alleviation strategies based on real-time impact assessments (Goldman & Pabari 2021). The United Kingdom’s What Works Network, which synthesises policy-relevant research across multiple sectors, has significantly influenced welfare reform by providing timely, accessible and high-quality evidence to policymakers (Nutley et al. 2009). These cases underscore the fact that when governments commit to institutionalising EBDM, welfare programmes become more adaptive, cost-effective and impactful (Stewart et al. 2019). However, achieving this requires not only technical capacity and institutional infrastructure but also the political will to prioritise long-term policy effectiveness over short-term electoral gains (Cairney 2016).

Theoretical framework

The theoretical framework guiding this study is grounded in two complementary models, namely systems theory and the bounded rationality model, both of which provide a conceptual lens for understanding the complex interplay between evidence and decision-making in social welfare policy. These models help explain why, despite the increasing emphasis on evidence-based policymaking, decision-makers in municipal governance often struggle to integrate evaluation findings into policy processes. Systems theory provides a structural analysis of how various actors within the policymaking ecosystem interact, influencing evidence flow and uptake. The bounded rationality model, by contrast, focuses on the constraints that limit policymakers' ability to engage with evidence effectively, emphasising cognitive, institutional and political factors that shape decision-making. Together, these models illustrate why social welfare policies, particularly in municipal governance, often remain static despite empirical findings suggesting necessary adjustments.

Systems theory conceptualises policymaking as a dynamic and interconnected system in which multiple stakeholders – including government agencies, research institutions, civil society organisations and political actors – continuously interact within an evidence ecosystem (Meadows 2008). The governance process is not linear but cyclical, where evidence enters the system as an input, undergoes a policy deliberation process and results in policy outputs, which are then influenced by external environmental factors such as political pressures, economic constraints and public demand (Stewart et al. 2019). As represented in Figure 1, the policy process involves continuous feedback loops, where new evidence should ideally shape subsequent iterations of policymaking. However, in practice, the system is often fragmented, with actors operating in silos, leading to inefficiencies in evidence transmission (Cairney 2016). In well-functioning systems, evidence is systematically collected, synthesised and translated into actionable policy recommendations, ensuring that welfare interventions are continuously refined based on empirical findings. In contrast, in many developing countries, including South Africa, municipal governments lack the institutional mechanisms to facilitate this iterative process, leading to an overreliance on administrative records rather than impact evaluations (Goldman & Pabari 2021). Without an integrated system of knowledge-sharing, policymakers often fail to utilise evaluations in decision-making, resulting in the persistence of ineffective social welfare programmes.

FIGURE 1: The systems model of evidence-based policymaking.

While systems theory explains the structural and institutional dynamics of evidence use in policymaking, the bounded rationality model highlights the cognitive and decision-making constraints that prevent policymakers from fully utilising available evidence (Simon 1957). This model challenges the assumption that policymakers operate as fully rational actors, arguing that decisions are made within the constraints of limited information, time pressures and competing political interests (Cairney 2016). Policymakers often do not have the capacity to engage in exhaustive information processing; instead, they rely on heuristics, simplified data points and politically convenient narratives when making decisions (Weiss 1979). Figure 2 illustrates this bounded decision-making process, which progresses through sequential steps – from problem identification to policy implementation – but faces interruptions at multiple stages because of political constraints, bureaucratic inefficiencies and limited institutional capacity. Unlike the idealised rational decision-making model, which assumes that all relevant information is systematically considered before arriving at an optimal policy choice, bounded rationality suggests that policymakers tend to satisfice, meaning they settle for decisions that are ‘good enough’ rather than optimal, often because of incomplete information or pressure to act quickly (Simon 1957).

FIGURE 2: The bounded rationality decision-making process.

The constraints imposed by bounded rationality manifest in multiple ways within social welfare policymaking. Political pressures frequently distort the decision-making process, leading to the selective use or outright rejection of evidence based on electoral incentives rather than empirical merit (Boswell 2009). In many cases, policymakers will only engage with research findings that align with their pre-existing policy positions, ignoring inconvenient evidence that suggests alternative approaches (Cairney 2016). This is particularly evident in municipal welfare policies, where governments often extend welfare benefits indefinitely despite evaluation findings indicating the need for structured exit strategies (Mostert 2023). Similarly, bureaucratic inertia and institutional resistance to change contribute to a reluctance to incorporate evaluation findings into welfare policy adjustments. Government institutions are often structured in ways that prioritise compliance-driven reporting over impact-driven policy refinements, making it difficult for new evidence to influence entrenched decision-making processes (Nutley et al. 2009). Furthermore, cognitive overload among policymakers exacerbates these challenges, as decision-makers frequently must process vast amounts of information under time constraints, leading them to rely on executive summaries or politically curated data rather than comprehensive evaluation reports (Boaz et al. 2019).

The interplay between systems theory and bounded rationality highlights the core challenges of integrating evidence into social welfare policymaking. While a well-functioning evidence ecosystem should enable policymakers to engage in continuous learning and programme refinement, real-world constraints undermine this process, including fragmented knowledge systems, political interference and cognitive limitations. Figure 1 and Figure 2 illustrate how these theoretical frameworks operate in practice. They demonstrate that while evidence enters the policy cycle as an input, it is often obstructed by systemic and cognitive barriers before it can meaningfully influence policy outputs. Addressing these challenges requires deliberate institutional reforms, including improved knowledge translation mechanisms, greater coordination between evidence producers and policymakers, and the development of policy tools that simplify the presentation of evaluation findings to facilitate their use in decision-making. Without these changes, social welfare policies will continue to be shaped more by political expediency than by empirical research, limiting their effectiveness in addressing poverty and inequality sustainably.

Methodological framework

This study employs a qualitative case study approach to investigate the factors influencing the use of evaluation evidence in social welfare policymaking, focusing on the CoT’s indigent programme. A case study design is particularly suitable for this research as it allows for an in-depth examination of a specific policy intervention within its real-world governance context (Yin 2018). Given the complexity of municipal policymaking, the case study approach enables exploration of the interactions between policymakers, evaluation systems and political actors, providing rich, contextually embedded insights that may not be achievable through broader survey-based research (Creswell & Creswell 2018). By narrowing the focus to a single municipal programme, the study captures how evaluation evidence is accessed, interpreted and applied in a localised governance setting, ensuring a detailed and nuanced understanding of policy dynamics.

The City of Tshwane’s indigent programme as a case study

The CoT’s indigent programme serves as an ideal case study for examining evidence use in municipal policymaking, as it represents a structured social welfare initiative designed to support low-income households. The programme is a municipal-level intervention that aligns with national policy frameworks to ensure access to basic services such as water, electricity, sanitation and refuse removal for economically vulnerable residents (CoT 2015). Like many other indigent policies in South Africa, the programme provides subsidised services to qualifying households, reinforcing the municipality’s commitment to social protection and poverty alleviation at the local government level.

The selection of the CoT’s indigent programme as a case study is justified by three key factors. Firstly, it is an established policy initiative, allowing for a longitudinal analysis of how evidence has shaped (or failed to shape) municipal decision-making over time. Secondly, it involves multiple governance stakeholders, including municipal administrators, political oversight bodies, frontline service providers and programme beneficiaries, making it an ideal setting for assessing how different actors engage with evaluation evidence. Finally, as a highly targeted municipal programme, it provides a manageable and context-specific setting for investigating the real-world application of evaluation evidence within local governance structures. The case study approach ensures that findings are deeply contextualised, offering insights to inform broader policy discussions on evidence use in municipal social welfare programmes across South Africa.

Sampling method

The study employs a purposive sampling strategy, selecting participants based on their direct involvement in the CoT’s indigent programme (see Table 1). Purposive sampling is particularly effective in qualitative research, as it ensures that data collection focuses on information-rich cases that provide deep insights into policy design, implementation and evaluation practices (Patton 2002). This method allows the researcher to target individuals who are best positioned to provide perspectives on evidence use, ensuring that the study captures a diverse range of voices from municipal governance structures.

TABLE 1: Participant groups, roles and justification for inclusion.

To enhance data credibility, the study includes participants from four key groups: municipal officials responsible for programme management, political oversight committees that influence decision-making, social workers and ward committee members involved in service delivery and indigent programme beneficiaries who experience the policy’s implementation firsthand. The total sample size is projected to be 25–30 participants, a range within which data saturation is expected to be reached.

Data collection methods

Data collection relies on three primary qualitative methods, namely semi-structured interviews, document analysis and focus group discussions (FGDs). Combining these methods ensures data triangulation, increasing the credibility and reliability of findings (Denzin & Lincoln 2011). Semi-structured interviews are conducted with 30 participants, including municipal officials, policymakers, social workers and programme beneficiaries. The interviews explore themes related to institutional constraints, political influences, access to evaluation findings and evidence-informed decision-making (Kvale 2007). The flexibility of semi-structured interviews allows participants to expand on key issues, ensuring rich, in-depth insights into governance practices. Document analysis examines policy reports, municipal records, budget allocations, evaluation studies and council meeting minutes related to the indigent programme.

These documents provide historical and policy-level evidence of decisions, whether evaluation reports have been systematically integrated into policy deliberations and whether evidence has influenced programme adjustments (Bowen 2009). Five FGDs are conducted with ward committee members and community representatives, each comprising 6–8 participants per session. FGDs allow for collective reflections on programme effectiveness, community perceptions of evidence use and policy communication challenges (Krueger & Casey 2015). This method helps uncover gaps between policy intentions and actual service delivery, highlighting whether indigent policies are grounded in evidence or shaped by external pressures. All interviews and FGDs are audio-recorded with participant consent, transcribed verbatim and coded thematically using NVivo software to ensure a structured, transparent qualitative data analysis (Bazeley & Jackson 2013).

Data analysis

The study employs thematic analysis to systematically examine qualitative data collected through semi-structured interviews, FGDs and document analysis. This approach enables the identification of key patterns and themes related to evidence use, institutional constraints, political influences and implementation barriers (Braun & Clarke 2006). The analysis follows a structured six-step process, beginning with data familiarisation, where transcripts from interviews and FGDs are reviewed in detail to identify recurring ideas. Initial coding is then conducted using NVivo software, ensuring systematic categorisation of key concepts (Bazeley & Jackson 2013). Codes are subsequently grouped into broader themes, with particular attention to how policymakers, service providers and beneficiaries perceive the role of evaluation evidence in municipal decision-making. Themes such as bureaucratic inefficiencies, political considerations and accessibility of evaluation reports are reviewed, refined and validated through peer debriefing and cross-referencing among participant groups to ensure credibility and consistency (Lincoln & Guba 1985).

Once themes are defined, the study moves to synthesising findings, integrating theoretical insights from systems theory and the bounded rationality model to explain why evidence is either underutilised or selectively applied in policymaking. The final analysis presents themes in a structured manner, incorporating direct quotations from participants to illustrate governance dynamics and barriers to evaluation uptake. Findings are further strengthened through triangulation, comparing qualitative insights with document analysis to assess discrepancies between formal policy commitments and actual implementation practices (Shenton 2004). By employing a rigorous thematic analysis framework, the study ensures that data interpretation remains grounded in participant narratives while contributing to broader discussions on evidence-informed policymaking in South African municipalities.

Validity and trustworthiness

The study employs multiple validation techniques to enhance rigour and credibility, including triangulation, member checking and audit trails (Lincoln & Guba 1985). Triangulation ensures that findings are cross-validated through interviews, document analysis and FGDs, reducing the risk of bias and misinterpretation (Patton 2002). Member checking allows participants to review preliminary findings, ensuring that interpretations accurately reflect their perspectives (Birt et al. 2016). An audit trail documents all methodological decisions, providing transparency and reproducibility for future research (Shenton 2004).

Ethical considerations

Ethical clearance to conduct this study was obtained from the University of the Witwatersrand’s School of Governance Ethics Committee (No. WSG-2022-06).

Findings and discussion

Institutional constraints and the absence of an evaluation culture

The absence of a structured evaluation culture within the CoT’s indigent programme reflects broader institutional constraints that prioritise administrative compliance over evidence generation and policy learning. While municipal governance structures emphasise reporting on service delivery outputs and financial expenditures, there is no systematic mechanism for conducting evaluations that assess long-term programme impact (Nutley et al. 2009). Several municipal officials interviewed for this study noted that while performance data on the number of indigent households receiving subsidies are regularly collected, there are no formal processes to evaluate whether the programme effectively lifts beneficiaries out of poverty or promotes self-sufficiency:

One official explained, ‘the monitoring doesn’t happen as it should, making it difficult to evaluate what is working and what is not working.’ (Municipal official, male, 45)

Another official said:

‘There is high reliance on the numbers in terms of reaching on set targets, there is a lot of reporting.’ (Senior administrator, female, 40)

This suggests that evaluation is not embedded within the governance framework and remains a secondary function rather than a core component of municipal decision-making (Goldman & Pabari 2021). Without an institutionalised approach to evaluation, the indigent programme operates without empirical feedback loops, leading to repetitive policy cycles that fail to incorporate lessons from previous implementation challenges.

A key institutional barrier is the fragmented nature of governance structures, which limits the municipality’s ability to integrate evaluation into decision-making processes. The study found that policy formulation, implementation and performance monitoring responsibilities are distributed across multiple municipal units. Moreover, no single entity is responsible for conducting evaluations or ensuring their uptake, a gap exacerbated by the absence of an institutionalised evaluation function. As one member of the Section 79 committee described:

‘The M&E information is only provided when requested, however, the use of M&E information is the weakest element of oversight, evaluation information is not used compared to other types of information used.’ (Section 79 committee member, male, 50)

Similar governance inefficiencies have been documented in other African municipal contexts, where the absence of evaluation mandates leads to reliance on routine administrative data rather than empirical evidence in policymaking (Stewart et al. 2019). In the CoT, the lack of a dedicated evaluation unit means that findings are not systematically reviewed or incorporated into programme adjustments, even when informal assessments occur. This structural weakness prevents evidence-based policy learning and reinforces policy inertia, where municipal strategies remain unchanged despite shifts in social and economic conditions.

Another critical factor contributing to the absence of an evaluation culture is the dominance of compliance-driven reporting systems, which prioritise meeting regulatory requirements over engaging in critical policy reflection. Municipal officials indicated that evaluation activities – when they do occur – are often conducted in response to external oversight requirements rather than as part of an internal effort to improve programme effectiveness. One ward committee member noted:

‘When a committee sits, it is often based on reports that are requested and we go where the implementation happens, often what is reported is not the same with what is in the reports.’ (Ward committee member, male, mid-30s)

Another noted:

‘We report on numbers a lot and, if the numbers are achieved, then they are happy and if not, you are asked what you can do better.’ (Councillor, female, 39)

This aligns with findings from studies on public sector accountability, where governments in resource-constrained settings focus on demonstrating procedural compliance rather than measuring policy impact (Patel 2015). As a result, municipal staff members are primarily occupied with ensuring financial accountability, and little institutional capacity is dedicated to evaluation design, execution and learning. The long-term consequence is that the CoT continues to implement the indigent programme without concrete evidence of its effectiveness, reinforcing a governance model prioritising bureaucratic efficiency over substantive policy learning and innovation. Addressing these institutional constraints requires establishing dedicated evaluation structures, integrating evidence-based review mechanisms into decision-making processes and fostering a culture where evaluation is seen as a strategic governance tool rather than a compliance obligation.

Political considerations and the selective use of evidence

Political considerations dominate the uptake and use of evidence in municipal policymaking, particularly in the CoT’s indigent programme. Instead of relying on empirical evaluation findings, policy decisions are often influenced by electoral incentives, party dynamics and political bargaining, resulting in the selective use of evidence that aligns with political interests while dismissing findings that may be politically inconvenient (Boswell 2009). Several municipal officials and ward committee members interviewed for this study expressed frustration over how political actors influence decision-making, often prioritising short-term electoral gains over long-term policy effectiveness. One municipal official noted:

‘The intervention is implemented as a social grant, it has been adopted as a culture or a norm that the government has created which just teaches people to just receive without an expected outcome.’ (Municipal policymaker, male, 50)

Another said:

‘The programme is not really targeted to empower households such that [they] can exit the programme.’ (Municipal official, female, late 30s)

This suggests that even where monitoring data or informal assessments indicate inefficiencies in the indigent programme, political actors are unlikely to act on such evidence if doing so could jeopardise electoral support. Similar trends have been observed in other developing governance contexts, where social welfare policies are used as political instruments rather than as evidence-driven poverty alleviation tools (Goldman & Pabari 2021).

A key factor influencing the selective use of evidence is the electoral value of indigent programmes, which makes them highly politicised and resistant to reform (Matlala 2024). As indigent policies provide tangible benefits, such as free basic services, to a large segment of the low-income population, councillors and municipal officials face political pressure to maintain or expand these programmes regardless of evaluation findings on their effectiveness (Heinrich 2007). In interviews, ward committee members revealed that political leaders often resist discussions on structured exit strategies for beneficiaries, fearing negative reactions from voters. One committee member stated:

‘There is corruption at the community level where projects that are supposed to be prioritised for people who are indigent, often such projects benefit people that are not supposed to benefit.’ (Ward committee member, female, 42)

Another ward committee member said:

‘There is a lot of patronage amongst those people who are supposed to champion the programme and make it work to uplift their constituencies.’ (Ward committee member, male, 44)

This highlights how political actors influence policy decisions by rejecting evaluation findings that could result in unpopular reforms, even when such changes are necessary to ensure the programme’s long-term sustainability. Studies in Latin America and South Asia have documented similar patterns, where policymakers selectively engage with evaluation results that justify expanding social protection schemes while ignoring recommendations that advocate for more stringent beneficiary assessments (Patel 2015).

Beyond electoral incentives, political loyalty and party dynamics further shape whether evaluation evidence is considered in policymaking. Respondents indicated that, in municipal coalitions or politically contested environments, policy decisions are often dictated by party agendas rather than objective evaluation findings (Reddy 2016). One municipal policymaker recalled that one of the councillors said:

‘… [I]ndigent programme can be used as political tool where they are used to do politicking.’ (Councillor, male, 47)

This reflects a broader trend in South African municipal governance, where policymaking is frequently shaped by inter-party conflicts rather than by technocratic analysis (Stewart et al. 2019). The consequence is that evaluation findings – when they do exist – are either strategically used to justify politically favourable policies or are ignored altogether if they contradict the dominant party’s agenda. Addressing this challenge requires institutional reforms that depoliticise evidence use, such as establishing independent municipal evaluation units, embedding evidence-use requirements into policy processes and ensuring greater public oversight over how evaluation findings inform decision-making. Without such mechanisms, the political instrumentalisation of evidence will continue to hinder efforts to implement effective, sustainable and empirically grounded social welfare policies in municipalities like the CoT.

Weak knowledge management systems and limited accessibility of evidence

The study found that another critical barrier to the uptake of evaluation evidence in the CoT’s indigent programme is the absence of centralised knowledge management systems and the limited accessibility of policy-relevant evidence. Municipal decision-makers lack structured mechanisms to access, store and retrieve evaluation findings, which prevents them from incorporating empirical insights into policy design and implementation (Nutley et al. 2009). Interviews with municipal officials revealed that while data on indigent programme beneficiaries and service delivery outputs exist, such information is not systematically archived, analysed or disseminated to support policy learning. One municipal administrator explained:

‘Capacity challenge makes it difficult for the municipality to get relevant information to measure the status of the municipality in terms of improvement or impact of the intervention to inform future planning.’ (Municipal administrator, female, 36)

This reflects broader governance challenges in South African municipalities, where fragmented administrative structures result in disjointed data systems that undermine EBDM (Goldman & Pabari 2021). The absence of a municipal evaluation database means that even when evaluations are conducted internally or by external consultants, their findings are often not readily available to policymakers, reducing the likelihood of their use in governance processes.

Beyond the structural absence of centralised evaluation systems, municipal officials often lack formalised knowledge-sharing platforms, preventing meaningful discussions about how evaluation findings should inform policy adjustments. Respondents noted that evaluation reports rarely inform policy discussions within the municipality because of the limited institutionalisation of evidence-based practices. One committee member stated:

‘M&E information doesn’t get to reach the oversight committee that often; if it doesn’t get requested, it doesn’t get presented at all to the oversight committee. There is no voluntary presentation of such information to assist the oversight committee to do its job.’ (Oversight committee member, male, 55)

This aligns with research on public sector knowledge management, highlighting that government officials struggle to apply research insights to policy reform efforts without structured knowledge-sharing mechanisms – such as evaluation forums, data dashboards or evidence-use committees (Stewart et al. 2019). In comparison, municipalities in Canada and Australia have established evidence management systems, where policymakers must review evaluation findings before making major policy decisions, ensuring that governance processes are informed by empirical research rather than ad hoc decision-making (Patel 2015).

Limited accessibility to evaluation reports and policy evidence is further exacerbated by weak documentation practices and reliance on external evaluators, leading to a situation where knowledge is not retained within the municipality. Several municipal officials reported that when evaluations are commissioned by external consultants, the findings are often submitted in technical reports that are not easily understandable or user-friendly. One of the councillors stated:

‘There are several performance reports that are produced by the M&E unit. However, the process of gathering the information is extremely slow due to the capacity of social workers who are unable to visit the households to conduct evaluations.’ (Councillor, female, late 40s)

This finding aligns with research on knowledge translation barriers, which emphasises that evidence must be presented in accessible formats – such as policy briefs, executive summaries and interactive data visualisations – to facilitate its use in decision-making (Cairney 2016). In the CoT, the absence of knowledge translation mechanisms means that even when research is available, it is not synthesised into actionable insights that municipal officials can readily apply to policymaking. Strengthening knowledge management systems through the establishment of a centralised evaluation repository, regular evidence-sharing workshops and dedicated municipal evaluation units would improve the accessibility and utilisation of research findings, ensuring that policy decisions are based on structured evidence rather than fragmented administrative data or political intuition.

Capacity limitations and the marginalisation of evidence in policymaking

A significant barrier to the integration of evaluation evidence in municipal policymaking in the CoT is the lack of institutional capacity and expertise to engage with evaluation findings effectively. Municipal departments responsible for social welfare policies and indigent programme implementation are often understaffed and lack trained personnel with evaluation and data analysis skills, limiting their ability to interpret and apply evidence in policy decisions (Nutley et al. 2009). Several municipal officials interviewed noted that even when evaluation reports or monitoring data exist, there is no dedicated team to analyse the findings and translate them into actionable policy recommendations. One committee member stated:

‘I doubt that M&E is done on this intervention except for the monitoring that is expected to be done as a built-in mechanism in the implementation of the intervention.’ (Section 79 committee member, male, 51)

Another committee member said:

‘The municipality does not have the capacity to implement social relief interventions following the required project management cycle; it is only capacitated to provide basic services.’ (Municipal official, female, 42)

This reflects a broader trend in South African municipalities, where limited investment in policy analysis and evaluation expertise weakens the integration of empirical evidence into decision-making (Goldman & Pabari 2021). The absence of dedicated evaluation officers within municipal structures means that programme assessments – when they do occur – are often treated as one-off exercises rather than as tools for continuous policy learning and improvement.

Beyond human resource shortages, budget constraints further hinder the municipality’s ability to conduct and utilise evaluations. Respondents indicated that most municipal resources are allocated towards immediate service delivery needs, leaving little funding for policy research and impact assessments. A senior municipal official explained:

‘There are a number of challenges with how monitoring is undertaken.’ (Municipal staff, male, 39)

Additionally, one municipal official said, ‘Staff doesn’t have time to carry out such exercises; this would require a survey of some sort to determine, but there is just no capacity to do so.’ Moreover, one councillor said:

‘The section 79 committee relies on the honesty of the department of social development reports to correctly report on the status of the indigent register whether people are exited as soon as their indigent status has improved. It is also difficult to ensure that the real indigent get the exemptions from paying for some of the free services.’ (Oversight councillor, male, 53)

This aligns with findings from studies on local government budgeting, which show that in developing country contexts, evaluation is often deprioritised in favour of more immediate operational expenditures (Patel 2015). While performance monitoring – such as tracking the number of indigent beneficiaries or service provision levels – is integrated into administrative processes, more complex evaluations that assess long-term impact or programme effectiveness require additional financial and human resources that are often unavailable (Stewart et al. 2019). In contrast, countries with strong municipal evaluation cultures, such as Canada and Brazil, allocate dedicated funds for policy assessments, ensuring that evaluation findings inform strategic planning and policy reform (Heinrich 2007). Without similar financial commitments in Tshwane, evidence-based policymaking remains marginalised, as municipal officials lack both the resources and the institutional support to engage meaningfully with evaluation findings.

Another major limitation is the lack of training and knowledge translation mechanisms that enable municipal officials to apply evaluation findings to policymaking. Moreover, even when evaluation reports are available, they are often highly technical and not presented in formats that facilitate their use in governance. This reflects a broader issue in evidence-based policymaking, where technical research outputs fail to reach policymakers in an accessible and actionable form (Cairney 2016). Similar challenges have been documented in municipal governance structures in Kenya and Nigeria, where policymakers struggle to interpret evaluation findings because of limited technical expertise and a lack of simplified, policy-relevant summaries (Boaz et al. 2019). Addressing these capacity limitations requires a multi-faceted approach, including training municipal officials in evaluation literacy, embedding policy analysts within departments and developing standardised knowledge translation tools such as policy briefs and data dashboards. Without these reforms, evaluation evidence will continue to be marginalised in municipal governance, reinforcing a cycle where policy decisions are driven by short-term administrative concerns rather than long-term empirical insights.

Synthesis of key findings and implications for policy

The findings of this study reveal a complex interplay between institutional weaknesses, political interference and capacity deficits, all of which contribute to the limited use of evaluation evidence in policymaking within the CoT’s indigent programme. The absence of a structured evaluation culture has resulted in a governance system prioritising administrative reporting and compliance over empirical assessments of programme impact. Municipal departments, constrained by rigid bureaucratic structures and fragmented governance frameworks, struggle to integrate evaluation findings into decision-making. Instead of systematic evaluations that assess the long-term effectiveness of indigent support, there is a heavy reliance on routine performance monitoring, which focuses on output indicators rather than meaningful impact assessments (Nutley et al. 2009). This lack of institutionalised evaluation mechanisms leads to policy stagnation, where programme structures remain unchanged despite shifts in socio-economic conditions and evolving community needs. Without formalised knowledge-sharing platforms and centralised evidence repositories, evaluation findings – when they do exist – fail to inform policy adjustments, leading to inefficiencies in indigent support administration.

The political instrumentalisation of evidence further complicates efforts to strengthen evidence-based policymaking in municipal governance. Instead of being used objectively to enhance programme efficiency, evaluation findings are often filtered through political agendas, with policymakers selectively engaging with evidence that aligns with electoral incentives while ignoring or dismissing findings that suggest politically sensitive reforms (Boswell 2009). The indigent programme, which provides critical social support to low-income households, has become deeply embedded in the political landscape, making policymakers reluctant to act on evaluation findings recommending stricter eligibility criteria or structured exit strategies (Heinrich 2007). Respondents also noted that patronage dynamics and intra-party competition further reduce the likelihood that empirical evidence will shape municipal policy decisions. Similar challenges have been observed in other African municipalities, where social welfare policies are often used as political tools rather than as data-driven interventions aimed at long-term poverty alleviation (Goldman & Pabari 2021). This pattern of selective evidence use undermines policy effectiveness and erodes public trust in municipal governance, reinforcing perceptions of inefficiency and political opportunism.

Beyond political influences, capacity limitations and weak knowledge management systems exacerbate the municipality’s inability to engage with evaluation evidence effectively. Municipal officials lack the technical expertise required to interpret evaluation findings, and limited financial resources mean that programme assessments – when conducted – are often one-off exercises rather than part of an ongoing policy-learning cycle (Stewart et al. 2019). Additionally, the absence of dedicated evaluation units within municipal structures prevents institutional knowledge retention, leading to a situation where evaluation findings are not systematically archived or disseminated for future policymaking. In contrast, countries like Canada and Australia have implemented structured evaluation frameworks where evidence is systematically reviewed before making major policy decisions (Patel 2015). Strengthening institutional mechanisms for evidence use, such as centralised data repositories, independent evaluation units and routine policy-learning workshops, could significantly improve the integration of empirical research into municipal governance. Without these structural reforms, policy decisions in the CoT will continue to be driven by political considerations and compliance reporting rather than by rigorous evidence-based assessments of programme effectiveness.

Conclusion and recommendations

This study has revealed deep-seated barriers to the use of evaluation evidence in municipal policymaking, particularly in the CoT’s indigent programme. The absence of a structured evaluation culture, compounded by institutional inertia, political interference and capacity deficits, has resulted in a governance system that prioritises short-term performance metrics over long-term impact assessments. The evidence shows that municipal actors rely heavily on administrative data and routine compliance reporting, while systematic evaluations remain absent or, when conducted, are rarely used to drive policy improvements (Goldman & Pabari 2021). Instead of guiding programmatic decisions, evaluation findings – where they exist – are often ignored, selectively applied or rendered inaccessible because of fragmented knowledge management systems (Stewart et al. 2019). Political actors, particularly councillors and oversight committees, shape the way evidence is used, ensuring that findings that align with electoral incentives are prioritised, while those that suggest unpopular reforms – such as exit strategies for indigent beneficiaries – are deliberately sidelined (Boswell 2009). At the same time, the lack of specialised evaluation expertise, understaffed municipal departments and weak knowledge translation mechanisms mean that even when policymakers want to engage with evaluation reports, they lack the capacity and resources to do so effectively (Patel 2015). These systemic failures have crippled evidence-based policymaking, reinforcing a municipal environment where policy inertia, political expediency and bureaucratic inefficiencies dominate governance processes.

Addressing these challenges requires a paradigm shift – a move away from compliance-driven governance towards a system where empirical evidence shapes municipal policymaking. The first and most urgent step is to institutionalise evaluation within municipal structures by establishing a dedicated evaluation unit within the CoT’s social welfare and planning departments. This unit should be tasked with coordinating impact assessments, synthesising evaluation findings into actionable policy recommendations and ensuring that evidence is systematically reviewed before policy decisions are made (Heinrich 2007).

Without an internal mechanism that mandates and monitors the use of evaluation evidence, municipal policies will continue to be driven by historical precedent rather than empirical insight. This evaluation unit must not operate in isolation – it should be embedded within a broader governance framework that promotes collaboration between municipal officials, researchers, civil society actors and external evaluation experts, ensuring that policymaking benefits from diverse sources of knowledge and expertise (Nutley et al. 2009).

Beyond institutional restructuring, it is critical to curb political interference in evidence use by strengthening accountability measures that ensure evaluations are systematically reviewed before major policy decisions are made. One way to achieve this is to make evaluation findings publicly available, requiring municipal policymakers to justify decisions that contradict empirical recommendations. A municipal evidence-use committee, composed of independent policy analysts, governance experts and community representatives, should be established to audit the extent to which evaluation findings inform municipal policies (Cairney 2016). This would deter the selective use of evidence for political gain, ensuring that policy adjustments are driven by data rather than political manoeuvring (Boaz et al. 2019). Additionally, introducing mandatory evidence-based policy hearings, where councillors and municipal officials must engage in transparent discussions on evaluation findings, would help depoliticise the use of evidence, ensuring that municipal decision-making is based on what works, rather than what is politically expedient.

A third key recommendation is the development of a centralised knowledge management system that ensures evaluation findings are accessible, understandable and integrated into policymaking processes. The current system – where data are scattered across departments, buried in lengthy reports or lost in bureaucratic silos – must be overhauled. The CoT should create an online evaluation repository, where all policy assessments, impact evaluations and programme performance reviews are stored in a user-friendly, searchable database (Stewart et al. 2019). This system should be accompanied by regular evidence-sharing platforms, such as policy roundtables, interactive dashboards and evaluation briefings, where municipal officials can engage with evaluation insights in real time (Patel 2015). Countries such as Canada and Australia have pioneered similar digital governance platforms, ensuring that policymakers have instant access to high-quality, policy-relevant evidence, rather than relying on fragmented, outdated or politically curated information (Heinrich 2007). Implementing such a system in Tshwane would strengthen institutional memory, improve evidence accessibility and foster a culture where evaluation is seen as an essential governance tool rather than a compliance burden.

Lastly, capacity-building initiatives must be prioritised to equip municipal officials with the technical expertise needed to interpret and apply evaluation findings. Training workshops, evaluation literacy programmes and mentorship initiatives should be introduced to ensure that municipal policymakers, programme managers and administrative staff can engage meaningfully with evaluation evidence (Goldman & Pabari 2021). Additionally, the municipality should allocate dedicated funds for evaluations, ensuring that programme assessments are regularly conducted and not merely commissioned on an ad hoc basis when external funding is available (Stewart et al. 2019). By embedding evaluation funding within the municipal budget, Tshwane can institutionalise evidence-based policymaking, rather than treating it as a sporadic, donor-driven exercise.

The failure to integrate evaluation evidence into municipal policymaking in the CoT is not merely a technical issue – it is a governance failure that perpetuates policy stagnation, inefficient resource allocation and service delivery weaknesses. Without urgent reforms, the indigent programme will remain trapped in a cycle of political manipulation, bureaucratic inefficiency and limited policy learning, ultimately failing to achieve its intended goal of sustainable poverty alleviation. The recommendations outlined above offer a practical roadmap for strengthening evidence use in governance, ensuring that decision-making is grounded in empirical insights rather than political expediency. By embedding evaluation structures, depoliticising evidence use, improving knowledge management systems and strengthening municipal capacity, the CoT can transition towards a governance model where policy effectiveness, rather than political strategy, drives municipal decision-making. Without these interventions, the continued neglect of evaluation evidence in policymaking will undermine service delivery and erode public trust in the municipality’s ability to govern effectively.

Acknowledgements

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Authors’ contributions

L.S.M. and D.P.S. contributed equally to the conceptualisation, methodology design, data collection, analysis, and writing of the manuscript. L.S.M. and D.P.S. confirm responsibility for the integrity and accuracy of the work. Both authors contributed to the article, discussed the results, and approved the final version for submission and publication.

Funding information

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Data availability

The authors confirm that data supporting the findings are available in the article. Raw data that support the findings of the article are available from the corresponding author, L.S.M., upon reasonable request.

Disclaimer

The views and opinions expressed in this article are those of the authors and are the product of professional research. They do not necessarily reflect the official policy or position of any affiliated institution, funder, agency or that of the publisher. The authors are responsible for this study’s results, findings and content.

References

Arthur, S., Mitchell, M., Lewis, J. & McNaughton Nicholls, C., 2023, Qualitative research practice: A guide for social science students and researchers, Sage, London.

Barends, E., Rousseau, D.M. & Briner, R.B., 2014, Evidence-based management: The basic principles, Center for Evidence-Based Management, viewed 25 January 2025, from https://cebma.org/wp.

Bazeley, P. & Jackson, K., 2013, Qualitative data analysis with Nvivo, 2nd edn., Sage, London.

Berman, P., 2016, ‘Policy implementation and street-level bureaucracy’, Journal of Public Administration Research and Theory 26(2), 253–270. https://doi.org/10.1093/jopart/muv027

Birt, L., Scott, S., Cavers, D., Campbell, C. & Walter, F., 2016, ‘Member checking: A tool to enhance trustworthiness or merely a nod to validation?’, Qualitative Health Research 26(13), 1802–1811. https://doi.org/10.1177/1049732316654870

Boaz, A. & Ashby, D., 2003, Fit for purpose?: Assessing research quality for evidence based policy and practice, ESRC UK Centre for Evidence Based Policy and Practice, London.

Boaz, A., Davies, H., Fraser, A. & Nutley, S., 2019, What works now: Evidence-informed policy and practice, Policy Press, Bristol.

Boswell, C., 2009, The political uses of expert knowledge: Immigration policy and social research, Cambridge University Press, Cambridge.

Bowen, G.A., 2009, ‘Document analysis as a qualitative research method’, Qualitative Research Journal 9(2), 27–40. https://doi.org/10.3316/QRJ0902027

Braun, V. & Clarke, V., 2006, ‘Using thematic analysis in psychology’, Qualitative Research in Psychology 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Bright, J., Ganesh, B., Seidelin, C. & Vogl, T., 2019, ‘Data science for local government’, Policy & Internet 11(1), 1–26. https://doi.org/10.1002/poi3.172

Briner, R.B., Denyer, D. & Rousseau, D.M., 2009, ‘Evidence-based management: Concept cleanup time?’, Academy of Management Perspectives 23(4), 19–32. https://doi.org/10.5465/AMP.2009.45590138

Browne, J., Buke, L. & Maruna, S., 2017, ‘Policy learning and evidence use in local government’, Public Administration Review 77(5), 660–672. https://doi.org/10.1111/puar.12781

Bryman, A., 2012, Social research methods, Oxford University Press, Oxford.

Cairney, P., 2016, The politics of evidence-based policy making, Palgrave Macmillan, London.

Cairney, P., 2017, ‘The use of evidence in policymaking’, Journal of European Public Policy 24(5), 647–664. https://doi.org/10.1080/13501763.2016.1250158

Cassell, C., Cunliffe, A.L. & Grandy, G., 2018, The SAGE handbook of qualitative business and management research methods, Sage, London.

City of Tshwane (CoT), 2012, Indigent policy of the City of Tshwane, City of Tshwane Municipality, Pretoria.

City of Tshwane (CoT), 2015, Social development annual report, City of Tshwane Municipality, Pretoria.

Creswell, J.W. & Creswell, J.D., 2018, Research design: Qualitative, quantitative, and mixed methods approaches, 5th edn., Sage, Los Angeles.

Denzin, N.K. & Lincoln, Y.S., 2011, The SAGE handbook of qualitative research, 4th edn., Sage, Los Angeles.

Fiszbein, A. & Schady, N.R., 2009, Conditional cash transfers: Reducing present and future poverty, The World Bank, Washington, DC.

Fuo, O., 2020, ‘The role of local government in promoting social justice in South Africa’, Law, Democracy & Development 24(1), 66–91. https://doi.org/10.4314/ldd.v24i1.5

Goldman, I. & Pabari, M., 2021, Using evidence in policy and practice: Lessons from Africa, Routledge, London.

Heinrich, C.J., 2007, ‘Evidence-based policy and performance management: Challenges and prospects in two parallel movements’, American Review of Public Administration 37(3), 255–277. https://doi.org/10.1177/0275074007301957

Krueger, R.A. & Casey, M.A., 2015, Focus groups: A practical guide for applied research, 5th edn., Sage, Los Angeles.

Kvale, S., 2007, Doing interviews, Sage, Thousand Oaks.

Leburu, D., 2017, ‘Municipal social welfare policies and indigent households’, South African Journal of Local Government 12(2), 34–56.

Lincoln, Y.S. & Guba, E.G., 1985, Naturalistic inquiry, Sage, Beverly Hills.

Managa, A., 2012, Unfulfilled promises and their consequences: A reflection on local government performance in South Africa, Africa Institute of South Africa, Pretoria.

Mashego, T., 2015, ‘Indigent policies and local government performance’, Development Southern Africa 32(1), 81–100. https://doi.org/10.1080/0376835X.2014.981987

Matlala, L.S., 2024, ‘Factors affecting effective citizen-based monitoring of frontline service delivery in South Africa’, Africa’s Public Service Delivery and Performance Review 12(1), a851.

Matlala, L.S., 2025, ‘Factors affecting the use of evidence in public sector programmes in South Africa: A systematic review of outcome 8 programmes’, Conference on Digital Government Research 1. https://doi.org/10.59490/dgo.2025.1024

Meadows, D.H., 2008, Thinking in systems: A primer, Chelsea Green Publishing, White River Junction.

Mkandawire, T., 2005, Targeting and universalism in poverty reduction, UN Research Institute for Social Development, Geneva.

Mostert, A., 2023, City of Tshwane indigent programme review: Ensuring service delivery to the most vulnerable groups, Tshwane Municipality Reports, Pretoria.

Nutley, S., Walter, I. & Davies, H.T.O., 2009, Using evidence: How research can inform public services, Policy Press, Bristol.

Pabari, M. & Porter, S., 2013, ‘Evidence-informed decision making in Africa’, Evaluation Journal of Australasia 13(1), 28–40.

Patel, L., 2015, Social welfare and social development in South Africa, Oxford University Press Southern Africa, Cape Town.

Patel, Z., Greyling, S., Parnell, S. & Pirie, G., 2015, Urban governance and the politics of climate change in South Africa, Routledge, London.

Patton, M.Q., 2002, Qualitative research and evaluation methods, 3rd edn., Sage, Thousand Oaks.

Pawson, R., 2006, Evidence-based policy: A realist perspective, Sage, London.

Pillay, D., 2010, ‘Local government and social protection in South Africa’, Transformation: Critical Perspectives on Southern Africa 73(1), 69–89.

Porter, S. & Goldman, I., 2013, ‘A framework for evidence-informed policymaking in South Africa’, African Evaluation Journal 1(1), 1–14. https://doi.org/10.4102/aej.v1i1.25

Reddy, P.S., 2016, ‘The politics of service delivery in South Africa: The local government sphere in context’, Journal for Transdisciplinary Research in Southern Africa 12(1), 337–353. https://doi.org/10.4102/td.v12i1.337

Ruiters, G., 2018, ‘State, bureaucracy, and municipal governance in South Africa’, Review of African Political Economy 45(156), 167–186. https://doi.org/10.1080/03056244.2018.1459691

Shenton, A.K., 2004, ‘Strategies for ensuring trustworthiness in qualitative research’, Education for Information 22(2), 63–75. https://doi.org/10.3233/EFI-2004-22201

Simon, H.A., 1957, Models of man: Social and rational, Wiley, New York, NY.

Stewart, R., 2018, The role of evidence in decision-making in South African municipalities, HSRC Press, London.

Stewart, R., Dayal, H., Langer, L. & Van Rooyen, C., 2019, ‘The evidence ecosystem in South Africa: Growing resilience and institutionalization of evidence use’, Palgrave Communications 5(1), 90. https://doi.org/10.1057/s41599-019-0303-0

Tissington, K., 2013, A review of housing policy and development in South Africa since 1994, Socio-Economic Rights Institute of South Africa, Johannesburg.

Weiss, C.H., 1979, ‘The many meanings of research utilization’, Public Administration Review 39(5), 426–431. https://doi.org/10.2307/974668

World Bank, 2018, Social protection and labor strategy 2018–2022: Toward a resilient and inclusive future, The World Bank, Washington, DC.

Yin, R.K., 2018, Case study research and applications: Design and methods, 6th edn., Sage, Los Angeles.


 
