The Education Agency as Research Intermediary

Since the late 1970s, researchers and policy makers have debated the role of research in policy making. Since the passage of the Every Student Succeeds Act (ESSA) in 2015, education policy has experienced a renewed interest in this topic. ESSA presents a rigorous framework and set of requirements for research use in educational decision making. This paper presents an evidence-informed model that education agencies (EAs) can follow to support schools in the implementation of ESSA’s evidence-based practice provisions. The model includes three key activities that EAs should engage in to become effective research intermediaries: (1) model effective research use, (2) build capacity in educators, and (3) promote action research.


Introduction
In 1978, Carol Weiss published a paper entitled Improving the Linkage between Social Research and Public Policy. This paper explored the relationship between research and policy and launched a conversation about the role of research in policy making that has persisted for more than forty years and influenced the educational policy landscape. As the Every Student Succeeds Act (ESSA) ushers in new and heightened expectations for evidence-based practices and research use in schools, education agencies (governmental bodies that regulate education; sometimes called departments, commissions, or ministries) must seek new ways to increase educator access to research and help local leaders develop the ability to apply findings as they relate to common problems of practice. EAs must become effective research intermediaries.

Background
In 2015, the United States Congress passed the latest reauthorization of the Elementary and Secondary Education Act of 1965 (ESEA) through a bill known as the Every Student Succeeds Act (ESSA). As the governing law, ESSA drives the collective effort to improve educational access, equity, and quality in the United States. One way in which it seeks to accomplish this task is through the implementation of evidence-based practice (EBP) provisions: a set of parameters that guide the way research should be used to inform educational decision making.
The use of research to inform educational decision making in the United States is not new. The 2001 reauthorization of ESEA, known as No Child Left Behind (NCLB), included directives for education leaders to consider "scientifically based research" but stopped short of clearly defining what that would look like in practice. Further modifications and federal initiatives, such as Race to the Top grants, School Improvement Grants (SIGs), and reauthorizations of the Education Sciences Reform Act, continued to increase the level of rigor applied to research used in schools (Tseng, 2012; Goertz et al., 2016). The ESSA provisions represent a natural evolution in the role that research plays in educational decision making, moving away from the ambiguous language of No Child Left Behind to a specific definition of evidence that includes four thresholds of academic rigor: (1) experimental studies, (2) quasi-experimental studies, (3) correlational studies, and (4) logic models (Elementary and Secondary Education Act of 1965, 2018).
While the passage of ESSA has served as an impetus for more rigorous research use in the United States, it reflects a larger global effort to improve research use in schools. In the United Kingdom, for example, the Department for Education has declared its "ongoing commitment to promote the use of evidence through encouraging incremental change in schools" and has demonstrated this commitment through its own commissioned studies, the implementation of teacher professional development standards that emphasize research, and the creation of grants and endowments to expand research use (Coldwell et al., 2017). Similar movements can be found in Australia (Hempenstall, 2014), Singapore (Low et al., 2012), and China (Law, 2014).
The question this creates for EAs in the United States is straightforward: how does the EA support schools in developing and implementing new models of evidence-based decision making? While the question is straightforward, its answer is anything but. ESSA places an emphasis on the importance of local control and empowers school and district leaders to set policies to guide their work (Edgerton, 2019). The trouble is that many of the EBP provisions in ESSA land on the shoulders of schools identified for improvement (Elementary and Secondary Education Act of 1965, 2018). These are schools that already face additional regulatory burdens and capacity challenges. To further complicate matters for the EA, the provisions have inserted themselves into a profession that is largely unprepared to utilize research for decision making and disengaged from the research community (discussed further below). In order to successfully promote effective use of research in the era of ESSA, EAs must come to terms with the current state of research use and find new ways to support schools in this area.

The Current State of Research Use
The study of the use of research in policy making and educational decision making dates back to the work of Carol Weiss in the 1970s. Weiss studied the role that research played in informing and shaping public policy, and with her colleague Bucuvalas identified three uses of research in policy making: (1) instrumental use, in which research is used to solve problems; (2) conceptual use, in which research informs or changes a point of view; and (3) symbolic use, in which research is used to validate or legitimize a decision (Weiss & Bucuvalas, 1980). Later, Weiss would expand her model to include the concept of imposed use (2005), or the use of research because it is required or expected. It is through this framework that this article will explore the current state of research use at both the leadership and the classroom levels.

Research Use by School Leaders
It is unsurprising that the majority of research use in schools rests in the leadership sphere. When school leaders turn to research, they primarily engage in an instrumental use of research, gathering and analyzing research to solve timely problems and inform professional learning activities (Penuel et al., 2016). Leaders also participate in a combination of conceptual and symbolic use of research while grant writing (Malin, 2019), where the task is to convince an unknown reviewer to fund their project. At the central office level, symbolic use of research is often deployed to generate buy-in for a decision (Honig & Coburn, 2008). Generally, education leaders have a positive view of research and see the benefits of turning to research to inform a decision (Hemsley-Brown & Sharp, 2003; Penuel et al., 2016; Ostinelli, 2016; Green, 1990).
While educational leaders view research in a positive light, their knowledge of research is generally broad and shallow (Hemsley-Brown & Sharp, 2003). They tend to operate under a rather loose definition of what constitutes research and consider existing practitioner knowledge and stakeholder input to be research (Goertz et al., 2016; Honig & Coburn, 2008). Stakeholder voice often weighs more heavily on a decision than empirical research (Ostinelli, 2016). Education leaders rarely indicate in studies that they have encountered research that changed the way they looked at a problem (Penuel et al., 2016; Hemsley-Brown & Sharp, 2003).
Schools and districts have a limited capacity for research collection and review. While most districts have infrastructure to support the collection and analysis of student level data, they do not have infrastructure in place for the collection and review of research (Hemsley-Brown & Sharp, 2003). Due to this lack of infrastructure, education leaders are likely to get their research in synthesized forms from professional associations (Penuel et al., 2016; Ostinelli, 2016). Time to review and analyze research is a persistent issue for school leaders.

Research Use by Classroom Teachers
Classroom teachers use research to inform their daily practice far less frequently than their counterparts in the leadership realm. Teachers rarely access or use research when designing or planning instruction, are generally unable to identify a piece of research that influenced their decisions, and prefer the practical wisdom of their peers to empirical research (Lysenko et al., 2014; Levin et al., 2011). Teachers occasionally engage in an instrumental use of research to explore a new challenge in their classrooms and solve immediate problems (Coldwell et al., 2017), but are also likely to participate in imposed research use due to external pressures and expectations (Sato & Loewen, 2019).
As has been stated, teachers rely heavily on their own knowledge and the knowledge of their peers. Teachers' understanding of pedagogy is often aligned with research (Sato & Loewen, 2019), but they do not often use research as a basis for their decision making (Malin, 2019). While teachers report that they value research, they are often unprepared to use research effectively, and doing so becomes a complex task requiring added supports (Coldwell et al., 2017; See et al., 2016). This leads them to rely on their own values, beliefs, and experiences to interpret the validity of a piece of research, and they often only consider a piece of research credible if it aligns with their own professional experiences (Kennedy, 1997; Hemsley-Brown & Sharp, 2003). Understandably, teachers also suffer from a lack of access to quality research due to the limited infrastructure at the district level (Sato & Loewen, 2019; Malin, 2019).

Embracing the Role of Research Intermediary
Given what we know about the current state of research use, the importance of the research intermediary cannot be overstated. Sometimes called research brokers or knowledge mobilizers, research intermediaries are individuals or organizations that help practitioners understand and use research to inform and shape decisions (Cooper & Shewchuk, 2015). Policy makers have relied on research intermediary organizations for several years, and many organizations, such as the comprehensive assistance centers provided by the U.S. Department of Education, are filling this role for state agencies (Goertz et al., 2016). However, as ESSA places a greater emphasis on evidence-based teaching practice and drives research use closer and closer to the classroom level, it becomes the EA's responsibility to find new and creative ways to provide intermediary services to districts and schools. As such, this paper presents three methods that existing literature suggests may be effective in elevating the EA to the role of research intermediary.

Modeling Research Use
The EA can begin to transform into an effective research intermediary by modeling effective research use. Research related to the United Kingdom's transition to evidence-based teaching practice revealed that organizational efforts and the attitudes of key leaders played a major role in success, with highly effective, well-led schools becoming more research engaged and dedicated to continuous improvement efforts (Coldwell et al., 2017).
One way that EAs can begin to model research use is by becoming more transparent when discussing the research used to inform their own work. It is common to hear leaders say "research says" during meetings and policy discussions. If the EA is to model research use, they must hold themselves accountable to asking and answering the question: "What research?" Policy decisions and academic debate at the EA level must become heavily rooted in existing evidence of effectiveness if the same is expected of building and classroom level decision makers. EAs can make their decision-making process transparent by providing detailed research briefs or annotated bibliographies along with their policy rationales. These bibliographies can help ease the infrastructure burden on districts by serving as a starting point when they explore their own policy initiatives.
Time to access and digest research is a barrier frequently cited by educators (Sharp, 2004; Goertz et al., 2016; Penuel et al., 2016; Sato & Loewen, 2019). This barrier results in a heavy use of research syntheses over actual research studies (Goertz et al., 2016). Another way that the EA can support schools as a research intermediary is by curating existing research syntheses and creating new syntheses to fill in the gaps. When the EA promotes specific programs, practices, or strategies to schools, it should provide clearly written research syntheses that align with its recommendations. While it may be appropriate to create targeted syntheses authored by the EA directly, EAs would be well served to curate existing research syntheses to ensure high quality resources are available in a timely manner. Again, these curated syntheses can help expand upon limited district infrastructure and provide a boost to the knowledge base.

Building Capacity in Educators
As previously discussed in this paper, educators are trying to use research, but they lack both knowledge of and engagement in the use of research as a decision-making mechanism. Educators struggle to determine if research is valuable, relevant, and of high quality (Sharp, 2004), an issue that will become more pronounced as ESSA's EBP rules are applied to various federal funding mechanisms in the future. Without the capacity to effectively vet and evaluate research, educators will struggle to meet new federal mandates.
A recent evaluation of grant applications to the National Education Association (NEA) found that research citations were sparse and often referred to local data or research that supported the conceptual underpinning of a proposal rather than its potential impact on student outcomes (Malin, 2019). These same grant applications frequently featured authoritative, yet unsourced, statements of success. This same phenomenon often holds true when evaluating research provided by vendors to support their own programming. Vendors often bear the burden of producing evidence of effectiveness for their programs, but this burden creates inherent bias. Vendors are more likely to cite specific data points and targeted case studies using a broader definition of research than what policy makers would like to see and in some cases include citations for research that does not explore the impact of their program at all (Farley-Ripple & Jones, 2015).
EAs must take steps to up-skill local level education leaders and help them become more fluent in the interpretation and application of quality research. Providing targeted training to educators about research methodologies has been shown to be effective in increasing willingness to use research (Green, 1990). This training must help leaders to make connections between research findings and local issues (Levin et al., 2011) and interpret the findings of research beyond mere "recipes" for implementation (Levin, 2013). By building and maintaining a library of readily accessible training mechanisms, EAs can support schools and districts in meeting their professional learning challenges. Training should cover a variety of topics, including traditional research methodologies, determining local implications of research, and evaluating the quality (and potential bias) of published research.

Promoting Action Research
One of the most effective ways to learn about research is to participate in research. Action research helps practitioners better understand their own work by applying traditional research methodologies to daily practice and program evaluation. When done correctly, action research has been shown to nurture a system wide emphasis on school improvement, promote a commitment to continuous improvement, and increase the perceived relevance of research in general (Glanz, 1999; Zambo, 2011; Ostinelli, 2016; Sato & Loewen, 2019). Action research that is undertaken by practitioners in a systematic way can promote a culture of evidence-based teaching, because the act of teaching becomes synonymous with the act of creating evidence of effectiveness. Unfortunately, action research is often viewed as a lesser form of research (Cain, 2016), leading to a lack of use.
The EA can promote action research in two ways: (1) by creating its own transparent program evaluation procedures, and (2) by crafting policy that allows for the use of action research when meeting EBP requirements. Action research is, at its core, a self-reflection and self-assessment process (Glanz, 1999). While many programs at the EA level collect data related to the cost and impact of their work, a more research-driven approach to program evaluation will promote a deeper understanding of the impact of programs and allow for richer modifications during improvement processes. By being transparent and open with the public about these evaluations, the EA can promote the use of action research in schools. By extension, schools that follow the EA's model and begin using action research to evaluate the impact of their own programs should be allowed to submit quality action research findings as evidence of the effectiveness of their own programs.

Conclusion
This paper has presented three ways that EAs can support their schools and districts as the teaching profession shifts to include a stronger focus on evidence-based practices. There are, of course, authors who believe that the transition to evidence-based teaching practice is imprudent. These authors cite many of the concerns discussed in this article, such as the lack of access to and understanding of available literature (Biesta, 2010) and the importance of professional judgment, knowledge, and experience in making classroom level instructional decisions (Biesta, 2007). Others worry that, in time, research may be used to undermine the value of the teacher and the moral imperatives of education (Cain, 2016). It is the opinion of