AAQEP Accreditation 2022
Standard 1
Conclusion & Next Steps
Looking across the findings from the self-study conducted by the Multiple Subject, Bilingual Authorization, Single Subject, Agriculture Specialist, and Education Specialist programs, we find that, based on the available data sources, completers of our programs are overall prepared to perform as professional educators with the capacity to support access for all learners.
Areas of Strength:
As program faculty engaged in self-study in response to the AAQEP standards, they brought with them a long history of successful accreditation from the California Commission on Teacher Credentialing. Many of the findings from Standard 1 confirmed that the strengths of the programs align with our School's mission and goals. In particular, given the high percentage of students in our region who are emergent bilinguals and the diverse range of cultural backgrounds they represent, our mission as an educational unit is to prepare educators to be leaders in diverse communities.
Findings from across the QAR highlighted the ways in which all of our programs emphasize the development of culturally sustaining pedagogy, and this self-study also surfaced additional program strengths. In particular, Multiple Subject, Single Subject, and Education Specialist program completers who responded to the CSU Educator Quality Center items related to working with culturally and linguistically diverse students overwhelmingly reported that they were well prepared to do so. It is worth noting that these results include candidates who also earned a Bilingual Authorization or Agriculture Specialist credential.
Another important finding when looking across the program responses is the way in which our preliminary credential programs prepared candidates to use assessment to inform their instruction. While programs used a variety of data sources to examine candidates' development of this ability, including FAST scores, field placement evaluations, and scores on signature assignments, they all also examined responses to the CSU Educator Quality Center completer survey. Across the Multiple Subject, Single Subject, and Education Specialist programs, completers overwhelmingly reported that the programs' focus on using assessment to plan and adapt instruction helped them feel confident in their ability to do just that as they enter the teaching profession. This both highlights the work our programs do while candidates are enrolled and demonstrates the value of gathering multiple perspectives when examining data sources in order to form a complete picture. The Bilingual Authorization Program (BAP), by contrast, did not have access to the CSU completer survey because its candidates are also enrolled in the Multiple Subject program and the data are not disaggregated for this added authorization. Consequently, the program developed its own internal survey which, coupled with findings from a key assignment, demonstrated how the program prepares candidates to use assessments to inform their instruction.
The Agriculture Specialist Program's analysis yielded another distinctive finding: the program provides extensive opportunities for candidates to gain expertise in the unique skill sets required of agriculture teachers. From the beginning of their field placements, candidates become involved in the workings of an agriculture program and learn how to organize and maintain a successful FFA program. Additionally, as the findings for Standard 1A highlighted, the experience required of all candidates before they even begin the program ensures they enter with a solid foundation on which specific pedagogical knowledge can be built.
Finally, as highlighted throughout, in preparing their responses to the standard aspects, programs drew heavily upon the resources provided by the CSU Educator Quality Center and the California Commission on Teacher Credentialing, in addition to more local data sources such as candidates' scores on the FAST. While program faculty were aware of each of these data sources, the results had not been used on a regular basis to inform program practices. Engaging in the self-study afforded programs the opportunity to see both the value of these data and how they might use them moving forward.
Areas for Growth:
While the findings of our analyses did highlight the success of the work our programs do to prepare our completers for their future roles, we also discovered several areas for improvement. In particular, as stated above, this process helped programs recognize the rich data available to us from the FAST and the Educator Quality Center. However, prior to engaging in this self-study, not all programs had engaged in regular, systematic analysis of data, though, as a unit, we have begun to take steps in this area with the implementation of our regular Data Summits in Fall 2020. The Multiple Subject program, moreover, provides a strong model for using data to reflect on our work and set goals for continuous improvement. Seeing the power of the work its faculty are doing inspired us to consider how we can develop a unit-wide approach to collecting data from all stakeholders, making those data available as a topic of exploration in our stakeholder meetings, and working towards reflective unit-wide goal setting.
Related to this, another important takeaway is that we do not have a unit-wide, systematic approach to collecting data from any of our key stakeholder groups: completers, K-12 partners, and employers. This pertains to our findings from both Standard 1 and Standard 2. Although the CSU Educator Quality Center and the California Commission on Teacher Credentialing administer surveys to program completers, employers, and year-out professionals, we discovered that their measures do not always align with the analyses we were trying to conduct in response to the Standard 1 aspects. Another challenge is that, particularly in the case of the CCTC employer data, we were unable to disaggregate the data in a way that made the findings meaningful to us. While we do plan to advocate for revisions to both the CSU survey and the CCTC survey, we also realize that we need to develop a systematic, unit-wide approach to collecting and analyzing data related to our programs. As a result, for this QAR, we were not always able to capture the perspectives of each key stakeholder group; more often than not, we relied on the perspectives of our faculty and our candidates.
Moving forward, we intend to develop unit-wide surveys that can be administered annually to each stakeholder group and that will include both general questions about the work our institution does as a whole and program-specific questions. The hope is that this will allow us to collect data that are useful at both levels without leading to survey fatigue from administering too many surveys, which is already a concern given the administration of both the CCTC and CSU Educator Quality Center surveys.
On the individual program level, as highlighted in the responses and in the table below, we plan to begin holding annual focus group discussions with key stakeholders as a way to gather additional data. Ideally, these will occur after the administration of the surveys so that survey responses can inform what gets asked in the focus group discussions. We see these discussions as a way both to gather valuable information about how we can improve our programs and to continue building relationships with our completers, P-12 partners, and employers of our alumni.
Additionally, in order to make the necessary changes to program practices, program faculty plan to spend time examining current coursework, assessments, and evaluation tools to ensure that coursework aligns with expected outcomes, that assessments provide a valid way for candidates to demonstrate mastery of those outcomes, and that the tools used for evaluation actually measure what they are intended to measure. As they do so, faculty will also engage in inquiry, examining student work across courses to ensure the validity and reliability of both the assignments and the tools used to evaluate them. We envision that this work will take time and be ongoing, as program faculty will need to try new approaches, examine their effectiveness, make revisions, and then implement those revisions.
To support faculty in their efforts, as a unit, we will continue holding our Data Summits to further conversations about how to effectively use data to inform program practices.
Standard 1: Candidate and Completer Performance Program Next Steps
Action to Take | Rationale for Action | Steps w/Proposed Timeline |
Collaborate with the CSU Educator Quality Center (EdQ) to accurately disaggregate the program completer survey by pathway (i.e., Traditional, Residency, Internship). | Inaccuracies identified in the CSU EdQ dataset prevented us from confidently using those data to examine completer perceptions of preparedness by pathway. The program will benefit from understanding the ways in which the pathways prepare candidates, and doing so will strengthen our data interpretations across all aspects of Standard 1. |
Address program completers' perceptions of lack of preparation in the areas of a) critical-creative thinking; b) knowledge of child development and human learning to inform instruction; c) classroom management and discipline, and support teacher candidates in developing the skills needed to handle a range of classroom management or discipline situations; d) use of research-based instructional strategies for emergent bilingual students. | Data for Standards 1a, 1b, 1c, and 1e, such as the CSU EdQ program completer survey and formative rubric items, indicate that these are areas in which the program can improve. Standard 2e data from program completers one year out from the program parallel the findings from the CSU completer survey completed at the end of the program, providing additional evidence that this is a worthy action to take. |
Examine, select, and/or develop various representative measures/data sources that are more directly connected to the signature assignments in the program. | Data for Standards 1a-1f rely mostly on program-level data such as completer surveys, formative rubrics, and performance assessments that are primarily quantitative in nature. There are signature assignments aligned to the TPEs, with rubric data such as those in Standards 1b and 1f, that provide insights into the quality of the core curriculum of the program. These classroom assessments would add a qualitative aspect to evaluating our program that is currently missing. A mixed-methods approach to data collection and analysis would be more informative and authentic in looking at the way we prepare teachers for the classrooms of the future. |
Action to Take | Rationale for Action | Steps w/Proposed Timeline |
Create a system to workshop and reflect on course content. | Data findings in several Standard 1 responses indicated issues with course content. Given that new coursework went into effect in Fall 2021, it is important to begin this process now and maintain it moving forward. |
Action to Take | Rationale for Action | Steps w/Proposed Timeline |
Create a centralized key signature assignment timeline for evaluating Standards 1A-1F within the Single Subject Program. | The data from the FAST, the EdQ Completer Survey, and the key signature assignments for Standards 1A-1F indicate that there are areas in which the program can improve. However, there is no internal system for aligning the key signature assignments with Standard 1. |
Increase the number of qualitative measures present in our current Single Subject data collection system to address Standards 1A-1F. | Most of the data used to evaluate Standards 1A-1F were quantitative. |
Develop and administer an internal Single Subject completer survey that is inclusive of AAQEP Standards 1A-1F. | No internal measure exists to gather the perspective of program completers, and CSU Educator Quality Center survey items do not always capture the necessary information. |
Action to Take | Rationale for Action | Steps w/Proposed Timeline |
Explore options for collecting more data specific to the Agriculture Specialist Credential, and create a data entry system for the three Agriculture Specialist evaluation forms that allows students, mentor teachers, and university supervisors to input data. | Currently, the three evaluation forms that mentor teachers complete for Agriculture Specialist candidates are not included in an electronic database. |
Examine and update the EHD 154A and AGRI 280 seminar curricula to provide more instructional time focused on improving student performance on the Site Visitation and Teaching Sample assignments. | The two seminars already allot time to assist students with the Site Visitation and Teaching Sample projects; however, more seminar time will be devoted to these projects in order to improve students' scores on them. |
Action to Take | Rationale for Action | Steps w/Proposed Timeline |
Explore and develop measurement tools that provide a more specific breakdown of signature assignments to measure candidates' competence in meeting the standards. | For Standard 1d, data were collected on the signature assignments from SPED 136 (UDL Instruction Unit) and SPED 125 (Functional Behavior Assessment and Behavior Intervention Plan). After reviewing the available data on these assignments, we found that the measurement tools did not break the assignments down into individual, measurable parts for data collection and analysis. Having measurement tools that address the program standards and TPEs, as well as consistent reporting of scores on signature assignments, would provide a clearer picture of candidates' competence, application, and retention of the skills needed to meet the TPEs and program standards. |
Engage in more purposeful data collection and analysis with program completers to inform program practices. | The surveys sent to program completers were only recently implemented to help inform program practices. | Annually, each fall: |