
AAQEP Accreditation 2022

Standard 1

Conclusion & Next Steps

The findings from the self-studies conducted by the Multiple Subject, Bilingual Authorization, Single Subject, Agriculture Specialist, and Education Specialist programs highlight that, based on the available data sources, completers of our programs are, overall, prepared to perform as professional educators with the capacity to support access for all learners.

Areas of Strength:
As program faculty engaged in self-study in response to the AAQEP standards, they did so with a long history of successful accreditation from the California Commission on Teacher Credentialing. Many of the findings from Standard 1 confirmed that the strengths of the programs align with our School’s mission and goals. In particular, given the high percentage of students in our region who are emergent bilinguals and the diverse range of cultural backgrounds they represent, our mission as an educational unit is to prepare educators to be leaders in diverse communities.

Findings from across the QAR highlighted the ways in which all of our programs emphasize the development of culturally sustaining pedagogy, and the self-study also surfaced additional program strengths. In particular, Multiple Subject, Single Subject, and Education Specialist program completers who responded to the CSU Educator Quality Center items related to working with culturally and linguistically diverse students overwhelmingly reported being well prepared to do so. It is worth noting that these results include candidates who also earned a Bilingual Authorization or Agriculture Specialist credential.

Another important finding when looking across the program responses is the way in which our preliminary credential programs prepared candidates to use assessment to inform their instruction. While programs used a variety of data sources to examine candidates’ development of this ability--including FAST scores, field placement evaluations, and scores on signature assignments--they all also examined responses to the CSU Educator Quality Center completer survey. Across the Multiple Subject, Single Subject, and Education Specialist programs, completers overwhelmingly reported that the programs’ focus on using assessment to plan and adapt instruction helped them feel confident in their ability to do just that as they enter the teaching profession. This both highlights the work our programs do while candidates are enrolled and demonstrates the value of gathering multiple perspectives when examining data sources in order to form a complete picture. The Bilingual Authorization Program (BAP), by contrast, did not have access to the CSU completer survey because its candidates are also enrolled in the Multiple Subject program, and data are not disaggregated for this added authorization. Consequently, the program developed its own internal survey which, coupled with findings from a key assignment, demonstrated how the program prepares candidates to use assessments to inform their instruction.

The results of the Agriculture Specialist Program’s analysis provided another unique finding by demonstrating how the program provides extensive opportunities for candidates to gain expertise in the unique skill sets required of agriculture teachers. From the beginning of their field placements, candidates become involved in the workings of an agriculture program and learn the ins and outs of organizing and maintaining a successful FFA program. Additionally, as the findings for Standard 1A highlighted, the experience required of all program candidates before they even begin the program guarantees they enter with a solid foundation on which specific pedagogical knowledge can be built. 

Finally, as highlighted throughout, in preparing their responses to the standard aspects, programs overwhelmingly drew upon the resources provided by the CSU Educator Quality Center and the California Commission on Teacher Credentialing, in addition to more localized data sources such as students’ scores on the FAST. While program faculty were aware of each of these data sources, the results were not used on a regular basis to inform program practices. Engaging in the self-study afforded programs the opportunity to see both the value of these data and how they might use the data moving forward.

Areas for Growth:
While the findings of our analyses highlighted the success of the work our programs do to prepare completers for their future roles, we also discovered several areas for improvement. In particular, as stated above, this process helped programs recognize the rich data we have access to from the FAST and the Educator Quality Center. However, prior to engaging in this self-study, not all programs had engaged in regular, systematic analysis of data, though, as a unit, we have begun to take steps in this area with the implementation of our regular Data Summits in Fall 2020. The Multiple Subject program provides a strong model for using data to reflect on our work and set goals for continuous improvement. Seeing the power of the work they are doing inspired us to consider how we can develop a unit-wide approach to collecting data from all stakeholders, making those data available and a topic of exploration in our stakeholder meetings, and working toward reflective unit-wide goal setting.

Related to this, another important takeaway is that we do not have a unit-wide, systematic approach to collecting data from any of our key stakeholder groups--completers, K-12 partners, and employers. This pertains to our findings for both Standard 1 and Standard 2. Although, between the CSU Educator Quality Center and the California Commission on Teacher Credentialing, surveys are administered to program completers, employers, and year-out professionals, we discovered that their measures do not always align with the analysis we were trying to do in response to the Standard 1 aspects. Another challenge is that, particularly in the case of the CCTC employer data, we were unable to disaggregate the data in a way that made the findings meaningful to us. While we plan to advocate for revision of both the CSU survey and the CCTC survey, we also realize that we need to develop a systematic, unit-wide approach to collecting and analyzing data related to our programs. As a result, for this QAR, we were not always able to capture the perspectives of each key stakeholder group; more often than not, we relied on the perspectives of our faculty and our candidates.

Moving forward, we intend to develop unit-wide surveys that can be administered annually to each stakeholder group and that will include both general questions about the work our institution does as a whole and program-specific questions. The hope is that this will allow us to collect data useful at both levels without leading to survey fatigue from administering too many surveys, which is already a concern given the administration of both the CCTC and CSU Educator Quality Center surveys.

On the individual program level, as highlighted in the responses and in the tables below, we plan to begin holding annual focus group discussions with key stakeholders as a way to gather additional data. Ideally, these will occur after the administration of the surveys so that survey responses can inform what gets asked in the focus group discussions. We see these discussions both as a way to gather valuable information about how we can improve our programs and as a way to continue to build relationships with our completers, P-12 partners, and employers of our alumni.

Additionally, in order to make the necessary changes to program practices, program faculty plan to spend time examining current coursework, assessments, and evaluation tools to ensure that coursework aligns with expected outcomes, that assessments provide a valid way for candidates to demonstrate mastery of those outcomes, and that the tools used for evaluation actually measure what they are intended to measure. As they do so, faculty will also engage in inquiry, examining student work across courses to ensure the validity and reliability of both the assignments used and the tools used to evaluate them. We envision that this work will take time and be ongoing, as program faculty will need to try new approaches, examine their effectiveness, make revisions, and then implement those revisions.

To support faculty in their efforts, as a unit, we will continue holding our Data Summits to further conversations about how to effectively use data to inform program practices.

Standard 1: Candidate and Completer Performance Program Next Steps

Action to Take: Collaborate with the CSU Educator Quality Center (EdQ) to accurately disaggregate the program completer survey by pathways (i.e., Traditional, Residency, Internship).
Rationale for Action: Inaccuracies were identified in the CSU EdQ dataset that prevented data quality assurance in using those data to examine completer perceptions of preparedness by pathway experience. The program will benefit from understanding the ways in which the pathways prepare candidates, which will strengthen our data interpretations across all aspects of Standard 1.
Steps w/Proposed Timeline:
  1. Schedule a meeting with the CSU EdQ data coach to discuss the inaccuracies identified. (Fall 2021)
  2. Provide the correct identifiers needed in the survey to increase quality assurance of disaggregated data. (Fall 2021) 
  3. Develop a timeline with the CSU EdQ team for survey updates. (Fall 2021)  
Action to Take: Address program completers’ perceptions of a lack of preparation in the areas of a) critical-creative thinking; b) knowledge of child development and human learning to inform instruction; c) classroom management and discipline, supporting teacher candidates in developing the skills needed to handle a range of classroom management or discipline situations; and d) use of research-based instructional strategies for emergent bilingual students.
Rationale for Action: Data for Standards 1a, 1b, 1c, and 1e, such as the CSU EdQ program completer survey and formative rubric items, indicate that these are areas in which the program can improve. Standard 2e data from program completers one year out from the program parallel the findings from the CSU completer survey completed at the end of the program, providing additional evidence that this is a worthy action to take.
Steps w/Proposed Timeline:
  1. Within our program learning community meetings, revisit these data with program faculty and university coaches to discuss what they notice in the data. (Fall 2021)
  2. Engage program faculty and university coaches in discussions about where frameworks related to these content areas are intentionally embedded in the program. (Fall 2021)
  3. Program faculty and university coaches decide on the area in which to focus improvement and co-develop a change idea to test whether it leads to improvement. Develop a plan-do-study-act (PDSA) cycle for one cohort. (Fall 2021)
  4. Implement the PDSA cycle for the one cohort (Spring 2022). 
  5. Program faculty and university coaches review the outcomes of the PDSA cycle and consider if the idea should be adopted, adapted, or abandoned. (Fall 2022)
  6. Continue planning and PDSA process for the other content areas in need of improvement. (Fall 2022, Spring 2023, Fall 2023, Spring 2024, Fall 2024)   
Action to Take: Examine, select, and/or develop representative measures/data sources that are more directly connected to the signature assignments in the program.
Rationale for Action: Data for Standards 1a-f rely mostly on program-level data such as completer surveys, formative rubrics, and performance assessments that are primarily quantitative in nature. Signature assignments aligned to TPEs with rubric data, such as those in Standards 1b and 1f, provide insights into the quality of the program's core curriculum. These classroom assessments would add a qualitative aspect to evaluating our program that is currently missing. A mixed-methods approach to data collection and analysis would be more informative and authentic in looking at the way we prepare teachers for the classrooms of the future.
Steps w/Proposed Timeline:
  1. Within course-alike faculty sessions, program faculty who teach CI 162 (1b) and LEE 160 and LEE 169s (1f) will review their respective signature assignment rubrics related to the AAQEP standards (Fall 2022).
  2. LEE 162 faculty will develop revisions to the rubric to increase alignment with AAQEP standards (Fall 2022).
  3. LEE 160 and LEE 169s faculty revise their rubrics to evaluate and assess deficit- and asset-based language around TK-6 students, language, and families. The LEE 160 and 169s assignment and rubric will also include a new section on students’ lived experiences, including linguistic and navigational wealth. With a program-wide rubric, faculty will be able to track growth in teacher candidates’ asset-based perceptions of students and their families. (Fall 2022)
  4. Both faculty groups will engage in calibration of scoring of the new rubrics for these signature assessments. (Spring 2023)

Action to Take: Create a system to workshop and reflect on course content.
Rationale for Action: Data findings in several Standard 1 responses indicated issues in this area. Given that new coursework went into effect in Fall 2021, it is important to begin this process now and maintain it moving forward.
Steps w/Proposed Timeline:
  1. Convene a program advisory board meeting, which includes course instructors.
  2. Engage committee members in workshopping and providing feedback on three course syllabi.
  3. Share a report of the feedback and discuss it with each program instructor.
  4. The program coordinator can then work with each instructor to ensure that each course aligns to the standards and that the courses complement one another.

Action to Take: Create a centralized key signature assignment timeline for evaluating Standards 1A-1F within the Single Subject Program.
Rationale for Action: The data from the FAST, the EdQ Completer Survey, and the key signature assignments for Standards 1A-1F indicate that there are areas in which the program can improve. However, there is no internal system for aligning the key signature assignments with Standard 1.
Steps w/Proposed Timeline:
  1. Revisit the data collected for Standard 1 with program faculty and university coaches (Fall 2021)
  2. Discuss where these items are embedded in the Single Subject program. (Fall 2021)
  3. Develop a timeline to look at each course within the program. (Fall 2021)
  4. Implement the timeline (Spring 2022). 
  5. Review the outcomes of the timeline. (Fall 2023)
  6. Continue reviewing the timeline (Spring 2024). 
Action to Take: Increase the number of qualitative measures in our current Single Subject data collection system to address Standards 1A-1F.
Rationale for Action: Most of the data used to evaluate Standards 1A-1F were quantitative.
Steps w/Proposed Timeline:
  1. Discuss ways qualitative data can be used to gather feedback from candidates and completers to inform program practices (Spring 2022).
  2. Implement qualitative data collection activities (Fall 2023)
Action to Take: Develop and administer an internal Single Subject completer survey that is inclusive of AAQEP Standards 1A-1F.
Rationale for Action: No internal measure exists to gather the perspective of program completers, and CSU Educator Quality Center survey items do not always capture the necessary information.
Steps w/Proposed Timeline:
  1. Work with program faculty to develop an internal survey that can capture candidates’ perspectives in relation to Standards 1A-1F. (Spring 2022-Fall 2023)
  2. Pilot-test the survey (Spring 2023).
  3. Assess the results, reliability, and validity of the survey against Standards 1A-1F. (Fall 2024)
  4. Use survey findings to develop a focus group protocol to use with recent program completers.

Action to Take: Explore options for collecting more data specific to the Agriculture Specialist Credential. Create a data entry system for the three Agriculture Specialist evaluation forms to allow students, mentor teachers, and university supervisors to input data.
Rationale for Action: Currently, the three evaluation forms mentor teachers complete for Agriculture Specialist candidates are not included in an electronic database.
Steps w/Proposed Timeline:
  1. We will develop a process for capturing these data in a format that can be compiled into a searchable database and used to determine candidate strengths and weaknesses.
  2. We will research methods and procedures for accomplishing this task during 2022, with a goal of completion by Fall 2023.
Action to Take: Examine and update the EHD 154A and AGRI 280 seminar curricula to provide more instructional time focused on improving student performance on the Site Visitation and Teaching Sample assignments.
Rationale for Action: The two seminars allow time to assist students with the Site Visitation and Teaching Sample projects; however, in order to improve scores on these projects, more seminar time will be devoted to assisting students in completing them.
Steps w/Proposed Timeline:
  1. Beginning with the Spring 2022 semester, seminar instructors will dedicate additional time to accomplishing this task.
  2. At the end of each semester, agricultural education faculty will check progress toward the goal of improved scores and identify the specific project areas that need to be emphasized. The goal is to see improvement in scores beginning with the Spring 2022 semester.

Action to Take: Explore and develop measurement tools that provide a more specific breakdown of signature assignments to measure candidates’ competence in meeting the standards.
Rationale for Action: For Standard 1d, data were collected on the signature assignments from SPED 136 (UDL Instruction Unit) and SPED 125 (Functional Behavior Assessment and Behavior Intervention Plan). After reviewing the available data on these assignments, we found that the measurement tools did not break the assignments down into individual, measurable parts for data collection and analysis. Measurement tools that address the program standards and TPEs, along with consistent reporting of scores on signature assignments, would provide a clearer picture of candidates’ competence, application, and retention of the skills needed to meet the TPEs and program standards.
Steps w/Proposed Timeline:
  1. Develop measurement tools that provide specific data points on candidates’ competence on the standards for each course (Spring 2022).
  2. Offer training and ongoing support to adjunct faculty on the new standards, new program design, and the use of the measurement tools.
  3. Develop a centralized internal data collection system that involves all faculty in the reporting and analysis of signature assignment data to inform program practice (Spring-Summer 2021).
Action to Take: Engage in more purposeful data collection and analysis from program completers to inform program practices.
Rationale for Action: The surveys sent to program completers were only recently implemented to help inform program practices.
Steps w/Proposed Timeline:
Annually each Fall:
  • Focus group discussion with program completers
Annually each Spring/Summer:
  • Administer exit survey to program completers
  • Administer year+ survey to year+ completers
Ongoing:
  • Analyze data from data collection efforts in program meetings 
  • Share findings from data analysis with advisory board 
  • Use findings + recommendations from advisory board to inform program practices