Program completers adapt to working in a variety of contexts and grow as professionals.
Across all programs, we viewed our responses to the Standard 2 aspects as an opportunity to work within our existing data system to learn where we are, both programmatically and as an educational unit. We understand that the intent of Standard 2 is to evaluate how well our programs prepared our completers to work in their designated fields, and although the CSU and CTC distribute surveys to completers one year out and to employers of our completers, within Kremen we have not collected our own data about completers’ growth as professionals. Consequently, we chose to focus on the ways in which the work we do within our programs prepares candidates in each of these areas, documenting that work, with a plan to collect data from completers in the future. As with their analysis of data sources in response to Standard 1, programs primarily relied on existing data sources. In some cases, programs did pilot tools to begin to gather the perspectives of completers. Still, not all perspectives were captured to the extent that we would like, nor to the extent that we intend to capture them in the future. It is our belief that the findings we present in this Quality Assurance Report represent a baseline portrait of the work we do, a starting point from which we can continue to build and grow.
Because faculty were working to evaluate the ways in which their program prepared candidates to be successful completers in each area, they relied on existing assessments for direct measures of completer performance, as with Standard 1. In some cases, this meant drawing on the Site Visitation Plan (SVP) and the Teaching Sample Project from the FAST II. In other cases, programs relied on Pre and Post Dispositions survey data to evaluate the growth of candidates in key areas during their time in the program. As was also the case with Standard 1, programs looked to student performance on signature assignments or to final grades in key courses whose content is aligned with the appropriate California Commission on Teacher Credentialing (CCTC) Program Standards and with the focus of the aspect.
For indirect measures of candidates, programs again relied primarily on existing data sources, especially surveys of employers and of program candidates at the time of completion. In some instances, programs administered pilot measures to individuals who completed the program one or more years ago to begin capturing their perspectives on areas of strength and growth. Both the Education Specialist and Bilingual Authorization programs developed surveys aligned with the AAQEP Standard 2 aspects that were piloted with recent completers. The Bilingual Authorization Program also developed an additional survey, discussed in its response to the Standard 2 aspects, that it intends to pilot in Spring 2022.
We are fortunate to be a part of the California State University (CSU) system, which has its own Educator Data Quality Center. This Center administers annual surveys, with items connected to the California Standards for the Teaching Profession, to candidates completing our programs and to completers one year out. These surveys proved to be a valuable resource for our Multiple Subject, Single Subject, and Education Specialist programs. However, because the Bilingual Authorization and Agriculture Specialist programs have fewer completers annually, results disaggregated by program are unavailable. Instead, these programs turned to internal measures.
Another challenge programs encountered was with the CSU Educator Quality Center employer survey. This measure was administered in 2015-2016 and 2016-2017 to employers in reference to 2014-2015 and 2015-2016 program completers. The survey was then discontinued, and the CCTC began to administer a survey to employers of program completers across the state. However, the CCTC employer survey includes no items identifying which program a completer was a part of. Consequently, results could not be disaggregated by program, and we did not find the data to be meaningful on a programmatic level. Instead, we chose to use the dated CSU Educator Quality Center employer survey data, with the understanding that findings may not be fully in line with the perceptions of employers of our more recent completers. We believe that the available data still provide us with a baseline from which we can measure growth in the future.
Because we are in the beginning stages of our AAQEP journey, we made the decision to allow program faculty to determine what data sources would be most meaningful to them and the work they do. Program faculty worked together to identify the most appropriate data sources, analyze the data, interpret the findings, and articulate next steps for each aspect, creating their own continuous improvement journey to move their program forward. As a unit, we then looked across the responses to see how programs might learn from one another as they engage in this work and how we might support their progress. We document all of this in our QAR. Within the Standard 2 section, reviewers will find each program’s response to each aspect, along with the program’s synthesis and next steps. In the conclusion, we synthesize all five programs’ findings and highlight our next steps in our ongoing process to ensure our program completers are ready to perform as professional educators who are able to work in a variety of contexts and continue growing as professionals.
***Please Note: Throughout Standard 2, as discussed above, we utilize data from the CSU Educator Quality Center surveys. We have included screenshots of the analyzed data within the Aspect responses. Unfortunately, we are unable to download raw data to include as links within the responses, and the EdQ Center does not allow us to provide guest logins. We are happy to work with reviewers to log in to the system jointly to allow for any necessary checks.