
Standard 2: Program completers adapt to working in a variety of contexts and grow as professionals.


Across all programs, we viewed our responses to the Standard 2 aspects as an opportunity to work within our existing data system to learn where we are, both programmatically and as an educational unit. While we understand that the intent of Standard 2 is to evaluate how well our programs prepared completers to work in their designated fields, our programs have not historically collected these data from completers after they leave. Consequently, we chose to focus on the ways in which the work we do within our programs prepares candidates in each of these areas, documenting that work, with a plan to collect data from completers in the future. As with their analysis of data sources in response to Standard 1, programs relied primarily on existing data sources. In some cases, programs piloted tools to begin gathering the perspectives of completers. Still, not all perspectives were captured to the extent that we would like or that we intend to capture them in the future. We believe the findings presented in this Quality Assurance Report represent a baseline portrait of the work we do, a starting point from which we can continue to build and grow.

Because they were working to evaluate the ways in which their programs prepared candidates to be successful completers in each area, faculty relied, as with Standard 1, on existing assessments for direct measures of completer performance. In some cases, this meant drawing on performance assessments such as the California Administrator Performance Assessment (CalAPA), portions of which candidates in the Preliminary Administrative Services Credential program complete throughout the program and which program faculty rely on to inform their work. In other cases, such as School Nursing and School Counseling, faculty drew on evaluations completed by our P-12 partners in the field, who observe and evaluate our candidates as their site supervisors. For the Reading/Literacy Specialist, School Counseling, and School Nursing programs, faculty also relied on signature assignments in key courses whose content aligned with the aspects. Many of these signature assignments also aligned with assessments used in programs’ Student Outcome Assessment Plans for their review by the university.

For indirect measures of candidates, programs again relied largely on existing data sources, especially surveys of employers and of candidates at the time of program completion. In some instances, programs administered pilot measures to individuals who completed the program one or more years ago to begin capturing their perspectives on areas of strength and growth. The Reading/Literacy Specialist program, in particular, worked to develop and pilot an instrument with the goal of collecting data from completers. Similarly, the Preliminary Administrative Services Credential program developed a tool to administer to recent graduates, using the AAQEP Standard 1 and 2 aspects as a framework.

While we are fortunate to be situated in California and to have access to survey data that the California Commission on Teacher Credentialing administers to our completers and to their employers, we did face some challenges in using these data sources. For some of our programs, such as the Reading/Literacy Specialist, the number of respondents annually is too small, so data are not disaggregated by institution. Similarly, because the School Counseling credential is considered a Pupil Personnel Services (PPS) credential, responses to that survey are aggregated with those of other PPS programs. The same problem holds for the Employer Survey, which is administered to employers of completers of all our programs with no items specifying which program a completer was part of. Consequently, results could not be disaggregated by program, and we did not find the data meaningful at the programmatic level.

Because we are in the beginning stages of our AAQEP journey, we decided to allow program faculty to determine which data sources would be most meaningful to them and their work. Program faculty worked together to identify the most appropriate data sources, analyze the data, interpret the findings, and articulate next steps for each aspect, creating their own continuous improvement journey to move their program forward. As a unit, we then looked across the responses to see how programs might learn from one another as they engage in this work and how we might support their progress. We document all of this in our QAR. Within the Standard 2 responses, reviewers will find each program’s response to each aspect, along with the program’s synthesis and next steps. In the conclusion, we synthesize all four programs’ findings and highlight our next steps in our ongoing effort to ensure that our program completers are ready to perform as professional educators with the capacity to support access for all learners.

***Please Note: Throughout Standard 2, we utilize data from the CSU Educator Quality Center surveys. We have included screenshots of the analyzed data within the Aspect responses. Unfortunately, we are unable to download raw data to include as links within the responses, and the EdQ Center does not allow us to provide guest logins. We are happy to work with reviewers to log in to the system jointly to allow for any necessary checks.