
AAQEP Accreditation 2022

Standard 2 Aspect C

Standard 2c: Program completers engage in professional practice in educational settings and show that they have the skills and abilities to do so in a variety of additional settings and community/cultural contexts. For example, candidates must have broad and general knowledge of the impact of culture and language on learning, yet they cannot, within the context of any given program, experience working with the entire diversity of student identities, or in all types of school environments. 

Candidate preparation includes first-hand professional experience accompanied by reflection that prepares candidates to engage effectively in different contexts they may encounter throughout their careers.


Data Sources & Analysis

The following three data sources represent three different teaching experiences/contexts for our program completers. For Data Source #1, the SVP occurs during candidates' initial student teaching placement. For Data Source #2, the EHD 155B evaluations occur during candidates' final student teaching at a different school site. Data Source #3, the CATIP Induction Assessment, is completed by our candidates after they finish our program, are hired by a school, and begin their two-year induction program. Together, these data sources provide evidence of how well our program prepares candidates to create productive learning environments during different phases of the program and after program completion.

Data Source 1

Fresno Assessment of Student Teachers (FAST) Site Visitation Project (SVP) Reflection

Description of Data Source:
The Site Visitation Project portion of the FAST requires candidates to plan and deliver a lesson that is consistent with current recommended methods for teaching the subject and designed to encourage the acquisition and use of academic language in the subject area. The lesson is to be relevant to students' prior experiences, interests, and backgrounds. Activities and strategies are to be designed to encourage active participation and communication by all students, with opportunities for inquiry and reflection. Candidates determine the learning needs, backgrounds, and interests of their students and select one focus student. They are required to video-record the delivery of the lesson and to effectively implement and monitor instruction consistent with subject-specific pedagogy to teach the identified academic content standards. The assessment is scored using specific, task-focused rubrics, and all coaches in the Ag Specialist program are calibrated to score the assessment.

Perspective Captured from Data Source: University Coach

Rationale for using Data Source:
For Data Source #1, the SVP Reflection section requires candidates enrolled in initial student teaching to reflect on a lesson they planned and taught. They are expected to demonstrate a realistic understanding of the relationship between content knowledge and planning and teaching. Candidates provide suggestions for improving students' access to content and give examples of interactions from the lesson, explaining how those interactions promote productive student learning, all of which contribute to developing a positive learning environment. Candidates collect information on their students that includes a) English proficiency level, b) identified needs (IEP, 504, behavioral plans), c) ethnicity, and d) reading and writing proficiency. They then prepare a lesson plan, teach the lesson, and video-record it. Candidates then watch the video and prepare a self-evaluation of their planning and teaching of the lesson.

Specific Elements of Data Source: 
Site Visitation Project Reflection Rubric: Applying Knowledge of Students (TPE 2.2, 3.2, 6.1) 

Overall scores for three sections:

  • Subject Specific Pedagogy
  • Applying Knowledge of Students
  • Student Engagement

Definition of Success for Each Element:
The university coaches encourage candidates to strive for a score of four on the scoring rubric and would like to see scores of 2.5 or better. Candidates must score a two or better on the scoring rubric to show they meet the expectation for the site visitation project. To score a two, the rubric calls for candidates to effectively implement instruction consistent with subject-specific pedagogy.

Displays of Analyzed Data:
Table 1:  Site Visitation Project Data Summary Fall 2018 - Spring 2020

Semester      N    SVP Planning    SVP Implementation    SVP Reflection
Fall 2018     14   2.14            2.14                  2.07
Spring 2019   14   2.00            2.07                  2.07
Fall 2019     15   2.27            2.27                  2.07
Spring 2020   16   2.13            2.31                  2.25

Link to Full Dataset:  FAST Scores SVP F18 - Sp20 Ag_Students Summary & Data

Interpretation of Data:
Although candidates' average scores did not reach the goal of 2.5 or better, all candidates successfully met the minimum expectation for the Reflection portion by scoring 2.0 or better (a 2.0 is required to pass) on the SVP scoring rubric. Few candidates score above a 2.0 on their SVP; however, because the SVP is conducted and assessed early in the program, it is understandable that mean scores would be only slightly above the minimum program expectation, with room for improvement as candidates progress through their initial and final student teaching experiences.

Data Source 2

EHD 155B Mid-Semester and Final Evaluations

Description of Data Source: 
In the second semester of their field experience, candidates receive two formal evaluations of their teaching from their university coach in collaboration with their mentor teacher: one at mid-semester and one at the end of the semester. Candidates are evaluated in five key areas: maintaining effective environments, monitoring student learning and making adjustments during lessons, addressing the needs of all students, subject-specific pedagogy, and assessment.

Perspective Captured from Data Source: Mentor Teacher and University Coach

Rationale for Using Data Source:
The EHD 155B Mid-Semester and Final Evaluation rubric is completed by the mentor teacher and university coach collaboratively. In doing so, the mentor and coach evaluate the candidate's ability to maintain an effective learning environment, and to monitor the learning environment and make adjustments while teaching, on a 4-point rubric scale.

Specific Element of Data Source:
Scores for Maintaining Effective Environments

Definition of Success for Each Element: 
The university coaches encourage candidates to strive for a score of four on the scoring rubric and would like to see mean scores of 3.0 or better. A score of 2.0 indicates that the candidate meets the expectations.

Displays of Analyzed Data:
Table 2: EHD 155B Mid-Semester and Final Evaluations

Spring 2020 (N = 15)
Evaluation Criteria                                                   Mid-Semester   Final Eval
Maintaining Effective Environments                                    2.93           3.29
Monitoring Student Learning and Making Adjustments During Lessons     2.71           3.21
Addressing Needs of All Students                                      2.64           3.14
Subject-Specific Pedagogy                                             2.86           3.07
Assessment                                                            3.00           3.07

Fall 2020 (N = 17)
Evaluation Criteria                                                   Mid-Semester   Final Eval
Maintaining Effective Environments                                    2.88           3.05
Monitoring Student Learning and Making Adjustments During Lessons     2.71           3.05
Addressing Needs of All Students                                      2.65           2.94
Subject-Specific Pedagogy                                             2.35           2.82
Assessment                                                            2.65           2.82

Spring 2021 (N = 24)
Evaluation Criteria                                                   Mid-Semester   Final Eval
Maintaining Effective Environments                                    3.26           3.35
Monitoring Student Learning and Making Adjustments During Lessons     2.96           3.17
Addressing Needs of All Students                                      3.00           3.22
Subject-Specific Pedagogy                                             2.96           3.17
Assessment                                                            2.78           3.47

Interpretation of Data:
Although candidates' average scores did not reach the goal of 3.0 or better in every area in Fall 2020, the mean score for Maintaining Effective Environments met or exceeded 3.0 on the final evaluation in all three semesters. All three cohorts also showed notable increases between their Mid-Semester and Final Evaluation scores, demonstrating the positive influence of the EHD 155B experience on candidates' ability to maintain an effective learning environment.

Data Source 3

California Agricultural Teachers’ Induction Program (CATIP) Individual Induction Plan (IIP) Self-Assessment

Description of Data Source:
The California Agricultural Teachers’ Induction Program (CATIP) grew out of the need to induct new agriculture teachers into the dynamic field of agricultural education, as identified by the Vision 2030 process of the California Agricultural Teachers’ Association (CATA) that began in 2013. CATIP is a consortium built to provide accredited induction services that support early-career California agriculture teachers holding Single Subject—Agriculture or Agricultural Specialist credentials during their first two years of teaching. The program provides contextualized mentoring and support over a two-year span for those electing to begin and complete induction services in CATIP; new teachers in the program are referred to as Credential Candidates (CC). As our Ag Specialist completers enter the CATIP program, they complete a self-assessment in which they indicate their level of preparedness across various skills and responsibilities required of a secondary agricultural educator. Their results are analyzed and used to help each new teacher and their mentor formulate the teacher's Individual Induction Plan.

Perspective Captured from Data Source: Completers

Rationale for using Data Source:
The CATIP self-assessment measures our completers’ perceptions of their level of preparedness as they enter their induction program. Participants indicate their perceived level of preparation for 16 skills/responsibilities related to the California Teaching Performance Expectations by selecting the appropriate number on a 5-point Likert-type scale: 1 = Not Prepared, 2 = Less than Adequately Prepared, 3 = Adequately Prepared, 4 = More than Adequately Prepared, and 5 = Well Prepared.

One of the TPEs is “Creating and maintaining effective environments for student learning.” The table below provides the data for this area.

Specific Elements of Data Source:
Teaching Performance Expectations (TPE): Creating and maintaining effective environments for student learning.

Definition of Success for Each Element: 
It is our program expectation that completers will rate themselves as adequately prepared (3) or above. 

Displays of Analyzed Data:
Table 3: 2018-2020 Fresno State California Agricultural Teachers’ Induction Program Participants

Creating and Maintaining Effective Student Learning Environments
Year         N    Mean Score
2018         4    3.50
2019         15   4.00
2020         12   3.67
Grand Mean        3.81
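
Note: the grand mean reported above appears to be the cohort-size-weighted average of the three yearly means rather than a simple average of the three values (which would be roughly 3.72). This reading is an inference from the figures in Table 3, not stated in the dataset itself:

\[
\text{Grand Mean} \approx \frac{4(3.50) + 15(4.00) + 12(3.67)}{4 + 15 + 12} = \frac{118.04}{31} \approx 3.81
\]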

Link to Full Dataset:  CATIP Completer Data Fresno State 2018-2021 Summary & Data Overall

Interpretation of Data:
Thirty-one completers who entered the CATIP teacher induction program over the past three years rated their self-perceived ability to create and maintain an effective student learning environment. Across the three cohorts, mean scores ranged from 3.50 to 4.00. The data indicate that our completers perceive themselves to have been at least “Adequately Prepared” in the “Creating and Maintaining Effective Environments for Student Learning” competency area. Based on each individual's self-assessment and identified needs, CATIP mentors and participants work together to formulate the participant’s professional development plan, including improving their ability to maintain an effective learning environment.

Next Steps:
To address these findings, we will devote more time to the importance of productive learning environments and to the role of reflection in helping candidates identify specific activities and strategies that would improve the learning environment. We will also continue to track completers to identify areas we need to address to improve the performance of future candidates. To evaluate our efforts in this area, we will continue to analyze the data we collect each year and discuss as program faculty how we might improve our candidates’ scores.
