Instrument and Design Oversights

We found several problems with our original survey instrument and survey plan. The first was that we decided at the last minute to differentiate students in 2-year and 4-year degree programs, so students had to indicate their degree plans by writing a 2 or 4 in the upper corner of their surveys. As a result, three of the collected surveys contained no data about student degree program. The second problem was that the wording of the willingness-to-participate instrument provided by Menzel led to missing data on 4 of the 21 surveys we randomly chose to analyze.

Specifically, students were asked how willing they were to participate when they sat in the front of the room, and the three students who did not answer that item wrote in the margins that they never sat in the front of the room; the student who opted not to answer the item about sitting in the back of the room left a similar note.

A third problem was a numbering scheme that made data entry confusing: we numbered the items consecutively across the two scales measuring student gender and teacher gender, when it would have been less confusing to start the numbering at one for each individual scale. The consecutive numbering produced incorrectly entered data that had to be re-entered, and occasionally the re-entered data required re-entering; the accuracy of the data set was becoming questionable.

A fourth problem was our concern that surveying during week four might not have given students enough exposure to the classroom and the teacher they were evaluating. The fifth problem was potential survey fatigue: because of the number of variables, the original survey ran to three sheets of paper.

The final problem became apparent once we started the data analysis: we had four independent variables with three potential states for each variable, so a fully crossed design would contain 3^4 = 81 cells. Our plan was far too complex and would require an enormous data set to yield anything meaningful, as the sketch below illustrates.
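To make the combinatorial problem concrete, the following minimal sketch enumerates the cells of a fully crossed four-factor, three-level design; the level labels are placeholders, not the study's actual variable codings.

```python
from itertools import product

# Four independent variables, three potential states each (placeholder labels).
levels = ("state_1", "state_2", "state_3")
cells = list(product(levels, repeat=4))
print(len(cells))  # 3 ** 4 = 81 cells, each needing enough respondents to analyze
```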

Doing the pilot study was enormously educational, and we were able to make important revisions, as explained below.

Instrument Revision and Final Participants

In our study, we addressed all six concerns and resurveyed a larger student population. First, we added an item at the bottom of the survey to differentiate students in 2-year and 4-year degree programs.

We placed it last because we felt that placing it first might somehow confound the answers students gave. We addressed the second problem by adding the following sentence to the survey: “If certain questions seem not to apply to you, answer them as if the situation did apply. For example, you may never sit in the back of the room, but if you did, how often do you think you would choose to talk?”

This strategy was extremely successful. Of the 152 surveys collected, only two were unusable, and those were unusable because of missing answers on other scales on the survey instrument. This 98.6% rate of usable surveys greatly surpassed the 89.2% rate in the pilot study.

With regard to the potential for mis-entered data, we separately numbered the items of each scale for clearer data entry; as a result, we experienced only 3 lines of mis-entered data requiring reentry. To combat survey fatigue, we removed the variable of student-reported gender, shortening the survey to two sides of one sheet of paper. Dropping the variable also simplified our design plan and made analysis more manageable.

Once all these changes were made, we used the pilot study’s protocol to survey 150 students from the same community college. In response to the final problem, we surveyed during week seven, when the students had almost twice as much exposure to the classroom and the teacher they were evaluating.

Data Analysis

In the following sections, the measures for perceived immediacy, willingness to talk, and perceived teacher gender will be explained, and the alphas will be given for each scale. An explanation of the type of statistical analysis performed will then be followed by the Results and Discussion section.

Measures

Perceived Immediacy. Like Menzel and Carrell (1999), we used the instruments developed by Christophel (1990) to measure both nonverbal and verbal instructor immediacy behaviors. While Menzel and Carrell (1999) treated each type of immediacy as a distinct variable to be summed and analyzed separately, we treated immediacy as a single variable represented by the sum of the nonverbal and verbal scores.

The Christophel (1990) instruments asked students to rate how often the instructor engaged in 14 nonverbal and 20 verbal immediacy behaviors on a scale from 0 (never) to 4 (very often). Christophel (1990) reported a Cronbach’s alpha of .84 for the verbal immediacy instrument and .77 for the nonverbal. In our study, Cronbach’s alpha for the combined scale was .92. Like Menzel and Carrell (1999), we used the scores on the immediacy instrument to divide instructors evenly into categories of high, moderate, or low immediacy behavior. With 34 items scored from 0 to 4, the potential range of instructor immediacy scores was 0 to 136.
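For readers who want to check the reliability figure, here is a minimal sketch of the standard Cronbach’s alpha computation, assuming the responses to the 34 combined items are stored as a respondents-by-items matrix. The function name and the randomly generated stand-in data are illustrative only; random data will not reproduce the .92 reported above.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative stand-in data: 150 respondents x 34 items rated 0-4.
rng = np.random.default_rng(0)
scores = rng.integers(0, 5, size=(150, 34))
print(round(cronbach_alpha(scores), 2))
```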

Because there were 150 surveys in the data set, 50 instructor ratings were placed in each category based on the immediacy score, with low immediacy being a score below 79, moderate immediacy a score from 79 to 93, and high immediacy a score above 93. Table 1 provides the means for each immediacy category.
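A minimal sketch of the cutoff-based categorization follows; the cutoffs (79 and 93) come from the text, while the simulated scores and the boundary handling (scores of exactly 79 or 93 counted as moderate) are assumptions made only to keep the example runnable.

```python
import numpy as np

def immediacy_category(score: int) -> str:
    # Reported cutoffs: low < 79, moderate 79-93, high > 93.
    if score < 79:
        return "low"
    if score <= 93:
        return "moderate"
    return "high"

# Illustrative stand-in data: 150 summed immediacy scores in the 0-136 range.
rng = np.random.default_rng(1)
scores = rng.integers(0, 137, size=150)
labels = np.array([immediacy_category(s) for s in scores])
for cat in ("low", "moderate", "high"):
    mask = labels == cat
    print(cat, mask.sum(), round(scores[mask].mean(), 1))  # count and mean score per category
```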