FAQ For Faculty

  •  FAQs Regarding IDEA with the CampusLabs Platform

    Is there any way to copy the extra questions that I used last semester over to this semester's IDEA surveys?

    Unfortunately, since we are moving to a new platform this semester, there is no way to copy from a previous semester. In future semesters, you will be able to copy custom questions from one semester to the same course in another semester.

    In trying to select learning objectives, there are several that cannot be excluded even though they do not apply to writing courses. How can I remove these inapplicable learning objectives?

    Unfortunately, the questions on the default OSF are standard and instructors cannot remove questions.

    Is it possible to change the start date for my surveys?

    Yes, this can be done. However, there is a complication under the new platform: the existing administration will have to be deleted and a new administration created. If you have already completed the OSFs, they will be deleted and you will need to complete them again.

    Is it possible to change the end date for my surveys?

    Yes, email uaa.idea@alaska.edu to request the end date be changed. If you decide to close your surveys earlier, you will need to give your students plenty of notice that the dates in the default emails are not correct for your classes, and give them time to complete the surveys before they close.

    One of the courses I'm teaching this semester only has 2 students enrolled. Can I still do a survey?

    Under the new platform, only courses with enrollments of 3 or more receive a survey, so a course with 2 students will not be surveyed.

    How do students access the survey? Is it the menu link on Blackboard that says "Course Evaluation"?

    Students will receive an email with a link to the CampusLabs site to access their surveys. They will log in with their UA credentials (username and password) and be able to see the survey. There is a single URL where students log in to see all of their surveys, so you can write the URL on the board or put it in your syllabus.

    About how much time do the surveys take to complete?

    Surveys should not take students long to complete and can now be done on smartphones, iPads, tablets, laptops, etc. We encourage faculty to allow time in class for students to complete the survey, which increases the response rate.

    How will I know when the survey has been completed and how many students have completed it?

    With the new platform, you should be able to log in as soon as the survey is available to students (November 25th for fall 2019) to see how many responses your survey has received. Once surveys close (December 18th for fall 2019), you should be able to review the results in 3-5 days.

    Is it possible to incentivize students to complete the survey? For example, offer a couple of extra credit points if they send me confirmation that they have completed it?

    You can certainly offer extra credit points to encourage students to complete the survey. They will receive an email confirmation when they complete the survey, and can provide that to you as proof.

    Is it possible to select a CIP/discipline code? In the past, this was part of the process.

    No, it is not possible for individual faculty to select discipline codes on this new platform. Programmers are working to figure out if this is something we can import from Banner in the future.

    For CampusLabs technical support, please call (716) 270-0000 or email support@campuslabs.com. For username/password issues, contact UAA IT Services at (907) 786-4646, option 1, or uaa.techsupport@alaska.edu.

  • Filling out the Faculty Information Form

    How do I choose objectives?

    Generally, the objectives you choose on the OSF should be a subset of the course goals that you wish to evaluate that semester. See the questions below for details.

    Should I make the objectives match the course goals in the CCG?

    Generally, they should be a subset of the course goals, since you are not likely trying to achieve objectives beyond the scope of the course. Because the IDEA feedback system is designed to help faculty measure the effects of their teaching choices on the course, the objectives you select should be those that you wish to measure that semester. This might be only some of the course goals, as the IDEA system provides the most accurate information when the number of objectives is small. See "How many objectives should I choose and at what level?" below for information on how many to select.

    Do I set the objectives or does my department?

    This varies. Some departments have discussed the objectives and selected a set together. Please ask your department chair or director for this information. Note that even with common departmental objectives, individual faculty may wish to select a smaller subset or adjust the "Important" or "Essential" rating in order to evaluate progress on a specific objective in a semester. See "Should I make the objectives match the course goals in the CCG?" above for more information.

    How many objectives should I choose and at what level?

    Except in unusual circumstances, you should not pick more than three objectives as "Important" or "Essential." Based on past results with IDEA, the "Progress on Relevant Objectives" score decreases with each additional objective (see IDEA's website). The "Progress on Relevant Objectives" score is a weighted average of student responses to the objective questions, with each "Essential" objective counting twice as much as an "Important" objective. The student responses on the rest of the objectives are ignored.
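
    To make the weighting concrete, here is a minimal illustrative sketch in Python (this is not IDEA's published calculation; the objective names, response values, and exact averaging are assumptions for demonstration only):

      # Hypothetical sketch: "Essential" objectives count twice as much as
      # "Important" ones; all other objectives are ignored entirely.
      WEIGHTS = {"Essential": 2, "Important": 1}

      def progress_on_relevant_objectives(ratings, mean_scores):
          """ratings: {objective: "Essential" / "Important" / other}
          mean_scores: {objective: mean student response on a 1-5 scale}"""
          total = weight_sum = 0.0
          for objective, rating in ratings.items():
              weight = WEIGHTS.get(rating, 0)
              total += weight * mean_scores[objective]
              weight_sum += weight
          return total / weight_sum if weight_sum else None

      # Example: one "Essential" and one "Important" objective
      ratings = {"factual_knowledge": "Essential", "team_skills": "Important"}
      means = {"factual_knowledge": 4.2, "team_skills": 3.6}
      print(progress_on_relevant_objectives(ratings, means))  # (2*4.2 + 3.6) / 3 = 4.0

    The sketch also suggests why adding objectives tends to lower the score: each added objective spreads the weight across more questions, so a lower rating on any one of them pulls the overall average down.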

    What do the objectives mean?

    To better understand each objective, you may want to read the documents on IDEA's website. "Some Thoughts on Selecting Objectives" describes each objective in depth.

    What do students actually answer?

    The students answer a set of three questions for each objective plus additional information about themselves that is used to adjust the scores. This means they answer questions about objectives that you list as not important. This is a technology issue that cannot currently be changed.

    Do the "Contextual Questions" matter?

    The options labeled "Contextual Questions" are not part of the rating system. These are part of IDEA's internal research. How you answer them does not affect your results.

    Does my choice of department matter?

    The department code that you select determines which department your scores are compared against nationally in the "Discipline" section of the report.

    Do my choices affect the adjusted scores?

    None of the choices on the OSF are used in calculating the adjusted score at this time. The adjustments are based on information the students self-report and on some information about the class (not on the OSF).

    Can I complete the OSF after the survey becomes accessible to students?

    Yes. The objectives on the OSF can be changed and the OSF can be completed. However, additional questions cannot be changed or added after the survey becomes accessible to students.

  • Improving Response Rates

    How do I improve the response rates?

    Response rates are highest when students understand the importance of the surveys, when faculty and departments handle setup and communication in an organized way, and when both faculty and students understand the technology involved. See the questions below for tips on each of these aspects. Also see the questions on filling out the OSF.

    How do I convince students that completing the surveys is important?

    Response rates are highest in classes where students understand how the feedback forms are used and believe the results will be heeded. Following are some options for helping your students understand the use and importance of this feedback system.

    • Throughout the semester, tell the students how your course has changed in response to student feedback. When anything in class is the result of student feedback, mention it. They will then believe in the importance of their feedback. Multiple reminders will also help them remember its importance when the surveys are available.

    • Take time in class before the surveys are available to explain how you and the university use the forms. 

    • Allow time in class for students to complete the survey. The new platform allows students to access surveys via smartphone, tablet, laptop or desktop computer by logging on to a single URL. Remind students ahead of time to come prepared to class with either their cell phone, tablet, or laptop in order to complete the survey in class.

    • Students are usually not aware that their feedback, if provided, is used in retention, tenure, and promotion. Let them know that this mechanism is their chance to be heard. Also remind them of the changes that you made. State the changes in wording that is similar to the questions they answer, so that they make the connection.

    • Prepare what to say to students about IDEA at the beginning of the course. One of the most important things faculty can say to students on the first day of the course is to share with them what they have learned from student feedback in previous courses. They might say something like, "Based on what I learned from my IDEA Reports (or Student Ratings of Instruction Reports), I have changed something this semester (and tell what it is that has been changed)," or "Based on what students said on their IDEA Surveys, I have confidence that this course design will help you as you work to achieve the goals of the course." Finally, as faculty review the syllabus with their students, they might want to point out how the course objectives relate to the IDEA Learning Objectives.

    What ways can I communicate the survey dates to the students?

    The more frequently students are reminded, the more likely they are to respond. Consider the following options for communicating the dates and methods of filling out the forms.

    • Include a note about the IDEA surveys in the syllabus and the dates the survey will be available for your class.

    • Post an announcement on Blackboard before the surveys open that includes the dates they will be available for your class.

    • Email your students multiple times when the surveys are available. The email can be sent through Blackboard.

    • Discuss the importance of the surveys in class before they are available.

    • Remind the students every class period during the time in which they are available for your class.

    What can I do to help them understand how to fill out the surveys (the technology)?

    When students can easily find and begin the surveys they are more likely to fill them out. Consider the following options to minimize the technology barrier for students.

    • Require students to access your course in Blackboard multiple times before the survey is available in your class. This may include accessing the syllabus, assignment lists and descriptions, and viewing their grades.

    • Include links to the survey in multiple locations on Blackboard. Contact the ITS help desk to learn how to add links.

    • Demonstrate getting to the survey in class. Note you won't have the link on your account, but you can show the page. A student might also do the demonstration.

  • Using IDEA Results

    Preface: 

    Teaching is a complex picture that involves multi-faceted talents, including, among other things, interpersonal dynamics between instructors and students, crafting of assignments, clarity of lectures, speed and quality of grading, and inspiring students to learn outside the classroom. There is, consequently, no single way to assess how well someone manages all the complexities of teaching. Instead, a complex picture requires multiple methods of assessment, including, among other things, peer observations of classroom teaching, peer review of assignments, and student ratings. Students, of course, are a valuable source of information about teaching because they see the class from a point of view that instructors don't see. However, students' perceptions and ratings are only part of the picture: instructors could be highly effective but get modest or poor evaluations from students (e.g., perhaps because the course material is very difficult); or instructors could get strong evaluations from students (e.g., perhaps because of a dynamic personality or easy grading) but not be very effective in teaching the material. IDEA is not designed to provide a complete picture of an instructor's teaching; it cannot, for instance, reveal how effectively an instructor is imparting the material. Rather, IDEA focuses on only one piece of the picture: students' perceptions. Unlike UAA's previous SDIS system, IDEA

    1. gives faculty the flexibility to customize the questions that are asked of students.
      For instance, an instructor can easily add questions to get students' feedback about a new approach that the instructor is implementing in the course.
    2. gives faculty the opportunity to be evaluated on those aspects of teaching that are most relevant for the course, rather than on across-the-board objectives that might not be relevant.
      For instance, instructors can specify whether their course should be evaluated more on its ability to encourage the search for personal values, or on its ability to teach a series of steps in some complex problem-solving tasks, or on some other course-specific learning objective.
    3. allows faculty to see how their courses compare nationally to other courses in their discipline or subdiscipline.
    4. provides statistical adjustments for factors that are known to affect students鈥 evaluations (e.g., class size).
    5. advertises its weaknesses, calling attention, for instance, to low response rates.

    How can I use the IDEA results?

    If you have a specific goal, then you can fill out the OSF to match your goal, collect data over multiple semesters, and use the student surveys as part of the evidence that you have achieved that goal. IDEA surveys can be indicators of change in context. They are not good indicators of static concepts of quality.

    Student survey results can be used as evidence of effective change in a class. If students' responses to the "Progress on Relevant Objectives" score and to the individual objectives improve after you make a change in a course, you have some evidence that the change improved student perception of objective achievement.
    Example: An instructor adds a guided tour of library resources (provided by the Consortium Library faculty) to a course in which research is expected. If, after doing this for the first time, the responses to "Learning how to find and use resources for answering questions or solving problems" increase noticeably, the instructor has some evidence that the change might have been successful.

    Student survey results can be used as evidence of consistency. If, over a number of semesters, the student responses on the objectives remain similar (remain in the same bands on page one: Much Higher, Higher, Similar, Lower, Much Lower), then student perception is constant over time, demonstrating consistency in your work.
    Student survey results from the diagnostic form (page 3) can be used for faculty development. The students' answers to individual questions can, in conjunction with other information, guide a faculty member in changing how they achieve the course objectives.
    Example: A faculty member notices over multiple semesters that students rate the course highly for "Introduced stimulating ideas about the subject" but consistently rate "Demonstrated the importance and significance of the subject matter" lower. If both aspects are important for a given course, the instructor might then choose to include more applications, if that is appropriate, or explain to the students in which courses they will learn to apply the theory being learned in this class, or take some other action consistent with the goals of that course. If in following semesters that response increases, the faculty member also has some evidence of successful development.

    What are the limitations of the IDEA results? 

    The results cannot measure whether a good or bad job was done in a class. The results indicate student perception which may not match reality. Also, failure to meet some objectives may not be bad if the objective missed is not required in the course. 


    Example: A faculty member decides to use groups in class to improve student engagement. The instructor adds the "Acquiring skills in working with others as a member of a team" objective on the OSF and then instructs the students on how to work in groups throughout the semester. The students may perceive that their instruction on how to work in groups was insufficient and provide low ratings for this objective. The instructor's "Progress on Relevant Objectives" score will now be lower. However, if group work was not a goal of the course as defined in the CCG and by the department, then this faculty member has not done a bad job. They may choose either to improve their group instruction or to stop using groups.

    The results often cannot be used to compare faculty members. If "reliability" is low, then comparison to other faculty members, or use of the discipline or institution fields, is statistically invalid and inappropriate. Note that this does not mean the results are not useful as a measure of effectiveness in that class, which does not require comparison to others.

    How do I …

    Check quickly if I am meeting the objectives I recorded for this course?

    The "Progress on Relevant Objectives" entries on the first page answer this question. The adjusted score on the left (a number between 1 and 5) is the students' perception of meeting the objectives on a five-point scale, adjusted for known effects outside the instructor's control. See "Adjusted" below for more information. Higher scores represent students perceiving better achievement of the objectives.

    You can also check where your adjusted score falls in the five bands on the right ("Much Lower," "Lower," "Similar," "Higher," "Much Higher"). The words refer to students' perception of meeting objectives in your class in comparison to other classes. For example, if your adjusted score for "Progress on Relevant Objectives" is in the "Similar" band, then students reported the same perceived level of success in your class as students reported in all classes reporting to IDEA. For instance, if your adjusted score in the discipline is 47, you can see in the boxes above that 45-55 is in the "Similar" band. Thus students in your class reported a perception that you met the objectives in your class about as well as students reported in all classes in your discipline across all schools using IDEA. Note that these comparisons to discipline and institution will consistently be above or below the main comparison. This again reflects student biases. For example, departments that teach more students in general education courses than in elective courses will find that their comparison on meeting objectives is higher in the "Discipline" category than in the main comparison ("All Classes in the IDEA database").

    Note that since students may not fully understand the material of the course, they may not be able to accurately judge whether objectives were met. Additional measures of success must be checked.

    Check if I met a specific objective I recorded for this course? 

    The same information provided as a summary on the first page is provided per objective on the second page. Note that there will be no information for objectives that you did not select.

    What do these mean?

    "Reliable"

    This is a technical concept from statistics. Think "stability." In brief, reliability/stability focuses on whether the results from those students who completed IDEA are likely to be relatively stable and not fluctuate or oscillate widely with additional respondents. "Unreliable" results are reported when there are relatively few respondents (even in low-enrollment courses) or a small percentage of students respond; in these cases the addition of a few more respondents can have a profound impact on the results. "Reliable" results are reported when a sufficient number and a sufficient percentage of students respond, suggesting that the results are not likely to fluctuate widely with additional respondents.
    Example: A faculty member teaching a small course incorporates service learning into the class. Note that sufficiently small classes are always unreliable. If the results are listed as representative, and the students gave a higher rating for "Learning to analyze and critically evaluate ideas, arguments, and points of view," the instructor can be confident that the service learning did encourage broader perspectives. They cannot claim to have done so better than someone else, however.
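
    To see why a handful of respondents makes results unstable, here is a minimal illustrative simulation in Python (the ratings are randomly generated placeholders; this is not IDEA's actual reliability calculation):

      import random

      # Illustration only: with few respondents, a few extra responses can
      # shift the average rating noticeably; with many, it barely moves.
      random.seed(0)

      def shift_after_new_respondents(n_existing, n_new=3):
          """Change in the average 1-5 rating after n_new extra responses arrive."""
          existing = [random.randint(1, 5) for _ in range(n_existing)]
          new = [random.randint(1, 5) for _ in range(n_new)]
          before = sum(existing) / len(existing)
          after = sum(existing + new) / (len(existing) + len(new))
          return abs(after - before)

      print(shift_after_new_respondents(5))    # small class: swing can be large
      print(shift_after_new_respondents(100))  # large class: swing is tiny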


    "Representative"

    This is a technical concept from statistics. In brief, it means that the average results represent the perceptions of all students in the class, whether or not all of them filled out the survey. If results are continually not representative, an instructor cannot make claims about helping all students solely on the basis of the student surveys; other evidence will be needed. However, the instructor can still use the results to indicate quality of work and to indicate change.
    Example: A faculty member consistently has a 50% response rate. The response to "Gaining factual knowledge" is consistently high. The instructor does have evidence that the type of student who responds to the survey perceives that they are learning. Other evidence will be needed to address those who do not respond to the survey.
    Example: A faculty member consistently has a 50% response rate. The instructor incorporates writing assignments in the course to help students improve their ability to communicate their knowledge. If the responses to "Developing skill in expressing myself orally or in writing" increase after adding these assignments, then the instructor has evidence that the assignments are effective. The instructor does not know whether the students who did not respond have improved in their work, but that reflects on those students rather than on the assignments.


    "Adjusted"

    These scores are modified to reflect effects on student responses that are outside the instructor's control. The adjustment is based on information provided by the university and reported by the students. For a complete description, see the IDEA website. The most commonly noticed adjustment is based on whether the class was required (e.g., a general requirement) or optional (e.g., an upper-division elective in the major). Scores are adjusted upward for required courses and downward for elective courses to account for a known student bias based on their desire to take a course. The "raw" scores reflect student responses as reported, but they should not be used for comparison purposes. The "adjusted" scores are better for broad comparisons.
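
    As a purely illustrative sketch of that idea (the factors and magnitudes below are hypothetical placeholders, not IDEA's actual statistical model), an adjustment simply nudges the raw average up or down for influences outside the instructor's control:

      # Hypothetical illustration only: IDEA's real adjustments come from a
      # statistical model of student-reported motivation, work habits, class
      # size, etc. The factor values below are invented for demonstration.
      ADJUSTMENTS = {
          "required_course": +0.10,  # required courses tend to be rated lower,
          "elective_course": -0.10,  # electives higher, so scores are nudged
          "large_class": +0.05,      # to offset those known biases
      }

      def adjusted_score(raw_score, factors):
          """Nudge a raw 1-5 average for factors outside the instructor's control."""
          score = raw_score + sum(ADJUSTMENTS.get(f, 0.0) for f in factors)
          return round(max(1.0, min(5.0, score)), 2)  # stay on the 1-5 scale

      print(adjusted_score(3.8, ["required_course", "large_class"]))  # 3.95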