NRC FAQs


(If you have further questions, please contact JP Monroe at 346-2085 or email jpmonroe@uoregon.edu)

  1. What is the NRC Assessment?
  2. How are the 2010 NRC rankings different from the 1995 rankings?
  3. Which University of Oregon programs are rated?
  4. What properties of departments are rated? What are the five different rankings?
  5. What does it mean that these are “ranges of rankings”?
  6. How were the ranges of rankings calculated?
  7. How are these data defined? How can I check whether my department’s data are correct?
  8. Where did the data come from? Who at University of Oregon vetted the data?
  9. These data don’t reflect the current state of my department. Can we update them?
  10. As a faculty member, I spent a lot of time filling out a faculty survey. Why are most of those items not included?
  11. Why isn’t my department ranked at all?
  12. How can faculty and departments use the rankings?
  13. How can prospective graduate students use the rankings?
  14. How can current graduate students use the rankings?

 

1. What is the NRC Assessment?

The NRC Assessment of Research Doctorate Programs is a national study that aims to evaluate the quality of PhD programs across the United States. It was conducted by the National Research Council (NRC). The rankings will be released on September 28, 2010. In all, 4,838 doctoral programs at 212 universities in 62 fields were rated. The data focus on many dimensions of doctoral programs to facilitate comparisons among programs in the same field.

The 2010 report is the third such assessment; the first two were published in 1982 and 1995. The current project was conducted between 2005 and 2010. Data were collected in 2006-07 and describe the students and faculty of each university during the 2005-06 academic year.

 

2. How are the 2010 NRC rankings different from the 1995 rankings?

There are substantial differences between the 2010 and 1995 rankings, from data collection to statistical analysis to the format of the rankings themselves. These differences are addressed throughout this FAQ; some of the major changes include the following:

  • More programs are ranked in 2010.
  • Each program will receive five ranges of rankings rather than one ranking.
  • Rankings are based on different variables.
  • Rankings are determined by a new statistical method.

 

3. Which University of Oregon programs are rated?

Twenty-three of University of Oregon’s doctoral programs were included in the study.

  • Anthropology
  • Biology
  • Chemistry
  • Computer and Information Science
  • Comparative Literature
  • East Asian Languages and Literature
  • Economics
  • English
  • Geography
  • Geological Sciences
  • History
  • Human Physiology
  • Journalism
  • Linguistics
  • Mathematics
  • Music
  • Philosophy
  • Physics
  • Political Science
  • Psychology
  • Romance Languages
  • Sociology
  • Theatre Arts

 

4. What properties of departments are rated? What are the five different rankings?

The NRC used 20 variables that it considers “indicators of program quality.” Variables include measures of faculty research activity, student support and outcomes, and faculty and student demographics. The indicators come from the extensive data provided by the institutions themselves as well as some data collected by the NRC (e.g., faculty awards, publications, and citations).

 

Each program will receive five ranges of rankings:

  • Overall S-ranking (“Survey”): Based on 20 variables, weighted according to field-specific faculty opinions of the relative importance of the various program factors.
  • Overall R-ranking (“Regression”): Based on 20 variables, weighted based on field-specific faculty rankings of actual programs.
  • Research Activity subscale: Based on 4 variables used in the Overall rankings.
  • Student Support and Outcomes subscale: Based on 5 variables: 4 used in the Overall rankings, plus “Whether the department collects student outcome/placement data.”
  • Diversity of the Academic Environment subscale: Based on 5 variables used in the Overall rankings.

 

5. What does it mean that these are “ranges of rankings”?

The rankings will be presented in a different form than most other rankings. Rather than receiving a single ranking (e.g., 1st, 5th, 32nd), each program’s five sets of rankings are presented in ranges. The ranges mark 90% confidence intervals.

A program’s range of rankings might be, for example, 2-8 or 4-27 or 13-37. These ranges reflect the inherent uncertainty in ranking a particular program, due to differences among raters, statistical uncertainty, and year-to-year variability in the data. Presenting a range rather than a single number makes this uncertainty explicit. A range of 2-8 should be read, “It is 90% certain that the program is ranked between 2nd and 8th in this field.”

 

6. How were the ranges of rankings calculated?

The ranges of rankings were produced from a complex statistical analysis. A brief summary follows.

Overall S- and R-ranges of rankings are derived from the values of the indicators and the field-specific weights for each variable. The S- and R-weights differ by field, recognizing that faculty members in different disciplines value different aspects of doctoral programs. The 20 variables are weighted to produce quantitative estimates of program quality. The field-specific weights are based on two faculty opinion surveys conducted in spring 2007.

  • S-weights, based on surveys: The first survey asked all faculty across all fields to rate the importance of 21 variables that influence overall program quality.
  • R-weights, based on regressions: The second survey, the “implicit” or “anchoring study,” asked a subset of faculty to rate a sample of programs in their field. Regression analysis was then used to determine which quantitative variables, at what weights, most closely predicted the program rankings in each field.

Ranges of rankings: Rankings from many raters were aggregated to yield ranges of rankings. The NRC study used a “random halves” procedure in which weights are calculated from the responses of a randomly selected half of the faculty respondents. This is done 500 times, producing 500 rankings for each program. For each program, the 500 rankings are then sorted from best to worst and the bottom five percent and top five percent are dropped, leaving two endpoint ranks that cover the middle 90% of the 500 rankings.
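The random-halves procedure above can be sketched in a few lines of Python. The program names, indicator values, and rater weights below are invented illustration data, not NRC figures, and the example uses three indicators rather than the study's 20; it shows only the shape of the computation (weighted scores, repeated resampling of raters, trimming to the middle 90%).

```python
import random
import statistics

# Illustration only: three hypothetical programs, each with values
# on three simplified quality indicators (not actual NRC data).
programs = {
    "Program A": [0.9, 0.7, 0.8],
    "Program B": [0.6, 0.9, 0.5],
    "Program C": [0.4, 0.5, 0.9],
}

# Each faculty respondent supplies importance weights for the indicators.
random.seed(0)
raters = [[random.random() for _ in range(3)] for _ in range(40)]

def rank_programs(weights):
    """Rank programs (1 = best) by their weighted indicator score."""
    mean_w = [statistics.mean(col) for col in zip(*weights)]
    scores = {name: sum(w * v for w, v in zip(mean_w, vals))
              for name, vals in programs.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: pos + 1 for pos, name in enumerate(ordered)}

# "Random halves": recompute the ranking 500 times, each time using
# the weights from a randomly selected half of the raters.
ranks = {name: [] for name in programs}
for _ in range(500):
    half = random.sample(raters, len(raters) // 2)
    for name, r in rank_programs(half).items():
        ranks[name].append(r)

# Sort each program's 500 ranks and drop the top and bottom 5%
# to report the middle-90% range of rankings.
for name, rs in ranks.items():
    rs.sort()
    lo, hi = rs[len(rs) // 20], rs[-(len(rs) // 20) - 1]
    print(f"{name}: ranked {lo}-{hi}")
```

In the real study the resampled quantity and the weighting schemes (S and R) are more elaborate, but the reported range for each program is produced in essentially this way: many plausible rankings, trimmed to a 90% interval.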

 

7. How are these data defined? How can I check whether my department’s data are correct?

The data categories and definitions used by the NRC often differ from those used in most University of Oregon reports, so the numbers may not coincide with those in University of Oregon fact books and other university information sources. When checking your department’s data, it is therefore important to understand how each variable is defined, what it measures, and how it was calculated. Data definitions are given briefly on the NRC Departmental Profile sheet, and in detail in the NRC methodology guide.

Some of the most important data definitions are:

  • Faculty data are based on the number of Core, New, Associated, and Allocated Faculty for each program, as defined by the NRC. “Core” faculty members are generally Academic Council members with a primary, secondary, or joint appointment in the department. “New” faculty members are like Core faculty members, but with an appointment beginning between 2003 and 2006. “Associated” faculty members are affiliated with the program through a Courtesy, Acting, or other similar appointment, or through dissertation advising.
  • Assignment of Core, New and Associated faculty was done by University of Oregon’s Institutional Coordinator, in consultation with School Coordinators, department chairs, and program directors based on the NRC definitions. NRC then determined the number of “Allocated Faculty” using an algorithm based on data about dissertation committee supervision and membership to allocate faculty members on a proportional basis to all departments with which they were affiliated.
  • Student data are based on an NRC-defined set of entry cohorts, as well as criteria for continuous enrollment, full-time enrollment, and program status. Student counts may therefore be lower than the number of students generally thought of as associated with a particular department.
  • The 18 student-activity measures (e.g., “Is there an orientation for graduate students in this department?”) give each program credit for each activity provided either by the university (answered centrally, for all departments) or by the department (answered by each department). Each program received credit for nine activities provided by University of Oregon university-wide, plus any others provided by the program.

 

8. Where did the data come from? Who at University of Oregon vetted the data?

University of Oregon participated in the data collection process by providing data about its programs, faculty and students to the NRC in 2006-07. Some data were also developed directly by the NRC, including data on publications, citations and grants. The full list of data used in these rankings, with sources for each, is available on the NRC Departmental Profile.

Some University of Oregon data were generated centrally by staff in the Office of Institutional Research. Other data were provided by programs.

 

9. These data don’t reflect the current state of my department. Can we update them?

There is no mechanism for updating the data in the NRC report.

Across the country, most departments have changed since 2005-06, the time period reflected in the study. These changes may include demographic shifts, policy changes, or departmental reorganizations.

 

10. As a faculty member, I spent a lot of time filling out a faculty survey. Why are most of those items not included?

All faculty members in programs participating in this assessment were asked to complete the “Faculty Questionnaire”. A subset of faculty was also asked to complete the “Survey of Program Quality,” also known as the “anchoring study.”

Data from these faculty surveys contributed primarily to developing the variable weights used in the two Overall rankings, as described above.

The general Faculty Questionnaire also supplied data for the final rankings on how many faculty members in each department are supported by grants. Other data from this questionnaire, like much of the data provided by departments and universities, were ultimately not used. This reflects the NRC’s statistical process for identifying a small set of variables (ultimately 20) that it proposes as indicators of program quality.

 

11. Why isn't my department ranked at all?

The NRC selected the fields to be ranked. Many departments at University of Oregon were not included in the data collection phase of this assessment. A separate assessment of Education programs is now underway by the American Education Research Association (AERA) and the National Academy of Education (NAEd).

In some fields, although data were collected, there were ultimately not enough programs for the NRC to be able to calculate statistically valid rankings. This is the case for a group of programs in “Languages, Societies and Cultures” (at University of Oregon, East Asian Languages and Literature and Romance Languages). The NRC may make some of the data collected for these fields available for comparisons across departments.

 

12. How can faculty and departments use the rankings?

These rankings may help programs compare characteristics of their students, faculty, and program features with those of other programs in their field. For example, a program can determine whether it has more or fewer female faculty members than its peers, or how its mean time to degree compares. Even so, such comparisons should be made with caution.

Be aware that each of these data items is very precisely defined by the NRC, and that the definitions are not necessarily intuitive, or the same as those used in most University of Oregon data reports.

 

13. How can prospective graduate students use the rankings?

One possible use of these rankings is to allow prospective students considering doctoral studies to compare programs. Every student’s assessment of the best place to pursue graduate studies should be based on their own analysis of what the program will offer when they plan to pursue their degree. The decision of where to enroll should not be based on a rating or ranking from any organization.

Prospective students could use the information in the study to consider and inquire further about different dimensions of a particular program (for example, Research Activity, Student Support, and Diversity), and then place more weight on those program characteristics that matter most to them.

 

14. How can current graduate students use the rankings?

Current graduate students considering academic careers, much like prospective graduate students, may use these data to compare and inquire further about characteristics of programs at different universities.

Graduate students may also use the data to put their educational experiences in a wider context. For example, graduate students may have a good understanding of the demographics of their particular department, or of the average number of publications by their program’s faculty. They may not, however, know whether these properties are typical of programs in their field. The comparative data provided by the NRC can assist students in contextualizing their experiences.