Anthropology #1 in doctoral rankings

By Jane R. Elgass

Fourteen of the University’s doctoral programs have been ranked among the top 10 nationally in a survey released this week by the National Research Council. Among the 14, the anthropology program is ranked number one nationally; four other programs are ranked in the top three and eight in the top five.

The study, “Research-Doctorate Programs in the United States: Continuity and Change,” was guided by the Committee for the Study of Research-Doctorate Programs in the United States and sponsored by The Conference Board of Associated Research Councils.

The reputational study examined programs in 41 fields in the arts and humanities, biological sciences, engineering, physical sciences and mathematics, and social and behavioral sciences. It covered 3,634 programs at 274 universities (105 private and 169 public) that have about 78,000 faculty members and graduate about 90 percent of the Ph.D.s produced in the studied fields between 1986 and 1992. This year’s study, which builds on one reported in 1982, includes 214 institutions that participated in the 1982 study and many additional programs.

Fields were included in the study based on a combination of three factors:

  • The number of Ph.D.s produced nationally.

  • The number of programs training Ph.D.s within a particular field.

  • The average number of Ph.D.s produced per program.

    Overall, 38 U-M programs are ranked, out of more than 100 that are offered. The U-M programs ranked in the top 10 and their rankings are: anthropology (first); psychology (second); political science and classics (both third); sociology and industrial engineering (both fourth); aerospace engineering and mechanical engineering (both fifth); electrical engineering (sixth); philosophy (eighth); mathematics, French and music (all ninth); and civil engineering (10th).

    While it will take some time to analyze the rankings, President James J. Duderstadt says “they indicate that the academic reputation of the University has not only been sustained but actually strengthened in many key areas over the past decade. This is all the more remarkable in view of the significant deterioration in state support over this same period.

    “The University,” Duderstadt says, “continues to be the national leader in the social sciences, among the leaders in the humanities and engineering, and improving in the sciences. While there are some areas of concern, these rankings do indicate the exceptional strength of our faculty and the quality of our academic programs in these important areas of graduate education.”

    Robert Weisbuch, interim dean of the Horace H. Rackham School of Graduate Studies, notes that “it is important for people to look at raw scores, not just rankings. The University had 24 programs that were ranked for perceived faculty quality in 1982 and are ranked again in the current survey. The raw score is up in 19 of those 24 programs.

    “We have to be concerned with competing with ourselves and seeing whether we are improving, and we are. Michigan,” he adds, “does even better on the question of the effectiveness of the programs. We have good curriculums, and the students have good experiences, demonstrating that we are good and conscientious educators.”

    Weisbuch notes that a survey done as carefully as the NRC report “provides lots of helpful information that will be important for us to consider. In addition, we have much more extensive data on all our programs than any such survey can provide and we’re constantly reviewing it.

    “It’s important for people to understand,” Weisbuch says, “that reputation follows reality by several years. There is a problem with these types of surveys. When I fill one out, I think of two things: recent events and how the program ranked when I was applying to graduate school. It gets skewed toward the past, like light from the moon. It’s real but in the past. We can use that data, but we have more immediate and extensive data on how our programs are working.”

    Weisbuch cites two U-M programs as examples:

  • The Department of English Language and Literatures ranked 16th this year, as it did in 1982, but the raw score on faculty is way up, “increased hearteningly,” Weisbuch says.

    “When the 1982 survey was done, the English department admitted about half of its applicants. Now we get three to four times as many applicants and admit only 8 percent, and student quality has improved extraordinarily. That is internal data we have that is not translated into a ranking in a reputational study.”

  • The Department of History “has an excellent chair and fine faculty, but lost key members at the time of the current survey. What the survey doesn’t reflect is that there have been a lot of new hires and the program never stopped being one of our strongest.”

    “There’s no question that Michigan is up. It’s just not getting fully reported,” Weisbuch says.

    Weisbuch also notes that the U-M has a number of programs, such as American culture and classical art and archaeology, “that are the most respected in the country but don’t get ranked. We have other programs that don’t offer discrete degrees, such as women’s studies, which is one of the strongest in the country. The same is true with the Center for Afroamerican and African Studies. When rankings are done of Black or African American programs, we’re among the top four or five. There are places where we are strong, but that’s not evident in the survey’s categories.”

    “This is not an athletic contest,” Weisbuch says. “To use rankings reductively is anti-intellectual, anti-educational. We need to look at the figures very carefully and bring our intelligence to bear on them. It’s not perfect and it’s very complex, and we need to give the complexity careful attention.”

    Much of the data for the faculty opinion of program quality was generated by the National Survey of Graduate Faculty, done in spring 1993. The survey elicited ratings on the scholarly quality of program faculty and the effectiveness of each program in educating research scholars. At least 100 faculty raters evaluated each program. The committee also updated statistics from the 1982 study and included other demographic data drawn from a variety of sources.

    Ratings for scholarly quality of faculty were pooled, resulting in an average rating on a scale of 0 to 5, with 0 signifying “not sufficient for doctoral education” and 5 signifying “distinguished.”

    According to the report, “about 62 percent of the programs were rated as ‘distinguished,’ ‘strong’ or ‘good,’ although this varies by field.”

    The same approach was used for program effectiveness, with 0 being “not effective” and 5 “extremely effective.” “About two-thirds of the programs,” the report says, “were considered to be ‘extremely effective’ or ‘reasonably effective.’ Fewer than 10 percent were considered to be ‘not effective.’”

    The rank orderings in each field are based on a mean rating derived from the pooled responses to the national survey. Acknowledging that “rank ordered information requires careful interpretation,” the report includes an appendix that illustrates the relative standing of programs with respect to a number of variables.
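    The arithmetic behind those orderings is plain averaging: pool each program’s 0-to-5 ratings, take the mean, and sort. Here is a minimal sketch in Python of that pooling, with invented program names and scores; it illustrates the method the report describes, not the NRC’s actual data or procedures:

```python
# Sketch of the pooling the article describes: each program's 0-to-5
# survey ratings are averaged into a mean rating, and programs are
# rank-ordered by that mean. All names and scores are hypothetical.
from statistics import mean

ratings = {
    "Program A": [4.0, 5.0, 4.5, 4.0],
    "Program B": [3.0, 3.5, 4.0, 3.5],
    "Program C": [4.5, 5.0, 5.0, 4.5],
}

# Pool each program's responses into a single mean rating.
mean_ratings = {program: mean(scores) for program, scores in ratings.items()}

# Rank order: highest mean rating first, as in the report's field listings.
ranked = sorted(mean_ratings.items(), key=lambda item: item[1], reverse=True)
for rank, (program, score) in enumerate(ranked, start=1):
    print(f"{rank}. {program}: mean rating {score:.2f}")
```

    A single mean is only a point estimate, which is why the report cautions that rank-ordered information requires careful interpretation and pairs it with other variables in the appendix.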

    John H. D’Arms, former dean of the Horace H. Rackham School of Graduate Studies and former vice provost for academic affairs, was a member of the committee that conducted the study. He is now a visiting distinguished professor at the National Humanities Center in Research Triangle Park, N.C., working on portions of a book on funding patterns in the humanities for the Mellon Foundation.

    D’Arms characterizes the report as “significant and of considerable value,” particularly in light of the criticism frequently leveled at colleges and universities that they do not evaluate themselves.

    “This is an attempt on the part of the people producing the Ph.D.s to look at the quality of the institutions doing the producing,” he says. He also notes that this report is much more comprehensive than those done, for example, by U.S. News & World Report, which last spring reported on only 13 fields of doctoral study.

    Nine fields not included in the 1982 report are in this year’s report. Of interest, D’Arms notes, is that almost all of them are interdisciplinary in nature, such as comparative literature, materials science and biomedical engineering. “This not only indicates that Ph.D. production has reached a certain level,” he explains, “but also points up the shifting nature of these fields.”

    D’Arms says there are a number of broad findings illustrated in the report:

  • Programs that were rated highly in 1982 tend to remain high in the current report.

  • It is taking longer to earn a Ph.D. at almost every institution in almost every field, and completion times are even longer in lower-rated programs.

  • In many of the 41 fields, women and minorities are still underrepresented in the numbers that receive doctorates, but this varies from field to field.

    D’Arms notes, however, that women and minorities in the higher-ranked programs are just as likely to earn Ph.D.s as non-minority males. “If they enter the programs, their chances of success are just as good. It’s important that we continue to encourage them to pursue graduate study.”

  • The highly rated programs seem to be somewhat larger. While this may mean “larger is better,” D’Arms says this data deserves a close look. “It may be a function of a large number of faculty in the program,” he says, with an institution having so many people in a program that they are bound to be noticed and recalled by those responding to reputational studies.

  • In every field common to both studies, the number of faculty has increased since 1982. This is true even for the social sciences, the arts (primarily music) and the humanities, even though the number of graduates in these areas has declined. D’Arms notes that this should be of interest to policy-makers.

    The study was funded by the Ford, Andrew W. Mellon, Alfred P. Sloan, and William and Flora Hewlett Foundations and the National Academy of Sciences.

    An electronic file of selected tables from the 740-page report is available on the Research Council’s World Wide Web home page at http://www.nas.edu. A CD-ROM that will include more detailed program-level data is being developed and will be distributed for public use.

    Paper copies of the report, $59.95 (prepaid) plus shipping of $4 for the first copy and $.50 for each additional copy, are available from the National Academy Press, 2101 Constitution Ave., N.W., Washington, D.C. 20418; (202) 334-3313 or 1-800-624-6242.

    Noting that the NRC study is reputational, with quality based on perceived reputation in the eyes of the 100 or more faculty who reviewed each program, D’Arms does have a few disappointments.

    “This is not the only way to measure quality,” he says, particularly since “reputation changes more slowly than facts.”

    “For instance, the English Department at the U-M has undergone dynamic changes and probably rates more highly [than in the report]. The changes are not yet well reflected.”

    And some early hopes on the part of D’Arms and several other committee members for including data from other sources were not realized because of time and resource limitations.

    “We hoped to evaluate the experiences of recent Ph.D. holders who were graduates of the programs. This is another way to test reputation, particularly since the questionnaire does not emphasize the degree to which the program prepares one for life in the academy or industry.

    “We also were unable to get reactions from employers of recent Ph.D.s who work in their research laboratories, another independent measure. It also would have been nice to ask international students who have returned to their countries what their experiences had been.”

    D’Arms also would have liked to include data on a more elusive characteristic: the value added to the doctoral experience by non-degree-granting centers and institutes and other resources on a campus, such as the Institute for the Humanities or the Clements Library at the U-M.

    “The Humanities Institute is a very lively place,” D’Arms says, where doctoral students “are able to connect and come away with a richer, more diverse experience. There are similar centers elsewhere. We have superb collections at the Clements and the Kelsey, and there’s no way of capturing this. The same is true for the Center for Afroamerican and African Studies and women’s studies and foreign area centers.

    “The presence of these resources sometimes adds immeasurably to the experience. It makes a positive impact on graduate education in these fields.”

    Noting that a number of fields were added for the current report, D’Arms says that finding a way to organize the biological sciences was one of the biggest frustrations faced by the committee.

    “We had a terrible time organizing the biological sciences, since they come from literary schools, medical and pharmacy programs and engineering. What this does say is that life sciences are in a major ferment. There are big differences since 1982. Other fields, such as women’s studies, also have changed, but their organization has not changed, so the change is not so noticeable.”

    As a committee member, D’Arms says, he had hoped for an even better report in the long run. “There is value, however, in trying to do what we did. It provides a reference for what to study in the future, a benchmark to work from even if it is not perfect.”
