
Will Machine Learning Disrupt Academic Rankings?

A case study in the best computer science and artificial intelligence programs

Note: Our author, Ross Gulik, an educational consultant who closely follows the academic ranking industry, offers us a look at how such rankings are created and how they might be improved.

1 Introduction to AcademicInfluence.com

AcademicInfluence.com is a new website that claims to do academic rankings right.

Getting academic rankings right has become a pressing concern for higher education. As AcademicInfluence.com correctly observes on its methodology page, schools face enormous pressure to perform well in the big annual rankings. The major ranking organizations (such as US News and Times Higher Education) carry great weight: if a school slips in their annual rankings, that can affect student enrollments, the bottom line, and the prestige of the institution.

This pressure creates perverse incentives: schools adapt themselves to the rankings without, in many cases, actually improving the quality of education they offer. As the AcademicInfluence.com methodology page puts it, “Schools are motivated to ‘game’ the rankings, introducing superficial or cosmetic changes that elevate their ranking but do little to provide a better education for students or a more fruitful work environment for faculty.” The pressure can be so intense that schools cheat, misreporting their actual accomplishments in order to rise in a major ranking.

As I began to write this article on December 22, 2020, an email appeared in my inbox linking to the following post from Jonathan Turley: “There is an interesting settlement this week between the Department of Education and Temple University’s Fox School of Business and Management. Temple has long been accused of falsifying information to inflate its ranking on U.S. News and World Report. The USNWR rankings are now key to admissions for most schools and Temple has been accused of lying to both the magazine and its applicants for years. The DOE alleged that Temple’s Fox School of Business and Management submitted false information between 2014 and 2018.” As it is, Temple had to pay a substantial settlement for this alleged fraud.

Interestingly, Turley doesn’t fault the ranking organizations, such as US News: “Academics often criticize the USNWR rankings. I am not one of those critics. While I often disagree with some of the rankings, they are generally accurate and give applicants information that was simply not available when I applied to college and law school. What I do find objectionable is the annual manipulation of the criteria to force changes in the rankings each year. The rankings hardly sell without dramatic dips and rises each year. This often seems achieved by sudden changes in criteria or weighing of criteria. Nevertheless, the rankings add a degree of transparency and accountability for universities and colleges.”

By contrast, AcademicInfluence.com argues that the transparency here in fact constitutes a pitfall for schools, undercutting rankings like those of US News. In an article, “A New Philosophy of Academic Ranking,” AcademicInfluence.com agrees with Turley about the “annual manipulation” of ranking criteria but disagrees with him about its innocuousness: “Often one gets the sense that all the fiddling with criteria and weights surrounding these rankings are simply there to obtain novelty for the sake of novelty… One might think that by making the criteria and weights explicit, the major ranking organizations are in fact giving the people who study their rankings full transparency. And transparency is always a good thing, isn’t it? Not so fast. The transparency here turns out to be a pitfall, inviting schools to game the rankings.”

So stated, the point seems obvious: By making the ranking criteria explicit, the big ranking companies invite schools to adapt their profiles to those criteria, perhaps even fudging how well they fulfill the criteria (as, apparently, in the case of Temple University).

AcademicInfluence.com compares the underlying problem here with abuses in standardized testing: “Because states put a premium on students doing well on these tests, rewarding teachers and school districts whose students score high, students are taught how to take the test rather than to actually learn the subject being tested. So, instead of being taught math or English for genuine comprehension, students are taught to score well on the standardized test about math and English. Colleges and universities trying to improve their ranking face the same temptation as school districts intent on raising test scores.” So, rather than actually making their education better, schools are tempted to make their rankings better.

AcademicInfluence.com proposes to overcome such flaws in the annual rankings issued by the big academic ranking organizations. According to AcademicInfluence.com, these flaws are systemic and infect all the major ranking organizations (US News, Quacquarelli Symonds, Times Higher Education, the Shanghai Academic Ranking of World Universities, etc.). So how does it propose to do better?

AcademicInfluence.com’s answer is to focus on one master criterion, namely influence, and to use it to rank institutions and disciplinary programs. Moreover, it gauges influence via a machine-learning algorithm, thus eliminating human intervention from the rankings so produced. At the same time, because this algorithm operates on vast publicly available databases, gaming the AcademicInfluence.com rankings would mean changing vast portions of those databases, which is effectively impossible.

At the heart of AcademicInfluence.com’s ranking methodology is the InfluenceRanking engine: “The InfluenceRanking engine calculates a numerical influence score for people, institutions, and disciplinary programs. It performs this calculation by drawing from Wikipedia/data, Crossref, and an ever growing body of data reflecting academic achievement and merit.”

The underlying logic is this: (1) Start by calculating influence scores, by discipline, for people in the academy. (2) Use those influence scores for people to induce influence scores, and thus rankings, for the disciplinary/degree programs associated with those people. (3) Average the influence scores in (1) and (2) to induce overall influence rankings for persons and for institutions.
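
AcademicInfluence.com has not published the engine itself, so the following is only a minimal sketch of that three-step logic. Everything in it, from the names to the scores to the use of simple averaging as the aggregation rule, is an invented stand-in:

```python
from collections import defaultdict

# Step 1 (toy data): per-person influence scores by discipline. In the
# real engine these would be derived from Wikipedia, Crossref, and
# similar sources; every name and number here is invented.
person_scores = {
    ("Alice", "MIT", "CS"): 9.1,
    ("Bob", "MIT", "AI"): 7.4,
    ("Carol", "Stanford", "CS"): 8.8,
    ("Dave", "Stanford", "AI"): 8.0,
}

# Step 2: induce program scores from the people affiliated with them
# (averaging is an assumption; the actual aggregation rule is unpublished).
programs = defaultdict(list)
for (person, school, discipline), score in person_scores.items():
    programs[(school, discipline)].append(score)
program_scores = {k: sum(v) / len(v) for k, v in programs.items()}

# Step 3: aggregate program scores into an overall institutional score.
schools = defaultdict(list)
for (school, _discipline), score in program_scores.items():
    schools[school].append(score)
school_scores = {k: sum(v) / len(v) for k, v in schools.items()}

# Rank institutions by overall influence, highest first.
for school, score in sorted(school_scores.items(), key=lambda x: -x[1]):
    print(school, round(score, 2))
```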

Has AcademicInfluence.com in fact introduced a disruptive new technology that will fundamentally transform higher-ed rankings? It’s premature to answer this question one way or the other: the enterprise is new, and its InfluenceRanking engine still seems to be evolving (“[we] are constantly striving to improve [it]”). Still, the AcademicInfluence.com site shows real promise. The About page links to articles explaining the nuts and bolts of the underlying technology. Readers interested in understanding its academic rankings are urged to have a look, especially at the Methodology page and the Philosophy page, both cited above.

A detailed evaluation of AcademicInfluence.com’s ranking technology will take time, requiring users to put the site through its paces.

AcademicInfluence.com’s project is ambitious, with lots of moving pieces. In the meantime, however, it is instructive to perform a simple case study comparing the AcademicInfluence.com rankings of CS (computer science) and AI (artificial intelligence) programs with the rankings of those programs by US News. This choice of discipline (CS) and subdiscipline (AI) is of course relevant to MindMatters.ai. There’s also the irony that AcademicInfluence.com claims, via its machine-learning algorithm, to be using AI to assess CS and AI programs.

Whatever the ultimate importance of the AcademicInfluence.com rankings for higher education, this case study at least suggests that AcademicInfluence.com may be onto something big and that its rankings may indeed constitute a disruptive technology for the academic ranking business. And a business it is.

2 The Twenty-Five Best CS and AI Programs

Let’s therefore compare AcademicInfluence.com’s rankings for computer science (CS) and artificial intelligence (AI) programs with the corresponding rankings by US News. We’ll start by simply stating the rankings. Then, in the following sections, we can evaluate to what degree these rankings hold up. Of special interest will be gauging the influence of the people associated with the ranked programs, the idea being that more highly ranked programs should have more influential people (experts in CS and AI) associated with them. Do the AcademicInfluence.com rankings adequately capture that connection between influential people and influential programs? And to what degree do the US News rankings capture that same connection?

As it is, AcademicInfluence.com allows you to toggle the time frame for gauging influence. If you do this for the years 2000-2020 (thus focusing on the most recent advances in CS and AI), AcademicInfluence.com yields the following two rankings for CS and AI programs. After these two rankings, we give the corresponding US News rankings.

AcademicInfluence.com Ranking of the Best CS Programs in the U.S.

1 Massachusetts Institute of Technology
2 Stanford University
3 Carnegie Mellon University
4 Harvard University
5 University of California, Berkeley
6 Princeton University
7 University of Michigan
8 Cornell University
9 Columbia University
10 New York University
11 California Institute of Technology
12 Yale University
13 University of Washington
14 University of California, Los Angeles
15 University of California, San Diego
16 Brown University
17 University of Chicago
18 University of Pennsylvania
19 University of Texas at Austin
20 University of Southern California
21 Georgia Institute of Technology
22 University of Illinois at Urbana-Champaign
23 Purdue University
24 Duke University
25 Rice University

SOURCE: Academic Influence Schools result


AcademicInfluence.com Ranking of the Best AI Programs in the U.S.

1 Stanford University
2 Massachusetts Institute of Technology
3 Harvard University
4 University of California, Berkeley
5 Princeton University
6 Carnegie Mellon University
7 University of Pennsylvania
8 New York University
9 Cornell University
10 Columbia University
11 University of Michigan
12 California Institute of Technology
13 University of Chicago
14 University of Southern California
15 University of California, San Diego
16 Yale University
17 Brown University
18 University of California, Los Angeles
19 University of California, San Francisco
20 University of Illinois at Urbana-Champaign
21 Boston University
22 New York University
23 University of Washington
24 University of Texas at Austin
25 Northwestern University

SOURCE: Academic Influence Schools result

US News Ranking of the Best CS Programs in the U.S.

1 Carnegie Mellon University
1 Massachusetts Institute of Technology
1 Stanford University
1 University of California, Berkeley
5 University of Illinois at Urbana-Champaign
6 Cornell University
6 University of Washington
8 Georgia Institute of Technology
8 Princeton University
10 University of Texas at Austin
11 California Institute of Technology
11 University of Michigan
13 Columbia University
13 University of California, Los Angeles
13 University of Wisconsin, Madison
16 Harvard University
16 University of California, San Diego
16 University of Maryland, College Park
19 University of Pennsylvania
20 Purdue University
20 Rice University
20 University of Massachusetts, Amherst
20 University of Southern California
20 Yale University
25 Brown University
25 Duke University
25 Johns Hopkins University

SOURCE: US News Computer science rankings
METHOD: US News Best Graduate Schools Methodology


US News Ranking of the Best AI Programs in the U.S.

1 Carnegie Mellon University
2 Massachusetts Institute of Technology
3 Stanford University
4 University of California, Berkeley
5 University of Washington
6 Cornell University
7 Georgia Institute of Technology
8 University of Illinois at Urbana-Champaign
9 University of Texas at Austin
10 University of Michigan
11 University of Massachusetts, Amherst
12 Columbia University
13 University of Pennsylvania
14 University of California, Los Angeles
15 University of Southern California
16 University of Maryland, College Park
17 Princeton University
18 Harvard University
19 California Institute of Technology
20 University of Wisconsin, Madison

SOURCE: US News Artificial intelligence rankings
METHOD: US News best graduate schools

3 Some Observations About These Rankings

Before evaluating these rankings in terms of how well they gauge the influence of the people associated with the programs being ranked, let’s just eyeball them to see what a superficial exploration reveals. Glancing over them reveals quite a bit. First off, all the rankings are in the right ballpark. All the schools have solid STEM coverage. All are well-known institutions with broad brand recognition. And the schools at the top are the ones you expect to see at the top, especially MIT and Stanford. So far so good.

Digging a little deeper, however, raises some immediate questions about the US News rankings. The AcademicInfluence.com rankings depend on the InfluenceRanking engine, which is a proprietary algorithm and thus not open to the public. But at least it’s clear what their rankings are claiming to do. It may even be possible to (partly) reverse engineer the algorithm by seeing how it assesses influence for people, institutions, and disciplinary programs (consider the site’s “Schools” and “People” pages). AcademicInfluence.com regards schools as “best” to the degree that they are influential according to its InfluenceRanking engine; that is, best = influential. And since influence for them always in the end comes down to the influential people associated with a school, the success or failure of the AcademicInfluence.com approach depends on adequately mapping the influence of people, specifically of faculty and alumni.

But how do the US News rankings gauge “best”? It turns out that both the CS and the AI rankings by US News cite the same methodology statement: these rankings are “based solely on the results of surveys sent to academics in biological sciences, chemistry, computer science, earth sciences, mathematics, physics and statistics.” This statement ought immediately to raise a red flag. Who exactly is being surveyed? What are their qualifications for deciding which are the best degree programs in CS and AI, especially if those surveyed fall outside those disciplinary boundaries (why, for instance, should we believe what a physicist or biologist regards as the best CS or AI programs?). And what are their incentives for sharing their true views of a degree program’s academic excellence (as opposed to talking up a school they happen to like simply to make it rise in the rankings)?

By contrast, the US News rankings for the best colleges and universities in general (rather than for particular disciplines or subdisciplines) employ a multicriteria approach in which a “peer assessment survey” counts for only 20 percent. Twenty percent is a lot less than 100 percent. For US News to rest its entire CS and AI rankings on surveys seems excessive. Basically, you’re just asking people “which schools do you put ahead of which other schools in CS (or AI)?” and then taking a vote among the people sampled.
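
To make the contrast concrete, here is a hedged sketch of the two scoring schemes. The 20 percent survey weight comes from the US News overall methodology cited above; all other criteria, weights, and scores are invented for illustration:

```python
# Invented criterion scores (0-100 scale) for a single hypothetical school.
criteria = {
    "peer_assessment_survey": 85.0,
    "graduation_rates": 92.0,
    "faculty_resources": 78.0,
    "financial_resources": 88.0,
}

# Overall-colleges style: the survey is one criterion among many, weighted
# at 20 percent. The remaining weights are made up for this example.
weights = {
    "peer_assessment_survey": 0.20,
    "graduation_rates": 0.35,
    "faculty_resources": 0.20,
    "financial_resources": 0.25,
}
composite = sum(weights[name] * score for name, score in criteria.items())

# Discipline-level CS/AI style: the survey is the entire score.
survey_only = criteria["peer_assessment_survey"]

print(f"composite (survey = 20%): {composite:.1f}")      # 86.8
print(f"survey-only (survey = 100%): {survey_only:.1f}")  # 85.0
```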

Who are these people? Why should we trust their assessments? How do we know their assessments have been accurately scored? US News doesn’t say. The AcademicInfluence.com rankings, by contrast, are said to be generated entirely via a machine-learning algorithm that assesses disciplinary influence based on data from Wikipedia.org, Crossref.org, etc. It would be good to know the details underlying this machine-learning algorithm, but at least it seems to make sense. With these US News rankings, however, it’s hard to see how even to begin to make sense of them.

A curious feature of the two US News rankings is that the CS ranking shows many schools as tied whereas the AI ranking shows none. Thus, for instance, the best four schools in CS are all tied for first place (Carnegie Mellon, MIT, Stanford, and UC Berkeley). The four best schools in AI are the same as in CS, but in this case there are no ties (the school listed first is ranked first, the one listed second is ranked second, etc.). This presence of ties in one list and absence in the other seems strange. AI is a subdiscipline of CS. With fewer AI students and fewer AI faculty than CS students and CS faculty, it would seem easier for AI programs to tie than for CS programs: there are simply many more ways for CS programs to distinguish themselves than for AI programs.

Yet of the 27 schools ranked by US News for CS, there are only 11 distinct rank positions. The first four schools are all tied. Five schools are tied at position twenty, and the three after that are tied at position twenty-five. All these ties seem excessive. Are the tied CS programs really equivalent in academic excellence? Are the surveys on which this ranking is based really so coarse-grained as not to distinguish among tied schools? In the number 1 spot, is Carnegie Mellon really in the same league as MIT and Stanford? And how is it that when we turn to AI, the ties all suddenly vanish?
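
For the record, the tie count is easy to make explicit. The rank numbers below are copied straight from the US News CS list above:

```python
us_news_cs_ranks = [
    1, 1, 1, 1, 5, 6, 6, 8, 8, 10, 11, 11, 13, 13, 13,
    16, 16, 16, 19, 20, 20, 20, 20, 20, 25, 25, 25,
]

print(len(us_news_cs_ranks))       # 27 schools listed
print(len(set(us_news_cs_ranks)))  # only 11 distinct rank positions
```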

There’s another reason to be suspicious of the AI ranking. The first four schools on the US News AI list are not merely the same four that tie atop the CS list; they appear in exactly the alphabetical order in which US News lists tied schools. Why is this significant? When US News lists tied schools, it lists them alphabetically. But when schools are not tied, alphabetical order becomes irrelevant. And yet the US News AI ranking shows, in its top four places, exactly the same schools as the tied CS top four, in the same alphabetical order (is Carnegie Mellon at the very top of the AI list because it is really the best AI school, or because it happens to begin with the letter “C”?). Granted, this is not a smoking gun. But it looks suspicious. And the more general principle still seems to hold: ties should be less prevalent at the disciplinary than at the subdisciplinary level.
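
As a quick sanity check on how unlikely this coincidence is: if the true AI ordering were independent of school names, only one of the 4! = 24 equally likely orderings of those four schools would be alphabetical, roughly a 4 percent chance. A minimal sketch:

```python
from math import factorial

us_news_ai_top_four = [
    "Carnegie Mellon University",
    "Massachusetts Institute of Technology",
    "Stanford University",
    "University of California, Berkeley",
]

# The top four really are in alphabetical order...
print(us_news_ai_top_four == sorted(us_news_ai_top_four))  # True

# ...and only 1 of the 4! = 24 possible orderings is alphabetical.
print(1 / factorial(len(us_news_ai_top_four)))  # ~0.042
```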

Eyeballing the AcademicInfluence.com rankings for CS and AI doesn’t raise the same flags. The only school that seems clearly out of place is UCSF (University of California, San Francisco) at the #19 spot in the AI ranking. A little digging, however, reveals that even though UCSF is not a powerhouse in AI as such, it has a world-class medical school and has launched an “Artificial Intelligence Center to Advance Medical Imaging.” So perhaps the InfluenceRanking engine has picked up on this fact.

None of the observations in this section constitutes a slam dunk for AcademicInfluence.com’s CS and AI rankings or against US News’s. All these rankings raise questions. The AcademicInfluence.com rankings depend on a proprietary algorithm that users cannot directly study. The US News rankings depend on survey results whose underlying data and scoring methods are likewise inaccessible to users. In the next section, therefore, we evaluate (or, more accurately, begin to evaluate) both sets of rankings on how well they gauge influence.

4 How Well Do These Rankings Gauge Influence?

In ranking CS and AI programs, the US News approach looks to reputational surveys, whereas the AcademicInfluence.com approach gauges the excellence of these programs via the influence of their affiliated academics (faculty and alumni). Yet, independently of US News’s surveys and of AcademicInfluence.com’s InfluenceRanking engine, it’s possible to examine the influence of at least some key persons specializing in CS and AI and use their institutional affiliations to get some sense of where their institutions should rank in CS and AI. A simple expedient is to examine winners of the Nobel Prize in computer science. Of course, there is no actual Nobel Prize in CS, but an equivalent exists: the ACM’s (Association for Computing Machinery’s) Turing Award, which, courtesy of Google, includes an annual cash prize of $1,000,000.

In addition to its article on the Turing Award itself, Wikipedia lists all Award winners by university affiliation. This is convenient for our purposes. True, the Turing Award has been given out annually since 1966, so the total list of winners isn’t a completely up-to-date indicator of influence within CS and AI. But the award covers a very relevant timeframe, especially if we consider that present-day CS and AI programs gain from the impact and prestige of the people historically associated with them. Perhaps the two most towering figures in AI over the last 60 years have been Marvin Minsky and John McCarthy. Both were early Turing Award winners. Minsky spent the bulk of his career at MIT, McCarthy at Stanford. McCarthy even coined the term “artificial intelligence.”

Wikipedia’s list of Turing Award winners by university affiliation has two particularly helpful tables for our purposes, one ranking schools by number of affiliate winners since the start of the award in 1966, the other since 2000. Here are the two rankings, in descending order, with number of affiliated winners listed in the right-most column:

Fifteen Universities with the Most Turing Award Winners Since 1966

1 Stanford University 28
2 Massachusetts Institute of Technology 26
3 University of California, Berkeley 25
4-5 Harvard University 14
4-5 Princeton University 14
6 Carnegie Mellon University 13
7 New York University 8
8 University of Cambridge 7
9-11 California Institute of Technology 6
9-11 University of Michigan 6
9-11 University of Oxford 6
12-13 University of California, Los Angeles 5
12-13 University of Toronto 5
14-15 Cornell University 4
14-15 University of Chicago 4

Ten Universities with the Most Turing Award Winners Since 2000

1 Massachusetts Institute of Technology 15
2 Stanford University 11
3 University of California, Berkeley 9
4 Princeton University 5
5-9 Carnegie Mellon University 4
5-9 Harvard University 4
5-9 New York University 4
5-9 University of California, Los Angeles 4
5-9 University of Cambridge 4
10 University of Toronto 3
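
Since the comparisons that follow keep referring back to these counts, it may help to spell out the U.S.-only version of the since-2000 ranking. A minimal sketch; the data is simply the second table above, with ties left in entry order:

```python
# Turing Award winners by affiliation since 2000, from the table above.
turing_since_2000 = {
    "Massachusetts Institute of Technology": 15,
    "Stanford University": 11,
    "University of California, Berkeley": 9,
    "Princeton University": 5,
    "Carnegie Mellon University": 4,
    "Harvard University": 4,
    "New York University": 4,
    "University of California, Los Angeles": 4,
    "University of Cambridge": 4,
    "University of Toronto": 3,
}

# Factor out non-U.S. schools, as the comparison below does.
non_us = {"University of Cambridge", "University of Toronto"}
us_only = {s: n for s, n in turing_since_2000.items() if s not in non_us}

# Rank by winner count, highest first (ties keep their entry order).
for place, (school, n) in enumerate(
        sorted(us_only.items(), key=lambda item: -item[1]), start=1):
    print(f"{place:2d}. {school} ({n} winners)")
```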

What do these two lists (“rankings by most Turing Award winners”) show about CS programs (leaving aside AI programs for the moment)? The most obvious lesson is that MIT and Stanford are in a class by themselves, clearly ahead of everyone else. The AcademicInfluence.com ranking of CS programs reflects this, giving the edge to MIT. And since that ranking focuses on the years 2000 to 2020, the advantage to MIT is consistent with the greater number of Turing Award winners associated during that time with MIT than with Stanford. By contrast, the US News ranking of CS programs makes MIT and Stanford equivalent to Carnegie Mellon and UC Berkeley. On the basis of the Turing Award, this equivalence seems unfounded: MIT and Stanford clearly lead the pack.

Nonetheless, according to these two lists, UC Berkeley is not far behind MIT and Stanford. The US News CS ranking agrees, but then also places Carnegie Mellon in a tie with UC Berkeley (along with MIT and Stanford), even though Carnegie Mellon is substantially behind UC Berkeley in Turing Award winners, both recently (2000-2020) and since the award’s inception (1966). On the basis of the school affiliations of Turing Award winners, therefore, MIT and Stanford are in a clear lead, UC Berkeley should hold a clear third position (not equivalent to MIT and Stanford), and Carnegie Mellon should trail slightly further back, with only Harvard and Princeton plausibly slotting between UC Berkeley and Carnegie Mellon.

The US News ranking of CS programs misses this distinction by putting MIT, Stanford, UC Berkeley, and Carnegie Mellon in a tie, and then putting the University of Illinois at Urbana-Champaign (UIUC) in fifth position, right behind these schools. As it is, UIUC has only two Turing Award winners associated with it, and these are alumni (Richard Hamming, who received his degree from UIUC in 1942, and Andrew Yao, who received his in 1975). UIUC seems clearly out of place if we are trying to gauge the influence of CS programs on the basis of Turing Awards.

In its first five schools, the AcademicInfluence.com ranking of CS programs thus seems more consistent with the Turing Award record, though the match is not perfect. Rather than occupying the third position, as it would if the match were perfect, UC Berkeley occupies the fifth position on the AcademicInfluence.com CS ranking, with Carnegie Mellon third and Harvard fourth. Interestingly, the University of Illinois at Urbana-Champaign drops to number twenty-two on the AcademicInfluence.com CS ranking, which is consistent with its poorer showing on the Turing Award.

The next five or so positions on the AcademicInfluence.com CS ranking are consistent with the ranking of schools by affiliated Turing Award winners, provided we factor out non-U.S. schools such as Cambridge. Thus, behind MIT, Stanford, UC Berkeley, Harvard, Princeton, and Carnegie Mellon, we would expect, by counting Turing Award winners, to see NYU, the University of Michigan, Cornell, UCLA, and Caltech. As it is, UCLA drops out of the top ten to the number fourteen position for CS on the AcademicInfluence.com ranking. But the other “next five” schools in the AcademicInfluence.com ranking match up with the Turing Award-affiliated schools. This shouldn’t be given too much weight: after the top five or so schools, the number of Turing Award winners per school drops to a handful, with many ties. Yet among the very top schools, the number of Turing Award winners associated with them sends a clear signal of academic excellence.

The “next five” on the US News CS ranking (roughly, spots six through ten) seem likewise less consistent with the distribution of Turing Award winners than the AcademicInfluence.com CS ranking. The University of Washington, Georgia Tech, and the University of Texas at Austin all show up in the US News top ten but not its top five, yet none of them makes a strong showing with the Turing Awards. On balance, then, the AcademicInfluence.com CS ranking seems more consistent with the record of Turing Award winners than the US News CS ranking. Again, there’s no slam-dunk case here for preferring one ranking to the other. It could, for instance, be argued that “the Nobel Prize of CS” (i.e., the Turing Award) doesn’t adequately capture the full breadth of influence in the field. And that may be. But at least it’s a starting point, and other things being equal, it seems one should give the advantage to the school with the preponderance of Turing Award winners.

Where does that leave the two AI rankings? To reiterate, it is suspicious that the US News AI ranking has at its top exactly the same four schools that are tied at the top of the US News CS ranking, all the while preserving alphabetical order. It makes one wonder whether the ranking is real or fabricated. Carnegie Mellon certainly seems to deserve a high spot in this ranking. But at the very top? Yes, it runs the famous Robotics Institute. And, in its history, Carnegie Mellon boasts Turing Award winner (in 1975) Herbert Simon, the only Turing Award winner also to have won an actual Nobel Prize (in 1978, for economics). Simon was a pioneer of artificial intelligence, though not with the same impact as Minsky or McCarthy.

Judging the two AI rankings on the basis of Turing Award winners, however, seems less straightforward and less reliable than for CS. The fact is, especially over the last 20 years, few Turing Awards have been given for work in AI. The category that has probably seen the most Turing Awards during that time is cryptography. The clearest case of a Turing Award going to AI in the last 20 years came in 2018, when three “Godfathers of AI” were honored: Yoshua Bengio, Geoffrey Hinton, and Yann LeCun (whose academic affiliations are, respectively, the University of Montreal, the University of Toronto, and NYU).

The AcademicInfluence.com ranking for AI programs, in placing Stanford at the top and ahead of Carnegie Mellon, seems to make better sense than the US News ranking. Stanford, with SRI International (formerly the Stanford Research Institute) just off its premises, with Silicon Valley around the corner, and not least with the AI behemoth Google in nearby Mountain View, seems to offer a more fecund AI environment than any other school on the planet. That said, for AcademicInfluence.com to put Carnegie Mellon at the number 6 spot on its AI ranking seems a bit low. Do we simply trust that the AcademicInfluence.com algorithm is accurately sorting through the influence scores of the influential AI people associated with these various schools? That’s going to require more digging than is possible in this article.

In closing, let me therefore suggest several places for further digging to assess how well the US News and AcademicInfluence.com rankings of CS and AI programs reflect the influence of their affiliated academics:

  1. List of other ACM awards. The ACM, or Association for Computing Machinery, lists about 20 other awards besides its gold-standard Turing Award. It would be interesting to determine the academic affiliations of the winners of these awards, and especially how they divide between CS in general and AI in particular.
  2. List of IEEE awards. The IEEE, or Institute of Electrical and Electronics Engineers, does a lot of work in CS and AI, though its scope is more general than the ACM’s. Some of its awards focus specifically on computation, such as its John von Neumann Medal (https://en.wikipedia.org/wiki/IEEE_John_von_Neumann_Medal). The list of recipients of this medal shows substantial overlap with the Turing Award winners.
  3. AcademicInfluence.com lists of CS influencers. The link here is to an article that is really a portal to lists of influential persons in CS, covering the last 10 years, the last 20 years, the last 50 years, and all time. These lists tend to substantiate the AcademicInfluence.com CS ranking given above.
  4. AcademicInfluence.com lists of AI influencers. The link here is a dynamic page that can be toggled by date. It gives lists, in descending order of influence, of the people AcademicInfluence.com regards as influential in AI. Sorting through these lists and matching the people listed with institutions is a non-trivial task, in that many of the people regarded as influential in AI are not academics (Bill Gates and Mark Zuckerberg, for example). The challenge is therefore how best to prune these lists to get a fair assessment of academic influence/excellence in AI; one possible starting point is sketched after this list.
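
As a starting point for that pruning, one could filter a scraped influencer list down to people with academic affiliations. A minimal sketch; the data structure, the "affiliation" field, and the choice of entries are all invented, and only the names themselves appear in the discussion above:

```python
# Hypothetical scraped entries from an AI-influencer list.
influencers = [
    {"name": "Geoffrey Hinton", "affiliation": "University of Toronto"},
    {"name": "Bill Gates", "affiliation": None},       # non-academic
    {"name": "Yann LeCun", "affiliation": "New York University"},
    {"name": "Mark Zuckerberg", "affiliation": None},  # non-academic
]

# Prune to people with a current academic affiliation.
academics = [p for p in influencers if p["affiliation"] is not None]

for person in academics:
    print(f'{person["name"]} -> {person["affiliation"]}')
```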

  • Disclosure: Robert J. Marks, Bradley Center Director, is a member of the Core Team at AcademicInfluence.
