F.A.Q. / Contact

This page provides responses to the most frequently asked questions about the Report Card project. If you do not find an answer to your question here, you are welcome to use the form below to contact us.

  1. Who produced this Report Card?
  2. Where does the data used in the Report Card come from?
  3. What are the main publicly available data sources used in the Report Card?
  4. What is the G-FINDER report, and does it really provide a comprehensive picture of neglected disease research?
  5. How did UAEM collect data from universities for the metrics that relied partially or wholly on self-reported information?
  6. How does the Report Card fairly evaluate non-responding universities?
  7. How does the Report Card fairly evaluate universities with varying sizes and research budgets?
  8. Why was the Report Card renamed since the release of the first iteration?
  9. Does the Report Card evaluate university global health research and training activities beyond neglected diseases?
  10. Does the Report Card address research on chronic/non-communicable diseases (NCDs) like cancer, heart disease or mental health, which are increasingly prevalent in the developing world?

Please use the form below to submit questions, feedback and any other input on the Report Card:


Who produced this Report Card?

This project was conceived, developed and produced by Universities Allied for Essential Medicines (UAEM), an international nonprofit organization of graduate and undergraduate students in medicine, research, law, and related fields. Students from a wide range of U.S. and Canadian institutions, including many of those evaluated by the Report Card, contributed to the project. The research and analysis behind the Report Card were conducted over the course of 2014 and early 2015. Funding for the project was provided by the Doris Duke Charitable Foundation, the Open Society Foundations, the Perls Foundation and the Moriah Fund.

Where does the data used in the Report Card come from?

As detailed in the methodology, this evaluation is based on a combination of metrics derived from (a) publicly available information and (b) self-reported data from evaluated institutions.

To promote fair evaluation and methodological rigor, we used standardized, authoritative, publicly accessible data sources for as many metrics as possible. Nine of our sixteen metrics rely entirely on publicly available data sources, while another four are derived from a combination of publicly available and self-reported data.

Self-reported data was sought only for metrics where public information was limited or inconsistent. Even then, we verified this data wherever possible – for example, we asked respondents to include names, descriptions, and course catalog links for courses on neglected diseases or access-to-medicines issues, which we then verified and supplemented with our own searches of online university course catalogs.

What are the main publicly available data sources used in the Report Card?

The most significant sources of publicly available data used in this evaluation are:

What is the G-FINDER report, and does it really provide a comprehensive picture of neglected disease research?

As one of our most significant sources of publicly available data, G-FINDER deserves particular attention and explanation. The G-FINDER report is produced annually by the nonprofit organization Policy Cures with funding from the Bill and Melinda Gates Foundation. It is a comprehensive survey of worldwide funding for research and development of innovative neglected disease treatments, medicines and health technologies, compiling grant data from more than 100 funders, including USAID, the Bill and Melinda Gates Foundation, and the Howard Hughes Medical Institute.

The G-FINDER report also establishes a specific, inclusive and empirically grounded definition of “neglected diseases.” That definition and the specific diseases included are detailed here.

For these reasons, we consider the G-FINDER a “gold-standard” data source for both defining neglected diseases and cataloging the extent of research in these areas, and UAEM relied heavily on it for both purposes in developing our evaluation. While the G-FINDER data and definitions may not capture every university research project that could conceivably relate to neglected diseases, we are confident that it is the most rigorous and comprehensive record of neglected disease research funding available today – and more methodologically sound than relying on universities’ self-reported estimates of research investment, given their varying definitions of neglected diseases and varying budgeting and accounting methods.

How did UAEM collect data from universities for the metrics that relied partially or wholly on self-reported information?

Separate questionnaires were developed for each of the Report Card’s three sections (access, innovation and empowerment). The questionnaires were provided online using Adobe FormsCentral, a leading survey tool. Each section’s questionnaire was e-mailed to the officials best suited to provide data for that section – vice-presidents, provosts or equivalent heads of research for the innovation questionnaire, technology transfer officials for access, and deans or equivalent heads of medical, public health and law schools for empowerment. All universities were given more than a month to respond to each survey, and follow-up emails were sent to schools from which we did not receive an initial response. Additional time to complete the surveys was provided upon request.

Finally, UAEM sent advance notice with provisional scores and grades to every university President’s or Chancellor’s office before the public release.

How does the Report Card fairly evaluate non-responding universities?

We took great care to weight the Report Card metric scores such that a non-responding institution that received high marks on the metrics based on public information could still achieve a competitive score. It is also important to note, however, that because we sought to emphasize transparency and disclosure in every aspect of this project, universities that did respond to the questionnaire for a given section received a minimum credit on its self-reported metrics (typically 1 point out of 5), regardless of the substance of their response.
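To make the rule concrete, here is a minimal sketch of how a single self-reported metric might be scored. The 1-point floor for responding institutions reflects the rule described above; the function name, signature and scale details are hypothetical illustrations, not the Report Card’s actual implementation.

    # Sketch of the response-credit rule for one self-reported metric.
    # The 1-point floor for responders is described above; all names
    # and details here are hypothetical.
    def self_reported_metric_score(responded: bool, substantive_score: int,
                                   max_points: int = 5) -> int:
        """Score one self-reported metric on a 0..max_points scale."""
        if not responded:
            # Non-responders earn nothing on this metric; they remain
            # competitive through the publicly sourced metrics.
            return 0
        # Any response earns at least 1 point for transparency,
        # regardless of its substance.
        return max(1, min(substantive_score, max_points))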

How does the Report Card fairly evaluate universities with varying sizes and research budgets?

Because the universities selected for evaluation vary in significant ways (e.g., funding levels, student body size, public vs. private status), we designed the Report Card metrics and scoring systems to minimize the impact of such differences.

Most importantly, almost all quantitative metrics are “normalized” with respect to degree of institutional funding, total number of licenses executed, or another school-specific variable that serves as a proxy for university size. For example, rather than scoring a university on the absolute dollar amount of funding devoted to neglected disease research, or the absolute number of non-exclusive licenses executed in a given year, these numbers were divided by a relevant total for that school (total NIH plus Gates funding or total licenses executed) to arrive at a percentage for each institution. All institutions with percentages falling in the same scoring range received the same score, regardless of absolute institutional size. This approach enabled meaningful comparison across institutions while minimizing or eliminating the impact of variations in size, budget, or resources.
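To illustrate, a normalized funding metric might be computed along the following lines. This is a minimal sketch of the approach described above: the banding thresholds, point values and function name are hypothetical, not the Report Card’s actual cutoffs.

    # Sketch of the normalization described above: score the share of
    # neglected-disease (ND) research funding, not its absolute amount.
    # Thresholds and point values are hypothetical illustrations.
    def normalized_funding_score(nd_funding, nih_funding, gates_funding):
        total = nih_funding + gates_funding
        if total == 0:
            return 0  # no denominator; place in the lowest band
        pct = 100.0 * nd_funding / total
        # Every school whose percentage falls in the same band receives
        # the same score, regardless of absolute size.
        for threshold, points in [(10, 5), (5, 4), (2, 3), (1, 2), (0.5, 1)]:
            if pct >= threshold:
                return points
        return 0

    # A large and a small school with the same funding share earn the
    # same score:
    assert normalized_funding_score(6e6, 90e6, 10e6) == \
           normalized_funding_score(6e4, 9e5, 1e5)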

For non-quantitative metrics, the Report Card employs pre-defined sets of discrete categories by which all universities can be uniformly evaluated, and for which performance is again likely to be independent of variations in university size, funding, capacity or resources. For example, on the first question in the access section, universities’ public commitments to socially responsible licensing were sorted into five pre-defined categories based on the specificity and detail of the commitment each school had made. All universities falling into the same category received the same score.
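In implementation terms, such a categorical metric reduces to a fixed lookup table, as in the sketch below. The category labels and point values are illustrative stand-ins, not the actual rubric.

    # Sketch of a category-based metric: every university assigned to
    # the same pre-defined category receives the same score. Labels and
    # point values are illustrative only.
    LICENSING_COMMITMENT_SCORES = {
        "detailed public commitment with specific provisions": 5,
        "general public commitment": 4,
        "internal policy only": 3,
        "informal statement of support": 1,
        "no commitment": 0,
    }

    def commitment_score(category: str) -> int:
        return LICENSING_COMMITMENT_SCORES[category]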

Why was the Report Card renamed since the release of the first iteration?

The new name, University Report Card: Global Equity & Biomedical Research, was selected to clarify that the Report Card project does not measure global health impact in its entirety. Rather, the Report Card evaluates, as robustly as possible, universities’ global health impact pertaining to access to medicines and innovation for neglected health needs, as well as their empowerment of future global health leaders in these fields. As with the first iteration, it would not be feasible to produce an evaluation that captures every contribution a given university has made to addressing the access and innovation gaps and empowering students to do the same. Furthermore, these metrics capture only a snapshot in time: because of limitations in the time period of available data and the time required to compile and produce this evaluation, significant university initiatives launched within the past 12-18 months may not be captured.

We also recognize that many individuals and research groups within lower-ranked universities are doing ground-breaking and high impact work that may not be specifically highlighted or fully accounted for by our methodology. Our intention is that the Report Card be viewed as an assessment of each institution as a whole in relation to its peers, and should in no way be seen as discrediting outstanding individual efforts.

We took great pains to develop a wide range of metrics in order to capture a diversity of significant access and innovation contributions for neglected health needs. We believe these metrics are rigorous and fair, providing a methodologically sound “snapshot” of university contributions to several of the most critical global health domains.

Does the Report Card evaluate university global health research and training activities beyond neglected diseases?

The Report Card includes several metrics intended to capture activities in broader global health areas, particularly in the Empowerment section. Empowerment question 1 credits institutions for offering global health programs or study tracks, while question 2 evaluates schools on the percentage of research funding received from the Fogarty International Center – the NIH’s primary institute for research and training focused explicitly on international health – as well as from the Gates Foundation specifically for global health. In several cases, universities have requested credit for research or partnership programs that are already captured by these metrics.

At the same time, we acknowledge that the Report Card emphasizes research on neglected diseases and access to health technologies originating at universities. These are areas of global health that universities are uniquely positioned to impact, but which have traditionally been overlooked or under-emphasized – as the Report Card’s general findings of lower performance in these areas confirm. These metrics can also be reliably measured using high-quality, consistent, publicly accessible data sources, such as the G-FINDER.

While other aspects of global health research and education are undeniably important, our organization’s mission is to encourage universities to take action to improve access to medicines and address neglected health needs around the world. We therefore chose to highlight university activities within these confines, not only to protect the methodological rigor of our evaluation and limit the challenges of data collection and analysis, but also to emphasize the importance of tackling these areas of global health at the university level.

Does the Report Card address research on chronic/non-communicable diseases (NCDs) like cancer, heart disease or mental health, which are increasingly prevalent in the developing world?

The Access section of the Report Card evaluates university activities that are absolutely essential to addressing the growing global NCD epidemic. When it comes to NCD research, the primary challenge is not that universities are failing to devote a large share of research dollars to cancer, heart disease or other leading NCDs; it is that their innovations are likely to come to market at astronomical prices unless they are patented and licensed in a socially responsible manner.

This is exactly the issue at stake in the recent Indian court ruling on Novartis’ leukemia drug Gleevec. The basic research behind Gleevec was conducted largely in academic laboratories, but the resulting technology was ultimately transferred to the drug company Novartis, which sought to enforce exclusive intellectual property rights in India on tenuous grounds in order to reduce competition from more affordable generic alternatives. Today, NCD innovations regularly come to market at prices of tens of thousands of dollars per patient per year (the U.S. price of Gleevec is approximately $70,000 per patient per year). Such medicines and treatments simply won’t reach low-income patients in the developing world unless steps are taken to promote locally affordable versions.

The bottom line is this: while we laud universities’ extensive and important research on globally prevalent non-communicable diseases, institutions that are seriously committed to impacting global health through NCD research must vigorously employ socially responsible licensing strategies to enable affordable generic production of resulting medicines in developing countries without delay.