Four major limitations to these types of cross-sectional data sources should be noted. First, although the academic career pathway is a longitudinal process, much of the available data cannot follow the same individual over a long period of time. Some faculty are surveyed in more than one SDR, but the SDR is not a panel study, even though it is longitudinal in its tracking of cohorts. Similarly, faculty may appear in more than one university study. Longitudinal data that cover most of an individual faculty member's career are rare; the most consistently available data are snapshots of faculty at different points in their careers, taken at the time of the survey.[113]

Large gaps also exist between the time periods selected for data collection. While some data are collected annually, such as the salary surveys conducted by the American Association of University Professors (AAUP) and the American Chemical Society's (ACS's) survey of the top 50 chemistry departments, most available data are not collected annually. Many university gender equity studies appear to be one-time events. The SDR is biennial.[114] The NSOPF has been conducted every 5 years since 1988, most recently in 2004.[115]

Second, the data may be biased or certain data points omitted. For example, doctoral graduates who fail to be hired and faculty who leave a university before or after tenure or promotion are less likely to be surveyed. Faculty who leave may exhibit different characteristics than faculty who stay. As a result, analysis is likely to be restricted to the population of faculty who may be termed "successful" and does not represent all faculty, nor does it allow us to address other critical factors that play a significant role in determining the career paths of men and women in academia. In addition, because these survey results are self-reported, data on productivity and job satisfaction may be biased, or faculty may simply misremember specific quantitative information from earlier stages of their careers.

Third, comparability across studies is a major limiting factor, both in comparing surveys from the same series undertaken in different years and in comparing different surveys. In the case of the SDR and NSOPF, both of which have been carried out multiple times, the questions, the way the survey is implemented, the sample size, and the response rate may all change. The NSF notes regarding the SDR:

There have been a number of changes in the definition of the population surveyed over time. For example, prior to 1991, the survey included some individuals who had received doctoral degrees in fields outside of S&E or had received their degrees from non-U.S. universities. Since coverage of these individuals had declined over time, the decision was made to delete them from the 1991 survey. The survey improvements made in 1993 are sufficiently great that SRS staff believe that trend analyses between the data from the 1990s surveys and the surveys in prior years must be performed very cautiously, if at all.[116]

A more difficult task is comparing several university studies. Universities have taken myriad approaches to evaluating and assessing the characteristics of their faculty, but concerns over comparability somewhat reduce the usefulness of the information gathered.

Fourth, in the interest of preserving confidentiality, surveys often provide aggregated information rather than the raw (i.e., individual) data. Confidentiality is certainly critical, but it means that some studies are less transparent in describing how the study was conducted and who was surveyed, making it more difficult to replicate or disaggregate the data and examine it differently. Readers are constrained by the findings reported by the scholars who compiled the data.