The Gadsby Syndrome

The Gadsby Syndrome in IS Research: Rev. 0.1


Brian Fitzgerald, University of Limerick, Ireland


‘Now, any author, from history's dawn, always had that most important aid to writing—an ability to call upon any word in his dictionary in building up his story. That is, our strict laws as to word construction did not block his path. But in my story that mighty obstruction will constantly stand in my path; for many an important, common word I cannot adopt, owing to its orthography.’

In reading the above passage, did you notice anything unusual? Probably not. However, the entire passage does not contain a single instance of the letter ‘e,’ the most commonly used letter in the English language, used nearly five times more often than any other letter. More remarkably, the passage is taken from a book called Gadsby, published by Ernest Vincent Wright in 1939, and the entire 50,110-word book contains not a single letter ‘e.’
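As an aside, the lipogram claim is trivial to check mechanically. The following minimal Python sketch (with the quotation’s punctuation lightly normalised to plain ASCII) verifies that the excerpt above contains no ‘e’ and tallies its letter frequencies:

```python
from collections import Counter

# The opening quotation from Gadsby, reproduced from the passage above
# (punctuation lightly normalised to plain ASCII).
passage = (
    "Now, any author, from history's dawn, always had that most important "
    "aid to writing - an ability to call upon any word in his dictionary "
    "in building up his story. That is, our strict laws as to word "
    "construction did not block his path. But in my story that mighty "
    "obstruction will constantly stand in my path; for many an important, "
    "common word I cannot adopt, owing to its orthography."
)

# Verify the lipogram holds for this excerpt.
assert "e" not in passage.lower()

# Letter frequencies: in ordinary English text 'e' would dominate;
# here it is entirely absent.
letters = Counter(c for c in passage.lower() if c.isalpha())
print(letters.most_common(5))
```

In ordinary English prose the check would fail within the first few words, which is precisely what makes Wright’s 50,110-word feat so remarkable.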

I have coined the term Gadsby syndrome to characterise that unfortunately large body of IS research which fails to engage with IS itself in any useful or meaningful way. For example, the focus may primarily be on the intricacies of the research procedure itself, perhaps over-designing an artificial research context for the sake of statistical analysis. Alternatively, and just as pathologically, the research may be primarily focused on some area other than IS, perhaps some form of social theory. In both cases, as a consequence, the IS topic itself, the ‘e,’ is largely absent.

The Gadsby story is truly fascinating. The original manuscript was burned in a fire at a Los Angeles library in 1939, and it is extremely difficult to obtain a copy of the book now. However, it is pleasingly appropriate that the book is not available in the e-world of Amazon! Ernest Wright had written a number of other books, such as the intriguingly-titled The Fairies That Run The World And How They Do It (1903), and was motivated to write Gadsby after reading an ‘e-less’ four-stanza poem, and being told that it would be impossible to write a book that did not contain the letter ‘e.’ In order to achieve the task, Wright tied down the letter ‘e’ on his typewriter, and managed to complete the novel in 165 days. Unfortunately, he died, aged sixty-six, on the day the novel was published in 1939.

Wright described some of the difficulties in writing the book: for example, the fact that the past tense of verbs almost always ends in ‘-ed,’ and thus such verbs were not available. He reveals his inherent chivalry in a charming admission that probably would not be politically correct today:

“The numerals also cause plenty of trouble, for none between six and thirty are available. When introducing young ladies into the story, this is a real barrier; for what young woman wants to have it known that she is over thirty?”

Endeavours such as Gadsby are actually not completely uncommon. In 1969, a Frenchman, Georges Perec, published a book, La Disparition, which did not contain the letter ‘e.’ It is apparently no easier to write in French without using the letter ‘e,’ but interestingly, La Disparition was translated into English in 1995 as A Void, and the translation also did not use the letter ‘e.’ Gadsby, however, has not been translated into French, surely an interesting project. Also, on the French school curriculum is an e-less book, La Modification, by Michel Butor.

There is also a literary form, called ‘e-prime,’ in which all variations of the verb ‘to be’ are removed. The rationale behind e-prime is that the various forms of the verb ‘to be’ lead to imprecision of expression, ambiguity, and even logical mistakes. A complete language exists, that of the Lakhota Dakota Indians, which does not contain any equivalent form of the verb ‘to be’ (Simanek, 2001). A related concept is that of the lipogram—a literary technique whereby certain letters or words are omitted. One of the purposes of the lipogram is to stimulate greater creativity. This could have resonances with IS research, since, by omitting the ‘e’ of IS in their research, IS researchers have indeed been capable of wonderful feats of creativity. This serves as a timely reminder that this paper is supposed to be about IS research, but is instead on the verge of diving into a discussion about topology, a concept with a very precise mathematical definition, but one much abused in the writings of philosophers such as Derrida. That would clearly be an illustration of the Gadsby syndrome, as this paper is supposed to be about IS, but it is surely a good sign of a paper if it can illustrate its own concepts. Anyway, back to IS research without further ado.

The paper is laid out as follows: In the next section, my worldview in relation to basic ontological and epistemological positions is stated. Following this, the relevance versus rigour debate, a frequently recurring one in IS, is reprised. This paper is not about the rigour v. relevance debate; the issue is considered mainly to illustrate that research from both perspectives is guilty of the Gadsby syndrome, that is, an absence of focus on IS itself. Also, the discussion elaborates an issue that is seldom made explicit in this debate, namely, relevance to whom. The paper then suggests some reasons why the Gadsby syndrome exists, and concludes with some recommendations as to how it might be rectified.

My Worldview

In the currently fashionable (but ultimately bogus) confessional trend, I will declare my worldview at the outset. I believe that the world is best viewed from the overall standpoint of interpretivism—there is no universal truth. Multiple realities exist as subjective constructions of the mind. Socially-transmitted terms direct how reality is perceived, and this will vary across different languages and cultures, and historical time-periods. Furthermore, uncommitted neutrality is impossible to achieve, and it is inevitable that understanding and interpretation are based on one’s own frame of reference. In a nutshell, context is everything. Nevertheless, I do believe that there is a difference between guns and roses (certainly from a topological perspective at the very least), but do not wish to consider whether being shot is a social accomplishment or a technical event.[1]

On the other hand, however, when it comes to conducting research, and even more so when it comes to the communication of research ideas and results, some degree of realism is unavoidable, and the process becomes inherently positivist: research papers are by necessity structured in a linear fashion; the research 'data' gathered is unitised and categorised to a greater or lesser extent; reductionism is present in that choices have to be made as to what should be included or omitted; some explanation and interpretation of the findings will be included, implying some degree of cause-effect; and some degree of 'objectivity' will be affected in so far as political and polemic tirades will generally be avoided. The view that all interpretations are equally valid is not really accepted—if it were, issues such as trustworthiness of the research would be largely immaterial. Rather, the account offered by each author will not be presented as simply another account. There will be an implicit assumption (sometimes explicit) that their contribution is more valuable and accurate than the contributions of previous researchers in the area.

Yet Another Reprise of the Rigour versus Relevance Debate

Clearly, the rigour v. relevance debate is an important one, since it has featured so prominently in the field (Benbasat and Zmud, 1999; Davenport and Markus, 1999; Davis, 2000; Galliers, 1994, 1995; Keen, 1991; King and Applegate, 1997; Lee, 1999; Lyytinen, 1999; Robey and Markus, 1998; Senn, 1998; Wainwright, 1999; Westfall, 1999; Zmud, 1998), and it has also been the subject of a vigorous and robust debate on ISWorld in 2001.[2] The debate has also featured in other disciplines, most notably management/organisational science. These terms are often conflated with positivism v. interpretivism, and quantitative v. qualitative, in an unresolvable debate that has been conducted in the IS field over many years, and, indeed, in many others (cf. Allen and Ellis, 1997; Fitzgerald and Howcroft, 1998).

While some might believe that the rigour v. relevance debate is thus largely vacuous, it is investigated here more tangentially and at a different level, with a view to surfacing some aspects that have not been explicitly elaborated in the past. For example, in the many discussions of rigour and relevance, the terms rigour and relevance are almost never elaborated; rather, there seems to be an assumption that these terms are self-explanatory, well-understood, or that a common-sense, intuitive notion of their meaning is sufficient. Furthermore, it is a value-laden debate, and some stereotypical myths are propagated.

Stereotype 1: ‘Rigorous Research’ as viewed by ‘Relevant Researchers’[3]

According to this stereotype, which is often assumed to be geographical, ‘rigorous researchers’ primarily focus on research method issues such as reliable measurement, seen as a sine qua non for any research endeavour, whereas realistic context and content issues are relegated to a poor second. Papers are forced to fit into a positivist hypothesis-testing, causation-seeking model, with hypotheses drawn from some ‘valid theory base’ (which seems to comprise anything as long as it has been previously published). These hypotheses are generally fairly innocuous and blindingly obvious, and ‘rigorous researchers’ know at least a dozen ways of rejecting the null hypothesis. ‘Relevant researchers’ would usually take these hypotheses as given, if indeed they would consider them worth investigating at all. Statistical testing will loom large, and these ‘tiny’ hypotheses will usually be proven in the statistically significant sense. At the basic level, ‘rigorous research’ will involve analysis of variance and regression analysis. At the more sophisticated level, more complex factor analysis and structural equation modeling can be expected. Experimental design is paramount, and, rather than risk a real world context, ‘rigorous research’ will often involve student subjects as surrogates for organisational workplace participants. Extremely inappropriate surrogate substitutions will be common, in that graduate students are often assumed to be valid surrogates for experienced practitioners while undergraduate students will then be taken as surrogates for novice practitioners. While these studies may appear elegant, they are pointless in that the lack of a realistic context means that the conclusions are not generally useful. 
Also, in this stereotype, a conservative, deterministic and capitalist orientation is expected from the ‘rigorous researchers’, who are assumed to be singularly consistent in their position with regards to ontology, epistemology and methodology, i.e., ‘hard’ adjectives such as positivist, objectivist, confirmatory, hypothetico-deductive, quantitative, nomothetic can all be applied in their entirety.

Stereotype 2: ‘Relevant Research’ as viewed by ‘Rigorous Researchers’

The so-called ‘relevant research’ (the term being self-proclaimed and therefore quite meaningless) is pretentious and affects sophistication, but in reality, standards in relation to research are actually quite sloppy. Research constructs are almost never validated, and interviewee perceptions are generalised wildly beyond what is appropriate. At the basic level, research will involve case studies of organisations, chosen apparently on the basis of a convenience sampling strategy. Sampling strategies and issues to do with construct development will rarely be adequately covered. Also, research methods such as action research (which is really just a fancy name for consulting[4]) will loom large. And the justification for the choice of action research will be that real-world contexts are unpredictable, and as such, any experimental control is always inappropriate. More likely, the researchers haven’t actually done the basic work which would be necessary to derive appropriate hypotheses, and action research is used to ‘dive in’ to the research situation. At the sophisticated level, research studies will employ more obscure interpretivist research methods, such as discourse analysis. In so far as there is a theory base, it will often be drawn from obscure and esoteric areas such as structuration theory, actor network theory, and the like. Thus, Habermas, Foucault, Giddens, Latour, Bourdieu can be expected to loom large at different periods. Also, a naïve radical and socialist orientation can be perceived. The position in relation to ontology, epistemology and methodology is assumed to be ‘soft,’ i.e., adjectives such as subjectivist, exploratory, inductive, qualitative, idiographic may all be applied in their entirety.


Both these stereotypes are obviously seriously flawed. The debate is often portrayed as a dichotomous one. However, these terms are too value-laden to be viewed as truly dichotomous; rather, they represent a miniature hierarchy in that each of the terms is viewed by the camp espousing it as being superior to the other. For example, relevance may be assumed to be more important than rigour, and, a little arrogantly, European research is often portrayed as scoring high on relevance while US research is classified somewhat dismissively as merely rigorous (cf. Benbasat and Weber, 1996; Galliers, 1995). However, the issue of relevance to whom is rarely addressed. Likewise, the meaning of rigour is often left implicit. Also, despite the denigration of ‘rigorous research,’ the power base still resides there, in that many of the most highly rated journals, in which the ‘relevant researchers’ would desperately (if perhaps secretly) like to publish, have been dominated by ‘rigorous research.’ Ironically, ‘relevant researchers’ often protest loudly that they never read the ‘rigorous journals,’ but one might assume that they would relax this if their own work was published.

Relevance to Whom?

Even though the question of relevance to whom is rarely explicitly considered in the rigour v. relevance debate, the assumption is usually that of relevance to practice (e.g., Hanseth and Monteiro, 1996; Lyytinen, 1999; Robey and Markus, 1998; Senn, 1998; Zmud, 1998). However, there are several other stakeholders that could be implicated, such as students, other academics in IS and other disciplines, funding agencies, and even society at large. Furthermore, in terms of relevance to practice, the issue of currency is critical, in that what appears to be irrelevant and esoteric research at present may be extremely important in the future. These issues are discussed next.

Relevance to Practitioners?

The charge frequently levelled against IS research is that it is of little relevance to practitioners, a charge that is not confined solely to IS research, but to organisational and management science research in general (Benbasat and Zmud, 1999). Benbasat and Zmud cite some negative opinions from business school deans about academic research, who state that “as much as 80% of management research in the field may be irrelevant,” and further that academics “say nothing in these articles and they say it in a pretentious way” (to which an academic would probably respond, “Pretentious? Moi?”).

However, it is not just the opinion of academic deans. There is much evidence of the perception of irrelevance to practice. In 1995, the Society of Information Management (SIM) decided to end their practice of bundling a subscription to MIS Quarterly, regarded by many as the premier IS journal (and regarded by many as not), as part of their membership fee. When subscription became voluntary, few SIM members chose to subscribe to MIS Quarterly, even at a heavily discounted price. This, despite the fact that the journal’s mission is to publish research targeted at information system managers.

Confirmation of the negative perception among IS practitioners of scholarly IS publications is common. Khazanchi (2001) reports the following:

Academic research, if it canned (sic) be called that, is usually aimed at providing subsidized vacation opportunities for faculty and staff.  Paper acceptance is generally determined by the number of paid attendees required to cover the hotel and personal costs of the organizers.

This is pretty blunt. Even the presumably misspelled “canned” could be interpreted as a Freudian slip in so far as it is suggestive of what one might do with academic research. Confirming this perception, Denning (1993) states that critics of university research have labelled it “the scientists’ welfare system that has many of the same scandals and defects of the social welfare system.” While these criticisms may seem strong, the concept of an ‘airplane ticket paper’ is not unknown, that is, writing a paper primarily to get funding to travel to a desired exotic location (such as Umeå!).

A very graphic illustration of the negative perception of academic research occurred at a premier international IS conference a few years ago: A very successful IS practitioner delivered an extremely insightful and thoroughly entertaining plenary address to the assembled congregation of IS researchers. He challenged the academic audience repeatedly, asking questions directly of individual academics. As a front row member of the audience, I kept my head determinedly down whenever he came near. Anyway, this was not one of those occasions where the academic audience publicly applauded politely and then later privately scornfully dismissed it as excessively simplistic and deterministic (an all too frequent scenario, unfortunately). At the end of the plenary session, the Conference Chair thanked the practitioner, and as a gesture of appreciation offered him the weighty, multi-volume set of the Conference Proceedings. Even before the dismay registered on the face of the speaker (and it did), the audience erupted into spontaneous laughter. Yet, if the very idea that a successful practitioner could learn from the most leading-edge research in the IS field is a source of hilarity, then, as an applied discipline, IS clearly has a credibility problem. As a postscript to the story, it is probably worth noting that the practitioner left the Conference without his ‘gift’.

But is relevance to practice an important issue for IS researchers? One might presume so, since some steps have been taken to try to ensure that IS research is more relevant to practice. The aforementioned MIS Quarterly created an Executive Summary feature to make its research more accessible. Presumably, since SIM cancelled the automatic subscription of its members, it was not actually perceived by the practitioners as relevant. Another attempt to bridge the gap to practice is to have practitioners deliver keynote addresses at academic conferences, but as mentioned above, these are often judged as overly simplistic and unstimulating by academic participants.

Some evidence that academics don’t really care about relevance to a practitioner audience is the fact that those journals which actually publish practitioner-oriented research are rated lower in academic tenure and promotional criteria. Yet, as Williams (2001) points out, “as rigorous as an academic journal may be, nothing is a greater professional risk than putting your ideas in front of 100,000 IT professionals for them to evaluate in a popular trade magazine.” Thus, publishing in a ‘rigorous’ journal with a readership of hundreds is probably a significant safety factor for researchers.

Also, the publication cycle from paper submission to eventual journal publication can often be measured in years. Thus, research on current challenges that practitioners face is not going to be available in an appropriate time-frame to be of any real use. There is no real evidence that any changes in the journal ranking criteria are being put in place to encourage submission to electronic journals, which could deliver the information more quickly to the audience. In a similar fashion, the publication cycle time for conferences is much shorter than for journals, and if timeliness of communicating research results to practitioners was important, then conference publications should be rated higher than journal publications, but this is clearly not the case.

Furthermore, the acceptance criteria for research publication are actually biased against practitioner papers. Thus, even when conference organisers and journal editors solicit papers from practitioners, the review process is often still mired in the standard regulations, thus resulting in rejection of these papers.

Also, few researchers are former practitioners, or make any attempt to closely couple themselves with practice. The stereotypical ‘rigorous researcher’ can insulate him or herself with statistical analyses of students in laboratory settings. The stereotypical ‘relevant researcher’ can insulate him or herself inside a complex web of social theory to the extent that no practical or useful result emerges. Indeed, there is no reward system to encourage researchers to become intimately associated with IS practice. Those IS academics who do turn their hand to commercial endeavours are often seen as avaricious treasure hunters. Also, former practitioners who return to academe are not well respected, and their professional expertise is completely undervalued (Heart, 2001).

Relevance to Students?

So if IS research is not directly relevant to practitioners, who else might it be relevant to? An argument frequently proposed is that IS research is relevant to students, be they undergraduates, graduate research students, or on executive MBA programmes. In this case, the argument generally takes the form of the strength of “weak ties” connection (Granovetter, 1973). That is, it is assumed that research papers would not be directly accessible to a practitioner audience, partly on the unspoken assumption that they do not have the intellectual capacity to understand the ‘depth’ of the research. Academic researchers interpret their work and that of others in scholarly research papers and translate it into a more palatable form for the consumers. However, if this was a really important consideration, timeliness of publication would be a more critical factor. Also, if the primary audience is the student one, then books would be much more highly regarded as a publication outlet. Yet, in the US, one study has ranked books as only worth 2.3 research papers, and in the research assessment exercise in the UK, books are not admissible at all. Yet, these are a publication format in which results can be provided in more readily digestible form for students. Furthermore, if the primary consumer of academic research is indeed the student, then it might be expected that researchers would seek an increase in their number of teaching hours. Not likely!

Another argument which is used to explain and defend the perception of irrelevance in academic research is that research is part of a bigger picture, and viewing any individual paper in isolation does not reveal the overall coherent picture of the relevance of the research, whereas when viewed in its entirety over a stream of publications, the research might well be deemed relevant. However, if this were the case, again, books, which can depict a complete stream of research, would surely be more highly valued. Also, there would be more evidence of a cumulative tradition in relation to research being carried out by individuals and in research clusters. But this evidence is sadly lacking in IS research.

A similar argument which could be advanced is that academic research falls into two broad categories: applied research, which might be relevant in the sense that it is more immediately applicable, and basic research, which is more esoteric, and perhaps of long-term benefit. Also, a basic research category would insulate researchers from the danger of being pushed into a research agenda dictated by the latest management or technological fad. This categorisation into basic and applied research is potentially useful, as research is never wholly predictable; therefore, research which seems esoteric and irrelevant today might be of enormous significance in the future. Examples might include research on neural networks, which could be of enormous importance to organisations who possess a huge amount of information in various databases and who wish to uncover possible patterns in the data—insurance companies, for example. Likewise, research today in biotechnology could play a similar role. However, if this was truly important to IS academics, the distinction between basic and applied research would be more formally established. Also, if there did exist a category of basic research, one could again expect more evidence of a cumulative tradition in certain basic research areas.

There is further compelling evidence that reaching a student, or indeed a practitioner, audience is quite a remote concern for IS researchers. As already mentioned, conference proceedings, with their short publication life-cycle and constant updating of research themes, represent a very good opportunity for disseminating relevant research in a timely fashion to both students and practitioners. However, these proceedings are printed in very limited print-runs by leading publishers who apply very strict copyright policies. The proceedings are ridiculously expensive ($150 per copy is not unusual), which clearly makes them unattractive to both students and practitioners. It would seem to be a simple matter to use print production companies, or the Internet, to disseminate low-cost copies. However, the principal argument for using established publishers with their exorbitant prices is that these publishers are able to fulfil recurring orders from university libraries around the world, thus ensuring that these proceedings are available for other academic researchers.

Relevance to Other Academics?

Since IS researchers do not seem to strive to make their research relevant to practitioners or students, another obvious audience is other academics. Here, there is plenty of evidence of a quest for relevance. The whole canon of received wisdom on how to construct papers suggests that suitable homage be paid to previous research, even if it is only to briefly acknowledge the muddled fumblings of others. Also, the first and principal reviewers of these papers are other academics, so pleasing this audience is important, often to the extent of employing the well-established tactics of citing the work of those who are potential reviewers or editors, and citing work from the journal to which the research paper is being submitted.

Citation analysis has been elevated to a scientific form. Sophisticated analyses to see how many others have cited a piece of work are conducted, and used as part of tenure and promotion applications. As a result, eliciting citations from other academics is a very important task. Also, since the citation is the important part, there is much superficial and transitive referencing whereby researchers provide blanket references to a whole stream of research (cf. Jones, 2000; Robey and Markus, 1998). Also, it is a circular phenomenon since certain ‘seminal’ papers will be routinely cited, often perhaps without being actually read. Thus, there is much shallow knowledge of what the main papers relating to a particular topic are, but the deep knowledge of the content of these papers is by no means guaranteed. Impact on practice is not really considered. Rather, academics submit papers to conferences and journals at great personal cost, provide reviewing services for free, and then pay exorbitant sums of money for the privilege of reading these journals and conference proceedings. And we are supposed to be the intelligentsia who can teach business strategy!

The model which best explains the apparently illogical behaviour of academics arises in the economics of reputation, namely signalling incentives (Lerner and Tirole, 2000). Summarising very briefly, signalling incentives is an umbrella term capturing both career concern and ego gratification incentives. The career concern incentive relates to the fact that academic researchers enhance future job prospects for tenure and promotion by publishing their work in scholarly journals. These publications are the principal criterion for career advancement. The ego gratification incentive is premised on peer recognition, which is very highly valued. Thus, research is pursued as an end in itself, becoming a kind of hobby which is intrinsically satisfying. Also, the sense of belonging to a community is gratifying and fulfils basic human needs. The academic community with its often undocumented norms and taboos becomes a closed system in which relatively pointless research can be pursued and published ad infinitum. In such a system, the age-old problem of means-ends inversion occurs, in that rather than conducting and publishing research (the means) with a view to addressing some relevant research problem (the end), publishing the research becomes an end in itself, and its practical usefulness is forgotten about.

Also, as a direct consequence of the career concern incentive, it is a more adaptive strategy to publish as frequently as possible. Slightly different papers, often with merely a change of title, are disseminated in as many outlets as possible. At first glance, this might appear to be a strategy for causing information overload, but most experienced academics understand the publication frequency, and can detect overlaps. Indeed, it is not uncommon for researchers to advise readers as to which of their papers is best in a range of publications which may have slightly different titles. However, it is unlikely that students and practitioners understand the nuances of this publishing game.

In order to attract research funding to perpetuate this cycle of research, research proposals seeking funding are submitted to large and remote bureaucratic bodies. The decision makers who review these proposals are frequently other academics; thus, the academic paper is a language they understand. Even when practitioners are part of the reviewing body for these funding submissions, the academic language is not a neutral factor; rather, the persuasive and authoritative rhetoric is hard to discount, and the practitioner reviewer perhaps believes that there is real substance to the proposal, even if he or she cannot readily detect it.


As already mentioned, this paper is not intended to be just a reiteration of the rigour v. relevance debate, but seeks to make explicit some aspects that have not been elaborated in previous research, and also to illustrate the presence of the Gadsby syndrome in both research perspectives. The issue of what is meant by rigour in IS research has been comprehensively dealt with in previous research, both by those who are in the ‘rigorous researcher’ camp, but also, quite significantly, by some in the ‘relevant researcher’ camp, who provide somewhat apologist recommendations as to how the standard principles of ‘rigorous research’ might be addressed in ‘relevant research.’

At a high level, these main principles are concerned with objectivity, reliability, and validity, which are very much inter-related. These principles and their inherent flaws have been widely discussed, both by those who reject them as inadequate, but also even by some of those who espouse the ‘rigorous approach.’ Summarising very briefly, objectivity, i.e., a neutral and detached position on the part of the researcher in relation to the research situation, is seen as necessary to avoid problems with researcher bias due to preconceived notions. However, it is now accepted even by proponents of the positivist approach that true objectivity is impossible to achieve. Reliability is concerned with ensuring that the research can be replicated, thus overcoming concerns about fraudulent claims, and also facilitating extra generalisability. But as Heraclitus pointed out centuries ago, you can’t step into the same river twice, so replicability is not actually feasible in social situations. Validity comes in two flavours, internal validity and external validity. Internal validity has to do with the internal consistency of the results, and the appropriateness of the relationship between research constructs, e.g., are changes in the dependent variables caused solely by changes to the independent variables. External validity is concerned with the generalisability of the findings to other situations. A major aspect of both reliability and validity is the desire for tight control of the research situation and the research artefacts. However, even in very tightly controlled experiments, there are many examples of studies which have investigated the same research topic, but whose findings are completely at variance with each other. 
For example, Hiltz and Johnson (1990), in their study of user-satisfaction levels with information systems, reviewed the findings of twelve previous studies which sought to identify reliable variables that could predict user acceptance of information systems. They found widespread divergence of findings for almost all the variables studied, even where the variables investigated were as clear-cut or trivial as ‘age of user’ or ‘speed of typing’. In a similar fashion, Jarvenpaa et al. (1985), strong advocates of quantitative research, refer to several quantitative studies which tested the suitability of graphical information presentation formats and arrived at diametrically-opposed conclusions.

Thus, there is a huge question mark over the value of ‘rigorous research.’ The need to simplify situations so that variables can be experimentally controlled leads to the omission of factors that are probably critical. Weick (1984) cites Vickers, who describes this phenomenon whereby some obviously important variable (the ‘e’ of the research, in fact) is omitted because it cannot be quantified, thus effectively valuing it at zero—the only value it cannot have. This illustrates the central problem which leads to the Gadsby syndrome in ‘rigorous research,’ namely, that a chain is only as strong as its weakest link, and it is futile to apply a complex methodological arsenal to research when the really important factors, which probably swamp everything else, are eliminated simply because they do not lend themselves to experimental control. Indeed, in many instances, the methodological arsenal obscures the actual limitations and sterility of what is really being researched.

As the examples cited above illustrate, small and fairly insignificant findings are established through statistical significance in individual studies; yet despite tight control, reliability remains a problem, as subsequent studies overturn previous findings, resulting in a body of findings that do not add up (cf. Huber, 1983).

These weaknesses are well known and readily acknowledged even by proponents of the approach. As Einstein, one of the greatest exponents of the scientific method in the history of mankind, expressed it:

Not everything that can be counted counts, and not everything that counts can be counted.

Again, if we pose the question of whether rigour really matters to IS researchers, the answers might be revealing. Certainly, statistical testing has become more sophisticated in recent years, but this is probably much more a direct consequence of the extra functionality provided in the SPSS software package than of any deliberate pursuit of rigour. One could also argue that rigour is improved by several iterations of the review cycle. Thus, an electronic submission, review and publishing environment actually allows for several additional iterations of the review process in the same time-frame as the conventional paper-based model. Yet there is no evidence of any demand from academics for extra rigour to be achieved in this manner.

Why Does the Gadsby Syndrome Exist?

Given that both rigorous and relevant research are equally culpable in relation to the Gadsby syndrome, a useful step towards addressing the problem is to try to explain why such a situation prevails.

The Gadsby Syndrome in ‘Rigorous Research’

The preoccupation with rigorous application of research methods might be defended on the basis that it is the right or only way to do research. Certainly, one could mount a valid argument that in a new discipline such as IS, the internal validity of research should be a priority as a means of developing a usable base of reliable theoretical constructs. However, Ross (1991) analyses the historical development of the American social sciences and argues that early in their development, political and ideological pressures caused researchers to avoid certain issues of content and concentrate instead on research method issues, as this was likely to be a much less troublesome route. In the specific case of IS research, it is perfectly understandable that researchers in a fledgling discipline would seek the comfort of the methods of natural science, as a means of making both their research and the IS field itself more ‘scientific’ and therefore respectable. For these reasons, the emphasis on the rigorous application of a research method at the expense of a consideration of a realistic context and content is much better explained as a legacy of history than as a free and informed choice made by each researcher anew on commencing a research endeavour. Researchers were thus keen to exhibit rigour in adopting a scientific, engineering-like approach. However, solutions to the problems of IS practice, such as the ‘software crisis’ and the like (Naur et al., 1976), did not immediately emerge. Researchers continued striving for more precision and rigour in their experiments to the point of spurious accuracy, whereby a raft of statistical techniques is applied to investigate constructs of little practical usefulness.

The Gadsby Syndrome in ‘Relevant Research’

In a sense, the failure of the rigorous scientific, engineering approach to provide appropriate solutions led to the development of the other camp, a case of definition by opposition. The crucial contribution of this camp was to recognise that IS is fundamentally a socio-technical discipline, and to emphasise and investigate the vitally important social aspects. These researchers faced an uphill battle from the beginning, initially in trying to legitimise the alternative research methods from the interpretivist perspective (cf. Mumford et al., 1985). Indeed, Lyytinen argued in 1986 that the socio and technical camps were unevenly balanced, with the dominance still being placed on technical issues. However, once the investigation of the social issues began, the pendulum began to swing in that direction. In the area of information systems development—one of the core areas of the IS field—one of the manifestations of this new emphasis was an increased interest in, and thus an elaboration of, the early phases of the systems development life-cycle—systems planning and requirements analysis—and also the latter phases of the life-cycle—implementation. Once underway, investigation of these early and later phases gained momentum, and the central problem, the design and development of the information system itself, was overshadowed, so much so that the idea that IS development is the core of the field, once taken for granted by many researchers (Cotterman and Senn, 1992), is now a very problematic and controversial notion (Bacon and Fitzgerald, 2001). Thus, the pendulum has now swung very much towards the social end, and papers and conferences may now even be measured by the amount of social theory in their content (Jones, 2000).
While obviously not advocating a return to naive technological determinism, which is clearly problematic (Lee, 1999), IS is very much a technological field; indeed, this is largely what defines the field for the lay-person, and a technological element would loom large in any academic description of the field to a lay-person. Obviously, the ‘e’ in IS research is some combination of the social and the technical, but this balance has not been clearly identified. Researchers have begun to consider this phenomenon of late in the field (e.g., Holmstrom, 2001; Orlikowski and Iacono, 2001). In a very detailed investigation of the issue, Orlikowski and Iacono analysed all the papers published in one of the leading IS journals, Information Systems Research, since its inception ten years previously. They bemoan the “nominal view” of IT, in so far as twenty-five percent of the published papers treated IT as if it were absent, and indeed, as they damningly note (2001, p. 128):

In many of these articles, we noticed that we could have substituted another term for “IS”—for example, “HR” personnel, “logistics” outsourcing, or “marketing” strategy—and the articles would still have made sense.

Other Factors Contributing to the Gadsby Syndrome

A number of further conditions in the evolution of the IS field have contributed to the emergence of the Gadsby syndrome. Firstly, there is no widely-accepted definition of what the core principles of IS actually are. Such core principles would help establish the ‘e’ of IS research. Their absence has ensured that there are no barriers to entry to IS research; anyone can come and plant a flag, and many have found refuge in the discipline. The ‘barriers to entry’ argument is certainly not a politically correct one. For many, the openness of the field is one of its strongest features (e.g., Robey, 1996). Certainly, if the debate is couched as one between the sclerosis of an introspective field which talks mainly to itself about itself, and the creative potential of an open field where research issues can be illuminated by several research traditions and methods, one would clearly opt for the latter. However, it is not a completely black-and-white issue, and, at the risk of being labelled a ‘closet Kuhnian,’ if the debate were couched as one between the efficiencies of communication among researchers working on well-defined problems, and the muddled confusion of researchers working on whatever takes their fancy without any regard for previous research, then one might opt for a more closed model.

Also, very few IS researchers have any real practical experience of IS practice (Benbasat and Zmud, 1999; Wainwright, 1999). Indeed, practical experience does not seem to be all that highly regarded (Heart, 2001). Yet, it is obviously more difficult to identify and conduct coherent research in an applied field if one does not understand what practice actually entails.

Putting the ‘e’ back into IS research

Several of the researchers mentioned above have made suggestions as to how the problem might be overcome. However, the problem is complex, and easy solutions such as creating an executive summary for journal papers, or inviting practitioners to deliver keynote addresses, are not likely to succeed. One is reminded of H.L. Mencken’s classic comment: “To every complex problem, there is a solution which is short, simple, and wrong.” Rather, since the problems are fundamental structural ones, the whole system must be changed, right through to the norms and reward systems that are intrinsic to the community.

Such revolutionary change is not easy to achieve. As a first step, researchers need to recognise the problem of the missing ‘e’ in IS research. They can then try to ensure that their papers have some degree of practical relevance, either immediately, or at least the potential to be relevant at some point in the future. Clearly, the ‘e’ in IS research hovers somewhere between the social and technical perspectives, and more attention could be paid to ensuring some balance in this dimension. Another useful strategy would be to accord practice more respect. If IS is to be regarded as a truly applied discipline, then researchers could be expected to have some degree of familiarity with the practice they purport to research. This is certainly the case to a greater or lesser extent in the other professions, such as law and medicine, for example (Davenport and Markus, 1999). Certainly, academic researchers could be encouraged to spend time, perhaps as part of sabbatical leave, working in IS departments in real organisations (Saunders, 1998). This would give them the intuitive familiarity with the topic that one should expect from professionals, and allow some kind of plausibility or reality check on some of the research topics being proposed. Also, given the pace of change in technology, such experience should be replenished every few years.

Certainly, a clear distinction between basic research and applied research would help. Obviously, again, it is a question of balance, and as mentioned earlier, not all research should be driven by an applied agenda, as this could lead to research driven only by the latest management fad, which would clearly be disastrous. The publication of applied research results in a more timely fashion in cheaper, more accessible media would also help, and these media should not be valued less than the traditional paper-based ‘rigorous’ journals. The ability to write in a practitioner-friendly format would be a useful skill which could be taught to researchers. Benbasat and Zmud (1999) provide a concise list of recommendations that would help to make the style and tone of research papers more accessible, none of which, in their view, compromise the rigour of the research. The non-mutual exclusivity of rigour and relevance is also a point made forcefully by Robey and Markus (1998), who argue for “consumable academic research” which strikes a balance between both. They suggest four key characteristics that consumable research should exhibit: an accessible style; a storyline that is novel and critical, yet constructive; a credible evidential base; and support from useful (and usable) logic and theory.

It should be noted, however, that the fault is not entirely with the academic community. Davenport (1997) suggests that the practitioner community has not in general been good at communicating its needs, or at building long-term research collaborations with the academic community. In relation to this, funding agencies are often located too far from the trenches. A research agenda focused on problems faced in everyday practice would be better administered by practitioner bodies. At the moment, funding decisions are in the hands of bureaucracies, and the funding game is best played by skilled academic researchers.

One of the most promising signs of recent times has been the increased legitimacy of action research as a valid means of researching IS. Among the central tenets of action research are that it should take place in a realistic context, and that the research should help solve some problem faced by the organisational participants. These two criteria of themselves predispose towards research projects that have some potential to be practically useful.

Also, the multiple publication of essentially the same research under different paper titles should be made much more transparent in the IS research community. Just as in software package version releases, different revision numbers could indicate the degree of progress in a paper. Thus, Rev. 0.n of a paper would indicate quite early work. After it has received some feedback and review from peers and practitioners, and become a more robust piece of work, the revision number might be incremented to Rev. 1.n, and so on. This works quite well with software packages and is a central feature of the open source software community. Also, just as in open source, one could bypass the lengthy publication cycles and disseminate research findings immediately via the Internet. Other researchers and practitioners could supply feedback, and their contributions could be acknowledged in the paper history. Again, such a practice is common in the open source community. A summary of the open source model is beyond the scope of this paper, but may be found in one of the landmark papers on the topic, The Cathedral and the Bazaar[5] by Eric Raymond, which had its first formal publication (Revision 1.16) in 1997. It has since been officially translated into 15 languages, and is now available as Revision 1.51. The paper also provides a history of the changes across each revision, along with a list of related and derived papers. The similarities between the open source model and academic research are striking and have been discussed by several researchers (cf. Feller and Fitzgerald, 2001). Currently, the open source model is being adopted in the legal area (Bollier, 1999), and in medical research, where Harold Varmus, Nobel prize-winning medical researcher and Director of the National Institutes of Health (NIH), has opted to publish NIH research directly according to an open source model.


Conclusion
This paper has described a common problem with IS research, whether it be of the so-called relevant variety or the rigorous variety, namely, a failure to engage meaningfully with the IS topic itself. Instead, the research focuses on over-design of the research artefacts, or on outlying subject areas, such as social theory, from which the concepts are drawn. The Gadsby syndrome, named after the ‘e-less’ book, is the term that has been coined to represent this phenomenon. Some explanations for the syndrome, and ways in which it might be addressed, have been proposed.



References

Allen, D. and Ellis, D. (1997) Beyond Paradigm Closure in Information Systems Research: Theoretical Possibilities for Pluralism, in Proceedings of 5th European Conference on Information Systems, Galliers, R. et al. (Eds), Cork Publishing Ltd., Cork, Ireland, pp. 737-759.

Applegate, L. (1999) Rigour and relevance in IS research – an introduction, MIS Quarterly, Vol. 23, No. 1, pp. 1-2.

Benbasat, I. and Zmud, R. (1999) Empirical research in IS: the practice of relevance, MIS Quarterly, Vol. 23, No. 1, pp. 3-16.

Chomsky, N. (1977) Language and Responsibility, Pantheon Books, New York

Cotterman, W. and Senn, J. (1992) Challenges and Strategies for Research in Systems Development, Wiley & Sons, Chichester.

Davenport, T. (1997) Storming the ivory tower, CIO Magazine, April 15, 1997.

Davenport, T. and Markus, L. (1999) Rigor vs relevance revisited: response to Benbasat and Zmud, MIS Quarterly, Vol. 23, No. 1, pp. 19-23.

Denning, P. (1993) Designing new principles to sustain research in our universities, Communications of the ACM, Vol. 36, No. 7, pp. 99-104.

Galliers, R. (1995) A manifesto for information management research, British Journal of Management, Vol. 6, Special Edition, pp. 1-8

Hanseth, O. and Monteiro, E. (1996) Navigating future research: judging the relevance of information systems development research, Accounting, Management and Information Technologies, Vol. 6, No. 1/2, pp. 77-85.

Hiltz, S. and Johnson, K. (1990) User satisfaction with computer mediated communication systems, Management Science, Vol. 36, No. 6, pp. 739-765.

Huber, G.P. (1983) Cognitive styles as a basis for MIS and DSS designs: much ado about nothing, Management Science, Vol. 29, No. 5, pp. 567-579.

Jarvenpaa, S., Dickson, G. and DeSanctis, G. (1985) Methodological issues in experimental IS research: experiences and recommendations, MIS Quarterly, Vol. 9, No. 2, pp. 141-156.

Keen, P. (1991) Keynote address: relevance and rigor in information systems research, in Nissen, H., Klein, H. and Hirschheim, R. (eds) (1991) Information Systems Research: Contemporary Approaches and Emergent Traditions, Elsevier Publishers, North Holland, 27-49.

King, J. and Applegate, L. (1997) Crisis in the case study crisis: marginal diminishing returns to scale in the quantitative-qualitative research debate, in Lee, A., Liebenau, J. and DeGross, J. (Eds.) Information Systems and Qualitative Research, Chapman & Hall, London.

Lyytinen, K. (1986) Information Systems Development as Social Action, University of Jyväskylä.

Mumford, E., Hirschheim, R., Fitzgerald, G. and Wood-Harper, A. (Eds) (1985) Research Methods in Information Systems, Elsevier Publishers, North Holland.

Naur, P., Randell, B. and Buxton, J. (1976) Software Engineering: Concepts and Techniques, Charter Publishers, New York.

Robey, D. and Markus, L. (1998) Beyond rigor and relevance: producing consumable research about information systems, Information Resources Management Journal, Vol. 11, No. 1, pp. 7-15.

Ross, D. (1991) The Origins of American Social Science, Cambridge University Press, New York.

Saunders, C. (1998) The role of business in IT research, Information Resources Management Journal, Vol. 11, No. 1, pp. 4-6.

Senn, J. (1998) The challenge of relating IS research to practice, Information Resources Management Journal, Vol. 11, No. 1, pp. 23-28.

Wainwright, D. (2000) Consultancy, learning and research: the on-going debate over rigour versus relevance in IS research, Proceedings of the 10th BIT Annual Conference, e-futures, Manchester Metropolitan University, November.

Wright, E. (1939) Gadsby, Wetzel Publishing Co., Los Angeles.

Zmud, R. (1998) Conducting and publishing practice-driven research, in Larsen, T and Levine, L. (Eds.) Information Systems: Current Issues and Future Changes, Proceedings of IFIP WG8.2 and WG8.6 Joint Conference, Helsinki, Finland, December, Chapman & Hall, pp. 21-33.


[1] These questions featured in the acrimonious debate between Kling and Grint & Woolgar in the early 1990s (see Grint & Woolgar, 1992; Kling, 1991; 1992a; 1992b; Woolgar & Grint, 1991), which is discussed in Holmstrom (2001). On a broader level, this type of ‘knowing aside’ is typical of academic papers, whereby readers are expected to be familiar with fairly obscure events from the literature, a feature of research much disparaged by Robey and Markus (1998). However, I do like the topological differentiation, and would be loath to remove it.


[3] The terms ‘rigorous research’ and ‘relevant research’ are used to denote these extreme positions. These terms are very problematic, but this is just a stereotypical account. Some might replace the terms rigorous and relevant with American and European respectively, but this is even more problematic.

[4] Interestingly, action research did enjoy a period of popularity in the USA in the 1940s and 50s, and later spread from there to Europe. However, it went into decline in the US in the 1960s (see Carr & Kemmis, 1986; McNiff, 2000).