The Social Ascription of Obligations to Engineers

J. S. Busby and M. Coeckelbergh
Dept. of Mechanical Engineering, University of Bath, UK

Science and Engineering Ethics (2003) 9, 363-376

Address for correspondence: J. S. Busby, University of Bath, Department of Mechanical Engineering, Faculty of Engineering and Design, Bath BA2 7AY, UK; email [email protected]. Paper received, 18 September 2002; revised, 1 April 2003; accepted, 9 April 2003.

Keywords: ascribed ethics, risk, engineering, moral imagination

ABSTRACT: Discovering obligations that are ascribed to them by others is potentially an important element in the development of the moral imagination of engineers. Moral imagination cannot reasonably be developed by contemplating oneself and one’s task alone: there must be some element of discovering the expectations of people one could put at risk. In practice it may be impossible to meet ascribed obligations if they are completely general and allow no exceptions – for example if they demand an unlimited duty to avoid harm. But they can still serve to modify engineers’ prior ethics, for example by limiting a purely utilitarian approach to deciding who should bear risk and how much risk they should bear. Ascribed obligations can also give engineers insight into the public reaction to risks that arise from engineered systems, and the consequent expectations that the public have about how much protection is desirable and where the responsibility for this protection lies. This article analyses the case for taking ascribed obligations seriously, and reviews some of the obligations that have been ascribed in the aftermath of recent engineering failures. It also proposes ways in which ascribed obligations could be used in engineers’ moral development.

Introduction

The purpose of this paper is to examine the ethical issues arising from the risks that engineers cause for others, typically by designing products or systems from which hazards can arise. More particularly, however, the intention is to consider the ethics ascribed to engineers by the people at risk, and the contribution of these ascribed ethics to the ethics that engineers actually espouse. The principle is that engineers’ responsibilities are defined at least in part by social ascription,1 and that engineers or engineering organisations cannot simply define their own responsibilities. This then raises a number of questions – for example about the efficiency with which potential risk bearers’ expectations are communicated to engineers, the extent to which such expectations are consistent with one another, and the extent to which they can reasonably be met in a world of finite resources.

We have had two main aims in this article. The first has been to develop an argument that ascribed obligations are important to engineers, not because they provide pre-formed rules that engineers can blindly follow, but because they can be used to help engineers develop a capacity for moral imagination.2 The second aim has been to make inferences about the obligations that people have in fact ascribed to engineers in the aftermath of some recent engineering failures. These yielded some interesting, if not unexpected, insights into what people at large expect of engineers – for example in terms of whether improvements in technology should be taken as improved protection or improved performance.

The ascription problem

The relevance of ascribed obligations

The first argument to make is that engineering is not, in practice, morally neutral. Engineers are certainly constrained by the prevailing culture, client relationships and subordination to managers in the firms for which they work.3 They plainly do not enjoy complete freedom, and in many cases their work only involves the detailed embodiment of some general solution principle. Most of their decisions perhaps do not cause widespread harm or disquiet. When their decisions or their conduct do have the potential for harm they are often quite rigidly constrained by the law. For example, in the UK there are statutory regulations requiring major accident hazards to be treated by quantitative risk analyses, accompanied by predetermined acceptance criteria. Even if the artefact being designed by an engineer has the potential for harm, it will not have been the engineer who decided such a harm was the price worth paying for whatever benefit accrues. It will similarly not have been the engineer who determined what is an acceptable distribution of the potential harm. Generally, the degree of risk may be an engineering problem but its acceptability is not,4 so it will not have been up to an engineer to decide.
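To make concrete how such predetermined criteria frame, but do not settle, the engineer's judgement, the following sketch compares a computed individual risk against tolerability bands of the kind used in UK practice. It is a minimal illustration only: the event data are invented, and the threshold values are assumptions loosely based on commonly cited tolerability figures, not the statutory criteria themselves.

```python
# Minimal sketch of a quantitative risk analysis (QRA) comparison against
# predetermined acceptance criteria. All numbers are illustrative
# assumptions, not actual statutory values.

event_frequency_per_year = 2e-4    # assumed frequency of the hazardous event
prob_fatality_given_event = 0.05   # assumed chance a given individual is killed

individual_risk = event_frequency_per_year * prob_fatality_given_event

# Assumed tolerability bands (illustrative placeholders only)
BROADLY_ACCEPTABLE = 1e-6   # below this, the risk is broadly acceptable
INTOLERABLE = 1e-3          # above this, the risk is intolerable

if individual_risk >= INTOLERABLE:
    verdict = "intolerable: the design must be changed"
elif individual_risk <= BROADLY_ACCEPTABLE:
    verdict = "broadly acceptable"
else:
    # In the band between, the risk must be reduced 'as low as reasonably
    # practicable' (ALARP) -- a test the criteria do not decide mechanically.
    verdict = "tolerable only if shown to be as low as reasonably practicable"

print(f"Individual risk: {individual_risk:.1e} per year -> {verdict}")
```

The wide intermediate band is the point of the sketch: the criteria do not answer the acceptability question mechanically, which is precisely the latitude the next paragraph takes up.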

But the picture of engineering as morally neutral is misleading. Engineers do not simply implement managers’ goals5 because such goals are invariably incomplete. Telling someone to develop a design for a hazardous installation, within the law and subject to prevailing engineering standards, does not relieve engineers of the moral burden of deciding, for example, whether certain kinds of maintenance staff should be put at risk by adopting certain designs. Statutory risk acceptance criteria are also sufficiently imprecise to allow the engineer the latitude of asking ‘how could I justify the design I want to develop?’ rather than ‘how could I find the design that reasonably minimises risk?’ And the fact that a client might instruct an engineer that a certain risk is acceptable does not mean the engineer is relieved of the responsibility to consider the risk and its moral dimensions. The argument that the engineer is so constrained that his or her actions are morally neutral ignores the moral question as to whether the engineer should accept such constraints. It therefore seems to us that engineering is not morally neutral, in general, even if much of the engineering routine does not raise moral dilemmas.

The next argument to make is that engineering does not simply amount to utilitarianism. The importance of trade-offs in engineering decisions, the naturally quantitative inclination of engineers, and the advocacy of utilitarian decision models for engineers and managers all suggest that engineering is essentially utilitarian. The history of engineering, as an enterprise that has involved taking risks in order to find a better future, reinforces this. And some contemporary views go along with it. Loui6 states that most engineers use consequentialist reasoning, and Vesilind and Gunn7 suggest that engineers’ utilitarian orientation naturally puts them at odds with a public that does not develop expectations of engineering along utilitarian lines. But some recent work looking at engineers’ routine decision-making8 suggested that engineers only rather seldom perform cost-benefit analyses, and that they are much more often guided by duties such as following a ‘matching’ or reciprocation principle. Davis9 has suggested that engineers are in fact less inclined than managers to balance risk against benefit. There is also empirical research10 that found that both scientists and administrators reason about the ethics of research practice mainly in terms of what the investigator does, rather than in terms of what the final outcome turns out to be. The harm that might arise from an investigator’s actions was virtually irrelevant to judgments of whether the actions were ethical. We cannot assume from this that people would necessarily think about engineers’ ethics in the same terms, but in the absence of a similar study on engineers it seems to us quite likely that they would. The upshot is that a consequentialist orientation, both to examining engineers’ actions and to describing how engineers themselves make their moral determinations, would be inappropriate.
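The divergence between these two styles of reasoning can be made concrete with a toy example. The sketch below contrasts a pure cost-benefit rule with a crude duty-based rule of the ‘matching’ kind mentioned above; all names and figures are hypothetical, and the duty rule is our own stylised rendering, not the principle as reported in the work cited above.

```python
# Two stylised decision rules over the same hypothetical design options,
# showing how consequentialist and duty-based reasoning can diverge.

options = [
    # (name, build cost, expected harm cost, imposes risk on non-beneficiaries)
    ("design_A", 100_000, 40_000, False),
    ("design_B", 80_000, 55_000, True),   # cheaper in total, but shifts risk
]

def cost_benefit_choice(opts):
    # Pure utilitarian rule: minimise total expected cost, whoever bears it.
    return min(opts, key=lambda o: o[1] + o[2])

def duty_constrained_choice(opts):
    # Crude 'matching' rule: first exclude options that impose risk on
    # people who receive no benefit, then choose the cheapest remainder.
    permitted = [o for o in opts if not o[3]]
    return min(permitted, key=lambda o: o[1] + o[2]) if permitted else None

print(cost_benefit_choice(options)[0])      # design_B: lowest total cost
print(duty_constrained_choice(options)[0])  # design_A: risk-shifting excluded
```

The two rules select different designs from the same data because the duty-based rule attends to who bears the risk, not merely to the aggregate consequences of each action.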

The implication is that one cannot find the optimal ethical action by considering the properties of the available actions alone. There have as a result been various recent proposals for virtue ethics in engineering. Both Robinson and Dixon11 and Pritchard12 offer suggestions for suitable virtues. The difficulty with these virtues, however, is their provenance. Should the suggested virtues be whatever occurs to respected commentators? Should they be the product of engineers’ own contemplations? Some naturalistic theories argue that ethical conduct arises in self-affirmation, where the self is defined by its relationships with others.13 But again the tenor is of the self determining what matters in morality, without giving the others participating in the relationships a say. There is a basic difference between defining what is good for other people in one’s conduct by self-inspection, and defining it by acquiring empirical knowledge about other people and what they expect. Our view is that the latter is as important as the former.

The benefits of thinking about ascribed ethics

One argument in favour of discovering what responsibilities other people expect you to meet, as an engineer, is its causal importance: its importance in causing other people to behave in particular ways towards engineers and (in particular) engineered systems. In accident causation, the misconceptions of people operating complex systems about the responsibilities of designers, and vice versa, are prime contributory factors. For example, operators sometimes act as though they believe designers have a duty to make system boundaries obvious and benign, and on the basis of this ascribed duty they seem to predict that designers will in fact make boundaries obvious and benign. As a designer you may not feel you have obligation X, but if someone else expects you to have obligation X and bases their behaviour on it, and imperils their own and others’ safety in doing so, you may come to take on obligation X. Taking on X need not spring from a belief in the intrinsic rightness of X. A difficulty in this respect is that the operator-designer relationship is mediated by the product, and is often not a direct one. Designers and operators ordinarily cannot reveal and resolve discrepancies in ascribed and adopted responsibilities by direct negotiation.

A second argument in favour of considering ascribed ethics is that discovering one’s own ethics can be a process of finding out what others’ expectations are. It is rather like the activity of risk identification, in that the best risk identifiers tend to be people who go out and talk to others, make a wide range of contacts, read widely, and generally look to the external world to stimulate their imagination. Risk identification performed by isolating oneself from the world and engaging in contemplation is usually an impoverished exercise. It seems to us reasonable that, for most people, coming to an ethical view both about the particular and the general would be assisted, not obstructed, by finding out others’ ethical expectations. This is likely to be especially true where there is asymmetry in a relationship. Engineers have an expertise that many people bearing the risks of engineering lack, and the relative ignorance of risk bearers can contribute both to the perception of the risk and its substantive nature. It is quite possible, then, that others’ expectations would not be obvious to engineers, and would be material to the obligations engineers feel towards such others.

A third argument in favour of ascribed ethics is that moral imagination is an important element in one’s ethical conduct, and that discovering ascribed ethics assists with its development. The ability to imagine the implications of one’s actions, such as taking risks with others’ welfare in one’s product design, seems to us to be as important to morality as any general principle. Principled thought is arid if the principles are not applied to a profound and extensive understanding of the world. One can make a utilitarian calculation based on ridiculously narrow conceptions of what is good and harmful. One can consistently perform universalisable duties (like being truthful) with no good content (such as being truthful and at the same time highly ignorant). One can continually demonstrate virtues like moderation and yet be moderately engaging in behaviours that have no obvious goodness. The capacity to imagine others in each of these instances seems to be what is missing: to imagine the goods and harms experienced by others; to imagine the duties that others attach to one by virtue of one’s role or standing; to imagine the judgments that others make about what is virtuous and the desirability of consistent virtuousness. We are also suggesting that a powerful component in the development of this imagination, and its maintenance, is the discovery of ascribed ethics. The imagination cannot be developed by contemplating oneself and one’s task alone: there must be some element of discovering others’ expectations. The claim that imagination is a prerequisite to being moral does not imply that being imaginative is necessarily moral. A reviewer of an earlier version of this paper gave an example where engineers had used their imagination to plainly immoral ends. But the case that morality hinges on imagination does seem cogent to us.

In a recent paper on the problem of patient autonomy in medical ethics, Atkins14 drew on Nagel’s work to point to the subjective character of experience – the fact that a reductionist strategy can help us envisage what it would be like for us to have the characteristics of another being, but not what it is like for the other being to have those characteristics. There is a need, according to Atkins, for an epistemological humility with respect to the lives of others and what can be said to be right for them. Appreciating this subjectivity of experience lies at the heart of empathy. This is very much in tune with our own proposal, because we are arguing that it is not enough for engineers to know the facts of a risk that some piece of engineering imposes on people. Knowing the objective characteristics of a risk and putting himself or herself in the shoes of a risk bearer is not a sufficient process for the engineer to develop empathy. He or she must also find out what the risk bearer thinks, and should pay attention to the obligations the risk bearer ascribes to the engineer.

The upshot is that we cannot simply fall back on general, universal principles: it will still be informative to know about the responsibilities that people in other roles ascribe. These might turn out to be quite obvious, but even if they were, there might be virtue in being reminded of them when one is confronted with them. They might turn out to be highly unreasonable, but even then they might give useful insight into others’ expectations – expectations that one might then seek to modify. They might well turn out to be inconsistent, with different people having mutually exclusive requirements. But it might be possible to use them to refine one’s existing obligations. An engineer who stresses above all a duty of care might be dismissive of public consultation, on the basis that the participation of non-experts in decision processes diminishes the likelihood that optimally safe options are chosen. Discovering that the public ascribes a duty to consult and involve risk bearers might influence the engineer at least to broaden his or her inventory of duties – if not to re-think the commitment to duty ethics more generally.

A brief examination of ascribed ethics

Our approach has been to find a feasible, if limited, way of discovering ascribed ethics, and to develop some tools to help engineers think through moral problems with social ascription as a central element. The basic procedure was to analyse a series of reports on hazardous engineering failures written in the UK broadsheet press. This analysis involved making inferences about lay people’s expectations of engineers’ conduct towards technology that could cause harm. For example, one of the reports concerned the design of air bags in cars. A short person, who had had to sit very close to the steering wheel, was injured by the deployment of the airbag. The report described how the person in question felt aggrieved towards the designer because this protective technology was positively harmful to a small section of the population whose stature meant they had to violate a rule to sit at a certain distance from the device. Our inference was that people sometimes ascribe an obligation to engineers not to discriminate unreasonably among those they protect from harm. As with most of the ascribed obligations, it seems likely that those who ascribe them only do so because they have had reason to discover them through some set of events, typically by being the victims of a harm of some kind. There is therefore no suggestion that these ascriptions are general to society in any sense.

An attempt was made to generalise these ascribed obligations so that they were independent of any particular technology. For example, obligations that referred to airbags in cars were generalised to obligations about protective devices of any kind. The results are shown in Table 1. The first column is essentially a label, and the second a brief description, of each type of obligation.

It is important to emphasise that this was not an empirical piece of research to characterise the nature and variety of public expectations. It was rather an attempt to stimulate ethical development and understanding by making explicit the ethics that some people, especially in the aftermath of a harm, ascribe to engineers. Plainly there are severe limitations in this approach:

• Expectations evoked by some harm are likely to be of a harsh and demanding nature, and not necessarily expectations that the same people would have at other times.

• In many cases people will probably have no expectations on the matter in question until they or someone close to them is harmed; otherwise the matter will often simply not be relevant to them.

• Not everyone in the same circumstances would develop the same expectations, so they are hardly universal.

• Relying on journalists’ accounts of the harm and of people’s reactions and expectations is suspect.

• A sample of 50 reports is plainly not enough for inferring a full set of ascribed obligations.

Nonetheless, despite these limitations, the procedure gives us a way of generating ethics that could be ascribed to engineers at some time or other.

Observations

The expectations shown in Table 1 did not seem at all surprising, and some were rather trite – for example the obligation that engineers should incorporate the lessons of experience. If there is a virtue in citing such obligations it is that sometimes there will be good reason not to meet them (for example a particular historical experience may be irrelevant to a new technology), and people will expect some demonstration or argument as to why the obligation is inappropriate. Ascribed obligations are not simply there to be met, but reveal the legitimate concerns of a society to one of its professions. They perhaps reflect an increasingly assertive public that is ever more averse to risks not voluntarily borne, but it would be arrogant of engineers not to communicate about such concerns.

Table 1: Some examples of inferences from articles in the general press. Each entry gives the ascribed obligation, followed by a brief explanation.

Nondiscriminating conditions: Devices should not impose safe operating conditions that leave certain classes of user less protected or at risk of positive harm.

Uniform benefit: Technology improvement should be used to protect those at risk equally.

Unsurprising rules: Safe use of artefacts should not require the application of unintuitive rules.

Precautionary principle: Defective tests should lead to the inference that tested products are defective.

Knowing failure: Known failures, especially, should be averted.

Reliability gain: Technological progress should be taken as a reliability gain, not just a performance gain.

Required disclosure: Failures of products in service should be disclosed to the public.

Appropriate claims: Commercial exploitation of claims to safety should be matched by safety performance.

Implementing knowledge: Failures that can be predicted or envisaged should always be counteracted.

Incorporating experience: The lessons of experience, such as past failures, should be incorporated in new designs.