Uniform benefits: As technology improves, all risk bearers should gain in roughly equal or proportionate measure
Recommendations arising from failures should always be implemented
Systemic responsibility: Human error on the part of users and operators should not be regarded as the basic cause of failures
Independent oversight: Regulatory oversight should draw on independent information
Positive integration: Engineering organisations should not rely on market mechanisms to integrate a fragmented system
Timely investment: The no-right-time-to-invest problem should not be allowed to delay investment
Unpalatable obsolescence: Systems should be renewed if technology is developing significantly over time
Significant disagreement: Lack of consensus among engineering decision makers should lead to the application of the precautionary principle
Legitimate subjectivity: The engineering culture of objective, deterministic certainty should be circumscribed to allow reasonable doubt
Allowable dissent: Engineers should be able to show professional dissent from managerial decisions
Repeated failure: Repeated failures should not be allowed to occur, regardless of relative frequency
Distraction failure: Failures that are not safety critical but distract attention should be treated as safety critical
Remaining traces: Systems that have been designed to have autonomy should leave full traces of their decision making
Political subsumption: The risks arising in engineering projects should not be subsumed by political and commercial considerations
Meaningful reputation: Performance, for example in terms of reliability and safety, should match reputation
Cue provision: Engineers should avoid hiding cues that reveal danger even if this means reducing comfort
State-of-art investment: Investment should be made in state-of-the-art technology
Lifestyle consistency: Engineers’ lifestyles should be consistent with the values embodied in the technologies they are developing
Collateral protection: Technologies should not be developed without accompanying technologies that protect against their harms

It is perhaps also true to say that most of the expectations listed in Table 1 are non-utilitarian in requiring engineers to avoid some harm, without qualification. They do not, for instance, involve avoiding some harm only so far as doing so is reasonably practicable. This is probably inevitable given that such expectations have been voiced in the aftermath of failures of various kinds, when the victims, especially, are concerned not with the balance of risk and benefit prior to a failure, but with the actual harm that follows a failure. But the broadly non-utilitarian nature of the expectations is consistent with Vesilind and Gunn’s7 view that public morality tends to be deontological in nature, in contrast with what they claim is the utilitarian nature of engineering thinking. We argued in the Introduction that utilitarianism was not a particular feature of engineers’ reasoning, but it is true that the engineering task requires some reconciliation of trade-offs. The inferred expectations in Table 1 make little concession to this. For example, the obligation to avoid discriminating between risk bearers in the extent of the protection afforded to them would rule out any technology that cannot protect people equally well. It would rule out car airbags that protect most of the people most of the time, because people who have to sit too close to the airbag lose its protection and are exposed to a positive harm from its deployment. This is all fairly informative to engineers, showing that a utilitarian defence of a technological decision will not suffice, in some people’s eyes, in the event of some harm.

Some of the expectations in Table 1 were concerned with how benefits should be taken from technological improvements. For example, the ‘uniform benefits’ expectation was that as technology improved all risk bearers should gain, in roughly equal or proportionate measure. This was inferred from an article that described improvements in the safety of car occupants and the lack of improvement in the safety of pedestrians and others outside cars who were vulnerable to impacts with cars. The ‘reliability gain’ expectation was that technological improvement should be taken as a reliability improvement rather than a performance improvement. In other words, if there was some improvement in technology, it should be used in such a way that products become less likely to fail rather than, say, operate more quickly. This is rather similar to Reason’s15 point about how safety gains are often converted into productivity gains. The ‘unpalatable obsolescence’ expectation was that old systems should be renewed as technology improves, and should not be left in place, side by side with new systems that embody the latest technology: there is an essential unfairness for those who have to use the old systems. And the ‘collateral protection’ expectation was that technologies that produce some harm should not be put into effect until there are accompanying technologies to deal with this harm.

Not all the ascribed obligations were about the minimisation of harm, at least in a direct sense. The ‘meaningful reputation’ obligation, for instance, was that engineering firms should act in a way that is commensurate with their reputations, and the fault when they do not is a fault of deception as much as of poor workmanship. Reputation is important information in a market economy, and firms have a duty to maintain it. The argument that there is no moral element to this – that firms will simply suffer commercially if they do things that are at odds with their reputations – is inadequate because, by the time a reputation has been damaged, people who acted on it might have been harmed. Thus society tolerates variability in performance so long as it knows who the high and low performers are.

One could also argue that many of the ascribed obligations are relevant to managers in the engineering industry rather than to engineers themselves. For example, the obligations related to the distribution of technological improvements really point to managerial decisions about where and how to invest as much as to engineering decisions about how a technology should perform. This would be consistent with Goldman’s3 ‘socially captive’ view of the engineer. It is difficult, however, to draw a clear line between engineering and management within engineering firms, and it is perhaps as much a matter of self-image as of objective test when an engineer advancing through the management structure becomes a manager and loses his or her role as an engineer. It is also difficult to look at the decision, for example, of whether to take a technological improvement as a protection gain or a performance gain and say that it is clearly a management decision or an engineering decision. When, for instance, a new kind of device can be used to make a process faster or safer, it is not necessarily clear whether the decision to do one or the other is a managerial one or a technical one. Questions of both technical difficulty and commercial attractiveness will enter into the decision, and will in any case be inter-related. It seems to us reasonable, therefore, to treat the entries in Table 1 as obligations ascribed to engineers, but not to exclude their relevance to managers.

Finally, we wanted to discuss a case, described in one of the reports, where we felt that both the public and designers were neglecting a relevant duty. The case concerned an infant’s spoon that changed colour if the food it came into contact with was too hot for a child to eat. Both commentators and, evidently, designers saw this protection as desirable. But it is not hard to see that such protective products have the drawback of relieving parents of the need to develop their own judgment about hazardous conditions. This leaves parents less well prepared in situations where the device is unavailable, and in such situations perhaps more likely to hurt the child. There is thus a kind of protection to which both designers and users subscribe, but which is actually rather ambiguous in terms of its benefits. This is rather similar to the ‘levee’ effect cited by Fischhoff et al.16 Protective measures, for example against the effects of flooding, are seen as desirable by both engineers and the members of the general public who are thereby protected. But the defences lead people to neglect additional hazards that the defences do not tackle. It is also similar to the ‘abused redundancy’ effect that we have found in our own work. A designer sees that there is a possibility an operator will forget an action, so provides an automatic device to protect the system. It is quite likely that operators or their representatives will have brought such possibilities to the designers’ attention in the first place. In practice, however, when operators see such devices they often take their existence as an opportunity to neglect the action altogether and rely entirely on the automatic device. The device is designed to provide sufficient protection only when activated occasionally, so it soon fails when relied on routinely. The upshot is that engineering designers and users can both, inadvertently, advocate protection that is inappropriate or ineffectual. This seems to imply that designers cannot depend solely on ascribed obligations as a way of limiting their responsibilities, or of providing moral ideas. An engineer might find this unfortunate, since it would be a source of certainty to be able to say that if users want protection X they should be given it. But it does not seem defensible to do so. Thus ascribed ethics can be informative and arresting, and capable of affecting engineers’ morality, without taking on some higher status than other influences on that morality.

Conclusions
