10 pages, PDF
Published December 18, 2018
When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof.

The PP is thus used as a counterweight to the position that lack of evidence can always be construed as evidence of lack of harm. On the basis of the PP, regulators may ban the introduction, development or use of certain technological innovations until sufficient evidence is gathered about their harmlessness.
An alternative to such bans is to allow experimentation with the technology under strict conditions:

1. Experimentation is allowed only if there is no other way of acquiring the knowledge needed for a complete risk assessment;
2. The experimentation is controllable: it is closely monitored for possible harmful effects, the monitoring is fed back into a review of the risks, which in turn is used to decide whether it is safe to continue, and the effects of the experimentation can be contained if deemed necessary;
3. Everyone who may possibly be affected by the experimentation must give their informed consent and is allowed to opt out at any time;
4. Risks and benefits are proportional.

Bogner and Torgersen point out how the PP has been used as a policy tool for strictly political ends rather than with the good of the many in mind:
Not only social scientists had long suspected that risk and its perception is a political issue. Early on, critics of the PP had found the principle to be socially biased as it is said to be sensitive to risks associated with technological change or ecological interventions while being blind for risks from regulation [...Besides] the publics (and eventually politics) in different countries are sensitive for particular risks and not for others, subject to national patterns of cultural value preferences. As a result, precaution fosters regulation only if the risk addressed is politically relevant. Therefore, the PP fails to reduce overall risks as it ignores some of them. For example, avoiding potential environmental or health risks by prohibiting a technology does not make away with risks from older competing technologies and, in addition, may entail new risks from regulation, if only indirectly.

Responsible Research and Innovation (RRI) is a framework based on value-driven design and public participation. However, public participation is a slippery creature:
[T]here are severe challenges to participation, especially with respect to emerging technologies, along several dimensions: with regard to (1) social aspects, (2) the ‘issue framing,’ i.e., how to discuss what, (3) the timing of an event and (4) the definition of the problem to be addressed.

The authors discuss these challenges in more detail and, based on the results obtained for GM crop introductions into Europe, come to rather pessimistic conclusions:
The PP turned out to prevent not only risky developments but the implementation of the technology in general. Designed as a last resort tool to ensure that ambiguous risks would not lead to endless court trials and block the technology as such, it was applied when political decisions appeared impossible to defend. Referring to the PP allowed actors to use seemingly scientific arguments that nevertheless were politically grounded. The PP may have been intended as a reflexive way of dealing with potential risks; however, the controversy over GM food has never been a risk debate only. Rather, it had many roots like the widespread public unease over current agricultural food production systems. [...] This suggests that risk regulation may be an essential part of the regulatory process but an inappropriate tool to cope with political stakes.

Responsible Research and Innovation was intended to guide research and innovation practice toward societal acceptability while fostering innovation. However in practice, it often ends up in a mere tick-boxing activity filling in research proposals forms or in somewhat futile participatory activities as ends in themselves. Activities to involve stakeholder and the public without real impact on the decisions taken have an unclear remit and mostly serve to introduce bits of new technology to a public that has little to say about it. Referring to ethics and a poorly defined ‘responsibility’ of stakeholders (or even laypeople) does not solve the political problem of organizing the relevant sector, agriculture, in a way that would find support with stakeholders and critical citizens alike.

[R]egarding [PP and RRI's] common function of ‘making technology happen,’ both show a disappointing performance.

But does RRI "often" end up in a mere tick-boxing activity or in "futile" participatory activities? Evidence for such claims would have been most welcome in this article; the lack of it raises the suspicion of authorial bias. From this point of view, the article can be seen as throwing into interesting relief the tension between encouraging high-ROI, fast-track innovations and making sure they are safe.