Liability for Artificial Intelligence: What are the Contributions of the Social Sciences?

Anna Beckers, Maastricht University and Gunther Teubner, Goethe University Frankfurt

Many authors in the current debate on AI liability make an interdisciplinary short circuit: they connect technological characteristics of computers directly to liability rules and ignore the crucial interactions between technology and social behaviour. Thus, they remain locked in inadequate models of linear causation and simplified normative implications: technology determines legal liability. In contrast, our starting point is a typology of machine behaviour developed in technology studies: individual, collective, and hybrid.[1] To avoid the short circuit, however, we introduce the concept of ‘socio-digital institutions’. These are stabilised complexes of social expectations, particularly expectations about social behaviour and related risks, which arise regularly when social systems use the new digital technologies. Socio-digital institutions emerge from three fundamental types of human-algorithm contact. Individual machine behaviour denotes individually delineated algorithmic operations that humans can understand through communication in the strict sense. Hybrid machine behaviour occurs in densely intertwined and stable interactions between humans and machines; here, a human-machine association emerges as a new collective actor. Collective machine behaviour, in contrast, is an indirect linkage of humans to the interconnectivity of invisible machines. Each of these contacts creates a different socio-digital institution.[2]

‘Digital Assistance’: Individual machine behaviour, as analysed in IT studies, refers to the intrinsic properties of a single algorithm, whose dynamics are driven by its source code or design in interaction with the environment. These technical properties alone cannot determine whether or not algorithms qualify as autonomous actors. Instead, socio-digital institutions determine whether algorithms will have the social status of mere instruments, whether they will be agents in principal-agent relations, or whether they will become, as a potential future development, independent self-interested socio-economic actors.

For potential principal-agent relations, several social science theories clarify under which conditions the incipient institution of ‘digital assistance’ will emerge. If the delegation of tasks from a human actor to an algorithm creates two independent streams of social action, a principal-agent relation appears between them. Such principal-agent relations necessarily presuppose personhood for both principal and agent. Thus, a selective attribution of personhood to specific digital processes is needed. For this complex social process of personifying algorithms, several social theories deliver the relevant analytics.

Economic theory is relatively silent on this topic; more or less implicitly, it presupposes two rational actors as given. In contrast to such narrow rational-choice assumptions, sociological theory conceives personification as a performative act that institutes the social reality of an actor. In a complementary way, Actor-Network Theory defines the interactive qualities that transform an algorithm into an ‘actant’ different from a human actor.[3] Information philosophy defines the conditions under which algorithmic actions are determined as autonomous or non-autonomous.[4] Systems theory analyses in detail how, in a situation of double contingency, the emergent communication of human principals with algorithmic agents defines the algorithm’s social identity and its action capacity.[5] This does not happen everywhere; instead, each social context creates its own criteria of personhood for algorithms, the economy no differently than politics, science, moral philosophy, or law. Different social systems attribute actions, rights and obligations in various ways to algorithms as their ‘persons’ and equip them with specific resources, interests, intentions, goals, or preferences. Finally, political philosophy describes in detail how, in a relation of ‘representing agency’, the transfer of the ‘potestas vicaria’ constitutes the vicarious personhood of algorithms, ‘implying distinct risks and dangers haunting modernity’.[6]

As a crucial result of social personification, technological risks are transformed into social risks. Causal risks stemming from the movement of objects are now conceived as action risks arising from the disappointment of Ego’s expectations about Alter’s actions. Thus, in situations of ‘digital assistance’, a principal-agent relation with its typical communicative risks appears instead of an instrumental subject-object relation. Once this socio-digital institution comes into existence, the law will be required to decide, according to its own criteria, what degree of legal personhood it attributes to the digital actants. Liability rules coping with the action risks of digital actants differ substantially from rules reacting to the causal risks of mere objects. As a consequence, strict liability rules for industrial hazards are inadequate. Instead, in the principal-agent relation of digital assistance, rules of vicarious liability for the actant’s decisions are needed.

‘Digital Hybridity’: The social sciences’ contributions are quite different for hybrid human-machine behaviour, which is the outcome of closely intertwined interactions between algorithms and humans. If one attempted to apply the individualistic approach of principal-agent relations and to separate single human and algorithmic actions, one would fail to notice that collective actors have been established. These actors develop properties whose risks differ qualitatively from the risks of individual action within digital assistance. Digital hybridity has to deal with the transformation of single human-algorithm interactions into collective actorship. Here, the social sciences play their intermediary role between IT studies and legal doctrine differently: they show how social practices constitute human-machine associations.

Due to their adherence to methodological individualism, economic analyses are sceptical about the reality status of collective actors. They conceive them as a mere ‘nexus of contracts’ and judge their personification at best as an abbreviation and at worst as dangerous ‘errors’, ‘traps’ or ‘fictions’.[7] In contrast, sociology focuses closely on the differences in human-algorithm interactions.[8] They range from short-term, loose contacts to full-fledged human-algorithm ‘organisations’ with an internal division of labour and distribution of competencies. Each of these hybrids creates its own risks. In constellations of dense interaction, responsibility for actions can be established only for the hybrid entity as a whole, not for the individual algorithm or human involved.[9] The law would then have to react to the risks stemming from collective actorship. For these risks, vicarious liability is of no help. Instead, the law needs to develop collective liability rules, which, however, remain below the threshold of liability for a full-fledged legal person.[10]

‘Digital Interconnectivity’: In contrast to the other two constellations, collective machine behaviour refers to the system-wide behaviour resulting from the interconnectivity of machine agents. What we encounter here are heterarchical, interconnected processes between algorithms, not communication between humans and algorithms. Such a ‘collectivity without collective’ cannot be described as a deliberately designed network but only as a crowd of algorithms. When it comes to how society relates to such algorithmic crowds, social theory qualifies them as ‘invisible machines’.[11] Their impact on society is difficult to describe. There is neither direct communication between an isolated algorithm and humans nor a collectivity combining humans and algorithms. Instead, an interconnected crowd of algorithms exerts an influence on social relations that is only indirect but nonetheless massive. Interconnectivity between different algorithms influences social systems not through one-to-one connections but through a more diffuse structural coupling between algorithmic interconnectivity and human communication. As a result, legal liability cannot be attributed to any single algorithm among the many involved. Instead, fund solutions are needed, which require ‘political’ decisions by regulatory agencies to define the responsible industry sector.


  • [1] I. Rahwan et al., ‘Machine Behaviour’ (2019) 568 Nature, 477-486.
  • [2] For a detailed analysis, A. Beckers and G. Teubner, Three Liability Regimes for Artificial Intelligence: Algorithmic Actants, Hybrids, Crowds (Oxford: Hart, 2021).
  • [3] B. Latour, Politics of Nature: How to Bring the Sciences into Democracy (Cambridge, Mass.: Harvard University Press, 2004), 62 ff.
  • [4] L. Floridi and J. W. Sanders, ‘On the Morality of Artificial Agents’, in M. Anderson and S. L. Anderson (eds), Machine Ethics (Cambridge: Cambridge University Press, 2011), pp. 184-212, 192 ff.
  • [5] E. Esposito, ‘Artificial Communication? The Production of Contingency by Algorithms’ (2017) 46 Zeitschrift für Soziologie, 249-265.
  • [6] K. Trüstedt, ‘Representing Agency’ (2020) 32 Law & Literature, 195-206, 196 f.
  • [7] M. Jensen and W. H. Meckling, ‘Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure’ (1976) 3 Journal of Financial Economics, 305-360; F. H. Easterbrook and D. Fischel, ‘The Corporate Contract’ (1989) 89 Columbia Law Review, 1416-1448, 1426.
  • [8] A. Hepp, Deep Mediatization: Key Ideas in Media & Cultural Studies (London: Routledge, 2020).
  • [9] P. Pettit, ‘Responsibility Incorporated’ (2007) 117 Ethics, 171-201.
  • [10] For more details, A. Beckers and G. Teubner, ‘Human-Algorithm Hybrids as (Quasi)Organisations? On the Accountability of Digital Collective Actors’ (2023) 49 Journal of Law and Society (forthcoming).
  • [11] N. Luhmann, Theory of Society, Vol. 1/2 (Stanford: Stanford University Press, 2012/2013), 66; M. Hildebrandt, Smart Technologies and the End(s) of Law (Cheltenham: Edward Elgar, 2015), 40.
