Artificial intelligence algorithms are often seen as ‘black boxes’ whose rules remain inaccessible. We must create a new scientific discipline to understand the behaviour of the machines that rely on them, just as we did for the study of animal behaviour. This is the perspective of Jean-François Bonnefon, who, along with 22 other scientists, has just signed an editorial in the journal Nature.
Our social, cultural, economic, and political interactions are increasingly mediated by a new type of actor: machines equipped with artificial intelligence. These machines filter the information we receive, guide us in the search for a partner, and converse with our children. They trade stocks in financial markets and make recommendations to judges and police officers. They will soon drive our cars and wage our wars. If we want to keep these machines under control, and draw the most benefit from them while minimizing potential damage, we must understand their behaviour.
Understanding the behaviour of intelligent machines is a broader objective than understanding how they are programmed. Sometimes a machine’s programming is not accessible, for example when its code is a trade secret. In this case, it is essential to understand the machine from the outside, by observing its actions and measuring their consequences. At other times, it is not possible to fully predict a machine’s behaviour from its code, because that behaviour changes in complex ways as the machine adapts to its environment through a learning process that is guided but ultimately opaque. In this case, we need to continually observe this behaviour and simulate its potential evolution. Finally, even when we can predict a machine’s behaviour from its code, it is difficult to predict how the machine’s actions will affect the behaviour of humans (who are not programmable), and how human actions will in turn change the machine’s behaviour. In this case, it is essential to conduct experiments in order to anticipate the cultural coevolution of humans and machines.
A new science for observing machines
A new scientific discipline dedicated to machine behaviour is needed to meet these challenges, just as we created the scientific discipline of animal behaviour. We cannot understand animal behaviour solely on the basis of genetics, organic chemistry, and brain anatomy; observational and experimental methods are also necessary, such as studying the animal in its environment or in the laboratory.
Similarly, we cannot understand the behaviour of intelligent machines solely on the basis of computer science or robotics; we also need behaviour specialists trained in the experimental methods of psychology, economics, political science, and anthropology.
A scientific discipline is never created from scratch. The behaviour of animals had been studied by many scientists well before the study of animal behaviour was formally established as a structured and independent discipline. Likewise, many scientists will recognise themselves in the discipline of machine behaviour once it is structured and identified. What matters most is for them to recognise one another, much more easily than is the case today.
By bringing together what is currently dispersed, we will enable researchers in machine behaviour to identify one another and cooperate across disciplinary boundaries. We will also make it easier for public authorities and regulatory agencies to rely on a scientific corpus that today is scattered and difficult to access, and for citizens to position themselves more clearly in a world disrupted by the emergence of intelligent machines.
This is the objective behind an appeal to researchers, public decision makers, and intelligent machine manufacturers that I recently published in the journal Nature with 22 European and American co-authors, including computer scientists, sociologists, biologists, economists, engineers, political scientists, anthropologists, and psychologists, who serve as researchers in public research organizations or universities, or work for companies like Microsoft, Facebook, or Google, the giants of artificial intelligence. We examine the broad questions that ground the field of machine behaviour, inspired by the questions that grounded the field of animal behaviour.
How is behaviour shaped, and how does it evolve?
One major question involves the social and economic incentives that shaped the behaviour initially expected from a machine. For example, what metric did an information-filtering algorithm on social media initially attempt to maximize, and what are the unexpected psychosocial effects of this initial objective?
Other major questions include the following: what mechanisms were used to acquire and modify behaviour? For instance, on what type of data was a predictive policing algorithm initially tested? If that data was biased against a particular social group, is the algorithm capable of amplifying this bias through its decisions, thereby becoming part of a spiral of injustice?
Identifying the environment in which a behaviour can be maintained or spread, or one in which it is destined to disappear, is also one of the larger questions we have explored. For example, could an open archive of autonomous-car algorithms enable the programming of one car model to spread quickly to all other models, before any particular problem can be detected by the regulator?
All of these explorations must be broken down to the level of an isolated machine, of machines interacting with other machines, and of hybrid collectives formed by humans and machines. They are all essential, yet today they are studied in dispersed fashion by communities that struggle to recognise one another. Bringing these communities together under the umbrella of a new science of machine behaviour will be a decisive step toward meeting the challenges of a world pervaded by artificial intelligence.
The analysis, views and opinions expressed in this section are those of the authors and do not necessarily reflect the position or policies of the CNRS.
Original content at: news.cnrs.fr/opinions/is-a-robot-just-another-animal…