We know a lot about how and why people make decisions. Insights from behavioural economics and cognitive science are now often used to inform government policies in areas as diverse as taxation, social services and healthcare. We know that humans generally struggle to make good decisions under conditions of uncertainty and information overload. Decisions about risk and possible future gains or losses are also often fraught. Governments and businesses have become increasingly adept at helping citizen decision-makers navigate complex information environments and use information in ways that positively inform their decision-making. It is also now widely recognised that the ‘choice architecture’ of different information ecosystems affects how people perceive information, and how they use it to inform their decision-making. For example, the way an online store is designed influences purchasing behaviour: people make choices in this setting based on default options, price presentation and, often, pleasure.
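
To make the default-option effect concrete, the minimal, hypothetical sketch below (in Python) shows how whatever a choice architect pre-selects tends to become the outcome for a passive user; the product names, prices and default are invented for illustration, not drawn from any particular retailer.

```python
# Hypothetical sketch of 'choice architecture' in an online checkout.
# The options, prices and pre-selected default are invented; the point is
# that a user who does not actively re-decide inherits the architect's choice.

from dataclasses import dataclass

@dataclass
class Option:
    label: str
    price: float
    is_default: bool = False

def present_checkout(options: list[Option]) -> Option:
    """Return the option a passive user ends up with: the pre-selected default."""
    for option in options:
        if option.is_default:
            return option
    return options[0]  # otherwise, the first (most prominent) option wins

shipping_choices = [
    Option("Express shipping", 12.95, is_default=True),  # the architect's preferred outcome
    Option("Standard shipping", 4.95),
    Option("Free pickup", 0.00),
]

print(present_checkout(shipping_choices).label)  # -> "Express shipping"
```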

Yet while academia, industry and even governments have a solid understanding of how to ‘nudge’ decision-makers under benevolent circumstances, we are yet to fully consider the implications of decision science in the national security context. This is particularly important where choice architecture has been engineered to be coercive, because the outcomes of citizen decisions that affect national security matters – from disincentivising terrorist attacks and developing extremist disengagement programs to managing the social and economic impacts of protective measures on everyday Australians – are long-lasting and widely felt.

For the national security fraternity, online information disorder creates one of the biggest challenges to individual and group choice sovereignty. Online environments, according to Debra Satz, can be noxious markets facilitating “weak agency, harm to individuals, harm to society and vulnerability.” Putting a price on “things that cultural norms dictate should not be for sale” further creates opportunities for choice coercion. For example, Uber’s surge pricing during the 2014 Sydney siege reflected the real-world disconnect between automated algorithmic market forces and human situational awareness. But not all information disorder is the result of default programmatic automation. Some information disorder is specifically engineered to exploit noxious environments, turning to the manipulator’s advantage the complex choice-architecture systems built to analyse our every keystroke, click, scroll, pause and emotional reaction, and blurring the line between benevolent and coercive influence.
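
The disconnect between automated pricing and human context can be illustrated with a deliberately naive sketch; this is not Uber’s actual algorithm, just an assumed supply-and-demand multiplier that has no notion of why demand has spiked.

```python
# Deliberately naive surge-pricing sketch (not Uber's actual algorithm).
# The multiplier reacts only to the ratio of ride requests to available
# drivers; it has no concept of *why* demand has spiked, which is exactly
# the gap between algorithmic market forces and human situational awareness.

def surge_multiplier(requests: int, available_drivers: int,
                     cap: float = 4.0) -> float:
    if available_drivers == 0:
        return cap
    ratio = requests / available_drivers
    return min(max(ratio, 1.0), cap)

# Ordinary evening: demand roughly matches supply.
print(surge_multiplier(requests=120, available_drivers=100))  # ~1.2x

# Demand spikes because people are fleeing an emergency: the algorithm
# sees only numbers and raises prices at the worst possible moment.
print(surge_multiplier(requests=900, available_drivers=100))  # 4.0x (capped)
```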

For national security practitioners, viewing influence operations from an adversary’s perspective provides insights into why influencing decision-making at all levels is such a valuable pursuit. Influence operations are exceptionally cost-effective and limited only by the adversary’s own choices, particularly insofar as those choices stop just short of triggering an enforcement response.

Additionally, the speed with which influence operations can degrade their targets’ decision-making capability generates cascading effects, such as the diversion of resources and the denigration of legitimate information sources. While the costs of influence operations remain low, their yields will remain high.

So, what happens when our information environment becomes covertly coercive?

We are at least three years past day zero of the ‘misinfoapocalypse’, which means that our nation’s decision-making capabilities at every level have been actively constrained, if not, in some contexts, entirely compromised. Authoritarian governments benefit when citizens become overwhelmed by misinformation and ‘give up on trying to figure out the truth’, as seen, for example, in the coordinated polarisation of Swedish debates on national security. Worse, when the choice architecture delivered by organised content cartels is the result of social engineering by a foreign choice architect or a non-state actor, the effects generated can destabilise social cohesion and ignite cultural divisions.

Social media, for example, is engineered to serve audiences more of the content they like, engage with and respond favourably to. National security practitioners must develop a better understanding of the ability of individuals, groups and decision-makers to meaningfully disengage from that environment. Disengagement is not an easy pursuit: humans are neurophysiologically wired to seek out and remain in states of cognitive consonance, and online information ecosystems are engineered to keep people happy on-platform.
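
A minimal sketch of that feedback loop, assuming a toy scoring rule (the field names and weights are invented, not any platform’s real ranking system): content resembling what a user has already engaged with is scored higher, so each session narrows the feed further.

```python
# Toy sketch of an engagement-driven feed ranker (field names and weights are
# invented). Items resembling topics the user has engaged with before score
# higher, reinforcing existing preferences each session -- the loop that makes
# meaningful disengagement hard.

from collections import Counter

def rank_feed(candidates: list[dict], engagement_history: Counter) -> list[dict]:
    """Order candidate posts by affinity with the user's past engagement."""
    def score(post: dict) -> float:
        topic_affinity = engagement_history[post["topic"]]
        return topic_affinity * post["predicted_engagement"]
    return sorted(candidates, key=score, reverse=True)

history = Counter({"outrage_politics": 14, "gardening": 2})
candidates = [
    {"id": 1, "topic": "gardening", "predicted_engagement": 0.9},
    {"id": 2, "topic": "outrage_politics", "predicted_engagement": 0.6},
]

for post in rank_feed(candidates, history):
    print(post["id"], post["topic"])
# Post 2 ranks first (14 * 0.6 = 8.4 vs 2 * 0.9 = 1.8): prior engagement,
# not the content's standalone quality, drives what the user sees next.
```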

For national security practitioners this presents an unconventional challenge. The low level of understanding, among both the general public and national security practitioners, of what constitutes a coercive information environment and of its impacts on their choices and decision-making is hugely problematic. As a nation, we need to become better at identifying when we are being manipulated. So long as external state actors or their proxies are able to degrade individual autonomy, we are all at risk of making decisions that are not in our own best interests, let alone Australia’s broader national interest.

Of course, nation-states and their proxies are hardly going to admit their interest in contributing to or creating chaos in other sovereign nations; they will instead point to the social divisions that already exist in democratic and multicultural societies. Concurrently, individual members of conspiracy groups such as QAnon will argue that it is mainstream society that is the problem, and that the world has been, and continues to be, deceived by governments and elites.

If knowledge is power, then Australia’s power and security will be best served if we can benevolently establish a domestic ‘influence equilibrium’ – where all citizens and interest groups have an equal understanding of media literacy and influence operations. This has been achieved in countries such as Sweden, Denmark and Finland. Australia’s Defence Science and Technology Group has already started down this path to better understand and make sense of big data, but governments must take a broader view. Sweden’s move, for example, to establish a Psychological Defence Agency highlights the civil-defence approach required to mobilise the whole of Australian society towards achieving a similar level of sovereign information resilience.

Concurrently, governments have to move beyond contesting the information domain to a position where we can deploy choice architecture that shapes adversary decision-making, driving the costs of their influence operations up and their benefits down. While we may be starting this journey, we must not underestimate the generational commitment our adversaries have already shown in constructing choice pathways that undermine our values, or lose sight of the foothold domestic actors and proxies have already gained.

To effectively educate Australians about the motivations and methods behind influence campaigns, and about the impacts those campaigns have on how they perceive the world around them and base their decisions – at the ballot box, in the community and online – governments at all levels must invest in building critical thinking skills and provide citizens with ways to evaluate the legitimacy of information. Part of this work must include our national security leaders balancing the needs of public diplomacy with holding conversations of national importance about foreign – and domestic – interference and the impacts it has across Australian society.

Until people can tangibly understand and conceptualise how their decision-making is influenced at the micro level, comprehending the decision impacts influence has at a state or national level will remain elusive.

Nicole Matejic

Doctoral Candidate
University of Southern Queensland
