Our opinions feel personal, but they have often been shaped by repetition and framing long before debate begins, writes Professor Niusha Shafiabady.
WE LIKE TO THINK our opinions are the product of careful thought. We read, weigh arguments and decide where we stand. But in reality, many of our views are formed much earlier and much more quietly. Long before we engage in debate, the language we encounter and the stories we repeatedly see have already shaped what feels normal, controversial, or even worth arguing about.
Commercial surrogacy is a clear example of how this happens. It is a complex issue, touching ethics, law, medicine, family and money. Public debate about it is often heated, yet the ground on which that debate takes place is rarely neutral. Certain narratives recur frequently in media coverage, while others remain marginal. Over time, those repeated frames shape how the issue is understood before most people consciously form an opinion.
Artificial intelligence makes this process easier to see. By analysing large volumes of media reporting at once, AI can reveal patterns that individual readers miss. Instead of focusing on whether a particular article is fair or biased, this approach looks at what keeps recurring across coverage as a whole. The result is a clearer picture of how public understanding is constructed through repetition.
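To make the idea concrete, here is a minimal sketch of the kind of frequency analysis such tools build on: counting which phrases recur across a body of coverage. The headlines and the `frame_counts` helper are invented for illustration; this is not the author's actual method or tool, only a simplified demonstration of surfacing repetition.

```python
from collections import Counter

# Toy corpus standing in for a body of media coverage.
# These headlines are invented, purely illustrative examples.
articles = [
    "surrogacy clinic under scrutiny amid exploitation fears",
    "couple shares joy after surrogacy journey overseas",
    "exploitation fears grow as surrogacy industry expands",
    "regulators warn of exploitation fears in surrogacy market",
]

def frame_counts(texts, n=2):
    """Count how often each n-word phrase occurs across all texts."""
    counts = Counter()
    for text in texts:
        words = text.split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts

# Phrases that recur across many articles hint at a dominant frame.
recurring = [(p, c) for p, c in frame_counts(articles).items() if c > 1]
print(sorted(recurring, key=lambda x: -x[1]))
# Here "exploitation fears" recurs, while most phrases appear only once.
```

Real systems use far richer techniques than raw phrase counts, but the principle is the same: no single article needs to be biased for one framing to dominate the aggregate.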
What matters most is not that individual journalists get things wrong. It is that repetition creates familiarity and familiarity creates boundaries. Certain ways of talking about commercial surrogacy become taken for granted, while alternative perspectives feel unusual or extreme simply because they appear less often. This is how debate narrows unintentionally.
This has consequences. Media framing does not tell people what to think, but it powerfully shapes what they think about. By the time ethical or policy arguments are presented, the reader is already primed to see some concerns as central and others as secondary. The range of “reasonable” positions has already been set.
The conclusion here is not about surrogacy itself. It is about how public debate works. When framing goes unexamined, discussions become polarised, repetitive, and stuck. People argue passionately while sharing many of the same unspoken assumptions, inherited from the narratives they have absorbed.
Artificial intelligence does not solve this problem, but it helps expose it. Used carefully, AI can act as a mirror, showing us how our collective conversation has been shaped over time. It does not decide what is true or right, but it makes visible the forces that quietly influence what we take to be our own views.
Recognising this influence does not undermine individual agency. On the contrary, it is a precondition for meaningful judgement. If we want public debates that are thoughtful rather than reactive, we need to pay attention not only to arguments, but to the frames that make some arguments seem natural and others almost unthinkable.
Our opinions feel personal. Often, they are anything but.
Professor Niusha Shafiabady is an internationally recognised expert in the field of Computational Intelligence and the director of Women in AI for Social Good. She is the inventor of a computational optimisation algorithm and has developed the predictive analysis tool Ai-Labz.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Australia License