The Future of Everyday Text
Magnus Söderlund, February 2025
As artificial intelligence continues to evolve, particularly the forms that generate humanlike text, we are approaching a world in which much of the text we encounter is produced by non-humans. The implications of this shift have mainly been discussed in an area where non-human text is still rare – literary fiction. It has been argued that literary fiction is an art form shaped by the unique experiences, thoughts, and emotions of the writer. A novel or short story reflects the author’s imagination, which makes his or her name – and authorship – an integral part of the reading experience: the identity of the text’s creator is intrinsic to its value and appreciation. Indeed, readers often seek out works by specific authors because of their distinctive voices and storytelling techniques. And, according to many who have discussed machine-made literary text, a machine cannot base its writing on the lived human experience that makes literary fiction captivating.
But the text that most people are exposed to most of the time is not literary fiction. It is the text that surrounds us in everyday life: news articles, social media posts, messages between friends and colleagues, teachers’ PowerPoint slides, birthday greetings, and advertisements, as well as descriptions of products and services, companies, and other organizations. Such text is increasingly AI-generated, too. Already today, much of it is hard to distinguish from human-written text. And the transformation of written material of this type can have profound effects on us.
One may distinguish between two main types of effects. The first is our immediate response to one particular text, such as when we react to the content of one specific email message or one news article. The second is probably more important from a transformation point of view: here it is the repeated exposure to recurring themes in the overall dose of messages that influences us. This, then, has to do with influence in the aggregate. A distinction along these lines has been made, for example, in studies of media violence, where it is argued that it is not the content of one particularly violent movie that makes us more violent; it is the aggregated volume of media violence from several sources, to which we are exposed again and again, that can make us more violent (or at least more aggressive). Similar arguments have been made about pornography, advertising, and television series when it comes to our beliefs about what the world looks like and our attempts to adapt our behavior to such beliefs. Seen in this light, it is indeed possible that repeated exposure to AI-generated text in everyday life can, in the aggregate, influence us. And it can do so in several ways.
One possibility, as AI-created text becomes easy for everyone to initiate, is that readers may begin to question the authenticity of what they read. This could lead to an erosion of trust not only in written content but also in interpersonal relations. For example, when people with whom we communicate use AI-based smart replies, they do not need to think about what to write. The result can be seen when we get “Yes, I can do that”, “I’ll get back to you on that”, and “Sounds good” as replies. And if I know that a person did not need to think much about a reply to me, what does that do to my view of that person’s view of me? Similarly, if I want to comment on something on LinkedIn but have no idea what to write, I can choose “Fantastic”, “I like this”, or “Thanks for sharing this”. And let’s say I want to write a postcard to someone but cannot make up my mind about what to write. No problem, I’ll ask my chatbot to generate the message. Here it is (I can use it for all my postcards):
“Just a quick note to say hello and send my best wishes your way! I hope all is well with you. I’m having a wonderful time, taking in new experiences and enjoying the moments as they come. Thinking of you and looking forward to catching up soon. Wishing you happiness and good times wherever you are!”
If I do this all the time, day in and day out, so that I am continuously exposed to “my” writing, which is not really mine, I may begin to question my own authenticity. If I cannot even respond to someone I know with my own words, who am I? And can I trust myself if it is unclear who I am?
Another possibility is that we – as writers – may feel demotivated to produce our own texts. Why should we, if there is easy help to be had? In general, we are effort-averse creatures, and writing can be very effortful. Yet there are potential benefits to writing. It can help us clarify our thoughts, it can foster critical thinking – it encourages us to evaluate evidence and consider counterarguments – and it can strengthen our memory of what we are actually doing. Many think that keeping a diary (i.e., writing about what happens and what we think about it) is helpful in many ways. It seems unclear, however, whether it would be helpful to the same extent if we asked our AI tools to produce our diaries.
It is also possible that we begin to feel devalued – as individuals and as a species – if we think that AI can produce text more efficiently and with a better net result. This can add further fuel to the thought that AI can outperform us. We have already seen this in several fields, such as chess, Go, Jeopardy!, and medical diagnosis. The agency aspect seems to be part of this: a real sense of being outperformed arises mainly when AI is seen as an autonomous agent. And if we feel that we are indeed outperformed, it is likely that we will also feel threatened. From there, it is not far-fetched to think that AI will take our jobs and, ultimately, destroy us. These are not favorable thoughts if we need to get used to co-existing with AI.
There is also a content aspect to prolonged exposure to AI-created text. Such text tends to have no obvious errors, high readability scores (i.e., it requires little effort to read), a more or less fixed average sentence length, and a generally positive tone, and it avoids elements considered bad, evil, and wrong (e.g., non-inclusive language and words for genitalia), because that is how the algorithms have been trained. This, too, is likely to do something to us in the aggregate: it could shift language norms and thereby influence our view of the world. Eventually, it could leave us less well prepared to deal with what is indeed bad, evil, and wrong. And what is bad, evil, and wrong is unlikely to disappear from our lives, and from our societies, just because AI-based text generators prefer not to write about these things.
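To make “readability score” concrete: one of the most widely used measures is the Flesch Reading Ease formula, which rewards short sentences and short words – higher scores mean less effort for the reader. The Python sketch below is a minimal illustration of how such a score can be computed; the constants are the standard Flesch ones, but the syllable counter is a deliberately crude heuristic of my own (real implementations use dictionary-based counts), and the function names are merely illustrative.

import re

def count_syllables(word: str) -> int:
    # Crude approximation: count runs of consecutive vowels (y included).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch Reading Ease = 206.835 - 1.015*(words per sentence) - 84.6*(syllables per word)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

postcard = ("Just a quick note to say hello and send my best wishes your way! "
            "I hope all is well with you. I'm having a wonderful time, taking in "
            "new experiences and enjoying the moments as they come.")
print(round(flesch_reading_ease(postcard), 1))

Under this rough heuristic, the generated postcard lands in the mid-70s, which the conventional Flesch scale reads as “fairly easy” – exactly the friction-free register described above.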
