Nowadays we hear about Artificial Intelligence all the time. Most emerging perspectives describe how AI can improve our lives in many contexts, from healthcare to design. But what if it could also have negative consequences on people’s habits?
For this blog post, we’d like to offer a more speculative perspective. What if, in the future, we no longer need to read long texts? Maybe AI will be embedded in our brains and we won’t need to focus or sustain attention anymore. The implications of this scenario could be many. Students, for example, wouldn’t even need to read books, because the technology would summarize them.
In a dystopian future, as Stuart Russell showed us in his book “Human Compatible”, AI will bring out the problem of control. How far can a machine decide for us? How will it be able to make these decisions on its own? Will it be able to evaluate what is right and what is wrong? Or will it be up to us, who programmed it, to decide? Are we pushing the concept of ethics to its limit?
Let’s investigate the implications of AI-driven design in typography. In type design, experiments with AI are still limited. Some of the most interesting are those by Erik Bernhardsson, a former Spotify engineer. One of his experiments shows how, starting from incomplete letters of a given typeface, a machine can reconstruct the missing glyphs. Another intriguing experiment was done by Barney McCann, who crafted an AI able to generate typefaces. He called it “a computer’s handwriting”. It’s coded in Processing, and the overall result is more artistic than functional.
Even if today we still have few examples of AI applied to typography, we find it interesting to speculate on it. We imagined the world 10 years from now and pictured the future of typography. From our perspective, AI will be able to extract portions of a long text and automatically select only those parts that are most relevant to know. Let’s assume that this summarizing feature will be available in just 10 years, and that we will have a complete understanding of AI by then. Will the portions of text shown to us be the same for everyone, or will they change from reader to reader? And if so, on what basis will they change?
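To make this speculation a little more concrete, here is a toy sketch of how such a “relevance extraction” could work in its crudest form: score each sentence by how frequent its words are in the whole text, then keep only the top-scoring sentences. Everything here (the `summarize` function, the sample text, the scoring rule) is our own illustration, not a description of any real product.

```python
# Toy extractive summarizer: keep the sentences whose words occur
# most often in the full text. Purely illustrative, not a real system.
import re
from collections import Counter

def summarize(text: str, keep: int = 2) -> list[str]:
    """Return the `keep` highest-scoring sentences, in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence: str) -> float:
        # Average word frequency acts as a crude relevance score.
        words = re.findall(r"\w+", sentence.lower())
        return sum(freq[w] for w in words) / len(words) if words else 0.0

    top = sorted(sentences, key=score, reverse=True)[:keep]
    return [s for s in sentences if s in top]

text = (
    "Typography shapes how we read. Reading takes attention. "
    "Attention is shrinking. Machines may soon pick what we read."
)
print(summarize(text, keep=2))
```

A sketch like this also hints at why the questions above matter: whoever defines the scoring rule defines what counts as “relevant”, and a personalized scorer would show different readers different portions of the same text.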
An interesting metaphor by Guido Vetere compares artificial intelligence to the Golems of Jewish tradition, clay humanoids created through dark rituals to be strong and obedient, but which often turn out to be treacherous and dangerous for humans. On the one hand, technology can be interpreted as a helpful resource that supports us in reading—as in the case of Focus Ex, an application developed for people with ADHD that helps them follow the flow of a text and maintain attention. On the other hand, offering too much support and letting the machine select what we will learn and what we won’t could lead to a cultural flattening and a lowering of our attention threshold.
We live in a historical period in which our average attention threshold is dropping very quickly. Studies suggest that over the last 15 years the average time for which we can attend to a specific stimulus fell from 12 to 8 seconds. As we become better and better at multitasking, we are also becoming mentally lazy. For the sake of our future, we must avoid pursuing such a dystopian path, bearing in mind that AI must support and help us, not completely replace our cognitive processes. Should we even try using AI to transform typography? Will we find the necessary balance between cognitive effort and support?