“Language is the stuff almost all human culture is made of,” writes Yuval Noah Harari, a historian and philosopher, in a recent By Invitation essay. Religion, human rights, money—none of these is inscribed in our DNA; they make sense only through language. In his essay, Mr Harari poses the question: “What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures?” The answer, he believes, casts a dark cloud over the future of human civilisation.
We have spent a lot of time thinking about the staggering potential of language-focused artificial-intelligence tools. We recently published a cover package that considers how to worry wisely about AI. We have written about how large, creative AI models will transform lives and labour markets; explained why it is too soon to fear an AI-induced jobs apocalypse; and considered how good China can get at generative AI. On balance we believe that, properly regulated, the new generation of AI tools can be a net positive for humans. But I will leave the last word to Mr Harari: “We should regulate AI before it regulates us.”