In its more than 130-year history the Financial Times has upheld the highest standards of journalism. As editor of this newspaper, nothing matters to me more than the trust of readers in the quality journalism we produce. Quality means above all accuracy. It also means fairness and transparency.
That’s why today I am sharing my current thinking on the use of generative artificial intelligence in the newsroom.
Generative AI is the most significant new technology since the advent of the internet. It is developing at breakneck speed and its applications, and implications, are still emerging. Generative AI models learn from huge amounts of published data, including books, publications, Wikipedia and social media sites, to predict the most likely next word in a sentence.
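The next-word prediction described above can be illustrated, in a deliberately tiny form, by a bigram model: count which word most often follows each word in a body of text, then predict the most frequent successor. This toy sketch (not anything the FT uses) captures the core idea that large models perform at vastly greater scale and sophistication:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """For each word, count how often every following word appears."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    counts = follows.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Illustrative training text (an assumption for the example)
corpus = (
    "the model predicts the next word "
    "the model learns from data "
    "the model predicts the most likely word"
)
follows = train_bigrams(corpus)
print(predict_next(follows, "the"))    # -> "model" ("model" follows "the" most often)
print(predict_next(follows, "model"))  # -> "predicts" (twice, vs "learns" once)
```

Real generative models replace word counts with billions of learned parameters, but the output is still, at bottom, a statistical guess at what comes next.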
This innovation is an increasingly important area of coverage for us and I am determined to make the FT an invaluable source of information and analysis on AI in the years to come. But it also has obvious and potentially far-reaching implications for journalists and editors in the way we approach our daily work, and could help us in our analysis and discovery of stories. It has the potential to increase productivity and liberate reporters' and editors' time to focus on generating and reporting original content.
However, while they appear to be very articulate and plausible, the AI models on the market today are ultimately prediction engines, and they learn from the past. They can fabricate facts — a failure known as "hallucination" — and invent references and links. If sufficiently manipulated, AI models can produce entirely false images and articles. They also replicate existing societal perspectives, including historic biases.
It is my conviction that our mission to produce journalism of the highest standards is all the more important in this era of rapid technological innovation. At a time when misinformation can be generated and spread rapidly and trust in the media in general has declined, we at the FT have a greater responsibility to be transparent, to report the facts and to pursue the truth. That is why FT journalism in the new AI age will continue to be reported and written by humans who are the best in their fields and who are dedicated to reporting on and analysing the world as it is, accurately and fairly.
The FT is also a pioneer in the business of digital journalism and our business colleagues will embrace AI to provide services for readers and clients and sustain our record of effective innovation. Our newsroom too must remain a hub for innovation. It is important and necessary for the FT to have a team in the newsroom that can experiment responsibly with AI tools to assist journalists in tasks such as mining data, analysing text and images and translation. We won’t publish photorealistic images generated by AI but we will explore the use of AI-augmented visuals (infographics, diagrams, photos) and when we do we will make that clear to the reader. This will not affect artists’ illustrations for the FT. The team will also consider, always with human oversight, generative AI’s summarising abilities.
We will be transparent, within the FT and with our readers. All newsroom experimentation will be recorded in an internal register, including, to the extent possible, the use of third-party providers who may be using the tool. Training for our journalists on the use of generative AI for story discovery will be provided through a series of masterclasses.
Every technology opens exciting new frontiers that must be responsibly explored. But as recent history has shown, the excitement must be accompanied by caution over the risk of misinformation and the corruption of the truth. The FT will remain committed to its fundamental mission and will keep readers informed as generative AI itself and our thinking on it evolve.