Last week, Apple Intelligence's notification summary feature generated a piece of fake news about Luigi Mangione: it mistakenly concluded that the individual suspected of killing UnitedHealthcare CEO Brian Thompson had committed suicide.
While the error itself is not particularly shocking—AI systems make such mistakes all the time—it is surprising that Apple didn't prevent it from occurring in the first place…
AI errors can be either amusing or perilous
Modern generative AI systems can deliver remarkable outputs but are inherently not intelligent, and that shortcoming leads to some notable blunders.
Many of these missteps are humorous. A McDonald's drive-through AI kept adding chicken nuggets to an order until the total reached 260; Google's AI suggested eating one rock per day, citing a geologist; and Microsoft mistakenly identified a food bank as a tourist attraction.
However, there have also been serious instances of AI giving hazardous advice. Examples include an AI-written mushroom foraging guide that suggested tasting mushrooms to distinguish poisonous ones, navigation apps directing drivers into active wildfires, and the Boeing flight-control system linked to two air crashes that killed 346 people.
Or they can simply be embarrassing
The erroneous Apple Intelligence summary regarding a BBC News story was neither amusing nor perilous, but undeniably embarrassing.
Launched in the UK just last week, Apple Intelligence uses artificial intelligence (AI) to summarize and categorize notifications. This week, one of its AI-generated summaries erroneously suggested that BBC News had published a story asserting that Luigi Mangione, the individual arrested in connection with the murder of UnitedHealthcare CEO Brian Thompson in New York, had taken his own life. This claim is false.
This isn't Apple's first misstep—a previous Apple Intelligence notification summary incorrectly asserted that Israeli Prime Minister Benjamin Netanyahu had been arrested, when in fact the International Criminal Court had merely issued a warrant for his arrest.
The Mangione fake news incident was preventable
Avoiding these errors entirely may be impossible, as they are intrinsic to generative AI systems.
This is particularly relevant regarding Apple’s news notification summaries, which inherently provide partial insights into stories. Apple Intelligence is condensing an already shortened version of a story; thus, it’s unsurprising that things can go awry.
While Apple cannot eliminate all errors, it can take steps to avoid inaccuracies on sensitive topics. Filtering for keywords such as killing, killed, shooter, shooting, and death could flag a summary for human review before it is sent out.
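The gating idea above can be sketched in a few lines. This is a minimal illustration, not Apple's actual pipeline: the keyword list, function names, and routing strings are all hypothetical.

```python
# Minimal sketch of a sensitive-keyword gate for AI-generated summaries.
# The keyword list and function names are illustrative assumptions,
# not any real Apple Intelligence implementation.

SENSITIVE_KEYWORDS = {
    "killing", "killed", "shooter", "shooting", "death", "suicide", "murder",
}

def needs_human_review(summary: str) -> bool:
    """Return True if the summary mentions any sensitive keyword."""
    # Strip surrounding punctuation and lowercase each word before matching.
    words = {w.strip(".,!?;:\"'()").lower() for w in summary.split()}
    return not words.isdisjoint(SENSITIVE_KEYWORDS)

def route_summary(summary: str) -> str:
    """Route a summary: hold sensitive ones for review, publish the rest."""
    if needs_human_review(summary):
        return "held for human review"
    return "published"
```

A real system would need smarter matching (stemming, multilingual support, phrases like "takes own life"), but even this crude filter would have caught both the Mangione and Netanyahu summaries before they went out.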
In this case, the error was merely cringe-worthy, yet it’s easy to envision how an error regarding a delicate issue could incite public outrage. For instance, a summary that seems to hold victims of a violent act responsible could be very damaging.
Human oversight would indeed add to the workload for the Apple News team, but Apple could run a round-the-clock review with an investment equivalent to a handful of employees working in shifts. That seems a small price for Apple to pay to safeguard this nascent feature against potential public relations crises.
Photo by Jorge Franganillo on Unsplash