Decoding the dialect: The AI translational paradox - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
The other day, I came across this article from the Guardian on the use of AI translation in asylum applications. In it, a Brazilian refugee seeking asylum in the US was misunderstood not just by people, but by the AI translation tools acting as interpreters. The article highlights AI’s current inability to capture regional accents and dialects. In this case, that failure led to the refugee spending six months in ICE detention, unable to communicate with anyone.
AI technology is, of course, still developing, and it is understandable that driving down errors will take time. But deploying the technology before that process is complete has already had serious consequences in life-and-death matters. The Guardian isn’t the first to capture a story like this. A quick Google search turns up several accounts of people who were denied asylum because AI translation made minor errors with major consequences. In one, AI changed ‘I’ to ‘We’, leading officers to believe that more than one person was seeking asylum. In another, AI claimed a woman was trying to escape abuse from her boss, when in fact it was her father she was fleeing.
Generative AI has the potential to massively improve on the older, imperfect technology of machine translation. But cases like these show that we need a serious conversation about the ethics of using this new and still-developing technology in situations this complex.
Likely in response to these concerns, OpenAI updated its usage policies in late March 2023 with rules prohibiting the use of ChatGPT in ‘high-risk government decision-making’, including work related to migration and asylum. This is a start, but I can’t help wondering how many people were affected by these errors before the policy update. And since OpenAI’s capacity to enforce the rule is unclear, it won’t help those trapped in processes that simply ignore the policy for the sake of convenience.
Despite this, I am a proponent of new technology; I think there’s still more to learn than to lose. If we hadn’t embraced technology, I wouldn’t have the job I have today. I wouldn’t be able to send these thoughts out into the ether. And I wouldn’t have a title for this Digest, because I used ChatGPT to come up with it (I did combine two of its suggestions, though, so is it really cheating?).
BUT I don’t think our current conversations around the downsides of AI capture its effects on those who suffer most. We aren’t doing enough to highlight how deploying unfinished technology can harm people escaping war, abuse, and poverty, where one wrong decision can be catastrophic for those already just barely surviving. The conversation must include these groups if new technology is to get better for everyone. To that end, until AI technology and the conversation around it are more developed, we will always need human review and input; that will likely remain true wherever compassion, empathy, and understanding are vital. So, while we should certainly embrace new technology, we must do so in a human-led, technology-enabled way that minimises errors and protects the most vulnerable amongst us.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise covering the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media, and follow a three-pronged approach focused on content, behaviour, and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.