AI-Listers - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
If you’ve been keeping up with the latest AI developments, you’ve probably heard of a fairly new chatbot service called ‘Character.ai’. Its function is pretty self-explanatory: it is an artificial intelligence chatbot web application that lets you hold a dialogue with fictional, historical, and celebrity figures. The cherry on top is that the user-generated chatbots have distinct personalities – so if you fancy a chat with a global superstar like Taylor Swift, or wish to debate with the literary detective Sherlock Holmes, pay a visit to Character.ai.
Now, some people (myself included) will view this kind of tech as fun to use. The application uses deep learning and large language models to generate authentic responses that match each character’s personality, speech, and behaviour. Users can rate responses from 1 to 4 stars, helping the AI better fit the precise voice and identity the user desires. The developers have also increased conversation memory, so the AI can recall messages from earlier in a conversation.
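For the technically curious, here is a minimal Python sketch of how per-response star ratings could, in principle, be aggregated into a preference signal for fine-tuning. Everything here – the names, the scale mapping, and the reward formula – is an illustrative assumption; Character.ai has not published how its feedback pipeline actually works.

```python
# Hypothetical sketch: turning 1-4 star user ratings into a reward
# signal for preference-based fine-tuning. All names and the scale
# mapping are illustrative assumptions, not Character.ai's design.

from dataclasses import dataclass, field
from statistics import mean


@dataclass
class CandidateResponse:
    text: str
    ratings: list[int] = field(default_factory=list)  # 1-4 stars per user

    def add_rating(self, stars: int) -> None:
        if not 1 <= stars <= 4:
            raise ValueError("ratings are 1-4 stars")
        self.ratings.append(stars)

    def reward(self) -> float:
        # Map the 1-4 star scale onto roughly [-1.0, 1.0] so that
        # poorly rated responses contribute a negative signal.
        if not self.ratings:
            return 0.0
        return (mean(self.ratings) - 2.5) / 1.5


# Usage: rank candidate replies by aggregated user feedback.
candidates = [
    CandidateResponse("Elementary, my dear Watson."),
    CandidateResponse("idk lol"),
]
candidates[0].add_rating(4)
candidates[1].add_rating(1)
best = max(candidates, key=lambda c: c.reward())
print(best.text)  # the higher-rated reply would be preferred in training
```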
This may sound like an advertisement for the app, and I do recommend dabbling with it to see what this kind of chatbot can do. However, as always, it would be irresponsible of me not to mention the potential drawbacks of an app with such advanced language modelling. First, the characters often resemble real people, which raises concerns about the privacy of individuals whose likenesses are being used without their consent. Threat actors could create a fake character impersonating a politician and use it to spread misinformation or propaganda. Character.ai can also generate realistic-looking text that is difficult to distinguish from genuine content. In a political context, this could be used to manipulate public opinion, push hate speech, or even interfere with elections.
Character.ai represents how far we’ve come in combining deep learning with human creativity. The app was built for recreational use rather than for generating accurate information, but individuals with bad intentions may view it as a means to facilitate their activities.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media, and follows a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.