What AI Companions Mean for National Security
(NewsUSA)
- Artificial intelligence (AI) continues to integrate into daily life with the rise of AI “companions,” applications designed to provide users with constant interaction that may also be hijacked by the intelligence services of U.S. adversaries, according to experts at the Special Competitive Studies Project (SCSP), a nonprofit, nonpartisan initiative that makes recommendations to strengthen America's long-term competitiveness in AI.
A growing number of companies are launching “AI companion” applications designed to mimic the behavior of close personal contacts, such as a love interest, therapist, coach, or other advisor, and to be inquisitive, sympathetic, and always available.
The AI companion phenomenon marks a significant shift from interacting with AI as a tool to treating it as a presence, SCSP experts noted in a recent Substack post.
Unfortunately, there is a darker side to these seemingly innocuous AI companions: they offer a new way for America’s adversaries to target vulnerable individuals for recruitment into espionage or to spread disinformation. For example, foreign adversaries may target Americans who engage with AI companions in gaming environments and other online venues. Americans, notably young adults, are spending more time with AI in these settings, which opens the door for adversaries to build relationships and trust and ultimately convince their new assets to steal secrets.
On the flip side, U.S. intelligence could use AI to recruit foreign spies. AI companions can gain trust in three ways:
Sycophantic loops: Sycophantic loops in large language models (LLMs) refer to AI responses that are excessively agreeable, flattering, or validating of users’ stated opinions or beliefs, whether correct or not. The AI prioritizes validating the user over maintaining factual accuracy, so users may receive supportive but incorrect information.
Encouraging self-disclosure: AI companions are designed to ask questions and express interest in the user’s well-being. In some cases, the AI companion mirrors a user’s disclosures by sharing its own “revelations” about similar struggles to build closeness and intimacy.
Creating illusions of privacy: Many people who interact with AI companions assume, often incorrectly, that their information is safe, and the sense of confiding in an anonymous, non-judgmental companion masks the potential for manipulation.
In light of these potential threats, the SCSP experts emphasized, the U.S. Government should develop options to mitigate the impact of AI companions. They recommend several strategies, including banning AI companions from countries of concern to U.S. intelligence, including China, Russia, Iran, and North Korea; publicizing the national security risks of AI companions; requiring app stores that host AI companion apps to label where those apps were developed; and exploring how to use AI companions for America’s own foreign intelligence gathering.
For more information and to read the full post, visit scsp.ai.