|
Most people are now aware that technologically driven surveillance has become as pervasive in supposedly free Western countries as it is in the most repressive regimes on the planet.
As The Trends Journal has long covered and forecast, a growing web of surveillance has become a dystopian new norm: IoT (Internet of Things), cellphone and other device tracking; data scraping of individuals’ internet activities; AI-driven content surveillance of social media and elsewhere; government financial surveillance via supposedly private financial institutions; and government use of proxy tech corporations to surveil and suppress the political expression and rights of American citizens.
But surveillance of human activities can, for the most part, only tell authorities what people are doing and have done.
Increasingly powerful AI is opening the door to simulating humans interacting in environments.
Think of it as the classic “Sims” game, only with avatars that can be imbued and kept current with all the data and “personality” gathered from their physical human twins.
By using powerful AI to bring these avatars to digital life and allowing them to live and interact in a simulation, authorities could gain a predictive tool that takes surveillance to a new level.
ChatGPT Powers AI Avatar Simulation Experiment
Researchers from Stanford University and Google released a paper on 7 April 2023 detailing an experiment that created a simulated environment populated by generative AI-powered “humans.”
The paper, titled “Generative Agents: Interactive Simulacra of Human Behavior,” summarized its results in part:
“Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents–computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent’s experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors…”
The simulation, while crudely visualized with a retro Sims look by the team, goes far beyond a game in terms of its potential uses.
The researchers noted, regarding possible applications:
“How might we craft an interactive artificial society that reflects believable human behavior? From sandbox games such as The Sims to applications such as cognitive models and virtual environments, for over four decades researchers and practitioners have envisioned computational agents that can serve as believable proxies of human behavior. In these visions, computationally powered agents act consistently with their past experiences and react believably to their environments. Such simulations of human behavior could populate virtual spaces and communities with realistic social phenomena, train people how to handle rare yet difficult interpersonal situations, test social science theories, craft model human processors for theory and usability testing, power ubiquitous computing applications and social robots, and underpin non-playable game characters that can navigate complex human relationships in an open world.”
The sophistication with which these “computational agents” can be programmed, supplied with data, and brought into dynamic “life” via advanced generative AI chatbots like ChatGPT and Bard.ai is groundbreaking.
Millions of people have now gotten a taste of the power of generative natural language AI chat programs since OpenAI provided a public preview of its technology in November 2022.
More recently, Google has offered an “invite” access option to its rival Bard.ai chat program.
While the study’s authors don’t specifically mention real-world predictive surveillance of humans as a possible application, the system designed by the project could enable such a use case.
The authors described the technologies used to create their computational or “generative agents”:
“To enable generative agents, we describe an agent architecture that stores, synthesizes, and applies relevant memories to generate believable behavior using a large language model. Our architecture comprises three main components. The first is the memory stream, a long-term memory module that records, in natural language, a comprehensive list of the agent’s experiences. The retrieval model combines relevance, recency, and importance to surface the records that are needed to inform the agent’s moment-to-moment behavior.
The second is reflection, which synthesizes memories into higher level inferences over time, enabling the agent to draw conclusions about itself and others to better guide its behavior. The third is planning, which translates those conclusions and the current environment into high-level action plans and then recursively into detailed behaviors for action and reaction. These reflections and plans are fed back into the memory stream to influence the agent’s future behavior.”
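The retrieval step the authors describe, which combines relevance, recency, and importance to surface the memories that inform an agent's moment-to-moment behavior, can be sketched in a few lines of Python. This is a minimal illustration and not the authors' implementation: the class name, weights, decay constant, and toy relevance scores below are all assumptions.

```python
import time

# Hypothetical sketch of the paper's memory-retrieval idea: each memory
# record is scored on recency, importance, and relevance, and the
# top-scoring records are surfaced to the agent. Weights and the decay
# constant are invented for illustration.

class Memory:
    def __init__(self, text, importance, created_at):
        self.text = text              # natural-language record of an experience
        self.importance = importance  # e.g., 0.0-1.0, how significant the event is
        self.created_at = created_at  # timestamp of the experience

def recency_score(memory, now, decay=0.99):
    """Exponential decay per hour since the memory was formed."""
    hours = (now - memory.created_at) / 3600.0
    return decay ** hours

def retrieve(memories, relevance_fn, now, k=3,
             w_recency=1.0, w_importance=1.0, w_relevance=1.0):
    """Return the k memories with the highest combined score."""
    def score(m):
        return (w_recency * recency_score(m, now)
                + w_importance * m.importance
                + w_relevance * relevance_fn(m))
    return sorted(memories, key=score, reverse=True)[:k]

now = time.time()
mems = [
    Memory("woke up and cooked breakfast", 0.2, now - 8 * 3600),
    Memory("argued with a neighbor about the fence", 0.9, now - 48 * 3600),
    Memory("saw the neighbor at the cafe", 0.5, now - 1 * 3600),
]
# Toy relevance scores standing in for similarity to a query about "neighbor".
query_relevance = {0: 0.1, 1: 0.9, 2: 0.8}
top = retrieve(mems, lambda m: query_relevance[mems.index(m)], now, k=2)
print([m.text for m in top])
```

In a full system, relevance would typically come from semantic similarity between a query and the memory text, and importance from a model-assigned rating; fixed numbers stand in for both here.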
The study also notes that “past experience” (i.e., data that could conceivably be supplied from real-world subjects and their activities and experiences) is part of what can be integrated into AI generative agents:
“However, believable agents require conditioning not only on their current environment but also on a vast amount of past experience, which is a poor fit (and as of today, impossible due to the underlying models’ limited context window) using first-order prompting. Recent studies have attempted to go beyond first-order prompting by augmenting language models with a static knowledge base and an information retrieval scheme [52] or with a simple summarization scheme [104]. This paper extends these ideas to craft an agent architecture that handles retrieval where past experience is dynamically updated at each time step and mixed with agents’ current context and plans, which may either reinforce or contradict each other.”
It might be supposed that data from the physical world is so vast that no viable twinning into a simulation could be accomplished, or that such a simulation could never yield results specific enough to predict, for example, individual behavior.
But a simulation might not need comprehensive data, only the data shown to be most statistically consequential and relevant in predicting behavior.
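One common way to identify the most statistically consequential data streams is simple feature ranking: score each candidate stream by how strongly it tracks the behavior of interest and keep only the top performers. The toy sketch below uses Pearson correlation; the feature names and all numbers are invented for illustration.

```python
import math

# Toy sketch of the idea that a simulation needs only the most
# predictive data, not everything. Candidate data streams are ranked
# by the absolute Pearson correlation with an observed behavior, and
# the weakest streams can be discarded. All data are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-subject measurements for three data streams.
features = {
    "hours_online":  [2.0, 6.5, 4.0, 8.0, 1.5],
    "steps_per_day": [9000, 3000, 7000, 2500, 10000],
    "shoe_size":     [10.0, 9.0, 9.0, 10.0, 10.0],
}
behavior = [0.1, 0.8, 0.4, 0.9, 0.05]  # e.g., likelihood of some action

ranked = sorted(features,
                key=lambda name: abs(pearson(features[name], behavior)),
                reverse=True)
print(ranked)  # most consequential streams first
```

On this invented data, online hours and daily steps track the behavior closely while shoe size does not, so a simulation built on this principle would keep the first two streams and drop the third.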
Researchers have already used comparable modeling techniques to predict environmental outcomes in areas such as climate change.
As The Trends Journal has detailed, digital twinning is being used to gather relevant data to monitor, predict and evolve commercial processes and equipment in factories, and even, on an experimental basis, for whole cities.
Well before the release of this Stanford-Google research project, The Trends Journal outlined how AI, data collection and surveillance would likely power predictive surveillance of humans. (See “YOUR DIGITAL TWIN: THE BEST INFORMANT A GOVERNMENT COULD HAVE,” 9 Aug 2022.)
In that story, we noted concerning the increasing powers of AI and data collection:
“But the treasure trove of data and intelligence on virtually every activity people engage in, all used to animate an AI powered digital version of them, is likely to be exploited in a much wider range of uses. In short, HDT based predictive modeling and assessment could be utilized to gauge, influence and / or manipulate almost anything concerning an individual.”
One might wonder how intelligence agencies or the military might wish to employ AI-powered simulations to review possible employees, and predict their reliability and allegiance, before hiring them.
The recent example of Jack Teixeira, a 21-year-old member of the Massachusetts Air National Guard who was arrested this past week for allegedly leaking classified information about American military activities related to Ukraine, illustrates the point.
A CNN story reporting on the arrest of Teixeira began with the following:
“Three years ago, before he was granted top secret clearance by the US government and entrusted with highly sensitive eyes-only information intended for officials in the Pentagon, graduating high school senior Jack Teixeira chose the following motto for his yearbook: ‘Actions speak louder than words.’
“The phrase appeared next to a photo of Teixeira, who had already enlisted in the Massachusetts Air National Guard.
“Today, of course, those words appear prophetic.”
(“Man arrested in connection with intel leak: ‘Actions speak louder than words,’” 14 Apr 2023.)
One can only imagine how data on individuals might be collected and injected into simulations to predictively test their behaviors and inclinations.
The Stanford-Google project represents a new milestone that could enable a troubling surveillance technology that may prove to be no game for humans in still nominally “free” societies.