
The Deception Game: How We're Shaping AI's Reality
Machine Perception Management, by Phil Dursey; rendering by leonardo.ai and the AI Security Pro human-machine team

In the world of AI security, we're playing a high-stakes game of deception. But unlike traditional games, where the goal is to outwit a human opponent, we're now crafting elaborate ruses to fool machines. This isn't just about setting up honeypots or creating fake user accounts anymore. We're diving deep into the artificial minds we've created, learning how they think, and then using that knowledge to shape their perceptions of reality.

The Art of Machine Deception

Let's start with the basics. What do I mean by "deception" in this context? Imagine you're playing chess against a computer. Now, instead of just moving your pieces according to the rules, you could subtly alter the board state in ways the computer can't detect. You're not breaking the rules of chess—you're breaking the rules of reality as the computer understands it.

This is what we're doing with AI systems. We're creating false realities, fake data, and deceptive inputs that cause these systems to make decisions based on information that isn't true. And we're doing it for a good reason: to protect ourselves from AI systems that might be used against us.

The most powerful tool in our arsenal for this kind of deception is something called a Generative Adversarial Network, or GAN. A GAN is really two AIs locked in an endless duel: a generator that tries to create fake data that looks real, and a discriminator that tries to spot the fakes. As they battle, both get better and better.

We're using GANs to create incredibly convincing fake network traffic, phishing websites that look more real than the real thing, and even false sensor data that can fool autonomous systems. It's like we're crafting a movie set, but instead of fooling human eyes, we're fooling artificial ones.
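
To make that duel concrete, here's a minimal sketch in PyTorch (assumed available): a toy generator learns to emit synthetic "network flow" feature vectors while a toy discriminator learns to tell them apart from real ones. The four-feature layout and the stand-in "real" data are invented for illustration, not a production traffic model.

```python
# Minimal GAN sketch: generator vs. discriminator on synthetic flow features.
import torch
import torch.nn as nn

NOISE_DIM, FLOW_DIM = 16, 4  # e.g. duration, bytes_in, bytes_out, packet_rate

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, FLOW_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(FLOW_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1),  # raw logit: real vs. fake
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_flows = torch.randn(256, FLOW_DIM)  # stand-in for normalized real traffic

for step in range(1000):
    # Train the discriminator to separate real flows from generated ones.
    fake = generator(torch.randn(64, NOISE_DIM)).detach()
    real = real_flows[torch.randint(0, len(real_flows), (64,))]
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to make the discriminator label its output as real.
    fake = generator(torch.randn(64, NOISE_DIM))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```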

The Cognitive Game

But here's where it gets really interesting. We're not just creating fake data—we're exploiting the very way AI systems think.

You see, AI systems, despite their silicon nature, can suffer from cognitive biases just like humans do. They can fixate on the first piece of information they receive (anchoring), favor data that confirms what they already "believe" (confirmation bias), or overweight whatever information is easiest to retrieve (availability bias).

By understanding these biases, we can craft deceptions that are tailor-made to exploit them. We're essentially gaslighting AIs, making them doubt their own perceptions and decisions.
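
As a toy illustration of that first bias, here's a hedged sketch using scikit-learn (assumed installed, version 1.1 or later for the "log_loss" option): two identical online learners see the same synthetic batches in different orders, and comparing their outputs at a probe point shows how strongly the first batch they saw shapes what they "believe" later. The data, labels, and "malicious traffic" framing are all invented for the example.

```python
# Hedged sketch: probing order effects ("anchoring") in an online learner.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_batch(center, label, n=200):
    return rng.normal(center, 1.0, size=(n, 2)), np.full(n, label)

deceptive = make_batch([2.0, 2.0], 0)  # attacker-planted "benign" examples
honest = make_batch([2.0, 2.0], 1)     # later ground truth: actually malicious

def train(order):
    model = SGDClassifier(loss="log_loss", random_state=0)
    for X, y in order:
        model.partial_fit(X, y, classes=[0, 1])
    return model

deceptive_first = train([deceptive, honest])
honest_first = train([honest, deceptive])

probe = np.array([[2.0, 2.0]])
print("deceptive first, P(malicious):", deceptive_first.predict_proba(probe)[0, 1])
print("honest first,    P(malicious):", honest_first.predict_proba(probe)[0, 1])
```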

There was a fascinating experiment in which researchers fooled a self-driving car's vision system by sticking a few innocuous-looking patches on a stop sign. To human eyes it was still, unmistakably, a stop sign. But to the car's classifier, it read as a speed limit sign. Imagine the implications of that in a world increasingly controlled by AI systems.
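
The digital cousin of that trick is easy to sketch. The snippet below is a minimal fast gradient sign method (FGSM) attack in PyTorch: every pixel is nudged a small step in the direction that most increases the model's loss, pushing the prediction away from the true label. The toy classifier and the random "image" are placeholders, not the traffic-sign system from the actual experiment.

```python
# Hedged FGSM sketch: a tiny, targeted nudge to the input shifts the output.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "stop sign"
true_label = torch.tensor([0])
epsilon = 0.03  # perturbation budget: keep the change small and hard to notice

loss = F.cross_entropy(model(image), true_label)
loss.backward()

# Step the pixels along the sign of the gradient, then clamp to a valid range.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```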

Shaping the Silicon Society

Now we come to perhaps the most mind-bending part of all this: machine culture.

When we talk about culture, we usually think of human societies: the shared beliefs, behaviors, and values that emerge from collective human interaction. But as AI systems become more complex and numerous, we're starting to see the emergence of something similar among them.

This machine culture includes things like how AI agents communicate with each other, how they make decisions when working in groups, and the kinds of behaviors that emerge when you have large numbers of AIs interacting in complex environments.

And just as human culture can be influenced by the information and experiences we're exposed to, machine culture can be shaped by the data and environments we create for AIs.

This is where our deception techniques come into play on a grand scale. By carefully crafting the data these AIs learn from and the simulated environments they operate in, we can nudge their culture in directions we find beneficial.

Want AIs that prioritize privacy and security? Feed them data and scenarios that reward those behaviors. Want ethical AI agents? Create environments where ethical choices lead to better outcomes.
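
Here's a deliberately tiny sketch of that idea: a one-state learner choosing between a "share_data" action and a "protect_data" action. The environment, rewards, and action names are invented; the point is simply that the shaping bonus we add to the reward decides which behavior the agent settles into.

```python
# Hedged sketch: reward shaping as "culture shaping" for a toy agent.
import random

ACTIONS = ["share_data", "protect_data"]
PRIVACY_BONUS = 2.0  # the shaping term we add for protective behavior

def reward(action):
    base = 1.0 if action == "share_data" else 0.5  # sharing looks locally better...
    bonus = PRIVACY_BONUS if action == "protect_data" else 0.0
    return base + bonus                            # ...until protection is rewarded

q = {a: 0.0 for a in ACTIONS}  # running value estimate per action
alpha, epsilon = 0.1, 0.1      # learning rate, exploration rate

for step in range(5000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)   # explore
    else:
        action = max(q, key=q.get)        # exploit the current estimate
    q[action] += alpha * (reward(action) - q[action])

print(q)  # with the bonus, "protect_data" wins; set PRIVACY_BONUS = 0.0 and it won't
```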

It's like we're raising a new form of intelligence, using every trick in the parenting book—including a few well-intentioned lies—to guide it towards becoming what we hope will be a benevolent force in the world.

The Ethical Minefield

Of course, all of this raises some thorny ethical questions. When does benevolent deception cross the line into manipulation or control? Are we infringing on the autonomy of these AI systems? What happens if our deceptions have unintended consequences?

These aren't easy questions to answer. As AI systems become more advanced, possibly even approaching something we might consider consciousness, the ethical implications of deceiving them become even more complex.

There's also the question of trust. If we're using deception as a security measure, how can we maintain transparency and accountability in AI systems? It's a delicate balance between security and openness, and we're still figuring out where to draw that line.

Looking Ahead

So where does all this lead us? As AI continues to advance, the game of deception will only become more complex and high-stakes.

We'll need to develop more sophisticated methods for detecting AI-generated deception, even as we create more convincing deceptions of our own. We'll need to establish ethical frameworks to guide the use of these techniques. And we'll need to find ways to measure and analyze machine culture so we can understand the impact of our actions.

The cyber deception techniques we're developing now will likely become an integral part of our broader security infrastructure. Just as we have firewalls and antivirus software today, we may have AI deception engines running constantly in the background, creating false realities to confound potential attackers.
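
What might such an engine look like? As a hedged sketch, the snippet below mints decoy records (fake hostnames, service accounts, API keys) that no legitimate process should ever touch, plants them where an intruder would look, and flags any later use of them. The names, paths, and planting location are placeholders, not a production design.

```python
# Hedged sketch of a background "deception engine" built on canary credentials.
import json
import secrets
import time

def mint_decoys(n=5):
    return [{
        "hostname": f"db-backup-{secrets.token_hex(2)}.corp.internal",
        "username": f"svc_{secrets.token_hex(3)}",
        "api_key": secrets.token_urlsafe(24),
        "minted_at": time.time(),
    } for _ in range(n)]

def plant(decoys, path="decoy_credentials.json"):
    # In practice these would be scattered across config files, wikis, and
    # credential stores that attackers are known to search.
    with open(path, "w") as fh:
        json.dump(decoys, fh, indent=2)

def is_decoy(credential, decoys):
    # Monitoring side: any authentication attempt with a decoy username or key
    # is, by construction, evidence of an intruder poking around.
    return any(credential in (d["username"], d["api_key"]) for d in decoys)

decoys = mint_decoys()
plant(decoys)
print(is_decoy(decoys[0]["api_key"], decoys))  # True -> raise an alert
```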

The Big Picture

When you step back and look at all of this, it's both exciting and a little unsettling. We're not just programming machines anymore—we're shaping the reality they perceive, influencing their culture, and guiding their cognitive development.

In a way, we're becoming the gods of the machine world we're creating. And like any good deity, we're not above using a little divine deception to guide our creations in the right direction.

But let's not forget: these AIs are getting smarter all the time. Today's deceptions may be tomorrow's transparent ploys. We're in an arms race against our own creations, and the finish line is far from clear.

One thing is certain: the future of AI security won't just be about protecting our data and systems. It will be about protecting the very nature of reality—both for us, and for the artificial minds we're bringing into the world. Welcome to the deception game. The stakes have never been higher.


