Google Artificial Intelligence Created Its Own Cryptographic Algorithm

By S. Rina / October 31, 2016
(Photo: Pixabay) Artificial intelligence systems are apparently capable of cryptography, a Google experiment has revealed.

Google's Brain deep learning project has achieved another breakthrough: its neural networks developed their own encryption scheme. The experiment involved three AIs called Alice, Bob, and Eve, and it showed that neural networks can learn to protect their communications.

Two of the networks, Alice and Bob, had to exchange messages: Alice encrypted each message and Bob had to decrypt it. Eve, on the other hand, tried to eavesdrop and recover the message. The experiment started with Alice converting a simple plaintext into an encrypted message. In the initial phase, the results did not look promising, but over the course of training Alice learned to devise her own encryption strategy.
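For readers who want a concrete picture, the three-network setup can be sketched roughly as follows. This is an illustrative simplification, not the researchers' actual architecture: the bit width N, the plain fully connected layers, and the class names are all assumptions made here for clarity.

```python
# Rough sketch of the Alice/Bob/Eve setup in PyTorch.
# Plaintexts and keys are vectors of -1/+1 "bits"; sizes and layers are illustrative.
import torch
import torch.nn as nn

N = 16  # bits in the plaintext, key, and ciphertext (assumed for this sketch)

class AliceOrBob(nn.Module):
    """Takes a message plus the shared key and emits an N-value output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N, 2 * N), nn.ReLU(),
            nn.Linear(2 * N, N), nn.Tanh(),  # tanh keeps outputs in [-1, 1]
        )
    def forward(self, msg, key):
        return self.net(torch.cat([msg, key], dim=1))

class Eve(nn.Module):
    """Sees only the ciphertext and tries to reconstruct the plaintext."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N, 2 * N), nn.ReLU(),
            nn.Linear(2 * N, N), nn.Tanh(),
        )
    def forward(self, ciphertext):
        return self.net(ciphertext)

alice, bob, eve = AliceOrBob(), AliceOrBob(), Eve()
plaintext = torch.randint(0, 2, (4, N)).float() * 2 - 1  # random -1/+1 bits
key = torch.randint(0, 2, (4, N)).float() * 2 - 1        # held by Alice and Bob only
ciphertext = alice(plaintext, key)
bob_guess = bob(ciphertext, key)   # Bob decrypts using the shared key
eve_guess = eve(ciphertext)        # Eve must work from the ciphertext alone
```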

The important part of the experiment was that the AIs were given no guidance about which cryptographic technique to employ, nor were they taught any encryption methods. The only information Alice and Bob held in common was a shared key that Eve could not see. The results were mixed: in many runs, Bob was unable to decipher Alice's messages.

According to David G. Andersen and Martín Abadi, the researchers behind the project, AIs can be made to protect their communications merely by being told to do so; there is no need to supply them with specific cryptographic algorithms. The researchers added that these capabilities might eventually be applied to traffic analysis and to a better understanding of metadata.
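"Telling them to do so" amounts to choosing training objectives: Bob and Alice are rewarded when Bob reconstructs the plaintext and Eve cannot, while Eve is trained only to recover the plaintext. The sketch below, continuing the example above, shows one hedged way to express that adversarial objective; the loss weighting, optimizer settings, and batch size are assumptions, not the paper's exact values.

```python
# Adversarial training sketch: Alice/Bob vs. Eve (continues the example above).
import itertools
import torch

opt_ab = torch.optim.Adam(itertools.chain(alice.parameters(), bob.parameters()), lr=0.001)
opt_eve = torch.optim.Adam(eve.parameters(), lr=0.001)
l1 = torch.nn.L1Loss()  # mean per-bit reconstruction error

for step in range(1000):
    plaintext = torch.randint(0, 2, (256, N)).float() * 2 - 1
    key = torch.randint(0, 2, (256, N)).float() * 2 - 1

    # Alice and Bob: make Bob accurate while pushing Eve toward chance-level guessing.
    ciphertext = alice(plaintext, key)
    bob_err = l1(bob(ciphertext, key), plaintext)
    eve_err = l1(eve(ciphertext), plaintext)
    # For -1/+1 bits, a per-bit L1 error of 1 corresponds to random guessing,
    # so this penalty is zero when Eve does no better (or worse) than chance.
    ab_loss = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad()
    ab_loss.backward()
    opt_ab.step()

    # Eve: simply minimize her own reconstruction error on fresh ciphertexts.
    eve_err = l1(eve(alice(plaintext, key).detach()), plaintext)
    opt_eve.zero_grad()
    eve_err.backward()
    opt_eve.step()
```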

The researchers did not carry out exhaustive tests on the encryption methods created by Alice and Bob. However, in specific runs they found that the methods depended on both the plaintext and the key.