Wikipedia Bots Fight With Each Other for Years: Study

By Ana Verayo / February 27, 2017
(Photo: Wikipedia) Wikipedia bots often rival one another by editing each other's content.

In a fascinating new study, scientists have analyzed how the behavior of Wikipedia's bots can lead them into prolonged fights with one another. This constant bickering can apparently last for several years.

This rivalry was observed among Wikipedia bots tasked with editing and maintenance jobs across the encyclopedia's massive database. Some bots revert vandalized text, fix spelling, update content, and insert links; others enforce content and language bans. Non-editing bots, meanwhile, scan articles for violations such as copyright infringement.
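
To give a sense of how simple these programs can be, here is a minimal, hypothetical sketch of a rule-based maintenance bot written in Python. The rules and names in it (clean_article, SPELLING_FIXES, VANDALISM_PATTERNS) are purely illustrative and are not taken from any real Wikipedia bot.

    # Hypothetical sketch of a rule-based maintenance bot; all rules are illustrative.
    import re

    SPELLING_FIXES = {r"\brecieve\b": "receive", r"\bteh\b": "the"}
    VANDALISM_PATTERNS = [re.compile(r"!{5,}"), re.compile(r"\bbuy now\b", re.I)]

    def clean_article(text: str) -> tuple[str, bool]:
        """Return the corrected text and whether the page looks vandalized."""
        vandalized = any(p.search(text) for p in VANDALISM_PATTERNS)
        for pattern, replacement in SPELLING_FIXES.items():
            text = re.sub(pattern, replacement, text)
        return text, vandalized

    fixed, flagged = clean_article("Did you recieve teh file?!!!!!")
    print(fixed, "| flagged for revert:", flagged)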

As part of the study, the researchers surveyed Wikipedia edit data from 2001 to 2010 and gathered evidence about how the bots interact with one another. They found that although these bots were not designed to interact, they surprisingly did so on many occasions.

"We discovered how Wikipedia bots are intended to support the encyclopedia online database but they often undo each other's edits," the team from the Oxford Internet Institute said. This sterile "fighting" can apparently continue for years.

During the study, the researchers analyzed bot-to-bot interactions across the different language editions of Wikipedia. They observed, for example, that bots on the English Wikipedia undid one another's edits an average of 105 times each over the ten-year span. The bots of the Portuguese-language Wikipedia were even more intrusive, averaging 185 bot-to-bot reverts per bot.
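
The study itself works from Wikipedia's complete edit logs, but the core measurement, counting how often one bot reverts another, can be sketched in a few lines. The Python below is a simplified illustration rather than the paper's actual methodology: it treats a revision that restores the exact text of an earlier revision as a revert and tallies the cases where both editors involved are bots.

    # Simplified sketch of counting bot-to-bot reverts in a page's revision history.
    # Not the study's actual code: a revert is detected when a revision restores the
    # exact text of an earlier revision, and we tally cases where both editors are bots.
    from collections import Counter

    def count_bot_reverts(revisions, bot_names):
        """revisions: list of (editor, text) tuples in chronological order."""
        reverts = Counter()
        last_seen = {}            # text -> index of the revision that last produced it
        for i, (editor, text) in enumerate(revisions):
            if text in last_seen and last_seen[text] < i - 1:
                reverted_editor = revisions[i - 1][0]   # whose change was undone
                if editor in bot_names and reverted_editor in bot_names:
                    reverts[(editor, reverted_editor)] += 1
            last_seen[text] = i
        return reverts

    history = [("HumanA", "v1"), ("BotX", "v2"), ("BotY", "v1"), ("BotX", "v2")]
    print(count_bot_reverts(history, {"BotX", "BotY"}))
    # Counter({('BotY', 'BotX'): 1, ('BotX', 'BotY'): 1})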

These new findings reveal that interactions between bots tend to be far more unpredictable than previously thought, which may have important implications for how artificial intelligence is applied in areas such as cybersecurity and autonomous vehicles.

"We often forget that when we coordinate with collaborative agents, this can only be achieved through frameworks of rules to result in wanted outcomes," Luciano Floridi from the Oxford Internet Institute said. The researchers call this infrastructural ethics, or infra-ethics, and they suggest that bots need to be designed carefully in light of these interactions.

Ultimately, the algorithms that manage individual bots are designed rather simply, yet their unpredictable interactions with one another can produce complex systems. Bots also behave differently from human editors, and the conflicts between them take a different shape. The study calls for further research into the sociology of bots and into how to design artificial agents with infra-ethics built in.
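
A toy example, which is purely illustrative and not taken from the study, shows how two hypothetical bots with individually sensible rules can end up undoing each other without end: one standardizes on British spelling, the other on American spelling, and each treats the other's edit as something to fix.

    # Toy simulation (not from the study): two simple bots with conflicting house
    # styles keep reverting each other's edits indefinitely.
    def brit_bot(text: str) -> str:
        """Standardizes spelling to British English."""
        return text.replace("color", "colour")

    def yank_bot(text: str) -> str:
        """Standardizes spelling to American English."""
        return text.replace("colour", "color")

    text = "The color of the sky"
    for round_of_edits in range(3):
        text = brit_bot(text)   # BritBot "fixes" the page...
        print(f"round {round_of_edits + 1}: brit_bot -> {text!r}")
        text = yank_bot(text)   # ...and YankBot promptly changes it back
        print(f"round {round_of_edits + 1}: yank_bot -> {text!r}")
    # Neither rule is wrong on its own, yet together they loop forever.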

The new study was published in the journal PLOS ONE.