
Artificial Intelligence Thinks Like a Criminal

By Chandan Singh | February 1, 2019

The proliferation of artificial intelligence (AI) technology could lead to an increase in cybercrime and the emergence of new forms of it, to the manipulation of public opinion, and to damage to physical infrastructure over the next five years, according to the authors of the report “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”.

The 100-page report was written by 26 experts, among them scientists from Oxford, Cambridge, and Stanford universities, analysts from the non-profit organizations Electronic Frontier Foundation and OpenAI, and representatives of other reputable organizations.

The authors of the report compare artificial intelligence with nuclear energy and explosives, which can be used for both peaceful and military purposes. “As the capabilities of artificial intelligence become more powerful and ubiquitous, we expect this to lead to the expansion of existing threats, the emergence of new threats, and a change in the typical character of threats,” the authors warn.

“Many of us are amazed at the scale of what has happened over the past five years. If this continues, you will see the appearance of really dangerous things,” notes Jack Clark of OpenAI.

Developers should anticipate the possibility of criminal use of artificial intelligence at an early stage and build in appropriate restrictions, the authors advise. Otherwise, artificial intelligence will become a powerful weapon in the hands of people with criminal intentions. The authors identify three main areas of threat. The first – artificial intelligence will help organize hacker attacks. The technology will make it easier to discover software vulnerabilities and to select potential victims.

Artificial intelligence will also make it possible to exploit human vulnerabilities – for example, through speech synthesis or “contextual” malware. Such malware dramatically increases the likelihood that a user clicks a link that launches a virus or downloads an application the attackers need. Hacker attacks using these technologies will be far larger in scale and more effective than they are today.

The second area is unfair use of artificial intelligence in the political sphere, which is also very likely. With it, the authorities will be able to build more powerful systems for monitoring dissidents.

Political forces will be able to conduct “automated, hyper-personalized disinformation campaigns”. Artificial intelligence will be able to generate fake news in such volumes that it will be almost impossible for users to pick out the real stories among them.

The efficiency and targeting of propaganda will increase, and it will become easier to manipulate public opinion, the authors warn. This is especially dangerous given that artificial intelligence may enable a step forward in understanding the foundations of the psychology of human behavior. The third group of threats is the possibility of attacks on physical objects – for example, the mass control of UAVs and other automated combat systems.

In addition, artificial intelligence could facilitate malicious intrusion into the systems of driverless vehicles, leading to accidents or attacks involving them.

“Artificial intelligence, digital security, physical security and political security are firmly linked, and their connection will only be strengthened … Although the specific risks are innumerable, we believe that understanding the general patterns will help shed light on the future and improve awareness so as to prevent and mitigate the consequences,” the report says.

However, some experts point out that the scale of the problem may be exaggerated. “Improvements are coming from both sides – this is a permanent arms race. Artificial intelligence is already extremely useful for cybersecurity. It is still unclear which side will benefit more,” says Dmitri Alperovitch, co-founder of the cybersecurity company CrowdStrike.
