Talk of AI attacks and robots taking over usually sounds a little paranoid to me. But as this technology advances, it would be wise to consider all the repercussions.
Via Motherboard:
Nevertheless, a group of 26 leading AI researchers met in Oxford last February to discuss how superhuman artificial intelligence may be deployed for malicious ends in the future. The result of this two-day conference was a sweeping 100-page report published today that delves into the risks posed by AI in the wrong hands, and strategies for mitigating these risks.
One of the four high-level recommendations made by the working group was that “researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.”
“Current trends emphasize widespread open access to cutting-edge research and development achievements,” the report’s authors write. “If these trends continue for the next 5 years, we expect the ability of attackers to cause harm with digital and robotic systems to significantly increase.”
On the other hand, the researchers recognize that the proliferation of open-source AI technologies will also increasingly attract the attention of policy makers and regulators, who will impose more limitations on these technologies. As for the specific form these policies should take, this will have to be hashed out at local, national and international levels.