AI Could Achieve Human-Like Intelligence by 2030 and Destroy Mankind, Google DeepMind Predicts

This shocking revelation has been made in a research paper released by Google DeepMind. The study states that, given the widespread potential impact of AGI, it could pose a risk of serious harm. This includes existential risk, that is, the risk of the permanent extinction of humanity.
It is worth noting that the paper, co-authored by DeepMind co-founder Shane Legg, does not explain how AGI could cause humanity's extinction. Instead of giving a reason, it draws attention to the measures Google and other AI companies should take to reduce the risks of AGI.
The study divides the risks of advanced AI into four major categories: misuse, misalignment, mistakes, and structural risks. It also highlights DeepMind's risk mitigation strategy, which focuses on preventing misuse, that is, situations where people could use AI to harm others. In February, DeepMind CEO Demis Hassabis said that AGI, which is as smart as humans or even smarter, will begin to emerge in the next five to ten years. He also advocated for a UN-like umbrella organization to oversee the development of AGI.