Artificial Intelligence

OVERVIEW

AI is advancing so rapidly that surveyed experts estimate a 50% chance that it will outperform humans at all tasks within 45 years and automate all human jobs within 120 years. AI could be the last invention humans ever make: a system that improves itself could advance until humans attempting to control it are comparable to ants attempting to control Stephen Hawking. This could be one of the best events ever to happen to humanity, since AI could potentially find solutions to the world's problems and usher in an age of paradise. However, AI development could also go very wrong, and superintelligent machines could quickly become an existential threat.

One of the biggest problems is that militaries and companies around the world are racing to develop Artificial General Intelligence (AGI). Because AGI could confer such enormous power, finishing in second place could be very dangerous. This creates a perverse incentive: taking the time to design AI meticulously and responsibly may be just as risky as rushing ahead and designing it irresponsibly.

It is important to note that AI does not need to become conscious or malicious to pose an existential threat to humanity.

DANGERS

Existential

  • A government could develop AGI first and use it to take over the world. “Whoever becomes the leader in this sphere will become the ruler of the world.” – Vladimir Putin
  • A company could develop AGI first and use it to take over the world
  • Terrorists could obtain AGI and use it to destroy the world
  • A benevolent organization could develop AGI first but mis-program it, so that its goal accidentally entails destroying humans
  • AI could solve all our problems and make humans obsolete, causing society to implode
  • A software bug could cause AI to destroy the world
  • AI could become conscious and/or malicious (unlikely)
  • Alternative outcome: nothing could go wrong at any of the hundreds of companies and governments developing AI, and the world will become a paradise

Non-Existential

CURRENT STATUS
