The singularity is coming, warns a former Google executive, and it should be considered a potential threat to humanity.
In an interview with The Times, Mo Gawdat, former Chief Business Officer of X Development (formerly Google X), Google's moonshot organization, described the kind of technology one could easily compare to Skynet from "The Terminator": an extremely powerful AI that might well bring about the apocalypse.
Gawdat told The Times that he reached his terrifying realization while working with Google X AI developers on robot arms that could find and pick up a little ball.
He claimed that after a period of slow progress, one arm seized the ball and held it up to the researchers in a gesture that he perceived as boastful.
"And I suddenly realized this is really scary," Gawdat said.
"The reality is," he added, "we're creating God."
There is no shortage of AI doomsayers in the tech industry; Elon Musk, for example, has frequently warned the public about the peril of AI one day overpowering humans. Such hypotheticals, however, obscure the concrete dangers and harms of the AI we've already created.
Facial recognition and predictive policing algorithms, for example, have caused significant harm to marginalized populations. Numerous algorithms continue to spread and codify institutional racism on a global scale.
These are issues that can be addressed through regulation and oversight.
In ZDNet's article "Every Country Must Decide Own Definition of Acceptable AI Use," the author stressed the importance of discussion to balance AI's commercial potential against its ethical use, so that the resulting standards are usable and easily adopted.
But to return to Musk's concern: it's all about making sure such possibilities are taken into account before we deploy technology that could unknowingly enable a precursor to Skynet. Moreover, that technology must include failsafes (e.g., a kill switch) that can be activated if such a scenario unfolds.
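The failsafe idea is simple enough to sketch in code. Below is a minimal, hypothetical kill-switch pattern in which an autonomous loop checks an operator-controlled flag before every action; the names (`KillSwitch`, `run_agent`) are illustrative and not from any real system or vendor.

```python
# Hypothetical sketch of a software kill switch for an automated agent.
# All names here are illustrative, not a real library or product API.
import threading

class KillSwitch:
    """A thread-safe flag an operator can trip to halt an autonomous loop."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        self._tripped.set()

    @property
    def tripped(self):
        return self._tripped.is_set()

def run_agent(kill_switch, max_steps=1000):
    """Run the agent, checking the kill switch before every action."""
    steps_taken = 0
    for _ in range(max_steps):
        if kill_switch.tripped:
            break  # halt immediately; no further actions execute
        steps_taken += 1  # placeholder for one unit of autonomous work
    return steps_taken

switch = KillSwitch()
switch.trip()             # operator halts the system before it acts
print(run_agent(switch))  # → 0
```

The design point is that the check sits inside the loop, ahead of each action, so the operator's decision takes effect within one step rather than only at startup; real systems would of course need hardware-level backstops as well, since software alone can be subverted.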
Many startups are dedicated to disarming technology that goes rogue. SkySafe, for example, is a San Diego-based firm focused entirely on keeping drones in check.
However, the companies (and many foreign governments) looking to use AI to accomplish something, rather than to prevent something, far outnumber such startups.
Many experts worry about what will happen when robots develop "feelings." Perhaps they should worry about the phase in which robots have none, which is right now.