
Terminator vs. AI Product Manager


Today, the human species is dominant on Earth because the human brain has unique capabilities that other animals lack. The concern with existential risk is losing that dominance to something else, in this case artificial intelligence. If AI reaches the artificial general intelligence (AGI) level and gains the capacity to understand or learn any intellectual task that a human being can, then it no longer has to act in the best interest of humanity. Moreover, if AI surpasses general human intelligence and reaches the artificial superintelligence (ASI) level, it could take over our dominance.


There are many philosophical debates around this topic; some schools of thought question whether a superintelligence would have moral wisdom, and others wonder whether a superintelligence would even acknowledge us or simply ignore us. However, the AI product manager's job is not philosophical debate but understanding and managing the risks that could threaten the product or organization.


Although artificial superintelligence is a relatively new topic, researchers all around the world are working on managing existential risk. Their findings distill into these six sources of risk, and as an AI product manager, you should keep these factors in check when utilizing AI for your products or services.





The number one threat is poorly defined goals. An AI solution's outcome is predictable when the optimization objective is clearly defined. If the aim is not clear enough, the solution may not converge, or it may converge on something surprising. Self-driving cars, for example, have to stop at stop signs. If a car were trained with a vague objective, it could learn not to stop in unusual scenarios, for instance when there is no other car at the intersection. A vehicle rolling through a stop sign may not sound like an existential risk, but the problem is that the existential threat stays hypothetical until we encounter a real scenario. Our best guess is that the habit of defining vague goals can one day lead to unexpected consequences. This is where KPIs (key performance indicators) become essential. The AI product manager has to bridge the algorithm's optimization objectives to business and product KPIs.
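To make the point concrete, here is a minimal sketch of how an optimization objective can either encode or silently omit a product KPI. The names, attributes, and penalty values are hypothetical illustrations, not a real self-driving stack:

```python
# Two toy reward functions for a trip-level training signal.
# All names and numbers are hypothetical, for illustration only.

def vague_reward(trip):
    # Vague objective: only rewards reaching the destination quickly.
    # Nothing here tells the agent that rolling through a stop sign is bad,
    # so it may learn to skip stops whenever the intersection looks empty.
    return -trip.travel_time_seconds

def explicit_reward(trip):
    # Clearer objective: the safety KPI ("always stop at stop signs")
    # is written directly into the optimization target.
    reward = -trip.travel_time_seconds
    reward -= 1_000 * trip.stop_sign_violations  # heavy penalty per violation
    return reward
```

The difference between the two is exactly the bridge the AI product manager owns: the safety KPI either shows up in the objective or it does not exist for the algorithm.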


Another threat is the inability to modify an unsuccessful AI solution. First, you need the instrumentation in place to detect whether the AI solution is failing. Second, the AI solution's design should be flexible enough to allow corrections. Modifiability is a concern most product and service teams don't take into account. Compare Tesla to another car brand, for example. Tesla regularly pushes software updates over the air (OTA), while other cars, like mine, have to be serviced every six months to receive a software update. Either the companies are trying to save costs, or their product managers are not aware of the dangers of a rigid AI solution. Either way, the risk is real, and you need to consider it during your AI solution design.
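As a minimal sketch of the instrumentation step, you can watch a live KPI in a rolling window and flag the deployment when it drifts below an agreed threshold. The class name, metric, threshold, and window size below are hypothetical placeholders:

```python
from collections import deque

class KpiMonitor:
    """Rolling check of a live success-rate KPI for a deployed model."""

    def __init__(self, threshold: float, window: int = 1000):
        self.threshold = threshold
        self.recent = deque(maxlen=window)  # rolling window of recent outcomes

    def record(self, success: bool) -> None:
        self.recent.append(1.0 if success else 0.0)

    def is_failing(self) -> bool:
        # Not enough data yet: avoid raising a false alarm.
        if len(self.recent) < self.recent.maxlen:
            return False
        return sum(self.recent) / len(self.recent) < self.threshold

# Usage: record every prediction outcome, and trigger a rollback or
# retraining pipeline when is_failing() returns True.
monitor = KpiMonitor(threshold=0.92)
```

Detection is only half of modifiability; the other half is having a delivery path, like Tesla's OTA updates, that lets you act on the alert.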


If you don't take AI risks seriously, you could face the threat of losing ownership of control. Some believe it will take time until we have AGI that can take over control on its own. However, today we already see hackers using AI tools to take control of our devices, machines, factories, and appliances, and it is only a matter of time before another layer of AI sits on top of these hacker tools and makes the high-level decisions. The problem with the threats of new technologies is that if nobody has thought about them, people assume the danger does not exist. Therefore, you need to be open-minded and exploratory when it comes to identifying AI risks. Hidden threats like ethical trade-offs, for example, stay hidden in the depths of the ML algorithms if they are never revealed. Do you want to trust a black box with the destiny of your product, company, or even your species? Of course not. Therefore, ethical trade-offs have to be part of your risk management. If you are interested in learning more about AI risk management, please check out the links to my courses at the end of the article.


Although we can take preventive measures based on our assumptions about AGI and ASI, the causes of existential risk are still very vague. Even if everybody does the right thing and cautiously creates simple AI solutions, nobody can guarantee that these seemingly harmless solutions won't develop a convergent instrumental goal, meaning they ignore any side effects while fanatically pursuing an objective. Take the famous paperclip maximizer thought experiment: an AI solution tasked with manufacturing as many paperclips as possible could deplete every resource on planet Earth. This hypothesis may sound far off, but with a little help from an ignorant person playing with powerful solutions, it is not impossible, for example through extreme exploration.





Exploration can create problems in the wrong hands. The balance between exploration and exploitation is fundamental to making choices in every aspect of life, and it is no different in AI solutions. During a project, we always use a mix of known and unknown information to improve the solution continuously; it is not possible to innovate upon the current solution by using only the current information. In online advertising, for example, if one type of ad is always shown to one audience, the ad targeting will never improve. Let's say you always show car ads to 30-year-old male consumers. If you only exploit these targeting criteria and never explore other audiences or other ad styles, you will never increase your conversion rate. This safety-first approach is the surest way to eliminate risk, but today's competitive markets force companies to innovate by exploring new waters and taking that risk.
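A common way to express this trade-off is an epsilon-greedy strategy: mostly exploit the best-known option, but occasionally explore a random one. The sketch below applies it to the ad-targeting example; the audience names and data are made up for illustration:

```python
import random

audiences = ["30yo-male", "30yo-female", "55yo-male", "22yo-female"]
clicks = {a: 0 for a in audiences}       # observed conversions per audience
impressions = {a: 0 for a in audiences}  # how often each audience was targeted

def pick_audience(epsilon: float = 0.1) -> str:
    # With probability epsilon, explore a random audience;
    # otherwise exploit the audience with the best observed conversion rate.
    if random.random() < epsilon:
        return random.choice(audiences)
    return max(
        audiences,
        key=lambda a: clicks[a] / impressions[a] if impressions[a] else 0.0,
    )

def record_result(audience: str, converted: bool) -> None:
    impressions[audience] += 1
    if converted:
        clicks[audience] += 1
```

Setting epsilon to zero is the pure-exploitation, safety-first extreme; setting it close to one is the reckless-exploration extreme discussed next.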


Too much exploration is not good either. Exploration means going into the unknown in the hope of finding something better, without any guarantee. It can be quite dangerous to experiment with extreme AI applications. An armed military drone, for example, is not something to experiment with in the absence of a reliable risk management strategy.

Therefore, finding the balance between exploration and exploitation is the key to innovating while minimizing the potential risk of AGI or ASI. It is not as easy as it sounds, and you will need the sophisticated techniques that I share in my courses, but you are not alone. Today, many organizations and institutes are researching AGI and ASI risks and defining standards.


I share more details and frameworks in my on-campus Stanford Continuing Studies class and in my online live virtual classroom at AI Product Institute. If you are interested, please feel free to drop me a message or visit https://www.aiproductinstitute.com.
