
AI-Harm Framework from the Center for Security and Emerging Technology

In the sprint to keep products at the forefront of technology, product managers face a strong temptation to integrate AI to boost functionality and user experience. That enthusiasm can cause significant problems, however, if the potential harms of AI are brushed aside. Product managers play a crucial role: their decisions not only steer a product's trajectory but also define its impact on users and society at large. Overlooking AI harm can result in products that perpetuate bias, breach privacy, or even pose serious risks to users. An AI-driven financial service could unintentionally uphold systemic biases, creating barriers for marginalized communities; an AI-powered health application could misdiagnose conditions, with severe health consequences. These ripple effects stretch beyond individual users, embedding injustices and vulnerabilities within societal structures. Given such high stakes, a cautious approach to the design and deployment of AI-powered products is imperative.


Enter the CSET AI Harm Framework. CSET, the Center for Security and Emerging Technology at Georgetown University's Walsh School of Foreign Service, introduced the framework in a report by Mia Hoffmann and Heather Frase. It offers a structured way to identify, analyze, and mitigate the adverse effects of AI technologies: it defines 'AI harm' broadly, encompassing a range of negative impacts, and differentiates between tangible harms, which can be observed and measured, and intangible harms, which cannot be directly observed or measured.


For product managers spearheading AI projects, the principles laid out in the CSET AI Harm Framework are a practical basis for mitigating risk and deploying AI responsibly. Below is a streamlined way to apply the framework so that you minimize potential harms while maximizing the beneficial impact of your AI projects:


Understanding AI Harm

Familiarize yourself with the definition of AI harm as outlined in the framework, and understand the distinction between tangible and intangible harms. This foundational knowledge is crucial for identifying and assessing potential risks associated with your AI project.
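
One way to internalize the distinction is to encode it. The sketch below is a minimal Python data model of a harm record; the class and field names are hypothetical illustrations rather than a schema taken from the CSET report, but the tangible/intangible split mirrors the one the framework draws.

from dataclasses import dataclass
from enum import Enum

class HarmType(Enum):
    TANGIBLE = "tangible"      # can be observed and measured, e.g. financial loss
    INTANGIBLE = "intangible"  # cannot be directly measured, e.g. loss of dignity

@dataclass
class HarmRecord:
    """Illustrative record of a potential AI harm (hypothetical schema)."""
    description: str        # what could go wrong
    harm_type: HarmType     # the tangible/intangible distinction from the framework
    affected_party: str     # who experiences the harm

# Example entry for an AI-driven lending product
record = HarmRecord(
    description="Qualified applicants from marginalized groups are denied loans",
    harm_type=HarmType.TANGIBLE,
    affected_party="loan applicants",
)
print(record.harm_type.value, "-", record.description)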


Harm Identification and Classification

Use the framework to identify and classify potential harms that could arise from your AI project. Examine both direct and indirect impacts, and categorize them into tangible and intangible harms.
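
To make this step concrete, the sketch below shows one way a team might inventory harms and bucket them along the two axes just described: direct versus indirect impact, and tangible versus intangible harm. The example harms and the Impact enum are assumptions for illustration, not content from the framework itself.

from enum import Enum

class Impact(Enum):
    DIRECT = "direct"
    INDIRECT = "indirect"

# Hypothetical harm inventory for an AI-powered health application.
# Each entry: (description, impact, "tangible" or "intangible")
harm_inventory = [
    ("Misdiagnosis leads to delayed treatment", Impact.DIRECT, "tangible"),
    ("Users lose trust in clinical advice generally", Impact.INDIRECT, "intangible"),
    ("Sensitive health data exposed to third parties", Impact.DIRECT, "tangible"),
]

def classify(inventory):
    """Group harms by (impact, tangibility) so each bucket can be triaged."""
    buckets = {}
    for description, impact, tangibility in inventory:
        buckets.setdefault((impact.value, tangibility), []).append(description)
    return buckets

for bucket, harms in classify(harm_inventory).items():
    print(bucket, "->", harms)

Grouping harms this way makes it easier to assign each bucket an owner and a mitigation plan in the next step.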


Preventive Measures

Develop preventive measures to mitigate identified harms. This could include implementing fairness checks, privacy-preserving techniques, or other safeguards to minimize the risks associated with your AI system.
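
As an example of a fairness check, the sketch below compares approval rates across demographic groups, in the style of a demographic-parity check. Everything here is illustrative: the groups, the decision data, and the 0.2 threshold are assumptions, and a real check would need domain and legal review.

def selection_rates(outcomes):
    """Approval rate per group; outcomes maps group -> list of 0/1 decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def demographic_parity_gap(outcomes):
    """Difference between the highest and lowest group approval rates.
    A large gap is a signal to investigate, not proof of unfairness."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions from an AI-driven loan approval model
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

gap = demographic_parity_gap(decisions)
if gap > 0.2:  # illustrative threshold; set per product and regulatory context
    print(f"Fairness check flagged: parity gap = {gap:.2f}")

A check like this can run before each model release, alongside other safeguards such as data minimization and access controls.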


Monitoring and Evaluation

Establish a monitoring and evaluation mechanism to continuously track and assess the performance of your AI system against the identified harms. Use the insights gained from this monitoring to make informed adjustments to your system to prevent or mitigate adverse impacts.
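
Below is a minimal sketch of such a mechanism, assuming you can derive a per-decision harm signal from production logs. The HarmMonitor class, the rolling window, and the 5% alert threshold are hypothetical design choices, not part of the CSET framework.

from collections import deque

class HarmMonitor:
    """Track a harm-related metric (e.g. a complaint or error rate) over a
    rolling window and flag when it drifts past an alert threshold."""

    def __init__(self, window_size=100, alert_threshold=0.05):
        self.events = deque(maxlen=window_size)  # 1 = harmful outcome observed
        self.alert_threshold = alert_threshold

    def record(self, harmful: bool):
        self.events.append(1 if harmful else 0)

    def harm_rate(self) -> float:
        return sum(self.events) / len(self.events) if self.events else 0.0

    def needs_review(self) -> bool:
        return self.harm_rate() > self.alert_threshold

# Feed the monitor from production outcomes; review when the rate drifts upward.
monitor = HarmMonitor(window_size=50, alert_threshold=0.05)
for outcome in [False] * 45 + [True] * 5:  # simulated stream of outcomes
    monitor.record(outcome)
if monitor.needs_review():
    print(f"Harm rate {monitor.harm_rate():.2%} exceeds threshold; trigger review")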



By now, you are probably asking, "How do I proceed?" The good news is that we offer the 4-week Generative AI Product and Business Innovation (https://www.aiproductinstitute.com/generative-ai ) workshop, which is structured to address AI-harm concerns in a practical manner. Through real-world workshops, you'll build a solid understanding of generative AI and the intricacies of mitigating AI harm, which is crucial for devising safe AI strategies. With the support of long-term mentorship even after the program concludes, you'll have a resource to turn to for guidance as you navigate the evolving challenges of AI.


