
For decades, engineers have explored artificial intelligence (AI) to give machines more human-like capabilities. The past 10 years have brought significant progress, from smart assistants like Siri and Alexa understanding our commands and managing simple tasks to driverless cars offering an alternative to taxis in major cities. More recently, generative artificial intelligence (GenAI) has evolved at a breathtaking pace, creating new possibilities for individuals and businesses alike.
Artificial intelligence and its capabilities are fueled by large datasets that can include images, text, audio and numerical data, much of it gathered from the internet, along with the data companies collect about their customers. That’s why ethical technology governance to manage AI responsibly should matter to everyone.
Avoiding Unintended Consequences
How do the development and use of AI affect the people on whom it is used? As companies pursue opportunities to make their businesses better and stronger with AI, they should also define initiatives to address the risk that misinformation or bias could create unintended consequences for customers. These consequences are often not the result of malicious intent but rather a byproduct of how AI systems are designed, trained and deployed. Without initiatives to oversee the approach, companies may expose themselves to significant financial and reputational damage that could erode shareholder value.
Here are some of the ways a company’s use of AI can have unintended consequences:
• Imperfect outcomes. While AI has progressed significantly, saving time and bringing greater efficiency and precision to some tasks, it is widely acknowledged that AI can make mistakes. Errors may not be immediately apparent because of the complexity of the underlying algorithms, and models can be used to create and disseminate misinformation. Companies should continue to uphold high standards for the integrity, quality and safety of the products and services they sell. Health insurers, for example, may face lawsuits for allegedly using algorithms to improperly deny care.
• Biases. Any direct or indirect discriminatory biases contained in the underlying datasets could lead to outcomes that favor certain groups over others. For example, AI-powered recruitment tools that screen resumes may favor certain characteristics and limit access to the widest and best talent pool for the job. Some companies have already paid substantial settlements to resolve allegations of discriminatory practices in areas such as advertising and the evaluation of medical claims.
• Privacy and security. Companies using AI for customer analytics, employee monitoring or predictive modeling may gather and analyze personal data in ways that violate privacy rights. For example, a company may collect data for one stated purpose, such as improving a product, and then use it to train an AI model for an entirely different purpose, violating a core tenet of many privacy laws. Many AI models are also trained on data “scraped” from the public internet, which can include personal blog posts, social media content and even copyrighted material, so sensitive information may be unknowingly incorporated into a model. Additionally, companies are managing larger datasets than ever before, creating an even greater risk of a data security breach in which sensitive personal information is exposed to bad actors.
Building trust requires a full understanding of how an AI system functions and whether it can achieve appropriate outcomes. As AI systems take on more of the decision making, businesses could provide the rationale for key decisions to help customers understand why, for example, a healthcare claim was denied. Businesses could also notify customers when they are interacting with an AI system and clearly describe the role it plays.