Ethics and Regulations in AI: Striking a Balance Between Innovation and Accountability

Artificial Intelligence (AI) has evolved from science fiction to a critical component of our daily lives, shaping industries and transforming economies. With its growing influence, questions regarding the ethical implications of AI’s decisions and the need for regulatory oversight have taken center stage. AI bias – the underlying prejudice in data used to develop AI algorithms – can have significant consequences. Research by the University of Washington suggested that OpenAI’s GPT-3 was significantly prone to racist, sexist and other biases, given that it was trained on general content from the internet without adequate data cleansing. As the world grapples with the ethical nuances of AI deployment, it is imperative to strike a balance between fostering innovation and upholding accountability.

Is AI Ready for Real-World Scenarios?

The inability of AI systems to capture the intangible human factors that shape real-life decision-making (ethical, moral, and so on) is worrisome. In a September 2022 Harvard Business Review article, one of the authors recounted a personal experience of staying at an Airbnb-rented house that turned out to be poorly maintained despite appealing online preview pictures and positive reviews. He had made up his mind to give the place a one-star rating and a negative review. But the next morning he met the host, a sweet and caring elderly woman. Upon hearing of the hardships she faced in keeping the place running, especially after Covid, her offer to painstakingly fetch fresh fruit for his family, and the fact that she was caring for someone sick at home, the author chose not to post the negative review. His original decision to post a negative review was fact-based; the decision to refrain was purely human. Would an AI system have handled this situation the same way? Likely not, which underscores the essential ethical backdrop against which human decisions, and the world at large, operate.

Organizational Stance and Accountability

At this inflection point, the European Parliament has embarked on an extraordinary journey by greenlighting the first-ever proposal for AI regulation (June 2023). The aim is to promote trustworthy, human-centric AI and to protect health, safety, fundamental rights, democracy, the rule of law, and the environment from the threats AI systems may pose. The suspension of Uber's self-driving experiment after one of its vehicles killed a pedestrian in Tempe, Arizona is a pertinent case in point. A human driver would likely have realized that the victim was crossing a four-lane road away from a crosswalk and stopped the vehicle. Investigation revealed flaws in training and AI model implementation, putting the focus back on the need for a robust framework around real-world AI usage and implementation.

In contrast, the United States has taken a less prescriptive approach: the Federal Trade Commission's guidelines emphasize transparency and fairness but stop short of AI-specific regulations, on the premise that rules must remain agile enough to match AI's rapid evolution. In the Uber self-driving accident, the Arizona Police Department and the US National Transportation Safety Board concluded that the company was not criminally liable for the pedestrian's death; the AI had failed to classify the jaywalking pedestrian, hinting at inadequate model training.

Moreover, leading organizations recognize the gravity of AI ethics. OpenAI, the pioneer in AI research, is committed to ensuring AGI (Artificial General Intelligence) benefits all of humanity, and has pledged to strongly influence AGI's ethical deployment to prevent uses that could harm humanity or concentrate power unfairly. In a bid to 'out-recruit' other tech companies, Amazon developed an AI-based recruitment tool trained to spot top talent in resumes. But the model was trained on biased data collected over a 10-year period in which most candidates were men, so the model inherently prioritized male resumes. Even when names were anonymized, it awarded low scores to resumes that mentioned participation in women's activities. After several attempts to make it gender-neutral, Amazon finally had to disband the tool.
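The mechanism behind this failure is worth making concrete: when training data encodes a historical bias, a model can rediscover a protected attribute through proxy features even after names are removed. The toy sketch below (hypothetical data, not Amazon's actual system or dataset) scores resume tokens by historical hire rate and shows how a gendered proxy token ends up penalized.

```python
from collections import Counter

# Toy historical hiring data: (resume tokens, hired?). Names are already
# anonymized, but the token "women's" still acts as a proxy for gender
# because it appears only in historically rejected resumes.
history = [
    (["chess", "club", "captain"], True),
    (["software", "lead"], True),
    (["women's", "chess", "club", "captain"], False),
    (["women's", "coding", "society", "lead"], False),
]

def token_scores(data):
    """Score each token by the hire rate among resumes containing it."""
    hired, total = Counter(), Counter()
    for tokens, label in data:
        for t in set(tokens):
            total[t] += 1
            if label:
                hired[t] += 1
    return {t: hired[t] / total[t] for t in total}

scores = token_scores(history)
# "captain" and "lead" appear in both hired and rejected resumes (score 0.5),
# but "women's" appears only in rejected ones (score 0.0): the model has
# learned a gender proxy without ever seeing a name.
```

Anonymizing names removes only the most obvious signal; any feature correlated with the protected attribute in the training data can recreate the bias, which is why Amazon's repeated patches could not make the tool gender-neutral.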

The Advent of Generative AI Tools and Their Ripple Effect on Governance Paradigms

GenAI ethics is dominated by key concerns such as the distribution of harmful content, copyright and legal exposure, data-privacy violations, disclosure of sensitive information, amplification of existing bias, and lack of explainability and interpretability. The proliferation of GenAI use cases necessitates stronger vigilance over how AI-based services are delivered to end consumers. An experimental healthcare chatbot employing OpenAI's GPT-3 misbehaved and suggested that a patient commit suicide. Deploying such an AI system on a suicide hotline would be disastrous, to say the least. The bot's creator shelved the project and agreed "it was inappropriate for interacting with patients in the real world." A strong governance structure, based on three pillars – global guidelines, self-regulation, and a regulatory framework – has become all the more imperative.

Further underscoring the need for a strong regulatory framework are AI-generated deepfake videos and images, which have proven remarkably effective at spreading misinformation, manipulating public opinion, and defaming individuals – all of which can lead to serious consequences for individuals and society.

International initiatives have converged to champion ethical AI deployment. The G7 countries have resolved to prioritize collaborative, inclusive AI governance, aligned with democratic values, and premised on forward-looking, risk-based strategies for trustworthy AI deployment. The EU-US Trade and Technology Council ministerial meeting in May 2023 included GenAI systems within its Joint Roadmap. This endeavor culminates in the establishment of a voluntary AI Code of Conduct, a bridge that transcends formal regulation and underscores the global resonance of ethical AI.

Charting a Holistic Regulatory Approach

Stakeholders from governments, industries, academia, and civil society play pivotal roles in shaping these ethical and regulatory frameworks. To deal with AI risks, stakeholders need to chart out a global regulatory framework that relies on collaboration, self-regulation, and carefully calibrated rules. Ultimately, the road ahead is a journey in which maintaining a balance between innovation and responsibility stands as the guiding light. A collaborative approach that engages the private sector and policymakers in policy formation is the need of the hour, allowing for both international and commercial competitiveness while instituting safeguards to protect core values.

Author: Prithwijeet Mukherjee

Sr Consultant, Strategy Consulting

Image courtesy: Markus Winkler
