October 6, 2024


Why Responsible AI Innovation Starts With Ethical Technologists

Aaron Skonnard, Cofounder and CEO, Pluralsight.

In today’s fast-paced race toward technological innovation, ethical considerations can often fall by the wayside. My company’s research shows that 92% of organizations accelerated AI initiatives in 2023. With this surge in adoption comes a growing need for tech ethicists across the sector—and the White House is taking note.

On the heels of exponential AI advancement and warning cries from prominent tech leaders, the Biden Administration announced the “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” on October 30, 2023. The executive order aims to establish new standards for AI safety, security, privacy, equity and civil rights. Among other directives, the order asks developers of powerful AI systems to share safety test results and address algorithmic discrimination and AI-enabled fraud.

As evidenced in the executive order, irresponsible AI can have far-reaching negative impacts—from influencing search engines and social media algorithms to perpetuating discrimination in the justice system and infringing on individuals’ privacy. Those tasked with developing, training and advancing these models therefore hold an immense amount of power in shaping an increasingly technological society.

With great technical skill comes great responsibility. Organizations investing in AI must, in tandem, invest in responsible AI research and employee education on equity and social impact.

What Ethical AI Looks Like

AI has immense potential to ingest unprecedented amounts of data and positively impact humankind—and its relevance is industry-agnostic. It’s just a matter of keeping ethical development and deployment as a core priority. Below are four key aspects of ethics in AI that organizations must consider.

Fairness

In the development of AI systems, ensuring fairness is a key component. An AI system is only as good as the data it is trained on, and it readily inherits the human biases embedded in that data. Ethical technologists should take responsibility for training AI systems in a way that treats people fairly and reduces bias. Incorporating diverse and representative datasets is crucial to avoid perpetuating existing societal biases within AI systems.
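To make fairness measurable rather than aspirational, teams often track simple disparity metrics. Below is a minimal sketch (with hypothetical data, group labels and a hypothetical "demographic-parity ratio" check; not any specific vendor's tooling) of how one might compare a model's positive-outcome rates across groups:

```python
# Illustrative sketch: compare a model's approval rates across demographic
# groups. The data, group names and threshold below are hypothetical.
from collections import defaultdict

def selection_rates(predictions):
    """predictions: list of (group, approved) pairs -> approval rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in predictions:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs for two groups, A and B.
preds = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(preds)

# Demographic-parity ratio: min rate / max rate. Values far below 1.0
# suggest the model favors one group and warrant investigation.
ratio = min(rates.values()) / max(rates.values())
```

A check like this doesn't prove a system is fair, but it surfaces disparities early enough for technologists to revisit the training data.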

Privacy And Security

Respecting people’s privacy is a fundamental ethical consideration in the development of AI. Organizations must establish robust systems to ensure that AI does not generate results that violate individual privacy rights. This involves putting into place rigorous data security protocols, anonymizing data as needed and getting proper consent for the utilization of data. Ethical technologists should proactively address privacy and security concerns, ensuring that AI systems are designed with the highest standards of data protection and encryption.
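One common building block for the anonymization step described above is pseudonymization: replacing direct identifiers with stable tokens before data ever reaches a training pipeline. Here is a minimal sketch (the field names, record and key handling are hypothetical assumptions, not a production design):

```python
# Illustrative sketch: pseudonymize direct identifiers before records are
# used for AI training. Field names and the record are hypothetical.
import hashlib
import hmac

# Assumption: in a real system this key would live in a secrets manager.
SECRET_KEY = b"rotate-me-in-a-real-system"

def pseudonymize(value: str) -> str:
    """Keyed hash: the same person maps to the same stable token, but the
    raw identifier cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict) -> dict:
    """Replace direct identifiers; keep non-identifying features intact."""
    cleaned = dict(record)
    for field in ("email", "name"):
        if field in cleaned:
            cleaned[field] = pseudonymize(cleaned[field])
    return cleaned

row = scrub_record({"email": "pat@example.com", "name": "Pat", "tenure_years": 4})
```

The design choice here is deliberate: a keyed hash preserves the ability to join records about the same individual while keeping the identifier itself out of the dataset.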

Transparency

It’s critical that users have a certain level of visibility into how AI models work and from where they pull their information. AI systems should be designed to provide context and explanations for their recommendations, allowing users to understand the reasoning behind the decisions. This transparency facilitates the identification and reduction of bias, leading to continuous improvement of AI systems.
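For simple models, the explanations described above can be as direct as showing each input's contribution to a score. The sketch below (with assumed weights and feature names, purely for illustration) shows the idea for a linear scoring model:

```python
# Illustrative sketch: surface per-feature contributions of a linear scoring
# model so users can see why a recommendation was made. Weights and feature
# names are hypothetical.
WEIGHTS = {"income": 0.5, "debt": -0.8, "tenure_years": 0.3}

def explain(features: dict) -> list:
    """Return (feature, contribution) pairs, largest absolute impact first."""
    contributions = [(name, WEIGHTS[name] * value)
                     for name, value in features.items() if name in WEIGHTS]
    return sorted(contributions, key=lambda kv: abs(kv[1]), reverse=True)

# Which factors drove this applicant's score, and in which direction?
why = explain({"income": 4.0, "debt": 2.0, "tenure_years": 3.0})
```

Complex models need heavier explanation techniques, but the principle is the same: expose the reasoning, not just the decision, so bias can be spotted and corrected.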

Reliability

The number of use cases for AI is expanding dramatically, making the reliability and safety of its application another crucial aspect of ethical technology. AI systems should be developed to perform consistently and accurately for their intended use cases. It is essential to avoid deploying AI in ways that could be used against humanity, such as in intrusive citizen monitoring or malicious surveillance practices. Ethical technologists play a critical role in advocating for the responsible use of AI and establishing guidelines and frameworks to prevent its misuse.

Global Guardrails

Over time, with the maturation of an AI Bill of Rights and alliances such as the global nonprofit Partnership on AI, we’ll start to see a more concrete set of AI guardrails and guidelines for organizations to follow. The development and enforcement of these guardrails should involve the collaboration of major global technology vendors. Companies investing heavily in AI systems need to establish agreed-upon standards that all their systems will adhere to.

The public sector will also need to play a crucial role in shaping the future of AI, and if tech companies do not collaborate effectively with governments, those governments may block access to AI systems they perceive as harmful to their citizens. Public-private partnerships will be instrumental in driving responsible AI development, and public pressure can also influence decisions, as demonstrated by a recent petition that gained widespread attention for its call to pause the development of increasingly powerful AI systems.

Creating Ethical Technologists

If companies are liable for ensuring AI products are responsible and equitable before they go to market, that responsibility ultimately rests with their people. While organizations need to invest resources in bolstering their responsible AI research and focusing on social impact, a key area that also requires investment is the skills of their workforce.

Technologists may have significant experience in AI, but the technology changes so quickly that even experts need continuous and programmatic education to help manage and optimize the environments they helped build. As a result, many organizations investing in AI are learning that investing in the skills of their people is an equally necessary investment.

My company’s research shows that 90% of surveyed executives don’t completely understand their teams’ AI skill level and proficiency, pointing to a need for leaders to gain a better grasp of their employees’ AI readiness. Organizations should strive for a highly skilled workforce trained in the technical aspects of building, implementing and maintaining AI systems that are sound from an ethical perspective.

As accountability for these systems shifts back to humans, can employees explain how they trained their models to reduce bias? Can they point to ethical data collection and safeguarding practices? Have they vetted the responsibility of their vendors’ AI?

As technology continues to advance at an unprecedented pace, fostering ethical technologists is of paramount importance. Organizations must recognize the significance of ethical considerations in technology innovation and actively invest in nurturing responsible AI practices.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.


