In recent years, the use of Artificial Intelligence (AI) in Human Resources has soared as companies aim to streamline their hiring processes. As federal oversight looms, it’s crucial to grasp the implications of President Biden’s recent executive order, which extends beyond HR into broader AI regulation.
The executive order acknowledges that “irresponsible uses of AI can lead to and exacerbate discrimination, bias, and other injustices, affecting areas like justice, healthcare, and housing.” Before this order, the Biden-Harris Administration had already taken steps to tackle “algorithmic discrimination,” particularly relevant for HR professionals given AI’s primary role in talent acquisition.
President Biden’s recommendations to employers include prioritizing workers’ rights to collective bargaining and investing in AI upskilling. The administration underscores that HR departments can actively address labor standards, workplace equity, health, safety, and data collection, among other vital concerns.
Moreover, the order underscores the need to assess AI’s potential impact on the labor market within specific industries. It advocates adopting best practices that promote positive outcomes, such as fostering innovation and competition, while mitigating potential negative effects, including ethical concerns and job displacement.
As the HR landscape adapts to these regulatory changes, employers and HR professionals are urged to stay vigilant, adhering to emerging guidelines and standards to ensure responsible and ethical AI utilization in the pursuit of optimized workforces.
Biden’s Executive Order: Safeguarding AI
President Biden’s recent executive order extends far beyond the realm of HR and delves into comprehensive AI regulation. It underscores the need to address AI’s potential risks and challenges while promoting innovation and responsible AI development.
The order requires companies to report to the federal government on the risks of their AI systems being misused to create weapons of mass destruction or “deep fakes” with malicious intent. “Deep fakes” use AI-generated audio and video to spread fake news, potentially swaying elections or deceiving consumers.
The executive order, representing the United States’ commitment to AI regulation, centers on safety and security mandates, encouraging AI development in the country while attracting foreign talent. It also aims to counter China’s technological advances, particularly in large language models and computer chips.
Already, Europe is moving ahead with rules of its own, and Vice President Kamala Harris is traveling to Britain this week to represent the United States at an international conference organized by that country’s prime minister, Rishi Sunak.
While some companies welcome the prospect of government regulation, there are concerns about mandates directing federal agencies to address anticompetitive conduct and consumer protection. Nonetheless, the order emphasizes security measures, requiring companies to test advanced AI tools to prevent their use in harmful applications.
The order also promotes watermarking to trace the origin of AI-generated content, combating deep fakes and disinformation. AI regulation is a vital step toward harnessing AI’s potential while mitigating its risks, marking a new era of responsible AI governance. However, some directives may face implementation challenges, particularly around hiring AI experts and enacting privacy legislation. The Biden administration’s commitment to AI regulation is evident, but the path forward involves navigating complex issues and potential Congressional action.