Employees ‘not waiting for permission’ to leverage new tech
Calls for employers to formulate policies and provide training on generative AI are gaining momentum as employees increasingly adopt the emerging technology across workplaces.
Jason Lau, board director at ISACA (the international professional association specializing in IT governance) and Chief Information Security Officer (CISO) at Crypto.com, has issued a call to action, prompted by ISACA’s latest report, which found that employees are actively using generative AI despite the absence of clear policies and training from their employers.
According to the report, over 40% of employees reported using generative AI at work, for purposes including:
- Creating written content (65%)
- Enhancing productivity (44%)
- Automating repetitive tasks (32%)
- Providing customer service (29%)
- Improving decision-making (27%)
Clearly, employees “are not waiting for permission” to harness generative AI in their work. This widespread practice contrasts with the finding that only 28% of employers say they permit the technology in the workplace, and a mere 10% of those have established a formal, comprehensive policy to govern its use.
Jason Lau emphasized the urgency of providing policies, guidance, and training to ensure that generative AI is employed appropriately and ethically within organizations.
This call amplifies the growing chorus of experts worldwide advocating for such measures. Alastair Miller, Principal Consultant at Aura Information Security, previously urged New Zealand employers to craft an AI policy outlining guidelines for employee usage. The Conference Board has also urged the prompt establishment of AI usage guidelines, as the technology’s capabilities and scope continue to expand.
In addition to the growing demands for policies and training, attention is also drawn to the potential risks associated with generative AI. The ISACA report highlights that these risks are not receiving adequate consideration.
A substantial 23% of employers said they currently have no plans to address AI as a risk, and less than a third view it as an immediate priority. The risks respondents most often associated with generative AI were:
- Misinformation and disinformation (77%)
- Privacy violations (68%)
- Social engineering (63%)
- Loss of intellectual property (58%)
- Job displacement (35%)
- Widening of skills gaps (35%)
These findings mirror previous warnings from authoritative figures, such as New Zealand’s Privacy Commissioner, who alerted businesses to the potential consequences of generative AI in the workplace.
Notably, Samsung, the South Korean tech giant, temporarily banned ChatGPT after employees leaked sensitive information by entering it into the generative AI tool.
John De Santis, ISACA board chair, emphasized that incidents involving employees sharing critical information using generative AI tools often occur in the absence of adequate governance. He stressed the need for leaders to swiftly familiarize themselves with the benefits and risks of the technology and equip their team members with the necessary knowledge.