In the year since OpenAI released ChatGPT to the public, the world and the workplace have been remade by AI. Kolide, a cybersecurity company, recently released the Shadow IT Report, which sheds light on the remarkable adoption of generative AI technologies within just one year.
Kolide partnered with Dimensional Research to survey over 300 knowledge workers and found that a staggering 89% use generative AI tools for work tasks at least once a month. These tools include applications such as ChatGPT for writing tasks and GitHub Copilot for coding.
However, the survey also reveals a concerning gap in AI governance. Many businesses are encouraging or requiring employees to embrace AI, but are failing to provide adequate training on responsible and safe usage.
Key Risks of AI in the Workplace
AI Errors: Generative AI tools, particularly Large Language Models (LLMs), are prone to errors or “hallucinations,” in which they present fabricated information as fact. The potential legal, reputational, and financial implications of AI-generated inaccuracies underscore the need for vigilant oversight.
AI Plagiarism: The debate over whether AI-generated content constitutes plagiarism or copyright violation is ongoing. Lawsuits involving authors, comedians, and developers highlight the need for visibility into AI use to avoid legal repercussions and maintain the quality of original work.
AI Security Risks: AI-generated code can carry vulnerabilities and security flaws, opening the door to data breaches and hacks. The emergence of malware disguised as AI tools further complicates the security landscape, making robust measures to protect sensitive information essential.
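To make the code-security risk concrete, here is a minimal sketch (not drawn from the report; the table, data, and function names are illustrative) of the injection-prone pattern code assistants have been observed to produce, next to the parameterized alternative a human reviewer should insist on:

```python
import sqlite3

# Hypothetical example: an injection-prone query of the kind an AI
# assistant may generate, alongside the parameterized fix a reviewer
# should require. Table and column names are illustrative only.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Risky pattern: user input interpolated directly into SQL,
    # so input like "' OR '1'='1" returns every row in the table.
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Safer pattern: a placeholder lets the driver escape the input.
    query = "SELECT name, role FROM users WHERE name = ?"
    return conn.execute(query, (name,)).fetchall()

malicious = "' OR '1'='1"
print(find_user_unsafe(malicious))  # leaks all rows: [('alice', 'admin')]
print(find_user_safe(malicious))    # returns nothing: []
```

The point is not that AI always writes the unsafe version, but that code like the first function runs cleanly and passes casual testing, which is exactly why shipping AI-generated code without review is risky.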
The Kolide Shadow IT Report Findings
The survey delves into employee AI use and surfaces several surprising data points:
- 68% of companies allow AI use: The 21-point gap between the share of workers whose companies allow AI (68%) and the share who actually use it (89%) suggests that a sizable amount of AI-assisted work is happening without oversight or scrutiny.
- Only 56% of companies educate workers on AI risks: Despite the widespread use of AI, just over half of companies educate their workforce on its aforementioned risks.
- Workers underestimate their colleagues’ AI usage: Nearly half (49%) of workers believe that fewer than 10% of their colleagues use AI-based applications. Only 1% correctly estimate that the real number is closer to 90%.
The Call to Action: Implementing AI Acceptable Use Policies
Kolide advocates for the immediate development and implementation of AI Acceptable Use Policies. These policies should focus on:
- Getting visibility into worker AI use: Establish non-judgmental communication channels to understand how and why employees are using AI, facilitating education on risks and safer alternatives.
- Preventing the riskiest forms of AI: Create enforceable measures to prevent unsafe AI uses, such as blocking unapproved tools and AI-based browser extensions (a minimal sketch of one such control follows this list).
- Getting cross-department input to craft AI usage policies: Develop comprehensive policies, both legal and practical, that consider the needs of different departments within the organization. Educate workers on issues such as data exposure, bias avoidance, and the use of approved AI tools.
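As a rough illustration of the “blocking unapproved tools” control above, the sketch below screens requested domains against an allowlist of sanctioned AI services. The domains and policy here are hypothetical examples, not recommendations from Kolide; in practice this logic would live in an existing DNS filter, secure web gateway, or device-management tool rather than a standalone script.

```python
# Hypothetical sketch of an AI acceptable-use control: classify a
# requested domain against an allowlist of approved AI tools. All
# domains below are illustrative placeholders.

APPROVED_AI_DOMAINS = {
    "chat.openai.com",   # e.g., a sanctioned ChatGPT deployment
    "github.com",        # e.g., GitHub Copilot under a business plan
}

KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "github.com",
    "unvetted-ai-tool.example",  # placeholder for an unapproved tool
}

def policy_decision(domain: str) -> str:
    """Classify a requested domain under the AI acceptable use policy."""
    if domain in APPROVED_AI_DOMAINS:
        return "allow"
    if domain in KNOWN_AI_DOMAINS:
        return "block"   # a known AI tool that is not on the approved list
    return "allow"       # not an AI tool; outside the scope of this policy

for d in ["chat.openai.com", "unvetted-ai-tool.example", "intranet.example"]:
    print(d, "->", policy_decision(d))
```

The design choice worth noting is the default: unknown, non-AI domains pass through, so the policy targets unvetted AI tools specifically rather than becoming a general-purpose web filter.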
By creating a robust framework for responsible AI use, organizations can harness the transformative power of generative AI while mitigating risks and ensuring ethical practices.