A large number of organizations are considering or implementing bans on employee use of ChatGPT and other generative AI applications over security, privacy and brand damage concerns, according to a survey by BlackBerry.
Three-quarters of the 2,000 IT decision-makers surveyed in the U.S., Canada, the U.K., France, Germany, the Netherlands, Japan, and Australia said their organizations are either implementing such bans or leaning toward them.
The majority of those deploying or considering employee bans (61 per cent) say the measures are intended to be long-term or permanent.
Potential risk to data security and privacy is the biggest reason (67 per cent) survey respondents cited for moving to block ChatGPT and similar generative AI tools. The next greatest concern (57 per cent) was risk to corporate reputation.
The survey suggests firms are worried about employees using unsecured consumer AI products, while remaining open to approving other AI tools.
For example, BlackBerry says respondents heavily favoured the use of AI tools for cybersecurity defense (81 per cent).
Most IT decision-makers in the BlackBerry survey also recognized the potential for generative AI applications to have a positive impact in the workplace. Just over half agreed AI could help increase efficiency (55 per cent), drive innovation (52 per cent), and enhance creativity (51 per cent).
The survey, conducted in June and July, comes amid almost daily stories of sometimes questionable results from AI queries. Last week, an Asian woman who submitted her photo to Playground AI and asked for recommendations to make her headshot look more professional received an image that made her white, with blue eyes.
The post Many organizations want to limit employee access to AI, survey shows first appeared on IT World Canada.