Artificial intelligence applications are a two-edged sword: They can be used by employees to improve productivity and make better products and services, and they can be used by attackers to undermine an organization.

What infosec pros have to do is prepare their organizations now, John Engates, Cloudflare’s field chief technology officer, said during a session of IT World Canada’s MapleSEC presentations last week.

“The thing that concerns me most is that, while there may be trusted safety mechanisms in commercial tools [like ChatGPT], the open source side of things lends itself to attackers using these in ways that were never intended,” he said. “There are no guardrails, no oversight.

“So just because ChatGPT doesn’t allow you to use it for hacking doesn’t mean attackers don’t have this capability” through generative AI tools they create.

Security researchers note that AI tools such as FraudGPT, WormGPT, and DarkBART (a malicious version of Google’s generative AI product, Bard) are already available to threat actors.

To defend themselves, organizations need tools such as data loss prevention (DLP) software that can detect the abuse of AI, as well as tools that watch application programming interfaces (APIs), since sensitive data flows through them. Stepped-up use of multifactor authentication (MFA) is also vital to protect logins in case an employee is fooled by an AI-generated phishing attack.
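To make the data loss prevention idea concrete, here is a minimal sketch of the kind of check a DLP-style gateway might run on prompts bound for an external AI service. The patterns, rule names, and the screen_prompt function are illustrative assumptions, not any vendor’s actual product:

```python
# Illustrative sketch only: a minimal DLP-style check that an outbound
# gateway might run on prompts before they reach an external AI API.
# The patterns and the allow/block decision are assumptions, not a
# real vendor ruleset.
import re

# Hypothetical patterns for data that should never leave the organization.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "sin": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),  # e.g. a Canadian SIN
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_rule_names) for an outbound AI prompt."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    return (len(hits) == 0, hits)

if __name__ == "__main__":
    allowed, hits = screen_prompt("Summarize this: customer SIN 123-456-789")
    print("allowed" if allowed else f"blocked ({', '.join(hits)})")
```

In practice such checks would sit alongside, not replace, the API monitoring and MFA measures described above.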

But employee education is also important. Engates offered these tips to infosec pros for protecting their organizations from being victimized by the use of AI:

— tell employees how to use AI apps safely;

“I would highly recommend, if you’ve got any plans for security awareness training this year — which most of you should — embedding a little bit of an AI module. You need to be doing AI training as well;”

— that training should include, alongside the usual encouragement for staff to report suspicious activity like phishing attacks, any mistakes they may have made using AI tools.

“You want to make it safe for them to talk about it so it doesn’t feel like they’re hiding from the boss.”

— encourage them to use AI, but responsibly. Remember, AI can be a differentiator for your business. “If you’re not experimenting with it now, you’re burying your head in the sand.” You want employees embracing AI and leading the way;

— help create a policy in your organization for the responsible use of AI. A scan by IT World Canada of such policies published by companies online shows they can cover when AI can’t be used, what data can’t be fed into an open AI model, a requirement that a human make the final decision where automated decision-making has legal implications, and the consequences of violating the policy (see the sketch below).
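For illustration only, here is a short sketch of how a few such policy clauses could be encoded as machine-checkable rules. The categories, field names, and rules are hypothetical examples, not a recommended or complete policy:

```python
# Illustrative sketch only: encoding the kinds of clauses an AI
# acceptable-use policy might contain as machine-checkable rules.
# Every category and rule here is a hypothetical example.
from dataclasses import dataclass

@dataclass
class ProposedUse:
    task: str
    data_classification: str   # e.g. "public", "internal", "confidential"
    has_legal_impact: bool
    human_reviews_output: bool

def check_against_policy(use: ProposedUse) -> list[str]:
    """Return a list of policy violations for a proposed AI use."""
    violations = []
    if use.data_classification in ("confidential", "internal"):
        violations.append("non-public data may not be sent to an open AI model")
    if use.has_legal_impact and not use.human_reviews_output:
        violations.append("a human must make the final decision where legal implications exist")
    return violations

print(check_against_policy(ProposedUse(
    task="draft contract clause",
    data_classification="confidential",
    has_legal_impact=True,
    human_reviews_output=False,
)))
```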

Ask your vendors, governments, and cyber agencies like the U.S. Cybersecurity and Infrastructure Security Agency (CISA) for advice on creating an acceptable AI use policy, Engates said.

The Canadian government, for example, issued these five guiding principles to federal departments:

To ensure the effective and ethical use of AI, the government will:

— understand and measure the impact of using AI by developing and sharing tools and approaches;

— be transparent about how and when we are using AI, starting with a clear user need and public benefit;

— provide meaningful explanations about AI decision-making, while also offering opportunities to review results and challenge these decisions;

— be as open as we can by sharing source code, training data, and other relevant information, all while protecting personal information, system integration, and national security and defence;

— provide sufficient training so that government employees developing and using AI solutions have the responsible design, function, and implementation skills needed to make AI-based public services better.

You can view Engates’ entire session here.
