
Canadian businesses have good reason to fear cyber threats

Companies should ban employee use of AI until safeguards are in place: cyber security expert
Cyber security survey found 70 per cent of businesses victimized in ransomware attacks paid a ransom. | Chris Collins/The Image Bank/Getty Images

Canadian businesses are increasingly concerned about fraud, identity theft, ransomware and other forms of cybercrime, as well they should be, according to a number of new surveys.

According to a risk assessment from Travelers Companies Inc. (NYSE:TRV), an American insurance company, 40 per cent of small- and medium-sized businesses in Canada admitted they had been victims of a cyber breach during the past two years.

Cybercrime is likely to get a lot worse and a lot more sophisticated, thanks to artificial intelligence, which not only gives cybercriminals new tools, but also opens up new back doors through which employees might unwittingly leak sensitive information.

In its annual Cybersecurity Survey, the Canadian Internet Registration Authority (CIRA) found that Canadian businesses and organizations are generally unprepared to handle and recover from cyber attacks, including generative AI-augmented attacks.

It appears some Canadian companies may simply be accepting cybercrime as a cost of doing business: of the companies that were successfully held hostage by a ransomware attack, the survey found, 70 per cent paid the criminals a ransom, and 22 per cent of those paid up to $100,000.

Of those that were victims of some form of cyber attack, the survey found:

  • nearly 30 per cent suffered loss of revenue from a cyber attack, up from 17 per cent in 2022;
  • more than 40 per cent reported an employee and/or customer data breach this year; and
  • 24 per cent suffered reputational loss.

And AI is only increasing the risks.

“In the hands of criminals, AI can supercharge efforts to trick employees and exploit vulnerabilities in a company’s digital infrastructure,” said Jon Ferguson, CIRA’s general manager of cybersecurity.

“It’s no secret that most organizations struggle to adapt to new technology, and today’s results suggest that Canadian firms still have work to do to prepare for the threats posed by AI.”

Generative AI is already being used to mimic human voices, so phishing scams can now include a voice component: a cybercriminal could pose, for example, as a company’s chief financial officer and “speak” to the CEO by phone to explain why funds need to be transferred to a particular account.

All a cybercriminal would need to fake a senior executive’s voice is a recording of an earnings call; generative AI could do the rest.

“This is not a future thing – this is a now thing,” Jamie Hari, CIRA’s director of product management, cyber and domain name systems, told BIV News. “Those types of emergent AI technologies are going to make humans very vulnerable.”

AI chatbots also could be opening new holes in an organization's cyber security armour.

Apps like ChatGPT and Microsoft 365 Copilot can retain and then reuse information that is fed into them, Hari explained. So if an employee plugs proprietary information into a chatbot to ask it a question or have it perform a task, that information could at some point end up being made public somewhere else.

“For example, like Bing’s Copilot, you are able to put information in as much as you get information out,” Hari explained.

“So if your organization is putting a chunk of sensitive information into these tools – like ‘Hey, ChatGPT, can you analyze this year’s results for me’ – well, the licence and the protections on that information that you put into ChatGPT could be used by Microsoft, or it could just be stored by Microsoft and be accessed by others.”
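
Hari’s point about data retention suggests one technical mitigation some organizations pair with policy: scrubbing sensitive strings from text before it ever reaches an external chatbot. The sketch below is purely illustrative and is not from CIRA; the patterns and the `redact` helper are invented for demonstration, and a real deployment would rely on a proper data-loss-prevention ruleset.

```python
import re

# Illustrative patterns only; a production DLP ruleset would be far
# more thorough and tuned to the organization's own data.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD_OR_ACCOUNT": re.compile(r"\b\d{12,19}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholders before the text
    is submitted to any external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize Q3: contact jane.doe@example.com, account 4111111111111111."
print(redact(prompt))
# -> Summarize Q3: contact [EMAIL REDACTED], account [CARD_OR_ACCOUNT REDACTED].
```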

Hari said companies should develop safeguards around AI use, and until they are in place, he suggests there should be a blanket ban on employees using various kinds of AI tools.

“I would see most organizations start with a blanket ban, and then include a few cases where they are very specifically allowed,” Hari said.

“Whether it’s the CBC that’s producing for the nightly news, or whether it’s a financial department trying to analyze some numbers or whether it’s a software division trying to write new code, I think it really doesn’t matter the application – I think the default position should be ‘we’re not using it.’

“And then, only when a proper analysis of the security risks and the exposures are completed, then opening the policy, very small at first, and with a very tight set of controls is the way to start.”
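
One way to read Hari’s advice is as a default-deny policy with narrow, reviewed exceptions. The short sketch below is a hypothetical illustration of that structure, not anything CIRA publishes; the team and tool names are invented.

```python
# Hypothetical default-deny AI-use policy: every tool is banned unless a
# (team, tool) pair has been explicitly reviewed and approved.
ALLOWLIST: dict[str, set[str]] = {
    "finance": {"approved-analytics-copilot"},   # invented tool names
    "engineering": {"internal-code-assistant"},
}

def ai_use_permitted(team: str, tool: str) -> bool:
    """Return True only for explicitly approved (team, tool) pairs."""
    return tool in ALLOWLIST.get(team, set())

print(ai_use_permitted("finance", "chatgpt"))                     # False: default deny
print(ai_use_permitted("finance", "approved-analytics-copilot"))  # True: reviewed exception
```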

[email protected]

twitter.com/nbennett_biv
