Allure Security, a cybersecurity company that uses a patented artificial intelligence (AI) engine designed to provide online brand protection, has announced a technology licensing agreement with CTM Insights, which specializes in advanced image-fingerprinting technology.

According to a release, augmenting Allure Security’s machine learning algorithms, which are focused on rooting out online brand impersonation attacks, with CTM Insights’ image-fingerprinting technology will further enhance its ability to find fake websites, social media accounts, and mobile apps.

The release went on to say that consumers are often tricked by phishing emails that take them to a fake website that steals login credentials, payment data, or other personal information: “This can result in identity theft and fraud, costing billions of dollars at e-commerce sites and financial institutions around the globe.

“Images on the fake web sites sometimes have minuscule changes to confuse automated detection engines. The technology developed by CTM Insights showed 97 per cent accuracy in real-world testing, even with changed images.”

Lou Steinberg, the former chief technology officer at TD Ameritrade who founded CTM Insights five years ago, said that the patent-pending approach helps solve the growing problem of deepfakes, which are becoming increasingly disruptive to society.

“We are able to check to see if a photo is a known ‘bad’ image even if the image was changed,” he said. “While Allure Security is licensing the software to detect fake web sites, the core of this technology can be used to combat other issues like child pornography and fake news.”
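Neither company has published how the matching actually works, but the general idea of recognizing a known image despite small alterations can be sketched with off-the-shelf perceptual hashing. The Python example below uses the open-source Pillow and imagehash libraries; the file names and distance threshold are illustrative, and this is a generic technique rather than CTM Insights’ patent-pending fingerprinting.

```python
# Minimal illustration of matching a slightly edited image against a known
# "bad" image using perceptual hashing (pip install pillow imagehash).
# Generic technique for illustration only; not CTM Insights' proprietary method.
from PIL import Image
import imagehash

# Hypothetical file names: a known fraudulent brand image and a lightly
# modified copy scraped from a suspected phishing site.
known_bad = imagehash.phash(Image.open("known_bad_logo.png"))
candidate = imagehash.phash(Image.open("scraped_logo.png"))

# Perceptual hashes of visually similar images differ in only a few bits,
# so a small Hamming distance suggests the candidate is the same picture
# despite minor pixel-level edits meant to defeat exact matching.
distance = known_bad - candidate  # Hamming distance between the two hashes
if distance <= 8:                 # threshold chosen purely for illustration
    print(f"Likely match (distance {distance})")
else:
    print(f"No match (distance {distance})")
```

Because a perceptual hash is derived from an image’s overall structure rather than its exact bytes, tiny pixel-level changes alter only a few bits of the hash, which is the kind of robustness to “minuscule changes” the release describes.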

On its website, the company describes itself as a research lab combined with an early-stage technology studio and incubator, focused on new approaches to structural problems: “We provide seed funding and guidance to prototype ideas that can Change The Model (CTM) to some of the tech’s hardest problems in cybersecurity. Incubated ideas can result in IP licensing or as companies that continue on as independent entities.”

In an interview with IT World Canada, Steinberg said the core reason he launched the company was to solve the problem of fake data.

“That could be fake news or an edited image that gets uploaded. That could be manipulated data in voting records,” he noted.

He recalled that while at TD Ameritrade, what “terrified” him and other cyber security professionals on staff was not “ransomware where somebody would come in and encrypt data – we had good backups – but the fact someone might change data.

“If you change data, you can change positions (the amount of a security asset owned), balances, accounts payable, or accounts receivable. You can even change AI training data and start to teach AI to do the wrong thing.

“We started looking at the next generation of attack that we decided needed to be solved. And we said, ‘it’s not good enough to know if your data was changed. We have to know where your data was changed. We have to isolate the ‘where’ – whether it’s records in a database or part of an image.’ That was the beginning – the problem statement we went after was ‘how do you know where data should not be trusted?’”
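The interview does not go into the mechanism, but the “where” part of the problem can be illustrated with a simple chunked-hashing sketch: hash the data in fixed-size pieces, compare against a trusted baseline, and flag only the pieces whose hashes differ. The chunk size, helper names, and sample data below are hypothetical, and this is a generic illustration rather than CTM Insights’ patent-pending method.

```python
# Generic sketch of localizing where data changed: hash fixed-size chunks and
# compare against a trusted baseline so only modified regions are flagged.
# Illustrative only; not CTM Insights' actual technique.
import hashlib


def chunk_hashes(data: bytes, chunk_size: int = 64) -> list[str]:
    """Return a SHA-256 hash for each fixed-size chunk of the data."""
    return [
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    ]


def changed_chunks(baseline: bytes, current: bytes, chunk_size: int = 64) -> list[int]:
    """Compare chunk hashes and return the indices of chunks that differ."""
    old, new = chunk_hashes(baseline, chunk_size), chunk_hashes(current, chunk_size)
    length = max(len(old), len(new))
    return [i for i in range(length)
            if i >= len(old) or i >= len(new) or old[i] != new[i]]


# Example: a single edited value in a flat file shows up as one changed chunk,
# isolating where the data should no longer be trusted.
baseline = b"account=123 balance=1000\n" * 8
tampered = bytearray(baseline)
tampered[45:49] = b"9999"  # simulate an attacker editing one record's balance
print(changed_chunks(baseline, bytes(tampered)))  # -> index of the tampered chunk
```

The same idea extends to images by hashing tiles of the picture instead of byte ranges, so a manipulated region can be pinpointed rather than simply flagging the whole file as changed.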

As the CTO who oversaw cybersecurity and fraud detection, Steinberg said, a common practice was to model the capabilities of threat actors and then project what they might be capable of doing three years into the future.

Steinberg cited the example of a study of the cybercrime organization known as the Russian Business Network (RBN): “We know they can do this today; we think they will be able to do that. And then we would build defenses against future attacks, because you can’t wait for them to get good at something to start defending.

“We had a punch list of what happens when they acquire some new capability that we couldn’t defend against. And when I retired, I said, I’m tired of waiting for someone else to solve these. I’m going to start a research lab and we’re going to start solving problems. And that’s what we do.”

He likens it to being in a constant state of co-evolution: “Co-evolution says when the lions get faster, gazelles get faster because slow gazelles get eaten. We have this in cyber, in that we’re both ratcheting up our capabilities. But every once in a while, the lions get jet-packs. And it’s a bad day to be a gazelle. I’m in the anti-jet-pack business.”

