Welcome to Cyber Security Today. This is the Week in Review edition for the week ending Friday, December 2nd, 2022. From Toronto, I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com.



In a few minutes David Shipley of Beauceron Security will join me to discuss recent cybersecurity news. But first a look back at some of what happened in the last seven days:

A member of the Alberta legislature was fined $7,200 for an unauthorized penetration test of a provincial vaccine portal. Did he do anything different from what security researchers and reporters do? David will have some thoughts.

Speaking of fines, Facebook’s parent company Meta Platforms was fined the equivalent of US$227 million by Ireland’s privacy commissioner for not adequately protecting personal information last year, allowing hackers to scrape the profile data of over 500 million people. And France’s data protection regulator fined an electricity provider the Canadian equivalent of $840,000 for storing customers’ passwords with a weak algorithm. A question David and I will discuss: Do fines work? And if so, under what circumstances?

Finally, we’ll take a look at a Dell survey of IT professionals on data protection issues. One finding: 40 per cent of respondents said they couldn’t recover data from their current method of backup.

In other news, hackers released another batch of data stolen from Australian private health insurer Medibank. Data of about 9.7 million current and former customers was copied in October. Medibank says the stolen personal data isn’t sufficient to enable identity or financial fraud. Some stolen health claims data, for example, isn’t joined with people’s names.

Security researchers have found vulnerabilities in the mobile apps of several major car manufacturers that could have allowed hackers to control the locks, engines and trunks of some vehicles. Their work is reported by the cyber news site The Record. Compromising the apps may in some cases start with an attacker scanning a vehicle’s VIN, or vehicle identification number, which can be seen on the dashboard. Hyundai has patched its app. SiriusXM, a wireless broadcasting service offered to car owners, has also updated its mobile app.

More troublesome Android apps have been discovered in the Google Play store. These apps pretend to be education-related applications in several languages. But according to researchers at Zimperium, their goal is to steal Facebook passwords. The apps have been downloaded some 300,000 times in 71 countries, including Canada and the U.S.

Separately, the Bleeping Computer news site reported that Google has removed a suspicious app called Symoo from the Play store. It’s supposed to be an SMS text app, but many user reviews complain it hijacks their smartphones and generates multiple one-time passcodes. Its real purpose appears to be creating accounts on other services.

And researchers at Synopsys found several vulnerabilities in three applications that allow an Android device to be used as a remote keyboard and mouse for desktop or laptop computers. The apps are called Lazy Mouse, Telepad and PC Keyboard.

(The following transcript has been edited for clarity)

Howard: Joining me now from Toronto is David Shipley.

Let’s start first with the member of the Alberta legislature who wanted to prove the provincial health department’s COVID vaccine website wasn’t secure. According to a news story, the MLA, Thomas Dang, claims he was contacted last year by a constituent with concerns about Alberta’s online vaccine verification portal. To do a test Dang needed to enter a person’s birth date, so without approval he used the birth date of the Premier of Alberta at that time, which was publicly known. He also used the Premier’s vaccination status, which was also publicly known. Hiding his IP address, Dang ran a computer script for four days to see what he could access. What he got was the vaccination record of a woman who had the same birth date he was searching with. Dang pleaded guilty to violating the provincial Health Information Act. In sentencing, the judge said Dang didn’t need to access a stranger’s records to prove the concern. David, was this foolish, or justified to gain evidence?

David Shipley: This was extremely foolish. I think it’s important to set the context: Dang had the skills to write this script. He has a computer science background. He knew there was a problem right off the bat. What he should have done as an MLA was go to the Health Department and say, ‘This is a problem and here’s why,’ showing the structure and nature of the web page and its relationship to the data. He could have asked, ‘Are you going to do something about it? You could add a captcha [as an extra login step], you could do other things.’ But he wanted to make a point. And in doing so he accessed someone’s personal information, which is against Alberta’s health records legislation. He didn’t need to prove this. If the department had said, ‘No, we don’t think this is serious,’ he could have held a press conference, brought in other computer science experts and really raised attention to the issue. The key thing here is consent.
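A script that can run unchallenged for four days suggests the portal wasn’t throttling requests. For readers curious what one of the ‘other things’ David mentions might look like, here’s a minimal sketch of per-client rate limiting; the Flask framework, endpoint name and thresholds are illustrative assumptions, not details of the Alberta system.

```python
# Minimal sketch: throttle repeated lookups from one client so a
# brute-force script stalls after a handful of tries.
# Endpoint name and thresholds are illustrative assumptions.
import time
from collections import defaultdict, deque

from flask import Flask, abort, request

app = Flask(__name__)

WINDOW_SECONDS = 60  # length of the sliding window
MAX_ATTEMPTS = 5     # lookups allowed per client per window

attempts = defaultdict(deque)  # client IP -> timestamps of recent lookups

def too_many_attempts(client_ip: str) -> bool:
    """Return True if this client has exceeded its lookup budget."""
    now = time.time()
    window = attempts[client_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # discard timestamps outside the window
    window.append(now)
    return len(window) > MAX_ATTEMPTS

@app.route("/records/lookup", methods=["POST"])
def lookup_record():
    if too_many_attempts(request.remote_addr):
        abort(429)  # Too Many Requests: a four-day script stalls here
    # ... identity checks and the actual record lookup would follow ...
    return "ok"
```

A captcha, account lockouts or monitoring for anomalous query volumes would layer on top of this, and none of them require accessing anyone’s records to justify.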

Howard: So if he had the consent of a third party to use their birth date for the purpose of a test that would have been better?

David: Partly. I definitely think having the consent of the person whose record you want to access might have been a really good defense for inappropriately accessing the information. But the other part is you still need the consent of the system provider. Where an organization doesn’t have a security disclosure process, a bug bounty process or an ethical reporting process in place, you don’t have its consent to do a penetration test. That’s essentially what he tried to do, and you can get yourself into a lot of hot water that way. This is a really important lesson for a lot of young aspiring cybersecurity researchers and those passionate about security issues. They genuinely want to fix these problems. But if you don’t have consent you can’t.

Howard: Don’t some security researchers do the same thing as this Alberta politician did? Off the top of my head, I’m thinking of reports where a researcher tried to see if a company’s website is secure because its URLs include a number that corresponds to a customer’s account. So after legitimately logging into the site, by changing one digit in the URL the researcher can see another customer’s profile. Then they publicize that the company has bad security.

David: There are a couple of different things here that perhaps some people will see as semantic arguments, but I’ll structure it this way: This [the Alberta incident] wasn’t a URL kind of situation. It was a case of input variables on a web form. It was a brute-force attack in the truest sense of the word. He literally had a script run for four days to try and break into an account. We can all acknowledge that the elements needed to prove identity for access to the vaccination portal were an example of inappropriate identity and access management controls, but you don’t need to test that to make that argument. As for trying to find out if URLs reveal customer data, there are a couple of breakdowns of security there as well. But I would argue that, yes, absent consent to go and do that test you may, in fact, be breaking laws. So you have to be very careful in testing. If you already have an account, say with an airline or a service, you’re far better off raising the issue with them than pulling the data to make your point. It’s also different from finding publicly available data, like data left in open Amazon S3 buckets, because there’s no authentication mechanism protecting that data. The moment you start working around authentication mechanisms you’re hacking. In order to ethically hack you need consent.
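For readers unfamiliar with the URL-manipulation flaw Howard describes, often called an insecure direct object reference, the missing piece is a server-side check that the logged-in user actually owns the requested record. A minimal sketch, assuming a hypothetical Flask endpoint and an in-memory table standing in for a real datastore:

```python
# Minimal sketch of the ownership check whose absence lets a user
# "change one digit in the URL" and read someone else's profile.
# The endpoint, datastore and field names are illustrative assumptions.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "replace-me"  # required for Flask sessions

# Stand-in for a real database: account ID -> owner and profile data
PROFILES = {
    1001: {"owner": "alice", "email": "alice@example.com"},
    1002: {"owner": "bob", "email": "bob@example.com"},
}

@app.route("/profiles/<int:account_id>")
def get_profile(account_id: int):
    profile = PROFILES.get(account_id)
    if profile is None:
        abort(404)
    # The crucial step: authentication proved who the user is, but
    # authorization must still confirm this record belongs to them.
    if profile["owner"] != session.get("user"):
        abort(403)  # walking the ID space now fails
    return {"email": profile["email"]}
```

When that check is missing, every authenticated customer can enumerate every other customer’s records, which is exactly the pattern David distinguishes from an open S3 bucket: there is an authentication mechanism, and the flaw is in what happens after it.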

Howard: What questionable activity have you seen by security researchers or reporters — or politicians for that matter?

David: The most egregious breach that I’ve ever seen was the old phone voicemail hacking that plagued the U.K.

Howard: The reporters who were doing the hacking were betting that the victims had not changed their default PINs. That’s how they were able to get into their phones’ answering systems.

David: But that was still hacking, and it wasn’t ethical hacking. If you’re trying to stay within the confines of the law there are ways of making your point without accessing somebody else’s data. Companies have a duty of care to protect personal data, but proving they’re not living up to that duty of care does not give you permission to see my records.

Howard: So there’s a difference between taking apart software and finding vulnerabilities and hacking a company to show that there’s a vulnerability.

David: Exactly. Dang could have copied the source code from the Alberta Health webpage and shown people the flaw, that it’s a common example of inappropriate authentication controls and that someone could easily exploit it. You don’t need smoking-gun evidence every single time, particularly when that smoking gun comes as a result of the bullet hitting somebody and causing a privacy violation. There’s a ‘do no harm’ aspect that we need to make sure exists with security research. You can’t say, ‘I did limited harm. I only saw a couple of people’s records to make my point.’ There’s also a distinction when, after a data breach, data is leaked on the dark web and journalists pick a couple of records and call people. The reporters didn’t defeat an authentication control or a system. Someone else did. The reporter is trying to figure out if there actually was a hack.

Howard: Is there a need for legislation to protect legitimate researchers, as long as they don’t keep personal data they find and they immediately report a vulnerability to the organization? Or does that create problems with defining who can do what would normally be a criminal offence?

David: It’s an interesting conundrum. I wish I was smart enough to say I had a definitive answer. But as I think about it, what are the potential ways this legislation could go wrong? Could a criminal say, ‘I was just joshing. I just wanted to find a vulnerability. I only looked at one record’? … There might be a middle ground here with respect to saying, ‘You are protected if you are doing security research on a company that has agreed to be the subject of that research to improve its security, and if you find something you report it as quickly as possible.’ Within that framework, I’m okay with legislation that protects that person. I’m not okay with, ‘Anybody can hack anything to go see if it’s broken.’

Howard: News item number two: As I said in the podcast intro, fines were a big part of this week’s news. Meta was hit with the equivalent of US$227 million in fines by Ireland’s data protection commission for not adequately protecting personal information last year. That’s when hackers scraped the profile data of over 500 million people. This was a violation of the EU’s General Data Protection Regulation (GDPR). It’s another example of the toughest privacy legislation in the world being used. Also, France’s data protection regulator fined an electricity provider under the GDPR the Canadian equivalent of $840,000 for storing customers’ passwords with a weak algorithm. Let’s start with the Meta fine. What struck you about this?

David: It is meaningful. In Canada [under the Personal Information Protection and Electronic Documents Act, PIPEDA] if you don’t report a data breach where there is a real risk of significant harm to persons you might get a $150,000 fine. Who cares about that at a publicly traded company? Shareholders and boards care when the fines are in the millions. Are fines perfect? No. Do they send signals that can change behavior? Yes, but you’ve got to exercise them and they’ve got to be meaningful to actually do anything.

Howard: The Reuters news agency noted that this was the fourth fine against a Meta company — Meta is the parent company of Facebook, Instagram, and WhatsApp — by the Irish regulator. For those who don’t know, the data protection regulator in Ireland essentially is the lead privacy regulator for the entire European Union and its rulings basically stand for all EU members. What’s going on here with Meta?

David: This is a company clearly not afraid to burn a lot of money. Look at the billions of dollars that have been sunk into the Metaverse project. Right now Mark Zuckerberg still has the broad support of shareholders and his board, and they’re okay with these business practices. This is a cost of doing business. However, as you point out, it’s the fourth one. Sooner or later this starts to get material. I think these are warning shots across the bow, and regulators may need to ramp it up if they don’t see behavior actually change. What’s going to be really interesting is what they do with [new Twitter owner] Elon Musk. He was warned last week [by French regulators] about the gutting of Twitter’s content moderation, among other things. It will be interesting to see if regulators throw a bigger book at Musk.

Howard: The fine against the French electricity company is interesting. Its offence was not only using a weak algorithm for hashing passwords; it also didn’t salt the hashes for the best protection. Which raises the question: what do governments have to do to get organizations to follow best [privacy and cybersecurity] practices? Do they have to put better definitions in the legislation, or raise fines?

David: This is the interesting challenge between business risk-based models, with industry experts setting the tempo of what risk appetite and appropriate controls could look like, and governments’ extremely prescriptive and specific controls that say, ‘You must do this.’ That’s great for the point in time when the regulations come out, but god help you if they don’t get updated for five years and the security ball moves. It’s the tension between having no rules and letting businesses handle it themselves, and very specific rules that a regulator can nail you for not following … There’s also the question of how IT gets the budget to maintain what is mandatory. Maybe there have to be regulations that say you have to have a process for the secure development and lifecycle of the IT services that you offer. If you want to avoid getting a big fine you’d better show some due diligence: that you kept up to date with the life of this product and you kept up to date with industry best practices … That really gets into cybersecurity policy and legislation in Canada, when Bill C-26 [which includes the Critical Cyber Systems Protection Act (CCSPA)] emerges from its Ottawa slumber sometime this spring. [C-26 puts cybersecurity and data breach reporting obligations on four critical infrastructure sectors]
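To make the password-storage point concrete: what regulators expect is a unique salt per user and a deliberately slow, memory-hard hashing function rather than a fast, unsalted algorithm. A minimal sketch using Python’s standard-library scrypt; the parameters are commonly cited defaults, not anything prescribed by the GDPR or the French ruling.

```python
# Minimal sketch of salted password hashing with scrypt.
# Cost parameters (n, r, p) are commonly cited defaults; tune for
# your hardware. Nothing here reflects the fined utility's systems.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). A unique salt per user defeats
    precomputed rainbow-table attacks."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return hmac.compare_digest(digest, expected)  # constant-time compare

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

Store the salt and digest, never the password itself; if the database leaks, every guess against every account then costs the attacker real compute.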

Howard: News item three: Dell released its annual Data Protection Index, a survey of about 1,000 IT decision-makers around the world at organizations with over 250 employees. I’m going to cherry-pick some of the responses: Forty per cent of respondents said they couldn’t recover data from their current data protection system. That compares to about 26 per cent who said the same in each of the previous three years. So for some reason in the last 12 months there’s been a big increase in data recovery problems. What does this mean? Was there something this year that caused data recovery problems, or is this a question that doesn’t really give any useful information to IT pros?

David: I don’t have any evidence to back up what I’m going to say, but data recovery is not just about having a system in place. It’s also the skilled personnel who know how to do it, because some of these things can be a lot more finicky than expected. Skill matters, and guess what? We’re in a talent shortage. So maybe processes were missed in the care, feeding and maintenance of the things that keep the backups recoverable. Maybe we’ve lost some very important institutional knowledge on how to successfully recover from existing systems, or maybe we’ve moved to the Brand New Cloud Thing because everyone’s riding the Cloud Train and we didn’t do it right. So I think it’s worth talking about. This is about more than just buying an IT solution. It’s the care, feeding and practicing of using that solution.
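One way to build the practicing David describes into routine operations is to periodically restore a backup to scratch space and verify it against a checksum manifest captured at backup time. A minimal sketch; the archive location, restore directory and tar-based restore step are illustrative assumptions, not any particular vendor’s procedure.

```python
# Minimal sketch of a scheduled restore test: unpack the latest backup
# into scratch space and compare file hashes against a saved manifest.
# Paths and the tar command are illustrative assumptions.
import hashlib
import subprocess
import sys
from pathlib import Path

BACKUP_ARCHIVE = Path("/backups/latest.tar.gz")  # assumed location
RESTORE_DIR = Path("/tmp/restore-test")

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_and_verify(manifest: dict[str, str]) -> bool:
    """manifest maps relative file paths to expected SHA-256 digests."""
    RESTORE_DIR.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["tar", "-xzf", str(BACKUP_ARCHIVE), "-C", str(RESTORE_DIR)],
        check=True,  # a failed unpack is itself a recovery failure
    )
    mismatches = [
        rel for rel, digest in manifest.items()
        if not (RESTORE_DIR / rel).is_file()
        or sha256(RESTORE_DIR / rel) != digest
    ]
    for rel in mismatches:
        print(f"MISMATCH: {rel}", file=sys.stderr)
    return not mismatches
```

Run on a schedule, with a human reviewing failures, a test like this turns ‘we have backups’ into ‘we know we can recover,’ which is the gap the Dell numbers point at.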

Howard: Here’s another question pulled out from that survey: Sixty-four per cent of respondents believe that if their organization suffers a ransomware attack they’re going to get all their data back if a ransom is paid. And 54 per cent of the respondents believe that if their organization pays a ransom they’re not going to be attacked again.

David: I like to save my beliefs for the holiday season, as part of the kindness and goodness of humanity. But criminals do what criminals do, and there’s a track record of it. They come back. And if you’ve got one gang playing around in your IT environment, odds are a second gang is, too; they just might stagger their attacks. Maybe, altruistically, the first gang doesn’t come back. But there’s data that argues against that. So these survey responses are stunning. To be honest, it’s fascinating. We’ve seen so many news stories where the data recovery tools provided by ransomware actors don’t work. These are bad beliefs. These are not beliefs that you should take to the bank in terms of the ease of ransomware recovery. The example that comes to mind is some of the difficulties the Irish healthcare system had using the decryption tools the [ransomware] criminals gave them. It was not a fun time. So you can see why ransomware is still a good business for criminals to be in, because of the beliefs of prospective “customers.”

 
