‘Offensive security’ in the form of a security red team is a capability that some IT leaders have created in their organizations to test the protection of their IT environment.
But what about ‘offensive privacy’? Why not create a privacy red team?
Scott Tenaglia, engineering manager of Meta’s privacy red team, thinks it’s time more privacy pros started talking about it.
“You all understand how offence helps your security program,” he told an audience at this month’s Black Hat security conference in Las Vegas. “I want to make the case for how offence helps your privacy program.”
Security red teams pretend to be threat actors targeting their organization’s IT systems. If you think of the organization as a container of information, said Tenaglia, that’s the target of the privacy red team.
A security red team may believe it does the same work as a privacy red team, he said: steal credentials, log into a server, move laterally, install an exploit or malware for persistence, and exfiltrate data.
But, he argued, that’s an indirect attack: The team compromised an entire network of systems to access the target — sensitive data.
“The privacy red team is more interested in direct access,” Tenaglia said, such as through user interfaces or APIs.
Few American organizations have one, although Google began planning for one in 2012. Red teaming for privacy got a boost after Europe’s General Data Protection Regulation came into force in 2018. Still, in Canada it’s virtually unheard of.
“This is not common at all in Canada,” said Imran Ahmad, co-head of the information governance, privacy and cybersecurity team at the Norton Rose Fulbright law firm. “I don’t think it’s a bad idea, but it will be dependent on a business’s resources and what the scope of the red teaming will be.”
Cat Coode, founder of the Canadian privacy consultancy Binary Tattoo, called it “a brilliant idea, but sadly few to no companies are doing this. When I attended the Canadian IAPP [International Association of Privacy Professionals] conference in May, the universal message was that corporate privacy was underfunded and vastly understaffed. They don’t have enough bodies to run Privacy Impact Assessments and privacy vendor reviews. And these weren’t small start-ups or scale-ups, these were household name retailers, banks, and hospitals. So it would be a wonderful concept but it’s not happening… yet. The first stepping stone would be to have Privacy by Design training for the designers and development teams.”
Tenaglia said Meta’s privacy red team’s mission statement is “proactively testing people, processes, and technology from an adversarial perspective to identify the actual risk to our users’ data and their privacy.”
He gave several examples of how a privacy red team differs from a security red team. When a security red team member, or a penetration tester, comes across sensitive data, they don’t know what to do with it. Discovering unprotected data isn’t the point of the operation; finding vulnerabilities is. The discovery might be logged, or simply ignored because it doesn’t have a security impact.
Tenaglia recalled a presentation he and colleagues gave at Black Hat Europe 2016 on their discovery of a code injection vulnerability in a smartphone app. Few in the audience were impressed, he said, because the vulnerability was “pretty vanilla.” What they didn’t appreciate was that the bug allowed an attacker to pull personal data off the device or turn it into a GPS tracker. “The privacy impact of that vulnerability was super high,” Tenaglia said.
Security and privacy red teams do some similar work, he argued. For example, a standard red team test is to reconnoiter the network. If the scan gets little response, the firewall is working well; if it turns up plenty of hosts and services, the firewall needs to be tweaked.
The privacy equivalent is to see whether the team can get access to sensitive data and get around data download limits. If it can’t pull more records than the rate limit allows, the control works; if it can, the control needs to be tweaked.
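As an illustration only (not a tool described in the talk), here is a minimal sketch of what such a rate-limit probe might look like, assuming a hypothetical internal export endpoint and a policy limit of 1,000 records per request:

```python
# Hypothetical sketch of a privacy red team rate-limit probe.
# The endpoint, parameters, and limit below are illustrative assumptions,
# not Meta's actual API or tooling.
import requests

BASE_URL = "https://example.internal/api/export"  # assumed test endpoint
EXPECTED_LIMIT = 1000                             # records policy allows per request


def probe_download_limit(session: requests.Session, requested: int) -> int:
    """Request `requested` records and return how many the API actually returned."""
    resp = session.get(BASE_URL, params={"count": requested}, timeout=30)
    resp.raise_for_status()
    return len(resp.json().get("records", []))


if __name__ == "__main__":
    s = requests.Session()
    # Ask for ten times the documented limit; a well-enforced control
    # should cap the response at EXPECTED_LIMIT or reject the request.
    returned = probe_download_limit(s, EXPECTED_LIMIT * 10)
    if returned <= EXPECTED_LIMIT:
        print(f"Limit enforced: {returned} records returned")
    else:
        print(f"Potential weakness: {returned} records returned, expected <= {EXPECTED_LIMIT}")
```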
Privacy red teams start with the disadvantage that security is a more mature discipline, Tenaglia said, with its own taxonomy of threat actors and their tactics, techniques, and procedures (TTPs). There isn’t a privacy equivalent yet of the MITRE ATT&CK framework, so Meta has created one of its own, with four “profiles” of the adversaries it thinks it faces. It has also created a privacy weakness taxonomy, or catalogue, of the different weaknesses its team finds, to be shared with application and security teams. He noted that each organization’s privacy threat actors may look different from its security threat actors.
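Meta’s actual schema wasn’t described in detail, but a catalogue entry of this kind might be structured something like the following sketch (the fields and example values are assumptions for illustration):

```python
# Sketch of what a privacy weakness catalogue entry might contain.
# Fields and values are assumptions for illustration, not Meta's schema.
from dataclasses import dataclass, field
from enum import Enum


class PrivacyImpact(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class WeaknessEntry:
    identifier: str                    # catalogue ID, e.g. "PW-0042"
    title: str                         # short name of the weakness class
    description: str                   # what the weakness is and why it matters
    impact: PrivacyImpact              # assessed privacy impact
    affected_components: list[str] = field(default_factory=list)
    remediation: str = ""              # guidance shared with product/security teams


example = WeaknessEntry(
    identifier="PW-0042",
    title="Personal data exposed through verbose API response",
    description="An internal API returns more user fields than the caller needs.",
    impact=PrivacyImpact.HIGH,
    affected_components=["profile-service"],
    remediation="Apply field-level filtering so responses carry only required data.",
)
```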
Another difficulty is privacy-related metrics. Traditional red team security metrics — time to compromise or time to detection — aren’t relevant in privacy.
“I don’t know what the perfect set of metrics are,” he admitted. But the goal of any privacy metric should be to drive fundamental changes in the organization’s privacy posture.
As for who should be on a privacy red team, Tenaglia said leaders should look for people who care about privacy.
“A lot of people are not terribly interested in privacy because they think about it as a risk assessment thing, a non-technical discipline,” he maintained. “First and foremost I want to say risk assessments are really, really important to a privacy program. But it is not what we do. We do technical assessments. In fact, privacy engineering operators are privacy engineers.”
The types of engineers Meta looks for have three main characteristics:
–they have to be able to think like an adversary;
–they have an offensive skillset. “We don’t want your knowledge of Linux or Windows because we’re not attacking those things. But we want to know about your technical knowledge of our platforms, and we can teach you how to manipulate those things;”
–and they have privacy instincts — they care about privacy.
He recruits from among red teamers, pen testers, vulnerability and security researchers, and appsec engineers.
The weaknesses they seek include faults, flaws, errors in code, and bad practices that are the root cause of privacy issues. Meta’s privacy red team doesn’t call them security vulnerabilities because it doesn’t know if they are. But the privacy red team does say to Meta’s product and engineering teams, ‘This is what it is, this is the impact, this is what it looks like in code, this is how you can remediate it or prevent it in the future,’ Tenaglia said.
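To make that kind of write-up concrete, here is a hypothetical example, not drawn from Meta’s catalogue, of a common code-level privacy weakness (personal data written verbatim to application logs) alongside a possible remediation:

```python
# Hypothetical privacy weakness: personal data written verbatim to logs.
# Function names and fields are illustrative, not Meta's code.
import hashlib
import logging

logger = logging.getLogger("signup")


def handle_signup_weak(email: str) -> None:
    # Weakness: the raw email address lands in log files that many
    # engineers and systems can read, widening exposure of user data.
    logger.info("new signup: %s", email)


def handle_signup_fixed(email: str) -> None:
    # Remediation: log a one-way hash (or nothing) so the event is still
    # traceable for debugging without exposing the address itself.
    digest = hashlib.sha256(email.encode("utf-8")).hexdigest()[:12]
    logger.info("new signup: user=%s", digest)


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    handle_signup_weak("alice@example.com")   # logs the raw address (weak)
    handle_signup_fixed("alice@example.com")  # logs only a truncated hash
```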
He also warned that accessing, collecting, and storing user data for privacy red teaming might raise legal and regulatory issues, so the team should liaise with lawyers to see how those risks can be mitigated.
The goal of a privacy red team is to measure and understand the organization’s resilience to a particular type of adversary, he said.
Twenty years ago, IT security was technology-focused, he said. Now it has found a balance between technology and risk assessment. Privacy started by emphasizing risk minimization because data was on-premises, but is being pulled towards technology because sensitive data can be found on mobile devices and in the cloud.
“Hopefully in the future there will be a nice balance” in privacy between technology and risk, he said. “Twenty years from now people will say ‘I can’t believe we even debated whether offensive privacy is a thing you should have in a privacy program’ … Starting this conversation now is really going to help us get there.”