Interview with Patrick Harding, Ping Identity: ‘We Need to Get Used to the New AI Threat Landscape’


If your CEO rang you, rushing through the airport and needing a password changed or a payment sent, would you do it?

You might be suspicious, but what if the voice was the genuine voice of your boss, with the same audio tics and phrases, and you spoke for five minutes to allay your doubts?

We no longer live in a world of misspelled emails from a ‘Nigerian Prince.’ We live in a world where the malicious side of artificial intelligence (AI) can unleash personalized deepfake audio on the fly, or repeatedly probe your or your company’s attack vectors at little or no cost in manpower.

Solutions, such as pitting AI against AI, may come in time, but they lag behind the first wave of attacks.

Techopedia sits down with Patrick Harding, chief product architect at Ping Identity, to discuss why we all need to be very careful — the attacks of today are not the attacks we have been getting used to for 30 years.

As he says: “There’s a level of vigilance that people have to get comfortable with for the next year or two as we catch up on the technologies to start to protect against this.”


About Patrick Harding


Patrick Harding, chief product architect at Ping Identity, has more than 20 years of experience in software development, networking infrastructure, and information security.

He is responsible for Ping Identity’s product and technology strategy, leading the Office of the CTO and Ping Labs.

Harding’s expertise includes identity and access management, strong authentication, cyber security, cryptography, blockchain, cryptocurrency, and distributed ledger technology.

Previously, Harding was a VP and the security architect at Fidelity Investments, where he was responsible for aligning identity management and security technologies with the strategic goals of the business.

Key Takeaways

  • AI poses significant security risks as fraudsters leverage AI tools to compromise accounts.
  • Threats now include sophisticated phishing emails, deepfake videos, and synthetic voice messages.
  • AI may help with the solution in time, but right now the threats are very different to the ones we are vigilant against.
  • Passwords are not enough — passkeys, multifactor authentication (MFA), and behavioral biometrics need to be used together.
  • However, there is a growing challenge of MFA fatigue, where users become overwhelmed with authentication requests, leading to inadvertent acceptance of fraudulent attempts.
  • Continuous authentication is essential to prevent issues like session hijacking, where fraudsters can impersonate legitimate users across multiple applications.

How the Use of AI Affects Security

Q: Does the increased use of artificial intelligence necessitate heightened security measures?

A: Absolutely. AI is being used by fraudsters to compromise users’ accounts right now in multiple ways. There are AI tools that allow someone to generate a very sophisticated phishing email that reads like a regular email you might get.

These are no longer like emails from a Nigerian prince. The content of these emails is very targeted. They might include names that are very specific to an organization — they look very real.

AI tools are also being used to create videos that can be used over a Zoom call to impersonate and look like the real user. AI can also be used to generate a voice that sounds very much like a real person.

That makes it even more important for people to not just implicitly trust the things they see, hear, read, or receive digitally, whether it’s the contents of an email, the content of an SMS message, a voice call, or even a Zoom interaction.

There has to be some secondary authentication-type process that establishes more explicit trust.

It can be a very simple thing, such as asking for some sort of passcode. There are also quite advanced mechanisms, such as decentralized identity and digital credentials, where you would prove who you are. That means you have to prove ownership of a private key and unlock that private key.

The only way the AI could impersonate you would be if it had access to that private key as well, which is obviously a lot more difficult. That’s the type of mechanism that people are going to have to start to expect and rely on.
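To make that concrete, here is a minimal Python sketch of the proof-of-possession pattern Harding describes: the server issues a random challenge, the holder signs it with a private key that never leaves the device, and the server verifies against the registered public key. The library (cryptography) and key type (P-256 ECDSA) are illustrative choices, not details from the interview.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the key pair is generated on the user's device; only the
# public key is registered with the server.
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()

# Authentication: the server sends a fresh random challenge ...
challenge = os.urandom(32)

# ... the device signs it (in a real passkey-style flow, unlocking the
# key would first require a local biometric check) ...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ... and the server verifies the signature. An impersonator without
# the private key cannot produce a valid signature over the challenge.
try:
    registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Signature valid: caller controls the registered key")
except InvalidSignature:
    print("Signature invalid: reject")
```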

Vigilance and Impersonation in an AI World

Q: Why is the influx of AI tools putting online users at risk?

A: I think AI tools are risks for organizations because fraudsters are going to use those tools to compromise accounts.

For example, AI could be used to compromise my bank account without me knowing about it, and AI could be used to try and get through the account recovery process without me knowing about it.

So, how is it impacting me as an end user? I’ve got to be a little bit more vigilant about being scammed myself. Am I going to be scammed in a business email compromise attack as an employee?

I think you have to be even more vigilant now on those phishing emails. People have to be more vigilant when they’re receiving phone calls or voice messages from what might be their CEO.

Say the CEO has theoretically left a voice message: ‘Hey, Patrick, can you please reset my password? I’m at the airport, and I need it reset now to this number.’ Would you do it?

There’s a level of vigilance that people have to get comfortable with for the next year or two as we catch up on the technologies to start to protect against this.

We will catch up, but we’re just not there yet because this stuff is so new.

Q: Why is the adoption of trusted AI crucial in reducing vulnerabilities?

A: This is the case of using AI to defend against AI. I think we’re going to see a lot of that emerging where AI can be used to help you understand what might be a phishing email and warn you to be careful.

Or it could be used to say, ‘This is a scam message, be careful.’

AI is going to be used to detect synthetic media, recognizing that a voice isn’t real but AI-generated, and doing the same for an AI-generated video or an AI-generated photo.

There are ways to detect those things, and we’re going to use AI to sort through all of the signals and events that occur, that allow us to differentiate a bot from a human.

So AI is going to be one of the baseline tools that we have at our disposal to help detect and prevent identity theft and identity fraud going forward as well.
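As a deliberately simplified illustration of the kind of signal aggregation Harding alludes to, the sketch below scores a session for bot likelihood. The signal names and weights are hypothetical; production systems learn these from labeled data rather than hand-tuning them.

```python
# Hypothetical risk signals and weights -- illustrative only.
SIGNAL_WEIGHTS = {
    "headless_browser": 0.5,      # automation framework detected
    "impossible_travel": 0.3,     # login geography shifted too fast
    "no_pointer_movement": 0.2,   # no mouse/touch telemetry at all
    "typing_too_uniform": 0.2,    # keystroke intervals machine-regular
}

def bot_score(signals: dict[str, bool]) -> float:
    """Sum the weights of the signals that fired, capped at 1.0."""
    score = sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

session = {"headless_browser": True, "no_pointer_movement": True}
if bot_score(session) >= 0.6:
    print("Likely bot: step up authentication or block")
```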

Importance of More Secure Authentication Methods

Q: Why is it important for individuals and businesses to use more secure, easy-to-use authentication methods? And what are those authentication methods?

A: You can’t necessarily just rely on passwords any longer, as they can very easily be compromised. Fraudsters can get hold of leaked email and password combinations and try them on different accounts.

So you need to be doing something better. What we’ve seen over the years is that [companies] are sending one-time passwords over SMS or email, as well as push notifications to your mobile app. Those are both better than passwords.
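For the mechanics of the OTP-over-SMS approach mentioned above, a minimal server-side sketch in Python might look like the following; the in-memory storage, six-digit code length, and five-minute expiry are illustrative assumptions, and delivery via the SMS gateway is out of scope.

```python
import secrets
import time

OTP_TTL_SECONDS = 300
_pending: dict[str, tuple[str, float]] = {}  # user_id -> (code, issued_at)

def issue_otp(user_id: str) -> str:
    code = f"{secrets.randbelow(1_000_000):06d}"  # 6 random digits
    _pending[user_id] = (code, time.time())
    return code  # hand off to the SMS gateway

def verify_otp(user_id: str, submitted: str) -> bool:
    entry = _pending.pop(user_id, None)  # single use: always consume
    if entry is None:
        return False
    code, issued_at = entry
    if time.time() - issued_at > OTP_TTL_SECONDS:
        return False  # expired
    return secrets.compare_digest(code, submitted)
```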

Unfortunately, what we’ve started seeing now is multifactor authentication (MFA) fatigue, where the fraudsters are just bombing users with MFA requests because they might have already compromised their passwords.

I think what we’re getting to now is something called passkeys, which is basically a step beyond what we’ve seen with SMS one-time passwords (OTPs) and push notifications.

Passkeys are essentially a mechanism to take advantage of a private key that’s on your mobile device or your laptop. And that private key is protected with a biometric like Face ID or Touch ID.

And while it’s a very easy user experience, it’s much more difficult for the fraudsters to compromise and bomb it.

This is an arms race. If we have this conversation in two years, we’ll be talking about the mechanisms the fraudsters have come up with to compromise these things as well.

It’s just that, at this point in time, passkeys offer the safest, most secure, best user experience for stronger authentication beyond passwords.

On Multifactor Authentication Bombing

Q: Can you talk more about multifactor authentication bombing? How does it work? What can businesses do to prevent MFA bombing?

A: The next step in the authentication process is where you’re asked to do MFA, and this additional factor is typically an SMS OTP or a push notification.

And the bombing is where the fraudsters will literally just keep requesting that authentication.

They’ll keep sending push notifications, or they’ll keep sending OTP SMS messages.

Users end up just getting fatigued with all of it and basically click on one of the requests and accept it inadvertently.

What can organizations do about it? There are techniques, for example, in the case of push notifications, where you can’t just click to accept the push notification and be done.

What we’ve implemented at Ping is a mechanism where in the push notification you receive, we send you a four-digit number. And you’ve got to type that four-digit number back into the web browser, as an example.

The fraudster doesn’t have access to that four-digit number, so they can’t complete that process. So that’s one thing companies can do.
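A minimal sketch of that number-matching pattern follows; the helper names are hypothetical, and this is an illustration of the general technique rather than Ping Identity’s actual code.

```python
import secrets

def start_push_challenge() -> str:
    # The number is shown only inside the push notification delivered
    # to the enrolled device.
    return f"{secrets.randbelow(10_000):04d}"

def complete_push_challenge(expected: str, typed_in_browser: str) -> bool:
    # A fraudster who triggered the push never sees the number on the
    # victim's device, so a blind "accept" tap no longer succeeds: the
    # user must type the number back into the session that started login.
    return secrets.compare_digest(expected, typed_in_browser)
```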

We’ve implemented a lot of things around fraud detection here where we actually look to make sure that the device that you’re using is associated with you and your account. We call this device trust.

Q: Can you explain exactly what device trust means?

A: Basically, we have ways of recognizing your device and associating it with you and your accounts. So if a fraudster is trying to log in as you, they’re likely doing it from a completely different device.

And hence, we would notice that and if it’s a different device, we’ll put you through a stronger authentication mechanism and not just automatically log you in.
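Here is a simplified sketch of such a device-trust policy check. The cookie-style device identifier is an assumption for illustration; real systems combine many device signals to recognize a returning machine.

```python
import secrets

known_devices: dict[str, set[str]] = {}  # user_id -> trusted device IDs

def register_device(user_id: str) -> str:
    # Issued after a successful strong authentication, e.g., stored as
    # a long-lived cookie or in the device keychain.
    device_id = secrets.token_urlsafe(32)
    known_devices.setdefault(user_id, set()).add(device_id)
    return device_id

def login_policy(user_id: str, presented_device_id: str | None) -> str:
    if presented_device_id in known_devices.get(user_id, set()):
        return "standard_authentication"
    # Unrecognized device: could be a new machine, or a fraudster.
    return "step_up_authentication"
```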

Risks of Outdated Security Practices

Q: How are digital identities put at risk by outdated security practices?

A: This is an age-old problem: When do we consider a security practice outdated?

Organizations need to be constantly evaluating their security practices to measure against best practices.

Best practice is constantly evolving as fraudsters and attackers come up with new mechanisms of attack, and as new platforms and new capabilities emerge.

It comes down to having a security program in place that can measure itself against best practices, and a roadmap for what the organization is delivering to get there.

Unfortunately, security comes at a cost, and you have to evaluate that cost, i.e., the amount you’re willing to spend on security, against the value of the resources you’re protecting.

So, let’s compare a bank and an online chicken retailer. Should they be spending the same amount of money on security, given the relative value of the resources that are available online? Probably not.

I think the bank is probably going to be spending more, doing more, much closer to the best practice than the online chicken retailer. Now, that’s not to say the online chicken retailer shouldn’t care about things like account takeover, but they would likely be spending to solve that problem in a slightly different way.

The best practice for a bank and the best practice for a chicken retailer are different things.

Q: What should organizations do to protect their customers and consumers?

A: They need to be implementing a number of things. They need to be implementing MFA technology – they can’t just rely on a password.

They should be implementing two-factor authentication and multifactor authentication. I mentioned OTP SMS and push notifications, but I think they should be adopting passkeys, which is going to be a much better way of authenticating users.

They should be adding fraud detection techniques and behavioral biometrics techniques, i.e., mechanisms that can detect a bot versus a real user. They need to be doing checks to determine that a human is actually creating an account and not a bot.

Companies need to ensure that the names and addresses match, that names and phone numbers match, or that the email address they’re using isn’t on a list of known compromised email addresses.
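Those onboarding checks could be sketched as below; the denylist and the phone-owner lookup are hypothetical stand-ins for whatever breach-data and carrier feeds an organization actually subscribes to.

```python
COMPROMISED_EMAILS = {"leaked@example.com"}  # hypothetical denylist feed

def onboarding_checks(name: str, email: str, phone_owner: str) -> list[str]:
    """Return a list of problems found; an empty list means proceed."""
    problems = []
    if email.lower() in COMPROMISED_EMAILS:
        problems.append("email appears in known breach data")
    if name.strip().lower() != phone_owner.strip().lower():
        problems.append("name does not match phone registration")
    return problems  # non-empty -> manual review or step-up checks
```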

There’s a lot that organizations can be doing. But quite often, it’s a trade-off between security and user experience.

We tend to find that most organizations like the idea of doing more to protect their customers, but they don’t want to create friction in the user experience around account onboarding, authentication, or things like that.

Best Practices to Reduce the Attack Surface

Q: What are the best practices companies should implement to ensure they are shrinking the potential attack surface?

A: I think the attack surfaces aren’t shrinking, they’re expanding. We’re talking about lots of cloud applications, lots of software-as-a-service applications. So, there are more ways for fraudsters and bad guys to attack people.

I suppose we start to look at this through the lens of a defense-in-depth [strategy] and least privilege, where you want to be sure users only have the access rights and entitlements that are enough to enable them to do their jobs effectively.

This limits the blast radius of a successful attack, which you want to control. But I don’t think that there’s a silver bullet answer here.

When we talk about it from an identity standpoint, we tend to focus on making sure that you’re constantly authenticating the user, starting with multifactor authentication. But then continuously authenticating the user as they’re accessing applications and resources.

That way, you avoid issues around session hijacking, for example. This is where I’ve logged into my account, and I’ve been given a session cookie or a token that I now use to access different applications.

So the fraudsters no longer need to go and compromise my authentication login process, they just have to steal my session cookie or my token. Then they can take that and go and impersonate me across all these different applications.

So, continuous authentication means applications constantly monitoring to determine that this is really me, Patrick, as opposed to a fraudster, a bad guy, or a bot.

This is something that people are going to have to do more and more of.
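One way to see how such a control works is a sketch that binds a session token to attributes of the device that earned it, so a stolen token replayed from elsewhere fails verification. The bound attributes (user agent, client IP) and the HMAC scheme here are illustrative assumptions.

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # server-side signing secret

def fingerprint(user_agent: str, client_ip: str) -> str:
    return hashlib.sha256(f"{user_agent}|{client_ip}".encode()).hexdigest()

def mint_token(user_id: str, user_agent: str, client_ip: str) -> str:
    # Token carries the user and a device fingerprint, sealed by an HMAC.
    payload = f"{user_id}:{fingerprint(user_agent, client_ip)}"
    tag = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{tag}"

def check_token(token: str, user_agent: str, client_ip: str) -> bool:
    payload, _, tag = token.rpartition(":")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # token forged or tampered with
    # Re-checked on every request: a hijacker's device/IP won't match.
    return payload.endswith(fingerprint(user_agent, client_ip))
```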


Linda Rosencrance
Tech Journalist

Linda Rosencrance is a freelance writer and editor based in the Boston area with expertise ranging from AI and machine learning to cybersecurity and DevOps. She has covered IT topics since 1999 as an investigative reporter for several newspapers in the greater Boston area. She also writes white papers, case studies, e-books, and blog posts for a variety of corporate clients, interviewing key stakeholders including CIOs, CISOs, and other C-suite executives.
