
ThreatTank – Episode 1 – 2024 Cybersecurity Predictions


An Introduction to Edgio’s New Podcast Series: ThreatTank

Tom Gorup: Welcome to ThreatTank, a podcast covering the latest threat intelligence, threat response, and insights about the threat landscape around the globe. I’m your host, Tom Gorup, Vice President of Security Services at Edgio.

And joining me today are Richard Yew, Senior Director of Product Management for Edgio Security Solutions, and Andrew Johnson, Senior Product Marketing Manager also for Edgio Security Solutions. Welcome Richard, and Andrew.

Richard Yew: Hey, thanks for having me here.

Andrew Johnson: Thanks Tom.

Tom Gorup: This is exciting. Our first ThreatTank podcast, and I've got two heavy hitters like yourselves, so I feel like I need to open with an icebreaker, a nice little question.

It's a question I used to ask interviewees when they seemed to get nervous, but I think it's a good intro. So I'm going to ask you both; just answer when you've got it. If you were a tree, what would be your favorite animal?

Richard Yew: You know, when you talk about trees, I start thinking about acorns, and then what that immediately reminds me of is this pig in Spain. It's called the Ibérico pig. It's black, and it produces the best bacon, or ham, whatever you call it, in the world. It's the most expensive, probably a few hundred bucks per ounce.

So I guess in this case it's the pig for me, because it makes use of whatever I drop.

Tom Gorup: That’s interesting. Yeah. No, that’s good. That’s good. First, I was like, where are you going with this?

Richard Yew: Take my acorns to go.

Tom Gorup: Yeah, yeah. The pig gets use out of them, not to mention selling for thousands of dollars a pound. That's like the Wagyu of pigs.

Andrew Johnson: That’s pretty good. Let’s see. So, I was thinking maybe from a security perspective of what I wouldn’t want as an animal, if I were a tree myself, I don’t know, maybe I wouldn’t want fungus. I wouldn’t want anything that…wouldn’t want a woodpecker or something that’s going to peck me up and make a hole in me.

Richard Yew: Hey, dude, I’m going to stop you here. Hey fungus is not animal, last I checked. What’s that? Oh, maybe it is. After we watch Last of Us, fungus becomes an animal.

Andrew Johnson: So that’s what I wouldn’t want. So, I’d have to say maybe some birds or something that just leave after they land on me.

Tom Gorup: Again, making use. That's good. One of the answers I got when asking that question was a shark, and when I asked why, they said, well, the shark would never bother me. Okay. That speaks a little bit to the personality there, but I like how both of you picked an animal that is of use to other animals.

That's pretty cool. It says a lot. So, we're not here today to talk about animals, or trees for that matter.

Will AI Bridge the Cybersecurity Skills Gap?

Tom Gorup: We're going to dig into three [cybersecurity] predictions for 2024. There's a post on Edgio's blog covering these three predictions: AI bridging the cybersecurity skills gap, security culture, and the rise of attacks. We're not going to take them in that order, but those are the three. So, jumping right into AI: the prediction, I'll reiterate, is that AI will be bridging the cybersecurity skills gap in 2024. We'll see a lot of that.

Now, thinking about that, we can get a bit provocative here and look at the world of AI. I think I saw a tweet the other day that said AI has everyone seeing stars. Everybody's excited about all these different use cases. You see that new phone, or pseudo-phone, the Rabbit R1 or something to that effect, all this crazy stuff coming out. But do we really think AI is at the point where it can effectively bridge the gap? Do we really see that happening in 2024, or do we need another five years of R&D behind AI before it makes a meaningful impact in bridging that gap?

Richard Yew: Well, you know, it certainly bridged the gap for the attacker.

Tom Gorup: Yeah, that’s a good point.

Richard Yew: Yeah, I mean, nowadays I just think about my day job. Hey, I want to run a script: just write me a script that runs a loop making a GET or POST request to a particular URL, a hundred times, repeatedly.

Obviously, you don't tell the AI to generate a DDoS attack, but it functionally achieves the same thing, right? You just skirt around the terms and conditions. I think it's going to make red teaming something any layman can do. Anybody can decide, you know what, I'm just going to put on my black hat, which I have with me, and start playing around with that.
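The kind of script Richard describes, a loop of GET or POST requests against a URL, is only a few lines of Python, which is exactly the point: aimed at a server without permission, it is functionally a load generator. This is an illustrative sketch; the URL and count are placeholders, and it should only ever be run against systems you own or are authorized to test.

```python
import urllib.request

def hammer(url, count=100, method="GET", data=None):
    """Issue `count` requests against `url` and return the HTTP status codes."""
    statuses = []
    for _ in range(count):
        req = urllib.request.Request(url, data=data, method=method)
        with urllib.request.urlopen(req) as resp:  # blocks until the server answers
            statuses.append(resp.status)
    return statuses
```

Nothing here mentions attacks; it is indistinguishable from a benign smoke test, which is why an AI assistant will happily write it.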

Now, obviously there are also a lot of benefits for organizations, for the blue team, for defenders as well.

Andrew Johnson: I think, I mean, obviously it's going to help with the problem. Is the gap going to close in 2024? I don't know; I think, of course not. But AI is already starting to be implemented in security software.

It also has its challenges. Instead of worrying about false positives, you have to worry about false recommendations that AI-backed software is going to give teams. So I think level of experience is still going to be extremely important.

Tom Gorup: Yeah, it's interesting when I look at some stats, like ISC2's, which says there's around a four-million-person workforce gap. That's huge. Colleges aren't pumping out security professionals fast enough to fill that gap, not to mention the growth that's going to happen year over year. So, AI closing that gap in 2024? On one side, I hear you, Richard. The value I see, to your point, is: can I write scripts quickly?

The other day I asked GPT-4 to write me an HTML page, and it did a great job. Then I started asking it to tweak and move things, and next thing you know, I had an entire webpage built in about 30 minutes, and I literally wrote zero code to accomplish that. There's also being able to feed it various datasets.

And I think one of the biggest concerns people have with AI bridging the gap is privacy, accidentally putting data in. We saw, what was it, a Samsung engineer who put some schematics into an AI. But the challenge is, I haven't seen anybody extract that data yet. Not to say it hasn't happened, but there are two sides of that coin.

Richard Yew: I think it's very important, as you start using AI to enhance your workflow, obviously from both an attacker and a defender perspective, that you also protect your AI. But I want to get back to talking about closing the gaps.

I would say, can the gaps ever be closed? No, they can't, just like I'd say nobody can ever be 100% secure. I'd argue there's just no such thing. What AI does is add an additional layer to our defense-in-depth concept, another set of tools in the layers that, from the defender's perspective, really helps reduce the probability of a breach or an attack happening. And I believe a lot of security issues happen because, as we always say, humans are the weakest link. If you keep doing the same mundane stuff over and over again, you get complacent, accidents happen, et cetera. This is where AI can come in.

You've probably heard the quote from that very famous person who makes cars and rockets, and it's true in security as well: when you have to do a lot of repetitive work, even log analysis, people have lapses in attention, and that's understandable. We're all human, right? So this is where I think AI will be very useful in closing the gap. But that's provided the AI is doing the jobs you want it to, just like a robot in a factory shouldn't suddenly be grabbing workers, throwing them on top of the cars, and smashing them. Right?

Tom Gorup: Not yet. That's interesting. So when I think about the analyst workflow: an alert comes in, and there are a lot of questions surrounding that particular alert. Is this normal? Is it common? What should I be looking for with this sort of attack on this particular type of system?

Hey, it's a Windows machine, or it's an Apache server; are there certain things I should be looking for to determine whether this attack was successful? I think the opportunity we have, potentially, is to also lower the barrier to entry. So when we talk about closing the gap, it doesn't necessarily mean AI itself is filling the gap; maybe AI is letting us broaden our reach in whom we hire for those roles. The future I get excited about is one where we hire less for technical expertise on a particular product or technology and more for curiosity and communication skills: someone who can ask the right question in the right way to get the answers they're looking for, not only from the data but from the AI, for that matter.

Andrew Johnson: I think the last part you mentioned is really good in terms of thinking differently just to get close to filling this gap in security professionals: hiring for creativity, maybe taking folks with other technical skills, like from the help desk, or business intelligence analysts, people who are very comfortable working with data.

But also some of the stuff you said earlier, Tom: if there's an incident, the AI can maybe recommend at least a few areas to look in. I think of my life, and a lot of my work, in terms of the 80/20 rule. If I don't have to track down all these considerations, or if the easiest considerations are laid out for me, it saves a lot of time. So I assume we're going to see that in a lot of solutions and systems going forward.

Richard Yew: Yeah, I want to double-click on that four-million-job gap in security, because it's a very important point. If you just look at average income and extrapolate, it comes out to about 200 billion dollars, which, funny enough, according to a report by one of these security companies, is in fact, specifically, 213 billion: exactly the size of the cybersecurity market. So it's understandable that organizations, security providers, and service providers are trying to fill these gaps.

And imagine how cheaply you can run AI nowadays to achieve certain functionality; there are potentially a lot of savings there.

Tom Gorup: Yeah, a hundred percent. I also see an opportunity, and you mentioned this, Andrew, and I think you did as well, Richard, in being able to actually apply your controls within the AI itself.

So if you have a standard internal process for a brute-force attack or a ransomware attack, you can train an AI on how everybody should answer those particular questions, and you get consistency of response across your entire organization. One of the biggest challenges I know CISOs have is that they write all these guidelines, all this documentation, and nobody reads it.

You go through compliance at the end of the year and everybody's forced to read it, but does anybody actually sit down and read it? Now imagine you had an AI you could ask those questions. You still have the documentation, but the AI is trained on that documentation.

And when the circumstances arise, a person, whether it's a security analyst or the CFO, can ask the AI: hey, what's the next step in this incident? What do we do next? The AI can be your Sherpa in that way.
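A toy sketch of that idea: index the incident-response documentation and let an assistant surface the relevant procedure on demand. The runbook text below is hypothetical, and a real implementation would use embeddings and an LLM over the actual documents; plain keyword overlap stands in for retrieval here so the sketch is self-contained.

```python
# Hypothetical runbook snippets; in a real deployment this would be the
# organization's actual incident-response documentation.
RUNBOOKS = {
    "brute force": "Lock the account, review auth logs, enforce MFA, notify the IR lead.",
    "ransomware": "Isolate the host, preserve evidence, engage IR, involve legal before any payment talk.",
    "phishing": "Pull the message from mailboxes, reset exposed credentials, warn staff.",
}

def next_step(question):
    """Return the runbook whose topic keywords best overlap the question."""
    words = set(question.lower().split())
    best = max(RUNBOOKS, key=lambda topic: len(words & set(topic.split())))
    if not words & set(best.split()):
        return "No matching runbook; escalate to the IR lead."
    return RUNBOOKS[best]
```

The point is the workflow, not the matching: the analyst or the CFO asks in plain language and gets the approved next step, instead of hunting through a policy binder.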

Richard Yew: It was funny. Yeah, I'm sorry, this is my last joke before we move on.

But, you know, I guarantee you nobody would hack the HTML and remove the component that disables the Next button, so they can just click next, next, next through their compliance training.

Tom Gorup: Exactly. Well, compliance training, that's a whole other topic. We'll talk about culture in a minute and how that's changed. So the one last point I missed, Richard, you talked about earlier: attackers leveraging GPT, leveraging AI. You have WolfGPT, WormGPT, FraudGPT, all these kinds of LLMs focused on “hey, how do I make it easier to be a bad guy?”

The way I'm looking at it is: if you're not using it as a defender, how do you even stand a chance against an attacker who is leveraging the capability? Richard, I feel like you'll have a good war analogy. It's like an F-15 fighter jet versus, I don't know, a Warthog or something. Those two can't have a dogfight; it's not going to end well for the Warthog, you know what I mean?

So, any other thoughts on that? I think we could spend all day on AI, really, but ultimately I do see it this way: if defenders aren't using it today, if businesses aren't trying to find ways to build efficiencies, bridge the skills gap, or even take on the mundane workloads, you're not going to be effective at defending against attackers who are leveraging this capability with no limitations and no fear of data compromise.

They're not concerned about any of this. I guess being a good guy sometimes has its disadvantages, right? You're already hampered in that way. We always say we have to be right all the time, but the attacker only needs to be right once.

Tom Gorup: It's true. And they don't have any limits. Right.

Andrew Johnson: And they're on the bleeding edge. I can't believe these WormGPTs and other things; these were out in 2021, I think, when you could start to buy these services. A lot of other people, well, I'll just speak for myself, didn't really use generative AI until this last year, when it kind of blew up. But yeah, the adversaries are early adopters.

Tom Gorup: And it also opens up opportunities as we see the economy shift. People are looking for new ways to make money. How many spam calls, how many scam texts are you seeing?

And they're getting a little better. GPT probably plays a part in that; it's safe to think that anything created for good can be used for evil as well.

Will We Continue to See a Rise in Ransomware, DDoS and Zero-Day Attacks?

Tom Gorup: So, the next prediction is a rise in attacks. That's a broad brush, so we'll focus on, let's say, three: ransomware attacks, distributed denial-of-service attacks, and zero-day attacks.

Looking at 2023, especially the last quarter, healthcare just got slammed with ransomware attacks over and over again, and we saw a number of zero-days. But the real question is: will it continue to grow? What does that growth look like? Or is it getting pumped up by the media and not actually as big a problem as it looks? That's the provocative question.

Richard Yew: I may have a controversial opinion on this. Maybe I'm being pedantic, but when we look at those attacks, I feel there's what the media reports and there's what we actually see. For example, we talk about the rise of DDoS attacks, right?

There's nuance behind that. DDoS attacks are always happening, but what kind? Go back to 2016 and the great internet outage, because a DNS provider got hit. That was the Mirai botnet, primarily Layer 3 and Layer 4, millions of packets per second; the bandwidth was huge, right?

High-bandwidth, high-volume attacks. But starting, I think, late last year, and I can't believe we're in 2024 already, I mean late 2022, we're talking about the rise of Anonymous Sudan, Killnet, the rise of application attacks.

We're talking about application-layer, Layer 7 HTTP floods, going from a record 20-plus million requests per second to, what are we looking at now, 300 million requests per second reported by Google? And that's with various exploits. So I also think these attacks are cyclical.

They never go away. At one point ransomware was really high because Bitcoin pricing was spiking. Now it's more because of the geopolitical instability of the past couple of years, and we see a rise of state-sponsored DDoS attacks and hacktivist DDoS attacks. So I think every year we're looking at which attacks are in fashion; next year it could be something else, and the year after, we reset and something from 2022 is in fashion again.

Andrew Johnson: Yeah, I don't know, especially with the attacks on healthcare. I think it's part hype, but unfortunately it's also just sadder and worse today than it has been. Where hackers in the past, at least it's been reported, had some ethics, those seem to be out the window now, with attackers hitting plastic surgery clinics and threatening to release photos of patients unless ransoms are paid, or even swatting patients.

I think recently there was a compromise of a healthcare system, a hospital network basically, and they actually had to change their patient routing, which is pretty messed up, really.

Richard Yew: It just shows you the attackers, right?

They don't have to follow rules, and they're constantly pushing the boundary. Usually the attackers follow the money, attacking financial institutions. But they've expanded the boundary over the years, going into education, attacking schools, right?

There are reports of universities shutting down a couple of years ago, we can provide a citation for that, schools shutting down because the entire system literally got locked up. And now we observe the boundary being pushed further, toward late 2022 and early 2023.

Yeah, I'm thinking a year ahead already. We're talking about hospitals getting attacked. It started in war zones, but it set a precedent, and now hospitals are routinely getting hacked. It's sometimes life and death. Tom, you mentioned in your blog situations where an ambulance has to be redirected to another, farther emergency room because of these attacks.

Andrew Johnson: Yeah, I mean, I just don't think you can expect anything better, especially when there are people being forced into doing these attacks. It's not just some teenager in some random country; it's probably state-sponsored groups that really need that money to continue their operations.

Richard Yew: So then the question is, when it comes to emerging attacks, what other boundaries are going to get pushed this year?

Tom Gorup: That's a good question, because I agree. Hitting Q4 and seeing ambulances and emergency services being diverted to farther hospitals, that is putting lives at risk. There is no limit to which direction they can go next. I worked a number of cases with the FBI some years back.

One in particular always stood out to me: a sextortion case where this guy was taking advantage of a girl for four years. She was 18 when she finally said she was done, so it had gone on since she was 14. It just shows that a lot of people are broken in that way, and they will show no limit.

I saw recently there was another virtual-ransom scheme, I don't know if you've seen this yet, where they text an individual, typically a teenager, and convince them, probably through some nefarious means, to go hide somewhere, like in a forest, and then text their parents that they've been kidnapped and that they need to send money somewhere.

So it's just wild, the directions some people will go. That is an interesting question: where do the crosshairs of these attackers move next? Often, though, it's follow the money. So where's the money?

Andrew Johnson: Absolutely. Yeah. That’s crazy.

Richard Yew: And there's the other attack you mentioned: the rise of zero-days. Yes, that's actually notable. If we look at the statistics, I always track year-over-year CVE growth. We don't have predictions for 2024 yet, but 2023, with more than 23,000 CVEs, was more than 10% growth, and the year before that was more than 25% growth. It speaks to the problem of CVEs growing exponentially. I wonder if any of the security organizations in the audience actually have double-digit growth in security headcount and budget. I'm sure everybody gets 10% more budget and headcount every quarter, right? But that's what we've been observing.

And it's interesting, the arms race. It ties back to the first topic, AI. That matters partly because of the rise of zero-days, and not just CVEs but critical zero-days. Think Log4j, Spring4Shell, Confluence. We used to see a couple of major ones, severity score nine and above, maybe once or twice a year. Now it's multiple times per quarter.

Tom Gorup: Well, yeah. And a big challenge a lot of businesses run into is the response to those zero-days. Zero-days have always existed. Look at some of the historical ones, like the Bash bug: I think it was there for 20 years before it was discovered. So sometimes I look at the rise of CVEs and think, okay, we're doing a better job of reporting vulnerabilities. I'm sure there are as many, if not more, vulnerabilities that just don't have a label yet; they've yet to be discovered. Obviously, that's the zero-day conversation. But while the rise of CVEs over time shows we're reporting better, we're still not doing a good job on the emergency patch-management process.

Can you really roll out a patch that fast, or do you need some enablement on top of that? I like virtual patching. I always look at it as a low-hanging-fruit opportunity to plug the hole while we go fix the root problem, rather than rushing to patch the whole thing, which can be expensive.

It's risky too, right? I mean, Log4j: how many patches were rolled out before it actually worked? I think it was three. I think the third one finally closed the hole, about two weeks in, in the middle of December.

Tom Gorup: Yeah, that’s miserable.

Andrew Johnson: A lot of disruption.
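The virtual patching Tom mentions usually takes the form of a WAF rule in front of the vulnerable application. As a minimal sketch of the idea, assuming a Python WSGI stack, this hypothetical middleware blocks requests carrying a Log4Shell-style JNDI lookup string while the real fix is rolled out; production WAF rules handle far more obfuscation than this single pattern.

```python
import re
from urllib.parse import unquote

# Signature for a Log4Shell-style JNDI lookup; real rules cover many
# obfuscated variants, so this single pattern is only for illustration.
JNDI = re.compile(r"\$\{jndi:", re.IGNORECASE)

def looks_malicious(environ):
    """Check the query string and request headers for the blocked pattern."""
    candidates = [unquote(environ.get("QUERY_STRING", ""))]
    candidates += [v for k, v in environ.items() if k.startswith("HTTP_")]
    return any(JNDI.search(c) for c in candidates)

def virtual_patch(app):
    """Wrap a WSGI app, returning 403 for requests matching the signature."""
    def guarded(environ, start_response):
        if looks_malicious(environ):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked by virtual patch"]
        return app(environ, start_response)
    return guarded
```

The appeal is exactly what Tom describes: the rule ships in minutes at the edge, buying time for the slower, riskier patch of the application itself.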

Tom Gorup: But on the rise of zero-days, you mentioned AI, and I think it will play a big part in that.

I don't think we've seen it yet. We're seeing AI used defensively: before you check in code, tools like Copilot can help make sure there are no vulnerabilities in that check-in, and SAST- and DAST-type tools are starting to use AI.

I would expect to see attackers leveraging AI to do vulnerability discovery, especially in open-source projects. I mean, that's easy.

Richard Yew: But ultimately, look at the workflow from a defender's perspective. When you're doing your black-box or white-box testing, nowadays with DAST, dynamic application security testing, right?

It's really scanning for vulnerabilities and telling you to patch them. And obviously, if attackers get access to your repo, they can do the same. They don't even need access to your repo: with dynamic testing, they just hit your running software and find the vulnerability.

Well, guess what? What we see as a scan from a blue-team perspective is recon from an attacker's perspective. If you look at MITRE's ATT&CK framework, this is essentially the recon that allows them to launch the attack. So always think about all the tools and processes you use.

Think about how an attacker could use them against you. It's very important to recognize that. We always say it's good to put on a black hat and think like an attacker. I think that helps a lot in implementing secure workflows, especially in today's CI/CD and DevSecOps pipelines.

Changing Security Culture

Tom Gorup: So this has been great, and we are over time. Our last topic is about culture. I'd love for each of you to take a few seconds, a minute, whatever you'd like, to give your perspective on what needs to change. So less of a prediction, more a look at businesses today and the problems they're facing: everything we talked about, from resource constraints to attackers getting more effective to zero-day vulnerabilities being discovered. What do businesses need to look at? How do CISOs need to change their mindset going into 2024 and beyond? What needs to change for us to stop running these fire drills every week?

Andrew Johnson: Well, one thing I read about recently is a Gartner stat that said about 25% of cybersecurity leaders are going to pursue completely different roles within two years, and 50% are going to change jobs, mostly attributable to stress.

We know the CISO role is a very difficult job, but a lot of security jobs are as well. So I think there needs to be more planning in terms of rotating people, maybe bringing developers who don't have a security background into security, so it becomes more of a shared responsibility.

A lot of organizations and security people are managing tons and tons of solutions, right? So when there's an emergency, you might be going to the same person every time and burning them out. More process, and a culture where everybody is part of security, I think, might help.

Richard Yew: I love that. If we're looking at it from a security leader's perspective, you have to manage up, you have to manage laterally with your peers, and you have to manage down, right? And I think it's very important to set up the culture.

Looking at it laterally and upward, expectation-setting is very important. We always hear the joke that whenever a breach happens, the CISO gets fired; that's why the CISO role has such turnover in the industry. I'm sure it's not happening like that anymore, right?

But it really speaks to expectations. As a product manager, setting stakeholder expectations is one of the most important parts of my job. You have to set the expectation with the board of directors and with your peers that, and I know this is a surprise, there's no such thing as 100% security. I once heard this from Dr. Eric Cole on a podcast. He asked, how do you make your phone 100% secure? You toss it into a fire pit, right? Because 100% security means 0% functionality.

As a security person, you cannot be dogmatic. Your first job is to drive the business. Security is like having strong brakes on a supercar: it lets the business brake hard. And why do you need to brake hard? Because you want to go fast. That's the whole point. So start thinking about how security impacts the business.

Because if you want 100% security, a 100% guarantee that you're going to block every attack, well, just blacklist 0.0.0.0/0 and call it a day. 100% of your KPI achieved. But that's not the point, right?

You have to be accelerating the business, and you have to build that into the culture. And looking at it from the top down, security starts at the beginning. Another analogy I've used before, if you've heard me on some other podcast, is that security is like making a cake.

It's not the icing on the cake; it's not an afterthought. It has to be there from the first step of planning the software, especially if you're a SaaS shop, if you're web-based and do most of your business online. You have to think about security like the flour in the cake.

It's there from day one when you make the cake, and if you're doing a good job, you shouldn't notice it. Who notices the flour when they eat a cake, right? That's how security should be. It should not be a dogmatic, top-down, compliance-driven initiative; it has to be ingrained, perhaps through tighter collaboration, with security teams embedded in development teams, making sure security is done right from day one. Whatever you prevent is something your operations team doesn't have to spend time on later.

Tom Gorup: Yeah, there are efficiencies in that. And I love that quote: 100% secure is 0% functionality. That's a good line, and I completely agree. Culture is important. How do you build this into the foundation of your organization, so it becomes part of the conversation and not an “oh, we've got to get the security team involved”?

We should be having that conversation from day zero, but you need to make it interesting. Look at compliance training: it's boring, it's not relevant, it's not timely. We need to change that and make security interesting. I remember, what, 10 years ago, trying to talk to people about security, and their eyes would glaze over.

Then all of a sudden it started making headlines and everybody got super interested. And I think it can swing right back the other way if we start boring people again with monotonous trainings. Let's make it timely, let's make it relevant, and let's build it into the core, the foundation, of our business, where security is not an afterthought. It's part of the cake. But I'm no baker either.

Richard Yew: So everyone needs to start putting on their black hat. When you browse a website, see what could go wrong. Put on your black hat: hey, is there a form here? What if I put something unexpected in that form? Where's the API endpoint?

What happens if I spam it? Start thinking with a hacker mindset, wear your black hat, and make that part of the culture in your company.

Tom Gorup: Yeah, I love it, I love it. So we are way over, but I'd say this was a great first episode of ThreatTank. I appreciate you both being on it.

Stay ahead of cyber threats with the latest insights from our security experts.

Subscribe now to receive:

  • New ThreatTank episodes as they launch – the debut episode is out now!
  • Top trending attacks by industry
  • Actionable insights & response strategies
  • And more!