Frequently Asked Questions

AI Threats & Uses in Cybersecurity

How are AI technologies like ChatGPT and GenAI impacting cybersecurity?

AI technologies such as ChatGPT and GenAI are significantly impacting cybersecurity by increasing the potency and frequency of cyberattacks. Cybercriminals use these tools to automate attacks, write malicious code, create deepfakes, and scale malware and spyware generation. Organizations must adapt by leveraging AI for proactive threat detection and response. (Source)

What are some examples of AI-driven cyberattacks?

Examples of AI-driven cyberattacks include brute force password cracking, building spyware and malware at scale, deepfake video scams, credential cracking, and large-scale phishing campaigns. AI enables attackers to automate and accelerate these attacks, making them harder to detect and defend against. For instance, deepfake videos have been used to scam organizations out of millions of dollars. (Source)

How does AI empower both traditional and new cyberattacks?

AI multiplies the impact of traditional attacks such as brute force, malware, spyware, impersonation, and login spoofing by making them faster, more powerful, and harder to detect. Additionally, AI introduces new attack types like deepfakes and large-scale automated phishing, increasing the threat landscape for organizations. (Source)

What are the risks associated with the increasing use of AI in cybersecurity?

Risks include attackers exploiting AI systems to automate attacks, generate malware, and create deepfakes for impersonation and fraud. If attackers gain access to AI-powered cybersecurity tools, they can exploit vulnerabilities or launch sophisticated attacks. Organizations must implement strong protocols and oversight to mitigate these risks. (Source)

Can AI replace humans in cybersecurity risk prevention?

No, AI cannot fully replace humans in cybersecurity risk prevention. Most AI-powered cybersecurity systems still require human oversight and intervention. AI excels at automating tasks and detecting patterns, but lacks the situational awareness and judgment of human experts. Transparency and human input are essential for effective risk management. (Source)

IONIX Platform Features & Capabilities

What is IONIX and what does it do?

IONIX is an External Exposure Management platform designed to identify exposed assets and validate exploitable vulnerabilities from an attacker's perspective. It enables security teams to prioritize critical remediation activities by cutting through the flood of alerts. Key features include complete attack surface visibility, identification of potential exposed assets, validation of exposed assets at risk, and prioritization of issues by severity and context. (Source)

What are the key capabilities and benefits of IONIX?

IONIX offers complete external web footprint discovery, proactive security management, real attack surface visibility, continuous discovery and inventory, and streamlined remediation. These capabilities help organizations improve risk management, reduce mean time to resolution (MTTR), and optimize security operations. (Source)

What integrations does IONIX support?

IONIX integrates with tools such as Jira, ServiceNow, Slack, Splunk, Microsoft Sentinel, Palo Alto Cortex/Demisto, AWS Control Tower, AWS PrivateLink, and pre-trained Amazon SageMaker Models. These integrations enable seamless workflows and enhanced threat management. (Source)

Does IONIX offer an API for integrations?

Yes, IONIX provides an API that supports integrations with major platforms like Jira, ServiceNow, Splunk, Cortex XSOAR, and more. (Source)

Security & Compliance

What security and compliance certifications does IONIX have?

IONIX is SOC2 compliant and supports companies with their NIS-2 and DORA compliance, ensuring robust security measures and regulatory alignment. (Source)

How does IONIX address product security and compliance?

IONIX ensures robust security by maintaining SOC2 compliance and supporting organizations in meeting NIS-2 and DORA regulatory requirements. This helps customers align with industry standards and maintain a strong security posture. (Source)

Implementation, Onboarding & Support

How long does it take to implement IONIX and how easy is it to get started?

Getting started with IONIX is simple and efficient. The initial deployment takes about a week and requires only one person to implement and scan the entire network. Customers have access to onboarding resources like guides, tutorials, webinars, and a dedicated Technical Support Team. (Source)

What training and technical support does IONIX provide?

IONIX offers streamlined onboarding resources such as guides, tutorials, webinars, and a dedicated Technical Support Team to assist customers during the implementation process. (Source)

What customer service and support is available after purchasing IONIX?

IONIX provides technical support and maintenance services during the subscription term, including troubleshooting, upgrades, and maintenance. Customers are assigned a dedicated account manager and benefit from regular review meetings to ensure smooth operation. (Source)

Use Cases, Pain Points & Customer Success

What core problems does IONIX solve?

IONIX solves problems such as identifying the complete external web footprint (including shadow IT and unauthorized projects), enabling proactive security management, providing real attack surface visibility, and ensuring continuous discovery and inventory of internet-facing assets and dependencies. (Source)

Who can benefit from using IONIX?

IONIX is tailored for roles such as Information Security and Cybersecurity VPs, C-level executives, IT managers, and security managers. It serves organizations across industries, including Fortune 500 companies, insurance, financial services, energy, critical infrastructure, IT, technology, and healthcare. (Source)

What customer success stories demonstrate IONIX's impact?

IONIX has helped E.ON continuously discover and inventory internet-facing assets, Warner Music Group boost operational efficiency and align security operations with business goals, and Grand Canyon Education enhance security by proactively discovering and remediating vulnerabilities. (E.ON, Warner Music Group, Grand Canyon Education)

What business impact can customers expect from using IONIX?

Customers can expect improved risk management, operational efficiency, cost savings, and enhanced security posture. IONIX enables visualization and prioritization of attack surface threats, actionable insights, reduced mean time to resolution (MTTR), and optimized resource allocation. (Source)

Product Performance & Recognition

How is IONIX rated for product innovation and usability?

IONIX earned top ratings for product innovation, security, functionality, and usability. It was named a leader in the Innovation and Product categories of the ASM Leadership Compass for completeness of product vision and a customer-oriented, cutting-edge approach to ASM. (Source)

What feedback have customers given about IONIX's ease of use?

Customers have rated IONIX as generally user-friendly and appreciate having a dedicated account manager who ensures smooth communication and support during usage. (Source)

Resources & Documentation

Where can I find technical documentation and resources for IONIX?

Technical documentation, guides, datasheets, and case studies are available on the IONIX resources page. (IONIX Resources)

Does IONIX have a blog and what topics does it cover?

Yes, IONIX has a blog that covers topics such as cybersecurity, risk management, vulnerability management, exposure management, and the impact of AI on cyber threats. Key authors include Amit Sheps and Fara Hain. (IONIX Blog)

Industry Recognition & Company Information

What industry recognition has IONIX received?

IONIX was named a leader in the 2025 KuppingerCole Attack Surface Management Leadership Compass and won the Winter 2023 Digital Innovator Award from Intellyx. The company has secured Series A funding to accelerate growth and expand platform capabilities. (Source)

Who are some of IONIX's customers?

IONIX's customers include Infosys, Warner Music Group, The Telegraph, E.ON, Grand Canyon Education, and a Fortune 500 Insurance Company. (IONIX Customers)


The Rise of ChatGPT & GenAI and What it Means for Cybersecurity

Amit Sheps, Director of Product Marketing
March 11, 2024

The rise of ChatGPT and Generative AI has taken the world by storm. It has left no stone unturned, and it has strong implications for cybersecurity and SecOps. The big reason is that cybercriminals now use GenAI to increase the potency and frequency of their attacks on organizations. To cope, security teams need to adapt and are looking for ways to leverage AI to counter these attacks in kind. Given the frequency of attacks, security teams need to continuously manage their organization's externally visible attack surface and look for opportunities for automation wherever possible. That's what we discuss in this article: examples of how ChatGPT and GenAI are being used in cyberwarfare, and how security teams can keep pace with the changing times.

The rise of GenAI & ChatGPT in cybercrime

There are many examples of organizations looking for ways to leverage ChatGPT to write code for basic applications, reduce the amount of boilerplate code developers need to write, or build an MVP with little to no effort. This is the good side of GenAI.

However, cybercriminals are usually the first to take notice of such advances and are quick to find ways to put GenAI to use in their operations as well. Here are some examples of the bad side:

Building a zero-day virus using ChatGPT prompts

With a bit of creativity, it’s very easy to bypass ChatGPT’s default guardrails and make it a handy helper to create a zero-day virus.

ChatGPT + developer mode = danger

Japanese analysts used ChatGPT in developer mode to create a virus in minutes, highlighting its potential for misuse.

ChatGPT used to write malicious code

As a glimmer of hope, some researchers have found that ChatGPT's coding skills are still rudimentary, and that it takes a seasoned cybercriminal to really put it to good use. Not all that reassuring, perhaps, but the key point is that ChatGPT lowers the bar to entry for cybercriminals and can be badly misused in the wrong hands.

These examples are enough for us to sit up and take notice of what's at stake with ChatGPT and GenAI, as incidents of AI and machine learning risks in cybersecurity are on the rise.

Incidents of AI risks in cybersecurity

There have been innumerable reports on ChatGPT and GenAI being used in cybercrime. What kinds of attacks are commonly seen?

  1. Brute force attacks

AI is being used to crack passwords using multiple methods including brute force. The shorter the password, the easier it is to crack. A 6-digit password would take seconds for AI to crack. Along with brute force attacks, other password-guessing techniques like acoustic side-channel attacks and dictionary attacks can be used to wreak even more havoc. 
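Back-of-the-envelope keyspace math shows why short passwords fall so quickly. The sketch below is illustrative only; the guess rate is a hypothetical figure for dedicated cracking hardware, not a measured benchmark, and real rates vary widely by hash algorithm.

```python
import string

# Hypothetical guess rate for dedicated cracking hardware (an assumption
# for illustration, not a benchmark).
GUESSES_PER_SECOND = 10**10

def seconds_to_exhaust(charset_size: int, length: int) -> float:
    """Worst-case time to try every password of the given length."""
    return charset_size ** length / GUESSES_PER_SECOND

digits = len(string.digits)                        # 10 possible characters
alnum = len(string.ascii_letters + string.digits)  # 62 possible characters

print(f"6-digit PIN:         {seconds_to_exhaust(digits, 6):.4f} s")
print(f"8-char alphanumeric: {seconds_to_exhaust(alnum, 8) / 3600:.1f} h")
```

At that assumed rate, every 6-digit PIN falls in a fraction of a second, while a longer, mixed-character password pushes the worst case into hours or years, which is why password length and complexity matter far more than incremental improvements in cracking speed.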

  2. Building spyware and malware at scale

The FBI warns that AI is already being used to carry out malware attacks. They sound the alarm that AI is being used in multiple ways to write code, create malware, and create deepfakes at scale. On a similar note, Britain’s spy agency calls out AI’s use “for advanced malware generation.”

Meta, Facebook's parent company, reports finding six spyware networks tied to eight companies across the world. Another report found that Android spyware rose drastically in 2023. These are not isolated incidents, but a clear warning that ChatGPT and GenAI are a real threat when it comes to spyware and malware.

  3. Deepfake & impersonation

A deepfake is a fake or digitally altered video of a person that's used to scam viewers into taking some action, like sharing their passwords or IDs or transferring money to an account. In one huge case of such fraud, a deepfake video of a CFO was used to extract $25M from an organization. Similarly, bots were used to impersonate profiles on X to promote fake cryptocurrencies and even gain access to users' crypto wallets. Deepfake videos have been used for election manipulation in India and are a clear threat to the upcoming 2024 US elections as well.

  4. Credential cracking

As mentioned earlier, AI can crack weak passwords in seconds. But there's more to AI's prowess with login cracking than brute force. AI is being used to create scam websites at scale: along with ChatGPT for text generation, DALL-E, also from OpenAI, is equally good at creating the images and artwork needed to build these sites with ease. WhatsApp, the widely used mobile chat app, hasn't been spared either, with reports of fake WhatsApp login websites surfacing. And then there's the ol' email phishing menace that's not to be forgotten; it's one of the top cyber risk trends we've highlighted in the past.

How AI empowers cybercriminals

The real-world examples above show how ChatGPT and GenAI are used by cybercriminals. But here’s what we can observe from all these reports:

Attackers are using AI to be smarter and faster

Cybercriminals have limited resources and are always on the lookout for new technologies and advancements that can give them an edge. ChatGPT and GenAI are perfect tools to help them get more done in less time. Whether it's writing code, creating websites, producing deepfake videos, or generating malware, attackers are much more capable when equipped with GenAI.

Even novices can leverage AI to automate attacks

ChatGPT and GenAI give anyone the opportunity to become a hacker, no matter their skill level. While this is empowering when used for good, it is dangerous when motivations are suspect. Because of this lower bar to entry, there's likely to be a sharp spike in the number of cybercriminals worldwide, and an equally large spike in the number of cyberattacks.

AI empowers traditional attacks & introduces new ones

Brute force attacks, malware, spyware, impersonation, and login spoofing are all age-old attempts to steal, blackmail, and extort money from organizations and people. However, AI multiplies the impact of these attacks, making them faster, more powerful, and harder to detect and fight. What's more, AI is introducing entirely new types of attacks, like deepfakes.

We’ve described the challenge in great detail so far. It’s now time to transition to talking about the solution – and yes, there is hope despite the bad news.

AI cybersecurity threats call for proactive, not reactive cybersecurity

Reactive, detect-and-respond solutions are challenged by the pace, scale, and sophistication of these AI threats. This makes proactive cybersecurity, and especially threat exposure management, more important than ever. Being proactive means being able to spot attacks before or as soon as they start, and being able to respond in real time to minimize any impact. Further, it's about improving your organization's security posture with AI-driven threats in mind.

Using AI to fight AI threats

As the old saying goes, “it takes one to know one” – to fight AI-backed hackers, you need to think and act like one. You need to view your organization’s digital supply chain from the outside-in, the way they see it. Here are four ways to do this:

  1. Discovering the real attack surface

AI can be used to gain a more detailed and accurate view of your organization's real attack surface: the entire area of your organization that is externally visible. By evaluating a variety of factors, AI can identify, with a high level of confidence, which assets legitimately belong to your organization. It can attribute relevance and spot risky connections to assets that humans can't, and do so much faster than traditional cybersecurity software.

  2. Detecting anomalies in real-time

AI is great at pattern recognition, including patterns that simple rules-based tools can't spot, because it makes connections between data points that go beyond simple rules. This can be used to detect suspicious network activity, for example.

Threat exposure management relies on millions of data points. AI can help scan, filter, sort, categorize, and prioritize cybersecurity data, including logs, metrics, and alerts. It also lets you layer many policy rules on top of one another for a robust threat detection strategy.
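As a toy illustration of detection that goes beyond a simple fixed threshold, the sketch below flags outliers in hourly login counts using the median absolute deviation (MAD), a robust statistic. The data and threshold are made up for the example; production anomaly detection would use far richer models and features.

```python
from statistics import median

def flag_anomalies(values, threshold=3.5):
    """Return indices of values that are outliers by the MAD test."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread at all, nothing to flag
    # 0.6745 scales MAD so the score is comparable to a z-score
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Hypothetical login attempts per hour; the spike is a brute-force burst.
logins_per_hour = [12, 15, 11, 14, 13, 12, 950, 14, 13, 12]
print(flag_anomalies(logins_per_hour))  # -> [6]
```

Using the median rather than the mean matters here: a single huge spike would inflate a mean-and-standard-deviation test enough to hide itself, while the MAD test still flags it.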

  3. Automating and scaling remediation

AI can be used to make informed decisions on how to respond to a threat. It can be used to better prioritize alerts, suppress false positives, highlight high-priority threats, and suggest a workflow for a threat response. Even more, AI can be used for attack surface management automation to respond to and remediate threats without human intervention. It’s still early days for auto-remediation, but there is immense potential.
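A minimal sketch of the triage logic described above: suppress low-confidence alerts, then rank the rest by severity weighted by context. The fields, weights, and thresholds here are illustrative assumptions, not any product's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    asset: str
    severity: float        # 0-10, a CVSS-like score
    confidence: float      # 0-1, how likely this is a true positive
    internet_facing: bool  # context: externally exposed assets rank higher

def triage(alerts, suppress_below=0.3):
    """Drop likely false positives, then sort by context-weighted severity."""
    kept = [a for a in alerts if a.confidence >= suppress_below]
    def priority(a):
        return a.severity * a.confidence * (1.5 if a.internet_facing else 1.0)
    return sorted(kept, key=priority, reverse=True)

alerts = [
    Alert("db-01", 9.8, 0.2, False),  # suppressed as a likely false positive
    Alert("vpn-gw", 8.1, 0.8, True),
    Alert("web-01", 7.5, 0.9, True),
]
print([a.asset for a in triage(alerts)])  # -> ['web-01', 'vpn-gw']
```

Note that the highest raw severity (9.8) is suppressed outright because the model has little confidence in it; this is the false-positive suppression step that keeps analysts focused on validated, exploitable exposure.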

  4. Increasing ROI on cybersecurity investments

AI can reduce the cost of cybersecurity by reducing both the human effort and the number of tools required. It can perform complex tasks repeatably, and unlike humans, who need rest, it can work around the clock. Importantly, AI also gets smarter over time. All these factors contribute to greater efficiency and lower costs in cybersecurity operations, resulting in better ROI on cybersecurity investments.

What are the challenges in adopting AI?

We’re in the days of AI’s infancy. There are teething pains to deal with along the way. Here are some of them:

AI is new to everyone

There are things that even OpenAI, the builder of ChatGPT, can't predict about the tool it has created and its capabilities. That being the case, it's going to take years to fully understand AI's potential, let alone use it to that potential. Organizations busy building products and getting them to market struggle to find the time to learn and adopt the new technology; it takes focus away from the core business, and many still see it as a thing of the future.

Unknown unknowns are a real thing with GenAI

As former Defense Secretary Donald Rumsfeld famously said, “there are also unknown unknowns – [things] we don’t know we don’t know.” This is doubly true of ChatGPT and GenAI. The fear, once confined to sci-fi books, of machines acting on their own and introducing risks they weren't designed for, is now possibly coming true. OpenAI is still grappling with how to ensure its creation doesn't flout laws or cause privacy issues.

Can AI replace a human in cybersecurity risk prevention?

The direct answer is ‘No’. Most cybersecurity systems that utilize AI are still “human in the loop” use cases. AI is powerful and has a lot of potential, but it has a long way to go until it has the situational awareness of a human being. Let’s look at the reasons why.

Risks of relying on AI 

Lack of transparency

Due to concerns around transparency, one study found that 41% of executives had banned the use of AI in their organizations. It's clear that AI needs direction and oversight, and some organizations are not yet willing to take on the risks that come with it. For example, machine learning algorithms that discover assets and assess their cyber-risk are extremely useful for speeding up discovery, but customers of cybersecurity products want to understand how assets are attributed and scored; they don't want to simply rely on “this is likely to be your risky asset because the model says it is.” Transparency in AI will be key to its adoption by cybersecurity teams.

Creating too many false-positives

AI models used in cybersecurity are sometimes so sensitive in finding and analyzing cyber data that they create false positives: cases where the model identifies something as risky, but the risk is either no longer present or not significant enough to be relevant. We need to get AI to the point where training models with human input does a better job of reducing false positives.

Get in touch with us to discuss how you can leverage the power of AI to improve your security posture, and how IONIX uses Machine Learning in our discovery and risk assessment processes.

WATCH A SHORT IONIX DEMO

See how easy it is to implement a CTEM program with IONIX. Find and fix exploits fast.