The Cybersecurity Threat Of LLMs—And How Businesses Can Respond

By News Room | September 20, 2023 | 8 Min Read

Gaidar Magdanurov is the Chief Success Officer at Acronis.

Generative AI and large language models (LLMs) such as ChatGPT and Bard have quickly found uses across nearly every sphere of human activity: generating ideas, assisting with research, creating and editing content, writing code and automating tasks, providing customer support, helping sales and marketing discover and qualify leads, and supporting education by explaining complicated concepts in simple terms, among many other applications.

However, the same technology that increases the productivity of workers also increases the productivity of malicious actors.

Understanding LLMs

By processing enormous amounts of content available on the internet, models like ChatGPT are trained to interpret text input and provide answers based on the knowledge they have accumulated.

The ability to take in new input and generate text tailored to the user's requirements makes generative AI an efficient assistant. LLMs are effective at creating unique content and at verifying and explaining information provided by users. For example, they can generate Excel formulas and write or explain code snippets.

Generative AI’s knowledge is limited to the information it ingests, and it is prone to making mistakes and generating incorrect information, so its output requires validation. It therefore makes sense to treat generative AI as an intern that needs a detailed briefing and a thorough review.
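
To make the “intern” analogy concrete, here is a minimal sketch in Python of asking an LLM to explain a spreadsheet formula. It assumes the OpenAI Python client and an OPENAI_API_KEY environment variable; the model name is illustrative, and, as noted above, the response should be reviewed by a person before it is trusted.

from openai import OpenAI  # assumes the OpenAI Python client is installed

# Minimal sketch: ask an LLM to explain an Excel formula. Treat the answer
# like an intern's draft and have a person review it before relying on it.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

formula = '=IFERROR(VLOOKUP(A2, Sheet2!A:B, 2, FALSE), "not found")'

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You explain spreadsheet formulas in plain language."},
        {"role": "user", "content": f"Explain what this Excel formula does: {formula}"},
    ],
)

print(response.choices[0].message.content)  # reviewed by a human before use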

Cybersecurity Risks

The weakest link in cybersecurity is the human. Verizon’s 2023 Data Breach Investigations Report shows that 74% of security breaches involved a human element: social engineering attacks, user errors or misuse of systems. People are tricked, phished and persuaded to disclose sensitive data, and AI is making the problem worse.

1. Spreading False Information

Let’s start with malicious actors using LLMs to spread false information. Many of us have received advance-fee scam emails from a “Nigerian prince.” Even though those emails were poorly written, they persuaded users to pay money to fraudsters in exchange for nonexistent future rewards.

LLMs allow fraudsters to create more convincing emails and to augment them with content on social media and dedicated websites that makes them even more believable. Users have learned to look for outside validation of the offers they receive by email; now, digital con artists can spin up multiple websites and flood social media with posts to make their claims look credible.

2. Advanced Phishing

Generative AI is an excellent tool for assisting in phishing attempts. It takes very little time to create multiple custom-made emails targeting specific people using information from public sources. LLMs allow attackers to build phishing emails at scale and make the content look legitimate.

Imagine receiving emails that appear to come from co-workers, friends or services you use, mentioning your life events and people you know. Chances are you will be inclined to click the links and perhaps even submit information through forms.

3. Malicious Code

Generative AI can create and explain code, and threat actors can use it to write malicious code: automating attacks, writing exploits and much more. The same AI tools that serve as a coding partner for software engineers and security researchers serve as a partner and teacher for threat actors.

4. Sensitive Information

Finally, sharing sensitive information with any public cloud service is risky. Employees using LLMs in their work may inadvertently share confidential information, which will be exposed if the accounts they use are compromised.

These threats put pressure on businesses such as managed service providers (MSPs), because AI-driven automation makes every customer a target.

The Importance Of Employee Education

The primary way for businesses to reduce the risk is to educate users about the potential threats. Recurring training on phishing and social engineering is required. Many users were trained to recognize phishing by its poor-quality content; that is no longer a reliable signal. A phishing email can look credible and slip past even advanced email filtering solutions.

There are many more items to check; a minimal sketch of automating the first check follows the list.

• Mismatched URLs: the displayed link text differs from the actual destination link.

• Generic greetings, like “Dear Customer” instead of specific names.

• Requests for sensitive information.

• Urgent or threatening language that pressures the recipient to act quickly.

• Unnecessary attachments or links.

• Unusual sender address or domain.

• Email not matching previous communications with the sender.
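
As an illustration of the first item, here is a minimal sketch in Python of the mismatched-URL check applied to an email's HTML body. It is not the author's tooling and not a substitute for a real email security product; it uses only the standard library and simply flags anchors whose visible text looks like one domain while the href points to another.

from html.parser import HTMLParser
from urllib.parse import urlparse

class MismatchedLinkFinder(HTMLParser):
    """Collect anchors whose visible text looks like a different domain than the href."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspicious = []  # (visible text, actual href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown_text = "".join(self._text).strip()
            shown_host = urlparse(shown_text if "://" in shown_text else "http://" + shown_text).netloc
            actual_host = urlparse(self._href).netloc
            # Crude heuristic: only compare when the visible text contains a dot,
            # i.e. it looks like a URL or domain rather than plain words.
            if shown_host and "." in shown_host and shown_host.lower() != actual_host.lower():
                self.suspicious.append((shown_text, self._href))
            self._href = None

html_body = '<p>Log in at <a href="http://evil.example.net/login">www.yourbank.com</a></p>'
finder = MismatchedLinkFinder()
finder.feed(html_body)
for shown, actual in finder.suspicious:
    print(f"Displayed '{shown}' but actually links to '{actual}'")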

A phishing email can appear to come from anybody: a co-worker, a partner, a vendor, a bank or a government body. Employees should always be on high alert.

As for information disclosure, businesses should adopt a policy on using generative AI and train employees to get value from the tools without the risk of disclosing sensitive information.
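
One practical way to support such a policy is to redact obvious sensitive patterns before a prompt leaves the company for an external LLM. The Python sketch below is only an illustration: the regular expressions and the redact_prompt helper are hypothetical examples, not a complete data-loss-prevention solution.

import re

# Minimal sketch: redact obvious sensitive patterns from a prompt before it is
# sent to an external LLM. The patterns are illustrative and deliberately simple;
# a real deployment would rely on proper data-loss-prevention tooling.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

raw = "Summarize this complaint from jane.doe@example.com about card 4111 1111 1111 1111."
print(redact_prompt(raw))
# Summarize this complaint from [REDACTED EMAIL] about card [REDACTED CARD_NUMBER].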

Implementing Technology

For those looking for technology solutions, I recommend that businesses employ advanced URL filtering to block malicious and suspicious websites. It is also crucial to detect the suspicious behavior of users who are falling victim to attackers: implementing endpoint detection and response (EDR) on workstations and receiving alerts on suspicious activity helps prevent attacks from developing and limits the damage. (Disclosure: My company provides these solutions, as do others.)
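
As a rough illustration of the URL-filtering idea (not a description of any particular vendor's product), the Python sketch below checks outbound links against a locally maintained domain blocklist. The listed domains are hypothetical; commercial filters rely on live threat-intelligence feeds and reputation scoring rather than a static set.

from urllib.parse import urlparse

# Rough sketch of URL filtering against a locally maintained domain blocklist.
BLOCKED_DOMAINS = {
    "evil.example.net",        # hypothetical known-bad domain
    "login-yourbank.example",  # hypothetical lookalike domain
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).netloc.split(":")[0].lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

for url in ("https://evil.example.net/login", "https://www.wikipedia.org/"):
    print(url, "-> BLOCKED" if is_blocked(url) else "-> allowed")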

Implementing email security and EDR brings cost and complexity: the solutions must be configured and maintained, and false positives generate additional overhead, sometimes removing legitimate emails.

To overcome those challenges, it is essential to run rigorous pilot tests and to base a cost/benefit analysis on their results. Continuous monitoring and adjustment of the solutions' configuration is required, as is recurring training for the IT staff who work with them.

AI is also an essential tool for stopping malicious actors who use AI. Modern security solutions rely on AI to detect suspicious behavior and to filter through millions of events, yet many aspects of day-to-day security work have still not been automated with AI.

In a rapidly changing threat landscape, company leaders have to keep looking for solutions that automate their operations and use AI, increasing the capacity of their technicians and preventing human error.

We live in a world where cyberattack capabilities are available to everybody. Even a user with limited computer skills can execute sophisticated attacks, and businesses must prepare before it is too late.

