Transforming Business Operations Amidst Rising Cybersecurity Challenges


Generative Artificial Intelligence (GAI) has rocketed from the backrooms of tech giants to the center stage of everyday business. What was once a futuristic promise is now driving real-world results in boardrooms, manufacturing floors, and startup offices alike.

GAI isn’t just automating tasks – it’s predicting demand, rewriting marketing strategies, accelerating product launches, and even analyzing customer sentiment in real time.

GAI, a type of artificial intelligence that can create new content such as text, images, or code, is emerging as a game-changer for business innovation. Examples include ChatGPT, Gemini, Claude, and DALL-E. GAI uses advanced models, such as large language models (LLMs), to generate novel outputs. This allows users to work with data in powerful new ways using natural language prompts, enabling a wide range of applications.

GAI capabilities have already improved substantially since ChatGPT, the first breakthrough example of this technology, was introduced to the public for free in November 2022. The popular chatbot from OpenAI can generate articles, essays, jokes, and poetry in response to prompts.

AI-powered tools are identifying fraud in seconds, slashing routine paperwork, and letting teams accomplish in hours what once took weeks. This leap isn’t just a technology upgrade. It’s an unprecedented shift that’s redefining what’s possible for organizations bold enough to harness it.

“AI itself is extremely complex,” explains Bill Frischling, Distinguished Scientist & VP at FiscalNote. “The stuff that goes into or the methodologies of taking all this data and being able to generate text or data analysis is really complex. Yet using the tool itself is really easy—download the app, and it’s as easy as talking to it like you would your coworker or friend.”

Adoption Surge Driven by Productivity Gains

The business landscape has witnessed an unprecedented surge in AI adoption, primarily fueled by measurable productivity enhancements. Companies using AI tools report significant time savings. Advertising agencies have been forced to change how they bill. No longer does it make sense for them to charge by the hour when AI can create images, write taglines, and compose videos in seconds.

“Adoption increased dramatically in 2023, from about 33% to 65% of organizations using generative AI,” said Lindsay Hayes, AI Solutions Strategist for Breach Secure Now, a cybersecurity training provider that has leaned into AI solutions. “What we’re experiencing is fundamentally a productivity revolution, and over the next 6 to 18 months, we’ll see substantially more adoption, understanding, and tangible benefits.”

Enterprises are prioritizing AI agents that solve critical business problems; top use cases include IT service desk automation (61%), data processing/analytics (40%), and code development/testing (36%), according to the “State of AI Agent Development Strategies in the Enterprise” survey. This productivity boost has become the primary value proposition for executives.

Balancing Innovation with Responsibility

For all the hype around AI, there is just as much concern about the technology’s potential negative side effects.

A recent INSEAD survey showed 42% of business leaders are “equally concerned and excited” about GAI.

As with any disruptive technology, AI has sparked both anxiety and excitement. Public discourse has often focused on potential negative outcomes, especially regarding economic impacts. However, the INSEAD survey revealed that only 28% of respondents were worried about AI causing job losses. Concerns about GAI outsmarting humans (30%) and reducing human connection (37%) were also relatively low.

There’s growing optimism about GAI’s potential to deliver meaningful business value and benefit society. But successful implementation isn’t just about the technology—it’s about preparing your infrastructure, policies, and people for this transformative tool.

“Many companies view AI tools like Microsoft Copilot as a natural extension of their existing software suite,” said Rich Miller, Founder and CEO of STACK Cybersecurity. “While this familiarity offers ease of adoption, it can also introduce risks if not managed properly. It’s important to understand that AI tools are not just another piece of software; they are powerful technologies that can significantly impact your business operations.”

Rich Miller is Founder and CEO of Livonia-based STACK Cybersecurity.

Headquartered in Livonia, Mich., STACK Cybersecurity provides outsourced IT help desk and cybersecurity services to businesses across the nation. As IT budgets shrink while cyber threats explode, more corporations are experiencing significant cost savings by outsourcing their IT management to what are known as IT Managed Service Providers (MSPs) or Managed Security Service Providers (MSSPs).

STACK Cybersecurity is both an MSP and an MSSP, which means it not only detects and exposes system vulnerabilities but also remediates the issues it finds. A traditional MSSP does not resolve security issues directly; it monitors, detects, and notifies, but remediation is typically up to the client.

The firm is SOC 2 Type 2 certified. Recently, STACK Cybersecurity attained Registered Provider Organization (RPO) status to help Department of Defense suppliers achieve Cybersecurity Maturity Model Certification (CMMC).

As AI becomes increasingly embedded in business processes, enterprises are developing comprehensive frameworks for responsible implementation. AI tools have access to vast amounts of data and can perform complex tasks, which means they need to be handled with care. Without proper management, there’s a risk of data leakage, compliance violations, and security vulnerabilities.

“With the proliferation of AI, the risks to business have just multiplied exponentially,” said Darrin Swan, Co-Founder and Vice President of Sales at Todyl. “It’s time to take action to ensure you safely benefit from this exciting advancement. Know who’s using AI in your organization, establish a policy for proper usage to protect sensitive data and intellectual property, control access to who really needs it, and lock down AI to protect your business from outside risk.”

Todyl provides a comprehensive cybersecurity platform with services such as SIEM (Security Information and Event Management), EDR (Endpoint Detection and Response), and MXDR (Managed Extended Detection and Response) to help companies monitor, detect, and respond to security threats effectively.

Real-World AI Security Breaches: Lessons for Business

The swift adoption and deployment of AI capabilities have made them attractive targets for malicious cyber actors.

Hackers targeting AI systems may employ attack vectors specific to AI, in addition to the standard techniques used against traditional IT systems. Given the wide range of potential attack vectors, it’s crucial for defenses to be both diverse and comprehensive.

In March 2023, OpenAI confirmed that a bug in an open-source library used by ChatGPT exposed some users’ conversation histories and payment-related information. The incident prompted a brief shutdown of the service and a public apology from the company.

Other companies have experienced significant security incidents due to GAI tool misuse.

In April 2023, Samsung experienced a serious data breach when employees at its semiconductor division accidentally leaked confidential corporate data through ChatGPT. In three separate incidents within just one month, engineers uploaded sensitive source code while seeking help with bugs, shared proprietary test sequences for identifying chip faults, and even converted internal meeting recordings into documents using a GAI tool. The company responded by limiting ChatGPT uploads to 1,024 bytes and warning employees that further violations could result in the tool being blocked on company networks; it eventually banned GAI tools entirely on company devices.

ChatGPT’s web-based interface uses input data to train and enhance the tool. This raises data privacy concerns if employees enter corporate data as part of their prompts.

The primary worry is that if employees input company data into ChatGPT, the GAI could incorporate this information into its learning model. Consequently, this proprietary data could become part of its knowledge base, potentially appearing in responses to other users’ prompts.

For businesses subscribing to OpenAI’s Application Programming Interface (API) instead of the public web interface, different rules apply. Enterprise data submitted through the API is not used to train the GAI models unless companies explicitly opt in. Companies also maintain greater control over their information with a default 30-day data retention policy and options to request even shorter retention periods. This crucial difference underscores why many organizations are developing private AI implementations or using enterprise-grade API access.
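
As an illustration of that difference, here is a minimal sketch of routing requests through the API using OpenAI’s official Python SDK; the model name and prompts are placeholders, and retention terms beyond the 30-day default are arranged at the account level rather than in the call itself.

    # Minimal sketch: calling OpenAI through the API rather than the public web UI.
    # Assumes the official `openai` Python SDK (v1+) and an OPENAI_API_KEY
    # environment variable; the model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a helpful business assistant."},
            {"role": "user", "content": "Summarize this quarter's sales trends."},
        ],
    )

    # Unlike the consumer web interface, data submitted through the API is not
    # used to train models unless the organization explicitly opts in.
    print(response.choices[0].message.content)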

These examples underscore the need for clear policies, user training, and vigilant monitoring when using AI tools in business environments.

Importantly, some risks associated with GAI remain unknown, making it challenging to accurately scope or evaluate them due to uncertainties regarding potential scale, complexity, and capabilities. While other risks may be recognized, they are difficult to estimate because of the diverse range of GAI stakeholders, uses, inputs, and outputs. These challenges are further compounded by a lack of transparency in GAI training data and the generally immature state of AI measurement and safety science today.

AI Cybersecurity Tips

• Data Security: Ensure proper data security measures and controlled access to prevent data leakage and compliance violations.
• User Training & Awareness: Provide targeted training emphasizing data privacy and responsible AI usage.
• Monitoring Usage: Regularly monitor AI performance and usage patterns to detect and mitigate risks.
• Establish Data Governance: Manage data used with AI tools in compliance with privacy regulations.

Vulnerabilities Explained, Reported

Cybersecurity incidents typically result from vulnerabilities in software or systems. Vulnerabilities, defined by the National Institute of Standards and Technology (NIST) as “weaknesses in an information system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source,” are central to the cybersecurity of AI systems.

The Cybersecurity and Infrastructure Security Agency (CISA) developed this working definition for AI cybersecurity incidents: “An occurrence that actually or imminently jeopardizes, without lawful authority, the confidentiality, integrity, or availability of the AI system, any other system enabled and/or created by the AI system, or information stored on any of these systems.”

The Cybersecurity Information Sharing Act of 2015 (CISA 2015) established protections for non-federal entities to share cyber threat indicators and defensive measures with the government for cybersecurity purposes. CISA 2015 mandates that the Department of Homeland Security (DHS) operate a system for sharing cyber threat indicators with both federal and private sector entities, providing liability protection for shared information. The statute also extends protections to state, local, tribal, and territorial (SLTT) entities, ensuring shared information is exempt from disclosure under SLTT freedom of information laws.

When CISA receives information on cybersecurity incidents or vulnerabilities, including those specific to AI, it first aggregates and validates the information by entering it into a central tracking platform. During this stage, CISA removes any legitimate or benign indicators that may not pose a threat and ensures any victim-identifying information is stripped from the dataset to protect privacy.

Cybersecurity as a Competitive Advantage

Forward-thinking companies are transforming AI cybersecurity concerns into competitive advantages, and STACK Cybersecurity has emerged as a leader in this space. As both an MSP and an MSSP, STACK offers clients AI usage policy templates, an AI security checklist, and an AI readiness assessment. The firm also instructs clients on how to properly secure AI agents and classify sensitive documents so staff cannot access restricted content.

“We’ve taken a proactive approach to AI security that goes beyond basic guidelines,” Miller said. “Across the industry, we’re seeing decisive security actions, like the recent decisions by several security firms, including us, to block certain AI models internally and recommend clients do the same after identifying potential vulnerabilities.”

STACK Cybersecurity emphasizes implementing end-to-end data encryption across AI workflows while offering specialized training programs focusing on responsible AI usage. They advocate establishing continuous monitoring systems to identify anomalous behavior and creating governance frameworks that align with evolving privacy regulations. A cornerstone of their approach involves regularly evaluating and restricting AI tools that don’t meet security thresholds.

Practical AI Guidance for Businesses

As STACK Cybersecurity’s AI Usage Policy states: “AI technology can be useful as a starting point, but it is not a substitute for human judgment. Always review AI-generated products to ensure accurate, appropriate, and ethical output.”

Effective AI security architecture begins with robust access controls that establish a tiered system where only authorized personnel can interact with AI tools processing sensitive data.

“Many vulnerabilities occur simply because too many employees have unrestricted access to AI,” Miller said. “These companies are unwittingly creating a Bring Your Own AI policy. This is dangerous.”
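
To make the tiered approach concrete, here is a hypothetical sketch of an access check in Python; the role names, tiers, and clearance mapping are invented for illustration and are not STACK’s actual framework.

    # Hypothetical sketch of tiered access control for AI tools.
    # Role names, tiers, and the clearance mapping are illustrative only.
    from enum import IntEnum

    class Tier(IntEnum):
        PUBLIC = 1      # general-purpose chatbots, no sensitive data
        INTERNAL = 2    # AI tools that touch internal documents
        RESTRICTED = 3  # AI tools that process regulated or confidential data

    # Map each role to the highest tier it is cleared to use.
    ROLE_CLEARANCE = {
        "contractor": Tier.PUBLIC,
        "analyst": Tier.INTERNAL,
        "data_steward": Tier.RESTRICTED,
    }

    def can_use_tool(role: str, tool_tier: Tier) -> bool:
        """Allow access only if the role's clearance covers the tool's tier."""
        # Unknown roles default to the lowest clearance.
        return ROLE_CLEARANCE.get(role, Tier.PUBLIC) >= tool_tier

    # An analyst may use an internal-tier tool but not a restricted one.
    assert can_use_tool("analyst", Tier.INTERNAL)
    assert not can_use_tool("analyst", Tier.RESTRICTED)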

Although AI-enabled technologies are not entirely new, their widespread presence on handheld, bring-your-own devices has introduced additional risks. Without guidance from their employers, employees are deciding on their own to use AI at work. It’s understandable: AI offers a practical solution to a range of challenges faced by the modern workforce.

For many organizations, content oversharing, exfiltration, and data loss remain persistent data security challenges. According to Microsoft, more than 30% of decision-makers say they don’t know where their sensitive, business-critical data lives. Add AI to the mix, and the challenge becomes exponentially greater.

Creating AI-specific data classification systems with clear guidelines about what information can be processed through which AI tools has emerged as industry best practice. Leading organizations typically develop frameworks categorizing data into tiers of sensitivity, providing straightforward decision paths for employees regarding which information can safely be processed by AI tools.
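
A hypothetical sketch of such a decision path follows; the tier labels and routing rules are invented for illustration and would need to mirror an organization’s own classification scheme.

    # Hypothetical data-classification decision path for AI usage.
    # Tier labels and routing rules are illustrative assumptions.
    ALLOWED_DESTINATIONS = {
        "public": {"public_chatbot", "enterprise_api", "private_llm"},
        "internal": {"enterprise_api", "private_llm"},
        "confidential": {"private_llm"},
        "regulated": set(),  # e.g., PHI or PII: no AI processing without review
    }

    def allowed_tools(classification: str) -> set:
        """Return which AI destinations may process data of this classification."""
        # Unknown classifications default to no AI processing at all.
        return ALLOWED_DESTINATIONS.get(classification, set())

    print(allowed_tools("internal"))   # {'enterprise_api', 'private_llm'}
    print(allowed_tools("regulated"))  # set() -- escalate rather than prompt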

“The most secure AI implementations integrate AI data security from day one,” Miller said. “Companies ignoring AI security often face costly remediation efforts.”

Proper content classification and labeling of sensitive information, coupled with least-privilege permissions, is key to preparing your data for corporate AI usage.

Ethical and Responsible AI Usage

Many firms are launching and training their own AI agents to ensure data security and integrity.

“As businesses consider using AI, they must consider bringing their AI engine in-house,” Miller said. “If you just plug your business data into AI and say, ‘Hey, tell me if this drawing is accurate,’ that’s a big problem. Because that’s public. But you can bring that AI engine in-house and put it on your own system in your own secure network, and then you’re training your AI to answer those questions. It can be a useful tool to your staff, and it can help make them more efficient.”
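
As a rough illustration of the in-house approach, an open-weight model can be hosted entirely inside the corporate network so prompts never leave it; the sketch below uses the Hugging Face transformers library with a small placeholder model, and a production deployment would pair it with the access controls and monitoring discussed above.

    # Minimal sketch: running an open-weight model on-premises so business data
    # never leaves the corporate network. Requires: pip install transformers torch
    # The model name is a small placeholder; substitute a vetted open-weight model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="distilgpt2")

    prompt = "Summarize the key risks of unmanaged AI usage:"
    result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
    print(result[0]["generated_text"])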

Corporate AI policies emphasize responsible, ethical, and secure use of AI technologies, providing clear guidelines for confidentiality, human oversight, and approved tool usage.

STACK Cybersecurity’s AI Security Checklist lists these considerations for implementing AI tools:
• AI as a Tool: While AI integrates seamlessly into your environment, it’s crucial to remember that it’s a powerful tool with access to sensitive data. Treating it exactly like a simple browser can lead to data exposure and compliance issues.
• Data Security: AI’s effectiveness relies on data access. Ensuring your data is properly secured and access is controlled is paramount.
• Compliance: AI usage must adhere to data privacy regulations and ethical guidelines.

The checklist also highlights these potential risks:
• Data Leakage: Uncontrolled data access can lead to sensitive information being exposed.
• Compliance Violations: Failure to adhere to data privacy regulations (e.g., GDPR, HIPAA) can result in legal penalties.
• Security Vulnerabilities: Lack of proper security protocols can create entry points for cyberattacks.
• Unintended Data Sharing: AI can use information from multiple sources, and without proper user education, data can be shared in ways not intended by the user.
• Lack of Audit Trails: Without monitoring, it is hard to know what information was accessed and what actions the AI took (see the sketch after this list).
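
To illustrate the audit-trail point, here is a minimal, hypothetical logging sketch using Python’s standard logging module; the field names and log destination are assumptions a real deployment would adapt to its own monitoring stack.

    # Hypothetical audit trail: record who sent what to which AI tool.
    # Field names and the log destination are illustrative assumptions.
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="ai_audit.log", level=logging.INFO)
    audit_log = logging.getLogger("ai_audit")

    def log_ai_interaction(user: str, tool: str, prompt: str, action: str) -> None:
        """Append one structured audit record per AI interaction."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "tool": tool,
            "prompt_chars": len(prompt),  # log size, not content, to limit exposure
            "action": action,
        }
        audit_log.info(json.dumps(record))

    log_ai_interaction("jdoe", "enterprise_api", "Draft a customer email...", "generate")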

Future Outlook: Accelerating Adoption

“We’re witnessing the beginning of a fundamental shift in how companies operate,” Miller said. “Companies that develop robust AI capabilities today will maintain significant competitive advantages for years to come.”

GAI has been adopted with unprecedented speed. While it took seven years for the internet to reach 100 million users, ChatGPT achieved that benchmark in two months; by comparison, TikTok reportedly took nine months after its global launch to reach 100 million monthly users, and Instagram about two and a half years. And even though GAI is relatively new to the market, adoption is rapidly expanding: enterprise usage jumped to 75% in 2024 from 55% in 2023, according to an IDC study.

For executives navigating this rapidly evolving landscape, the message is clear: AI implementation is no longer optional but essential for maintaining competitiveness. Companies that establish balanced frameworks prioritizing productivity, security, and ethical considerations will be best positioned to harness AI’s transformative potential.