AI Provides Real Benefits, But Comes With Some Pitfalls

If the initials AI conjure up Rosie, the robot maid from “The Jetsons” space-age television show, you’re not alone. It wasn’t long ago that artificial intelligence was a distant-future possibility among other barely imaginable technological advancements.

Since then, artificial intelligence, or the technology enabling computers and machines to simulate human intelligence and problem-solving capabilities, has become a reality. We use it daily in everything from tapping Google Maps or Waze for the fastest driving route to accepting streaming services’ personalized entertainment recommendations.

Enter late 2022 and 2023, when tools built on large language models, such as ChatGPT, Claude, Gemini and many more, became widely available. These LLMs are trained on enormous amounts of human-generated text and can produce human-like language, continually improving through iterative machine learning. What’s more, this generative AI can mimic human writing styles and speech patterns and, given the right prompts, create images and other art.

A year into the introduction of generative AI, business use of AI is increasing. In a September 2023 Gartner Inc. poll of 1,400 executive leaders who participated in the Gartner AI webinar, 45% of respondents said they’re in piloting mode with generative AI, and 10% said they’ve put generative AI solutions into production.

A similar poll in March and April 2023 showed only 15% piloting generative AI and 4% using it in production. While webinar attendees are not a representative sample of businesses overall, the uptick in use is worth noting.

“The State of AI in 2023,” a McKinsey & Co. survey of a more diverse group of C-suite executives, found that 79% of respondents have had at least some exposure to generative AI, either personally or at work, and that 22% use it regularly in their work. One-third of respondents said their organizations are using generative AI, and 28% said generative AI use is on their board’s agenda.

It’s no secret that AI replaces or alters functions traditionally performed solely by humans. But AI also has the potential to increase productivity and, ultimately, generate revenue. A January 2024 report by Cognizant Research and Oxford Economics predicted that, by 2032, AI could add between $477 billion and $1 trillion to the U.S. gross domestic product, depending on the level of business adoption. The report also predicted that 90% of jobs could be disrupted by generative AI in the next 10 years.

Is AI use controversial? Is its future uncertain? Are its generative applications in their infancy? Yes, to all. But is it going away? Absolutely not. Therefore, organizations need to address its use and try to harness its benefits.

Types of AI
AI is by no means new, but recent developments have created a host of possibilities. Broad types of AI include weak AI, also known as narrow AI, and strong AI. Weak or narrow AI is designed for a specific task; self-driving cars, Alexa, Siri and Google Maps all use narrow AI. Strong AI, also known as artificial general intelligence, refers to the possibility of a machine possessing intelligence equal to that of humans, and perhaps one day surpassing it (artificial superintelligence). This type of AI is not yet a reality.

Much of today’s AI is built on machine learning, which has been in place for decades. Classical machine learning employs relatively simple algorithms and requires humans to structure, label or otherwise supervise the data. Deep learning algorithms are much newer; they employ multilayered neural networks that can learn from large volumes of data with far less human supervision.
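
For readers who want to see the distinction in code, here is a minimal sketch, assuming Python with the scikit-learn library installed. It contrasts a classical model trained on small, structured, labeled data with a small multilayered neural network, used here only as a simplified stand-in for deep learning; the dataset and model choices are illustrative, not a recommendation.

    # Illustrative only: classical machine learning vs. a small neural network.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Structured, human-labeled data: the kind classical machine learning expects.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Classical machine learning: a simple algorithm, heavy reliance on prepared data.
    classical = LogisticRegression(max_iter=200).fit(X_train, y_train)

    # A small multilayer neural network: layers of learned representations.
    # (Real deep learning systems are vastly larger and often train on unlabeled data.)
    neural = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                           random_state=0).fit(X_train, y_train)

    print("Logistic regression accuracy:", classical.score(X_test, y_test))
    print("Neural network accuracy:", neural.score(X_test, y_test))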

While generative AI is not new either — it’s been used for years in statistics to predict data — the rise of deep learning has made generative AI more robust. It’s capable of incorporating images, speech and other complex data into what it generates, and of generating more complex output.

How businesses use AI
In the world of production, customer service and even transportation and logistics, AI has staggering possibilities. It gives businesses a leg-up, so to speak. According to Professor Richard Baldwin, who spoke at the World Economic Forum Growth Summit, AI is the great equalizer. “AI is essentially wisdom in a can,” he said. “It’s giving more power to all workers, but especially those average workers.”

Baldwin’s comments are largely future-focused, so let’s look at how companies are using AI today. McKinsey & Co.’s August 2023 survey shows that the most common uses of generative AI are in marketing and sales (14% of respondents), followed by product and service development (13%) and service operations (10%). The most common marketing and sales use was crafting first drafts of documents. For product and service development, it was identifying trends in customer needs. For service operations, it was using chatbots for customer service.

AI in sales and marketing
Andrew Bontz, an AI keynote speaker and founder of Wisconsin-based ResistingBeta Consulting, which serves the men’s health industry, said his use of generative AI shrinks 40 to 50 hours of work to 45 minutes. “I do the work of three to five people with just me,” he said.

In a sentiment similar to Baldwin’s, Bontz noted AI’s potential to help small companies compete with their larger counterparts. “AI has the opportunity to drastically level the playing field between large and small businesses,” he said.

For example, AI can “consume” several books on copywriting, along with specific data about a client, and generate an outline or content for marketing messages using the principles in those books, Bontz said.

It can also help companies target a new audience by building an audience persona and creating a tailored message. That helps, for example, when companies need to market to a different generation with a different mindset and lingo, Bontz said.

AI can also be used as a training tool; salespeople, for example, can role-play with it to learn techniques, he said.

AI tools do need human input and editing, which is why Bontz is working to help companies become AI adjacent, he said. “It’s like having a straight-A student complete a project for you,” he said. “You need to guide and train them.”

Staffing and human resources applications
Staffing businesses like Kelly are also leveraging AI. “We help companies with all aspects of their workforce,” said Ed Pederson, vice president of innovation and product development for Kelly. Traditionally, available workforce channels included full-time, part-time and contract workers, he said. Now, there’s a fourth option — automated workers or processes that fulfill labor needs.

Pederson headed up the development of Kelly Fusion, a suite of automated work solutions, including digital workers, collaborative robots and workforce automation consulting. In this realm, digital workers automate “highly repeatable processes” like moving data in spreadsheets or merging data in finance applications. Kelly just placed its first digital worker.

“Our perfect world is digital workers working alongside humans,” Pederson said. “That human can then focus on the creation and the relational rather than the repetitive tasks.”

One example of humans working alongside AI is Kelly’s recruiter assistant called Grace. While Kelly recruiters focus on interacting with candidates and making important decisions, Grace automates the creation of job descriptions and postings, summarizes resumes and sends standard emails.

Another AI tool, Helix UX, is a portal that enables employers to analyze market conditions and make informed decisions about hiring. For example, they can determine whether labor needs are best met by full-time, part-time or temporary workers based on salaries, amount of time needed, length of project and more.

Transportation and logistics applications
AI is also at work in the transportation and logistics industry. Lucas Grizz, founder of Raven Cargo, a Chicago-based shipping company, has developed a product that integrates disparate data, optimizes routing, forecasts demand and more.

Raven Cargo’s RavenEye, a cloud-based logistics transaction management and information exchange platform, streamlines logistics operations, said Grizz. “It optimizes routing, forecasts demand and enhances predictive maintenance, so it helps minimize downtime. RavenEye also improves document management and supply chain visibility, enabling informed, data-driven decisions that lead to reduced operational costs and heightened customer satisfaction.”

Information technology and beyond
Amy Babinchak, who owns Third Tier, a Michigan-based company that provides information technology services and consulting, said that, while most AI technology is emergent, software companies are starting to integrate it into their products.

Therefore, most companies are using some form of AI technology, even if they don’t label it as such, she said. Eventually, most people will use AI indirectly through applications that integrate the technology, she said.

Babinchak’s information technology clients are using AI to automate some processes, she said, but most of them are waiting for vendors to integrate AI into the tools they already use. For example, AI tools could be set to alert businesses when data discrepancies appear or to run processes when specific conditions are met.
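
As an illustration of that alert-on-discrepancy pattern, here is a minimal, rule-based sketch in Python. In practice a vendor’s AI tooling would supply the detection logic; the record structure, tolerance and function names below are hypothetical.

    # Hypothetical example: flag records whose reported total disagrees with the
    # sum of their line items, then hand the discrepancies to a notification step.
    from dataclasses import dataclass

    @dataclass
    class Invoice:
        invoice_id: str
        line_item_total: float
        reported_total: float

    def find_discrepancies(invoices, tolerance=0.01):
        """Return invoices whose reported total doesn't match the line-item sum."""
        return [inv for inv in invoices
                if abs(inv.reported_total - inv.line_item_total) > tolerance]

    def alert(discrepancies):
        # A real deployment might email a manager or open a ticket; here we just print.
        for inv in discrepancies:
            print(f"Discrepancy in {inv.invoice_id}: reported {inv.reported_total}, "
                  f"computed {inv.line_item_total}")

    sample = [Invoice("INV-001", 1200.00, 1200.00),
              Invoice("INV-002", 845.50, 854.50)]
    alert(find_discrepancies(sample))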

Personally, Babinchak, a Microsoft MVP, enjoys exploring tools like Microsoft Copilot and educating people about what it can do. The tool works alongside applications like Word, PowerPoint and Excel to generate content and automate tasks.

The product is aptly named, she said, because it doesn’t teach users anything; it only helps with the work.

“Copilot can’t do much for you if you don’t already have some expertise. It’s going to help you do what you do,” she said.

Babinchak stressed that AI needs human input. “AI doesn’t know why it’s doing something,” she said. “That’s where humans come in.”

This need for human AI expertise is a great reason to learn how AI works as the technology is developing, Babinchak said. She recalled the economist Baldwin’s statement at last year’s World Economic Forum Growth Summit: “AI won’t take your job. It’s somebody using AI that will take your job.”

Security and data cautions
AI may sound like the best thing since sliced bread, but there are cautions when using it. Since AI relies on data input to generate predictions and content, one major caution is to prepare and protect your data.

“If you think about companies, all they really own is their data,” said Babinchak. Combine that with AI’s ability to access any data its user has access to, and you’ll see the issue.

“Most businesses have not paid attention to permission creep,” said Babinchak. “Most of the time, what I see is that data is not all that well structured or protected.”

For example, Copilot could pull in outdated data when generating sales statistics for a presentation. It could also pull confidential data that’s not meant to be shared.

The fix, said Babinchak, is to thoroughly audit your data permissions and to archive or delete old data so it can’t surface when employees use AI to generate statistics or presentations.
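
As a simple illustration of the “archive old data” step, the sketch below (Python, standard library only) lists files that haven’t been modified in a set number of years so they can be reviewed, archived or pulled out of the pool an AI assistant is allowed to draw on. The directory path and threshold are placeholders, and a real audit would also cover the permissions side.

    # Hypothetical helper: surface stale files as candidates for archiving or deletion.
    import time
    from pathlib import Path

    def stale_files(root, years=3):
        """Yield files under `root` last modified more than `years` years ago."""
        cutoff = time.time() - years * 365 * 24 * 3600
        for path in Path(root).rglob("*"):
            if path.is_file() and path.stat().st_mtime < cutoff:
                yield path

    # Replace "./shared_drive" with the folder you actually want to review.
    for path in stale_files("./shared_drive", years=3):
        print(path)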

Bontz also noted the importance of data security. While some AI tools are “closed,” meaning their data isn’t shared with the public, others are based on open architecture. “Anything you put in can be viewed by anyone,” warned Bontz, who also noted a potential intellectual property question: anything created by AI may be owned by the maker of the tool rather than the company.

While many employees are experimenting with AI for personal or business use, many companies have no rules governing its use, exposing them to security risks and even legal risks.

Cloud security company Zscaler commissioned Sapio Research to conduct a global survey entitled “All Eyes on Securing GenAI” in October 2023. Findings revealed that 95% of organizations are using generative AI tools in some way, and 89% consider these tools a potential security risk. Yet 23% of responding information technology leaders admit to having no generative AI monitoring in place, and 33% have yet to implement any additional generative AI security measures, though many have it on their agenda.

Sourcing and hallucinations
Another pitfall of generative AI is the lack of references. AI usually doesn’t know where it got its information, and that can be a problem in certain circumstances. AI is also capable of what are called “hallucinations,” said Bontz: incorrect or misleading results caused by insufficient training data, or by assumptions and biases built into the model.

Bontz referred to a case in which a New York attorney inadvertently cited a fake court case in an AI-generated filing. The lawyer had used ChatGPT for research in a personal injury lawsuit and did not double-check the facts.

One possible answer to this problem, at least in the realm of Internet research, is a newer search engine called Perplexity AI. The tool provides answers with linked references, along with suggested follow-up research questions.

“Generative AI came about because the public was clamoring for better ways to search the Internet and get useful information,” said Babinchak. This new tool provides not only useful information but also references to verify it.

Ethics
The emergence and continued development of AI bring with them a host of ethical concerns. For example, AI can be used to mislead people or sway public opinion. There are also privacy and security issues, biases built into AI models and the need for accountability in decision-making, not to mention potential job displacement.

“There are amazing uses, and it also can be used for evil,” said Bontz. “You just have to train more people with good morals.”

Babinchak advised companies to evaluate each AI tool based on its ethics policies, which should be well-defined, she said. For example, Microsoft AI has defined six principles for AI development and use, and there is an Open Ethic Initiative that works to keep AI use ethical worldwide.

Expert advice on AI
So, what’s a company to do about AI? Here’s some advice from the experts:
1. Get in the game. Experts agree that now is the time to learn about AI and explore its business applications. For Bontz, the “cost of ignorance” may be nothing less than being edged out of the market. “Businesses that sit on the sidelines for a year and a half may be eliminated,” he said, likening reluctance to use AI to solving mathematical equations with paper and pencil while competitors use calculators.

2. Provide guidelines. It’s important not to ignore AI tools, because many employees are already using them, and they could put your company at risk if they use them incorrectly or for the wrong purposes. “People are going to play with them (AI tools) no matter what,” said Pederson. “Let’s give them a protected sandbox.”
To that end, create company guidelines around AI use. “If you really want to be intentional about it, create an internal AI implementation team,” said Bontz.

3. Be strategic. “AI can do everything,” said Bontz. “If you try to do everything, you’ll do nothing.” He advises figuring out which three business problems you want to solve using AI.

Grizz agreed. “For companies just starting with AI, my advice is to begin small. Identify a specific problem area where AI can have a clear impact, such as demand forecasting or customer service enhancements,” he said. “Gradually scale your AI initiatives as you gain confidence and understand the technology’s impact on your operations.”

4. Get your data ready. Remembering that AI tools are only as good as the data they’re fed, pay attention to your data. “Ensure your data is clean and organized before implementation,” said Grizz. Also, archive old data and audit permissions, Babinchak advised.

5. Train your employees. If it’s true that AI will not replace jobs, but employees who know how to use AI will rise to the top of the job market, then AI training is paramount. “Invest in training your team to work with AI tools and understand their outputs,” advised Grizz. They’ll need to know how to prompt AI to generate meaningful content, so educate them on using prompts, said Babinchak.

6. Plan for productivity. “This (AI) allows companies to create 10 times what they could before, or to do it 10 times faster,” said Bontz, who estimated that AI increases production by 65%. It’s important to plan for what employees will do with their extra time, he said. That may include developing different career paths for some people. It could also mean rethinking work hours and schedules, such as four-day workweeks.

The future of AI
Where will AI end up? There’s a lot of uncertainty, as we’re in the early stages of this technological development. “AI as it stands today has the brain the size of a honeybee,” said Pederson. “We’re a long way from artificial general intelligence.”

“AI is very 1.0,” said Babinchak. “None of it is fully fleshed out.”

Put another way, we’re at the peak of what’s called the Gartner Hype Cycle, which tracks expectations for a new technology from an initial innovation trigger through a peak of inflated expectations, a trough of disillusionment and, eventually, a plateau of productive use. Our expectations of AI’s potential are high right now, and we could experience some “disillusionment” with the technology in the near future.

“It could plateau a little bit,” said Pederson. But either way, now is the time to learn it and adapt.