
What Is Generative AI? How Does It Work?


Generative artificial intelligence is a relatively new form of AI that, unlike its predecessors, can create new content by extrapolating from its training data. Its extraordinary ability to produce human-like writing, images, audio, and video has captured the world’s imagination since the first generative AI consumer chatbot was released to the public in the fall of 2022. A June 2023 report from McKinsey & Company estimated that generative AI has the potential to add between $6.1 trillion and $7.9 trillion to the global economy annually by increasing worker productivity. To put that in context, the same research pegs the annual economic potential of increased productivity from all AI technologies at between $17.1 trillion and $25.6 trillion. So, while generative AI has the “sizzle” here in mid-2023, it’s still only a part of the whole AI “steak.”

But every action has an equal and opposite reaction. So, along with its remarkable productivity prospects, generative AI brings new potential business risks—such as inaccuracy, privacy violations, and intellectual property exposure—as well as the capacity for large-scale economic and societal disruption. For example, generative AI’s productivity benefits are unlikely to be realized without substantial worker retraining efforts and, even so, will undoubtedly dislocate many from their current jobs. Consequently, government policymakers around the world, and even some technology industry executives, are advocating for rapid adoption of AI regulations.

This article is an in-depth exploration of the promise and peril of generative AI: How it works; its most immediate applications, use cases, and examples; its limitations; its potential business benefits and risks; best practices for using it; and a glimpse into its future.

What Is Generative AI?

Generative AI (GAI) is the name given to a subset of AI machine learning technologies that have recently developed the ability to rapidly create content in response to text prompts, which can range from short and simple to very long and complex. Different generative AI tools can produce new audio, image, and video content, but it is text-oriented conversational AI that has fired imaginations. In effect, people can converse with, and learn from, text-trained generative AI models in pretty much the same way they do with humans.

Generative AI took the world by storm in the months after ChatGPT, a chatbot based on OpenAI’s GPT-3.5 neural network model, was released on November 30, 2022. GPT stands for generative pretrained transformer, words that mainly describe the model’s underlying neural network architecture.

There are many earlier instances of conversational chatbots, starting with the Massachusetts Institute of Technology’s ELIZA in the mid-1960s. But most previous chatbots, including ELIZA, were entirely or largely rule-based, so they lacked contextual understanding. Their responses were limited to a set of predefined rules and templates. In contrast, the generative AI models emerging now have no such predefined rules or templates. Metaphorically speaking, they’re primitive, blank brains (neural networks) that are exposed to the world via training on real-world data. They then independently develop intelligence—a representative model of how that world works—that they use to generate novel content in response to prompts. Even AI experts don’t know precisely how they do this, because the algorithms are self-developed and tuned as the system is trained.

Businesses large and small should be excited about generative AI’s potential to bring the benefits of technology automation to knowledge work, which until now has largely resisted automation. Generative AI tools change the calculus of knowledge work automation; their ability to produce human-like writing, images, audio, or video in response to plain-English text prompts means that they can collaborate with human partners to generate content that represents practical work.

“Over the next few years, lots of companies are going to train their own specialized large language models,” Larry Ellison, chairman and chief technology officer of Oracle, said during the company’s June 2023 earnings call.


Generative AI vs. AI

Artificial intelligence is a vast area of computer science, of which generative AI is a small piece, at least at present. Naturally, generative AI shares many attributes in common with traditional AI. But there are also some stark distinctions.

  • Common attributes: Both depend on large amounts of data for training and decision-making (though the training data for generative AI can be orders of magnitude larger). Both learn patterns from the data and use that “knowledge” to make predictions and adapt their own behavior. Optionally, both can be improved over time by adjusting their parameters based on feedback or new information.
  • Differences: Traditional AI systems are usually designed to perform a specific task better or at lower cost than a human, such as detecting credit card fraud, determining driving directions, or—likely coming soon—driving the car. Generative AI is broader; it creates new and original content that resembles, but can’t be found in, its training data. Also, traditional AI systems, such as machine learning systems, are trained primarily on data specific to their intended function, while generative AI models are trained on large, diverse data sets (and then, sometimes, fine-tuned on far smaller data volumes tied to a specific function). Finally, traditional AI is almost always trained on labeled/categorized data using supervised learning techniques, whereas generative AI must always be trained, at least initially, using unsupervised learning (where data is unlabeled, and the AI software is given no explicit guidance).

Another difference worth noting is that the training of foundational models for generative AI is “obscenely expensive,” to quote one AI researcher. Think $100 million just for the hardware needed to get started, or the equivalent in cloud services costs, since that’s where most AI development is done. Then there’s the cost of the monumentally large data volumes required.

Key Takeaways

  • Generative AI became a viral sensation in November 2022 and is expected to soon add trillions of dollars to the global economy—annually.
  • Generative AI is a form of neural network–based machine learning, trained on vast data sets, that can create novel text, image, video, or audio content in response to users’ natural language prompts.
  • Market researchers predict that the technology will deliver an economic boost by dramatically accelerating productivity growth for knowledge workers, whose tasks have resisted automation before now.
  • Generative AI comes with risks and limitations enterprises must mitigate, such as “hallucinating” incorrect or false information and inadvertently violating copyrights.
  • It is also expected to cause significant changes in the nature of work, including possible job losses and role restructuring.

Generative AI Explained

For businesses large and small, the seemingly magical promise of generative AI is that it can bring the benefits of technology automation to knowledge work. Or, as a McKinsey report put it, “activities involving decision making and collaboration, which previously had the lowest potential for automation.”

Historically, technology has been most effective at automating routine or repetitive tasks for which decisions were already known or could be determined with a high level of confidence based on specific, well-understood rules. Think manufacturing, with its precise assembly line repetition, or accounting, with its regulated principles set by industry associations. But generative AI has the potential to do far more sophisticated cognitive work. To suggest an admittedly extreme example, generative AI might assist an organization’s strategy formation by responding to prompts requesting alternative ideas and scenarios from the managers of a business in the midst of an industry disruption.

In its report, McKinsey evaluated 63 use cases across 16 business functions, concluding that 75% of the trillions of dollars of potential value that could be realized from generative AI will come from a subset of use cases in only four of those functions: customer operations, marketing and sales, software engineering, and research and development. Revenue-raising prospects across industries were more evenly distributed, though there were standouts: High tech topped the list in terms of the possible boost as a percentage of industry revenue, followed by banking, pharmaceuticals and medical products, education, telecommunications, and healthcare.

Separately, a Gartner analysis aligns with McKinsey’s predictions: For example, that more than 30% of new drugs and materials will be discovered using generative AI techniques by 2025, up from zero today, and that 30% of outbound marketing messages from large organizations will, likewise, be synthetically generated in 2025, up from 2% in 2022. And in an online survey, Gartner found that customer experience and retention was the top response (at 38%) of 2,500 executives who were asked about where their organizations were investing in generative AI.

What makes it possible for all this to happen so fast is that, unlike traditional AI, which has been quietly automating and adding value to commercial processes for decades, generative AI exploded into the world’s consciousness thanks to ChatGPT’s human-like conversational talent. That has also shed light on, and drawn people to, generative AI technology that focuses on other modalities; everyone seems to be experimenting with writing text, or making music, pictures, and videos using one or more of the various models that specialize in each area. So, with many organizations already experimenting with generative AI, its impact on business and society is likely to be colossal—and will happen stupendously fast.

The obvious downside is that knowledge work will change. Individual roles will change, sometimes significantly, so workers will need to learn new skills. Some jobs will be lost. Historically, however, big technology changes, such as generative AI, have always added more (and higher-value) jobs to the economy than they eliminate. But this is of little comfort to those whose jobs are eliminated.

How Does Generative AI Work?

There are two answers to the question of how generative AI models work. In one sense, we know how they work in detail because humans designed their various neural network implementations to do exactly what they do, iterating those designs over decades to make them better and better. AI developers know exactly how the neurons are connected; they engineered each model’s training process. Yet, in practice, no one knows exactly how generative AI models do what they do—that’s the embarrassing truth.

“We don’t know how they do the actual creative task because what goes on inside the neural network layers is way too complex for us to decipher, at least today,” said Dean Thompson, a former chief technology officer of multiple AI startups that have been acquired over the years by companies, including LinkedIn and Yelp, where he remains as a senior software engineer working on large language models (LLMs). Generative AI’s ability to produce new original content appears to be an emergent property of what is known, that is, of the models’ structure and training. So, while there is plenty to explain vis-à-vis what we know, what a model such as GPT-3.5 is actually doing internally—what it’s thinking, if you will—has yet to be figured out. Some AI researchers are confident that this will become known in the next 5 to 10 years; others are unsure it will ever be fully understood.

Here’s an overview of what we do know about how generative AI works:

  • Start with the brain. A good place to start in understanding generative AI models is with the human brain, says Jeff Hawkins in his 2004 book, “On Intelligence.” Hawkins, a computer scientist, brain scientist, and entrepreneur, presented his work in a 2005 session at PC Forum, which was an annual conference of leading technology executives led by tech investor Esther Dyson. Hawkins hypothesized that, at the neuron level, the brain works by continuously predicting what’s going to happen next and then learning from the differences between its predictions and subsequent reality. To improve its predictive ability, the brain builds an internal representation of the world. In his theory, human intelligence emerges from that process. Whether influenced by Hawkins or not, generative AI works exactly this way. And, startlingly, it acts as if it is intelligent.
  • Build an artificial neural network. All generative AI models begin with an artificial neural network encoded in software. Thompson says a good visual metaphor for a neural network is to imagine the familiar spreadsheet, but in three dimensions because the artificial neurons are stacked in layers, similar to how real neurons are stacked in the brain. AI researchers even call each neuron a “cell,” Thompson notes, and each cell contains a formula relating it to other cells in the network—mimicking the way that the connections between brain neurons have different strengths.
    Each layer may have tens, hundreds, or thousands of artificial neurons, but the number of neurons is not what AI researchers focus on. Instead, they measure models by the number of connections between neurons. The strengths of these connections vary based on their cell equations’ coefficients, which are more generally called “weights” or “parameters.” These connection-defining coefficients are what’s being referred to when you read, for example, that the GPT-3 model has 175 billion parameters. The latest version, GPT-4, is rumored to have trillions of parameters, though that is unconfirmed. There are a handful of neural network architectures with differing characteristics that lend themselves to producing content in a particular modality; the transformer architecture appears to be best for large language models, for example.
  • Teach the newborn neural network model. Large language models are given enormous volumes of text to process and tasked to make simple predictions, such as the next word in a sequence or the correct order of a set of sentences. In practice, though, neural network models work in units called tokens, not words.
    “A common word may have its own token, uncommon words would certainly be made up of multiple tokens, and some tokens may just be a single space followed by ‘th’ because that sequence of three characters is so common,” said Thompson. To make each prediction, the model inputs a token at the bottom layer of a particular stack of artificial neurons; that layer processes it and passes its output to the next layer, which processes and passes on its output, and so on until the final output emerges from the top of the stack. Stack sizes can vary significantly, but they’re generally on the order of tens of layers, not thousands or millions.
    In the early training stages, the model’s predictions aren’t very good. But each time the model predicts a token, it checks for correctness against the training data. Whether it’s right or wrong, a “backpropagation” algorithm adjusts the parameters—that is, the formulas’ coefficients—in each cell of the stack that made that prediction. The goal of the adjustments is to make the correct prediction more probable. (A toy sketch of this training loop appears after this list.)
    “It does this for right answers, too, because that right prediction may have only had, say, a 30% certainty, but that 30% was the most of all the other possible answers,” Thompson said. “So, backpropagation seeks to turn that 30% into 30.001%, or something like that.”
    After the model has repeated this process for trillions of text tokens, it becomes very good at predicting the next token, or word. After initial training, generative AI models can be fine-tuned via a supervised learning technique, such as reinforcement learning from human feedback (RLHF). In RLHF, the model’s output is given to human reviewers who make a binary positive or negative assessment—thumbs up or down—which is fed back to the model. RLHF was used to fine-tune OpenAI’s GPT-3.5 model to help create the ChatGPT chatbot that went viral.
  • But how did the model answer my question? It’s a mystery. Here’s how Thompson explains the current state of understanding: “There’s a huge ‘we just don’t know’ in the middle of my explanation. What we know is that it takes your entire question as a sequence of tokens, and at the first layer processes all of those simultaneously. And we know it then processes the outputs from that first layer in the next layer, and so on up the stack. And then we know that it uses that top layer to predict, which is to say, produce a first token, and that first token is represented as a given in that whole system to produce the next token, and so on.
    “The logical next question is, what did it think about, and how, in all that processing? What did all those layers do? And the stark answer is, we don’t know. We … do … not … know. You can study it. You can observe it. But it’s complex beyond our ability to analyze. It’s just like fMRI [functional magnetic resonance imaging] on people’s brains. It’s the crudest sketch of what the model actually did. We don’t know.”
    Although it’s controversial, a group of more than a dozen researchers who had early access to GPT-4 in fall 2022 concluded that the intelligence with which the model responds to complex challenges they posed to it, and the broad range of expertise it exhibits, indicates that GPT-4 has attained a form of general intelligence. In other words, it has built up an internal model of how the world works, just as a human brain might, and it uses that model to reason through the questions put to it. One of the researchers told the “This American Life” podcast that he had a “holy s—” moment when he asked GPT-4 to, “Give me a chocolate chip cookie recipe, but written in the style of a very depressed person,” and the model responded: “Ingredients: 1 cup butter softened, if you can even find the energy to soften it. 1 teaspoon vanilla extract, the fake artificial flavor of happiness. 1 cup semi-sweet chocolate chips, tiny little joys that will eventually just melt away.”
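To ground the training loop described above, here is a deliberately tiny, illustrative Python sketch. It is nothing like a production LLM: the corpus, vocabulary, and layer sizes are invented, and real models use subword tokens, many stacked layers, and billions of parameters. But the moving parts are the same in spirit: parameters (“weights”) connecting layers of artificial neurons, next-token prediction, and backpropagation nudging those parameters so the correct prediction becomes more probable.

```python
# A deliberately tiny sketch of next-token training, assuming an invented
# toy corpus and layer sizes. Real LLMs use subword tokens, dozens of
# layers, and billions of parameters; the spirit is the same.
import numpy as np

rng = np.random.default_rng(0)

# Toy "tokenizer": whole words in a six-word vocabulary.
corpus = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(corpus))
tok = {w: i for i, w in enumerate(vocab)}
ids = [tok[w] for w in corpus]
V, H = len(vocab), 16                 # vocabulary size, hidden-layer width

# Parameters ("weights"): the coefficients connecting layers of neurons.
W1 = rng.normal(0, 0.1, (V, H))       # input layer -> hidden layer
W2 = rng.normal(0, 0.1, (H, V))       # hidden layer -> output layer
print("parameter count:", W1.size + W2.size)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.5
for epoch in range(500):
    for cur, nxt in zip(ids[:-1], ids[1:]):
        x = np.zeros(V); x[cur] = 1.0      # one-hot input token
        h = np.tanh(x @ W1)                # hidden-layer activations
        p = softmax(h @ W2)                # predicted next-token probabilities
        # Backpropagation: cross-entropy gradients, layer by layer, so the
        # correct next token becomes slightly more probable.
        dlogits = p.copy(); dlogits[nxt] -= 1.0
        dW2 = np.outer(h, dlogits)
        dh = (W2 @ dlogits) * (1 - h**2)
        dW1 = np.outer(x, dh)
        W2 -= lr * dW2; W1 -= lr * dW1

# The trained toy model now predicts the most frequent successor of "the".
x = np.zeros(V); x[tok["the"]] = 1.0
p = softmax(np.tanh(x @ W1) @ W2)
print("after 'the':", vocab[int(p.argmax())])   # expected: 'cat'
```

Run end to end, the toy model learns the corpus’s statistics and, prompted with “the,” predicts “cat,” its most frequent successor in the training text.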

Why Is Generative AI Important?

A useful way to understand the importance of generative AI is to think of it as a calculator for open-ended, creative content. Just as a calculator automates routine and mundane math, freeing up a person to focus on higher-level tasks, generative AI has the potential to automate the more routine and mundane subtasks that make up much of knowledge work, freeing people to focus on the higher-level parts of the job.

Consider the challenges marketers face in obtaining actionable insights from the unstructured, inconsistent, and disconnected data they often encounter. Traditionally, they would need to consolidate that data as a first step, which requires a fair bit of custom software engineering to give common structure to disparate data sources, such as social media, news, and customer feedback.

“But with LLMs, you can simply feed in information from different sources directly into the prompt and then ask for key insights, or for which feedback to prioritize, or request sentiment analysis—and it will just work,” said Basim Baig, a senior engineering manager specializing in AI and security at Duolingo. “The power of the LLM here is that it lets you skip that massive and costly engineering step.”

Thinking further, Thompson suggests product marketers might use LLMs to tag free-form text for analysis. For example, imagine you have a huge database of social media mentions of your product. You could write software that applies an LLM and other technologies to:

  • Extract the main themes from each social media post.
  • Group the idiosyncratic themes that arise from individual posts into recurring themes.
  • Identify which posts support each recurring theme.

Then you could apply the results to:

  • Study the most frequent recurring themes, clicking through to examples.
  • Track the rise and fall of recurring themes.
  • Ask an LLM to dig deeper into a recurring theme for recurring mentions of product characteristics.
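Here is one hedged way such a tagging pipeline might be wired together in Python. This is a sketch, not a definitive implementation: `call_llm` is a hypothetical placeholder for whichever LLM API you use, and the prompt wording and JSON response formats are assumptions for illustration.

```python
# A hedged sketch of the tagging pipeline described above. `call_llm`, the
# prompt wording, and the JSON response formats are hypothetical
# placeholders, not any vendor's documented API.
import json
from collections import defaultdict

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM provider and return its text."""
    raise NotImplementedError("wire this to the LLM API of your choice")

def extract_themes(post: str) -> list[str]:
    # Step 1: pull the main themes out of a single social media post.
    prompt = ("List the main themes of this social media post as a JSON "
              f"array of short phrases.\n\nPost: {post}")
    return json.loads(call_llm(prompt))

def group_themes(themes: list[str]) -> dict[str, str]:
    # Step 2: merge idiosyncratic per-post themes into recurring themes.
    prompt = ("Group these themes into recurring themes. Reply with a JSON "
              f"object mapping each theme to a canonical theme name:\n{themes}")
    return json.loads(call_llm(prompt))

def tag_posts(posts: list[str]) -> dict[str, list[str]]:
    # Step 3: index which posts support each recurring theme.
    per_post = {p: extract_themes(p) for p in posts}
    unique = sorted({t for ts in per_post.values() for t in ts})
    canonical = group_themes(unique)
    supporting = defaultdict(list)
    for post, themes in per_post.items():
        for t in themes:
            supporting[canonical.get(t, t)].append(post)
    return dict(supporting)   # recurring theme -> supporting posts
```

The design point is Baig’s: the LLM replaces the custom data-consolidation engineering, so the remaining software work shrinks to orchestrating prompts and collecting results.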

Generative AI Models

Generative AI represents a broad category of applications based on an increasingly rich pool of neural network variations. Although all generative AI fits the overall description in the How Does Generative AI Work? section, implementation techniques vary to support different media, such as images versus text, and to incorporate advances from research and industry as they arise.

Neural network models use repetitive patterns of artificial neurons and their interconnections. A neural network design—for any application, including generative AI—often repeats the same pattern of neurons hundreds or thousands of times, typically reusing the same parameters. This is an essential part of what’s called a “neural network architecture.” The discovery of new architectures has been an important area of AI innovation since the 1980s, often driven by the goal of supporting a new medium. But then, once a new architecture has been invented, further progress is often made by employing it in unexpected ways. Additional innovation comes from combining elements of different architectures.

Two of the earliest and still most common architectures are:

  • Recurrent neural networks (RNNs) emerged in the mid-1980s and remain in use. RNNs demonstrated how AI could learn—and be used to automate tasks that depend on—sequential data, that is, information whose sequence contains meaning, such as language, stock market behavior, and web clickstreams. RNNs are at the heart of many audio AI models, such as music-generating apps; think of music’s sequential nature and time-based dependencies. But they’re also good at natural language processing (NLP). RNNs also are used in traditional AI functions, such as speech recognition, handwriting analysis, financial and weather forecasting, and predicting variations in energy demand, among many other applications. (A minimal sketch of the recurrent step follows this list.)
  • Convolutional neural networks (CNNs) came about 10 years later. They focus on grid-like data and are, therefore, great at spatial data representations and can generate pictures. Popular text-to-image generative AI apps, such as Midjourney and DALL-E, use CNNs to generate the final image.
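A minimal Python sketch, with invented sizes and random data, illustrates the recurrent idea: one shared cell is applied at every step, and a hidden state carries context forward, which is why an RNN must walk a sequence in order.

```python
# Minimal sketch of the recurrent step, assuming invented sizes and random
# data. One shared cell runs at every step; a hidden state carries context
# forward, so the sequence must be processed in order.
import numpy as np

rng = np.random.default_rng(1)
D, H = 8, 4                        # input size, hidden-state size
Wx = rng.normal(0, 0.1, (D, H))    # input -> hidden weights (shared by all steps)
Wh = rng.normal(0, 0.1, (H, H))    # hidden -> hidden weights (the recurrence)

def rnn_step(x, h):
    # The new state depends on the current input AND everything seen so far.
    return np.tanh(x @ Wx + h @ Wh)

h = np.zeros(H)
for x in rng.normal(size=(10, D)):  # a 10-step input sequence
    h = rnn_step(x, h)              # steps run one by one; order matters
print("final state summarizing the sequence:", h)
```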

Although RNNs are still frequently used, successive efforts to improve on RNNs led to a breakthrough:

  • Transformer models have evolved into a much more flexible and powerful way to represent sequences than RNNs. They have several characteristics that enable them to process sequential data, such as text, in a massively parallel fashion without losing their understanding of the sequences. That parallel processing of sequential data is among the key characteristics that make ChatGPT able to respond so quickly and well to plainspoken conversational prompts. (The self-attention mechanism at the heart of this parallelism is sketched below.)
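Here is a minimal Python sketch of that core mechanism, scaled dot-product self-attention, with invented sizes and random data. The point to notice is that every token attends to every other token in a single set of matrix operations, rather than step by step as in an RNN.

```python
# Minimal sketch of scaled dot-product self-attention, assuming invented
# sizes and random data. Every token attends to every other token in one
# set of matrix operations, i.e., the sequence is processed in parallel.
import numpy as np

rng = np.random.default_rng(2)
T, D = 6, 16                         # sequence length, embedding size
X = rng.normal(size=(T, D))          # one embedding vector per token
Wq, Wk, Wv = (rng.normal(0, 0.1, (D, D)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv     # queries, keys, values for all tokens
scores = Q @ K.T / np.sqrt(D)        # all-pairs token affinities at once
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax per token
out = weights @ V                    # each token's context-aware mixture
print(out.shape)                     # (6, 16): same length, context-enriched
```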

Research, private industry, and open-source efforts have created impactful models that innovate at higher levels of neural network architecture and application. For example, there have been crucial innovations in the training process, in how feedback from training is incorporated to improve the model, and in how multiple models can be combined into generative AI applications. Here’s a rundown of some of the most important generative AI model innovations:

  • Variational autoencoders (VAEs) use innovations in neural network architecture and training processes and are often incorporated into image-generating applications. They consist of encoder and decoder networks, each of which may use a different underlying architecture, such as RNN, CNN, or transformer. The encoder learns the important features and characteristics of an image, compresses that information, and stores it as a representation in memory. The decoder then uses that compressed information to try to recreate the original. Ultimately, the VAE learns to generate new images that are similar to its training data.
  • Generative adversarial networks (GANs) are used across a variety of modalities but appear to have a special affinity for video and other image-related applications. What sets GANs apart from other models is that they consist of two neural nets that compete against each other as they train. In the case of images, for example, the “generator” creates an image and the “discriminator” decides whether the image is real or generated. The generator is constantly trying to fool the discriminator, which is forever trying to catch the generator in the act. In most instances, the two competing neural nets are based on CNN architectures but may also be variants of RNNs or transformers. (A toy training loop in this adversarial style appears after this list.)
  • Diffusion models incorporate multiple neural networks in an overall framework, sometimes integrating different architectures such as CNNs, transformers, and VAEs. Diffusion models learn by compressing data, adding noise to it, denoising it, and attempting to regenerate the original. The popular Stable Diffusion tool uses a VAE encoder and decoder for the first and final steps, respectively, and two CNN variations in the noising/denoising steps.
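As an illustration of the adversarial setup described above, here is a toy PyTorch sketch. Everything is scaled down and invented: a one-dimensional number distribution stands in for images, and both networks are small fully connected models rather than the CNNs typical of real image GANs.

```python
# Toy GAN in PyTorch, assuming invented sizes: a 1-D number distribution
# stands in for images, and both nets are small fully connected models
# rather than the CNNs typical of real image GANs.
import torch
from torch import nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0       # "real" data: N(3.0, 0.5)
    fake = G(torch.randn(64, 8))                # the generator's attempts
    # Train the discriminator to catch the generator in the act.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()
    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# The generator's outputs should drift toward the real mean of ~3.0.
print(G(torch.randn(256, 8)).mean().item())
```

Each iteration, the discriminator practices catching fakes while the generator practices fooling it; over many rounds the generator’s outputs drift toward the real distribution.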

What Are the Applications of Generative AI?

While the world has only just begun to scratch the surface of potential uses for generative AI, it’s easy to see how businesses can benefit by applying it to their operations. Consider how generative AI might change the key areas of customer interactions, sales and marketing, software engineering, and research and development.

In customer service, earlier AI technology automated processes and introduced customer self-service, but it also caused new customer frustrations. Generative AI promises to deliver benefits to both customers and service representatives, with chatbots that can be adapted to different languages and regions, creating a more personalized and accessible customer experience. When human intervention is necessary to resolve a customer’s issue, customer service reps can collaborate with generative AI tools in real time to find actionable strategies, improving the velocity and accuracy of interactions. The speed with which generative AI can tap into an entire large enterprise’s knowledge base and synthesize new solutions to customer complaints gives service staff a heightened ability to effectively solve specific customer issues, rather than rely on outdated phone trees and call transfers until an answer is found—or the customer runs out of patience.

In marketing, generative AI can automate the integration and analysis of data from disparate sources, which should dramatically accelerate time to insights and lead directly to better-informed decision-making and faster development of go-to-market strategies. Marketers can use this information alongside other AI-generated insights to craft new, more-targeted ad campaigns. This reduces the time staff must spend collecting demographic and buying behavior data and gives them more time to analyze results and brainstorm new ideas.

Tom Stein, chairman and chief brand officer at B2B marketing agency Stein IAS, says every marketing agency, including his, is exploring such opportunities at high speed. But, Stein notes, there are also simpler, faster wins for an agency’s back-end processes.

“If we get an RFI [request for information], typically, 70% to 80% of the RFI will ask for the same information as every other RFI, maybe with some contextual differences specific to that company’s situation,” says Stein, who was also jury president of the 2023 Cannes Lions Creative B2B Awards. “It’s not that complicated to put ourselves in a position for any number of the AI tools to do that work for us. … So, if we get back that 80% of our time, and can spend that time adding value to the RFI and just making it sing, that’s a win every which way. And there are a number of processes like that.”

Software developers collaborating with generative AI can streamline and speed up processes at every step, from planning to maintenance. During the initial creation phase, generative AI tools can analyze and organize large amounts of data and suggest multiple program configurations. Once coding begins, AI can test and troubleshoot code, identify errors, run diagnostics, and suggest fixes—both before and after launch. Thompson notes that because so many enterprise software projects incorporate multiple programming languages and disciplines, he and other software engineers have used AI to educate themselves in unfamiliar areas far faster than they previously could. He has also used generative AI tools to explain unfamiliar code and identify specific issues.

In R&D, generative AI can increase the speed and depth of market research during the initial phases of product design. Then AI programs, especially those with image-generating capabilities, can create detailed designs of potential products before simulating and testing them, giving workers the tools they need to make quick and effective adjustments throughout the R&D cycle.

Oracle founder Ellison pointed out in the June earnings call that “specialized LLMs will speed the discovery of new lifesaving drugs.” Drug discovery is an R&D application that exploits generative models’ tendency to hallucinate incorrect or unverifiable information—but in a good way: identifying new molecules and protein sequences in support of the search for novel healthcare treatments. Separately, Oracle subsidiary Cerner Enviza has teamed up with the U.S. Food and Drug Administration (FDA) and John Snow Labs to apply AI tools to the challenge of “understanding the effects of medicines on large populations.” Oracle’s AI strategy is to make artificial intelligence pervasive across its cloud applications and cloud infrastructure.

Generative AI Use Cases

Generative AI has the far-reaching potential to speed up or fully automate a diverse set of tasks. Businesses should plan deliberate and specific ways to maximize the benefits it can bring to their operations. Here are some specific use cases:

  • Bridge knowledge gaps: With their straightforward, chat-based user interfaces, generative AI tools can answer workers’ general or specific questions to point them in the right direction when they get stuck on anything from the simplest queries to complex operations. Salespeople, for example, can ask for insights about a targeted account; coders can learn new programming languages.
  • Check for errors: Generative AI tools can search any text for mistakes, from informal emails to professional writing samples. And they can do more than correct errors: They can explain the what and the why to help users learn and improve their work.
  • Improve communication: Generative AI tools can translate text into different languages, tweak tone, create unique messages based on different data sets, and more. Marketing teams can use generative AI tools to craft more relevant ad campaigns, while internal staff can use it to search through previous communications and quickly find relevant information and answers to questions without interrupting other employees. Thompson believes this ability to synthesize institutional knowledge on any question or idea that a worker may have will fundamentally alter the way people communicate within large organizations, supercharging knowledge discovery.
  • Ease administrative burden: Businesses with heavy administrative work, such as medical coding/billing, can use generative AI to automate complex tasks, including appropriately filing documents and analyzing doctors’ notes. This frees staff to focus on more hands-on work, such as patient care or customer service.
  • Scan medical images for abnormalities: Medical providers can use generative AI to scan medical records and images to flag noteworthy issues and give doctors recommendations for medicine, including potential side effects contextualized with patient history.
  • Troubleshoot code: Software engineers can use generative AI models to troubleshoot and fine-tune their code faster and more reliably than combing through, line by line. They can then ask the tool for deeper explanations to inform future coding and improve their processes.

Benefits of Generative AI

The benefits that generative AI can bring to a business derive mainly from three overarching attributes: knowledge synthesis, human-AI collaboration, and speed. While many of the benefits noted below are similar to those promised in the past by earlier AI models and automation tools, the presence of one or more of these three attributes can help businesses realize the advantages faster, easier, and more effectively.

With generative AI, organizations can build custom models trained on their own institutional knowledge and intellectual property (IP), after which knowledge workers can ask the software to collaborate on a task in the same language they might use with a colleague. Such a specialized generative AI model can respond by synthesizing information from the entire corporate knowledge base with astonishing speed. Not only does this approach reduce or eliminate the need for complex—and often less effective and more expensive—software-engineering expertise to create specific programs for these tasks, it also is likely to surface ideas and connections that prior approaches couldn’t.

  • Increased productivity: Knowledge workers can use generative AI to reduce their time spent on routine day-to-day tasks, such as educating themselves on a new discipline suddenly needed for an upcoming project, organizing or categorizing data, combing the internet for applicable research, or drafting emails. By leveraging generative AI, fewer employees can accomplish tasks that previously required large teams or hours of work in a fraction of the time. A team of programmers, for example, could spend hours poring through flawed code to troubleshoot what went wrong, but a generative AI tool may be able to find the errors in moments and report them along with suggested fixes. Because some generative AI models possess skills that are roughly average or better across a broad spectrum of knowledge work competencies, collaborating with a generative AI system can dramatically boost its human partner’s productivity. For example, a junior product manager could also be at least an average project manager with an AI coach at their side. All these capabilities would dramatically accelerate knowledge workers’ ability to complete a project.
  • Reduced costs: Because of their speed, generative AI tools reduce the cost to complete processes, and if it takes half the time to do a task, the task costs half as much as it otherwise would. In addition, generative AI can minimize errors, eliminate downtime, and identify redundancies and other costly inefficiencies. There is an offset, however: Because of generative AI’s tendency to hallucinate, human oversight and quality control is still necessary. But human-AI collaborations are expected to do far more work in less time than humans alone—better and more accurately than AI tools alone—thereby reducing costs. While testing new products, for example, generative AI can help to create more advanced and detailed simulations than older tools could. This ultimately reduces the time and cost of testing new products.
  • Improved customer satisfaction: Customers can get a superior and more personalized experience through generative AI–based self-service and generative AI tools “whispering in the ear” of customer service reps, infusing them with knowledge in real time. While the AI-powered customer service chatbots encountered today can sometimes feel frustratingly limited, it’s easy to imagine a much higher quality customer experience powered by a company’s specially trained generative AI model, based on the caliber of today’s ChatGPT conversations.
  • Better-informed decision-making: Specially trained, enterprise-specific generative AI models can provide detailed insights through scenario modeling, risk assessment, and other sophisticated approaches to predictive analytics. Decision-makers can leverage these tools to gain a deeper understanding of their industry and the business’s position in it through personalized recommendations and actionable strategies, informed by further-reaching data and faster analysis than human analysts or older technology could generate on their own.
    For example, decision-makers can better plan inventory allocation before a busy season via more accurate demand forecasts made possible by a combination of internal data collected by their enterprise resource planning (ERP) system and comprehensive external market research, which is then analyzed by a specialized generative AI model. In this case, better allocation decisions minimize overbuying and stockouts while maximizing potential sales.
  • Faster product launches: Generative AI can quickly produce product prototypes and first drafts, help fine-tune works in progress, and test/troubleshoot existing projects to find improvements much faster than previously possible.
  • Quality control: An enterprise-specific, specialized generative AI model is likely to expose gaps and inconsistencies in the user manuals, videos, and other content that a business presents to the public.

A Sample of Specific Generative AI Benefits

| Benefit | Knowledge synthesis | Human-AI collaboration | Speed |
| --- | --- | --- | --- |
| Increased productivity | Organize data, expedite research, produce first drafts. | Educate workers on new disciplines, suggest novel ways to solve problems. | Accelerate knowledge workers’ ability to complete a new project. |
| Reduced costs | Identify redundancies and inefficiencies to improve workflows. | Minimize human errors, reduce downtime through collaborative oversight. | Complete tasks faster (if a task takes half the time, it has half the cost). |
| Improved customer satisfaction | Quickly organize and retrieve customer account information to hasten issue resolution. | Improve chatbots to automate simple interactions and give better information to reps when human help is needed. | Give real-time account updates and information to both customers and service reps. |
| Better-informed decision-making | Fast-track insights by mediating predictive analytics, such as scenario modeling and risk assessment. | Give personalized recommendations and actionable strategies to decision-makers. | Generate faster analysis from further-reaching data than human analysts or older technology. |
| Faster product launches | Produce prototypes and “minimum viable products” (MVPs). | Test and troubleshoot existing projects to find improvements. | Increase the speed at which adjustments can be implemented. |

Limitations of Generative AI

Anyone who has used generative AI tools for education and/or research has likely experienced their best-known limitation: They make up stuff. Since the model is only predicting the next word, it can extrapolate from its training data to state falsehoods with just as much authority as the truths it reports. This is what AI researchers mean by hallucination, and it’s a key reason why the current crop of generative AI tools requires human collaborators. Businesses must take care to prepare for and manage this and other limitations as they implement generative AI. If a business sets unrealistic expectations or does not effectively manage the technology, the consequences can harm the company’s performance and reputation.

  • Requires oversight: Generative AI models can introduce false or misleading information, often with such detail and authoritative tone that even experts can be fooled. Similarly, their outputs may contain biased or offensive language learned from the data set that the model was trained on. Humans remain a critical part of the workflow to prevent these flawed outputs from spreading and reaching customers or influencing company policy.
  • Computational power and initial investment: Generative AI models require massive amounts of computing power for both training and operation. Many companies lack the necessary resources and expertise to build and maintain these systems on their own. This is one reason why much generative AI development is done using cloud infrastructure.
  • Potential to converge, not diverge: Organizations that don’t build their own specialized models, relying instead on public generative AI tools, may be doomed to mediocrity. Often, they will find their conclusions are identical to others’ because they’re based on the same training data. Unless these firms infuse their work with human innovation, they may find themselves effectively adapting to current best practices but struggling to find a competitive differentiator.
  • Resistance from staff and customers: Staff, especially long-time employees with ingrained protocols and methods, can struggle to adjust to generative AI, leading to a decrease in productivity while they adapt. Similarly, staff may resist the technology for fear of losing their jobs. Managers and business leaders must assuage these fears and be open and transparent about how the technology will change—or not change—the structure of the business.

Generative AI Risks and Concerns

Generative AI has elicited extreme reactions on both sides of the risk spectrum. Some groups are concerned that it will lead to human extinction, while others insist it will save the world. Those extremes are outside the scope of this article. However, here are some important risks and concerns that business leaders implementing AI technology must understand so that they can take steps to mitigate any potential negative consequences.

  • Trust and reliability: Generative AI models make inaccurate claims, sometimes hallucinating completely fabricated information. Similarly, many models are trained with older data, typically looking only at information published up to a certain date, so what fit last year’s market may no longer be relevant or useful. For example, businesses looking to improve their supply chain operations may find that their models’ suggestions are outdated and not relevant in the ever-changing global economy. Users must verify all claims before acting on them to ensure accuracy and relevance.
  • Privacy/intellectual property: Generative AI models often continue to learn from information inputs provided as part of prompts. Businesses, especially those that collect sensitive personal information from their customers, such as medical practices, must take care not to expose protected IP or confidential data. If the model accesses this information, it may increase the likelihood of exposure.
  • Supercharged social engineering: Threat actors are already using generative AI to help them better personalize social engineering and other cyberattacks by making them appear more authentic.
    “Already, it’s very hard to distinguish if you’re talking to a bot or a human online,” said Baig, the Duolingo AI and security engineer. “It’s become much easier for criminals looking to make a buck to generate a bunch of content that can fool people.”
  • Decrease in output quality and originality: Generative AI may make building products and content easier and faster, but it doesn’t guarantee a higher-quality result. Relying on AI models without significant human collaboration may result in products that become standardized and lacking in creativity.
  • Bias: If a generative AI model is trained on biased data, ranging from gaps in perspectives to harmful and prejudicial content, those biases will be reflected in its output. For example, if a business has historically hired only one type of employee, the model may cross-reference new applicants with the “ideal” hire and eliminate qualified candidates because they don’t fit the mold, even if the organization intended to discard that mold.
  • Shadow AI: Employees’ use of generative AI without the organization’s official sanction or knowledge can lead to a business inadvertently putting out incorrect information or violating another organization’s copyright.
  • Model collapse: AI researchers have identified a phenomenon called model collapse that could render generative AI models less useful over time. Essentially, as AI-generated content proliferates, models that are trained on that synthetic data—which inevitably contains errors—will eventually “forget” the characteristics of the human-generated data on which they were originally trained. This concern may reach a breaking point as the internet becomes more populated by AI content, creating a feedback loop that degrades the model.
  • AI regulation: Because generative AI is so new, there’s not much applicable regulation. Still, governments all over the world are investigating how to regulate it. Some countries, such as China, have already proposed regulatory measures on how models can be trained and what they are allowed to produce. As more countries impose regulations, businesses, especially international companies, need to monitor new and changing laws to ensure compliance and avoid fines or criminal charges for misusing the technology.

Ethics and Generative AI

The rise of big data analytics more than a decade ago raised novel ethical questions and debates because emergent tools made it possible to infer private or sensitive information about people that they had not, and would not want, revealed. How should companies handle their ability to possess such information?

Given its potential to supercharge data analysis, generative AI is raising new ethical questions and resurfacing older ones.

  • How will generative AI impact workers? Generative AI already is making many workers feel uneasy about their long-term employment prospects—and justifiably so. While history shows that technology advances have always led to more, and higher-value, jobs than they eliminate, the roles AI might render obsolete are paying the bills for people today.
  • How can we eliminate potential bias? We know all AI models have the potential to produce biased results. Organizations must proactively choose how to manage this challenge from both the enterprise risk and ethical perspectives.
  • How might bad actors use GAI models to wreak harm and havoc on the public? The countless potential uses of generative AI unfortunately include criminal and harmful acts, especially as generative models become more accessible to the public. Deepfake videos using someone’s voice and likeness, hacking tools to enhance cyberattacks, widespread misinformation, and social engineering campaigns are just a few of the potential ways malicious actors can put generative AI to use. Currently, many models have safeguards, but those guardrails are not considered perfect. Businesses implementing their own models must understand what their systems are capable of and take steps to ensure their responsible use.
  • Who owns the work generated by AI? Even if a business fine-tunes a model on its own data, generative AI models are trained on vast amounts of external data. A model’s output, then, may include elements of other organizations’ work, leading to potential ethical and legal issues, such as plagiarism and copyright infringement. This is especially true for image-generating AI models; artists from all creative fields are exploring ways to keep their work from being fed into these programs. Regulatory bodies may create new rules over time, so anyone using generative AI should consider where the content is coming from and how it will be used before publishing it as their own.

Generative AI Examples

Enterprises of all sizes and industries, from the United States military to Coca-Cola, are prodigiously experimenting with generative AI. Here is a small set of examples that demonstrate the technology’s broad potential and rapid adoption.

Snap Inc., the company behind Snapchat, rolled out a chatbot called “My AI,” powered by a version of OpenAI’s GPT technology. Customized to fit Snapchat’s tone and style, My AI is programmed to be friendly and personable. Users can customize its appearance with avatars, wallpapers, and names and can use it to chat one-on-one or among multiple users, simulating the typical way that Snapchat users communicate with their friends. Users can request personal advice or engage in casual conversation about topics such as food, hobbies, or music—the bot can even tell jokes. Snapchat orients My AI to help users explore features of the app, such as augmented-reality lenses, and to help users get information they wouldn’t normally turn to Snapchat for, such as recommending places to go on a local map.

Bloomberg announced BloombergGPT, a chatbot trained roughly half on general data about the world and half on either proprietary Bloomberg data or cleaned financial data. It can perform simple tasks, such as writing good article headlines, and proprietary tricks, like turning plain-English prompts into the Bloomberg Query Language required by the company’s data terminals, which are must-haves in many financial industry firms.

Oracle has partnered with AI developer Cohere to help businesses build internal models fine-tuned with private corporate data, in a move that aims to spread the use of specialized company-specific generative AI tools.

“Cohere and Oracle are working together to make it very, very easy for enterprise customers to train their own specialized large language models while protecting the privacy of their training data,” Oracle’s Ellison told financial analysts during the June 2023 earnings call. Oracle plans to embed generative AI services into business platforms to boost productivity and efficiency throughout a business’s existing processes, bypassing the need for many companies to build and train their own models from the ground up. To that end, the company also recently announced the incorporation of generative AI capabilities into its human resources software, Oracle Fusion Cloud Human Capital Management (HCM).

In addition:

  • Coca-Cola is using text and image generators to personalize ad copy and build highly tailored customer experiences.
  • American Express, which has long been at the forefront of AI use in credit card fraud detection, has its Amex Digital Labs subsidiary developing consumer and B2B capabilities.
  • The Pentagon’s digital and AI office is experimenting with five generative AI models, feeding them classified data and testing them to explore how they might be used to suggest creative options that human military leaders never considered.
  • Duolingo is using a ChatGPT-powered bot to help its foreign language learners. It provides in-depth explanations about why their answers on practice tests are right or wrong, mimicking the way users might interact with a human tutor.
  • Slack has released a chatbot that aims to help customers’ workers distill insights and advice from the corpus of institutional knowledge that resides in each customer’s Slack channels.

Generative AI Tools

ChatGPT is the tool that became a viral sensation, but a multitude of generative AI tools are available for each modality. For example, just for writing there are Jasper, Lex, AI-Writer, Writer, and many others. In image generation, Midjourney, Stable Diffusion, and DALL-E appear to be the most popular today.

Among the dozens of music generators are AIVA, Soundful, Boomy, Amper, Dadabots, and MuseNet. Although software programmers have been known to collaborate with ChatGPT, there are also plenty of specialized code-generation tools, including Codex, codeStarter, Tabnine, PolyCoder, Cogram, and CodeT5.

History of Generative AI

Perhaps surprisingly, the first step on the path to the generative AI models in use today came in 1943, the same year that the first programmable electronic computer—the Colossus, which was then used by Britain to decode encrypted messages during World War II—was demonstrated. The AI step was a research paper, “A Logical Calculus of Ideas Immanent in Nervous Activity,” by Warren McCulloch, a psychiatrist and professor at the University of Illinois College of Medicine, and Walter Pitts, a self-taught computational neuroscientist.

Pitts, an apparent math prodigy, ran away from home at age 15 and was homeless when he met McCulloch, who took Pitts in to live with his family. Pitts’ only degree was an Associate of Arts, awarded by the University of Chicago after the pair published the seminal paper that established the basic math by which an artificial neuron “decides” whether to output a one or a zero.

The second step shifts north and east to Buffalo, NY, and a Cornell Aeronautical Laboratory research psychologist named Frank Rosenblatt. Operating under a July 1957 grant from the Office of Naval Research within the United States Department of the Navy as part of Cornell’s Project PARA (Perceiving and Recognizing Automaton), Rosenblatt built on McCulloch and Pitts’ math to develop the perceptron, a neural network with a single “hidden” layer between the input and output layers. Before building the Mark I Perceptron, which today rests in the Smithsonian Institution, Rosenblatt and the Navy simulated it on an IBM 704 mainframe computer for a public demonstration in July 1958. But the perceptron was such a simple neural network that it drew criticism from Massachusetts Institute of Technology computer scientist Marvin Minsky, cofounder of MIT’s AI laboratory. Minsky and Rosenblatt reportedly debated the perceptron’s long-term prospects in public forums, resulting in the AI community largely abandoning neural network research from the 1960s until the 1980s.

This period came to be known as the “AI winter.”

The landscape for neural network research thawed out in the 1980s thanks to the contributions of several researchers, most notably Paul Werbos, whose earlier work on backpropagation was rediscovered; Geoffrey Hinton; Yoshua Bengio; and Yann LeCun. Their combined work demonstrated the viability of large, multilayer neural networks and showed how such networks could learn from their right and wrong answers through credit assignment via a backpropagation algorithm. This is when RNNs and CNNs emerged. But the limitations of these early neural nets, combined with overhyped early expectations that could not be met due to those limitations and the state of computational power at the time, led to a second AI winter in the 1990s and early 2000s.

This time, though, many neural net researchers stayed the course, including Hinton, Bengio, and LeCun. The trio, sometimes called “the Godfathers of AI,” shared the 2018 Turing Award for their 1980s work, their subsequent perseverance, and their ongoing contributions. By the mid-2010s, new and diverse neural net variants were rapidly emerging, as described in the Generative AI Models section.

Future of Generative AI

What impact generative AI has on businesses and how people work remains to be seen. But this much is clear: Massive investments are pouring into generative AI across multiple dimensions of human endeavor. Venture capitalists, established corporations, and virtually every business in between are investing in generative AI startups at breakneck speed. The universal “magic” of LLMs is an uncanny ability to mediate human interaction with big data, to help people make sense of information by explaining it simply, clearly, and astonishingly fast. This suggests that generative AI will become embedded in a multitude of existing applications and cause the invention of a second wave of new applications.

Gartner, for example, predicts that 40% of enterprise applications will have embedded conversational AI by 2024, 30% of enterprises will have AI-augmented development and testing strategies by 2025, and more than 100 million workers will collaborate with “robocolleagues” by 2026.

Of course, it’s possible that the risks and limitations of generative AI will derail this steamroller. Fine-tuning generative models to learn the nuances of what makes a business unique may prove too difficult, running such computationally intensive models may prove too costly, and an inadvertent exposure of trade secrets may scare companies away.

Or it all may happen but at a slower pace than many now expect. As a reminder, the promise of the internet was realized, eventually. But it took a decade longer than the first generation of enthusiasts anticipated, during which time necessary infrastructure was built or invented and people adapted their behavior to the new medium’s possibilities. In many ways, generative AI is another new medium.

Influencers are thinking broadly about the future of generative AI in business.

“It may mean we build companies differently in the future,” says Sean Ammirati, a venture capitalist who is also the distinguished service professor of entrepreneurship at Carnegie Mellon University’s Tepper School of Business and cofounder of CMU’s Corporate Startup Lab. In the same way that “digital native” companies had an advantage after the rise of the internet, Ammirati envisions that future companies built from the ground up on generative AI–powered automation will be able to take the lead.

“These companies will be automation-first, so they won’t have to relearn how to stop doing things manually that they should be doing in an automated way,” he says. “You could end up with a very different kind of company.”

Easily Adopt Generative AI with Oracle

Oracle not only has a long history of working with artificial intelligence capabilities and incorporating them into its products, it is also at the forefront of generative AI development and activities. Oracle Cloud Infrastructure is used by leading generative AI companies. This next-generation cloud can provide the perfect platform for enterprises to build and deploy specialized generative AI models specific to their organizations and individual lines of business. As explained by Oracle’s Ellison, “All of Oracle’s cloud data centers have a high-bandwidth, low-latency, RDMA [remote direct memory access] network that is perfectly optimized for building the large-scale GPU clusters that are used to train generative large language models. The extreme high performance and related cost savings of running generative AI workloads in our Gen 2 cloud has made Oracle the number one choice among cutting-edge AI development companies.”

Oracle’s partnership with Cohere has led to a new set of generative AI cloud service offerings. “This new service protects the privacy of our enterprise customers’ training data, enabling those customers to safely use their own private data to train their own private specialized large language models,” Ellison said.

The generative AI story started 80 years ago with the math of a teenage runaway and became a viral sensation late last year with the release of ChatGPT. Innovation in generative AI is accelerating rapidly, as businesses of all sizes and industries experiment with and invest in its capabilities. But along with its abilities to greatly enhance work and life, generative AI brings great risks, ranging from job loss to, if you believe the doomsayers, the potential for human extinction. What we know for sure is that the genie is out of the bottle—and it’s not going back in.

Generative AI FAQs

What is generative AI technology?

Generative AI technology is built on neural network software architectures that mimic the way the human brain is believed to work. These neural nets are trained by inputting vast amounts of data in relatively small samples and then asking the AI to make simple predictions, such as the next word in a sequence or the correct order of a sequence of sentences. The neural net gets credit or blame for right and wrong answers, so it learns from the process until it’s able to make good predictions. Ultimately, the technology draws on its training data and its learning to respond in human-like ways to questions and other prompts.

What is an example of generative AI?

The best-known example of generative AI today is ChatGPT, which is capable of human-like conversations and writing on a vast array of topics. Other examples include Midjourney and DALL-E, which create images, and a multitude of other tools that can generate text, images, video, and sound.

What is the difference between generative AI and AI?

It’s important to note that generative AI is not a fundamentally different technology from traditional AI; they exist at different points on a spectrum. Traditional AI systems usually perform a specific task, such as detecting credit card fraud. Generative AI is usually broader and can create new content. This is partly because generative AI tools are trained on larger and more diverse data sets than traditional AI. Furthermore, traditional AI is usually trained using supervised learning techniques, whereas generative AI is trained using unsupervised learning.

What is the danger of generative AI?

A major debate is going on in society about the possible risks of generative AI. Extremists on opposite sides of the debate have said that the technology may ultimately lead to human extinction, on one side, or save the world, on the other. More likely, AI will lead to the elimination of many existing jobs. Enterprises should be concerned with the ways in which generative AI will drive changes in work processes and job roles, as well as the potential for it to inadvertently expose private or sensitive information or infringe on copyrights.

What is generative AI good for?

Generative AI can be put to excellent use in partnership with human collaborators to assist, for example, with brainstorming new ideas and educating workers on adjacent disciplines. It’s also a great tool for helping people more quickly analyze unstructured data. More generally, it can benefit businesses by improving productivity, reducing costs, improving customer satisfaction, providing better information for decision-making, and accelerating the pace of product development.

What can generative AI not do?

Generative AI can’t have genuinely new ideas that haven’t been previously expressed in its training data or at least extrapolated from that data. It also shouldn’t be left on its own. Generative AI requires human oversight and is only at its best in human-AI collaborations.

What industries use generative AI?

Because of its breadth, generative AI is likely to be useful in virtually every industry.

How will generative AI impact the future of work?

Generative AI is likely to have a major impact on knowledge work, activities in which humans work together and/or make business decisions. At the very least, knowledge workers’ roles will need to adapt to working in partnerships with generative AI tools, and some jobs will be eliminated. History demonstrates, however, that technological change like that expected from generative AI always leads to the creation of more jobs than it destroys.

*Reproduced with permission from Oracle.
