Over 70% of companies now use generative AI for one or more business functions, according to McKinsey & Company. Marketers use it to develop campaigns, personalize content, test creative, and drive strategy. The laws and regulations governing AI, however, have not grown nearly as fast. This creates potential pitfalls: copyright issues, inconsistent policies, inaccurate outputs, and illegal audience segmentation.
So, what happens when companies adopt AI faster than the rules can keep up?
It’s a tremendous opportunity: companies can set their own policies and guardrails, actively choosing how AI drives them forward while keeping marketing efforts safe and secure.
Ethical challenges

AI tools are built by human developers and trained on human-generated data, so they inherit human biases; their outputs must be checked by people. In 2022, MIT Technology Review found that the AI art tool DALL-E 2 linked white men with ‘CEO’ or ‘director’ 97% of the time when prompted. Such bias can harm brand reputation if marketing materials fail to reflect a company’s diverse customer base. It can also affect team diversity: Workday is currently facing a legal challenge from a jobseeker who claims its AI screening tool discriminated against him on the basis of age, race, and disability when filtering candidates, leaving him without a single interview.
Moral challenges

Marketers also face three key moral challenges: environmental harm, workforce reduction, and lack of accountability. ChatGPT receives over 1 billion prompts daily; Bitcoin consumes more electricity per year than countries such as Argentina and the Netherlands; and activity from Meta’s 3 billion monthly users, plus other AI tools, drives massive energy and water demands to cool data-center servers. According to NPR, nothing currently offsets this consumption, and energy needs are only expected to grow.
AI is affecting entry- and mid-level white-collar jobs as companies hand it more tasks. OpenAI CEO Sam Altman told TechGig, “We’re entering an era where a small team powered by AI can do what once required hundreds of engineers.” Advanced AI tools are already replacing internships and entry-level roles. Marketing leaders will need to keep developing talent, purposefully factor in human experience, and provide training and resources as AI tools evolve. New job titles are already emerging, such as generative content director, generative analytics storyteller, and prompt strategist.
A McKinsey & Company survey of 830 generative AI users found that respondents were about as likely to review all AI outputs as to review none, with slightly more reviewing none. This lack of accountability is concerning, because it can be unclear whether the requestor or the AI tool itself is responsible for an output. Marketing leaders should implement a framework to review all AI outputs before publication.
Legal challenges

All marketers want their assets to be unique and exclusive to their brand. According to the U.S. Copyright Office, however, works generated entirely by AI, without human authorship, cannot be copyrighted by a person or company. As a result, AI-generated assets could be used by competitors, small businesses, or other organizations without legal recourse, potentially leading to brand misuse or impersonation. Some companies now ideate with AI but rely on physical photo and video shoots to avoid these issues. Companies should also be aware that some AI-generated images and videos closely reproduce the work of creatives whose material was used to train the underlying models.
A bigger challenge is the rise of highly convincing deepfakes, in which a person’s name, image, or likeness is used without consent. Deepfakes can undermine a brand’s messaging, identity, and resonance with customers. For example, a recent impersonation of Secretary of State Marco Rubio used synthetic media (voice and writing style) to communicate with five people. Once a deepfake emerges, it is difficult to trace or contain, sometimes going viral before it can be removed. Failing to spot or correct such fabrications could cause customer confusion or financial harm; a false CEO retirement announcement, for instance, could directly move a company’s stock price.
Key action items for leaders

Ethics and bias: Conduct quarterly AI audits to evaluate bias, accuracy, compliance, effectiveness, and potential risks. If using AI in hiring, benchmark AI-driven decisions against human hiring practices to confirm they align. Monitor AI brand mentions by tracking how your brand is referenced across major LLMs, using tools that capture prompt-level context and provide transparency into citations and sources.
Talent and accountability: Host monthly hands-on AI sessions for employee development and include webinars or conferences in development plans. Create AI accountability frameworks that include human marketers’ review before AI-generated content goes live, legal review of medium- and high-stakes content, and clear audit trails. Require AI vendors to disclose their training data, bias mitigation, and compliance measures. Refresh this framework annually as new regulations emerge.
Brand protection: Monitor images that include your logo, leaders, or recognizable features to flag deepfakes. Develop an AI crisis playbook that includes copyright and deepfake response procedures, notification and PR decision trees, and legal and risk review.

For marketing leaders, the ground under AI is shifting by the hour. They must first determine their brand’s overall AI risk tolerance and then develop a framework to govern AI content creation. Leaders who build a robust culture of AI accountability, training, and guidance will be ready for whatever comes next.
About the author
Katie Berry is an adjunct marketing faculty member and AI advisor at the University of St. Thomas - Opus College of Business. She works at the intersection of AI and marketing and, in 2024, developed the first AI-driven brand campaign for U.S. Bank. She consults with creative agencies, nonprofits, and Fortune 500 companies on how to leverage AI for more efficient and effective business outcomes. She also relies heavily on AI as a second set of hands when managing a chiropractic clinic with her partner.