ChatGPT vs Marketers: Risks, Rewards, and Responsibilities

Yomi Sanghvi
5 min read · Aug 7, 2023


ChatGPT vs the Marketer: 5 Tips for Resolving a Marketer's Ethical Dilemmas While Using ChatGPT

Launched in November 2022, ChatGPT reached more than 100 million unique users and over 25 million daily visits by February 2023, making it probably the fastest-growing tool in history.

Most professionals across the world have started to embrace generative AI at work in some way. According to the AI Marketing Benchmark Report 2023, 61.4% of marketers have used AI in their marketing activities. If you are tech-savvy like me, chances are you have used it or thought of using it too. As a marketer, I find it an interesting productivity tool for quickly doing market research, generating content, and getting tips and guidance on resolving work problems. ChatGPT does not just support creativity; it amplifies it. And I think that is game-changing.

With so many benefits and such rapid adoption come the risks. In the case of ChatGPT, the risks go well beyond misusing information and learning a painful lesson; they reach further than you might imagine. Let us discuss the risks of using ChatGPT and how, as a marketer, you can navigate them.

Personalization vs Privacy:

ChatGPT thrives on data: the millions of hints, patterns, connections, and preferences people have left online, plus whatever you include in your prompt. It uses this to help you build psychographic profiles of your ideal customer, which can feed into everything from crafting a personalized message to finding ways to draw a person or audience segment into a conversation or an experience.

A typical example of getting personalized content on ChatGPT:

[Image: ChatGPT generating a personalized marketing message]

Obtaining user consent when collecting data, with full disclosure of how that data will be used, is a well-accepted ethical marketing practice. As marketers, in our attempt to understand the customer more closely, we must not be tempted to compromise user data, and we must maintain respect for the individuality of our audience. Be discreet in the prompt: never include personally identifiable information (PII) such as a name, email address, social security number, or place of work to get a personalized message. Doing so can violate your audience's privacy.

I am extra cautious about the information I share with external tools. I never share my organization's, competitors', or clients' data to get solutions to a problem; instead, I use examples and placeholders such as "an IT company", "a SaaS platform", or "a B2B business".
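That habit can even be partially automated. Below is a minimal Python sketch — the helper name and regex patterns are my own illustration, not part of any official tool, and far from an exhaustive PII scrubber — that swaps obvious PII for neutral placeholders before a prompt leaves your machine:

```python
import re

# Illustrative patterns only: a real scrubber would cover many more PII types.
PII_PATTERNS = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",      # email addresses
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",          # US SSN format
    r"\b(?:\+?\d{1,2}[ -]?)?\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}\b": "[PHONE]",
}

def scrub_prompt(text: str) -> str:
    """Return a copy of `text` with obvious PII replaced by placeholders."""
    for pattern, placeholder in PII_PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

prompt = "Write a welcome email for jane.doe@example.com, phone 555-123-4567."
print(scrub_prompt(prompt))
# → Write a welcome email for [EMAIL], phone [PHONE].
```

Running the prompt through a filter like this before pasting it into ChatGPT keeps the request useful while keeping the individual anonymous.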

Dependency vs Deepfakes:

ChatGPT is not capable of producing original ideas. It draws on data collected from the internet and presents it as a response, without attribution or citation. Its sources can carry a biased point of view, and a biased source can lead to bias in ChatGPT's results too. Because the tool responds fast, it can easily be exploited to spread misinformation and disinformation. Also, its knowledge cutoff is September 2021, so its answers can be outdated.

To test its accuracy, I ran an experiment in which ChatGPT gave different responses when I asked what was intended to be the same question, using similar wording in the prompt.

[Image: an incorrect ChatGPT response]

The kind of inaccuracy shown in the example above is called an AI hallucination. Major research firms are studying it; Morgan Stanley believes ChatGPT will keep giving wrong answers for a couple of years.

Incidentally, there have been several news reports of fake Amazon product reviews written by ChatGPT. While Amazon is acting against these malpractices, it is wise for the rest of us to be cautious about trusting ChatGPT as a reliable source of information.

When combined with other tools that generate imagery and audio, ChatGPT can be used to produce manipulative "deepfake" videos. Many such deepfakes are already live on the internet, used for entertainment on social media. They become dangerous, however, when they enter legal proceedings or healthcare practices, or when they shape faulty public opinion.

As a content creation tool, ChatGPT saves time on research and content generation. However, it is best not to treat it as a single source of truth. When I use it, I review the output and make the changes needed to add relevance, and I verify the sources of facts, opinions, and findings before putting anything to business use.

Fact vs Fraud:

According to data published by Palo Alto Networks, its URL filtering system detects more than 118 ChatGPT-related malicious URLs daily. Copycat chatbots and ChatGPT scams appear on the internet every day, and ChatGPT-like squatting domains visited at the workplace can create security risks for your organization. AI scammers can mimic trusted sources and the people we know. With ChatGPT, scammers can create more realistic and persuasive email content, websites, and deceptive media for phishing attacks that even the most sophisticated firewalls may struggle to detect. Fraudsters can also use it to create fake identities by forging real documents and exploiting them for financial fraud.
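To make the squatting-domain risk concrete, here is a toy Python sketch — the allow-list, function name, and similarity threshold are my own assumptions, and real URL-filtering systems like the one Palo Alto Networks describes are far more sophisticated — that flags domains suspiciously similar to a legitimate one:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of legitimate domains for this illustration.
LEGITIMATE = {"chat.openai.com", "openai.com"}

def looks_like_squatting(domain: str, threshold: float = 0.75) -> bool:
    """Flag a domain that is suspiciously similar to, but not exactly,
    a known legitimate domain."""
    if domain in LEGITIMATE:
        return False
    return any(
        SequenceMatcher(None, domain, real).ratio() >= threshold
        for real in LEGITIMATE
    )

print(looks_like_squatting("chat.openai.com"))   # exact match → False
print(looks_like_squatting("chat-openai.com"))   # look-alike → True
print(looks_like_squatting("example.org"))       # unrelated → False
```

Even a crude check like this shows why one altered character in a domain is enough to fool a hurried reader — which is exactly what phishing campaigns count on.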

Information vs Identity theft:

As per OpenAI's data usage policy, ChatGPT stores conversation history and may use it to train its AI. When using ChatGPT at work, be mindful of what information you give out in the prompt. OpenAI has acknowledged a data breach in the past, and such security vulnerabilities can put critical business data at risk. In cases involving financial statements or confidential client information, a leak may even land you in legal trouble.

Human replacement vs unemployment:

The AI Marketing Benchmark Report also notes that 71.2% of marketers believe AI can outperform humans at their jobs. In practice, though, ChatGPT is relied on to automate and speed up research and content generation. Marketing has always valued creativity and innovation over speed and budget, and new ideas, coming from anyone on the team, will always matter more than information produced by a tool. To address productivity within the team, career shifts can be made: trained writers can move into editing roles, and market researchers into data analyst roles, which can be value-added placements for team members' career growth.

Also, people connect better with brands that have a human way of outreach. Personal interactions, memorable experiences, and emotional engagement lead to long-term loyalty.

The use of ChatGPT raises critical questions about work ethics and responsibility. I find it a powerful tool when used ethically. If you are a manager, set standards and guidelines for your team's usage, and keep an eye on regulations and laws so you can benefit from this technology safely. As employees, we all need to be transparent and professional in how we use the tool and take complete accountability for checking facts and accuracy.

This was my list. Do you have anything to add? Mention it in the comments; I would love to hear from you.
