Navigating the GenAI Revolution: 5 Strategies for Safe and Effective Marketing

In this presentation, Udipta will delve into the roles of both predictive and generative AI in marketing, with a strong emphasis on the importance of responsible implementation. Drawing from his experience at Salesforce, Adobe, and WPP, he will unpack the transformative potential of generative AI while underscoring the foundational strengths of predictive AI. Udipta will outline five essential strategies that marketing teams can adopt to maximize the benefits of generative AI while effectively managing its associated risks. Through practical examples and case studies, he aims to equip attendees with actionable insights to drive responsible AI adoption within their organizations.

This talk was presented at Productivity Conf - Practical AI in Marketing. Check out the latest edition of this tech conference.

FAQ

Who is the speaker?
Udipta Basumatari is the CMO of Travers.io, a sports AI technology company, with 16 years of experience in marketing technology roles at companies like Adobe, Salesforce, Rakuten, and WPP.

What is the presentation about?
The presentation focuses on strategies for using generative AI in marketing while prioritizing safety, ethics, and compliance.

What milestones has marketing technology gone through?
Key milestones include the movement of software into cloud systems, the rise of mobile internet, the impact of social media, and the development of big data and tools like Customer Data Platforms (CDPs).

How do predictive and generative AI differ?
Predictive AI classifies, predicts, and takes actions based on data, while generative AI creates new content such as text, images, or videos based on input data.

What are the risks of using generative AI in marketing?
Risks include data breaches, intellectual property theft, compliance violations, misinformation, and damage to brand reputation.

How can companies manage these risks?
Companies can implement data security frameworks, create secure user interfaces, conduct regular risk assessments, prioritize ethical AI use, and invest in employee training.

What does a data security framework involve?
A data security framework involves protecting sensitive information through methods like masking and secure data retrieval, ensuring data is not retained after use.

Why does ethical AI matter?
Ethical AI ensures transparency and trust in how AI is used in marketing, helping to maintain customer trust and avoid legal issues.

Why invest in continuous education and training?
Continuous education and training ensure that employees understand how to use generative AI responsibly and effectively, reducing the risk of data leaks and compliance issues.

What are hybrid systems?
Hybrid systems combine local and third-party AI models to balance customization and data security, although they require significant management and integration efforts.

Udipta Basumatari
25 min
05 Dec, 2024

Video Summary and Transcription
This talk discusses the effective use of Gen AI in marketing while prioritizing safety, ethics, and compliance. Challenges of using generative AI include data breaches, intellectual property theft, compliance violations, and damage to brand reputation. Best practices for implementing generative AI include secure data retrieval, masking techniques for sensitive information, and toxicity checking. Strategies for safe and secure usage of Gen AI involve implementing a sensitivity layer for data protection and developing a secure user interface. Additionally, ethics training, continuous education, and prioritizing ethical AI use cases are crucial for successful implementation.

1. Introduction to Gen AI in Marketing

Short description:

Today, we will discuss how we can use Gen AI effectively in marketing while prioritizing safety, ethics, and compliance. The milestones that marketing technology has gone through include the movement of the software industry into cloud systems, the pivotal moment of mobile access to the internet, the rise of social media, and the importance of big data. We will specifically focus on Gen AI and its relationship with machine learning and deep learning. AI in marketing has predominantly been predictive AI.

Hello, everyone, and thank you for joining me today. My name is Udipta, and I am currently acting as the CMO of Travers.io, a sports AI technology company. I have about 16 years of experience in martech-related roles at companies like Adobe, Salesforce, Rakuten, and WPP.

Today I'm excited to take you through some critical strategies for navigating the world of generative AI or Gen AI. Today, we will discuss how we can use Gen AI effectively in marketing while prioritizing safety, ethics, and compliance. With the rapid rise of generative AI, I think these practices and strategies are not just beneficial but have become essential, as we will see.

I won't be talking much about the benefits of using Gen AI, because this whole conference is about that. But I would like to share my experience of how the companies that I have been part of are using these strategies to make sure that the use of generative AI happens in a very safe environment, one which does not harm the reputation of the company and, in fact, adds to its productivity. So with that, I would like to quickly cover the milestones that marketing technology has gone through. There is no strict order to these; it's more about the crucial milestones and pivotal points, starting with the movement of the software industry into cloud systems. That's when we got access to tools, devices, and software that we could use from anywhere in the world. Mobile was obviously a very pivotal moment, because people could now access the internet on the move, which meant that the capacity of organizations to reach out to them expanded exponentially. Social media was another big step, which was then succeeded by data, specifically big data, because the data being captured about users and their interactions across channels, social media, and mobile devices now all had to be stitched together. That saw the rise of tools like CDPs, which have become quite ubiquitous among both SMBs and multinational organizations.

The one that we're going to be looking at is artificial intelligence, and specifically Gen AI, because every time a new technology comes out, there are lots of question marks around it. And this is what I would be happy to address today: to quickly understand where Gen AI sits. Now, we usually talk about artificial intelligence as this all-encompassing concept. However, it's important to understand that machine learning is typically the form of artificial intelligence we are most familiar with. And as you dig deeper, machine learning is quite easy to understand: you have a set of data, you give a specific set of instructions to a system, and it learns using neural networks. Neural networks become important because there are multiple nodes, pretty much like the human brain. Multiple nodes start to interact, and that speeds up the process of learning. Deep learning is when you have multiple nodes, but arranged in multiple layers, so you have a far more complex way of letting the system understand and learn, much like a human being. And that is where generative AI comes in. For many, many years we have been relying on AI in marketing, but it has mostly been predictive AI. Predictive AI is something which can classify, predict, and take actions. So you think about things like doing math, your metrics, your calculations.

2. Challenges of Using Generative AI

Short description:

Generative AI is a new and powerful form of AI that generates outputs based on data. However, the use of generative AI presents challenges related to trust, safety, and compliance. These challenges include the risk of data breaches, intellectual property theft, compliance violations, copyright infringement, misinformation, and damage to brand reputation. To address these challenges, companies need to implement proper safeguards and establish a data security framework.

And because we've been using it for so many years, it is a trusted form of AI right now. And not many people question the use of it or the output that they receive when they use it.

Generative AI, on the other hand, is something that generates content based on data that you feed it. So you can look at it more as a system which is good at generating something new, like art. It is obviously brand new, and so that's where some of these trust and safety practices haven't become as ubiquitous as we would like them to be.

So, again, generative AI takes in information, whether it's text, images, videos, or any other kind of data. There are specific data models that learn using deep learning methodologies. And then, when you prompt something, you get an output: that output could be in the form of an image or text, or, as the models get more and more sophisticated, you are starting to see people create entire videos, entire software codebases, and so on. But the important point is that because there is such an enormous amount of data flowing from the source to the output, there are a lot of problems that can happen, and companies should be aware of how to address those problems.

Starting with number one, data breaches. One of the most famous examples, which we'll look at as a case study, is Samsung. Essentially, it's when company employees use a public LLM like ChatGPT or Perplexity and push company-sensitive or customer-sensitive data out to those systems. That can lead to a lot of problems when it comes to people looking for information about the company and trying to find backdoors. So that is clearly an issue. Number two is intellectual property theft. Again, if you are going to be putting company-specific IP on a server that does not belong to you, you don't really know where it's going to end up. So that is the second problem.

Number three is about compliance. Compliance is very complex, obviously. We live in a world of regulations like GDPR, and using generative AI without proper safeguards can result in violations of these regulations, which will lead to hefty fines and legal repercussions. Moreover, the output generated by Gen AI may inadvertently infringe on existing copyrights or trademarks, something that any organization should be very careful about. You also have the issue of creating misinformation and damaging brand reputation. For example, if you use Gen AI to produce misleading or inaccurate content, it can adversely affect the company's reputation when you use those things in marketing or customer communication, something that goes out there and is available to everybody. Also, pushing out inaccurate information without fact-checking can lead to customer dissatisfaction, loss of trust, and potential backlash.

So ultimately, all of these boil down to trust. And we know that trust is one of the most difficult commodities for a brand to earn, especially in the hyper-competitive environment that brands exist in today. So it is very important to ensure that employees are well aware of these challenges and dangers before they use software like generative AI tools. One of the ways in which companies can prevent these issues from cropping up is to have a data security framework. My experience of this comes from my work with Salesforce's Einstein Trust Layer, when I was working with the Data Cloud product.

3. Implementing Best Practices for Generative AI

Short description:

Companies can easily deploy best practices for creating prompts and outputs for generative AI. The data security framework ensures secure data retrieval. Grounding uses customer data to customize messages, with masking techniques to protect sensitive information. Data sent to third-party models should not be retained, and options include working with companies like Salesforce or hosting models locally. The output should be checked for toxicity before being sent to the public.

And I felt that most of these best practices can be easily deployed by companies, if they can put a bit of resources, time, and energy into it. The whole idea is to have a framework where, regardless of how untrained or inexperienced the team may be, they're able to create prompts and outputs that can be shared with enterprise-level software.

So let's take a journey and see how this works out. We start with a prompt. A prompt is just text that you put down; it's a query. The moment you press send on the prompt, the data security framework kicks in. The prompt is accepted securely; that's the secure data retrieval part. And then comes the first challenging part, which is grounding. Grounding is basically using your customer's data to customize the message. Now, the company data is going to be accessed and used for grounding, and that's the first area of risk, because you're exposing your company's data to a system which is going to be used to communicate with the generative AI.

However, there are ways to make sure that this does not fall into the wrong hands, even if the data is leaked. Masking is the first step. Masking means that if a user's ID is user1234, the system masks it as something like XYZ12345. And when the output is given back to the system, we match it back so that the correct name of the user is used. After that comes the complicated part of sending it to an LLM. Now, LLMs like ChatGPT, or any of these companies which provide APIs, are obviously very sophisticated, expensive models, costing upwards of $100 billion, and not easy for most companies to build themselves.
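To make that masking step a bit more concrete, here is a minimal TypeScript sketch. The user-ID pattern and the placeholder format are assumptions for illustration; this is not how Salesforce's Einstein Trust Layer is actually implemented.

```typescript
// Hedged sketch of masking/unmasking around an LLM call.
// Sensitive identifiers are swapped for opaque placeholders before the prompt
// leaves the system, and swapped back when the generated output returns.

type MaskResult = { masked: string; lookup: Map<string, string> };

const USER_ID_PATTERN = /\buser\d+\b/g; // assumption: user IDs look like "user1234"

function maskPrompt(prompt: string): MaskResult {
  const lookup = new Map<string, string>();
  let counter = 0;
  const masked = prompt.replace(USER_ID_PATTERN, (match) => {
    const token = `XYZ${++counter}`; // opaque placeholder, e.g. "XYZ1"
    lookup.set(token, match);        // remember how to restore it later
    return token;
  });
  return { masked, lookup };
}

function unmaskOutput(output: string, lookup: Map<string, string>): string {
  let restored = output;
  for (const [token, original] of lookup) {
    restored = restored.split(token).join(original); // put the real IDs back
  }
  return restored;
}

// The external model only ever sees the placeholder, never the real ID.
const { masked, lookup } = maskPrompt("Write a renewal email for user1234.");
console.log(masked);                               // "Write a renewal email for XYZ1."
console.log(unmaskOutput("Hi XYZ1, ...", lookup)); // "Hi user1234, ..."
```

The same idea extends to emails, account numbers, or any other identifier the grounding step pulls in: the mapping stays inside your own systems, so the third-party model never sees the raw values.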

Anyway, when the data reaches here, this is where the magic happens. Now, for this magic to happen, for this output to be created, you have to ensure that the data taken in by a third-party model is not retained. The moment the output is created, that data has to be destroyed. Major corporations like Salesforce, for example, work with OpenAI and have contracts in place. It's up to the organization using it to ensure that these contracts are respected, or, of course, if you have the option, you can have your own locally hosted model, which significantly reduces the risk and danger of your data being leaked.

Now that's not the end of the story, because you also need to make sure that the output you're going to be sending out to the public does not contain toxicity, like bad language, slang, or controversial opinions.

4. Implementing a Secure Data Framework

Short description:

To ensure data security, it is crucial to destroy data accessed by third-party models once the output is created. Organizations can work with companies like Salesforce or have locally hosted models to reduce the risk of data leakage. The output should be checked for toxicity and factual errors. Maintaining an audit trail is important for tracking responsibilities and fixing mistakes. Once satisfied, the output can be sent to marketing apps, but caution is needed with third-party systems due to compliance and data security risks. Building an in-house solution requires resources and collaboration between tech and marketing teams. Hybrid systems combining both options increase management overhead and integration complexity.

Once that is done, the prompt is sent, the generation happens, and that's where the output comes out. As mentioned, that output needs to be checked for toxicity before it goes out to the public, and also for hallucinations, which LLMs are still well known for: they make factual mistakes, but present them in a way that looks correct and true. So you have to be a bit careful about that. Finally, it is important to keep an audit trail, so that you know who was responsible for which step and at which step something went wrong, so that you can track back, understand what the mistakes were, and fix them for the future.
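To sketch what those post-generation checks could look like, here is a toy TypeScript example; the flagged-term list and the in-memory log are placeholders for whatever moderation service and durable logging system an organization actually uses.

```typescript
// Hedged sketch: a toy toxicity screen plus an audit trail entry per step.

const FLAGGED_TERMS = ["damn", "guaranteed cure", "scam"]; // illustrative list only

interface AuditEntry {
  step: string;
  actor: string;      // who or what performed the step
  timestamp: string;
  passed: boolean;
  details?: string;
}

const auditTrail: AuditEntry[] = [];

function record(step: string, actor: string, passed: boolean, details?: string): void {
  auditTrail.push({ step, actor, timestamp: new Date().toISOString(), passed, details });
}

function toxicityCheck(output: string, actor: string): boolean {
  const hit = FLAGGED_TERMS.find((term) => output.toLowerCase().includes(term));
  record("toxicity-check", actor, hit === undefined, hit ? `flagged term: ${hit}` : undefined);
  return hit === undefined;
}

// Only forward the output to marketing apps if the check passes.
const draft = "Hi XYZ1, here is your personalised offer.";
if (toxicityCheck(draft, "campaign-assistant")) {
  record("dispatch", "campaign-assistant", true, "sent to CRM");
}
console.table(auditTrail); // the trail shows who did what, and where it went wrong
```

The specific checks matter less than the habit: every step, pass or fail, leaves a record that can be traced back later, which is exactly what the audit trail is for.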

Finally, once you're satisfied with the output, it's time to send it to your marketing apps: your CRM systems, your advertising systems, any APIs or points of data exchange where this information will be used and disseminated to your audience. So that is just a high-level view of a security framework. Now, obviously, it's not possible for every company to build this themselves. Most of the major companies providing AI tools do come with some kind of security or trust layer of their own. So if you want to use third-party systems, absolutely, feel free to do so, but just be aware that there is limited customization there. And, of course, there can be compliance concerns and data security risks, because if those companies get into trouble, if they get hacked or something, your data is also going to be out there after that. The other, more extreme option is to build something in-house.

Now, that is an option I'm seeing more and more of these tech-focused, highly technical companies take. For example, there is an OTA in Southeast Asia which is hosting everything locally, with very sophisticated use of the tools being built around their LLM system. The only problem is that this requires a lot of resources and intensive training, and you need to make sure that the team building this works closely with marketing, because that can be a challenge; most of you with experience will know that it is not really easy to get the marketing team to work alongside a tech team. So just be aware of that. And the third option is hybrid systems. This combines the best of both. The only problem is that it requires a significant increase in management overhead and integration complexity, and whenever there are updates to one system, you need to make sure the other system stays compliant and the two keep working together.
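One way to picture a hybrid setup is a small routing layer that keeps anything sensitive on a locally hosted model and lets only public-safe prompts reach a contracted third-party API. The sketch below is purely illustrative; the classification labels and the two model clients are placeholders, not any vendor's actual API.

```typescript
// Hedged sketch of hybrid routing between a local model and a hosted API.

type Classification = "public" | "internal" | "confidential";

interface LlmClient {
  complete(prompt: string): Promise<string>;
}

// Placeholder clients; in practice these would wrap a self-hosted model server
// and a contracted third-party API respectively.
const localModel: LlmClient = {
  complete: async (p) => `[local model] processed ${p.length} chars`,
};
const hostedModel: LlmClient = {
  complete: async (p) => `[hosted API] processed ${p.length} chars`,
};

async function route(prompt: string, label: Classification): Promise<string> {
  // Anything classified above "public" never leaves the company's own infrastructure.
  const client = label === "public" ? hostedModel : localModel;
  return client.complete(prompt);
}

route("Draft a friendly launch tweet", "public").then(console.log);           // may use the hosted API
route("Summarise the Q3 revenue forecast", "confidential").then(console.log); // stays local
```

The integration overhead mentioned above shows up exactly here: both clients, the classification scheme, and any updates to either model have to be kept in sync.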

5. Strategies for Safe Usage of GenAI

Short description:

There is a solution available for every organization to use GenAI in a safe way. Implementing a sensitivity layer for data protection, including keyword identification and data filtering, can help protect sensitive information. A study showed up to 95% accuracy in detecting common structured sensitive data patterns. Samsung implemented robust AI security measures after a code leak incident, including the deployment of Samsung Knox Matrix and pre-screening of system code.

So, there's no easy way out. However, I think there is a solution available for every organization. And with that, there are lots of things to cover, but I wanted to cover only five. So I'm going to first look at strategy number one. How do you make sure that you use GenAI in a safe way? First things first, as we saw earlier, you need to implement a sensitivity layer for data protection. This is basically a layer applied to documents and inputs to protect sensitive information. Some of the best practices are keyword identification, where you use semantic mapping to identify sensitive keywords, for example financial data or strategic plans, and apply contextual checks. Second is data filtering, where you implement a system that rejects inputs containing sensitive keywords before they're processed by an LLM. Some recent studies show up to 95 percent accuracy for detecting common structured sensitive data patterns; these were real-world results from Amazon Macie and similar DLP tools for detecting specific structured patterns.
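A minimal sketch of such a sensitivity layer might look like the TypeScript below. The keyword list and the structured-data regexes are illustrative stand-ins, not a real DLP rule set of the kind tools like Amazon Macie maintain.

```typescript
// Hedged sketch of a sensitivity layer: block prompts that contain sensitive
// keywords or common structured data patterns before they reach an LLM.

const SENSITIVE_KEYWORDS = ["financial forecast", "strategic plan", "unreleased product"];

const STRUCTURED_PATTERNS: Record<string, RegExp> = {
  creditCard: /\b(?:\d[ -]?){13,16}\b/,         // rough card-number shape
  email: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/,         // email addresses
  apiKey: /\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b/, // vendor-style secret keys
};

type FilterResult = { allowed: true } | { allowed: false; reason: string };

function filterInput(prompt: string): FilterResult {
  const lower = prompt.toLowerCase();
  const keyword = SENSITIVE_KEYWORDS.find((kw) => lower.includes(kw));
  if (keyword) return { allowed: false, reason: `sensitive keyword: "${keyword}"` };

  for (const [name, pattern] of Object.entries(STRUCTURED_PATTERNS)) {
    if (pattern.test(prompt)) return { allowed: false, reason: `structured pattern: ${name}` };
  }
  return { allowed: true };
}

// Inputs that fail the check never reach the LLM.
console.log(filterInput("Summarise our strategic plan for 2026"));          // blocked
console.log(filterInput("Write a thank-you note to jane.doe@example.com")); // blocked
console.log(filterInput("Draft a social post about our new blog"));         // allowed
```

Real systems layer semantic and contextual checks on top of patterns like these; plain keyword matching alone produces both false positives and misses.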

One of the main examples that I covered earlier is Samsung. This was quite a famous, or rather infamous, incident, after which Samsung implemented robust AI security following a big code leak. Someone on their dev team pasted the company's code into ChatGPT to query about certain outputs, but it got leaked, and anybody in the world could access it. Nobody wants that. But Samsung has done a lot since then. They've deployed Samsung Knox Matrix for AI interactions, they now pre-screen all system code and technical documentation, and there's a lot of real-time sensitive data detection happening. This may not be possible for all companies, but you can definitely take away a lot of lessons from what Samsung had to go through.

6. Strategies for Secure Usage of GenAI

Short description:

Develop a secure user interface for interacting with generative AI models. Implement compliance checks and format queries before sending requests. Conduct regular risk assessments and develop strategies to address identified risks. Companies that implement comprehensive AI risk management can reduce security vulnerabilities by up to 40%. Prioritize ethical AI use cases by establishing transparency policies and providing ethics training. Trust is the most important thing.

The second strategy would be to develop a secure user interface. This basically means creating a UI that interacts with the generative AI models. Now, the important thing is to make this UI as easy as possible to use, but also fail-proof by design, so that even if someone tries to make a mistake, the system is designed in a way that doesn't allow it. Some best practices would be compliance checks, where the UI formats the queries and performs compliance checks before sending requests to the LLMs. Then you could look at input formatting, where essentially you convert sensitive documents, for example Excel files, to secure text formats such as CSV. Something as simple as that can actually help reduce incidents. There was a study done by GitHub which found that companies deploying these best practices report a 27% reduction in security vulnerabilities.
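As a rough sketch of what those UI-side checks could look like, here is a hedged TypeScript example; the request fields, the compliance rules, and the CSV conversion are assumptions for illustration, not a prescription.

```typescript
// Hedged sketch of a UI-side pre-flight: format the request, run compliance
// checks, and only then forward it to the LLM backend.

interface PendingRequest {
  user: string;
  purpose: string; // e.g. "email-draft", "ad-copy"
  body: string;
}

// Convert already-parsed spreadsheet rows into plain CSV text before they are
// attached to a prompt (a spreadsheet parser would supply the rows).
function rowsToCsv(rows: string[][]): string {
  return rows
    .map((row) => row.map((cell) => `"${cell.replace(/"/g, '""')}"`).join(","))
    .join("\n");
}

function complianceCheck(req: PendingRequest): string[] {
  const problems: string[] = [];
  if (!req.purpose) problems.push("missing declared purpose");
  if (req.body.length > 8000) problems.push("payload too large for review");
  if (/\bpassword\b/i.test(req.body)) problems.push("possible credential in body");
  return problems;
}

function submit(req: PendingRequest): void {
  const problems = complianceCheck(req);
  if (problems.length > 0) {
    console.warn("Blocked before reaching the LLM:", problems); // surface this in the UI
    return;
  }
  console.log("Forwarding compliant request for:", req.purpose);
}

submit({
  user: "analyst-01",
  purpose: "email-draft",
  body: "Segment summary:\n" + rowsToCsv([["segment", "size"], ["loyal customers", "1200"]]),
});
```

The point of doing this in the UI is that a well-meaning but untrained user cannot skip the checks; the fail-proof-by-design idea is enforced by the interface itself.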

Strategy three would be to conduct regular risk assessments. This means assessing the risks associated with Gen AI technology to identify vulnerabilities. The best practices I would pick are threat modeling, to identify potential threats specific to generative AI applications, and mitigation plans, which basically means developing strategies to address identified risks, including incident response protocols. You need to have them in advance, as many of them as possible, because when an incident happens, you don't want to be struggling for solutions or ideas; at that point, you need those protocols already in place. The real-world results are quite encouraging for companies deploying these best practices: companies that implement comprehensive AI risk management can reduce their security vulnerabilities by up to 40%. Morgan Stanley is a great case study here. They have built customized AI tools within the company, with assistance from OpenAI, to help their advisors use their existing systems like Outlook and Salesforce securely, because a lot of data goes out of the company's systems through that software. And the adoption rate has been really high. From my understanding, one of the reasons is obviously that they have spent a lot of time training their associates, but also that the UI of the system is really, really easy to use. That's been one of the big reasons behind the adoption.
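To show what pre-agreed mitigation plans and incident-response protocols can look like on paper, here is an illustrative risk register sketched in TypeScript; the entries, fields, and owners are hypothetical examples rather than a standard template.

```typescript
// Hedged sketch of a GenAI risk register: each identified threat carries a
// mitigation and a pre-agreed incident-response protocol, so nobody is
// improvising when something goes wrong.

type Level = "low" | "medium" | "high";

interface RiskEntry {
  threat: string;
  likelihood: Level;
  impact: Level;
  mitigation: string;
  incidentResponse: string[]; // ordered steps, agreed in advance
  owner: string;
}

const riskRegister: RiskEntry[] = [
  {
    threat: "Employee pastes customer data into a public LLM",
    likelihood: "medium",
    impact: "high",
    mitigation: "Sensitivity layer blocks prompts containing structured personal data",
    incidentResponse: ["Revoke access", "Notify the DPO", "Assess GDPR reporting duty"],
    owner: "security-team",
  },
  {
    threat: "Generated campaign copy contains a factual error",
    likelihood: "high",
    impact: "medium",
    mitigation: "Human fact-check gate before anything is published",
    incidentResponse: ["Pull the asset", "Publish a correction", "Log the root cause"],
    owner: "marketing-ops",
  },
];

// A simple review view: surface the highest-impact risks first.
const highImpact = riskRegister.filter((r) => r.impact === "high");
console.log(highImpact.map((r) => `${r.threat} -> ${r.mitigation}`));
```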

Strategy four would be to prioritize ethical AI use cases. This is, again, about establishing ethical guidelines for using generative AI in marketing campaigns. For that, I would say the best practices are transparency policies, where you clearly communicate how generative AI is used in marketing efforts and how customer data is handled, and ethics training, where you provide training for employees on the ethical considerations of using these technologies. A report from KPMG found that companies focusing on trust and ethics see 15 to 20% higher customer retention rates, which is great. At the end of the day, trust is the most important thing.

7. Ethics Training and Continuous Education

Short description:

Provide ethics training for employees and establish an ethical AI framework. Invest in continuous education and training, including regular sessions and cross-functional collaboration. Companies that prioritize sensitive data protection, easy-to-use UIs, regular risk assessments, ethical AI use cases, and employee education will benefit the most.

One of the companies which has invested a lot in this, and I've been very fortunate to experience this journey myself, is Salesforce, where there is a clear ethical AI framework that establishes guidelines for responsible AI use in marketing. Not only is this disseminated among employees for their understanding, but a lot of it is also built into their systems. The whole idea of, for example, the Salesforce Einstein Trust Layer is that trust should be paramount; trust should be the first tenet to consider while designing a system like this.

Finally, strategy five is the human factor: invest in continuous education and training. This is to ensure that the marketing team is educated about the safe use of Gen AI technologies. My best practices would be to have regular training sessions; most organizations have these, and some take it to the point where, unless you pass the tests with a high score, you don't actually get to use the tools. I think that's a really great way to ensure that nothing is leaked and nothing falls through the cracks. The second best practice would be cross-functional collaboration. Generative AI is a highly technical technology, let's put it that way, which is going to be used by every single person in your company. So it obviously makes sense to have those cross-functional collaborations so that different teams can share their knowledge, their best practices, and their concerns. And as this is a new technology, over time that corpus of information will become crucial to a company's overall best practices for using generative AI.

To conclude and summarize: companies have a lot of opportunities right now to make the most of generative AI, to gain competitive advantage and increase productivity. However, there are also a lot of pitfalls and dangers. A company which invests in a sensitive data protection layer, which makes a UI that is really easy to use and foolproof when interacting with LLMs, which conducts regular risk assessments, prioritizes ethical AI use cases, and invests in its employees' continuous education: those are the companies that will benefit the most from generative AI without getting into trouble. So that's it, and thank you very much for coming to my presentation. I hope you found it useful, and feel free to reach out to me if you have any questions. Thank you very much.