INBOX INSIGHTS, May 1, 2024: AI Ethics, Model Tuning


Inbox Insights from Trust Insights

👉 Register for our upcoming webinar, Generative AI for Professional Associations!

Helpful, Honest, Harmless

Are you using Generative AI?

That sounds like the start of a sales pitch, doesn’t it?

It’s not. I promise.

I genuinely want to know if you’re using Generative AI. Not only do I want to know, but I want to know if you understand how the system you’re using decides what responses to give you.

I’m not going to get into technical details; that’s not why you’re here. There are academic papers and other articles that get into the weeds. For context, at a high level, the guiding principle for how companies like OpenAI and Anthropic train Large Language Models (LLMs) is HHH: Helpful, Honest, and Harmless.

Sounds good, right? Who wouldn’t want to use a system that is helpful, honest, and harmless?

Well, let’s not get ahead of ourselves.

There isn’t one singular definition of helpful.

There isn’t one singular definition of honest.

There isn’t one singular definition of harmless.

You see where I’m going with this.

Generative AI is a great tool to integrate into your workflow. There are a lot of reasons why marketers would want to optimize their efficiencies. But this is where I am going to encourage you to read the fine print. The thing that we all say we do, but don’t.

Ok, I’m actually going to ask you to do more than just that. Before you sign up for a generative AI tool to integrate into your workflow, I want you to go through a simple exercise. The goal is to determine what you will and won’t accept from a Large Language Model.

I’d like you to start by outlining your company values. When you think about helpful, honest, and harmless, you should be able to tie those into what your company stands for.

As an example, here are the values that we outlined for Trust Insights:

  • We reject deception and secrecy. We are transparent and honest.
  • We reject laziness and stupidity. We are committed and smart.
  • We reject obfuscation and bullshit. We are clear and direct.
  • We reject discrimination and bias. We are fair and just.
  • We reject ego and selfishness. We are humble and generous.
  • We reject pigheadedness and willful ignorance. We are cooperative and aware.
  • We reject gloomy and dramatic. We are cheerful and agreeable.
  • We reject thoughtless acceptance of the status quo. We do better.

When I go through the exercise of selecting a piece of software, like generative AI, I want to have those values front and center. Why? Because the way the model is trained may not align with your values. For instance, what I think is fair and just may not resonate with you.

No, this is not normally a step you need to take when assessing software vendors. You want to take this extra step because of how companies are training the models. Unless you’re getting into the code (which they won’t share with you) you don’t know what the companies consider helpful, honest, or harmless. You have to do your due diligence and make those judgments for yourself.

Once you have a shared understanding of your values, go ahead and read the fine print, also known as the terms and conditions. Make sure you know what you’re signing up for and that you’re comfortable with the software. Generative AI is rapidly evolving. So quickly that most of us feel like we can’t keep up, let alone know exactly what it entails.

This is an important time in our industry to be skeptical and questioning. Someone who isn’t you is deciding what is helpful. Someone who isn’t you is deciding what is honest. Someone who isn’t you is deciding what is harmless. You can’t control that. But you can control whether or not to use their software.

Our friends over at the Marketing AI Institute are doing a lot of work trying to understand and educate on this topic. Be sure to follow them to stay up to date as well as following our content.

Are you clear on your values? Reply to this email to tell me or come join the conversation in our Free Slack Group, Analytics for Marketers.

– Katie Robbert, CEO

May 2024 One Click Poll

Please click/tap on just one answer – this is our monthly survey to see how we’re doing, so please do take it each month!

How likely are you to recommend Trust Insights as a consulting firm to someone in the next 90 days?

We use this information to measure how effective our marketing is. There’s no form to fill out – tapping your answer is literally all there is to it. Thank you for your help!

Share With A Colleague

Do you have a colleague or friend who needs this newsletter? Send them this link to help them get their own copy:

https://www.trustinsights.ai/newsletter

Binge Watch and Listen

In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris tackle the complex and critical topic of AI ethics. You will explore the fundamental principles of ethics and how they apply to the development and use of artificial intelligence. Discover the importance of transparency, accountability, and minimizing harm when working with AI models. Finally, learn how to navigate the challenges of bias and censorship in AI, and how to choose the right tools and frameworks that align with your company’s values and ethical principles.

Watch/listen to this episode of In-Ear Insights here »

Last time on So What? The Marketing Analytics and Insights Livestream, we walked through how to use AI for SEO. Catch the episode replay here!

On this week’s So What? The Marketing Analytics and Insights Live show, we’ll be digging into how to use AI for email marketing. Are you following our YouTube channel? If not, click/tap here to follow us!

In Case You Missed It

Here’s some of our content from recent days that you might have missed. If you read something and enjoy it, please share it with a friend or colleague!

Paid Training Classes

Take your skills to the next level with our premium courses.

Free Training Classes

Get skilled up with an assortment of our free, on-demand classes.

Data Diaries: Interesting Data We Found

In this week’s Data Diaries, let’s discuss model tuning. Many AI services from big tech companies, such as Google’s AI Studio, OpenAI’s Platform, Anthropic’s Console, and IBM’s WatsonX Studio, offer the ability to create tuned models. But what does this mean, and why would you do it?

Large language models work based on the prompts we give them. In general, the more specific, relevant text we provide in a prompt, the more likely it is we’re going to get a satisfactory output for most common tasks. The key phrase there is common tasks – the major use cases like summarization, extraction, classification, rewriting, question answering, and generation all have thousands or millions of examples around the web that models have trained on.

Sometimes, however, you want a model to perform a very specific task, a very specific way – and because language is naturally ambiguous, language models may not always do things the same way even when instructed to do so, much in the same way a toddler may not do things the same way even with firm instructions.

Generally speaking, you get better performance out of models by providing a few examples. You might have a specific style of summarization, so in your prompt, you’d specify a few examples of the right and wrong way a model should summarize your input text.
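As a sketch of what that looks like in practice, here’s a minimal few-shot prompt builder. The instruction text, the example pairs, and the function name are all illustrative, not from any vendor’s documentation; substitute your own summarization style and whichever model API you actually use.

```python
# Toy examples of the summarization style we want the model to imitate.
# A real prompt would use examples drawn from your own content.
EXAMPLES = [
    ("Q1 revenue rose 12% on strong ad sales; costs were flat.",
     "Revenue +12% (ads); costs flat."),
    ("The team shipped v2.0 with a new dashboard and fixed 40 bugs.",
     "v2.0 shipped: new dashboard, 40 bug fixes."),
]

def build_few_shot_prompt(text: str) -> str:
    """Assemble instruction + examples + the new input into one prompt string."""
    parts = ["Summarize in one terse line, telegram style.\n"]
    for source, summary in EXAMPLES:
        parts.append(f"Text: {source}\nSummary: {summary}\n")
    # End with the new input and a trailing "Summary:" for the model to complete.
    parts.append(f"Text: {text}\nSummary:")
    return "\n".join(parts)

print(build_few_shot_prompt("Our webinar drew 500 signups, double last month."))
```

The resulting string is what you’d send as the prompt; the examples do the work of pinning down the style you want.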

But sometimes, you need a model to conform exactly to a format, and even a few examples may not be enough to guarantee that output. That’s when you switch from prompting to model tuning. How it works is relatively straightforward: you provide a LOT of specific examples of the way you want a model to do a task, and then with the help of AI infrastructure (like that provided by the big AI tech companies), you essentially change how the model works by teaching it those examples.

For example, suppose you were building a system to do something like sentiment analysis. If you’ve ever done sentiment analysis with a large language model, you can tell it to provide only a numerical score and most of the time it will – but some of the time it wants to wax rhapsodic about your input text. That’s fine if you’re using a language model in a consumer interface like ChatGPT. That’s not fine if you’ve incorporated the language model into other software, like your CRM.

In that case, you’d want to build at least a thousand examples of exactly how you want the model to respond, in key-value pairs that look like this toy example:

  • Input: Score the sentiment of this text: “I really hate when my food is delivered cold.”
  • Output: -5

You’d have many, many specific examples of this in what’s essentially a spreadsheet, and you’d give that to the training software to tune the model to become really, really good at this one task, delivering exactly the output you want.
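As one concrete illustration of what that “spreadsheet” can look like: OpenAI’s fine-tuning service, for example, accepts a JSONL file where each line is a chat-formatted training example. The rows, system instruction, and filename below are toy values, and other vendors use their own formats, so check your platform’s documentation before building a real dataset.

```python
import json

# Toy training pairs; a real tuning set needs on the order of a thousand rows.
examples = [
    ("I really hate when my food is delivered cold.", "-5"),
    ("The delivery arrived early and piping hot!", "5"),
    ("The order showed up. It was food.", "0"),
]

SYSTEM = ("Score the sentiment of the user's text from -5 to 5. "
          "Reply with the number only.")

# Write one JSON object per line (JSONL), each a complete chat exchange.
with open("sentiment_tuning.jsonl", "w", encoding="utf-8") as f:
    for text, score in examples:
        row = {"messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": text},
            {"role": "assistant", "content": score},
        ]}
        f.write(json.dumps(row) + "\n")
```

The assistant turns contain only the bare number, because that is the behavior you’re teaching: the tuned model learns the format from the examples, not from instructions.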

As you migrate and evolve from end-user, consumer use of generative AI to organizational and enterprise use cases, these predictable, reliable responses become more and more important. When integrated into other software, there’s no opportunity to go back and ask the model to do it again, so tuning the model for a specific use case is essential.

The key takeaway to remember is that tuning language models makes them very good at one specific task. If you have a mission-critical task you need the model to do right all the time, tuning the model is the way to go.

Trust Insights In Action
Job Openings

Here’s a roundup of who’s hiring, based on positions shared in the Analytics for Marketers Slack group and other communities.

Join the Slack Group

Are you a member of our free Slack group, Analytics for Marketers? Join 3,000+ like-minded marketers who care about data and measuring their success. Members also receive sneak peeks of upcoming data, credible third-party studies we find and like, and much more. Membership is free – join today!

Blatant Advertisement

Imagine a world where your marketing strategies are supercharged by the most cutting-edge technology available – Generative AI. Generative AI has the potential to save you incredible amounts of time and money, and you have the opportunity to be at the forefront. Get up to speed on using generative AI in your business in a thoughtful way with our new offering, Generative AI for Marketers, which comes in two flavors, workshops and a course.

Workshops: Bring the Generative AI for Marketers half- and full-day workshops to your company. These hands-on sessions are packed with exercises, resources, and practical tips that you can implement immediately.

👉 Click/tap here to book a workshop

Course: We’ve turned our most popular full-day workshop into a self-paced course, available now and updated as of April 2024!

👉 Click/tap here to register for the course

Interested in sponsoring INBOX INSIGHTS? Contact us for sponsorship options to reach over 26,000 analytically-minded marketers and business professionals every week.

Upcoming Events

Where can you find Trust Insights face-to-face?

  • Society for Marketing Professional Services, Los Angeles, May 2024
  • Australian Food and Grocery Council, Melbourne, May 2024
  • MAICON, Cleveland, September 2024
  • MarketingProfs B2B Forum, Boston, November 2024

Going to a conference we should know about? Reach out!

Want some private training at your company? Ask us!

Stay In Touch, Okay?

First and most obvious – if you want to talk to us about something specific, especially something we can help with, hit up our contact form.

Where do you spend your time online? Chances are, we’re there too, and would enjoy sharing with you. Here’s where we are – see you there?

Featured Partners and Affiliates

Our Featured Partners are companies we work with and promote because we love their stuff. If you’ve ever wondered how we do what we do behind the scenes, chances are we use the tools and skills of one of our partners to do it.

Read our disclosures statement for more details, but we’re also compensated by our partners if you buy something through us.

Legal Disclosures And Such

Some events and partners have purchased sponsorships in this newsletter and as a result, Trust Insights receives financial compensation for promoting them. Read our full disclosures statement on our website.

Conclusion: Thanks for Reading

Thanks for subscribing and supporting us. Let us know if you want to see something different or have any feedback for us!

