Copy of Third Party LinkedIn 11

RED TEAMING CUSTOM GPTS, PART 2 OF 3

This data was originally featured in the January 17th, 2024 newsletter found here: AI TASK FORCE, RED TEAMING CUSTOM GPTS, PART 2

Continuing from last week’s newsletter in which we introduced red teaming for large language models, this week let’s talk about inverting people, process, and platform.

As a quick reminder, red teaming means trying to get generative AI to do something it shouldn’t – whether that’s to say something inappropriate or divulge information that it shouldn’t. When we talk about inverting the 5Ps, here is what we started with:

Purpose: What is your Custom GPT supposed to do? People: Who are the intended users? Process: How is the user expected to interact with the Custom GPT? Platform: What features of the OpenAI platform does the Custom GPT need access to? Performance: Does the Custom GPT fulfill the designated purpose?

So what does this inversion look like?

People: OpenAI offers no access controls of any kind, so a key question you have to ask is whether there are certain people who should not use your Custom GPT. If there are – because of the lack of access control – you probably shouldn’t release it to the public.

This might be competitors, former or current employees, activists opposing you, or any number of people who might present a realistic threat to your software. Because you can’t access-gate your Custom GPT, either you will need to modify its purpose and functionality to mitigate risk, or not release it publicly.

Process: What are the things users shouldn’t be able to do with the Custom GPT? What interactions should be avoided? Because OpenAI offers Custom GPTs on the foundation of ChatGPT, any of the existing jailbreaks and other hacks that make it behave differently than expected will work in a Custom GPT.

This extends to more than just simple instructions; it also dictates what should and should not be in a Custom GPT. For example, I recently spoke with some lawyer friends about whether or not a Custom GPT could use a copyrighted work as reference data, and the answer from all three of my attorney colleagues was a resounding no. Absolutely no. It’s a derivative work and exposes you to legal risk – so if we think about red teaming and process, suppose someone forced your Custom GPT to divulge its training data. How much risk would you be exposed to? Was your Custom GPT made with materials you don’t have a license to use?

Platform: What vulnerabilities of the OpenAI platform have been accounted for? For example, there are certain prompt jailbreaks that can coerce a GPT into revealing part of its training data or document store.

Part of red teaming is determining vulnerabilities and then testing them. Across the Internet, in forums all over the web, you’ll find lists and lists of prompt jailbreaks. Have you tested the most current jailbreaks against your Custom GPT? Have you tested ANY jailbreaks against your Custom GPT? If so, what were the results? Were you able to redesign your system and custom instructions to reduce the likelihood those prompt jailbreaks would work?

Next time, we’ll conclude with performance and how to build your red teaming structure to minimize bad things happening in your Custom GPT (or any LLM system).


Need help with your marketing AI and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.

Leave a Reply

Your email address will not be published. Required fields are marked *

Pin It on Pinterest

Share This