
In-Ear Insights: What Is A Large Language Model?

In this week’s In-Ear Insights, Katie and Chris answer the big question that people are afraid to ask for fear of looking silly: what IS a large language model? Learn what an LLM is, why LLMs like GPT-4 can do what they do, and how to use them best. Tune in to learn more!


Watch the video here:

In-Ear Insights: What Is A Large Language Model?

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher Penn 0:00

In this week’s In-Ear Insights, today, we are talking all about large language models.

Now, the ones you’ve probably heard of most are models like GPT-3.5 and GPT-4, which are from OpenAI.

But there are many of these things.

There’s the GPT-NeoX series from EleutherAI, there’s StableLM from Stability AI, there’s PaLM from Google.

So there’s many, many, many of these language models out there.

And today, we figured we’d talk about what they are, why you should probably know what they are, and maybe a little bit about how they work.

So Katie, where would you like to start?

Katie Robbert 0:35

I think we need to start with some basic definitions, because you just said a bunch of words that basically made my eyes glaze over.

And so I think where we need to start first is: what is a large language model?

And before you give me an overcomplicated definition, let me see if I can take a stab at this in a non-technical way, to see if I’m even understanding.

So my basic understanding of a large language model is that it is basically like, if you think of like a box or a bucket, and you put all of your content, your papers, your writing, your text, your data, whatever, into that bucket, then that’s sort of like the house, the container, for all of the language that you want to train the model on.

And so, you know, the more stuff you put in it, the bigger the bucket you need, hence the “large,” because you can’t just have, like, this tiny little handheld bucket with, like, two documents in it and say, okay, that’s my large language model.

That’s not enough.

That’s not enough examples to give the model to train on, you need to keep giving it more information.

And the more information you put in the bucket, the more refined you can make the model.

Am I even close?

Christopher Penn 1:55

You are.

You’ve gotten the first part of the process, you’re creating a large language model.

But that’s not what the language model is itself.

Yeah.

So quick definition.

When we say the word model, in the context of AI, we’re really saying software, but just like Microsoft Word is software, right? Or Candy Crush is software, a language model is just software.

The difference is, it’s written by a machine, instead of a person, right? Mostly people wrote Microsoft Word, mostly people wrote Candy Crush, a machine, or a whole bunch of machines, wrote GPT-4.

And because of that, it has no interface, it has no UI, it has nothing that a human could use directly. It’s like the engine of a car, right? Under normal operating circumstances, you never operate the engine of a car by itself. The rest of the car interacts with the engine; you interface with, like, the steering wheel and the pedals and things.

And so the language model really is the engine of these cool tools.

And then the interface is something like a ChatGPT.

That’s the rest of the car that you need to drive the engine.

Katie Robbert 3:02

Okay.

All right.

So a large language model is a piece of software built by a machine.

Yes.

And but that doesn’t really tell me what a large language model is.

Christopher Penn 3:17

Right.

So to talk about that, we’ve got to talk about language itself, because that’s sort of the key linchpin of it.

There’s a quote from John Rupert Firth back in 1957.

He’s a linguistics guy; he had nothing to do with machine learning.

But he said, You shall know a word by the company it keeps, right.

You know the context of words.

So for example, if I say the sentence, “I’m brewing the tea,” right, you know what the word tea means in that context, right? You have “I’m” as the person, me; “brewing,” the verb, I’m brewing something; and “the tea” is the object of the sentence.

If I say, “I’m spilling the tea,” we know from jargon that that really means gossiping, right? I’m spilling tea.

It’s not literally pouring a beverage on the floor.

It is gossiping, but the spilling changes the meaning of the word tea.

Right? So we know a word by the company it keeps.

And so a large language model is composed of essentially two things, the frequencies of words, and then the statistical relationship between words.

So if I say the tea I’m brewing, right, that’s a different sentence in English.

It has a very similar meaning, but there the tea is the subject of the sentence instead of me.

So that’s sort of the focus of it.

If I say “brewing I’m the tea,” that makes no sense in English whatsoever.

And that’s a case where in another language, like Irish Gaelic, Hebrew, or Tagalog, verb-subject-object structures are understandable; we in English don’t structure sentences that way.

So when you said, you take all this text and put it in a big bucket that is the first step towards creating a large language model.

Katie Robbert 5:12

Okay, so do you have to pick a language? Like, do you have to declare, this language model is going to be English, or this language model is going to be Spanish? Can you blend languages together and come up with a result, or does the model have to be like, okay, this version of the model is going to be the English-speaking version?

Christopher Penn 5:42

That used to be the case, that is no longer the case.

And the reason is no longer the case is because of the architecture that’s used to make large language models.

Now it’s an architecture called transformers, which has nothing to do with the very cool 80s kids’ show.

Katie Robbert 5:59

“More than meets the eye.” I actually got it.

Christopher Penn 6:03

Essentially, this is what a transformer is, and this diagram is completely unhelpful for actually understanding how it works.

But there are two things in there that will help us understand: there’s a section called input embedding and a section called positional encoding.

And let’s talk about what these mean. Embeddings: models can’t read; software and computers have no understanding of words.

What they can do is understand numbers, right? So if I say “I’m brewing the tea,” when a large language model starts being constructed, computers take all the words in all that text that you provided and start assigning them numbers.

And what’s important here is that there’s numbers, right.

But there’s also position. If I say “brewing I’m the tea,” the order of the numbers changes, right? That’s what’s called positional encoding: the position of the words. This is why you can have a language model that’s multilingual now, because in general, Spanish words are going to be next to Spanish words, and Chinese words are going to be next to Chinese words. In general, there are not many documents that are in five languages at the same time, except for, like, customs paperwork. But for the most part, books that are in Dutch are going to be in Dutch the whole way through.
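The two ideas Chris describes here, words becoming numbers (embeddings) and order becoming numbers (positional encoding), can be sketched in a few lines of Python. This is a toy editorial illustration, not how GPT models actually work; real models use subword tokenizers and learned vectors, not simple word IDs.

```python
def build_vocab(sentences):
    """Assign each distinct word a number (a stand-in for an embedding ID)."""
    vocab = {}
    for sentence in sentences:
        for word in sentence.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(sentence, vocab):
    """Turn a sentence into word IDs plus the position of each word."""
    words = sentence.lower().split()
    ids = [vocab[w] for w in words]
    positions = list(range(len(words)))
    return ids, positions

vocab = build_vocab(["I'm brewing the tea", "brewing I'm the tea"])
ids_a, pos_a = encode("I'm brewing the tea", vocab)   # ids_a == [0, 1, 2, 3]
ids_b, pos_b = encode("brewing I'm the tea", vocab)   # ids_b == [1, 0, 2, 3]
# Same words, same IDs, but the pairing of ID and position differs --
# that order information is what positional encoding preserves.
```

Both sentences contain identical words, so only the position data distinguishes them, which is exactly why position has to be encoded separately from the word numbers.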

And so the probabilities of one word being next to the other word are going to kind of glue together to naturally form language, right? That’s kind of how a model can understand.

If you would say, “Salut, je m’appelle Christophe,” that would be French.

And those words, those frequencies, would occur together, and the positions would occur together a lot.

So it understands, these things can understand, multiple languages.

Katie Robbert 7:51

So, three questions.

One, did you just insult me in French?

Christopher Penn 7:57

No.

Okay.

I said, “Hi, my name is Christophe.”

Katie Robbert 8:02

Two. Actually, let me ask two and three, and then I’ll let you answer them.

So, two is: it sounds like first you have to tell the machine what the sentence structure is meant to be, so that it can then assign numbers? And then my third question is, how is this numerical assigning different from, or similar to, what we’ve come to know as binary code, which is just ones and zeros?

Christopher Penn 8:30

So it’s the reverse: these eventually get converted to binary, right, to represent these numbers. As for structure, no, you don’t have to provide language structure anymore.

In fact, earlier efforts in natural language processing, back in the 70s and such, did that: they tried to create expert models, teaching machines the rules of language, and they were phenomenally unsuccessful.

Because most of the time, we don’t use those rules very well.

It’s a common joke.

You know, we in America, we speak two languages, English and bad English.

It’s mostly the latter.

What is powerful about these tools, and the reason they work so well, is because there is no attempt to train them on structure at all.

Structure naturally evolves from the way that we use language, right? So here are some reviews about tea, right? “I like the taste and smell the coffee.” “I’m brewing the tea, I’m bringing it exactly why they do.”

It tastes very good deal.

The grammar is not great in some of these reviews, but this is natural language.

And so what the computer is doing behind the scenes is taking those reviews and assigning the numbers, and then looking at the probability that one word is going to be next to the next word, right? So “the tea” is a pair that co-occurs a lot, right? The probability of those two words occurring next to each other in these reviews is very, very high.

I’m brewing the tea is a phrase that occurs a lot.

And so the way these attention models work in transformers is they’re constantly looking to see what the relationship is of a word to the word next to it on either side, and then to the word next to that, and so on and so forth, until it’s essentially creating this very large, almost like the light from a lighthouse sweeping across text, understanding the words that are all around that word, and developing mathematical probabilities. Say, hey, if the word Starbucks occurs, like, two paragraphs up, and the words “I’m brewing” occur, you know, two paragraphs down, these are still close enough that the machine would infer you’re talking about brewing here.

The next word is probably coffee, because you mentioned Starbucks up here, right? And at no point did you mention oolong or jasmine, right, which would be more associated with the word tea.

And so that’s what these large language models are at their heart.

They’re big, big tables of probabilities like a library of probabilities.
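A “big table of probabilities” can be sketched with simple bigram counts. This toy version, counting word-pair frequencies over a handful of made-up tea reviews, is far simpler than a transformer’s attention, but it shows the shape of the thing: for each word, a distribution over what tends to follow it.

```python
from collections import Counter, defaultdict

def bigram_probs(corpus):
    """For each word, count what follows it, then normalize counts to probabilities."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return {word: {nxt: count / sum(counts.values())
                   for nxt, count in counts.items()}
            for word, counts in follows.items()}

# Made-up mini-corpus in the spirit of the episode's tea reviews:
corpus = [
    "i'm brewing the tea",
    "i'm brewing the coffee",
    "i'm brewing the tea today",
]
probs = bigram_probs(corpus)
# probs["the"] gives {"tea": ~0.67, "coffee": ~0.33}: "I'm brewing the..."
# is probably tea, maybe coffee, and certainly not the fall of capitalism.
```

A real model holds distributions over vast contexts rather than adjacent pairs, but the principle, frequencies turned into probabilities, is the same.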

This is one of the reasons why a lot of people are talking about copyright infringement when people are training these models.

What’s in the model isn’t your language. It’s not your words. It is the probability distributions of your words and everyone else’s words, all blended together, so that the models implicitly understand, based on probability, if you’re saying “I’m brewing the…”, right, there’s not too many options.

Right? I’m brewing tea, coffee, maybe beer, kombucha, probably not the fall of capitalism.

But those are the probable options.

Katie Robbert 11:41

Say more.

Christopher Penn 11:43

Exactly.

That’s how these models work.

They are just huge, huge tables of probability.

And anytime you work with a tool like ChatGPT, whether it’s doing summaries or, for example, generation, writing a blog post, the words you give it in your prompt help the model start to invoke which probabilities it should be looking at most closely.

That’s why the more detailed your prompt, the better it works, because it has more probabilities to draw on. If you say, “write me a blog post about dogs,” you’re gonna get a real generic post of the most common probabilities around dogs.

If you say, “write me a blog post about how to properly train a Shar-Pei pitbull mix to retrieve items from the yard with a single command, without using a martingale collar or shock collar,”

suddenly, there are so many more words in the prompt that can evoke the right probabilities for when it generates the text.

Katie Robbert 12:44

So I’m thinking back to, you know, probably at least five or six years ago, when you and I worked at the agency, and you introduced to our team this concept of topic modeling, which used natural language processing, which I understood to be the frequency and the nearness of words.

So basically, we would create these clusters of topics, topic modeling, from, you know, a few different documents or a few different blog posts, and say, these are the topics that are being talked about the most, and sort of give them a relative priority and size.

And so it sounds like at a bigger scale, this is very much the same thing in a more automated continual learning way.

Whereas the version that we were doing, you know, five or six years ago, was a little more manual.

Is that an accurate understanding?

Christopher Penn 13:52

So, what we were doing a few years ago was using much older technology.

It was technology that was essentially a mix of skip-grams and bag-of-words, which are essentially looking at just raw frequencies of words.

But those older techniques could not take into account words that were a bit of a distance away. So it could understand “I’m brewing the tea,” right: “I’m brewing” would be one pair, a bigram.

“The tea” would be another. But it would have no idea if you’d mentioned Starbucks, you know, two sentences ago.
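The limitation Chris describes is easy to see in code: a bag-of-words representation keeps only frequencies, so word order, and anything mentioned a few sentences back, is simply gone. A minimal sketch:

```python
from collections import Counter

def bag_of_words(text):
    """Raw word frequencies only -- all order information is discarded."""
    return Counter(text.lower().split())

# Two different word orders produce the identical bag:
a = bag_of_words("i'm brewing the tea")
b = bag_of_words("the tea i'm brewing")
# a == b, so this representation cannot tell the two sentences apart,
# let alone connect "brewing" to a "Starbucks" mentioned two sentences earlier.
```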

Today’s models, with things like transformer-based architecture, are much, much more comprehensive. They can see, based on those probabilities, a variety of texts, you know, huge windows.

For example, three years ago, the GPT-2 model came out, and that had an input and output of what’s called 1,024 tokens. A token is about a four-letter fragment of a word, so that translated to about 500 or 600 words, right? It could generate a paragraph before it just went off the rails. GPT-3 came out 18 months later, and it could understand 2,048 tokens, right? That basically means the model got so much bigger that it could now see further in text: it could understand more text going in, it had been trained on more text, and it could create more text going out. GPT-4, which just came out a couple of months ago, can understand 32,000 tokens in either direction.
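The token figures above can be turned into a rough words-per-window estimate. The four-characters-per-token rule of thumb works out to roughly 0.75 English words per token (the episode’s “500 or 600 words” for GPT-2 implies a slightly lower ratio; both are approximations). The window sizes below are the exact powers of two behind the round numbers quoted, with 32,768 being GPT-4’s 32k variant.

```python
def tokens_to_words(tokens, words_per_token=0.75):
    """Rough rule of thumb: a token is ~4 characters, so ~0.75 English words."""
    return int(tokens * words_per_token)

# Context windows discussed in the episode, as exact powers of two:
context_windows = {"GPT-2": 1024, "GPT-3": 2048, "GPT-4 (32k)": 32768}

for model, window in context_windows.items():
    print(f"{model}: {window} tokens, roughly {tokens_to_words(window)} words")
```

By this estimate, the 32k window holds on the order of 24,000 words, which is why a whole novella fits in a single prompt.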

So you could put a novella in, right, and you could say, hey, turn this into emoji.

A 2,000-word business book in emoji, because it has so many more of these probabilities that it’s been trained on that it can effectively do that.

Next year’s model, GPT-5, will probably be about 64,000 tokens.

So you could take the entirety of our friend Ann Handley’s book here, just drop the whole thing in as a prompt, right, and say, rewrite this in Swahili, or rewrite this entirely as limericks, or, you know, turn Ann’s book into song lyrics. You can do that with that big of a window because it understands so many more probabilities.

Katie Robbert 16:27

Okay, but back to my question: the topic modeling that we were doing, is it similar?

Yes.

Okay.

It’s related.

Because that was a concept that I understood. Basically, we took, you know, I think we used, like, a Shakespeare play or something as an example when we were teaching it to the team.

And said, basically, what this script is going to do is it’s going to summarize the major themes from this particular, you know, piece of writing.

And so it sounds like this is a very early version of what GPT is now doing at a bigger scale, in a faster, more automated way; it’s less manual intervention of you having to write the code and determine what all of those words are. That all happens behind the scenes now.

Another example of this that we did was for a long-distance trucking recruiting agency.

And they basically, they had us take all of the transcripts, from their interviews from their call center, and turn that into that topic model.

And so it sounds like in a much more efficient, much more accurate way.

Now, these large language models could do that same work, but just better.

Christopher Penn 17:51

That’s exactly right.

These the models now are capable of doing that much in a much greater way.

But it’s still the same thing.

So let’s take a look here. This is ChatGPT using GPT-3.5. I fed it an episode of last week’s podcast.

And I said, Okay, tell me the top three topics.

And I want them isolated in a pipe-delimited format.

And so we’ve got these nice topic one, topic two, topic three.

So now, instead of having that crazy chart, which we used to make with all the different bars and bands and colors, it’s very simple.

It just spits out these basic probabilities about what this text is about.

And now you can imagine, it would be trivial to extract these percentages, and then put them in a table.
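That extraction step is indeed trivial. Assuming the model returns one pipe-delimited line per post in the shape described here (the topic names and percentages below are made up for illustration), a few lines of Python turn it into table rows:

```python
def parse_topics(line):
    """Split 'topic|percent|topic|percent|...' into (topic, fraction) pairs."""
    parts = [p.strip() for p in line.split("|")]
    return [(parts[i], float(parts[i + 1].rstrip("%")) / 100)
            for i in range(0, len(parts), 2)]

# Hypothetical model output for one blog post:
llm_output = "hourly billing|62%|client communication|24%|pricing strategy|14%"
row = parse_topics(llm_output)
# row == [("hourly billing", 0.62), ("client communication", 0.24),
#         ("pricing strategy", 0.14)]
```

From there, each parsed row plus the post’s URL and its Google Analytics sessions becomes one record in the correlation table Chris goes on to describe.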

So you’d have maybe the blog post title, the URL, topic one, topic two, topic three, and the topic one percentage, topic two percentage, and topic three percentage for every blog post on your blog. And then you could start to use this information to say, oh, well, let’s do a correlation between topic frequency and sessions, Google Analytics sessions. And using these new language models is much more efficient.

So what are our most popular topics? What topics bring, you know, the humans to the yard?

Katie Robbert 19:14

I was waiting for that.

I was waiting.

Christopher Penn 19:20

Exactly.

But yes, it’s the same concept, now just made a lot easier, a lot faster, and a lot more accurate by these large language models. Previously we didn’t have access to them.

They weren’t something that we were able to do.

Katie Robbert 19:37

Okay, so I can actually see that kind of use case being really helpful for creating a social post.

Like, you know, hey, here’s this really great blog post that we created.

Here’s what it’s about.

You’re gonna get these three takeaways, because this is what the machine told us this thing is about.

It is about hourly billing.

It is about, you know, this, and it is about that. And to me, I’m like, oh, great.

I don’t have to write social anymore.

That’s amazing.

Oh, yeah.

Christopher Penn 20:06

Absolutely, one of my favorite use cases.

In fact, this is an example that I do in one of my talks: taking the five-star reviews of my teacher’s martial arts school off of Google. I fed it all the five-star reviews of the school.

And I said, I want you to write social media ideas from these posts using the voice of the customer.

Right.

So you and I have opinions about what Trust Insights is about.

But we don’t often think, okay, how do we write content based on what our customers say about us? And this is true of many businesses. Many businesses are like, oh yeah, this is what our value proposition is. But if you read the customer feedback, kind of like what you were talking about with the drivers example, that’s not what customers are actually talking about.

So if you were to use customer data provided by customers, like reviews, with a large language model, to generate content from that language, you would actually probably perform better because you’re using the words and the topics that customers actually care about.

Katie Robbert 21:11

Yeah, that’s, you know, interesting. I feel like that’s another topic for another day.

But you know, sort of the way that we think about how we’re writing this content is like, we’re giving our perspective, or we’re telling people what we think they want to hear, versus using the information that’s readily available, being said about us to create that content.

I mean, that’s just, you know, something that the machines are going to be so much better at, because they take out that narcissism, for lack of a better word.

Yeah, the “this is what I think people need to know about me” versus “here’s what’s known about you.”

Christopher Penn 21:53

Exactly.

And here’s the thing: these models are good at six basic tasks, right? Generation, extraction, summarization, rewriting, classification, and question answering.

Generation is exactly what it sounds like: hey, write me a blog post.

Believe it or not, this is the most popular usage.

It’s also what they’re least good at. What they are good at is extraction and summarization. Extraction is “pull this data out of here.”

For example, give it a list of 100 Twitter URLs and say, just give me the Twitter handles.

Boom, done easy.

Summarization is simple, like we just did with that example from the customers, with the reviews; essentially, you’re summarizing those reviews. Rewriting is another very popular one.

Again, that one’s pretty easy for these models: okay, I want you to rewrite this in emoji, or Sumerian, whatever. Classification: what is this document? Again, very similar to topic modeling. And question answering: like, hey, I need ideas for a blog post about this.

So let’s lay these things out. What’s interesting about these models is that, because of their architecture, they are better at what I call editing tasks, like extraction, summarization, and rewriting, than they are at writing tasks, like generation and question answering. Because if we think back to what we were talking about at the very beginning, this architecture is called a transformer.

It is good at transforming stuff.

So if you’re providing all the data it needs, then what comes out should be substantially the same, but transformed. The model doesn’t have to work particularly hard to generate what you want if you’re just saying, hey, I want you to clean up the grammar in this post.

That’s a great use of these things.

As opposed to “write me a novel,” which is not a great use of these things.
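The six task families, and which side of Chris’s editing/writing divide each falls on, can be jotted down as a simple lookup. The example prompts are illustrative inventions, not from the episode, and grouping classification with the editing side is an editorial assumption, since it transforms text you supply.

```python
# "editing" tasks transform text you supply; "writing" tasks generate from scratch.
tasks = {
    "extraction":         ("editing", "Give me just the Twitter handles from these 100 URLs."),
    "summarization":      ("editing", "Summarize these customer reviews in three bullets."),
    "rewriting":          ("editing", "Rewrite this post entirely in emoji."),
    "classification":     ("editing", "What is this document about?"),
    "generation":         ("writing", "Write me a blog post about dogs."),
    "question answering": ("writing", "Give me ideas for a blog post about analytics."),
}

editing_tasks = [name for name, (kind, _) in tasks.items() if kind == "editing"]
```

The point of the split: the editing tasks hand the model most of the probabilities it needs, while the writing tasks ask it to supply them itself.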

Katie Robbert 23:56

Alright, so let me see if I’ve got this straight.

So a large language model, basically, is a piece of software that takes, you know, the inputs, the text, the content, the data, and does one of six functions, the functions that you were showing: generation, summarization, classification, and three others.

And then it basically says, like, what do you want me to do? And you say, This is what I want you to do, I want you to do one of these six things, and it does the thing.

And it basically takes the text and the content and it turns it into numerical values.

And what it gives you back is based on the probability of words, the nearness of words, the closeness. And so in your example, “I’m brewing the tea,” it will say, okay, I know that “I’m brewing” means this is the action that the person is taking, and “the tea” is the thing that they are doing it to.

So, yeah, I need a little bit of time to wrap my brain around this so that I can better explain it back to you in a way that I feel is a little bit more intuitive.

But I’m getting it though.

It’s, it’s complicated, but not intuitive.

Christopher Penn 25:19

Yeah.

And the reason it’s complicated is because these things use language differently than we do.

Right.

They’re not human; they are mathematical machines.

We don’t use language that way.

Even though our brains are neural networks, right, that’s the original, the OG neural network.

Language functions very differently in our heads than it does in a machine.

And language for humans is actually relatively new in our evolution.

It’s a part of our brain that evolved relatively recently.

So even for us, it’s still kind of developing.

That’s one of the reasons why language can change so much, even from generation to generation.

I mean, there’s all that 90s slang we grew up with that nobody says anymore.

Yeah, a whole bunch of

Katie Robbert 26:03

Cringy language.

You know, it strikes me that, sort of, it’s along the lines of having an idea in your head, but it not translating when you try to, like, write it down or say it out loud.

Because there’s that missing piece of how you get it from one place to another. It’s sort of the same with writing these prompts for, you know, these large language models: you know what you want it to do.

But unless you get the prompt detailed and exact, it’s not going to come out right.

Christopher Penn 26:34

Right.

And so, you know, we’ve done shows on prompt engineering and stuff, we’ve talked a decent amount about this stuff.

The reason why prompt engineering is so difficult is because it requires you to know how your language works and how the machine’s language works, and to be able to do both, to essentially speak two different ways of using language.

But if you get it right, and you understand these capabilities, it’s very powerful.

Here’s a fun one. If you were to write a LinkedIn request… you know, there are connection requests you get on LinkedIn that are usually just terrible, right? They’re like, “I’d like to add you to my professional network,” right?

Katie Robbert 27:20

Or, “You keep showing up as someone who I’m recommended to connect with.” No. Not that.

Christopher Penn 27:24

Exactly.

What if you were to use a large language model? What would you put into that, Katie, to help you write a good connection request?

Katie Robbert 27:34

Oh, you’re putting me on the spot.

You know, help me write an authentic-sounding, non-sleazy connection request to someone that I want to be connected with on LinkedIn, so that they know that I’m not trying to sell them anything and genuinely just want to get to know them better.

Christopher Penn 28:04

Okay, I think that’s a good start. How would you make it relevant? Let’s put aside technology and large language models for now, and just say: what would you do if you wanted to write a connection request to me?

Katie Robbert 28:27

Okay.

It would probably be something like, you know, “Hi Chris, I really enjoyed reading…”

So basically, what I would want to do is start with why I’m reaching out to you in the first place, and, like, what is relevant. Not just a cold thing; I feel like there needs to be a personal touch to it. Not just collecting numbers, but like, “Hey, I read your stuff, I thought it was really interesting.”

I want to learn more.

So I was hoping that I could connect with you so that we could, you know, talk about these things that you’ve been writing about, or, you know, whatever the thing is. That, to me, is a more genuine connection, and more likely to be accepted.

Christopher Penn 29:10

That’s the approach that you would take with a large language model.

So let’s look at an example of what this looks like.

Go ahead and share my screen here.

I’m gonna say: you’re a LinkedIn expert, you know all this stuff, right? You’re gonna help write a connection request.

Here’s me, right? I’m the person, the source of this, and I’m gonna provide all of these words, which we just discussed are the currency of large language models. It’s gonna take your background, Katie, your LinkedIn profile, that bio stuff, and say: okay, write a connection request from the source to the target, emphasizing whatever common ground is available. So let’s see what it does.

So, because of the language used, it’s able to take what we provided, the source and the target data, and weave together a connection request.

Now imagine this as a piece of software, right? Where your bio is one of the prompts, and we just programmatically scraped the bios of the people you want to connect to, over and over again, and sent customized connection requests that were worded as though I’d actually read your bio, instead of the generic sleaze.

But that’s an example of how the language in a large language model works.

The more you provide, the better it does.

Katie Robbert 30:29

Which makes sense.

I mean, that’s true of anything. So, like, you know, I like to give the example of: if you’re talking to another human and you, you know, ask them, “Hey, can you bake me a cake?” and then you get mad because they bake you a vanilla cake and you wanted the chocolate one. Well, guess what, you didn’t tell them, you know?

So just like humans aren’t mind readers, these machines aren’t mind readers, and they’re not going to make assumptions about what it is that you want them to do.

You need to be specific: I would need you to bake me a chocolate three-layer cake with cream cheese frosting by two o’clock on Friday.

And it needs to be gluten-free, dairy-free, no coconut, no peanuts. It’ll probably be gross and taste like cardboard.

But that’s besides the point.

But I’ve gotten very specific about the instructions.

And so then if it’s not delivered correctly, that’s a different conversation.

But unless I give you all of those details, you don’t know that that’s what I’m thinking.

Christopher Penn 31:24

Exactly.

Put this in your heads, folks: a prompt is a creative brief. You would never have a designer “make me a logo”; we’d never do that. To a designer, we’d be very specific.

Here’s the tone.

Here’s the emotions.

Here’s the colors we want to use.

A prompt is a creative brief, and a large language model is nothing more than a coworker who needs a lot of specificity to generate the best results.

So to wrap up: a large language model is a piece of software.

All models are software written by a machine. They are built on the probabilities of lots of language, in ways that capture not only the importance of language, but the structure of it as well.

They can write okay; they can edit better.

And the results you get are directly proportional to the amount of precision and information you provide. Again, like all computing: garbage in, garbage out.

If you’ve got some prompts that you want to share, or some questions you have about large language models yourself, why not pop over to our free Slack group? Go to trustinsights.ai/analyticsformarketers, where you have over 3,000 other human marketers, all wondering if we’re still going to have jobs in a month.

Whether you are watching or listening to the show, if there’s a platform you’d rather have it on, the show is available. Go to trustinsights.ai/tipodcast on most networks that have podcasts. Our new YouTube version is up on our YouTube channel.

If you want to check that out, go to trustinsights.ai/youtube. Thanks for tuning in.

I will talk to you next time.


Need help with your marketing AI and analytics?

You might also enjoy:

Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!

Click here to subscribe now »

Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
