Tag: Large Language Model

AI Hallucinations

This data was originally featured in the May 8th, 2024 newsletter, found here: INBOX INSIGHTS, MAY 8, 2024: BEING DATA-DRIVEN, AI HALLUCINATIONS. In this week’s Data Diaries, let’s discuss generative AI hallucination, especially in the context of large language models. What is it? Why do tools like ChatGPT hallucinate? To answer these questions, we need […]
Model Tuning
This data was originally featured in the May 1st, 2024 newsletter, found here: INBOX INSIGHTS, MAY 1, 2024: AI ETHICS, MODEL TUNING. In this week’s Data Diaries, let’s discuss model tuning. Many AI services from big tech companies, such as Google’s AI Studio, OpenAI’s Platform, Anthropic’s Console, and IBM WatsonX Studio, offer the ability to […]
AI Use Case Identification
This data was originally featured in the April 24th, 2024 newsletter, found here: INBOX INSIGHTS, APRIL 24, 2024: DOWNSIDE OF SHORTCUTS, AI USE CASE IDENTIFICATION. In this week’s Data Diaries, let’s talk about identifying AI use cases. In case you missed it, yesterday’s Generative AI for Agencies recapped the major use cases of generative AI […]