I am an AI skeptic. That's not entirely accurate – I should be more specific. I use AI for menial tasks. I'm not against AI. There are some use cases I can see that make AI pretty compelling. But there are other use cases for AI that leave me scratching my head. There is an immense cost to building AI infrastructure, all to deliver some pretty unclear outcomes – or at least few that rise above "that's neat."
There’s a general rule I try to adhere to when critiquing something: “that’s not what I would have done” is not particularly valid or helpful. Applying that to AI, I will try not to frame this in terms of where AI has been and how it fails to meet how I approach problems. Instead, I want to frame this in terms of what AI could do to better appeal to my user personas – a professional consultant, an executive advisor and leader, and a hobbyist in some creative pursuits. So, here are three things I want to see from AI in the (near) future:
1) I WANT AI TO ASK GOOD QUESTIONS
"Excellent inquiry precedes excellent advocacy." That's a great quote, one that I try to remind myself of when I'm excited to jump straight into solving a problem. When using chat-based AI, I sometimes feel like I'm messaging a junior version of myself – eager to answer, but only addressing the surface of what was said. Conventional wisdom right now says to improve how you're telling AI what you want – to be a "prompt engineer." There's nothing wrong with improving clarity and conciseness. But I think AI can do better.
What if AI models were trained on what good questions look like, and how to probe when given limited info from a user's prompt? Follow-up answers from the user could help evolve the prompt and draw better outputs from the AI model. That way the AI is doing more than just answering what was said – it's trying to get at what is needed.
Sure, things like the user experience will need to be considered – you're adding friction for the user by asking more questions. But if the output can be significantly more valuable as a result of good follow-up questions, I think users will take the tradeoff. With the AI doing more of the clarity work, even more users can get maximum value from what AI has to offer.
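To make the idea a little more concrete, here's a rough sketch in Python of what a "clarify before answering" loop could look like. Everything here is an assumption for illustration – call_model and ask_user are hypothetical stand-ins for whatever chat API and UI you happen to use, not any particular product.

```python
# A rough sketch of a "clarify before answering" loop.
# call_model() and ask_user() are hypothetical stand-ins; nothing here is
# tied to a specific vendor or API.

def call_model(messages):
    """Hypothetical: send the conversation to a chat model, return its reply."""
    raise NotImplementedError("wire this up to your model of choice")

CLARIFY_INSTRUCTIONS = (
    "Before answering, decide whether the request is under-specified. "
    "If it is, reply with the single most useful clarifying question, "
    "prefixed with 'QUESTION:'. Otherwise answer normally."
)

def answer_with_clarification(user_prompt, ask_user, max_questions=3):
    """Let the model probe for missing context before committing to an answer."""
    messages = [
        {"role": "system", "content": CLARIFY_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]
    for _ in range(max_questions):
        reply = call_model(messages)
        if not reply.startswith("QUESTION:"):
            return reply  # the model felt it had enough context
        # Surface the follow-up question and fold the answer back into the conversation.
        clarification = ask_user(reply.removeprefix("QUESTION:").strip())
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": clarification})
    return call_model(messages)  # question budget exhausted; answer with what we have
```

The question budget is the friction knob: set it to zero and you're back to today's behavior, raise it and you trade a few extra exchanges for an answer aimed at what was actually needed.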
2) I WANT AI’S ANSWERS TO BE JUSTIFIED
You’re going to have to bear with me for a second. I was a philosophy major in undergrad (I can just feel your eyerolls right now). I loved the topic of epistemology. So I had some wrestling to do with how AI knows anything, and what the value of that knowledge is.
There's a passage in Meno by Plato, wherein Socrates is talking with young Meno about the value of knowledge. Socrates uses an example about knowing "the way to Larissa" (a city in Greece). He poses a scenario: what is the difference between a guide who has knowledge of the way to Larissa because of some past successful trip and someone who has never been, but has a true opinion about how to get there anyway (confirmed by a map or something else)? As Meno is thinking through it, he rightly asks "why is knowledge prized far more highly than right opinion, and why are they different?" Socrates ultimately gives an enigmatic answer about how knowledge is like something valuable that is tied down and secured vs. something that is not tied down and will therefore lose value. To Socrates, "recollection" of a fact is what gives it value, the "tying down" part of knowledge.
I'll stop with the philosophy there, but suffice it to say – the topic of epistemology continues because questions abound. It's just difficult to pin down exactly why knowledge is valuable. One of the most compelling modern answers is that being "justified" in holding a true opinion is what makes it valuable. In that regard, AI has a challenge – the justification is often lacking. This is glaringly so when AI still gets basic facts wrong (due to "hallucinations" or whatever other explanation you prefer). Something I'd like to see from AI is more citations, sourcing, and reasoning. This might fundamentally conflict with how LLMs are built today, but it would go a long way toward increasing trust in AI outputs. AI needs to be more like the guide on the way to Larissa than the guesser who happens to be right.
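One way this gets approximated today is retrieval with citations: answer only from sources you can point to, and hand those sources back alongside the answer. A minimal sketch, assuming hypothetical retrieve and call_model functions rather than any specific product's API:

```python
# A minimal sketch of "justified" answers: every claim rides along with the
# sources it was drawn from. retrieve() and call_model() are hypothetical
# stand-ins, not any particular library's API.

from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    excerpt: str

def retrieve(query, k=3):
    """Hypothetical: search a document store and return the top-k sources."""
    raise NotImplementedError

def call_model(prompt):
    """Hypothetical: return the model's answer to a single prompt."""
    raise NotImplementedError

def justified_answer(question):
    """Answer only from retrieved sources, and return the citations alongside."""
    sources = retrieve(question)
    context = "\n\n".join(
        f"[{i + 1}] {s.title} ({s.url}): {s.excerpt}" for i, s in enumerate(sources)
    )
    prompt = (
        "Answer the question using only the numbered sources below. "
        "Cite them inline as [1], [2], ... and say 'I don't know' if they "
        f"don't cover it.\n\nSources:\n{context}\n\nQuestion: {question}"
    )
    return {"answer": call_model(prompt), "sources": sources}
```

It's not the same thing as the model itself being justified – the citations are attached after the fact – but it's the "tied down" version of an answer: you can follow the rope back to where it came from.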
3) I WANT AI TO DEVELOP SOME FORM OF “TASTE”
The final thing I want to see from AI is some way of doing more than regurgitating whatever the internet has en masse. I'm going to use the shorthand "taste" for this. Taste is what drives a lot of differentiated value in any professional context. Taste is how we get invention, taste is what helps us prioritize, and taste is what ultimately decides if "executive intuition" is worth anything or not.
AI does not have any taste.
That is to say, AI is not even trying to use taste. It's optimizing for likelihoods, and if something is overrepresented in its training data, it's likely to show up in its output. What if there was a way for AI to discern how the zeitgeist is shifting, and apply that to its model weights? AI will eventually need to lower the reliance on things that are no longer en vogue when giving answers – how can AI get ahead of the curve and influence how those shifts happen? That could sound dangerous to some – which makes transparency about where the operators of these systems put their thumb on the scale very important. Maybe this one is a stretch. But if we're ever going to get to a point where AI is able to be creative and be more than just a productivity enhancer, taste is the first frontier on that path.
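A very loose sketch of the crudest version of "lowering the reliance on things that are no longer en vogue" is simply weighting material by recency, so older examples gradually count for less. The half-life value and the .published attribute below are assumptions for illustration, not a recommendation or anyone's actual training pipeline:

```python
# A crude stand-in for "zeitgeist": exponentially decay an example's weight
# by its age when sampling training or retrieval data. HALF_LIFE_DAYS and the
# .published attribute are assumptions for illustration only.

import random
from datetime import datetime, timezone

HALF_LIFE_DAYS = 365  # assumed: an example's weight halves every year

def recency_weight(published, now=None):
    """Exponentially decay an example's weight based on its age in days."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - published).total_seconds() / 86400
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def sample_examples(examples, k):
    """Sample k examples, favoring recent ones (each example has a .published datetime)."""
    weights = [recency_weight(e.published) for e in examples]
    return random.choices(examples, weights=weights, k=k)
```

Of course, recency is not taste – plenty of new things are bad and plenty of old things are good – which is exactly why the thumb-on-the-scale transparency question matters.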
Perhaps we don't actually want these things as a society, at least not wholesale. These things are part of what makes human communication so important. But if AI could ask good questions, be a little more justified in its responses, and apply some taste in its answers… now that might actually be worth a whole lot.