I want to share something that has helped me shift my viewpoint when I use AI tools to program. I am calling it empathic AI prompting. This is about treating your AI assistant (GitHub Copilot, or AI models via Cursor or Windsurf) as a collaborator rather than a code vending machine. It’s a simple mindset shift that has made my interactions with AI both more productive and more enjoyable. I have also been using it when talking to colleagues and friends. I know people who wrote off ChatGPT based on their experience in January 2023; a lot has changed since then. Others tell me AI is bad at (insert whatever it may be)… I tell them to think of it as “AI is bad at that right now.”
Empathic AI prompting is all about understanding and empathizing with your AI’s responses. Instead of getting frustrated when the AI gives a suboptimal answer, imagine why it might have produced that output. If you’ve ever done pair programming or mentored a junior developer, the idea is similar: when they make a mistake or an odd suggestion, you don’t just yell “wrong!” and walk away. You try to see what assumptions or misunderstandings led to that result, and then you guide them. With AI, you should do the same – treat it like a well-intentioned colleague who can email HR.
Why care? This mindset turns AI into a true coding partner and helps you understand more about programming. It might sound counterintuitive at first, but I am betting that this is what the future of programming looks like. It will be a required skill.
An article over at HuggingFace nudged me toward this new way of thinking… Luigi writes, “iterative prompting is a methodology for transforming AI into a coding partner… The key is to think of AI as a collaborator – guide it, refine its responses, and iterate until the desired results are achieved.” Another programmer, Steve Yegge of Sourcegraph, even gave this style a name: CHOP (Chat-Oriented Programming), describing it as coding “via iterative prompt refinement”.

Here are some of my tips to build this skill:
- Be curious, not critical: If the AI’s output is off, ask yourself why it might have thought that was okay.
- Look for clues in the response: Often the wrong answer reveals how the AI interpreted your prompt. Maybe your question was ambiguous or missing detail.
- Refine and iterate: Using those insights, clarify your prompt or provide more context, then try again.
- Collaborate step-by-step: Just like you’d work through a problem with a human, iteratively build the solution with the AI, rather than expecting perfect results first try.
This approach not only leads to better results, but also turns prompting into a learning experience. I often find that by analyzing an AI’s mistake, I understand the problem better myself! It’s a two-way street of learning.
Recently, I was working on a content-based recommendation feature for my blog. Whenever I publish a new blog post, I want to automatically recommend a few relevant forum posts from our developer community forums. Rather than writing the entire recommendation engine from scratch (with vectors, TF-IDF, and maybe cosine similarity), I decided to build this with Windsurf and its AI model’s help. Both the forum posts and blog posts are textual, and a large language model might do a decent job of understanding which ones are related.
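For contrast, here is roughly what the from-scratch version would look like. This is a minimal sketch using scikit-learn’s TfidfVectorizer and cosine_similarity, purely illustrative and not my production code:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_posts(blog_post, forum_posts, top_n=3):
    # Vectorize the blog post together with the forum titles so they share one vocabulary
    matrix = TfidfVectorizer(stop_words="english").fit_transform([blog_post] + forum_posts)
    # Cosine similarity of the blog post (row 0) against every forum title
    scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
    # Return the highest-scoring titles first
    return [forum_posts[i] for i in scores.argsort()[::-1][:top_n]]
It is not much code, but the real work hides in the tuning (tokenization, stop words, weighting), which is exactly the part I was hoping to skip.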
My first prompt to Claude Sonnet 3.5: My new blog post is about building content-based recommenders in Python, and I have a list of forum post titles. The task: pick the top 3 forum threads that would be most relevant to someone reading my blog post.
The Initial Prompt (and a Surprise in the Output)
Here’s the initial code and prompt I gave to the AI assistant (in a Python-like pseudo-code format for illustration):
blog_post = """\
Building a Content-Based Recommendation System in Python.
In this post, we explore how to recommend relevant content using textual similarity.
We use techniques like TF-IDF vectorization and cosine similarity to find related posts.
"""
forum_posts = [
    "How to implement content-based recommender with TF-IDF",
    "Collaborative filtering vs content-based: pros and cons",
    "Python 3.9 release - new features and improvements",
    "Best laptop for programming in 2025",
    "Calculating cosine similarity for text matching in Python",
]
# Initial prompt to AI
prompt = f"""
Given the new blog post:
\"\"\"
{blog_post}
\"\"\"

and the following forum post titles:
- {forum_posts[0]}
- {forum_posts[1]}
- {forum_posts[2]}
- {forum_posts[3]}
- {forum_posts[4]}

Please list the top 3 forum posts that are most relevant to the blog post's content.
"""
response = call_ai_model(prompt) # Hypothetical function to get AI result
print(response)
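A quick note on call_ai_model: it is only a stand-in, since Windsurf makes the model call for me. If you were calling Claude directly, a minimal sketch with the Anthropic Python SDK might look like the following (the model id and parameters are my assumptions, so check the current docs):
import anthropic  # pip install anthropic; expects ANTHROPIC_API_KEY in your environment

def call_ai_model(prompt):
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model id; use whatever you have access to
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    # The response content is a list of blocks; the first block holds the text
    return message.content[0].text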
When I ran this, the AI did return three forum posts, but the result made me raise an eyebrow:
1. **Python 3.9 release - new features and improvements** – (Talk about Python updates)
2. **Collaborative filtering vs content-based: pros and cons** – (Discussion comparing recommendation approaches)
3. **How to implement content-based recommender with TF-IDF** – (Guide on content-based recommendations)
At first glance, the AI’s answer isn’t horrible – it found two very relevant posts (the ones about recommender systems). But it also picked “Python 3.9 release – new features” as the top recommendation, which is clearly off-topic. A blog reader interested in content-based recommendations probably doesn’t care about the general Python 3.9 release notes. So what happened?
Empathic Analysis of the AI’s Output
Instead of cursing my AI assistant for being “dumb,” I put on my Ted Lasso hat (or maybe the mustache). I asked myself: why might the AI have thought the Python 3.9 forum post was relevant?
After a moment, it clicked: the AI probably latched onto the keyword “Python” in both the blog and that forum title. My prompt didn’t stress what aspect of the content to match on, so the model might have defaulted to some simplistic notion of relevance, like overlapping keywords. The blog post text mentions “Python” (since the tutorial is in Python), and the forum post about Python 3.9 obviously mentions “Python” too – bingo, a naive content match! Meanwhile, the forum post about “Calculating cosine similarity for text matching” didn’t explicitly scream “recommender system” to the model, so it got ignored in favor of the more superficially similar Python thread.
In other words, the AI wasn’t trying to be obtuse; it was following my instructions as best it could. The mistake was actually understandable given the prompt. I realized I needed to clarify what “relevant” means in this context. This is the essence of empathic prompting: I learned something about how the AI interpreted my request, and now I can adjust my approach to guide it better.
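If you want to see that superficial signal for yourself, a toy check of shared lowercase words makes the overlap visible. This is nothing like what the model actually computes, just a way to eyeball the shared vocabulary:
import re

def words(text):
    # Lowercase word tokens only; good enough for eyeballing overlap
    return set(re.findall(r"[a-z0-9]+", text.lower()))

for title in forum_posts:
    print(title, "->", sorted(words(blog_post) & words(title)))
Sure enough, “python” shows up as shared vocabulary with the Python 3.9 thread even though the topic is unrelated, which is roughly the trap I suspect the model fell into.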
Refining the Prompt for a Better Result
I wrote a new prompt. I wanted to tell the AI to focus on topical relevance (i.e., content about recommendation systems), and to ignore posts that are only tangentially related (like generic Python news). I also decided to explicitly mention the technique cosine similarity to nudge it toward that forum post it ignored. Essentially, I gave the AI a bit more context and a gentle steer in the right direction.
Here’s the refined prompt and the code:
# Refined prompt with more guidance
refined_prompt = f"""
We have a blog post about building a content-based recommendation system (using TF-IDF and cosine similarity in Python).
The blog post is about recommending relevant content based on text similarity.
Consider the following forum post titles:
- {forum_posts[0]}
- {forum_posts[1]}
- {forum_posts[2]}
- {forum_posts[3]}
- {forum_posts[4]}
Now, **pick the top 3 forum posts that are most relevant to the blog post's topic (content-based recommendations)**.
Focus on posts about recommendation systems or text similarity (TF-IDF, cosine similarity), *not* general Python news or unrelated topics.
"""
response = call_ai_model(refined_prompt)
print(response)
And the AI’s new response:
1. **How to implement content-based recommender with TF-IDF** – A forum thread directly about content-based recommendation using TF-IDF, which is exactly the technique discussed in the blog.
2. **Collaborative filtering vs content-based: pros and cons** – A discussion comparing content-based methods with collaborative filtering, which complements the blog post's topic.
3. **Calculating cosine similarity for text matching in Python** – A thread about cosine similarity (a core part of content-based recommendations), very relevant to the blog post.
The irrelevant Python 3.9 post is gone, replaced by the cosine similarity thread that we actually want. Notice how I didn’t tell the AI explicitly which ones to choose; I simply guided it by clarifying what “relevant” really means here. The AI did the rest on its own once it understood my intentions better.
A little empathy went a long way: by imagining what the AI was thinking before and adjusting my wording, I got a much better result. In a way, it felt like I debugged my prompt by examining the AI’s “mistake” and then fixing the cause of that mistake.
Let Me Try Explaining This Again
- Think of the AI as a teammate. It’s not a perfect genius that will always get everything right, nor is it a dumb tool to be dismissed when it errs. Like any human collaborator, it has its own way of “thinking” (albeit a very alien, statistical one!). Engage with it, guide it, and be patient when it needs a second try. As I reminded myself, treat AI-generated code or answers as you would a human junior developer’s output. You’d review it, figure out the reasoning, and help improve it – do the same with the AI’s output.
- Use mistakes as insights. When the AI goes off track, don’t just reject the output – inspect it. Is there a pattern to the misunderstanding? Maybe your question was ambiguous or assumed knowledge the AI didn’t have. In our case, “relevant” was too vague without context. Often, an AI’s “mistake” is really a mirror held up to our prompt: it reflects how our request could be interpreted. This is incredibly useful for honing our own communication and problem decomposition skills.
- Iterate and refine. The first answer is rarely the final answer. Professional developers know that software design is an iterative process – turns out, prompt design is iterative too. You might start with a rough prompt, see what the AI does, then refine the prompt multiple times. This iterative loop is at the heart of chat-oriented programming (CHOP) and effective prompt engineering. It’s not cheating or a failure to redo the prompt; it’s part of the workflow. Each iteration is a chance to incorporate what you learned from the last output (a rough sketch of this loop follows the list).
- Keep a casual, curious mindset. One thing I enjoy about working with an AI is that I can think out loud. If the AI says something weird, I can literally ask it “why did you do that?” or I can just introspect and adjust. Treating it like a game or puzzle is a bit of fun – “hmm, the AI misunderstood me, how can I say this differently?” Maintaining that curiosity (and even humor) makes the process less frustrating. Remember, the AI has a lot more patience than we do and doesn’t mind if you rephrase the question five times before you get it right.
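To make the iterate-and-refine loop concrete, here is a rough sketch of the workflow. initial_prompt, looks_right, and refine_prompt are hypothetical placeholders for your own draft, judgment, and edits, and call_ai_model is the same stand-in from earlier:
prompt = initial_prompt  # hypothetical: your first rough attempt
for attempt in range(3):
    answer = call_ai_model(prompt)
    if looks_right(answer):  # hypothetical: your own review of the output
        break
    # hypothetical: fold what the answer taught you about your wording back into the prompt
    prompt = refine_prompt(prompt, answer)
In practice the refine step is usually me, not code: reading the answer, spotting the assumption it reveals, and rewording the prompt.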
Empathic prompting is a skill that takes time to develop. You have to consistently reset your expectations as models get better and IDEs improve, but it pays off.
Articles and videos that you need to read/watch (or have AI summarize):