Put Fur on a Solar Panel. LLM Hallucination or Invention?

I just invented a solar panel that regulates its heat with a coating of varied-length, fur-like structures. Or did a large language model (LLM) hallucinate? I often hear people say things like “ChatGPT just made that up” or “GPT-4 told me to do something that was wrong or misleading.” In AI parlance, these made-up strings of text are called hallucinations.

My goal with this post is to convince you that all of the interesting parts of LLMs are in the hallucinations. Embrace the hallucinations. Learn the limits of LLMs and how to harness the power of hallucinations for creativity and invention.

AI Model Hallucinations as Hallucinated by DALL-E 3

What are LLM hallucinations?

Large language model (LLM) hallucinations are instances where a model like GPT-4 generates information or responses that are factually incorrect, nonsensical, or unrelated to the input it received. These hallucinations can occur for various reasons, including limitations in the training data, the inherent nature of statistical learning models, or the complexity of the task at hand.

Here are a few key points about LLM hallucinations:

  1. Factual Inaccuracies: Sometimes, an LLM might provide information that is factually wrong. This can happen because the model doesn’t “know” facts but generates responses based on patterns it learned during training. If those patterns are incorrect or misinterpreted, the output can be factually incorrect.
  2. Overconfidence: LLMs often generate responses with a tone of confidence, regardless of their accuracy. This can make it difficult to discern when the model is providing incorrect information.
  3. Context Misinterpretation: LLMs might misunderstand or ignore the context provided in the input, leading to responses that are off-topic or irrelevant.
  4. Training Data Limitations: The data used to train LLMs might not cover all possible scenarios or might contain biases and inaccuracies, which the model can then replicate in its responses.
  5. Complexity of Language and Concepts: Some topics are complex, nuanced, or require domain-specific knowledge that LLMs might not handle accurately, leading to oversimplified or incorrect responses.
  6. Lack of Real-World Understanding: LLMs don’t have consciousness or real-world experience. They generate responses based on textual patterns rather than understanding or reasoning, which can sometimes lead to nonsensical or illogical outputs.
  7. Improvisation on Limited Information: When faced with queries about topics not well-represented in their training data, LLMs might “improvise,” leading to creative but inaccurate responses.
  8. Temporal Limitations: LLMs are trained up to a certain point in time, so they are not aware of events or developments that occurred after their last training update.

A Personal Example

I designed a popular API with a number of functions, endpoints, and parameters. One of the valid parameters I named is numMinutes, which lets you pass in the number of minutes of data to collect. ChatGPT thinks that my API also has numHours and numDays. Stupid AI. Or maybe I should have added those parameters too. I actually think it is perfectly reasonable for an LLM to assume that an API with numMinutes has sibling parameters; it was following a pattern it detected in my API design. Maybe I should fix that. Maybe I should ask ChatGPT to make up more parameters. Maybe I could create a better experience for my users, or maybe I would be wasting my time. Some hallucinations are bad, though. So, you shouldn’t take anyone’s word about anything, especially not the words of a model that just generates probable text.
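A minimal sketch of the situation. The function name and signature below are hypothetical, invented for illustration; the point is that the real endpoint accepts only numMinutes, so a call using a hallucinated numHours parameter fails.

```python
def read_channel_feed(channel_id, numMinutes=60):
    """Hypothetical endpoint: return the last `numMinutes` of channel data.

    numMinutes is the only duration parameter that actually exists here;
    an LLM pattern-matching the naming style may invent numHours or numDays.
    """
    return {"channel": channel_id, "window_minutes": numMinutes}

read_channel_feed(1417, numMinutes=30)   # valid: the real parameter
try:
    read_channel_feed(1417, numHours=2)  # hallucinated parameter
except TypeError as err:
    print("hallucinated parameter rejected:", err)
```

In a real HTTP API the failure mode is usually quieter: an unknown query parameter is silently ignored, which makes this kind of hallucination harder for users to notice.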

Detecting Hallucinations

This will be a domain in itself. Papers will be published on this topic. Blog posts like this will cover the topic extensively. Search engines will cite sources and cross-reference webpages—the webpages that were once written by humans—for fact-checking purposes. People will talk about how AI made up the capital of France. So, how can we detect hallucinations?

Detecting hallucinations in the outputs of Large Language Models (LLMs) like GPT-4 is an important aspect of ensuring the reliability and accuracy of the information provided. Here are some strategies that can be employed to detect hallucinations:

  1. Cross-Verification with Reliable Sources: One of the most effective ways to detect hallucinations is to cross-check the information provided by the LLM with reliable, up-to-date sources. This could include academic journals, official websites, news outlets, and other trusted sources of information.
  2. Awareness of the Model’s Limitations: Understanding the limitations of LLMs, such as their training cut-off date and the lack of real-world experience or consciousness, can help in assessing the likelihood of a response being a hallucination.
  3. Checking for Internal Consistency: Analyzing the response for internal consistency can be revealing. Hallucinations may often contain contradictions or statements that are logically inconsistent.
  4. Seeking Expert Opinion: For specialized or technical information, consulting with subject matter experts can help verify the accuracy of the LLM’s response.
  5. Using Fact-Checking Tools: Utilizing fact-checking tools and websites can help ascertain the factual accuracy of certain claims or statements made by the LLM.
  6. Monitoring for Overconfidence in Unusual Claims: LLMs may present information with a tone of confidence, even if it’s incorrect. Be cautious with claims that seem unusual, overly specific, or out of context, especially if presented with high confidence.
  7. Analyzing Source Credibility: When LLMs cite sources, it’s important to evaluate the credibility of these sources. Sometimes, LLMs might reference non-existent sources or misattribute information, which can be a sign of hallucinations.
  8. Contextual Understanding: Understanding the context of the query and the response can also help. If the response seems off-topic or is not directly addressing the query, it might be a hallucination.
  9. Community Feedback and Forums: Sometimes, discussing the response with others or checking community forums, especially for technical or niche topics, can provide insights into the accuracy of the information.
  10. Professional Tools and Software: For certain applications, there are professional tools and software designed to analyze and verify data, which can be used to check the validity of an LLM’s output.
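Several of these strategies can be automated. One simple approach to internal-consistency checking is to ask the model the same question several times and flag answers that disagree. The `ask_llm` function below is a stand-in stub with canned answers; a real version would sample a chat model at non-zero temperature.

```python
from collections import Counter

def ask_llm(question, seed):
    # Stand-in stub for a real model call; canned answers simulate
    # a model that occasionally hallucinates a different answer.
    canned = {0: "Paris", 1: "Paris", 2: "Lyon"}
    return canned[seed % 3]

def consistency_check(question, n_samples=3):
    """Sample the model n times; low agreement suggests a hallucination."""
    answers = [ask_llm(question, seed=i) for i in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    return top_answer, agreement

answer, agreement = consistency_check("What is the capital of France?")
print(answer, agreement)  # agreement below 1.0 flags the answer for review
```

This catches only inconsistency, not confident, repeated errors, so it complements rather than replaces cross-verification against reliable sources.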

How to Harness AI-Model Hallucinations

When you detect a hallucination, don’t disregard it right away. Take advantage of it. Should it be there? Should it be made true? Should you use it as a writing prompt or a brainstorming prompt? Here are some ways you can put hallucinations to good use:

  1. Inspiration for Creative Writing: LLM hallucinations can sometimes result in unique, unexpected combinations of ideas or narratives. Writers can use these surprising outputs as prompts or inspiration for creative writing, storytelling, or character development.
  2. Problem-Solving and Brainstorming: In brainstorming sessions, unusual or unconventional ideas can spark new ways of thinking. An LLM’s off-beat or ‘hallucinated’ responses might lead to innovative solutions or approaches to problems that wouldn’t have been considered otherwise.
  3. Artistic Inspiration: Artists can interpret hallucinated outputs as abstract concepts or themes for their work. This can be particularly intriguing in fields like conceptual art, where the unexpected nature of such outputs can lead to thought-provoking art pieces.
  4. Exploring Alternate Realities: Hallucinations can create scenarios that defy conventional logic or reality, offering a glimpse into alternate realities or science fiction-like worlds. This can be a playground for those interested in speculative fiction or exploring ‘what if’ scenarios.
  5. Learning Through Correction: Identifying and understanding why an LLM hallucinates can be a learning opportunity. It can help in understanding the model’s limitations, how language works, and the importance of critical thinking in the age of AI.
  6. Humor and Entertainment: Sometimes, the absurdity or randomness of LLM hallucinations can be amusing. They can be used for light-hearted entertainment or to add humor to content, like in comedic writing or social media posts.
  7. Cognitive Flexibility: Engaging with unconventional or incorrect responses can enhance cognitive flexibility by challenging users to think differently or consider perspectives they wouldn’t normally entertain.
  8. Generating Unexpected Questions: Hallucinations can lead to questions or discussions that might not have arisen from conventional responses, fostering deeper inquiry or debate on certain topics.
  9. Stimulating Imagination in Education: In educational settings, these unpredictable responses can be used to stimulate students’ imagination, encouraging them to think critically about information accuracy, source reliability, and the distinction between fact and fiction.

Are you able to force hallucinations?

It seems hallucinations are sporadic. You might not be able to force one to occur, but you can create conditions that are more likely to produce outputs that are hallucinatory or less accurate. Here are a few methods that can lead to such outcomes:

  1. Asking for Information Outside the LLM’s Training Data: Inquiries about events or developments that occurred after the LLM’s training cutoff may lead to speculative or incorrect responses, resembling a form of hallucination.
  2. Requesting Creative or Fictional Content: When the LLM is tasked with generating creative stories, hypothetical scenarios, or fictional content, its responses are naturally imaginative and not grounded in real-world facts.
  3. Posing Abstract or Philosophical Questions: Queries that are highly abstract or philosophical can elicit responses from the LLM that are interpretative or speculative, which might seem like hallucinations.
  4. Complex or Ambiguous Queries: Submitting complex, ambiguous, or poorly defined queries can result in off-topic or nonsensical responses as the LLM attempts to interpret and respond to the input.
  5. Requesting Information on Hypothetical or Non-Existent Concepts: Questions about things that don’t exist or are purely hypothetical can lead to creative or imagined responses from the LLM.
  6. Combining Unrelated Topics: Requests for the LLM to relate or combine completely unrelated topics might result in unusual or unexpected responses.
  7. Intentionally Misleading or Incorrect Information: If the input contains factual inaccuracies or misleading information, the LLM’s responses may reflect these inaccuracies, leading to outputs that could be considered hallucinatory.

My New Invention: FurraSol—The World’s First Fur-Covered Solar Panel


One of my techniques for generating new ideas is to keep pulling on a thread. This is my pro tip for you. I will describe how I invented FurraSol by “pulling on a thread” with ChatGPT.

Pull-the-Thread Technique for LLM Hallucinations

How to Invent the Fur-Covered Solar Panel Using the Pull-the-Thread Technique

I start by exploring a topic by asking for facts. I am not looking to validate the facts per se; I am looking for something to catch my eye.

Tell me 10 facts about solar power. Keep each fact short.

“Photovoltaic Cells” caught my attention, so I keep pulling on the thread.

Tell me 10 facts about Photovoltaic Cells

I keep going and layering the prompts.

Tell me 10 facts about Semiconductor Material in Photovoltaic Cells used in Solar Panels

While I am doing this, I am learning. I am having a few brainstorms. The process is helping me eliminate possibilities and think of new things.

Tell me 10 facts about Temperature Effect found in Semiconductor Materials used in Photovoltaic Cells for Solar Panels

Keep pulling…

Give me 50 ways to overcome the Temperature Effects found in Semiconductor Materials used in Photovoltaic Cells for Solar Panels
Give me 50 more ways to overcome the Temperature Effects found in Semiconductor Materials used in Photovoltaic Cells for Solar Panels

I am stressing the limits now, but the good stuff, the novel stuff is coming…

Give me 50 more ways to overcome the Temperature Effects found in Semiconductor Materials used in Photovoltaic Cells for Solar Panels

It’s all gibberish at this point, but maybe some of it is plausible. Pick one. I picked “Biomimicry of Animal Cooling Mechanisms” and, you guessed it, kept pulling on the thread.

Tell me how I would invent Biomimicry of Animal Cooling Mechanisms to overcome the Temperature Effects found in Semiconductor Materials used in Photovoltaic Cells for Solar Panels

We are in crazy land, but this is exactly where I want to be.

Demonstrate 10 possible ways how a solar panel would use specialized fur and body structure for heat dissipation to overcome the Temperature Effects found in Semiconductor Materials used in Photovoltaic Cells.

If you don’t like a response, try it again. And, then, keep pulling.

Explore 10 ways to Layer Fur-like Structures on a solar panel to overcome the Temperature Effects found in Semiconductor Materials used in Photovoltaic Cells.
Give me three ways on how I could manufacture this solar panel concept.

Help me visualize Variable Length Layers of Fur-like Structures on a solar panel to overcome the Temperature Effects found in Semiconductor Materials used in Photovoltaic Cells.


Creativity is about combining and transforming ideas. I did all of this work for the solar panel in about 10 minutes, and I learned a lot throughout the process. This concept of using LLM hallucinations as a catalyst for creativity offers a fascinating paradigm shift that you should experience for yourself. These unintended outputs, often perceived as flaws, can in fact become wellsprings of inspiration, challenging us to think outside conventional boundaries. By pulling on a thread of the vast body of data that LLMs encompass, we unlock a realm where the unpredictable becomes a playground for the imagination. Whether in storytelling, problem-solving, or artistic expression, the unique, sometimes surreal twists and turns of LLM hallucinations can spark a creative alchemy. This approach not only broadens our understanding of artificial intelligence but also enriches our creative endeavors, reminding us that the most innovative ideas often emerge from the most unexpected places.

“Pull the thread.”

Hans Scharler
