<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ChatGPT &#8211; About Things | A Hans Scharler Blog</title>
	<atom:link href="https://nothans.com/tag/chatgpt/feed" rel="self" type="application/rss+xml" />
	<link>https://nothans.com</link>
	<description>Life, Comedy, Games, Tech, Marketing, and Community</description>
	<lastBuildDate>Wed, 22 Apr 2026 14:53:57 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://i0.wp.com/nothans.com/wp-content/uploads/2023/02/cropped-settings.png?fit=32%2C32&#038;ssl=1</url>
	<title>ChatGPT &#8211; About Things | A Hans Scharler Blog</title>
	<link>https://nothans.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">114568856</site>	<item>
		<title>ChatGPT Images 2.0 (gpt-image-2) API Tutorial</title>
		<link>https://nothans.com/chatgpt-images-2-0-gpt-image-2-api-tutorial</link>
					<comments>https://nothans.com/chatgpt-images-2-0-gpt-image-2-api-tutorial#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Wed, 22 Apr 2026 13:26:34 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[ChatGPT Images]]></category>
		<category><![CDATA[GenAI]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Generative AI Art]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=5412</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>I have opinions about image generation APIs, and almost none of them are flattering. Every release until now has been a slightly-better pixel machine wrapped in the same three-preset UI. Pick 1024&#215;1024, 1536&#215;1024, or 1024&#215;1536. Cross your fingers on the text. Regenerate if anything looks like soup.</p>


<div class="wp-block-image">
<figure class="aligncenter size-large is-resized"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="750" height="500" data-attachment-id="5413" data-permalink="https://nothans.com/chatgpt-images-2-0-gpt-image-2-api-tutorial/image-105" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image-1.png?fit=1536%2C1024&amp;ssl=1" data-orig-size="1536,1024" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Tutorial lede, generated with gpt-image-2 itself at 1536&amp;#215;1024." data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image-1.png?fit=750%2C500&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image-1.png?resize=750%2C500&#038;ssl=1" alt="GPT-IMAGE-2 tutorial lede, NotHans Blue to Teal Cyan typography on dark charcoal with a grid of thumbnail icons" class="wp-image-5413" style="width:535px;height:auto" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image-1.png?resize=1024%2C683&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image-1.png?resize=300%2C200&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image-1.png?resize=768%2C512&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image-1.png?resize=750%2C500&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image-1.png?resize=420%2C280&amp;ssl=1 420w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image-1.png?resize=1320%2C880&amp;ssl=1 1320w, 
https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image-1.png?w=1536&amp;ssl=1 1536w" sizes="(max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption">Tutorial lede, generated with gpt-image-2 itself at 1536&#215;1024.</figcaption></figure>
</div>


<p>OpenAI shipped&nbsp;<code>gpt-image-2</code>&nbsp;on April 21, 2026, and it is the first image model that actually belongs in a production pipeline. Not because the pictures are prettier. Because the API finally does the things I kept wanting the old one to do.</p>


<h2 class="wp-block-heading" id="what-actually-changed">What actually changed</h2>


<p>Three things, and you can ignore the rest of the announcement.</p>



<p><strong>It reads and writes legible text.</strong>&nbsp;OpenAI claims ~99% accuracy on typography, including CJK and right-to-left scripts. That is a big deal if you have ever tried to generate a product label or a slide deck header and gotten cursed runes back. The old model was a pixel painter. The new one is a pixel painter that can spell.</p>



<p><strong>It thinks before it draws.</strong>&nbsp;There is a reasoning pass baked into the model now, a &#8220;think about the scene, then render&#8221; step. You do not have to configure it. You do not pay a thinking-mode surcharge on the standard API call. It just converges faster. Prompts I used to iterate on three or four times now land on the first or second try.</p>



<p><strong>It edits images.</strong>&nbsp;Real editing, not &#8220;here&#8217;s a new image that vaguely resembles your old one.&#8221; You pass in a picture and a description of what you want changed, and the rest stays put. This is the capability that makes it worth wiring into a pipeline.</p>


<h2 class="wp-block-heading" id="the-minimum-viable-call">The minimum viable call</h2>


<p>If you have curl and an OpenAI API key, this is the whole thing:</p>



<pre class="wp-block-code"><code>curl https://api.openai.com/v1/images/generations \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-image-2",
    "prompt": "A cinematic 4K landscape of an AI data center at twilight",
    "size": "1536x1024",
    "quality": "high"
  }'</code></pre>



<p>You get back base64 image data in&nbsp;<code>data[0].b64_json</code>. Write it to a file. Done.</p>
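<p>The write-it-to-a-file step as a small Python sketch. It assumes only the response shape described above (<code>data[0].b64_json</code>); the stand-in payload here is fake, so swap in a real API response:</p>

```python
import base64
import json

def save_image(response_json: str, path: str) -> int:
    """Decode data[0].b64_json from an images response and write it to disk.
    Returns the number of bytes written."""
    payload = json.loads(response_json)
    image_bytes = base64.b64decode(payload["data"][0]["b64_json"])
    with open(path, "wb") as f:
        f.write(image_bytes)
    return len(image_bytes)

# Stand-in payload; a real response carries a full PNG in b64_json.
fake = json.dumps({"data": [{"b64_json": base64.b64encode(b"not a real png").decode()}]})
save_image(fake, "out.png")
```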



<p>One thing that tripped me up: do not send&nbsp;<code>response_format</code>. The docs say it is not supported, and they mean it. The API rejects the request with HTTP 400. All responses come back base64 only. If you want URLs, host them yourself.</p>


<h2 class="wp-block-heading" id="the-edit-endpoint-is-the-real-unlock">The edit endpoint is the real unlock</h2>


<p>Here is the image I generated first, a photorealistic 4K data center at twilight:</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img data-recalc-dims="1" decoding="async" width="750" height="422" data-attachment-id="5414" data-permalink="https://nothans.com/chatgpt-images-2-0-gpt-image-2-api-tutorial/4k-original" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original-scaled.png?fit=2560%2C1440&amp;ssl=1" data-orig-size="2560,1440" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="4k-original" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original-scaled.png?fit=750%2C422&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original.png?resize=750%2C422&#038;ssl=1" alt="Photorealistic AI data center at twilight with rows of glowing blue server racks receding to a vanishing point, original generation from gpt-image-2" class="wp-image-5414" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original-scaled.png?resize=1024%2C576&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original-scaled.png?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original-scaled.png?resize=768%2C432&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original-scaled.png?resize=1536%2C864&amp;ssl=1 1536w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original-scaled.png?resize=2048%2C1152&amp;ssl=1 2048w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original-scaled.png?resize=750%2C422&amp;ssl=1 750w, 
https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original-scaled.png?resize=480%2C270&amp;ssl=1 480w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original-scaled.png?resize=1320%2C743&amp;ssl=1 1320w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-original-scaled.png?w=2250&amp;ssl=1 2250w" sizes="(max-width: 750px) 100vw, 750px" /></figure>
</div>


<p>Now here is the same image after a single edit call: &#8220;replace the twilight clouds above the servers with a dramatic aurora borealis, ribbons of emerald green, magenta, and electric teal. Keep everything else unchanged.&#8221;</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img data-recalc-dims="1" decoding="async" width="750" height="500" data-attachment-id="5415" data-permalink="https://nothans.com/chatgpt-images-2-0-gpt-image-2-api-tutorial/4k-edited-with-aurora" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-edited-with-aurora.png?fit=1536%2C1024&amp;ssl=1" data-orig-size="1536,1024" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="4k-edited-with-aurora" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-edited-with-aurora.png?fit=750%2C500&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-edited-with-aurora.png?resize=750%2C500&#038;ssl=1" alt="The same AI data center scene after a gpt-image-2 edit call added a green and magenta aurora borealis to the sky, everything else preserved" class="wp-image-5415" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-edited-with-aurora.png?resize=1024%2C683&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-edited-with-aurora.png?resize=300%2C200&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-edited-with-aurora.png?resize=768%2C512&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-edited-with-aurora.png?resize=750%2C500&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-edited-with-aurora.png?resize=420%2C280&amp;ssl=1 420w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-edited-with-aurora.png?resize=1320%2C880&amp;ssl=1 1320w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/4k-edited-with-aurora.png?w=1536&amp;ssl=1 1536w" sizes="(max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption">The same scene after one gpt-image-2 edit call. No mask. Only the sky changed.</figcaption></figure>
</div>


<p>No mask. No Photoshop. The server racks are in the same positions. The orange horizon is preserved. The blue light trails between the racks still flow toward the vanishing point. Only the sky changed.</p>



<p>Try doing that with gpt-image-1. You cannot. The&nbsp;<code>/v1/images/edits</code>&nbsp;endpoint existed before, but the results it gave you were not the kind of thing you shipped to production. This is the feature I was waiting for.</p>



<p>The endpoint accepts multiple reference images, which you address inside the prompt as &#8220;image 1&#8221; and &#8220;image 2&#8221; for compositing. Style transfer, product placement, character relocation, all one API call.</p>
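<p>A sketch of what a multi-image edit call could look like from Python. The endpoint and field names follow the description above (including the <code>image[]</code> convention for multiple uploads); treat them as assumptions and check the API reference before shipping. The helper name is my own:</p>

```python
import os

# Endpoint per the post; verify against the current API reference.
API_URL = "https://api.openai.com/v1/images/edits"

def build_edit_request(prompt, images):
    """Build (form_fields, files) for requests.post(API_URL, data=..., files=...).

    `images` is a list of (filename, raw_bytes) pairs. The prompt refers to
    them in upload order as "image 1", "image 2", and so on."""
    data = {"model": "gpt-image-2", "prompt": prompt}
    files = [("image[]", (name, blob, "image/png")) for name, blob in images]
    return data, files

data, files = build_edit_request(
    "Place the product from image 2 on the desk in image 1. Keep everything else unchanged.",
    [("scene.png", b"<png bytes>"), ("product.png", b"<png bytes>")],
)
# Then, with the requests library installed:
# resp = requests.post(API_URL, data=data, files=files,
#                      headers={"Authorization": "Bearer " + os.environ["OPENAI_API_KEY"]})
```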


<h2 class="wp-block-heading" id="three-creative-moves-the-marketing-post-does-not-tell-you-about">Three creative moves the marketing post does not tell you about</h2>


<p><strong>Aspect ratios no one else gives you.</strong>&nbsp;The preset list is short, but&nbsp;<code>size</code>&nbsp;accepts custom values. Both dimensions must be multiples of 16, max edge 3840, aspect ratio up to 3:1, and total pixels between 655,360 and 8,294,400. Round each edge to a multiple of 16 and that range covers Twitter cards at 1200&#215;624, Instagram stories at 1088&#215;1920, blog heroes at 1920&#215;1088, and full 4K landscapes at 3840&#215;2160. No cropping, no upscaling, no extra tooling.</p>



<p>One caveat the docs bury: anything above 2,560&#215;1,440 is officially &#8220;experimental.&#8221; It works. I generated a 4K image for this post. But OpenAI is not promising an SLA on it yet, so budget for occasional failures in production.</p>
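<p>Those constraints are easy to check client-side before you spend tokens. A minimal validator, assuming the rules exactly as stated above:</p>

```python
def valid_size(width: int, height: int) -> bool:
    """Check a custom size against the constraints described above:
    multiples of 16, max edge 3840, aspect ratio up to 3:1, and
    total pixels between 655,360 and 8,294,400."""
    if width % 16 or height % 16:
        return False
    if max(width, height) > 3840:
        return False
    if max(width, height) / min(width, height) > 3:
        return False
    return 655_360 <= width * height <= 8_294_400

assert valid_size(1536, 1024)      # default landscape preset
assert valid_size(3840, 2160)      # full 4K, exactly the pixel ceiling
assert not valid_size(1200, 628)   # 628 is not a multiple of 16; use 624
assert not valid_size(4096, 2160)  # edge over 3840
```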



<p><strong>Batch consistency.</strong>&nbsp;The&nbsp;<code>n</code>&nbsp;parameter goes up to 8, and the model keeps characters and objects consistent across the set. For a product shot or a children&#8217;s book page, one call gives you eight variations that actually share visual DNA. Eight variations at medium quality costs about thirty cents. That is a lot cheaper than eight separate prompt-engineering sessions.</p>



<p><strong>Reasoning as a debugging tool.</strong>&nbsp;Because the model thinks before drawing, iteration feels different. Vague prompts still produce vague images, but specific prompts land harder. I stopped writing six-paragraph mega-prompts and started writing three-sentence scene descriptions with hex colors and composition direction. The output got better.</p>


<h2 class="wp-block-heading" id="gotchas">Gotchas</h2>


<p>Things I wish the docs had told me louder:</p>



<ul class="wp-block-list">
<li><strong>No transparent backgrounds.</strong> If you need a PNG with alpha for icon work, you still reach for <code>gpt-image-1</code>. Route by use case.</li>



<li><strong>Masks are prompt-guided, not pixel-exact.</strong> If you are coming from Stable Diffusion, this will feel wrong. The mask tells the model which region to focus on. The model decides how to blend.</li>



<li><strong>C2PA watermarks are on by default.</strong> Every image ships with provenance metadata. Useful for trust, relevant if you were hoping to redistribute without attribution.</li>



<li><strong>Streaming partials cost extra.</strong> Each <code>partial_images</code> frame adds 100 output tokens. Fine for prototyping a UI. Expensive at scale.</li>



<li><strong>Pricing is per token.</strong> $8 per million input tokens, $30 per million output tokens, with the usual caching discount. A medium 1024&#215;1024 lands around four cents. A high-quality 4K lands near eighty cents. The calculator on the docs page will save you some math.</li>
</ul>
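<p>If you would rather not open the calculator, the rates above reduce to one line of arithmetic. The token counts below are illustrative inputs I back-solved from the per-image prices, not published figures:</p>

```python
# Rates quoted above; token counts per image are illustrative assumptions.
INPUT_RATE = 8 / 1_000_000    # dollars per input token
OUTPUT_RATE = 30 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens_per_image: int, n: int = 1) -> float:
    """Dollar cost for one call producing n images from a single prompt."""
    return input_tokens * INPUT_RATE + n * output_tokens_per_image * OUTPUT_RATE

# ~4 cents per medium image implies roughly 1,300 output tokens each,
# so eight variations land near the thirty-cent figure mentioned earlier:
print(round(estimate_cost(100, 1_300, n=8), 2))
```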


<h2 class="wp-block-heading" id="where-to-take-it">Where to take it</h2>


<p>If you are already using image generation in an app, switching is low-risk. Change the model string, drop&nbsp;<code>response_format</code>&nbsp;if you were sending it, and audit for transparent-background assumptions. Your latency will drop. Your text will be legible.</p>



<p>If you are not using image generation in an app, the editing endpoint is the reason to start. Every product that has ever wanted &#8220;make this photo match our brand&#8221; can now do it with three lines of code.</p>



<p>I rebuilt my cartoon pipeline around the new editing flow in about an afternoon. The first draft of this post was going to be a benchmark comparison. Then I looked at the aurora edit and realized there was nothing to benchmark. Either your tool can do it or it cannot.</p>



<p><code>gpt-image-2</code>&nbsp;can.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/chatgpt-images-2-0-gpt-image-2-api-tutorial/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5412</post-id>	</item>
		<item>
		<title>Preparing for GPT-5: New Prompting Mindset for Advanced Models</title>
		<link>https://nothans.com/preparing-for-gpt-5-new-prompting-mindset-for-advanced-models</link>
					<comments>https://nothans.com/preparing-for-gpt-5-new-prompting-mindset-for-advanced-models#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Thu, 07 Aug 2025 15:01:21 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[GenAI]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[GPT-5]]></category>
		<category><![CDATA[Metaprompting]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[Prompting]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=5171</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>Today, OpenAI is going to <a href="https://x.com/OpenAI/status/1953139020231569685" target="_blank" rel="noreferrer noopener">announce</a> the GPT-5 family of Generative AI models.</p>



<ul class="wp-block-list">
<li><strong>gpt-5</strong> will be designed for logic and multi-step tasks.</li>



<li><strong>gpt-5-mini</strong> is a lightweight version for cost-sensitive applications.</li>



<li><strong>gpt-5-nano</strong> is optimized for speed and ideal for applications requiring low latency.</li>



<li><strong>gpt-5-chat</strong> is designed for advanced, natural, multimodal, and context-aware conversations for enterprise applications.</li>
</ul>



<p>As large language models and other GenAI models advance, you need to shift your mindset on how to use them. Here&#8217;s how I leverage advanced models and how I have shifted my prompting.</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="500" data-attachment-id="5172" data-permalink="https://nothans.com/preparing-for-gpt-5-new-prompting-mindset-for-advanced-models/image-61" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2025/08/image.png?fit=1536%2C1024&amp;ssl=1" data-orig-size="1536,1024" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Shifting AI model prompting mindset" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2025/08/image.png?fit=750%2C500&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2025/08/image.png?resize=750%2C500&#038;ssl=1" alt="" class="wp-image-5172" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2025/08/image.png?resize=1024%2C683&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/08/image.png?resize=300%2C200&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/08/image.png?resize=768%2C512&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/08/image.png?resize=750%2C500&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/08/image.png?resize=420%2C280&amp;ssl=1 420w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/08/image.png?resize=1320%2C880&amp;ssl=1 1320w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/08/image.png?w=1536&amp;ssl=1 1536w" sizes="auto, (max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption">Shifting AI Model Prompting 
Mindset</figcaption></figure>
</div>

<h4 class="wp-block-heading" id="1-structural-integrity-crafting-clear-and-organized-prompts">1. Structural Integrity: Crafting Clear and Organized Prompts</h4>


<p>Think of your prompt as a blueprint for the AI model. A well-structured prompt ensures the model understands your intent and constraints effectively.</p>



<ul class="wp-block-list">
<li><strong>Guardrails and Edge Cases:</strong> Don&#8217;t just describe the ideal scenario; also consider the exceptions. Anticipate potential issues or deviations and explicitly instruct the AI on how to handle them. For example, if you&#8217;re asking for a summary of articles, specify what to do if an article is paywalled or inaccessible. This proactive approach leads to more robust and reliable outputs.</li>



<li><strong>Strategic Context Positioning:</strong> Where you place information within your prompt can influence the AI model&#8217;s attention.
<ul class="wp-block-list">
<li><strong>Front-load critical instructions (first 10%):</strong> Make your core request and essential rules immediately clear.</li>



<li><strong>Middle ground for context and data:</strong> Provide necessary background information, examples, or data in the central part of the prompt.</li>



<li><strong>Reinforce key constraints at the end:</strong> Briefly reiterate any crucial limitations or desired formats to leave a lasting impression.</li>
</ul>
</li>



<li><strong>The Power of &#8220;Don&#8217;t&#8221;:</strong> Surprisingly, providing <strong>negative examples</strong> – explicitly stating what you <em>don&#8217;t</em> want the AI model to do – can be more effective than solely focusing on positive examples. By illustrating failure modes, you guide the model away from undesirable outputs.</li>
</ul>


<h4 class="wp-block-heading" id="2-evidencebased-techniques-leveraging-the-models-strengths">2. Evidence-Based Techniques: Leveraging the Model&#8217;s Strengths</h4>


<p>Move beyond simple requests and employ techniques that tap into the AI model&#8217;s reasoning and self-awareness:</p>



<ul class="wp-block-list">
<li><strong>Test for Self-Consistency:</strong> For critical outputs, ask the AI model to generate multiple responses to the same prompt. Analyzing the consistency across these responses can give you a better gauge of the reliability of the information.</li>



<li><strong>Unleash &#8220;Program of Thought&#8221;:</strong> For tasks involving logic, math, or code generation, explicitly instruct the model to &#8220;solve this by writing a program&#8221; or &#8220;show your work step-by-step using calculations.&#8221; This encourages the AI to leverage its tool-use capabilities for more accurate results.</li>



<li><strong>&#8220;Plan and Solve&#8221; for Complex Tasks:</strong> Before asking the AI model to execute a complex task, request it to first outline a step-by-step plan. This allows you to review the proposed approach, identify potential flaws in its logic, and guide it towards a more effective strategy <em>before</em> the final output is generated.</li>
</ul>
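<p>The self-consistency check above is a few lines of code around whatever chat call you already make. A sketch with a stubbed model function standing in for the real API call:</p>

```python
from collections import Counter

def self_consistent_answer(ask, prompt: str, n: int = 5):
    """Sample the same prompt n times and majority-vote the answers.
    `ask` is any callable returning the model's answer as a string --
    here a stand-in for a real chat-completion call."""
    answers = [ask(prompt) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n  # answer plus an agreement score

# Demo with a deterministic stub in place of a real model:
responses = iter(["42", "42", "41", "42", "42"])
answer, agreement = self_consistent_answer(lambda p: next(responses), "6 * 7?")
print(answer, agreement)
```

A low agreement score is the signal to rewrite the prompt, not to trust the winning answer.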


<h4 class="wp-block-heading" id="3-the-art-of-metaprompting-talking-to-the-ai-model-about-itself">3. The Art of Metaprompting: Talking to the AI Model About Itself</h4>


<p>A newer concept you should consider is <strong>metaprompting</strong>: prompting the AI model to reflect on its own capabilities and limitations. Since advanced models possess a significant understanding of how they work, you can leverage that knowledge to improve your results.</p>



<ul class="wp-block-list">
<li><strong>The Self-Improvement Loop:</strong> Simply ask: &#8220;Here&#8217;s my current prompt: [your prompt]. How would you improve this prompt to get better results from you?&#8221; The AI model can often provide valuable suggestions for clarity, specificity, or even the inclusion of techniques you haven&#8217;t considered.</li>



<li><strong>Checking for Uncertainty:</strong> Proactively ask: &#8220;What parts of this request are unclear or ambiguous? What assumptions are you making? What additional information would help you execute this prompt with more accuracy?&#8221; This can help uncover potential misunderstandings and prevent overconfident, yet inaccurate, responses.</li>



<li><strong>Discovering Hidden Potential:</strong> Inquire: &#8220;How would you approach this if you had no constraints? What would be your ideal process? What tools or information would help you?&#8221; This can reveal the model&#8217;s full potential and suggest innovative approaches you might not have thought of.</li>



<li><strong>Demanding Explainability:</strong> Ask: &#8220;Explain your reasoning step by step. What parts are you most or least confident about?&#8221; Understanding the AI&#8217;s thought process can help you diagnose issues and build trust in its output.</li>



<li><strong>The Socratic Approach:</strong> Use probing questions like &#8220;Why did you choose that approach?&#8221; or &#8220;What alternatives did you consider?&#8221; to encourage deeper reflection and uncover underlying assumptions in the AI&#8217;s reasoning.</li>
</ul>



]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/preparing-for-gpt-5-new-prompting-mindset-for-advanced-models/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5171</post-id>	</item>
		<item>
		<title>I&#8217;ve Moved from a Futurist to a Near-Futurist&#8230; to a Todayist.</title>
		<link>https://nothans.com/ive-moved-from-a-futurist-to-a-near-futurist-to-a-todayist</link>
					<comments>https://nothans.com/ive-moved-from-a-futurist-to-a-near-futurist-to-a-todayist#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Wed, 02 Apr 2025 20:53:12 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=5096</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>When you don&#8217;t know what to call yourself, you used to be able to call yourself a futurist. This was a nice, tidy brush to paint yourself with. Over the decades, I have had a lot of ideas about what&#8217;s happening in computing and technology. And, if you get my attention, <strong>I will let you know</strong> what they are. Beyond the OS, for example, I made some good predictions and bets.</p>



<p>But today, I am thinking of myself as more of a &#8220;Todayist,&#8221; as that feels more accurate.</p>



<p>AI is moving faster than I thought. Predicting next week feels like sci-fi; predicting tomorrow feels ambitious. Frankly, anticipating anything beyond today is like guessing the plot of Stranger Things season 5.</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="912" data-attachment-id="5097" data-permalink="https://nothans.com/ive-moved-from-a-futurist-to-a-near-futurist-to-a-todayist/image-54" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2025/04/image.png?fit=1047%2C1273&amp;ssl=1" data-orig-size="1047,1273" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Todayist Cartoon" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2025/04/image.png?fit=750%2C912&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2025/04/image.png?resize=750%2C912&#038;ssl=1" alt="" class="wp-image-5097" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2025/04/image.png?resize=842%2C1024&amp;ssl=1 842w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/04/image.png?resize=247%2C300&amp;ssl=1 247w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/04/image.png?resize=768%2C934&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/04/image.png?resize=750%2C912&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/04/image.png?w=1047&amp;ssl=1 1047w" sizes="auto, (max-width: 750px) 100vw, 750px" /></figure>
</div>


<p>Forget the grand future; I&#8217;m just trying to digest <em>today&#8217;s</em> AI news before tomorrow&#8217;s headlines. At this rate, &#8220;Hourlyist&#8221; is probably next. Just focusing on the now&#8230; until the <em>next</em> now arrives.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/ive-moved-from-a-futurist-to-a-near-futurist-to-a-todayist/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5096</post-id>	</item>
		<item>
		<title>Empathic AI Prompting for Programmers: Treating Your AI as a Coding Buddy</title>
		<link>https://nothans.com/empathic-ai-prompting-for-programmers-treating-your-ai-as-a-coding-buddy</link>
					<comments>https://nothans.com/empathic-ai-prompting-for-programmers-treating-your-ai-as-a-coding-buddy#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Wed, 26 Mar 2025 19:54:00 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Software]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[programming]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=5089</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>I want to share something that has helped me shift my viewpoint when I use AI tools to program. I am calling it empathic AI prompting. This is about treating your AI assistant (GitHub Copilot, or AI models via Cursor or Windsurf) as a collaborator rather than a code vending machine. It&#8217;s a simple mindset shift that has made my interactions with AI both more productive and more enjoyable. I have also been using it when talking to colleagues and friends. I know people who cut off ChatGPT based on their experience in January 2023. A lot has changed since then. Others tell me AI is bad at (insert whatever it may be)&#8230; I tell them to think of it this way: AI is bad at that <em>right now</em>.</p>



<p>Empathic AI prompting is all about understanding and empathizing with your AI&#8217;s responses. Instead of getting frustrated when the AI gives a suboptimal answer, imagine <em>why</em> it might have produced that output. If you’ve ever done pair programming or mentored a junior developer, the idea is similar: when they make a mistake or an odd suggestion, you don’t just yell “wrong!” and walk away. You try to see what assumptions or misunderstandings led to that result, and then you guide them. With AI, you should do the same – treat it like a well-intentioned colleague who can email HR. </p>



<p>Why care? This mindset turns AI into a true coding partner and helps you understand more about programming. It might sound counterintuitive at first, but I am betting that this is what the future of programming looks like. It will be a required skill.</p>



<p>An article over at <a href="https://huggingface.co/blog/luigi12345/iterative-prompting#:~:text=Iterative%20prompting%20is%20more%20than,ready">HuggingFace</a> nudged me toward this new way of thinking&#8230; Luigi says, “iterative prompting is a methodology for transforming AI into a coding partner&#8230; The key is to think of AI as a collaborator – guide it, refine its responses, and iterate until the desired results are achieved.” Another programmer, Steve Yegge from <a href="https://sourcegraph.com/blog/chat-oriented-programming-in-action#:~:text=With%20the%20advent%20of%20LLMs%2C,of%20the%20software%20creation%20lifecycle">Sourcegraph</a>, even gave it a new name: <strong>CHOP (Chat-Oriented Programming)</strong>, to describe this new style of coding “via iterative prompt refinement”.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img data-recalc-dims="1" loading="lazy" decoding="async" width="500" height="500" data-attachment-id="5091" data-permalink="https://nothans.com/empathic-ai-prompting-for-programmers-treating-your-ai-as-a-coding-buddy/image-7-21" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2025/03/image-7.png?fit=500%2C500&amp;ssl=1" data-orig-size="500,500" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Ted Lasso meme for Empathic AI Prompting for Programmers:" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2025/03/image-7.png?fit=500%2C500&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2025/03/image-7.png?resize=500%2C500&#038;ssl=1" alt="" class="wp-image-5091" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2025/03/image-7.png?w=500&amp;ssl=1 500w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/03/image-7.png?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/03/image-7.png?resize=150%2C150&amp;ssl=1 150w" sizes="auto, (max-width: 500px) 100vw, 500px" /></figure>
</div>


<p>Here are some of my tips to build this skill:</p>



<ul class="wp-block-list">
<li><strong>Be curious, not critical:</strong> If the AI’s output is off, ask yourself <em>why</em> it might have thought that was okay.</li>



<li><strong>Look for clues in the response:</strong> Often the wrong answer reveals how the AI interpreted your prompt. Maybe your question was ambiguous or missing detail.</li>



<li><strong>Refine and iterate:</strong> Using those insights, clarify your prompt or provide more context, then try again.</li>



<li><strong>Collaborate step-by-step:</strong> Just like you’d work through a problem with a human, iteratively build the solution with the AI, rather than expecting perfect results first try.</li>
</ul>
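<p>To make those tips concrete, here is a minimal sketch of iterative prompting as a loop. Everything in it is hypothetical: <code>toy_model</code> stands in for a real model call, and the accept and refine callbacks stand in for your own judgment, but the call, inspect, refine, retry shape is the whole idea.</p>

<pre class="wp-block-code"><code>def iterate_prompt(model, prompt, accept, refine, max_rounds=5):
    """Call the model, inspect the output, refine the prompt, retry."""
    for round_no in range(1, max_rounds + 1):
        output = model(prompt)
        if accept(output):
            return output, round_no
        # Be curious, not critical: fold the clue from the bad output
        # back into the next prompt instead of retrying blindly.
        prompt = refine(prompt, output)
    return output, max_rounds

# Toy stand-ins so the loop runs without a real API.
def toy_model(prompt):
    # Answers off-topic unless the prompt spells out what "relevant" means.
    return "recommender threads" if "topic:" in prompt else "python news"

def is_on_topic(output):
    return "recommender" in output

def add_context(prompt, bad_output):
    # The wrong answer revealed the ambiguity, so clarify and retry.
    return prompt + " (topic: content-based recommenders, not general news)"

answer, rounds = iterate_prompt(toy_model, "pick relevant posts", is_on_topic, add_context)
print(answer, rounds)  # the toy model converges after one refinement
</code></pre>

<p>The <code>refine</code> step is where the empathy lives: each failed round teaches you something about how your prompt was interpreted.</p>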



<p>This approach not only leads to better results, but also turns prompting into a learning experience. I often find that by analyzing an AI’s mistake, I understand the problem better myself! It’s a two-way street of learning.</p>



<p>Recently, I was working on a content-based recommendation feature for my blog. Whenever I publish a new blog post, I want to automatically recommend a few relevant forum posts from our developer community forums. Rather than writing the entire recommendation engine from scratch (with vectors, TF-IDF, and maybe cosine similarity), I decided to build this with Windsurf and its AI model&#8217;s help. Both the forum posts and blog posts are textual, and a large language model might do a decent job of understanding which ones are related.</p>
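<p>For contrast, here is roughly what the from-scratch version would look like: a tiny TF-IDF and cosine-similarity ranker in plain Python. The one-line blog summary, the stopword list, and the tokenizer are simplifications I made for illustration; the forum titles are the ones from my experiment.</p>

<pre class="wp-block-code"><code>import math
import re
from collections import Counter

STOPWORDS = {"a", "the", "in", "to", "and", "for", "with", "of", "vs", "how", "on"}

def tokenize(text):
    # Keep hyphenated and dotted terms like "tf-idf" and "3.9" intact.
    return [t for t in re.findall(r"[a-z0-9]+(?:[.+\-][a-z0-9]+)*", text.lower())
            if t not in STOPWORDS]

def tfidf(docs):
    # docs: list of token lists -> one {token: weight} vector per doc.
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    return [{t: (c / len(doc)) * (math.log((1 + n) / (1 + df[t])) + 1)
             for t, c in Counter(doc).items()} for doc in docs]

def cosine(a, b):
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    norm = math.sqrt(sum(w * w for w in a.values())) * math.sqrt(sum(w * w for w in b.values()))
    return dot / norm if norm else 0.0

blog = ("Building a content-based recommendation system in Python using "
        "TF-IDF vectorization and cosine similarity to find related posts")
forum_posts = [
    "How to implement content-based recommender with TF-IDF",
    "Collaborative filtering vs content-based: pros and cons",
    "Python 3.9 release - new features and improvements",
    "Best laptop for programming in 2025",
    "Calculating cosine similarity for text matching in Python",
]

vecs = tfidf([tokenize(blog)] + [tokenize(t) for t in forum_posts])
ranked = sorted(zip(forum_posts, (cosine(vecs[0], v) for v in vecs[1:])),
                key=lambda pair: pair[1], reverse=True)
for title, score in ranked[:3]:
    print(f"{score:.3f}  {title}")
</code></pre>

<p>On this toy corpus the TF-IDF and cosine-similarity threads rank at the top, while the laptop thread shares no meaningful terms with the post at all.</p>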



<p>My first prompt to Claude 3.5 Sonnet: <em>My new blog post is about building content-based recommenders in Python, and I have a list of forum post titles. The task: pick the top 3 forum threads that would be most relevant to someone reading my blog post.</em></p>


<h3 class="wp-block-heading" id="the-initial-prompt-and-a-surprise-in-the-output">The Initial Prompt (and a Surprise in the Output)</h3>


<p>Here&#8217;s the initial code and prompt I gave to the AI assistant (in a Python-like pseudo-code format for illustration):</p>



<pre class="wp-block-code"><code>blog_post = """\
Building a Content-Based Recommendation System in Python.
In this post, we explore how to recommend relevant content using textual similarity.
We use techniques like TF-IDF vectorization and cosine similarity to find related posts.
"""

forum_posts = &#91;
    "How to implement content-based recommender with TF-IDF",
    "Collaborative filtering vs content-based: pros and cons",
    "Python 3.9 release - new features and improvements",
    "Best laptop for programming in 2025",
    "Calculating cosine similarity for text matching in Python"
]

# Initial prompt to AI
prompt = f"""
Given the new blog post:\n\"\"\"\n{blog_post}\n\"\"\"\n
and the following forum post titles:\n- {forum_posts&#91;0]}\n- {forum_posts&#91;1]}\n- {forum_posts&#91;2]}\n- {forum_posts&#91;3]}\n- {forum_posts&#91;4]}\n
Please list the top 3 forum posts that are most relevant to the blog post's content.
"""
response = call_ai_model(prompt)  # Hypothetical function to get AI result
print(response)
</code></pre>



<p>When I ran this, the AI <em>did</em> return three forum posts, but the result made me raise an eyebrow:</p>



<pre class="wp-block-code"><code>1. **Python 3.9 release - new features and improvements** – (Talk about Python updates)
2. **Collaborative filtering vs content-based: pros and cons** – (Discussion comparing recommendation approaches)
3. **How to implement content-based recommender with TF-IDF** – (Guide on content-based recommendations)
</code></pre>



<p>At first glance, the AI’s answer isn’t horrible – it found two very relevant posts (the ones about recommender systems). But it also picked <strong>“Python 3.9 release – new features”</strong> as the top recommendation, which is clearly <em>off-topic</em>. A blog reader interested in content-based recommendations probably doesn’t care about the general Python 3.9 release notes. So what happened?</p>


<h4 class="wp-block-heading" id="empathic-analysis-of-the-ais-output">Empathic Analysis of the AI’s Output</h4>


<p>Instead of cursing my AI assistant for being “dumb,” I put on my Ted Lasso hat or maybe mustache. I asked myself: <em>Why might the AI have thought the Python 3.9 forum post was relevant?</em></p>



<p>After a moment, it clicked: the AI probably latched onto the keyword “Python” in both the blog and that forum title. My prompt didn’t stress <em>what</em> aspect of the content to match on, so the model might have defaulted to some simplistic notion of relevance, like overlapping keywords. The blog post text mentions “Python” (since the tutorial is in Python), and the forum post about Python 3.9 obviously mentions “Python” too – bingo, a naive content match! Meanwhile, the forum post about “Calculating cosine similarity for text matching” didn’t explicitly scream “recommender system” to the model, so it got ignored in favor of the more superficially similar Python thread.</p>



<p>In other words, the AI wasn’t <em>trying</em> to be obtuse; it was following my instructions as best it could. The mistake was actually <em>understandable given the prompt</em>. I realized I needed to clarify what “relevant” means in this context. This is the essence of empathic prompting: I learned something about how the AI interpreted my request, and now I can adjust my approach to guide it better.</p>
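<p>You can reproduce this failure mode with a deliberately naive relevance score. This is my reconstruction of what &#8220;overlapping keywords&#8221; would compute, not what the model actually does internally:</p>

<pre class="wp-block-code"><code>import re

def words(text):
    # Keep terms like "tf-idf" and "3.9" intact; no stopword removal on purpose.
    return set(re.findall(r"[a-z0-9]+(?:[.+\-][a-z0-9]+)*", text.lower()))

blog_post = (
    "Building a Content-Based Recommendation System in Python. "
    "In this post, we explore how to recommend relevant content using textual similarity. "
    "We use techniques like TF-IDF vectorization and cosine similarity to find related posts."
)
forum_posts = [
    "How to implement content-based recommender with TF-IDF",
    "Collaborative filtering vs content-based: pros and cons",
    "Python 3.9 release - new features and improvements",
    "Best laptop for programming in 2025",
    "Calculating cosine similarity for text matching in Python",
]

blog_words = words(blog_post)
# Naive relevance: count the words a title shares with the blog post.
overlap = {title: len(blog_words.intersection(words(title))) for title in forum_posts}
for title, score in sorted(overlap.items(), key=lambda pair: pair[1], reverse=True):
    print(score, title)
</code></pre>

<p>By raw word overlap, the off-topic Python 3.9 thread ties the genuinely relevant collaborative-filtering thread, and the shared word &#8220;Python&#8221; is doing most of the work. A model leaning on surface similarity is behaving reasonably given a vague prompt.</p>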


<h3 class="wp-block-heading" id="refining-the-prompt-for-a-better-result">Refining the Prompt for a Better Result</h3>


<p>I wrote a new prompt. I wanted to tell the AI to focus on topical relevance (i.e., content about recommendation systems), and to ignore posts that are only tangentially related (like generic Python news). I also decided to explicitly mention the technique <strong>cosine similarity</strong> to nudge it toward that forum post it ignored. Essentially, I gave the AI a bit more context and a gentle steer in the right direction.</p>



<p>Here&#8217;s the refined prompt and the code:</p>



<pre class="wp-block-code"><code># Refined prompt with more guidance
refined_prompt = f"""
We have a blog post about building a content-based recommendation system (using TF-IDF and cosine similarity in Python).
The blog post is about recommending relevant content based on text similarity.

Consider the following forum post titles:
- {forum_posts&#91;0]}
- {forum_posts&#91;1]}
- {forum_posts&#91;2]}
- {forum_posts&#91;3]}
- {forum_posts&#91;4]}

Now, **pick the top 3 forum posts that are most relevant to the blog post's topic (content-based recommendations)**.
Focus on posts about recommendation systems or text similarity (TF-IDF, cosine similarity), *not* general Python news or unrelated topics.
"""
response = call_ai_model(refined_prompt)
print(response)
</code></pre>



<p>And the AI’s new response:</p>



<pre class="wp-block-code"><code>1. **How to implement content-based recommender with TF-IDF** – A forum thread directly about content-based recommendation using TF-IDF, which is exactly the technique discussed in the blog.
2. **Collaborative filtering vs content-based: pros and cons** – A discussion comparing content-based methods with collaborative filtering, which complements the blog post's topic.
3. **Calculating cosine similarity for text matching in Python** – A thread about cosine similarity (a core part of content-based recommendations), very relevant to the blog post.
</code></pre>



<p>The irrelevant Python 3.9 post is gone, replaced by the cosine similarity thread that we actually want. Notice how I didn&#8217;t tell the AI explicitly which ones to choose; I simply guided it by clarifying what &#8220;relevant&#8221; really means here. The AI did the rest on its own once it understood my intentions better.</p>



<p>A little empathy went a long way: by imagining what the AI was thinking before and adjusting my wording, I got a much better result. In a way, it felt like I <strong>debugged my prompt</strong> by examining the AI&#8217;s &#8220;mistake&#8221; and then fixing the <em>cause</em> of that mistake.</p>


<h2 class="wp-block-heading" id="let-me-try-explaining-this-again">Let Me Try Explaining This Again</h2>


<ul class="wp-block-list">
<li><strong>Think of the AI as a teammate.</strong> It’s not a perfect genius that will always get everything right, nor is it a dumb tool to be dismissed when it errs. Like any human collaborator, it has its own way of “thinking” (albeit a very alien, statistical one!). Engage with it, guide it, and be patient when it needs a second try. As I reminded myself, <em>treat AI-generated code or answers as you would a human junior developer&#8217;s output.</em> You’d review it, figure out its reasoning, and help improve it – do the same with the AI output.</li>



<li><strong>Use mistakes as insights.</strong> When the AI goes off track, don’t just reject the output – inspect it. Is there a pattern to the misunderstanding? Maybe <strong>your question was ambiguous</strong> or assumed knowledge the AI didn’t have. In our case, “relevant” was too vague without context. Often, an AI’s “mistake” is really a mirror held up to our prompt: it reflects how our request could be interpreted. This is incredibly useful for honing our own communication and problem decomposition skills.</li>



<li><strong>Iterate and refine.</strong> The first answer is rarely the final answer. Professional developers know that software design is an iterative process – turns out, prompt design is iterative too. You might start with a rough prompt, see what the AI does, then refine the prompt multiple times. This iterative loop is at the heart of chat-oriented programming (CHOP) and effective prompt engineering. It’s not cheating or a failure to redo the prompt; it’s part of the workflow. Each iteration is a chance to incorporate what you learned from the last output.</li>



<li><strong>Keep a casual, curious mindset.</strong> One thing I enjoy about working with an AI is that I can think out loud. If the AI says something weird, I can literally ask it “why did you do that?” or I can just introspect and adjust. Treating it like a game or puzzle is a bit of fun – “hmm, the AI misunderstood me, how can I say this differently?” Maintaining that curiosity (and even humor) makes the process less frustrating. Remember, the AI has a lot more patience than we do and doesn’t mind if you rephrase the question five times before you get it right.</li>
</ul>



<p>Empathic prompting is a skill that takes time to develop. You have to consistently reset your expectations as models get better and IDEs improve, but it pays off.</p>



<p>Articles and videos to read, watch, or have an AI summarize:</p>



<ul class="wp-block-list">
<li><a href="https://medium.com/@rajanikethreddy/the-right-way-to-use-ai-for-coding-and-how-it-transformed-my-workflow-75eed29526f4">The Right Way to Use AI for Coding (And How It Transformed My Workflow)</a></li>



<li><a href="https://sourcegraph.com/blog/chat-oriented-programming-in-action#:~:text=With%20the%20advent%20of%20LLMs%2C,of%20the%20software%20creation%20lifecycle">Chat-oriented programming (CHOP) in action</a></li>



<li><a href="https://huggingface.co/blog/luigi12345/iterative-prompting#:~:text=Iterative%20prompting%20is%20more%20than,ready" data-type="link" data-id="https://huggingface.co/blog/luigi12345/iterative-prompting#:~:text=Iterative%20prompting%20is%20more%20than,ready">Mastering Iterative Prompting for Optimized AI Code Generation</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/empathic-ai-prompting-for-programmers-treating-your-ai-as-a-coding-buddy/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5089</post-id>	</item>
		<item>
		<title>OpenAI: GPT-4.5 has entered the Chat.</title>
		<link>https://nothans.com/openai-gpt-4-5-has-entered-the-chat</link>
					<comments>https://nothans.com/openai-gpt-4-5-has-entered-the-chat#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Thu, 27 Feb 2025 20:04:13 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[GPT-4.5]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=5066</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>Today. It is another day. And there is another AI announcement.</p>



<p>At 3 pm EST, OpenAI will announce its latest large language model, GPT-4.5.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img data-recalc-dims="1" loading="lazy" decoding="async" width="511" height="424" data-attachment-id="5067" data-permalink="https://nothans.com/openai-gpt-4-5-has-entered-the-chat/image-3-28" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2025/02/image-3.png?fit=511%2C424&amp;ssl=1" data-orig-size="511,424" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="GPT-4.5 from OpenAI" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2025/02/image-3.png?fit=511%2C424&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2025/02/image-3.png?resize=511%2C424&#038;ssl=1" alt="" class="wp-image-5067" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2025/02/image-3.png?w=511&amp;ssl=1 511w, https://i0.wp.com/nothans.com/wp-content/uploads/2025/02/image-3.png?resize=300%2C249&amp;ssl=1 300w" sizes="auto, (max-width: 511px) 100vw, 511px" /><figcaption class="wp-element-caption">GPT-4.5 has entered the Chat.</figcaption></figure>
</div>


<p>Here&#8217;s the live stream announcing GPT-4.5:</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/cfRYp0nItZ8?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/openai-gpt-4-5-has-entered-the-chat/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5066</post-id>	</item>
		<item>
		<title>It&#8217;s Shipmas: The Summary of the 12 Days of OpenAI 2024</title>
		<link>https://nothans.com/the-summary-of-the-12-days-of-openai-2024</link>
					<comments>https://nothans.com/the-summary-of-the-12-days-of-openai-2024#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Fri, 20 Dec 2024 21:20:30 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=4995</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>Over 12 days (12 business days), OpenAI held live streams where they dropped news, releases, projects, partnerships, and holiday-themed dad jokes. They claim to be the first company to do 12 days of releases. It was an interesting way to make significant announcements and market to a big audience of developers and early adopters. If you missed the 12 days, here are the summaries and videos you should pay attention to.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><a href="https://openai.com/12-days/"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="454" data-attachment-id="4996" data-permalink="https://nothans.com/the-summary-of-the-12-days-of-openai-2024/image-10-15" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-10.png?fit=913%2C553&amp;ssl=1" data-orig-size="913,553" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="12 Days of OpenAI" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-10.png?fit=750%2C454&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-10.png?resize=750%2C454&#038;ssl=1" alt="" class="wp-image-4996" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-10.png?w=913&amp;ssl=1 913w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-10.png?resize=300%2C182&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-10.png?resize=768%2C465&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-10.png?resize=750%2C454&amp;ssl=1 750w" sizes="auto, (max-width: 750px) 100vw, 750px" /></a><figcaption class="wp-element-caption">The 12 Days of OpenAI</figcaption></figure>
</div>

<h2 class="wp-block-heading" id="day-1-of-12-days-of-openai">Day 1 of 12 Days of OpenAI</h2>


<p>OpenAI has launched a bunch of cool new stuff on Day 1 of the 12 Days of OpenAI. They&#8217;ve been working hard on a new model called o1, which is even smarter and faster than the previous &#8220;preview&#8221; version. It can now understand images and text together. They also introduced ChatGPT Pro, a $200/month subscription that gives you unlimited access to their best models and a special &#8220;Pro Mode&#8221; for tackling the most challenging problems.</p>



<p>The team showed off how o1 can solve complex problems, like figuring out the size of a cooling panel for a data center in space, just by looking at a hand-drawn diagram. They also demonstrated how much faster o1 is compared to the older model, especially for everyday tasks. </p>



<p>With ChatGPT Pro Mode, you can make o1 think even harder to solve complicated math, science, or programming challenges. They plan to add even more features to Pro and bring o1 to their API for developers to use.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/iBfQTnA2n2s?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Sam Altman and some members of the OpenAI team introduce &amp; demo o1 and o1 pro mode in ChatGPT and discuss the ChatGPT Pro plan.</figcaption></figure>


<h2 class="wp-block-heading" id="day-2-of-12-days-of-openai">Day 2 of 12 Days of OpenAI</h2>


<p>Mark Chen, SVP of OpenAI Research, and Justin Reese, Computational Researcher in Environmental Genomics and Systems Biology at Berkeley Lab, joined members of the OpenAI team to demo and discuss Reinforcement Fine-Tuning.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/yCIYS9fx56U?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Day 2 of 12 Days of OpenAI</figcaption></figure>



<p>The news from Day 2 is that OpenAI is making it possible for people to customize their AI models. They&#8217;re introducing a new feature called reinforcement fine-tuning, which lets you train their models on your data to make them experts in your field. This means you can have an AI assistant that&#8217;s super helpful for legal work, finance, or even scientific research.</p>



<p>They showed how this works with an example from a researcher studying rare genetic diseases. By fine-tuning one of their models, they were able to make it much better at predicting which genes might be responsible for a disease based on a patient&#8217;s symptoms. </p>



<p>OpenAI is starting a research program to let people try out this new fine-tuning feature and plans to make it available to everyone early next year. They&#8217;re excited to see what people create with it and how it can be used to solve real-world problems.</p>



<p>They continued with the &#8220;dad&#8221; Christmas jokes&#8230; this one was about a self-driving sleigh needing &#8220;pine-tuning&#8221; to avoid trees.</p>


<h2 class="wp-block-heading" id="day-3-of-12-days-of-openai">Day 3 of 12 Days of OpenAI</h2>


<p>Today&#8217;s update from OpenAI is all about <a href="https://sora.com/">Sora</a>, their video generation service. The announcement comes in the form of a video from Sam Altman, Aditya Ramesh, Bill Peebles, Rohan Sahai, and Joey Flynn. Sora starts with a browse-and-explore option, meant to inspire and to be a place for sharing ideas with the community. If you find something you like, you can see the exact method used to create the video.</p>



<p>Sora is an AI model that can generate videos from text prompts, images, or even detailed storyboards. It&#8217;s available today in most of the world for ChatGPT Plus and Pro users. Sora can create videos up to 20 seconds long with different aspect ratios and resolutions, and it offers tons of creative tools to help you bring your vision to life.</p>



<p>You can give Sora simple text descriptions, like &#8220;woolly mammoths walking through a desert landscape,&#8221; and it will generate multiple variations for you to choose from. If you want more control, you can use the storyboard feature to direct the action with a timeline and multiple scenes. Sora can even animate still images and create seamless loops.</p>



<p>They also showed off some advanced features like &#8220;remix,&#8221; which lets you make changes to existing videos, and &#8220;blend,&#8221; which combines two different scenes into a cohesive new video. OpenAI is really excited to see what people create with Sora and how it will shape the future of video generation and storytelling. They emphasized that Sora is still in its early stages, but it&#8217;s already incredibly powerful and has the potential to revolutionize how we make and interact with videos.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/2jKVx2vyZOY?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Day 3 of OpenAI&#8217;s 12 Days of OpenAI</figcaption></figure>



<p>OpenAI released four tutorial-style videos: how to get started, blend, remix, and recut.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/360ZqfabuPQ?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Getting started with Sora</figcaption></figure>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/OXapuT2iVqM?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">How to blend videos with Sora</figcaption></figure>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/tC2ZELc4bOA?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">How to remix with Sora</figcaption></figure>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/jC0i_0wnly8?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">How to recut with Sora</figcaption></figure>


<h2 class="wp-block-heading" id="day-4-of-the-12-days-of-openai">Day 4 of the 12 Days of OpenAI</h2>


<p>Day 4 is all about canvas.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" title="Canvas—12 Days of OpenAI: Day 4" width="750" height="422" src="https://www.youtube.com/embed/qZ0ImE41pVs?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Join Kevin Weil, Lee Byron, and Alexi Christakis to hear about updates and watch live demos, including updates to Canvas.</figcaption></figure>



<p>OpenAI has launched Canvas, a new feature that allows you to collaborate with ChatGPT on writing and coding projects. Canvas provides a side-by-side view of your chat and a dedicated workspace where you can edit text, code, and even run Python code directly within the interface.</p>



<ul class="wp-block-list">
<li><strong>Co-create stories and documents:</strong> Work with ChatGPT to write stories, essays, and other content, with the ability to edit and provide feedback in real-time. ChatGPT can even add emojis to your text!</li>



<li><strong>Get feedback on your writing:</strong> Receive targeted suggestions and comments on your writing from ChatGPT, with the option to apply or reject edits.</li>



<li><strong>Debug and run Python code:</strong> Write, debug, and execute Python code directly within Canvas, complete with syntax highlighting, autocompletion, and the ability to generate graphics.</li>



<li><strong>Use Canvas in custom GPTs:</strong> Integrate Canvas into your custom GPTs to create specialized tools for tasks like drafting letters or generating reports.</li>
</ul>



<p>Canvas is rolling out to all web users, regardless of their plan, and is packed with features to enhance your writing and coding workflow. OpenAI is excited to see how people use Canvas to collaborate with ChatGPT and bring their ideas to life.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="422" data-attachment-id="4960" data-permalink="https://nothans.com/chatgpt-request-for-o1-pro-mode/image-6-20" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?fit=829%2C467&amp;ssl=1" data-orig-size="829,467" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-6" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?fit=750%2C422&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?resize=750%2C422&#038;ssl=1" alt="" class="wp-image-4960" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?w=829&amp;ssl=1 829w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?resize=768%2C433&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?resize=750%2C422&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?resize=480%2C270&amp;ssl=1 480w" sizes="auto, (max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption">ChatGPT has a new tools menu: Picture, Search, Reason, and Canvas.</figcaption></figure>
</div>

<h2 class="wp-block-heading" id="day-5-of-the-12-days-of-openai">Day 5 of the 12 Days of OpenAI</h2>


<p>OpenAI has teamed up with Apple to bring ChatGPT to iPhones, iPads, and Macs! Now, you can access ChatGPT directly through Siri, writing tools, and even your camera. Ask Siri to organize a Christmas party or use your camera to have ChatGPT judge your Christmas sweater contest.</p>



<p>On your Mac, you can use ChatGPT within any application by typing to Siri or using writing tools. It can even analyze long documents and summarize key information for you. Sam Altman, Miqdad Jaffer, and Dave Cummings showed an example of using ChatGPT to analyze a 49-page PDF and create a pie chart visualizing the data.</p>



<p>This integration makes it super easy to use ChatGPT on all your Apple devices, whether you&#8217;re planning a party, writing a document, or just need some help understanding complex information. OpenAI is excited about this partnership and hopes it makes ChatGPT even more accessible and useful for everyone.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/mBhkD0iFf4w?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Day 5 of the 12 Days of OpenAI</figcaption></figure>


<h2 class="wp-block-heading" id="day-6-of-the-12-days-of-openai">Day 6 of the 12 Days of OpenAI</h2>


<p>Today&#8217;s update adds video to the ChatGPT app. Audio was already supported; now you can also use live video from your phone&#8217;s camera, and what&#8217;s on your screen, as input to ChatGPT. They also added a &#8220;Santa&#8221; persona for the rest of the month. To talk to Santa, tap the snowflake icon in the ChatGPT app.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/NIQDnWlwYyQ?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Day 6 of the 12 Days of OpenAI</figcaption></figure>


<h2 class="wp-block-heading" id="day-7-of-the-12-days-of-openai">Day 7 of the 12 Days of OpenAI</h2>


<p>OpenAI just launched a new feature called <strong>Projects</strong> for ChatGPT. It&#8217;s like having smart folders for your chats, where you can upload files, set custom instructions, and keep everything organized. This means you can tailor ChatGPT to specific tasks.</p>



<p>They showed some cool examples of how Projects works. One person used it to organize their Secret Santa, uploading spreadsheets and rules, and even using ChatGPT to assign gift givers randomly. Another person used it to keep track of home maintenance tasks, like when to change their fridge water filter.</p>



<p>Projects also work great for coding. They showed how to upload code files and use ChatGPT to help you write and debug code, even in less common formats like Astro. They even updated a personal website template with information from their resume and social media links.</p>



<p>Projects are rolling out to ChatGPT Plus, Pro, and Team users today, with Free and Enterprise users getting access soon. OpenAI is excited to see how people use Projects to collaborate with ChatGPT and tackle all sorts of tasks.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/FcB97h3vrzk?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Projects on Day 7 of the 12 Days of OpenAI</figcaption></figure>


<h2 class="wp-block-heading" id="day-8-of-the-12-days-of-openai">Day 8 of the 12 Days of OpenAI</h2>


<p>Kevin Weil, Adam Fry, and Cristina Scheau introduce and demo updates to ChatGPT search.</p>



<p>OpenAI is bringing its ChatGPT search feature to all users for free! Everyone can now access real-time information and web browsing directly within ChatGPT. They&#8217;ve also made several improvements based on user feedback, including faster speeds, better mobile experience, and new map features.</p>



<p>You can now search directly from the main chat bar or the dedicated search icon. ChatGPT intelligently decides if your question needs web results and presents them with clear citations and visuals. You can even make ChatGPT your default search engine for quick access to websites.</p>



<p>They&#8217;ve also added search to the advanced voice mode, so you can ask questions naturally and get spoken answers with web information. This update makes ChatGPT even more powerful and accessible for everyone, whether you&#8217;re on your computer or mobile device.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/OzgNJJ2ErEE?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Kevin Weil, Adam Fry and Cristina Scheau introduce and demo updates to ChatGPT search.</figcaption></figure>


<h2 class="wp-block-heading" id="day-9-of-the-12-days-of-openai">Day 9 of the 12 Days of OpenAI</h2>


<p>Olivier Godement, Sean DuBois, Andrew Peng, Michelle Pokrass, and Brian Zhang introduce and demo developer and API updates.</p>



<p>OpenAI is having a (mini holiday) Dev Day, and they&#8217;re excited to share some new updates. First, they&#8217;re launching their powerful o1 model in the API, so developers can now build even more sophisticated applications on o1&#8217;s advanced capabilities. They also announced updates to the Realtime API, making it easier and cheaper to build real-time voice experiences with AI.</p>



<p>To help developers customize AI models, OpenAI introduced preference fine-tuning, a new method that trains models on preferred responses, leading to more accurate and better-aligned AI assistants. They also launched new Go and Java SDKs, simplified the API key signup process, released Dev Day talks on YouTube, and announced an AMA with the OpenAI team.</p>
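<p>If you want to kick the tires on o1 in the API, a minimal request looks something like this. This is a sketch assuming the official OpenAI Python SDK; the <code>reasoning_effort</code> knob was announced alongside the o1 API launch, but names and availability may vary by account, so treat the details as illustrative.</p>

```python
# Sketch: assembling a chat completion request for the o1 model.
# Keeping the request body in a plain dict makes it easy to inspect or reuse.

def build_o1_request(prompt: str, effort: str = "medium") -> dict:
    """Build the request body for a chat completion against o1."""
    return {
        "model": "o1",
        "messages": [{"role": "user", "content": prompt}],
        # o1-specific knob: "low", "medium", or "high" reasoning effort.
        "reasoning_effort": effort,
    }

# With the SDK installed and OPENAI_API_KEY set, the call would be:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(**build_o1_request("Explain RSA."))
#   print(response.choices[0].message.content)
```

<p>A helper like this also makes it easy to dial the effort level per task: high for gnarly math, low for quick everyday questions.</p>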



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/14leJ1fg4Pw?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Olivier Godement, Sean DuBois, Andrew Peng, Michelle Pokrass, and Brian Zhang introduce and demo developer and API updates.</figcaption></figure>



<p>In a community spotlight talk, &#8220;Transforming Contact Centers with GPT-4o Multi-Agent Crews and Human-in-the-Loop,&#8221; Maik Hummel, Principal AI Evangelist at Parloa, covers building agents with OpenAI o1 and GPT-4o for automation, quality assurance, and human-in-the-loop solutions.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/xZc0YQbIyWE?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">OpenAI DevDay 2024 | Community Spotlight | Parloa</figcaption></figure>


<h2 class="wp-block-heading" id="day-10-of-the-12-days-of-openai">Day 10 of the 12 Days of OpenAI</h2>


<p>Kevin Weil, Antonia Woodford, and Amadou Crookes introduce and demo 1-800-CHAT-GPT and ChatGPT in WhatsApp.</p>



<p>No joke: this might be the most significant update so far. Giving ChatGPT a phone interface changes how accessible it is, and how you think about it.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/LWa6OHeNK3s?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">1-800-CHAT-GPT</figcaption></figure>


<h2 class="wp-block-heading" id="day-11-of-the-12-days-of-openai">Day 11 of the 12 Days of OpenAI</h2>


<p>Kevin Weil, Justin Rushing, and John Nastos introduce and demo Work with Apps. </p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/g_qxoznfa7E?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">OpenAI Day 11: Work with Apps </figcaption></figure>


<h2 class="wp-block-heading" id="day-12-of-the-12-days-of-openai">Day 12 of the 12 Days of OpenAI</h2>


<p>Well, we are here: The twelfth day of OpenAI shipmas.</p>



<p>OpenAI announced two new AI models: <strong>o3 and o3-mini!</strong> o3 is a powerhouse, crushing tough benchmarks in coding and math. It even scored impressively high on ARC-AGI, a benchmark aimed at measuring general intelligence, beating previous AI models. o3-mini, while smaller, is still incredibly smart and much faster, making it a good fit for quick tasks and saving on cost.</p>



<p>They showed off some cool demos of o3-mini in action, like having it write code to evaluate its own performance on a challenging dataset. They also highlighted how o3-mini is even better than previous models at coding and math, and it&#8217;s way faster too! Plus, it comes with handy features like function calling and structured outputs for developers.</p>
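<p>To make the structured-outputs mention concrete, here is a sketch of the request shape that forces the model to return JSON matching a schema. The model name is hypothetical (o3-mini wasn&#8217;t public at the time of writing) and the schema is invented for illustration, so check the API docs for the exact format your account supports.</p>

```python
# Sketch: a chat completion request using the "json_schema" response format,
# so the model must reply with JSON that validates against the schema below.

def build_structured_request(question: str) -> dict:
    """Build a request that returns a typed answer + explanation as JSON."""
    return {
        "model": "o3-mini",  # hypothetical: not yet released at time of writing
        "messages": [{"role": "user", "content": question}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "math_answer",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {
                        "answer": {"type": "number"},
                        "explanation": {"type": "string"},
                    },
                    "required": ["answer", "explanation"],
                    "additionalProperties": False,
                },
            },
        },
    }
```

<p>With <code>strict</code> set, the response is guaranteed to parse, which is what makes this pattern so handy for developers wiring models into real applications.</p>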



<p>While not publicly available just yet, OpenAI is inviting researchers to help safety test these new models. They&#8217;re aiming to launch o3-mini by the end of January and o3 shortly after. They also introduced a new safety technique called &#8220;deliberative alignment&#8221; that uses the models&#8217; reasoning abilities to better identify and avoid unsafe prompts. </p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/SKBG1sqdyIU?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">OpenAI o3 and o3-mini—12 Days of OpenAI: Day 12</figcaption></figure>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/the-summary-of-the-12-days-of-openai-2024/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4995</post-id>	</item>
		<item>
		<title>ChatGPT: Request for o1 pro mode (12 Days of OpenAI?)</title>
		<link>https://nothans.com/chatgpt-request-for-o1-pro-mode</link>
					<comments>https://nothans.com/chatgpt-request-for-o1-pro-mode#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Thu, 05 Dec 2024 16:37:46 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=4915</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>I was just using ChatGPT to research <a href="https://amzn.to/49lS6sY">Carol Dweck</a> for a new series of blog posts. While using ChatGPT, one of my prompts triggered a new response. I saw ChatGPT say, &#8220;Request for o1 pro mode.&#8221; It stayed on this message for a minute, and then my glorious output popped out.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img data-recalc-dims="1" loading="lazy" decoding="async" width="504" height="97" data-attachment-id="4916" data-permalink="https://nothans.com/chatgpt-request-for-o1-pro-mode/image-50" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image.png?fit=504%2C97&amp;ssl=1" data-orig-size="504,97" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Request for o1 pro mode" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image.png?fit=504%2C97&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image.png?resize=504%2C97&#038;ssl=1" alt="" class="wp-image-4916" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image.png?w=504&amp;ssl=1 504w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image.png?resize=300%2C58&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image.png?resize=500%2C97&amp;ssl=1 500w" sizes="auto, (max-width: 504px) 100vw, 504px" /><figcaption class="wp-element-caption">OpenAI ChatGPT: Request for o1 pro mode</figcaption></figure>
</div>


<p>Ah, what does that mean? I have heard rumors that OpenAI will treat the world to a GenAI advent calendar of sorts, releasing a series of new features over the course of the month leading up to Christmas. Is this a preview of things to come? As I learn more, I will update this post. Stay tuned.</p>


<h2 class="wp-block-heading" id="update-hints-on-discord">Update: Hints on Discord</h2>


<p>This just in from the OpenAI Discord server:</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="464" data-attachment-id="4918" data-permalink="https://nothans.com/chatgpt-request-for-o1-pro-mode/image-1-29" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-1.png?fit=1246%2C771&amp;ssl=1" data-orig-size="1246,771" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-1" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-1.png?fit=750%2C464&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-1.png?resize=750%2C464&#038;ssl=1" alt="" class="wp-image-4918" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-1.png?resize=1024%2C634&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-1.png?resize=300%2C186&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-1.png?resize=768%2C475&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-1.png?resize=750%2C464&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-1.png?w=1246&amp;ssl=1 1246w" sizes="auto, (max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption">12 Days of OpenAI: Day 1</figcaption></figure>
</div>

<h2 class="wp-block-heading" id="update-day-1-of-12-days-of-openai">Update: Day 1 of 12 Days of OpenAI</h2>


<p>Sam Altman and some members of the OpenAI team introduce &amp; demo o1 and o1 pro mode in ChatGPT and discuss the ChatGPT Pro plan.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/iBfQTnA2n2s?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>


<h2 class="wp-block-heading" id="update-openai-model-benchmarks">Update: OpenAI Model Benchmarks</h2>

<div class="wp-block-image">
<figure class="aligncenter size-large is-resized"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="357" data-attachment-id="4925" data-permalink="https://nothans.com/chatgpt-request-for-o1-pro-mode/image-2-26" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-2.png?fit=1072%2C511&amp;ssl=1" data-orig-size="1072,511" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-2" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-2.png?fit=750%2C357&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-2.png?resize=750%2C357&#038;ssl=1" alt="" class="wp-image-4925" style="width:800px;height:auto" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-2.png?resize=1024%2C488&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-2.png?resize=300%2C143&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-2.png?resize=768%2C366&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-2.png?resize=750%2C358&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-2.png?w=1072&amp;ssl=1 1072w" sizes="auto, (max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption">OpenAI Model Benchmarks</figcaption></figure>
</div>

<h2 class="wp-block-heading" id="update-openai-o1-model-benchmarks">Update: OpenAI o1 Model Benchmarks</h2>

<div class="wp-block-image">
<figure class="aligncenter size-large is-resized"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="522" data-attachment-id="4927" data-permalink="https://nothans.com/chatgpt-request-for-o1-pro-mode/image-3-26" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-3.png?fit=1087%2C757&amp;ssl=1" data-orig-size="1087,757" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-3" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-3.png?fit=750%2C522&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-3.png?resize=750%2C522&#038;ssl=1" alt="" class="wp-image-4927" style="width:585px;height:auto" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-3.png?resize=1024%2C713&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-3.png?resize=300%2C209&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-3.png?resize=768%2C535&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-3.png?resize=750%2C522&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-3.png?w=1087&amp;ssl=1 1087w" sizes="auto, (max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption">OpenAI o1 Model Benchmark Scores</figcaption></figure>
</div>

<h2 class="wp-block-heading" id="update-day-1-of-12-days-of-openai-summary">Update: Day 1 of 12 Days of OpenAI Summary</h2>


<p>OpenAI has launched a bunch of cool new stuff on Day 1 of the 12 Days of OpenAI. They&#8217;ve been working hard on a new model called o1, which is even smarter and faster than the earlier &#8220;preview&#8221; version and can now understand images and text together. They also introduced ChatGPT Pro, a $200/month subscription that gives you unlimited access to their best models and a special &#8220;Pro Mode&#8221; for tackling the toughest problems.</p>



<p>The team showed off how o1 can solve complex problems, like figuring out the size of a cooling panel for a data center in space, just by looking at a hand-drawn diagram. They also demonstrated how much faster o1 is compared to the older model, especially for everyday tasks. </p>



<p>With ChatGPT Pro Mode, you can make o1 think even harder to solve really difficult math, science, or programming challenges. They&#8217;re planning to add even more features to Pro and bring o1 to their API for developers to use.</p>


<h2 class="wp-block-heading" id="update-day-2-of-12-days-of-openai">Update: Day 2 of 12 Days of OpenAI</h2>


<p>Mark Chen, SVP of OpenAI Research, and Justin Reese, Computational Researcher in Environmental Genomics and Systems Biology at Berkeley Lab, join members of the OpenAI team to demo and discuss Reinforcement Fine-Tuning.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/yCIYS9fx56U?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>


<h2 class="wp-block-heading" id="update-day-2-of-12-days-of-openai-summary">Update: Day 2 of 12 Days of OpenAI Summary</h2>


<p>The news from Day 2 is that OpenAI is making it possible for people to customize their AI models. They&#8217;re introducing a new feature called reinforcement fine-tuning, which lets you train their models on your own data to make them experts in your specific field. This means you can have an AI assistant that&#8217;s super helpful for things like legal work, finance, or even scientific research.</p>
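<p>At the API level, a reinforcement fine-tuning job would presumably be kicked off like any other fine-tuning job, with a grader telling the trainer which answers to reinforce. The parameter names under <code>method</code> below are assumptions based on the announcement (the feature was research-preview only at the time of writing), so treat this strictly as a sketch.</p>

```python
# Sketch: what a reinforcement fine-tuning job request might look like.
# The "method" / "grader" field names are assumptions, not a documented API.

def build_rft_job(training_file_id: str) -> dict:
    """Build a hypothetical reinforcement fine-tuning job request."""
    return {
        "model": "o1-mini",  # base model to fine-tune (illustrative)
        "training_file": training_file_id,
        "method": {
            "type": "reinforcement",  # vs. the default supervised fine-tuning
            "reinforcement": {
                # A grader scores each model answer so training can reinforce
                # responses that match the expected gene, label, etc.
                "grader": {"type": "string_check", "reference": "expected_answer"},
            },
        },
    }
```

<p>The key idea is the grader: instead of imitating example completions, the model is rewarded for answers the grader scores highly, which is why a few dozen well-graded examples can go a long way in a narrow domain like rare-disease genetics.</p>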



<p>They showed how this works with an example from a researcher studying rare genetic diseases. By fine-tuning one of their models, they were able to make it much better at predicting which genes might be responsible for a disease based on a patient&#8217;s symptoms. </p>



<p>OpenAI is starting a research program to let people try out this new fine-tuning feature and plans to make it available to everyone early next year. They&#8217;re excited to see what people create with it and how it can be used to solve real-world problems.</p>



<p>They continued with the &#8220;dad&#8221; Christmas jokes&#8230; this one was about a self-driving sleigh needing &#8220;pine-tuning&#8221; to avoid trees.</p>


<h2 class="wp-block-heading" id="update-day-3-of-12-days-of-openai">Update: Day 3 of 12 Days of OpenAI</h2>


<p>Today&#8217;s update from OpenAI is all about <a href="https://sora.com/">Sora</a>, their video generation service. The announcement comes in the form of a video from Sam Altman, Aditya Ramesh, Bill Peebles, Rohan Sahai, and Joey Flynn. Sora opens with a browse-and-explore feed, meant to inspire and to give the community a place to share ideas. If you find a video you like, you can see the exact method that was used to create it.</p>



<p>Sora is an AI model that can generate videos from text prompts, images, or even detailed storyboards. It&#8217;s available today in most of the world for ChatGPT Plus and Pro users. Sora can create videos up to 20 seconds long with different aspect ratios and resolutions, and it offers tons of creative tools to help you bring your vision to life.</p>



<p>You can give Sora simple text descriptions, like &#8220;woolly mammoths walking through a desert landscape,&#8221; and it will generate multiple variations for you to choose from. If you want more control, you can use the storyboard feature to direct the action with a timeline and multiple scenes. Sora can even animate still images and create seamless loops.</p>



<p>They also showed off some advanced features like &#8220;remix,&#8221; which lets you make changes to existing videos, and &#8220;blend,&#8221; which combines two different scenes into a cohesive new video. OpenAI is really excited to see what people create with Sora and how it will shape the future of video generation and storytelling. They emphasized that Sora is still in its early stages, but it&#8217;s already incredibly powerful and has the potential to revolutionize how we make and interact with videos.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/2jKVx2vyZOY?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Day 3 of OpenAI&#8217;s 12 Days of OpenAI</figcaption></figure>



<p>OpenAI released four tutorial-style videos: how to get started, blend, remix, and recut.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/360ZqfabuPQ?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Getting started with Sora</figcaption></figure>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/OXapuT2iVqM?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">How to blend videos with Sora</figcaption></figure>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/tC2ZELc4bOA?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">How to remix with Sora</figcaption></figure>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/jC0i_0wnly8?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">How to recut with Sora</figcaption></figure>


<h2 class="wp-block-heading" id="update-day-4-of-the-12-days-of-openai">Update: Day 4 of the 12 Days of OpenAI</h2>


<p>Today is all about Canvas.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" title="Canvas—12 Days of OpenAI: Day 4" width="750" height="422" src="https://www.youtube.com/embed/qZ0ImE41pVs?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Join Kevin Weil, Lee Byron, and Alexi Christakis to hear about updates and watch live demos, including updates to Canvas.</figcaption></figure>


<h2 class="wp-block-heading" id="update-chatgpt-canvas">Update: ChatGPT Canvas</h2>


<p>OpenAI has launched Canvas, a new feature that allows you to collaborate with ChatGPT on writing and coding projects. Canvas provides a side-by-side view of your chat and a dedicated workspace where you can edit text, code, and even run Python code directly within the interface.</p>



<ul class="wp-block-list">
<li><strong>Co-create stories and documents:</strong> Work with ChatGPT to write stories, essays, and other content, with the ability to edit and provide feedback in real-time. ChatGPT can even add emojis to your text!</li>



<li><strong>Get feedback on your writing:</strong> Receive targeted suggestions and comments on your writing from ChatGPT, with the option to apply or reject edits.</li>



<li><strong>Debug and run Python code:</strong> Write, debug, and execute Python code directly within Canvas, complete with syntax highlighting, autocompletion, and the ability to generate graphics.</li>



<li><strong>Use Canvas in custom GPTs:</strong> Integrate Canvas into your custom GPTs to create specialized tools for tasks like drafting letters or generating reports.</li>
</ul>



<p>Canvas is rolling out to all web users, regardless of their plan, and is packed with features to enhance your writing and coding workflow. OpenAI is excited to see how people use Canvas to collaborate with ChatGPT and bring their ideas to life.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="422" data-attachment-id="4960" data-permalink="https://nothans.com/chatgpt-request-for-o1-pro-mode/image-6-20" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?fit=829%2C467&amp;ssl=1" data-orig-size="829,467" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-6" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?fit=750%2C422&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?resize=750%2C422&#038;ssl=1" alt="" class="wp-image-4960" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?w=829&amp;ssl=1 829w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?resize=768%2C433&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?resize=750%2C422&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/12/image-6.png?resize=480%2C270&amp;ssl=1 480w" sizes="auto, (max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption">ChatGPT has a new tools menu: Picture, Search, Reason, and Canvas.</figcaption></figure>
</div>

<h2 class="wp-block-heading" id="update-day-5-of-the-12-days-of-openai">Update: Day 5 of the 12 Days of OpenAI</h2>


<p>OpenAI has teamed up with Apple to bring ChatGPT to iPhones, iPads, and Macs! Now, you can access ChatGPT directly through Siri, writing tools, and even your camera. Ask Siri to organize a Christmas party or use your camera to have ChatGPT judge your Christmas sweater contest.</p>



<p>On your Mac, you can use ChatGPT within any application by typing to Siri or using writing tools. It can even analyze long documents and summarize key information for you. Sam Altman, Miqdad Jaffer, and Dave Cummings showed an example of using ChatGPT to analyze a 49-page PDF and create a pie chart visualizing the data.</p>



<p>This integration makes it super easy to use ChatGPT on all your Apple devices, whether you&#8217;re planning a party, writing a document, or just need some help understanding complex information. OpenAI is excited about this partnership and hopes it makes ChatGPT even more accessible and useful for everyone.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/mBhkD0iFf4w?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Day 5 of the 12 Days of OpenAI</figcaption></figure>


<h2 class="wp-block-heading" id="update-day-6-of-the-12-days-of-openai">Update: Day 6 of the 12 Days of OpenAI</h2>


<p>Today&#8217;s update is about adding video to the ChatGPT app. Audio was already supported; now you can use live video from your phone&#8217;s camera, as well as what&#8217;s on your screen, as input to ChatGPT. They also added a &#8220;Santa&#8221; persona for the rest of the month. To talk to Santa, click the snowflake icon in the ChatGPT app.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/NIQDnWlwYyQ?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Day 6 of the 12 Days of OpenAI</figcaption></figure>


<h2 class="wp-block-heading" id="update-day-7-of-the-12-days-of-openai">Update: Day 7 of the 12 Days of OpenAI</h2>


<p>OpenAI just launched a new feature called <strong>Projects</strong> for ChatGPT. It&#8217;s like having smart folders for your chats, where you can upload files, set custom instructions, and keep everything organized. This means you can tailor ChatGPT to specific tasks.</p>



<p>They showed some cool examples of how Projects works. One person used it to organize their Secret Santa, uploading spreadsheets and rules, and even using ChatGPT to randomly assign gift givers. Another person used it to keep track of home maintenance tasks, like when to change their fridge water filter.</p>



<p>Projects also work great for coding. They showed how to upload code files and use ChatGPT to help you write and debug code, even in less common frameworks like Astro. They even updated a personal website template with information from their resume and social media links.</p>



<p>Projects are rolling out to ChatGPT Plus, Pro, and Team users today, with free and Enterprise users getting access soon. OpenAI is excited to see how people use Projects to collaborate with ChatGPT and tackle all sorts of tasks.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/FcB97h3vrzk?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Projects on Day 7 of the 12 Days of OpenAI</figcaption></figure>


<h2 class="wp-block-heading" id="update-day-8-of-the-12-days-of-openai">Update: Day 8 of the 12 Days of OpenAI</h2>


<p>Kevin Weil, Adam Fry and Cristina Scheau introduce and demo updates to ChatGPT search.</p>



<p>OpenAI is bringing its ChatGPT search feature to all users for free! Everyone can now access real-time information and web browsing directly within ChatGPT. They&#8217;ve also made several improvements based on user feedback, including faster speeds, better mobile experience, and new map features.</p>



<p>You can now search directly from the main chat bar or the dedicated search icon. ChatGPT intelligently decides if your question needs web results and presents them with clear citations and visuals. You can even make ChatGPT your default search engine for quick access to websites.</p>



<p>They&#8217;ve also added search to the advanced voice mode, so you can ask questions naturally and get spoken answers with web information. This update makes ChatGPT even more powerful and accessible for everyone, whether you&#8217;re on your computer or mobile device.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/OzgNJJ2ErEE?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Kevin Weil, Adam Fry and Cristina Scheau introduce and demo updates to ChatGPT search.</figcaption></figure>


<h2 class="wp-block-heading" id="update-day-9-of-the-12-days-of-openai">Update: Day 9 of the 12 Days of OpenAI</h2>


<p>Olivier Godement, Sean DuBois, Andrew Peng, Michelle Pokrass, and Brian Zhang introduce and demo developer and API updates.</p>



<p>OpenAI is having a (mini holiday) Dev Day, and they&#8217;re excited to share some new updates. First, they&#8217;re launching their powerful model, o1, into the API. This means developers can now build even more sophisticated applications using o1&#8217;s advanced capabilities. They also announced updates to the Realtime API, making it easier and cheaper to build real-time voice experiences with AI. To help developers customize AI models, OpenAI introduced preference fine-tuning. This new method allows developers to train models based on preferred responses, leading to more accurate and aligned AI assistants. They also launched new Go and Java SDKs, simplified the API key signup process, released Dev Day talks on YouTube, and announced an AMA with the OpenAI team.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/14leJ1fg4Pw?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">Olivier Godement, Sean DuBois, Andrew Peng, Michelle Pokrass, and Brian Zhang introduce and demo developer and API updates.</figcaption></figure>



<p>Transforming Contact Centers with GPT-4o Multi-Agent Crews and Human-in-the-Loop: Building agents with OpenAI o1 and GPT-4o for automation, quality assurance, and human-in-the-loop solutions. Presenter: Maik Hummel, Principal AI Evangelist, Parloa</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/xZc0YQbIyWE?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">OpenAI DevDay 2024 | Community Spotlight | Parloa</figcaption></figure>


<h2 class="wp-block-heading" id="update-day-10-of-the-12-days-of-openai">Update: Day 10 of the 12 Days of OpenAI</h2>


<p>Kevin Weil, Antonia Woodford, and Amadou Crookes introduce and demo 1-800-CHAT-GPT and ChatGPT in WhatsApp.</p>



<p>No joke. This might be the most significant update so far. Adding a phone interface changes how accessible ChatGPT is and how you think about it.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/LWa6OHeNK3s?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">1-800-CHAT-GPT</figcaption></figure>





<h2 class="wp-block-heading" id="update-day-11-of-the-12-days-of-openai">Update: Day 11 of the 12 Days of OpenAI</h2>


<p>Kevin Weil, Justin Rushing, and John Nastos introduce and demo Work with Apps. </p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/g_qxoznfa7E?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">OpenAI Day 11: Work with Apps </figcaption></figure>


<h2 class="wp-block-heading" id="update-day-12-of-the-12-days-of-openai">Update: Day 12 of the 12 Days of OpenAI</h2>


<p>Well, we are here: the twelfth day of OpenAI shipmas.</p>



<p>OpenAI announced two new AI models: <strong>o3 and o3-mini!</strong> o3 is a powerhouse, crushing tough benchmarks in coding and math. It even scored impressively high on the ARC-AGI test, a measure of general intelligence, beating previous AI models. o3-mini, while smaller, is still incredibly smart and much faster, making it perfect for quick tasks and saving on cost.</p>



<p>They showed off some cool demos of o3-mini in action, like having it write code to evaluate its own performance on a challenging dataset. They also highlighted how o3-mini is even better than previous models at coding and math, and it&#8217;s way faster too! Plus, it comes with handy features like function calling and structured outputs for developers.</p>



<p>While not publicly available just yet, OpenAI is inviting researchers to help safety test these new models. They&#8217;re aiming to launch o3-mini by the end of January and o3 shortly after. They also introduced a new safety technique called &#8220;deliberative alignment&#8221; that uses the models&#8217; reasoning abilities to better identify and avoid unsafe prompts. </p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/SKBG1sqdyIU?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div><figcaption class="wp-element-caption">OpenAI o3 and o3-mini—12 Days of OpenAI: Day 12</figcaption></figure>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/chatgpt-request-for-o1-pro-mode/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4915</post-id>	</item>
		<item>
		<title>OpenAI o1: Finally an AI Model That You Can Reason With</title>
		<link>https://nothans.com/openai-o1-finally-an-ai-model-that-you-can-reason-with</link>
					<comments>https://nothans.com/openai-o1-finally-an-ai-model-that-you-can-reason-with#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Thu, 19 Sep 2024 18:14:28 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[o1]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=4801</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>OpenAI has launched <a href="https://openai.com/o1/">o1-preview</a>, the first in a series of advanced AI models designed to tackle complex problems in science, coding, and math. This new model family, named OpenAI o1, represents a significant leap in AI capabilities, particularly in areas requiring deep reasoning and problem-solving skills.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/MAZyQ-38b8M?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>



<p>Here&#8217;s what you need to know about OpenAI o1:</p>



<ul class="wp-block-list">
<li><strong>Enhanced thinking process: </strong>Trained to spend more time reasoning before responding</li>



<li><strong>Improved performance:</strong> Outperforms GPT-4 in challenging tasks, including International Mathematics Olympiad problems</li>



<li><strong>Advanced safety measures:</strong> New training approach leverages reasoning capabilities for better adherence to safety guidelines</li>
</ul>



<p>This is only a preview right now, but I think it is an important sign of the direction that AI models are heading. We need to learn more about reasoning and how to interact with these new models, just like we did when we first experienced ChatGPT.</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="423" data-attachment-id="4802" data-permalink="https://nothans.com/openai-o1-finally-an-ai-model-that-you-can-reason-with/image-14-10" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/09/image-14.png?fit=2048%2C1153&amp;ssl=1" data-orig-size="2048,1153" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="OpenAI o1 AI model" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/09/image-14.png?fit=750%2C423&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/09/image-14.png?resize=750%2C423&#038;ssl=1" alt="" class="wp-image-4802" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/09/image-14.png?resize=1024%2C577&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/09/image-14.png?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/09/image-14.png?resize=768%2C432&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/09/image-14.png?resize=1536%2C865&amp;ssl=1 1536w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/09/image-14.png?resize=750%2C422&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/09/image-14.png?resize=480%2C270&amp;ssl=1 480w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/09/image-14.png?resize=1320%2C743&amp;ssl=1 1320w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/09/image-14.png?w=2048&amp;ssl=1 2048w" 
sizes="auto, (max-width: 750px) 100vw, 750px" /></figure>


<h2 class="wp-block-heading" id="think-before-you-speak">Think Before You Speak?</h2>


<p>Imagine if you just thought a little bit before you spoke. I know that would have helped me a few times in my life. o1 introduces the thinking process embedded into the model. This is where the AI is trained to spend more time reasoning before responding.</p>



<ol class="wp-block-list">
<li><strong>Improved problem-solving:</strong> By taking more time to think, the AI can explore multiple approaches and strategies to solve complex problems. This is particularly crucial for tasks in fields like science, mathematics, and coding, where the solution may not be immediately apparent.</li>



<li><strong>Higher accuracy: </strong>Spending more time on reasoning allows the AI to double-check its work, consider potential errors, and refine its answers. This can lead to more accurate and reliable outputs, especially for complex queries.</li>



<li><strong>Mimicking human cognition: </strong>This approach more closely resembles how humans tackle difficult problems. We often need time to ponder, analyze, and work through challenging questions step-by-step. By emulating this process, the AI can produce more thoughtful and nuanced responses.</li>



<li><strong>Handling multi-step problems: </strong>Many real-world problems require breaking down into smaller steps and solving each part sequentially. The extended reasoning time allows the AI to manage these multi-step problems more effectively.</li>



<li><strong>Reduced impulsivity:</strong> Quick responses can sometimes lead to oversimplified or incorrect answers. By taking more time to think, the AI is less likely to jump to conclusions or provide hasty, poorly-considered responses.</li>



<li><strong>Improved explanation capability:</strong> With more thorough reasoning, the AI can potentially provide better explanations of its thought process, making its responses more transparent and understandable to users.</li>



<li><strong>Tackling novel situations: </strong>Extra reasoning time can help the AI better handle unfamiliar or unprecedented scenarios by allowing it to draw connections between known concepts and apply them to new situations.</li>
</ol>



<p>I am particularly interested in how to handle novel situations with o1. I experimented with o1 to find new game themes and reason about new things, and it felt like working with someone who really knows what is going on in a particular domain. However, the interactions reinforced that I have to learn a new way to prompt.</p>


<h2 class="wp-block-heading" id="reasoning-models-need-new-ways-of-prompting">Reasoning Models Need New Ways of Prompting</h2>


<p>When you&#8217;re using these new reasoning models, keep it simple! They&#8217;re pretty smart, so you don&#8217;t need to jump through hoops for good results. Our old prompting tricks no longer apply and might even degrade the output.</p>



<p>Here&#8217;s some advice from my first week of using OpenAI o1:</p>



<ul class="wp-block-list">
<li><strong>Just say what you want.</strong> These models are great at figuring out your meaning, so don&#8217;t overcomplicate things.</li>



<li><strong>Skip the &#8220;think step by step&#8221; part.</strong> The model is already doing the thinking behind the scenes, so you don&#8217;t need to tell it how to think.</li>



<li><strong>Use markers to break things up.</strong> If your input has different parts, use something like quotes or tags to separate them. This helps the model understand what&#8217;s what.</li>



<li><strong>Don&#8217;t overload it with extra info.</strong> If you add context from other sources, stick to the essentials. Too much extra stuff can make the model go off on tangents.</li>
</ul>
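<p>To make that advice concrete, here is a minimal sketch of how I structure input for a reasoning model. The helper name and the tag markers are my own convention, not part of any SDK; the point is a plainly worded request, no &#8220;think step by step&#8221; boilerplate, and clear markers around any pasted context.</p>

```python
# Sketch: structuring a prompt for a reasoning model. The function name
# and tag choices are illustrative conventions, not an official API.

def build_prompt(task: str, context: str = "") -> str:
    """Return a plainly worded prompt with clearly delimited parts."""
    # Just say what you want; no "think step by step" instructions.
    parts = [task.strip()]
    if context:
        # Markers help the model tell your request apart from pasted material.
        parts.append("<context>\n" + context.strip() + "\n</context>")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Suggest three themes for a new board game.",
    context="Audience: families. Playtime: under 30 minutes.",
)
print(prompt)
```

<p>Paste the result into the chat app, or send it as the user message through whichever SDK you use.</p>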



<p><em><strong>Give it a try. Experiment. o1-preview (and its smaller version, o1-mini) are available in <a href="https://chatgpt.com/?model=o1-preview" target="_blank" rel="noopener">ChatGPT Plus</a> and via the <a href="https://platform.openai.com/playground/chat?models=o1-mini" target="_blank" rel="noopener">API</a>.</strong></em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/openai-o1-finally-an-ai-model-that-you-can-reason-with/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4801</post-id>	</item>
		<item>
		<title>AI Fantasy Football Name Generator: Tutorial and Prompt Templates</title>
		<link>https://nothans.com/ai-fantasy-football-name-generator-tutorial-and-prompt-templates</link>
					<comments>https://nothans.com/ai-fantasy-football-name-generator-tutorial-and-prompt-templates#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Thu, 22 Aug 2024 23:05:11 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Games]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Claude]]></category>
		<category><![CDATA[fantasy football]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=4757</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>As we speak, the top way people find my blog is by Googling &#8220;ai fantasy football name generator&#8221; and landing on my <a href="https://nothans.com/use-chatgpt-to-generate-the-perfect-fantasy-football-team-name" data-type="post" data-id="3982">blog post</a> from last year. If you are reading this, you are likely one of those people trying to impress your friends with a clever name. I want to share some secrets of prompting a large language model to get great name suggestions. Maybe what you learn here will translate to something more practical that benefits your education or career. Otherwise, enjoy learning how to leverage the most advanced AI to name your fantasy football team and get a massive reaction from your friends and coworkers. Remember, they must live with your team name all season long as you crush them each week.</p>


<h2 class="wp-block-heading" id="my-ai-prompting-process-for-generating-fantasy-football-names">My AI Prompting Process for Generating Fantasy Football Names</h2>


<p><strong>Step 1:</strong> Get a <a href="https://chatgpt.com/g/g-R9ggCnF32-fantasy-football-name-wizard">ChatGPT</a> or <a href="https://claude.ai/" target="_blank" rel="noopener">Claude</a> account (or both). I love both platforms for different reasons, but the prompt techniques that I explain here apply to both chat apps.</p>



<p><strong>Step 2:</strong> Start a new chat and enter this starting text: &#8220;My objective is to generate a fantasy football team name. Respond OKAY if you understand the objective.&#8221;</p>



<p><strong>Step 3: </strong>Use one of the following prompt templates by replacing the [BRACKETED] words:</p>



<ul class="wp-block-list">
<li>&#8220;Generate 5 funny fantasy football team names based on [PLAYER NAME]&#8217;s name or playing style.&#8221;</li>



<li>&#8220;Create 5 clever fantasy football team names inspired by the [TEAM NAME] and their history or current roster.&#8221;</li>



<li>&#8220;Come up with 5 witty fantasy football team names centered around [POSITION] players.&#8221;</li>



<li>&#8220;Blend NFL references with [MOVIE/TV SHOW/BOOK] to create 5 unique fantasy football team names.&#8221;</li>



<li>&#8220;Generate 5 punny fantasy football team names using football terminology and common phrases.&#8221;</li>



<li>&#8220;Create 5 alliterative fantasy football team names starting with the letter [LETTER].&#8221;</li>



<li>&#8220;Combine [CITY/STATE] references with football terms to make 5 locally-themed fantasy team names.&#8221;</li>



<li>&#8220;Generate 5 playful trash-talking fantasy football team names that imply dominating the competition.&#8221;</li>



<li>&#8220;Create 5 fantasy football team names that blend [PROFESSION] terminology with football references.&#8221;</li>



<li>&#8220;Come up with 5 fantasy football team names that incorporate [HOLIDAY/SEASON] themes.&#8221;</li>



<li>&#8220;Generate 5 appetizing fantasy football team names that combine [FOOD] with player names or football terms.&#8221;</li>



<li>&#8220;Create 5 fantasy football team names that blend [MUSIC GENRE/ARTIST] references with NFL players or teams.&#8221;</li>



<li>&#8220;Combine names of [HISTORICAL ERA] figures with football terminology to create 5 unique team names.&#8221;</li>



<li>&#8220;Generate 5 epic fantasy football team names inspired by [MYTHOLOGY] and NFL players or terms.&#8221; </li>



<li>&#8220;Create 5 geeky fantasy football team names that combine [SCIENTIFIC FIELD/TECH COMPANY] with NFL references.&#8221;</li>



<li>&#8220;Come up with 5 colorful fantasy football team names incorporating [COLOR] and player names or football terms.&#8221;</li>



<li>&#8220;Generate 5 wild fantasy football team names that blend [ANIMAL SPECIES] characteristics with NFL players or terms.&#8221;</li>



<li>&#8220;Create 5 electrifying fantasy football team names inspired by [WEATHER EVENT] and football references.&#8221;</li>



<li>&#8220;Blend elements from [FICTIONAL WORLD/UNIVERSE] with NFL references to create 5 fantastical fantasy football team names.&#8221;</li>



<li>&#8220;Generate 5 multi-sport fantasy football team names that combine NFL references with terms from [OTHER SPORT].&#8221;</li>
</ul>



<p><strong>Step 4:</strong> Once you find a good direction or a direction that you like, ask the AI to generate more names. You are bound to find something that you like.</p>
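<p>If you would rather script this than copy-paste, the [BRACKETED] slots in the templates above can be filled programmatically. The <code>fill_template</code> helper below is my own illustration, not part of any SDK; it leaves any slot you did not supply untouched.</p>

```python
# Sketch: filling the [BRACKETED] slots in the prompt templates above
# before pasting them into ChatGPT or Claude. fill_template is a
# hypothetical helper of my own, not an official library function.
import re

def fill_template(template: str, **values: str) -> str:
    """Replace [PLACEHOLDER] tokens with supplied values.

    Keyword names are the placeholder lowercased, with spaces and
    slashes turned into underscores (e.g. movie_tv_show_book).
    """
    def swap(match: re.Match) -> str:
        key = match.group(1).lower().replace(" ", "_").replace("/", "_")
        return values.get(key, match.group(0))  # leave unknown slots as-is
    return re.sub(r"\[([A-Z/ ]+)\]", swap, template)

template = ("Blend NFL references with [MOVIE/TV SHOW/BOOK] "
            "to create 5 unique fantasy football team names.")
print(fill_template(template, movie_tv_show_book="Deadpool"))
# Blend NFL references with Deadpool to create 5 unique fantasy football team names.
```

<p>Send the filled-in string as your message after the Step 2 setup text.</p>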


<h2 class="wp-block-heading" id="example">Example</h2>


<p>I loved <em>Deadpool and Wolverine</em>. Others might want a fantasy football team named after the blockbuster movie, so I will use this prompt: &#8220;Blend NFL references with Deadpool to create 5 unique fantasy football team names.&#8221;</p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img data-recalc-dims="1" loading="lazy" decoding="async" width="501" height="661" data-attachment-id="4758" data-permalink="https://nothans.com/ai-fantasy-football-name-generator-tutorial-and-prompt-templates/image-1-25" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-1.png?fit=501%2C661&amp;ssl=1" data-orig-size="501,661" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-1" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-1.png?fit=501%2C661&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-1.png?resize=501%2C661&#038;ssl=1" alt="" class="wp-image-4758" style="width:503px;height:auto" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-1.png?w=501&amp;ssl=1 501w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-1.png?resize=227%2C300&amp;ssl=1 227w" sizes="auto, (max-width: 501px) 100vw, 501px" /><figcaption class="wp-element-caption"><a href="https://chatgpt.com/g/g-R9ggCnF32-fantasy-football-name-wizard">ChatGPT Example</a> for generating fantasy football team names based on Deadpool</figcaption></figure>
</div>


<p>I was pleasantly surprised by <em>Mercs with a Mouthguard</em>, since Deadpool is known as the Merc with a Mouth in the comic books.</p>



<p>Let me know what you come up with and what you learned. I am always interested in learning more ways to produce great results from AI models.</p>


<h2 class="wp-block-heading" id="bonus">Bonus?</h2>


<p>Here&#8217;s the weirdest image that I have ever created with Midjourney. <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f62c.png" alt="😬" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="750" data-attachment-id="4760" data-permalink="https://nothans.com/ai-fantasy-football-name-generator-tutorial-and-prompt-templates/image-3-22" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-3.png?fit=1024%2C1024&amp;ssl=1" data-orig-size="1024,1024" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="AI Football" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-3.png?fit=750%2C750&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-3.png?resize=750%2C750&#038;ssl=1" alt="" class="wp-image-4760" style="width:800px;height:auto" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-3.png?w=1024&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-3.png?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-3.png?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-3.png?resize=768%2C768&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-3.png?resize=530%2C530&amp;ssl=1 530w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-3.png?resize=750%2C750&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/08/image-3.png?resize=500%2C500&amp;ssl=1 500w" sizes="auto, (max-width: 750px) 100vw, 750px" /><figcaption 
class="wp-element-caption">&#8220;what if AI integrated with an NFL football&#8221;</figcaption></figure>
</div>


]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/ai-fantasy-football-name-generator-tutorial-and-prompt-templates/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4757</post-id>	</item>
		<item>
		<title>What is the cost of GPT-4o?</title>
		<link>https://nothans.com/what-is-the-cost-of-gpt-4o</link>
					<comments>https://nothans.com/what-is-the-cost-of-gpt-4o#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Mon, 13 May 2024 18:20:43 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=4702</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>OpenAI released a new model, GPT-4o. The &#8220;o&#8221; stands for &#8220;omni,&#8221; a nod to its multimodal capabilities. The release led to my colleagues asking me about pricing and what the new model costs.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="750" data-attachment-id="4703" data-permalink="https://nothans.com/what-is-the-cost-of-gpt-4o/image-20-5" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/05/image-20.png?fit=1024%2C1024&amp;ssl=1" data-orig-size="1024,1024" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="OpenAI API costs are dropping" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/05/image-20.png?fit=750%2C750&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/05/image-20.png?resize=750%2C750&#038;ssl=1" alt="" class="wp-image-4703" style="width:532px;height:auto" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/05/image-20.png?w=1024&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/05/image-20.png?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/05/image-20.png?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/05/image-20.png?resize=768%2C768&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/05/image-20.png?resize=530%2C530&amp;ssl=1 530w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/05/image-20.png?resize=750%2C750&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/05/image-20.png?resize=500%2C500&amp;ssl=1 500w" sizes="auto, (max-width: 750px) 100vw, 750px" /><figcaption class="wp-element-caption">DALL-E 3 
Prompt: A digital illustration of a pricing chart showing a dramatic plummet in prices. The chart is displayed on a computer screen, with sharp downward arrows in red indicating a significant drop. The chart&#8217;s background is a light blue, with numbers and grid lines in a subtle grey, showing a clear and sudden decrease in values.</figcaption></figure>
</div>


<p>GPT-4o access via the API costs $5 per million input tokens and $15 per million output tokens, which makes it about half the price of GPT-4 Turbo.</p>
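<p>To make those rates concrete, here is a quick back-of-the-envelope calculator using the published $5/$15 per-million-token prices. The <code>estimate_cost</code> helper is my own sketch, not an OpenAI SDK function:</p>

```python
# Estimate the USD cost of one GPT-4o API call from its token counts,
# using the published rates: $5 per 1M input tokens, $15 per 1M output tokens.

GPT_4O_INPUT_PRICE_PER_M = 5.00    # USD per million input tokens
GPT_4O_OUTPUT_PRICE_PER_M = 15.00  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    cost = (input_tokens / 1_000_000) * GPT_4O_INPUT_PRICE_PER_M
    cost += (output_tokens / 1_000_000) * GPT_4O_OUTPUT_PRICE_PER_M
    return cost

# Example: a 1,000-token prompt that gets a 500-token reply
print(f"${estimate_cost(1_000, 500):.4f}")  # → $0.0125
```

<p>In other words, a typical chat turn costs a fraction of a cent. The token counts for a real request come back in the API response&#8217;s usage field.</p>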
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/what-is-the-cost-of-gpt-4o/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4702</post-id>	</item>
	</channel>
</rss>
