I was just using ChatGPT to research Carol Dweck for a new series of blog posts. While using ChatGPT, one of my prompts triggered a new response. I saw ChatGPT say, “Request for o1 pro mode.” It stayed on this message for a minute, and then my glorious output popped out.
Ah, what does that mean? I have heard rumors that OpenAI will treat the world to a GenAI advent calendar of sorts, meaning they are going to release a series of new features over the course of the month leading up to Christmas. Is this a preview of things to come? As I learn more, I will update this post. Stay tuned.
Update: Hints on Discord
This just in from the OpenAI Discord server:
Update: Day 1 of 12 Days of OpenAI
Sam Altman and some members of the OpenAI team introduce & demo o1 and o1 pro mode in ChatGPT and discuss the ChatGPT Pro plan.
Update: OpenAI o1 Model Benchmarks
Update: Day 1 of 12 Days of OpenAI Summary
OpenAI has launched a bunch of cool new stuff on their Day 1 of the 12 Days of OpenAI. They’ve been working hard on a new model called o1, which is even smarter and faster than the previous “preview” version. It can now understand images and text together. They also introduced ChatGPT Pro, a $200/month subscription that gives you unlimited access to their best models and a special “Pro Mode” for tackling the toughest problems.
The team showed off how o1 can solve complex problems, like figuring out the size of a cooling panel for a data center in space, just by looking at a hand-drawn diagram. They also demonstrated how much faster o1 is compared to the older model, especially for everyday tasks.
With ChatGPT Pro Mode, you can make o1 think even harder to solve really difficult math, science, or programming challenges. They’re planning to add even more features to Pro and bring o1 to their API for developers to use.
Update: Day 2 of 12 Days of OpenAI
Mark Chen, SVP of OpenAI Research, Justin Reese, Computational Researcher in Environmental Genomics and Systems Biology at Berkeley Lab, and some OpenAI team members demo and discuss Reinforcement Fine-Tuning.
Update: Day 2 of 12 Days of OpenAI Summary
The news from Day 2 is that OpenAI is making it possible for people to customize their AI models. They’re introducing a new feature called reinforcement fine-tuning, which lets you train their models on your own data to make them experts in your specific field. This means you can have an AI assistant that’s super helpful for things like legal work, finance, or even scientific research.
They showed how this works with an example from a researcher studying rare genetic diseases. By fine-tuning one of their models, they were able to make it much better at predicting which genes might be responsible for a disease based on a patient’s symptoms.
OpenAI is starting a research program to let people try out this new fine-tuning feature and plans to make it available to everyone early next year. They’re excited to see what people create with it and how it can be used to solve real-world problems.
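To make the genetics example concrete, here is a hedged sketch of what training data for reinforcement fine-tuning could look like: each example pairs a prompt with a reference answer that a grader can score against. The JSONL shape shown is an illustration only, not OpenAI's documented schema, and the gene examples are for demonstration.

```python
import json

# Hypothetical training examples: a symptom description plus a
# reference answer the grader can compare the model's output to.
examples = [
    {
        "messages": [{
            "role": "user",
            "content": "Patient symptoms: aniridia, developmental delay. "
                       "Which gene is most likely involved?",
        }],
        "reference_answer": "PAX6",
    },
    {
        "messages": [{
            "role": "user",
            "content": "Patient symptoms: progressive muscle weakness, "
                       "elevated creatine kinase. Which gene is most likely involved?",
        }],
        "reference_answer": "DMD",
    },
]

# Serialize one JSON object per line (JSONL), the usual format
# for fine-tuning datasets.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
print(jsonl)
```

The key difference from supervised fine-tuning is that the reference answer is not copied verbatim; it is used to reward or penalize the model's own reasoning, which is why a small dataset can go a long way.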
They continued with the “dad” Christmas jokes… this one was about a self-driving sleigh needing “pine-tuning” to avoid trees.
Update: Day 3 of 12 Days of OpenAI
Today’s update from OpenAI is all about Sora, their video generation service. The announcement comes in the form of a video from Sam Altman, Aditya Ramesh, Bill Peebles, Rohan Sahai, and Joey Flynn. Sora starts with a browse-and-explore feed, meant to inspire and serve as a place to share ideas with the community. If you find a video you like, you can see the exact method used to create it.
Sora is an AI model that can generate videos from text prompts, images, or even detailed storyboards. It’s available today in most of the world for ChatGPT Plus and Pro users. Sora can create videos up to 20 seconds long with different aspect ratios and resolutions, and it offers tons of creative tools to help you bring your vision to life.
You can give Sora simple text descriptions, like “woolly mammoths walking through a desert landscape,” and it will generate multiple variations for you to choose from. If you want more control, you can use the storyboard feature to direct the action with a timeline and multiple scenes. Sora can even animate still images and create seamless loops.
They also showed off some advanced features like “remix,” which lets you make changes to existing videos, and “blend,” which combines two different scenes into a cohesive new video. OpenAI is really excited to see what people create with Sora and how it will shape the future of video generation and storytelling. They emphasized that Sora is still in its early stages, but it’s already incredibly powerful and has the potential to revolutionize how we make and interact with videos.
OpenAI released four tutorial-style videos: how to get started, blend, remix, and recut.
Update: Day 4 of the 12 Days of OpenAI
Today is all about canvas.
Update: ChatGPT Canvas
OpenAI has launched Canvas, a new feature that allows you to collaborate with ChatGPT on writing and coding projects. Canvas provides a side-by-side view of your chat and a dedicated workspace where you can edit text, code, and even run Python code directly within the interface.
- Co-create stories and documents: Work with ChatGPT to write stories, essays, and other content, with the ability to edit and provide feedback in real-time. ChatGPT can even add emojis to your text!
- Get feedback on your writing: Receive targeted suggestions and comments on your writing from ChatGPT, with the option to apply or reject edits.
- Debug and run Python code: Write, debug, and execute Python code directly within Canvas, complete with syntax highlighting, autocompletion, and the ability to generate graphics.
- Use Canvas in custom GPTs: Integrate Canvas into your custom GPTs to create specialized tools for tasks like drafting letters or generating reports.
Canvas is rolling out to all web users, regardless of their plan, and is packed with features to enhance your writing and coding workflow. OpenAI is excited to see how people use Canvas to collaborate with ChatGPT and bring their ideas to life.
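To give a feel for the coding side, here is a small, self-contained script of the sort you might write, run, and then ask ChatGPT to debug or refactor inside Canvas. The specific task is hypothetical; it is just an example of code that produces output you can iterate on.

```python
# A tiny script you might iterate on in Canvas: compute basic
# summary statistics for a list of numbers.
from statistics import mean, median

def summarize(values: list) -> dict:
    """Return count, mean, and median for a list of numbers."""
    return {
        "count": len(values),
        "mean": mean(values),
        "median": median(values),
    }

print(summarize([3, 1, 4, 1, 5, 9, 2, 6]))
```

In Canvas you could then highlight the function and ask for targeted edits, such as adding error handling for an empty list, without leaving the workspace.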
Update: Day 5 of the 12 Days of OpenAI
OpenAI has teamed up with Apple to bring ChatGPT to iPhones, iPads, and Macs! Now, you can access ChatGPT directly through Siri, writing tools, and even your camera. Ask Siri to organize a Christmas party or use your camera to have ChatGPT judge your Christmas sweater contest.
On your Mac, you can use ChatGPT within any application by typing to Siri or using writing tools. It can even analyze long documents and summarize key information for you. Sam Altman, Miqdad Jaffer, and Dave Cummings showed an example of using ChatGPT to analyze a 49-page PDF and create a pie chart visualizing the data.
This integration makes it super easy to use ChatGPT on all your Apple devices, whether you’re planning a party, writing a document, or just need some help understanding complex information. OpenAI is excited about this partnership and hopes it makes ChatGPT even more accessible and useful for everyone.
Update: Day 6 of the 12 Days of OpenAI
Today’s update is about adding video to the app. Audio was already included, but now you can use video from your phone’s camera, and what’s on your screen, as input into ChatGPT. They also added a “Santa” persona for the rest of the month. To access Santa, click the snowflake icon within the ChatGPT app.
Update: Day 7 of the 12 Days of OpenAI
OpenAI just launched a new feature called Projects for ChatGPT. It’s like having smart folders for your chats, where you can upload files, set custom instructions, and keep everything organized. This means you can tailor ChatGPT to specific tasks.
They showed some cool examples of how Projects works. One person used it to organize their Secret Santa, uploading spreadsheets and rules, and even using ChatGPT to randomly assign gift givers. Another person used it to keep track of home maintenance tasks, like when to change their fridge water filter.
Projects also work great for coding. They showed how to upload code files and use ChatGPT to help you write and debug code, even in less common frameworks like Astro. They even updated a personal website template with information from their resume and social media links.
Projects are rolling out to ChatGPT Plus, Pro, and Team users today, with free and Enterprise users getting access soon. OpenAI is excited to see how people use Projects to collaborate with ChatGPT and tackle all sorts of tasks.
Update: Day 8 of the 12 Days of OpenAI
Kevin Weil, Adam Fry, and Cristina Scheau introduce and demo updates to ChatGPT search.
OpenAI is bringing its ChatGPT search feature to all users for free! Everyone can now access real-time information and web browsing directly within ChatGPT. They’ve also made several improvements based on user feedback, including faster speeds, better mobile experience, and new map features.
You can now search directly from the main chat bar or the dedicated search icon. ChatGPT intelligently decides if your question needs web results and presents them with clear citations and visuals. You can even make ChatGPT your default search engine for quick access to websites.
They’ve also added search to the advanced voice mode, so you can ask questions naturally and get spoken answers with web information. This update makes ChatGPT even more powerful and accessible for everyone, whether you’re on your computer or mobile device.
Update: Day 9 of the 12 Days of OpenAI
Olivier Godement, Sean DuBois, Andrew Peng, Michelle Pokrass, and Brian Zhang introduce and demo developer and API updates.
OpenAI is holding a mini holiday Dev Day, and they’re excited to share some new updates:
- Their powerful o1 model is launching in the API, so developers can build even more sophisticated applications on its advanced capabilities.
- Updates to the Realtime API make it easier and cheaper to build real-time voice experiences with AI.
- Preference fine-tuning, a new method that lets developers train models based on preferred responses, leading to more accurate and aligned AI assistants.
- New Go and Java SDKs, a simplified API key signup process, Dev Day talks released on YouTube, and an AMA with the OpenAI team.
One of the released Dev Day talks: “Transforming Contact Centers with GPT-4o Multi-Agent Crews and Human-in-the-Loop,” on building agents with OpenAI o1 and GPT-4o for automation, quality assurance, and human-in-the-loop solutions. Presenter: Maik Hummel, Principal AI Evangelist, Parloa.
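With o1 landing in the API, a request might look like the sketch below. This is a minimal illustration of a Chat Completions-style request payload; the `reasoning_effort` parameter and its values are assumptions based on the announcement, not confirmed documentation, and actually sending the request would require the OpenAI SDK and an API key.

```python
import json

# Build a hypothetical request payload for the o1 model.
# Parameter names here are assumptions for illustration.
def build_o1_request(prompt: str, effort: str = "medium") -> dict:
    return {
        "model": "o1",
        "messages": [{"role": "user", "content": prompt}],
        # Reasoning models trade latency for deeper "thinking";
        # a knob like this would control how hard the model works.
        "reasoning_effort": effort,
    }

request = build_o1_request("Prove that the sum of two even numbers is even.")
print(json.dumps(request, indent=2))
```

In practice you would pass these fields to the SDK's chat completion call rather than constructing raw JSON, but the shape of the request is the interesting part for developers deciding whether to upgrade from o1-preview.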
Update: Day 10 of the 12 Days of OpenAI
Kevin Weil, Antonia Woodford, and Amadou Crookes introduce and demo 1-800-CHAT-GPT and ChatGPT in WhatsApp.
No joke. This might be the most significant update so far. Adding a phone interface changes how accessible ChatGPT is and how you think about it.
Update: Day 11 of the 12 Days of OpenAI
Kevin Weil, Justin Rushing, and John Nastos introduce and demo Work with Apps.
Update: Day 12 of the 12 Days of OpenAI
Well, we are here: The twelfth day of OpenAI shipmas.
OpenAI announced two new AI models: o3 and o3-mini! o3 is a powerhouse, crushing tough benchmarks in coding and math. It even scored impressively high on the ARC-AGI test, a measure of general intelligence, beating previous AI models. o3-mini, while smaller, is still incredibly smart and much faster, making it perfect for quick tasks and saving on cost.
They showed off some cool demos of o3-mini in action, like having it write code to evaluate its own performance on a challenging dataset. They also highlighted how o3-mini is even better than previous models at coding and math, and it’s way faster too! Plus, it comes with handy features like function calling and structured outputs for developers.
While not publicly available just yet, OpenAI is inviting researchers to help safety test these new models. They’re aiming to launch o3-mini by the end of January and o3 shortly after. They also introduced a new safety technique called “deliberative alignment” that uses the models’ reasoning abilities to better identify and avoid unsafe prompts.