Howdy, wizards.
Here's what's brewing in AI today.
DARIO'S PICKS
OpenAI had a big update in store for developers in today's shipmas: o1 is now in the API, with support for new tools and cheaper pricing.
Most importantly, o1 uses 60% fewer "thinking tokens" than the o1-preview model, making it much faster and cheaper.
Here are the new tools now available in the o1 API:
Vision inputs (new): o1 can see and reason over images uploaded to it through the API. For example, imagine a user taking a picture at a manufacturing facility or in a lab setting and having o1 give feedback; that's the type of workflow that can now be built into AI applications.
Reasoning effort (new): Lets you tell the model how long to think before giving you a reply (saves time and money on easy prompts).
Developer messages (new): Lets developers tell the model what kind of instructions to follow and in what order, e.g. tone, style, and other behavioral guidance.
Function calling: Lets you connect o1 to third-party data and APIs.
Structured outputs: Gets you responses in a specific format where structured data is needed.
OpenAI's testing shows that function calling and structured outputs (and the combination of the two) work way better with o1 than with GPT-4o; o1 does a much better job of calling the correct functions when it should.
If you want to see how the model and tools work together, check out OpenAI's demo on using o1 to detect and correct errors in a tax form, combining vision input, function calling, and structured outputs all at once.
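For the curious, here's a minimal sketch of what a request combining these tools might look like. The field names follow OpenAI's documented Chat Completions API, but the tax-form schema, the developer instructions, and the image URL are made-up examples for illustration (loosely echoing the tax-form demo), not OpenAI's actual demo code.

```python
import json

# Illustrative request body combining the new o1 API tools:
# a developer message, a vision input, reasoning effort, and
# a structured-output JSON schema.
request = {
    "model": "o1",
    "messages": [
        # Developer message: high-level behavioral instructions.
        {"role": "developer", "content": "You are a tax-form reviewer. Be terse."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Check this form for errors."},
                # Vision input: an image passed alongside the text.
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/form.png"}},
            ],
        },
    ],
    # Reasoning effort: spend fewer thinking tokens on an easy prompt.
    "reasoning_effort": "low",
    # Structured outputs: force a JSON reply matching this schema.
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "form_check",
            "schema": {
                "type": "object",
                "properties": {
                    "has_errors": {"type": "boolean"},
                    "notes": {"type": "string"},
                },
                "required": ["has_errors", "notes"],
            },
        },
    },
}

# With the official SDK you would send this as roughly:
# client.chat.completions.create(**request)
print(json.dumps(request, indent=2))
```

The nice part is that these pieces compose: one request can carry an image, a behavioral contract, a cost/latency dial, and a guaranteed output shape at the same time.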
Why it matters: o1 becoming simultaneously faster, better, and cheaper is definitely an early x-mas gift for devs interested in building advanced AI applications. While o1-preview's limitations deterred many developers, the full o1 release makes advanced AI capabilities more accessible. This will likely accelerate the adoption of sophisticated AI features across a broader range of applications.
TOGETHER WITH 1440 MEDIA
The team at 1440 scours 100+ sources, ranging from culture and science to sports and politics, to create one email that gets you all caught up on the day's events in 5 minutes. According to Gallup, 51% of Americans can't think of a news source that reports the news objectively. It's 100% free. It has everything you need to be aware of for the day. And most importantly, it simplifies your life.
DARIO'S PICKS

ChatGPT Search is now free and available globally on chatgpt.com // Source: OpenAI
Yesterday's shipmas put OpenAI and ChatGPT another step closer to competing with Google Search.
Here's what's rolling out over the next week:
ChatGPT Search is rolling out to all logged-in users, including free users.
ChatGPT is getting more useful for navigational searches. When you search for something, a grid of relevant website links will appear right at the top; this way you don't have to wait for ChatGPT to generate its full answer before you get the link you need. This makes it much better as a traditional search engine while still providing the AI answers.
Advanced Voice Mode will have search, too. This means you can get up-to-date info while talking with ChatGPT. Check out this demo of asking ChatGPT about this year's Christmas events in Zurich and New York (including opening hours, current weather, and more).
ChatGPT on mobile apps is getting maps. It'll let you chat about nearby businesses and restaurants, kind of like a smart version of Google Maps.
Why it matters: OpenAI really wants (and is making a concerted effort) to get people to switch from Google to ChatGPT. While we know AI is just soo smart, a lot of the time when we're using a search engine we're not really looking for answers per se; we're looking for links. Hence OpenAI's move to put links at the top, then answers.
What I'm personally most excited about here is the voice + search combo. I can see this feature becoming especially popular in contexts where reading is impossible or impractical, such as while driving, for people with visual impairments, for customer support agents, and more.
DARIO'S PICKS
4. 5 quick-fire headlines
There's been more brewing in AI over the last few days than I'm able to cover in detail today. Here are the other important headlines to keep you in the loop:
Google announced an updated video generation model, Veo 2, as well as the next version of its image generator, Imagen 3. Curious how good the video model is? Check out this first-try video with Veo 2 of "1960s film of Elvis shaking hands with an alien in the White House".
Nvidia just unveiled the Jetson Orin Nano Super Developer Kit, a compact "genAI supercomputer" for only $250.
Google DeepMind launched FACTS, a new framework for evaluating how well LLMs generate factual information.
Midjourney introduced a moodboard feature that lets users upload images to customise the model, i.e. creating personalised styles for images.
OpenAI currently has no plans to release an API for Sora, according to Romain Huet, the company's Head of Developer Experience.
THAT'S ALL FOLKS!
Enjoying this newsletter? The best way to support my content is by checking out today's awesome sponsor, 1440 Media, and their unbiased news breakdown.
Was this email forwarded to you? Sign up here.
Want to get in front of 13,000 AI enthusiasts? Work with me.
This newsletter is written & curated by Dario Chincha.
What's your verdict on today's email?
Affiliate disclosure: To cover the cost of my email software and the time I spend writing this newsletter, I sometimes link to products and other newsletters. Please assume these are affiliate links. If you choose to subscribe to a newsletter or buy a product through any of my links, then THANK YOU: it will make it possible for me to continue to do this.




