Howdy, wizards.
Have you tried OpenAI's recommended career prompt with ChatGPT yet? Here it is: "Based on all our interactions, what's a career path I might enjoy that I might not realize I'd like?"
I got Experience Design (spot on) and AI ethics consultancy (umm, no).
All right β letβs unpack the most important AI news of the day.
DARIO'S PICKS
Anthropic, the maker of Claude and the AI company founded by former OpenAI employees who wanted a more safety-oriented approach to AI, just published a major update to its Responsible Scaling Policy.
Overall, the updated policy is more flexible, allowing the company to adapt its safety measures to an AI model's capabilities.
Here's what's new:
They redefined what AI Safety Levels mean. The levels no longer refer to the models themselves, but rather to specific "capability thresholds" paired with "required safeguards" (as shown in the screenshot above).
They've introduced a checkpoint for autonomous AI capabilities, which triggers additional evaluation rather than automatically enforcing higher safety standards. Anthropic now believes that the capabilities initially considered at this threshold don't require escalating to stricter safety measures.
A new threshold for "AI systems that can significantly advance AI development." Such capabilities could lead to rapid, uncontrolled advancements that outpace Anthropic's ability to evaluate and assess emerging risks.
They're moving away from prespecified evaluations and prescriptive methodologies for testing AI capabilities. Instead, they're opting for affirmative cases and more general requirements, having found that rigid methodologies quickly become outdated as new developments happen.
Why it matters: The policy update suggests Anthropic has big things in the works, perhaps a new model release, a big funding round, or both:
Anthropic's new policy says they don't need to escalate safety measures upon reaching what they previously defined as autonomous AI capabilities, which seems to indicate we're materially closer to that point.
They're switching to a more pragmatic approach to evaluating capabilities: making an affirmative case that a model isn't at a certain capability level, rather than relying on predefined methods. This signals we're venturing into new, uncharted territory and need to adapt safety measures as we go.
Recent reports suggest Anthropic is actively talking to investors, aiming for a $40 billion valuation. Anthropic's CEO also published a long essay earlier this week with an optimistic vision for AI's future, much like what Sam Altman did days before OpenAI's recent, record-breaking funding round.
TOGETHER WITH TELLO
With Tello Mobile, you can say goodbye to overpriced contracts and hello to freedom. Their flexible, affordable options start as low as $5 and go up to $25/month for Unlimited Everything, allowing you to customize each plan to suit your family's exact requirements.
Whether you're looking for reliable 4G LTE/5G coverage, Wi-Fi calling, free international calls to 60+ countries, or unlimited texts, Tello has you covered. And with no contracts or hidden fees, you'll enjoy peace of mind knowing that you're getting exactly what you pay for.
Bring your own phone or explore our selection of devices to find the perfect fit for you. Stop settling for expensive plans that charge you for what you don't need: create your perfect plan with Tello Mobile today and start saving.
DARIOβS PICKS
The US government has announced Operation AI Comply, an initiative to take legal action against companies making deceptive claims about their AI-powered products. They've recently taken action against five companies, two of which have settled and three of which are facing ongoing lawsuits:
DoNotPay: a "robot lawyer" that claimed to substitute for legal expertise.
Ascend Ecom: an AI-powered tool that claimed to help people make thousands through online storefronts.
Ecommerce Empire Builders: a tool that claimed to help people build an "AI-powered ecommerce empire."
Rytr: a writing tool that generated and posted fake reviews of companies on Google and Trustpilot.
FBA Machine: an AI tool that claimed to automate the building and management of Amazon stores.
Why it matters: This has less to do with AI and more to do with businesses taking the shady route to get people to open their wallets; "AI" is just the latest trick in the book.
The outcomes look reasonable for the companies that have already settled:
Rytr had to remove its functionality for generating reviews and testimonials, something that shouldn't have been a service in the first place.
I checked out DoNotPay's current website, and it looks like they've switched from taglines like "robot lawyer", which implies you can replace traditional legal services, to the more down-to-earth "your consumer champion". They also had to pay $193,000 in consumer redress.
Hat tip to The Batch for the link.
RECOMMENDED
Love Hacker News but don't have the time to read it every day?
THAT'S ALL FOLKS!
Was this email forwarded to you? Sign up here.
Want to get in front of 13,000 AI enthusiasts? Work with me.
This newsletter is written & curated by Dario Chincha.
What's your verdict on today's email?
Affiliate disclosure: To cover the cost of my email software and the time I spend writing this newsletter, I sometimes link to products and other newsletters. Please assume these are affiliate links. If you choose to subscribe to a newsletter or buy a product through any of my links, then THANK YOU; it will make it possible for me to continue to do this.




