Gemini 1.5 Pro Expands, OpenAI & Meta Innovate, Siri Learns Apps
Today's pick
Gemini 1.5 Pro Now Available in 180+ Countries, With Native Audio Understanding, System Instructions, JSON Mode and More. Less than two months ago, we made our next-generation Gemini 1.5 Pro model available in Google AI Studio for developers to try out. We've been amazed by what the community has been able to debug, create and learn using our groundbreaking 1 million token context window. Today, we're making Gemini 1.5 Pro available in 180+ countries via the Gemini API in public preview, with a first-ever native audio (speech) understanding capability and a new File API to make it easy to handle files. By Jaclyn Konzelmann via Google Developers Blog
New @Google developer launch today:
- Gemini 1.5 Pro is now available in 180+ countries via the Gemini API in public preview
- Supports audio (speech) understanding capability, and a new File API to make it easy to handle files
- New embedding model!
- Logan Kilpatrick (@OfficialLoganK)
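For a sense of how two of the headline features fit together, here is a minimal sketch of a Gemini API request that sets a system instruction and enables JSON mode via the REST endpoint. The model name, prompt text, and `GEMINI_API_KEY` environment variable are illustrative assumptions; check the official API reference for current field names before relying on this.

```python
import json
import os
import urllib.request

# Public-preview REST endpoint for the Gemini API; model name is illustrative.
API_URL = ("https://generativelanguage.googleapis.com/v1beta/"
           "models/gemini-1.5-pro:generateContent")

def build_request(prompt: str, system_instruction: str) -> dict:
    """Build a generateContent body using a system instruction and JSON mode.

    Field names follow the Gemini REST API's camelCase convention;
    treat this as a sketch, not a definitive schema.
    """
    return {
        "systemInstruction": {"parts": [{"text": system_instruction}]},
        "contents": [{"parts": [{"text": prompt}]}],
        # JSON mode: ask the model to emit application/json output.
        "generationConfig": {"responseMimeType": "application/json"},
    }

def generate(prompt: str, system_instruction: str, api_key: str) -> dict:
    """POST the request body and return the decoded JSON response."""
    body = json.dumps(build_request(prompt, system_instruction)).encode()
    req = urllib.request.Request(
        f"{API_URL}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Only makes a network call if a key is actually set.
    key = os.environ.get("GEMINI_API_KEY", "")
    if key:
        print(generate("List three uses of a long context window.",
                       "Answer as a JSON array of strings.", key))
```

Audio understanding works the same way, except the media is first uploaded through the new File API and a reference to the uploaded file is passed alongside the text part.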
OpenAI and Meta ready new AI models capable of 'reasoning'. OpenAI and Meta are on the brink of releasing new artificial intelligence models that they say will be capable of reasoning and planning, key steps towards achieving superhuman cognition in machines. This week, executives at OpenAI and Meta signalled that they were preparing to launch the next versions of their large language models, the systems that power generative AI applications such as ChatGPT. By Madhumita Murgia via Financial Times
"OpenAI and Meta are on the brink of releasing new artificial intelligence models that they say will be capable of reasoning and planning, critical steps towards achieving superhuman cognition in machines." - Shashank Joshi (@shashj)
The best way to reach new readers is through word of mouth. If you click THIS LINK in your inbox, it'll create an easy-to-send pre-written email you can just fire off to some friends.
Apple's new AI model could help Siri see how iOS apps work. Apple's Ferret LLM could allow Siri to understand the layout of apps on an iPhone display, potentially increasing the capabilities of Apple's digital assistant. Apple has been working on numerous machine learning and AI projects that it could tease at WWDC 2024. In a just-released paper, it now seems that some of that work has the potential for Siri to understand what apps and iOS itself look like. By Malcolm Owen via AppleInsider
12-24 months... "browsing for you" "coding for you" "writing for you" "editing for you"... to beat Apple, Google, & Microsoft until too late. This is gonna be fun!!! - Josh Miller (@joshm)
Survey shows that teenagers are using more VR devices in the US. The AR/VR device market has always been considered a niche. Even with the launch of the Apple Vision Pro earlier this year, there was no expectation of major changes due to the high price tag of $3,500. However, a new survey by Piper Sandler has revealed that teenagers are using more VR headsets, at least in the US. By Filipe Espósito via 9to5Mac
Survey estimates 33% of US teenagers own a VR headset. Nearly half of those are headset WAU. - Paul Katsen (@pavtalk)
Collaborative Robotics is prioritizing 'human problem solving' over humanoid forms. Humanoids have sucked a lot of the air out of the room. It is, after all, a lot easier to generate press for robots that look and move like humans. Ultimately, however, both the efficacy and scalability of such designs have yet to be proven out. For a while now, Collaborative Robotics founder Brad Porter has eschewed robots that look like people. Machines that can potentially reason like people, however, are another thing entirely. By Brian Heater via TechCrunch
Thanks for reading to the bottom and soaking in our Newslit Daily, fueled with highlights for your morning.
I hope you found it interesting and, needless to say, if you have any questions or feedback let me know by hitting reply.
Take care and see you tomorrow!
P.S. Want to advertise with us? We'd love to hear from you.