In a collaboration that has stayed largely out of the public eye, former Apple design head Jonathan “Jony” Ive and OpenAI chief Sam Altman have joined forces, with the primary focus being the development of a physical AI gadget.
Earlier this year, OpenAI acquired io Products, a design startup founded by Ive, for a cool $6.5 billion. Besides him, twenty ex-Apple employees joined OpenAI as well. Considering Jony Ive’s stellar track record, this may just turn out to be a bargain in a few years.
What’s cooking at OpenAI?
Although no CAD leaks or pictures of the device have surfaced yet, several public details exist about its nature, positioning and purpose. It’s planned to be an AI-based, part companion, part lifestyle device, similar in concept to the Rabbit R1. Judging by the confidence and excitement OpenAI expressed in the public release document, it’s apparent that this is not going to be as half-baked a product as that one was.
As per a Wall Street Journal report, Altman also says it’s going to be a “third device”, the next must-have after a MacBook Pro and an iPhone. It’ll be small enough to fit in your pocket or sit on your desk. As Ive clarified, it isn’t going to be a phone either, or any sort of wearable device for that matter.
Jony Ive left Apple in 2019 – fun fact, that’s also the year the iPhone 11, the last iPhone with rounded edges, was released – not counting the modestly updated iPhone SE 2nd and 3rd gen.
The device is planned to be always listening and always watching, in order to be fully aware of the elements of the user’s life, while also possibly having a screen-less interface that relies on mics and cameras for perception. It’s going to have a personality, but Ive says that “it’s also not going to be your AI girlfriend”. To give the gadget a conversational nature, it’ll also be designed to respond when you want it to, and to stay quiet when you don’t.
“As great as phones and computers are, there’s always something new to do”, Altman said. Ive shares a similar vision, expressing concern over where our relationship with devices has ended up. He reiterated that devices are supposed to make us happier and less anxious, and that this should be the focus of development.
However, it’s not been smooth sailing. As is expected when creating a new computing paradigm, innovation is the bottleneck.
“Hardware is hard. Figuring out a new computing form factor is hard. I think we have the chance to do something amazing, but it will take a while.”
Sam Altman
The list of hurdles is voluminous. And while it may seem like there’s no end in sight, I believe Jony Ive’s history is a very compelling argument for the success of this product.
Raising the ceiling of AI: Can design be the key?
The first iPod, the iPhone, the iPad, the PowerBooks, and even the Apple Stores are all well-known, highly desirable, iconic products. They’re a testament to Ive’s ability to seamlessly bridge complicated technical minutiae with our dynamic world, time and again. Such talent is exceedingly rare.

Details like textures, finishes, device weight and so on were debated for weeks on end – for example, the hue of grey to be used for Apple Store restroom signs took a half hour to decide. The iPhone 4 antennagate fix was delayed because Ive didn’t like the feel of the materials that could eliminate the Faraday cage problem. There’s no shortage of such examples.
This level of obsession over the small things was a driving force behind the behemoth that Apple became over the last few decades. Besides obsession, it was the unbridled innovation and constant stream of altogether new kinds of tech that made Apple, Apple. One of the largest sources of this innovation was Jony Ive. And of course, Steve Jobs.
It’s been evident for a while that OpenAI is facing a mismatch – their new kind of technology is being used in the same old way, i.e. opening a browser on a computer and then entering a prompt. If they really wish to change this with custom hardware, Jony Ive is their best bet. This is objectively an excellent acquisition for OpenAI.
Another flawed chase or a potential iPhone moment?
It is too soon to say what the outcome of this collaboration might be, but I am optimistic. As with most other things in the AI landscape today, overpromising and underdelivering seems to be the norm. Understandably, there are concerns that the AI hype is a bubble, and they’re not exactly misplaced. AI-based gadgets across the board have turned out to be largely disappointing, but this is the first set of big-name players taking a shot at one, and expectedly, they’ll come at it all guns blazing.
Methods of interaction with the device are another pain point, since they’ve made the decision to not give it a screen. This is yet another example of how AI isn’t readily blending with the realm of UI/UX design. Multitouch was a seismic eureka moment, and although we’ve come a long way since, we haven’t had that kind of innovation in how we interact with our technology. The premise of a screen-less AI gadget promises a glimpse of just that.
A screen-less interface is a monumental task to implement well. On top of that, having it reliably and accurately interpret the user’s surroundings and then reference them later makes the device all the more complicated. Devices like the Alexa are much simpler by comparison: they don’t have to analyze your inputs, or recall them at later dates depending on the context of the conversation. It’s a highly complicated set of operations that we haven’t fully achieved even on screen-based devices, which is why it’ll be harder still to execute well on this upcoming OpenAI device.
Despite the apprehensions, it is indeed an exciting idea.
This has the potential to completely reinvent how we interact with our devices. A screen-less interface fluent enough to handle everything through voice alone – so well that you never feel the need for a screen – is an aspirational target.
If a device with such an interface does make it to production, the interface may find mainstream adoption on other devices too. Eventually, it could become the preferred medium of interaction even for smartphones and computers. There will always be a percentage of the population that would still prefer a screen, but such an interface might nonetheless gain a vast following.
There are a lot of variables in this equation, though: compute shortages, hardware specifications, form factor, and so on. Most notably, there are the privacy concerns that come with an always-listening, always-watching device – are we willing to buy into that? Are we okay with putting our data into the hands of an organization that’s known for not being fastidious with data security, and for flouting copyright norms when sourcing data to train its models?
An employee from io expressed concern about the compute crunch:
“Amazon has the compute for an Alexa, so does Google [for their home device], but OpenAI is struggling with getting enough compute for ChatGPT, let alone an AI-native device. They need to fix that first.”
An io employee
I’m also having trouble envisioning how such a device would function well enough. ChatGPT and Claude token limits are a pain point today, and hallucinations in longer chats are positively annoying. Such instability draws a big question mark over reliably recalling past events and chats with the user, and that’s critical for referencing them accurately in future discussions – a core function of such a device.
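To make the recall problem concrete, here’s a toy sketch of the kind of memory lookup such a device would need. Everything here is an illustrative assumption, not anything OpenAI has described: a real system would use learned embeddings and a vector database, whereas this uses plain bag-of-words similarity. The interesting design point is the threshold – refusing to answer below it is one simple guard against the device “hallucinating” a memory.

```python
from collections import Counter
from math import sqrt


def _vector(text):
    # Bag-of-words vector; a real device would use learned embeddings.
    return Counter(text.lower().split())


def _cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


class MemoryStore:
    """Toy long-term memory: store past events, recall the most relevant one."""

    def __init__(self):
        self.events = []

    def remember(self, text):
        self.events.append((text, _vector(text)))

    def recall(self, query, threshold=0.2):
        qv = _vector(query)
        best = max(self.events, key=lambda e: _cosine(qv, e[1]), default=None)
        if best and _cosine(qv, best[1]) >= threshold:
            return best[0]
        return None  # refuse to guess rather than fabricate a memory


m = MemoryStore()
m.remember("dentist appointment friday 3pm")
m.remember("bought coffee beans from the corner shop")
print(m.recall("when is my dentist appointment"))
print(m.recall("weather on mars"))
```

Even in this tiny sketch, the failure mode the article worries about is visible: a query that only weakly matches anything returns `None`, and where exactly to draw that line is a judgment call with no clean answer.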
There is also the prompt engineering side of things. It’ll be a challenge to give the embedded AI model a helpful yet neutral personality, to mitigate the problem of people getting overly attached to it, and to ensure that it stays true to its purpose.
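A common way to approach this is a system prompt that pins down the personality before any user input arrives. The prompt text and `build_messages` helper below are purely hypothetical – a sketch of the technique in the standard chat-message format, not OpenAI’s actual design:

```python
# Hypothetical system prompt for an always-on device assistant.
# The guardrails below are illustrative assumptions, not a real product spec.
SYSTEM_PROMPT = """\
You are a hands-free assistant on a personal device.
- Be warm but neutral; do not roleplay romantic or emotional attachment.
- Respond only when addressed; otherwise stay silent.
- If you cannot recall something reliably, say so instead of guessing.
"""


def build_messages(user_utterance, recalled_context=None):
    """Assemble a chat-format message list, optionally injecting recalled memory."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if recalled_context:
        messages.append({"role": "system",
                         "content": f"Relevant memory: {recalled_context}"})
    messages.append({"role": "user", "content": user_utterance})
    return messages


msgs = build_messages("what's on my calendar?", recalled_context="dentist friday 3pm")
for m in msgs:
    print(m["role"], "->", m["content"][:40])
```

The hard part, of course, is that a few lines of instructions like these are exactly the sort of guardrails users have repeatedly coaxed models into ignoring – which is why this remains a challenge rather than a solved problem.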
The Rabbit R1 had exposed API keys and abysmal battery life, and the Humane AI Pin was borderline useless. Both were hurried releases, more concerned with hype than with what the product could actually do. I don’t fully expect it, but I hope for better from OpenAI and Jony Ive. The speculated release is late 2026, which is not an insignificant amount of time. That further reinforces my optimism towards this collaboration.
As things stand today, I’m sceptical that present-day AI is capable of pulling this off. A truly unbiased, privacy-focused, reliable AI assistant would indeed be highly popular and successful. But as we’ve seen, those seem to be tough goals for AI models to meet.
I’m not one of the Blade Runner AI doomsday people, quite the opposite in fact. I’d absolutely love to see this idea brought to life. It has the makings of a resoundingly successful product.
If it’s sensitive about user privacy. If they can design it to make AI interactions feel human. If it can be relied upon not to hallucinate memories, and to interpret them right in the first place. If it actually lives up to its promise of being an unbiased AI life companion. If it isn’t $2,499.
If you count the number of ifs here, it’s clear that there’s a long, hard road ahead of OpenAI. Even with Jony Ive on their side.
However, if they do pull out all the stops and bring this concept off the drawing board, there may be something absolutely astronomical here. Besides making AI feel more approachable and truly human, this project may very well achieve a method of interaction superior to multi-touch.
Exciting as that is, I’d recommend curbing your enthusiasm.