AI Is Now A Product Design Challenge
Where we are in AI today, where we're heading, and a theory about the Altman-Ive marriage.
Morning everyone 👨🎨
Welcome to this Sunday’s essay. 😇
↓ Level 1: We're about to reach the ceiling on high-quality training data for AI.
↓ Level 2: This forces AI companies to shift from scale to design.
↓ Level 3: The Altman-Ive partnership might not be about building an AI device—it might be about unlimited access to live human behavioral data.
You might not know this, but I'm doing my master's in design. So I've spent the past few weeks buried in research on AI, particularly AI-assisted high-stakes decision-making in areas like healthcare, criminal justice, and finance. This isn't just academic curiosity—I'm trying to understand where human agency is heading in human-AI collaboration, because I believe these contexts will define how we actually come to rely on AI. And will our jobs as designers, product people, or coders become obsolete?

In my research, one concept kept coming up: the black box problem. When AI makes recommendations in surgery or criminal justice, we need to understand why. That's where solutions like Explainable AI (XAI) and Human-in-the-Loop (HITL) become critical. But all of these solutions depend on training data that reflects real human behavior under pressure. And that's exactly what we don't have enough of. The constraints on AI progress aren't just technical anymore. What happens next depends not only on improving the models, but also on what we design around them.
Data is all you need
Let's take a few steps back. In 2017, the Transformer architecture, introduced in the famously titled paper "Attention Is All You Need," changed everything in machine learning. It made training on massive datasets actually useful, and it's how the first GPTs were trained. Then in November 2022, ChatGPT launched, based on GPT-3.5 and fine-tuned with a technique called Reinforcement Learning from Human Feedback (RLHF). This version was shaped by human expectations of usefulness and tone. That breakthrough opened the floodgates, and we know the rest.
But here's the thing: that success depended on a deep well of high-quality training data. And that well is drying up. Estimates suggest that by 2026, most of the high-quality textual data that's publicly available will have been consumed. What remains is noise—social media posts, forum replies, petabytes of fragmented content. Training on that could do more harm than good.
So where do we go next? One proposed solution is synthetic data—having machines generate their own training examples. Labs have been trying this, and it sounds efficient, but models learning from their own outputs spiral into self-reinforcing loops, and diversity suffers. Another approach is exclusive deals with publishers and content owners. This secures "clean" data, but introduces centralization: instead of training on the open knowledge of the internet, models will reflect the biases of a few powerful content sources—which raises political problems of its own.
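The self-reinforcing loop is easy to demonstrate with a toy simulation (a deliberately simplified sketch, not any lab's actual training pipeline): fit a distribution to some data, sample from the fit, refit on those samples, and repeat. Because each generation trains only on the previous generation's outputs, statistical noise compounds, and the diversity of the data collapses:

```python
import random
import statistics

def fit_and_resample(data, n):
    """Fit a Gaussian to the data, then 'train' the next
    generation purely on samples drawn from that fit."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
# Generation 0: "real" data with healthy diversity (std dev ~1)
data = [random.gauss(0, 1) for _ in range(10)]
start_diversity = statistics.stdev(data)

# Each generation trains only on the previous generation's outputs
for _ in range(300):
    data = fit_and_resample(data, 10)

end_diversity = statistics.stdev(data)
# Diversity shrinks: the spread of the data collapses toward zero
```

Real model-collapse dynamics are far messier than a one-dimensional Gaussian, but the mechanism is the same: each round of training on synthetic outputs loses a little of the tails, and the losses compound.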
Meanwhile, since that first fine-tuned release, something interesting has happened in the product space. While LLMs offered a simple chat interface for the masses, developers were busy using APIs to build utilities and tools. Mocked early on in startup circles, these are now what we call "wrappers." But they did something very useful: the surviving ones clarified what people actually needed from AI and helped crystallize the market.
Thanks to LLMs' emergent capabilities—abilities in categories their creators never explicitly programmed for—they proved remarkably useful for coding, translation, and summarization. Products like Cursor redefined how developers collaborate with AI. Lovable made coding accessible in plain English (so-called "vibe coding"). Responding to this shift, AI companies started building product lines: OpenAI introduced Codex, a specialized coding sub-product, and Anthropic now offers differentiated Claude versions for writing, coding, and so on. AI is now a product design challenge.
Enter the Unlikely Power Couple

Is Ive photoshopped there?
Which brings us to the most ambitious tech pairing in recent memory: Sam Altman and Jony Ive. If you missed the news, Ive is the industrial design guru behind hundreds of Apple products, who founded his own company after leaving Apple in 2019—and Altman just bought it for $6.5 billion. They launched a YouTube video about their partnership in which the two men can't stop praising each other for nine minutes straight (and, for some reason, San Francisco). They also said they will launch "something" within a year. Which brings us back to the expected omnipresent AI hardware idea. Sure. After all, Ive's legacy is industrial design, and Altman needs a form factor for next-generation interaction. But this has been tried before—and failed—so it's worth asking what they'll actually bring to the table.
Humane—a company that comically mimicked Apple, from design to presentation style—promised a device-first AI vision: screenless (well, there was a low-res projection that didn't work well), voice-driven, ambient AI. It failed. The experience was awkward and impractical. Instead of highlighting AI's capabilities, it accentuated its worst features.
Rabbit made similar promises—a compact AI device claiming to replace your phone through voice interaction. Rabbit, too, was the product of a partnership with a design firm, Teenage Engineering. Its founder boldly declared "apps are dead"—yuck, who needs phones anymore? But once people tried it, they asked the obvious question: "Why not just make this an app?" Because in many ways, a mobile app with voice interaction is more flexible. You can speak to AI while driving, but switch to text on a train or in a meeting. That's how people actually live, and any AI device that ignores this multi-modal reality is already at odds with human behavior. If Rabbit had oversold less and stayed humble—"hey, we built something fun"—instead of going to war with phones, which are enormously useful in every aspect of life, it might have worked.
Unfortunately, I'm getting the same vibes from Sam. In the video he depicts the current ChatGPT interaction as a multi-step hassle: "When I need to ask something, I need to take out my laptop, connect to the internet, open a browser, start typing to ChatGPT, and then I get a response." To me, this resembles how Rabbit's CEO characterized app use—because a user can just as easily ask the ChatGPT app on their phone using voice alone.

Extract from the Altman-Ive video
So, I don't think this collaboration is just about creating "a better AI experience." I believe it's about creating a better AI data stream. As we reach the end of internet-as-training-data, OpenAI needs new datasets—and not just text. If you want to push models toward something like artificial general intelligence, you need something richer: real-world, high-resolution, always-on human context. And to get that, you need a device. Not just any device—one people carry, speak to, rely on, and let into their lives. That's the bet I think Altman is making. The future of AI depends on a new source of data. And that data won't come from the web. It'll come from you.
This device, if they pull it off, won't just be a sleek assistant. It'll be a real-time, ambient learning machine. It'll listen to how we speak, hesitate, argue, joke, trust, or withdraw. It will generate a never-before-seen dataset—continuous, emotional, unpredictable. The kind that could finally address the black box problem haunting AI in high-stakes environments. Right now, LLMs don't learn from your conversations: ChatGPT doesn't improve its model based on what you say or how you say it. But future systems might. That leap demands completely different infrastructure—hardware, UX, fine-tuning, and model training tightly integrated into a loop that observes, interprets, adapts, and evolves.
You might ask, "Hey, phones already capture tons of data—why not use that?" Because OpenAI and other AI companies don't own it. Tesla and Waymo, for example, sit on their own enormous proprietary datasets. If this device works, it may grow faster than anything we've seen. A million users feeding it every hour of every day? That's an engine. We've seen this before, in quieter forms. We spent two decades giving Google, Facebook, and Amazon our attention, preferences, location, and social graphs; we got search, connection, and convenience in return. This new wave—where we hand over conversation, tone, gesture, emotion—might feel like the next logical step. But this time, the intimacy is deeper, the data is rawer, and the feedback loop is tighter.
So the question isn't just: What will they build? It's: What will we become around it?
Tiny Challenge
Next time you use ChatGPT or Claude, pay attention to what you're actually asking it to do. Are you using it as a search engine, a writing assistant, or something else entirely? Notice the gap between what AI companies promise and what you actually need.
Bright Minds
The Multitouch Team
Before the iPhone, a secretive team at Apple led by engineer Wayne Westerman was developing multitouch gestures in a hidden lab. They spent years perfecting pinch-to-zoom, rotation, and other touch interactions without Jobs knowing the full scope. When they finally demonstrated the technology, it fundamentally changed Apple's direction and became the foundation for the iPhone's revolutionary interface. Sometimes the most transformative breakthroughs happen in the shadows, developed by teams working on what seems like impossible problems.
Time Capsule
The original iPhone launched on June 29, 2007, without an App Store. Steve Jobs initially wanted all third-party software to run through Safari as web apps. The App Store didn't arrive until July 2008—and it fundamentally changed how we think about mobile computing. Sometimes the most transformative products succeed precisely because they evolve beyond their creators' original vision.
Peace,
Aydıncan.