Most AI tools are designed for people who already know what they want. You open a chat interface, type a precise query, and receive a precise answer. That model works well for developers, researchers, and power users. It works poorly for everyone else.
That gap — between what AI can theoretically do and what the average person can actually ask for — was the original problem we wanted to solve. Not through better models or more parameters, but through better physical interface design.
The Keyboard Vision
The core idea was deceptively simple: what if AI capability were accessible through a dedicated hardware key, a physical button that, when pressed, activated an intelligent assistant with as little friction as possible?
We were drawn to this because touch and physical interaction have a fundamentally different relationship with human attention than screen-based interfaces do. You don't need to unlock a device. You don't need to open an app. You press a key.
The scenario that kept coming back to us: a person trying to find a video they half-remember. Not a precise title. Not even a clear plot summary. Something like: "that anime where the main character has a robot arm and there's a scene in the rain, I watched it maybe three years ago." A normal search engine fails this completely. A general chatbot might guess, but without access to your personal viewing history, it's shooting in the dark.
With a dedicated AI key and local access to your viewing history, browsing logs, and preference patterns, this becomes a solvable problem. Press the key. Say the description. The system cross-references your personal context and returns the most likely match — even with a faulty memory as input.
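As a rough illustration of what that cross-referencing could look like, here is a minimal TypeScript sketch that scores a half-remembered description against locally stored viewing records by keyword overlap, with a small penalty when the remembered timeframe doesn't line up. The types, fields, and weights are hypothetical stand-ins, not the actual system; a real implementation would likely use semantic embeddings rather than raw token overlap.

```typescript
// Hypothetical sketch: all types, fields, and weights are illustrative.

interface ViewingRecord {
  title: string;
  synopsis: string;     // locally stored description or tags
  lastWatched: Date;
}

// Split text into lowercase word tokens, dropping very short ones.
function tokenize(text: string): Set<string> {
  return new Set(
    text
      .toLowerCase()
      .split(/[^a-z0-9]+/)
      .filter((word) => word.length > 2)
  );
}

// Score a record by keyword overlap with the half-remembered description,
// lightly penalizing records whose age doesn't match the remembered timeframe.
function scoreRecord(
  query: string,
  record: ViewingRecord,
  rememberedYearsAgo?: number
): number {
  const queryTokens = tokenize(query);
  const recordTokens = tokenize(`${record.title} ${record.synopsis}`);
  let overlap = 0;
  for (const token of queryTokens) {
    if (recordTokens.has(token)) overlap += 1;
  }
  let score = overlap / Math.max(queryTokens.size, 1);
  if (rememberedYearsAgo !== undefined) {
    const yearsAgo =
      (Date.now() - record.lastWatched.getTime()) / (365 * 24 * 60 * 60 * 1000);
    score -= Math.abs(yearsAgo - rememberedYearsAgo) * 0.05;
  }
  return score;
}

// Return the few most likely matches from the local history.
function findLikelyMatches(
  query: string,
  history: ViewingRecord[],
  rememberedYearsAgo?: number,
  limit = 3
): ViewingRecord[] {
  return [...history]
    .sort(
      (a, b) =>
        scoreRecord(query, b, rememberedYearsAgo) -
        scoreRecord(query, a, rememberedYearsAgo)
    )
    .slice(0, limit);
}
```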
That's not just a parlor trick. That's a genuinely useful capability that millions of people would use every day if the interface made it frictionless enough. The value was real. The UX principle was sound: the best AI is invisible, and the most powerful interaction is the one that requires the least deliberate effort.
Why Simple Interfaces Matter More Than Smart Models
There's a pattern in consumer technology that repeats across every generation of new capability: the technology arrives before the interface does. Early smartphones required styluses and menus within menus. The early internet required command-line familiarity. The capability existed; the accessible interface didn't yet.
AI is living through the same phase right now. Large language models are extraordinarily capable. But the dominant interface — a text box in a browser tab — selects heavily for users who are comfortable composing explicit, structured queries. That's a meaningful but bounded population.
The mass-market experience of AI is still largely confined to chatbots that respond to whatever you type, which means the quality of the interaction scales directly with the user's ability to articulate what they want. The gap between a skilled prompter and an inexperienced one is enormous. That gap is an interface problem, not a model problem.
A dedicated physical key, combined with local personal context, narrows that gap dramatically. The system can do much more inference on your behalf because it has richer context than a blank chat window provides. The user doesn't need to compose a query; they just need to gesture toward the intent.
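To make that concrete, here is a small, hedged sketch of the enrichment step: the user's bare utterance is combined with locally held signals into a request a model can actually act on. The signal names are illustrative assumptions, not a real API.

```typescript
// Hypothetical sketch of the enrichment step; field names are assumptions,
// not a real API.

interface LocalSignals {
  activeApp: string;       // what the user is doing right now
  recentFiles: string[];   // what they have touched recently
  timeOfDay: string;       // coarse temporal context
}

// Turn a bare gesture at intent into a request a model can act on,
// so the user never has to compose the context themselves.
function buildRequest(utterance: string, signals: LocalSignals): string {
  return [
    `User said: "${utterance}"`,
    `Active app: ${signals.activeApp}`,
    `Recently opened: ${signals.recentFiles.join(", ")}`,
    `Time of day: ${signals.timeOfDay}`,
    "Infer the most likely intent and respond accordingly.",
  ].join("\n");
}

console.log(
  buildRequest("send that to Maria", {
    activeApp: "Photos",
    recentFiles: ["beach-trip.jpg", "expenses.xlsx"],
    timeOfDay: "evening",
  })
);
```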
The Turning Point: AI Is Moving Too Fast for Hardware
The keyboard vision was compelling, and we still believe the underlying principle is correct. But as we moved deeper into the problem, the landscape around us was shifting in ways that made the hardware path increasingly precarious.
AI capability was not improving incrementally — it was improving discontinuously. Capabilities that were unavailable one quarter were baseline the next. Operating system vendors were integrating AI directly at the platform level. Major hardware manufacturers were embedding AI accelerators into devices that most consumers already owned.
Building a hardware product in that environment carries a specific and serious risk: you might spend 18 months on a device that ships into a market where the operating system already does what your device does. Hardware has lead times. Software doesn't. Vertical integration, making both the intelligence and the device, increasingly looked like a trap.
The question we had to answer honestly: are we building toward a sustainable, defensible position, or are we building something that the major platforms will absorb before we reach scale? The honest answer pointed toward a pivot.
The Decision: Go Wide, Go Software
The pivot was not about abandoning the original insight. We still believe that accessible interfaces for non-expert users are underbuilt, and that the mass market needs AI experiences with less friction than a blank chat window. That conviction is unchanged.
What changed was the delivery vehicle. Instead of a hardware device that embeds intelligence in a physical form factor, we shifted toward software — specifically toward building the workflow layer that would eventually make AI useful in any interface, whether that's a browser, a desktop application, a keyboard shortcut, or a hardware integration we haven't built yet.
Software lets you iterate in days instead of months. It reaches users globally without a supply chain. It can be validated against real usage before committing to a form factor. And crucially: if the workflow logic is proven and portable, you can bring it to any physical interface later — not vertically (our hardware + our AI), but horizontally (our AI + any compatible hardware).
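The sketch below illustrates that horizontal framing in TypeScript: the workflow layer is defined once, and each surface (a browser extension, a desktop shortcut, a future hardware key) plugs in through a thin adapter. The names and shapes here are illustrative assumptions, not Convilyn's actual interfaces.

```typescript
// Illustrative sketch of the horizontal framing; these interfaces are
// assumptions for the example, not Convilyn's real API.

// The workflow layer: interface-agnostic intent handling, defined once.
interface Workflow {
  handle(intent: string): Promise<string>;
}

// Any surface that can capture an intent and present a result.
interface Surface {
  captureIntent(): Promise<string>;
  present(result: string): Promise<void>;
}

// The same workflow runs no matter which surface invoked it.
async function run(surface: Surface, workflow: Workflow): Promise<void> {
  const intent = await surface.captureIntent();
  const result = await workflow.handle(intent);
  await surface.present(result);
}

// A placeholder workflow; a real one would plan and execute multi-step tasks.
const echoWorkflow: Workflow = {
  handle: async (intent) => `handled: ${intent}`,
};

// Two adapters that differ only in how intent arrives and where results go.
const browserSurface: Surface = {
  captureIntent: async () => "summarize this page",
  present: async (result) => console.log(`[popup] ${result}`),
};

const hardwareKeySurface: Surface = {
  captureIntent: async () => "find the video with the robot arm",
  present: async (result) => console.log(`[overlay] ${result}`),
};

void run(browserSurface, echoWorkflow);
void run(hardwareKeySurface, echoWorkflow);
```

The point of the structure is that nothing in `Workflow` knows which surface called it, which is what would let the same proven logic move to a hardware integration later without a rewrite.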
That reframing — from vertical to horizontal — is what led to Convilyn. Not as a replacement for the original vision, but as the foundation it would eventually need.
