Fostering breakthrough AI innovation through customer-back engineering
A growing body of evidence shows that enterprise AI innovation falters when focused solely on algorithms and infrastructure. This article explains how customer-back engineering—starting with user needs and working backward to the technology—offers a more reliable path to breakthrough innovation.
The Customer-Back Revolution: Why the Next Wave of AI Innovation Starts with the User, Not the Algorithm
For years, the prevailing wisdom in enterprise AI has been a kind of technological determinism: build the most powerful model, stack the most impressive infrastructure, and the applications will follow. This logic has fueled a trillion-dollar arms race in GPUs, data centers, and foundation models. But a growing body of evidence suggests this approach is fundamentally broken. According to McKinsey research cited in a recent analysis, organizations capture less than one-third of the value expected from their digital investments [1]. That staggering 70 percent value gap isn't a failure of technology—it's a failure of orientation. The most ambitious AI projects collapse under the weight of their own complexity, not because the models aren't smart enough, but because they were built in a vacuum, disconnected from the messy, specific, and deeply human problems they were supposed to solve.
The antidote, emerging from a confluence of engineering breakthroughs and hard-won industry lessons, is a philosophy called "customer-back engineering." It's a deceptively simple inversion of the standard playbook: start with the customer's unmet need, and work backward to the technology, rather than starting with a shiny new capability and bolting it onto an existing process. This isn't just a feel-good mantra about user experience. As we'll see from recent developments at NASA's Jet Propulsion Laboratory, the reinvention of mundane hardware like device chargers, and cautionary tales from the banking sector, the customer-back approach is proving to be the single most reliable engine for genuine, breakthrough innovation in AI and beyond.
The 70% Value Gap: Why Technology-First Thinking Fails
The core thesis of the customer-back movement rests on a brutal piece of data. The McKinsey research, which found that organizations capture less than one-third of the expected value from digital investments, points to a systemic problem in how enterprises approach innovation [1]. The typical pattern is all too familiar: a company invests heavily in an advanced AI platform, perhaps a large language model or a sophisticated computer vision system. Then, the internal teams scramble to find "use cases" where this technology can be applied. This technology-first approach creates what analysts call "fragmented solutions"—disparate tools that don't integrate with existing workflows, that solve problems the customer doesn't actually have, or that create more friction than they eliminate [1].
The result is a graveyard of expensive, underutilized AI deployments. A bank might deploy a state-of-the-art fraud detection model that generates so many false positives it overwhelms its compliance team. A retailer might roll out a generative AI chatbot that can write poetry but can't process a simple return. These failures aren't technical; they are strategic. They stem from a fundamental misalignment between the capabilities of the technology and the actual, lived experience of the end-user. Customer-back engineering directly confronts this misalignment by forcing organizations to articulate the customer's job-to-be-done before a single line of code is written. It demands that engineers and product managers develop a deep, almost anthropological understanding of the user's pain points, constraints, and desired outcomes. Only then do they ask: "What technology, if it existed, would solve this perfectly?"
This is a radically different discipline. It requires humility from technologists accustomed to leading with the power of their models. It demands that the "customer" is not an abstract persona in a slide deck, but a real entity whose feedback loops directly into the engineering roadmap. When done correctly, it prevents the creation of those fragmented solutions [1]. Instead of bolting AI onto a broken process, you engineer a new process that is inherently intelligent because it was designed from the ground up to serve a specific human need.
From Mars to the Desktop: The Customer-Back Mindset in Action
Perhaps the most compelling evidence for this approach comes not from a Silicon Valley software company, but from the vacuum of space. Engineers at NASA's Jet Propulsion Laboratory (JPL) are currently designing the next generation of Martian rotorcraft, building on the legacy of the Ingenuity helicopter [2]. Ingenuity was a resounding success, becoming the first airborne platform to explore another world [2]. But its mission was, by necessity, a technology-first experiment. The goal was to prove that powered flight was even possible in Mars' low-density atmosphere. Now, the engineers at JPL are shifting their mindset. They are asking a fundamentally different question: "What does a scientist on the ground actually need to accomplish?"
The answer is the ability to carry heavier payloads over longer distances [2]. The customer—in this case, the planetary scientist—doesn't need a more elegant rotor design for its own sake. They need to reach a specific geological formation on the other side of a crater, and they need to carry a spectrometer to analyze the rock samples. The customer-back requirement is "heavy-lift, long-range aerial reconnaissance." This single, clearly defined need now drives the engineering breakthroughs in rotor technology at JPL [2]. The team isn't starting with a new material or a more efficient motor and then looking for a problem to solve. They start with the mission objective and work backward to the rotor design, the power system, and the navigation software. This is the purest form of customer-back engineering, and it is producing tangible, breakthrough results in one of the most challenging environments imaginable.
This same principle is quietly revolutionizing far more terrestrial technologies. Consider the humble device charger. For a decade, chargers were a bulky mess of tangled cables, slow to charge and prone to overheating [3]. The industry response was often technology-first: "We have a new gallium nitride (GaN) semiconductor process; let's put it in a charger and see what happens." But the real breakthrough came when engineers started with the customer's unspoken need: "I want my device to be fully charged by the time I leave for work, and I don't want a brick in my bag." By starting with that need, the industry was forced to innovate across multiple vectors simultaneously. The switch to GaN was a critical enabler, but only one piece of the puzzle [3]. The real innovation was in the system-level design—the thermal management, the power delivery protocols, the compact form factor—all optimized to deliver a specific customer outcome. The result is that chargers are now smaller, safer, and faster [3]. The technology didn't lead; the customer need led, and the technology was engineered to serve it. This is the difference between incremental improvement and genuine reinvention.
The Hidden Risk: When Customer-Back Becomes a Security Blind Spot
For all its virtues, the customer-back approach is not a panacea. In fact, when applied naively, it can create profound new risks—especially in the age of AI. The recent disclosure by Community Bank, which operates in Pennsylvania, Ohio, and West Virginia, serves as a stark warning [4]. The bank reported a cybersecurity incident that exposed customers' names, dates of birth, and Social Security numbers [4]. The root cause? The bank shared customer data with an AI application [4].
This is a textbook case of customer-back engineering gone wrong. The bank likely started with a legitimate customer need: "I want faster loan approvals" or "I want a more intuitive mobile banking experience." To meet that need, the bank's engineering team worked backward and integrated a third-party AI application to process customer data. The problem is that they failed to adequately model the security needs of the customer. The customer's job-to-be-done is not just "get a loan fast." It is "get a loan fast without having my identity stolen." The bank's engineering team optimized for speed and convenience (the explicit need) but neglected safety and privacy (the implicit, and arguably more important, need).
This incident highlights a critical nuance in the customer-back philosophy. It is not enough to simply ask the customer what they want. Customers are notoriously bad at articulating their deep-seated needs, especially around security and trust. A truly sophisticated customer-back engineering process must include a rigorous "second loop" that models the customer's constraints and risk tolerances. It must ask: "What would make this solution unacceptable to the customer?" For a bank, the answer is almost always "a data breach." By failing to engineer for that constraint, Community Bank created a fragmented solution that ultimately destroyed the very value it was trying to create [1][4]. The technology worked; the customer-back framing was incomplete.
This is the hidden risk that the mainstream media is missing in the rush to embrace customer-centric AI. The approach can lead to a dangerous form of hyper-optimization, where teams become so focused on solving the immediate, stated need that they ignore systemic risks. The solution is not to abandon the customer-back model, but to deepen it. Security, privacy, and ethical considerations cannot be afterthoughts or "bolt-on" features. They must be treated as first-order customer requirements, baked into the engineering process from the very first design sprint.
The Macro Shift: Winners, Losers, and the New Discipline of Deep Listening
The implications of this shift are profound for the entire AI ecosystem. The winners in the coming decade will not necessarily be the companies with the largest models or the most data. The winners will be the companies that can most effectively operationalize the customer-back discipline. This means investing in new roles and processes: "customer engineers" who sit between the product team and the ML team, ethnographic research budgets that rival R&D budgets, and feedback loops measured in hours, not quarters.
The losers will be the technology-first incumbents who continue to build platforms in search of a problem. We are already seeing this play out in the enterprise software market. The monolithic AI suites that promise to solve everything are being rejected in favor of smaller, more targeted tools that solve one thing exceptionally well. The companies that understand this are already pivoting. They are moving away from selling "AI" and toward selling "outcomes"—faster drug discovery, more efficient supply chains, more personalized learning. The technology is the engine, but the customer need is the steering wheel.
For developers and engineers, this represents a fundamental shift in craft. The most valuable skill is no longer the ability to fine-tune a transformer model, but the ability to listen. To observe a user struggling with a workflow and to ask the right questions. To resist the temptation to reach for a neural network when a simple rules-based system would suffice. This is the discipline of "customer-back engineering" as articulated in the MIT Technology Review analysis [1]. It is harder than building a model. It requires more patience and more empathy. But it is the only path to capturing that elusive two-thirds of value that currently slips through the fingers of most organizations.
The evidence is mounting from every direction. From the JPL engineers designing a rotorcraft to carry a scientist's payload across the Martian surface [2], to the hardware designers who reinvented the charger by starting with the user's morning routine [3], the pattern is clear. The most elegant solutions are not the most technologically sophisticated; they are the most precisely targeted. They begin with a deep understanding of a specific human problem and then work backward, with relentless discipline, to the technology that solves it. The AI revolution will not be built by the companies with the most powerful models. It will be built by the companies that best understand their customers. And that is a revolution that starts not in the data center, but in the quiet, difficult work of listening.
References
[1] MIT Technology Review — Fostering breakthrough AI innovation through customer-back engineering — https://www.technologyreview.com/2026/05/11/1136967/fostering-breakthrough-ai-innovation-through-customer-back-engineering/
[2] Ars Technica — Engineers at NASA's Jet Propulsion Lab make a breakthrough in rotor technology — https://arstechnica.com/space/2026/05/engineers-at-nasas-jet-propulsion-lab-make-a-breakthrough-in-rotor-technology/
[3] MIT Technology Review — Innovation abounds in device charging — https://www.technologyreview.com/2026/05/11/1136406/innovation-abounds-in-device-charging/
[4] TechCrunch — US bank discloses security lapse after sharing customer data with AI app — https://techcrunch.com/2026/05/12/us-bank-discloses-security-lapse-after-sharing-customer-data-with-ai-app/