Key Takeaways
- CKE Restaurants ran three AI voice vendors, Presto Automation, OpenCity (whose assistant is named Tori), and Valyant AI, simultaneously at live Carl's Jr. and Hardee's drive-thrus rather than running a conventional single-vendor pilot.
- The three vendors represent meaningfully different approaches to the same problem: a remote human-in-the-loop hybrid, an in-store transparent intervention model, and a pure-play restaurant voice AI specialist.
- The reported results are specific enough to be meaningful: an 88% upsell offer rate and a 46% upsell acceptance rate, plus gains in order accuracy, labor efficiency, and employee satisfaction.
- The multi-vendor strategy has direct lessons for other operators evaluating drive-thru AI, particularly mid-sized and large chains that have not yet committed.
- CKE has not publicly announced a winner from its three-vendor evaluation or released a timeline for systemwide deployment.
Most chains shopping for AI drive-thru technology do what most enterprises do when evaluating software: pick a vendor, run a pilot at three to five locations, collect a few weeks of data, and make a call. CKE Restaurants went a different direction.
The parent company of Carl's Jr. and Hardee's ran three separate AI voice systems at its drive-thrus at the same time: Presto Automation, OpenCity (whose AI assistant is named Tori), and Valyant AI. Each vendor got real locations, real guests, and real transactions. The approach is unusual enough in the QSR industry that it warrants a close look, both for what CKE learned and for what it tells the broader industry about where AI ordering actually stands in 2026.
Why Test Three at Once
The standard logic for a single-vendor pilot is straightforward: it limits operational complexity and gives the vendor a clean environment to succeed or fail. CKE's multi-vendor strategy reflects a different set of concerns.
The drive-thru AI market is still genuinely unsettled. McDonald's publicly terminated its partnership with IBM's AI ordering system in 2024 after years of testing, citing accuracy problems and customer frustration. Yum Brands went the opposite direction, committing to one vendor, Dragontail Systems for back-of-house logistics and a proprietary AI voice platform for Taco Bell, ultimately scaling voice AI to over 300 Taco Bell locations by 2025. Wendy's ran a high-profile pilot with Google Cloud's voice AI. None of these outcomes established a clear market leader.
Against that backdrop, a chain with CKE's footprint, roughly 2,800 Hardee's and Carl's Jr. locations in the United States and about 1,000 more internationally, faces real strategic exposure if it makes the wrong bet. Testing three vendors simultaneously compresses the evaluation timeline and produces direct apples-to-apples data that sequential pilots cannot match.
The approach also acknowledges something vendors rarely advertise: AI voice ordering performance is highly sensitive to menu complexity, regional accent variation, ambient noise conditions, and drive-thru lane configuration. A system that performs well at a Taco Bell in suburban Phoenix may produce very different results at a Hardee's in rural North Carolina. CKE's portfolio skews toward the South and Midwest, where Hardee's is the dominant brand, and toward drive-thru-heavy freestanding locations where order accuracy and speed of service are the primary operational pressure points.
The Three Vendors and What They Brought
The vendors CKE selected represent meaningfully different approaches to the same problem.
Presto Automation entered the QSR AI market as a table-side technology company before pivoting hard toward drive-thru voice AI. Presto's system is built on a hybrid model: AI handles the bulk of the order, but a human remote worker can take over at any moment, often monitoring multiple lanes from a centralized location. The company has been public about its partnership with ElevenLabs, the voice synthesis platform, to make its AI voices sound more natural. That partnership matters because customer acceptance of AI ordering correlates directly with voice quality. Flat, robotic-sounding systems generate complaints and cart abandonment; conversational, natural-sounding voices reduce friction. Presto's bet is that closing the voice quality gap with human agents is as important as pure accuracy.
OpenCity's Tori operates on a similar hybrid logic but with a different staffing model. Tori handles orders independently, but human workers at the restaurant can hear every interaction in real time and intervene if the AI makes a mistake or encounters a complex customization request. The design is intentionally transparent to the employee rather than positioning the AI as a black box running in the background. That transparency has implications for training, for employee morale, and for the volume of interventions over time. A system employees can hear and correct creates a continuous feedback loop that improves accuracy; it also keeps staff engaged rather than sidelined.
Valyant AI focuses specifically on restaurant voice AI and has positioned itself as a pure-play specialist in the category. The company's pitch is that single-purpose AI trained exclusively on restaurant ordering outperforms general-purpose voice systems adapted to the use case. That claim is testable, and CKE's multi-vendor setup was effectively a test of it.
What the Numbers Showed
The reported results from CKE's tests are specific enough to be meaningful. Across the AI-assisted drive-thru interactions, the systems delivered an 88% upsell offer rate, meaning the AI presented an add-on suggestion on 88% of qualifying transactions. The upsell acceptance rate, meaning the percentage of guests who actually took the suggested item, came in at 46%.
For context: industry benchmarks for human upselling at drive-thrus vary considerably by chain and daypart, but consistent 80%+ offer rates are difficult to achieve with human cashiers handling high-volume lanes during peak periods. The AI does not skip upsells because it is busy, distracted, or having a bad shift. It executes consistently, transaction after transaction.
A 46% acceptance rate is a real business number. At a chain the size of CKE, with tens of millions of drive-thru transactions annually, an incremental upsell acceptance rate that high on a reliable offer rate compounds quickly into measurable same-store sales lift. The math is not complicated. If the average accepted upsell adds $1.50 to the ticket and the system generates, say, 200 accepted upsells per location per day at a mid-volume Hardee's, that is $300 per day per location, or roughly $109,500 annually per restaurant.
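The back-of-envelope math above can be sketched out explicitly. The offer and acceptance rates below come from CKE's reported results; the daily transaction count and average upsell value are illustrative assumptions consistent with the article's example, not reported figures.

```python
# Back-of-envelope model of per-store upsell lift.
# offer_rate and acceptance_rate are the article's reported figures;
# the other inputs are illustrative assumptions.

offer_rate = 0.88        # AI presents an upsell on 88% of qualifying orders
acceptance_rate = 0.46   # 46% of offered guests accept the add-on

daily_transactions = 500  # assumed mid-volume drive-thru order count
avg_upsell_value = 1.50   # assumed added ticket value per accepted upsell

accepted_per_day = daily_transactions * offer_rate * acceptance_rate
daily_lift = accepted_per_day * avg_upsell_value
annual_lift = daily_lift * 365

print(f"Accepted upsells per day: {accepted_per_day:.0f}")
print(f"Daily lift per store: ${daily_lift:.2f}")
print(f"Annual lift per store: ${annual_lift:,.0f}")
```

Under these assumptions the model lands on roughly 200 accepted upsells and about $300 of incremental revenue per store per day, around $110,000 annually, matching the article's ballpark figure.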
Beyond upselling, CKE reported improvements in order accuracy and gains in labor efficiency. The labor efficiency piece is the more structurally significant finding. Drive-thru AI does not eliminate headcount in the short term, but it changes where labor is allocated. When the AI handles the lane, existing employees shift toward food preparation, order assembly, and guest service at the window. The operational argument for AI voice ordering is not replacement but reallocation, and CKE's results appear to support that framing.
Employee satisfaction reportedly improved as well, which may seem counterintuitive for a technology deployment often framed in the press as job-threatening. The more likely explanation is that drive-thru order-taking, particularly during peak rushes, is one of the more cognitively demanding and frustration-prone roles in a QSR. Removing that pressure while keeping employees engaged through the transparent intervention model creates a less stressful work environment for the humans who remain.
What CKE's Approach Teaches the Industry
The multi-vendor strategy has direct lessons for other operators evaluating drive-thru AI, particularly mid-sized and large chains that have not yet committed.
The first lesson is that vendor claims do not survive contact with your specific operation. Every AI voice company has a demo environment, a controlled pilot case study, and a deck of impressive statistics. None of that tells you how the system performs with your menu, your crew, your regional customer base, and your drive-thru lane geometry. The only way to know is to test, and testing against multiple systems simultaneously gives you a benchmark that single-vendor pilots cannot provide.
The second lesson is that voice quality is no longer a differentiator; it is table stakes. The ElevenLabs partnership Presto announced signals where the market is heading: AI voices that are indistinguishable from human agents in normal drive-thru audio conditions. OpenCity's Tori is similarly built around natural conversation rather than transactional command-and-response. Operators evaluating systems in 2026 should treat any vendor that cannot demonstrate natural voice quality as disqualified before the pilot begins. Customer tolerance for robotic-sounding AI ordering is low and declining.
The third lesson is about the human-in-the-loop design. Both Presto and OpenCity build in human intervention capability, and the evidence suggests that is the right architecture for this stage of the technology. McDonald's IBM experience illustrates what happens when the AI operates without adequate fallback: edge cases and complex orders produce errors, complaints accumulate, and the system gets pulled. The hybrid model, AI handling the routine and humans catching the exceptions, is currently more reliable than full automation and more palatable to operators managing guest experience risk.
The fourth lesson, harder to quantify from CKE's data but worth naming, is about franchise system dynamics. CKE operates as a heavily franchised system. Any technology mandate that generates labor concerns, capital costs, or operational disruption gets filtered through the lens of franchisee relations. The AI systems CKE tested are pitched as labor complements, not labor reductions. That framing matters enormously for franchisee adoption, and chains that position their AI rollouts as replacement technology will face stiffer resistance from their franchise systems than those that lead with the labor efficiency and accuracy story.
Where CKE Goes from Here
CKE has not publicly announced a winner from its three-vendor evaluation or released a timeline for systemwide deployment. Given the size of the organization, a phased rollout across the domestic Hardee's and Carl's Jr. footprint would likely take 18 to 24 months from vendor selection to meaningful scale.
The broader question is whether CKE will consolidate on one platform or operate a multi-vendor environment long-term. Most technology analysts would argue for consolidation: a single platform simplifies training, support, data analytics, and vendor negotiation. But CKE's willingness to run three systems simultaneously suggests an organization that is comfortable with complexity in exchange for optionality. Some regional variation by brand, with Hardee's and Carl's Jr. potentially optimized differently given their distinct customer bases and geographic distributions, is plausible.
What is clear is that the industry is past the "whether" question on drive-thru AI. McDonald's abandoned one implementation and is already running new AI initiatives across its 27,000-location drive-thru network. Taco Bell is at 300-plus AI-assisted locations and growing. Wendy's is testing. Jack in the Box, Checkers, and Rally's have all explored voice AI deployments. The competitive pressure is real.
CKE's experiment produced concrete data on upsell performance, accuracy, and employee impact. More importantly, it produced that data across three different technology approaches, giving the company a foundation for vendor selection that most chains in its peer group cannot match. In a market where the wrong AI partnership can cost millions in deployment expense and damage guest satisfaction, that information advantage is worth the operational complexity of running three systems at once.
For operators still on the sidelines, CKE's methodology is as instructive as its results: test aggressively, test comparatively, and do not let any single vendor control your data.
QSR Pro Staff
The QSR Pro editorial team covers the quick service restaurant industry with in-depth analysis, data-driven reporting, and operator-first perspective.