The same AI that hallucinates your meal plan is now guiding Pentagon targeting, and the companies behind your favorite plant-based brand apps see no conflict

The large language models behind your AI meal planner and the systems being evaluated for Pentagon targeting decisions share the same architecture, the same hallucination problem, and often the same parent companies — a dual-use reality the conscious consumer movement has barely begun to reckon with.

The large language model that confidently told you tempeh contains B12 (it doesn't, unless fortified) and invented a citation to prove it is now being evaluated for military targeting decisions. That's not hyperbole — it's the conclusion drawn from a recent report from MIT Technology Review, in which a defense official described how AI chatbot-style systems could be integrated into targeting workflows at the Pentagon.

The overlap between consumer AI and defense AI is closer than most people realize. And for the millions of us who interact with AI-driven meal planners, recipe generators, and wellness apps every day, there's a question worth sitting with: do we understand what these tools actually are, and who builds them?

From Meal Plans to Military Plans

Large language models — the technology behind ChatGPT, Google's Gemini, and the AI assistants embedded in countless consumer apps — have a well-documented hallucination problem. They generate plausible-sounding text that is sometimes flatly wrong. In the food and nutrition space, this manifests as fabricated studies, incorrect micronutrient data, and meal plans that ignore allergies users explicitly listed.

These are the same foundational models, built by the same handful of companies, now being explored for national security applications. According to MIT Technology Review's reporting, defense officials have discussed how chatbot-like AI systems could assist in identifying and evaluating targets — a process where a hallucination carries consequences that are categorically different from a bad smoothie recipe.

The technical architecture doesn't change because the use case does. A transformer model generating text about quinoa bowl macros and a transformer model summarizing intelligence briefings share the same fundamental tendency: they optimize for coherence, not truth. They produce outputs that sound right. The gap between sounding right and being right is where the risk lives — whether you're planning dinner or planning a strike.

The Companies in the Middle

Here's where things get uncomfortable for the conscious consumer. The major cloud computing and AI platforms — Microsoft Azure, Amazon Web Services, Google Cloud — serve as infrastructure for both consumer wellness applications and defense contracts. The plant-based brand app that generates your weekly shopping list likely runs on the same cloud platform that hosts military AI workloads.

This isn't a conspiracy. It's a business model. Cloud computing is designed to be general-purpose. The same GPU clusters that train a model to recommend vegan protein sources can be allocated, minutes later, to a defense department project. The companies involved see no inherent conflict because, from a revenue perspective, there isn't one. A compute cycle is a compute cycle.

For consumers who choose plant-based products partly because of ethical considerations — environmental impact, resource use, a general orientation toward reducing harm — this dual-use reality introduces a layer of complexity that brand marketing rarely acknowledges. The app that helps you reduce your carbon footprint runs on infrastructure that also powers systems designed for kinetic operations.

The Hallucination Problem Isn't Going Away

The core technical limitation at the heart of this story — AI hallucination — remains unsolved. Researchers have made progress on retrieval-augmented generation and fact-checking layers, but no current system eliminates confabulation entirely. These models don't "know" things. They predict the next statistically likely token in a sequence. When that sequence is a recipe, the failure mode is an inedible dish. When that sequence informs a targeting decision, the failure mode is something else entirely.

As MIT Technology Review detailed, the integration of chatbot-style AI into defense workflows raises questions about how much human oversight remains in the loop. Proponents argue these systems assist rather than replace human decision-makers. Critics counter that the speed and volume of AI-generated analysis create pressure to trust outputs without adequate verification — the same dynamic that leads someone to follow a hallucinated nutrition claim without checking the source.

The psychology is remarkably consistent across domains. When a system presents information with confident formatting — clean bullet points, authoritative tone, specific numbers — humans tend to trust it. This is true whether the output is a weekly meal prep guide or an intelligence summary. The packaging of certainty is the product, regardless of whether certainty is warranted.

What This Means for the Conscious Consumer

None of this means you should delete your meal planning app or stop using AI tools for recipe inspiration. The technology is genuinely useful. But there's value in understanding what you're actually interacting with when you use these products.

First, the practical: treat AI-generated nutrition information the way you'd treat advice from a confident friend who didn't go to nutrition school. Verify macros. Cross-check allergen information. Don't trust fabricated citations — and yes, these models do fabricate citations with alarming regularity. If an AI tells you a study from the Journal of Nutrition proves something, look up whether that study exists before reshaping your diet around it.

Second, the structural: the AI industry operates on a dual-use model by default. The companies building the tools embedded in plant-based lifestyle apps are simultaneously competing for defense contracts worth billions. This doesn't make the consumer products malicious. But it does mean that "voting with your dollar" — a principle many conscious consumers hold dear — gets complicated when the supply chain for your ethically minded app purchase feeds into a revenue stream that funds military AI development.

The Transparency Gap

Perhaps the most striking aspect of this dual-use reality is how little it's discussed in consumer-facing contexts. Plant-based brands that build their identity around transparency in ingredient sourcing rarely extend that transparency to their technology stack. Which cloud provider hosts their app? Which AI model generates their recommendations? Does any portion of their licensing fees flow to companies actively pursuing defense AI contracts?

These aren't gotcha questions. They're the kind of supply-chain inquiries that the conscious living movement has normalized for food — where's it grown, who picked it, how was it processed — but hasn't yet applied to technology. The ingredient list on your oat milk is more transparent than the AI pipeline behind the app that recommended you buy it.

This gap matters because consumer AI and military AI are converging, not diverging. As the MIT Technology Review report makes clear, the defense establishment isn't building separate AI from scratch. It's adapting the same commercial models already woven into everyday life.

Complexity Is the Point

There's no clean moral here, no single action item that resolves the tension. The technology that helps someone transition to a more plant-forward diet — reducing their environmental impact in tangible, measurable ways — shares DNA with systems designed for warfare. Both things are true. Sitting with that complexity is more honest than pretending it doesn't exist.

What consumers can do is demand the same supply-chain transparency from their tech tools that they demand from their food. Ask app developers which AI models power their recommendations. Ask whether those models come from companies with active defense contracts. The answers might not change your behavior — but they'll make your choices more informed.

And the next time an AI chatbot confidently tells you that a particular mushroom blend contains complete protein with all nine essential amino acids, citing a study that doesn't exist — remember that the same architecture, running on the same infrastructure, might be generating equally confident outputs about something far more consequential.

Verify everything. Trust the format of nothing.

Adam Kelton

Adam Kelton is a writer and culinary professional with deep experience in luxury food and beverage. He began his career in fine-dining restaurants and boutique hotels, training under seasoned chefs and learning classical European technique, menu development, and service precision. He later managed small kitchen teams, coordinated wine programs, and designed seasonal tasting menus that balanced creativity with consistency.

After more than a decade in hospitality, Adam transitioned into private-chef work and food consulting. His clients have included executives, wellness retreats, and lifestyle brands looking to develop flavor-forward, plant-focused menus. He has also advised on recipe testing, product launches, and brand storytelling for food and beverage startups.

At VegOut, Adam brings this experience to his writing on personal development, entrepreneurship, relationships, and food culture. He connects lessons from the kitchen with principles of growth, discipline, and self-mastery.

Outside of work, Adam enjoys strength training, exploring food scenes around the world, and reading nonfiction about psychology, leadership, and creativity. He believes that excellence in cooking and in life comes from attention to detail, curiosity, and consistent practice.
