Before recommendation engines told us what to want, people developed authentic preferences through trial and error, and the psychology of choice suggests those decisions came with less anxiety and more conviction than our algorithm-optimized ones today.
My grandmother doesn't agonize over what to watch after dinner. She just turns on PBS.
Last week at my parents' place in Sacramento, I watched her flip through three channels max before settling on a documentary about wolves. No scrolling. No algorithm suggesting I might also enjoy nature shows because I once clicked on a bird video. Just simple, decisive action.
Twenty minutes later, I was still paralyzed on Netflix, having read 47 plot descriptions without pressing play on a single one.
Research has shown that anxiety impairs our ability to make decisions, and as someone who grew up with tech, I'm starting to wonder whether we traded something valuable when we outsourced our choices to recommendation engines. The psychology of pre-algorithm decision making reveals something fascinating about confidence, anxiety, and the mental shortcuts that helped people commit to choices without second-guessing themselves into paralysis.
The practice of developing genuine preferences
My partner still loves pepperoni pizza with ranch. When we first moved in together five years ago, I asked how he knew that was his favorite topping combination.
"I tried them all," he said, like it was obvious.
That's the thing about pre-algorithm life. People actually had to figure out what they liked through trial and error. They couldn't just scroll through endless reviews or wait for a recommendation engine to tell them "people who bought X also enjoyed Y."
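For the curious, there's real machinery behind that "people who bought X also enjoyed Y" line, and at its simplest it's just co-occurrence counting over purchase histories, a stripped-down cousin of item-to-item collaborative filtering. Here's a toy sketch in Python; the data and function name are illustrative, not any real service's code:

```python
from collections import Counter

# Toy purchase histories; a real service would have millions of these.
baskets = [
    {"dune", "foundation", "hyperion"},
    {"dune", "hyperion"},
    {"dune", "foundation"},
    {"gone_girl", "big_little_lies"},
]

def also_bought(item, baskets, top_n=3):
    """Rank other items by how often they appear in baskets alongside `item`."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})  # count each co-purchased item once per basket
    return counts.most_common(top_n)

print(also_bought("dune", baskets))
# e.g. [('foundation', 2), ('hyperion', 2)] (ties can print in either order)
```

Notice what's missing: nothing in there knows why anyone liked anything. It only knows what co-occurred, which is exactly the gap between predicted behavior and actual taste that this piece is about.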
Research on over-reliance on automated advice suggests that when people lean too heavily on a system's suggestions, they lose touch with their own preferences. Before algorithms, you'd rent a movie based on the cover art or a friend's recommendation. Sometimes you'd hate it. But each decision, good or bad, helped you understand your actual taste.
This process built something crucial: an internal compass for decision making. You learned to trust your gut because your gut was all you had. Each choice became a data point in your personal experience, teaching you not what an algorithm predicted you'd like, but what you actually enjoyed.
The difference matters more than you might think. Research on algorithmic decision-making and autonomy reveals that when we delegate choices to AI systems, we can lose our sense of agency. We start confusing algorithmic filtering with autonomous choice, eroding genuine decision-making under the pretense of control.
When constraints protected us from paralysis
Walk into a video rental store in 1995 and you had maybe 200 titles to browse. Today, Netflix's catalog runs to over 15,000 titles worldwide.
This isn't progress in the way we assumed it would be.
Barry Schwartz's research on the paradox of choice demonstrates that an explosion of options doesn't make us happier or more confident. It makes us anxious and dissatisfied, constantly wondering if we missed the better option hiding two clicks away.
Pre-algorithm decision makers had natural constraints. Limited shelf space. Three TV channels. Five restaurants in town. These boundaries actually made choosing easier, not harder. The scarcity forced prioritization and created a framework where decisions felt manageable rather than overwhelming.
I see this play out every time I visit my parents in Sacramento. My dad still subscribes to the local newspaper. He reads it with his coffee every morning, absorbs the information available to him, and moves on with his day. Meanwhile, I've got 47 browser tabs open, each one a different news source, never quite sure I've read the right thing or gotten the full picture.
Consumer behavior research confirms that people postpone purchases when they face too many options and commit faster when they face fewer. The constraint wasn't a bug. It was a feature that protected decision-making confidence.
When you had fewer options, you also had fewer opportunities for regret. You couldn't torture yourself wondering if the perfect choice was just one more scroll away, because there was nowhere left to scroll.
The psychological safety of finality
Once you picked a restaurant in the pre-smartphone era, that was it. You were going. No pulling out your phone at the stoplight to check if the place across town had better reviews. No reading dozens of Yelp reviews while your date waits for you to commit.
The decision was final not because you were more decisive, but because reopening the choice required real effort. You'd have to turn the car around. Find a payphone. Look up a new address in the phone book. The friction made commitment the path of least resistance.
Neuroscience research shows that anxiety heightens our cognitive responses to potential threats, making us hypersensitive to the possibility that we've made the wrong choice. When you couldn't easily access alternatives, your brain had permission to move on.
The finality of pre-algorithm decisions actually reduced anxiety. You made your choice, lived with it, and focused your mental energy elsewhere. There was a strange freedom in knowing the decision was done.
Today, we live in a state of perpetual reconsideration. Every choice remains provisional because better information is always theoretically available. We're never fully committed, never fully present, always one search away from discovering we chose wrong.
Social trust versus algorithmic prediction
My friend Sarah and I have wildly different taste in movies. I learned this the hard way in 2018 when she dragged me to a rom-com I hated.
But here's the thing: that experience taught me something valuable about whose recommendations to trust. Pre-algorithm life forced you to calibrate whose judgment aligned with yours through actual social interactions. You learned which friends shared your taste in music, which family members knew good restaurants, which coworkers could recommend books you'd actually finish.
This social learning built a web of trusted sources that was messy, imperfect, and deeply human.
Recent research on recommendation systems and autonomy suggests that people accept algorithmic suggestions more readily when those systems preserve user control, but no amount of control replicates the social learning that happens when a real person vouches for something.
When my friend Marcus recommends a book now, I know he's not a predictive model optimizing for engagement metrics. He's someone who knows what I actually care about, who's seen me respond to things in real time, who understands context that no algorithm could capture.
That personal accountability created a different kind of confidence in the decision. If the recommendation worked out, the relationship was strengthened. If it didn't, you both learned something about each other's preferences. Either way, the social bond deepened.
Algorithms don't have skin in the game the way humans do. They don't care if you waste two hours on a terrible movie. They're just collecting data for the next prediction.
Building decision-making muscles through necessity
Every choice was a rep in the gym of judgment.
You picked a college major without personality quizzes generated by AI. You chose an apartment without virtual tours and algorithm-ranked listings. You selected a career path by talking to actual humans and making educated guesses based on incomplete information.
The necessity of making these choices without algorithmic assistance built competence through practice.
Computational research on decision making demonstrates that practice strengthens our ability to estimate probability and value under conditions of uncertainty. Pre-algorithm decision makers got constant practice because they had no choice.
They developed heuristics, mental shortcuts for navigating decisions efficiently. Not perfect shortcuts, but functional ones. They learned to satisfice rather than optimize, to choose "good enough" instead of holding out for perfection.
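Herbert Simon coined "satisficing" for exactly this rule: take the first option that clears your bar instead of scanning everything for the maximum. A toy sketch of the two strategies, with an invented movie list, ratings, and threshold standing in for a real evening's options:

```python
def optimize(options, score):
    """Maximizer: evaluates every option before committing."""
    return max(options, key=score)

def satisfice(options, score, good_enough):
    """Satisficer: commits to the first option that clears the bar."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # nothing cleared the bar; lower it and look again

# Hypothetical catalog and one viewer's ratings out of 10.
movies = ["wolf documentary", "rom-com", "heist thriller", "prestige drama"]
rating = {"wolf documentary": 7, "rom-com": 5,
          "heist thriller": 8, "prestige drama": 9}.get

print(optimize(movies, rating))      # 'prestige drama', after evaluating all four
print(satisfice(movies, rating, 7))  # 'wolf documentary', the first good-enough pick
```

The satisficer's answer looks worse on paper and plays out better in practice: it pays the search cost of one option instead of four, which is the trade my grandmother makes every night at the television.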
I see this difference most clearly in how my grandmother shops compared to how I shop. She walks into a store, finds something that meets her needs, and buys it. The entire transaction takes maybe 15 minutes.
I spend 45 minutes comparing prices across six websites, reading reviews, checking if there's a better version coming out next month, and second-guessing whether I even need the thing at all. By the time I finally purchase, I've burned through so much decision-making energy that I have nothing left for choices that actually matter.
The muscle memory of making choices, learning from outcomes, and adjusting your approach built genuine confidence that didn't depend on external validation. You trusted yourself because you'd proven you could navigate uncertainty and come out fine on the other side.
Where we spend our decision-making energy
Decision fatigue is real. Research shows that as choices pile up, our ability to make sound decisions deteriorates.
But pre-algorithm decision fatigue came from actual life demands. Choosing between job offers. Deciding whether to move across the country. Determining how to handle a family crisis. These were high-stakes choices that deserved mental energy.
Today, we burn through our decision-making capacity before we even leave the house. Which coffee pods to use. Which playlist for the commute. Which route the GPS should take based on current traffic patterns. Whether to accept the calendar invite or suggest an alternative time.
Algorithms were supposed to reduce this fatigue by handling routine decisions for us. Instead, they multiplied micro-decisions while adding the anxiety of infinite optimization. Every algorithmic suggestion comes with an implicit question: is this recommendation actually optimal, or should I override it?
People who made decisions before recommendation engines could save their mental energy for choices that actually mattered. They didn't have to decide whether to trust the algorithm or their intuition for every single selection. They just made the choice and moved on.
The psychology of ownership versus delegation
When my grandmother's generation picked a bad movie, they shrugged and said "well, that was terrible." When we pick a bad movie today, we complain that Netflix's algorithm failed us.
This shift in attribution matters psychologically.
When you make a decision entirely on your own, you own the outcome completely. That ownership comes with discomfort when things go wrong, but it also builds resilience. You learn that making a wrong choice isn't a system failure or an optimization problem. It's just part of being human, part of learning, part of the inherent uncertainty in life.
Pre-algorithm decision makers had to take full ownership. There was no system to blame, no recommendation engine that led you astray. Just you and your judgment and the consequences you had to live with.
This forced a kind of psychological maturity. You couldn't outsource responsibility for your choices, so you had to develop the emotional capacity to handle both good outcomes and bad ones.
Research on algorithmic autonomy suggests that as users grow accustomed to algorithms shaping their experiences, they come to see these systems as mere conveniences, never noticing how much agency and accountability they've handed over.
We've become comfortable blaming the algorithm when recommendations don't work out, which sounds convenient but actually robs us of the growth that comes from owning our mistakes.
What we might want to preserve
I'm not suggesting we should ditch recommendation algorithms entirely. They help us navigate information abundance in ways that would be impossible otherwise. I use them constantly. I probably couldn't function in modern life without them.
But there's something worth preserving in how our parents and grandparents made choices. The confidence that came from trusting yourself. The reduced anxiety that came from limited options. The authentic preferences that emerged from actual experience rather than predicted behavior.
Watching my grandmother decisively choose PBS reminded me that sometimes the best algorithm is the one that doesn't exist. The one that forces you to know what you want, make a choice, and move on without optimizing yourself into paralysis.
Maybe the solution isn't abandoning algorithms but being more conscious about when we use them and when we don't. Reserving some decisions for our own judgment, even when an algorithm could theoretically optimize them. Building in friction occasionally, making choices final even when we could technically reconsider.
Because here's what the research ultimately suggests: perfect decisions aren't what we actually need. We need the confidence to make imperfect ones, the resilience to live with outcomes we didn't predict, and the autonomy to trust that we know ourselves better than any algorithm possibly could.
Conclusion
My grandmother will finish her documentary tonight and go to bed satisfied with her choice. I'll probably still be scrolling, still optimizing, still searching for the perfect show that doesn't exist.
But maybe tomorrow I'll try her approach. Pick something quickly. Commit to it. Trust myself.
The algorithm will still be there if I need it. The question is whether I need it as much as I think I do.