
The Algorithm and You: How Recommendation Engines Rewrite Your Consumer Identity

Abstract

Recommendation engines have become central to digital consumption, shaping what users see, choose, and ultimately become. This article examines how algorithmic systems influence and reconstruct consumer identity in platform-mediated environments. Drawing from consumer research, identity theory, and artificial intelligence studies, the paper argues that recommendation systems do not merely reflect preferences but actively participate in their formation. Through mechanisms such as personalization, digital nudging, and data-driven profiling, algorithms reshape the extended self in the digital age. The study also highlights emerging concerns around bias, echo chambers, and consumer autonomy. The paper concludes by proposing a conceptual framework for understanding identity transformation in algorithmic marketplaces and calls for more transparent and ethical design of recommendation systems.

1. Introduction

Digital platforms increasingly rely on recommendation engines to personalize user experiences. From streaming services to e-commerce, algorithmic systems curate content and products tailored to individual preferences. While these technologies enhance convenience and efficiency, they also raise important questions about their influence on consumer identity.

Traditionally, identity has been viewed as a stable construct shaped by social interactions and personal experiences. However, in the digital environment, identity is continuously mediated by algorithmic processes. Recommendation engines not only respond to user behavior but also guide and reshape it, creating a dynamic interplay between human agency and machine intelligence.

This article explores how recommendation systems rewrite consumer identity by influencing preferences, reinforcing behavioral patterns, and redefining the boundaries of the self.

2. Literature Review

2.1 Consumer Identity and the Extended Self

Consumer research has long established that possessions contribute to the formation of identity. The concept of the “extended self” suggests that individuals define themselves through what they own and consume. In digital contexts, this notion expands to include online profiles, interactions, and algorithmically curated experiences.

Identity theory further explains how individuals seek consistency between their self-concept and external representations. In digital environments, algorithms increasingly participate in this process by reflecting and reinforcing perceived identities.

2.2 Algorithmic Consumer Culture

Algorithmic systems are now embedded within consumer culture, shaping how preferences are formed and expressed. Rather than acting as neutral intermediaries, recommendation engines actively construct cultural meaning by prioritizing certain products, ideas, and behaviors.

The platformization of markets has intensified this effect, as large technology companies control the infrastructures through which consumption occurs. This creates a feedback loop where user data informs recommendations, which in turn shape future behavior.
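This feedback loop can be sketched in a few lines of Python. The five-category profile, the scoring rule, and the update increment below are all invented for illustration; no real platform works this simply, but the lock-in dynamic is the same.

```python
# Toy feedback loop: the recommender suggests the top-scoring category,
# and each engagement pushes the profile further toward that category.
categories = ["books", "music", "games", "sports", "fashion"]
profile = {c: 1.0 for c in categories}
profile["books"] = 1.2  # a slight initial lean is enough to start the loop

def recommend(profile):
    """Return the category with the highest estimated preference."""
    return max(profile, key=profile.get)

history = []
for _ in range(20):
    choice = recommend(profile)
    history.append(choice)   # the user engages with the suggestion...
    profile[choice] += 0.5   # ...and that engagement feeds back into the model

print(set(history))  # after 20 rounds, only one category was ever recommended
```

Because every recommendation of "books" makes "books" score higher, the other four categories never surface again: a minimal model of how user data informs recommendations, which in turn shape future behavior.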

2.3 Personalization and Digital Nudging

Personalization is a defining feature of recommendation systems. By analyzing user data, algorithms deliver highly tailored suggestions that align with predicted preferences. While this increases relevance, it also introduces subtle forms of influence known as digital nudging.

Digital nudges guide consumer decisions without explicit awareness, shaping choices through interface design and recommendation logic. Over time, these nudges can alter preferences, leading users to adopt identities that align with algorithmic predictions.

2.4 Echo Chambers and Filter Bubbles

One consequence of algorithmic personalization is the creation of echo chambers, where users are repeatedly exposed to similar content. This limits diversity in consumption and reinforces existing beliefs and preferences.

Filter bubbles further isolate users by restricting exposure to alternative viewpoints. In consumer contexts, this can lead to homogenized tastes and reduced exploration, narrowing the scope of identity formation.

2.5 Algorithmic Bias and Inequality

Recommendation systems are not free from bias. Machine learning models often reflect the data on which they are trained, leading to systematic biases in recommendations. These biases can reinforce social inequalities and limit opportunities for certain groups.

Algorithmic bias also affects how consumers are categorized and targeted, influencing the types of products and services they encounter. This has significant implications for fairness and inclusivity in digital markets.

2.6 Trust, Transparency, and Consumer Autonomy

Trust plays a critical role in the adoption of algorithmic systems. While many users appreciate the efficiency of recommendations, concerns about transparency and control persist. The “black box” nature of algorithms makes it difficult for users to understand how decisions are made.

This lack of transparency can undermine consumer autonomy, as individuals may unknowingly rely on systems that shape their preferences and behaviors. Enhancing explainability and accountability is therefore essential.
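As a hedged illustration of what explainability can mean in practice, a simple feature-based explanation, in the spirit of "because you watched ..." labels, might look like the sketch below. The item catalog and feature tags are invented; real explainable-recommendation methods (surveyed by Zhang & Chen in the references) are far richer.

```python
# Hypothetical catalog: each item is tagged with a set of features.
item_features = {
    "The Matrix": {"sci-fi", "action"},
    "Blade Runner": {"sci-fi", "noir"},
    "Notting Hill": {"romance", "comedy"},
}

def explain(recommended, liked_items):
    """Explain a recommendation via features it shares with past likes."""
    liked_features = set().union(*(item_features[i] for i in liked_items))
    shared = item_features[recommended] & liked_features
    if shared:
        return f"Recommended because you liked {', '.join(sorted(shared))} titles."
    return "Recommended based on overall popularity."

print(explain("Blade Runner", ["The Matrix"]))
```

Even this crude explanation gives the user something to contest ("I don't actually like sci-fi"), which is the first step from a black box toward accountability.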

3. Conceptual Framework

This article proposes a framework in which recommendation systems influence consumer identity through three key mechanisms:

  1. Datafication – Continuous collection and analysis of user data to construct digital profiles.
  2. Personalization – Tailoring content and products based on predicted preferences.
  3. Behavioral Reinforcement – Repeated exposure to similar recommendations, strengthening specific consumption patterns.

These mechanisms interact to create a cycle of identity formation where consumers both shape and are shaped by algorithmic systems.
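The three mechanisms can be sketched as one loop. Everything here is a conceptual illustration with invented data structures (a click log standing in for behavioral data, a word-count profile standing in for a machine-learned model), not a description of any real recommender.

```python
from collections import Counter

def datafy(click_log):
    """Datafication: condense raw behavior into a digital profile."""
    return Counter(click_log)

def personalize(profile, catalog, k=2):
    """Personalization: rank the catalog by predicted preference."""
    return sorted(catalog, key=lambda item: profile.get(item, 0), reverse=True)[:k]

def reinforce(click_log, recommendations):
    """Behavioral reinforcement: exposure leads to engagement
    (modeled here, crudely, as a guaranteed click on each recommendation)."""
    return click_log + recommendations

catalog = ["sneakers", "novels", "headphones", "posters"]
clicks = ["novels", "novels", "sneakers"]

for _ in range(3):  # each pass is one turn of the identity-formation cycle
    profile = datafy(clicks)
    recs = personalize(profile, catalog)
    clicks = reinforce(clicks, recs)

print(datafy(clicks).most_common(2))
```

After three turns the click log is dominated by the two items the user started with, while "headphones" and "posters" never appear: the consumer both shapes the profile and is shaped by it.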

4. Discussion

4.1 The Co-Creation of Identity

Consumer identity in the digital age is co-created by users and algorithms. While individuals provide data through their actions, algorithms interpret and amplify these signals, influencing future behavior. This challenges traditional notions of autonomy and self-determination.

4.2 The Personalization Paradox

Personalization enhances user experience but may limit diversity and exploration. As algorithms become more accurate, they may confine users to narrow identity categories, reducing opportunities for discovery and change.

4.3 Ethical and Social Implications

The growing influence of recommendation systems raises important ethical concerns. Issues such as data privacy, algorithmic bias, and digital manipulation require careful consideration. Moreover, the concentration of power within platform companies amplifies these challenges.

4.4 Toward Responsible Algorithm Design

To mitigate risks, developers and policymakers must prioritize transparency, fairness, and user control. Strategies include:

  • Implementing explainable AI systems
  • Providing users with customization options
  • Auditing algorithms for bias
  • Promoting digital literacy among consumers

Such measures can help ensure that recommendation systems support, rather than undermine, consumer autonomy.

5. Conclusion

Recommendation engines are no longer passive tools; they are active agents in shaping consumer identity. By influencing what individuals see, choose, and value, algorithms play a central role in defining the modern self.

This article highlights the need to critically examine the impact of recommendation systems on identity formation. As digital platforms continue to evolve, understanding this relationship will be essential for creating more inclusive, transparent, and ethical consumer environments.

References

Airoldi, M., & Rokka, J. (2022). Tracing how algorithms shape consumer identities in digital platforms. Consumption Markets & Culture, 25(5), 411–428.
Original annotation: This study introduces the concept of “algorithmic consumer culture,” arguing that recommendation engines do not merely suggest products but actively co‑construct what consumers desire and how they see themselves.

Belk, R. W. (1988, 2013). Possessions and the extended self; Extended self in a digital world. (1988) Journal of Consumer Research, 15(2), 139–168; (2013) Journal of Consumer Research, 40(3), 477–500.
Original annotation: The 1988 paper laid the foundation for understanding material goods as part of identity; the 2013 update expands the framework to digital possessions, avatars, and online profiles, showing how algorithms mediate self‑extension.

Burke, P. J., & Stets, J. E. (2009, 2022). Identity theory from a sociological perspective. (2009) Oxford University Press; (2022) 2nd ed., revised and expanded.
Original annotation: These volumes explain how individuals verify their identities through social feedback – a process now increasingly automated by algorithmic systems that affirm or challenge who we are online.

Lee, A. Y., Mieczkowski, H., Ellison, N. B., & Hancock, J. T. (2022). The algorithmic crystal: Self‑formation on TikTok. Proceedings of the ACM on Human‑Computer Interaction, 6(CSCW2), 1–22.
Original annotation: Proposes that TikTok’s personalized feed acts as a “crystal ball” through which users see possible future selves, thus algorithmically shaping identity exploration.

Roccas, S., & Brewer, M. B. (2002). Social identity complexity. Personality and Social Psychology Review, 6(2), 88–106.
Original annotation: Explains how individuals belong to multiple social groups simultaneously – a complexity that algorithms often flatten by forcing simplistic category assignments.

Swann, W. B. (1983). Self‑verification theory. In Suls & Greenwald (Eds.), Social psychological perspectives on the self (Vol. 2, pp. 33–66).
Original annotation: Argues that people seek confirmation of their existing self‑views – a tendency that recommender systems exploit to create reinforcing feedback loops.

Won, J., & Lee, J. L. (2026). Data‑driven identity transformation in digital sport fandom. European Sport Management Quarterly, 1–26.
Original annotation: Applies identity theory to sports fans, showing how algorithms track and then influence fan loyalty and visual self‑presentation.

Akter, S., Dwivedi, Y. K., Sajib, S., Biswas, K., Bandara, R. J., & Michael, K. (2022). Bias in machine‑learning marketing models. Journal of Business Research, 144, 201–216.
Original annotation: A comprehensive review of how biased training data and flawed model design lead to unfair consumer treatment – from predatory lending to discriminatory ad delivery.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). Fairness and bias in machine learning: A survey. ACM Computing Surveys, 54(6), Article 115.
Original annotation: Maps 20+ types of bias in algorithmic systems, providing a taxonomy that marketers and business students can use to audit AI applications.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Original annotation: A non‑technical but rigorous critique of opaque algorithms that punish the poor and marginalized – essential reading for understanding real‑world consequences of biased models.

Sun, B., Pei, S., Wang, Q., & Meng, X. (2025). Algorithmic discrimination and unethical consumer behavior. Behavioral Sciences, 15(4), 494.
Original annotation: Empirical evidence that when consumers perceive algorithmic bias (e.g., in pricing or recommendations), they respond with retaliatory unethical actions like lying or return fraud.

Dobolyi, D. G., Greenspan, R. L., Grabman, J. H., Abbasi, A., & Dodson, C. S. (2026). Face recognition, race, and algorithmic similarity. Journal of Applied Research in Memory and Cognition. Advance online.
Original annotation: Demonstrates that facial recognition algorithms perform unevenly across racial groups, which affects both consumer authentication and targeted marketing.

Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). Echo chambers on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118.
Original annotation: Large‑scale analysis of Facebook and Twitter showing that users self‑segregate into ideologically homogeneous communities, and algorithms amplify this effect.

Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter bubbles and online news consumption. Public Opinion Quarterly, 80(S1), 298–320.
Original annotation: Challenges the strongest claims about filter bubbles but confirms that personalization reduces exposure to cross‑cutting political content.

Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., de Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1), 1–16.
Original annotation: A balanced, student‑friendly introduction to the filter bubble debate, distinguishing empirical evidence from speculative fears.

Davidson, B. M., & Weeks, B. E. (2025). Automating distrust? Algorithmic news curation and trust. Mass Communication & Society, 1–16.
Original annotation: Reveals that when users learn an algorithm curates their news, their trust in that news decreases – a paradox for AI‑driven media.

Couldry, N., & Mejias, U. A. (2019). Data colonialism. Television & New Media, 20(4), 336–349.
Original annotation: Frames big data extraction as a new form of colonialism, where user data is taken without meaningful consent for profit – a critical perspective for business ethics discussions.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
Original annotation: The definitive critique of how tech giants claim human experience as free raw material for behavioral prediction markets.

Masiero, S. (2023). Digital identity as platform‑mediated surveillance. Big Data & Society, 10(1), 1–5.
Original annotation: Examines how digital ID systems (e.g., Aadhaar in India) enable unprecedented tracking, often under the guise of financial inclusion.

Plantin, J.-C., Lagoze, C., Edwards, P. N., & Sandvig, C. (2018). Infrastructure meets platform studies. New Media & Society, 20(1), 293–310.
Original annotation: Shows how Google and Facebook have become invisible digital infrastructure, making their algorithms unavoidable for businesses and consumers.

Srnicek, N. (2017). Platform capitalism. Polity Press.
Original annotation: A concise explanation of how platform‑based business models (e.g., Uber, Airbnb) rely on data extraction and network effects, not just matching supply and demand.

van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society: Public values in a connective world. Oxford University Press.
Original annotation: Argues that platforms are not neutral intermediaries but actively govern social and economic life, with algorithms as the primary governance mechanism.

Glikson, E., & Woolley, A. W. (2020). Human trust in AI: A review. Academy of Management Annals, 14(2), 627–660.
Original annotation: Systematic review of factors that build or erode trust in algorithmic systems – including transparency, reliability, and perceived benevolence.

Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation. Organizational Behavior and Human Decision Processes, 151, 90–103.
Original annotation: Experimental evidence that people often trust algorithms more than human experts, even for subjective tasks like forecasting – countering common assumptions.

Rai, A. (2020). Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141.
Original annotation: A short, accessible introduction to why marketing models must become interpretable, and how that benefits both firms and consumers.

Miller, T. (2019). Explanation in artificial intelligence. Artificial Intelligence, 267, 1–38.
Original annotation: Bridges computer science and philosophy to define what a “good explanation” means for AI, with implications for consumer-facing recommender systems.

Han, J., & Ko, D. (2025). Consumer autonomy in generative AI services. Behavioral Sciences, 15(4), 534.
Original annotation: Finds that giving users control over AI design elements (e.g., tone, length of output) increases trust and satisfaction, preserving autonomy.

Henin, C., & Le Métayer, D. (2022). Beyond explainability: Justifiability and contestability of algorithmic decisions. AI & Society, 37(4), 1397–1410.
Original annotation: Argues that even explainable AI is insufficient – users must also have the right to contest and appeal algorithmic outcomes.

Yeung, K. (2017). ‘Hypernudge’: Big data as regulation by design. Information, Communication & Society, 20(1), 118–136.
Original annotation: Critiques how hyper‑personalized nudges (e.g., default settings, dynamic pricing) steer behavior without conscious awareness, raising ethical flags.

Jesse, M., & Jannach, D. (2021). Digital nudging with recommender systems. Computers in Human Behavior Reports, 3, 100052.
Original annotation: A state‑of‑the‑art survey of nudging techniques (e.g., social proof, scarcity) deployed through recommendation algorithms.

Helberger, N., Sax, M., Strycharz, J., & Micklitz, H.-W. (2022). Digital vulnerability. Journal of Consumer Policy, 45(2), 175–200.
Original annotation: Introduces the concept of “digital vulnerability” – the inability of consumers to resist algorithmic manipulation due to information asymmetries and interface design.

Ontanon, S., & Zhu, J. (2021). The personalization paradox. In IUI Companion (pp. 64–66). ACM.
Original annotation: Short conference paper showing that more accurate personalization can actually reduce user satisfaction by limiting serendipity.

Chaney, A. J.-B., Stewart, B. M., & Engelhardt, B. E. (2018). Algorithmic confounding in recommender systems. In RecSys ’18 (pp. 224–232). ACM.
Original annotation: Demonstrates mathematically that recommendation algorithms create feedback loops that reduce item diversity and lock users into narrow consumption patterns.

Camacho, L. J., Salazar-Concha, C., & Ramírez-Correa, P. (2020, 2022). Xenocentrism and consumer behavior. (2020) Sustainability, 12(4), 1647; (2022) Journal of Risk and Financial Management, 15, 166.
Original annotation: The 2020 paper links xenocentrism (preference for foreign products) to purchase intentions; the 2022 study finds that formal education reduces xenocentric bias, important for cross‑cultural marketing.

Papageorgiou, K., & Milioris, K. (2026). AI‑driven personalisation in social media marketing. Marketing, 57(1), 39–48.
Original annotation: Explores how AI can tailor influencer content to xenocentric or ethnocentric consumer segments, along with ethical guardrails.

Theodorakopoulos, L., Theodoropoulou, A., & Klavdianos, C. (2025). Interactive viral marketing through big data and AI. Journal of Theoretical and Applied Electronic Commerce Research, 20(2), 115.
Original annotation: Provides a framework for combining AI analytics with influencer networks, while warning against manipulation of consumer identity.

Dwivedi, Y. K., et al. (2023). Generative conversational AI (ChatGPT) – multidisciplinary perspectives. International Journal of Information Management, 71, 102642.
Original annotation: A large‑author team paper covering opportunities (e.g., content generation) and risks (e.g., misinformation, bias) for research, practice, and policy.

Kshetri, N., Dwivedi, Y. K., Davenport, T. H., & Panteli, N. (2024). Generative AI in marketing. International Journal of Information Management, 75, 102716.
Original annotation: Identifies specific marketing applications (e.g., hyper‑personalized emails, synthetic reviews) and a research agenda for each.

Chacon, A., Montecino, R., Reyes, T., & Kausel, E. E. (2026). Terminology effects in algorithmic systems. Big Data & Society, 13(1), 1–15.
Original annotation: Shows that calling an AI “chatbot” vs. “assistant” changes user trust and preference – crucial for interface design.

Pelau, C., Dabija, D.-C., & Stanescu, M. (2024). Trust and friendship with AI. Oeconomia Copernicana, 15(2), 407–433.
Original annotation: Finds that emotional bonds with AI agents increase information sharing but also make consumers more vulnerable to exploitation.

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Original annotation: A deep dive into how platforms like Facebook and YouTube moderate content – often using algorithms – and why those decisions are secret yet powerful.

Gonçalves, J., Weber, I., Masullo, G. M., Torres da Silva, M., & Hofhuis, J. (2023). Perceptions of algorithmic content deletion. New Media & Society, 25(10), 2595–2617.
Original annotation: Experimental study showing that users perceive algorithmic removal of political content as “censorship” but removal of hate speech as “common sense.”

Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work. Academy of Management Annals, 14(1), 366–410.
Original annotation: Extends the study of algorithms to labor contexts (e.g., gig workers rated by AI), with implications for algorithmic governance in marketing platforms.

Nieborg, D. B., & Poell, T. (2018). Platformization of cultural production. New Media & Society, 20(11), 4275–4292.
Original annotation: Explains how platforms transform cultural goods (news, games, apps) into contingent, algorithm‑optimized commodities.

Poell, T., Nieborg, D., & van Dijck, J. (2019). Platformisation. Internet Policy Review, 8(4).
Original annotation: A concise definition and analysis of platformisation as an institutional phenomenon, not just a technical one.

Carrière, T. C., Boeschoten, L., Struminskaya, B., Janssen, H. L., De Schipper, N. C., & Araujo, T. (2025). Best practices for digital data donation studies. Quality & Quantity, 59(Suppl. 1), 389–412.
Original annotation: A methodological guide for researchers who ask participants to donate their digital trace data (e.g., browser history, Spotify logs), emphasizing ethics and consent.

Ohme, J., & Araujo, T. (2022). Digital data donations: A quest for best practices. Patterns, 3(4), 100467.
Original annotation: Shorter, accessible overview of data donation as an alternative to API access, with checklists for transparency.

Ohme, J., et al. (2024). Digital trace data collection for social media effects research. Communication Methods and Measures, 18(2), 124–141.
Original annotation: Compares three methods (APIs, data donation, screen tracking) for studying algorithmic effects, with clear advice for student researchers.

Davenport, T. H., Guha, A., Grewal, D., & Bressgott, T. (2020). How AI will change marketing. Journal of the Academy of Marketing Science, 48, 24–42.
Original annotation: A forward‑looking article outlining four AI‑driven changes: automated decisions, dynamic pricing, real‑time personalization, and AI‑augmented creativity.

Grewal, D., Hulland, J., Kopalle, P. K., & Karahanna, E. (2020). The next 50 years of technology and marketing. Journal of the Academy of Marketing Science, 48(1), 1–8.
Original annotation: Editorial introduction to a special issue, summarizing key themes including algorithmic bias, consumer privacy, and digital identity.

Huang, M.-H., & Rust, R. T. (2021). A strategic framework for AI in marketing. Journal of the Academy of Marketing Science, 49(1), 30–50.
Original annotation: Proposes a four‑level AI maturity model (mechanical, thinking, feeling, social) for marketers to assess their algorithmic capabilities.

MacInnis, D. J. (2011). A framework for conceptual contributions in marketing. Journal of Marketing, 75(4), 136–154.
Original annotation: A methodological classic – helps students evaluate whether a paper makes a theoretical, methodological, or policy contribution.

Puntoni, S., Reczek, R. W., Giesler, M., & Botti, S. (2021). Consumers and AI: An experiential perspective. Journal of Marketing, 85(1), 131–151.
Original annotation: Shifts the focus from functional to experiential dimensions of AI (e.g., meaning, identity, emotion), highly relevant for student projects.

Reed, A., Forehand, M. R., Puntoni, S., & Warlop, L. (2012). Identity‑based consumer behavior. International Journal of Research in Marketing, 29(4), 310–321.
Original annotation: A theoretical integration showing how identity salience drives consumption choices – a foundation for understanding algorithmic personalization.

Ricci, F., Rokach, L., & Shapira, B. (Eds.). (2022). Recommender systems handbook (3rd ed.). Springer.
Original annotation: The standard technical reference for recommendation algorithms; suitable for advanced students who want to understand collaborative filtering, matrix factorization, and deep learning approaches.

Sahakyan, H., Gevorgyan, A., & Malkjyan, A. (2025). Algorithmic control and Foucault. Philosophies, 10(4), 73.
Original annotation: A philosophical piece applying Foucault’s concept of discipline to algorithmically managed populations – useful for critical business students.

Song, Y. G., Ham, J., Jin, E., & Eastin, M. S. (2024). Advertising AI agents: Social presence and sincerity. Journal of Interactive Advertising, 24(3), 185–202.
Original annotation: Experimental evidence that making AI agents seem more “human” (social presence) increases advertising effectiveness but also triggers higher expectations of sincerity.

Vărzaru, A. A. (2023). Digital transformation acceptance in public marketing. Sustainability, 15(1), 265.
Original annotation: Investigates how public sector marketers adopt (or resist) AI tools, with implications for government‑facing FinTech or health campaigns.

Wang, W., & Benbasat, I. (2007). Explanation facilities and trust in recommendation agents. Journal of Management Information Systems, 23(4), 217–246.
Original annotation: Early but still influential study showing that giving users explanations for recommendations increases their trust and intention to use.

Zhang, Y., & Chen, X. (2020). Explainable recommendation: A survey. Foundations and Trends in Information Retrieval, 14(1), 1–101.
Original annotation: Comprehensive technical survey of methods to make recommender systems explainable (e.g., feature‑based, example‑based, natural language).

Kitchin, R. (2017). Critical algorithm research. Information, Communication & Society, 20(1), 14–29.
Original annotation: A methodological manifesto for studying algorithms as socio‑technical assemblages, not just code.

Longoni, C., & Cian, L. (2022). The “word‑of‑machine” effect. Journal of Marketing, 86(1), 91–108.
Original annotation: Demonstrates that consumers prefer AI recommendations for utilitarian products (e.g., batteries) but human recommendations for hedonic products (e.g., wine).

Mende, M., Bradford, T. W., Roggeveen, A. L., Scott, M. L., & Zavala, M. (2024). Consumer vulnerability dynamics. Journal of the Academy of Marketing Science, 52, 1301–1322.
Original annotation: A dynamic framework showing how algorithmic marketing can both reduce (e.g., accessible pricing) and increase (e.g., predatory personalization) consumer vulnerability.
