Feed me

Reverse Engineering Your Personalized Algorithm

Inspired by: Trevor Paglen

Description of the work:

Many AI systems reflect hidden biases, not because the machines themselves are flawed, but because the data they’re trained on carries the assumptions, stereotypes, and blind spots of the people who created it. These biases often go unnoticed, yet they shape how AI classifies, ranks, and interacts with the world.

Artist and researcher Trevor Paglen makes work that reveals the hidden systems behind AI, especially the biased data and invisible structures that shape how machines see the world. One example is ImageNet Roulette (2019), made with AI researcher Kate Crawford. The project let users upload a photo, then used an AI model trained on the ImageNet dataset to label them. The results were often strange, outdated, or offensive, highlighting how AI can misrepresent people when trained on flawed data. Rather than a practical tool, ImageNet Roulette acts as a critical artwork: it prompts us to ask who creates AI systems, whose perspectives they embed, and who gets misrepresented.

In the assignment below, you’ll turn that lens on yourself: by analyzing your own social media feed, you’ll explore how your personal algorithm sees and categorizes you. What kind of content does it show? What assumptions does it make? And how might you challenge, or even hack, its version of you?

Assignment:

  1. Pick a platform where you experience a personalized feed (for example Instagram, TikTok, YouTube or Spotify).
  2. Spend 10–15 minutes using the app as you normally would while recording your screen. Then watch the recording back and note down the first 10 posts you encountered.
  3. For each item, answer the following questions:
    • What is this content trying to get me to do? For example: watch something, buy something, like it, click a link, worry, relate, or feel seen?
    • Why might the algorithm have shown me this?
    • What does this post assume about who I am or what I like? Think about the image of me it’s working with.
  4. Try grouping your 10 posts into three “identity clusters”: little versions of how the algorithm might see you. Give each cluster a name that captures the kind of person your feed seems to imagine you are. Some examples: “Aspiring creative professional”, “Anxious consumer”, “Curious but monetizable feminist”, “Casual conspiracy watcher”, “Productivity-hacker”, “Potential mother”. 
  5. Based on your clusters, come up with a fictional or poetic name that captures how your feed seems to see you. This is a playful way to reflect on the identity your algorithm builds for you. Some examples: “Almost-Influencer With Decision Fatigue”, “Doomscrolling Daughter”, or “Neurodivergent Consumer With a Credit Card”.
  6. Write a short reflection (150–300 words) answering the following questions: What kind of person is being constructed for you? Do you agree with it? What parts feel accurate? And which parts feel aspirational, flattened, or manipulated?
  7. Now, intervene in your feed. Choose a new identity to perform: someone completely different from your current algorithmic self. This could be playful, aspirational, ironic, or surreal (take the examples above as inspiration). Search for things this new version of you would be into. Like or comment on unexpected posts. Mute, block, or report content that seems tailored to your old self.
  8. Observe how your feed shifts over the next 48 hours. What changes? What stays the same? What does it take to re-train your algorithm?
  9. Take a moment to reflect on this hacking of your own personalized feed. Did your feed start to shift, or did it pull you back toward your old algorithmic self? Were you able to reshape it, or did the system resist change? What patterns or strategies did you notice? How does your feed try to keep your attention?