Recommendation feeds that explain themselves

01.20.24
Overview

Users rarely see the logic behind a recommendation. This study explores patterns for surfacing "why you're seeing this" in feed-based products, balancing transparency against cognitive load.
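One lightweight pattern is to have the ranking layer attach a short, user-facing reason to each recommendation, so the feed can render it inline. A minimal sketch of that idea in Python (all names here — `Recommendation`, `reason`, `explain` — are illustrative assumptions, not a real API):

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    item_id: str
    score: float
    # A short, user-facing reason attached at ranking time.
    # The field name is an assumption for this sketch.
    reason: str

def explain(rec: Recommendation) -> str:
    """Render a compact 'why you're seeing this' line for the feed."""
    return f"Why you're seeing this: {rec.reason}"

rec = Recommendation(
    item_id="a1",
    score=0.92,
    reason="You follow three people who shared this",
)
print(explain(rec))
# → Why you're seeing this: You follow three people who shared this
```

Keeping the reason a single plain-language sentence, rather than exposing scores or features, is one way to make the system legible without overwhelming the user.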

The tension

Explainability improves trust, but it also adds friction, complexity, and sometimes confusion. The goal was to find patterns that make the system legible without turning the feed into a data science dashboard.