Blog #4
One of the ideas that has become increasingly important to my work on the digital public sphere is epistemic injustice. The term is now fairly widely used in philosophy, though it is often invoked very quickly, as if it simply meant being wronged in relation to knowledge. It does mean that, but the idea is more specific and more useful than that shorthand suggests. What matters is not just that someone does not know something, or that a false belief circulates. Epistemic injustice concerns the ways in which people are wronged in their capacity as knowers.
When people speak publicly about harm, one of the first obstacles they often encounter is not disagreement in the abstract. It is the refusal of credibility. They are not believed, or not believed fully, or treated as unreliable because of who they are. In other cases, something more elusive happens. A person may know that something is wrong, and may feel its force very sharply, but lack the shared concepts needed to describe it convincingly. The experience is there, but the public language lags behind it. In those cases, injustice operates not only through institutions or direct exclusion, but through the difficulty of rendering experience intelligible.
This is why the concept is helpful for my work on the digital. The project asks how injustice becomes recognisable in digital space. That question is not only about visibility in the simple sense. It is also about whether people are able to make sense of what has happened to them, whether others are prepared to hear it, and whether the available concepts are adequate to the world being described. Epistemic injustice sits right at that intersection.
Digital platforms have become important here because they can alter the conditions under which experience is named and received. Someone may encounter, for the first time, a description of their experience written by another person. A phrase, a testimony, or a hashtag may suddenly give form to something that had previously remained half-articulated. What was once experienced as private confusion begins to appear as patterned and social. A wrong acquires shape not because an institution has acknowledged it, but because others have supplied language for it.
That is one of the most significant political functions of online space. It is not simply that people communicate more quickly. It is that interpretation itself becomes more collective. Experiences are compared, clarified, reframed. People borrow each other’s words, modify them, contest them, and sometimes arrive at sharper concepts than those available in official discourse. Much of the recognition of injustice now involves precisely this labour of shared interpretation.
At the same time, digital space can also intensify epistemic injustice. Platforms do not simply host speech neutrally. They privilege certain voices, reward speed, and make credibility precarious. Testimony can circulate widely and still be dismissed. Worse, it can be pulled into hostile economies of visibility in which the cost of speaking becomes very high. Online, one may be heard by more people than ever before and yet still not count as credible in any stable way. Visibility is not the same thing as recognition.
If injustice is often sustained because experiences are not believed or cannot be properly named, then the digital public sphere matters not only as a site of protest, but as a site of interpretation. It is a place where people struggle over the terms in which reality is grasped.
This also helps explain why debates online can appear so intense even when they are not directly about policy. Very often, what is at stake is not just what should be done, but what is happening, what kind of experience this is, and who gets to define it. Those are epistemic questions, though not in a narrow academic sense. They concern the social organisation of understanding itself.
*Please note: this blog and subsequent blogs have been dictated, with AI-enabled writing technology used in the process. All notes have been reviewed by the author. This is part of an effort to find productive and responsible ways to use generative AI.