In this study, we challenge the common belief that algorithmic explanations effectively help users detect discrimination. We conduct a user study that tests how well people can identify unfair predictions when given explanations. The results show that explanations are unreliable tools for flagging discriminatory outcomes, even under favorable conditions in which participants received training and varying levels of information about protected attributes and the underlying causal mechanisms.