Transformer-based language models (LMs) capture factual knowledge in their parameters. This paper investigates how factual associations are stored and extracted internally in LMs by analyzing the flow of information during inference. The analysis reveals a three-step internal mechanism for recalling an attribute: the subject representation is first enriched, the relation then propagates to the final position, and the attribute is finally extracted via attention heads. These findings shed light on knowledge localization and model editing in LMs.
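As a rough illustration of this kind of information-flow analysis (not the paper's exact method), the sketch below loads GPT-2 via Hugging Face Transformers and measures, layer by layer, how strongly the final position attends to the subject tokens of a factual prompt. The model name, prompt, and subject span are assumptions chosen only for demonstration.

```python
# Minimal sketch, assuming GPT-2 and a hand-picked prompt; illustrative only,
# not the authors' implementation of information-flow analysis.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Beats Music is owned by"          # subject: "Beats Music"; relation: "is owned by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# Assumed subject span: the first two tokens of this particular prompt.
subject_span = slice(0, 2)
last_pos = inputs["input_ids"].shape[1] - 1

# For each layer, average attention (over heads) from the last position to the subject span,
# a crude proxy for how much subject information flows to the position that predicts the attribute.
for layer_idx, attn in enumerate(outputs.attentions):   # attn: (batch, heads, seq, seq)
    to_subject = attn[0, :, last_pos, subject_span].mean().item()
    print(f"layer {layer_idx:2d}: mean attention last token -> subject = {to_subject:.3f}")

# The attribute predicted at the last position.
next_id = outputs.logits[0, last_pos].argmax().item()
print("predicted next token:", tokenizer.decode(next_id))
```

A fuller analysis in the spirit of the paper would intervene on these attention edges (e.g., blocking attention from the last position to the subject at selected layers) and measure the drop in the predicted attribute's probability, rather than only reading off attention weights.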