Hydrological modeling is shifting from process-based to deep learning models. Entity-aware (EA) deep learning models, in which static features (predominantly physiographic proxies) are combined with dynamic forcing features, show significant performance improvements. However, recent studies challenge the notion that combining dynamic forcings with static attributes makes such models entity aware, suggesting that static features are not effectively leveraged for generalization. We examine entity awareness using state-of-the-art Long Short-Term Memory (LSTM) networks with the CAMELS-US dataset, comparing EA models provided with physiographic static features against ablated variants that receive no static inputs. Findings indicate that the superior performance of EA models is largely due to information provided by meteorological data, with minimal contributions from physiographic static features, particularly when tested out-of-sample. These results challenge previously held assumptions about how physiographic proxies confer generalization ability on EA models, highlighting the need for new approaches to achieve robust generalization in deep learning models.
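To make the experimental contrast concrete, the following is a minimal sketch, under the assumption of a simple concatenation-based design rather than the exact architecture used in the study: an LSTM that appends static catchment attributes to the dynamic forcings at every time step (the EA configuration), versus an ablated variant driven by forcings alone. The class name, feature counts, and dimensions are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class EntityAwareLSTM(nn.Module):
    """Toy streamflow regressor. If static attributes are given, they are
    repeated along the time axis and concatenated with the dynamic forcings."""
    def __init__(self, n_dynamic, n_static, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_dynamic + n_static, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # e.g. discharge at the last step

    def forward(self, dynamic, static=None):
        # dynamic: [batch, time, n_dynamic]; static: [batch, n_static] or None
        if static is not None:
            static_seq = static.unsqueeze(1).expand(-1, dynamic.size(1), -1)
            dynamic = torch.cat([dynamic, static_seq], dim=-1)
        out, _ = self.lstm(dynamic)
        return self.head(out[:, -1])

# EA model: meteorological forcings plus physiographic attributes
ea_model = EntityAwareLSTM(n_dynamic=5, n_static=27)
# Ablated baseline: forcings only, no static inputs
ablated_model = EntityAwareLSTM(n_dynamic=5, n_static=0)

forcings = torch.randn(8, 365, 5)   # 8 basins, one year of daily forcings (illustrative)
attributes = torch.randn(8, 27)     # static catchment attributes (illustrative)

print(ea_model(forcings, attributes).shape)  # torch.Size([8, 1])
print(ablated_model(forcings).shape)         # torch.Size([8, 1])
```

Comparing these two configurations on in-sample and out-of-sample basins is one way to isolate how much of the EA model's skill is attributable to the static attributes rather than to the forcings themselves.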