Under ecological conditions, the luminance impinging on the retina varies over a dynamic range of 220 dB. Stimulus contrast can also vary drastically within a scene, and eye movements leave little time for sampling luminance. To cope with these fundamental problems, the human brain allocates significant resources and deploys structural and functional solutions that work in tandem to compress this range. Here we propose a new dynamic neural model built upon well-established canonical neural mechanisms. The model consists of two feed-forward stages. The first stage encodes the stimulus spatially and normalizes its activity by extracting contrast and discounting the background luminance. These normalized activities allow the second stage to implement a contrast-dependent spatial-integration strategy. We show how the properties of this model account for adaptive properties of motion discrimination, integration, and segregation.
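The first-stage computation described above can be illustrated with a minimal sketch. The snippet below assumes a simple divisive-normalization scheme in which local contrast is obtained by subtracting and dividing out the background (mean) luminance, with a hypothetical semi-saturation constant `sigma`; the actual model in the paper is spatial and dynamic, so this is only a schematic instance of the general mechanism, not the authors' implementation.

```python
import numpy as np

def normalize_luminance(image, sigma=1e-6):
    """Sketch of stage 1: extract contrast while discounting background luminance.

    A global mean stands in for the background luminance estimate; a real
    model would use a local, dynamically updated estimate. `sigma` is a
    hypothetical semi-saturation constant that keeps the division stable.
    """
    background = image.mean()
    # Divisive normalization: responses now code contrast relative to the
    # background, largely independent of absolute luminance level.
    return (image - background) / (background + sigma)

# A dim scene and the same scene at 10x the luminance yield nearly
# identical contrast representations, compressing the dynamic range.
dim = np.array([[90.0, 110.0], [100.0, 100.0]])
bright = 10.0 * dim
print(np.allclose(normalize_luminance(dim), normalize_luminance(bright)))
```

The invariance shown at the end is the point of the stage: downstream circuitry (the second, contrast-dependent integration stage) can operate on contrast signals without tracking absolute luminance.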