Spatial constraints on learning in visual search: Modeling contextual cueing.
Predictive visual context facilitates visual search, a benefit termed contextual cueing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. We modeled existing results using a connectionist architecture, and then designed new behavioral experiments to test the model's assumptions. The modeling and behavioral results indicate that learning may be restricted to the local context even when the entire configuration is predictive of target location. Local learning constrains how much guidance is produced by contextual cueing. The modeling and new data also demonstrate that local learning requires that the local context maintain its location in the overall global context.
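The two modeling claims above — that learning is restricted to the local context, and that the learned local context is tied to its absolute location in the display — can be illustrated with a toy associative sketch. This is not the paper's actual connectionist architecture; the grid size, locality radius, and Hebbian-style update rule are all assumptions made purely for illustration.

```python
GRID = 8       # assumed 8x8 search display for this toy sketch
RADIUS = 1     # "local context" = distractors within 1 cell of the target

def neighbors(cell):
    """Cells adjacent to `cell` within the grid (the local neighborhood)."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
            and 0 <= x + dx < GRID and 0 <= y + dy < GRID}

# association strengths: (distractor cell, target cell) -> weight
weights = {}

def train(display, target, epochs=20):
    """Hebbian-style learning restricted to the local context:
    only distractors adjacent to the target gain an association with it."""
    local = neighbors(target) & set(display)
    for _ in range(epochs):
        for d in local:
            weights[(d, target)] = weights.get((d, target), 0) + 1

def predict(display):
    """Sum the associative support each cell receives from the
    distractors present; return the best-supported target cell."""
    score = {}
    for d in display:
        for (src, tgt), w in weights.items():
            if src == d:
                score[tgt] = score.get(tgt, 0) + w
    return max(score, key=score.get) if score else None

# Train on one repeated display (all coordinates are arbitrary examples).
target = (4, 4)
local_ctx = {(3, 4), (4, 3), (5, 5)}   # distractors near the target
far_ctx = {(0, 0), (7, 1), (1, 7)}     # distant distractors
train(local_ctx | far_ctx, target)
```

Because the weights are keyed to absolute cells and updated only for the target's neighbors, the sketch reproduces both qualitative effects: replacing the distant distractors with new ones leaves the target prediction intact (only the local context was learned), whereas translating the local context to a new position in the display abolishes the cueing (the learned associations do not move with it).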