Moving natural scenes pose a challenge to the human visual system: they contain diverse objects, clutter, and backgrounds. Well-known models of object recognition do not fully explain natural scene perception, as they ignore segmentation and the recognition of dynamic objects.
How do we search for dynamic events within longer sequences of events? How do we find a series of facial expressions in a stream of conversation, or a sequence of notes in a longer piece of music? My PhD thesis compared models of how sequences of features may be represented and how they can be matched against longer stimuli.
Sharing models and experimental results is central to progress in brain science: data need to be re-analysed, validated, and integrated by other researchers. There have been several attempts to use ontology languages to reify theories and results, notably the Neuroscience Information Framework and the Cognitive Atlas. These projects have limited utility, however, because they do not account for the changing and competing nature of scientific concepts.