When we started the gvSIG sensors project, we became aware of the importance of the time dimension, something we had never considered before. Before studying the sensors problem in depth, we analyzed the time dimension and its impact on gvSIG, and our final conclusion was that the time dimension can be handled much like the spatial dimension.
When a user changes the envelope in a view (by panning, zooming…), the application is really retrieving a subset of the information for the layers loaded in the view. If the layer is a vector layer, only the subset of features contained in the view is displayed. If the layer is a raster layer, only part of the coverage is displayed.
If the layer is a vector layer, the application creates and uses a spatial filter to decide which features fall inside the envelope of the view and which fall outside it.
If the layer is a raster layer, the application uses a different filter to decide which part of the coverage lies inside the envelope of the view and which part lies outside it.
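The envelope-based filtering described above can be sketched as follows. This is a minimal illustration with a simple point-feature model; the class and function names are assumptions for the example, not gvSIG's actual API.

```python
class Envelope:
    """An axis-aligned bounding box, like the visible extent of a view."""

    def __init__(self, min_x, min_y, max_x, max_y):
        self.min_x, self.min_y = min_x, min_y
        self.max_x, self.max_y = max_x, max_y

    def contains(self, x, y):
        return self.min_x <= x <= self.max_x and self.min_y <= y <= self.max_y


def spatial_filter(features, envelope):
    """Return only the features whose geometry falls inside the envelope."""
    return [f for f in features if envelope.contains(f["x"], f["y"])]


view = Envelope(0, 0, 10, 10)
features = [
    {"id": 1, "x": 5, "y": 5},   # inside the view envelope
    {"id": 2, "x": 20, "y": 3},  # outside the view envelope
]
visible = spatial_filter(features, view)  # only feature 1 is drawn
```

When the user pans or zooms, the view simply builds a new envelope and re-applies the filter to each layer.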
This approach to spatial filters is very similar to the management of temporal filters. It is possible to create temporal filters (by instant or by interval) that a layer can use to retrieve a subset of its information. If the layer is a vector layer, the result of the temporal filter is the subset of features contained in the temporal filter. This filter can only be applied if the layer has information about the time of its features (e.g. in an attribute).
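A temporal filter by instant or by interval could look like the sketch below, assuming each feature carries its time in an attribute (here `"time"`); the function and attribute names are illustrative, not gvSIG's actual API.

```python
def instant_filter(features, instant, time_attr="time"):
    """Features whose time attribute matches the given instant exactly."""
    return [f for f in features if f[time_attr] == instant]


def interval_filter(features, start, end, time_attr="time"):
    """Features whose time attribute falls within [start, end]."""
    return [f for f in features if start <= f[time_attr] <= end]


# Example: sensor readings stamped with a numeric time value.
readings = [
    {"id": 1, "time": 100},
    {"id": 2, "time": 150},
    {"id": 3, "time": 300},
]
at_150 = instant_filter(readings, 150)          # just the reading at t=150
in_range = interval_filter(readings, 100, 200)  # readings 1 and 2
```

The structure mirrors the spatial case exactly: the filter decides membership per feature, and the layer only displays what passes.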
If the layer is a raster layer, the result of the temporal filter is the single image contained in the temporal filter. In this case, the store has to hold several images along with information about the time of each of them.
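A time-aware raster store of this kind could be sketched as below: several images, each with a timestamp, where a temporal interval selects the single matching image. The class and method names are assumptions for illustration.

```python
class RasterStore:
    """Holds several timestamped images, e.g. successive satellite scenes."""

    def __init__(self, images):
        # images: list of (timestamp, image_name) pairs
        self.images = sorted(images)

    def image_for_interval(self, start, end):
        """Return the one image whose timestamp falls inside [start, end]."""
        matches = [name for t, name in self.images if start <= t <= end]
        if len(matches) != 1:
            raise ValueError("interval must select exactly one image")
        return matches[0]


store = RasterStore([(100, "scene_a.tif"), (200, "scene_b.tif")])
selected = store.image_for_interval(150, 250)  # → "scene_b.tif"
```

A real store would also need a policy for intervals matching zero or several images (e.g. nearest timestamp); here it simply rejects them to keep the sketch short.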
This approach is really simple and makes it easier to support the time dimension in gvSIG.
More information on the gvSIG Sensors project