Interactive visualization of large data sets is only possible with efficient algorithms for all parts of the visualization pipeline. This thesis analyzes the filtering and rendering steps of this pipeline for several fundamentally different data types. Two key techniques employed throughout this work are hierarchical methods and graphics-hardware-based implementations of the presented algorithms.
To improve the efficiency of filtering, both linear and nonlinear filters are accelerated on graphics hardware. Many algorithms require hierarchical filters based on wavelets, which are therefore examined as well. Finally, the quality of the achieved results is analyzed, as the accuracy of graphics-card-based approaches is limited by register sizes and framebuffer depths.
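To make the notion of a hierarchical, wavelet-based filter concrete, the following is a minimal sketch of a one-level 1-D Haar decomposition, the simplest filter of this family. The function names are illustrative only; the filters discussed in this thesis run on graphics hardware and in higher dimensions.

```python
def haar_decompose(signal):
    """Split an even-length signal into coarse averages and detail coefficients."""
    coarse = [(a + b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]
    return coarse, detail

def haar_reconstruct(coarse, detail):
    """Invert the decomposition exactly (no quantization loss)."""
    signal = []
    for c, d in zip(coarse, detail):
        signal.extend([c + d, c - d])
    return signal
```

Applying `haar_decompose` recursively to the coarse part yields the full wavelet hierarchy; limited register and framebuffer precision on graphics hardware perturbs exactly these coefficients, which is why the result quality is analyzed separately.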
During rendering, hierarchical approaches allow for a compact representation of data with locally varying detail. Additionally, the user can trade visualization speed for quality. Sparse grids allow for extremely compact representations, so compact that the data interpolated from them would no longer fit into main memory. This raises the need for visualization algorithms that work directly on the sparse grid coefficients. The interpolation process is expensive, yet graphics hardware is usually too inaccurate to be used for acceleration. Thus, the rendering process is parallelized with MPI, using a ray distribution scheme that implicitly generates lower-resolution previews during rendering. For unstructured data, a compact hierarchical representation based on radial basis functions is introduced that can be rendered at interactive frame rates using graphics hardware.
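One way such a ray distribution can yield previews implicitly is to enumerate the rays in an interleaved order, so that every prefix of the order covers the image at a coarser but roughly uniform resolution. The sketch below illustrates this idea; it is an assumption for exposition, not necessarily the exact scheme used in the thesis.

```python
def interleaved_ray_order(width, height, levels=3):
    """Enumerate pixel (ray) coordinates so that every prefix of the
    returned order samples the image on a coarse regular subgrid,
    giving a lower-resolution preview before all rays are traced."""
    seen = set()
    order = []
    step = 1 << levels  # start with a coarse stride, then refine
    while step >= 1:
        for y in range(0, height, step):
            for x in range(0, width, step):
                if (x, y) not in seen:
                    seen.add((x, y))
                    order.append((x, y))
        step //= 2
    return order
```

Distributing consecutive chunks of this order to MPI ranks means partial results always form a complete coarse image, so a preview is available at any point during rendering.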
Completely uncorrelated data, such as the results of astrophysical n-body simulations, have a high spatial resolution that is lost when resampling for volume rendering. A new hierarchical splatting approach is presented that can visualize tens of millions of points interactively, for both steady and time-dependent data sets.
The data representations used have different approximation capabilities; therefore, the properties of the different data encodings are analyzed by comparing several data sets that exhibit different amounts of features.
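Such comparisons need a quantitative error measure between the original data and its reconstruction from a given encoding; the root-mean-square error below is one standard choice (the thesis may well use additional or different metrics).

```python
import math

def rmse(original, approx):
    """Root-mean-square error between two equally sized sample sets."""
    assert len(original) == len(approx)
    return math.sqrt(
        sum((a - b) ** 2 for a, b in zip(original, approx)) / len(original)
    )
```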