The following students who worked with the ADL Project have completed their degrees.
In the past, for various reasons, GIS software has not been able to represent many of the complexities of the world and human perceptions of it. Along with concepts such as temporal change, three-dimensionality, and uncertainty inherent in reality and cognition, gradation plays an important role in how the world works. This dissertation creates a formal taxonomy for the digital and cartographic representation of geographical entities with gradual boundaries. Once the variety of causes and forms of gradation is understood, the representation of gradation in GIS is investigated. Several possible data models and structures are described, including some introduced elsewhere and some developed by the author, along with an evaluation of their effectiveness for particular applications. It is found that the varied characteristics of gradation lead to multiple representational solutions. Several of the proposed cartographic structures appear to be implementable in current GIS software, while others will require a total redesign.
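As one illustration of the kind of data model the abstract mentions, a
gradual boundary can be stored as a fuzzy-membership raster, in which each
cell records a degree of membership in the entity rather than a crisp
in/out value. The Python sketch below is a minimal, hypothetical example;
the function name, parameters, and linear transition are illustrative
assumptions, not structures from the dissertation.

    import numpy as np

    def gradual_boundary_raster(width, height, edge_col, transition):
        """Fuzzy-membership raster for an entity whose eastern boundary
        fades out over `transition` columns instead of a crisp line.

        Cells left of the edge have membership 1.0, cells beyond the
        transition zone have 0.0, and cells in between grade linearly.
        (Toy example; real models in the dissertation differ.)
        """
        cols = np.arange(width)
        membership = np.clip((edge_col + transition - cols) / transition,
                             0.0, 1.0)
        return np.tile(membership, (height, 1))

    # Example: a 10x20 raster whose entity fades out over columns 8-12.
    raster = gradual_boundary_raster(width=20, height=10,
                                     edge_col=8, transition=4)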
This dissertation focuses on algorithms for progressive-resolution image transmission, storage, and retrieval. The resulting methods are designed to support fast system response, to provide an intuitive image representation, and to achieve lossless image compression at competitive bit rates. Average interpolation subdivision (AIS) is introduced; it offers a sound mathematical framework for deriving algorithms that provide the required functionality. To proceed from AIS to image compression, a multiresolution analysis is added, which allows the construction of wavelet transforms and leads to AIS filter banks. For lossless image compression, reversible wavelet transforms (RWTs) are constructed: an interband prediction structure for AIS filter banks is first derived, rounding operations are then introduced, and RWTs are assembled. Reversible AIS filter banks deliver acceptable bit rates when applied to the lossless compression of monochrome images. A time-domain filter analysis confirms that AIS filters are nearly optimal given their structural constraints; better results are obtainable, but they require an enhanced interband prediction framework. If images of arbitrary size are to be ingested into a database, an efficient method is needed to process signal boundaries, and it is demonstrated that interband prediction offers an elegant and practical solution for this task. The reversible AIS filter banks are finally applied to the lossless compression of color images, which offer an excellent opportunity to design algorithms applicable to a much wider range of multispectral imagery. A new lossless compression method is proposed: it executes a reversible wavelet transform first and then applies adaptive spectral transforms to the associated color subbands. Very good results have been obtained at little added computational complexity; the included simulation results verify the rationale of this approach.
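The key trick behind any reversible wavelet transform is lifting with
rounding: because each lifting step adds a rounded function of one half of
the samples to the other half, it can be undone exactly in integer
arithmetic. The sketch below shows the idea with the simple S-transform
rather than the AIS filter banks themselves; it is a minimal illustration
under that substitution, not the dissertation's construction.

    import numpy as np

    def forward_s_transform(x):
        """One level of the reversible S-transform on an even-length
        integer signal: predict odds from evens, then update evens."""
        even, odd = x[0::2], x[1::2]
        detail = odd - even              # prediction step (integer, exact)
        approx = even + detail // 2      # update step with floor rounding
        return approx, detail

    def inverse_s_transform(approx, detail):
        """Exact inverse: undo the update, then the prediction."""
        even = approx - detail // 2
        odd = detail + even
        x = np.empty(even.size * 2, dtype=even.dtype)
        x[0::2], x[1::2] = even, odd
        return x

    x = np.array([10, 12, 11, 15, 9, 8], dtype=np.int64)
    assert np.array_equal(inverse_s_transform(*forward_s_transform(x)), x)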
The realization of the need to deal with large-scale
systems as a whole and the availability of powerful computer
resources have fueled the interest for efficient monitoring
and modeling of such systems. This is particularly true in the
area of Earth System Science, which has motivated this work.
Such projects require a multicomputer environment, for reasons that
include very large computing-power requirements, financial
constraints that rule out supercomputers, and fault-tolerance
concerns. Furthermore, a distributed computing environment is more
natural since users and data generation are geographically
distributed.
In an Earth System Science computing environment, users should be
able to browse the data available, process them, insert new data and
correct or enhance data already available. A bottleneck is the
handling of image or image-like data -- the images are numerous,
and each image is itself large.
We propose a scheme that stores images and supports image browsing
efficiently. In particular, the scheme achieves good image
compression while supporting fast image subregion retrieval at
various resolutions. It is particularly suited for a distributed
server/client environment. Since new images are inserted, obsolete
ones deleted, and others updated, support is needed for maintaining
data coherence without sacrificing performance. For an
image server/client environment to be used as part of a distributed
system, which may include other servers that provide related
information, all these servers must appear as a single logical entity
to any user. We propose an algorithm that achieves this objective
by providing a shared memory abstraction, i.e., accesses to the
memories of the servers appear like accesses to a single memory module.
To achieve this efficiently, the algorithm dynamically
migrates/replicates/deletes data from/to servers in order to best
distribute them to satisfy the current pattern of users' accesses.
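A common way to support fast subregion retrieval at multiple resolutions,
consistent with what the abstract describes, is to store each image as a
tiled resolution pyramid, so that a browse request touches only the few
tiles covering the requested window at the requested level. The following
Python sketch is a hypothetical illustration; the tile size, the naive 2x
decimation, and the indexing scheme are assumptions, not the
dissertation's actual design.

    import numpy as np

    TILE = 256  # assumed tile edge length, in pixels

    def build_pyramid(image, levels):
        """Level 0 is full resolution; each level halves both axes."""
        pyramid = [image]
        for _ in range(1, levels):
            pyramid.append(pyramid[-1][::2, ::2])  # naive 2x decimation
        return pyramid

    def tiles_for_region(x0, y0, x1, y1, level):
        """Indices (row, col) of tiles covering a window at a level."""
        # Map full-resolution coordinates down to the requested level.
        x0, y0, x1, y1 = (v >> level for v in (x0, y0, x1, y1))
        return [(ty, tx)
                for ty in range(y0 // TILE, y1 // TILE + 1)
                for tx in range(x0 // TILE, x1 // TILE + 1)]

    # Browsing a 600x600 window at level 1 touches only four tiles.
    print(tiles_for_region(0, 0, 600, 600, level=1))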
Recent technological advances have made it possible to process and
store large amounts of image/video data. Perhaps the most impressive example
is the fast accumulation of image data in scientific applications such as
medical and satellite imagery. The internet is another excellent example
of a distributed database containing several million images. However,
to realize their full potential, tools need to be developed for
automated analysis and information extraction, and for intelligent
search in image databases.
We have investigated various techniques which facilitate content-based
image search and retrieval. A prototype system, called NETRA, which enables
the search of aerial photographs and natural color images has been
implemented on the web using the platform independent Java language. A
distinguishing aspect of this system is its incorporation of a robust
automated image segmentation algorithm that allows object- or region-based
search. Image segmentation significantly improves the quality of image
retrieval when images contain multiple complex objects. Images are
segmented into homogeneous regions at the time of ingest into the
database, and image attributes that represent each of these regions are
computed. This is the first time that image segmentation and
region-based search have been combined in a robust way and retrieval performance
demonstrated on a large image database.
In addition to image segmentation, other important components of the system
include feature representations for characterizing the color, texture, and
shape information, an approach to enhancing the retrieval performance by
learning the appropriate similarity measures in the image feature space,
and an image thesaurus model for image annotation and indexing. NETRA
allows users to search by image example. For instance, the user can retrieve
all images containing ``blue sky'' by specifying the color (blue) and
location (upper one-third) information. Images containing snow covered
peaks can be specified by selecting an example from the database and
choosing color and texture attributes for search. NETRA can be accessed
on the web at http://vivaldi.ece.ucsb.edu/Netra.
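To make the query-by-example idea concrete: each segmented region is
reduced to a feature vector at ingest time, and a query retrieves the
regions nearest to the example in that feature space. The Python sketch
below is a toy stand-in; the mean-color/variance descriptor and Euclidean
ranking are placeholder assumptions (NETRA's actual color, texture, and
shape features and its learned similarity measures are far richer).

    import numpy as np

    def region_features(region_pixels):
        """Toy descriptor for a segmented region: mean RGB color plus
        per-channel variance as a crude texture stand-in."""
        return np.concatenate([region_pixels.mean(axis=0),
                               region_pixels.var(axis=0)])

    def query_by_example(db_features, example_feature, k=10):
        """Rank stored regions by Euclidean distance to the example."""
        dists = np.linalg.norm(db_features - example_feature, axis=1)
        return np.argsort(dists)[:k]

    # db_features holds one precomputed row per region, built at ingest
    # time, so a query never touches the raw images.
    db_features = np.random.rand(1000, 6)
    example = region_features(np.random.rand(500, 3))
    print(query_by_example(db_features, example, k=5))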
WWW-based information service has grown enormously during the last few
years, and major performance bottlenecks have been caused by WWW server
and internet bandwidth inadequacies. Augmenting the server with
multiprocessor support and shifting computation to client-site machines
can substantially improve system response time and, for some
applications, reduce network bandwidth requirements. Taking
full advantage of these capabilities requires sophisticated scheduling.
We first investigate algorithms for scheduling HTTP requests within a
server cluster.
Typical distributed scheduling techniques for HTTP servers either use
a simple round-robin request distribution algorithm or consider only a
single criterion such as CPU load. We propose novel multifaceted
scheduling techniques that optimize the use of a multiprocessor server
by predicting the demands of requests on I/O, processor, and network
resources.
We present SWEB, a software system implementing our techniques on a
cluster of workstations, and provide a performance analysis under
simplified assumptions for understanding the impact of system loads
when using our scheduling strategies. Our experiments show substantial
improvements by our techniques compared to traditional algorithms, and
we observe a close correlation between our theoretical analysis and
the achieved results.
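The essence of the multifaceted approach is that each incoming request is
routed to the cluster node with the lowest predicted completion time,
where the prediction combines CPU, disk, and network terms rather than a
single load metric. The Python sketch below is a hypothetical cost model;
the field names and the additive form are illustrative assumptions, not
SWEB's actual formulas.

    from dataclasses import dataclass

    @dataclass
    class Node:
        cpu_load: float       # current CPU utilization (0..1)
        disk_queue: int       # pending disk requests
        net_bandwidth: float  # bytes/second available to clients

    @dataclass
    class Request:
        cpu_demand: float     # estimated seconds of CPU work
        io_demand: float      # estimated seconds of disk work
        reply_bytes: float    # estimated response size

    def predicted_time(node: Node, req: Request) -> float:
        """Additive estimate: each resource term is stretched by the
        node's current contention on that resource."""
        cpu = req.cpu_demand * (1.0 + node.cpu_load)
        disk = req.io_demand * (1.0 + node.disk_queue)
        net = req.reply_bytes / node.net_bandwidth
        return cpu + disk + net

    def schedule(nodes, req):
        """Send the request to the node with the lowest predicted time."""
        return min(nodes, key=lambda n: predicted_time(n, req))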
We then extend our techniques to adaptively incorporate client
resources. Due to wide variation in client capabilities and network
connection characteristics, the standard technique of partitioning
the client-server workload at a fixed point is infeasible.
We present a task model and scheduling technique
for adaptive client-server computing in which the computation
required by a user request is dynamically partitioned
between the client and server by monitoring
network, client and server resources.
We demonstrate the use of this technique in digital library applications
such as image and postscript document browsing.
We also present SWEB++, a software system
to support programmers desiring to use our scheduling algorithms.
Experimentally, we achieve significantly faster response times
through utilizing client resources, and demonstrate the validity
of our theoretical analysis.
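A minimal way to see the dynamic-partitioning decision: for each candidate
cut point in the task chain, predict end-to-end latency from the monitored
server speed, client speed, and link bandwidth, and pick the cheapest cut.
The sketch below is an assumed simplification of this idea (two processing
stages, illustrative names); the actual SWEB++ task model is more general.

    def best_cut(server_time, client_time, raw_bytes, cooked_bytes,
                 bandwidth):
        """Choose where to split a two-stage task (e.g., render, view).

        Cutting at the server ships the processed ("cooked") result;
        cutting at the client ships the raw data and processes it there.
        All inputs come from runtime monitoring of the three resources.
        """
        at_server = server_time + cooked_bytes / bandwidth
        at_client = raw_bytes / bandwidth + client_time
        return "server" if at_server <= at_client else "client"

    # A slow link favors server-side processing when the cooked result
    # is much smaller than the raw data (e.g., a rendered page image).
    print(best_cut(server_time=0.4, client_time=0.2,
                   raw_bytes=2e6, cooked_bytes=2e5, bandwidth=4e3))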