Vision, Image, and Signal Processing

VISP Publications

This is a list of the most representative publications for our group. Older publications, from 2008 back to 2003, are listed further down the page.

BibTeX references are provided for each publication separately; alternatively, all references can be downloaded as a single file.

Copyright Notice

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

Publications in 2009

BibTeX Reference

The effect of detector sampling in wavefront-coded imaging systems
@ARTICLE{Muyo:2009,
    author = {Muyo, G. and Harvey, A. R.},
    title = {The effect of detector sampling in wavefront-coded imaging systems},
    journal = {Journal of Optics A: Pure and Applied Optics},
    year = {2009},
    volume = {11},
    number = {5},
    month = {May},
    note = {Photon 2008 Conference, Edinburgh, Scotland, Aug 26-29, 2008},
    abstract = {We present a comprehensive study of the effect of detector sampling on wavefront-coded systems. Two important results are obtained: the spurious response ratios are reduced in wavefront-coded systems with a cubic phase mask, and detector sampling does not compromise the restoration of wavefront-coded images with extended depth of field. Rigorous computer simulation of sampled wavefront-coded images shows an increased signal-to-aliased-noise ratio of up to 16\% for a cubic phase mask with alpha = 5 lambda.},
    address = {Dirac House, Temple Back, Bristol BS1 6BE, England},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Muyo_TheEffectOfDetectorSamplingInWavefrontCodedImagingSystems.pdf},
    affiliation = {Heriot-Watt Univ, Sch Engn \& Phys Sci, Edinburgh EH14 4AS, Midlothian, Scotland},
    article-number = {054002},
    author-email = {a.r.harvey@hw.ac.uk},
    doi = {10.1088/1464-4258/11/5/054002},
    issn = {1464-4258},
    journal-iso = {J. Opt. A-Pure Appl. Opt.},
    keywords = {wavefront coding; detector sampling; aliasing; digital image processing},
    language = {English},
    organization = {UK Consortium Photon \& Opt},
    publisher = {IOP Publishing Ltd},
    unique-id = {ISI:000264796500003} }
Muyo, G. and Harvey, A. R., "The effect of detector sampling in wavefront-coded imaging systems", Journal of Optics A: Pure and Applied Optics, vol. 11, no. 5, May 2009.
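The cubic-phase-mask behaviour summarised in this abstract can be illustrated numerically. The sketch below is our own illustration, not the authors' code: it evaluates the 1-D optical transfer function (OTF) of a pupil carrying a cubic phase mask by direct numerical autocorrelation. The function name, the quadrature scheme, and the value alpha = 5 (echoing the abstract's alpha = 5 lambda) are assumptions made for illustration.

```python
import cmath
import math

def cubic_mask_otf(u, alpha, samples=2000):
    """1-D optical transfer function of a pupil with a cubic phase mask,
    P(x) = exp(i * 2*pi * alpha * x**3) on [-1, 1], via the normalised
    pupil autocorrelation OTF(u) = (1/2) * integral of
    P(x + u/2) * conj(P(x - u/2)) dx, so that OTF(0) = 1.
    u is spatial frequency in units where the incoherent cutoff is 2."""
    half = 1.0 - abs(u) / 2.0          # half-width of the pupil overlap
    if half <= 0.0:
        return 0.0 + 0.0j              # beyond cutoff: no overlap
    acc = 0.0 + 0.0j
    for k in range(samples):           # midpoint-rule quadrature
        x = -half + 2.0 * half * (k + 0.5) / samples
        # (x + u/2)**3 - (x - u/2)**3 == 3*x*x*u + u**3 / 4
        phase = 2.0 * math.pi * alpha * (3.0 * x * x * u + u ** 3 / 4.0)
        acc += cmath.exp(1j * phase)
    return acc * (2.0 * half / samples) / 2.0
```

With alpha = 0 this reproduces the unaberrated triangle MTF, 1 - |u|/2; with alpha = 5 the mid-frequency MTF drops well below the diffraction-limited value but becomes largely defocus-invariant, which is the restorable response that wavefront coding trades contrast for.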


Miniaturization of zoom lenses with a single moving element
@ARTICLE{Demenikov:2009,
    author = {Mads Demenikov and Ewan Findlay and Andrew R. Harvey},
    title = {Miniaturization of zoom lenses with a single moving element},
    journal = {Opt. Express},
    year = {2009},
    volume = {17},
    pages = {6118--6127},
    number = {8},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Demenikov_MiniaturizationOfZoomlensesWithASingleMovingElement.pdf},
    keywords = {Geometric optical design; Lens system design},
    publisher = {OSA} }
Mads Demenikov, Ewan Findlay and Andrew R. Harvey, "Miniaturization of zoom lenses with a single moving element", Opt. Express, vol. 17, no. 8, pp. 6118-6127, 2009.

Publications in 2008


Fast, robust, and faithful methods for detecting crest lines on meshes
@ARTICLE{Yoshizawa:2008,
    author = {Yoshizawa, Shin and Belyaev, Alexander and Yokota, Hideo and Seidel, Hans-Peter},
    title = {Fast, robust, and faithful methods for detecting crest lines on meshes},
    journal = {Computer Aided Geometric Design},
    year = {2008},
    volume = {25},
    pages = {545-560},
    number = {8, Sp. Iss. SI},
    month = {Nov},
    note = {15th Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2007), Maui, HI, Oct 29 - Nov 02, 2007},
    abstract = {The crest lines, salient subsets of the extrema of the principal curvatures over their corresponding curvature lines, are powerful shape descriptors which are widely used for shape matching, interrogation, and visualization purposes. In this paper, we develop fast, accurate, and reliable methods for detecting the crest lines on surfaces approximated by dense triangle meshes. The methods exploit intrinsic geometric properties of the curvature extrema and provide an inherent level-of-detail control of the detected crest lines. As an immediate application, we use the crest lines for adaptive mesh simplification purposes. (c) 2008 Elsevier B.V. All rights reserved.},
    adsurl = {http://dx.doi.org/10.1016/j.cagd.2008.06.008},
    doi = {10.1016/j.cagd.2008.06.008},
    issn = {0167-8396},
    unique-id = {ISI:000260733000002} }
Yoshizawa, Shin and Belyaev, Alexander and Yokota, Hideo and Seidel, Hans-Peter, "Fast, robust, and faithful methods for detecting crest lines on meshes", Computer Aided Geometric Design, vol. 25, no. 8 (Sp. Iss. SI), pp. 545-560, Nov 2008.
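A minimal 1-D analogue of the curvature-extremum ("crest") detection described above, on a polyline instead of a triangle mesh. The discrete curvature estimate (turning angle over mean segment length) and all names are our own illustrative choices, not the paper's mesh-based method.

```python
import math

def polyline_curvature(pts):
    """Discrete curvature at each interior vertex of a 2-D polyline:
    the turning angle between adjacent segments divided by the mean
    length of those segments."""
    ks = []
    for i in range(1, len(pts) - 1):
        ax, ay = pts[i][0] - pts[i - 1][0], pts[i][1] - pts[i - 1][1]
        bx, by = pts[i + 1][0] - pts[i][0], pts[i + 1][1] - pts[i][1]
        la, lb = math.hypot(ax, ay), math.hypot(bx, by)
        turn = math.atan2(ax * by - ay * bx, ax * bx + ay * by)
        ks.append(abs(turn) / (0.5 * (la + lb)))
    return ks

def crest_vertices(pts, threshold=0.0):
    """Interior vertex indices where curvature is a strict local maximum
    above threshold -- a toy analogue of crest-line detection, with the
    threshold playing the role of a level-of-detail control."""
    ks = polyline_curvature(pts)
    return [i + 1 for i in range(1, len(ks) - 1)
            if ks[i] > threshold and ks[i] > ks[i - 1] and ks[i] > ks[i + 1]]
```

On an L-shaped polyline the only crest vertex is the corner; raising the threshold suppresses weaker extrema, mimicking level-of-detail control.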


An improved deterministic SoS channel simulator for efficient simulation of multiple uncorrelated Rayleigh fading channels
@ARTICLE{Wang:2008b,
    author = {Cheng-Xiang Wang and Dongfeng Yuan and Hsiao-Hwa Chen and Wen Xu},
    title = {An improved deterministic SoS channel simulator for efficient simulation of multiple uncorrelated Rayleigh fading channels},
    journal = {IEEE Trans. Wireless Communications},
    year = {2008},
    volume = {7},
    number = {9},
    month = {September},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wang_AnImprovedDeterministicSoSChannelSimulatorForEfficientSimulationOfMultipleUncorrelatedRayleighFadingChannels.pdf} }
Cheng-Xiang Wang and Dongfeng Yuan and Hsiao-Hwa Chen and Wen Xu, "An improved deterministic SoS channel simulator for efficient simulation of multiple uncorrelated Rayleigh fading channels", IEEE Trans. Wireless Communications, vol. 7, no. 9, September 2008.
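As background to the title above: a generic deterministic sum-of-sinusoids (SoS) Rayleigh fading simulator can be sketched in a few lines. This is the classical construction (discrete Doppler frequencies chosen by the method of exact Doppler spreads), not the paper's improved variant for multiple uncorrelated channels; parameter names are ours.

```python
import math

def sos_rayleigh(num_samples, fs, f_d, n_sin=8):
    """Deterministic sum-of-sinusoids Rayleigh fading envelope.

    The in-phase and quadrature processes are sums of n_sin and
    n_sin + 1 sinusoids (unequal counts help decorrelate the two arms),
    with discrete Doppler frequencies f_n = f_d * sin(pi/(2N) * (n - 1/2)),
    as in the method of exact Doppler spreads. Each arm has unit power,
    so the mean squared envelope is 2."""
    env = []
    for k in range(num_samples):
        t = k / fs
        i_arm = sum(
            math.cos(2.0 * math.pi * f_d
                     * math.sin(math.pi / (2 * n_sin) * (n - 0.5)) * t)
            for n in range(1, n_sin + 1)) * math.sqrt(2.0 / n_sin)
        q_arm = sum(
            math.cos(2.0 * math.pi * f_d
                     * math.sin(math.pi / (2 * (n_sin + 1)) * (n - 0.5)) * t)
            for n in range(1, n_sin + 2)) * math.sqrt(2.0 / (n_sin + 1))
        env.append(math.hypot(i_arm, q_arm))
    return env
```

Simulating multiple uncorrelated channels (the paper's topic) would require a distinct set of Doppler frequencies per channel; this sketch generates a single channel.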


Image compression with anisotropic diffusion
@ARTICLE{Galic:2008,
    author = {Galic, Irena and Weickert, Joachim and Welk, Martin and Bruhn, Andres and Belyaev, Alexander and Seidel, Hans-Peter},
    title = {Image compression with anisotropic diffusion},
    journal = {Journal of Mathematical Imaging and Vision},
    year = {2008},
    volume = {31},
    pages = {255-269},
    number = {2-3},
    month = {Jul},
    abstract = {Compression is an important field of digital image processing where well-engineered methods with high performance exist. Partial differential equations (PDEs), however, have not been explored much in this context so far. In our paper we introduce a novel framework for image compression that makes use of the interpolation qualities of edge-enhancing diffusion. Although this anisotropic diffusion equation with a diffusion tensor was originally proposed for image denoising, we show that it outperforms many other PDEs when sparse scattered data must be interpolated. To exploit this property for image compression, we consider an adaptive triangulation method for removing less significant pixels from the image. The remaining points serve as scattered interpolation data for the diffusion process. They can be coded in a compact way that reflects the B-tree structure of the triangulation. We supplement the coding step with a number of amendments such as error threshold adaptation, diffusion-based point selection, and specific quantisation strategies. Our experiments illustrate the usefulness of each of these modifications. They demonstrate that for high compression rates, our PDE-based approach not only gives far better results than the widely used JPEG standard, but can even come close to the quality of the highly optimised JPEG2000 codec.},
    adsurl = {http://dx.doi.org/10.1007/s10851-008-0087-0},
    doi = {10.1007/s10851-008-0087-0},
    issn = {0924-9907},
    unique-id = {ISI:000256341200014} }
Galic, Irena and Weickert, Joachim and Welk, Martin and Bruhn, Andres and Belyaev, Alexander and Seidel, Hans-Peter, "Image compression with anisotropic diffusion", Journal of Mathematical Imaging and Vision, vol. 31, no. 2-3, pp. 255-269, Jul 2008.
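To make the diffusion idea concrete, here is one explicit step of scalar nonlinear (Perona-Malik-type) diffusion, a simpler scalar relative of the tensor-valued edge-enhancing diffusion the paper actually uses; the function name and parameter values are illustrative assumptions.

```python
import math

def nonlinear_diffusion_step(img, dt=0.2, kappa=10.0):
    """One explicit step of scalar nonlinear diffusion on a 2-D image
    (list of lists of floats). The diffusivity g(s) = exp(-(s/kappa)**2)
    shrinks towards zero across strong edges, so smoothing (and hence
    interpolation of missing data) happens mainly within regions.
    dt <= 0.25 keeps the explicit 4-neighbour scheme stable."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            flux = 0.0
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    grad = img[ny][nx] - img[y][x]
                    flux += math.exp(-(grad / kappa) ** 2) * grad
            out[y][x] = img[y][x] + dt * flux
    return out
```

Iterating such a step from a sparse set of kept pixels, holding those pixels fixed, is the flavour of PDE-based inpainting that the compression framework builds on.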


Adaptive feature-preserving non-local denoising of static and time-varying range data
@ARTICLE{Schall:2008,
    author = {Schall, Oliver and Belyaev, Alexander and Seidel, Hans-Peter},
    title = {Adaptive feature-preserving non-local denoising of static and time-varying range data},
    journal = {Computer-Aided Design},
    year = {2008},
    volume = {40},
    pages = {701-707},
    number = {6},
    month = {Jun},
    abstract = {We present a new method for noise removal on static and time-varying range data. Our approach predicts the restored position of a perturbed vertex using similar vertices in its neighborhood. It defines the required similarity measure in a new non-local fashion which compares regions of the surface instead of point pairs. This allows our algorithm to obtain a more accurate denoising result than previous state-of-the-art approaches and, at the same time, to better preserve fine features of the surface. Another interesting component of our method is that the neighborhood size is not constant over the surface but adapted close to the boundaries, which improves the denoising performance in those regions of the dataset. Furthermore, our approach is easy to implement, effective, and flexibly applicable to different types of scanned data. We demonstrate this on several static and interesting new time-varying datasets obtained using laser and structured light scanners. (C) 2008 Elsevier Ltd. All rights reserved.},
    adsurl = {http://dx.doi.org/10.1016/j.cad.2008.01.011},
    doi = {10.1016/j.cad.2008.01.011},
    issn = {0010-4485},
    unique-id = {ISI:000258989900005} }
Schall, Oliver and Belyaev, Alexander and Seidel, Hans-Peter, "Adaptive feature-preserving non-local denoising of static and time-varying range data", Computer-Aided Design, vol. 40, no. 6, pp. 701-707, Jun 2008.
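The non-local, region-comparing similarity idea in this abstract is easiest to see in one dimension. The following non-local means sketch for 1-D signals is our illustration only; the paper works on range data and adapts the neighbourhood size near boundaries, which this sketch does not.

```python
import math

def nl_means_1d(signal, patch=2, h=0.5):
    """Non-local means for a 1-D signal: each sample becomes a weighted
    average of *all* samples, where the weight compares the patches
    (regions) around the two samples rather than the point values.
    h controls how strict the patch-similarity test is."""
    n = len(signal)

    def at(i):                      # clamp indices at the boundaries
        return signal[min(max(i, 0), n - 1)]

    out = []
    for i in range(n):
        wsum, acc = 0.0, 0.0
        for j in range(n):
            d2 = sum((at(i + k) - at(j + k)) ** 2
                     for k in range(-patch, patch + 1))
            w = math.exp(-d2 / (h * h * (2 * patch + 1)))
            wsum += w
            acc += w * signal[j]
        out.append(acc / wsum)
    return out
```

Because weights come from whole-patch comparisons, samples are averaged mainly with samples whose surrounding regions look alike, which is what preserves features while suppressing noise.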


Adaptive fusion framework based on augmented reality training
@ARTICLE{Mignotte:2008,
    author = {Mignotte, P.-Y. and Coiras, E. and Rohou, H. and Petillot, Y. and Bell, J. and Lebart, K.},
    title = {Adaptive fusion framework based on augmented reality training},
    journal = {IET Radar, Sonar and Navigation},
    year = {2008},
    volume = {2},
    pages = {146-154},
    number = {2},
    month = {Apr},
    abstract = {A framework for the fusion of computer-aided detection and classification algorithms for side-scan imagery is presented. The framework is based on the Dempster-Shafer theory of evidence, which permits fusion of heterogeneous outputs of target detectors and classifiers. The utilisation of augmented reality for the training and evaluation of the algorithms used over a large test set permits the optimisation of their performance. In addition, this framework is adaptive regarding two aspects. First, it allows for the addition of contextual information to the decision process, giving more importance to the outputs of those algorithms that perform better in particular mission conditions. Secondly, the fusion parameters are optimised on-line to correct for mistakes which occur while deployed.},
    adsurl = {http://dx.doi.org/10.1049/iet-rsn:20070136},
    doi = {10.1049/iet-rsn:20070136},
    issn = {1751-8784},
    unique-id = {ISI:000255251600009} }
Mignotte, P.-Y. and Coiras, E. and Rohou, H. and Petillot, Y. and Bell, J. and Lebart, K., "Adaptive fusion framework based on augmented reality training", IET Radar, Sonar and Navigation, vol. 2, no. 2, pp. 146-154, Apr 2008.
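The fusion step rests on Dempster's rule of combination, which is compact enough to state in code. The sketch below is a generic implementation of that rule (hypotheses as frozensets, masses as dicts), not the paper's full framework; the example mass values in the note afterwards are invented for illustration.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.

    m1 and m2 map frozenset hypotheses (subsets of the frame of
    discernment) to belief mass. Mass falling on the empty set is the
    conflict between the two sources and is normalised away."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    norm = 1.0 - conflict
    return {hyp: mass / norm for hyp, mass in combined.items()}
```

For example, fusing a detector that assigns mass 0.7 to {mine} (rest to the whole frame) with a classifier that splits its mass over {mine}, {clutter}, and the frame concentrates the combined belief on the hypothesis both sources support.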


Design and evaluation of a reactive and deliberative collision avoidance and escape architecture for autonomous robots
@ARTICLE{Evans:2008,
    author = {Evans, Jonathan and Patron, Pedro and Smith, Ben and Lane, David M.},
    title = {Design and evaluation of a reactive and deliberative collision avoidance and escape architecture for autonomous robots},
    journal = {Autonomous Robots},
    year = {2008},
    volume = {24},
    pages = {247-266},
    number = {3},
    month = {Apr},
    abstract = {We present the design and evaluation of an architecture for collision avoidance and escape of mobile autonomous robots operating in unstructured environments. The approach mixes both reactive and deliberative components. This provides the vehicle's behavior designers with an explicit means to design in avoidance strategies that match system requirements in concepts of operations and for robot certification. The now traditional three-layer architecture is extended to include a fourth Scenario layer, where scripts describing specific responses are selected and parameterized on the fly. A local map is maintained using available sensor data, and adjacent objects are combined as they are observed. This has been observed to create safer trajectories. Objects have persistence and fade if not re-observed over time. In common with behavior-based approaches, a reactive layer is maintained containing pre-defined knee-jerk responses for extreme situations. The reactive layer can inhibit outputs from above. Path planning of updated goal point outputs from the Scenario layer is performed using a fast marching method made more efficient through lifelong planning techniques. The architecture is applied to applications with Autonomous Underwater Vehicles. Both simulated and open water tests are carried out to establish the performance and usefulness of the approach.},
    adsurl = {http://dx.doi.org/10.1007/s10514-007-9053-8},
    doi = {10.1007/s10514-007-9053-8},
    issn = {0929-5593},
    unique-id = {ISI:000253525400003} }
Evans, Jonathan and Patron, Pedro and Smith, Ben and Lane, David M., "Design and evaluation of a reactive and deliberative collision avoidance and escape architecture for autonomous robots", Autonomous Robots, vol. 24, no. 3, pp. 247-266, Apr 2008.


Shape from defocus via diffusion
@ARTICLE{Favaro:2008,
    author = {Favaro, Paolo and Soatto, Stefano and Burger, Martin and Osher, Stanley J.},
    title = {Shape from defocus via diffusion},
    journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
    year = {2008},
    volume = {30},
    pages = {518-531},
    number = {3},
    month = {Mar},
    abstract = {Defocus can be modeled as a diffusion process and represented mathematically using the heat equation, where image blur corresponds to the diffusion of heat. This analogy can be extended to nonplanar scenes by allowing a space-varying diffusion coefficient. The inverse problem of reconstructing 3D structure from blurred images corresponds to an ``inverse diffusion'' that is notoriously ill posed. We show how to bypass this problem by using the notion of relative blur. Given two images, within each neighborhood, the amount of diffusion necessary to transform the sharper image into the blurrier one depends on the depth of the scene. This can be used to devise a global algorithm to estimate the depth profile of the scene without recovering the deblurred image, using only forward diffusion.},
    adsurl = {http://dx.doi.org/10.1109/TPAMI.2007.1175},
    doi = {10.1109/TPAMI.2007.1175},
    issn = {0162-8828},
    unique-id = {ISI:000252286100012} }
Favaro, Paolo and Soatto, Stefano and Burger, Martin and Osher, Stanley J., "Shape from defocus via diffusion", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 3, pp. 518-531, Mar 2008.
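The "relative blur" idea can be sketched in 1-D. Gaussian convolution is the solution of the heat equation, so matching the extra blur between two observations using forward diffusion only (never deblurring) looks like the following; the function names, the candidate-sigma search, and the boundary handling are our illustrative assumptions, not the paper's algorithm.

```python
import math

def gaussian_blur_1d(signal, sigma):
    """Convolve a 1-D signal with a sampled Gaussian (reflected edges).
    Gaussian blur of width sigma equals running the heat equation for
    time t = sigma**2 / 2, which is the diffusion analogy used here."""
    radius = max(1, int(3.0 * sigma + 0.5))
    kernel = [math.exp(-0.5 * (i / sigma) ** 2)
              for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - radius
            if idx < 0:                  # reflect at the left boundary
                idx = -idx
            if idx >= n:                 # reflect at the right boundary
                idx = 2 * n - 2 - idx
            acc += w * signal[idx]
        out.append(acc)
    return out

def relative_blur(sharper, blurrier, candidate_sigmas):
    """Estimate the relative blur between two registered observations:
    forward-diffuse the sharper signal by each candidate sigma and pick
    the one that best matches the blurrier observation (least squared
    error). Depth then follows from the matched diffusion amount."""
    def cost(s):
        b = gaussian_blur_1d(sharper, s)
        return sum((x - y) ** 2 for x, y in zip(b, blurrier))
    return min(candidate_sigmas, key=cost)
```

In the paper this matching is done per neighborhood with a space-varying diffusion coefficient, turning the matched blur into a depth map; the sketch recovers a single global sigma.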


Cognitive radio network management: tuning in to real-time conditions
@ARTICLE{Wang:2008a,
    author = {Cheng-Xiang Wang and Hsiao-Hwa Chen and Xuemin Hong and Mohsen Guizani},
    title = {Cognitive radio network management: tuning in to real-time conditions},
    journal = {IEEE Vehicular Technology Magazine},
    year = {2008},
    volume = {3},
    pages = {28-35},
    number = {1},
    month = {March},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wang_CognitiveRadioNetworkManagementTuningInToRealTimeConditions.pdf} }
Cheng-Xiang Wang and Hsiao-Hwa Chen and Xuemin Hong and Mohsen Guizani, "Cognitive radio network management: tuning in to real-time conditions", IEEE Vehicular Technology Magazine, vol. 3, no. 1, pp. 28-35, March 2008.



Cross-layer design based on RC-LDPC codes in MIMO channels with estimation errors
@ARTICLE{Zhang:2008,
    author = {Zhang, Yuling and Yuan, Dongfeng and Wang, Cheng-Xiang},
    title = {Cross-layer design based on RC-LDPC codes in MIMO channels with estimation errors},
    journal = {AEU - International Journal of Electronics and Communications},
    year = {2008},
    volume = {62},
    pages = {659-665},
    number = {9},
    abstract = {In this paper, we propose a cross-layer design framework combining adaptive modulation and coding (AMC) with hybrid automatic repeat request (HARQ) based on rate-compatible low-density parity-check (RC-LDPC) codes in multiple-input multiple-output (MIMO) fading channels with estimation errors. First, we propose a new puncturing pattern for RC-LDPC codes and demonstrate that the new puncturing pattern performs similarly to random puncturing but is easier to apply. Then, we apply RC-LDPC codes with the new puncturing pattern to the cross-layer design combining AMC with ARQ over MIMO fading channels and derive expressions for the throughput of the system. The effect of channel estimation errors on the system throughput is also investigated. Numerical results show that the joint design of AMC and ARQ based on RC-LDPC codes can achieve considerable spectral efficiency gain. (C) 2007 Elsevier GmbH. All rights reserved.},
    adsurl = {http://dx.doi.org/10.1016/j.aeue.2007.08.009},
    doi = {10.1016/j.aeue.2007.08.009},
    issn = {1434-8411},
    unique-id = {ISI:000260169200002} }
Zhang, Yuling and Yuan, Dongfeng and Wang, Cheng-Xiang, "Cross-layer design based on RC-LDPC codes in MIMO channels with estimation errors", AEU - International Journal of Electronics and Communications, vol. 62, no. 9, pp. 659-665, 2008.


Secondary spectrum access networks: spatial modeling and system design
@ARTICLE{Hong:2008,
    author = {Xuemin Hong and Cheng-Xiang Wang and Hsiao-Hwa Chen and Yan Zhang},
    title = {Secondary spectrum access networks: spatial modeling and system design},
    journal = {IEEE Vehicular Technology Magazine},
    note = {Accepted for publication},
    year = {2008} }
Xuemin Hong and Cheng-Xiang Wang and Hsiao-Hwa Chen and Yan Zhang, "Secondary spectrum access networks: spatial modeling and system design", IEEE Vehicular Technology Magazine (accepted for publication), 2008.


Multilayered 3D LiDAR image construction using spatial models in a Bayesian framework
@ARTICLE{Wallace:2008c,
    author = {S. Hernandez-Marin and A. M. Wallace and G. J. Gibson},
    title = {Multilayered 3D LiDAR image construction using spatial models in a Bayesian framework},
    journal = {IEEE Trans. Pattern Analysis and Machine Intelligence},
    year = {2008},
    volume = {30},
    pages = {1028-1040},
    number = {6},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wallace_Multilayered3DLiDARImageConstructionUsingSpatialModelsInABayesianFramework.pdf} }
S. Hernandez-Marin and A. M. Wallace and G. J. Gibson, "Multilayered 3D LiDAR image construction using spatial models in a Bayesian framework", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 30, no. 6, pp. 1028-1040, 2008.


Array rotation aperture synthesis for short-range imaging at millimeter wavelengths
@ARTICLE{Lucotte:2008,
    author = {B. M. Lucotte and B. Grafulla-Gonzalez and A. R. Harvey},
    title = {Array rotation aperture synthesis for short-range imaging at millimeter wavelengths},
    journal = {Radio Science},
    note = {In press},
    year = {2008},
    adsurl = {http://www.agu.org/journals/pip/rs/2008RS003863-pip.pdf},
    doi = {10.1029/2008RS003863} }
B. M. Lucotte and B. Grafulla-Gonzalez and A. R. Harvey, "Array rotation aperture synthesis for short-range imaging at millimeter wavelengths", Radio Science (in press), 2008.


On analytical derivations of the condition number distributions of dual non-central Wishart matrices
@ARTICLE{Matthaiou:2008,
    author = {Michail Matthaiou and Dave I. Laurenson and Cheng-Xiang Wang},
    title = {On analytical derivations of the condition number distributions of dual non-central Wishart matrices},
    journal = {IEEE Trans. on Wireless Communications},
    note = {Accepted for publication},
    year = {2008} }
Michail Matthaiou and Dave I. Laurenson and Cheng-Xiang Wang, "On analytical derivations of the condition number distributions of dual non-central Wishart matrices", IEEE Trans. on Wireless Communications (accepted for publication), 2008.


Recovering surface reflectance and multiple light locations and intensities from image data
@ARTICLE{Wallace:2008b,
    author = {S. Xu and A.M. Wallace},
    title = {Recovering surface reflectance and multiple light locations and intensities from image data},
    journal = {Pattern Recognition Letters},
    year = {2008},
    volume = {29},
    pages = {1639-1647},
    number = {11},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wallace_RecoveringSurfaceReflectanceAndMultipleLightLocationsAndIntensitiesFromImageData.pdf} }
S. Xu and A. M. Wallace, "Recovering surface reflectance and multiple light locations and intensities from image data", Pattern Recognition Letters, vol. 29, no. 11, pp. 1639-1647, 2008.


Robust CDMA multiuser detectors: Probability-constrained versus the worst-case-based design
@ARTICLE{Vorobyov:2008,
    author = {Vorobyov, Sergiy A.},
    title = {Robust CDMA multiuser detectors: Probability-constrained versus the worst-case-based design},
    journal = {IEEE Signal Processing Letters},
    year = {2008},
    volume = {15},
    pages = {273-276},
    abstract = {In this letter, a robust code-division multiple-access (CDMA) multiuser detection technique which is based on the probability-constrained optimization approach is developed, and its relationship to the popular worst-case-based robust CDMA multiuser detection technique is established. The important advantage of the proposed probability-constrained optimization-based approach with respect to the worst-case-based design is that in the former approach, the parameter of the uncertainty region of the worst-case-based design is quantified in terms of the outage probability and second-order statistics of the user signature estimation error. A simulation example demonstrates the advantages of such a statistically motivated choice of the parameter of the uncertainty region.},
    adsurl = {http://dx.doi.org/10.1109/LSP.2008.916722},
    doi = {10.1109/LSP.2008.916722},
    issn = {1070-9908},
    unique-id = {ISI:000258585600071} }

BibTeX Reference

Wavefront-coded, Hybrid Imaging for the Alleviation of Optical Aberrations
@ARTICLE{Harvey:2008,
    author = {A. R. Harvey and Gonzalo Muyo},
    title = {Wavefront-coded, Hybrid Imaging for the Alleviation of Optical Aberrations},
    journal = {Encyclopedia of Materials article 2123},
    year = {2008} }

BibTeX Reference

Call admission control algorithms in OFDM-based wireless multiservice networks
@ARTICLE{Zhang:2008,
    author = {Yan Zhang and Yifan Chen and Jianhua He and Cheng-Xiang Wang and Athanasios V. Vasilakos},
    title = {Call admission control algorithms in OFDM-based wireless multiservice networks},
    journal = {Wireless Personal Communications},
    note = {accepted for publication},
    year = {2008} }

BibTeX Reference

SimBIL: appearance-based simulation of burst-illumination laser sequences
@ARTICLE{Wallace:2008a,
    author = {AF Nayak and E. Trucco and A. Ahmad and A.M. Wallace},
    title = {SimBIL: appearance-based simulation of burst-illumination laser sequences},
    journal = {IET Proceedings on Image Processing},
    year = {2008},
    volume = {2},
    pages = {165-174},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wallace_SimBILAppearanceBasedSimulationOfBurstIlluminationLaserSequences.pdf},
    number = {3} }

BibTeX Reference

Predicted detection performance of MIMO radar
@ARTICLE{Du:2008,
    author = {Du, Chaoran and Thompson, John S. and Petillot, Yvan},
    title = {{Predicted detection performance of MIMO radar}},
    journal = {{IEEE SIGNAL PROCESSING LETTERS}},
    year = {{2008}},
    volume = {{15}},
    pages = {{83-86}},
    abstract = {{It has been shown that multiple-input multiple-output (MIMO) radar systems can improve target detection performance significantly by exploiting the spatial diversity gain. We introduce the system model in which the radar target is composed of a finite number of small scatterers and derive the formula to evaluate the theoretical probability of detection for the system having an arbitrary array-target configuration. The results can be used to predict the detection performance of the actual MIMO radar without time-consuming simulations.}},
    adsurl = {http://dx.doi.org/10.1109/LSP.2007.910312},
    doi = {{10.1109/LSP.2007.910312}},
    issn = {{1070-9908}},
    unique-id = {{ISI:000258585600022}} }

Publications in 2007

BibTeX Reference

A correlation based double-directional stochastic channel model for multiple-antenna UWB systems
@ARTICLE{Wang:2007e,
    author = {Xuemin Hong and Cheng-Xiang Wang and Ben Allen and Wasim Malik},
    title = {A correlation based double-directional stochastic channel model for multiple-antenna UWB systems},
    journal = {IET Microwaves, Antennas \& Propagation},
    year = {2007},
    volume = {1},
    pages = {1182-1191},
    number = {6},
    month = {December},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wang_ACorrelationBasedDoubleDirectionalStochasticChannelModelForMultipleAntennaUWBSystems.pdf} }

BibTeX Reference

Subcentimeter depth resolution using a single-photon counting time-of-flight laser ranging system at 1550 nm wavelength
@ARTICLE{Warburton:2007,
    author = {Warburton, Ryan E. and McCarthy, Aongus and Wallace, Andrew M. and Hernandez-Marin, Sergio and Hadfield, Robert H. and Nam, Sae Woo and Buller, Gerald S.},
    title = {{Subcentimeter depth resolution using a single-photon counting time-of-flight laser ranging system at 1550 nm wavelength}},
    journal = {{OPTICS LETTERS}},
    year = {{2007}},
    volume = {{32}},
    pages = {{2266-2268}},
    number = {{15}},
    month = {{AUG 1}},
    abstract = {{We demonstrate subcentimeter depth profiling at a stand-off distance of 330 m using a time-of-flight approach based on time-correlated single-photon counting. For the first time to our knowledge, the photon-counting time-of-flight technique was demonstrated at a wavelength of 1550 nm using a superconducting nanowire single-photon detector. The performance achieved suggests that a system using superconducting detectors has the potential for low-light-level and eye-safe operation. The system's instrumental response was 70 ps full width at half-maximum, which meant that 1 cm surface-to-surface resolution could be achieved by locating the centroids of each return signal. A depth resolution of 4 mm was achieved by employing an optimized signal-processing algorithm based on a reversible jump Markov chain Monte Carlo method. (c) 2007 Optical Society of America.}},
    issn = {{0146-9592}},
    unique-id = {{ISI:000249087200066}} }
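
The centroid-based ranging step this abstract describes can be illustrated with a toy calculation; the histogram bins, counts, and helper name below are invented for illustration and are not from the paper:

```python
# Toy centroid-based time-of-flight ranging: the centroid of a photon
# timing histogram gives the round-trip time, and range = c * t / 2.
# All histogram values here are invented for illustration.
C = 299792458.0  # speed of light, m/s

def centroid_range_m(bin_times_ns, counts):
    """Range (m) from the centroid of a timing histogram (bins in ns)."""
    total = sum(counts)
    t_ns = sum(t * n for t, n in zip(bin_times_ns, counts)) / total
    return C * (t_ns * 1e-9) / 2.0

# A symmetric return centred at 2200 ns, i.e. roughly a 330 m stand-off.
times = [2200.0 + 0.004 * k for k in range(-5, 6)]  # 4 ps bin pitch
counts = [1, 3, 8, 15, 22, 25, 22, 15, 8, 3, 1]
rng_m = centroid_range_m(times, counts)  # about 329.77 m
```

Differencing the centroid ranges of two such returns gives the surface-to-surface separation the abstract refers to.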

BibTeX Reference

An integrated diagnostic architecture for autonomous underwater vehicles
@ARTICLE{Hamilton:2007,
    author = {Hamilton, K. and Lane, D. M. and Brown, K. E. and Evans, J. and Taylor, N. K.},
    title = {{An integrated diagnostic architecture for autonomous underwater vehicles}},
    journal = {{JOURNAL OF FIELD ROBOTICS}},
    year = {{2007}},
    volume = {{24}},
    pages = {{497-526}},
    number = {{6}},
    month = {{JUN}},
    abstract = {{The architecture of an advanced fault detection and diagnosis (FDD) system is described and applied with an Autonomous Underwater Vehicle (AUV). The architecture aims to provide a more capable system that does not require dedicated sensors for each fault, and can diagnose previously unforeseen failures and failures with cause-effect patterns across different subsystems. It also lays the foundations for incipient fault detection and condition-based maintenance schemes. A model of relationships is used as an ontology to describe the connected set of electrical, mechanical, hydraulic, and computing components that make up the vehicle, down to the level of the least replaceable unit in the field. The architecture uses a variety of domain-dependent diagnostic tools (rulebase, model-based methods) and domain-independent tools (correlator, topology analyzer, watcher) to first detect and then diagnose the location of faults. Tools nominate components, so that a rank order of most likely candidates can be generated. This modular approach allows existing proven FDD methods (e.g., vibration analysis, FMEA) to be incorporated and to add confidence to the conclusions. Illustrative performance is provided working in real time during deployments with the RAUVER hover-capable AUV as an example of the class of automated system to which this approach is applicable. (C) 2007 Wiley Periodicals, Inc.}},
    adsurl = {http://dx.doi.org/10.1002/rob.20202},
    doi = {{10.1002/rob.20202}},
    issn = {{1556-4959}},
    unique-id = {{ISI:000248167300007}} }

BibTeX Reference

On stochastic methods for surface reconstruction
@ARTICLE{Saleem:2007,
    author = {Saleem, Waqar and Schall, Oliver and Patane, Giuseppe and Belyaev, Alexander and Seidel, Hans-Peter},
    title = {{On stochastic methods for surface reconstruction}},
    journal = {{VISUAL COMPUTER}},
    year = {{2007}},
    volume = {{23}},
    pages = {{381-395}},
    number = {{6}},
    month = {{JUN}},
    abstract = {{In this article, we present and discuss three statistical methods for surface reconstruction. A typical input to a surface reconstruction technique consists of a large set of points that has been sampled from a smooth surface and contains uncertain data in the form of noise and outliers. We first present a method that filters out uncertain and redundant information yielding a more accurate and economical surface representation. Then we present two methods, each of which converts the input point data to a standard shape representation; the first produces an implicit representation while the second yields a triangle mesh.}},
    adsurl = {http://dx.doi.org/10.1007/s00371-006-0094-3},
    doi = {{10.1007/s00371-006-0094-3}},
    issn = {{0178-2789}},
    unique-id = {{ISI:000246277800001}} }

BibTeX Reference

Stochastic modeling and simulation of frequency-correlated wideband fading channels
@ARTICLE{Wang:2007d,
    author = {Cheng-Xiang Wang and Matthias P\'atzold and Qi Yao},
    title = {Stochastic modeling and simulation of frequency-correlated wideband fading channels},
    journal = {IEEE Trans. Vehicular Technology, Special Issue on Antenna Systems and Propagation for Future Wireless Communications},
    year = {2007},
    volume = {56},
    pages = {1050-1063},
    number = {3},
    month = {May},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wang_StochasticModelingAndSimulationOfFrequencyCorrelatedWidebandFadingChannels.pdf} }

BibTeX Reference

Error-guided adaptive Fourier-based surface reconstruction
@ARTICLE{Schall:2007,
    author = {Schall, Oliver and Belyaev, Alexander and Seidel, Hans-Peter},
    title = {{Error-guided adaptive Fourier-based surface reconstruction}},
    journal = {{COMPUTER-AIDED DESIGN}},
    year = {{2007}},
    volume = {{39}},
    pages = {{421-426}},
    number = {{5}},
    month = {{MAY}},
    note = {{4th International Conference on Geometric Modeling and Processing (GMP 2006), Pittsburgh, PA, JUL 26-28, 2006}},
    abstract = {{In this paper, we propose to combine Kazhdan's FFT-based approach to surface reconstruction from oriented points with adaptive subdivision and partition of unity blending techniques. This removes the main drawback of the FFT-based approach which is a high memory consumption for geometrically complex datasets. This allows us to achieve a higher reconstruction accuracy compared with the original global approach. Furthermore, our reconstruction process is guided by a global error control accomplished by computing the Hausdorff distance of selected input samples to intermediate reconstructions. The advantages of our surface reconstruction method also include a more robust surface restoration in regions where the surface folds back to itself. (c) 2007 Elsevier Ltd. All rights reserved.}},
    adsurl = {http://dx.doi.org/10.1016/j.cad.2007.02.005},
    doi = {{10.1016/j.cad.2007.02.005}},
    issn = {{0010-4485}},
    unique-id = {{ISI:000247222600010}} }

BibTeX Reference

Convergence analysis of the Gaussian mixture PHD filter
@ARTICLE{Clark:2007,
    author = {Clark, Daniel and Vo, Ba-Ngu},
    title = {{Convergence analysis of the Gaussian mixture PHD filter}},
    journal = {{IEEE TRANSACTIONS ON SIGNAL PROCESSING}},
    year = {{2007}},
    volume = {{55}},
    pages = {{1204-1212}},
    number = {{4}},
    month = {{APR}},
    abstract = {{The Gaussian mixture probability hypothesis density (PHD) filter was proposed recently for jointly estimating the time-varying number of targets and their states from a sequence of sets of observations without the need for measurement-to-track data association. It was shown that, under linear-Gaussian assumptions, the posterior intensity at any point in time is a Gaussian mixture. This paper proves uniform convergence of the errors in the algorithm and provides error bounds for the pruning and merging stages. In addition, uniform convergence results for the extended Kalman PHD Filter are given, and the unscented Kalman PHD Filter implementation is discussed.}},
    adsurl = {http://dx.doi.org/10.1109/TSP.2006.888886},
    doi = {{10.1109/TSP.2006.888886}},
    issn = {{1053-587X}},
    unique-id = {{ISI:000245452700002}} }
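
The pruning and merging stages whose error bounds this paper establishes can be illustrated with a minimal 1-D Gaussian mixture reduction sketch. The thresholds, mixture values, and function name below are invented; this shows the generic reduction scheme, not the paper's analysis:

```python
# Illustrative 1-D Gaussian mixture pruning and merging, the two
# approximation stages analysed in the paper. Thresholds are invented.

def prune_and_merge(mix, trunc=1e-2, merge_dist=1.0):
    """mix: list of (weight, mean, var). Returns a reduced mixture."""
    # Pruning: discard components with negligible weight.
    mix = [c for c in mix if c[0] > trunc]
    merged = []
    while mix:
        # Take the heaviest remaining component as the merge centre.
        i = max(range(len(mix)), key=lambda k: mix[k][0])
        w0, m0, v0 = mix.pop(i)
        group, rest = [(w0, m0, v0)], []
        # Merging: absorb components close to the centre (Mahalanobis test).
        for (w, m, v) in mix:
            (group if (m - m0) ** 2 / v <= merge_dist else rest).append((w, m, v))
        mix = rest
        # Moment-matched replacement component.
        W = sum(w for w, _, _ in group)
        M = sum(w * m for w, m, _ in group) / W
        V = sum(w * (v + (m - M) ** 2) for w, m, v in group) / W
        merged.append((W, M, V))
    return merged

mixture = [(0.005, -3.0, 0.5), (0.6, 0.0, 0.5), (0.3, 0.1, 0.5), (0.4, 5.0, 0.5)]
reduced = prune_and_merge(mixture)  # 4 components -> 2
```

Here the negligible component is pruned and the two overlapping components near zero are merged, while the well-separated one survives unchanged.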

BibTeX Reference

Path planning for autonomous underwater vehicles
@ARTICLE{Petres:2007,
    author = {Petres, Clement and Pailhas, Yan and Patron, Pedro and Petillot, Yvan and Evans, Jonathan and Lane, David},
    title = {{Path planning for autonomous underwater vehicles}},
    journal = {{IEEE TRANSACTIONS ON ROBOTICS}},
    year = {{2007}},
    volume = {{23}},
    pages = {{331-341}},
    number = {{2}},
    month = {{APR}},
    note = {{IJCAI Workshop on Planning and Learning in a Priori Unknown or Dynamic Domains, Edinburgh, SCOTLAND, AUG, 2005}},
    abstract = {{Efficient path-planning algorithms are a crucial issue for modern autonomous underwater vehicles. Classical path-planning algorithms in artificial intelligence are not designed to deal with wide continuous environments prone to currents. We present a novel Fast Marching (FM)-based approach to address the following issues. First, we develop an algorithm we call FM{*} to efficiently extract a 2-D continuous path from a discrete representation of the environment. Second, we take underwater currents into account thanks to an anisotropic extension of the original FM algorithm. Third, the vehicle turning radius is introduced as a constraint on the optimal path curvature for both isotropic and anisotropic media. Finally, a multiresolution method is introduced to speed up the overall path-planning process.}},
    adsurl = {http://dx.doi.org/10.1109/TRO.2007.895057},
    doi = {{10.1109/TRO.2007.895057}},
    issn = {{1552-3098}},
    unique-id = {{ISI:000245904500013}} }
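
The paper's FM* algorithm extracts continuous paths from a discrete cost map via Fast Marching; as a loose discrete stand-in for that cost-map idea (plain Dijkstra on a 4-connected grid, not the authors' method, with an invented cost map), a sketch looks like:

```python
# Toy grid path planner over a 2-D cost map. This is ordinary Dijkstra,
# only illustrating cheapest-path extraction from a discrete environment;
# the paper's FM* works on a continuous Fast Marching formulation.
import heapq

def grid_path(cost, start, goal):
    """Cheapest 4-connected path; entering a cell pays its cost."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# High-cost band (an "obstacle", e.g. a strong adverse current region).
cost = [[1, 1, 1, 1],
        [1, 9, 9, 1],
        [1, 9, 9, 1],
        [1, 1, 1, 1]]
route = grid_path(cost, (0, 0), (3, 3))  # routes around the 9-cost cells
```
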

BibTeX Reference

Sparse-periodic hybrid array beamformer
@ARTICLE{Wilson:2007,
    author = {Wilson, M. J. and McHugh, R.},
    title = {{Sparse-periodic hybrid array beamformer}},
    journal = {{IET RADAR SONAR AND NAVIGATION}},
    year = {{2007}},
    volume = {{1}},
    pages = {{116-123}},
    number = {{2}},
    month = {{APR}},
    abstract = {{A novel approach to the design of spatial arrays is presented. The hybrid array beamformer consists of a sparse array in which some of the elements are arranged periodically, thus creating a periodic sub-array. The outputs of the sparse and periodic arrays are then fused to create a beampattern with good resolution and low peak sidelobe levels. This approach is seen as matching the performance of standard periodic arrays but with a significant saving in terms of the required number of elements. Applications include sonar, radar and ultrasonic systems.}},
    adsurl = {http://dx.doi.org/10.1049/iet-rsn:20060051},
    doi = {{10.1049/iet-rsn:20060051}},
    issn = {{1751-8784}},
    unique-id = {{ISI:000248711200004}} }

BibTeX Reference

Towards resource-certified software: a formal cost model for time and its application to an image processing example
@ARTICLE{Wallace:2007b,
    author = {Armelle Bonenfant and Zezhi Chen and Kevin Hammond and Greg Michaelson and Andy Wallace and Iain Wallace},
    title = {Towards resource-certified software: a formal cost model for time and its application to an image processing example},
    journal = {ACM Symposium on Applied Computing (SAC '07), Seoul, Korea},
    year = {2007},
    volume = {13},
    pages = {1006-1015},
    number = {4},
    month = {March},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wallace_TowardsResourceCertifiedSoftwareAFormalCostModelForTimeAndItsApplicationToAnImageProcessingExample.pdf},
    day = {11} }

BibTeX Reference

Accurate and efficient simulation of multiple uncorrelated Rayleigh fading waveforms
@ARTICLE{Wang:2007b,
    author = {Cheng-Xiang Wang and Matthias P\'atzold and Dongfeng Yuan},
    title = {Accurate and efficient simulation of multiple uncorrelated Rayleigh fading waveforms},
    journal = {IEEE Trans. Wireless Communications},
    year = {2007},
    volume = {6},
    pages = {833-839},
    number = {3},
    month = {March},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wang_AccurateAndEfficientSimulationOfMultipleUncorrelatedRayleighFadingWaveforms.pdf} }
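
The paper concerns sum-of-sinusoids simulation of multiple uncorrelated Rayleigh fading waveforms. A generic Monte Carlo sum-of-sinusoids generator (a textbook construction with invented parameters, not the parameter-computation method the paper proposes) can be sketched as:

```python
# Generic sum-of-sinusoids Rayleigh fading generator. Each waveform is a
# sum of equal-power complex sinusoids whose Doppler frequencies are
# bounded by the maximum Doppler shift f_d. Parameters are illustrative.
import cmath
import math
import random

def rayleigh_waveform(n_samples, fs_hz, f_d_hz, n_sin=32, seed=0):
    """Complex fading envelope with unit average power in expectation."""
    rng = random.Random(seed)
    # Random angles of arrival set the Doppler shift of each sinusoid.
    angles = [2 * math.pi * rng.random() for _ in range(n_sin)]
    phases = [2 * math.pi * rng.random() for _ in range(n_sin)]
    scale = 1.0 / math.sqrt(n_sin)
    env = []
    for n in range(n_samples):
        t = n / fs_hz
        g = sum(cmath.exp(1j * (2 * math.pi * f_d_hz * math.cos(a) * t + p))
                for a, p in zip(angles, phases))
        env.append(scale * g)
    return env

# Different seeds give approximately uncorrelated waveforms.
env1 = rayleigh_waveform(1000, fs_hz=1000.0, f_d_hz=50.0, seed=1)
env2 = rayleigh_waveform(1000, fs_hz=1000.0, f_d_hz=50.0, seed=2)
```

The envelope magnitude |g(t)| is approximately Rayleigh distributed once the number of sinusoids is large enough for the central limit theorem to apply.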

BibTeX Reference

A new class of generative models for burst error characterization in digital wireless channels
@ARTICLE{Wang:2007c,
    author = {Cheng-Xiang Wang and Wen Xu},
    title = {A new class of generative models for burst error characterization in digital wireless channels},
    journal = {IEEE Trans. Communications},
    year = {2007},
    volume = {55},
    pages = {453-462},
    number = {3},
    month = {March},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wang_ANewClassOfGenerativeModelsForBurstErrorCharacterizationInDigitalWirelessChannels.pdf} }

BibTeX Reference

A study on the PAPRs in multicarrier modulation systems with different orthogonal bases
@ARTICLE{Zhang:2007,
    author = {Zhang, Haixia and Yuan, Dongfeng and Wang, Cheng-Xiang},
    title = {{A study on the PAPRs in multicarrier modulation systems with different orthogonal bases}},
    journal = {{WIRELESS COMMUNICATIONS \& MOBILE COMPUTING}},
    year = {{2007}},
    volume = {{7}},
    pages = {{311-318}},
    number = {{3}},
    month = {{MAR}},
    abstract = {{This paper studies the peak-to-average power ratios (PAPRs) in multicarrier modulation (MCM) systems with seven different orthogonal bases, one Fourier base and six wavelet bases. It is shown by simulation results that the PAPRs of the Fourier-based MCM system are lower than those of all wavelet-based MCM (WMCM) systems. A novel threshold-based PAPR reduction method is then proposed to reduce the PAPRs in WMCM systems. Both numerical and simulation results indicate that the proposed PAPR reduction method works very effectively in WMCM systems. Copyright (C) 2006 John Wiley \& Sons, Ltd.}},
    adsurl = {http://dx.doi.org/10.1002/wcm.327},
    doi = {{10.1002/wcm.327}},
    issn = {{1530-8669}},
    unique-id = {{ISI:000244929100005}} }
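
The PAPR metric this paper compares across bases can be computed for the Fourier (OFDM-style) base with a direct oversampled IDFT; a minimal sketch, with the block size, oversampling factor, and subcarrier data invented for illustration:

```python
# Peak-to-average power ratio (PAPR) of one multicarrier symbol,
# the quantity studied in the paper, illustrated for the Fourier base
# via a direct (slow but dependency-free) oversampled IDFT.
import cmath
import math

def papr_db(symbols, oversample=4):
    """PAPR (dB) of the IDFT of one block of subcarrier symbols."""
    n = len(symbols)
    m = n * oversample                      # zero-padded IDFT length
    padded = list(symbols) + [0] * (m - n)
    powers = []
    for t in range(m):
        x = sum(padded[k] * cmath.exp(2j * math.pi * k * t / m)
                for k in range(m)) / m
        powers.append(abs(x) ** 2)
    avg = sum(powers) / m
    return 10 * math.log10(max(powers) / avg)

# All-ones subcarriers: every carrier adds coherently at t = 0, giving
# the worst-case PAPR of 10*log10(N) for N carriers (12.04 dB for N=16).
worst = papr_db([1] * 16)
```

Scrambling or coding the subcarrier data breaks this coherent alignment, which is the motivation behind threshold-style PAPR reduction schemes such as the one the paper proposes for wavelet bases.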

BibTeX Reference

Detection and tracking of multiple metallic objects in millimetre-wave images
@ARTICLE{Haworth:2007,
    author = {Haworth, C. D. and De Saint-Pern, Y. and Clark, D. and Trucco, E. and Petillot, Y. R.},
    title = {{Detection and tracking of multiple metallic objects in millimetre-wave images}},
    journal = {{INTERNATIONAL JOURNAL OF COMPUTER VISION}},
    year = {{2007}},
    volume = {{71}},
    pages = {{183-196}},
    number = {{2}},
    month = {{FEB}},
    abstract = {{In this paper we present a system for the automatic detection and tracking of metallic objects concealed on moving people in sequences of millimetre-wave (MMW) images. The millimetre-wave sensor employed has been demonstrated for use in covert detection because of its ability to see through clothing, plastics and fabrics. The system employs two distinct stages: detection and tracking. In this paper a single detector, for metallic objects, is presented which utilises a statistical model also developed in this paper. The second stage tracks the target locations of the objects using a Probability Hypothesis Density filter. The advantage of this filter is that it has the ability to track a variable number of targets, estimating both the number of targets and their locations. This avoids the need for data association techniques as the identities of the individual targets are not required. Results are presented for both simulations and real millimetre-wave image test sequences demonstrating the benefits of our system for the automatic detection and tracking of metallic objects.}},
    adsurl = {http://dx.doi.org/10.1007/s11263-006-6275-8},
    doi = {{10.1007/s11263-006-6275-8}},
    issn = {{0920-5691}},
    unique-id = {{ISI:000241501300005}} }

BibTeX Reference

Supervised target detection and classification by training on augmented reality data
@ARTICLE{Coiras:2007,
    author = {Coiras, E. and Mignotte, P.-Y. and Petillot, Y. and Bell, J. and Lebart, K.},
    title = {{Supervised target detection and classification by training on augmented reality data}},
    journal = {{IET RADAR SONAR AND NAVIGATION}},
    year = {{2007}},
    volume = {{1}},
    pages = {{83-90}},
    number = {{1}},
    month = {{FEB}},
    abstract = {{A proof of concept for a model-less target detection and classification system for side-scan imagery is presented. The system is based on a supervised approach that uses augmented reality (AR) images for training computer aided detection and classification (CAD/CAC) algorithms, which are then deployed on real data. The algorithms are able to generalise and detect real targets when trained on AR ones, with performances comparable with the state-of-the-art in CAD/CAC. To illustrate the approach, the focus is on one specific algorithm, which uses Bayesian decision and the novel, purpose-designed central filter feature extractors. Depending on how the training database is partitioned, the algorithm can be used either for detection or classification. Performance figures for these two modes of operation are presented, both for synthetic and real targets. Typical results show a detection rate of more than 95\% and a false alarm rate of less than 5\%. The proposed supervised approach can be directly applied to train and evaluate other learning algorithms and data representations. In fact, a most important aspect is that it enables the use of a wealth of legacy pattern recognition algorithms for the sonar CAD/CAC applications of target detection and target classification.}},
    adsurl = {http://dx.doi.org/10.1049/iet-rsn:20060098},
    doi = {{10.1049/iet-rsn:20060098}},
    issn = {{1751-8784}},
    unique-id = {{ISI:000248759100011}} }
A proof of concept for a model-less target detection and classification system for side-scan imagery is presented. The system is based on a supervised approach that uses augmented reality (AR) images for training computer aided detection and classification (CAD/CAC) algorithms, which are then deployed on real data. The algorithms are able to generalise and detect real targets when trained on AR ones, with performances comparable with the state-of-the-art in CAD/CAC. To illustrate the approach, the focus is on one specific algorithm, which uses Bayesian decision and the novel, purpose-designed central filter feature extractors. Depending on how the training database is partitioned, the algorithm can be used either for detection or classification. Performance figures for these two modes of operation are presented, both for synthetic and real targets. Typical results show a detection rate of more than 95% and a false alarm rate of less than 5%. The proposed supervised approach can be directly applied to train and evaluate other learning algorithms and data representations. In fact, a most important aspect is that it enables the use of a wealth of legacy pattern recognition algorithms for the sonar CAD/CAC applications of target detection and target classification.
Coiras, E. and Mignotte, P.-Y. and Petillot, Y. and Bell, J. and Lebart, K., "Supervised target detection and classification by training on augmented reality data", IET RADAR SONAR AND NAVIGATION, vol. 1, no. 1, pp. 83-90, FEB 2007.

BibTeX Reference

Multiresolution 3-D reconstruction from side-scan sonar images
@ARTICLE{Coiras:2007b,
    author = {Coiras, Enrique and Petillot, Yvan and Lane, David M.},
    title = {{Multiresolution 3-D reconstruction from side-scan sonar images}},
    journal = {{IEEE TRANSACTIONS ON IMAGE PROCESSING}},
    year = {{2007}},
    volume = {{16}},
    pages = {{382-390}},
    number = {{2}},
    month = {{FEB}},
    abstract = {{In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.}},
    adsurl = {http://dx.doi.org/10.1109/TIP.2006.888337},
    doi = {{10.1109/TIP.2006.888337}},
    issn = {{1057-7149}},
    unique-id = {{ISI:000243619200008}} }
In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.
Coiras, Enrique and Petillot, Yvan and Lane, David M., "Multiresolution 3-D reconstruction from side-scan sonar images", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 16, no. 2, pp. 382-390, FEB 2007.
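The Lambertian diffuse model at the heart of this reconstruction relates pixel intensity to the angle between the seabed normal and the sonar ray. A minimal sketch of that forward model (illustrative only; the function name, arguments and constants are not the authors' code):

```python
import numpy as np

def lambertian_intensity(reflectivity, beam_gain, normal, ray):
    """Diffuse (Lambertian) model: I = Phi * R * cos(theta),
    where theta is the angle between the surface normal and the sonar ray."""
    normal = normal / np.linalg.norm(normal)
    ray = ray / np.linalg.norm(ray)
    cos_theta = max(0.0, float(np.dot(normal, ray)))  # no light from behind
    return beam_gain * reflectivity * cos_theta

# A patch facing the sonar head-on returns maximum intensity;
# an oblique patch returns less, which is what inversion exploits.
head_on = lambertian_intensity(0.8, 1.0, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
oblique = lambertian_intensity(0.8, 1.0, np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]))
```

Inverting this model for elevation, as the paper does, amounts to finding the surface normals (and hence heights) that best explain the observed intensities.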

BibTeX Reference

An equivalent roughness model for seabed backscattering at very high frequencies using a band-matrix approach
@ARTICLE{Wendelboe:2007,
    author = {Wendelboe, Gorm and Jacobsen, Finn and Bell, Judith M.},
    title = {{An equivalent roughness model for seabed backscattering at very high frequencies using a band-matrix approach}},
    journal = {{JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA}},
    year = {{2007}},
    volume = {{121}},
    pages = {{814-823}},
    number = {{2}},
    month = {{FEB}},
    abstract = {{This work concerns modeling of very high frequency (> 100 kHz) sonar images obtained from a sandy seabed. The seabed is divided into a discrete number of 1D height profiles. For each height profile the backscattered pressure is computed by an integral equation method for interface scattering between two homogeneous media as formulated by Chan {[}IEEE Trans. Antennas Propag. 46, 142-149 (1998)]. However, the seabed is inhomogeneous, and volume scattering is a major contributor to backscattering. The SAX99 experiments revealed that the density in the unconsolidated sediment within the first 5 mm exhibits a high spatial variation. For that reason, additional roughness is introduced: For each surface point a stochastic realization of the density along the vertical is generated, and the sediment depth at which the density has its maximum value will constitute the new height field value. The matrix of the full integral equation is reduced to a band matrix as the interaction between the point sources on the seabed is neglected from a certain range; this allows computations on long height profiles with lengths up to approximately 25 m (at 300 kHz). The equivalent roughness approach, combined with the band-matrix approach, agrees with SAX99 data at 300 kHz. (c) 2007 Acoustical Society of America.}},
    adsurl = {http://dx.doi.org/10.1121/1.2427127},
    doi = {{10.1121/1.2427127}},
    issn = {{0001-4966}},
    unique-id = {{ISI:000244113500013}} }
This work concerns modeling of very high frequency (> 100 kHz) sonar images obtained from a sandy seabed. The seabed is divided into a discrete number of 1D height profiles. For each height profile the backscattered pressure is computed by an integral equation method for interface scattering between two homogeneous media as formulated by Chan [IEEE Trans. Antennas Propag. 46, 142-149 (1998)]. However, the seabed is inhomogeneous, and volume scattering is a major contributor to backscattering. The SAX99 experiments revealed that the density in the unconsolidated sediment within the first 5 mm exhibits a high spatial variation. For that reason, additional roughness is introduced: For each surface point a stochastic realization of the density along the vertical is generated, and the sediment depth at which the density has its maximum value will constitute the new height field value. The matrix of the full integral equation is reduced to a band matrix as the interaction between the point sources on the seabed is neglected from a certain range; this allows computations on long height profiles with lengths up to approximately 25 m (at 300 kHz). The equivalent roughness approach, combined with the band-matrix approach, agrees with SAX99 data at 300 kHz. (c) 2007 Acoustical Society of America.
Wendelboe, Gorm and Jacobsen, Finn and Bell, Judith M., "An equivalent roughness model for seabed backscattering at very high frequencies using a band-matrix approach", JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 121, no. 2, pp. 814-823, FEB 2007.

BibTeX Reference

Enhanced performance photon-counting time-of-flight sensor
@ARTICLE{Warburton:2007,
    author = {Warburton, Ryan E. and McCarthy, Aongus and Wallace, Andrew M. and Hernandez-Marin, Sergio and Cova, Sergio and Lamb, Robert A. and Buller, Gerald S.},
    title = {{Enhanced performance photon-counting time-of-flight sensor}},
    journal = {{OPTICS EXPRESS}},
    year = {{2007}},
    volume = {{15}},
    pages = {{423-429}},
    number = {{2}},
    month = {{JAN 22}},
    abstract = {{We describe improvements to a time-of-flight sensor utilising the time-correlated single-photon counting technique employing a commercially-available silicon-based photon-counting module. By making modifications to the single-photon detection circuitry and the data analysis techniques, we experimentally demonstrate improved resolution between multiple scattering surfaces with a minimum resolvable separation of 1.7 cm at ranges in excess of several hundred metres. (c) 2007 Optical Society of America.}},
    issn = {{1094-4087}},
    unique-id = {{ISI:000244680600019}} }
We describe improvements to a time-of-flight sensor utilising the time-correlated single-photon counting technique employing a commercially-available silicon-based photon-counting module. By making modifications to the single-photon detection circuitry and the data analysis techniques, we experimentally demonstrate improved resolution between multiple scattering surfaces with a minimum resolvable separation of 1.7 cm at ranges in excess of several hundred metres. (c) 2007 Optical Society of America.
Warburton, Ryan E. and McCarthy, Aongus and Wallace, Andrew M. and Hernandez-Marin, Sergio and Cova, Sergio and Lamb, Robert A. and Buller, Gerald S., "Enhanced performance photon-counting time-of-flight sensor", OPTICS EXPRESS, vol. 15, no. 2, pp. 423-429, JAN 22 2007.

BibTeX Reference

Particle PHD filter multiple target tracking in sonar image
@ARTICLE{Clark:2007,
    author = {Clark, Daniel and Ruiz, Ioseba Tena and Petillot, Yvan and Bell, Judith},
    title = {{Particle PHD filter multiple target tracking in sonar image}},
    journal = {{IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS}},
    year = {{2007}},
    volume = {{43}},
    pages = {{409-416}},
    number = {{1}},
    month = {{JAN}},
    abstract = {{Two contrasting approaches for tracking multiple targets in multi-beam forward-looking sonar images are considered. The first approach is based on assigning a Kalman filter to each target and managing the measurements with gating and a measurement-to-track data association technique. The second approach uses the recently developed particle implementation of the multiple-target probability hypothesis density (PHD) filter and a target state estimate-to-track data association technique. The two approaches are implemented and compared on both simulated sonar and real forward-looking sonar data obtained from an Autonomous Underwater Vehicle (AUV) and demonstrate that the PHD filter with data association compares well with traditional approaches for multiple target tracking.}},
    adsurl = {http://ieeexplore.ieee.org/iel5/7/4194746/04194781.pdf?tp=&arnumber=4194781&isnumber=4194746},
    issn = {{0018-9251}},
    unique-id = {{ISI:000246780800032}} }
Two contrasting approaches for tracking multiple targets in multi-beam forward-looking sonar images are considered. The first approach is based on assigning a Kalman filter to each target and managing the measurements with gating and a measurement-to-track data association technique. The second approach uses the recently developed particle implementation of the multiple-target probability hypothesis density (PHD) filter and a target state estimate-to-track data association technique. The two approaches are implemented and compared on both simulated sonar and real forward-looking sonar data obtained from an Autonomous Underwater Vehicle (AUV) and demonstrate that the PHD filter with data association compares well with traditional approaches for multiple target tracking.
Clark, Daniel and Ruiz, Ioseba Tena and Petillot, Yvan and Bell, Judith, "Particle PHD filter multiple target tracking in sonar image", IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS, vol. 43, no. 1, pp. 409-416, JAN 2007.
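The first approach compared in this paper assigns one Kalman filter per target and gates candidate measurements before association. A bare-bones 1-D constant-velocity sketch of one predict/gate/update cycle (all matrices and thresholds illustrative, not the paper's tuning):

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=0.01, r=0.5, gate=9.21):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.
    The measurement z is rejected (gated out) if its normalised innovation
    distance exceeds `gate` (chi-square threshold)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # position-only measurement
    Q = q * np.eye(2)
    R = np.array([[r]])
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Gate: Mahalanobis distance of the innovation
    S = H @ P @ H.T + R
    nu = np.array([[z]]) - H @ x
    d2 = float(nu.T @ np.linalg.inv(S) @ nu)
    if d2 > gate:
        return x, P                          # measurement not associated
    # Update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ nu
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([[0.0], [1.0]]), np.eye(2)   # start at 0 moving at 1 unit/step
x, P = kalman_step(x, P, z=1.1)              # consistent measurement: accepted
```

The PHD filter sidesteps exactly this per-target bookkeeping by propagating a single intensity function over the state space.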

BibTeX Reference

Bayesian Analysis of Lidar Signals with Multiple Returns
@ARTICLE{Wallace:2007e,
    author = {Sergio Hernandez-Marin and Andrew M. Wallace and Gavin J. Gibson},
    title = {Bayesian Analysis of Lidar Signals with Multiple Returns},
    journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
    year = {2007},
    volume = {29},
    pages = {2170-2180},
    number = {12},
    address = {Los Alamitos, CA, USA},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/HernandezMarin_BayesianAnalysisOfLidarSignalsWithMultipleReturns.pdf},
    doi = {http://doi.ieeecomputersociety.org/10.1109/TPAMI.2007.1122},
    issn = {0162-8828},
    publisher = {IEEE Computer Society} }
Sergio Hernandez-Marin and Andrew M. Wallace and Gavin J. Gibson, "Bayesian Analysis of Lidar Signals with Multiple Returns", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 12, pp. 2170-2180, 2007.

BibTeX Reference

Automatic human behaviour recognition and explanation for CCTV video surveillance
@ARTICLE{Robertson:security,
    author = {N.M. Robertson and I.D. Reid and J.M. Brady},
    title = {Automatic human behaviour recognition and explanation for CCTV video surveillance},
    journal = {Security Journal},
    year = {2007},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/robertson_security_submission.pdf} }
N.M. Robertson and I.D. Reid and J.M. Brady, "Automatic human behaviour recognition and explanation for CCTV video surveillance", Security Journal, 2007.

BibTeX Reference

Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition
@ARTICLE{Wallace:2007c,
    author = {GS Buller and AM Wallace},
    title = {Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition},
    journal = {IEEE Journal on Selected Topics in Quantum Electronics},
    year = {2007},
    volume = {13},
    pages = {1006-1015},
    number = {4},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wallace_RangingAndThreeDimensionalImagingUsingTimeCorrelatedSinglePhotonCountingAndPointByPointAcquisition.pdf} }
GS Buller and AM Wallace, "Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition", IEEE Journal on Selected Topics in Quantum Electronics, vol. 13, no. 4, pp. 1006-1015, 2007.

BibTeX Reference

Perceived roughness of textured surfaces
@ARTICLE{Green:2007,
    author = {Green, Patrick R. and Padilla, Stefano M. and Drbohlav, Ondrej and Chantler, Mike J.},
    title = {{Perceived roughness of textured surfaces}},
    journal = {{PERCEPTION}},
    year = {{2007}},
    volume = {{36}},
    pages = {{305}},
    number = {{2}},
    issn = {{0301-0066}},
    unique-id = {{ISI:000245525200021}} }
Green, Patrick R. and Padilla, Stefano M. and Drbohlav, Ondrej and Chantler, Mike J., "Perceived roughness of textured surfaces", PERCEPTION, vol. 36, no. 2, pp. 305, 2007.

BibTeX Reference

Spatial temporal correlation properties of the 3GPP spatial channel model and the Kronecker MIMO channel model
@ARTICLE{Wang:2007a,
    author = {Cheng-Xiang Wang and Xuemin Hong and Hanguang Wu and Wen Xu},
    title = {Spatial temporal correlation properties of the 3GPP spatial channel model and the Kronecker MIMO channel model},
    journal = {EURASIP Journal on Wireless Communications and Networking, Special Issue on Space-Time Channel Modeling for Wireless Communications},
    year = {2007},
    volume = {2007},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wang_Spatial-TemporalCorrelationPropertiesOfThe3GPPSpatialChannelModelAndTheKroneckerMIMOChannelModel.pdf},
    articleid = {39871},
    doi = {10.1155/2007/39871} }
Cheng-Xiang Wang and Xuemin Hong and Hanguang Wu and Wen Xu, "Spatial temporal correlation properties of the 3GPP spatial channel model and the Kronecker MIMO channel model", EURASIP Journal on Wireless Communications and Networking, Special Issue on Space-Time Channel Modeling for Wireless Communications, vol. 2007, 2007.

BibTeX Reference

Active segmentation and adaptive tracking using level sets
@INPROCEEDINGS{Wallace:2007d,
    author = {Zezhi Chen and Andrew Wallace},
    title = {Active segmentation and adaptive tracking using level sets},
    booktitle = {Proceedings of the British Machine Vision Conference},
    year = {2007},
    pages = {920-929},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wallace_ActiveSegmentationAndAdaptiveTrackingUsingLevelSets.pdf} }
Zezhi Chen and Andrew Wallace, "Active segmentation and adaptive tracking using level sets", Proceedings of the British Machine Vision Conference, pp. 920-929, 2007.
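In level-set segmentation the contour is carried implicitly as the zero level of a function phi, and evolves by a PDE rather than by moving explicit points. A toy evolution step under a constant outward speed (illustrative only; not the authors' scheme, which couples the speed to image data):

```python
import numpy as np

def level_set_step(phi, speed=1.0, dt=0.2):
    """One explicit Euler step of d(phi)/dt = speed * |grad(phi)|.
    For a signed-distance phi (negative inside), a positive speed raises phi
    everywhere, so the zero contour - the segmented boundary - moves inward."""
    gy, gx = np.gradient(phi)
    return phi + dt * speed * np.hypot(gx, gy)

# Signed distance to a circle of radius 10 on a 33x33 grid.
y, x = np.mgrid[-16:17, -16:17]
phi = np.hypot(x, y) - 10.0
phi2 = level_set_step(phi)    # the interior region {phi < 0} shrinks slightly
```

The appeal of the implicit form is that the contour can split or merge during evolution without any special-case handling.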

BibTeX Reference

Evaluation of a hierarchical partitioned particle filter with action primitives
@INPROCEEDINGS{Wallace:2007a,
    author = {Zsolt Husz and Andrew Wallace and Patrick Green},
    title = {Evaluation of a hierarchical partitioned particle filter with action primitives},
    booktitle = {IEEE Conf. on Computer Vision and Pattern Recognition, Minnesota},
    year = {2007},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wallace_EvaluationOfAHierarchicalPartitionedParticleFilterWithActionPrimitives.pdf} }
Zsolt Husz and Andrew Wallace and Patrick Green, "Evaluation of a hierarchical partitioned particle filter with action primitives", IEEE Conf. on Computer Vision and Pattern Recognition, Minnesota, 2007.

BibTeX Reference

Skeleton-based variational mesh deformations
@ARTICLE{Yoshizawa:2007,
    author = {Yoshizawa, Shin and Belyaev, Alexander and Seidel, Hans-Peter},
    title = {{Skeleton-based variational mesh deformations}},
    journal = {{COMPUTER GRAPHICS FORUM}},
    year = {{2007}},
    volume = {{26}},
    pages = {{255-264}},
    number = {{3, Sp. Iss. SI}},
    abstract = {{In this paper a new free-form shape deformation approach is proposed. We combine a skeleton-based mesh deformation technique with discrete differential coordinates in order to create natural-looking global shape deformations. Given a triangle mesh, we first extract a skeletal mesh, a two-sided Voronoi-based approximation of the medial axis. Next the skeletal mesh is modified by free-form deformations. Then a desired global shape deformation is obtained by reconstructing the shape corresponding to the deformed skeletal mesh. The reconstruction is based on using discrete differential coordinates. Our method preserves fine geometric details and original shape thickness because of using discrete differential coordinates and skeleton-based deformations. We also develop a new mesh evolution technique which allows us to eliminate possible global and local self intersections of the deformed mesh while preserving fine geometric details. Finally, we present a multi-resolution version of our approach in order to simplify and accelerate the deformation process. In addition, interesting links between the proposed free-form shape deformation technique and classical and modern results in the differential geometry of sphere congruences are established and discussed.}},
    issn = {{0167-7055}},
    unique-id = {{ISI:000249660500006}} }
In this paper a new free-form shape deformation approach is proposed. We combine a skeleton-based mesh deformation technique with discrete differential coordinates in order to create natural-looking global shape deformations. Given a triangle mesh, we first extract a skeletal mesh, a two-sided Voronoi-based approximation of the medial axis. Next the skeletal mesh is modified by free-form deformations. Then a desired global shape deformation is obtained by reconstructing the shape corresponding to the deformed skeletal mesh. The reconstruction is based on using discrete differential coordinates. Our method preserves fine geometric details and original shape thickness because of using discrete differential coordinates and skeleton-based deformations. We also develop a new mesh evolution technique which allows us to eliminate possible global and local self intersections of the deformed mesh while preserving fine geometric details. Finally, we present a multi-resolution version of our approach in order to simplify and accelerate the deformation process. In addition, interesting links between the proposed free-form shape deformation technique and classical and modern results in the differential geometry of sphere congruences are established and discussed.
Yoshizawa, Shin and Belyaev, Alexander and Seidel, Hans-Peter, "Skeleton-based variational mesh deformations", COMPUTER GRAPHICS FORUM, vol. 26, no. 3, Sp. Iss. SI, pp. 255-264, 2007.
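The discrete differential coordinates used in the reconstruction step encode each vertex relative to its one-ring neighbourhood, so that local detail survives a global deformation. A minimal sketch with uniform (umbrella) weights (illustrative only; the paper's weighting and solver are more elaborate):

```python
import numpy as np

def differential_coords(vertices, neighbours):
    """Uniform-Laplacian differential coordinates:
    delta_i = v_i - mean of v_i's one-ring neighbours."""
    deltas = np.empty_like(vertices)
    for i, ring in enumerate(neighbours):
        deltas[i] = vertices[i] - vertices[ring].mean(axis=0)
    return deltas

# Tiny 2-D example: vertex 0 sits exactly at the centroid of its
# neighbours, so its differential coordinate vanishes.
V = np.array([[0.0, 0.0], [1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
ring = [np.array([1, 2, 3, 4]), np.array([0]), np.array([0]), np.array([0]), np.array([0])]
D = differential_coords(V, ring)
```

Reconstruction then runs the other way: given target differential coordinates and a few anchor vertices, solve a sparse linear system for vertex positions.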

BibTeX Reference

New spectral imaging techniques for blood oximetry in the retina - art. no. 66310L
@INPROCEEDINGS{Alabboud:2007,
    author = {Alabboud, Ied and Muyo, Gonzalo and Gorman, Alistair and Mordant, David and McNaught, Andrew and Petres, Clement and Petillot, Yvan R. and Harvey, Andrew R.},
    title = {{New spectral imaging techniques for blood oximetry in the retina - art. no. 66310L}},
    booktitle = {{NOVEL OPTICAL INSTRUMENTATION FOR BIOMEDICAL APPLICATIONS III}},
    year = {{2007}},
    editor = {{Depeursinge, CD}},
    volume = {{6631}},
    series = {{PROCEEDINGS OF THE SOCIETY OF PHOTO-OPTICAL INSTRUMENTATION ENGINEERS (SPIE)}},
    pages = {{66310L}},
    note = {{Conference on Novel Optical Instrumentation for Biomedical Applications III, Munich, GERMANY, JUN 17-19, 2007}},
    abstract = {{Hyperspectral imaging of the retina presents a unique opportunity for direct and quantitative mapping of retinal biochemistry - particularly of the vasculature where blood oximetry is enabled by the strong variation of absorption spectra with oxygenation. This is particularly pertinent both to research and to clinical investigation and diagnosis of retinal diseases such as diabetes, glaucoma and age-related macular degeneration. The optimal exploitation of hyperspectral imaging, however, presents a set of challenging problems, including: the poorly characterised and controlled optical environment of structures within the retina to be imaged; the erratic motion of the eyeball; and the compounding effects of the optical sensitivity of the retina and the low numerical aperture of the eye. We have developed two spectral imaging techniques to address these issues. We describe first a system in which a liquid crystal tuneable filter is integrated into the illumination system of a conventional fundus camera to enable time-sequential, random access recording of narrow-band spectral images. Image processing techniques are described to eradicate the artefacts that may be introduced by time-sequential imaging. In addition we describe a unique snapshot spectral imaging technique dubbed IRIS that employs polarising interferometry and Wollaston prism beam splitters to simultaneously replicate and spectrally filter images of the retina into multiple spectral bands onto a single detector array. Results of early clinical trials acquired with these two techniques together with a physical model which enables oximetry mapping are reported.}},
    isbn = {{978-0-8194-6775-1}},
    issn = {{0277-786X}},
    unique-id = {{ISI:000250954100019}} }
Hyperspectral imaging of the retina presents a unique opportunity for direct and quantitative mapping of retinal biochemistry - particularly of the vasculature where blood oximetry is enabled by the strong variation of absorption spectra with oxygenation. This is particularly pertinent both to research and to clinical investigation and diagnosis of retinal diseases such as diabetes, glaucoma and age-related macular degeneration. The optimal exploitation of hyperspectral imaging, however, presents a set of challenging problems, including: the poorly characterised and controlled optical environment of structures within the retina to be imaged; the erratic motion of the eyeball; and the compounding effects of the optical sensitivity of the retina and the low numerical aperture of the eye. We have developed two spectral imaging techniques to address these issues. We describe first a system in which a liquid crystal tuneable filter is integrated into the illumination system of a conventional fundus camera to enable time-sequential, random access recording of narrow-band spectral images. Image processing techniques are described to eradicate the artefacts that may be introduced by time-sequential imaging. In addition we describe a unique snapshot spectral imaging technique dubbed IRIS that employs polarising interferometry and Wollaston prism beam splitters to simultaneously replicate and spectrally filter images of the retina into multiple spectral bands onto a single detector array. Results of early clinical trials acquired with these two techniques together with a physical model which enables oximetry mapping are reported.
Alabboud, Ied and Muyo, Gonzalo and Gorman, Alistair and Mordant, David and McNaught, Andrew and Petres, Clement and Petillot, Yvan R. and Harvey, Andrew R., "New spectral imaging techniques for blood oximetry in the retina - art. no. 66310L", NOVEL OPTICAL INSTRUMENTATION FOR BIOMEDICAL APPLICATIONS III, Proceedings of SPIE, vol. 6631, pp. 66310L, 2007.

Publications in 2006

BibTeX Reference

Image processing techniques for metallic object detection with millimetre-wave images
@ARTICLE{Haworth:2006,
    author = {Haworth, C. D. and Petillot, Y. R. and Trucco, E.},
    title = {{Image processing techniques for metallic object detection with millimetre-wave images}},
    journal = {{PATTERN RECOGNITION LETTERS}},
    year = {{2006}},
    volume = {{27}},
    pages = {{1843-1851}},
    number = {{15, Sp. Iss. SI}},
    month = {{NOV}},
    note = {{IEEE International Symposium on Imaging for Crime Detection and Prevention, London, ENGLAND, JUN 07-08, 2005}},
    abstract = {{In this paper, we present a system for the automatic detection and tracking of metallic objects concealed on moving people in sequences of millimetre-wave images, which can penetrate clothing, plastics and fabrics. The subjects are required to enter one at a time and turn round slowly to ensure complete coverage for the scan. The system employs two distinct stages: detection and tracking. In this paper a single detector, for metallic objects, is presented which utilises a statistical model also developed in this paper. Target tracking is performed using a particle filter. Results are presented on real millimetre-wave image test sequences and indicate an excellent rate of success for threat identification. Encouraging results for target tracking are also reported. (c) 2006 Elsevier B.V. All rights reserved.}},
    adsurl = {http://dx.doi.org/10.1016/j.patrec.2006.02.003},
    doi = {{10.1016/j.patrec.2006.02.003}},
    issn = {{0167-8655}},
    unique-id = {{ISI:000241169800011}} }
In this paper, we present a system for the automatic detection and tracking of metallic objects concealed on moving people in sequences of millimetre-wave images, which can penetrate clothing, plastics and fabrics. The subjects are required to enter one at a time and turn round slowly to ensure complete coverage for the scan. The system employs two distinct stages: detection and tracking. In this paper a single detector, for metallic objects, is presented which utilises a statistical model also developed in this paper. Target tracking is performed using a particle filter. Results are presented on real millimetre-wave image test sequences and indicate an excellent rate of success for threat identification. Encouraging results for target tracking are also reported. (c) 2006 Elsevier B.V. All rights reserved.
Haworth, C. D. and Petillot, Y. R. and Trucco, E., "Image processing techniques for metallic object detection with millimetre-wave images", PATTERN RECOGNITION LETTERS, vol. 27, no. 15, Sp. Iss. SI, pp. 1843-1851, NOV 2006.
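Target tracking in this system uses a particle filter, whose characteristic step is resampling: concentrating particles where the posterior weight lies. A sketch of systematic resampling (illustrative only, not the paper's implementation):

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: map N normalised weights to N particle indices.
    Heavily weighted particles are duplicated; negligible ones die out."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one stratified draw
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                            # guard against round-off
    return np.searchsorted(cumulative, positions)

# One particle carries 70% of the weight, so it is sampled multiple times.
w = np.array([0.1, 0.1, 0.7, 0.1])
idx = systematic_resample(w, rng=np.random.default_rng(0))
```

Systematic resampling costs O(N) and has lower variance than drawing N independent multinomial samples, which is why it is a common default.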

BibTeX Reference

Adaptive OFDM techniques with one-bit-per-subcarrier channel-state feedback
@ARTICLE{Rong:2006,
    author = {Rong, Yue and Vorobyov, Sergiy A. and Gershman, Alex B.},
    title = {{Adaptive OFDM techniques with one-bit-per-subcarrier channel-state feedback}},
    journal = {{IEEE TRANSACTIONS ON COMMUNICATIONS}},
    year = {{2006}},
    volume = {{54}},
    pages = {{1993-2003}},
    number = {{11}},
    month = {{NOV}},
    note = {{IEEE Global Telecommunications Conference (GLOBECOM 04), Dallas, TX, NOV 29-DEC 03, 2004}},
    abstract = {{In the orthogonal frequency-division multiplexing (OFDM) scheme, some subcarriers may be subject to deep fading. Adaptive techniques can be applied to mitigate this effect if the channel-state information (CSI) is available at the transmitter. In this paper, we study the performance of an OFDM-based communication system whose transmitter has only one bit of CSI per subcarrier, obtained through a low-rate feedback. Three adaptive approaches are considered to exploit such a CSI feedback: adaptive subcarrier selection; adaptive power allocation (APA); and adaptive modulation selection (AMS). Under the conditions of a constant raw data rate and perfect feedback channel, the performance of these approaches is analyzed and compared in terms of raw bit-error rate. It is shown that one-bit CSI feedback can greatly enhance the system performance. Moreover, imperfections of the feedback channel are considered, and their impact on the performance of these techniques is studied. It is shown that by exploiting the knowledge that the feedback channel is imperfect, the performance of the APA and AMS techniques can be substantially improved.}},
    adsurl = {http://dx.doi.org/10.1109/TCOMM.2006.884841},
    doi = {{10.1109/TCOMM.2006.884841}},
    issn = {{0090-6778}},
    unique-id = {{ISI:000242689600014}} }
In the orthogonal frequency-division multiplexing (OFDM) scheme, some subcarriers may be subject to deep fading. Adaptive techniques can be applied to mitigate this effect if the channel-state information (CSI) is available at the transmitter. In this paper, we study the performance of an OFDM-based communication system whose transmitter has only one bit of CSI per subcarrier, obtained through a low-rate feedback. Three adaptive approaches are considered to exploit such a CSI feedback: adaptive subcarrier selection; adaptive power allocation (APA); and adaptive modulation selection (AMS). Under the conditions of a constant raw data rate and perfect feedback channel, the performance of these approaches is analyzed and compared in terms of raw bit-error rate. It is shown that one-bit CSI feedback can greatly enhance the system performance. Moreover, imperfections of the feedback channel are considered, and their impact on the performance of these techniques is studied. It is shown that by exploiting the knowledge that the feedback channel is imperfect, the performance of the APA and AMS techniques can be substantially improved.
Rong, Yue and Vorobyov, Sergiy A. and Gershman, Alex B., "Adaptive OFDM techniques with one-bit-per-subcarrier channel-state feedback", IEEE TRANSACTIONS ON COMMUNICATIONS, vol. 54, no. 11, pp. 1993-2003, NOV 2006.
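As a quick illustration of the adaptive power allocation (APA) the abstract describes, a toy one-bit-per-subcarrier scheme might look like the sketch below. The split rule and the `boost` and `threshold` parameters are invented for illustration and are not the optimized allocation derived in the paper.

```python
import numpy as np

def one_bit_apa(gains, p_total, boost=0.8, threshold=1.0):
    """Toy adaptive power allocation driven by one bit of CSI per
    subcarrier: each subcarrier reports whether its channel gain
    exceeds a threshold, and a fixed fraction of the power budget is
    shared among the 'good' subcarriers, the rest among the 'bad' ones.
    """
    g = np.asarray(gains, dtype=float)
    bits = (g >= threshold).astype(int)        # the one-bit feedback
    n_good = int(bits.sum())
    n_bad = len(g) - n_good
    power = np.empty_like(g)
    if n_good and n_bad:
        power[bits == 1] = boost * p_total / n_good
        power[bits == 0] = (1 - boost) * p_total / n_bad
    else:                                      # all bits equal: split evenly
        power[:] = p_total / len(g)
    return bits, power

bits, power = one_bit_apa([0.3, 1.4, 2.0, 0.7], p_total=4.0)
```

The total transmit power is conserved regardless of how the bits fall, which matches the constant-raw-data-rate setting studied in the paper.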

BibTeX Reference

Robust linear receivers for multiaccess space-time block-coded MIMO systems: A probabilistically constrained approach
@ARTICLE{Rong:2006b,
    author = {Rong, Yue and Vorobyov, Sergiy A. and Gershman, Alex B.},
    title = {{Robust linear receivers for multiaccess space-time block-coded MIMO systems: A probabilistically constrained approach}},
    journal = {{IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS}},
    year = {{2006}},
    volume = {{24}},
    pages = {{1560-1570}},
    number = {{8}},
    month = {{AUG}},
    note = {{13th IEEE Workshop on Statistical Signal Processing, Bordeaux, FRANCE, JUL 17-20, 2005}},
    abstract = {{Traditional multiuser receiver algorithms developed for multiple-input-multiple-output (MIMO) wireless systems are based on the assumption that the channel state information (CSI) is precisely known at the receiver. However, in practical situations, the exact CSI may be unavailable because of channel estimation errors and/or outdated training. In this paper, we address the problem of robustness of multiuser MIMO receivers against imperfect CSI and propose a new linear technique that guarantees the robustness against CSI errors with a certain selected probability. The proposed receivers are formulated as probabilistically constrained stochastic optimization problems. Provided that the CSI mismatch is Gaussian, each of these problems is shown to be convex and to have a unique solution. The fact that the CSI mismatch is Gaussian also makes it possible to convert the original stochastic problems to a more tractable deterministic form and to solve them using the second-order cone programming approach. Numerical simulations illustrate an improved robustness of the proposed receivers against CSI errors and validate their better flexibility as compared with the robust multiuser MIMO receivers based on the worst-case designs.}},
    adsurl = {http://dx.doi.org/10.1109/JSAC.2006.879379},
    doi = {{10.1109/JSAC.2006.879379}},
    issn = {{0733-8716}},
    unique-id = {{ISI:000239552800013}} }
Traditional multiuser receiver algorithms developed for multiple-input-multiple-output (MIMO) wireless systems are based on the assumption that the channel state information (CSI) is precisely known at the receiver. However, in practical situations, the exact CSI may be unavailable because of channel estimation errors and/or outdated training. In this paper, we address the problem of robustness of multiuser MIMO receivers against imperfect CSI and propose a new linear technique that guarantees the robustness against CSI errors with a certain selected probability. The proposed receivers are formulated as probabilistically constrained stochastic optimization problems. Provided that the CSI mismatch is Gaussian, each of these problems is shown to be convex and to have a unique solution. The fact that the CSI mismatch is Gaussian also makes it possible to convert the original stochastic problems to a more tractable deterministic form and to solve them using the second-order cone programming approach. Numerical simulations illustrate an improved robustness of the proposed receivers against CSI errors and validate their better flexibility as compared with the robust multiuser MIMO receivers based on the worst-case designs.
Rong, Yue and Vorobyov, Sergiy A. and Gershman, Alex B., "Robust linear receivers for multiaccess space-time block-coded MIMO systems: A probabilistically constrained approach", IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, vol. 24, no. 8, pp. 1560-1570, AUG 2006.
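The reduction of a Gaussian chance constraint to a deterministic second-order cone constraint, which the abstract alludes to, follows a standard pattern. In generic notation (the symbols $a$, $\delta$, $w$, $b$ below are illustrative placeholders, not the paper's variables):

```latex
\Pr\bigl\{ (a+\delta)^{T} w \ge b \bigr\} \ge p,
\qquad \delta \sim \mathcal{N}(0, C)
\quad\Longleftrightarrow\quad
a^{T} w - b \;\ge\; \Phi^{-1}(p)\, \bigl\| C^{1/2} w \bigr\|_{2},
```

where $\Phi^{-1}$ is the inverse standard normal CDF: since $\delta^{T} w \sim \mathcal{N}(0,\, w^{T} C w)$, the probability equals $\Phi\!\bigl((a^{T} w - b)/\|C^{1/2} w\|_{2}\bigr)$. For $p \ge 1/2$ the right-hand side is a second-order cone constraint in $w$, hence convex and solvable by standard SOCP solvers.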

BibTeX Reference

Convergence results for the particle PHD filter
@ARTICLE{Clark:2006,
    author = {Clark, Daniel Edward and Bell, Judith},
    title = {{Convergence results for the particle PHD filter}},
    journal = {{IEEE TRANSACTIONS ON SIGNAL PROCESSING}},
    year = {{2006}},
    volume = {{54}},
    pages = {{2652-2661}},
    number = {{7}},
    month = {{JUL}},
    abstract = {{Bayesian single-target tracking techniques can be extended to a multiple-target environment by viewing the multiple-target state as a random finite set, but evaluating the multiple-target posterior distribution is currently computationally intractable for real-time applications. A practical alternative to the optimal Bayes multitarget filter is the probability hypothesis density (PHD) filter, which propagates the first-order moment of the multitarget posterior instead of the posterior distribution itself. It has been shown that the PHD is the best-fit approximation of the multitarget posterior in an information-theoretic sense. The method avoids the need for explicit data association, as the target states are viewed as a single, global target state, and the identities of the targets are not part of the tracking framework. Sequential Monte Carlo approximations of the PHD using particle filter techniques have been implemented, showing the potential of this technique for real-time tracking applications. This paper presents mathematical proofs of convergence for the particle filtering algorithm and gives bounds for the mean-square error.}},
    adsurl = {http://dx.doi.org/10.1109/TSP.2006.874845},
    doi = {{10.1109/TSP.2006.874845}},
    issn = {{1053-587X}},
    unique-id = {{ISI:000238707900017}} }
Bayesian single-target tracking techniques can be extended to a multiple-target environment by viewing the multiple-target state as a random finite set, but evaluating the multiple-target posterior distribution is currently computationally intractable for real-time applications. A practical alternative to the optimal Bayes multitarget filter is the probability hypothesis density (PHD) filter, which propagates the first-order moment of the multitarget posterior instead of the posterior distribution itself. It has been shown that the PHD is the best-fit approximation of the multitarget posterior in an information-theoretic sense. The method avoids the need for explicit data association, as the target states are viewed as a single, global target state, and the identities of the targets are not part of the tracking framework. Sequential Monte Carlo approximations of the PHD using particle filter techniques have been implemented, showing the potential of this technique for real-time tracking applications. This paper presents mathematical proofs of convergence for the particle filtering algorithm and gives bounds for the mean-square error.
Clark, Daniel Edward and Bell, Judith, "Convergence results for the particle PHD filter", IEEE TRANSACTIONS ON SIGNAL PROCESSING, vol. 54, no. 7, pp. 2652-2661, JUL 2006.
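For readers unfamiliar with the PHD recursion, a minimal one-dimensional sketch of the particle PHD measurement update is given below. The Gaussian likelihood and all parameter values are illustrative choices, not taken from the paper; only the weight-update formula is the standard PHD update.

```python
import numpy as np

def phd_update(weights, particles, detections, p_d=0.9, clutter=1e-3, sigma=1.0):
    """One measurement-update step of a particle PHD filter (1-D sketch).

    Each particle weight w_i is scaled by
        (1 - p_d) + sum_z  p_d g(z|x_i) / (clutter + sum_j p_d g(z|x_j) w_j),
    where g is the measurement likelihood (Gaussian here).  The sum of
    the updated weights estimates the expected number of targets.
    """
    w = np.asarray(weights, dtype=float)
    x = np.asarray(particles, dtype=float)
    factor = np.full_like(w, 1.0 - p_d)            # missed-detection term
    for z in detections:
        g = np.exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        factor += p_d * g / (clutter + np.sum(p_d * g * w))
    return w * factor
```

With no detections the total weight shrinks by the detection probability; each clutter-free detection contributes exactly one unit of expected target mass.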

BibTeX Reference

The fusion of large scale classified side-scan sonar image mosaics
@ARTICLE{Reed:2006,
    author = {Reed, S and Ruiz, IT and Capus, C and Petillot, Y},
    title = {{The fusion of large scale classified side-scan sonar image mosaics}},
    journal = {{IEEE TRANSACTIONS ON IMAGE PROCESSING}},
    year = {{2006}},
    volume = {{15}},
    pages = {{2049-2060}},
    number = {{7}},
    month = {{JUL}},
    abstract = {{This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.}},
    adsurl = {http://dx.doi.org/10.1109/TIP.2006.873448},
    doi = {{10.1109/TIP.2006.873448}},
    issn = {{1057-7149}},
    unique-id = {{ISI:000238714200029}} }
This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.
Reed, S and Ruiz, IT and Capus, C and Petillot, Y, "The fusion of large scale classified side-scan sonar image mosaics", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 15, no. 7, pp. 2049-2060, JUL 2006.
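The first fusion model the abstract describes combines per-pixel voting with isotropic Markov random field regularization. The sketch below is a deliberately simplified stand-in: plain majority voting followed by an ICM-like neighbourhood-majority smoothing pass, not the paper's actual MRF formulation.

```python
import numpy as np

def fuse_by_voting(label_maps, n_labels, n_iter=2):
    """Fuse K classified mosaics by per-pixel voting, then smooth with a
    simple 4-neighbourhood majority pass as a crude isotropic regulariser.
    """
    maps = np.stack(label_maps)                          # (K, H, W)
    votes = np.stack([(maps == c).sum(axis=0) for c in range(n_labels)])
    fused = votes.argmax(axis=0)                         # per-pixel majority
    for _ in range(n_iter):                              # neighbourhood smoothing
        padded = np.pad(fused, 1, mode='edge')
        neigh = np.stack([padded[1:-1, :-2], padded[1:-1, 2:],
                          padded[:-2, 1:-1], padded[2:, 1:-1], fused])
        counts = np.stack([(neigh == c).sum(axis=0) for c in range(n_labels)])
        fused = counts.argmax(axis=0)
    return fused
```

Isolated disagreements between sources, or isolated pixels within a single source, are voted or smoothed away, which is the qualitative behaviour the regularized voting scheme is after.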

BibTeX Reference

Redundancy of independent component analysis in four common types of childhood epileptic seizure
@ARTICLE{Unsworth:2006,
    author = {Unsworth, CP and Spowart, JJ and Lawson, G and Brown, JK and Mulgrew, B and Minns, RA and Clarkf, M},
    title = {{Redundancy of independent component analysis in four common types of childhood epileptic seizure}},
    journal = {{JOURNAL OF CLINICAL NEUROPHYSIOLOGY}},
    year = {{2006}},
    volume = {{23}},
    pages = {{245-253}},
    number = {{3}},
    month = {{JUN}},
    abstract = {{Independent component analysis (ICA) has recently been applied to epileptic seizures in the EEG. In this paper, the authors show how the fundamental axioms required for ICA to be valid are broken. Four common cases of childhood seizure are presented and assessed for stationarity, and an eigenvalue analysis is applied. In all cases, for the stationary sections of data the eigenvalue analysis yields results that imply the signals are coming from a source-rich environment, thus rendering ICA inappropriate when applied to the four common types of childhood seizure. The results suggest that it is not appropriate to apply ICA or source localization from independent components in these four common cases of epilepsy, because the spurious independent components determined by ICA could lead to a spurious localization of the epilepsy. If surgery were to follow, it could result in the incorrect treatment of a healthy localized region of the brain.}},
    issn = {{0736-0258}},
    unique-id = {{ISI:000238103200009}} }
Independent component analysis (ICA) has recently been applied to epileptic seizures in the EEG. In this paper, the authors show how the fundamental axioms required for ICA to be valid are broken. Four common cases of childhood seizure are presented and assessed for stationarity, and an eigenvalue analysis is applied. In all cases, for the stationary sections of data the eigenvalue analysis yields results that imply the signals are coming from a source-rich environment, thus rendering ICA inappropriate when applied to the four common types of childhood seizure. The results suggest that it is not appropriate to apply ICA or source localization from independent components in these four common cases of epilepsy, because the spurious independent components determined by ICA could lead to a spurious localization of the epilepsy. If surgery were to follow, it could result in the incorrect treatment of a healthy localized region of the brain.
Unsworth, CP and Spowart, JJ and Lawson, G and Brown, JK and Mulgrew, B and Minns, RA and Clarkf, M, "Redundancy of independent component analysis in four common types of childhood epileptic seizure", JOURNAL OF CLINICAL NEUROPHYSIOLOGY, vol. 23, no. 3, pp. 245-253, JUN 2006.
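The eigenvalue analysis the abstract mentions can be sketched as counting how many covariance eigenvalues are needed to capture most of the signal energy. The 95% energy threshold and the estimator choices below are illustrative, not the paper's criteria.

```python
import numpy as np

def dominant_components(signals, energy=0.95):
    """Number of covariance eigenvalues needed to capture the given
    fraction of total signal energy (channels are rows).  When this
    count approaches the channel count, the data look 'source rich',
    and a low-dimensional ICA decomposition is suspect.
    """
    x = signals - signals.mean(axis=1, keepdims=True)
    evals = np.sort(np.linalg.eigvalsh(x @ x.T / x.shape[1]))[::-1]
    frac = np.cumsum(evals) / evals.sum()
    return int(np.searchsorted(frac, energy) + 1)

# Two sources mixed into eight channels: the spectrum collapses to ~2.
rng = np.random.default_rng(0)
mixed = np.vstack([np.eye(2)] * 4) @ rng.standard_normal((2, 5000))
```

A genuine two-source mixture yields two dominant eigenvalues, whereas spatially white data keep the full channel count, which is the signature of a source-rich environment.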

BibTeX Reference

Optimal illumination for three-image photometric stereo using sensitivity analysis
@ARTICLE{Spence:2006,
    author = {Spence, AD and Chantler, MJ},
    title = {{Optimal illumination for three-image photometric stereo using sensitivity analysis}},
    journal = {{IEE PROCEEDINGS-VISION IMAGE AND SIGNAL PROCESSING}},
    year = {{2006}},
    volume = {{153}},
    pages = {{149-159}},
    number = {{2}},
    month = {{APR}},
    abstract = {{The optimal placement of the illumination for three-image photometric stereo acquisition of smooth and rough surface textures with respect to camera noise is derived and verified experimentally. The sensitivities of the scaled surface normal elements are derived and used to provide expressions for the noise variances. An overall figure of merit is developed by considering the image-based rendering (i.e. relighting) of Lambertian surfaces. This metric is optimised numerically with respect to the illumination angles. An orthogonal configuration was found to be optimal. With regard to constant slant, the optimal separation between the tilt angles of successive illumination vectors was found to be 120 degrees. The optimal slant angle was found to be 90 degrees for smooth surface textures and 55 degrees for rough surface textures.}},
    adsurl = {http://dx.doi.org/10.1049/ip-vis:20050229},
    doi = {{10.1049/ip-vis:20050229}},
    issn = {{1350-245X}},
    unique-id = {{ISI:000237771300008}} }
The optimal placement of the illumination for three-image photometric stereo acquisition of smooth and rough surface textures with respect to camera noise is derived and verified experimentally. The sensitivities of the scaled surface normal elements are derived and used to provide expressions for the noise variances. An overall figure of merit is developed by considering the image-based rendering (i.e. relighting) of Lambertian surfaces. This metric is optimised numerically with respect to the illumination angles. An orthogonal configuration was found to be optimal. With regard to constant slant, the optimal separation between the tilt angles of successive illumination vectors was found to be 120 degrees. The optimal slant angle was found to be 90 degrees for smooth surface textures and 55 degrees for rough surface textures.
Spence, AD and Chantler, MJ, "Optimal illumination for three-image photometric stereo using sensitivity analysis", IEE PROCEEDINGS-VISION IMAGE AND SIGNAL PROCESSING, vol. 153, no. 2, pp. 149-159, APR 2006.
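The reported optimum (equal slant, tilts 120 degrees apart) is easy to construct and sanity-check. In the sketch below the angle convention (tilt as azimuth in the x-y plane, slant measured from the z axis) is an assumption for illustration.

```python
import numpy as np

def light_vectors(slant_deg, tilts_deg=(0.0, 120.0, 240.0)):
    """Unit illumination directions for three-image photometric stereo
    with equal slant and tilt angles 120 degrees apart -- the optimal
    arrangement reported in the paper (slant 90 degrees for smooth and
    55 degrees for rough surface textures).
    """
    s = np.radians(slant_deg)
    t = np.radians(np.asarray(tilts_deg, dtype=float))
    return np.stack([np.cos(t) * np.sin(s),
                     np.sin(t) * np.sin(s),
                     np.full_like(t, np.cos(s))], axis=1)

# Lambertian sanity check: with three non-coplanar lights L, intensities
# i = L @ n determine the scaled surface normal as n = L^{-1} i.
L = light_vectors(55.0)
n_true = np.array([0.2, -0.1, 0.97])
n_est = np.linalg.solve(L, L @ n_true)
```

At slant 55 degrees the three directions are non-coplanar, so the 3x3 light matrix is invertible and the scaled normal is recovered exactly in the noise-free case; the paper's contribution is how camera noise propagates through this inversion.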

BibTeX Reference

Detecting and characterising returns in a pulsed ladar system
@ARTICLE{Wallace:2006,
    author = {Wallace, AM and Sung, RCW and Buller, GS and Harkins, RD and Warburton, RE and Lamb, RA},
    title = {{Detecting and characterising returns in a pulsed ladar system}},
    journal = {{IEE PROCEEDINGS-VISION IMAGE AND SIGNAL PROCESSING}},
    year = {{2006}},
    volume = {{153}},
    pages = {{160-172}},
    number = {{2}},
    month = {{APR}},
    abstract = {{A new multi-spectral laser radar (ladar) system based on the time-correlated single photon counting, time-of-flight technique has been designed to detect and characterise distributed targets at ranges of several kilometres. The system uses six separated laser channels in the visible and near infrared part of the electromagnetic spectrum. The authors present a method to detect the numbers, positions, heights and shape parameters of returns from this system, used for range profiling and target classification. The algorithm has two principal stages: non-parametric bump hunting based on an analysis of the smoothed derivatives of the photon count histogram in scale space, and maximum likelihood estimation using Poisson statistics. The approach is demonstrated on simulated and real data from a multi-spectral ladar system, showing that the return parameters can be estimated to a high degree of accuracy.}},
    adsurl = {http://dx.doi.org/10.1049/ip-vis:20045023},
    doi = {{10.1049/ip-vis:20045023}},
    issn = {{1350-245X}},
    unique-id = {{ISI:000237771300009}} }
A new multi-spectral laser radar (ladar) system based on the time-correlated single photon counting, time-of-flight technique has been designed to detect and characterise distributed targets at ranges of several kilometres. The system uses six separated laser channels in the visible and near infrared part of the electromagnetic spectrum. The authors present a method to detect the numbers, positions, heights and shape parameters of returns from this system, used for range profiling and target classification. The algorithm has two principal stages: non-parametric bump hunting based on an analysis of the smoothed derivatives of the photon count histogram in scale space, and maximum likelihood estimation using Poisson statistics. The approach is demonstrated on simulated and real data from a multi-spectral ladar system, showing that the return parameters can be estimated to a high degree of accuracy.
Wallace, AM and Sung, RCW and Buller, GS and Harkins, RD and Warburton, RE and Lamb, RA, "Detecting and characterising returns in a pulsed ladar system", IEE PROCEEDINGS-VISION IMAGE AND SIGNAL PROCESSING, vol. 153, no. 2, pp. 160-172, APR 2006.
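The first of the two algorithm stages, non-parametric bump hunting on a smoothed photon-count histogram, can be sketched as below. This covers only the peak-candidate stage (the paper follows it with Poisson maximum-likelihood refinement), and the `sigma` and `min_height` values are illustrative.

```python
import numpy as np

def find_returns(hist, sigma=3.0, min_height=5.0):
    """Candidate-return detection in a photon-count histogram: smooth
    with a Gaussian kernel, then keep maxima where the first derivative
    crosses from positive to non-positive and the smoothed count
    exceeds `min_height` (to reject background plateaus).
    """
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    smooth = np.convolve(hist, kernel / kernel.sum(), mode='same')
    d = np.diff(smooth)
    peaks = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    return peaks[smooth[peaks] > min_height]

# Synthetic histogram: two returns of width 4 bins over a flat background.
bins = np.arange(200)
hist = (40 * np.exp(-0.5 * ((bins - 60) / 4) ** 2)
        + 25 * np.exp(-0.5 * ((bins - 140) / 4) ** 2) + 1.0)
peaks = find_returns(hist)
```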

BibTeX Reference

A numerically accurate and robust expression for bistatic scattering from a plane triangular facet (L)
@ARTICLE{Wendelboe:2006,
    author = {Wendelboe, G and Jacobsen, F and Bell, JM},
    title = {{A numerically accurate and robust expression for bistatic scattering from a plane triangular facet (L)}},
    journal = {{JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA}},
    year = {{2006}},
    volume = {{119}},
    pages = {{701-704}},
    number = {{2}},
    month = {{FEB}},
    abstract = {{This work is related to modeling of synthetic sonar images of naval mines or other objects. Considered here is the computation of high frequency scattering from the surface of a rigid 3D-object numerically represented by plane triangular facets. The far field scattered pressure from each facet is found by application of the Kirchhoff approximation. Fawcett {[}J. Acoust. Soc. Am. 109, 1319-1320 (2001)] derived a time domain expression for the backscattered pressure from a triangular facet, but the expression encountered numerical problems at certain angles, and therefore, the effective ensonified area was applied instead. The effective ensonified area solution is exact at normal incidence, but at other angles, where singularities also exist, the scattered pressure will be incorrect. This paper presents a frequency domain expression generalized to bistatic scattering written in terms of sine functions; it is shown that the expression improves the computational accuracy without loss of robustness. (c) 2006 Acoustical Society of America.}},
    adsurl = {http://dx.doi.org/10.1121/1.2149842},
    doi = {{10.1121/1.2149842}},
    issn = {{0001-4966}},
    unique-id = {{ISI:000235458100001}} }
This work is related to modeling of synthetic sonar images of naval mines or other objects. Considered here is the computation of high frequency scattering from the surface of a rigid 3D-object numerically represented by plane triangular facets. The far field scattered pressure from each facet is found by application of the Kirchhoff approximation. Fawcett {[}J. Acoust. Soc. Am. 109, 1319-1320 (2001)] derived a time domain expression for the backscattered pressure from a triangular facet, but the expression encountered numerical problems at certain angles, and therefore, the effective ensonified area was applied instead. The effective ensonified area solution is exact at normal incidence, but at other angles, where singularities also exist, the scattered pressure will be incorrect. This paper presents a frequency domain expression generalized to bistatic scattering written in terms of sine functions; it is shown that the expression improves the computational accuracy without loss of robustness. (c) 2006 Acoustical Society of America.
Wendelboe, G and Jacobsen, F and Bell, JM, "A numerically accurate and robust expression for bistatic scattering from a plane triangular facet (L)", JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 119, no. 2, pp. 701-704, FEB 2006.

BibTeX Reference

Harmonic array design: technique for efficient non-periodic array optimisation in digital sonar beamforming
@ARTICLE{Wilson:2006,
    author = {Wilson, MJ and McHugh, R},
    title = {{Harmonic array design: technique for efficient non-periodic array optimisation in digital sonar beamforming}},
    journal = {{IEE PROCEEDINGS-RADAR SONAR AND NAVIGATION}},
    year = {{2006}},
    volume = {{153}},
    pages = {{63-68}},
    number = {{1}},
    month = {{FEB}},
    abstract = {{A highly constrained iterative technique for the optimisation of non-periodic (random and sparse) array geometries is presented. This approach involves establishing a set of array configurations which seek to minimise the peak sidelobe level of the array beampattern. The technique, harmonic array design, quickly produces a constrained set of near-optimal array solutions which can then be tried and tested. This is seen as an alternative to intensive computational search algorithms. Results of simulations for ten element linear non-periodic arrays are presented and their performance compared with equivalent periodic arrays. Such non-periodic arrays have the potential to offer improved angular resolution at a lower cost than periodic designs and have applications in all forms of acoustic and electromagnetic imaging. Here, the particular application of digital sonar beamforming is considered.}},
    adsurl = {http://dx.doi.org/10.1049/ip-rsn:20045131},
    doi = {{10.1049/ip-rsn:20045131}},
    issn = {{1350-2395}},
    unique-id = {{ISI:000235946100010}} }
A highly constrained iterative technique for the optimisation of non-periodic (random and sparse) array geometries is presented. This approach involves establishing a set of array configurations which seek to minimise the peak sidelobe level of the array beampattern. The technique, harmonic array design, quickly produces a constrained set of near-optimal array solutions which can then be tried and tested. This is seen as an alternative to intensive computational search algorithms. Results of simulations for ten element linear non-periodic arrays are presented and their performance compared with equivalent periodic arrays. Such non-periodic arrays have the potential to offer improved angular resolution at a lower cost than periodic designs and have applications in all forms of acoustic and electromagnetic imaging. Here, the particular application of digital sonar beamforming is considered.
Wilson, MJ and McHugh, R, "Harmonic array design: technique for efficient non-periodic array optimisation in digital sonar beamforming", IEE PROCEEDINGS-RADAR SONAR AND NAVIGATION, vol. 153, no. 1, pp. 63-68, FEB 2006.

BibTeX Reference

Detecting and characterizing returns in a multi-spectral pulsed lidar system
@ARTICLE{Wallace:2006b,
    author = {Wallace, A.M. and Sung, R. and Buller, G.S. and Harkins, R.D. and Ayre, C. and Foster, C. and Lamb, R.},
    title = {Detecting and characterizing returns in a multi-spectral pulsed lidar system},
    journal = {IEE Proceedings Vision, Image and Signal Processing},
    year = {2006},
    volume = {153},
    pages = {160-172},
    number = {2},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wallace_DetectingAndCharacterisingReturnsInAPulsedLadarSystem.pdf} }
Wallace, A.M. and Sung, R. and Buller, G.S. and Harkins, R.D. and Ayre, C. and Foster, C. and Lamb, R., "Detecting and characterizing returns in a multi-spectral pulsed lidar system", IEE Proceedings Vision, Image and Signal Processing, vol. 153, no. 2, pp. 160-172, 2006.

BibTeX Reference

High-speed photogrammetry system for measuring the kinematics of insect wings
@ARTICLE{Wallace:2006c,
    author = {Iain D. Wallace and Nicholas J. Lawson and Andrew R. Harvey and Julian D. C. Jones and Andrew J. Moore},
    title = {High-speed photogrammetry system for measuring the kinematics of insect wings},
    journal = {Appl. Opt.},
    year = {2006},
    volume = {45},
    pages = {4165--4173},
    number = {17},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wallace_HighSpeedPhotogrammetrySystemForMeasuringTheKinematicsOfInsectWings.pdf},
    keywords = {Instrumentation, measurement, and metrology; Surface measurements, figure},
    publisher = {OSA} }
Iain D. Wallace and Nicholas J. Lawson and Andrew R. Harvey and Julian D. C. Jones and Andrew J. Moore, "High-speed photogrammetry system for measuring the kinematics of insect wings", Appl. Opt., vol. 45, no. 17, pp. 4165-4173, 2006.

BibTeX Reference

Adaptive Fourier-based surface reconstruction
@INPROCEEDINGS{Schall:2006,
    author = {Schall, Oliver and Belyaev, Alexander and Seidel, Hans-Peter},
    title = {{Adaptive Fourier-based surface reconstruction}},
    booktitle = {{GEOMETRIC MODELING AND PROCESSING - GMP 2006, PROCEEDINGS}},
    year = {{2006}},
    editor = {{Kim, MS and Shimada, K}},
    volume = {{4077}},
    series = {{LECTURE NOTES IN COMPUTER SCIENCE}},
    pages = {{34-44}},
    note = {{4th International Conference on Geometric Modeling and Processing (GMP 2006), Pittsburgh, PA, JUL 26-28, 2006}},
    abstract = {{In this paper, we combine Kazhdan's FFT-based approach to surface reconstruction from oriented points with adaptive subdivision and partition of unity blending techniques. The advantages of our surface reconstruction method include a more robust surface restoration in regions where the surface bends close to itself and a lower memory consumption. The latter allows us to achieve a higher reconstruction accuracy than the original global approach. Furthermore, our reconstruction process is guided by a global error control achieved by computing the Hausdorff distance of selected input samples to intermediate reconstructions.}},
    isbn = {{3-540-36711-X}},
    issn = {{0302-9743}},
    unique-id = {{ISI:000239567900003}} }
In this paper, we combine Kazhdan's FFT-based approach to surface reconstruction from oriented points with adaptive subdivision and partition of unity blending techniques. The advantages of our surface reconstruction method include a more robust surface restoration in regions where the surface bends close to itself and a lower memory consumption. The latter allows us to achieve a higher reconstruction accuracy than the original global approach. Furthermore, our reconstruction process is guided by a global error control achieved by computing the Hausdorff distance of selected input samples to intermediate reconstructions.
Schall, Oliver and Belyaev, Alexander and Seidel, Hans-Peter, "Adaptive Fourier-based surface reconstruction", in Geometric Modeling and Processing - GMP 2006, Proceedings, Lecture Notes in Computer Science, vol. 4077, pp. 34-44, 2006.
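The global error control the abstract describes rests on the Hausdorff distance between selected input samples and the intermediate reconstruction. A brute-force version of that distance (sufficient for small sample sets) is:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets, one point
    per row: the larger of the two directed distances
    max_a min_b |a - b| and max_b min_a |b - a|.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [1.0, 0.5]])
dist = hausdorff(a, b)
```

In a reconstruction loop this value would be compared against a tolerance to decide whether to subdivide further; spatial indexing would replace the all-pairs matrix for large point sets.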

BibTeX Reference

Linear programming polytope and algorithm for mean payoff games
@INPROCEEDINGS{Svensson:2006,
    author = {Svensson, Ola and Vorobyov, Sergei},
    title = {{Linear programming polytope and algorithm for mean payoff games}},
    booktitle = {{ALGORITHMIC ASPECTS IN INFORMATION AND MANAGEMENT, PROCEEDINGS}},
    year = {{2006}},
    editor = {{Cheng, SW and Poon, CK}},
    volume = {{4041}},
    series = {{LECTURE NOTES IN COMPUTER SCIENCE}},
    pages = {{64-78}},
    note = {{2nd International Conference on Algorithmic Aspects in Information and Management, Hong Kong, PEOPLES R CHINA, JUN 20-22, 2006}},
    abstract = {{We investigate LP-polytopes generated by mean payoff games and their properties, including the existence of tight feasible solutions of bounded size. We suggest a new associated algorithm solving a linear program and transforming its solution into a solution of the game.}},
    isbn = {{3-540-35157-4}},
    issn = {{0302-9743}},
    unique-id = {{ISI:000239454600008}} }
We investigate LP-polytopes generated by mean payoff games and their properties, including the existence of tight feasible solutions of bounded size. We suggest a new associated algorithm solving a linear program and transforming its solution into a solution of the game.
Svensson, Ola and Vorobyov, Sergei, "Linear programming polytope and algorithm for mean payoff games", in Algorithmic Aspects in Information and Management, Proceedings, Lecture Notes in Computer Science, vol. 4041, pp. 64-78, 2006.

BibTeX Reference

A general method for human activity recognition in video
@ARTICLE{Robertson:1225854,
    author = {Neil Robertson and Ian Reid},
    title = {A general method for human activity recognition in video},
    journal = {Comput. Vis. Image Underst.},
    year = {2006},
    volume = {104},
    pages = {232--248},
    number = {2},
    address = {New York, NY, USA},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/robertson_reid_cviu2006.pdf},
    doi = {http://dx.doi.org/10.1016/j.cviu.2006.07.006},
    issn = {1077-3142},
    publisher = {Elsevier Science Inc.} }
Neil Robertson and Ian Reid, "A general method for human activity recognition in video", Comput. Vis. Image Underst., vol. 104, no. 2, pp. 232-248, 2006.

BibTeX Reference

Defocus inpainting
@INPROCEEDINGS{Favaro:2006,
    author = {Favaro, Paolo and Grisan, Enrico},
    title = {{Defocus inpainting}},
    booktitle = {{COMPUTER VISION - ECCV 2006, PT 2, PROCEEDINGS}},
    year = {{2006}},
    editor = {{Leonardis, A and Bischof, H and Pinz, A}},
    volume = {{3952}},
    number = {{Part 2}},
    series = {{LECTURE NOTES IN COMPUTER SCIENCE}},
    pages = {{349-359}},
    note = {{9th European Conference on Computer Vision (ECCV 2006), Graz, AUSTRIA, MAY 07-13, 2006}},
    abstract = {{In this paper, we propose a method to restore a single image affected by space-varying blur. The main novelty of our method is the use of recurring patterns as regularization during the restoration process. We postulate that restored patterns in the deblurred image should resemble other sharp details in the input image. To this purpose, we establish the correspondence of regions that are similar up to Gaussian blur. When two regions are in correspondence, one can perform deblurring by using the sharpest of the two as a proposal. Our solution consists of two steps: First, estimate correspondence of similar patches and their relative amount of blurring; second, restore the input image by imposing the similarity of such recurring patterns as a prior. Our approach has been successfully tested on both real and synthetic data.}},
    isbn = {{3-540-33834-9}},
    issn = {{0302-9743}},
    unique-id = {{ISI:000237555200027}} }
Favaro, Paolo and Grisan, Enrico, "Defocus inpainting", vol. 3952, Part 2, pp. 349-359, 2006.

BibTeX Reference

Circularly symmetric phase filters for control of primary third-order aberrations: coma and astigmatism
@ARTICLE{Mezouari:2006,
    author = {Samir Mezouari and Gonzalo Muyo and Andrew R. Harvey},
    title = {Circularly symmetric phase filters for control of primary third-order aberrations: coma and astigmatism},
    journal = {J. Opt. Soc. Am. A},
    year = {2006},
    volume = {23},
    pages = {1058--1062},
    number = {5},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Mezouari_CircularlySymmetricPhaseFiltersForControlOfPrimaryThirdOrderAberrationsComaAndAstigmatism.pdf},
    keywords = {Diffraction theory; Image formation theory},
    publisher = {OSA} }
Samir Mezouari and Gonzalo Muyo and Andrew R. Harvey, "Circularly symmetric phase filters for control of primary third-order aberrations: coma and astigmatism", J. Opt. Soc. Am. A, vol. 23, no. 5, pp. 1058-1062, 2006.

BibTeX Reference

Physical optics modelling of millimetre-wave personnel scanners
@ARTICLE{GrafullaGonzalez:2006,
    author = {Beatriz Grafulla-Gonz\'{a}lez and Katia Lebart and Andrew R. Harvey},
    title = {Physical optics modelling of millimetre-wave personnel scanners},
    journal = {Pattern Recognition Letters},
    year = {2006},
    volume = {27},
    pages = {1852--1862},
    number = {15},
    note = {Vision for Crime Detection and Prevention},
    abstract = {We describe the physical-optics modelling of a millimetre-wave imaging system intended to enable automated detection of threats hidden under clothes. This paper outlines the theoretical basis of the formation of millimetre-wave images and provides the model of the simulated imaging system. Results of simulated images are presented and the validation with real ones is carried out. Finally, we present a brief study of the potential materials to be classified in this system.},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Grafulla_PhysicalOpticsModellingOfMillimetreWavePersonnelScanners.pdf},
    doi = {10.1016/j.patrec.2006.02.002},
    issn = {0167-8655},
    keywords = {Millimetre-wave} }
Beatriz Grafulla-González and Katia Lebart and Andrew R. Harvey, "Physical optics modelling of millimetre-wave personnel scanners", Pattern Recognition Letters, vol. 27, no. 15, pp. 1852-1862, 2006.

BibTeX Reference

Estimating Gaze Direction from Low-Resolution Faces in Video
@INPROCEEDINGS{Robertson:965206,
    title = {Estimating Gaze Direction from Low-Resolution Faces in Video},
    year = {2006},
    author = {Robertson, Neil and Reid, Ian},
    booktitle = {Lecture Notes in Computer Science: Computer Vision -- ECCV 2006},
    pages = {402--415},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/robertson_eccv_2006.pdf},
    doi = {10.1007/11744047\_31},
    keywords = {body-pose, head-pose, low-res},
    posted-at = {2006-11-28 11:45:37},
    priority = {2},
    url = {http://dx.doi.org/10.1007/11744047\_31} }
Robertson, Neil and Reid, Ian, "Estimating Gaze Direction from Low-Resolution Faces in Video", Lecture Notes in Computer Science: Computer Vision – ECCV 2006, pp. 402-415, 2006.

BibTeX Reference

Optimized thermal imaging with a singlet and pupil plane encoding: experimental realization
@INPROCEEDINGS{Muyo:2006,
    author = {Muyo, Gonzalo and Singh, Amritpal and Andersson, Mathias and Huckridge, David and Harvey, Andy},
    title = {{Optimized thermal imaging with a singlet and pupil plane encoding: experimental realization}},
    booktitle = {{Electro-Optical and Infrared Systems: Technology and Applications III}},
    year = {{2006}},
    editor = {{Driggers, RG and Huckridge, DA}},
    volume = {{6395}},
    series = {{PROCEEDINGS OF THE SOCIETY OF PHOTO-OPTICAL INSTRUMENTATION ENGINEERS (SPIE)}},
    pages = {{U211-U219}},
    note = {{Conference on Electro-Optical and Infrared Systems - Technology and Applications III, Stockholm, SWEDEN, SEP 13-14, 2006}},
    abstract = {{Pupil plane encoding has shown to be a useful technique to extend the depth of field of optical systems. Recently, further studies have demonstrated its potential in reducing the impact of other common focus-related aberrations (such as thermally induced defocus, field curvature, etc) which enables to employ simple and low-cost optical systems while maintaining good optical performance. In this paper, we present for the first time an experimental application where pupil plane encoding alleviates aberrations across the field of view of an uncooled LWIR optical system formed by F/1, 75mm focal length germanium singlet and a 320x240 detector array with 38-micron pixel. The singlet was corrected from coma and spherical aberration but exhibited large amounts of astigmatism and field curvature even for small fields of view. A manufactured asymmetrical germanium phase mask was placed at the front of the singlet, which in combination with digital image processing enabled to increase significantly the performance across the entire field of view. This improvement is subject to the exceptionally challenging manufacturing of the asymmetrical phase mask and noise amplification in the digitally restored image. Future research will consider manufacturing of the phase mask in the front surface of the singlet and a real-time implementation of the image processing algorithms.}},
    adsurl = {http://dx.doi.org/10.1117/12.689765},
    doi = {{10.1117/12.689765}},
    isbn = {{978-0-8194-6493-4}},
    issn = {{0277-786X}},
    unique-id = {{ISI:000243902700021}} }
Muyo, Gonzalo and Singh, Amritpal and Andersson, Mathias and Huckridge, David and Harvey, Andy, "Optimized thermal imaging with a singlet and pupil plane encoding: experimental realization", vol. 6395, pp. U211-U219, 2006.

Publications in 2005

BibTeX Reference

Behaviour understanding in video: a combined method
@ARTICLE{Robertson:1541336,
    author = {Robertson, N. and Reid, I.},
    title = {Behaviour understanding in video: a combined method},
    journal = {Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on},
    year = {2005},
    volume = {1},
    pages = {808--815},
    month = {Oct.},
    abstract = {In this paper we develop a system for human behaviour recognition in video sequences. Human behaviour is modelled as a stochastic sequence of actions. Actions are described by a feature vector comprising both trajectory information (position and velocity), and a set of local motion descriptors. Action recognition is achieved via probabilistic search of image feature databases representing previously seen actions. A HMM which encodes the rules of the scene is used to smooth sequences of actions.},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/iccv2005_robertson.pdf},
    doi = {10.1109/ICCV.2005.47},
    issn = {1550-5499},
    keywords = {belief maintenance, belief networks, hidden Markov models, image motion analysis, image representation, image sequences, video signal processing, Bayes network, action recognition, automated video annotation, belief propagation, broadcast tennis sequence, feature vector, human behaviour recognition, image feature database, image sequence, learned action database, local motion descriptor, nonparametric sampling, probabilistic search, trajectory information, video sequence} }
Robertson, N. and Reid, I., "Behaviour understanding in video: a combined method", Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, vol. 1, pp. 808-815, Oct. 2005.

BibTeX Reference

Comment on `extended depth of field in hybrid imaging systems: circular aperture'
@ARTICLE{Sherif:2005,
    author = {Sherif, SS and Muyo, G and Harvey, AR},
    title = {{Comment on `extended depth of field in hybrid imaging systems: circular aperture'}},
    journal = {{JOURNAL OF MODERN OPTICS}},
    year = {{2005}},
    volume = {{52}},
    pages = {{1783-1788}},
    number = {{13}},
    month = {{SEP 10}},
    abstract = {{A recent paper by Sherif et al. (S.S. Sherif, E.R. Dowski, W.T. Cathey, J. Mod. Opt. 51 1191 (2004)) reported the derivation of an aspherical phase plate, which when placed at the exit pupil of a conventional imaging system and combined with digital processing of the recorded images, increases the depth of field by an order of magnitude. An error in the derivation of this phase plate has been identified, which makes the reported extension of depth-of-field sub-optimum rather than invalid. In this comment, an optimum phase plate is obtained and the relevant results are repeated.}},
    adsurl = {http://dx.doi.org/10.1080/09500340500141649},
    doi = {{10.1080/09500340500141649}},
    issn = {{0950-0340}},
    unique-id = {{ISI:000231898400001}} }
Sherif, SS and Muyo, G and Harvey, AR, "Comment on `extended depth of field in hybrid imaging systems: circular aperture'", JOURNAL OF MODERN OPTICS, vol. 52, no. 13, pp. 1783-1788, SEP 10, 2005.

BibTeX Reference

A study on the second order statistics of Nakagami-Hoyt mobile fading channels
@ARTICLE{Wang:2005,
    author = {Neji Youssef and Cheng-Xiang Wang and Matthias P\"{a}tzold},
    title = {A study on the second order statistics of Nakagami-Hoyt mobile fading channels},
    journal = {IEEE Trans. Vehicular Technology, Special Issue on Antenna Systems and Propagation for Future Wireless Communications},
    year = {2005},
    volume = {54},
    pages = {1259-1265},
    number = {4},
    month = {July},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wang_AStudyOnTheSecondOrderStatisticsOfNakagamiHoytMobileFadingChannels.pdf} }
Neji Youssef and Cheng-Xiang Wang and Matthias Pätzold, "A study on the second order statistics of Nakagami-Hoyt mobile fading channels", IEEE Trans. Vehicular Technology, Special Issue on Antenna Systems and Propagation for Future Wireless Communications, vol. 54, no. 4, pp. 1259-1265, July 2005.

BibTeX Reference

Real-time imaging with a hyperspectral fovea
@ARTICLE{FletcherHolmes:2005,
    author = {Fletcher-Holmes, David and Harvey, Andrew},
    title = {Real-time imaging with a hyperspectral fovea},
    journal = {Journal of Optics A: Pure and Applied Optics},
    year = {2005},
    volume = {7},
    pages = {S298--S302},
    number = {6},
    month = {June},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/FletcherHolmes_RealTimeImagingWithHyperspectralFovea.pdf},
    doi = {10.1088/1464-4258/7/6/007},
    issn = {1464-4258},
    posted-at = {2008-11-21 11:15:16},
    priority = {2},
    publisher = {Institute of Physics Publishing} }
Fletcher-Holmes, David and Harvey, Andrew, "Real-time imaging with a hyperspectral fovea", Journal of Optics A: Pure and Applied Optics, vol. 7, no. 6, pp. S298-S302, June 2005.

BibTeX Reference

Decomposition of the optical transfer function: wavefront coding imaging systems
@ARTICLE{Muyo:2005a,
    author = {Gonzalo Muyo and Andy R. Harvey},
    title = {Decomposition of the optical transfer function: wavefront coding imaging systems},
    journal = {Opt. Lett.},
    year = {2005},
    volume = {30},
    pages = {2715--2717},
    number = {20},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Muyo_DecompositionOfTheOTF_WavefrontCodingImagingSystems.pdf},
    keywords = {Lens system design; Imaging systems; Optical transfer functions},
    publisher = {OSA} }
Gonzalo Muyo and Andy R. Harvey, "Decomposition of the optical transfer function: wavefront coding imaging systems", Opt. Lett., vol. 30, no. 20, pp. 2715-2717, 2005.

Publications in 2004

BibTeX Reference

A generative deterministic model for digital mobile fading channels
@ARTICLE{Wang:2004a,
    author = {Cheng-Xiang Wang and Matthias P\"{a}tzold},
    title = {A generative deterministic model for digital mobile fading channels},
    journal = {IEEE Communications Letters},
    year = {2004},
    volume = {8},
    pages = {223-225},
    number = {4},
    month = {April},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wang_AGenerativeDeterministicModelForDigitalMobileFadingChannels.pdf} }
Cheng-Xiang Wang and Matthias Pätzold, "A generative deterministic model for digital mobile fading channels", IEEE Communications Letters, vol. 8, no. 4, pp. 223-225, April 2004.

BibTeX Reference

Rotationally invariant space-time trellis codes with 4-D rectangular constellations for high data rate wireless communications
@ARTICLE{Wang:2004b,
    author = {Corneliu E. D. Sterian and Cheng-Xiang Wang and Ragnar Johnsen and Matthias P\"{a}tzold},
    title = {Rotationally invariant space-time trellis codes with 4-D rectangular constellations for high data rate wireless communications},
    journal = {Journal of Communications and Networks},
    year = {2004},
    volume = {6},
    pages = {258-268},
    number = {3},
    month = {March},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wang_RotationallyInvariantSpace-TimeTrellisCodesWith4DRectangularConstellationsForHighDataRateWirelessCommunications.pdf} }
Corneliu E. D. Sterian and Cheng-Xiang Wang and Ragnar Johnsen and Matthias Pätzold, "Rotationally invariant space-time trellis codes with 4-D rectangular constellations for high data rate wireless communications", Journal of Communications and Networks, vol. 6, no. 3, pp. 258-268, March 2004.

BibTeX Reference

Viewpoint independent matching of planar curves in 3D space
@ARTICLE{Wallace:2004,
    author = {Liang, B. and Wallace, A.M.},
    title = {Viewpoint independent matching of planar curves in 3D space},
    journal = {Pattern Recognition},
    year = {2004},
    volume = {37},
    pages = {525-542},
    number = {4},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wallace_ViewPointIndependentMatchingOfPlanarCurvesIn3DSpace.pdf} }
Liang, B. and Wallace, A.M., "Viewpoint independent matching of planar curves in 3D space", Pattern Recognition, vol. 37, no. 4, pp. 525-542, 2004.

BibTeX Reference

Wavefront coding for athermalization of infrared imaging systems
@INPROCEEDINGS{Muyo:2004,
    author = {Muyo, G and Harvey, AR},
    title = {{Wavefront coding for athermalization of infrared imaging systems}},
    booktitle = {{ELECTRO-OPTICAL AND INFRARED SYSTEMS: TECHNOLOGY AND APPLICATIONS}},
    year = {{2004}},
    editor = {{Driggers, RG and Huckridge, DA}},
    volume = {{5612}},
    series = {{PROCEEDINGS OF THE SOCIETY OF PHOTO-OPTICAL INSTRUMENTATION ENGINEERS (SPIE)}},
    pages = {{227-235}},
    note = {{Conference on Electro-Optical and Infrared Systems, London, ENGLAND, OCT 25-27, 2004}},
    abstract = {{Wavefront coding involves the insertion of an asymmetric refractive mask close to the pupil plane of an imaging system so as to encode the image with a specific point spread function that, when combined with decoding of the recorded image, can enable greatly reduced sensitivity to imaging aberrations. The application of wavefront coding has potential in the fields of microscopy, where increased instantaneous depth of field is advantageous and in thermal imaging where it can enable the use of simple, low-cost, light-weight lens systems. It has been previously shown that wavefront coding can alleviate optical aberrations and extend the depth of field of incoherent imaging systems whilst maintaining diffraction-limited resolution. It is particularly useful in controlling thermally induced defocus aberrations in infrared imaging systems. These improvements in performance are subject to a range of constraints including the difficulty in manufacturing an asymmetrical phase mask and significant noise amplification in the digitally restored image. We describe the relation between the optical path difference (OPD) introduced by the phase mask and the magnitude of noise amplification in the restored image. In particular there is a trade between the increased tolerance to optical aberrations and reduced signal-to-noise ratio in the recovered image. We present numerical and experimental studies based on noise amplification with the specific consideration of a simple refractive infrared imaging system operated in an ambient temperature varying from 0°C to +50°C. These results are used to delineate the design and application envelope for which infrared imaging can benefit from wavefront coding.}},
    adsurl = {http://dx.doi.org/10.1117/12.579738},
    doi = {{10.1117/12.579738}},
    isbn = {{0-8194-5565-2}},
    issn = {{0277-786X}},
    unique-id = {{ISI:000226768900024}} }
Muyo, G and Harvey, AR, "Wavefront coding for athermalization of infrared imaging systems", vol. 5612, pp. 227-235, 2004.

BibTeX Reference

Amplitude and phase filters for mitigation of defocus and third-order aberrations
@INPROCEEDINGS{Mezouari:2004,
    author = {Mezouari, S and Muyo, G and Harvey, AR},
    title = {{Amplitude and phase filters for mitigation of defocus and third-order aberrations}},
    booktitle = {{OPTICAL DESIGN AND ENGINEERING}},
    year = {{2004}},
    editor = {{Mazuray, L and Rogers, PJ and Wartmann, R}},
    volume = {{5249}},
    series = {{PROCEEDINGS OF THE SOCIETY OF PHOTO-OPTICAL INSTRUMENTATION ENGINEERS (SPIE)}},
    pages = {{238-248}},
    note = {{Conference on Optical Design and Engineering, St Etienne, FRANCE, SEP 30-OCT 03, 2003}},
    abstract = {{This paper gives a review on the design and use of both amplitude filters and phase filters to achieve a large focal depth in incoherent imaging systems. Traditional optical system design enhances the resolution of incoherent imaging systems by optical-only manipulations or some type of post-processing of an image that has been already recorded. A brief introduction to recent techniques to increase the depth of field by use of hybrid optical/digital imaging system is reported and its performance is compared with a conventional optical system. This technique, commonly named wavefront coding, employs an aspherical pupil plane element to encode the incident wavefront in such a way that the image recorded by the detector can be accurately restored over a large range of defocus. As reported in earlier work, this approach alleviates the effects of defocus and its related aberrations whilst maintaining diffraction-limited resolution. We explore the control of third order aberrations (spherical aberration, coma, astigmatism, and Petzval field curvature) through wavefront coding. This method offers the potential to implement diffraction-limited imaging systems using simple and low-cost lenses. Although these performances are associated with reductions in signal-to-noise ratio of the displayed image, the jointly optimised optical/digital hybrid imaging system can meet some specific requirements that are impossible to achieve with a traditional approach.}},
    isbn = {{0-8194-5133-9}},
    issn = {{0277-786X}},
    unique-id = {{ISI:000189460000025}} }
Mezouari, S and Muyo, G and Harvey, AR, "Amplitude and phase filters for mitigation of defocus and third-order aberrations", vol. 5249, pp. 238-248, 2004.

Publications in 2003

BibTeX Reference

Combined amplitude and phase filters for increased tolerance to spherical aberration
@ARTICLE{Mezouari:2003,
    author = {Mezouari, S and Harvey, AR},
    title = {{Combined amplitude and phase filters for increased tolerance to spherical aberration}},
    journal = {{JOURNAL OF MODERN OPTICS}},
    year = {{2003}},
    volume = {{50}},
    pages = {{2213-2220}},
    number = {{14}},
    month = {{SEP 20}},
    abstract = {{Analysis of the expression for Strehl ratio for a circularly symmetric pupil allows one to design complex filters that offer reduced sensitivity to spherical aberration. It is shown that filters that combine hyper-Gaussian amplitude transmittance with hyper-Gaussian phase modulation provide five-fold reduction in sensitivity to spherical aberration. Furthermore, this is achieved without the introduction of zeros into the modulation transfer function and deconvolution can restore the transfer function to that of a diffraction-limited imager. The performance of the derived combined amplitude and phase filter is illustrated through the variation of its axial intensity versus spherical aberration. This technique is applicable to imaging in the presence of significant amounts of spherical aberration as is encountered in, for example, microscopy.}},
    adsurl = {http://dx.doi.org/10.1080/0950034031000099205},
    doi = {{10.1080/0950034031000099205}},
    issn = {{0950-0340}},
    unique-id = {{ISI:000185386700009}} }
Mezouari, S and Harvey, AR, "Combined amplitude and phase filters for increased tolerance to spherical aberration", JOURNAL OF MODERN OPTICS, vol. 50, no. 14, pp. 2213-2220, SEP 20, 2003.

BibTeX Reference

Validity of Fresnel and Fraunhofer approximations in scalar diffraction
@ARTICLE{Mezouari:2003b,
    author = {Mezouari,
    S and Harvey,
    AR},
    title = {{Validity of Fresnel and Fraunhofer approximations in scalar diffraction}},
    journal = {{JOURNAL OF OPTICS A-PURE AND APPLIED OPTICS}},
    year = {{2003}},
    volume = {{5}},
    pages = {{S86-S91}},
    number = {{4}},
    month = {{JUL}},
    note = {{Conference on the Applied Optics and Optoelectronics (Photon02), CARDIFF, WALES, SEP 02-05, 2002}},
    abstract = {{Evaluation of the electromagnetic fields diffracted from plane apertures is, in the general case, highly problematic. Fortunately, the exploitation of the Fresnel and more restricted Fraunhofer approximations can greatly simplify evaluation. In particular, the use of the fast Fourier transform algorithm when the Fraunhofer approximation is valid greatly increases the speed of computation. However, for specific applications it is often unclear which approximation is appropriate and the degree of accuracy that will be obtained. We build here on earlier work (Shimoji M 1995 Proc. 27th Southeastern Symp. on System Theory (Starkville, MS, March 1995) (Los Alamitos, CA: IEEE Computer Society Press) pp 520-4) that showed that for diffraction from a circular aperture and for a specific phase error, there is a specific curved boundary surface between the Fresnel and Fraunhofer regions. We derive the location of the boundary surface and the magnitude of the errors in field amplitude that can be expected as a result of applying the Fresnel and Fraunhofer approximations. These expressions are exact for a circular aperture and are extended to give the minimum limit on the domain of validity of the Fresnel approximation for arbitrary plane apertures.}},
    issn = {{1464-4258}},
    unique-id = {{ISI:000184571900033}} }
Evaluation of the electromagnetic fields diffracted from plane apertures is, in the general case, highly problematic. Fortunately, the exploitation of the Fresnel and more restricted Fraunhofer approximations can greatly simplify evaluation. In particular, the use of the fast Fourier transform algorithm when the Fraunhofer approximation is valid greatly increases the speed of computation. However, for specific applications it is often unclear which approximation is appropriate and the degree of accuracy that will be obtained. We build here on earlier work (Shimoji M 1995 Proc. 27th Southeastern Symp. on System Theory (Starkville, MS, March 1995) (Los Alamitos, CA: IEEE Computer Society Press) pp 520-4) that showed that for diffraction from a circular aperture and for a specific phase error, there is a specific curved boundary surface between the Fresnel and Fraunhofer regions. We derive the location of the boundary surface and the magnitude of the errors in field amplitude that can be expected as a result of applying the Fresnel and Fraunhofer approximations. These expressions are exact for a circular aperture and are extended to give the minimum limit on the domain of validity of the Fresnel approximation for arbitrary plane apertures.
Mezouari, S. and Harvey, A.R., "Validity of Fresnel and Fraunhofer approximations in scalar diffraction", JOURNAL OF OPTICS A-PURE AND APPLIED OPTICS, vol. 5, no. 4, pp. S86-S91, JUL 2003
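In the Fraunhofer regime the diffracted field is proportional to the Fourier transform of the aperture, which is why, as the abstract notes, the FFT makes the computation fast. A minimal sketch (illustrative wavelength and aperture size, not values from the paper), together with the classical textbook rule of thumb z >> a^2/lambda for the far-field region that the paper refines into an exact boundary surface:

```python
import numpy as np

# Far-field (Fraunhofer) intensity of a circular aperture via a 2-D FFT.
# Illustrative parameters only.
wavelength = 633e-9       # m (He-Ne line, for example)
aperture_radius = 0.5e-3  # m
n, width = 1024, 8e-3     # grid samples and physical extent (m)

x = np.linspace(-width / 2, width / 2, n)
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= aperture_radius**2).astype(float)

# In the Fraunhofer approximation the far field is the Fourier
# transform of the aperture function, so a single FFT yields the
# Airy diffraction pattern.
far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()  # peak-normalised Airy pattern

# Classical rule of thumb for the onset of the Fraunhofer region.
z_min = aperture_radius**2 / wavelength
print(f"Fraunhofer region roughly beyond z ~ {z_min:.2f} m")
```

For these illustrative numbers the textbook criterion already places the far field at a fraction of a metre; the paper's contribution is quantifying the amplitude error incurred when this kind of approximation is applied closer in.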

BibTeX Reference

Representation and classification of 3D objects
@ARTICLE{Wallace:2003,
    author = {Csakany, P. and Wallace, A.M.},
    title = {Representation and classification of 3D objects},
    journal = {IEEE Transactions on Systems, Man and Cybernetics},
    year = {2003},
    volume = {33},
    pages = {638-647},
    number = {4},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Wallace_RepresentationAndClassificationOf3DObjects.pdf} }
Csakany, P. and Wallace, A.M., "Representation and classification of 3D objects", IEEE Transactions on Systems, Man and Cybernetics, vol. 33, no. 4, pp. 638-647, 2003

BibTeX Reference

Phase pupil functions for reduction of defocus and spherical aberrations
@ARTICLE{Mezouari:2003,
    author = {Samir Mezouari and Andrew R. Harvey},
    title = {Phase pupil functions for reduction of defocus and spherical aberrations},
    journal = {Opt. Lett.},
    year = {2003},
    volume = {28},
    pages = {771--773},
    number = {10},
    adsurl = {http://www.eece.hw.ac.uk/research/visp/downloads/Mezouari_PhasePupilFunctionsForReductionOfDefocusAndSphericalAberrations.pdf},
    keywords = {Diffraction theory; Image formation theory},
    publisher = {OSA} }
Samir Mezouari and Andrew R. Harvey, "Phase pupil functions for reduction of defocus and spherical aberrations", Opt. Lett., vol. 28, no. 10, pp. 771-773, 2003