Stephen Ierodiaconou

About Me

Consultant, developer & engineer


Advising & bootstrapping startups, technical team lead, Ruby/Rails & Node.js. Now exploring my own ideas.


Past experience (LinkedIn):


Research

PhD in Image and Video Processing

2006 - 2011, University of Bristol, UK.

Image segmentation, image compression, texture analysis & texture synthesis.

UoB Research Reception • 2008 (First prize for research on image compression using higher-level perceptual redundancy.)


PhD Thesis: Content Based Image Compression Using Texture Analysis and Synthesis

Stephen Paul Ierodiaconou • March 2011 • University of Bristol • ISN: 0000000427060572

The abstract:

Since demand for image and video content continues to outstrip available bandwidth there is a need for improved image and video compression techniques. However, improvements in traditional approaches are providing diminishing returns and thus new redundancies in visual content must be exploited. In this thesis a new architecture for image compression is proposed in which textures are removed from a source image and replaced using perceptually similar synthesised textures during decoding. Implementations for two codecs are presented, one block-based and open-loop and the other wavelet based with a synthesis quality-assessment feedback loop. In both techniques texture regions are located and the homogeneity assessed. Three homogeneity techniques are described and evaluated. For the texture synthesis process patch-based techniques are used. These are combined with region colour reconstruction techniques of which two are described and assessed. Common synthesis artefacts are determined and then two artefact detection techniques are proposed and discussed. Finally a new texture synthesis technique is described, which builds upon the findings of the prior work, and introduces a patch placement optimisation scheme that incorporates a spatial model of the sample texture to ensure the global structure of the synthesised texture is maintained. Results for the proposed algorithms are given to validate their performance and images coded using the two image compression architectures show savings of up to 26% over JPEG and up to 11% over JPEG2000 at the same quantisation. The perceptual quality of the results is discussed and, where appropriate, improvements suggested and otherwise the limitations highlighted. The results indicate that utilising higher level perceptual redundancies can indeed offer significant benefits to compression although more research is needed from both the fields of the psychology of vision and image processing.
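
To give a concrete flavour of the architecture described above, here is a toy Python sketch of the encode/decode split: homogeneous texture blocks are dropped at the encoder, represented only by a small sample, and re-synthesised at the decoder. The homogeneity measure, thresholds and the tiling "synthesis" are invented stand-ins for illustration only; the thesis uses proper segmentation, patch-based synthesis with placement optimisation, and artefact detection.

```python
# Illustrative sketch only: a toy "compression-by-synthesis" pipeline in the
# spirit of the thesis architecture. The segmentation, homogeneity test and
# synthesis are stand-ins, not the algorithms from the thesis.
import numpy as np

def homogeneity(block: np.ndarray) -> float:
    # Toy homogeneity score: lower local variance => more homogeneous.
    return 1.0 / (1.0 + block.var())

def encode(image: np.ndarray, block=32, threshold=0.5):
    """Split the image into blocks; keep non-texture blocks verbatim and
    replace homogeneous texture blocks with a small sample patch."""
    h, w = image.shape
    kept, samples = {}, {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = image[y:y+block, x:x+block]
            if homogeneity(b) > threshold:
                samples[(y, x)] = b[:8, :8].copy()   # tiny texture sample
            else:
                kept[(y, x)] = b.copy()              # would be transform coded
    return {"shape": image.shape, "block": block, "kept": kept, "samples": samples}

def synthesise(sample: np.ndarray, shape) -> np.ndarray:
    # Toy "synthesis": tile the sample. The thesis instead uses patch-based
    # synthesis with placement optimisation and artefact detection.
    reps = (-(-shape[0] // sample.shape[0]), -(-shape[1] // sample.shape[1]))
    return np.tile(sample, reps)[:shape[0], :shape[1]]

def decode(bitstream) -> np.ndarray:
    out = np.zeros(bitstream["shape"], dtype=np.float64)
    b = bitstream["block"]
    for (y, x), blk in bitstream["kept"].items():
        out[y:y+blk.shape[0], x:x+blk.shape[1]] = blk
    for (y, x), sample in bitstream["samples"].items():
        target = out[y:y+b, x:x+b].shape
        out[y:y+b, x:x+b] = synthesise(sample, target)
    return out

if __name__ == "__main__":
    img = np.random.rand(128, 128)          # stand-in for a natural image
    recon = decode(encode(img))
    print(recon.shape)
```
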

Conference papers

See Google Scholar

Unsupervised image compression using graphcut texture synthesis

S. Ierodiaconou, J. Byrne, D.R. Bull, D. Redmill & P. Hill

Nov 2009

BibTeX

An unsupervised image compression-by-synthesis system is proposed utilising wavelet based image segmentation and analysis combined with patch based texture synthesis. High perceptual quality is ensured using an artefact detection algorithm in the encoder loop. EBCOT is used to transform code texture samples and residual image data. Resulting bitrate savings of up to approximately 17% over JPEG2000 for little change in perceptual quality have been shown.
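
The patch-based synthesis step hides the joins between patches by cutting through their overlap along a low-error path. The paper uses a graph-cut formulation for this; the sketch below shows the simpler dynamic-programming "minimum error boundary cut" (as in image quilting), with invented sizes and data, purely to convey the idea.

```python
# Hedged illustration: resolve the visible join between two overlapping
# patches by cutting along the cheapest top-to-bottom seam through their
# squared-difference map. (The paper itself uses graph cuts, not this DP cut.)
import numpy as np

def vertical_seam(overlap_error: np.ndarray) -> np.ndarray:
    """Return, per row, the column index of the cheapest top-to-bottom seam
    through a (rows x cols) per-pixel squared-difference map."""
    rows, cols = overlap_error.shape
    cost = overlap_error.copy()
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            cost[r, c] += cost[r - 1, lo:hi].min()
    seam = np.zeros(rows, dtype=int)
    seam[-1] = int(cost[-1].argmin())
    for r in range(rows - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(0, c - 1), min(cols, c + 2)
        seam[r] = lo + int(cost[r, lo:hi].argmin())
    return seam

def stitch(existing: np.ndarray, patch: np.ndarray) -> np.ndarray:
    """Blend two equally sized overlap regions along the cheapest seam:
    pixels left of the seam come from the existing synthesis, pixels right
    of it from the new patch."""
    err = (existing - patch) ** 2
    seam = vertical_seam(err)
    out = patch.copy()
    for r, c in enumerate(seam):
        out[r, :c] = existing[r, :c]
    return out

if __name__ == "__main__":
    a = np.random.rand(16, 8)               # existing synthesis (overlap strip)
    b = np.random.rand(16, 8)               # new candidate patch (overlap strip)
    print(stitch(a, b).shape)
```
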

Unsupervised image compression-by-synthesis within a JPEG framework

J. Byrne, S. Ierodiaconou, D.R. Bull, D. Redmill & P. Hill

Oct 2008

BibTeX

An image compression scheme is proposed, utilising wavelet-based image segmentation and texture analysis, and patch-based texture synthesis. This has been incorporated into a JPEG framework. Homogeneous textured regions are identified and removed prior to transform coding. These regions are then replaced at the decoder by synthesis from marked samples, and colour matched to ensure similarity to the original. Experimental results on natural images show bitrate savings of over 18% compared with JPEG for little change in measured visual quality.
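
The colour-matching step can be pictured as pulling the synthesised region's colour statistics towards those measured on the original region at the encoder. The sketch below simply matches per-channel mean and standard deviation; it is an assumed stand-in for illustration, not the colour reconstruction technique used in the paper.

```python
# Hedged illustration of "colour matching": shift and scale each channel of a
# synthesised region so its mean/std match statistics signalled by the encoder.
import numpy as np

def colour_match(synth: np.ndarray, target_mean: np.ndarray,
                 target_std: np.ndarray) -> np.ndarray:
    """Match the per-channel mean and standard deviation of `synth`
    (H x W x C) to the target statistics."""
    mean = synth.mean(axis=(0, 1))
    std = synth.std(axis=(0, 1)) + 1e-8      # avoid division by zero
    return (synth - mean) / std * target_std + target_mean

if __name__ == "__main__":
    synth = np.random.rand(32, 32, 3)        # stand-in synthesised region
    # Statistics the encoder would have measured on the original region and
    # sent as side information (values here are arbitrary).
    matched = colour_match(synth, np.array([0.4, 0.5, 0.3]),
                           np.array([0.10, 0.20, 0.05]))
    print(matched.mean(axis=(0, 1)))
```
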

Implementation and Optimisation of a Video Object Segmentation Algorithm on an Embedded DSP Platform

S.P. Ierodiaconou, N. Dahnoun and L.Q. Xu

June 2006

BibTeX

The Gaussian mixture model (GMM) is a popular algorithm employed for visual scene segmentation. In this paper we present an investigation into a real-time implementation of the algorithm on an embedded TI DM642 DSP platform suitable for outdoor and indoor surveillance applications. We present a number of possible implementations in fixed-point arithmetic and investigate their respective performances over varying model parameters. We discuss a number of different optimisations capitalising on the DSP architecture that lead to a flexible and efficient video object segmentation working prototype system.
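
At the heart of this is the per-pixel mixture update of a Stauffer-Grimson style GMM. The floating-point Python sketch below illustrates the update rule and background test with typical assumed parameter values; the paper's contribution is the fixed-point, DSP-optimised realisation, which the sketch does not attempt to reproduce.

```python
# Hedged sketch of the per-pixel Gaussian mixture update used for GMM-based
# scene segmentation. Parameter values are assumed, and this is floating
# point, unlike the fixed-point DSP implementations studied in the paper.
import numpy as np

K, ALPHA, MATCH_SIGMAS, VAR_INIT, T_BG = 3, 0.01, 2.5, 36.0, 0.7

def update_pixel(x, weights, means, variances):
    """Update one pixel's K-component mixture with intensity x and report
    whether x was explained by a background mode."""
    matches = (x - means) ** 2 < MATCH_SIGMAS ** 2 * variances
    if matches.any():
        k = int(np.argmax(matches))                  # first matching component
        means[k] += ALPHA * (x - means[k])
        variances[k] += ALPHA * ((x - means[k]) ** 2 - variances[k])
    else:
        k = int(np.argmin(weights))                  # recycle the weakest mode
        means[k], variances[k] = x, VAR_INIT
    weights *= (1 - ALPHA)
    weights[k] += ALPHA
    weights /= weights.sum()
    # Background = the most reliable modes (high weight, low variance) whose
    # cumulative weight reaches T_BG; everything else is foreground.
    order = np.argsort(-(weights / np.sqrt(variances)))
    cum = np.cumsum(weights[order])
    bg = set(order[:int(np.searchsorted(cum, T_BG)) + 1])
    return k in bg

if __name__ == "__main__":
    w = np.full(K, 1.0 / K)
    mu = np.array([100.0, 120.0, 140.0])
    var = np.full(K, VAR_INIT)
    for value in [118.0, 119.0, 200.0, 118.5]:       # toy intensity stream
        print(value, "background" if update_pixel(value, w, mu, var) else "foreground")
```
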