Microcalcification cluster diagnosis in digitized mammograms
Written by Ramón Gallardo Caballero   
Article Index
Microcalcification cluster diagnosis in digitized mammograms
Our Proposal
Methodology
System implementation
Results
Future developments

Our proposal

Although digital mammography systems are becoming popular nowadays, the main data source for research work to date has been digitized mammograms. Mammographic scanners provide high resolution: pixel sizes of a few tens of microns and grey-level resolutions from 11 to 16 bits (2048 to 65536 grey values). This gives an idea of the level of precision available when working with digitized mammograms.

Our proposal is based on the use of the technique known as Independent Component Analysis (ICA) as an efficient image feature extractor whose output feeds a neural classifier. Unlike classic methods based on second-order statistics, such as variance or standard deviation, ICA uses higher-order statistics. Moreover, given samples drawn from the signal space to be modelled, ICA is able to infer a basis that lets us represent, with a small number of components, any image (signal) belonging to the inferred space.
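
As an illustration of this feature extraction stage, a minimal sketch in Python follows, assuming scikit-learn's FastICA is available; the patch size, component count and random data are illustrative placeholders, not the values or images used in our work.

    # Sketch: infer an ICA basis from sample patches and project the patches onto it.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    patches = rng.random((500, 12 * 12))   # 500 ROIs of 12x12 pixels, flattened (placeholder data)

    ica = FastICA(n_components=10, max_iter=1000)
    ica.fit(patches)                       # the basis is learned from the data themselves
    features = ica.transform(patches)      # a few independent components represent each patch
    print(features.shape)                  # (500, 10)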

This is the same task we carry out when decomposing a signal into its Fourier components or building a wavelet decomposition in the multiresolution framework. There are, however, important differences between those methods and ICA. First, both Fourier and wavelet decompositions use fixed bases, whereas ICA bases are generated to fit the data space being modelled as closely as possible. Additionally, an ICA development builds basis matrices that maximize the non-Gaussianity of the input data; this means that ICA bases capture the most interesting characteristics of the modelled space. This is precisely the key fact that leads us to use ICA as the feature extraction block instead of more widespread techniques such as the wavelet transform mentioned above or principal component analysis.
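
The contrast with fixed or variance-driven decompositions can be made concrete with a short sketch, reusing the patches array from the sketch above; PCA stands here for the classic second-order alternative, and the component count is again illustrative.

    # PCA: orthogonal basis ordered by explained variance (second-order statistics only).
    # ICA: basis chosen to maximize the non-Gaussianity of the projected data.
    from sklearn.decomposition import PCA, FastICA

    pca = PCA(n_components=10).fit(patches)
    ica = FastICA(n_components=10, max_iter=1000).fit(patches)

    s = ica.transform(patches)             # independent components of each patch
    x_hat = ica.inverse_transform(s)       # generative model: patch ~ mixing matrix times components
    print(abs(patches - x_hat).mean())     # reconstruction error with only 10 components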

The second important element in our architecture is the neural classifier. We cannot deny that one of the main reasons for using this kind of classifier is our group's previous experience with such systems, which from our viewpoint is a clear advantage. In addition, these systems have characteristics that make them especially well suited to this problem. Perhaps the best known is their ability to adjust their operation by means of "samples"; colloquially, we can say they have learning capacity. They also have another important characteristic, known as generalization capacity: the ability to provide a correct response for a completely unseen input. As can be inferred, this makes a neural classifier a great choice for a classification task like ours, where input data variability is very high (contrast variations, mammography errors, artifacts, etc.).
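
A minimal sketch of this classification stage follows, using scikit-learn's MLPClassifier as a stand-in for the neural classifier and random placeholder labels; the held-out test split only illustrates the generalization idea, not real diagnostic performance.

    # Train a small neural network on the ICA features and test it on unseen patches.
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    labels = rng.integers(0, 2, size=len(features))   # placeholder labels: 1 = microcalcification cluster
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)                    # "learning capacity": weights adjusted from samples
    print(clf.score(X_te, y_te))           # "generalization": response to inputs never seen in training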



Last Updated (Thursday, 25 November 2010)