Cellular Automata Model of Radiation Therapy in Cervical Cancer
Robert N. Donnelly ('09), Katherine Belsky (Scripps '08), Hana Ueda (UC Berkeley '08), Ami Radunskaya, L.G. de Pillis (HMC)
Spatial interactions and the local chemical environment can play a major role both in the growth of a tumor and in its resistance to radiation treatment. We propose a cellular automaton (CA) model of radiation therapy in early cervical cancer. This model not only incorporates cellular metabolism as a function of nutrient concentration, but also models diffusion of these nutrients with a modified random walk. In particular, since tissue oxygenation plays a major role in the success of radiation therapy, we have included realistic determination of oxygen levels and the formation of a hypoxic core. Radiation damage is determined using an empirically supported modified linear-quadratic (LQ) model. Our model can simulate fractionated doses of both external beam radiotherapy and brachytherapy, similar to in vivo treatments described in the medical literature. A better understanding of the interactions between a tumor and its environment may not only deepen our understanding of tumor growth but also allow us to better predict the effect of radiation therapy on a given tumor. Successful modeling of the effects of radiation therapy on tumor cells and normal cells may prove helpful in optimizing radiation treatment protocols to minimize collateral damage to healthy cells while still effectively treating the cancer.
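The standard linear-quadratic model underlying the radiation-damage step can be sketched in a few lines. This is the textbook LQ form, not the authors' modified version; the parameter values (alpha = 0.3/Gy, alpha/beta = 10 Gy) and the 30 x 2 Gy fractionation schedule are illustrative assumptions, not fitted quantities from the model.

```python
import math

def lq_survival(dose, alpha, beta):
    """Surviving fraction of cells after a single radiation dose (in Gy),
    per the standard linear-quadratic (LQ) model: exp(-(a*D + b*D^2))."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

def fractionated_survival(n_fractions, dose_per_fraction, alpha, beta):
    """Surviving fraction after n equal, well-separated fractions; full
    sublethal-damage repair between fractions is assumed, so survival
    simply multiplies across fractions."""
    return lq_survival(dose_per_fraction, alpha, beta) ** n_fractions

# Illustrative (not fitted) radiosensitivity parameters: alpha/beta = 10 Gy.
alpha, beta = 0.3, 0.03
single = lq_survival(2.0, alpha, beta)                 # one 2 Gy fraction
course = fractionated_survival(30, 2.0, alpha, beta)   # 30 x 2 Gy = 60 Gy total
```

In a CA setting, a survival probability like `single` would be evaluated per cell at each fraction, with hypoxic cells receiving a reduced effective dose.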
Funding provided by: HHMI, NSF-REU (Claremont Colleges Mathematics)
Chain Partitions of Normalized Matching Posets
Elinor Gardner Escamilla ('09), Andreea Nicolae ('08), Jordan Tirrell (Lafayette '08), Paul Salerno ('09), Shahriar Shahriari
Imagine a network of computers. Visually, we could represent these computers by points, and if two computers can communicate, we draw an edge between them and say they are connected. A three-dimensional cube, for example, gives a network of eight computers with a total of 12 connections. In our research we concentrated on certain network configurations that have higher-dimensional cubes as a special case. Technically these networks are called normalized matching posets. In any network, a list of some of the computers with the property that each computer is connected to the next one is called a chain. For example, if 1 is connected to 2, which in turn is connected to 3, then we have a chain of size 3. A still unproven mathematical conjecture from 1975 claims that normalized matching posets can be partitioned into chains with particularly nice properties. In our research this summer, we were able to prove the validity of the conjecture for special classes of normalized matching posets.
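The cube example above can be made concrete. Below, the eight vertices of the three-dimensional cube are represented as the subsets of {1, 2, 3} (two vertices are connected when one subset adds a single element to the other), and a chain partition is written down by hand and checked. The specific partition chosen here is just one illustration; it is not the construction studied in the research.

```python
from itertools import combinations

def connected(a, b):
    """Two cube vertices (subsets) are joined by an edge exactly when
    one is obtained from the other by adding a single element."""
    small, large = sorted((a, b), key=len)
    return small < large and len(large) == len(small) + 1

# A chain partition of the 3-dimensional cube: every one of the 8
# vertices appears in exactly one chain.
partition = [
    [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})],
    [frozenset({2}), frozenset({2, 3})],
    [frozenset({3}), frozenset({1, 3})],
]

# Each consecutive pair within a chain must be connected...
valid_chains = all(connected(c[i], c[i + 1])
                   for c in partition for i in range(len(c) - 1))
# ...and the chains together must cover each vertex exactly once.
covered = [v for c in partition for v in c]
vertices = {frozenset(s) for r in range(4) for s in combinations({1, 2, 3}, r)}
is_partition = len(covered) == 8 and set(covered) == vertices
```

Note the chain sizes 4, 2, 2: the chains are balanced around the middle levels of the cube, which is the flavor of "particularly nice" partition the 1975 conjecture asks for in general.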
Funding provided by: SURP (Seaver - EE); SURP (PS); NSF-REU (SS)
Variance and Bias in the Correlation of Sample Quantiles
Austen Wallace Head ('08), Johanna Hardin, Steve Adolph (HMC)
This project focuses on the correlation of data which contain sampling error. The correlation of measures such as means or quantiles generated by small samples routinely underestimates the correlation of the populations from which those measures come. When looking at sample means, there is a well-established correction coefficient to account for this bias. Last summer I derived a similar equation to correct the bias in the correlation of sample maxima, but my correction relies on the true variance of participants' maxima, which is often impossible to determine. This summer I have expanded my previous research. First, I estimated the true variance of the maximum participant value in two ways: by fitting distributions to each participant's set of trials and by using a model traditionally used to fit enzyme kinetic data (the Michaelis-Menten model). Second, I have expanded my correlation correction formula to address bias associated with the correlation of other quantiles.
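The attenuation phenomenon for sample means, and the classical correction mentioned above, can be demonstrated with a small simulation. This sketch illustrates only the well-established mean case, not the author's quantile correction; the population correlation (0.8), trial count, and noise level are arbitrary illustrative choices, and the reliability is known here only because we built the simulation.

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

random.seed(1)
rho, n_trials, noise_sd = 0.8, 5, 1.0    # illustrative values, not fitted

true_x, true_y, mean_x, mean_y = [], [], [], []
for _ in range(2000):                     # 2000 simulated participants
    x = random.gauss(0, 1)
    y = rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    true_x.append(x)
    true_y.append(y)
    # Each participant is observed only through the mean of a few noisy trials.
    mean_x.append(sum(x + random.gauss(0, noise_sd)
                      for _ in range(n_trials)) / n_trials)
    mean_y.append(sum(y + random.gauss(0, noise_sd)
                      for _ in range(n_trials)) / n_trials)

r_true = pearson(true_x, true_y)          # close to rho
r_obs = pearson(mean_x, mean_y)           # attenuated toward zero
# Classical disattenuation: divide by the (here, known) reliability of the means.
reliability = 1 / (1 + noise_sd ** 2 / n_trials)
r_corrected = r_obs / reliability
```

The difficulty the abstract describes is precisely that for maxima (or other quantiles) the analogue of `reliability` is unknown and must itself be estimated from each participant's trials.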
Funding provided by: HHMI (Harvey Mudd College)
Adaptive Nonparametric Tests for the Two-Sample Location Model with Applications to Microarray Data
Patrick K. Kimes ('09), John Kloke
Analysis of microarray experiments has become an important component of biological research. Microarray analysis primarily aims to test for genes that are differentially expressed, for example by comparing gene expression levels in cancerous and normal cells. Even though much work has gone into normalization of microarray data, the expression values for many genes still do not follow a Gaussian distribution. Simple linear rank statistics, which require the selection of a score function, generalize the Wilcoxon rank sum for testing for a difference in two populations or treatment groups. For genes that follow a Gaussian distribution, using normal scores yields optimal inference, while with heavier-tailed distributions, using Wilcoxon or sign scores results in a more efficient use of the microarrays. Using simple data-based simulation methods applied to a null microarray dataset, we explored power for the two-sample location problem. Our focus is on adaptive procedures, a technique by which score selection depends on the data.
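A simple linear rank statistic with interchangeable score functions can be sketched as follows. The three score functions are standard textbook forms (Wilcoxon, sign, and normal scores); the adaptive selection rule studied in the project is not reproduced here, and the toy data are invented for illustration.

```python
from statistics import NormalDist

def ranks(values):
    """Ranks 1..N (ties ignored for simplicity)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def rank_statistic(x, y, score):
    """Simple linear rank statistic: sum of scored ranks of the x-sample
    within the combined sample. Values far from 0 suggest a location shift."""
    combined = list(x) + list(y)
    r, N = ranks(combined), len(combined)
    return sum(score(r[i], N) for i in range(len(x)))

# Three standard centered score functions a(r), evaluated at r/(N+1).
def wilcoxon_score(r, N):
    return r / (N + 1) - 0.5

def sign_score(r, N):
    return (r / (N + 1) > 0.5) - (r / (N + 1) < 0.5)   # -1, 0, or +1

def normal_score(r, N):
    return NormalDist().inv_cdf(r / (N + 1))           # optimal for Gaussian data

# Toy expression values: the "treatment" sample x is shifted upward.
x, y = [5.1, 6.0, 7.2, 8.3], [1.0, 2.4, 3.3, 4.1]
shifts = [rank_statistic(x, y, s)
          for s in (wilcoxon_score, sign_score, normal_score)]
```

An adaptive procedure would inspect the pooled data (e.g. its tail weight) first and only then pick one of these score functions for the actual test.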
Funding provided by: SURP
An Analysis of the Bi-Weight Metric with Microarray Data
Robert Jacob Kurtzman ('08), Johanna Hardin
For the specific problem of microarray analysis, it is important to provide a foundation for biologists clustering gene expressions. Researchers have delved into every step of the clustering process and have shown marked results. Nonetheless, there is no consensus regarding which distance metric to use in creating a dissimilarity matrix. Typically, distance metrics are sensitive to outliers and/or do not take into account the structure of the data. Researchers have begun to address this problem; Hardin (2007) showed that her bi-weight metric takes into account the structure of the data and is more robust than Pearson's correlation and other non-parametric correlation measures. The goal of this project is to show that the bi-weight metric is more efficient and robust than other metrics for clustering microarray data. We compare the metric to Pearson's correlation and the percentage bend correlation, both commonly used measures in microarray analysis. We use the PAM and HOPACH algorithms to cluster our data. This project's research was done in R, a descendant of the S language widely used in the statistics community.
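The motivation for robust correlation-based metrics can be seen with one corrupted value. In this sketch a rank correlation (Spearman) stands in for the robust alternatives; it is not the bi-weight or percentage bend measure from the project, and the data are invented, but the failure mode of Pearson's correlation it exposes is the same one those measures address.

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / math.sqrt(sum((a - mx) ** 2 for a in xs)
                           * sum((b - my) ** 2 for b in ys))

def spearman(xs, ys):
    """Rank correlation: Pearson applied to the ranks (no ties assumed)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rk, i in enumerate(order, 1):
            r[i] = rk
        return r
    return pearson(ranks(xs), ranks(ys))

x = list(range(1, 11))
y = list(range(1, 11))
r_clean = pearson(x, y)   # exactly 1: a perfect linear relationship

y[-1] = -50               # a single outlying expression value
r_out = pearson(x, y)     # dragged below zero by that one point
s_out = spearman(x, y)    # the rank-based measure stays positive
```

A dissimilarity such as 1 - correlation built from the non-robust measure would therefore place these two genes in different clusters on the strength of a single bad value.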
Funding provided by: SURP (Richter)
Effect of Walking Path on Gait Stride Variability
Juan Diego Rodriguez ('09), Ajoy Vase ('07), Ami Radunskaya, John Milton (JSD)
It has been reported that during self-paced human walking the variability in inter-stride intervals exhibits fractal dynamics characterized by long-range correlations with power-law decay having exponent α. We measured gait stride variability on four walking paths having different shapes: ovals of differing major and minor axes and different surface roughness. Gait stride times were measured using force-sensing resistors placed in the sole of a custom-built wool felt slipper as described previously. It was observed that the auto-correlation functions calculated for inter-stride intervals resembled exponential cosines in which the period was related to the times between turns on the walking paths. No periodic component was observed for a subject walking on a treadmill. The range of α's reported previously for gait stride variability could be reproduced by adding white noise of varying intensities to a cosine function. These observations suggest that it may be possible to directly assess gait stability by measuring stride variability on specially constructed walking paths which introduce sudden changes in direction.
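The cosine-plus-white-noise surrogate described above can be reproduced in a few lines. The period (25 strides per lap) and noise intensity are arbitrary illustrative choices, not values from the experiment; the point is only that the sample autocorrelation of such a signal oscillates at the lap period, as observed for the oval walking paths.

```python
import math
import random

def autocorr(series, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(series)
    m = sum(series) / n
    num = sum((series[k] - m) * (series[k + lag] - m) for k in range(n - lag))
    den = sum((v - m) ** 2 for v in series)
    return num / den

random.seed(2)
period = 25      # illustrative: strides between successive turns on the path
strides = [math.cos(2 * math.pi * k / period) + random.gauss(0, 0.3)
           for k in range(500)]

r_full = autocorr(strides, period)        # strong positive peak one lap later
r_half = autocorr(strides, period // 2)   # strongly negative half a lap later
```

Increasing the noise standard deviation damps these correlation peaks, which is the mechanism by which varying noise intensity reproduces a range of scaling exponents.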
Funding provided by: NSF (AR); SURP (Craddock-McVicar - AV)
Edge Detection by Multi-Dimensional Wavelets
Julie Ann Siloti ('08), Marlana Anderson (Albany State University, '08), Katherine Maschmeyer (Washington University, '08), Pierre Gremaud*, Chris Brasfield*, Kevin McGoff**
*Department of Mathematics, North Carolina State University, Raleigh, NC **Department of Mathematics, University of Maryland, College Park, MD
It is well known that one-dimensional wavelet techniques are suboptimal in the representation of images. Recently, a new generation of intrinsically two-dimensional wavelets, e.g. shearlets, has been introduced to alleviate these deficiencies. In this project, new edge detection methods were developed based on the shearlet transform. As a refinement of these methods, subdomain decomposition was introduced to preserve less dominant edges. Furthermore, several basic post-processing schemes were used to provide more distinct edges. All of the above methods were applied to both artificially generated and natural images. In order to measure the accuracy of the various methods, the Hausdorff distance between the actual and approximate edges of artificial images was computed. Through this analysis, it was concluded that edge detection methods based on shearlets are at least as accurate as popular methods, such as Canny and Sobel. On images with sharp corners, the results suggest that shearlet methods may actually improve upon traditional methods.
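The accuracy measure used above, the Hausdorff distance between two edge sets, is easy to state in code. This is the standard symmetric definition for finite point sets; the two toy "edges" below are invented for illustration and are not from the project's test images.

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets,
    e.g. detected edge pixels vs. ground-truth edge pixels."""
    def directed(P, Q):
        # Worst-case distance from a point of P to its nearest point of Q.
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

truth = [(0, 0), (1, 0), (2, 0)]   # a short horizontal ground-truth edge
found = [(0, 0), (1, 1), (2, 0)]   # a detection that displaces one pixel
error = hausdorff(truth, found)
```

Because it takes a worst-case rather than an average, the Hausdorff distance penalizes even a single badly misplaced edge pixel, which makes it a demanding yardstick for comparing shearlet, Canny, and Sobel detectors.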
Funding provided by: NSF-REU (North Carolina State University)