Extending the Recursive Jensen-Shannon Segmentation of Biological Sequences
Abstract
In this paper, we extend a previously developed recursive entropic segmentation scheme for applications to biological sequences. Instead of Bernoulli chains, we model the statistically stationary segments in a biological sequence as Markov chains, and define a generalized Jensen-Shannon divergence for distinguishing between two Markov chains. We then undertake a mean-field analysis, based on which we identify pitfalls associated with the recursive Jensen-Shannon segmentation scheme. Following this, we explain the need for segmentation optimization, and describe two local optimization schemes for improving the positions of domain walls discovered at each recursion stage. We also develop a new termination criterion for recursive Jensen-Shannon segmentation based on the strength of statistical fluctuations up to a minimum statistically reliable segment length, avoiding the need for unrealistic null and alternative segment models of the target sequence. Finally, we compare the extended scheme against the original scheme by recursively segmenting the Escherichia coli K-12 MG1655 genome.
Genomic sequences, Markov chain, segmentation, model selection, mean-field analysis.
I. Introduction
Large-scale genomic rearrangements, such as transpositions, inversions, and horizontal gene transfer (HGT), play important roles in the evolution of bacteria. Biological functions can be lost or gained in such recombination events. For example, it is known that virulent genes are frequently found near the boundaries of HGT islands, suggesting that virulence arises from the incorporation of foreign genetic material [1, 2, 3]. In a recent essay, Goldenfeld and Woese argued that the mosaic nature of bacterial genomes resulting from such large-scale genomic rearrangements requires us to rethink familiar notions of phylogeny and evolution [4]. As a first step in unraveling the complex sequence of events that shape the probable evolutionary history of a bacterium, we need to identify the recombination sites bounding recombined segments, which are frequently distinguishable statistically from their flanking sequences.
We do this by modeling the native and recombined segments in a genome as stationary Markov chains. The boundaries between such statistically stationary segments (or domains) are called change points in the statistical modeling literature, or domain walls in the statistical physics literature. Given a nucleotide or amino acid sequence of length $N$, the problem of finding the $M$ segments generated by stationary Markov chains is called segmentation [5, 6]. Many segmentation schemes can be found in the literature (see the mini-review by Braun and Müller [7]). For $M$ and the domain wall positions both unknown, Gionis and Mannila showed that finding the optimal segmentation for a given sequence is NP-hard [8]. Therefore, some segmentation schemes assume $M = 2$, while others assume that $M$ is small, and known beforehand.
Of these, the recursive segmentation scheme introduced by Bernaola-Galván et al. [9, 10] is conceptually appealing because of its simplicity. In this scheme, a given sequence is recursively partitioned into finer and finer segments, all modeled as Bernoulli chains, based on their Jensen-Shannon divergences. The unknown number of segments $M$ (assumed to be equal to the number of segment types) is then discovered when segmentation is terminated based on an appropriate statistical criterion. In this paper, we describe our extensions to this recursive segmentation scheme. In Sec. II, we explain how Markov chains model the short-range correlations in a given sequence better than Bernoulli chains, and thereafter generalize the Jensen-Shannon divergence to distinguish between two Markov chains. In Sec. III, we carry out a mean-field analysis to better understand the recursive segmentation scheme and its pitfalls, before describing two local segmentation optimization algorithms for improving the statistical significance of domain walls in Sec. IV. We also develop in Sec. V a new termination criterion, based on the intrinsic statistical fluctuations of the sequence to be segmented, for the recursive segmentation scheme. Finally, we compare our extended scheme against the original scheme by recursively segmenting the Escherichia coli K-12 MG1655 genome in Sec. VI, before concluding in Sec. VII.
II. Modeling Segments as Markov Chains
In the earliest recursive segmentation scheme proposed by Bernaola-Galván et al. [9, 10], the divergence between 1-mer statistics from two or more subsequences of a given sequence is examined. These subsequences are modeled as Bernoulli chains (equivalent to Markov chains of order $K = 0$), even though it is well known that biological sequences exhibit dinucleotide correlations and codon biases [11, 12, 13, 14, 15]. Later versions of the recursive segmentation scheme examine higher-order subsequence statistics, so as to take advantage of the different codon usage in coding and noncoding regions [16, 17, 18], but these statistics are still assumed to be drawn from Bernoulli chains, albeit with extended alphabets. The first study we are aware of modeling subsequences as Markov chains for recursive segmentation is the work by Thakur et al. [19].
In this section, we will explain why the observed dinucleotide frequencies and codon biases in biological sequences can be better modeled by Markov chains of order $K > 0$ than by Bernoulli chains with the same high-order statistics. We will then generalize the Jensen-Shannon divergence, so that it can be used in entropic segmentation schemes to quantify the statistical difference between Markov chains of order $K$. Finally, we discuss the added modeling complexities associated with using Markov-chain orders that vary from segment to segment, and change when segments are further divided.
II-A. Markov Chains Versus Bernoulli Chains
Given a sequence $\mathbf{x} = x_1 x_2 \cdots x_N$, where the symbols $x_i$ are drawn from an alphabet $\mathcal{S}$ containing $S$ letters, we want to model $\mathbf{x}$ as being generated sequentially from a single stationary stochastic process. In the bioinformatics literature, $\mathbf{x}$ is usually modeled as a Bernoulli chain or as a Markov chain. For a Bernoulli chain, the symbols $x_i$ are obtained from $N$ independent trials, governed by the state probabilities
\[ p_s = P(x_i = s), \quad s \in \mathcal{S}, \tag{1} \]
whereas for a Markov chain of order $K$, the probability
\[ P(x_i = s \mid x_{i-K} \cdots x_{i-1} = s_1 \cdots s_K) \tag{2} \]
of finding the $i$th symbol to be $x_i = s$ is conditioned by the $K$ symbols preceding it. We call these the transition probabilities
\[ p_{s_1 \cdots s_K, s} = P(x_i = s \mid x_{i-K} \cdots x_{i-1} = s_1 \cdots s_K) \tag{3} \]
of going from the $K$-mer $\mathbf{s} = s_1 \cdots s_K$ to the 1-mer $s$. For the rest of this paper, we use the shorthand $\mathbf{s}s$ to represent the transition $s_1 \cdots s_K \to s$.
A Bernoulli chain over the alphabet $\mathcal{S}$ is equivalent to a Markov chain of order $K = 0$, and is completely uncorrelated, in the sense that $P(x_i = s \mid x_{i-K} \cdots x_{i-1}) = p_s$. Markov chains of order $K > 0$, on the other hand, contain short-range correlations, the scale of which is set by $K$, but no long-range correlations. Therefore, if the target sequence $\mathbf{x}$ contains short-range correlations, a Markov chain of nonzero order models $\mathbf{x}$ better than a Bernoulli chain over the same alphabet. It is also possible to capture short-range correlations in $\mathbf{x}$ using Bernoulli chains over extended alphabets. For example, a Markov chain of order $K$ over $\mathcal{S}$ and a Bernoulli chain over the extended alphabet $\mathcal{S}^{K+1}$ can both be used to model the $(K+1)$-mer statistics of $\mathbf{x}$. They are, however, not equivalent.
Let $f_{\mathbf{s}s}$ be the number of times the $(K+1)$-mer $\mathbf{s}s$ appears in $\mathbf{x}$. To model $\mathbf{x}$ as a Markov chain of order $K$ over $\mathcal{S}$, we must use these $(K+1)$-mer counts to make maximum-likelihood estimates
\[ \hat{p}_{\mathbf{s}s} = \frac{f_{\mathbf{s}s}}{\sum_{s' \in \mathcal{S}} f_{\mathbf{s}s'}} \tag{4} \]
of the $S^K(S-1)$ independent transition probabilities, subject to the normalization
\[ \sum_{s \in \mathcal{S}} \hat{p}_{\mathbf{s}s} = 1 \quad \text{for every } \mathbf{s} \in \mathcal{S}^K, \tag{5} \]
i.e. every sequence position in $\mathbf{x}$ contributes one count to the model estimation. A Bernoulli chain over the extended alphabet $\mathcal{S}^{K+1}$, on the other hand, contains $S^{K+1} - 1$ independent state probabilities, which is $S^K - 1$ more independent parameters than the Markov chain of order $K$.
Besides having many more independent parameters, there is also the subtle question of how to estimate the state probabilities, if we are to model $\mathbf{x}$ using the Bernoulli chain over $\mathcal{S}^{K+1}$. If the state probabilities $P_{\mathbf{s}s}$ were given, we would generate a Bernoulli chain by sequentially appending $(K+1)$-mers drawn according to $P_{\mathbf{s}s}$. This suggests that we partition the observed sequence $\mathbf{x}$ into nonoverlapping $(K+1)$-mers, and use their counts $F_{\mathbf{s}s}$ to determine the maximum-likelihood state probabilities
\[ \hat{P}_{\mathbf{s}s} = \frac{F_{\mathbf{s}s}}{\sum_{\mathbf{s}'s'} F_{\mathbf{s}'s'}}. \tag{6} \]
However, only one in $K + 1$ sequence positions in $\mathbf{x}$ contributes a count to the model estimation. The result is that such a Bernoulli chain model of $\mathbf{x}$ would be much less statistically significant than a $K$th-order Markov chain model of $\mathbf{x}$, since we are using a smaller number of counts to estimate a larger number of parameters. Alternatively, we can note the fact that there are $K + 1$ different ways to partition $\mathbf{x}$ into nonoverlapping $(K+1)$-mers. If we combine the counts from these $K + 1$ different partitions, then every sequence position contributes one count to model estimation, just as for the Markov chain model.
No matter how we perform the model estimation, two adjacent $(K+1)$-mers in a Bernoulli chain over $\mathcal{S}^{K+1}$ are guaranteed to be uncorrelated. This means that correlations at the $(K+1)$-mer level in $\mathbf{x}$ can only be partly captured by a maximum-likelihood Bernoulli chain over $\mathcal{S}^{K+1}$. In contrast, a maximum-likelihood Markov chain of order $K$ over $\mathcal{S}$ will capture most of the $(K+1)$-mer correlations in $\mathbf{x}$. We can ensure that most or all of the $(K+1)$-mer correlations in $\mathbf{x}$ are captured by a Bernoulli chain model, by going to even larger extended alphabets, but as we have seen above, the statistical quality of the model deteriorates rapidly as we try to model counts with an exponentially increasing number of parameters. Based on the discussions above comparing Markov chain and Bernoulli chain models, we argue that a Markov chain of order $K$ is a more compact model of the $(K+1)$-mer statistics of an observed sequence $\mathbf{x}$.
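As an illustration of the maximum-likelihood estimation of Eq. (4), the transition probabilities can be tallied from overlapping $(K+1)$-mer counts in a few lines of Python (a minimal sketch; the function name and toy sequence are our own):

```python
from collections import Counter

def transition_mle(seq, K):
    """Maximum-likelihood transition probabilities of an order-K Markov
    chain (Eq. (4)), estimated from overlapping (K+1)-mer counts, so
    that every sequence position contributes one count."""
    counts = Counter(seq[i:i + K + 1] for i in range(len(seq) - K))
    totals = Counter()
    for kmer, f in counts.items():
        totals[kmer[:K]] += f  # total count of the K-mer context
    return {(kmer[:K], kmer[K]): f / totals[kmer[:K]]
            for kmer, f in counts.items()}

p = transition_mle("AACGTAACGTAACG", K=1)
```

The probabilities out of each $K$-mer context sum to one, as demanded by the normalization of Eq. (5).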
II-B. Generalizing the Jensen-Shannon Divergence
After the maximum-likelihood model of an observed sequence $\mathbf{x}$ is determined, we can compute its sequence likelihood within the model. This is the probability of getting $\mathbf{x}$, if we use the maximum-likelihood model to generate random sequences of length $N$. For a Bernoulli chain model of $\mathbf{x}$ over $\mathcal{S}$, we find the sequence likelihood to be given by
\[ \hat{P}_1(\mathbf{x}) = \prod_{s \in \mathcal{S}} \hat{p}_s^{\,f_s}, \tag{7} \]
where $f_s$ is the number of times the 1-mer $s$ appears in $\mathbf{x}$.
Now, if we suspect that $\mathbf{x}$ actually comprises two statistically stationary segments $\mathbf{x}^L = x_1 \cdots x_n$ and $\mathbf{x}^R = x_{n+1} \cdots x_N$, with domain wall at the cursor position $n$, we must determine one maximum-likelihood Bernoulli chain model for $\mathbf{x}^L$, and another for $\mathbf{x}^R$. This involves tallying the 1-mer counts $f_s^L$ and $f_s^R$ for $\mathbf{x}^L$ and $\mathbf{x}^R$ respectively, and then estimating the maximum-likelihood state probabilities
\[ \hat{p}_s^L = \frac{f_s^L}{n}, \qquad \hat{p}_s^R = \frac{f_s^R}{N - n}. \tag{8} \]
In this two-Bernoulli-segment model of $\mathbf{x}$, the sequence likelihood is given by
\[ \hat{P}_2(\mathbf{x}; n) = \prod_{s \in \mathcal{S}} \left( \hat{p}_s^L \right)^{f_s^L} \left( \hat{p}_s^R \right)^{f_s^R}. \tag{9} \]
Because we have more free parameters in the two-segment model to fit the 1-mer statistics of the observed sequence $\mathbf{x}$, we have $\hat{P}_2(\mathbf{x}; n) \geq \hat{P}_1(\mathbf{x})$ for all $n$. The Jensen-Shannon divergence [20]
\[ \Delta(n) = \log \frac{\hat{P}_2(\mathbf{x}; n)}{\hat{P}_1(\mathbf{x})} = \sum_{s \in \mathcal{S}} \left[ f_s^L \log \hat{p}_s^L + f_s^R \log \hat{p}_s^R - f_s \log \hat{p}_s \right], \tag{10} \]
a symmetric variant of the relative entropy known more commonly as the Kullback-Leibler divergence, is then a quantitative measure of how much better the two-segment model fits $\mathbf{x}$, compared to the one-segment model, subject to the constraints
\[ f_s = f_s^L + f_s^R \quad \text{for all } s \in \mathcal{S}. \tag{11} \]
In the literature, the Jensen-Shannon divergence is only defined for proper distributions $\{\hat{p}_s\}$, $\{\hat{p}_s^L\}$, and $\{\hat{p}_s^R\}$, for which
\[ \sum_{s \in \mathcal{S}} \hat{p}_s = \sum_{s \in \mathcal{S}} \hat{p}_s^L = \sum_{s \in \mathcal{S}} \hat{p}_s^R = 1. \tag{12} \]
When we want to compare the $(K+1)$-mer statistics of the subsequences $\mathbf{x}^L$ and $\mathbf{x}^R$, by modeling these as Bernoulli chains over the extended alphabet $\mathcal{S}^{K+1}$, the expression for the Jensen-Shannon divergence becomes
\[ \Delta(n) = \sum_{\mathbf{s}s \in \mathcal{S}^{K+1}} \left[ F_{\mathbf{s}s}^L \log \hat{P}_{\mathbf{s}s}^L + F_{\mathbf{s}s}^R \log \hat{P}_{\mathbf{s}s}^R - F_{\mathbf{s}s} \log \hat{P}_{\mathbf{s}s} \right], \tag{13} \]
comparing the proper distributions $\{\hat{P}_{\mathbf{s}s}^L\}$ and $\{\hat{P}_{\mathbf{s}s}^R\}$ against $\{\hat{P}_{\mathbf{s}s}\}$, which satisfy the normalization condition
\[ \sum_{\mathbf{s}s \in \mathcal{S}^{K+1}} \hat{P}_{\mathbf{s}s} = \sum_{\mathbf{s}s \in \mathcal{S}^{K+1}} \hat{P}_{\mathbf{s}s}^L = \sum_{\mathbf{s}s \in \mathcal{S}^{K+1}} \hat{P}_{\mathbf{s}s}^R = 1. \tag{14} \]
As we have explained, the $(K+1)$-mer statistics of $\mathbf{x}$, $\mathbf{x}^L$, and $\mathbf{x}^R$ are better modeled as order-$K$ Markov chains over $\mathcal{S}$. These maximum-likelihood one-segment and two-segment Markov-chain models are determined by tallying the transition counts $f_{\mathbf{s}s}$, $f_{\mathbf{s}s}^L$, and $f_{\mathbf{s}s}^R$, and estimating the transition probabilities $\hat{p}_{\mathbf{s}s}$, $\hat{p}_{\mathbf{s}s}^L$, and $\hat{p}_{\mathbf{s}s}^R$ according to Eq. (4). However, the full sets $\{\hat{p}_{\mathbf{s}s}\}$, $\{\hat{p}_{\mathbf{s}s}^L\}$, and $\{\hat{p}_{\mathbf{s}s}^R\}$ of transition probabilities are not proper distributions, because they sum to
\[ \sum_{\mathbf{s} \in \mathcal{S}^K} \sum_{s \in \mathcal{S}} \hat{p}_{\mathbf{s}s} = S^K. \tag{15} \]
Rather, we must think of each of them as $S^K$ sets of proper distributions, one for each $K$-mer context $\mathbf{s}$, in which case the Jensen-Shannon divergence that simultaneously compares all $S^K$ pairs of distributions generalizes to
\[ \Delta(n) = \sum_{\mathbf{s} \in \mathcal{S}^K} \sum_{s \in \mathcal{S}} \left[ f_{\mathbf{s}s}^L \log \hat{p}_{\mathbf{s}s}^L + f_{\mathbf{s}s}^R \log \hat{p}_{\mathbf{s}s}^R - f_{\mathbf{s}s} \log \hat{p}_{\mathbf{s}s} \right]. \tag{16} \]
A similar expression for $\Delta(n)$ was derived by Thakur et al. [19].
Just as for Bernoulli-chain models, the Jensen-Shannon divergence in Eq. (16) for Markov-chain models of $\mathbf{x}$, $\mathbf{x}^L$, and $\mathbf{x}^R$ is also the log ratio of the one-segment and two-segment sequence likelihoods
\[ \hat{P}_1(\mathbf{x}) = \prod_{\mathbf{s} \in \mathcal{S}^K} \prod_{s \in \mathcal{S}} \hat{p}_{\mathbf{s}s}^{\,f_{\mathbf{s}s}}, \qquad \hat{P}_2(\mathbf{x}; n) = \prod_{\mathbf{s} \in \mathcal{S}^K} \prod_{s \in \mathcal{S}} \left( \hat{p}_{\mathbf{s}s}^L \right)^{f_{\mathbf{s}s}^L} \left( \hat{p}_{\mathbf{s}s}^R \right)^{f_{\mathbf{s}s}^R}, \tag{17} \]
respectively. Here let us note that the summary of transition counts is satisfied by more than one sequence, but all these sequences have the same sequence likelihoods within the maximumlikelihood Markov chain models.
II-C. Using Variable Markov-Chain Orders for Segmentation
When all the segments in a sequence are modeled as Bernoulli chains, we need to determine only the optimal domain wall positions $n_1 < n_2 < \cdots < n_{M-1}$. When the segments are modeled as Markov chains, we must also decide what Markov-chain orders to use for the segments. These two problems are coupled, and we shall understand in this subsection how to solve them, within the recursive segmentation framework, by considering the refinement of a sequence into two segments.
We start by considering the one-segment Markov-chain model of the sequence $\mathbf{x}$, which serves as the null model against which the two-segment Markov-chain model is compared. In principle, we can fit $\mathbf{x}$ to a one-segment Markov-chain model of any order $K$. The one-segment sequence likelihood $\hat{P}_1(\mathbf{x})$ given in Eq. (17) increases with $K$, because of the exponentially increasing number of free parameters available to fit the observed counts. To perform segmentation, we want $K$ to be as large as possible, so that we can distinguish segments differing only in their high-order statistics. To determine the maximum Markov-chain order justified by the statistics of $\mathbf{x}$, we compare the penalized sequence likelihoods for various $K$. The negative logarithms of these penalized sequence likelihoods are also called the information criteria
\[ \mathrm{IC}(K) = -\log \hat{P}_1(\mathbf{x}) + C(N, K), \tag{18} \]
where $C(N, K)$ is a penalty function. Common information criteria found in the literature are the Akaike information criterion (AIC) [21], the Schwarz information criterion (SIC) [22], and the Bayes information criterion (BIC) [23]. For a Markov chain of length $N$ and order $K$ over an alphabet of $S$ letters, these differ in their penalty functions
\[ C_{\mathrm{AIC}}(N, K) = S^K(S - 1), \qquad C_{\mathrm{SIC}}(N, K) = C_{\mathrm{BIC}}(N, K) = \frac{S^K(S - 1)}{2} \log N. \tag{19} \]
In particular, Katz showed that the BIC gives an unbiased estimate of the true order of a Markov chain [23], so we use the Markov-chain order $K^*$ minimizing the BIC as the order of our one-segment model of $\mathbf{x}$.
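The order selection just described can be sketched as follows (illustrative Python; we take the BIC penalty to be half the parameter count times $\log N$, per Eq. (19), and the function names are our own):

```python
import math
from collections import Counter

def bic(seq, K, S=4):
    """BIC of the order-K Markov-chain model of seq (Eq. (18)): minus
    the maximum log likelihood, plus half the number of free parameters,
    S**K * (S - 1), times the log of the sequence length (Eq. (19))."""
    counts = Counter(seq[i:i + K + 1] for i in range(len(seq) - K))
    totals = Counter()
    for kmer, f in counts.items():
        totals[kmer[:K]] += f
    log_likelihood = sum(f * math.log(f / totals[kmer[:K]])
                         for kmer, f in counts.items())
    return -log_likelihood + 0.5 * S**K * (S - 1) * math.log(len(seq))

def best_order(seq, K_max=5):
    """Markov-chain order K* minimizing the BIC."""
    return min(range(K_max + 1), key=lambda K: bic(seq, K))
```

A perfectly periodic sequence such as "ACGT" repeated is deterministic at order 1, so the BIC selects $K^* = 1$ for it, while a constant sequence yields $K^* = 0$.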
Since we do not know beforehand whether $\mathbf{x}$ contains statistically distinct segments, or whether the segments, if present, differ in low-order or high-order statistics, it is safest to search for them at $K^*$, the maximum statistically justifiable Markov-chain order. In the recursive segmentation scheme described in this paper, we would compute the Jensen-Shannon divergence $\Delta(n)$ given in Eq. (16) as a function of the cursor position $n$ at order $K^*$, and partition $\mathbf{x}$ into two segments $\mathbf{x}^L$ and $\mathbf{x}^R$, where the cut is made at the sequence position $n^*$ maximizing the order-$K^*$ Jensen-Shannon divergence. Once the segments are discovered, we can compute the maximum Markov-chain orders $K^{*L}$ and $K^{*R}$ that we would use to further partition $\mathbf{x}^L$ and $\mathbf{x}^R$ by minimizing their respective BICs. Because $\mathbf{x}^L$ and $\mathbf{x}^R$ are shorter than $\mathbf{x}$, we naturally have $K^{*L} \leq K^*$ and $K^{*R} \leq K^*$.
Next, suppose we believe that $\mathbf{x}$ is indeed composed of two statistically distinct segments, $\mathbf{x}^L$ and $\mathbf{x}^R$, but $n^*$ is not the best position for the domain wall between them. To find a better position for the domain wall between $\mathbf{x}^L$ and $\mathbf{x}^R$, we can compute the two-segment sequence likelihoods $\hat{P}_2(\mathbf{x}; n)$ at various cursor positions $n$, using an order-$K^{*L}$ model for $\mathbf{x}^L$, and an order-$K^{*R}$ model for $\mathbf{x}^R$, and pick the new domain wall to be the sequence position maximizing $\hat{P}_2(\mathbf{x}; n)$. Alternatively, we can compute the Jensen-Shannon divergence spectrum at Markov-chain order $\min(K^{*L}, K^{*R})$, and accept as the new domain wall its divergence maximum. We choose $\min(K^{*L}, K^{*R})$ for doing so because this Markov-chain order is statistically justifiable in both segments, and also has a smaller uncertainty associated with the domain wall position, as discussed in Sec. III-A.
III. Mean-Field Picture of Recursive Jensen-Shannon Segmentation
In statistical physics, intrinsic fluctuations in the properties of a physical system (for example, the local density in a fluid) make its true behaviour difficult to analyze and understand. In many physical systems, however, a great deal about their properties can be understood from simplified mean-field pictures, where we ignore statistical fluctuations, and assume that these properties take on system-wide values. In the same spirit, we develop in this section a mean-field picture of the recursive Jensen-Shannon segmentation scheme proposed by Bernaola-Galván et al. [9, 10], and explain the need to move or remove domain walls that have lost statistical significance as the segmentation is recursively refined. We start by examining in Sec. III-A the general anatomy of a Jensen-Shannon divergence spectrum, and how transitions rare on one side of the cursor position give rise to ‘noise’ in the spectrum that clouds our understanding of recursive segmentation. We argue that a mean-field picture will be helpful, and thus proceed in Sec. III-B to define the appropriate mean-field limit, and thereafter perform a mean-field analysis of the recursive Jensen-Shannon segmentation scheme.
III-A. Influence of Rare Transitions on the Jensen-Shannon Divergence Spectrum
As discussed earlier, the most straightforward way to determine the optimal point to cut a given sequence into two segments is to compute the Jensen-Shannon divergence spectrum $\Delta(n)$ between the left segment $\mathbf{x}^L$ and the right segment $\mathbf{x}^R$ for all cursor positions $n$, and then determine the divergence maximum $n^*$ such that $\Delta(n^*) \geq \Delta(n)$ for all $n$. As we move away from $n^*$, each step we take results in the left and right transition counts increasing or decreasing by one transition, giving rise to discrete jumps in the Jensen-Shannon divergence. From the definition of the Jensen-Shannon divergence in Eq. (16), we find that
\[ \delta\Delta = \sum_{\mathbf{s} \in \mathcal{S}^K} \sum_{s \in \mathcal{S}} \left[ \delta\!\left( f_{\mathbf{s}s}^L \log \hat{p}_{\mathbf{s}s}^L \right) + \delta\!\left( f_{\mathbf{s}s}^R \log \hat{p}_{\mathbf{s}s}^R \right) \right] \tag{20} \]
for some given changes $\delta f_{\mathbf{s}s}^L = -\delta f_{\mathbf{s}s}^R$ to the transition counts. When the cursor position is shifted over by one base, from $n$ to $n + 1$, only a single transition $\mathbf{s}^* s^*$ is affected, for which $\delta f_{\mathbf{s}^* s^*}^L = +1$ and $\delta f_{\mathbf{s}^* s^*}^R = -1$. However, all transition probabilities associated with the $K$-mer $\mathbf{s}^*$ are affected, and we have to write the jump as
\[ \delta\Delta = \sum_{s \in \mathcal{S}} \left[ \delta\!\left( f_{\mathbf{s}^* s}^L \log \hat{p}_{\mathbf{s}^* s}^L \right) + \delta\!\left( f_{\mathbf{s}^* s}^R \log \hat{p}_{\mathbf{s}^* s}^R \right) \right], \tag{21} \]
where we demand that $\delta f_{\mathbf{s}^* s}^L = \delta f_{\mathbf{s}^* s}^R = 0$ for all $s \neq s^*$.
Noting that
\[ \hat{p}_{\mathbf{s}^* s}^L = \frac{f_{\mathbf{s}^* s}^L}{\sum_{s'} f_{\mathbf{s}^* s'}^L}, \qquad \hat{p}_{\mathbf{s}^* s}^R = \frac{f_{\mathbf{s}^* s}^R}{\sum_{s'} f_{\mathbf{s}^* s'}^R}, \tag{22} \]
and also
\[ \log\left(1 \pm \frac{1}{f}\right) \approx \pm\frac{1}{f} \quad \text{for } f \gg 1, \tag{23} \]
we simplify the expression for the divergence jump to
\[ \delta\Delta \approx \log \frac{\hat{p}_{\mathbf{s}^* s^*}^L}{\hat{p}_{\mathbf{s}^* s^*}^R}, \tag{24} \]
where we made use of the facts that $\sum_s f_{\mathbf{s}^* s}^L \gg 1$, $\sum_s f_{\mathbf{s}^* s}^R \gg 1$, and that the maximum-likelihood transition probabilities change negligibly under a unit shift of the cursor. As we can see, for a unit shift to the right, the sign and magnitude of the jump depend only on the transition probabilities, but not on the transition counts. Here let us distinguish between common and rare states, versus common and rare transitions. Common (or rare) states are $K$-mers $\mathbf{s}$ with high (or low) counts, while common (or rare) transitions are $(K+1)$-mers $\mathbf{s}s$ associated with high (or low) transition probabilities $\hat{p}_{\mathbf{s}s}$.
Fig. 1 shows a typical Jensen-Shannon divergence spectrum obtained from an artificial two-segment sequence. General downward trends left and right of the single domain wall are observed in the Jensen-Shannon divergence spectrum, though we sometimes find regions going against the trend locally. When this happens, it is statistically acceptable, and frequently desirable, to place additional domain walls in the sequence, even though in this example we know there is only one true domain wall. Spikes, which are large divergence jumps resulting from transitions rare in one of the segments, feature ubiquitously in the Jensen-Shannon divergence spectra of discrete sequences.
From this analysis, we learn that the uncertainty in determining the true domain wall position is dictated by the competition between common and rare transitions. The common transitions, which are present in larger numbers, determine the average jumps $\overline{\delta\Delta}^L$ and $\overline{\delta\Delta}^R$, and hence the general downward trends left and right of the domain wall. The rare transitions, on the other hand, are associated with spikes (local maxima) in the divergence spectrum. Specifically, rare transitions that are most asymmetrically distributed between the two segments are the most important, since they give rise to the largest spikes $\delta\Delta_{\max}$. Moving $m$ bases away from the true domain wall, we expect the Jensen-Shannon divergence to decrease by roughly $m\,\overline{\delta\Delta}$. If $m$ is small, $m\,\overline{\delta\Delta} < \delta\Delta_{\max}$, and a single maximal spike encountered within these $m$ bases will bring the divergence up to a value greater than that at the true domain wall, throwing off the cut that we make. In contrast, a maximal spike encountered after we have moved more than $\delta\Delta_{\max}/\overline{\delta\Delta}$ bases away from the true domain wall will not be able to raise the divergence beyond that observed at the true domain wall. The ratio $\delta\Delta_{\max}/\overline{\delta\Delta}$ therefore gives a quantitative measure of the uncertainty involved in determining the position of the true domain wall.
For a fixed sequence length $N$, the number of rare transitions increases, while the transition probabilities of the rarest transitions decrease, with increasing $K$. We thus find more and stronger spikes. For a fixed Markov-chain order $K$, the number of rare transitions remains more or less constant with decreasing $N$, but the transition probabilities of the rarest transitions decrease with decreasing $N$ as a result of stronger statistical fluctuations in the transition counts. We therefore find stronger spikes. The proliferation of strong spikes makes segmentation unreliable, and also distracts from our understanding of the recursive segmentation scheme. This is where a mean-field picture, within which we can study the progress of recursive segmentation in the absence of such statistical fluctuations, would be extremely helpful.
III-B. Jensen-Shannon Divergence Spectrum in the Mean-Field Limit
In a discrete genomic sequence of $N$ nucleotides, the sequence positions take on integer values. The frequencies of the various $(K+1)$-mers occurring within an interval $(i, j)$ of the sequence are also integers. From the previous subsection, we understood how a spike in the Jensen-Shannon divergence spectrum arises when there is a unit increment in a rare transition count. Such rare transitions, of course, occur infrequently along the sequence. If we can distribute the unit increment associated with a rare transition over the interval between two such transitions, the statistical effect of the increment will not be piled up over a single base as a spike. This can be done not just for rare transitions, but for all transitions. A continuum description of the sequence can then be obtained by allowing both the sequence positions and the statistical frequencies to vary continuously, as shown in Fig. 2.
Depending on how we break the unit increments over one base into fractional increments over many bases, there are many ways to redistribute the transition counts over the interval $(i, j)$ of the sequence. If our goal is to model this interval as a stationary Markov chain of order $K$, then in the mean-field limit we distribute the $(K+1)$-mer statistics within $(i, j)$ uniformly along the interval. In the mean-field limit so defined for $(i, j)$, the count of the transition $\mathbf{s}s$ within the subinterval $(i', j') \subseteq (i, j)$ is given by
\[ f_{\mathbf{s}s}(i', j') = \frac{j' - i'}{j - i}\, f_{\mathbf{s}s}(i, j), \tag{25} \]
where $f_{\mathbf{s}s}(i, j)$ is the net transition count within $(i, j)$. In this way, we remove local fluctuations in the $(K+1)$-mer statistics within the interval $(i, j)$.
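In code, Eq. (25) is just a proportional reallocation of each net count (a trivial sketch; the function name is ours):

```python
def meanfield_count(f_net, i, j, i2, j2):
    """Mean-field count (Eq. (25)): the net transition count f_net over
    the interval (i, j) is spread uniformly, so the subinterval (i2, j2)
    receives the length fraction of it."""
    assert i <= i2 <= j2 <= j
    return f_net * (j2 - i2) / (j - i)
```

Note that the mean-field count of a subinterval is in general fractional, which is precisely what smooths away the spikes of the discrete spectrum.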
In Fig. 3, we show as an example the Jensen-Shannon divergence spectrum for an artificial binary sequence consisting of ten mean-field segments. As we can see, peaks in the divergence spectrum occur only at domain walls, but not all domain walls appear as peaks in the divergence spectrum. Some domain walls manifest themselves as kinks in the divergence spectrum, while others, under special distributions of the segment statistics, may even have vanishing divergences. Performing recursive Jensen-Shannon segmentation on this ten-segment sequence, we recover all nine domain walls. These, however, are not discovered in the order of their true strengths (heights of the blue bars), measured by the Jensen-Shannon divergence of the pairs of segments they separate. For the example shown, the third strongest domain wall is discovered in the first recursion step, the second and fourth strongest domain walls in the second recursion step, and the strongest domain wall only in the third recursion step.
IV. Optimized Recursive Segmentation Scheme
A typical bacterial genome contains on the order of $10^6$ bases, within which we can statistically justify at most some large number $M$ of stationary segments. The segmentation problem thus involves optimally placing $M - 1$ domain walls $n_1 < n_2 < \cdots < n_{M-1}$. If we place no restrictions on where the domain walls can be, apart from the fact that each must be an integer between 1 and $N$, our high-dimensional optimization problem lives in an enormous maximal search space
\[ \Omega_{\max} = \{1, 2, \ldots, N\}^{M-1}, \tag{26} \]
consisting of $N^{M-1}$ points. The actual search space is much smaller, because we must satisfy the ordering constraint $n_1 < n_2 < \cdots < n_{M-1}$, and smaller still if we demand that, for statistical reasons, each separation $n_{m+1} - n_m$ must be larger than some minimum separation. But even this realistic search space is huge, and there is no good global algorithm for moving the $M - 1$ domain walls simultaneously from an initial point in the search space, whatever objective function (e.g. segment sequence likelihood, net strengths of the domain walls, net deviation of segments from stationarity) we choose for the optimization problem.
Clearly, some complexity reduction strategy is needed to solve this high-dimensional optimization problem. In the recursive Jensen-Shannon segmentation scheme of Bernaola-Galván et al. [9, 10], the complexity of the full problem is reduced by breaking the search for domain walls into stages. At each stage of the search, the computational complexity is further reduced by the restriction that there shall be at most one new domain wall between every existing pair of adjacent domain walls. We review this recursive segmentation scheme in Sec. IV-A, and explain that while the method is conceptually appealing, the mean-field analysis in Sec. III-B tells us that the segmentation obtained at each stage of the recursion is not optimal. We then describe in Sec. IV-B two local optimization schemes, in which a single domain wall is moved at a time, and discuss how these schemes can be incorporated into the recursive Jensen-Shannon segmentation framework.
IV-A. The Need for Segmentation Optimization
While not explicitly stated in their formulation, Bernaola-Galván et al. assumed that the segment structures of genomic sequences are organized hierarchically, i.e. strongly distinct segments are long, and contain less distinct segments that are shorter, which in turn contain even less distinct segments that are shorter still. Within this hierarchical picture of the genomic sequence, the recursive segmentation algorithm listed below:

1. for a given sequence $\mathbf{x}$, compute the Jensen-Shannon divergence $\Delta(n)$ at each cursor position $n$, and cut $\mathbf{x}$ at the divergence maximum $n^*$, where $\Delta(n^*) \geq \Delta(n)$ for all $n$, into two segments;

2. cut each segment at its divergence maximum into two subsegments, and continue doing so recursively;

3. whenever a new cut is made, check whether a termination criterion is met,
is expected to first discover the strongest domain walls, and then progressively weaker domain walls. The segmentation of a given sequence will in this way be progressively refined, by adding new domain walls to the existing set of domain walls at every recursion, until a terminal segmentation corresponding to a prescribed termination criterion is obtained.
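The recursive scheme above can be sketched for Bernoulli-chain ($K = 0$) segments as follows (a minimal illustration; the fixed divergence threshold is a crude stand-in for a proper termination criterion, and all names are our own):

```python
import math
from collections import Counter

def log_likelihood(seq):
    """Bernoulli (order-0) log likelihood of seq: sum of f log(f/N)."""
    N = len(seq)
    return sum(f * math.log(f / N) for f in Counter(seq).values())

def divergence_spectrum(seq):
    """Jensen-Shannon divergence at every cursor position 0 < n < len(seq)."""
    full = log_likelihood(seq)
    return {n: log_likelihood(seq[:n]) + log_likelihood(seq[n:]) - full
            for n in range(1, len(seq))}

def recursive_segmentation(seq, offset=0, threshold=5.0, walls=None):
    """Steps 1-3: cut at the divergence maximum, recurse on both halves,
    and stop when the maximum divergence falls below the threshold."""
    if walls is None:
        walls = []
    if len(seq) < 2:
        return walls
    spectrum = divergence_spectrum(seq)
    n_best = max(spectrum, key=spectrum.get)
    if spectrum[n_best] < threshold:
        return walls
    walls.append(offset + n_best)
    recursive_segmentation(seq[:n_best], offset, threshold, walls)
    recursive_segmentation(seq[n_best:], offset + n_best, threshold, walls)
    return sorted(walls)
```

On a toy sequence of two pure blocks, the single true domain wall is recovered in one recursion step; on a homogeneous sequence, no cut exceeds the threshold and no wall is placed.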
From the simple analysis presented in Sec. III-B, we learned that the situation is not quite so simple. Indeed, within the mean-field picture, all domain walls will be discovered if we allow the recursive segmentation to go to completion. However, the effective strengths of the domain walls change with each recursion, with some strong domain walls becoming weak, and other weak domain walls becoming strong. Consequently, the domain walls are not discovered in order of their true strengths, and an incomplete segmentation may pick up weak domain walls, but miss stronger ones. For real discrete sequences subject to local statistical fluctuations, we can never be sure through hypothesis testing or model selection that we have exhausted all domain walls, so obtaining incomplete segmentations is a very real worry. In an early paper [10], Román-Roldán et al. noticed the domain wall strengths changing as the segmentation is recursively refined, and their solution was to reject a new cut if it causes the statistical significance of existing domain walls to fall below the confidence limit. This ad hoc modification of the termination criterion was questioned by Li in Ref. 24.
Since the JensenShannon divergence tells us how much better a segmented model fits the observed sequence compared to an unsegmented model, a strong domain wall improves the sequence likelihood more than a weak domain wall does. By retaining strong domain walls that have become weak in the segmentation, and rejecting new strong cuts that would cause existing domain walls to weaken further, we will not be picking the best segment model at each recursion. The final set of recursively determined domain walls is therefore likely to miss some strong domain walls, no matter how sophisticated the hypothesis testing on the statistical significance of each new cut, and how much care is taken to ensure that existing domain walls remain significant.
Ultimately, we want our segmentation, if incomplete, to consist of the strongest possible set of domain walls. This can be obtained if we trade weak domain walls for stronger ones, by moving existing domain walls from weaker to stronger positions, or by removing weak domain walls and letting the segmentation scheme find stronger replacements in the next recursion step. Realizing this, Li proposed the use of branch-merging algorithms developed in the field of recursive partitioning and tree-based methods, revisiting all domain walls to see whether their removal will increase the statistical significance of the segmentation. However, this was not done in Ref. 24, where it was proposed, nor in any papers to date, as far as we know. Instead of removing weak domain walls, we will present in Sec. IV-B two local optimization schemes for moving domain walls from weaker to stronger positions, in the spirit of relaxation methods used in optimization and in the numerical solution of partial differential equations.
IV-B. Segmentation Optimization Schemes
From Fig. 4, we see that moving the domain wall $n_m$ changes the statistics of the two segments $(n_{m-1}, n_m)$ and $(n_m, n_{m+1})$, and as a result, the strengths of three domain walls, $n_{m-1}$, $n_m$, and $n_{m+1}$, change. Otherwise, the effect of moving $n_m$ is entirely contained within the supersegment $(n_{m-2}, n_{m+2})$. We can of course compute the likelihood of the supersegment directly, for each $n_{m-1} < n_m < n_{m+1}$, to determine the position optimizing this supersegment likelihood. However, such a supersegment likelihood alone will not tell us whether the domain wall $n_m$ is statistically significant. This is why we fall back on the Jensen-Shannon divergence, to justify selecting the four-segment model of the supersegment, as opposed to a null model containing fewer domain walls. When we move only $n_m$, there are two such null models, which suggests two ways to go about optimizing $n_m$. We call these the first-order segmentation optimization scheme, and the second-order segmentation optimization scheme. No higher-order optimization schemes are possible, if we allow only one domain wall to move at a time.
In the first-order segmentation optimization scheme, we compute the Jensen-Shannon divergence spectrum of the supersegment $(n_{m-1}, n_{m+1})$, and move $n_m$ to the divergence maximum of this supersegment. Here, the four-segment model is compared against a three-segment model, with segments $(n_{m-2}, n_{m-1})$, $(n_{m-1}, n_{m+1})$, and $(n_{m+1}, n_{m+2})$. This is a natural comparison to make, but not the only one. In Ref. 25, Li et al. reported an experiment to determine the replication origins and replication termini of circular bacterial genomes. They cut a circular genome at an arbitrary position $n_0$ to make it into a linear sequence, and then determined the divergence maximum $n^*$ of this linear sequence. By varying $n_0$, they found that $n^*$ would remain essentially constant, before changing abruptly to another almost constant value. Li et al. identified these two stable divergence maxima with the replication origin and replication terminus of the circular bacterial genome. They also noticed that the divergence $\Delta(n^*)$ reaches a maximum when $n_0$ is either at the replication origin or the replication terminus.
This observation can be stated more generally as follows: given four fixed domain walls $n_{m-2} < n_{m-1} < n_{m+1} < n_{m+2}$, and a variable domain wall $n_m$, such that $n_{m-1} < n_m < n_{m+1}$, the sum of Jensen-Shannon divergences comparing the four-segment model with segments $(n_{m-2}, n_{m-1})$, $(n_{m-1}, n_m)$, $(n_m, n_{m+1})$, and $(n_{m+1}, n_{m+2})$ against the two-segment model with segments $(n_{m-2}, n_m)$ and $(n_m, n_{m+2})$ is maximum when $n_m$ is at a true domain wall position. Therefore, in the second-order segmentation optimization scheme, we would compute this sum of divergences over the supersegment $(n_{m-2}, n_{m+2})$ as we vary $n_m$, and move $n_m$ to the point maximizing the sum of divergences. As we can see from the null models used, the two segmentation optimization schemes are not equivalent. Nevertheless, we find in pilot numerical studies on real bacterial genomes that the segmentations they produce are always in strong agreement.
Because these optimization schemes are local updates, they must be applied in turn to the domain walls in the segmentation. Clearly, if we move a second wall after moving the first, there is no guarantee that the first remains optimal within either scheme. Therefore, the optimization of the domain walls must be iterated until a fixed-point segmentation is obtained. Apart from the need to iterate the local moves, both optimization schemes can be implemented serially or in parallel, right after new cuts have been made in Step 1, and before further cuts are made in Step 2 of the recursive segmentation. Specifically, for the first-order optimization scheme, which is simpler to implement and thus used exclusively in our segmentation studies, we can optimize all the even-numbered domain walls simultaneously, before optimizing all the odd-numbered domain walls simultaneously.
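As an illustration, the first-order scheme can be sketched in a few lines of code. The snippet below is a minimal sketch under simplifying assumptions, not the paper's implementation: it uses the Bernoulli (order-0) divergence of the original scheme rather than the Markov-chain generalization of Sec. II, it updates walls serially rather than in the even/odd parallel fashion described above, and the function names are ours.

```python
import numpy as np
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a collection of symbol counts."""
    p = np.array([c for c in counts if c > 0], dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def js_spectrum(seq):
    """Jensen-Shannon divergence at every interior cut point of seq,
    under a Bernoulli (order-0) model of the two resulting segments."""
    N, total, left = len(seq), Counter(seq), Counter()
    H = entropy(total.values())
    spec = np.empty(N - 1)
    for i in range(1, N):
        left[seq[i - 1]] += 1
        spec[i - 1] = H - (i / N) * entropy(left.values()) \
                        - ((N - i) / N) * entropy((total - left).values())
    return spec

def first_order_optimize(seq, walls, max_iter=100):
    """First-order scheme: move each interior wall to the divergence
    maximum of the supersegment bounded by its two neighboring walls,
    iterating the local moves until a fixed-point segmentation."""
    walls = sorted(walls)  # must include the endpoints 0 and len(seq)
    for _ in range(max_iter):
        moved = False
        for j in range(1, len(walls) - 1):
            lo, hi = walls[j - 1], walls[j + 1]
            # cut k of seq[lo:hi] corresponds to absolute position lo + k
            best = lo + 1 + int(np.argmax(js_spectrum(seq[lo:hi])))
            if best != walls[j]:
                walls[j], moved = best, True
        if not moved:
            break
    return walls
```

For example, starting from the misplaced walls `[0, 5, 9, 12]` on the sequence `"AAAATTTTGGGG"`, the iteration converges to the true composition boundaries at positions 4 and 8.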
V Segmentation Termination Condition
In the recursive segmentation scheme described in Sec. IV, the number of statistically significant domain walls is known only after segmentation has terminated everywhere in the sequence. For an existing segment, this involves deciding to stop further refinement, based on some termination criterion derived within a hypothesis testing framework [9, 10] or a model selection framework [26, 27]. The chief shortcoming of these termination criteria is their common assumption that the appropriate null model is that of a statistically stationary sequence. As observed by Fickett et al., there is generally less local homogeneity in the sequence statistics than necessary to justify such a null model [28].
Indeed, we saw in Fig. 3 that the mean-field divergence spectrum of the ten-segment sequence looks nothing like that of a two-segment sequence. Nevertheless, we should make cuts to this ten-segment sequence, and in the mean-field limit, keep doing so until the divergence spectra of the segments vanish identically. For real sequences, we find that as we segment the sequence at a finer and finer scale, the divergence spectrum looks less and less like that coming from a one-segment or two-segment sequence. This suggests that we should not be doing hypothesis testing or model selection between one-segment and two-segment models of a given sequence, but should instead look at properties intrinsic to the statistical fluctuations. For example, in Fig. 5, we show the divergence spectra of a long sequence (an interval in the E. coli K12 MG1655 genome bounded by two tRNAs), which we clearly should segment, and a short sequence (an interval in the E. coli K12 MG1655 genome consisting of the proAB operon [29]), which we are inclined not to segment further. The key feature leading us to this intuitive decision is the strength of the peak relative to the typical statistical fluctuations.
But what distinguishes statistical fluctuations from a genuine statistical trend, such as the one leading to the divergence peak within the long interval (see top plot in Fig. 5)? To answer this question, let us consider a 200-segment binary sequence, where each segment is of length one, carrying a unit count of either ‘0’ or ‘1’. In this binary sequence, we find long strings of ‘0’s and ‘1’s, as well as regions where the sequence alternates rapidly between ‘0’s and ‘1’s. These correspond, respectively, to smooth and spiky regions in the mean-field divergence spectrum, shown as the black curve in the top plot of Fig. 6. Based on the mean-field divergence spectrum, we can think of the sequence as comprising long and short segments, the shortest of which are of unit length. Let us call the divergence spectrum of a sequence containing unit-length segments the raw divergence spectrum.
Of course, it makes no statistical sense to talk about unit-length segments, but let us pretend that it is perfectly alright to have segments of length one. This being the case, the unit-length segments must be absorbed into their longer flanking segments, or merged amongst themselves, to give segments that are at least two units long. This is the sequence segmentation problem in a different guise, so there are no simple solutions. There are, however, many statistically reasonable ways to perform this coarse graining, and one of them gives rise to the red mean-field divergence spectrum shown in Fig. 6. We can repeat this coarse-graining procedure, coarse graining further to obtain segments at least four units long, giving the green mean-field divergence spectrum, and then again to obtain segments at least eight units long, giving the blue mean-field divergence spectrum, both of which are shown in the top plot of Fig. 6.
As we can see, by making the shortest segments in the sequence longer and longer, the mean-field divergence spectrum becomes smoother and smoother. Once we have achieved the desired minimum segment length, we can compare the coarse-grained divergence spectrum to the raw divergence spectrum. The strength of the statistical fluctuations in the sequence can then be quantified by the integrated absolute difference between the two divergence spectra,
$$\Delta = \int_0^N \bigl| J_1(x) - J_\ell(x) \bigr| \, dx, \qquad (27)$$
where $J_1(x)$ denotes the raw divergence spectrum and $J_\ell(x)$ the coarse-grained divergence spectrum at minimum segment length $\ell$,
for the given final minimum segment length $\ell$. To decide whether we should segment the sequence, we compare this integrated statistical fluctuation against the total area
$$A = \int_0^N J_1(x) \, dx \qquad (28)$$
under the raw divergence spectrum. If the ratio of the integrated statistical fluctuation to the total area is small, as for the top divergence spectrum in Fig. 5, a cut placed at the divergence maximum will be statistically significant. If the ratio is large, as for the bottom divergence spectrum in Fig. 5, a cut placed at the divergence maximum will not be statistically significant.
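In code, Eqs. (27) and (28) and the ratio test reduce to a few lines. The sketch below is ours (the function name is hypothetical), and it replaces the integrals by discrete sums over equal-length spectrum arrays:

```python
import numpy as np

def fluctuation_ratio(raw, coarse):
    """Integrated statistical fluctuation (Eq. 27), total area under the
    raw divergence spectrum (Eq. 28), and their ratio, via discrete sums."""
    raw = np.asarray(raw, dtype=float)
    coarse = np.asarray(coarse, dtype=float)
    delta = float(np.abs(raw - coarse).sum())  # Eq. (27)
    area = float(raw.sum())                    # Eq. (28)
    return delta, area, delta / area
```

Comparing the returned ratio against a user-prescribed tolerance then implements the termination test proposed in Eq. (30) below: a segment is cut further only while the ratio stays small.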
In practice, coarse graining a segmentation at scale $\ell$ (meaning that the shortest segments are at least $\ell$ long) to give a segmentation at scale $2\ell$ is a time-consuming optimization problem, so we devise an approximate form of the test statistic to quantify the strength of statistical fluctuations in the divergence spectrum. This approximate test statistic is constructed for a discrete sequence as follows:
1) for a given set of sequence positions $x_i$ and divergences $J(x_i)$, partition the sequence positions into a set of convex points, each lying below the straight line joining its two neighboring points,
$$J(x_i) \leq J(x_{i-1}) + \frac{x_i - x_{i-1}}{x_{i+1} - x_{i-1}} \bigl[ J(x_{i+1}) - J(x_{i-1}) \bigr], \qquad (29)$$
and a set of concave points, which fail the above convexity condition;
2) iterate through the set of concave points, retaining a concave point only if it is at least $\ell$ away from the neighboring retained concave points on both sides; otherwise, reject it, together with all convex points within $\ell$ of it. The set of retained concave points, together with the convex points that are not rejected, forms a coarse-grained set of domain walls that are at least $\ell$ apart from each other, and the divergences at these points give a coarse-grained divergence spectrum;
3) multiply $\ell$ by two, and if the new length scale is less than the desired final length scale, repeat steps 1 and 2; otherwise, stop the coarse-graining process.
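The three-step procedure above can be sketched as follows. This is a much-simplified greedy variant, not the exact convex/concave bookkeeping of steps 1 and 2 (the function name is ours): whenever two retained points are closer than the current length scale, we simply keep the stronger one, double the scale until the final length scale is reached, and linearly interpolate the survivors to obtain the coarse-grained spectrum.

```python
import numpy as np

def coarse_grain_spectrum(positions, divergences, ell_final):
    """Greedy approximation to the coarse-graining procedure: thin the
    divergence spectrum so retained points are at least ell apart,
    doubling ell up to ell_final, then interpolate the surviving points
    back onto the original positions."""
    keep_p, keep_d = list(positions), list(divergences)
    ell = 2
    while ell <= ell_final:
        new_p, new_d = [keep_p[0]], [keep_d[0]]
        for p, d in zip(keep_p[1:], keep_d[1:]):
            if p - new_p[-1] < ell:
                # two retained points are too close: keep the stronger one
                if d > new_d[-1]:
                    new_p[-1], new_d[-1] = p, d
            else:
                new_p.append(p)
                new_d.append(d)
        keep_p, keep_d = new_p, new_d
        ell *= 2  # step 3: double the length scale
    # coarse-grained spectrum, evaluated at the original positions
    return np.interp(positions, keep_p, keep_d)
```

Applied to a spiky spectrum, the weak alternating peaks are progressively absorbed, leaving only the strong, well-separated maxima, which is the qualitative behavior of the red, green, and blue curves in Fig. 6.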
In the bottom plot of Fig. 6, we show the divergence spectra obtained from this approximate coarse graining at successively doubled length scales.
For real genomes, we require this approximate coarse graining to proceed until the final length scale is large enough that, for a quaternary sequence whose optimal Markov-chain order is $K$, each transition count would be at least 20–30, so that the maximum-likelihood transition probabilities can be reliably estimated. Going back to Fig. 5, we see that the approximate coarse-grained divergence spectrum (red curve) for the long interval is very close to its raw divergence spectrum (black curve), whereas for the short interval, the approximate coarse-grained divergence spectrum (red curve) differs significantly from its raw divergence spectrum (black curve). If we compute the test statistic using the approximate coarse-grained divergence spectra for the two intervals, we find a small value for the long interval, and a large value for the short interval.
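The 20–30 counts-per-transition requirement translates directly into a minimum reliable segment length. As a rough sketch (the function name and the even-spread assumption are ours, not the paper's):

```python
def min_reliable_length(order, alphabet_size=4, min_count=20):
    """Smallest segment length whose transition counts, if spread evenly
    over all alphabet_size**(order + 1) possible transitions, reach
    min_count observations each."""
    return min_count * alphabet_size ** (order + 1)
```

For a quaternary sequence modeled with order-2 Markov chains, this gives 20 * 4**3 = 1280 bases as an order-of-magnitude floor for the final length scale.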
Based on the above discussions, we propose to use
$$\frac{\Delta}{A} < F_0 \qquad (30)$$
as an alternative termination criterion, where $F_0$ is a user-prescribed tolerance, $\Delta$ is the integrated statistical fluctuation of Eq. (27), and $A$ is the total area of Eq. (28). The test statistic $\Delta/A$ requires no prior assumption on how many segments the sequence should be partitioned into, but instead requires that the shortest segments used to model the sequence yield adequate statistics to reliably estimate the model parameters. We believe that $\Delta/A$, which measures quantitatively the relative strength of statistical fluctuations up to some cutoff length scale, is a better test statistic to use as the termination criterion, as there is no bias towards any particular segment model of the given sequence. Based on extensive experimentation, we find that an appropriate choice of $F_0$ stops the recursive segmentation just before individual genes are segmented.
VI Application to Real Genomes
In the original recursive segmentation algorithm proposed by Bernaola-Galván et al. [9, 10], it does not matter how many new cuts are added to the sequence at each recursion step; we get the same terminal segmentation for the same termination criterion. However, if we optimize the segmentation before new cuts are added, then the terminal segmentation depends very sensitively on the termination criterion. To benchmark the performance of the recursive Jensen-Shannon segmentation scheme, with and without segmentation optimization, we add only one new cut, the strongest of all possible new cuts, at every recursion step. A series of recursive segmentations (with and without segmentation optimization) obtained this way is shown in Fig. 7 for the E. coli K12 MG1655 genome.
From Fig. 7, we find that, in contrast to the truly hierarchical unoptimized recursive segmentations, the optimized recursive segmentations of E. coli K12 MG1655 are almost hierarchical, i.e., few domain walls in segmentations containing more domain walls are shifted relative to segmentations containing fewer domain walls. Those few domain walls which do get shifted, however, can be shifted a long way; Fig. 7 contains two examples of domain walls displaced to distant positions between successive optimized segmentations. We also observed that domain walls ‘lost’ as a result of optimization frequently reappear later in the optimized recursive segmentation; one domain wall shifted away when a new cut was made reappears in a later optimized segmentation (not shown in Fig. 7). These are manifestations of what we call the context sensitivity problem, which we will study carefully in another paper [30].
In the unoptimized recursive segmentation scheme, a domain wall remains where it was first discovered, and its statistical significance is measured solely by its strength. However, the strengths of existing domain walls frequently change as more and more domain walls are added to the segmentation, so it is not clear what kind of statistical significance to attach to a domain wall that becomes progressively weaker (or one that becomes progressively stronger, for that matter). In the optimized recursive segmentation scheme, we find some domain walls that remain close to where they were first discovered as recursion proceeds, which we call stable domain walls, and others that do not. Stability with respect to segmentation refinement and optimization provides an alternative measure of statistical significance, in the sense that at any level of segmentation, a stable domain wall is more significant than a domain wall that has been shifted around over the past few recursions. A domain wall that has been stable over a larger number of recursions is also more significant than one that has been stable over a smaller number. From Fig. 7, we see that stable domain walls are always discovered first by the optimized segmentation scheme, if not simultaneously by both schemes.
VII Conclusions
To summarize, we have presented in this paper improvements to three different aspects of the recursive Jensen-Shannon segmentation scheme proposed by Bernaola-Galván et al. In Sec. II, which touches on the modeling aspects of recursive segmentation, we explained how Markov chains, producing better fits of the higher-order statistics with fewer independent parameters, are better models of the segments within a heterogeneous genomic sequence than Bernoulli chains over the quaternary or extended alphabets. We then wrote down a generalized Jensen-Shannon divergence, given in Eq. (16), which can be used as an entropic measure to distinguish between two different Markov chains of the same order. Moving on, we described how to select a maximum Markov-chain order for the segmentation of a given sequence, by minimizing its Bayes information criterion (BIC), and how to optimize the position of the domain wall between two segments with different maximum Markov-chain orders.
Next, in Sec. III, we described how the spatial distribution of rare transitions along the sequence gives rise to local fluctuations in the Jensen-Shannon divergence spectrum, confusing our intuitive picture of how recursive segmentation proceeds in a heterogeneous sequence. We argued that a mean-field analysis, which we then undertook, would be useful for better understanding recursive segmentation. We found that in the mean-field Jensen-Shannon divergence spectrum, true domain walls can appear either as peaks or kinks, or can even have vanishing divergences. We showed that all true domain walls will eventually be discovered if we allow the recursive segmentation to go to completion, but that these will in general not be discovered in the order of their true strengths. Consequently, an incomplete segmentation will generically be suboptimal, because it contains weak domain walls discovered ahead of stronger ones, which are thus left out of the segmentation. We therefore argued in Sec. IV that there is a need to optimize the segmentation obtained at every stage of the recursive entropic segmentation, and proposed two local optimization schemes to move domain walls from weaker to stronger positions, improving the statistical significance of an incomplete segmentation.
In Sec. V, we improved on the statistical testing aspect of the recursive Jensen-Shannon segmentation scheme by proposing a new termination criterion. We motivated the test statistic associated with this new termination criterion by considering the problem of recursively coarse graining an initial segmentation containing very short segments to produce a final segmentation containing no segments shorter than a cutoff length scale. We devised an algorithm for performing this recursive coarse graining approximately, and argued that the fractional difference in areas under the raw divergence spectrum of the initial segmentation and the smoothed divergence spectrum of the final segmentation can be used as the test statistic for terminating recursive segmentation. The principal advantage of this new termination criterion, which is a quantitative measure of the strength of statistical fluctuations below the cutoff length scale relative to the total area under the raw divergence spectrum, is that it requires no prior knowledge of how many segments to partition the sequence into, and thus has no need to assume a single-segment null model for the sequence.
Finally, in Sec. VI, we performed a benchmark study on the genome of the model bacterium E. coli K12 MG1655, comparing the recursive Jensen-Shannon segmentation schemes with and without optimization. The optimized segmentations obtained at different stages of the recursion were found not to be perfectly hierarchical, because of the context sensitivity problem, which we will investigate in another paper [30]. At the coarse levels of segmentation examined, we found the recursive segmentation schemes with and without optimization agreeing on a set of stable domain walls, but not on the order in which these are discovered. We argued that the stability of a domain wall with respect to optimized recursive segmentation is an alternative measure of its statistical significance, and found that stable domain walls are always discovered first by the optimized segmentation scheme, if not simultaneously by both schemes.
Besides identifying the context sensitivity problem, which we realize plagues all segmentation schemes [30], the innovations and discoveries in this paper have also inspired other studies. In a future paper [31], we will report a clustering analysis of the terminal genomic segments of a bacterial plant pathogen, obtained using the optimized recursive Jensen-Shannon segmentation scheme with the new termination criterion. Based on this clustering analysis of a single bacterial genome, we will construct its statistical genetic backbone, and identify possible horizontally transferred genetic islands. We will then extend the clustering analysis to comparative studies of several closely related bacteria, in which we will detect syntenic regions and identify a phylogenetic backbone. We would like to suggest that, even though the optimized recursive segmentation scheme presented in this paper is formulated as a method for biological sequence segmentation, it can also be applied to other engineering segmentation problems (for example, image segmentation, and change-point detection in noisy time series).
Acknowledgment
SAC and CRM acknowledge support from USDA Agricultural Research Service project 19072100001705. Computational resources of the USDAARS plant pathogen systems biology group and the Cornell Theory Center assisted this research.
References
 [1] J. Hacker, G. Blum-Oehler, I. Mühldorfer, and H. Tschäpe, “Pathogenicity Islands of Virulent Bacteria: Structure, Function and Impact on Microbial Evolution”, Molecular Microbiology, vol. 23, no. 6, pp. 1089–1097, 1997.
 [2] E. A. Groisman and H. Ochman, “How Salmonella Became a Pathogen”, Trends in Microbiology, vol. 5, no. 9, pp. 343–349, 1997.
 [3] J. Hacker and J. B. Kaper, “Pathogenicity Islands and the Evolution of Microbes”, Annual Review of Microbiology, vol. 54, pp. 641–679, 2000.
 [4] N. Goldenfeld and C. Woese, “Biology’s next revolution”, Nature, vol. 445, pp. 369, 2007.
 [5] E. G. Carlstein, H.-G. Müller, and D. Siegmund, Change-Point Problems, Lecture Notes-Monograph Series, vol. 23. Institute of Mathematical Statistics, 1994.
 [6] J. Chen and A. K. Gupta, Parametric Statistical Change Point Analysis. Birkhäuser, 2000.
 [7] J. V. Braun and H.-G. Müller, “Statistical Methods for DNA Sequence Segmentation”, Statistical Science, vol. 13, no. 2, pp. 142–162, 1998.
 [8] A. Gionis and H. Mannila, “Finding Recurrent Sources in Sequences”, Proc. Int’l Conf. Research in Computational Molecular Biology (RECOMB), pp. 123–130, 2003.
 [9] P. Bernaola-Galván, R. Román-Roldán, and J. L. Oliver, “Compositional Segmentation and Long-Range Fractal Correlations in DNA Sequences”, Physical Review E, vol. 53, no. 5, pp. 5181–5189, 1996.
 [10] R. Román-Roldán, P. Bernaola-Galván, and J. L. Oliver, “Sequence Compositional Complexity of DNA through an Entropic Segmentation Method”, Physical Review Letters, vol. 80, no. 6, pp. 1344–1347, 1998.
 [11] R. Grantham, C. Gautier, M. Gouy, M. Jacobzone, and R. Mercier, “Codon Catalog Usage is a Genome Strategy Modulated for Gene Expressivity”, Nucleic Acids Research, vol. 9, no. 1, pp. R43–R74, 1981.
 [12] J. C. W. Shepherd, “Method to Determine the Reading Frame of a Protein from the Purine/Pyrimidine Genome Sequence and Its Possible Evolutionary Justification”, Proc. Nat’l Academy of Sciences USA, vol. 78, no. 3, pp. 1596–1600, 1981.
 [13] R. Staden and A.D. McLachlan, “Codon Preference and Its Use in Identifying Protein Coding Regions in Long DNA Sequences”, Nucleic Acids Research, vol. 10, no. 1, pp. 141–156, 1982.
 [14] J. W. Fickett, “Recognition of Protein Coding Regions in DNA Sequences”, Nucleic Acids Research, vol. 10, no. 17, pp. 5303–5318, 1982.
 [15] H. Herzel and I. Große, “Measuring Correlations in Symbol Sequences”, Physica A, vol. 216, no. 4, pp. 518–542, 1995.
 [16] P. Bernaola-Galván, I. Grosse, P. Carpena, J. L. Oliver, R. Román-Roldán, and H. E. Stanley, “Finding Borders Between Coding and Noncoding DNA Regions by an Entropic Segmentation Method”, Physical Review Letters, vol. 85, no. 6, pp. 1342–1345, 2000.
 [17] D. Nicorici, J. A. Berger, J. Astola, and S. K. Mitra, “Finding Borders Between Coding and Noncoding DNA Regions Using Recursive Segmentation and Statistics of Stop Codons”, Proc. Finnish Signal Processing Symposium (FINSIG), pp. 231–235, 2003.
 [18] D. Nicorici and J. Astola, “Segmentation of DNA into Coding and Noncoding Regions Based on Recursive Entropic Segmentation and Stop-Codon Statistics”, EURASIP Journal on Applied Signal Processing, vol. 1, pp. 81–91, 2004.
 [19] V. Thakur, R. K. Azad, and R. Ramaswamy, “Markov models of genome segmentation”, Physical Review E, vol. 75, no. 1, art. 011915, 2007.
 [20] J. Lin, “Divergence Measures Based on the Shannon Entropy”, IEEE Trans. Information Theory, vol. 37, no. 1, pp. 145–151, 1991.
 [21] H. Akaike, “A New Look at the Statistical Model Identification”, IEEE Trans. Automatic Control, vol. 19, no. 6, pp. 716–723, 1974.
 [22] G. Schwarz, “Estimating the Dimension of a Model”, Annals of Statistics, vol. 6, no. 2, pp. 461–464, 1978.
 [23] R. W. Katz, “On Some Criteria for Estimating the Order of a Markov Chain”, Technometrics, vol. 23, no. 3, pp. 243–249, 1981.
 [24] W. Li, “Delineating Relative Homogeneous G + C Domains in DNA Sequences”, Gene, vol. 276, no. 1–2, pp. 57–72, 2001.
 [25] W. Li, P. Bernaola-Galván, F. Haghighi, and I. Grosse, “Applications of Recursive Segmentation to the Analysis of DNA Sequences”, Computers and Chemistry, vol. 26, no. 5, pp. 491–510, 2002.
 [26] W. Li, “New Stopping Criteria for Segmenting DNA Sequences”, Physical Review Letters, vol. 86, no. 25, pp. 5815–5818, 2001.
 [27] W. Li, “DNA Segmentation as a Model Selection Process”, Proc. Int’l Conf. Research in Computational Molecular Biology (RECOMB), pp. 204–210, 2001.
 [28] J. W. Fickett, D. C. Torney, and D. R. Wolf, “Base Compositional Structure of Genomes”, Genomics, vol. 13, no. 4, pp. 1056–1064, 1992.
 [29] H. Salgado, S. Gama-Castro, M. Peralta-Gil, E. Díaz-Peredo, F. Sánchez-Solano, A. Santos-Zavaleta, I. Martínez-Flores, V. Jiménez-Jacinto, C. Bonavides-Martínez, J. Segura-Salazar, A. Martínez-Antonio, and J. Collado-Vides, “RegulonDB (Version 5.0): Escherichia coli K12 Transcriptional Regulatory Network, Operon Organization and Growth Conditions”, Nucleic Acids Research, vol. 34, database issue, pp. D394–D397, 2006.
 [30] S.A. Cheong, P. Stodghill, D. J. Schneider, and C. R. Myers, “The Context Sensitivity Problem in Biological Sequence Segmentation”, in preparation.
 [31] S.A. Cheong, P. Stodghill, D. J. Schneider, and C. R. Myers, “Clustering Statistically Stationary Segments of Bacterial Genomes”, in preparation.
[]Siew-Ann Cheong received his BSc with first-class honors in physics in 1997 from the National University of Singapore, and his PhD in physics in 2006 from Cornell University. He was a postdoctoral associate with the Cornell Theory Center from 2006 to 2007, and is now an assistant professor in the Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore. His research interest is in the use of statistical methods to discover meso- and macroscale structures in bacterial genomes.
[]Paul Stodghill received the B.A. degree in mathematics and computer science from Dickinson College, Carlisle, PA, in 1988 and the Ph.D. degree from the Department of Computer Science at Cornell University, Ithaca, NY, in 1997. He is currently a computational biologist for the Agricultural Research Service of the US Department of Agriculture.
[]David J. Schneider is a computational biologist for the Agricultural Research Service of the US Department of Agriculture.
[]Samuel W. Cartinhour is the USDA-ARS Courtesy Professor in the Department of Plant Pathology, Cornell University. His research interest is in employing computational and laboratory experiments to understand gene regulation, and in particular pathogenesis, in the bacterial plant pathogen Pseudomonas syringae.
[]Christopher R. Myers received a B.A. in History from Yale University, and a Ph.D. in physics from Cornell University. His research interests snake along interdisciplinary boundaries connecting physics, biology, and computer science, with particular emphasis on the emergence of information processing in cellular regulation and signaling, and on the functional organization of complex biological networks.