
# Improving the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins by guided-learning through a two-layer neural network

^{*}To whom correspondence should be addressed: Phone: (317) 278-7674, Fax: (317) 278-9201, Email: yqzhou@iupui.edu

## Abstract

This paper attempts to increase the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins through improved learning. Most methods developed for improving the backpropagation algorithm of artificial neural networks are limited to small neural networks. Here, we introduce a guided-learning method suitable for networks of any size. The method employs one part of the weights for guiding and the other part for training and optimization. We demonstrate this technique by predicting residue solvent accessibility and real-value backbone torsion angles of proteins. In this application, the guiding factor is designed to satisfy the intuitive condition that, for most residues, the contribution of one residue to the structural properties of another residue is smaller for greater separation in protein-sequence distance between the two residues. We show that the guided-learning method yields a 2-4% reduction in ten-fold cross-validated mean absolute errors (MAE) for predicting residue solvent accessibility and backbone torsion angles, regardless of the size of the database, the number of hidden layers, and the size of the input window. This, together with the introduction of a two-layer neural network with a bipolar activation function, leads to a new method with a MAE of 0.11 for residue solvent accessibility, 36° for *ψ*, and 22° for *ϕ*. The method is available as the Real-SPINE 3.0 server at http://sparks.informatics.iupui.edu.

**Keywords:** Artificial Neural Networks, Dihedral Angles, Solvent-accessible surface area, Protein structure prediction

Direct prediction of protein structures from their sequences is challenging. As a result, protein structure prediction is often assisted by predicting one-dimensional structural properties including residue solvent-accessibility (RSA) and backbone torsion angles of proteins. While the usefulness of predicted RSA values in structure prediction is well established^{1}^{-}^{6}, the application of predicted torsion angles is still in its infancy (fold recognition^{7}^{-}^{9}, sequence alignment^{10}, and secondary structure prediction^{11}^{,}^{12}). However, the latter has the potential to replace or supplement predicted secondary structures^{13}^{,}^{14} because torsion angles provide a more detailed description of the backbone structure than three-state secondary structures.

Both residue solvent-accessible surface areas and backbone angles are continuously varying variables because proteins can move freely in three-dimensional space. Thus, a real-value prediction is preferred over the prediction of a few arbitrarily-defined states. While several methods for real-value prediction of solvent accessibilities have been developed^{15}^{-}^{21}, most methods (except two papers on *ψ* angles^{11}^{,}^{21}) for predicting backbone torsion angles are limited to discrete dihedral-angle states based on local (fragment) structural patterns^{7}^{,}^{12}^{,}^{22}^{-}^{27}. The real-value prediction of both torsion *ϕ* and *ψ* angles was only developed recently by us^{28}. Reasonable accuracy has been achieved for both solvent accessibilities and backbone torsion angles by using integrated neural networks with a back-propagation algorithm^{21}^{,}^{28}. In the backpropagation algorithm, the errors propagate backwards by updating neural-network weights in the direction that minimizes the error. This gradient-based algorithm, however, often leads to local minima^{29}.

Many different types of methods were developed to overcome the local-minimum problem of the backpropagation algorithm. One obvious approach is to concentrate on optimization of learning rates or step sizes^{30}^{-}^{33} and the employment of various minimization methodologies such as conjugate gradient^{34}^{,}^{35}, the Levenberg-Marquardt algorithm^{36}^{,}^{37}, stochastic backpropagation^{38}, genetic algorithms^{39}^{,}^{40}, simulated annealing^{41}^{,}^{42}, or a hybrid of optimization methods^{43}. A second approach focuses on optimizing the network architecture during training by employing genetic algorithms^{44}^{-}^{47}, self-organized networks^{48}, or fuzzy logic^{49}^{-}^{51}. A third approach develops algorithms for estimating initial weights and uses the backpropagation algorithm for refinement. Several initialization methods, such as evolutionary algorithms^{52}, orthogonal least-squares^{53}, statistically controlled weight optimization^{54}, linear least-squares^{55}^{-}^{58}, ant-colony optimization^{59}, and a restricted Boltzmann machine for initial mapping^{60}, were developed. Other methods include ensemble learning for consensus prediction^{61}, boosting^{62}, learning from hints^{63}^{,}^{64} (using known information about the output to constrain learning), regularization (favoring smooth network functions and avoiding over-fitting)^{61}, pruning (removing redundant networks)^{65}, and “induced learning retardation” (temporarily inhibiting the largest contributing neurons)^{66}.

The purpose of this paper is to develop improved neural network methods suitable for large-scale learning that requires optimization of hundreds of thousands of weights simultaneously, as in the case of predicting RSA and torsion angles. Clearly, global optimization techniques such as genetic algorithms are computationally too expensive to carry out. Here, we propose a guided weighting scheme to steer the learning toward a more optimized solution. The guided weighting scheme is conceptually similar to approaches such as learning from hints, which employs known information about the output to constrain learning^{63}^{,}^{64}, and regularization, which penalizes certain models^{61}. The guided weighting scheme developed here is tailored for the large-scale learning often encountered in predicting structural properties of proteins.

We have performed five experiments with different database sizes, different network architectures, and different sizes of input windows. All results reveal a consistent improvement due to guided learning. Moreover, a two-layer neural network with a bipolar activation function is effective in improving prediction accuracy, for the *ψ* angle in particular. Altogether, the resulting method reaches a new level of accuracy for predicting residue solvent accessibility and backbone torsion angles.

## THEORY

### Basic Network Architecture

Without losing generality, we consider a simple neural network with two hidden layers. The input to the neural network will be designated by *x _{j(i)}*, where *j* = 1, . . . , *J* is an index designating the sequence position of the amino acid in a window surrounding the central residue and *i* = 1, . . . , *n* is an index for the input features of a given residue *j*. For the two-hidden-layer network the output values of the hidden layers will be designated by

$${S}_{k}^{1}=f\left({\sum }_{j=1}^{J}{w}_{jk}^{1}\cdot {x}_{j}\right)\quad (1)$$

and

$${S}_{l}^{2}=f\left({\sum }_{k=1}^{K}{w}_{kl}^{2}{S}_{k}^{1}\right)\quad (2)$$

where *k* = 1, . . . , *K*, *l* = 1, . . . , *L*, with *K*, *L* the total number of neurons in the first and second hidden layers, respectively, *f*(*x*) is the activation function, and ${w}_{jk}^{1}$ and ${w}_{kl}^{2}$ are the neural network weights that connect the neurons in the input layer with the first hidden layer and the neurons in the first hidden layer with the second hidden layer, respectively. In calculating ${S}_{k}^{1}$, ${w}_{jk}^{1}$ is a vector of length *n* and the multiplication is the vector dot product.

The values of the output neurons, *p _{m}*, are obtained in a similar fashion,

$${p}_{m}=f\left({\sum }_{l=1}^{L}{w}_{lm}^{3}{S}_{l}^{2}\right)\quad (3)$$

where ${w}_{lm}^{3}$ are the weights that connect the neurons in the second hidden layer with the neurons of the output layer, and *m* = 1, . . . , *M*, with *M* the number of neurons in the output layer.
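The feed-forward pass defined by these equations can be sketched in plain Python. The window size, hidden-layer sizes, and random weights below are illustrative toy values, not trained Real-SPINE parameters:

```python
import math
import random

random.seed(0)
J, n, K, L, M = 5, 3, 4, 4, 1   # window size, features/residue, hidden sizes, outputs

def f(x, alpha=0.2):            # bipolar activation used for the two-layer network
    return math.tanh(alpha * x)

# inputs x_{j(i)} and randomly initialized weight arrays (toy values)
x  = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(J)]
w1 = [[[random.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(K)] for _ in range(J)]
w2 = [[random.uniform(-0.5, 0.5) for _ in range(L)] for _ in range(K)]
w3 = [[random.uniform(-0.5, 0.5) for _ in range(M)] for _ in range(L)]

# S^1_k = f( sum_j w^1_{jk} . x_j ), the dot product running over the n features
S1 = [f(sum(sum(w1[j][k][i] * x[j][i] for i in range(n)) for j in range(J)))
      for k in range(K)]
# S^2_l = f( sum_k w^2_{kl} S^1_k )
S2 = [f(sum(w2[k][l] * S1[k] for k in range(K))) for l in range(L)]
# p_m  = f( sum_l w^3_{lm} S^2_l )
p  = [f(sum(w3[l][m] * S2[l] for l in range(L))) for m in range(M)]
```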

Training of the neural network is achieved by comparing the predicted outputs, *p _{m}*, to their known values (e.g., *ψ* values) for the training proteins and obtaining the sum square error *E*. For example, for the *ψ* angle,

$$E=\frac{1}{2}{\sum }_{m}{\left({p}_{m}-{\psi }_{m}\right)}^{2}\quad (4)$$

This error function is then minimized by the steepest gradient descent method, i.e., by updating the weights according to

$${\stackrel{.}{w}}_{jk}^{1}=-\eta \frac{\delta E}{\delta {w}_{jk}^{1}}\quad (5)$$

with *η* the learning rate. Similar expressions for ${\stackrel{.}{w}}_{kl}^{2}$ and ${\stackrel{.}{w}}_{lm}^{3}$ are obtained. Note that Eq. (5) results in the minimization of the sum squared error due to the relationship $\stackrel{.}{E}=\frac{\delta E}{\delta {w}_{jk}^{1}}\cdot {\stackrel{.}{w}}_{jk}^{1}$. This computational model is known in the neural network literature as the *backpropagation method*^{67}: the result of Eq. (5) is to correct the weights based on the prediction error being back-propagated from the output layer towards the input layer.

The above equations are for a two-hidden-layer network; the equations for a one-hidden-layer network are essentially the same. However, in this study, we use a unipolar activation function for the one-hidden-layer network [*f*(*x*) = 1/(1 + exp(-*αx*)) with *α* = 1, an activation parameter chosen by trial-and-error optimization]. For the two-hidden-layer network we use a bipolar activation function [*f*(*x*) = tanh(*αx*), with *α* = 0.2 by trial and error]. We use two networks with different numbers of layers and different activation functions to test whether the effect of guided learning is robust across different neural networks.

### Guided Neural Weights

Computational neural networks can in principle approximate any continuous function, in any finite number of variables, to any degree of accuracy^{68}. Stated more precisely, for any finite function *ψ*(*x*) and positive number *ε* > 0 there exists a set of weights ${w}_{jk}^{i}$ such that the prediction of the network, *p*(*x*), obeys ||*ψ*(*x*) - *p*(*x*)|| < *ε*. For the case of sequence-based structure prediction of proteins, *ψ* would represent the dihedral angles of the amino acids, and *x* would represent the amino acid sequence. Hence the heart of the neural network is the selection of the weights. The standard approach described above is to initialize the weights in some random fashion and then use a minimization algorithm on the sum square error to train the weights. The steepest gradient descent method described above often leads to a locally rather than globally optimized solution.

To go beyond the basic gradient-based backpropagation algorithm, we propose a guided learning scheme based on an intuitive pattern for neural-network weights. To do this, we treat each weight as composed of two parts,

$${w}_{jk}^{i}={g}_{jk}^{i}{b}_{jk}^{i}\quad (6)$$

The first part, ${b}_{jk}^{i}$, contains the to-be-optimized weights, whereas the second part, ${g}_{jk}^{i}$, is the fixed guiding factor that represents a-priori intuitive knowledge about the system (i.e., the knowledge does not have to be exact). Each of the ${b}_{jk}^{i}$’s is initialized to a random value in the range [-0.5, 0.5], whereas the ${g}_{jk}^{i}$’s are set at the beginning of the training and are not updated throughout the training in this study. On the other hand, if a lot of information is known about the system being predicted, such that the initial choice of the ${g}_{jk}^{i}$’s gives good predictions, a possible modification for the initial choice of the randomized weights is to set ${b}_{jk}^{i}=1+\mathit{rnd}$, where *rnd* is a uniform distribution of random numbers with zero mean within some interval. In this way the ${b}_{jk}^{i}$’s can be used to refine the prediction given by the ${g}_{jk}^{i}$’s.
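A minimal initialization sketch, assuming the multiplicative composition of a fixed guiding factor *g* and a trainable part *b* described above; `init_b` and `effective_weights` are hypothetical helper names:

```python
import random

random.seed(1)

def init_b(rows, cols, refine=False, spread=0.5):
    """Trainable parts: b ~ [-0.5, 0.5] for a fresh start, or b ~ 1 + rnd
    when a well-chosen g should merely be refined."""
    base = 1.0 if refine else 0.0
    return [[base + random.uniform(-spread, spread) for _ in range(cols)]
            for _ in range(rows)]

def effective_weights(g, b):
    """Effective weights w = g * b; g stays fixed, only b is trained."""
    return [[g[r][c] * b[r][c] for c in range(len(b[0]))] for r in range(len(b))]
```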

As an illustrative example of this guided learning, we wish to incorporate the intuition that the input features of the to-be-predicted residue should have the largest contribution to the prediction, and that the more distant in sequence a residue is from the to-be-predicted residue, the weaker the contribution of its input features. This sequence-distance-dependent decay holds for the majority of residues but not for every residue, because nonlocal interactions (strong interactions between residues far from each other in sequence) are known to be important for stabilizing protein structures; they are yet to be included in machine learning. To implement this intuition as a guiding factor, we assume that the neural network is positioned on a two-dimensional plane and the guiding factors, ${g}_{jk}^{i}$, for a two-layer network are given by Eqs. (7-9), with ${k}_{c}=\frac{K+1}{2}$, ${l}_{c}=\frac{L+1}{2}$, and ${m}_{c}=\frac{M+1}{2}$ the central locations of the two hidden layers and the output layer, respectively. The guiding weights are designed so that residues that are closer (in sequence distance) to a given amino acid contribute more in determining its structural properties. One should also note that the decay of the signal through longer connections naturally mimics the decay of the voltage signal between distant physiological neurons. Obviously, many other functional forms for the guiding factors would satisfy the same intuition; because the purpose of this paper is to validate the approach of guided learning, we did not study other possible functional forms.
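Many functional forms satisfy this intuition. As a purely hypothetical illustration (not the paper's Eqs. (7-9), whose exact form is not reproduced here), a guiding factor could decay with the planar Euclidean distance between connected neurons, using the stated layer centers such as *k _{c}* = (*K* + 1)/2:

```python
import math

def guiding_factor(j, k, j_c, k_c):
    """Hypothetical distance-decay guiding factor on a 2D layout:
    neurons connected over a longer planar distance propagate a
    weaker signal, mimicking voltage drop along longer axons."""
    d = math.hypot(j - j_c, k - k_c)   # planar Euclidean distance
    return 1.0 / (1.0 + d)
```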

Given the above approach, the equations for updating the training weights follow from the chain rule, since $\frac{\delta E}{\delta {b}_{jk}^{i}}={g}_{jk}^{i}\frac{\delta E}{\delta {w}_{jk}^{i}}$. For the weights between the second hidden layer and the output layer, let

$${\delta }_{m}=\left({p}_{m}-{\psi }_{m}\right){f}^{\prime }\left({\sum }_{l}{w}_{lm}^{3}{S}_{l}^{2}\right)$$

then

$${\stackrel{.}{b}}_{lm}^{3}=-\eta {g}_{lm}^{3}{\delta }_{m}{S}_{l}^{2}$$

For the weights between the first and second hidden layers, let

$${\delta }_{l}={f}^{\prime }\left({\sum }_{k}{w}_{kl}^{2}{S}_{k}^{1}\right){\sum }_{m}{\delta }_{m}{w}_{lm}^{3}$$

then

$${\stackrel{.}{b}}_{kl}^{2}=-\eta {g}_{kl}^{2}{\delta }_{l}{S}_{k}^{1}$$

For the weights between the input and the first hidden layers, let

$${\delta }_{k}={f}^{\prime }\left({\sum }_{j}{w}_{jk}^{1}\cdot {x}_{j}\right){\sum }_{l}{\delta }_{l}{w}_{kl}^{2}$$

then

$${\stackrel{.}{b}}_{jk}^{1}=-\eta {g}_{jk}^{1}{\delta }_{k}{x}_{j}$$
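A toy single-step sketch of the guided update for the output-layer weights, assuming the multiplicative composition *w* = *g*·*b* and the bipolar activation; all numeric values are placeholders, not trained parameters:

```python
import math

eta, alpha = 0.01, 0.2
S2     = [0.3, -0.1, 0.5]        # toy outputs of the second hidden layer
g3     = [0.9, 0.6, 0.3]         # fixed guiding factors (toy values)
b3     = [0.2, -0.4, 0.1]        # trainable parts of the output weights
target = 0.8                     # normalized target (e.g. a shifted psi value)

def forward(b):
    net = sum(g3[l] * b[l] * S2[l] for l in range(3))
    return math.tanh(alpha * net)

p = forward(b3)
# dE/dnet for E = (p - target)^2 / 2, with d tanh(alpha*net)/dnet = alpha*(1 - p^2)
delta = (p - target) * alpha * (1.0 - p * p)
# guided update: dE/db = g * dE/dw, so b <- b - eta * g * delta * S2
b3_new = [b3[l] - eta * g3[l] * delta * S2[l] for l in range(3)]
p_new = forward(b3_new)
```

A single step should move the prediction toward the target, which the assertion below checks.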

### Technical Details

We conducted five experiments to test the proposed guided learning. Experiment I uses a one-layer network, a 21-residue input window, and a database of 500 proteins randomly selected from the original SPINE dataset^{69}. Experiment II differs from Experiment I by employing a two-layer network. Experiment III differs from Experiment I in using a larger database of 2479 proteins with fewer than 500 amino acids from the original SPINE database. Experiment IV is the same as Experiment III except that it uses a two-layer network. The only difference between Experiments IV and V is that the latter uses a larger input window of 41 residues. The first two experiments involve prediction of the backbone *ψ* angle only, while the other three predict *ψ*, *ϕ*, and residue solvent accessibility. All experiments are done twice: once with and once without the guiding factors. Thus, we trained a total of 22 neural networks for testing the proposed method. This large number of tests was conducted to check whether the performance of guided learning depends on the database size, the property predicted, and the size of the input window. A one-layer neural network with a unipolar activation function was used in Real-SPINE for RSA prediction and in Real-SPINE 2.0 for torsion-angle prediction.

We use 28 input features for characterizing each residue, as described in SPINE^{69}, Real-SPINE^{21}, and Real-SPINE 2.0^{28}: sequence profiles, seven representative physical parameters, and the secondary structure. The actual three-state secondary structures from DSSP^{70} are used for training the weights, and predicted secondary structures from SPINE^{69} are employed in testing the prediction accuracy. The terminal regions of proteins are accounted for by setting appropriate boundary conditions on the input window of the neural network; for example, with an input window of 21 residues, for the first residue in the protein chain we use only positions 11 to 21 of the input window. A bias is used to further refine the network. All the networks presented in this paper have 101 neurons per hidden layer. In total, there are 28 × *n _{window}* features plus one bias for a given window size of *n _{window}* residues.

Experimental values of *ψ* and *ϕ* angles and solvent-accessible surface areas are obtained from the DSSP program^{70}. As introduced in a previous paper^{28}, the *ψ* angles are shifted such that a minimum number of angles occur at the edges of the prediction window; i.e., the *ψ* angle is shifted by adding 100° to the angles between -100° and 180°, and 460° to the angles between -180° and -100°. This shift ensures that a minimum number of angles occur at the ends of the sigmoidal function, a region that is inherently difficult to predict in a neural-network-based machine learning method. No shift is performed for the *ϕ* angle, since no improvement was observed. Both angles are further normalized to [-1, 1] for the two-hidden-layer network (bipolar activation function) and to [0, 1] for the one-hidden-layer network (unipolar activation function). The solvent accessibility of a residue (RSA) is obtained from its solvent-accessible surface area relative to the maximum value in the dataset. Note that this is a slight departure from the method employed by Real-SPINE^{21}, which normalized by the accessible surface area of the residue in its “unfolded” state^{15}; the reason for this departure is that the output of the DSSP program contains RSA values greater than the “unfolded” values given by Ahmad et al.^{15}. The normalization factors for the RSA are given in Table 2.
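The *ψ*-angle shift and the bipolar normalization can be sketched as follows; the handling of the boundary at exactly -100° is an assumption, since the text does not specify which interval is closed:

```python
def shift_psi(psi):
    """Shift psi (degrees, in [-180, 180]) into [0, 360) so that few
    angles land at the saturating ends of the activation function."""
    if psi >= -100.0:
        return psi + 100.0       # angles in [-100, 180] -> [0, 280]
    return psi + 460.0           # angles in [-180, -100) -> [280, 360)

def normalize_bipolar(shifted):
    """Map a shifted angle in [0, 360) onto [-1, 1) for the tanh output."""
    return (shifted - 180.0) / 180.0
```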

**Table 2.** Mean absolute errors of *ϕ*, *ψ*, and RSA for residue types and secondary-structure elements based on ten-fold cross validation with Experiment IV

The reported accuracy is based on the ten-fold cross validation technique. In this procedure, 90% of the data is used for training and the remaining 10% for testing. This process is repeated 10 times such that every protein is part of exactly one testing group. Over-fitting protection is achieved by setting aside a random 5% portion of the training data for independent testing. Training is terminated when the prediction accuracy on this 5% set does not improve for 100 epochs, or when 1000 training epochs have completed. The weights giving the best prediction on the 5% set are then used to predict the 10% of the data used for validation.

We optimized the learning rate by trial and error, testing the prediction accuracy of *ψ* on a small dataset. We found an optimal learning rate of 0.01, much larger, and hence faster to converge, than those used in SPINE^{69} and Real-SPINE^{21} (0.001) and in Real-SPINE 2.0 (0.0001). This learning rate is used for predicting all three properties (*ψ*, *ϕ*, and solvent accessibility). In addition, a momentum coefficient of 0.4 is used.

The quality of the prediction is evaluated by the following parameters. The mean absolute error (MAE) is the absolute difference between the predicted and actual values of a normalized structural property, averaged over all predicted residues. *Q*_{10} is the fraction of residues whose angles are correctly classified when the torsion angles (or RSA) are evenly divided into 10 states (36° per bin for angles, 0.1 per bin for RSA). We also use *Q*_{10%}, the fraction of residues whose predicted angles are within 36° of the actual value (or within 10% for RSA).
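The three measures can be sketched for angles already shifted into [0°, 360°); the function names are illustrative:

```python
def mae(pred, true):
    """Mean absolute error over all predicted residues."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def q10(pred, true, width=36.0):
    """Fraction of residues whose predicted and actual angles fall in
    the same one of ten evenly spaced bins (36 degrees per bin)."""
    hits = sum(int(p // width) == int(t // width) for p, t in zip(pred, true))
    return hits / len(pred)

def q10pct(pred, true, tol=36.0):
    """Fraction of residues predicted within 36 degrees of the actual angle."""
    return sum(abs(p - t) <= tol for p, t in zip(pred, true)) / len(pred)
```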

The final reported result is based on a simple average of five predictors trained with different random initial weights (only two for Experiment V, due to its intensive computing requirements). We report Pearson’s correlation coefficients for the angles to allow comparison with previous results, although other statistical measures may be more suitable for circular data such as angles.

The processing times of each epoch for weight training are 2.4 minutes for Experiment III, 3.2 minutes for Experiment IV, and 12.7 minutes for Experiment V on an Intel Xeon CPU model E5345 clocked at 2.33GHz. Note that these are approximate times and that guided and non-guided neural networks take approximately the same duration. Additional details regarding the method, dataset, and algorithm can be found in Ref. ^{28} for backbone angles and Ref. ^{21} for residue solvent accessibility.

## RESULTS

Table 1 summarizes the results of the five experiments, comparing the prediction accuracy of the real-value *ψ* and *ϕ* angles and RSA given by the networks with and without guided learning. There is a consistent improvement after introducing guided learning, regardless of the number of hidden layers, the size of the input window, the size of the database for training and cross-validation, and the parameter used to measure accuracy. The absolute improvement ranges between 0.9-2.2% for Q_{10} and Q_{10%} in *ψ*, 0.4-1.3% for Q_{10} and Q_{10%} in *ϕ*, and 0.7-1.2% for Q_{10} and Q_{10%} in RSA. The mean absolute errors of *ψ*, *ϕ* and RSA are reduced by 2-4%. These improvements are often greater than the standard deviation among the ten folds. An increase of the correlation coefficient due to guided learning is also observed. Positive improvements in five different experiments indicate the statistical significance of the observed improvements according to Student’s t-test. For example, the paired t-test^{71} on the five pairs of *Q*_{10} values (guided versus unguided) in Row 1 of Table 1 yields a P-value of 0.0007, indicating that the difference between the two sets of data is statistically significant. The mean difference is 1.4%, with a 95% confidence interval from 1.0% to 1.8%.
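The paired t-test can be reproduced with standard-library Python; the five *Q*_{10} pairs below are placeholders, not the values in Table 1 (with n = 5 pairs there are 4 degrees of freedom, for which the two-tailed critical t at p = 0.05 is 2.776):

```python
import math

def paired_t(a, b):
    """t statistic of the paired (dependent-samples) t-test."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance of differences
    return mean / math.sqrt(var / n)

guided   = [0.51, 0.50, 0.52, 0.51, 0.50]   # placeholder Q10 values
unguided = [0.49, 0.49, 0.50, 0.50, 0.49]
t = paired_t(guided, unguided)
```

Consistently positive differences across the five experiments yield a t statistic well above the critical value, which is what makes the improvement statistically significant.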

**Table 1.** Prediction accuracies for real-value *ϕ* and *ψ* angles, and RSA from five experiments. Standard deviations between ten folds are also shown for Experiments III, IV and V

Table 1 indicates that Experiment IV (2479 proteins, a two-layer network and a window size of 21 residues) yields the most accurate predictor for *ψ*, *ϕ* and RSA. Thus, we analyze the results from Experiment IV in more detail. Table 2 displays the mean absolute errors for individual residues and secondary structural elements. The reduction of the error occurs essentially for every residue type and every secondary structure element for all three properties (*ψ*, *ϕ* and RSA). The average improvement is approximately 2%. Interestingly, residue Proline (P) has a greater improvement of 12% for the *ϕ* angle and 6% for the *ψ* angle. The only exception is that the accuracy for the RSA of the Cysteine residue is reduced by 2% with the guided learning network. However, even for Cysteine there is a 2% improvement in the prediction accuracy of the *ϕ* and *ψ* angles. Similarly, 1-3% reductions of MAE for helical, strand and coil residues are observed.

We further investigate the improvement of the prediction over different regions of the angle and RSA spaces. The *Q*_{10} scores for ten evenly spaced bins are given in Figs. 1, 2, and 3 for the angles *ϕ* and *ψ* and the RSA, respectively. In the figures we also give the results of a random prediction based only on the distribution of angles or RSA; note that the random-prediction bars are proportional to the frequency of occurrence of their respective bins. As before, we find an improvement in prediction accuracy of approximately 2% for the most populated bins. We note that for the RSA, the *Q*_{10} score for the second most populated bin, between 0.1 and 0.2, shows a reduction in prediction accuracy with the introduction of guided learning. In general, guided learning makes the most improvement in the highly populated regions.

**Figure 3.** *Q*_{10} score for the residue surface accessibility for 10 evenly spaced bins with a [0, 1] normalization.

In addition to guided learning, changing from a one-layer network with a unipolar activation function to a two-layer neural network with a bipolar activation function also leads to significant improvement. The effect is largest for *ψ*: for example, there is a 2.3% absolute improvement, from 48.4% to 50.7%, in Q_{10} with the database of 2479 proteins. However, the corresponding improvements are only 0.5% (from 55.6% to 56.1%) for *ϕ* and 0.2% for solvent accessibility. Thus, changing the neural network architecture affects different structural properties differently. Both guided learning and the change of network architecture improve the prediction accuracy.

To facilitate the comparison with previous work, we further performed Experiment IV on the original data set of 2640 proteins^{69}^{,}^{21}^{,}^{28} that includes 161 proteins with more than 500 residues. Results are shown in Table 3. The accuracy based on the ten-fold-cross validated values from the database of 2640 proteins is essentially the same as that from the database of 2479 proteins for *ϕ* and residue solvent accessibility but is slightly worse for *ψ* (<2% in relative difference). Note that the guided learning for the 2640 protein dataset also improves the prediction accuracies by approximately 2% relative to un-guided neural networks (results not shown). We also tested the effect of learning rates on the accuracy. No significant effect is observed (Table 3).

**Table 3.** Prediction accuracies for real-value *ϕ* and *ψ* angles, and RSA

It is also of interest to know whether a method trained on short chains is useful for predicting the structural properties of long chains. We applied the method trained on the database of 2479 proteins (chain length < 500) to the 161 proteins with chain lengths of more than 500 amino acid residues, averaging 10 sets of results generated from the weight parameters of the ten-fold training on the 2479-protein database. We obtained 64.8%, 81.0%, and 55.2% for *Q*_{10%} of *ψ*, *ϕ* and RSA, respectively. The corresponding ten-fold cross-validated *Q*_{10%} accuracies are 64.8%, 81.1%, and 57.1%, respectively, when long-chain proteins are used as part of the training and ten-fold cross-validation. Thus, only the accuracy of RSA improves when long-chain proteins are included in the training and test sets.

## DISCUSSION

We have introduced a machine learning technique called guided learning. Its purpose is to guide the neural network using prior knowledge or intuition about the neural-network weights. The idea is tested using five different experiments and three structural properties of proteins. A consistent 2% reduction in mean absolute error is observed regardless of the size of the database, the number of hidden layers, the size of the input window, the residue type, or the secondary-structure type. Thus, the observed improvement is robust and statistically significant. Such an improvement is obtained without any significant increase in computational time. This is important because we are optimizing a large number of weights simultaneously ((28 × 21 + 1) × 101 + 102 × 101 + 102 weights for Experiment IV).

Although a 2% improvement may seem small, it is significant for protein structural properties. For example, the accuracy of secondary structure prediction had stagnated at around 77%^{13} until a ten-fold cross-validated 80% was reached recently^{69}. In a separate study, we found that this technique leads to a 1% improvement in secondary structure prediction (*Q*_{3} = 81%; Faraggi and Zhou, in preparation). Moreover, this study represents only a preliminary implementation of the proposed guided-learning technique in a few specific cases; application to other problems with better-defined “intuitions” may be more profitable. Furthermore, the functional form for the guiding factors [Eqs. (7-9)] used in this study may not be optimal. Another possibility for improving guided learning is to develop an iterative method that refines the guiding factors.

Introducing the sequence-distance-dependent decay as a guiding factor also makes physical sense. It mimics the natural processes associated with natural neural networks. Due to the resistance of the axon connecting different natural neurons there will be a potential drop as a signal is propagated from one neuron to the next. Since one neuron may be connected to several others, with axons of different lengths connecting them, the outcome will be that neurons that are connected by longer axons will propagate weaker signals between them. This is exactly the effect of the guiding factors introduced here.

It is of interest to compare the prediction accuracy to the best reported accuracies, from Real-SPINE^{21} for solvent accessibility and Real-SPINE 2.0^{28} for torsion angles. Table 4 shows that there are 3%, 11%, and 5.5% absolute increases in the *Q*_{10} scores of *ψ*, *ϕ* and RSA, respectively; 3% and 2% absolute increases in the *Q*_{10%} scores of *ψ* and *ϕ*, respectively; and 5%, 10%, and 22% reductions in the MAE of *ψ*, *ϕ* and RSA, respectively. Interestingly, we found a reduction, rather than an increase, of the correlation coefficient for *ϕ*, from 0.707 to 0.656. This is mainly because the correlation coefficients were calculated between shifted *ϕ* angles in Real-SPINE 2.0 and unshifted angles in this work; if the shifted angles are converted back to normal values, the correlation coefficient in Real-SPINE 2.0 reduces from 0.707 to 0.53. This result highlights the fact that correlation coefficients are unsuitable for circular data such as angles. Moreover, the correlation coefficient ignores the possibly complex distribution of the variables being correlated^{72}; both *ϕ* and *ψ* angles have a bimodal rather than a normal distribution.
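One simple circular-aware alternative to the linear difference underlying the correlation coefficient is the circular absolute difference, which does not overstate errors across the -180°/180° boundary:

```python
def circular_abs_diff(a, b):
    """Smallest absolute angular difference between a and b, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)
```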

**Table 4.** Comparison between the prediction accuracies of this study and the best reported accuracies for *ϕ* and *ψ* angles, and RSA

The overall improvement in *ψ* angles can be mostly attributed to the introduction of guided learning and of the two-layer neural network. The more significant improvements for *ϕ* and RSA exist prior to the introduction of the two-layer network (Experiment III in Table 1); for both the RSA and the *ϕ* angle, we found the new scaling introduced here to be beneficial. We also found that the faster learning rate of 0.01 yields no improvement in accuracy but allows much faster convergence of the neural network training.

The predicted angles and residue surface accessibility will likely be useful for improving fold recognition and conformational sampling of protein structures, as demonstrated in a number of earlier studies for surface accessibility^{1}^{-}^{5}^{,}^{73}. The predicted angles have also been used to improve fold recognition^{7}^{-}^{9}, sequence alignment^{10}, and the accuracy of secondary structure prediction^{11}^{,}^{12}. The work presented here not only provides a new technique for machine learning but also an improved prediction tool, available at http://sparks.informatics.iupui.edu, which is useful for protein structure prediction.

## ACKNOWLEDGMENTS

The authors would like to thank the National Institutes of Health (NIH) for funding through Grants GM966049 and GM068530.

## References

^{5}: Improving protein fold recognition by using predicted torsion angles and profile-based gap penalty. PLoS ONE. 2008, in press.
