Model learns how individual amino acids determine protein function

Author: Rob Matheson | MIT News Office
March 22, 2019

A machine-learning model from MIT researchers computationally breaks down how segments of amino acid chains determine a protein's function, which could help researchers design and test new proteins for drug development or biological research.

Proteins are linear chains of amino acids, connected by peptide bonds, that fold into exceedingly complex three-dimensional structures depending on the sequence and on physical interactions within the chain. That structure, in turn, determines the protein's biological function. Knowing a protein's 3-D structure is therefore valuable for, say, predicting how proteins may respond to certain drugs.

However, despite decades of research and the development of multiple imaging techniques, we know only a very small fraction of possible protein structures: tens of thousands out of millions. Researchers are beginning to use machine-learning models to predict protein structures from their amino acid sequences, which could enable the discovery of new protein structures. But this is challenging, as diverse amino acid sequences can form very similar structures.
And there aren't many structures on which to train the models.

In a paper being presented at the International Conference on Learning Representations in May, the MIT researchers develop a method for "learning" easily computable representations of each amino acid position in a protein sequence, initially using 3-D protein structure as a training guide. Researchers can then use those representations as inputs that help machine-learning models predict the functions of individual amino acid segments, without ever again needing any data on the protein's structure.

In the future, the model could be used for improved protein engineering, by giving researchers a chance to better zero in on and modify specific amino acid segments. The model might even steer researchers away from protein structure prediction altogether.

"I want to marginalize structure," says first author Tristan Bepler, a graduate student in the Computation and Biology group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). "We want to know what proteins do, and knowing structure is important for that. But can we predict the function of a protein given only its amino acid sequence? The motivation is to move away from specifically predicting structures, and move toward [finding] how amino acid sequences relate to function."

Joining Bepler is co-author Bonnie Berger, the Simons Professor of Mathematics at MIT with a joint faculty position in the Department of Electrical Engineering and Computer Science, and head of the Computation and Biology group.

Learning from structure

Rather than predicting structure directly, as traditional models attempt, the researchers encoded predicted protein structural information directly into representations.
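Conceptually, such a representation maps a length-L protein sequence to an L-by-d array of numbers, one vector per amino acid position. A minimal sketch of that interface, with a random lookup table standing in for the trained encoder (all names and dimensions here are illustrative, not from the paper's code; the real encoder is a neural network whose output at each position also depends on the surrounding sequence):

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
EMBED_DIM = 8  # toy size; learned representations typically use hundreds of dimensions

# Toy stand-in for a trained encoder: each residue type gets a fixed
# random vector, so this table ignores sequence context entirely.
rng = np.random.default_rng(0)
TABLE = {aa: rng.normal(size=EMBED_DIM) for aa in AMINO_ACIDS}

def embed(sequence):
    """Map a protein sequence to one embedding vector per residue,
    producing an (L, EMBED_DIM) array for a length-L sequence."""
    return np.stack([TABLE[aa] for aa in sequence])

z = embed("MKTAYIAKQR")
print(z.shape)  # one EMBED_DIM-long vector per amino acid position
```

The key property downstream models rely on is only this shape contract: one vector per position, computable from sequence alone.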
To do so, they use known structural similarities between proteins to supervise their model as it learns the functions of specific amino acids.

They trained the model on about 22,000 proteins from the Structural Classification of Proteins (SCOP) database, which organizes proteins into classes by similarity of structure and amino acid sequence. For each pair of proteins, they calculated a real similarity score, meaning how close the two are in structure, based on their SCOP class.

The researchers then fed their model random pairs of protein structures and their amino acid sequences, which an encoder converted into numerical representations called embeddings. In natural language processing, embeddings are essentially lists of several hundred numbers that together correspond to a letter or word in a sentence. The more similar two embeddings are, the more likely those letters or words are to appear together in a sentence.

In the researchers' work, each embedding in the pair contains information about how similar each amino acid sequence is to the other. The model aligns the two embeddings and calculates a similarity score to predict how similar the two 3-D structures will be. It then compares its predicted similarity score with the real SCOP similarity score for their structures, and sends a feedback signal to the encoder.

Simultaneously, the model predicts a "contact map" for each embedding, which says how far each amino acid is from all the others in the protein's predicted 3-D structure; essentially, do they make contact or not? The model also compares its predicted contact map with the known contact map from SCOP, and again sends a feedback signal to the encoder.
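The align-and-score step of that training signal can be sketched in numpy. This is only an illustrative version of the comparison between two proteins' embeddings: the soft alignment and distance choices below are simplifications of the paper's method, the encoder producing the embeddings is omitted, and in training the resulting score would be compared against the SCOP-derived score to generate the feedback signal:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def align_similarity(za, zb):
    """Illustrative similarity between two proteins' per-residue
    embedding matrices, shapes (La, d) and (Lb, d): softly align
    positions of one protein to positions of the other, then return
    the negative weighted average of aligned embedding distances."""
    # L1 distance between every residue pair across the two proteins
    dist = np.abs(za[:, None, :] - zb[None, :, :]).sum(axis=-1)  # (La, Lb)
    a = softmax(-dist, axis=1)   # each residue of A attends over B
    b = softmax(-dist, axis=0)   # each residue of B attends over A
    w = a + b - a * b            # symmetric alignment weights
    return -(w * dist).sum() / w.sum()

rng = np.random.default_rng(0)
za = rng.normal(size=(12, 8))
zb = rng.normal(size=(15, 8))

# A protein scores higher against itself than against unrelated random
# embeddings; training would push these scores toward SCOP similarity.
self_score = align_similarity(za, za)
cross_score = align_similarity(za, zb)
print(self_score > cross_score)
```

Because the score is differentiable in the embeddings, the error against the real SCOP score can be backpropagated into the encoder, which is what "sends a feedback signal" means in practice.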
This helps the model learn where exactly amino acids fall in a protein's structure, which further refines its picture of each amino acid's function.

Basically, the researchers train their model by asking it to predict whether paired sequence embeddings will or won't share a similar SCOP protein structure. If the model's predicted score is close to the real score, it knows it's on the right track; if not, it adjusts.

Protein design

In the end, for one input amino acid chain, the model produces one numerical representation, or embedding, for each amino acid position in the 3-D structure. Machine-learning models can then use those sequence embeddings to accurately predict each amino acid's function based on its predicted 3-D structural "context": its position in, and contacts with, the rest of the protein.

For instance, the researchers used the model to predict which segments, if any, pass through the cell membrane. Given only an amino acid sequence, the model predicted all transmembrane and non-transmembrane segments more accurately than state-of-the-art models.

"The work by Bepler and Berger is a significant advance in representing the local structural properties of a protein sequence," says Serafim Batzoglou, a professor of computer science at Stanford University. "The representation is learned using state-of-the-art deep learning methods, which have made major strides in protein structure prediction in systems such as RaptorX and AlphaFold. This work has ultimate application in human health and pharmacogenomics, as it facilitates detection of deleterious mutations that disrupt protein structures."

Next, the researchers aim to apply the model to more prediction tasks, such as figuring out which sequence segments bind to small molecules, which is critical for drug development.
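Downstream models consume those per-position embeddings as ordinary feature vectors. A hypothetical minimal example of the pattern, scoring each residue independently with a linear classifier (the weights here are random, i.e. untrained; a real transmembrane predictor would be fit on labeled segments and could use a more expressive model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained encoder's output: per-residue embeddings for
# a 10-residue protein, with embedding dimension 8.
z = rng.normal(size=(10, 8))

# Per-position linear classifier: one weight vector scores every
# residue's embedding, e.g. for the question "is this position part
# of a transmembrane segment?"
w = rng.normal(size=8)
b = 0.0

logits = z @ w + b
probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid: one probability per residue
is_transmembrane = probs > 0.5          # one yes/no call per position
print(probs.shape)
```

The point is the interface: because the embeddings already encode structural context, even simple per-position classifiers on top of them can be competitive.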
They're also working on using the model for protein design. Using their sequence embeddings, they can predict, say, at what color wavelengths a protein will fluoresce.

"Our model allows us to transfer information from known protein structures to sequences with unknown structure. Using our embeddings as features, we can better predict function and enable more efficient data-driven protein design," Bepler says. "At a high level, that type of protein engineering is the goal."

Berger adds: "Our machine learning models thus enable us to learn the 'language' of protein folding — one of the original 'Holy Grail' problems — from a relatively small number of known structures."

Source: http://news.mit.edu/2019/machine-learning-amino-acids-protein-function-0322