
What’s true for E. coli is true of an elephant

The quote by Jacques Monod paraphrased in the title celebrates our recent publication of an article suggesting that our previous results in Escherichia coli hold true for most other prokaryotes:

  • del Grande, M., & Moreno-Hagelsieb, G. (2014). The loose evolutionary relationships between transcription factors and other gene products across prokaryotes. BMC Research Notes, 7, 928. doi:10.1186/1756-0500-7-928

This article expands on the part about transcription factors presented in our previous study comparing the conservation of different kinds of functional associations:

  • Moreno-Hagelsieb, G., & Jokic, P. (2012). The evolutionary dynamics of functional modules and the extraordinary plasticity of regulons: the Escherichia coli perspective. Nucleic Acids Research, 40(15), 7104–7112. doi:10.1093/nar/gks443

The earlier article dealt with several experimentally confirmed functional interactions determined in Escherichia coli: genes in operons, genes whose products physically interact, genes regulated by the same transcription factor (regulons), and genes coding for transcription factors paired with their regulated genes. In that study we found that the associations involving transcription factors tend to be much less conserved than any of the other associations studied. Our work is not the first to suggest this lack of conservation, but it is the first to compare conservation across different kinds of associations, and thus to show that those mediated by transcriptional regulation are the least conserved.

The most recent article expands on the association between genes coding for transcription factors and other genes, the idea being to extend the study to as many other prokaryotes as possible. But how could we determine conservation between genes coding for transcription factors and other genes without experimentally determined interactions? We knew that at least some transcription factors could be predicted from their possessing a DNA-binding domain. But what about their associations? Our prior experience has been that target genes are hard to predict even when there is information on some characterized binding sites (sites that we like to call operators, for tradition's sake). So what to do if we have only the transcription factors? Well, to answer that we should first explain how we measured relative evolutionary conservation.

To measure evolutionary conservation we used a measure of co-occurrence called mutual information. For any two genes, the higher the mutual information, the less random their observed co-occurrence looks. Since we obtained mutual information scores for all gene pairs in the genomes we analyzed, we decided that, instead of something as hard as predicting operators and matching them to predicted transcription factors, we could use the top-scoring gene pairs as representatives of the most conserved interactions between our predicted transcription factors and anything else. This allowed us to compare the most conserved interactions involving transcription factors against the conservation of other interactions. Our findings suggest that interactions involving transcription factors evolve quickly in most, if not all, of the genomes analyzed.
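
For the curious, here is a minimal sketch of the mutual information calculation over two presence/absence profiles. It is written in Python purely for illustration (it is not our actual pipeline), and the profiles are made up:

```python
from collections import Counter
from math import log2

def mutual_information(profile_a, profile_b):
    """Mutual information (in bits) between two binary
    presence/absence profiles, one entry per genome analyzed."""
    n = len(profile_a)
    joint = Counter(zip(profile_a, profile_b))
    marg_a = Counter(profile_a)
    marg_b = Counter(profile_b)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n     # joint probability of this (a, b) combination
        p_a = marg_a[a] / n  # marginal probability of state a
        p_b = marg_b[b] / n  # marginal probability of state b
        mi += p_ab * log2(p_ab / (p_a * p_b))
    return mi

# Hypothetical profiles: 1 = ortholog present in a genome, 0 = absent
gene_x = [1, 1, 0, 1, 0, 0, 1, 1]
gene_y = [1, 1, 0, 1, 0, 1, 1, 1]
print(round(mutual_information(gene_x, gene_y), 3))
```

The higher the score, the further the joint presence/absence pattern sits from what independent gain and loss of the two genes would produce.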

Please read the articles for more details and information.

-Gabo

Democratic genomics

We had two articles recently published:

  1. Moreno-Hagelsieb, G., & Hudy-Yuffa, B. (2014). Estimating overannotation across prokaryotic genomes using BLAST+, UBLAST, LAST and BLAT. BMC Research Notes, 7, 651. doi:10.1186/1756-0500-7-651
  2. Ward, N., & Moreno-Hagelsieb, G. (2014). Quickly finding orthologs as reciprocal best hits with BLAT, LAST, and UBLAST: how much do we miss? PLoS ONE, 9(7), e101850. doi:10.1371/journal.pone.0101850

The story goes as follows. At a talk by some group I heard that they were using UBLAST to quickly find members of some protein families, rather than using a Hidden Markov Model approach. They said it was much faster, so I became curious. I downloaded USEARCH 5, the current version back then, to try it on the things I commonly do with NCBI's BLAST. I was surprised at how fast this program ran. In any event, I thought that testing it on some task would be good work for an undergraduate student. That became Natalie's undergrad thesis, which back then was about trying different options under USEARCH to get as much coverage with UBLAST as with NCBI's BLAST (UBLAST was not an option in USEARCH 5; rather, a local alignment search had to be done). We became more ambitious and decided to test a few more programs. BLAT was something I was already playing with, while an article from Jonathan Eisen's group (Darling et al., PhyloSift: phylogenetic analysis of genomes and metagenomes. PeerJ 2, e243, 2014) pointed me in LAST's direction (besides reviewers asking for more programs to be tested).

Later on, at some other talk, I think by Robert Beiko, the speaker mentioned something about BLAST being too slow for some task, and I asked why not try UBLAST. He said something to the effect of not knowing how much they might miss.

The articles we published cover one task each. One is finding orthologs as reciprocal best hits. Pretty straightforward: how many orthologs does each program find compared to BLAST? Essentially, finding orthologs as reciprocal best hits does not require finding every possible match; the top matches are enough. So if UBLAST, for example, found just a few top matches (under version 5 we could control the number of matches found before the program stopped looking), that would be enough to determine the best hit, and thus to figure out reciprocal best hits. We thought we might miss many matches but still find most of the reciprocal best hits, and that is what we found to be the case, except between evolutionarily distant genomes (see the second reference above).
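
For illustration, here is a minimal Python sketch of the reciprocal-best-hits logic, assuming hit tables already parsed into (query, subject, score) tuples; the gene names and scores are hypothetical, and the heavy lifting is of course done by the search programs themselves:

```python
def best_hits(hit_table):
    """Keep the single top-scoring subject for each query.
    hit_table: iterable of (query, subject, score) tuples, as
    parsed from tabular BLAST/UBLAST/LAST/BLAT output."""
    best = {}
    for query, subject, score in hit_table:
        if query not in best or score > best[query][1]:
            best[query] = (subject, score)
    return {query: subject for query, (subject, _) in best.items()}

def reciprocal_best_hits(hits_ab, hits_ba):
    """Orthologs as reciprocal best hits: gene a's best hit in
    genome B points back to a as its own best hit in genome A."""
    best_ab = best_hits(hits_ab)
    best_ba = best_hits(hits_ba)
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]

# Hypothetical toy hit tables: (query, subject, bit score)
a_vs_b = [("geneA1", "geneB1", 200.0), ("geneA2", "geneB2", 150.0)]
b_vs_a = [("geneB1", "geneA1", 198.0), ("geneB2", "geneA3", 90.0)]
print(reciprocal_best_hits(a_vs_b, b_vs_a))  # [('geneA1', 'geneB1')]
```

Only the top hit per query matters here, which is why a program that truncates its list of matches can still recover most reciprocal best hits.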

For the test on overannotation, the main idea was that this task compares proportions, not total numbers of matches. Thus, if UBLAST, LAST, and BLAT missed potential homologs but still found proportions equivalent to those found by NCBI's BLAST, the programs would work fine for estimating overannotation. Well, that's what we found.
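
As a toy illustration of the proportions point (all numbers invented, and the paper's actual measure of overannotation is more involved than this):

```python
# Hypothetical counts: annotated genes whose annotations are
# supported by detectable homologs, per search program.
total_annotated = 4200
supported = {"BLAST+": 3900, "UBLAST": 3720, "LAST": 3760, "BLAT": 3650}

for program, count in supported.items():
    proportion = count / total_annotated
    print(f"{program}: {proportion:.1%} supported, "
          f"{1 - proportion:.1%} potential overannotation")
```

If the faster programs miss homologs roughly uniformly, these proportions barely move even though the raw counts do.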

Finally, why democratic genomics? Well, with tools that can run sequence comparisons in a fraction of the time BLAST takes, and on a desktop computer, comparative genomics at a much larger scale becomes available to most if not all bioinformaticians. Why would I care? Well, because the more people can participate, the more ideas can make it into the field. Not everybody has access to computer clusters. There are other avenues towards this democracy, like the availability of some precomputed homologies and orthologies. Yet people will want to run their own tests for many reasons, from doubting the quality of existing data to testing genomes and protein sequences not already available in databases. Maybe there's also a good chance that genome and protein comparisons will be done via cloud computing and become quite accessible to mere mortals. Maybe web-based tools like RAST and MG-RAST are good enough for these tasks instead of having our own thing. I don't know. For now I think that the more options the better. These two articles are not enough. Strategies should also be developed to avoid wasting time and effort comparing sequences. As we develop our ideas and test programs, we will publish our results either in articles or, if not enough for a publication proper, in blog entries.

Have fun!

-Gabo

Evolutionary conservation

We have a new article in Nucleic Acids Research:

  • Moreno-Hagelsieb, G., & Jokic, P. (2012). The evolutionary dynamics of functional modules and the extraordinary plasticity of regulons: the Escherichia coli perspective. Nucleic Acids Research, 40(15), 7104–7112. doi:10.1093/nar/gks443

The article twists the normal use of phylogenetic profiles, which is that of predicting functional interactions. The idea behind phylogenetic profiles is that if we observe that two genes co-occur, their products might work together. What does this mean? Well, to co-occur means to both appear in the same genomes, and to both be absent whenever either one is absent. A most excellent idea. A most difficult one to use for actual predictions. OK then, hard to use for predictions? Why? Not sure, but, for starters, we can see that genes that work together in one organism do not co-occur that much across organisms. So I thought, maybe functional interactions are not well conserved. Maybe partners in functional crime are exchanged with ease. How would we know? Well, maybe if we looked at the phylogenetic profiles of collections of genes whose products functionally interact we could see something of a rate of exchange. Maybe the rates would be difficult to estimate, so what about comparing against the whole background of co-occurrence? What about finding some "gold standards"? … and that was something of a eureka moment. What about comparing different kinds of interactions in terms of their conservation? So, I tried a few, and lo and behold, interactions via co-regulation (regulons) looked worse than a "gold negative," namely transcription unit boundaries (adjacent genes in the same strand, but different transcription units).
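
To make the profile idea concrete, here is a small Python sketch (gene names, genomes, and ortholog assignments all hypothetical) of reading co-occurrence off presence/absence profiles:

```python
# Hypothetical data: which genomes carry an ortholog of each gene.
genomes = ["G1", "G2", "G3", "G4", "G5", "G6"]
orthologs = {
    "geneA": {"G1", "G2", "G4", "G6"},
    "geneB": {"G1", "G2", "G4", "G6"},  # co-occurs with geneA
    "geneC": {"G2", "G3", "G5"},        # found mostly elsewhere
}

def profile(gene):
    """Presence/absence vector of a gene across the genomes."""
    return [1 if g in orthologs[gene] else 0 for g in genomes]

def co_occurrence(gene1, gene2):
    """Fraction of genomes where the two genes are both present
    or both absent (a crude agreement score; the article scores
    profiles with mutual information instead)."""
    pairs = zip(profile(gene1), profile(gene2))
    return sum(a == b for a, b in pairs) / len(genomes)

print(co_occurrence("geneA", "geneB"))  # 1.0, perfect co-occurrence
print(co_occurrence("geneA", "geneC"))  # much lower
```

The comparison in the article is then between distributions of such scores for different kinds of interactions, rather than between individual gene pairs.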

So there you go. The most surprising result was the low level of conservation for interactions mediated via regulons. The best part was that the most conserved interactions were those among genes found in the same transcription unit (in operons). Why best? Because a lot of my research has been about using operon predictions to predict networks of functional interactions. Since these interactions are the most conserved, we might expect them to be the most useful for inferring functional interactions. Right? Well, maybe. Still lots of research needed. I hope you enjoy the article.

-G

Computational Genomics and Metagenomics

[Figure: network of functional interactions for the arginine repressor, ArgR]

Welcome to the web page of the lab of Computational Genomics and Metagenomics, a.k.a. the lab of Computational Microbiology, the lab of Computational Microbiomics, and the lab of Computational Con-Sequences.

We are interested in all things genomic, metagenomic, postgenomic, postmetagenomic, and hyperultramegasupragenomic (!). Our work centres on the evolution of gene products and on the inference of both their functions and their functional interactions, mostly in prokaryotes.

Everything in this lab is done with computers. Yet, besides working with other computational biologists, we also have collaborations with wet labs.

You might be wondering how this kind of research got started. Well, it all began with the idea that we should stop finding the genes in the human genome one by one, through laborious and intense work linking phenotypes (what we see) to the very gene, or genes (what we don't see), responsible for such phenotypes. Not that such work is not valuable; au contraire, without that work providing us with real-life examples we would not be in any position to make sense of genome sequences. It was probably something of a case of impatience [and boldness]. Of course, there is also the tiny detail that knowing our complete genetic complement (a useful definition of "genome") would provide us with a wider and more accessible basis for finding the genes behind phenotypes better and faster. Now substitute the word "phenotype" with whatever disease might involve genetics (or genetics gone bad), such as "cancer" or "diabetes," and you might get a better feel for the importance of this task.

As you might guess, quite correctly, this ambitious project set the whole machinery in motion. Long story short (but I might try to tell it better later), the technological advancements brought about by the idea of having our beautiful 23+1/2 pairs of chromosomes sequenced allowed scientists to sequence microbes. With those genomes available, before we even had a first draft of our own, other technologies arose, technologies focused on making sense of the newly found genes in those genomes. A couple of these are transcriptomics, which started with microarrays, used to find out which genes are expressed by detecting their messenger RNAs; and proteomics, used to figure out which proteins are being produced. That, my friends, started the "postgenomic era." I gave it away, didn't I? You have thus guessed that "postgenomics," in the second paragraph, refers to the products of these new technologies, and you are absolutely right.

Well, not content with the human genome (the draft was announced in 2000, yes, ten years ago!), and with powerful sequencing technologies available, another new field arose: environmental genomics, or metagenomics (the word "metagenomics" was used to mean fishing genes with particular functions out of the environment before it was used in this context, but let's not go there). This field is about sequencing fragments of DNA isolated from an environment (I was going to write "from a given environment," but I resisted), and then guessing things about the microbes, or whatever else, in that environment. Others use "metagenomics" to mean sequencing without culturing, but let's not go there either. Well, since scientists are now sequencing mRNA rather than DNA, we could say that the postmetagenomic era has dawned, though I haven't seen this word used in a paper yet. In any event, such a humongous amount of data necessarily calls for computational analyses, and here we are.

Well, it is hard to guess where all this is going, but sequencing technologies keep improving and getting cheaper. We can only expect the word "deluge," found in so many published papers of the genomic era, to become ridiculous by comparison. This means lots of challenges in making sense of the information, and lots of new avenues of research too. This is why I am reserving the word "hyperultramegasupragenomic" (and its "post-" derivative) for later use. The way things are going, it might not be that much later.

With that, welcome again, and enjoy your visit.

-Gabo
