
Accelrys Blog

2 Posts tagged with the libdock tag

Previously, I reported on the results of the docking challenge held at the 241st ACS National Meeting in Anaheim, CA. To recap, the competition was split into two tasks. The first was to demonstrate the ability of a docking engine to reproduce the crystallographic coordinates of ligands bound to 85 proteins (the Astex diverse set, Hartshorn et al. 2007), and the second was a virtual screening challenge that included a negative set of decoy compounds (taken from the DUD data set, Huang et al. 2006). Each participating group presented its findings, and a joint paper is planned for a special issue of the Journal of Computer-Aided Molecular Design.

 

In my last blog, I mentioned that the variation in docking performance was closely tied to the effort put into protein preparation, and suggested that “Proper Protein Preparation Prevents Poor Performance” (or the 6P’s rule of docking, as I now like to call it). I also indicated that I’d talk about the second part of the docking challenge in a follow-up blog. Here it is:

 

In the second part of the challenge, participants were asked to report on the performance of their docking engines in a series of virtual screening studies. Actives and decoy inactives, taken from the DUD data set (Huang et al. 2006), were supplied for a set of 40 proteins. For each screen, performance was reported as the area under the curve (AUC) of a Receiver Operating Characteristic (ROC) plot, and the overall result was then calculated as the average of the per-target values.
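
To make that metric concrete, here is a minimal sketch in Python (not the scoring script actually used in the challenge) of how a single screen can be summarised as a ROC AUC from the scores assigned to actives and decoys, and how the per-target values are averaged into an overall result; all score values shown are purely illustrative:

    # ROC AUC via the rank-sum formulation: the probability that a randomly
    # chosen active receives a better score than a randomly chosen decoy.
    def roc_auc(active_scores, decoy_scores):
        wins = 0.0
        for a in active_scores:
            for d in decoy_scores:
                if a > d:            # assumes a higher score is better
                    wins += 1.0
                elif a == d:
                    wins += 0.5      # ties count as half a win
        return wins / (len(active_scores) * len(decoy_scores))

    # One AUC per target, averaged to give the overall challenge-style result.
    per_target_auc = [roc_auc([8.2, 7.9, 6.5], [7.1, 5.0, 4.2, 3.3]),
                      roc_auc([9.1, 5.5], [6.0, 4.8, 2.9])]
    print(sum(per_target_auc) / len(per_target_auc))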

 

As with the previous pose prediction task, teams reported that protein preparation influenced screening performance. Indeed, with appropriate preparation, many participants could demonstrate improved discrimination between actives and decoys. However, this isn’t what interested me the most. What was really interesting was a set of results presented by the Accelrys R&D team, comparing their docking-based screening results to results obtained from receptor-based pharmacophore models. This was very appealing!

 

On one level, it was certainly fascinating to see the comparative performance of the faster, computationally less sophisticated pharmacophore representation next to the docking-based methods. However, this wasn’t the exciting bit. The exciting bit was the idea of using both methods together. If you have enough data points to hand to conduct a study with either method, then why not use both? Since both would be working from the same protein structure as a starting point, any compound identified by both approaches should be worth considering further. In other words, a consensus ranking from multiple methods should give more reliable rankings, and potentially be less susceptible to false positives from any single method.
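
As a rough illustration of the idea (a sketch only, not how Discovery Studio implements it): rank the compounds separately by each method’s score and then average the ranks, so that compounds favoured by both methods rise to the top. The compound names and score values below are hypothetical:

    # Rank-average consensus of two (or more) scoring methods.
    def ranks(scores):
        """Map compound id -> rank (1 = best), assuming a higher score is better."""
        ordered = sorted(scores, key=scores.get, reverse=True)
        return {cid: i + 1 for i, cid in enumerate(ordered)}

    def consensus_rank(*score_dicts):
        """Order the compounds scored by every method by their average rank."""
        per_method = [ranks(s) for s in score_dicts]
        common = set.intersection(*(set(r) for r in per_method))
        return sorted(common,
                      key=lambda cid: sum(r[cid] for r in per_method) / len(per_method))

    docking_score     = {"cpd1": 112.4, "cpd2": 105.2, "cpd3": 98.7}  # hypothetical
    pharmacophore_fit = {"cpd1": 3.8,   "cpd2": 3.1,   "cpd3": 2.2}   # hypothetical
    print(consensus_rank(docking_score, pharmacophore_fit))           # ['cpd1', 'cpd2', 'cpd3']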

 

The other advantage of combining two or more methods into a consensus approach is that the compounds unique to any one method can provide additional insight, both into the choice of models and into the computational methods used. For example, with docking methods you do not need to explicitly define all of the required binding site features a priori, so novel hits from docking can reveal useful information about binding site features, or identify compounds that would not necessarily match the feature rules used in a pharmacophore definition. In comparison, pharmacophore models offer a much simpler representation of the features required for ligand binding. Consequently, they can prove very adept at finding novel scaffolds that might otherwise rank poorly with the scoring functions used in docking methods. Docking methods are also well known to be highly sensitive to both side-chain and loop conformations in protein models, so a simpler representation of the key features required for binding can avoid undue penalties arising from the particular protein conformers chosen.

 

One of the very nice features of Discovery Studio (DS) is that you can build your own consensus approaches. Because every task in DS is actually a Pipeline Pilot protocol, the underlying science and workflows are laid out as components connected together by pipelines. This means you can quickly and easily cut, copy, and paste components to create novel protocols that combine different scientific methods.

 

Why stop at just two methods? If you have enough appropriate data points, then why not also consider including model learning approaches (e.g., Bayesian classification, recursive partitioning)?
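
For instance, a simple Bayesian classifier trained on fingerprints of known actives and inactives can supply one more score set to feed into the consensus ranking sketched above. The example below uses scikit-learn’s naive Bayes rather than the Bayesian learner in Pipeline Pilot, and the fingerprint bits, labels and compound names are toy placeholders:

    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    # Toy binary fingerprints and activity labels (1 = active, 0 = inactive).
    X_train = np.array([[1, 0, 1, 1],
                        [1, 1, 0, 1],
                        [0, 0, 1, 0],
                        [0, 1, 0, 0]])
    y_train = np.array([1, 1, 0, 0])

    model = BernoulliNB().fit(X_train, y_train)

    # Predicted probability of activity for two screening compounds; the
    # resulting dictionary could be passed to consensus_rank() as a third method.
    X_screen = np.array([[1, 0, 0, 1],
                         [0, 1, 1, 0]])
    bayes_score = dict(zip(["cpd1", "cpd2"], model.predict_proba(X_screen)[:, 1]))
    print(bayes_score)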

 

So, why wait? Why not try building your own consensus modeling protocol today?

 

References:

Hartshorn M.J., et al. J. Med. Chem., 2007, 50(4), pp 726–741; DOI: 10.1021/jm061277y

Huang N., Shoichet B.K., Irwin J.J. J. Med. Chem., 2006, 49(23), pp 6789–6801; DOI: 10.1021/jm0608356

Categories: Modeling & Simulation | Tags: discovery_studio, pharmacophore, docking, discovery-studio, cdocker, libdock, acs, protein_preparation, consensus_modeling

At the Spring ACS in Anaheim, we saw the introduction of a new protein-ligand docking challenge. The competition was open to all and was split into two tasks: the first was to demonstrate the ability of a docking engine to reproduce the crystallographic coordinates of ligands bound to 85 proteins (the Astex diverse set, Hartshorn et al. 2007), and the second was a virtual screening challenge that included a negative set of decoy compounds (taken from the DUD data set, Huang et al. 2006). Each participating group presented its findings, and a joint paper is planned for a special issue of the Journal of Computer-Aided Molecular Design. In the meantime, hopefully, this challenge will become a regular feature of the ACS.

 

For the first part of the challenge, competitors were set the task of reproducing the crystallographic coordinates of a series of high-affinity, small-molecule ligands across 85 diverse targets. Competitors were asked to present their results in two parts: first, the percentage of targets where the best coordinate match (within 2 Å RMSD of the crystal ligand) was also the top-scoring pose, and second, the percentage of targets where the best coordinate match appeared within the top 30 scoring poses (a small sketch of these two metrics follows the list below). Each team presented its findings and the overall results were then reviewed. What was not surprising was that the majority of docking engines were able to find the crystallographic coordinates within the top 30 poses with a good deal of success. What was more surprising was the impact of the protein preparation steps on how often the top-scoring pose was also the best match. Indeed, taking the proteins precisely as provided, many competitors reported very poor initial results. However, with the inclusion of clean-up steps, individual docking engines fared significantly better. In our case, we used the Protein Clean command with the following options:

  • Adjust hydrogens and fix bonds
  • Correct crystallographic disorder: remove all alternate conformations, retain only the first conformer, and reset all occupancies to 1.0
  • Standardize the atom order within each amino acid residue
  • Add any missing atoms to incomplete amino acid side chains or backbones
  • Replace nonstandard nomenclature with standard names
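
Returning to the two success metrics described above, here is a minimal sketch in Python (not the challenge’s official analysis) of how they can be computed once each docked pose has been assigned a score and an RMSD to the crystal ligand; the pose data below are illustrative only:

    # For each target, sort the poses by score and check whether a pose within
    # the RMSD cutoff of the crystal ligand sits at rank 1, or anywhere in the top N.
    def pose_metrics(targets, rmsd_cutoff=2.0, top_n=30):
        """targets: list of pose lists, each pose = (score, rmsd); higher score = better."""
        top1_hits = topn_hits = 0
        for poses in targets:
            ranked = sorted(poses, key=lambda p: p[0], reverse=True)
            if ranked and ranked[0][1] <= rmsd_cutoff:
                top1_hits += 1
            if any(rmsd <= rmsd_cutoff for _, rmsd in ranked[:top_n]):
                topn_hits += 1
        n = len(targets)
        return 100.0 * top1_hits / n, 100.0 * topn_hits / n

    # Two illustrative targets, each with three (score, rmsd) poses.
    example = [[(110.0, 1.2), (95.0, 5.6), (90.0, 0.8)],
               [(102.0, 4.3), (99.0, 1.7), (85.0, 6.0)]]
    print(pose_metrics(example))   # -> (50.0, 100.0)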

 

What the results of the first challenge clearly showed was the importance of protein preparation as a crucial step in ligand docking. Indeed, the message to all docking software vendors is quite clear: incorporate protein health checks and clean-up steps as part of your docking workflows. Fortunately, in the case of Discovery Studio, these tools have been in place for some time now and are very well established in the product. Moreover, using just the default settings for both LibDock and CDOCKER, the complete docking procedure and analysis was automated into a workflow using our DS component collection, and each achieved success rates as good as any of the other participants achieved with manual intervention. This last part was the most telling: with sensible preparation of a protein structure, the docking tools in DS can yield good quality results without any need to resort to manual intervention.

 

I’ll talk about the second part of the challenge in a follow-up blog. For now, I’ll finish with a proposal for a new version of the Six P’s rule: “Proper Protein Preparation Prevents Poor Performance”.

 

Adrian.

 

References:

Hartshorn M.J., et al. J. Med. Chem., 2007, 50(4), pp 726–741; DOI: 10.1021/jm061277y

Huang N., Shoichet B.K., Irwin J.J. J. Med. Chem., 2006, 49(23), pp 6789–6801; DOI: 10.1021/jm0608356

Categories: Modeling & Simulation | Tags: discovery_studio, docking, discovery-studio, cdocker, libdock, acs, protein_preparation