
Accelrys Blog

December 2010

Let it snow, let it snow...

Posted by stodd Dec 19, 2010

As the winter vacation looms nearer, I have been thinking about how I am going to get to Scotland for Christmas. At the same time, a colleague asked if I was going to write a seasonal script again this year. Merging those two streams of thought, I came up with a new season's greetings script, which you can download or see as a movie from here. Note that the script requires Materials Studio 5.5 to run.

 

I wondered initially about how to make snow fall down the screen. I started off using shearing in DPD in Mesocite, but that didn't really give me the flexibility I needed. One of my colleagues, Neil Spenley, recommended trying out the new electric fields that we added in Materials Studio 5.5. This was a great idea as it enabled me to switch the field directions with a lot of flexibility; hence the "wind" blows across the screen.

 

The next issue was how to keep the snow falling. As I only have a fixed number of snow beads in my structure, I had to make them fall from the sky repeatedly. To do this, I used a soft, short-range potential which, thanks to the magic of periodic boundary conditions, lets the beads fall through the ground and back in through the top of the screen. In the last part of the script, I used close contacts to dock the snow onto the ground.

 

The final challenge was to make the greeting appear. Initially, I fixed the bead positions, but that makes the beads visible all the time, which was a bit of an issue! Eventually, I realised that I could use the restraints energy, also introduced in Materials Studio 5.5, to pull the beads to anchor positions that I created from a pre-defined set of coordinates.
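The real thing is a Materials Studio script, but the recipe translates into a few lines of plain Python. The sketch below is purely illustrative (all names and parameters are made up, it is not the MaterialsScript code) and just shows the three ingredients: a switchable sideways "field" plus gravity, periodic wrapping so beads re-enter at the top, and harmonic restraints that pull beads to pre-defined anchor positions.

```python
import numpy as np

# Toy 2D "snow" integrator -- an illustration of the recipe above,
# NOT the Materials Studio / MaterialsScript code.
rng = np.random.default_rng(0)
box = np.array([50.0, 30.0])                      # periodic box (x, y)
pos = rng.uniform(0.0, 1.0, (200, 2)) * box       # snow bead positions
vel = np.zeros_like(pos)
anchors = rng.uniform(0.0, 1.0, (200, 2)) * box   # stand-in for the pre-defined greeting coordinates
dt, gravity, k_restraint = 0.05, -0.5, 2.0

def step(t, spell_out_greeting=False):
    wind = 0.3 * np.sign(np.sin(0.1 * t))          # switch the "wind" field direction periodically
    force = np.tile([wind, gravity], (len(pos), 1))
    if spell_out_greeting:
        force += k_restraint * (anchors - pos)     # harmonic restraint pulling beads to their anchors
    vel[:] += force * dt
    pos[:] = (pos + vel * dt) % box                # periodic wrap: fall out the bottom, re-enter at the top

for t in range(2000):
    step(t, spell_out_greeting=(t > 1500))
```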

 

After this, it was just a matter of tuning the parameters. I experimented with a lower electric field strength, which allowed the beads to move side to side a little. However, when they hit the Lennard-Jones potential on the roof beads, they bounced back up into space – not realistic!

 

After much playing, I finally got something I was happy with – so Season's Greetings, everyone! Now I have to think about next year...

Categories: Modeling & Simulation Tags: classical_simulation, materials, materials-studio, computational-chemistry

Sustainability continues to grow in importance as an issue, both ecologically and sociologically. One source of resistance, I think, is the perceived cost of implementing sustainable practices versus the status quo. A recent article by Rosemary Grabowski in Consumer Goods Technology (CGT) outlines a number of practices and shows - among other things - why cost shouldn't be an issue. Sustainability isn't just good for the environment: it's good for the bottom line. Grabowski lists 10 requirements for sustainable strategies that also serve as drivers of business. I won't repeat them all here since you can read them for yourselves, but I'd like to highlight a couple that fit with themes we've already explored on this web site over the past year or so.

 

We have written a lot on sustainability on this web site, with examples such as Michael Doyle's blogs on sustainability, my own blog on methanol from biomass, or Michael Kopach's entry on incorporating green features into electronic lab notebooks (ELNs).

 

As a modeler I just love these topics because there's so much that software can contribute - in fact all 10 of Grabowski's requirements can be met this way. Open Collaboration, for example, is an idea that P&G pioneered some years ago. Of course anyone can dump a new idea into the system, but you need a way to keep track of it, share it, add to it, and feed it back again. This is where software tools like ELNs come in. They also deliver centralized data, data continuity, and tracking, all of which are mentioned in the CGT article.

 

Another key requirement is Virtual Design. (You guessed it: as a modeler this is my absolute favorite topic.) Software has reached the stage where a lot of work can be done computationally rather than in the lab. No messy chemicals to clean up, no animals to experiment on, no toxic by-products. The capabilities span alternative energy, more efficient use of resources, and the prediction of toxicology (see SlideShare for specific examples).

 

Sustainability shouldn't be a buzzword in 2011: it should be a serious consideration for all corporations and individuals concerned about the future. Software offers inexpensive, non-polluting solutions. I've touched on only a few here, but I'm really interested to hear from you folks out there. What are your challenges in sustainability and how are you meeting them? Whether with software or otherwise, let me know.

Categories: Electronic Lab Notebook, Modeling & Simulation Tags: alternative-energy, green-chemistry, sustainability

As Anderson says in his post in Wired, the Petabyte Age is upon us. Just consider that in the last 30 years we have gone from 32K ferrite-core memories in room-filling machines, through VMS and paper tape, to my new laptop with a terabyte array. Similarly, we have gone from initial data stores to RDBMSs, and now to Google obfuscating and federating many data sources. So, what's next?

 

Anderson makes a solid argument that a holistic information map is now an unachievable goal and that what we need instead is locally descriptive analyses. Although he is right in much of what he says, this is a debatable point. The failure of chess-like programs in real-world scenarios is linked to the lack of contextual capability in computer science, and it follows that a wholesale abandonment of the context of information would lead to a form of data Alzheimer's. Where I totally agree with him is his quote from George Box (of Box, Hunter and Hunter fame): "All models are wrong, but some are useful." This is very true, and it makes the argument for consensus models and consensus decision trees. Think of trial by jury as an analogue: multiple models, housed in an accessible framework, allow and enable decision making that is free from local bias or local variance.
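As a concrete illustration of the "jury" idea (my own sketch, not anything from Anderson's piece): train several small decision trees on different resamplings of the data and let them vote, so that no single model's local bias dominates the verdict. All names below are illustrative, and the helper assumes NumPy arrays with integer class labels.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def consensus_predict(X_train, y_train, X_new, n_models=11, seed=0):
    """Majority vote over small decision trees trained on bootstrap resamples.
    Illustrative sketch: assumes NumPy arrays and integer class labels."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), len(X_train))          # bootstrap sample
        tree = DecisionTreeClassifier(max_depth=3).fit(X_train[idx], y_train[idx])
        votes.append(tree.predict(X_new))
    votes = np.array(votes)                                        # shape: (n_models, n_samples)
    # the "jury verdict": the class most of the models agree on, per sample
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```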

 

What is also very interesting in Anderson's thesis is the idea that, although we can and will continue to build multiple taxonomies, the Esperanto-like notion of a universal language is flawed. In my opinion, as a child of the 90s and the fall of monolithic states, people will adapt and morph like a flowing river, and ideas such as universal taxa cannot in any way provide the overall context. It is therefore in the provisioning of tools that make these taxonomies interchangeable, and in a platform that makes interactions trivial, that we will move forward towards our goal of universal information.

 

Consider what I saw yesterday: a lady at Newark airport pushing two kids in a stroller. Both children were using iPad kiddie programs. They were both drawing, learning and, in fact, sharing information. And this is from children no older than, say, five years. They represent the future and how we will move. In their world, the idea of information having boundaries and limits is completely nonsensical. So we have to understand that the explosion of data is just beginning and start to plan accordingly.

Categories: Data Mining & Knowledge Discovery, Trend Watch Tags: pipeline_pilot

In the business world, which as we know "never sleeps," there is a constant drive to optimize performance, cost, the supply chain and, of course, profits. The pace of technology, not just locally but globally, continues on an all-pervasive "tear" to support these goals. As a result, the complexity and interconnectedness of business organizations has increased.

 

Recently there has been a sea change in information strategies: a significant switch from reactive information management to proactive, or predictive, information management. Many leading corporations are seeking to optimize their processes beyond simple 3-sigma levels and are even pushing beyond 6-sigma levels. To do this they need to develop actionable insights about their corporate data and information.
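For readers who don't live in the quality world, the sigma levels translate into defect rates roughly as follows. This is a back-of-the-envelope calculation of my own, using the conventional Six Sigma assumption of a 1.5-sigma long-term drift of the process mean.

```python
from scipy.stats import norm

def defects_per_million(sigma_level, mean_shift=1.5):
    """Defect rate (per million opportunities) at a given sigma level,
    assuming the conventional 1.5-sigma long-term drift of the mean."""
    return norm.sf(sigma_level - mean_shift) * 1_000_000

for level in (3, 6):
    print(f"{level} sigma: ~{defects_per_million(level):,.1f} defects per million")
# roughly 66,807 defects per million at 3 sigma versus ~3.4 at 6 sigma
```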

 

More and more we see businesses moving in the direction of intelligent lead optimization and modeling, or predictive lead research. A key driver of this is that all of these processes need to interconnect the business decision-making processes, functions and layers with the research process and materials optimization layer in order to achieve the much-sought-after efficiencies. For example, it is no longer enough to know how to manufacture a charge-coupled sensor for a camera; the process, production and testing results also need to be linked to the customer response system to give predictive or early warning that the system needs modification or tuning when in-use failures are leading to high levels of product returns.

 

This shift to quality by design, rather than just prescriptive testing, is the same shift that happened in the pharmaceutical area over 5 years ago. The FDA's process analytical technology (PAT) framework laid the foundation for the Quality by Design (QbD) initiatives, which seek to improve on and provide guidance for an industry mind shift towards quality by understanding and anticipating, rather than quality by testing after the fact.

 

Clearly, systems undergo changes along their life cycle, and raw materials can vary at an atomistic or molecular level. Efficiencies can therefore be gained if those changes can be followed right alongside the process, production, property and performance changes in a system. This is why a scientific information platform is so needed: because eventually these "concept-to-customer" systems must be integrated across "borders", to the product lifecycle and ERP systems layer and downstream to the computational, process and lab information feeds or sources, to accomplish meaningful gains in lower costs and time to market.

Categories: Executive Insights, Trend Watch Tags: product-lifecycle-management, business-optimization

 

The Inner Workings of the Generalized Born Model – Finally Revealed after 20 Years

Anna Tempczyk-Russell, Ph.D.

First published over 20 years ago [1], the Generalized Born model has proven to be one of the most successful approaches to incorporating solvation effects into molecular modeling calculations. To date it has 1,862 citations, and the number is still growing strongly. Most of the publications are related to implementations of the model, with a smaller number devoted to method improvements such as enhancement of the so-called Born radii. The equation proposed in the original paper has been treated as an empirical formula derived by the authors, without explanation of its physical basis. In this blog, I will reveal how I derived this popular formula. I will also disclose the calculation methods behind the Born radii. This can potentially build the basis for improvement of the current equation or even the discovery of a new one.

 

This blog, Part I, is an introduction to the Generalized Born model. I will mention quantum mechanics and the general way solvation models have been formulated for these calculations. In Part II, I will show you the steps and thought process that went into deriving the model. Finally, in Part III, I will show how the Born radii were derived and how one can modify them in a continuous manner upon conformational and geometrical variations.

 

Part I

The implicit influence of solvent on molecular states in quantum mechanics (QM) is usually described by an additional operator added to the Hamiltonian. This operator is usually defined as an electrostatic (Coulomb) expression. In molecular mechanics (MM), this operator is replaced by potential energy terms accounting for the impact of solvent on the calculated energies and geometry. The electrostatic term in MM calculations is represented by the Coulomb equation, which describes the interaction between two charged entities as a distance-dependent function, G_pol in equation (1).

 

[Equation (1) – image not reproduced]

 

What You Didn’t Know about Coulomb’s Law


[Equation (2) – image not reproduced]

 

Fig 1. Graphical representation of the Coulomb energy matrix, where the purple area indicates the half of the matrix that is taken into account in the energy calculations. The white space is the diagonal, where i = j and the distance is zero; this is not a Coulomb energy and is excluded from the calculations, but the Generalized Born equation includes it. Read on to find out how. Equation (3) describes the Coulombic interaction energy as a sum over all charged entities in the system under consideration. Z_i and Z_j stand for the charges; ε is the dielectric constant of the environment in which the charged entities are located, estimated to be 80 for aqueous solutions and 1 for the gas phase; r_ij is the distance between the charges; 332 is a coefficient that gives the energy in kcal/mol units.

 

[Equation (3) – image not reproduced]
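In the notation of the caption above, equation (3) is presumably the standard pairwise Coulomb sum (my reconstruction of the missing image, not the original graphic):

E_{Coul} = 332 \sum_{i}\sum_{j>i} \frac{Z_i Z_j}{\varepsilon\, r_{ij}} \quad \text{(kcal/mol, with charges in electron units and } r_{ij} \text{ in Å)}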

 

 

Fig. 2. Illustration of the radii changes upon interactions and solvent influence

 

The dielectric constant scales down the same gas-phase energy equation, so the difference between the energy of the charged moieties in the gas phase and their energy in solvent follows directly, as depicted in equation (4). Charge interactions described by Coulomb's law are distance dependent and grow towards infinity as the distance between the charges falls below 1; this prevents atoms from collapsing onto each other, a desired effect, and is usually enforced by the condition i ≠ j, which excludes the diagonal elements from the matrix.
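Equation (4) is not reproduced here either; assuming it is simply the difference between the solvated and gas-phase Coulomb sums described above, it would read (again, my reconstruction):

\Delta G_{pol} = 332\sum_{i}\sum_{j>i}\frac{Z_i Z_j}{\varepsilon\, r_{ij}} \;-\; 332\sum_{i}\sum_{j>i}\frac{Z_i Z_j}{r_{ij}} \;=\; -332\left(1-\frac{1}{\varepsilon}\right)\sum_{i}\sum_{j>i}\frac{Z_i Z_j}{r_{ij}}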

 

One of the limitations of Coulomb's law is the assumption that the charges are point charges described by coordinates only. In reality these are not point charges but atoms with well-defined sizes (radii), which also change due to polarization by the neighboring charges and, of course, by the solvent atoms as well. This effect can be captured by charge modification, or it can be viewed as the phenomenon known in QM where the same electron occupies a larger space when its atom is adjoined with another atom or atoms. So the interaction with solvent atoms and with atoms of the same molecule rests on the same quantum mechanical principle, Fig 2.

 

The question was how to incorporate this fact into molecular mechanics, force field based calculations.

In 1920, Born [2] observed that single ions enlarge when transferred from vacuum into a solvent, and he proposed an equation for the self-solvation of ions in solvent. Born, like Coulomb, thought of the charges as points moving inside the spheres they create in the solvent. This picture made it difficult to combine the two treatments in a single, continuous energy calculation.
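For reference (the post does not reproduce it), the Born expression for the self-solvation energy of a single ion of charge q and radius a is usually written, in the same kcal/mol units as above, as:

\Delta G_{Born} = -166\left(1-\frac{1}{\varepsilon}\right)\frac{q^2}{a}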

 

Stepwise analysis of Coulomb's law points to the fact that the Coulombic energy grows as the distance between the charges drops below 1, and at distance zero (i = j) the pair collapses into the single charge i (or j). For the diagonal elements, therefore, the Coulomb equation should become the Born equation, resulting in a full matrix that adds the diagonal elements back into Coulomb's law.

 

Thus, two things have to be developed to account for this "new Coulombic law": first, an equation that allows a smooth, continuous transition from the Coulomb law to the Born equation, putting elements on the diagonal of the Coulomb matrix; and second, a method to calculate the continuous change of the radii as the charges interact with each other under the influence of solvent.
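Part II will show where it comes from, but for readers who want to look ahead, the formula that resulted from these two requirements in the original paper [1] has the form (quoted here from memory; see the paper for the exact notation):

\Delta G_{pol} = -166\left(1-\frac{1}{\varepsilon}\right)\sum_{i}\sum_{j}\frac{q_i q_j}{f_{GB}}, \qquad f_{GB} = \left(r_{ij}^2 + \alpha_i \alpha_j\, e^{-D}\right)^{1/2}, \qquad D = \frac{r_{ij}^2}{4\,\alpha_i \alpha_j}

where the α are the Born radii and the double sum runs over all i and j, including the diagonal: when i = j, r_ij = 0 and f_GB reduces to α_i, so the diagonal terms recover the Born expression above.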

 

Stay tuned for Part II...


[1] W. Clark Still, Anna Tempczyk, Ronald C. Hawley, and Thomas Hendrickson, Semianalytical Treatment of Solvation for Molecular Mechanics and Dynamics, J. Am. Chem. Soc. 1990, 112, 6127-6129

 

[2] Born, M. (1920) Volumen und Hydratationswärme der Ionen. Z. Phys. 1, 45-48

Categories: Modeling & Simulation Tags: quantum-chemistry, molecular-mechanics, solvation-model, gbs, generalized-born-solvation-model, gb

More and more we are seeing LIMS and ELN mentioned in the same sentence. Yes, research surveys from Atrium Research have shown that many ELNs are integrated with LIMS and that many scientists with LIMS are interested in getting ELNs. This means that users of LIMS and users of ELNs see a clear difference between the two, and a clear, mutually beneficial need for both within research laboratories. It is therefore remarkable that we continue to see vendors blur the literature around the definition of an ELN.

 

This article from Lab Manager Magazine nicely clarifies the differences between LIMS and ELN. It also notes an interesting trend, with the LIMS vendors going out on a limb and entering the SaaS market (or the cloud, for those that love buzzwords). While SaaS is not new ground for your HR or banking software, the scientific industry has been more cautious about diving in, as we've discussed in detail. However, pressures on this industry are changing the game and accelerating the evolution of scientific informatics towards SaaS.

 

The article cites cost and infrastructure as the primary drivers for SaaS-based LIMS, and this is where I see another key differentiator between LIMS and the ELN. A recent survey Accelrys orchestrated to assess interest in scientific SaaS applications showed that cost and infrastructure were not the top reasons to consider a SaaS ELN. In fact, collaboration was the primary reason, followed by reducing infrastructure, and then potential cost savings.

 

The ELN is truly a collaborative tool where the ideas, components, and context of experiments can be shared between scientists, departments, and research sites. But today's virtual project teams—often spanning labs, geographical locations, services groups, and organizations—must battle networks and firewalls to share and communicate in an information-driven world. SaaS-based systems sit in a common "global" network outside the firewall, dissolving the barriers to information exchange and flow. Simply add scientists to a SaaS ELN, and the collaborative environment is restored, but now in nubis.

Categories: Cheminformatics, Electronic Lab Notebook, Bioinformatics, Data Mining & Knowledge Discovery Tags: hosted-informatics, eln, hosted-eln, lims, saas

One of the most common questions about "going electronic" with an ELN is whether the IP in an electronic record is defensible in court as proof of invention. The idea of compromising the "defensibility" of IP is enough to make all of us break out the garlic, wooden stakes, and crosses. The presentation below by Colin Sandercock, a leading patent attorney, provides an excellent review of the admissibility of electronic records in court. He even goes so far as to say that an electronic record is more defensible than its paper counterpart if it is managed correctly.

 

If you have concerns about the "defensibility" of electronic records, I'd highly recommend reviewing Colin's presentation.

Categories: Cheminformatics, Electronic Lab Notebook, Data Mining & Knowledge Discovery, Bioinformatics Tags: notebook, eln, legal, ip, protect, record, electronic

So Accelrys did it with Isentris 3.3! When I was product manager of Isentris, I lived for the day when we could add the icing on the cake to Isentris: visualization. The new Isentris 3.3, released last month, added major visualization capabilities, fulfilling the request from a community of over 19,000 Isentris and ex-ISIS users to gain not just better access to data but better visibility into data.

Watching last month’s Isentris webinar I wondered how quickly the remaining ISIS users will transition to Isentris. With the new Isentris Forms and Data Miner upgrade package from ISIS; the self-service, interactive, exploratory environment; upcoming Pipeline Pilot integration; and now the new visualization capabilities, the ISIS PL days are numbered as scientists vote with their feet and make the transition.

 

The new functionality enables scientists without admin or developer support to add charting components into their forms in a simple and intuitive way through the Isentris form designer. These charts can be used to view the distribution of results returned from a particular search or used to display the experimental results for a single record. Where previously scientists needed to push data into tools like Excel or expensive 3rd party visualization software to explore and see trends or outliers in data, now they only need Isentris 3.3.

The new capabilities improve both the way large datasets are navigated and the way they are visualized. Take the screenshot below:

[Screenshot 1: chart of a large dataset with two outlying points highlighted in red]

Two points from a large dataset stand out in red on the chart. In this experiment, scientists are examining results from a catalysis optimization.  This visualization enables scientists to quickly spot the highest average molecular weight compounds, select the data points, and confirm the conditions and the quality with real-time drill down to the experiment conditions and an FTIR spectrum for compound identity.

 

In this next example, medicinal chemists are trying to gain competitive insight into the toxicological profile of a library of commercially registered, biologically active compounds.  The visualization below makes it easy to see the least toxic compounds (denoted by high LD50 values on the y-axis) and answer the questions “Which compound is this?” and “In which organism was it tested?” by drilling into the data.
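Outside Isentris, the same kind of first-pass view is easy to mock up. The sketch below is illustrative only (made-up data and column names, not an Isentris form): it plots LD50 against compound index and flags the least toxic compounds for drill-down.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative data only: hypothetical LD50 values (mg/kg) for a compound library
rng = np.random.default_rng(1)
ld50 = rng.lognormal(mean=5, sigma=1, size=120)
compound_id = np.arange(len(ld50))

least_toxic = ld50 > np.percentile(ld50, 95)      # highest LD50 = least toxic

fig, ax = plt.subplots()
ax.scatter(compound_id[~least_toxic], ld50[~least_toxic], color="grey", s=15)
ax.scatter(compound_id[least_toxic], ld50[least_toxic], color="red", s=40,
           label="least toxic (top 5% LD50)")
ax.set_xlabel("Compound index")
ax.set_ylabel("LD50 (mg/kg)")
ax.legend()
plt.show()
```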

[Screenshot 2: plot of LD50 values for a library of compounds, with the least toxic compounds (highest LD50) standing out]

 

If you're still using ISIS, you owe it to yourself to look at the latest Isentris and make the transition. And if you're using Isentris 3.2 or earlier, download and enjoy an early festive season delight with Isentris 3.3.

Categories: Cheminformatics, Data Mining & Knowledge Discovery Tags: visualization, reporting, chemistry, biology, isentris, isis, graphing