
The UK Home Office released new statistics on animal testing in the UK last week (21 July 2009). As reported by the BBC, the figures show a strong upward jump. In 2008, 3.7 million procedures using animals were carried out in the UK, an increase of 455,000, or 14%, over 2007. Digging a bit deeper into the statistics reveals some interesting trends.

 

The rise comes almost entirely from a sharp increase in the number of procedures on fish (278,000), followed by mice (197,000).

 

[Chart: Animaltestingchart1.gif — species with increasing procedure numbers]

 

Procedures on other species, meanwhile, have remained stable or declined:

 

[Chart: Animaltestingchart2.gif — species with stable or declining procedure numbers]

 

According to the report, these increases are strongly related to growth in fundamental biological research, which in turn has a lot to do with the growing attention to, and promise of, personalized medicine and genetic targeting.

 

It's also interesting to note that, apart from breeding-related activities, medical research and pharmaceutical safety testing dominate by far. Procedures relating to ecology and to substances used, for example, in industry, agriculture and food add up to only about 100,000, or just a few percent of the total. These procedures have also declined overall; testing of substances used in industry, for example, fell by 20%. There have been no tests at all on cosmetic substances; in fact, there have been none since 1998.

 

So what’s it telling us?

 

There have been great advances over the last 10 years in making use of alternative testing methods. In vitro testing has become more established for cosmetics, and in Europe animal testing of cosmetics has either been phased out already or is on its way out very soon as a result of the 7th Amendment to the Cosmetics Directive.

 

Similar trends hold for substances used in industry, and it will be interesting to follow them in 2009 and 2010, as the majority of substance assessments for the dossiers required under the REACH legislation take place. Anecdotally, talking to folks during the recent SETAC conference, labs are getting busy, and in some cases are already completely booked with REACH-related work. However, that workload also includes massive amounts of data gathering, literature searching and analytical testing, as well as in silico methods such as QSAR and read-across. Quote: “Some people will soon get in a panic about closing the data gaps.”

 

The question is: how are organisations accessing, processing and handling this information, and making the best use of the data that is already out there in the literature? How about a web-based ‘workbench’ geared up to support toxicologists, other scientists and managers in the field in gathering, processing, sharing and reporting that information? We’ve built a proof-of-concept for anyone to take a look at and try, which includes examples of different functions, from database and document searches and predictive toxicology analysis to facility monitoring. The trick is that, because it is built on Pipeline Pilot protocols, it’s highly configurable and extensible with almost any third-party tool, so it can be built up or re-modelled to match the user’s routine practices.
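
To illustrate what “configurable and extensible” means in practice, here is a deliberately generic Python sketch of the protocol idea: a pipeline is just an ordered list of steps that each enrich a shared data record. This is not Pipeline Pilot code, and every step name below is a hypothetical stand-in for a real component.

```python
# A deliberately generic Python sketch of the "configurable protocol" idea: a
# protocol is just an ordered list of steps, each of which enriches a shared
# data record. This is NOT Pipeline Pilot code; every step below is a
# hypothetical stand-in for a real component (document search, QSAR model,
# report generator, third-party tool, ...).
from typing import Callable, Dict, List

Record = Dict[str, object]
Step = Callable[[Record], Record]

def run_protocol(record: Record, steps: List[Step]) -> Record:
    """Run the configured steps in order, passing the record along."""
    for step in steps:
        record = step(record)
    return record

def search_literature(rec: Record) -> Record:
    # Placeholder for a database/document search component.
    rec["references"] = [f"document mentioning {rec['substance']}"]
    return rec

def predict_toxicity(rec: Record) -> Record:
    # Placeholder for a predictive toxicology (e.g. QSAR) component.
    rec["prediction"] = "no model attached in this sketch"
    return rec

def build_report(rec: Record) -> Record:
    rec["report"] = f"{rec['substance']}: {len(rec['references'])} reference(s) found"
    return rec

result = run_protocol({"substance": "example compound"},
                      [search_literature, predict_toxicity, build_report])
print(result["report"])
```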

Categories: Cheminformatics, Trend Watch | Tags: qsar, personalized-medicine, animal-testing, cosmetics, reach, toxicology

[Image: Brown_JUGM.jpg — Japan User Group Meeting]

Materials Science modeling can be used to address a heap of research topics, including batteries, fuel cells, catalysts and lightweight materials, to name just a few. The Asian markets have adopted the materials modeling approach quite enthusiastically. Accelrys' Japan User Group Meeting (in Japanese, but most titles are in English; scroll down) attracted about 150 users, with 45% in materials science and the rest split between the life science and data pipelining tracks. Contrast that with the US UGM, which attracted roughly the same number of users, but with the bulk focusing on data pipelining. These guys are really into quantum mechanics.

The user presentations included research on lithium-ion batteries, photocatalysis, and solar cells. Attendees weren't just academics: scientists from Showa Denko and Mitsubishi Chemicals gave presentations, along with colleagues from Tokyo Institute of Technology, the Japan Fine Ceramics Center, and Ryukoku University. In the past few years, Japanese researchers have filed a number of patents based on the results of modeling, such as this one on lithium-ion batteries. To be sure, US researchers have done this too (here, for example), but perhaps not as recently as their Japanese counterparts. Can we directly attribute the success of products such as Toyota's hybrid vehicle, the Prius, to advanced modeling techniques? Perhaps not, but the Japanese have certainly invested heavily in this area, and they believe in the returns.

Modeling has long worked alongside experiment in materials science to increase R&D efficiency. See, for example, the Vision 2020 report on modeling. You won't hit a home run with every calculation, but the results will narrow the alternatives and let the experiments focus on the most promising leads. Modeling has been applied to a number of areas of "green chemistry" and alternative energy beyond the examples presented at the User Group Meeting. Consider these examples, which show only the tip of the iceberg:

There's a real opportunity for these methods to make the world a better place. I hope all scientists will take a look at how their research could benefit.

Categories: Materials Informatics | Tags: user-group-meeting, lithium-ion-batteries, alternative-energy, green-chemistry
The quest to find novel lead compounds is still the same, but the computational paradigms tend to shift. It is evident from recent scientific conferences and publications that fragment-based design (FBD) has become a popular method for finding novel compounds against biological targets of interest.

FBD is predominantly an experimental approach, in which research groups use well-established techniques such as NMR and X-ray crystallography to find small molecules that bind to proteins.

It wasn’t until recently that computational approaches started to take on the buzzword, or perhaps some of the computational methods were simply ahead of their time! Such is the case with an algorithm called “Multiple Copy Simultaneous Search”, or MCSS. A popular request from our Discovery Studio (DS) users was to bring this algorithm from the InsightII environment into the more user-friendly DS.

It was in 1991 that Martin Karplus and co-workers at Harvard University first published this force-field-based method and demonstrated its use in fragment-based design. Since then, MCSS has been successfully applied by several research groups to generate ideas and suggest binding modes for small molecules. Why the name MCSS? The algorithm takes a small molecule fragment, makes hundreds of copies of it, and then simultaneously minimizes them in the receptor cavity. Fragments with favorable binding energy are ranked for analysis. Efficient and clever.
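
For readers who like to see the idea in code, here is a minimal, purely illustrative Python sketch of the MCSS concept: scatter many copies of a fragment, minimize each copy independently, and rank the results by energy. It is not the Karplus or Discovery Studio implementation; the “hotspots”, box bounds and toy energy function are invented stand-ins for a real receptor force field.

```python
# Toy sketch of the MCSS concept (not the actual MCSS implementation):
# scatter many fragment copies in the binding site, minimize each one
# independently, then rank the resulting poses by energy.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "binding hotspots" standing in for favourable regions of the cavity.
HOTSPOTS = np.array([[10.0, 12.0, 8.0],
                     [14.0,  9.0, 6.0],
                     [ 7.0, 15.0, 11.0]])

def toy_energy(position):
    """Placeholder energy: squared distance to the nearest hotspot. A real MCSS
    run minimizes each fragment copy in the protein force field instead."""
    return float(np.min(np.sum((HOTSPOTS - position) ** 2, axis=1)))

def mcss_sketch(n_copies=500, n_keep=25, n_steps=200, lr=0.05):
    # 1. Scatter many copies of the fragment at random positions in the site box.
    positions = rng.uniform(low=[5, 7, 3], high=[16, 17, 13], size=(n_copies, 3))
    # 2. Minimize each copy independently (simple gradient descent on the toy energy).
    for _ in range(n_steps):
        nearest = HOTSPOTS[np.argmin(
            ((positions[:, None, :] - HOTSPOTS[None, :, :]) ** 2).sum(axis=2), axis=1)]
        positions -= lr * 2.0 * (positions - nearest)
    # 3. Rank the minimized copies by energy and keep the most favourable poses.
    energies = np.array([toy_energy(p) for p in positions])
    order = np.argsort(energies)[:n_keep]
    return positions[order], energies[order]

poses, energies = mcss_sketch()
print(f"{len(poses)} poses kept; best toy energy = {energies[0]:.4f}")
```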

So, by popular demand, we have developed an enhanced MCSS algorithm in Discovery Studio 2.5, which promises to help chemists and modelers in their search for a lead compound – one fragment at a time.
Categories: Modeling & Simulation | Tags: algorithms, computational-chemistry, discovery-studio, drug-discovery, fragment-based-design, lead-identification, scaffold-hopping
In statistical analysis, we usually try to avoid bias. But in high-throughput screening (HTS), bias may be a good thing. In fact, it may be the reason that HTS works at all.

In his In the Pipeline blog, Derek Lowe discusses a new paper from Shoichet's group at UCSF, entitled "Quantifying biogenic bias in screening libraries." The question is this: Given that the number of possible organic compounds of reasonable size approaches the number of atoms in the universe (give or take a few orders of magnitude), and that an HTS run screens "only" a million or so compounds at a time, why does HTS ever yield any leads? The short answer, as the authors show, is that HTS libraries have a strong biogenic bias. In other words, the compounds in these libraries are much more similar to metabolites and natural products than are compounds randomly selected from chemical space.

The authors used Pipeline Pilot for much of their analysis, including ECFP_4 molecular fingerprints for the similarity calculations. See the paper and Derek Lowe's blog entry for more.
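
To make the similarity idea concrete, here is a small sketch of a nearest-neighbour “biogenic similarity” calculation using the open-source RDKit toolkit. The paper’s analysis used Pipeline Pilot with ECFP_4 fingerprints; RDKit’s Morgan fingerprint with radius 2 is the commonly used open-source analogue, and the reference metabolites, bit size and example compound below are purely illustrative assumptions.

```python
# Illustrative sketch only (not the paper's actual workflow): score how
# "biogenic" a screening compound looks via its nearest-neighbour Tanimoto
# similarity to a reference set of metabolites. Morgan fingerprints with
# radius 2 are used as the open-source analogue of ECFP_4.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

metabolite_smiles = [
    "NCC(=O)O",                    # glycine
    "OC(=O)CC(O)(CC(=O)O)C(=O)O",  # citric acid
    "OCC1OC(O)C(O)C(O)C1O",        # glucose (stereochemistry omitted)
]
metabolite_fps = [
    AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
    for s in metabolite_smiles
]

def max_biogenic_similarity(smiles: str) -> float:
    """Highest Tanimoto similarity of a candidate to the metabolite set."""
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)
    return max(DataStructs.BulkTanimotoSimilarity(fp, metabolite_fps))

# A biogenically biased library shows these nearest-neighbour similarities
# shifted toward 1.0 relative to compounds picked at random from chemical space.
print(max_biogenic_similarity("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin, as an example
```
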
Categories: Cheminformatics | Tags: pipeline-pilot, high-throughput
“Multiscale” has been a buzzword for so long now that most of us must be genuinely tired of it. Nevertheless, I still find it exciting to see actual applications, where the fruits of a lot of hard work come together.

A great example I encountered last week is the work of Prof Markus Kraft and his group at Cambridge University’s Chemical Engineering Department. He was over at our Cambridge office for an Accelrys Science and Technology Seminar, talking about soot particles: the black stuff that is, of course, put to good use in dyes, and that engineers try to avoid in combustion engines.

The formation of these nanoparticles is truly a multiscale process. Kraft’s research team starts the long multiscale journey at the quantum level, using DMol3 in Materials Studio to calculate transition states for oxidation reactions of polycyclic aromatic hydrocarbons (PCAH).

This information then feeds into rate constant calculations, which in turn go into Kinetic Monte Carlo simulations (see some cool and funny examples). With KMC you can watch the PCAH structures grow. They are then analysed to provide input to a population balance model for particles at the next scale, finally feeding into engine models.
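
For the curious, here is a minimal Gillespie-style kinetic Monte Carlo sketch in Python, just to show how rate constants drive the stochastic choice of the next event and the waiting time between events. The species, events and rate constants are invented for illustration and have nothing to do with the Kraft group’s actual soot model.

```python
# Minimal Gillespie-style kinetic Monte Carlo sketch with invented species,
# events and rate constants, showing how rates turn into stochastic event
# selection and exponentially distributed time steps.
import numpy as np

rng = np.random.default_rng(0)

# Toy state: a number of reactive "growth sites" and a count of added rings.
state = {"sites": 100, "rings": 0}

def ring_addition(s):
    s["rings"] += 1          # a growth event at one site

def site_deactivation(s):
    s["sites"] -= 1          # a site is lost and can no longer react

# Hypothetical elementary events as (rate constant per site in s^-1, update).
events = [
    (1.0e3, ring_addition),
    (5.0e1, site_deactivation),
]

t, t_end = 0.0, 1.0e-2       # simulate 10 ms of toy time
while t < t_end and state["sites"] > 0:
    # Propensity of each event scales with the number of available sites.
    props = np.array([k * state["sites"] for k, _ in events])
    total = props.sum()
    # Advance the clock by an exponentially distributed waiting time...
    t += rng.exponential(1.0 / total)
    # ...then pick which event fires, weighted by its propensity.
    idx = rng.choice(len(events), p=props / total)
    events[idx][1](state)

print(f"t = {t:.3e} s, final state = {state}")
```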

You can of course read the whole story in much more detail in the Kraft group publications. The point here is that it’s a great example of how different simulation tools across the scales fit together to solve a complex engineering problem.

Developing such a multiscale toolset is what the Nanotechnology Consortium is all about. Already, its 14 members have access to a module (also tested at Markus Kraft’s lab) for determining rate constants on the basis of transition state calculations. The tool was developed by Struan Robertson, Accelrys’ Simulations group manager. Incidentally, he has just had another great publication out on the topic, “Detailed balance in multiple-well chemical reactions”, with colleagues from Sandia, Argonne, Leeds and Oxford. It’s great stuff on how to get a handle on calculating rate constants for complex reactions such as those in combustion and atmospheric chemistry.
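
As a rough illustration of the link between a transition state calculation and a rate constant, here is the textbook Eyring (transition-state theory) expression in a few lines of Python. This is the generic formula, not the Consortium module itself, and the barrier and temperature below are assumed values.

```python
# Textbook Eyring expression, k(T) = (kB*T/h) * exp(-dG_activation / (R*T)),
# showing how a barrier from a transition state calculation becomes a rate
# constant. Generic formula only; the 120 kJ/mol barrier below is assumed.
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(delta_g_activation_kj_mol: float, temperature_k: float) -> float:
    """Unimolecular rate constant (s^-1) from a free-energy barrier in kJ/mol."""
    return (KB * temperature_k / H) * math.exp(
        -delta_g_activation_kj_mol * 1e3 / (R * temperature_k)
    )

# Example: an assumed 120 kJ/mol barrier at a flame-relevant 1500 K.
print(f"k(1500 K) = {eyring_rate(120.0, 1500.0):.3e} s^-1")
```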

Transition state calculations themselves are becoming more realistic as a result of another Consortium development: hybrid QM/MM calculations with MS QMERA, based on the well-known ChemShell environment.

In many cases, a detailed understanding of reactive processes, especially at interfaces, is required. The challenge is that quantum methods can cover only a very limited range of dynamics, while forcefield methods cannot adequately describe reactions.

So we got together with Prof Frauenheim’s group at Bremen University and collaborators to integrate DFTB+ into the Materials Studio toolset.

Last but not least, of course, there is Kinetic Monte Carlo. As in the work by Kraft described above, KMC really makes the leap in scale, especially in time scale, and connects the science world to the engineering world. The Nanotech Consortium is moving forward in this field as well. Watch this space for more on Kinetic Monte Carlo in the Accelrys toolset.
Categories: Materials Informatics, Modeling & Simulation | Tags: nanotechnology, chemical-engineering, consortia, kinetics, materials-studio, multiscale-modeling, nanoparticles