
Accelrys Blog

6 Posts tagged with the image-informatics tag

Consumer packaged goods (cosmetics, cleaning products and the like) are something we can all relate to. Who doesn’t want cleaner clothes, softer skin, or shinier hair (at least those of you who have hair)? The predilection among consumers for a better look has made this a billion-dollar industry. As I wrote last year (Cosmetics Got Chemistry) following my trip to the Society of Cosmetic Chemists conference, the development of goods such as personal cleaning products has progressed from the barbaric (e.g., fatty acid salts) to the sophisticated (e.g., adding silicones to combine shampoo and conditioner).

 

The new trend is ‘cosmeceuticals’: combining cosmetics with active pharmaceutical ingredients to deliver added benefits, as in moisturizers or anti-aging creams. This was discussed in a recent article, Improve Your Product’s Image with Smarter Scientific Informatics, by my colleague Tim Moran.

 

He focuses on the use of image analysis and data integration to streamline the development of cosmeceuticals. Modelers generally consider things like molecular structure, analytical results (IR spectra, etc.), synthesis pathways and, of course, product performance. But substantiating that performance is time-consuming and often subjective: does this cream really remove wrinkles? Does another genuinely remove age spots?

 

 

The lips on the left belong to an older individual, the ones on the right to a younger one. The ones on the left are clearly more wrinkled, but by how much?

 

 

Image processing adds a new dimension to the types of data and models that R&D teams can work with, making quantifiable analysis possible far more efficiently than before. Consider the two images in this post. One set of lips clearly has more wrinkles, but how many more? How would you substantiate the difference if the results were less obvious? You could measure wrinkle lines one at a time with a ruler, slowly and tediously, or use image analysis to count the total number of wrinkles in seconds. This gives the R&D team immediate feedback, so they know when they have a better product and how much better it is. It also gives the marketing (and legal) team objective data that can be used to substantiate product claims.
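
To make the wrinkle-counting idea concrete, here is a minimal sketch in Python using scikit-image; the file name “lips.png” and the filter settings are illustrative assumptions, not a description of any particular commercial pipeline.

    # Count wrinkle-like line segments in a close-up image (a sketch).
    # "lips.png" is a hypothetical input; thresholds would need tuning.
    from skimage import io, filters, measure, morphology

    image = io.imread("lips.png", as_gray=True)

    # Enhance ridge-like structures (wrinkle lines) with a Sato filter.
    ridges = filters.sato(image, black_ridges=True)

    # Threshold, remove small specks, and count connected segments.
    mask = ridges > filters.threshold_otsu(ridges)
    mask = morphology.remove_small_objects(mask, min_size=30)
    labels = measure.label(mask)
    print(f"Detected {labels.max()} wrinkle-like segments")

The count is only as good as the segmentation, of course, but even a rough automated count turns a subjective judgment into a number that can be tracked across formulations.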

 

Measurable, quantitative, verifiable information. That’s at the heart of the scientific method, and now even unstructured data can benefit.

Categories: Materials Informatics, Trend Watch Tags: publications, image-informatics, consumer-packaged-goods

I arrived in London via Heathrow and took the express train to Paddington. What followed was an interesting, but far too inefficient, taxi ride to London City Airport, taken on the advice of an online travel forum. Fifty-some-odd pounds later (not from the chocolate cake and excessive wine at a family gathering in Vegas last weekend, but rather the British pounds that withered the muscle of my American dollars), I found myself waiting in the terminal, in conversation with a medical doctor and a systems engineer, discussing our generation’s understanding of the human genome. I was excited to hear that both the doctor and the systems engineer agreed that information technology, if applied correctly, will bring us massive systems-biology insight in a very short time as we start to look for patterns in sequencing data and link them to other scientific and clinical data.

 

Systems biology will play an important role in providing this deeper understanding. It is no longer sufficient to outfit our scientific research organizations with software built for specific cameras, microscopes, or other related hardware platforms. Software systems for chemistry, imaging, and biologics, as well as second- and even third-generation sequencing systems, will need to reach far beyond their current myopic data-processing capabilities to enable researchers to make better-qualified decisions.

 

IT experts continue to explore ways to equip their teams with best-in-class software while reducing costs, maximizing the value of existing technology, streamlining workflows, and supporting collaboration across scientific domains that have historically been unable to join forces. Best practices include solutions that easily integrate image data with a range of other scientific data types from diverse areas, including life-science research, chemistry, materials, electronics, energy, consumer packaged goods, pharmaceuticals, and aerospace. At the enterprise level, imaging solutions need to integrate with other enterprise applications: commercial and open-source imaging software, as well as enterprise data management systems and corporate portals such as Oracle and SharePoint.

Informaticians, researchers, and other decision makers need a facile solution that brings images together with associated data, such as chemistry, biological sequences, text, and numeric data, in a unified data structure. Associating images, or image regions, with specific reported data increases researcher productivity and cross-disciplinary understanding. Automating error-prone manual tasks, such as gathering images and associated data, processing, analyzing, preparing and importing data, generating reports, and distributing results, has often required custom-built applications. New solutions will need to bypass lengthy coding cycles with “on-the-fly” debugging and immediate deployment of high-quality solutions.
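
As a minimal sketch of what such a unified data structure might look like, here is a Python dataclass that ties an image (or a region of it) to chemistry, sequence, and numeric data; the field names are illustrative assumptions, not an Accelrys schema.

    # A sketch of one record in a unified image-informatics data model.
    from dataclasses import dataclass, field
    from typing import Optional, Tuple

    @dataclass
    class ImageRecord:
        image_path: str                          # raw image from any instrument
        region: Optional[Tuple[int, int, int, int]] = None  # (x, y, w, h) ROI
        smiles: Optional[str] = None             # associated chemical structure
        sequence: Optional[str] = None           # associated biological sequence
        measurements: dict = field(default_factory=dict)    # numeric results
        notes: str = ""                          # free text

    record = ImageRecord(
        image_path="well_A01.tif",
        region=(120, 80, 64, 64),
        smiles="CC(=O)Oc1ccccc1C(=O)O",
        measurements={"cell_count": 412, "mean_intensity": 0.73},
    )

The point is not the particular fields but the association itself: once images, regions, structures, and numbers live in one record, reporting and cross-disciplinary queries no longer require custom glue code for every project.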

 

Perhaps on my way from Heathrow to London City, if I had had a proper tool that let me look at the available data, giving me high-quality results with facile links to images (maps) so that I could verify I was making the best choices, I could have arrived faster and with less of a burden on my company’s budget.

Categories: Bioinformatics, Trend Watch Tags: sharepoint, microsoft, pipeline-pilot, life-science, pattern-recognition, genomics, machine-learning, image-informatics, systems-biology

BioIT 2010 – Join Us!

Posted by AccelrysTeam Apr 15, 2010
Learn how Accelrys stays at the forefront of scientific innovation by being one of the first to preview our BioIT World award-winning application, Accelrys Biologics Registration. Developed with leading pharmaceutical companies, the application was designed to address the challenges posed by the dynamic nature of biological entities.

Get a glimpse of the latest release of Pipeline Pilot and the Imaging Collection, or hear how Accelrys products are being used to address next-generation sequencing analysis challenges by attending “Pipelining Your Next Generation Sequencing Data” on Wednesday, April 21, 12:00 pm in Track 3.

Visit us at booth #301-303 to learn more about our leading scientific informatics solutions.

Accelrys is the official Twitter sponsor for BioIT World Conference & Expo ’10; follow us (#BioIT10) for your chance to win an Apple iPad.
Categories: News from Accelrys Tags: next-generation-sequencing, pipeline-pilot, conferences, biologics, image-informatics, biological-registration
Christopher Kane, of Pfizer Global Research and Development, presented his work on targeted drug delivery yesterday at Informa’s Nuclear Receptor conference in Berlin. Kane highlighted the importance of a drug’s therapeutic index: TI = efficacy/toxicity. He showed that his team of scientists has utilized folic acid (vitamin B9, which occurs naturally as folate) to carry a drug to specific cells in the body by way of the folate receptor. Kane showed how the folate-targeted nanoparticles (FTNPs) were preferentially taken up by activated macrophages, which are known to have a high level of folate receptors. Activated macrophages are concentrated in areas of disease such as lesions and atherosclerosis. Studies showed promise of decreasing systemic exposure to drugs (toxicity) while increasing targeted uptake (efficacy), both in vitro and in vivo.

Kane’s presentation highlighted the use of multiple imaging modalities in carrying out his research, including fluorescent cell microscopy (HCS), confocal microscopy, TEM, and bright-field imaging of tissue samples. His work in developing safe and effective nanoparticles for drug delivery supports the notion that multimodal imaging is advancing our understanding of biochemical interactions in vitro and in vivo. Effective and efficient integration of these disparate data sources for a holistic understanding of the research being carried out may best be supported by an image informatics platform. Moreover, this work was an excellent example of how a scientific informatics platform could enable an organization’s entire scientific community by automating and delivering relevant scientific information previously held in silos created by different acquisition modalities. Such a platform can dramatically improve collaboration, innovation, decision making, and productivity.
Categories: Trend Watch Tags: nanotechnology, pipeline-pilot, nanoparticles, image-informatics, drug-delivery, folate, nuclear-receptor, vitamin-b

A roundtable discussion took place near the close of this year’s HCA meeting in San Francisco. The topics of data analysis and management, image analysis, and computational biology were folded into a single discussion, facilitated by Karel Kozak. Participants included:

 

Karel Kozak (Swiss Fed. Institute of Technology)
Lisa Smith (Merck)
Peter Horvath (Swiss Fed. Institute of Technology)
Achim Kirsch (PE/Evotec)
Ghislain Bonamy (Novartis GNF)
Abhay Kini (GE Healthcare)
Jonathan Sexton (North Carolina Central University)
Mark Bray (Broad Institute)
Chris Wood (Stowers Institute for Medical Research)
Pierre Turpin (Molecular Devices)
Mark Collins (ThermoFisher/Cellomics)

 

The opening shot from “Schmerck” (Lisa Smith, formerly of Schering, now Merck) was fired at the vendors. The bullet in question: why have tools for pattern recognition and machine learning on image data not been addressed more rapidly in vendor systems? Vendors replied with their own question: why is this a better approach than algorithmic quantification of a known endpoint? The result of the ensuing discussion was that end-users want the ability to extract from their data any additional information that is not derived by the designed analysis algorithm, i.e., look for natural classes in the data, spot outliers, correlate to the chemical structure of test compounds, etc. This does not necessarily have to be correlated with known biological endpoints; it can be purely exploratory. Vendors said, “That’s why we need companies like Accelrys and products like Pipeline Pilot.” The marketplace needs a third-party environment that provides turnkey or almost-turnkey access to the data, and an exploratory environment like Pipeline Pilot in which users can develop methods to ask “what-if” questions of their data. When users clearly demonstrate that these techniques have merit, they will find their way into the instrument vendors’ products.
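
As a minimal sketch of that kind of exploratory, endpoint-free analysis, the following Python fragment projects per-cell image features with PCA, looks for natural classes with k-means, and flags outliers; “features.csv” and its columns are hypothetical stand-ins for a vendor’s feature table or a Pipeline Pilot output.

    # Exploratory analysis of image-derived features (a sketch).
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    features = pd.read_csv("features.csv")            # one row per cell or well
    X = StandardScaler().fit_transform(features.select_dtypes("number"))

    scores = PCA(n_components=2).fit_transform(X)     # 2-D map of the data
    km = KMeans(n_clusters=4, n_init=10).fit(scores)  # look for natural classes

    # Rows far from their cluster centre are candidate outliers.
    dist = ((scores - km.cluster_centers_[km.labels_]) ** 2).sum(axis=1) ** 0.5
    outliers = dist > dist.mean() + 2 * dist.std()
    print(features[outliers])

Nothing here requires a known biological endpoint; the structure, and the surprises, come from the image data itself.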

 

One other aspect of the above discussion that became apparent is that many, if not most, HCS users have no idea what the difference is between PCA, classification, support vector machines, genetic algorithms, self-organizing maps, etc., let alone where or when to apply these methods. What they want, and need, is a kind of wizard that walks them through determining what they want to learn from their data, and then internally selects the best method to do it. An analogy was drawn to curve-fitting programs that apply hundreds or thousands of models to a data set and tell the user which ones produced the best fit. This idea of “opening up to the wider science community methods previously available only to discipline experts,” specifically in computational biology, is by no means in its infancy (see The Future of Computational Science, Scientific Computing World, May/June 2004).
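
Under the hood, such a wizard would do something like the following sketch: try several methods on the same labelled feature table and report which fits best, so the user never has to choose between an SVM and a random forest by name. The data and the candidate list are illustrative assumptions.

    # Auto-selecting a learning method by cross-validated score (a sketch).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=300, n_features=20, random_state=0)

    candidates = {
        "support vector machine": SVC(),
        "random forest": RandomForestClassifier(random_state=0),
        "k-nearest neighbours": KNeighborsClassifier(),
    }
    for name, model in candidates.items():
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name:24s} mean accuracy = {score:.3f}")

A production wizard would search a far larger model space and validate more carefully, but the loop-and-report pattern is the same one the curve-fitting analogy describes.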

 

The momentum in machine vision (learning, clustering, modeling, predictive science, and ease of use) was foreshadowed at the HCA East conference held in 2009 and will likely continue to be the area that enables researchers in high content screening and analysis to make better-informed decisions earlier in the discovery process.

 

Special thanks to contributing author Kurt Scudder.

Categories: Bioinformatics, Data Mining & Knowledge Discovery, Trend Watch Tags: clustering, machine-learning, high-content-screening, predictive-science, computational-biology, image-informatics, exploratory-analysis, self-organizing-maps, support-vector-machines

Machine learning continued as a growing theme at this year's HCA conference.

 

The first HCA East conference, held in Boston this September, showed the promise of the increasing use of machine-vision tools. These tools are making their way into the hands of biologists for everything from subcellular classification and pattern recognition to predicting mechanism of action from multivariate image output. The theme continues to grow and will be a major focus at the upcoming HCA 2010 conference in January, as evidenced by numerous talks on the subject. Mark-Anthony Bray, Ph.D., Computational Biologist, Imaging Platform, Broad Institute, will talk on quantifying image-based phenotypes with machine learning algorithms. Peter Horvath, Ph.D., Image Processing Scientist, Light Microscopy Centre, ETH Zurich, will discuss machine intelligence both for classification and for quality control. Pattern recognition will be applied to image-based small-molecule screening data by John McLaughlin, Ph.D., Scientist & Manager, Biology, Rigel Pharmaceuticals, Inc. Numerous other talks, by Accelrys, Novartis, and Carnegie Mellon, to name a few, will also have recurring themes of learning. I can’t help but wonder whether the growth in this area is driven primarily by need, or whether adoption has been accelerated by the growing number of informaticians working alongside high content screening biologists.

 

For some good background on machine learning, be sure to follow Dana Honeycutt’s blog postings; here’s a link to get you started: Good Models Require Good Data (October 1, 2009), by Dana Honeycutt, Ph.D.

Categories: Trend Watch Tags: conferences, pattern-recognition, machine-learning, high-content-screening, image-informatics, subcellular-classification