I’ve been in the Enterprise software business for 15 years, working in various IT roles including server operations, application development, and hosting.  Over the years I’ve dealt with many “Enterprise” applications, including those making the leap from a workgroup server to an Enterprise-class product.  One such product was Microsoft Windows NT and its push to become an Enterprise Network Operating System (NOS) with file server, application platform, and directory services.  This was back in the day when Novell NDS and Banyan VINES had a firm grip on the market.  So when Microsoft entered this market, not only did they have to deal with an entrenched (yet hardly saturated) market, they also had to answer the “Are you Enterprise ready?” question (and yes, Linux also fits into this camp).

Now our situation with Pipeline Pilot isn’t exactly the same, but on occasion we do get that question.  The answer has two parts.  First, YES, we are an Enterprise-ready application platform.  Second, since “enterprise” can mean different things to different people, we need to look at the business problem you’re trying to solve.  That might sound like a bit of an “it depends” answer, but, for example, we wouldn’t claim that Pipeline Pilot is a Financial Enterprise solution.

While Pipeline Pilot is great personal productivity and project-team collaboration software for scientists, its Enterprise capabilities can be overlooked. I’ll spare you the typical sales pitch on the product for now (look here).  That said, Enterprise solutions typically solve different kinds of problems, including aggregating the views and challenges of a collection of project teams and individuals, and solving the many back-end problems (e.g. scientific data management and searching) that add value for your individual users, groups, and even executives with dashboards.  Also typical in an Enterprise is a variety of platforms (server, desktop, mobile), data (vendors, formats, and types), processes (human and back-end), and more.  While Corporate IT and Research IT always have ongoing projects to consolidate, merge, upgrade, and deploy platforms and applications, rip-and-replace solutions aren’t always the best way to quickly add value to an organization.

Pipeline Pilot certainly has many, if not all, of the pieces to create and support a science informatics platform, but a CIO, Enterprise Architect, or R&D IT developer should look at their own and their users’ scientific business problems and leverage the areas of Pipeline Pilot (e.g. Web Services, data pipelining, scientific data ETL) that solve those challenges.  You’ll quickly find that by leveraging Pipeline Pilot within your Enterprise projects you’ll reduce your time to delivery, deliver high-value results, and increase the productivity of your existing software development and scientific staff.

While Windows NT had its painful years for both Microsoft and its customers, over time Microsoft answered the growing call for more Enterprise-type features.  Pipeline Pilot has been delivering Enterprise solutions for customers for several years now, and we too continue to answer the call for deeper and richer solutions for our customers.  Get ready, because we have LOTS more to share very soon regarding our latest release and the future of Pipeline Pilot.
Categories: Lab Operations & Workflows Tags: pipeline-pilot, data-management, data-pipelining, enterprise-informatics

“Chemistry for a Sustainable World” was the theme of the ACS spring meeting. My own interests, of course, are in modeling, and more recently in high-throughput computation. The Monday morning session of CINF was devoted to the application of high-throughput computation to electronic and optical materials. The presentations included both methodological approaches to materials discovery and applications to specific materials, with a focus on materials for alternative energy.


Steven Kaye of Wildcat Discovery Technologies (located only about 4 km from Accelrys) talked about high-throughput experimental approaches for applications such as batteries, hydrogen storage, carbon capture, and more. The company can screen thousands of materials a week and produce them in gram or kilogram quantities for scale-up. Impressive work.

 

Alan Aspuru-Guzik of Harvard’s Clean Energy Project talked about the search for improved photovoltaics. He uses a combination of CHARMm and QCHEM software to help look for the best possible molecules for organic photovoltaics that could provide inexpensive solar cells, for the polymer membranes used in fuel cells for electricity generation, and for how best to assemble those molecules into devices.


Uniquely, the project uses the IBM World Community Grid to sort through the myriad candidate materials and perform these long, tedious calculations. You may remember this approach from SETI@home, which was among the first projects to try it. It gives everyone the chance to contribute to these research projects: you download software from the project site, and it runs on your home computer like a screensaver; when your machine is idle, it’s contributing to the project. Prof. Aspuru-Guzik said that some folks are so enthusiastic that they actually purchase computers just to contribute resources to the project. It’s great to see this sort of commitment from folks who can probably never be individually recognized for their efforts.

 

I don’t want to make this blog too long, so great talks by Prof. Krishna Rajan (Iowa State), Prof. Geoffrey Hutchison (U of Pittsburgh), and Dr. Berend Rinderspacher (Army Research Labs) will be covered in the next blog.

 

I was also quite happy to see that some of the themes in my presentation were echoed by the other speakers – so I’m not out in left field after all! I’ll blog about my own talk later on, but here’s a quick summary: like Prof. Aspuru-Guzik’s work, we used high-throughput computation to explore new materials, but we were searching for improved Li-ion battery electrolytes. We developed a combinatorial library of leads, set up automated computations using Pipeline Pilot and semiempirical VAMP calculations, and examined the results for the best leads. Stay tuned for a detailed explanation and a link to the slides.

 

And keep an eye out, too, for the 2nd part of Michael Doyle's blog on Sustainability.

Categories: Materials Informatics, Modeling & Simulation Tags: materials, conferences, atomic-scale-modeling, high-throughput, lithium-ion-batteries, alternative-energy, virtual-screening, clean-energy, photovoltaics
Last week Accelrys participated in CHI’s XGEN congress in San Diego, which focused on the current state and future of next-generation sequencing (NGS) technologies. Anyone familiar with the Life Sciences can tell you this is a fast-moving field. We are seeing dramatic advances in sequencing technology increasingly matched by new applications of the data to important problems in medicine and biology. A good indication of the growing significance of this area could be seen not only in the diversity of the attendees from academia and industry, but also in the growing number of vendors providing new technologies and services in this space.

Five sessions covered areas ranging from sequence data storage and analysis to applications of the technology to understanding copy number variation and epigenetics. A recurring theme was the growing volume of new data being generated and the evolving strategies for managing it. The amount of data being generated isn’t news, but a notable theme of many of the talks at this meeting was how the complexity of these data is increasing. There is a range of reasons for this. In some areas, like microbial and plant genomics, the variety of species for which genomes have been determined has grown significantly. Also, the availability of new RNA sequencing technologies is providing unprecedented access to transcriptomic data, which in turn gives novel insights into the functioning of the genome. And in human genomics there are increasingly rich data sets that sample not only multiple individuals, but multiple samples from each individual. A particularly important example of this is cancer genomics, where comparison of tumor-derived genomes with the genomes of healthy cells is providing exciting insights into cancer aetiology.

An impressive illustration of the progress being made came in a talk by Steve Lincoln of Complete Genomics. Complete Genomics provides commercial sequencing services and is working with industry and individuals to sequence genomes from a variety of sources. They have been rapidly scaling up their operations, currently sequencing 50 genomes a month, and plan to have the capacity to sequence thousands of genomes later this year. The basic assembly and annotation of these genomes is a challenge in itself. But once you begin to take into account the amount of additional structural variation encountered in these data sets (e.g. indels, chromosomal rearrangements) and the deficiencies of the existing reference genomes, the challenges of performing comparative analyses of these data are brought into striking focus.

The Accelrys team presented a poster detailing the latest progress on our soon-to-be-released Next Generation Sequencing component collection for Pipeline Pilot (“Pipelining Your Next Generation Workflows,” Nancy Miller Latimer et al.). The poster showed how, besides providing an application integration platform, the Pipeline Pilot NGS collection provides a flexible solution to the data management problem that is so central to making sense of the new data being generated. Right now the field is in transition from a period where sequence determination was the bottleneck to one where sequence analysis is the block to progress, and new computational approaches will help dictate the direction of the field.
Categories: Bioinformatics Tags: next-generation-sequencing, pipeline-pilot, life-science, genomics

Free Workshop at ACS

Posted by AccelrysTeam Mar 23, 2010
Attention ACS attendees! Russ Hillard will be showing reaction planning on the DiscoveryGate platform today (March 23) at noon PDT. The workshop will provide a sneak peek at the soon-to-be-released new DiscoveryGate application, which includes access to high-quality, curated reactions and chemical catalogs. Russ will also demonstrate the unique Isentris Reaction Planner for inspecting synthesis routes drawn from over 5.6 million known transformations. The free workshop will be held in Moscone Center, Room 110, from noon to 2:30 p.m.
Categories: Scientific Databases Tags: conferences, demos

Pipeline Pilot Test Automation

Posted by cagramont Mar 22, 2010

A common request from administrators of Pipeline Pilot Server is for a way to validate protocols when migrating from one server to another, for example when moving to a new version of Pipeline Pilot or upgrading your hardware.  There are numerous reasons why you’d want to move servers, or just a collection of protocols (e.g. isolating protocols for better performance), and the great news is that there IS a tool for this!  Before I give that nugget away, let’s talk about the process of Regression Testing.

 

The process of Regression Testing enables the designer of a protocol to define the best way to validate the usage of their protocol.  When using Pipeline Pilot to solve some interesting scientific task, it’s easy for “you” to see whether it’s working or not.  But once you share it with others on your project team or in your enterprise, you won’t be there to validate it each time an administrator moves it to another server for better performance, migrates to a new version of Pipeline Pilot, or recovers a server after a failure of some sort (e.g. hardware, building fire, theft, etc.).  Thus, you’ll want a way for the administrator to validate your protocol without “you” having to be there.  And even if it’s just “you” doing the protocol design, do you really want to manually validate possibly hundreds of protocols?

 

So you can probably tell I’m saying that building a regression test is critical for all protocols that are “complete,” regardless of whether a protocol is just for you or shared with others.  If nothing else, it will keep your Pipeline Pilot administrator sane and reduce the time you spend maintaining the protocol in the future.

 

Want to learn more about Regression Testing (please say yes!)?  Then look no further than the existing documentation.  There’s a PDF in the Pipeline Pilot portal called the “Regression Test Guide” (regression_test.pdf).  The actual application is called “regress.exe” and can be found in the <installation folder-typically C:\Program Files\Accelrys>\PPS\bin folder.  It’s all command-line based, so executing tests in batch is pretty straightforward.
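Because it’s command-line based, you can wrap regress.exe in a small script and run a whole folder of tests after a migration. Here’s a minimal sketch in Python; the folder layout, the test-file extension, and the exact arguments passed to regress.exe are placeholders (my assumptions), so check the Regression Test Guide for the actual switches:

```python
import subprocess
from pathlib import Path

# Path to the regression tool shipped with Pipeline Pilot (adjust to your install).
REGRESS_EXE = r"C:\Program Files\Accelrys\PPS\bin\regress.exe"

# Hypothetical layout: one test definition per protocol, as described in the
# Regression Test Guide. The extension and arguments below are placeholders.
TEST_DIR = Path(r"C:\PP_RegressionTests")

def run_all_tests():
    """Run every test definition and collect the failures."""
    failures = []
    for test_file in sorted(TEST_DIR.glob("*.xml")):  # assumed test-definition extension
        # Placeholder invocation; substitute the real command-line switches from the guide.
        result = subprocess.run([REGRESS_EXE, str(test_file)],
                                capture_output=True, text=True)
        if result.returncode != 0:
            failures.append((test_file.name, result.stdout + result.stderr))
    return failures

if __name__ == "__main__":
    failed = run_all_tests()
    print(f"{len(failed)} test(s) failed")
    for name, output in failed:
        print(f"--- {name} ---\n{output}")
```

A wrapper like this drops easily into a scheduled task or a post-migration checklist, which is exactly where the future posts on embedding the tool in your development process will pick up.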

 

In future posts, we’ll discuss some best practices in using the tool and embedding it within your development process and IT operations.  If you’ve used the tool before, please share how useful (or not) the tool is and what you’d like to see improved.  (Comment within the blog or share in our forums: https://community.accelrys.com)

Categories: The IT Perspective Tags: pipeline-pilot, regression-testing
Christopher Kane of Pfizer Global Research and Development presented his work on targeted drug delivery yesterday at Informa's Nuclear Receptor conference in Berlin.  Kane highlighted the importance of a drug's therapeutic index: TI = efficacy/toxicity.  He showed that his team of scientists has utilized folic acid (vitamin B9, which occurs naturally as folate) to carry a drug to specific cells in the body by way of the folate receptor.  Kane showed how folate-targeted nanoparticles (FTNPs) were preferentially taken up by activated macrophages, which are known to have a high level of folate receptors.  Activated macrophages are concentrated in areas of disease such as lesions and atherosclerosis. Studies showed promise of decreasing systemic exposure to the drug (toxicity) while increasing targeted uptake (efficacy), both in vitro and in vivo.

Kane's presentation highlighted the use of multiple imaging modalities in carrying out his research, including fluorescent cell microscopy (HCS), confocal microscopy, TEM, and bright field on tissue samples. His work in developing safe and effective nanoparticles for drug delivery supports the notion that multi-modal imaging is advancing our understanding of biochemical interactions in vitro and in vivo. Effective and efficient integration of these disparate data sources, for a holistic understanding of the research being carried out, may best be supported by an image informatics platform. Moreover, this work was an excellent example of how a scientific informatics platform could enable an organization’s entire scientific community by automating and delivering relevant scientific information previously held in silos created by the different acquisition modalities. Such a platform can dramatically improve collaboration, innovation, decision making, and productivity.
Categories: Trend Watch Tags: nanotechnology, pipeline-pilot, nanoparticles, image-informatics, drug-delivery, folate, nuclear-receptor, vitamin-b
Offering insight from the perspective of a user, Accelrys is pleased to host a post written by guest blogger Matthew Srnec, who will be presenting a poster of his work during the American Chemical Society (ACS) conference next week in San Francisco.

"As an undergraduate in chemistry, I was very fortunate to participate in a Research Experience for Undergraduates at Duquesne University in the summer of 2009.  Under the advising of Dr. Jennifer Aitken of the Department of Chemistry and Biochemistry, a computational research project was outlined for me on various materials that Dr. Aitken and her graduate students were studying.  I was introduced to Accelrys' DMol3 module in Materials Studio and was instructed to learn the intricacies of this program and apply it to the diamond-like semiconductors that Dr. Aitken's lab was interested in.  Throughout my ten weeks at Duquesne, I found Accelrys' DMol3 program extremely user friendly.  Its "help" feature aided immensely in performing the desired calculations, and positive feedback was provided by the helpdesk when I came across any problems with my calculations.  At the conclusion of my research experience, tremendous progress had been made using the DMol3 module on diamond-like semiconductor calculations.

The success of these calculations has given me the opportunity to present the aforementioned research at the American Chemical Society Conference in San Francisco on March 22nd, 2010.  This presentation will take place during the Undergraduate Research Poster Session in the Division of Chemical Education.  I encourage anyone interested in learning more about this research and my experiences with Accelrys' DMol3 module to visit my poster (#596) between 12:00-3:00 PM for an overview of my work.  I am delighted to have this opportunity and look forward to seeing you in San Francisco!"
Categories: Cheminformatics, Materials Informatics Tags: dmol3, materials-studio, semiconductors

Sustainability: Part 1

Posted by mdoyle Mar 16, 2010
Recently I have been flying a lot; it's sort of an occupational hazard of being a field scientist during the winter months. Anyway, on one of the flights I was reading an article about the Exxon initiative in green-algae-based bio-plastics and feedstocks. This brought my mind back to the concept of sustainability and its myriad facets in the chemical, pharmaceutical, and materials sciences: sustainability in terms of energy usage and cost of synthesis; sustainability in terms of sourcing, raw materials, and bio-processing or germline design and breeding; even sustainability in terms of our education. These facets fall into a common framework of technology and planning for the future, of using some of the perception and benefits of technology to guide our steps forward.

I remembered my initial feelings as a young scientist working in a refinery. I was overawed by the complexity and size of the cracking, reforming, fractionating, stripping, and converting columns and reactors in the refinery and plant. I was there setting up a modelling and near-remote-sensing (infra-red) process control system to help optimize plant throughput and quality using chemical and process optimization, but that is another story.

Anyway, one day after climbing up and down the towers, I was discussing the plant with my manager and he asked me a question: “What is the most interesting or amazing thing about a chemical plant?”  I replied with something about the size, intricacy, and complexity. He said no; in his view it was the fact that you could never see, touch, or smell the product unless there was a vent-over or some sort of failure, a “bad thing,” in the process. He continued the comparison: think of a car production line, where you can see the product as it moves down the line with new wheels and engines being attached. In a chemical plant you cannot see anything of the complex changes that are performed on the materials as they flow through the reactors, stripping columns, and condensers. In hindsight, this is a green, or sustainable, observation.  My view a few years on is that the chemical industry (and yes, there have been a few mistakes) is an amazing industry in which very little of the product is exposed to the environment and all the complex and myriad transformations are performed in carefully protected containers. In terms of efficiency, complexity, and microscale materials engineering, this is a stunning achievement. Other aspects of sustainability are the fate of the products that come out of these processes, the use of precious energy and fossil fuel reserves in these processes, and the impact these materials have on people and their lives.

It is also interesting to note that one of the larger areas of sustainability research is in the pharmaceutical arena. Here there is significant research and development activity in the area of sustainable synthesis, which we will explore in Part 2 of this post.  Stay tuned!
Categories: Materials Informatics, Trend Watch Tags: materials, pharmaceuticals, green-chemistry, chemicals, refinery, sustainability
Join us at the ACS Spring 2010 Conference, held at Moscone Center in San Francisco on March 21-25, 2010.  At booth #1008, Accelrys will be showcasing the latest developments in Pipeline Pilot, Discovery Studio and Materials Studio.

We have a variety of talks, workshops and posters planned for the conference, including:

TALKS
* High-throughput quantum chemistry and virtual screening for materials solutions
* CAESAR II: The combination of direct geometry method and CAESAR algorithm for super fast conformational search
* Fast and accurate computational approach to protein ionization: Combining the generalized Born model with an iterative mobile cluster method

WORKSHOPS
* High-Throughput Computational Methods for Materials Discovery and Optimization
* Staying Ahead Of Your Medicinal Chemistry Project Data

POSTER
* Electronic structure calculations of diamond-like semiconductors

For more information, and to register for the workshops, go to http://accelrys.com/events/conferences/conference-pages/acs-spring-2010.html

If you'd like to learn about the ACS conference on Twitter and see Accelrys' live tweets, follow #ACS_SF.

 We look forward to seeing you in San Francisco!
Categories: News from Accelrys Tags: materials, pipeline-pilot, conferences, high-throughput, materials-studio, discovery-studio, proteins, virtual-screening, caesar, medicinal-chemistry

In the 21st century, materials and energy are more topical than ever before. Insights at the atomistic and quantum level help us to design cleaner energy sources, and find less wasteful ways of using energy. Join us on March 16th as Dr. George Fitzgerald presents "High-throughput Quantum Chemistry and Virtual Screening for Lithium Ion Battery Electrolyte Materials."

Register to learn:

  • How modeling can support the discovery of components to enhance the performance of lithium ion battery formulations
  • How to use Materials Studio components in Pipeline Pilot to analyze and screen a materials structure library for Li-Ion battery additives
  • Results from a collaboration with Mitsubishi Chemical Inc., also published in the Journal of Power Sources

 

This presentation is part of our ongoing webinar series that showcases how Accelrys products and services are transforming materials research. You can download related archived presentations in this series or register for future webinars.

 

We look forward to sharing our insights with you throughout this webinar series.

Categories: Materials Informatics Tags: materials, pipeline-pilot, high-throughput, lithium-ion-batteries, quantum-chemistry, virtual-screening

Conference review: NFAIS

Posted by Carmen.Nitsche Mar 12, 2010

I recently returned from the National Federation of Advanced Information Services (NFAIS) conference, a meeting devoted to discussing the role of information and content in business and research, and would like to share with you my impressions of the conference and a summary of my talk.

 

The theme of this year’s meeting was Redefining the Value of Information—Exploring the New Equation. Most of the attendees and speakers were vendors—creators of content or content-supporting technologies—though there were some representatives from university and industry information centers too. A full agenda and links to slides are available.

 

My talk was titled Does Volume Really Equal Value? I reviewed the buying behaviors of corporate and academic institutions, which have dramatically shifted from a “buy for a rainy day” mode to buying to, well, get by. The reality today is that scientists have little patience for hard-to-use systems. For just about any other type of information, they can Google and find immediate answers. So why does scientific information require them to navigate complicated interfaces—and pay for that “privilege”?

 

I offered vendors two ways to deliver value. We can practice agile development to involve scientists more directly in developing the solutions we offer. We can also provide flexibility through “containerless” content—service-oriented systems that provide content in context with common workflows such as experiment planning and sourcing.  I concluded with some thoughts on the business models we content providers need to explore given the challenges of the economy and the current “Frenzy of Free.” The positive comments I received afterwards suggested that I hit a resonant chord.

 

I found it interesting that despite all the discussions and talks centered on ROI and value propositions, no one has really cracked the nut on how to overcome the overwhelming economic and user behavioral changes facing our industry. I would argue that as an information community we did a poor job in the past of communicating quality and value and educating users to be discerning consumers of content, which is now exacerbating the current challenges facing our industry.

 

The general consensus among attendees was that end-user driven knowledge gathering is here to stay, ease and speed are prerequisites, and vendors need to experiment with social networks and media, mobile technologies, and other mechanisms to create some sort of alliance with “free.” I’d be curious what our blog readers think. How are you using content today, and what can vendors do to help you use it more productively? And what are you really willing to pay for?

Categories: Scientific Databases Tags: conferences, content-in-context

In the first part of this series, we discussed the basic collection of cloud offerings and the type of value they provide to IT, developers, and customers.  The second part explained some of the business issues that arise when leveraging the cloud from within your Enterprise environment.  In this post, we’ll focus on the various service models associated with the Cloud.

 

Even within the ASP model, there will be a range of providers.  Let’s take Accelrys Pipeline Pilot (PP) as an example.  It’s a product that provides rich data-flow capabilities and specializes in scientific computing.   Today, most customers deploy PP on-premise, managed either by the Research & Development (R&D) Information Technology (IT) department or by a group of scientists.  PP makes using and managing the platform in either of these scenarios extremely easy, yet powerfully scalable.  But regardless of how easy PP is to manage, there are other concerns that come with managing any platform or application, including maintenance, backup and recovery, security, data management, etc.  Not to mention supporting an ever-growing user base also looking to leverage Pipeline Pilot.  All of this can take time away from your main business driver: science!

 

Taking the step of moving your basic deployment into the “Cloud,” such as Amazon Web Services (AWS), is an interesting first step.  Now you don’t have to worry about the operating system and everything underneath it (e.g. hardware, cooling, power, etc.), but you’re still left with everything else.  This is where Application Service Providers (ASPs) come into play.  An ASP can come in different packages.  For one, the ASP could actually be a group internal to your business. OK, so they’re not “really” an ASP, but they could function as one: they provide the service for a given cost and they’re not directly tied to your organization.  Hey, could this be Corporate IT?  Sure, or perhaps another scientific group within your business offering their investment to another team and cross-charging to offset the costs, and, by the way, doing this in the cloud to remove the burden and cost of managing it from IT.  Perhaps this scenario has too many moving parts for your fancy.  I’ll move on.

 

A more traditional ASP manages the application and perhaps even provides application-level support.  Taking Pipeline Pilot as an example again, application support really comes in two flavors.  The first is supporting the application platform and tools themselves; for instance, when you’re writing a protocol (a set of tasks in a data pipeline) or running an application built on PP.  The other is focused more on the science itself and relating it to the product.  While there may be many providers who could help with the PP platform, infrastructure, and even the tools, it’s a big leap to also support the science.  The key for you is this: when shopping for a cloud vendor or ASP, look at the breadth of services you’ll get from them and anticipate your need for science, application, and infrastructure support.  Not to mention the different cloud infrastructure required when a Message Passing Interface (MPI) setup is involved (more on that later).

 

If you’re at any stage of interest, planning, evaluating, or deploying Accelrys products or other scientific applications in the Cloud, we’d love to hear from you!  As the leading provider of Scientific Informatics Solutions, we’re interested in supporting our customers no matter where their environment is – at home or in the cloud.  Visit our forums to continue the discussion: https://community.accelrys.com

 

To view all Conrad’s Cloud Series posts, please click here.

Categories: The IT Perspective Tags: pipeline-pilot, cloud-computing, application-service-providers
It was indeed very pleasant to visit the Stanford campus last week; I had a chance to see familiar faces, as well as new ones, amongst the attendees at the workshop “Bridging the Gap Between Theory and Experiment: Which Theoretical Approaches Are Best Suited To Solve Real Problems In Nanotechnology and Biology.”

There were several invited talks on semiconductors and catalyst nanoparticles, apart from my own talk on alternative energy.  Many of the speakers discussed the suitability of a particular simulation approach for the study of specific applications, while others discussed the most recent state-of-the-art theoretical advances for tackling real problems across several timescales.  It is particularly challenging when simulations are to be used not just for gaining insight into a system but as a predictive tool for virtual screening.  While virtual screening is a well-established art in the world of small-molecule drug discovery, it is only now gaining traction in the materials world.

For further insight into virtual screening in materials, check out George Fitzgerald's webinar on High-throughput Quantum Chemistry and Virtual Screening for Lithium Ion Battery Electrolyte Materials, on March 16.
Categories: Materials Informatics, Trend Watch Tags: biology, catalysis, materials, nanotechnology, high-throughput, alternative-energy, semiconductors, virtual-screening

How many modelers?

Posted by AccelrysTeam Mar 8, 2010

How many of “us” are out there? I mean, how many people are doing modeling and simulation? I’d really like to know, ideally broken down by discipline, such as Materials Science vs. Life Science, and by method: quantum, classical, and mesoscale.

 

Alas, there are precious few statistics on that, so when I read in the Monthly Update (Feb 2010) of the Psi-k network that they had conducted a study on the size of the ab initio simulation community, it got my immediate attention.

 

Representing a network of people from the quantum mechanics field, Peter Dederichs, Volker Heine, and colleagues Phivos Mavropoulos and Dirk Tunger from Research Center Jülich searched publications by keywords such as ‘ab initio’ and made sure not to double-count authors. In fact, they tend to underestimate by assuming that people with the same surname and first initial are the same person. As Prof. Dederichs, the chair of the network, tells me, checks were also made to ensure that papers from completely different fields were not included. They also estimate that their keyword range underestimates the number of papers by about 10%. Of course, there are those who didn’t publish a paper in 2008, the year for which the study was done. Moreover, Dederichs says, there are those who published papers without keywords like ‘ab initio’ or ‘first principles’ in the abstract or title, so they are not found in the search. All of that is likely to compensate for counting co-authors who are not actually modelers.

 

All in all, they come up with about 23,000 people! And the number of publications in the field indicates a linear rise year on year.

 

That’s quite a lot more than they expected, and I agree. The global distribution was also surprising, with about 11,000 in Europe, about 5,600 in America, and 5,700 in East Asia (China, Japan, Korea, Taiwan and Singapore). That’s a lot of QM people, especially here in Europe. Now, I guess there will be a response from the US on that one?

 

 

I wonder how many classical modelers there are. I’d hazard a guess that the number of classical modelers is about half that of the QM community, at least in the Materials Science field. Assuming that the mesoscale modeling community is quite small, that would make for a total of at least 30,000 modelers worldwide.
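Spelling out that back-of-envelope estimate (a rough sketch only; the regional figures are from the Psi-k study above, while the 50% ratio for classical modelers and the size of the mesoscale community are purely my own assumptions):

```python
# Figures from the Psi-k study quoted above
qm = 11_000 + 5_600 + 5_700   # Europe + America + East Asia ~ 22,300 ("about 23,000")

# Assumptions, not from the study
classical = qm // 2           # guess: roughly half the size of the QM community
mesoscale = 1_000             # guess: "quite small"

print(qm + classical + mesoscale)  # ~ 34,450, hence "at least 30,000 modelers worldwide"
```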

 

What is your view, or informed opinion? Does anybody else know about, or has anybody done, studies on this? I am going to open a poll in the right sidebar on the number of people involved in quantum, classical, and mesoscale modeling in total. It would be great to hear how you came up with your selection, too.

Categories: Modeling & Simulation Tags: atomic-scale-modeling, quantum-mechanics, computational-chemistry, ab-initio-modeling

My guest author today is Dr. Wendy Warr, who has been working in the research informatics space since its inception. In 1992 she founded a consultancy offering competitive intelligence studies and reporting to a range of clients in our industry. Wendy recently authored a paper on cloud computing; here she provides a few takeaways from that write-up. Find more expert opinion from Wendy Warr at her website, The Warr Zone.


There are lots of disagreements regarding cloud computing (Wendy says). People argue about everything from the definitions to its applicability for various projects to whether it is secure and viable. Here are some of the things I learned while researching this topic:


  • There are three basic types of cloud computing. Infrastructure as a Service (IaaS) delivers hardware space (grids, CPU, virtualized servers, storage, or systems software) as a service; the best known example is Amazon’s Elastic Compute Cloud (EC2). Platform as a Service (PaaS) provides virtualized servers on which users can run applications or develop new ones; well-known examples are Microsoft’s Azure and Salesforce’s Force.com. Software as a Service (SaaS) is software that is developed and hosted by a vendor and accessed over the Internet on a subscription basis by customers; well-known examples are Salesforce.com, and Google’s Gmail and apps like Google Docs.
  • Big advantages of working in the cloud are elasticity (the ability to add or subtract apps at a moment’s notice); paying as you go; less IT overhead; easier deployments; and more effective collaboration. Concerns are security; control over data; what constitutes “standards” and “best practices;” and latency and performance.
  • Pharma is concerned most about security: what happens to data that are currently stored on in-house systems behind an organization’s own protective firewalls, and how to police researchers’ access to the cloud (particularly in collaborative situations with offsite scientists or a contract research organization).
  • Despite these concerns, pharma is already working in the cloud. Cloud computing is especially handy for speedy answers to complex calculations. One unnamed company was responsible for the initial query that resulted in CycleCloud BLAST, a publicly available service.  And Pfizer, Eli Lilly, Johnson & Johnson, and Genentech have experimented with cloud computing, mainly for managing large computational tasks in bioinformatics and gene sequencing.

My article provides specific information on some of the cloud-based apps currently being developed by vendors and how vendors such as Symyx, CambridgeSoft, and ChemAxon position themselves in relation to cloud computing and hosted offerings. I urge you to download it for more information and comment here with any questions for me or Symyx.

 

[Editor's note: In July 2010 Symyx merged with Accelrys, Inc.]

Categories: The IT Perspective Tags: hosted-informatics, cloud-computing, guest-authors

This entry is authored by Steven Bachrach, chair of the Department of Chemistry at Trinity University in San Antonio, TX, and author of Computational Organic Chemistry, a blog supporting the textbook of the same name. Steven’s first post outlined his need for a chemical drawing package that he could use for blog production. This post provides more detail on his use of Symyx Draw.

 

[Editor's note: In July 2010 Symyx merged with Accelrys, Inc. Symyx Draw is now Accelrys Draw.]

 

I have been extraordinarily pleased with Symyx Draw as my chemistry blogging tool (says Bachrach). It is intuitive to use (with some very minor differences in operation from other drawing programs) and creates excellent-quality images for use in the blog or in manuscripts. I could not sustain the workflow of the blog without its assistance!

 

The “Generate Text from Structure” option from the “Chemistry” menu signals the program to create the InChI or InChIKey, the IUPAC name or the SMILES string. The InChI settings can be customized, and the default is the Standard InChI representation. I have yet to encounter any compound for which Symyx Draw could not generate an appropriate InChI or InChIKey—and I’ve discussed some rather unusual compounds.

 

On the other hand, while I have generally found the naming function quite good, Symyx Draw has been unable to generate a name for a number of molecules. These are typically polycyclics. An example from the blog is the compound below. Symyx Draw was able to generate its InChI and InChIKey but not its IUPAC name.


[Structure image: 3ring.gif]
InChI=1/C30H46/c1-5-23-13-15-24(16-14-23)7-4-12-30-22-28-10-2-6-25-17-19-26(20-18-25)8-3-11-29(30)21-27(28)9-1/h13,15,17,22-24,26-27,29-30H,1-12,14,16,18-21H2/t23-,24+,26?,27+,29?,30+/m0/s1
InChIKey=XGHWWYQBRPEHRU-QMVGZNFYBT
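As an aside, InChIs and InChIKeys like the ones above can also be generated programmatically. Here is a minimal sketch using the open-source RDKit toolkit rather than Symyx Draw (so it is not part of the workflow Bachrach describes); the caffeine SMILES is just an arbitrary example input:

```python
from rdkit import Chem

# Build a molecule from SMILES; Chem.MolFromMolFile would work for a Molfile instead.
mol = Chem.MolFromSmiles("Cn1cnc2c1c(=O)n(C)c(=O)n2C")  # caffeine, as an example

inchi = Chem.MolToInchi(mol)         # standard InChI by default
inchi_key = Chem.MolToInchiKey(mol)  # fixed-length hashed form, handy for searching

print(inchi)
print(inchi_key)
```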


I typically draft my blog posts within Microsoft Word, creating the structures with Symyx Draw and using copy/paste to bring them back into Word. These “pasted” structures are active—double clicking on them launches Symyx Draw with the structure present, ready for editing. I copy and paste the InChIs into the document as well. Lastly, I save the document as an HTML file and then copy and paste the HTML into WordPress to create the actual blog post. Images, including 2D chemical structures generated by Symyx Draw, need to be uploaded to the server. (I typically do this as jpg files.)

 

A special feature of the blog is the ability to view 3D molecules directly within the blog page itself. All figures of 3D molecules are actually links to the 3D coordinates that will automatically load up into a Jmol applet, allowing you to manipulate the structure on-screen, in real time, within the blog window. Simply click on the figure to get this to work! (My thanks to Dan at the OpenScience Project who provided the scripts for embedding Jmol into WordPress.)

 

To see more examples of images, InChIs, and InChIKeys made possible by Symyx Draw, I encourage you to visit my blog. And thanks to Symyx for featuring my blog in these entries. I’d love to hear from others who might be blogging about chemistry. What tools help you be better bloggers?

 

[Editor's note: Symyx Draw, now Accelrys Draw, is available for free academic and home use.]

Categories: Cheminformatics Tags: guest-authors, academic-labs, accelrys-draw, chemical-representation

Blogging Chemistry

Posted by keith.taylor Mar 1, 2010

My guest author for a couple of posts on chemical representation is Steven Bachrach, chair of the Department of Chemistry at Trinity University in San Antonio, TX, and author of Computational Organic Chemistry, a blog supporting the textbook of the same name. Steven will discuss one of the challenges of blogging chemistry: representing and rendering chemical structures in the blogosphere. In the interest of full disclosure, Steven is also related by marriage to a Symyx employee. That said, he was in no way strong-armed into using Symyx Draw in his blog or into authoring this article.

 

[Editor's note: In July 2010 Symyx merged with Accelrys, Inc. Symyx Draw is now Accelrys Draw.]

 

As I was writing my textbook (says Bachrach), I was concerned about how quickly it would go out of date. Research continues on, and even in the time between when I finished writing and when the book was published, new science had appeared! I decided to make use of Web 2.0 technology—a blog—to help me keep the book relevant; posts would discuss new articles that touch on or expand upon subjects in the book.

 

The language of chemistry is the chemical structure, which means that any attempt to communicate chemistry, especially among practicing chemists, must include structures. I had long used a well-known program for general chemical structure drawing, and while I was pleased with its utility in producing nice images, it could not generate the cheminformatics suitable for a web environment. In particular, I wanted to incorporate InChIs and InChIKeys throughout the blog, making the blog searchable for chemistry. Because this other package couldn’t support this, I was in the untenable situation of using one program to create 2D images of structures, another to generate InChIs, and a website to generate InChIKeys—this for each of the 100 blog posts I wrote in the first year!

 

About 18 months ago I tried Symyx Draw, a standalone drawing package that could not only create publication-quality images, but also generate IUPAC names, InChIs, and InChIKeys from a simple pull-down menu. In testing the program, I found it quite capable of meeting all my needs, both for blogging and for manuscript and grant preparation. The user interface is simple and transparent; I was drawing complex structures right away. There are ring templates across the top, including chair cyclohexane, and templates for polycyclics, amino acids, sugars, and inorganic complexes. Down the left side are the essential drawing tools to make bonds, wedge and dashed bonds, add heteroatoms, and add plain text. Right-clicking on an atom brings up a pop-up menu for changing the valence and charge, changing the isotope, or creating a radical center. Right-clicking on a bond gives a pop-up menu with options such as moving it behind another bond to indicate three-dimensional relationships, changing the bond order, and applying color.

 

In my next post, I’ll illustrate some of the ways I’ve used Symyx Draw within my blog.

 

[Editor's note: Symyx Draw, now Accelrys Draw, is available for free academic and home use.]

Categories: Cheminformatics Tags: guest-authors, academic-labs, accelrys-draw, chemical-representation