
Accelrys Blog

13 Posts tagged with the pipeline_pilot tag

Today Accelrys announced the availability of two new, free mobile apps in the Apple App Store. The apps make it easier to deploy tasks to mobile devices and collaborate on projects—and they come from the same place you get your favorite music and the latest version of Angry Birds. What could be easier?

 

A couple of months ago Accelrys launched ScienceCloud, an innovative, cloud-based information management and collaboration workspace that brings globally networked drug researchers together. Mobile, social media-like communication and collaboration are key features of ScienceCloud. Now these capabilities are in the App Store and ready for download.

 

ScienceCloud Tasks lets you use the Accelrys Pipeline Pilot protocol authoring application and the Pipeline Pilot Mobile Collection to build and test scientific protocols (tasks) and deploy them to mobile-enabled scientists across your team. The protocols can be written and deployed either in ScienceCloud or through an on-prem server. If you’re a Pipeline Pilot author, you can leverage mobile input services such as image capture, speech recognition, location capture and barcode reading in the protocols you create. You can deploy specific tasks to specific user audiences and optimize charts and graphs for display on different devices. Your mobile-enabled team members can then run specific tasks supporting scientific workflows (e.g., uploading images to an electronic lab notebook, scanning barcodes, performing chemical searches, etc.). Everyone on the team can instantly view status reports and project dashboards, while also quickly and easily sharing results and ideas with other mobile users.

 

ScienceCloud Projects lets project teams access information using mobile devices and communicate and collaborate more effectively in the ScienceCloud workspace or an on-prem system. You can access, search and share project data anywhere and anytime, view project status and communicate results, post and read project-related, Twitter-like updates, receive instant notifications of new information and engage with external partners using your phone.

 

Additional mobile apps will become available to support further scientific workflows. How would you like to “mobilize” your project team today?    

 

To learn more about ScienceCloud mobile enablement, register for the complimentary webinar entitled “Research Collaboration Going Mobile,” scheduled for Tuesday, May 6, 2014, 8:00 AM - 9:00 AM PDT.

 

If you are at BioIT World in Boston (April 29-May 1), you can also attend the presentation by Accelrys CTO Matt Hahn entitled “Accelrys ScienceCloud: A Hosted Strategy for Collaborative R&D.” His talk will be on Wednesday, April 30 at 2:55 pm (Collaborations and Open Access Innovations Track 11).

155 Views Permalink Categories: Trend Watch Tags: pipeline_pilot, cloud, collaboration, tasks, mobile, science_cloud, projects, mobile_apps, app

As a newcomer to the Accelrys product marketing team, I've spent a great deal of time over the past few months learning more about our products and how customers use them. I noticed after going through this exercise that my “studies” kept coming back to our Accelrys Notebook product. Why is that? Well, it's simple. Literally! Accelrys Notebook is such a simple, easy-to-use product that even a newbie like me can understand its importance in the transition to a digital laboratory.

 

But there's another reason I found myself digging into Notebook. The simplicity message resonates with our customers, and that response is what drives the product's growth. Customer surveys and interviews have shown me that customers really value how simple Accelrys Notebook is to use. Users and IT managers tell us that the product's ease of use and simple user interface quickly break down barriers for their lab staff. This leads to faster adoption for companies that have a competitive need to move their laboratories out of paper notebooks and into a digital, paperless world. Faster adoption means companies can quickly expand Notebook to more people and more sites across their global operations, while avoiding headaches for the people who set up and manage their IT infrastructure. This helps the business in many ways: digital labs save the time once spent sharing and searching for information that was stored on paper and searched by hand. It also speeds up the product development process, as sharing, collaboration and digital signatures for lab work all support the faster patent filing that recent US legislation demands.

 

So, "simple" and "easy to use" sound great. But we are seeing that these words have real meaning for our customers: faster adoption and faster deployment mean real time and money saved. Below are some of the quotes we’ve received in surveys from customers who are using our product:

 

-“It used to take me 20 minutes to prepare my paper notebook and now it takes me 2 seconds…literally.”

-“I cannot overstate how impressed I am with this ELN product…it is reducing my documentation time by 75% or more.”

-“It used to take me 2 hours to write, print and sign every piece of paper just for one page. I can now complete the whole experiment in 7 minutes!”


 

Customers are seeing real value in using Accelrys Notebook. And that value grows as companies quickly deploy the product to more users and more of their global sites. We have helped companies roll out Notebook from a pilot project to hundreds of users within just two months. The simplicity of the product and the ease with which it can be installed and expanded ensure that all areas of a business – users, IT managers and business decision makers – can quickly see the value of Accelrys Notebook in their organizations.

 


 

With our latest Accelrys Notebook 5.0 software release, we've added powerful new capabilities through integration with the Accelrys Enterprise Platform and Pipeline Pilot. Users can now pull experiment data from other sources - LIMS, instruments, Excel, other scientific tools, even inventory management - while working directly in Accelrys Notebook. This makes the digital lab more powerful and can save additional time for labs that typically have to search through paper notebooks for old experimental data, or that duplicate effort because their existing data management systems are not compatible.


 

But the best part is that even though we've made Notebook more powerful, we've kept the cornerstone of our product - a simple, easy-to-use interface - intact. We want people to be more productive with Notebook, and we want them to tap into the power of the Accelrys Enterprise Platform. But we won't sacrifice ease of use, because we know that is what drives further usage and adoption, and keeps our customers happy.


 

If you'd like to learn more about Accelrys Notebook, click the link and you'll find case studies and webinars from companies that have experienced the value of moving to a digital lab with Accelrys Notebook. These resources will show how Notebook can help your company's product development efforts.

 

Also, we have an upcoming webinar on March 6 that covers the basics of Accelrys Notebook. Click on the link and you will find more information and a link to register for our session entitled “Accelrys Notebook 5.0 Helps Science-Driven Organizations Transition to Digital Labs.” I think you'll learn what I've learned: Accelrys Notebook is a powerful yet easy-to-use electronic lab notebook that will save you time and money and reduce the pain of transitioning to a paperless lab.


482 Views Permalink Categories: Electronic Lab Notebook Tags: pipeline_pilot, integration, notebook, eln, lims, platform, lab, 5.0

Today Accelrys announces the launch of ScienceCloud. I have not been this excited about a product launch in years -- fifteen years, to be exact! Back then, life science research was struggling with large volumes of disparate scientific data and no way to automate its access, processing and analysis. Our response was the industry-changing software Pipeline Pilot. Fifteen years later, we are introducing what I believe will be another game changer: ScienceCloud.

 

Today the life science industry is in the midst of significant change, leading to even greater challenges. Biopharma is radically reinventing itself by embracing globalization, looking for innovation from outside partners, and focusing on operational excellence (read: cost pressure). Externalized collaborative research is transforming drug discovery. Organizations are expanding relationships beyond traditional boundaries and creating flexible networks of researchers―some in-house, others with industry and academic partners and contract research organizations (CROs).

 

These networks are increasing in size and complexity, combining numerous partners with diverse project objectives. Externalized drug discovery introduces substantial data and project management challenges. How do you secure the IP of different parties and enable networked partners to share project data in real time? How do you ensure that each team member can only access what he/she is authorized to see? How do you quickly spin up and down dynamic collaborations in today’s fast-changing project landscape? How do you keep team members informed of the progress each partner is making?

 

ScienceCloud was created to address these challenges. It is an innovative, cloud-based information management and collaboration workspace designed to support globally networked drug R&D. A cloud-based solution provides business agility that’s simply not possible with on-premises and IT-supported infrastructure. You can stand up a project partnership with minimal IT support quickly and easily. You can expose, via the web, those applications needed by particular partners to participate in specific scientific workflows. Finally, the cloud promises to reduce the overall cost of software usage, just as it has in other major industries.

 

ScienceCloud provides an integrated suite of project-centered applications which are available to all team members wherever they are, at any time. ScienceCloud leverages social communication to connect teams―empowering researchers to capture and search user and application feeds, post from internal/external systems and share crowd knowledge.

 

With the flexible, multi-disciplinary Notebook, you can capture, access and share experimental information. Pipeline Pilot is also in ScienceCloud, making it possible for you to create and manage scientific protocols and implement standard business rules for partners. You can integrate on-premises systems with ScienceCloud’s web APIs so that data is exchanged easily between on-premises systems and the cloud, facilitating a staged migration. ScienceCloud is mobile-enabled and lets you easily define and share mobile tasks across your team. ScienceCloud Exchange is a web portal, similar to an app store, where you can publish and share Pipeline Pilot protocols useful to the research community. And these are just some of the first ScienceCloud capabilities - more are in store.
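To make the web API idea concrete, here is a minimal sketch of what pushing a single on-premises result to a cloud project could look like. Everything specific in it - the endpoint path, the field names and the token handling - is an invented placeholder for illustration, not ScienceCloud's actual API.

```python
import json
import urllib.request

# Placeholders only -- not the real ScienceCloud API.
BASE_URL = "https://sciencecloud.example.com/api"
API_TOKEN = "YOUR_TOKEN_HERE"

def push_assay_result(compound_id, assay, value):
    """Send one on-premises assay result to a hypothetical cloud project endpoint."""
    payload = json.dumps({
        "compound_id": compound_id,
        "assay": assay,
        "value": value,
    }).encode("utf-8")
    request = urllib.request.Request(
        f"{BASE_URL}/projects/demo/results",      # illustrative path
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

# Example: share an IC50 value captured in an on-premises system.
# print(push_assay_result("CMPD-0001", "IC50_uM", 0.42))
```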

 

I think this all adds up to an exciting next 15 years! Join us again.

 

To learn more about ScienceCloud, please visit ScienceCloud.com. You can also register for the complimentary webinar entitled “ScienceCloud, a cloud solution for externalized, collaborative R&D,” scheduled for Tuesday, February 18, 8:00 AM - 9:00 AM PST.

708 Views Permalink Categories: Executive Insights, Trend Watch, Electronic Lab Notebook, News from Accelrys Tags: pipeline_pilot, notebook, eln, cloud, sciencecloud, drug_discovery

Keeping our Pipeline Pilot protocol authors involved in the wider user community and ensuring they’re updated on new features is a key activity for us at Accelrys. The Pipeline Pilot platform, now named the “Accelrys Enterprise Platform” (AEP), is fundamental to the full range of Accelrys software. A key example of this is the extensibility of our various end-user applications with Pipeline Pilot protocols, exposing the broad scientific capabilities of the platform to both Pipeline Pilot protocol authors as well as application end-users. To this end, over the last couple of weeks Accelrys has run three very successful Pipeline Pilot User Forums at various locations around the US. The events were held in Boston (Oct 17, at the Harvard Club), Princeton, NJ (Oct 18, Elements Restaurant) and San Mateo, CA (Oct 22, Kingfish Restaurant) and were attended collectively by over 100 of our users. The events were informal, free-to-attend gatherings of Pipeline Pilot enthusiasts at local area restaurants, and the format was a combination of presentations and demos from our users and Accelrys, held either side of a networking lunch.

 


 

We like to use restaurants for these user forums because for audiences of this size, they offer a more convivial and intimate atmosphere than the typically sterile hotel conference rooms, encouraging interaction amongst the attendees, which after all is what we’re aiming for. However, holding the events in restaurants can present certain challenges – the presentation facilities may require some “creative problem solving” to ensure technical issues are resolved and that everyone gets a good view. And at two of the events, the modern-day coffee-culture and the proximity of the espresso machines in the main meeting room meant that the post-lunch speakers were competing with the gurgling, bubbling and hissing of lattes in production! On the other hand, the coffee-culture was a plus when it came to generating excitement for the spot quizzes, where Starbucks gift cards were highly sought after prizes.

 

 

The customer talks featured presentations on structure-based drug design (Suresh Singh, from Vitae Pharmaceuticals), management and dissemination of 3D structure data (Johannes Voigt, Gilead), and exploring the chemical synthetic space (Warren Wade, BioBlocks, Inc.). These excellent talks covered classical use cases for Pipeline Pilot and were well-received by the audiences. Charting a somewhat different course, Jenny Heymont from Eisai entertained the audience with a migratory IT tale, recounting her story of using Pipeline Pilot to assist in a migration of Windows, Internet Explorer, and Microsoft Office at Eisai. By using Pipeline Pilot to access the help desk database, manage the list of applications to be tested and coordinate the migration process, Jenny and her colleagues saved an estimated $600K through re-use of hardware and savings in personnel costs, not to mention a successful, on-time project completion. It just shows what Pipeline Pilot can be used for, with a little imagination!

 

 

This set of forums is over for the year, but if you want to see the agendas, you can access them here, and links to most of the Accelrys presentations are given below:

 

 

 

 

Tips and tricks sessions are always popular with our users, so Dave Rogers, co-creator of Pipeline Pilot, will share his insight in a webinar on November 7th. Please join Dave for “Accelrys Pipeline Pilot – Hidden Tips and Tricks” - he guarantees you will come away with at least one new technique to speed up your own protocol development.

 

We’ll keep holding these forums as long as there continues to be interest. Many of the attendees also expressed a willingness to share their own adventures with Pipeline Pilot, and we’ll be contacting them as we plan next year’s events. If you were unable to attend but would like to share your Pipeline Pilot stories, contact me and we can find a good way to get your story out. If you didn’t know about the forums and want to make sure you don’t miss out next year, let me know and we’ll make sure you’re alerted when the next events are coming up.

 

Happy pipelining!

 

Andrew LeBeau

Product Manager, Pipeline Pilot


2,597 Views 0 References Permalink Categories: Data Mining & Knowledge Discovery, News from Accelrys, Cheminformatics, Modeling & Simulation Tags: pipeline_pilot, data_analysis, user_forum, data_pipelining

At Accelrys, we have seen the use of predictive science applications in the life sciences transition from a tool for experts only to a tool used by all. Whether it is the routine use of physicochemical property predictors, ADME and toxicity models, the now ubiquitous use of structure-based design (SBD) tools across project teams, or the application of biophysical property predictions by antibody researchers, these methods are no longer the preserve of a few computational chemists or computational biologists. So, what drove this change?

 

Across the industry, a number of common factors recur. The first I would like to call out is the impact the “patent cliff” has had across the industry. 2012 saw something in the region of $67B USD in drug sales put at risk as major blockbuster drugs came off patent (“Embracing the Patent Cliff,” EvaluatePharma, 2012). Furthermore, between 2012 and 2018, it has been estimated that somewhere in the order of $290B USD of sales could be at risk. That’s a pretty big hit on anyone’s budget! Amongst the numerous effects this has had on the industry has been the increased move to externalized research. But such distributed research structures create new challenges. Can each of the research partners plan, resource and collate the relevant results in time to mine them and make informed team decisions? Indeed, it is not always possible to coordinate experiments and get results returned in time to affect decision making. Hence, design decisions are often made in the absence of all the information.

 

Another factor that cannot go unmentioned is the increased regulation and scrutiny of drug discovery and development by governmental agencies across all major drug markets. To address many of the parameters now required in the optimization of a drug candidate, both in vivo and in vitro screening are introduced early in the discovery process to weed out potential issues long before clinical testing. However, with so many more experimental parameters to test for comes the added problem of resource availability and affordability. In the combinatorial chemistry era, it is simply not cost effective to take every potential compound idea and synthesise, purify, characterise and test it. Some reduction is essential, so choosing the right molecules to make and test raises challenges: which ones to test and which ones to infer?

 

In the face of these challenges, it becomes apparent why the use of predictive sciences is becoming more widespread. Compared with experiments, predictions are fast and cheap and can be integrated seamlessly into existing decision-making systems. At the heart of this move, however, has been the question of accuracy. Arguably, predictive algorithms are now mature enough to help resolve a broad range of R&D challenges. Crucially, the industry has recognised that while an individual prediction might not be absolutely precise for an individual molecule, across a series the tools are often broadly accurate and can be reliably applied as a rank-order filter to separate the ‘good’ from the ‘bad’. This pragmatic approach enables scientists to evaluate many more hypotheses more quickly than is possible with experimentation alone. As a result, predictive science methodologies not only enhance the quality and speed of decision making, they also provide a less expensive and more scalable approach to improving R&D efficiency, especially when deployed on a unified informatics platform.
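To illustrate the rank-order point with a small, self-contained sketch (the numbers are made up), the example below shows a model whose absolute errors are sizeable yet whose ranking of a compound series is perfect - which is often all a triage decision needs.

```python
# Illustrative only: invented predicted vs. measured activities for a small compound series.
predicted = {"cmpd_A": 7.9, "cmpd_B": 6.1, "cmpd_C": 8.4, "cmpd_D": 5.2, "cmpd_E": 7.1}
measured  = {"cmpd_A": 6.8, "cmpd_B": 5.3, "cmpd_C": 7.5, "cmpd_D": 4.6, "cmpd_E": 6.2}

def rank(values):
    """Map each compound to its rank (1 = highest value)."""
    ordered = sorted(values, key=values.get, reverse=True)
    return {cmpd: i + 1 for i, cmpd in enumerate(ordered)}

pred_rank, meas_rank = rank(predicted), rank(measured)

# Spearman rank correlation (no ties in this toy data).
n = len(predicted)
d2 = sum((pred_rank[c] - meas_rank[c]) ** 2 for c in predicted)
rho = 1 - 6 * d2 / (n * (n * n - 1))

mean_abs_error = sum(abs(predicted[c] - measured[c]) for c in predicted) / n
print(f"Mean absolute error: {mean_abs_error:.2f} log units (not very precise)")
print(f"Spearman rank correlation: {rho:.2f} (the ranking is preserved)")
```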

 

Want to read more? Check out the Predictive Sciences web page, or read our Predictive Sciences solution brief, or register for the Predictive Sciences webinar.


931 Views 0 References Permalink Categories: Cheminformatics, Data Mining & Knowledge Discovery, Bioinformatics, Modeling & Simulation Tags: discovery_studio, pipeline_pilot, admet, qsar, biologics, model, predictive_science, prediction_sciences, computational_chemistry

With the release of the Accelrys Enterprise Platform 9.0 (AEP), we started to communicate our thoughts on Big Data as it relates to the large volumes and complexity of scientific data on which our customers rely.  While the original message around this provided some high level views, I thought that I would dig deeper into what this means and provide some examples that we have worked on with our partners from IBM and BT.

 

First, what do we mean by Big Data? By now you have probably heard Big Data described in terms of the three Vs: Variety, Velocity, and Volume. Let’s quickly tackle each of these, but I’ll try not to bore you to death...

 

Variety

 

The IT industry generally views “variety” in terms of different data sources (files, databases, the web, etc.) and data types such as text, images and geospatial data. However, data within a scientific organization is unique and includes chemical structures, biologics, plate data, sequences and much more.

 

Velocity

 

“Velocity,” or the rate of change, can be viewed in two parts. First, new data is added within scientific organizations every day from sources such as next-generation sequencers and electronic lab notebooks. Second, new data is regularly made available through government resources such as the United Kingdom’s National Health Service and scientific data providers like Accelrys DiscoveryGate (shameless plug). Thus, the rate at which new data becomes accessible within an organization is growing exponentially, and businesses need to be able to access and analyze this data to achieve their objectives.

 

Volume

 

When you multiply the variety of data by the velocity at which it is delivered, you get a sense of the sheer amount of data available to parse. (Then again, you might have a very small amount of data and still have to parse it to uncover meaning.) Besides the total amount, you should also consider that individual data items are often large. For example, the human genome is about 3.1 Gigabytes as one uncompressed FASTA file, and a complete sequencing of a human yields 100 GB (and up) per experiment. If chromosomes are stored in separate files, chromosome 1 is the largest at 253 Megabytes uncompressed. Plant data can be even bigger.

 

Also, organizations that have deployed an Electronic Lab Notebook with over 1,500 users can see data volumes exceeding 2 Terabytes.
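Data at these sizes cannot simply be loaded into memory on a typical workstation, which is why pipelines stream it instead. As a rough illustration (the file path is a placeholder), here is a small Python sketch that tallies per-sequence sizes from a multi-gigabyte FASTA file, such as the human genome file mentioned above, one line at a time:

```python
from collections import defaultdict

def fasta_sequence_sizes(path):
    """Stream a FASTA file line by line and count bases per record."""
    sizes = defaultdict(int)
    current = None
    with open(path, "r") as handle:
        for line in handle:               # one line at a time: constant memory use
            line = line.strip()
            if line.startswith(">"):      # header line, e.g. ">chr1 ..."
                current = line[1:].split()[0]
            elif current is not None:
                sizes[current] += len(line)
    return sizes

# Example usage (placeholder path):
# for name, n_bases in sorted(fasta_sequence_sizes("GRCh38.fa").items()):
#     print(f"{name}\t{n_bases:,} bases")
```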

 

Scientific Big Data and the Accelrys Enterprise Platform

 

Scientific Big Data reflects the expanded types of data that an organization must work with while ensuring that regulatory, privacy and security rules are adhered to. The Accelrys Enterprise Platform 9.0 and Accelrys Pipeline Pilot help organizations deliver on the many Big Data initiatives they are working towards.

 

The Accelrys Enterprise Platform 9.0 (AEP9) expands support for High Performance Computing (HPC), a requirement for all Big Data projects. Two HPC options are available: Cluster and Grid. Cluster deployments leverage a Map-Reduce technology geared towards organizations that need HPC capabilities without a large investment in grid infrastructure. Grid integration is available for customers that want to leverage their existing investment in a grid engine. Both options let an organization scale its infrastructure to meet its computing needs.
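As a conceptual sketch of the map-reduce pattern that such cluster deployments rely on - not AEP's actual implementation - the example below splits a workload across worker processes in a map step and then merges the partial results in a reduce step.

```python
from multiprocessing import Pool
from collections import Counter

def map_chunk(records):
    """Map step: compute a partial result (here, residue counts) for one chunk of records."""
    counts = Counter()
    for seq in records:
        counts.update(seq)
    return counts

def reduce_counts(partials):
    """Reduce step: merge the partial results from every worker."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    # Toy data standing in for millions of records spread across a cluster.
    data = ["ACGTACGT", "GGGTTTAA", "ACACACAC", "TTTTCCCC"]
    chunks = [data[i::2] for i in range(2)]          # split the work into 2 chunks
    with Pool(processes=2) as pool:
        partials = pool.map(map_chunk, chunks)       # map in parallel
    print(reduce_counts(partials))                   # reduce to a single answer
```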

 

At the Accelrys Tech Summit in Brussels this year, IBM delivered a performance analysis session that used AEP9 with IBM’s GPFS (proprietary parallel file system technology) to handle scientific data and showed how I/O impacts computing resources.  A link to the associated whitepaper is available on the IBM site and the session was so popular that we are hosting a webinar on this topic on September 5th.

 

British Telecom (BT) leveraged AEP9 clustering along with its Cloud Compute environment to mine the enormous dataset from the United Kingdom’s state-funded National Health Service (NHS). That data is unstructured and covers 55 million people in England and 3.5 million in Wales, with data points in the region of 4 billion. With the power of a cloud computing infrastructure and the design simplicity of Accelrys Pipeline Pilot, researchers can interrogate the NHS data against other unstructured or structured data at their disposal without having to build another data warehouse or data mart. This is one of the big benefits of addressing Big Data with AEP9. Read the press release.

 

BT provided additional details about how AEP9, Accelrys Pipeline Pilot and BT Cloud Compute complement each other at the Accelrys Tech Summit in their session, “Cloud Enablement and Big Health Data Analytics in the Cloud.”

 

In future posts, I will provide technical details on how Accelrys Enterprise Platform and Accelrys Pipeline Pilot enable Big Data capabilities including how other data repositories can be leveraged to benefit other Accelrys applications (ELN, LIMS, etc.).

964 Views 0 References Permalink Categories: The IT Perspective, Trend Watch Tags: pipeline_pilot, platform, cloud, tech_summit, big_data, accelrys_enterprise_platform, cloud_computing

Having worked with Pipeline Pilot for over 10 years, during which time I’ve built numerous demos showcasing its powerful data processing capabilities, I am one of those people who think that anything can be made better with a dose of Pipeline Pilot. So when Accelrys acquired VelQuest early last year, I suspected that there would be some opportunities for Pipeline Pilot to shine. As expected, I didn’t have to wait long for such an opportunity to present itself.

 

Currently, users of Accelrys Lab Execution System (formerly VelQuest SmartLab) rely on Crystal Reports for their reporting needs. The output created by Crystal Reports, while very useful and flexible, is static. Users select the reporting criteria and a PDF report is generated that can be shared and filed. In order to look at a different slice of data, or get more information about a specific item in the report, a new report has to be generated.

 

By contrast, the Accelrys Enterprise Platform (AEP) provides very powerful tools, including components to build HTML query forms, execute queries against databases and create interactive charts, along with ways to link all of these together into highly interactive reports and dashboards.

 

So, on one hand, we had data in the Accelrys Lab Execution System, on the other, AEP’s querying and reporting capabilities. Surely, this was a match made in software heaven.

 

To test this assumption, we set out to build some interactive example reports based on data in the Accelrys Lab Execution System. Choosing from the many candidates was difficult, but in the end we settled on four reports/dashboards that would provide the greatest value to users, thoroughly test the ability of the Accelrys Enterprise Platform to handle these reporting needs and demonstrate the advantage over static reports generated by Crystal Reports. The four we chose were:

•          Compliance report
•          Instrument usage report
•          Trending report
•          Consumable usage report

 

To see one of these interactive reports in action, watch this short video.

 

 

All these dashboards use the Accelrys Query Service to retrieve data from the Accelrys Lab Execution System database. They also use the AEP interactive reporting components to create HTML reports that allow users to query their data and then drill down for more information on selected items. Information in the report is retrieved on-demand from the database, so the latest data points are available as soon as they are collected.

 

These reports can offer useful insights into data in the Accelrys Lab Execution System, providing answers to questions like:

•          Which instruments are used most in a specific lab?
•          Which procedures have the highest rate of “convert to manual” incidents?
•          Which labs/procedures use the largest amount of a certain consumable?
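As a rough sketch of the kind of query behind the first question above, here is a minimal, self-contained example against a hypothetical instrument-usage table. The table and column names are invented for illustration; they are not the actual Lab Execution System schema.

```python
import sqlite3

# Hypothetical schema standing in for the real LES database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instrument_usage (lab TEXT, instrument TEXT, runs INTEGER)")
conn.executemany(
    "INSERT INTO instrument_usage VALUES (?, ?, ?)",
    [("QC Lab 1", "HPLC-02", 41), ("QC Lab 1", "Balance-07", 188), ("QC Lab 1", "pH Meter-03", 95)],
)

# "Which instruments are used most in a specific lab?"
query = """
    SELECT instrument, SUM(runs) AS total_runs
    FROM instrument_usage
    WHERE lab = ?
    GROUP BY instrument
    ORDER BY total_runs DESC
"""
for instrument, total in conn.execute(query, ("QC Lab 1",)):
    print(f"{instrument}: {total} runs")
```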

 

With this information readily available, scientists and technicians can more easily spot and act on trends and bottlenecks, improving the workflow and, ultimately, saving time and money.

 

And this is just the tip of the iceberg. In our announcement about the new Accelrys Process Management and Compliance Suite, you can learn more about the ways our integration with VelQuest technology is helping to accelerate “science to compliance” across industries.

3,342 Views 0 References Permalink Categories: Lab Operations & Workflows, The IT Perspective, Data Mining & Knowledge Discovery Tags: pipeline_pilot, platform, smartlab, les, velquest, compliance, lab_execution_system

Over the past couple of years I've become a big fan of using 'workflow automation' to help me with my modeling projects. What exactly does that mean? Like many of my quantum mechanics colleagues, I had trouble wrapping my head around the idea. What I mean by 'workflow automation' is creating some kind of little computer "program" that helps me do my job faster and more easily. I put "program" in quotes because you no longer need to be a programmer to create one of these. Instead you use a 'drag & drop' tool that makes it so easy that (as GEICO says) a caveman could do it. This terrific capability has been added to the commercial version of Materials Studio (MS) 6.1, and it should make life easier for lots of modelers.

 

[Image: workflow1.png]

What happens when you need to do the same DFT calculation over & over & over again? You'll probably also need to extract one particular datum from the output files, e.g., the HOMO-LUMO gap, the total energy, or a particular C-O bond length. With workflow automation you drag in a component that reads the molecular structure, another that runs the DFT calculation, and a third that displays the results. This is pictured in the figure. (I cheated a bit and added a 4th component to convert the total energy into Hartrees.)
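In script form, the protocol in the figure boils down to the little loop below. It is only a sketch: run_dft() is a stand-in for whatever quantum code the protocol actually calls, and the unit conversion uses the standard factor 1 Hartree = 27.2114 eV.

```python
import glob

EV_PER_HARTREE = 27.2114  # standard conversion factor

def run_dft(structure_file):
    """Stand-in for the DFT component; replace with a call to your quantum code.
    Returns (total_energy_eV, homo_eV, lumo_eV)."""
    return -2751.3, -5.4, -2.1   # dummy numbers for illustration only

results = []
for xyz in glob.glob("structures/*.xyz"):          # 1) read the molecular structures
    total_ev, homo, lumo = run_dft(xyz)            # 2) run the DFT calculation
    results.append({
        "structure": xyz,
        "total_energy_Ha": total_ev / EV_PER_HARTREE,   # 3) convert to Hartrees
        "homo_lumo_gap_eV": lumo - homo,
    })

for row in results:                                # 4) display the results
    print(row)
```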

 

Not impressed? Let's look at a more complex case. A colleague of mine was studying organic light-emitting diodes (OLEDs) based on Alq3. He created a combinatorial library of 8,436 structures. In order to characterize these, he needed to compute a bunch of properties:

  • total energy, HOMO, and LUMO
  • Vertical ionization potential (IP)
  • Vertical electron affinity (EA)
  • Adiabatic IP
  • Adiabatic EA

[Image: OLED2.png]

This requires a total of 3 geometry optimizations (neutral, cation, and anion) plus a few extra single point energy calculations. You then need to extract the total energies from the results and combine them to compute the IPs and EAs. Doing 1 calculation like that is no big deal, but how do you do it 8,436 times? And how do you sort through the results? Using automation, of course! Even a novice can set up the protocol in under an hour, about the time it would take to process 4 or 5 of the structures manually. Plus you can display your results in a really nice HTML report like the one pictured.
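The bookkeeping the protocol automates is just a handful of energy differences. Here is a minimal sketch using the standard definitions (vertical values taken at the neutral geometry, adiabatic values at each species' own optimized geometry); the energies are made-up placeholders.

```python
# Total energies in Hartree for one library member (made-up placeholder values).
E = {
    "neutral_opt":            -1483.2150,  # neutral, optimized geometry
    "cation_at_neutral_geom": -1483.0112,  # cation, single point at the neutral geometry
    "cation_opt":             -1483.0231,  # cation, optimized geometry
    "anion_at_neutral_geom":  -1483.2441,  # anion, single point at the neutral geometry
    "anion_opt":              -1483.2525,  # anion, optimized geometry
}

HARTREE_TO_EV = 27.2114

vertical_IP  = (E["cation_at_neutral_geom"] - E["neutral_opt"]) * HARTREE_TO_EV
adiabatic_IP = (E["cation_opt"]             - E["neutral_opt"]) * HARTREE_TO_EV
vertical_EA  = (E["neutral_opt"] - E["anion_at_neutral_geom"]) * HARTREE_TO_EV
adiabatic_EA = (E["neutral_opt"] - E["anion_opt"])             * HARTREE_TO_EV

print(f"Vertical IP:  {vertical_IP:.2f} eV   Adiabatic IP: {adiabatic_IP:.2f} eV")
print(f"Vertical EA:  {vertical_EA:.2f} eV   Adiabatic EA: {adiabatic_EA:.2f} eV")
```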

 

Using this approach makes it easy to combine calculations in new ways. One protocol I wrote, for example, uses MS Polymorph Predictor to determine the lowest-energy crystal structures of a molecule, but before starting the calculation it determines the atomic partial charges with density functional theory (DFT). And once it's done, it can use DFT again to refine the predictions. Another combines MS Amorphous Cell and MS Forcite to automate the computation of the solubility parameter of polymers.

 

You've been able to generate these sorts of workflows for quite some time using Pipeline Pilot. So what's new? Now Materials Studio customers get a license to these tools, and they can call their protocols from within the MS GUI. The combination of Pipeline Pilot and Materials Studio means you can create sophisticated workflows easily - without the complexity of Perl scripting - and you can launch the protocols from MS and get the results back into it.

 

[Image: ProtocolDialog.png]

Share and share alike

The final point I'd like to make about these protocols is how easy they are to share. Simply place the protocol in a shared folder on the server and anybody can run jobs. There's no need for them to download a copy or configure anything. If they have an MS Visualizer then they can use your protocol. This lets experienced modelers create reliable, repeatable workflows that they can share with non-modelers. Perhaps more interesting to expert modelers, we can learn from each other and share advanced workflows.

 

I think it'd be awesome if we had a section on the Materials Studio community pages where we exchange protocols. I'll kick off the process by loading my Polymorph Predictor protocol. I'll let you know when that's ready. Stay tuned.

 

In the meantime, let me know what types of protocols you'd like to see. What are the things that you spend the most time on? What things could be streamlined, automated? Don't be shy: post a note to this blog and let everybody know what you think.

808 Views 0 References Permalink Categories: Materials Informatics, Modeling & Simulation Tags: pipeline_pilot, materials_studio, dft, materials, polymorphs, automation

If the Twitter feeds from Marco Island are blowing up, it's because our friends at Oxford Nanopore have lit the fuse. They have finally made public the crucial information about two new nanopore-based sequencing instruments to be generally available in the second half of 2012. Some great summaries of the technology have already appeared; see especially:

 

http://www.nature.com/news/nanopore-genome-sequencer-makes-its-debut-1.10051

 

http://omicsomics.blogspot.com/2012/02/oxford-nanopore-doesnt-disappoint.html

 

http://pathogenomics.bham.ac.uk/blog/2012/02/oxford-nanopore-megaton-announcement-why-do-you-need-a-machine-exclusive-interview-for-this-blog/

 

The capsule review: staggeringly long read lengths (50 kbp or more), minimal sample prep, uniform error rate, extremely high throughput, and eventual direct reading of RNA (no cDNA required).

 

All very impressive and impactful on their own...

 

But the kicker is the MinION: a USB-stick disposable sequencing instrument for less than $1000.

 

Let that sink in for a moment. A sequencing instrument that you can carry in your pocket, that plugs right into your laptop, provides results within hours, and costs less than $1000 with no other instrument required. I had to pick my jaw off the floor when ONT first briefed us…

 

As we announced some time ago, ONT will be distributing Pipeline Pilot and the NGS Collection as their preferred bioinformatics solution for the GridION and MinION systems. Combining Pipeline Pilot with ONT's real-time sequencing technology enables some amazing capabilities. You no longer need to run an instrument for days or longer and blindly hope for usable results. Now your analysis can control the experiment: run until polymorphisms are found in locations of interest, or run until you achieve a desired level of exome coverage; you can even parallelize sequencing experiments, with results from one sample controlling the sequencing performed on other samples. And with the MinION "stick," you can sequence anywhere you can take a laptop.

 

No kidding…
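The "run until" idea is essentially a feedback loop between the analysis and the instrument. The sketch below is conceptual only - the instrument interface is invented, not ONT's or Pipeline Pilot's actual API - but it shows the shape of the logic: keep analyzing reads as they arrive and stop the moment the scientific target is met.

```python
import random

TARGET_COVERAGE = 30.0          # stop once mean exome coverage reaches 30x
EXOME_SIZE_BP = 30_000_000      # rough exome size used for the toy calculation

def next_read_batch():
    """Stand-in for pulling reads off the instrument in real time (read lengths in bp)."""
    return [random.randint(5_000, 50_000) for _ in range(2_000)]

def stop_sequencing():
    print("Target reached - telling the instrument to stop this sample.")

total_bases = 0
while True:
    total_bases += sum(next_read_batch())        # analyze reads as they arrive
    coverage = total_bases / EXOME_SIZE_BP
    print(f"Current mean coverage: {coverage:.1f}x")
    if coverage >= TARGET_COVERAGE:              # the analysis decides when to stop
        stop_sequencing()
        break
```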

 

A few brief videos on the ONT site highlight how Pipeline Pilot is being used to work with and analyze the data from these remarkable instruments:

 

http://www.nanoporetech.com/news/movies#movie-22-gridion-part-2

 

http://www.nanoporetech.com/news/movies#movie-25-run-until-dna-sequencing-informatics-on-the-gridion-and-minion-systems

 

We think ONT’s technology is game-changing, and we’re delighted to be the informatics platform of choice for this breakthrough technology.

 

How does this change how you and your organization will deploy genome sequencing?

2,140 Views 0 References Permalink Categories: Trend Watch, News from Accelrys, Bioinformatics Tags: pipeline_pilot, oxford_nanopore_technologies, minion, dna_sequencing_informatics, dna_strand_sequencing

Turning Over Rocks to Find Oil

Posted by mdoyle Oct 13, 2011

As many of you know or have noticed, the price of petrol or gasoline is increasing. This is largely because we have reached or passed the peak of easily available and exploitable oil reserves. Of course, there are fields that become economically producible at different oil prices; however, the majority of gasoline and other petrochemical products now comes from increasingly costly and difficult reserves. Finding, locating, quantifying, producing and then operating oil reserves is a complex, multi-disciplinary process. A huge amount of science and technology is devoted to the processing and analysis of seismic and geo-seismic data. Here, however, I am focusing on the drilling, evaluation, completion and production stages of extracting oil and gas from shale.

 

Shale is the most abundant of the rock types in which oil is found and the most abundant source of hydrocarbons for oil and gas fields. Shale is commonly considered to be the source rock from which the majority of the oil and gas in conventional reservoirs originated. It is a sedimentary (layered) rock made up of thin layers of clay or mud. There are many variations in shale geology, geochemistry and production mechanisms; these variations occur between wells, and even within the same shale area. Shales are dissimilar to one another, and each shale deposit offers its own set of unique technical challenges and learning curves. That said, shale gas was recently characterized as a "once-in-a-lifetime opportunity" for U.S. manufacturing by Bayer Corp. CEO Gregory Babe.

 

Shale is unique in that it often contains both free and adsorbed gas. This results in initially high production rates, quick decline curves and then long-term steady production. Formed millions of years ago from accumulations of silt and organic debris on lake beds and sea bottoms, the oil substances in oil shale are solid and cannot be pumped directly out of the ground. To be produced effectively, oil shale is first mined and then heated to a high temperature. An alternative, currently experimental process referred to as in situ retorting involves heating the oil shale while it is still underground and then pumping the resulting liquid to the surface. When producing gas from a shale deposit, the challenge is in enabling and enhancing access to the gas stored within the shale's natural fracture system.

 

The complexity of the materials interactions that occur below the surface, and the advanced challenges within the system, require that formulators and product designers at the service companies providing drilling fluids, muds and completion fluids investigate all the complex interactions and trade-offs in formulation, recipe and ingredient space.

 

761 Views 0 References Permalink Categories: Materials Informatics, Data Mining & Knowledge Discovery, Modeling & Simulation, Electronic Lab Notebook, Trend Watch Tags: pipeline_pilot, materials-studio

So iPads are all the rage. They’re cool. They’re easy to use and they keep you in touch with your favorite media anywhere you want. I want one! So, is the iPad ready for the information-intensive scientific lab? I welcome everyone to comment…

 

There’s a chicken-and-egg scenario here. Vendors are looking to add scientific software to the iPad but are still trying to understand what to support and the value of using an iPad in the lab versus, say, a tablet PC. There are a couple of challenges: first, only a few scientific software applications work on the iPad today, and second, there is a debate about how best to use the iPad in the lab. After all, is the iPad really an efficient way to enter or document information in the lab? It’s a great tool for browsing data such as safety sheets, protocols, dashboards or inventory information. The iPad is also an acceptable data entry tool if limited information is being entered into the touch screen UI. But do you really want to document your whole experiment through an iPad?

 

The challenge for the scientific informatics industry is deciding which applications should be ported, not porting just because we can. Does moving a feature-rich, highly interactive ELN to the iPad really make sense, for instance? Or should just a subset of functionality be ported? I posit it is a subset of functionality. Focusing on the ergonomics of an iPad, it would be great to be able to browse my lab data, dashboards, protocols and methodologies for real-time information access while doing the wet work in the lab. It would also be highly useful during the wet work to enter small amounts of runtime information into my mobile device. But I argue we should not encourage scientists to do all their notebooking on the iPad, just as we should avoid arming scientists with a sledgehammer to crack a nut.

 

I would like to hear what you think.

1,477 Views 1 References Permalink Categories: Bioinformatics, Cheminformatics, Data Mining & Knowledge Discovery, Electronic Lab Notebook, Lab Operations & Workflows, The IT Perspective Tags: pipeline_pilot, notebook, eln, symyx-notebook-by-accelrys, cheminformatics, ipad

For nearly a decade, scientific IT has debated thin vs. smart clients for decision support. The dilemma is that scientists have different needs, and until now, scientific search and browse applications came in one flavor: smart OR thin.

 

When a product evaluation committee is formed for a new product, it’s often the computer-savvy scientists who jump on board: those who frequently access their data, have a deep understanding of their information and highly appreciate the fast, responsive, interactive and in-depth capabilities you get with a smart client that caches the information. Accelrys Isentris is a case in point. As a result, scientific organizations often adopt rich smart-client applications, sometimes at the expense of the less computer-savvy users.

 

However, for the scientists who access their data infrequently and don’t want to be computer experts, less means more. In many cases, up to 80% of a scientific community can consist of scientists who need only basic, infrequent access to their data. These scientists don’t want to become experts, nor do they want to use expert systems; yet the system presented to them often has to be “tamed” before it feels simple to use and actually makes life simpler.

 

Now, with the new announcement by Accelrys that Isentris licensees are entitled to use the Pipeline Pilot web-based search interface as well as the rich Isentris .NET client application, scientific organizations have the best of both worlds. For the system expert, the rich Isentris smart client provides deep, fast interrogation of data with advanced search and browse capabilities to extract the right data at the right time and present it in the right context for decisions and reports. For the infrequent user, the lite web application provides a simple-to-use UI that can be rapidly deployed across the organization to answer common questions from any workstation with minimal training and delay. Scientists who want both web access and a smart, rich client can now be BOTH smart and thin.

 

What do you want to be? Smart, thin or both?  Now, you can have it all!

445 Views 0 References Permalink Categories: News from Accelrys, Cheminformatics, Data Mining & Knowledge Discovery, The IT Perspective, Scientific Databases Tags: pipeline_pilot, isentris, isis, database, cheminformatics, web_client, thin_client, thick_client, smart_client

As Anderson says in his post in Wired, the Petabyte Age is upon us. Just consider that in the last 30 years we have gone from 32K ferrite memories in room-filling machines, through VMS and paper tape, to my new laptop with a terabyte array. Similarly, we have gone from early data stores to RDBMSs, and now to Google obfuscating and federating many data sources. So, what’s next?

 

Anderson makes a solid argument that a holistic information map is now an unachievable goal and that what we need instead is locally descriptive analysis. Although he is right in much of what he says, this is a debatable point. The failure of chess-like programs in real-world scenarios is linked to computer science’s lack of contextual capability, so it follows that a wholesale abandonment of the context of information would lead to a form of data Alzheimer’s. Where I totally agree with him is his quote of George Box (of Box, Hunter and Hunter fame): "All models are wrong, but some are useful." This is very true, and it makes the argument for consensus models and consensus decision trees. Think of trial by jury as an analogue: multiple models, housed in an accessible framework, can allow decision making that is free from local bias or local variance.

 

What is also very interesting about Anderson’s thesis is the observation that, although we can and will continue to build multiple taxonomies, the idea of a universal language - like Esperanto - is flawed. In my opinion, as a child of the 90s and the fall of monolithic states, people will adapt and morph like a flowing river, and ideas such as universal taxa cannot by themselves provide the overall context. Therefore, it is by providing tools that make these taxonomies interchangeable, and a platform that makes interactions between them trivial, that we will move towards our goal of universal information.

 

Consider what I saw yesterday: a lady pushing two kids in a stroller at Newark airport. Both children were using iPad kids’ apps. They were both drawing, learning and, in fact, sharing information. These are children no older than, say, five years old, and they represent the future and how we will move. In their world, the idea of information having boundaries and limits is completely nonsensical. So we have to understand that the explosion of data is just beginning and start to plan accordingly.

694 Views 0 References Permalink Categories: Data Mining & Knowledge Discovery, Trend Watch Tags: pipeline_pilot