Recently, we released Collection Update 1 (CU1) for Pipeline Pilot 8.5. Along with many other updates to various component collections, we have updated the Professional (Pro) Client and released a new feature called “Protocol Comparison”. In software development it is very common to use a “diff” tool to compare two versions of code, to see where differences occur and to decide whether each difference should be kept or resolved. This is very useful when working on a newer version of a program, or when multiple developers have worked on the same program and their changes must be combined into a single working version.
Well, Protocol Comparison is like “diffing” for Pipeline Pilot protocols. For two versions of a protocol, it allows you to see which components have been added, deleted, or modified. For modified components, there is a report that shows all the changes to that component, and the same information is provided at the protocol level itself. It also shows which pipes have been added or deleted, and we have incorporated a tool for comparing scripts and long parameter values. Typical scenarios where we think this will be valuable include reviewing recent changes to your protocol to see which ones to keep and which to revert, and situations where multiple authors are working on the same protocol. You can also compare the same protocol deployed to different servers, to simplify the process of making changes to a protocol on a development server and then deploying that protocol to a production server with confidence that only the desired changes have been made.
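To make the added/deleted/modified classification concrete, here is a minimal sketch in Python of how a component-level diff can be modeled. This is purely illustrative: Protocol Comparison's actual algorithm and data structures are not described here, and the name-to-parameters mapping below is an invented simplification.

```python
def diff_components(old, new):
    """Classify components as added, deleted, or modified.

    `old` and `new` map component names to their parameter dicts
    (a hypothetical structure, for illustration only).
    """
    added = sorted(set(new) - set(old))
    deleted = sorted(set(old) - set(new))
    # A component present in both versions is "modified" if its
    # parameters differ between the two versions.
    modified = sorted(name for name in set(old) & set(new)
                      if old[name] != new[name])
    return {"added": added, "deleted": deleted, "modified": modified}

v1 = {"Reader": {"path": "a.csv"}, "Filter": {"expr": "x > 0"}}
v2 = {"Reader": {"path": "b.csv"}, "Writer": {"path": "out.csv"}}
print(diff_components(v1, v2))
# {'added': ['Writer'], 'deleted': ['Filter'], 'modified': ['Reader']}
```

A real protocol diff must also handle nested subprotocols, pipes, and parameter-by-parameter reports, but the same three-way classification is the core idea.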
Protocol Comparison is a feature that has been requested for some time, and although conceptually quite simple, it proved to be a fairly challenging feature to implement. We’re not aware of any other diff tool that makes comparisons graphically the way Protocol Comparison does, so we’re excited about it, and we think it is a big advantage for anyone who uses Pipeline Pilot to create data processing pipelines. I hope you will use it and provide us feedback about your experiences with it, along with any further enhancements you’d like to see.
It was a great pleasure to announce last month that Accelrys has added HEOS by SCYNEXIS to the new Accelrys Cheminformatics Suite. There has been a lot of buzz since the announcement, including a well-attended webinar. So, I've invited Frederic Bost, director of information services at SCYNEXIS, back to expand on his recent post about HEOS and explain the value it brings to our Cheminformatics Suite.
One of the most significant trends influencing the way drug discovery proceeds is externalization. We've seen the impact directly here at SCYNEXIS, which is why we began building HEOS nine years ago to facilitate collaborations with our research customers.
Today, research organizations are complementing their unique expertise and moving research programs forward by using externalization to bring valued, skilled partners into select projects. This has fundamentally changed the nature of outsourcing relationships. Once, outsourcing was based on transactions, with a commissioning company telling another company to "make this" or "test that." Modern partnerships and outsourcing now aim to "discover this" through collaborations that often involve multiple partners playing different roles in different projects from locations around the world.
Most research organizations currently support collaborative research using one of two models. Some trade files via email or an e-room, an option that can be insecure and error prone, requires scientists to mediate and interpret the data exchanged between partners, doesn't occur in real time and simply doesn't scale to projects involving multiple partners. Other organizations opt to open up their internal systems to partners via VPN connections, relying on security settings in their internal systems to keep partners segregated. Most cheminformatics systems can do this, but internal systems have been designed on the assumption that everyone accessing data is an employee entitled to full access rights. Hence, opening up an internal system at best increases risk and requires extra resources to maintain security; at worst, it is time-consuming, expensive and, like the other model, fails to scale given the dynamic nature of collaborative research.
More importantly, both of these current models fail to address IP ownership. Traded files are hard to track, and storing IP from collaborative research alongside internally owned IP in the same system makes it difficult to delineate and separate who owns what when a collaboration ends.
The needs associated with modern collaborative, externalized research are perfectly suited to cloud computing. HEOS, as we've demonstrated with the Drugs for Neglected Diseases initiative, captures all the data and processes associated with conducting multi-company drug discovery projects and was designed from the ground up to enable multiple companies with different access rights and requirements to collaborate within the system.
The graphic below shows how it works. A primary collaborator can establish any number of secure team areas in the HEOS cloud to manage collaborative research. Within each research area, the primary collaborator partitions access to the data among its partners. Some partners may only have the ability to upload or publish data. Others may be able to download certain data from other team members (such as an analytical lab needing to conduct tests on compounds produced by a medicinal chemistry partner). New partners can be rapidly spun onto projects or spun down when their services are no longer required. Best of all, collaborative IP remains in the collaborative space and can be partitioned among the partners and downloaded into their internal systems when appropriate.
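The partitioned-access model described above can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration of the general idea, not HEOS's actual permission model or API; every class, method, and partner name here is invented.

```python
class ProjectArea:
    """Toy model of a secure team area with per-partner rights."""

    def __init__(self):
        # partner name -> set of granted rights, e.g. {"upload", "download"}
        self._rights = {}

    def grant(self, partner, *rights):
        # "Spin up" a partner onto the project with specific rights.
        self._rights.setdefault(partner, set()).update(rights)

    def revoke_all(self, partner):
        # "Spin down" a partner whose services are no longer required.
        self._rights.pop(partner, None)

    def allowed(self, partner, action):
        return action in self._rights.get(partner, set())

area = ProjectArea()
area.grant("med_chem_lab", "upload")                    # publish-only partner
area.grant("analytical_lab", "upload", "download")      # needs others' compounds

print(area.allowed("med_chem_lab", "download"))    # False
print(area.allowed("analytical_lab", "download"))  # True
area.revoke_all("analytical_lab")
print(area.allowed("analytical_lab", "download"))  # False
```

The point of the sketch is the asymmetry: each partner sees only the slice of the project the primary collaborator has granted, and removing a partner is a single revocation rather than a rework of an internal system's security settings.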
To those who worry whether they can trust the cloud for managing research data: HEOS has been audited by some of the largest pharma companies and has survived ethical hacking attack tests (read the HEOS security white paper to find out more). I’d say the cloud is ready for research data. HEOS is a proven technology that has managed millions of compounds and tens of millions of data points.
So, are you and your research partner networks ready for the cloud?
QbD is fundamentally about building quality into the process used to design and make a product, rather than trying to test quality into the product afterward. The ideation of the process and the final manufactured product starts early in development with process engineering and formulation. In order to clearly understand the “design space”, information about the process and the product needs to be captured and made available so that scientists can understand and determine the critical quality attributes that affect a product. Scientific informatics is thus fundamental in supporting QbD, and more specifically an ELN that provides the context of who did what, why and when around a product design and process.
ELNs are now playing a key role in early to late development and are thus an integral part of the informatics life cycle that supports QbD. The evolution of the ELN to create and capture workflows, as well as to provide new data access and analysis, now enables pharma development organizations to deploy an informatics environment that better delivers QbD. If you’re interested, I’d highly recommend reading the new QbD article just published by Accelrys that discusses the role of informatics in support of QbD, and specifically the role of the ELN.
Click the link to read the new white paper--and do, by all means, let me know what you think!
It was interesting to read the following research report findings from Thomson Reuters:
“Academic pharma patents more than trebling, from 602 in 2005 to 2025 in 2010”.
"This seems to show that [industry] is patenting more than ever in efforts to protect its intellectual property developments, but also academia is waking up to the need to protect itself, and raise its ability to generate revenue streams," the report claims. IP and Pharma funding is now becoming a significant source of funding for academia and especially important in light of drying government and research body grants. Recent changes for US patent laws to a 'first to file' process from a 'first to invent', funding paucity from government and the transient nature of academic research staff, mean that it is now more important than ever to protect, keep safe and prevent IP related to future patents from “wandering”.
Today’s ELN is now ideally suited for academic deployment. Web based, with zero installation, low cost of ownership, ease of use and little reliance on IT resources, it enables any university or group of universities to embrace an informatics infrastructure to capture and protect IP. Today we see universities deploying ELNs to over 5,000 users.
An ELN provides a secure environment that can be rapidly deployed to protect research information, and also provides a means to trace and defend against research “secrets” straying into other institutions as a result of a migrant workforce. From a competitive funding perspective, universities that have embraced an informatics infrastructure that better manages and protects research information, in a highly transferable electronic format shared with pharma partners, are arguably more likely to receive research funding.
So what's holding academia back from joining the other estimated 154,000 ELN users in the world?
When we think about going electronic from our paper notebooks, we often think about simply replacing the paper document process. Initially, ELNs evolved as a “paper on glass” replacement, but over the last 10 years they have gone through successive evolutions to well beyond document replacement. Industry talks about 5th generation ELNs that define and create a new role for the ELN in today’s labs. Driving this change is a need for simplicity, efficiency, centralization and fewer resources to meet the growing needs of globalization and diminishing budgets. As a result, we are seeing a trend where LIMS (Laboratory Information Management Systems) capabilities and ELN capabilities are merging, as scientists want a single system to manage and orchestrate their experiment and sample processing.
Traditionally, scientists looked to ELNs to capture who did what, how, why and when: the day-to-day unstructured activities, thoughts and processes. Meanwhile, LIMS captured the routine structured information streaming from software and instruments. The question, and the reality, is: why do we need two systems for managing lab data? It’s for this reason you are seeing LIMS vendors adding ELN capabilities and ELN vendors adding LIMS capabilities. Today, the latest Accelrys ELN not only replaces the paper document for unstructured information, but extends the role of the notebook. Now capable of storing data entered in a notebook section in a structured format, the ELN allows experiment and sample parameters to be stored and re-accessed with reporting and analysis tools.
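The practical difference between "paper on glass" and structured storage is that structured records can be queried across experiments. The tiny Python sketch below illustrates the general idea only; it does not represent the Accelrys ELN's actual storage model, and all field names and values are invented.

```python
# Hypothetical structured experiment records, as a vendor-neutral illustration.
experiments = [
    {"id": "EXP-001", "scientist": "Lee",   "yield_pct": 72.5},
    {"id": "EXP-002", "scientist": "Lee",   "yield_pct": 88.1},
    {"id": "EXP-003", "scientist": "Patel", "yield_pct": 64.0},
]

# Cross-experiment query: all of Lee's experiments with yield above 70%.
# With scanned paper pages, answering this would mean rereading every notebook.
hits = [e["id"] for e in experiments
        if e["scientist"] == "Lee" and e["yield_pct"] > 70]
print(hits)  # ['EXP-001', 'EXP-002']
```

Dashboards of experiment progress and cross-experiment reporting, as described below, are built on exactly this kind of structured access.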
To see some examples of what’s now possible with 5th generation ELNs, check out the new videos that demonstrate how new data management, analysis and reporting capabilities change the role of the ELN. The first video shows the Thermo Fisher Envision tool analyzing and processing spectra within the Notebook; the second shows how managers and PIs can see dashboards of experiment and sample progress in the Notebook; and the third shows how everyday scientists can query and access their information across experiments. These are all capabilities that redefine the role of the ELN in the lab and help with enterprise lab management.
In conclusion, while the ELN will not replace all LIMS, and LIMS with added document capabilities will not replace all ELNs, we are seeing a convergence of capabilities. Scientists will see the benefits of these vendor investments by being able to execute day-to-day lab operations with fewer software interfaces in a centralized environment.
So should we be calling the 5th generation ELNs, "ELNLIMS", or "ELMS" – Electronic Lab Management System? What are your thoughts? Is ELN now a confusing name for the newer generation ELNs focused on a convergent lab?