Accelrys’ goal is to help our customers optimize their Scientific Innovation Life Cycle, and we are constantly working on new and improved software solutions to help them achieve success through innovation. In Discovery, a critical task for our customers’ teams is to make informed decisions about the path forward for their projects. Ideally, these decisions are made collaboratively, quickly, and based on a complete and accurate set of all available data across their organization, their partners and public sources. However, for many companies today this can be challenging:
Data may be locked in many disparate silos, requiring tedious and error-prone manual processes to collate.
The volume of data may be overwhelming, and the sophisticated calculations, analytics and visualization needed to turn the data into useful knowledge may be hard to access.
Finally, the means to collaborate on project data and knowledge in real time, across all team members, may simply not be available.
With these applications, our customers can help project teams make better-informed decisions faster by providing them with a modern, collaborative scientific decision support environment. Together, Accelrys Insight and Insight for Excel allow teams to gather data from internal, partner and public sources, curate that data, easily apply sophisticated predictive calculations and analytics, and visualize, report and collaborate on the data in a single seamless workflow.
Accelrys has a long and proud tradition in cheminformatics. Insight and Insight for Excel are the culmination of our endeavours to create a single, unified, next-generation cheminformatics suite that now provides a complete end-to-end workflow for discovery scientists, including sophisticated scientific representation, scientific entity registration, data access, analysis, reporting and collaboration. We are delighted that the Suite now provides a solid migration path for all of our existing customers on our earlier offerings, supporting their critical processes while at the same time providing modern ways of working with and collaborating on data. The Suite is built on the Accelrys Enterprise Platform, providing full enterprise support as well as a unique graphical programming environment that makes the applications configurable and extensible in ways never before seen.
If you’d like to learn more about Accelrys Insight and Insight for Excel please check out the microsite or register for the webinar.
Werum (www.werum.de), one of our strategic technology partners that supplies MES systems for the pharmaceutical industry, hosted a great meeting in Lüneburg for their users of PAS-X. The two day event covered many product and industry related topics including regulatory compliance updates and Pharma business trends with great speakers! Around 150 customers attended the well-organized event and enjoyed the conference as well as the “Dinner & Drive”, a test drive organized by the German Automobile Association (ADAC).
As the Product Marketing Manager for the Analytical, Development, Quality and Manufacturing space I delivered a presentation about ‘Reducing Cycle Times and Risk through Smart Interfaces with your Lab’ which discussed the interfaces of Accelrys ELN, LES, Environmental Monitoring and Discoverant with PAS-X. During the session we had some great discussions around systems integration and their benefits for users! To request a copy of the slides presented, please click here.
Life sciences R&D faces major challenges today: shortages of new drugs in company pipelines, growing costs to bring drugs to market, patents expiring, increased competition, downward price pressures from the market, new safety and regulatory measures coming into effect and increased public scrutiny. All of these factors are combining to make successful innovation much more difficult for pharmaceutical organizations.
The industry is responding to these challenges by pursuing new markets, new formulations and new uses for existing products. We’re also seeing attempts to reduce costs through mergers, acquisitions and aggressive externalization. At the lab level, pharmas and biotechs are also exploiting high-throughput and high-content technologies and advances in biotechnology and genomics. Pharmacogenomics, toxicogenomics, chemogenomics and pathway analysis are improving scientists’ insight into druggable targets and the metabolic pathways they control. Translational medicine is linking clinical information back to early-stage discovery.
A side effect of these advanced technologies has been a dramatic increase in the amount of data being collected, which has created another fundamental challenge—managing massive amounts of complex data and collaboratively transforming it into knowledge. Somewhere, hidden within this data-rich R&D environment, there may be pointers to the next pharmaceutical discovery: what proteins are encoded by these genes, what biological pathways do they impact, are these targets druggable, what kinds of molecules might interact with these targets, what adverse effects are expected?
The huge amount of data is often considered the main component of the challenge. But even when managing smaller amounts of data, informational complexity and diverse data “silos” spread across the enterprise (and externally among partners) still present a major obstacle to innovation that most informatics systems and data analysis tools simply cannot overcome.
While new technologies and specializations are resulting in enormous progress in the right direction, the data explosion they have created means that scientists today can spend about 20% of their time just managing information. The oft-repeated refrain is: “I’m spending nearly all my time, finding, processing, organizing and moving data around instead of being productive in the lab—and it’s only getting worse.”
The informatics challenge today is really a four-part puzzle around how to improve data access, data analysis, data reporting and overall team collaboration. With scientific data often locked in different database systems in various locations, data acquisition can be a complex, time-consuming, often manual and, therefore, error-prone task of cutting, pasting and joining. Scientists often do not or cannot complete these tasks, so they end up making decisions based on incomplete data or stale information collected at an earlier time. Likewise, large volumes of data can be overwhelming to analyze, sometimes requiring computational experts to intervene―and this can result in another bottleneck or the temptation to make fast decisions without the benefit of full analysis. Reporting can be another manual, labor-intensive and error-prone process resulting in a logjam of spreadsheets, PowerPoints and emails that hinders decision making with no real-time data sharing.
You and your team need to be making informed decisions every day. What’s the greatest challenge you face when it comes to accessing and sharing the right information in the right context and at the right time, so you can move your projects forward in an efficient, timely manner?
Accelrys will make a major announcement about our launch of an exciting new collaborative data access, analysis, reporting and decision support solution at the Pharmaceutical IT Congress on September 23-24 in London. If you’re at the conference, visit us to learn more. Be sure to attend Dr. Rob Brown’s presentation, “Next Generation Cheminformatics Suite for Enhancing R&D Productivity and Innovation,” in the “Managing Enterprise IT” track on Monday, September 23 at 12:20.
I am excited to announce today the acquisition of ChemSW, a leading environmental health and safety solutions provider headquartered in Fairfield, California. This is an important combination for our companies and our customers, and I look forward to the opportunities we have to help our customers accelerate innovation by optimizing the lab-to-commercialization process.
With this acquisition, Accelrys is expanding its portfolio of scientific innovation lifecycle management (SILM) software to include new solutions, offered via the Cloud or on premises, for managing and tracking the source, use and disposal of chemicals from lab to plant. This important capability is crucial to our customers across industries – from specialty chemicals, to consumer packaged goods to food and beverage, life sciences and oil and gas – who are seeking more efficient ways to enhance compliance with increasingly complex EH&S regulations.
By replacing today’s paper-based systems, the ChemSW EH&S compliance portfolio eliminates many of the errors and inconsistencies that can slow innovation and hinder compliance. Importantly, ChemSW’s EH&S compliance portfolio will be integrated with our solutions via the Accelrys Enterprise Platform to bring our customers a complete set of solutions for optimizing chemical and material selection and processing throughout the scientific innovation lifecycle.
As with our recent acquisitions of Velquest, Aegis and Vialis, the addition of ChemSW advances Accelrys’ SILM strategy and will help our customers bring innovative products to market faster and more efficiently. This acquisition is particularly exciting in that it represents a significant advance for Accelrys in delivering solutions that support sustainability. Specifically, our combined software portfolio will help our customers improve operational efficiency, deliver innovation and comply with regulatory requirements – all hallmarks of strong sustainability initiatives. I invite you to discuss your own sustainability initiatives with our solutions teams and to learn more about how the ChemSW solutions can support you.
With the release of the Accelrys Enterprise Platform 9.0 (AEP), we started to communicate our thoughts on Big Data as it relates to the large volumes and complexity of scientific data on which our customers rely. While the original message around this provided some high level views, I thought that I would dig deeper into what this means and provide some examples that we have worked on with our partners from IBM and BT.
First, what do we mean by Big Data? By now you have probably heard Big Data described in terms of: Variety, Velocity, and Volume. Let’s quickly tackle each of these, but I’ll try not to bore you to death...
The IT industry generally views “variety” in terms of data sources (files, databases, web, etc.) and data types such as text, images and geospatial data. However, data within a scientific organization is unique, and includes chemical structures, biologics, plate data, sequences, and much more.
“Velocity”, or the rate of change, can be viewed in two parts. First, new data is added within scientific organizations every day, from instruments such as next-generation sequencers and from applications such as electronic notebooks. Second, new data is regularly made available through government resources such as the United Kingdom’s National Health Service and scientific data providers like Accelrys DiscoveryGate (shameless plug). Thus, the volume of new data accessible within an organization is growing rapidly, and businesses need to be able to access and analyze this data to achieve their objectives.
When you combine the variety of data with the velocity at which it is delivered, you get a sense of the enormous amount of data available to parse; and even a small dataset must still be parsed to uncover meaning. Beyond the total amount, you should also consider that individual data items are often large. For example, the human genome is about 3.1 gigabytes in one uncompressed file (FASTA format), and a complete sequencing of a human yields 100 GB (and up) per experiment. If chromosomes are stored in separate files, chromosome 1 is the largest at 253 megabytes uncompressed. Plant data can be even bigger.
Also, organizations that have deployed an Electronic Lab Notebook to more than 1,500 users can see data volumes beyond 2 terabytes.
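To make the scale concrete: files this large are normally processed as streams, never loaded whole. Below is a minimal, plain-Python sketch of walking a FASTA file record by record; the tiny stand-in file and sequence names are invented for illustration, and nothing here is Accelrys-specific.

```python
# Minimal sketch: stream a FASTA file record by record so that a
# multi-gigabyte genome never has to sit in memory all at once.
# The tiny file written below is a made-up stand-in for a real genome.
import os
import tempfile

def stream_fasta(path):
    """Yield (header, sequence) pairs without reading the whole file."""
    header, parts = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(parts)
                header, parts = line[1:], []
            elif line:
                parts.append(line)
    if header is not None:
        yield header, "".join(parts)

# Illustrative stand-in for a ~3 GB genome FASTA.
with tempfile.NamedTemporaryFile("w", suffix=".fa", delete=False) as f:
    f.write(">chr1\nACGTACGT\nGGCC\n>chr2\nTTTT\n")
    path = f.name

sizes = {header: len(seq) for header, seq in stream_fasta(path)}
os.remove(path)
print(sizes)  # {'chr1': 12, 'chr2': 4}
```

Because the generator yields one record at a time, peak memory is bounded by the largest single record rather than by the file size.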
Scientific Big Data and the Accelrys Enterprise Platform
Scientific Big Data reflects the expanded types of data that an organization must work with while ensuring that regulatory, privacy and security rules are adhered to. The Accelrys Enterprise Platform 9.0 and Accelrys Pipeline Pilot help organizations address the many Big Data initiatives they are pursuing.
The Accelrys Enterprise Platform 9.0 (AEP9) expands support for High Performance Computing (HPC), a requirement for all Big Data projects. Two HPC options are available: Cluster and Grid. Cluster deployments leverage a Map-Reduce technology geared towards organizations that require HPC capabilities without a large investment in grid infrastructure. Grid integration is available for customers who want to leverage their existing investments in a grid engine. Both options enable an organization to scale its infrastructure to meet required computing capacity.
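The map-reduce idea behind the Cluster option can be sketched in a few lines of plain Python. This shows only the general pattern, with a thread pool standing in for real cluster nodes; it is not Pipeline Pilot's actual API, and the record format is invented for illustration.

```python
# Generic map-reduce pattern: fan work out across workers (map),
# then merge the partial results into one answer (reduce).
# A thread pool stands in for the cluster nodes a real deployment would use.
from functools import reduce
from multiprocessing.dummy import Pool  # thread-based, same API as process Pool

def mapper(record):
    # Map step: compute a per-record partial result (here, sequence length).
    return {record["id"]: len(record["seq"])}

def reducer(acc, partial):
    # Reduce step: merge partial results into a single dictionary.
    acc.update(partial)
    return acc

records = [{"id": "s1", "seq": "ACGT"}, {"id": "s2", "seq": "GGCCTT"}]

with Pool(2) as pool:
    partials = pool.map(mapper, records)  # fan out across workers
result = reduce(reducer, partials, {})    # fan in
print(result)  # {'s1': 4, 's2': 6}
```

The appeal of the pattern is that the map step is embarrassingly parallel, so throughput scales with the number of workers while the reduce step stays simple.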
At the Accelrys Tech Summit in Brussels this year, IBM delivered a performance analysis session that used AEP9 with IBM’s GPFS (proprietary parallel file system technology) to handle scientific data and showed how I/O impacts computing resources. A link to the associated whitepaper is available on the IBM site and the session was so popular that we are hosting a webinar on this topic on September 5th.
British Telecom (BT) leveraged AEP9 clustering along with their Cloud Compute environment to mine the enormous dataset from the United Kingdom’s state-funded National Health Service (NHS). That data is unstructured, covering 55 million people in England and 3.5 million in Wales, with the number of data points in the region of 4 billion. With the power of a cloud computing infrastructure and the simplicity of design in Accelrys Pipeline Pilot, researchers can interrogate the NHS data against other unstructured or structured data at their disposal without having to build another data warehouse or data mart. This is one of the big benefits of addressing Big Data with AEP9. Read the press release.
In future posts, I will provide technical details on how Accelrys Enterprise Platform and Accelrys Pipeline Pilot enable Big Data capabilities including how other data repositories can be leveraged to benefit other Accelrys applications (ELN, LIMS, etc.).
Accelrys has a passion for enabling customers to solve their business challenges and drive scientific innovation. Innovation often starts with the desire to adopt and leverage new technologies presented by the market. With Microsoft Windows 8.1 following on the heels of Windows 8, Accelrys continues to evaluate our support for future product releases with Microsoft through our continuing partnership.
IMPORTANT NOTE: The information on the roadmap and future software development efforts is intended to outline general product direction and should not be relied on in making a purchase decision.
Microsoft Windows 8 delivers a wide range of capabilities, providing new scenarios for Accelrys applications to deliver compelling, innovative solutions to our customers on the desktop, in web applications, and across tablet and device connectivity.
A separate Tablet and Mobility Statement of Direction will be published to address these topics.
Many Accelrys applications are delivered as Windows-based applications. Microsoft Windows 8 and Windows 8.1 continue to support the desktop experience.
Accelrys products will support Microsoft Windows 8 and Microsoft Windows 8.1 on desktop environments.
Accelrys applications require testing and certification in both the Desktop and the new Windows UI environments to ensure that the applications meet the expectations of, and opportunities for, distinct end-user roles. Accelrys also continues to evaluate and adopt emerging web technology standards (e.g., HTML5) with a view to delivering solutions with rich and intuitive user interfaces that enhance user productivity.
Accelrys products will support Internet Explorer 10 on Microsoft Windows 8 and Internet Explorer 11 on Microsoft Windows 8.1.
Microsoft has also refreshed the entire User eXperience (UX) with the introduction of Windows 8. Some of the changes in UX are continuations of changes made within Windows Vista and Windows 7, while others are quite dramatic and new, including going from a “Start” button to a “Start” screen in Windows 8. While Windows 8.1 reintroduces the “Start” button, users are still presented with the new Windows 8 “Start” screen along with a new view showing all applications installed on that device.
These changes in UX can improve usability and productivity, which are major goals of Microsoft, but they can also require changes in the applications delivered by Accelrys. While these changes are not mandatory for Accelrys applications, Accelrys is looking to leverage these UX innovations to ensure a consistent and fluid UX for users.
There is a growing hardware trend towards high-definition display devices, including widescreen laptops, monitors, projectors and tablets. This trend has been accompanied by a shift in the default aspect ratio for applications; for example, Microsoft PowerPoint has changed its default from the traditional 4:3 aspect ratio to 16:9. Accelrys is taking this new aspect ratio into consideration when designing user experiences across all products and services.
Accelrys products will embrace the new user experiences available within Microsoft Windows 8 and Microsoft Windows 8.1.
Windows 8 provides an improved API set for touch-based applications. Devices that support a touch interface (e.g., Microsoft Surface, a touch-enabled monitor or laptop, or a mobile device) still work with applications that are not touch-enhanced, operating in normal “mouse” fashion. Touch-enhanced applications typically have larger buttons and adjusted layouts that present the touch experience in a way that improves productivity and usability. Accelrys applications utilizing mouse functionality will remain functional even in a touch-enhanced environment.
Accelrys will develop touch-based applications for Microsoft Windows 8 and Microsoft Windows 8.1 to meet appropriate user scenarios.
Windows 8 and Windows 8.1 also expand hardware connectivity to accommodate growing connectivity trends such as USB 3.0 and Near Field Communication (NFC). USB 3.0 enables communication up to ten times faster than USB 2.0 while using less power. NFC is a set of standards that allows devices within a few inches of each other to communicate using Radio Frequency Identification (RFID). With these standards in place and natively supported in Microsoft Windows 8 and Windows 8.1, there are great opportunities for innovative and collaborative applications in the scientific industry.
Accelrys will support device connectivity provided by Microsoft Windows 8 and Microsoft Windows 8.1 with alliances through Accelrys hardware partners.
Supported Applications & Solutions
To learn more about Accelrys applications and solutions supported on Microsoft Windows 8, Windows 8.1 or any other operating system or application, please view the latest “Accelrys Consolidated Support Matrix” which is available via the Accelrys Community.
As a chemist, I found organometallic chemistry one of my favorite areas of science because it combines the synthetic elegance of traditional organic chemistry with the rational ordering and geometric beauty of inorganic systems. In a way, it’s the best of both worlds. Plus, there is a lot of unknown and uncharted chemical space in this branch of chemistry - from novel catalysts for stereochemical control to metal-organic medical and chemotherapeutic reagents.
Many of these types of complexes feature coordination bonds between a metal and organic ligands. The organic ligands often bind the metal through a heteroatom such as oxygen, nitrogen or sulphur, in which case such compounds are considered coordination compounds; where a ligand forms a direct metal-carbon bond, the system is usually termed organometallic. These compounds are not new - indeed, many organic coordination compounds occur naturally. For example, hemoglobin and myoglobin contain an iron center coordinated to the nitrogen atoms of a porphyrin ring, and magnesium is the center of chlorophyll. In the bioinorganic area, vitamin B12 has a cobalt-methyl bond. The metal-carbon bond in organometallic compounds is generally intermediate in character between ionic and covalent.
One of the reasons this area of chemistry has been studied in such detail is that organometallic compounds are very important in industry: they are relatively stable in solution, yet polar enough to undergo useful reactions. Two obvious examples are organolithium and Grignard reagents.
Organometallics also find practical uses in stoichiometric and catalytic processes, especially processes involving carbon monoxide and alkene-derived polymers. All the world's polyethylene and polypropylene are produced via organometallic catalysts, usually heterogeneously via Ziegler-Natta catalysis.
Some classes of semiconductor materials are produced from trimethylgallium, trimethylindium, trimethylaluminum and related nitrogen, phosphorus, arsenic and antimony compounds. These volatile compounds are decomposed, along with ammonia, arsine, phosphine and related hydrides, on a heated substrate via the metal-organic vapor phase epitaxy (MOVPE) process or related deposition processes such as atomic layer deposition (ALD) and chemical vapor deposition (CVD), and are used to produce materials for applications such as light-emitting diode (LED) fabrication.
So why did I choose my title about these materials? Well, scientists in the UK recently made a dramatic breakthrough in the fight against global warming, and it involves sea urchins. A chance find by Newcastle University physicist Dr. Lidija Siller as she studied the sea creature is being hailed as a potential answer to one of the most important environmental issues of modern times. She found that sea urchins use nickel particles to harness carbon dioxide from the sea to grow their exoskeleton: they naturally complex CO2 out of their environment and, with natural enzymes, use it to produce harmless calcium carbonate materials. This was not historically deemed practical or possible, so the breakthrough has the potential to revolutionize the way carbon is captured and stored, while at the same time producing a useful material that is strong, light and water-resistant.
In other words, this process or the ability to mimic this process in industrial environments not only reduces carbon output but also produces a useful by-product. This discovery could significantly reduce CO2 emissions, the key greenhouse gas responsible for climate change.
Where Accelrys technology plays a key part is in understanding the organometallic structure - its geometry, energy barriers, electronic properties, and fluxional or dynamic behavior. Accelrys technology also allows screening of the options, choices and chemical modifications that can be made to the framework; such changes can enhance, stabilize or industrialize the natural system behind this recent discovery.
In the future, the application of virtual and predictive chemistry systems will have an elegant symmetry - the research and development of the catalyst which produces plastics and materials will be complemented by research and development in the remediation and end of life of those materials. Although plastics contribute to our carbon footprint, catalysts that create them can and should be used to reduce or sequester byproducts of those processes and materials. This will clearly increase the sustainability of the complete process.
As I hope most people know, there is no point in storing data unless you can either search it or use it for model building and analytics. That is why at Accelrys we have worked very hard to develop a data-source-agnostic platform that can reach into multiple and diverse data sources, as well as provide extensive intrinsic and extrinsic analytic and reporting capabilities.
One of the pivotal decisions we made was both to develop our own algorithms and to integrate with the R statistics package in an extensible way. R has seen incredible growth over the last four years, so it is very heartening to read Robert A. Muenchen’s post about the rate and technical richness of the science and mathematics that has recently been incorporated into the R suite.
Simply put, R’s 2012 growth in capability exceeds SAS’s all-time total as of March 2013. I know the two packages are not directly comparable, so this comment is perhaps a little aggressive. But the fact remains that the growth of R (see below) has been so dramatic that it now clearly provides a technical richness that other leading statistical packages are starting to envy. I believe that combining this technical diversity, innovation and richness with the data handling, parsing and reporting capabilities of the Platform represents a significant leap in statistics- and analysis-led experimentation for science-differentiated companies.
It began as a typical visit with a customer to learn more about their goals and desires, and to help explain how our product roadmaps aligned with that vision. I would gather data on their priorities and their challenges, and hopefully draw some conclusions that would lead to a better software application, even if only incrementally better. This meeting became a lightning bolt, however, completely rearranging my view of the product we call the Experiment Knowledge Base (EKB). I knew EKB could help customers make use of data locked into aging LIMS systems that didn’t allow scientists to search in the ways they wanted to. I knew EKB could help them manage and execute experiments. Yet I hadn’t made the connection to the big picture before. I never recognized the enormous opportunity to help our customers make a “step function” improvement in their product development effectiveness.
Our customer listened patiently to the presentation, which explained how our latest features would make everything just a little better than before. Then they explained what they really needed. They walked us past impressive server rooms housing massive disks full of scientific data of every imaginable type - X-ray diffraction experiments, gas chromatograph data, plant & process data, and more. They said the investment made to collect that data – that ASSET – was tens of millions of dollars. High tens of millions. Yet the investment was almost worthless. While the people who ran those experiments had undoubtedly learned from them, their organization hadn’t retained any knowledge whatsoever. Now they had a vision for transforming that data into knowledge; knowledge that would inform how to make better products, faster and less expensively. Sounds like a quest for the Holy Grail – but they also had a plan.
The plan would allow researchers to search their entire data archives, using any sort of descriptor possible, and then outline their product and process variables against performance, cost, energy consumption, and all other measures of “goodness” to determine which are correlated and important. And finally, to build mathematical models that allow them to predict which processes, ingredients, and conditions combined will be optimal. That would make them one of the first true learning organizations I’ve encountered. It would enable them to escape the trap of repeating experiments, simply because they can’t find the original results and associated metadata. IDC Manufacturing Insights figures that up to 40 percent of all experiments are repeated for this reason; the potential savings is enormous. And it would help to narrow their R&D focus to the most productive experimentation pathways, accelerating the time required to transform raw materials into valuable products that generate revenue and profits.
That meeting was an epiphany; I realized we had an opportunity to do something profound for our customers. It won’t be easy. It won’t happen overnight. But we’re going to do it, and I’m thrilled to tell you that today we took a major step with the first product launch of EKB. Built on the hard work, brains, and talent of some of our best professional services staff, scientists, and customers, EKB is truly a game-changer in scientific informatics.
Over the past five years, one of the fastest-growing areas of research has been the development of novel biologic therapeutics based on second- and third-generation antibody technologies. These are antibodies that utilize various engineering and optimization technologies to derive improved clinical properties, such as enhanced antigen binding, effector function and clinical safety. Bolstered by increasing drug approvals from the FDA in recent years [i], combined with improved patent protection rights [ii], biotherapeutics are now widely viewed as one of the most valuable opportunities for the pharmaceutical industry. Indeed, some analyst forecasts project that biotherapeutics will comprise 50% of the top 100 drugs by 2018 [i],[iii].
With such promise and competition in biotherapeutics, efficient development is increasingly an essential prerequisite. However, their development is non-trivial and presents a number of unique challenges. For example, unlike small-molecule drugs, storage and administration of biotherapeutics typically requires high-concentration solutions, which demand a specific biophysical profile including solubility, thermal and chemical stability, and low aggregation propensity. While it is possible to test for many of these properties and characteristics, the testing is typically time-consuming, expensive, and dependent both on experimental methods and on cell lines to express the biologic material. This has become a major bottleneck for the industry.
Increasingly, the pharmaceutical industry is taking a lesson from the small molecule drug discovery process and applying predictive, or in silico methods, to help identify and optimize the best biologic leads early on. When applied in conjunction with experimental procedures, predictive methods can not only help prioritize the selection of biologics; they can also identify potential undesirable properties long before any material has even been expressed.
One important area where predictive methods are being applied is protein maturation. For example, understanding the effects of mutation on protein binding affinity can help guide experiment design and reduce the time spent in maturation. At Accelrys, we’ve recently published a paper detailing a new in silico mutagenesis algorithm that can accurately calculate the effect of mutation on protein-protein binding. Uniquely, our method can do this as a function of pH [iv]. Experimental studies have previously shown that small changes in solution pH can have a significant effect on the binding affinity of protein complexes [v],[vi]. Usefully, this same pH-dependent binding profile can be exploited in antibody engineering to improve the serum half-life of therapeutic antibodies by increasing their binding to FcRn [vii],[viii].
As part of the validation studies, we considered the experimentally measured pH-dependency of the effect of mutations on the dissociation constants for the complex of turkey ovomucoid third domain (OMTKY3) and proteinase B. The predicted pH-dependent energy profiles showed excellent agreement with the experimental data [Figure 1] (further details are available in the paper). Based on the full validation studies, we believe the method should be reliable enough to be useful for initial screening to find candidate antibodies for further experimental study.
Figure 1. Binding of the OMTKY3 inhibitor to proteinase B: pH-dependence of logR, derived from the calculated mutation energies shown in Figure 1A. R is the ratio of the binding constants Ka(Glu18)/Ka(Gln18) (red line), Ka(His18)/Ka(Gln18) (blue line), and Ka(Leu18)/Ka(Gln18) (green line). The triangles, circles, and squares represent the experimental logR values obtained in the study of the pH dependency of OMTKY3 binding to proteinase B [vi].
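To connect the figure’s logR axis back to calculated energies, recall that a change in binding free energy relates to the ratio of association constants via ΔΔG_bind = -RT·ln(Ka_mut/Ka_wt), so logR = -ΔΔG_bind/(2.303·RT). A small sketch of that conversion follows; the example energy value is invented for illustration and is not data from the paper.

```python
# Convert a calculated change in binding free energy (ddG, kcal/mol)
# into logR = log10(Ka_mut / Ka_wt), the quantity plotted in Figure 1.
# The example energy below is made up purely for illustration.
import math

R_KCAL = 0.0019872  # gas constant in kcal/(mol*K)

def log_r(ddg_bind, temp_k=298.15):
    """logR from ddG_bind = -RT * ln(Ka_mut / Ka_wt)."""
    return -ddg_bind / (math.log(10) * R_KCAL * temp_k)

# A binding-destabilizing mutation (ddG = +1.36 kcal/mol) weakens
# association by roughly a factor of ten at room temperature:
print(round(log_r(1.36), 1))  # -1.0
```

The useful rule of thumb this encodes is that roughly 1.4 kcal/mol of binding energy at room temperature corresponds to one order of magnitude in the association constant.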
The pH-dependent mutational analysis tools are one example of the new generation of in silico algorithms being developed to address the need to predict biologic physical (biophysical) properties. Today the development and validation of these novel biophysical property prediction methods is arguably one of the most exciting and active areas of innovation in predictive science.
[i]. ‘EvaluatePharma: World Preview 2018 Embracing the Patent Cliff’, EvaluatePharma, 2012.
[ii]. ‘Patient Protection and Affordable Care Act (PPACA)’, Public Law 111‑148, 124 STAT. 119, 111th Congress, 2010.
[iii]. Strohl W.R., Strohl L.M., ‘Therapeutic Antibody Engineering: Current and Future Advances Driving the Strongest Growth Area in the Pharma Industry’, Woodhead Publishing Ltd., 16 Oct., 2012, ISBN-10: 1907568379, ISBN-13: 978-1907568374.
[iv]. Spassov V.Z., Yan, L., ‘pH-Selective mutagenesis of protein-protein interfaces: In Silico design of therapeutic antibodies with prolonged half-life’, Proteins, 2013. DOI: 10.1002/prot.24230
[v]. Schreiber G., Fersht A.R., ‘Interaction of barnase with its polypeptide inhibitor barstar studied by protein engineering’, Biochemistry, 1993, 32(19), 5145–5150. DOI: 10.1021/bi00070a025
[vi]. Qasim M.A., Ranjbar M.R., Wynn R., Anderson S., Laskowski M. Jr, ‘Ionizable P1 residues in serine proteinase inhibitors undergo large pK shifts on complex formation’, J. Biol. Chem., 1995, 270, 27419–27422. DOI: 10.1074/jbc.270.46.27419
[vii]. Roopenian D.C., Akilesh S., ‘FcRn: the neonatal Fc receptor comes of age’, Nat. Rev. Immunol., 2007, 7, 715–725. DOI: 10.1038/nri2155
[viii]. Dall’Acqua W.F., Kiener P.A., Wu H., ‘Properties of human IgG1s engineered for enhanced binding to the neonatal Fc receptor (FcRn)’, J. Biol. Chem., 2006, 281, 23514–23524. DOI: 10.1074/jbc.M604292200
The trend towards Big Pharma dismantling their R&D organizations and outsourcing drug discovery and development instead is clear. A dynamic mix of new smaller players will now take over the innovative experimental workflows and deliver results to larger sponsoring organizations that will primarily focus on clinical trials and marketing. However, this raises some concerns regarding both patenting and quality issues, as well as efficiency. Those of us who have experienced Big Pharma from the inside can easily see that small innovators could outperform the dinosaurs because of their greater flexibility, agility and speed. What is more worrying is that some of these gains are sometimes made at the expense of quality. Few small organizations actually have the infrastructure in place to properly store, manage and retrieve their research data in a way that fulfills the requirements of Big Pharma.
There is another twist. Given improvements in today’s technology, small players can now access informatics systems that previously were too big, complex and expensive for them to consider. This not only means they can meet their customers’ quality standards; they can also improve their own internal efficiency. Keeping track of assay data in relational databases is actually a lot smarter than using an Excel spreadsheet sent via e-mail. Recording all experimental details in electronic lab notebooks enables quick shipping of results to research partners, small or large.
While many large pharma companies have finally implemented ELNs and data warehouses, a surprisingly large portion still have not. It is not difficult to understand why. Any infrastructure change is difficult in a large organization, and the level of detail in research informatics systems can be overwhelming. This provides an opportunity for small biotechs and CROs. They could actually become leaders in their areas, not just by providing high quality results to their customers but also by becoming more competitive through the use of next-generation informatics solutions and lab automation. The cloud-based informatics systems now available are making this a reality at a low cost.
We recently created a microsite dedicated to Enhancing the Productivity of Externalized Research Networks, where we share some shocking industry stats and address some of the challenges these trends are presenting. Are you facing any of these challenges? I'd welcome any insight into how your organization is managing this growing trend of externalized research.
Good things come in pairs. Today at Bio-IT World Accelrys made a couple of announcements that will be good news for life science research organizations involved in collaborative research. First, we announced the new cloud-based Accelrys Externalized Collaboration Suite that helps pharma, biotech, academic and CRO network partners work together more efficiently and effectively. We also announced the move of the Accelrys HEOS collaboration workspace, which is a key component of the Suite, to the secure, reliable BT cloud environment.
As Matt Hahn wrote in his blog a week ago, the pharma industry is moving more and more from the traditional in-house R&D model to an externalized research model to lower costs, lessen risk and innovate faster. The downside of externalized collaboration is that it tends to increase organizational and operational complexity and put a strain on an IT infrastructure that was not typically designed to handle the network research paradigm. Companies risk losing their innovation gains as a result of the operational and execution issues created by collaborative network research.
The Accelrys Externalized Collaboration Suite meets the challenges of this new networked research landscape. It enables seamless technology transfer, secure, real-time collaboration and IP protection for all participants. It also facilitates simple compliance with sponsors’ data standards by CROs, biotechs, academic groups and others partnering with the sourcing organizations. The solution moves partner networks up the maturity curve away from reliance on emails, SharePoint portals and/or depositing CRO data directly into in-house systems. The result is a more intelligent, integrated informatics workspace in the cloud built from the ground up and already proven to meet the needs of scientists working in externalized research networks.
The Suite consists of Accelrys HEOS, which provides a real-time collaboration workspace for all scientific research data; the cloud-based Accelrys Contur ELN for experiment documentation and exchange; and the Accelrys Enterprise Platform that provides the conduit for a seamless transfer of data across the network and between partners and the cloud systems.
With the new solution, partners can work together in a real-time collaboration space. Fine-grained security means each partner accesses only what they are entitled to see. IP is stored in a secure, project-specific data warehouse for transfer into in-house systems when appropriate (and allowed by the collaboration agreement), and sponsor company business rules are easily published to partner organizations to help them easily meet agreed data standards. Finally, the Suite enables agile and dynamic research networks that can be spun up and down quickly and securely without disrupting ongoing operations. Participating scientists only need an Internet connection and a password to gain instant access to shared scientific information, tools and workflows in the cloud.
The Accelrys Externalized Collaboration Suite enables organizations to realize the full potential of innovation gains as they move to networked research.
See the new Accelrys Externalized Collaboration Suite in action in this video clip or visit our microsite to learn more.
The trend towards externalization in pharma is prompting a plethora of acronyms today, usually a pretty good indicator that a trend is real and people are talking about it. Case in point: the pharma industry is rapidly moving from its traditional in-house R&D models to an externalized research network model. AstraZeneca have described their change as moving from being a fully integrated pharmaceutical company (FIPCO) to using a fully integrated pharmaceutical network (FIPNET) model; others have even considered going to virtually integrated variations on the theme (VIPCO/VIPNET).
The acronym trail is really about pharmas moving large chunks of their discovery process from inside their firewalls to networks of third-party collaborators. For example, compound development gets handled by one Contract Research Organization (CRO) and ADME/TOX by another with a healthy mix of academic researchers with unique expertise also drawn into the net.
Today’s increased externalization is in response to both cost pressures and perceived innovation deficits as pharmas navigate their respective patent cliffs. They all have different endpoints in mind depending on their business needs, but the externalization trend is unmistakable. David Brennan, ex-CEO of AstraZeneca, has said: “…externally we do a lot more now than we were doing five or six years ago. Our product pipeline is now up from 5% about six years ago to close to 40% from external technology. And that’s a big, big shift.” Christopher Viehbacher, CEO of Sanofi, has said: “Right now, it’s about a 70/30 ratio between internal to external. My objective is to bring that to about a 50/50 balance.”
So the rush to externalized collaboration is happening—and it’s happening fast. According to Current Agreement, there were 1,500 life science partnering alliances with biotech companies between 1997 and 2002. Last year there were almost 3,500 of these partnering deals.
A critical aspect of today’s research partnership networks is that they’re not one-to-one relationships; it isn’t a pharma simply outsourcing tasks to a partner and getting results back. Instead there are usually multiple partners within single research projects. We did a survey last year with Cambridge Healthtech Institute (CHI) to get a sense of what the partnership networks look like. Already 74% of projects across our sample of over 300 organizations involve two or more CROs, and almost 20% involve four or more. Twenty-five percent of the organizations we polled expect their partner network to increase. In one-to-one relationships, organizations can just exchange files and emails with one another. When a network expands to four or five partners, something more sophisticated is needed to work together efficiently and effectively.
Research IT especially is struggling with the informatics challenges of the new network research paradigm. With 30% of all pharma R&D spend already externalized and growing at 20% year-on-year, emails and e-rooms just aren’t cutting it, and giving partners VPN access to internal systems poses serious issues around security, information access and safeguarding IP.
In a recent survey, only two of the top-20 pharma companies felt they had a good collaboration data management system in place. What are your thoughts on this “acronymical” topic? How is your organization responding to the challenging business changes and opportunities around collaborative network research?
Accelrys will be making a major announcement about our launch of an exciting new externalization solution at Bio-IT World 2013. If you’re at the conference, stop by booth #301 to learn more. Join Fred Bost’s workshop "New Informatics Model to Leverage Collaborative Network Research" in the IT & Informatics in Support of Collaboration and Externalization pre-conference workshop on Tuesday, April 9th from 8:00 am - 11:30 am (registration required) and attend Dr. Rob Brown’s presentation "Solving the Informatics Challenges in Collaborative Network Research" in the Drug Discovery Informatics Track on Wednesday, April 10th – 12:15pm.
Collaborations & Communications within Drug Discovery Research Survey, Cambridge Healthtech Media Group, Jan 2012.
Many of us are familiar with the concept of the ‘patent cliff’ associated with multi-billion dollar blockbuster drugs coming off patent [i], and with how it is driving change across the industry. However, cutting operating costs is just one half of the story. Improving the attrition rates of drugs in clinical trials is just as crucial. As Andrew Witty, CEO of GSK, recently put it: “If you stop failing so often, you massively reduce the cost of drug development” [ii].
A high rate of drug attrition in late-stage clinical trials is a costly issue for the industry. Arguably, drug safety issues, in the form of adverse drug reactions (ADR), are seen as one key factor contributing to these high attrition rates [iii]. Moreover, not only can safety issues affect a drug in development, but they can also limit the use of approved drugs, or even lead to withdrawal from the market, if discovered after launch. Fortunately, many adverse drug reactions are dose dependent [iv],[v] and can be detected ahead of time using bioactivity and toxicity studies. For example, in vitro methods associated with absorption, distribution, metabolism, excretion and toxicity (ADMET) studies can identify potential pharmacological and drug toxicity issues well in advance of in vivo studies or clinical trials. But it is not just in vitro experimental methods that are being utilised today. In the early stages of drug discovery, where much larger numbers of compounds are typically investigated, experimental methods are being supplemented with the application of complementary prediction models (often referred to as in silico models). Such models are very fast, low-cost and are often sufficiently predictive to meet early-stage discovery requirements. It is here, in the field of in silico prediction modelling, that the industry is undergoing something of a revolution.
Until recently, most in silico ADMET models could only be developed by organisations that possessed the resources to compile sufficiently large bioactivity and toxicity datasets. In contrast, for smaller organisations and biotechnology companies (biotechs), development of such models was simply not feasible; they typically did not have access to appropriate sets of screening data. Today, this limitation is offset by the emergence and rapid adoption of publicly accessible databases such as ChEMBL and PubChem, and platforms that integrate chemical and pharmacology data, such as eTOX and Open PHACTS. What is most interesting is that it is not just the small biotechs that are adopting these resources into their work practices. Large pharma are actively utilizing them too. This is particularly interesting, as there was a time when they wouldn’t have considered the use of such resources; perceived views on data quality and homogeneity with respect to their in-house data sets meant that consolidating the public databases was not considered worthwhile.
The Open PHACTS project is particularly notable and one that Accelrys is actively collaborating on. The objective of the Open PHACTS project is to deliver an online platform, integrating pharmacological data from a broad range of sources, to encourage innovative in silico model development. For example, combining pharmacological data for both target and off-target effects can flag possible adverse side effects, or even identify prospective new drug targets. The three-year project from the Innovative Medicines Initiative (IMI) is described as ‘a unique partnership between the European Community and the European Federation of Pharmaceutical Industries and Associations (EFPIA)’ [vi]. It currently includes a diverse consortium of 28 partners, including industry, academia and software vendors. For its part, Accelrys has the unique combination of the market-leading data pipelining application, Pipeline Pilot, an extensive portfolio of cheminformatics and interactive data analysis tools, and access to a diverse range of in-house and external data sources, such as the Thomson Reuters Cortellis™ Collection. By collaborating with members of the Open PHACTS developer community, Accelrys is helping to ensure that the Open PHACTS API can be integrated with Pipeline Pilot, enabling customers and partners to rapidly benefit from the Open PHACTS platform.
Recently, Accelrys hosted a ‘hackathon’ at the Cambridge offices to provide members of the Open PHACTS developer community with an opportunity to work directly with Accelrys Pipeline Pilot experts. The objective of the day was to kick-start the process of developing a series of Pipeline Pilot components that utilized the Open PHACTS API. The day proved to be very successful with a number of API-based components and protocols produced. Sitting in the back of the room, I was fortunate enough to see the group present some of their results. A couple of them stood out as particularly interesting because of the opportunity they afforded users to search for potential ‘off-target’ activities.
One interesting example was by Chau Han (Chris) Chau from GSK, and was based on the recent joint paper by AZ, GSK, Novartis and Pfizer [vii]. In this publication, the four pharmaceutical companies collaborated in a pre-competitive study to identify a ’minimal screening panel’ of in vitro pharmacological targets, likely to help make an assessment of potential drug safety issues early on. In total, 44 targets were recommended. At the hackathon, Chris developed a protocol that searched Open PHACTS for active compounds for a development target of interest, and then searched each of these in turn to identify whether they possessed pharmacological information linking them to members of this ‘minimal screening panel’. The protocol workflow (Figure 1) starts off by first retrieving active small molecule compounds to a target of interest and then retrieves pharmacology information for each returned entry. These pharmacology profiles are then parsed to see if any member of the cited ‘minimal screening panel’ is reported. If you would like to learn more and are an Accelrys Community member, you can download the latest version of Chris’ protocol here.
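In outline, the logic of Chris’ protocol can be sketched in a few lines of Python. The function names and toy pharmacology data below are hypothetical stand-ins (the real protocol calls the Open PHACTS API from within Pipeline Pilot), but the filtering idea is the same: find actives against your target, then check each one’s pharmacology profile against the safety panel.

```python
def find_active_compounds(target, pharmacology_db):
    """Return compounds recorded as active against the given target."""
    return [c for c, targets in pharmacology_db.items() if target in targets]

def flag_panel_hits(compounds, pharmacology_db, screening_panel):
    """For each compound, report any member of the safety screening panel
    that appears in its pharmacology profile."""
    hits = {}
    for compound in compounds:
        off_targets = pharmacology_db[compound] & screening_panel
        if off_targets:
            hits[compound] = sorted(off_targets)
    return hits

# Toy pharmacology data: compound -> set of targets with reported activity
PHARMACOLOGY = {
    "cpd-1": {"MyTarget", "hERG"},
    "cpd-2": {"MyTarget"},
    "cpd-3": {"MyTarget", "hERG", "5-HT2B"},
}
# A small fragment standing in for the 44-target minimal screening panel
PANEL = {"hERG", "5-HT2B", "D1"}

actives = find_active_compounds("MyTarget", PHARMACOLOGY)
alerts = flag_panel_hits(actives, PHARMACOLOGY, PANEL)
# alerts -> {"cpd-1": ["hERG"], "cpd-3": ["5-HT2B", "hERG"]}
```

In the real protocol, the dictionary lookups are replaced by API queries against the Open PHACTS knowledgebase.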
Figure 1. Chris Chau’s workflow for searching for off-target pharmacology
The second protocol example that caught my attention was by Jean-Marc Neefs from J&J. His idea was to use active compounds from near neighbour protein targets as a starting point to screen a new protein target for which you did not already have suitable active hits. The workflow (Figure 2) works by first entering target information for the new ‘unknown’ protein and then searching the Open PHACTS knowledgebase for its sequence information. This is then submitted to NCBI BLAST to search for similar protein sequences. The UniProt accession numbers are returned for the top 20 hits and are then used in turn to search for corresponding entries back in the Open PHACTS knowledgebase. The associated pharmacology information on any active compounds returned is then compiled. This set of compounds presents a potentially valuable starting point to, for example, search for commercially available compounds, or search in-house screening collections to then build up a panel of compounds to screen the unknown target. Again, if you are an Accelrys Community member, you can download the latest version of Jean-Marc Neefs’ protocol here.
Figure 2. Jean-Marc Neefs’ workflow for finding pharmacology information for similar proteins to a new target with no known active compounds.
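The near-neighbour idea behind Jean-Marc’s workflow can also be sketched independently of the Open PHACTS and BLAST services. The crude identity score and toy data below are stand-ins for the real sequence search, purely to illustrate the flow: rank known targets by similarity to the new target’s sequence, then pool the neighbours’ known actives as a candidate screening set.

```python
def rank_similar_targets(query_seq, known_targets, top_n=3):
    """Stand-in for the BLAST step: rank known targets by a crude
    position-wise sequence-identity score against the query."""
    def identity(a, b):
        matches = sum(x == y for x, y in zip(a, b))
        return matches / max(len(a), len(b))
    ranked = sorted(known_targets,
                    key=lambda t: identity(query_seq, known_targets[t]),
                    reverse=True)
    return ranked[:top_n]

def compile_start_points(neighbours, actives_by_target):
    """Pool the known actives of the neighbour targets into a candidate
    screening set for the new, 'unknown' target."""
    pool = set()
    for target in neighbours:
        pool |= actives_by_target.get(target, set())
    return sorted(pool)

# Toy data: target sequences and their known active compounds
SEQS = {"T1": "MKTAYIAKQR", "T2": "MKTAYIAKQQ", "T3": "GGGGGGGGGG"}
ACTIVES = {"T1": {"cpd-A", "cpd-B"}, "T2": {"cpd-B", "cpd-C"}, "T3": {"cpd-Z"}}

neighbours = rank_similar_targets("MKTAYIAKQR", SEQS, top_n=2)
candidates = compile_start_points(neighbours, ACTIVES)
# candidates -> ["cpd-A", "cpd-B", "cpd-C"]
```

The real workflow substitutes an NCBI BLAST search for the identity score and Open PHACTS pharmacology queries for the actives lookup, but the compile-and-pool step is the same.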
Following the hackathon, Accelrys has created a new OpenPHACTS group page on its customer forum site, Accelrys Community. The aim is to provide a central location for Open PHACTS developers to share and collaborate on the development of Pipeline Pilot components and protocols. The Open PHACTS API mentioned above will officially be launched to the wider community at the upcoming Open PHACTS Community Workshop in London, on April 22-23. If you are interested in attending, a copy of the meeting agenda is available here.
Open PHACTS is one example of how public or open-access data is increasingly being used routinely by the pharmaceutical industry. Indeed, the potential uses are both novel and diverse and are applicable to many aspects of rational drug discovery. Examples include using protein-ligand interaction networks [viii] and using pharmacology profiles to automatically design novel ligands, a priori, with specific multi-target profiles [ix]. Today, if the industry really does “want to stop failing so often”, one way might be to start using all the data available from the outset.
[vii]. Bowes B., Brown A.J., Hamon J., Jarolimek W., Sridhar A., Waldron G., Whitebread S., ‘Reducing safety-related drug attrition: the use of in vitro pharmacological profiling’, Nature Rev. Drug Disc., 2012, 11, 909–922. DOI: 10.1038/nrd3845
[viii]. Metz J.M., Hajduk P.J., ‘Rational approaches to targeted polypharmacology: creating and navigating protein-ligand interaction networks’, Current Opinion in Chemical Biology, 2010, 14, 498–504. DOI: 10.1016/j.cbpa.2010.06.166
[ix]. Besnard J., Ruda G.F., Hopkins A.L., et al., ‘Automated design of ligands to polypharmacological profiles’, Nature, 2012, 492, 215–222. DOI: 10.1038/nature11691
Last week the world changed. Well, at least the United States-centric part of the world changed, and that's quite a fiscally significant part. What the Leahy-Smith America Invents Act changed was the premise behind US patenting rules and regulations. True, this change has been in the works since last September, but now the change is law, so it's a real situation and not just an academic exercise. Simply put, the USA moved from a constitutionally enshrined "first-to-invent" basis, under which the patent goes to whoever invented a product first, to the simpler "first-to-file" approach used by most of the rest of the world: http://en.wikipedia.org/wiki/First_to_file_and_first_to_invent.
What this means is that the focus on dates and on proving that one individual had an idea first, and the consequent need to record date stamps, time stamps, ownership and access rights, becomes less critical. What replaces it is, to me, the really interesting thing. Under first to file, i.e. the approach in which whoever gets to the patent office first with their paperwork wins, speed, agility and information performance become significant factors. This is a big change and is clearly having an effect on both the corporate psyche and organizations' behavior and plans. Historically, a patent granted in the U.S. would have gone to the first to invent the claimed subject matter. Where separate inventors were working on similar inventions at the same time, the "first-to-invent" system could sometimes result in interference proceedings at the USPTO to determine who was the first to come up with the claimed invention.
Under the new rules, with the USA as a "first-to-file" country, the novelty provision of the Patent Act will change to create an absolute bar to patentability if the claimed invention of a patent application was "patented, described in a printed publication, or in public use, on sale, or otherwise available to the public" anywhere in the world before the patent application’s effective filing date. This is a major transformation in at least two respects:
First, with respect to activities outside the U.S.: before the change, the law recognized only foreign patents and printed publications as bars to patentability. Other activities, such as public use, sale or other "public availability" outside of the U.S., were not considered. After the Act, that will no longer hold true.
Second, because the law before the change permitted applicants in some situations to prove that they came up with their inventions before the effective date of a cited patent or printed publication, there was recourse if another entity independently developed the same invention after you, but beat you to the USPTO with a patent application. As of March 16, 2013, what will matter is when you filed your patent application, and not when you came up with your invention.
So this now drives the need for significantly enhanced and upgraded competitive intelligence analysis, and also document production: http://www.innography.com/blog/?p=606 The former is driven by the change from a test of externally patentable publications to a test of "public availability" of information. There is now a clear need to be aware of what, how, why and where other companies are working, inventing and producing, since public availability could invalidate a corporate patent that has had a significant amount of research behind it. Secondly, there is the document production arms race. Why such a term? Well, just think about it: say a corporation had been working for a number of years on a breakthrough technology. They had spent significant money on the work, only to see a competitor scoop them for the patent, and the whole opportunity and area, because the competitor was more agile and had better informatics processes.
The main and usual response to this is to file a series of defensive or encircling patents around the original domain of intellectual property. This then becomes a document production exercise. The corporation will seek to retrieve all data and information, for example spectra, pertinent and relevant to the filing; generate the legal document as quickly as possible; refine and collaborate on it; and then pass the whole package through their legal process to the patent office.
To be clear, most companies outside the pharmaceutical space do not today have a sufficiently mature innovation and scientific information management process, or the digital backbone, to do this. Hence the problem. These companies have invested heavily, driven by business need, in their development, production and supply-chain processes. However, they have not invested in the same processes and tools, i.e. informatics capabilities, for research, because research data types are highly convoluted and complex. This is why the concept of scientific information lifecycle management (SILM) has emerged and is growing. Solutions that manage the scientific lifecycle, such as a data-source-agnostic platform to connect disparate workflows and tools, an easy-to-use scientific electronic laboratory journal and notebook, and specialised process-centric laboratory execution and management systems, are needed even more in light of the recent Act.
So again, the world has changed, but in change are seeds of new and exciting growth and technical challenges. What are your thoughts on the opportunities and challenges ahead?