
Accelrys Blog

October 2011

Turning Over Rocks to Find Oil

Posted by mdoyle Oct 13, 2011

As many of you have noticed, the price of petrol or gasoline is rising. This is largely because we have reached, or passed, the peak of easily exploitable oil reserves. Some fields become economically producible only at certain oil prices, but the bulk of the gasoline and other petrochemical products we consume now comes from increasingly costly and difficult reserves. Finding, locating, quantifying, producing and then operating oil reserves is a complex, multi-disciplinary process. A huge amount of science and technology is devoted to the processing and analysis of seismic and geo-seismic data; here, however, I am focusing on the drilling, evaluation, completion and production stages of extracting oil and gas from shale.

 

Shale is the most abundant of the rock types in which oil is found, and it is commonly considered the source rock from which the majority of the oil and gas in conventional reservoirs originated. Shale is a sedimentary (layered) rock composed of thin layers of clay or mud. There are many variations in shale geology, geochemistry and production mechanisms; these variations occur between wells and even within the same shale area. Each shale deposit is dissimilar from the next and presents its own set of technical challenges and learning curves. That said, shale gas was recently characterized as a "once-in-a-lifetime opportunity" for U.S. manufacturing by Bayer Corp. CEO Gregory Babe.

 

Shale is unique in that it often contains both free and adsorbed gas. This results in initially high production rates, a quick decline curve, and then long-term steady production. Formed millions of years ago by accumulations of silt and organic debris on lake beds and sea bottoms, the oil substances in oil shale are solid and cannot be pumped directly out of the ground. To be produced effectively, oil shale is first mined and then heated to a high temperature. An alternative, currently experimental process known as in situ retorting heats the oil shale while it is still underground and then pumps the resulting liquid to the surface. When producing gas from a shale deposit, the challenge is to open up and enhance access to the gas stored within the shale's natural fracture system.

 

The complexity of the materials' interactions that occur sub-surface, and the demands of the system as a whole, require that formulators and product designers in the service companies providing drilling fluids, muds and completion fluids investigate all the complex interactions and trade-offs across formulation, recipe and ingredient space.

 

Categories: Materials Informatics, Data Mining & Knowledge Discovery, Modeling & Simulation, Electronic Lab Notebook, Trend Watch Tags: pipeline_pilot, materials-studio

In a recent article, David Axe of Wired Science rightly points out many of the pros and cons and bids farewell to the NASA shuttle program, fondly calling it "the most visually impressive high-tech boondoggle in American history".


The space shuttle, like any iconic piece of engineering and science, has a finite life and operating span. That the shuttle flew so well, for so long and in such extreme situations was a tribute to the legion of aerodynamics, space engineering, production, aeronautical materials and chemical engineers and scientists behind it. Just consider for one second the conditions the shuttles endured on re-entry: airflow, i.e., drag, across the shuttle's surface at over Mach 5, creating incredible forces that stress the surface, the tiles and the materials.


Further, this flow challenge for the shuttle's materials happens simultaneously with other challenges, such as plasma temperatures over 5,000 degrees Celsius and the transition from vacuum to atmosphere. That is a pretty tough environment for anything to cope with, let alone fly afterwards. The fact that we as a species can conceive, design and build such a system and the materials that constitute it is a tour de force of science.


This is just part of the amazing heritage the shuttle leaves behind. There is also the incredible materials science that went into the leading-edge coatings, the tiles, the thrusters, the propellant and, yes, even the solid rocket boosters. Remember that each tile was made, tested, coated and affixed in place individually. Consider the heat cycle that the leading edge of the shuttle goes through, and how remarkable our chemical understanding of heat-mediated grain segregation and zone diffusion is. These parts display a visible rainbow of beautiful colors due to the effect that extreme heat and pressure have on the alloy structure. Being able to predict and understand the performance of these parts within stringent space-flight safety limits is again a triumph of materials science and engineering.


So the shuttle, rather than representing a failure or waste of engineering time, truly represents the triumph of our materials science and engineering innovation.

 

As we look forward to the next decade and the new Multi-Purpose Crew Vehicle (MPCV), derived from NASA's Orion design, there are many new challenges for longer-duration missions.


First is thermal protection for a larger re-entry vehicle. With more astronauts, the vehicle is enlarged and consequently requires more deceleration on atmospheric re-entry, and hence more heat to be dissipated. Next, of course, is the radiation encountered on a long-haul or extended-duration mission to Mars, which requires advanced shielding to protect both the crew members and the sensitive command and control subsystems in the MPCV. Further, the attitude control and motor systems have to keep functioning after long idle periods at temperatures ranging from 100 degrees below zero to baking in the sun. And finally, the MPCV will fly different missions, in different orbits and of different durations, which requires a degree of flexibility without compromising the system's capabilities, its fundamental safety profile or its performance.


For example, one might consider the use of advanced graphene-based or buckytube (carbon nanotube) composites, such as those used in the new generation of commercial aircraft. The trade-off is that the use of composites in exposed areas is complicated by the atomic oxygen present in certain orbits. This atomic oxygen is known, as Dr. Lackritz of Lockheed Martin put it, to "chew up Bucky tubes and graphene" at an amazing rate, which raises the design challenges for these materials considerably.


If you then think about the next generation of solid rocket boosters and attitude control systems, these too need higher performance. They need to be smaller, operate and survive longer (i.e., resist space radiation for longer) and have a lower environmental impact; a long list of somewhat contradictory challenges.


So the demand and need for advanced materials will inevitably continue. I would be so bold as to say it will increase as our appetite for faster low-orbit communications, quicker travel and even space tourism takes off. This is the true nature of the challenge we face: increasing our expertise in materials and advanced processing capabilities while simultaneously increasing our knowledge capital and skill in designing and optimizing these materials from their inception in the lab, to their use in a design system (conceptual and practical), to their integration in a production plant, to their use in space.


I feel that the closing of the space shuttle program is not a failure or a negative event at all. It merely marks the next chapter, the first step in the continuation of good old-fashioned scientific innovation. This is the kind of innovation that, like penicillin and the light bulb, will move us as a society forward to greater achievements and accomplishments.


What do you think?

Categories: Trend Watch Tags: materials, materials-studio, materials-science, aerospace

Due to the response to my recent post about how the Hit Explorer Operating System (HEOS) collaborative program is assisting in the treatment of neglected diseases, I've invited Frederic Bost, director of information services at SCYNEXIS, to talk a little bit more about HEOS and the project. It is with great pleasure that I welcome Fred to our blog!

 

Thank you Frank, it's great to have this opportunity to talk to your readers. We couldn't think of a better case for the HEOS® cloud-based collaborative platform than what we've seen with the committed scientific community engaged in the Drugs for Neglected Diseases initiative (DNDi). The project is grand in scope and comprises scientists spread over five continents, representing different cultures, disciplines, processes and companies. In this way, it's a macrocosmic example of what happens in industrial pharma research.

 

Collaboration requires all team members to interact equally and as needed, regardless of their physical location, disciplinary background or expertise. We set out to develop a platform that invites all scientists involved in a project to contribute any information that might benefit the team, especially when those scientists don't have the opportunity to interact face-to-face. HEOS ensures that scientists can share whatever they deem relevant, be it a data point, a comment on another's work, an annotation, a document, a link from the web or a Pipeline Pilot protocol. The science and the data should never be compromised by external factors. For that reason, we embrace the motto of the DNDi -- and extend it: The Best Science (and the best supporting software) for the Most Neglected.

 

What does true collaboration look like? Here's an example from the DNDi project: the non-profit organization started a research program against an endemic disease by collecting small compound sets from volunteer large pharmaceutical and biotech companies. Assays were run by an expert screening company in Europe. While several of the programs proved to be dead ends, one showed promise. The non-profit then hired an integrated drug discovery contract research organization (CRO) to produce additional analogs. Using HEOS, the biotech that provided the initial compounds was able to continue managing the project while the CRO's high-throughput screening confirmed the most promising hits and leads. The managing biotech was also able to track in vivo studies performed by a US university.

 

As the program moved along, several ADME, safety and pharmacokinetic teams got involved in the project, and several peer organizations were consulted on certain decisions. All these efforts delivered a clinic-ready compound that today shows great promise in treating a disease for which no new treatment has been produced in decades.

 

Managing this type of program, whether in a non-profit or an industrial setting, demands flexible, rich features that can accommodate the needs of each partner at each stage of research, while capturing data, keeping it secure and consolidating it so that it is available in real time to authorized team members when they need it. Data must also be curated, validated and harmonized according to the rules the project team has established, and presented in a common language that lets scientists compare results whatever their origin. And because of the embedded Accelrys tools, HEOS can also provide the scientific analysis tools needed to support the team in its decision process. All of these capabilities enable scientists to compare results and make decisions as a team.

 

It's been fascinating and rewarding to serve this community of passionate scientists fighting against endemic diseases. Together they have participated in an evolution, creating an agile networking environment that combines competencies and science from many places to achieve a common goal. HEOS has quite simply helped the DNDi's virtual teams function as if the world were much smaller than it really is.

Categories: Trend Watch Tags: pipeline-pilot, data-mining, cloud-computing, cheminformatics, neglected-diseases, knowledge-discovery, collaboration

My recent article in Bio-IT World discusses the need for a common computational platform in enterprise NGS deployments. The article touts the benefits of a platform that enables rapid integration of varied tools and data…a platform that lets bioinformaticians tailor NGS analyses to the needs of specific groups, that facilitates the sharing of computational best practices and accommodates rapidly evolving standards and hardware. In three words, a platform that is versatile, agile and scalable.

 

A deeper dive into how NGS data management and analysis are typically handled today makes a strong case for a common platform like this. Most life sciences organizations assign bioinformatics experts to the particular therapeutic groups that want to use NGS. All too often, these experts write their own Perl or Python scripts to manage NGS data computation. The glaring problem: it's hard enough to rework scripts you wrote last week for another purpose, let alone expect someone else to understand and re-deploy scripts you wrote six months ago.

 

A case in point: one of our large pharma customers has built up, over several years, a substantial library of Perl scripts for managing and massaging NGS data. So much is invested in these scripts that dedicated staff support their use in other parts of the organization. The same scripts might have utility first in oncology, then later in neurodegenerative disease or infectious disease research. And the inevitable questions follow: What is the optimal parameterization of these scripts, say, for short-read data with lots of repeats? Or for data with large numbers of rearrangements? How do I know the scripts are appropriate for my research? And how do I reconcile their results with results I get using other methods? The bottom line: the company is expending an inordinate amount of time, money and resources supporting the use of Perl scripts across the enterprise.
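
To make those parameterization questions concrete, here is a minimal sketch in Python of one way such institutional knowledge could be captured as named, documented parameter presets rather than living in individual heads. The script name, flags and values below are hypothetical placeholders, not the customer's actual scripts.

```python
# A minimal sketch (hypothetical script, flags and values) of recording
# "which parameters suit which kind of data" as named, documented presets.
import subprocess

ALIGN_PRESETS = {
    # Short-read data with lots of repeats: stricter seeding, fewer multi-hits
    "short_reads_repeat_rich": {"--seed-length": "31", "--max-multihits": "5"},
    # Samples expected to carry many rearrangements: more permissive gap handling
    "rearrangement_heavy": {"--seed-length": "19", "--max-gap": "500"},
}

def run_alignment(fastq_path, preset, extra_args=None):
    """Run the legacy alignment script with a documented parameter preset."""
    params = dict(ALIGN_PRESETS[preset])
    params.update(extra_args or {})
    cmd = ["perl", "align_reads.pl", fastq_path]  # hypothetical legacy Perl script
    for flag, value in params.items():
        cmd += [flag, value]
    subprocess.run(cmd, check=True)

# Usage: the preset name records *why* these values were chosen.
# run_alignment("sample_01.fastq", preset="short_reads_repeat_rich")
```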

 

A better approach, and one our customer is implementing, is twofold. First, they are wrapping these scripts individually as separate Pipeline Pilot components and providing help documentation at the component level, so that other informaticians can use them more efficiently. Second, they are creating "best practice" protocols using both the componentized scripts and components from the NGS Collection, together with customized protocol documentation, so that researchers in different groups can use these protocols more easily in a variety of computational contexts. Instead of dithering with raw Perl scripts that often raise more questions than answers, researchers get "plug-and-play" components, like Lego blocks, that harmonize and accelerate NGS analysis.
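
To illustrate the "wrap and document" half of this approach, here is a short sketch in plain Python. Pipeline Pilot components are built with Accelrys' own tooling, so this is only an analogy: a hypothetical legacy script (trim_adapters.pl) turned into a documented, parameterized building block that other informaticians can call without reading its internals.

```python
# Sketch only: a hypothetical legacy Perl script wrapped as a documented,
# reusable function, analogous to wrapping it as a Pipeline Pilot component.
import subprocess
from pathlib import Path

def trim_adapters(input_fastq, output_fastq,
                  adapter="AGATCGGAAGAGC", min_length=36):
    """Trim adapter sequences from raw reads.

    input_fastq  -- path to the raw FASTQ file
    output_fastq -- path where trimmed reads are written
    adapter      -- adapter sequence to remove (default: a common Illumina adapter)
    min_length   -- discard reads shorter than this after trimming
    """
    cmd = ["perl", "trim_adapters.pl",            # hypothetical legacy script
           "--in", str(input_fastq), "--out", str(output_fastq),
           "--adapter", adapter, "--min-length", str(min_length)]
    subprocess.run(cmd, check=True)
    return Path(output_fastq)

# A "best practice" protocol then becomes a documented chain of such components:
# trimmed = trim_adapters("run42.fastq", "run42.trimmed.fastq")
```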

 

A plethora of Perl/Python scripts and desktop software programs is problematic in today's dynamic and data-rich NGS environment. With so many ways to interact with the data, it's next to impossible to efficiently leverage scripts developed by other people, in other contexts, without some sort of shared computational framework. With Pipeline Pilot, on the other hand, researchers can publish clearly documented protocols through a Web interface and be assured that everybody else doing that kind of analysis is doing it the same way. This common underlying computational model provides organizational scalability for the work of individual experts. Once everybody sees what the model is, even if they continue to use scripts (and many will), they are at least aligned with a well-understood NGS platform that can be deployed and shared across the organization.
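
As a rough illustration of what publishing a protocol through a Web interface buys the rest of the organization, here is a generic sketch of invoking a published analysis protocol over HTTP. The URL layout, parameter names and server address are hypothetical placeholders, not Accelrys' actual web service API.

```python
# Generic sketch: calling a published analysis protocol over HTTP.
# The route, parameters and server below are hypothetical placeholders.
import json
from urllib import parse, request

def run_published_protocol(server, protocol, params):
    """POST parameters to a published protocol and return its JSON result."""
    url = "{}/protocols/{}/run".format(server, parse.quote(protocol))
    req = request.Request(url,
                          data=json.dumps(params).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# Usage (hypothetical server and protocol name):
# result = run_published_protocol("https://ngs.example.org", "NGS/Variant Calling",
#                                 {"sample_id": "run42", "reference": "GRCh37"})
```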

 

What’s your greatest challenge and opportunity in managing NGS data and computational pipelines today? What are you looking forward to dealing with tomorrow?

Categories: Bioinformatics Tags: ngs, pipeline-pilot, ngs-data-computation

What a load of RFI! Can we just show you the tire swing instead?

 

I'll explain… It's common practice for companies adopting an ELN to create a Request for Information (RFI) that is sent to multiple ELN vendors, requesting information on capabilities, licensing and cost. Having narrowed the field to 2-4 ELN vendors, a Request for Proposal (RFP) is then sent specifying detailed requirements -- and I mean detailed, really detailed. In the course of responding to RFIs and RFPs, I have seen RFPs ranging from over 50 pages to 200+ pages. All around, this is a huge effort for requestors and vendors alike. Is there a way we can mutually shorten the process and save time for everyone? I'd like to propose an idea.

 

For the requester, the RFI and RFP process is a huge effort that drags the ELN selection out, with cycle times ranging from 3 months to 3 years. With over 35 ELN vendors out there, the sheer number of vendors is a hurdle. Bringing all the stakeholders together and gathering the requirements sucks up time and resources. On top of that, feature creep means that by the time the RFI is formulated, few can see the wood for the trees. Furthermore, requirements are subject to interpretation and semantics; hence the tire swing analogy: ask several different people to write requirements for a tire swing, then have someone else interpret the written requirements. The result will be a lot of variation and nothing like what you had envisioned. Not unlike Henry Ford's famous remark that if you had asked people what they wanted, they would have asked for a faster horse.

 

From the vendor's perspective, upon receiving the RFI or RFP, they want to respond in the best possible light to get selected. Since requests are subject to interpretation, this pretty much guarantees that most requestors, when they get RFP responses back, have trouble selecting a vendor: they are overwhelmed by the amount of information and lack a rational way to compare it. At the same time, customers tell me that the decision is rarely made at the RFI or RFP stage; it is more commonly made at the demonstration or evaluation stage, and usually it's the scientists who will use the ELN every day who make the final decision.

 

While I recognize that RFIs and RFPs may be mandatory in some organizations, I have some ideas I'd like to share that can accelerate the process and probably yield a better result. I believe there is a way to re-architect the process, saving everyone a lot of time while still achieving the desired outcome. Here's how. Create a strategy for what you want your ELN to achieve and set goals for that strategy. Then use the internet, LinkedIn, blogs and research from companies like Atrium Research to narrow down the vendors you will send use cases to. Unless you are after a niche ELN, or have requirements you feel are very specific, there are only three main market-leading ELN providers, so starting there can shorten the selection process considerably. In this step you avoid the RFI, and a process that can take months becomes weeks at most.

 

Next, is there a way to avoid the RFP? Based on your strategy and goals, collect from the scientists, power users and IT the use cases that will meet your goals. Then hand these to the vendors with the request, "Show me your swing!" The point is to avoid lengthy documentation and get to the main decision point for selecting the ELN: the evaluation or demonstration. Let the scientists experience the ELN. Beyond speed, this approach has other advantages: a vendor can clearly understand what the scientists need to accomplish, you are more likely to see out-of-the-box capabilities that mean a lower cost of ownership, and you may be enlightened by new and novel ways to achieve your goals that vendors have learnt from other engagements.

 


I know some may be thinking, "That sounds great, but my organization insists on RFIs and RFPs." I suggest reflecting on the relative value of, and the relative effort put into, each step of the process. In other words, decide what is "good enough" to satisfy internal process and check the RFI and RFP box, then focus your main effort on online research and on getting the use cases to the right ELN vendors to facilitate your decision. You will find a wealth of quality information available through online resources.

 

Then, in the final selection process, leverage the vendor's services team to get a full understanding of the cost of ownership. Understand what is configuration versus customization, which adds short- and long-term costs and introduces upgrade risks. Also understand the effort it will take to make changes: how long it will take to create and deploy new sections, and what resources are available to assist. This latter part of the selection process is essential if you want to avoid surprises and delays later on.

 

Thoughts? I’d welcome other suggestions for how this process can be accelerated and save time for all involved.

Categories: Cheminformatics, Lab Operations & Workflows, Electronic Lab Notebook, The IT Perspective Tags: notebook, eln, symyx, rfp, rfi