
Accelrys Blog

14 Posts authored by: cagramont

Accelrys has a passion for enabling customers to leverage our product capabilities to solve their business challenges and drive scientific innovation.  Innovation often starts with the desire to adopt and leverage new technologies presented by the market.  With the recent release of Oracle Database 12c, Accelrys continues to evaluate and provide guidance on support for Oracle Database versions and editions.


IMPORTANT NOTE: This information is intended to outline general product direction and should not be relied on in making a purchase decision.


Statement of Direction Summary


Oracle Database server is a key technology throughout many enterprise environments.  As many Accelrys applications utilize Oracle Databases, Accelrys will continue its support of those Oracle Databases.  Further, Accelrys will continually assess which editions and technologies of Oracle Database to support. 


Accelrys will support Oracle Database 12c for specific Accelrys Applications and Data Connectivity




Multitenant applications typically come up when discussing Software as a Service (SaaS) based deployments.  However, enterprise customers are also looking to leverage SaaS-based deployments within their own corporate environments to support project or data isolation while reducing the total cost of ownership.  In previous versions of Oracle Database, a method known as Schema Separation was used to deploy multiple copies of the same application database structure into a single deployment of Oracle Database.  With Schema Separation, the data remained separate and a given application instance would know how to connect to the appropriate database.  Oracle Database 12c now provides a built-in mechanism to support this, called “Oracle Multitenant.”


Oracle Multitenant expands on the capabilities of Schema Separation with provisioning and management tools to ultimately reduce the total cost of ownership.


Accelrys will support Oracle Multitenant deployments for Accelrys Applications and Data Connectivity.
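To illustrate why Multitenant simplifies application deployment (this is a generic sketch, not Accelrys product code), each tenant's pluggable database (PDB) is exposed as its own database service, so selecting a tenant reduces to building a connect string for that service rather than embedding schema-switching logic. The host, port, and service names below are hypothetical, and EZConnect-style naming is assumed.

```python
# Sketch: with Oracle Multitenant, each pluggable database (PDB) is its own
# service, so an application picks a tenant by service name.
# Host/port/PDB names here are hypothetical.

def pdb_dsn(host: str, port: int, pdb_service: str) -> str:
    """Build an EZConnect-style descriptor for one pluggable database."""
    return f"{host}:{port}/{pdb_service}"

def tenant_dsns(host: str, port: int, tenants: list) -> dict:
    """One isolated connect string per tenant PDB."""
    return {t: pdb_dsn(host, port, f"{t}_pdb") for t in tenants}

dsns = tenant_dsns("db.example.com", 1521, ["chemlab", "biolab"])
# e.g. dsns["chemlab"] == "db.example.com:1521/chemlab_pdb"
```

Contrast this with Schema Separation, where the application itself must know which schema holds which tenant's data.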


Database Editions


Oracle Database 12c is available in several editions.  Each edition caps the amount of CPU, RAM, and database size it supports, and each edition defines the specific product capabilities available within it.  By working closely with our customers and analysts, we have found Standard Edition One, Standard Edition, and Enterprise Edition to be the most beneficial and commonly used editions, and we will continue to provide support for those editions.


For detailed information on Oracle database editions, please visit their site:


Accelrys will support the Standard Edition One, Standard Edition, and Enterprise Edition of Oracle Database 12c for production use within the relevant Accelrys Applications.


High Availability


Enterprise information technology organizations typically require that enterprise grade applications meet an internal service level agreement (“SLA”).  An SLA will typically require the infrastructure deployment to be architected to be resilient in the face of certain failures within the system (e.g. network and power outages).  Oracle Database 12c provides a collection of high availability technologies as part of Oracle’s Maximum Availability Architecture.


Oracle Real Application Clusters (RAC) enables a multi-server architecture that provides increased performance, scalability, and reliability.  Oracle RAC is available within Standard Edition and is an optional component of Enterprise Edition.


Oracle Database Express Edition is not supported with Accelrys products.


Accelrys will support Oracle RAC for Accelrys Applications.


Supported Applications & Solutions


To learn more about third party applications and operating systems supported by Accelrys solutions, please view the latest “Accelrys Consolidated Support Matrix” which is available via the Accelrys Community.

655 Views Permalink Categories: Executive Insights, The IT Perspective Tags: oracle, statement-of-direction

Over the past few years, Accelrys has been significantly expanding cloud-delivered applications within our existing portfolio.  Our investment in these cloud services has grown both organically and through mergers and acquisitions.  A number of these cloud services have different business and technical approaches pertaining to areas such as Privacy Statements, Datacenters, User Credentials, etc.


At Accelrys, it is our mission to bring together these services to provide a unified and rich experience to our customers under a single umbrella called Accelrys Cloud Operations (ACO). It is the goal of ACO to provide a consistent set of policies and practices to ensure the service is managed, operated and delivered to a set of standards acceptable to our customers and partners.


A key component to any cloud-delivered service is security.  Today, Accelrys has published the “Accelrys Security Baseline 2013,” which is available on the Accelrys IT & Developer Community.  Here we provide a clear, strong message to our customers, to our partners and to the market that security is critical to delivering reliable cloud services and that transparency in our communication about security is essential.


The Accelrys Security Baseline 2013 is our commitment to cloud services security and our continued investment in meeting or exceeding the market’s cloud security requirements.


Conrad Agramont

Lead Product Manager, Cloud Computing and Platform Services, Accelrys

499 Views Permalink Categories: The IT Perspective Tags: security, cloud, cloud_computing

With the release of the Accelrys Enterprise Platform 9.0 (AEP), we started to communicate our thoughts on Big Data as it relates to the large volumes and complexity of scientific data on which our customers rely.  While the original message around this provided some high level views, I thought that I would dig deeper into what this means and provide some examples that we have worked on with our partners from IBM and BT.


First, what do we mean by Big Data?  By now you have probably heard Big Data described in terms of: Variety, Velocity, and Volume.  Let’s quickly tackle each of these, but I’ll try not to bore you to death...




The IT Industry generally views “variety” as various data sources (files, databases, web, etc.) and types of data such as text, images, geospatial, etc.  However, data within a scientific organization is unique and includes chemical structures, biologics, plate data, sequences, and much more.




“Velocity”, or the rate of change, can be viewed in two parts.  First, every day new data is added within scientific organizations from machines such as Next Generation Sequencers or Electronic Notebooks.  Second, new data is regularly made available through government resources such as the United Kingdom’s National Health Service and scientific data providers like Accelrys DiscoveryGate (shameless plug). Thus, the velocity of new data now accessible within an organization is growing exponentially and businesses need to be able to access and analyze this data to achieve their objectives.




When you multiply the Variety of data by the Velocity at which this data is delivered, you can get a sense for the exponential amount of data available to parse.  However, you might have a very small amount of data and still have to parse it to uncover meaning.  Besides the total amount, you should also consider that the data items are often large.  For example, the human genome is about 3.1 Gigabytes in one uncompressed file (FASTA format) and a complete sequencing of a human yields 100 GB (and up) per experiment.  If chromosomes are in separate files, chromosome 1 is the largest at 253 Megabytes uncompressed. Plant data can be even bigger.
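A quick back-of-the-envelope check on those figures: an uncompressed FASTA file stores roughly one byte per base plus one newline per wrapped line (headers ignored), so file size tracks base count closely. The sketch below uses approximate base counts and reproduces numbers close to the ones above.

```python
# Rough FASTA size estimate: ~1 byte per base plus newlines.
# Base counts below are approximate, for illustration only.

def fasta_size_bytes(n_bases: int, line_width: int = 60) -> int:
    """Estimate uncompressed FASTA size: bases plus one newline per line."""
    n_lines = -(-n_bases // line_width)  # ceiling division
    return n_bases + n_lines

# Whole human genome, ~3.1 billion bases -> ~3.15 GB uncompressed
genome = fasta_size_bytes(3_100_000_000)

# Chromosome 1, ~248 million bases -> ~252 MB uncompressed
chr1 = fasta_size_bytes(248_000_000)
```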


Also, organizations that have deployed an Electronic Lab Notebook with over 1,500 users can have data volume sizes reaching beyond 2 Terabytes.


Scientific Big Data and the Accelrys Enterprise Platform


Scientific Big Data is a reflection of the expanded types of data that an organization must work with in order to ensure that regulatory, privacy, and security rules are adhered to.  The Accelrys Enterprise Platform 9.0 and Accelrys Pipeline Pilot help address the many Big Data initiatives that organizations are pursuing.


The Accelrys Enterprise Platform 9.0 (AEP9) expands support for High Performance Computing (HPC), a requirement for all Big Data projects.  Two HPC options are available: Cluster and Grid.  Cluster deployments leverage a Map-Reduce technology geared towards organizations that require HPC capabilities without a high investment in grid infrastructure.  Grid integration is available for those customers that want to leverage their existing investments in a grid engine.  Both options enable an organization to scale its infrastructure to meet its computing-capacity needs.
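The Cluster option above is described as Map-Reduce based. As a toy, language-neutral illustration of the pattern (not the AEP implementation), here is Map-Reduce in miniature: a map phase turns each record into key/value pairs, and a reduce phase folds values together per key; a cluster runs the same two phases with the map calls distributed across nodes.

```python
from itertools import chain

def map_reduce(records, mapper, reducer):
    """Minimal serial Map-Reduce: map each record to (key, value) pairs,
    then fold values together per key with the reducer."""
    grouped = {}
    for key, value in chain.from_iterable(mapper(r) for r in records):
        grouped[key] = reducer(grouped[key], value) if key in grouped else value
    return grouped

# Toy example: count residues across sequence fragments.
fragments = ["ACGT", "GGTA"]
counts = map_reduce(
    fragments,
    lambda seq: [(base, 1) for base in seq],  # map: emit (base, 1) pairs
    lambda a, b: a + b,                        # reduce: sum counts per base
)
# counts == {'A': 2, 'C': 1, 'G': 3, 'T': 2}
```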


At the Accelrys Tech Summit in Brussels this year, IBM delivered a performance analysis session that used AEP9 with IBM’s GPFS (proprietary parallel file system technology) to handle scientific data and showed how I/O impacts computing resources.  A link to the associated whitepaper is available on the IBM site and the session was so popular that we are hosting a webinar on this topic on September 5th.


British Telecom (BT) leveraged AEP9 Clustering along with their Cloud Compute environment to mine the enormous dataset from the United Kingdom’s (UK) state-funded National Health Service (NHS).  That data is unstructured, covers 55 million people in England and 3.5 million in Wales, and contains in the region of 4 billion data points.  With the power of a Cloud Computing infrastructure and the simplicity of design in Accelrys Pipeline Pilot, researchers can utilize the data from the NHS and interrogate it against other unstructured or structured data at their disposal without having to build another data warehouse or data mart.  This is one of the big benefits of addressing Big Data with AEP9. Read the press release.


BT provided additional details about how AEP9, Accelrys Pipeline Pilot, and BT Cloud Compute complement each other at the Accelrys Tech Summit in their session: Cloud Enablement and Big Health Data Analytics in the Cloud


In future posts, I will provide technical details on how Accelrys Enterprise Platform and Accelrys Pipeline Pilot enable Big Data capabilities including how other data repositories can be leveraged to benefit other Accelrys applications (ELN, LIMS, etc.).

966 Views 0 References Permalink Categories: The IT Perspective, Trend Watch Tags: pipeline_pilot, platform, cloud, tech_summit, big_data, accelrys_enterprise_platform, cloud_computing

Accelrys has a passion for enabling customers to solve their business challenges and drive scientific innovation. Innovation often starts with the desire to adopt and leverage new technologies presented by the market. With Microsoft Windows 8.1 following on the heels of Windows 8, Accelrys continues to evaluate our support for future product releases with Microsoft through our continuing partnership.


IMPORTANT NOTE: The information on the roadmap and future software development efforts is intended to outline general product direction and should not be relied on in making a purchase decision.


Learn more about Accelrys Statements of Direction:


Statement of Direction Summary

Microsoft Windows 8 delivers a wide range of capabilities, providing new scenarios for Accelrys applications to deliver compelling, innovative solutions to our customers on the desktop, in web applications, and through tablet and device connectivity.


A separate Tablet and Mobility Statement of Direction will be published to address these topics.


Desktop Applications

Many Accelrys applications are delivered as Windows-based applications.  Microsoft Windows 8 and Windows 8.1 continue to support the desktop experience.


Accelrys products will support Microsoft Windows 8 and Microsoft Windows 8.1 on desktop environments. 


Web Applications

Many applications by Accelrys use pervasive web technologies such as HTML, JavaScript, Cascading Style Sheets (CSS), JSON, REST/SOAP and more.  Windows 8 ships with two versions of Microsoft Internet Explorer 10: Desktop and Windows UI.  The Desktop version continues the traditional view of Internet Explorer available in Desktop mode.  The Windows UI version is geared more towards a tablet/touch-based interactive experience.


Microsoft Windows 8.1 includes Microsoft Internet Explorer 11, which allows for faster browsing through performance improvements in JavaScript and image rendering using GPUs.  Microsoft Internet Explorer 11 also delivers enhanced security features.


Accelrys applications require testing and certification on both Desktop and UI versions to ensure that the applications meet the expectations of, and opportunities for, distinct end user roles.  Accelrys also continues to evaluate and adopt emerging web technology standards (e.g., HTML5) with a view to delivering solutions with rich and intuitive user interfaces to enhance user productivity. 


Accelrys products will support Internet Explorer 10 on Microsoft Windows 8 and Internet Explorer 11 on Microsoft Windows 8.1.
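For teams that negotiate content server-side, the two browsers advertise themselves differently: IE 10 still sends an "MSIE 10.0" token, while IE 11 dropped that token in favour of "Trident/7.0" plus "rv:11.0". The sketch below is a best-effort illustration of that difference (the sample user-agent strings are representative, and client-side feature detection remains preferable where possible).

```python
import re

def ie_major_version(user_agent: str):
    """Best-effort Internet Explorer version sniffing.
    IE <= 10 sends an 'MSIE <n>' token; IE 11 dropped it in favour of
    'Trident/7.0' plus an 'rv:11' revision token."""
    m = re.search(r"MSIE (\d+)", user_agent)
    if m:
        return int(m.group(1))
    if "Trident/7.0" in user_agent and re.search(r"rv:11", user_agent):
        return 11
    return None  # not Internet Explorer (or unrecognized)

# Representative user-agent strings:
ie10 = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)"
ie11 = "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko"
```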


User Experience

Microsoft has also refreshed the entire User eXperience (UX) with the introduction of Windows 8.  Some of the changes in UX are continuations of changes made within Windows Vista and Windows 7, while others are quite dramatic and new, including going from a “Start” button to a “Start” screen in Windows 8.  While Windows 8.1 reintroduces the “Start” button, users are still presented with the new Windows 8 “Start” screen along with a new view showing all applications installed on that device.


These changes in UX can improve usability and productivity, which are major goals of Microsoft, but they can also require changes in the applications delivered by Accelrys.  While these changes are not mandatory for Accelrys applications, Accelrys is looking to leverage these UX innovations to ensure a consistent and fluid UX for users.


There is a growing hardware trend towards increased availability of high definition display devices. These include widescreen laptops, monitors, projectors and tablets.  This trend has been accompanied by a shift in default aspect ratio for applications, for example, Microsoft PowerPoint has changed from traditional 4:3 screen ratio to 16:9 screen ratio.  Accelrys is taking this new aspect ratio into consideration for all products and services when designing user experiences.


Accelrys products will embrace the new user experiences available within Microsoft Windows 8 and Microsoft Windows 8.1.



Windows 8 provides an improved API set for touch-based applications.  Devices which support a touch interface (e.g., Microsoft Surface, touch-based monitor/laptop, or mobile device) still function with applications that are not touch-enhanced, operating in normal “mouse” fashion.  Applications which are touch enhanced typically have larger button sizes and adjusted application layout to present the touch experience to the user in a way that improves productivity and usability.  Accelrys applications utilizing mouse functionality will remain functional even in a touch-enhanced environment. 


Accelrys will develop touch-based applications for Microsoft Windows 8 and Microsoft Windows 8.1 to meet appropriate user scenarios.



Windows 8 and Windows 8.1 also expanded hardware connectivity to accommodate growing connectivity trends such as USB 3.0 and Near Field Communication (NFC).  USB 3.0 enables faster communication than USB 2.0 (up to 10 times) while using less power.  NFC is a set of standards that allows devices located within inches of each other to communicate using Radio Frequency Identification (RFID).  With standards in place and native support within Microsoft Windows 8 and Microsoft Windows 8.1, there are great opportunities for innovative and collaborative applications in the scientific industry.
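To put the "up to 10 times" figure in context, here is the idealized arithmetic at the nominal signaling rates (480 Mbit/s for USB 2.0 Hi-Speed, 5 Gbit/s for USB 3.0 SuperSpeed), applied to the 100 GB sequencing dataset mentioned earlier; real-world throughput is lower due to protocol overhead.

```python
def transfer_hours(size_gb: float, rate_mbit_s: float) -> float:
    """Idealized transfer time at a bus's nominal signaling rate
    (real-world throughput is lower due to protocol overhead)."""
    return size_gb * 8_000 / rate_mbit_s / 3600  # GB -> Mbit -> s -> h

# A 100 GB sequencing experiment:
usb2 = transfer_hours(100, 480)    # USB 2.0 Hi-Speed, ~0.46 h at best
usb3 = transfer_hours(100, 5000)   # USB 3.0 SuperSpeed, ~10x faster
```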


Accelrys will support device connectivity provided by Microsoft Windows 8 and Microsoft Windows 8.1 with alliances through Accelrys hardware partners.


Supported Applications & Solutions

To learn more about Accelrys applications and solutions supported on Microsoft Windows 8, Windows 8.1 or any other operating system or application, please view the latest “Accelrys Consolidated Support Matrix” which is available via the Accelrys Community.

4,312 Views 0 References Permalink Categories: Executive Insights, The IT Perspective Tags: microsoft, windows, statement-of-direction

Top 11 Tweets from the UGM

Posted by cagramont Jun 1, 2011

During this year’s Accelrys User Group Meeting in Jersey City, NJ (still one to go in Athens), we put a big focus on communicating event highlights during the event through Twitter. We used a hashtag #acclugm to make it easy for users to follow what was happening at the event. Here are my personal top 11 tweets using the hashtag.


  1. @cabbagered: fascinating talk by Rob Lochhead on role of the National Formulation Science Lab - high throughput testing through to scale up #acclugm
  2. @accelrys: Just announced at #ACCLUGM - The Experiment Knowledge Base, a new solution created to accelerate materials innovation
  3. @thomykay: Impressive social media presence of ongoing @Accelrys customer event. Who says, #pharma people are conservative? #acclugm
  4. @cnitsche: #acclugm Panelists: Fred Bost (Scynexis), Susan Fitzpatrick (Merck), Jason Bronfeld (BMS) Neil Kirby (DOw AgroScience)
  5. @agramont: Mark Castro, Pfizer Software Dev Manager presenting "Symyx Notebook Deployment Best Practices" #acclugm
  6. @Cnitsche: #acclugm Francisco highly recommends Molprobity - out of Richardson lab
  7. @accelrys: Neglected diseases project teams @scynexis show need for global, secure collaboration #acclugm
  8. @epyngndm: #ACCLUGM. George Famini giving a stunning talk about chemical toxicology
  9. @ugenya: RT @epyngndm: #ACCLUGM. Matt Hahn describing the Accelrys vision of enabled and linked scientific tools and services
  10. @agramont: Jennifer Trumbore is talking about the project to deploy Symyx Notebook at J&J #acclugm
  11. @jesterotl: Good panel discussion at #ACCLUGM focusing on external collaboration and the need for standards. Surprised that no one brought up CRIX yet


Thanks to all of those that followed our tweets and retweeted us. Do you have ideas of how you’d like us to continue to use twitter during an event?


Quick Note: I used TweetDeck on both my iPhone and Windows 7 desktop to post my tweets…. In case you were interested.


- Conrad Agramont, Sr. Product Manager, Enterprise Technologies, Accelrys

604 Views 0 References Permalink Categories: News from Accelrys

In a few days, we’ll be on the ground at the Accelrys User Group Meeting (UGM) in New Jersey. A new addition to this year’s UGM is a complete track focused on IT. I have two sessions that I’ll be delivering which will be a condensed & combined version of what was delivered at the Accelrys Tech Summit (ATS) in London.


For those that are attending or thinking of attending, here’s a quick preview of what I plan to cover and why:


A Lap Around Pipeline Pilot 8.5 for IT Professionals


There’s lots of great stuff we’ve been working on for the next release of Pipeline Pilot, but there are also many capabilities IT professionals look for that have been in the product for many releases. During this session, I’ll go over the various tools within the product that every IT person deploying or operating Pipeline Pilot should know.


For those of you thinking, “I’ve been doing Pipeline Pilot for many moons. What can I learn from this session?”: there are some new things within Pipeline Pilot 8.5 that will be discussed. Regardless of your experience with Pipeline Pilot, I’m sure you’ll walk away having learned something new that you’ll put into practice.


Pipeline Pilot Application Lifecycle Management


Scientists that develop tools and solutions within Pipeline Pilot are quite familiar with how easy it is to get started building something valuable. But there’s a big step from building something useful for themselves or a small group of users to taking it to an enterprise-wide deployment.


So what does the creator of a Pipeline Pilot based innovation need to add to their project to make it an enterprise success? What does an IT professional need to do to provide an enterprise infrastructure to deliver those scientific innovations? Answering those questions and more is the focus of this session. Do you have more questions that you’d like answered? Ask us within the Accelrys IT-Dev Community!


Pipeline Pilot Architecture Deep Dive


While this isn’t my session, I would like to advertise it a bit. Jason Benedict is our Senior Architect for Pipeline Pilot and he’ll be delivering this session. He did this same session at the ATS in April, and it was mind blowing. I get the luxury of sitting a few doors down from him and I get nuggets of great information about how Pipeline Pilot works deep within the product and why we did things a certain way. Many times this lands on a whiteboard, but often doesn’t get out into the wild. Well now is your chance to get this information first hand! He covers topics such as how Pipeline Pilot takes your request to perform a job, and what the system does to honor that job. This spans from authentication and security to memory management and data processing.


If you aren’t registered yet, you should be! Lots of great speakers and content, so what are you waiting for? Oh, the link to register? No problem, we got you covered:


Finally, stay connected to what's going on with the event via Twitter with our dedicated hashtag: #acclugm


- Conrad Agramont, Sr. Product Manager, Enterprise Technologies, Accelrys

523 Views 0 References Permalink Categories: The IT Perspective Tags: pipeline-pilot, user-group-meeting

Accelrys Enterprise Update

Posted by cagramont Apr 25, 2011

There have been a lot of great things going on within Accelrys focused on Enterprise customers and scenarios, and it’s about time that I share some of this.


Accelrys User Group Meeting (UGM)


For the first time at a UGM, we have a dedicated track focused on the needs around IT. In years past, we had some sessions that covered the IT space but it was sprinkled in with all of the other great scientific sessions delivered by our customers. During the UGM, which is in Jersey City, New Jersey from May 17-19, we’ll have dedicated IT related sessions around Pipeline Pilot, Symyx Notebook, and Isentris just to name a few. You can learn more about these sessions here:


Make sure to tell them that Conrad sent you and you’re excited about the IT Track.


Accelrys Pipeline Pilot 8.5 Beta


Several weeks ago, we invited a select group of customers to participate in the Pipeline Pilot 8.5 beta. There are many features we’ve added to the product to target Enterprise-specific requirements and enable greater integration with Symyx Notebook by Accelrys. We’ve received some great feedback from customers, but we’re hungry for more. If you’d like to know more about Pipeline Pilot 8.5 and are willing to install it and provide feedback, send us a request for access. We’re looking to close the project soon, so be sure to act quickly.


OK, that’s all for now, but we’re not done. We have some great updates to the products, events, and resources coming soon. If you’d like more information on Pipeline Pilot 8.5, be sure to contact me directly or post in the IT-Dev community.


Accelrys Tech Summit (ATS)


Earlier this month, we held our first ATS in London which was a 2-day affair concentrated around best practices and deep technology discussions around Pipeline Pilot. We had 24 customers attend the summit and we had some great speakers from Accelrys R&D and Consulting Services. The sessions yielded some killer content which we’ve delivered back to the attendees and plan on providing to the Accelrys IT-Dev Community. You can find out more about the event here:


The hands down favorite session at the ATS was the “Pipeline Pilot Deep Dive” session delivered by our very own Senior Architect, Jason Benedict. If you’re interested in the topic, you should be sure to catch his redo (and promised to be even more amazing) session at the User Group Meeting in Jersey City May 17th – 19th.  If you’re not attending the UGM, this is a reason to start planning your travel!


Accelrys IT-Dev Community


As previously noted, we have a community site focused on IT and Developer related topics. This is an area where IT professionals can search for information and ask questions related to general IT and development topics. While Accelrys provides a ton of great scientific capabilities, the IT and Developer professionals that support scientists have a place to share information, best practices, and ask and answer questions of their peers.


There’s already some great material in there today and we look forward to your questions and contributions as time goes on. Here’s a link to the Accelrys IT-Dev community. Make sure to bookmark it and/or subscribe to the RSS feed for it.


- Conrad Agramont, Senior Product Manager, Enterprise Technologies, Accelrys

475 Views 0 References Permalink Categories: The IT Perspective
Everyone at Accelrys is excited to see that Pipeline Pilot 8.0 (PP8) has launched and will soon be in the hands of customers and partners to advance their goals in science.  Not to be overshadowed by the science empowerment in PP8, there are a ton of great features for administrators to take advantage of as well.

While there’s a huge list of improvements, I’ll start off with a few I think are pretty interesting.

64-bit Computing with Windows Server 2008 R2

Both Microsoft Windows Server 2008 (including Windows Server 2008 R2) and 64-bit computing have been growing in adoption in many Information Technology (IT) environments due to their power and flexibility.  And let’s not forget all of the great advancements that our friends over at Intel have done with their server class chip Xeon which also makes targeting 64-bit so attractive from a performance perspective.  Pipeline Pilot 8.0 now officially supports Windows Server 2008 (Service Pack 2) and Windows Server 2008 R2 on the 64-bit platforms.  We continue to support 32-bit platforms as well on Windows Server 2003 (Service Pack 2), Windows XP, and Windows 7.  We made a decision to focus our development resources on the latest 64-bit platforms available from Microsoft to ensure longevity in support and to maximize the capabilities of the operating system.

And just as a reminder, we also support 64-bit processing on Red Hat Enterprise Linux 5.

Job Queuing

The popularity of Pipeline Pilot in many scientific organizations has created some interesting work habits because of the consumption of computing resources on a given server.  Pipeline Pilot 8 (PP8) allows you to define a threshold for the number of active protocols that can execute at any given time.  An administrator can, for example, allow only 5 protocols to run at once.  Users can continue to submit jobs, but if 5 jobs are already running, the rest are added to a queue, which the administrator can also manipulate in real time.
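The queuing behavior is easy to picture with a small sketch. This is not the PP8 scheduler itself, just the same idea expressed with a thread pool capped at five concurrent jobs: extra submissions wait in the pool's queue until a slot frees up.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_ACTIVE = 5            # analogous to the PP8 active-protocol threshold
lock = threading.Lock()
active = 0                # jobs currently running
peak = 0                  # highest concurrency observed

def job(i):
    """Stand-in for a protocol execution; tracks peak concurrency."""
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)      # simulate work
    with lock:
        active -= 1
    return i

# Submit 12 jobs; only MAX_ACTIVE run at once, the rest queue up.
with ThreadPoolExecutor(max_workers=MAX_ACTIVE) as pool:
    results = list(pool.map(job, range(12)))
```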

Data Management

Data Management is a new capability within the Administration Portal of PP8 that enables an administrator to define named data sources that can be leveraged by protocols.  This allows an end user to select a given named data source and use it directly within their protocol without needing to understand the connection details for that database.  This is also great when building a protocol in a development environment with test data and then moving it to a production server.  The named data source in the protocol remains the same and uses the appropriate data depending on the server on which it’s deployed.  Permissions (use, view, modify) to consume a given named data source can be defined on a per-user or group level.
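Conceptually, a named data source is a level of indirection between a logical name that protocols reference and the physical connection details each server supplies. A minimal sketch of that idea (the server roles, source names, and connection strings here are hypothetical, not PP8's actual storage format):

```python
# Hypothetical per-server registry: protocols ask for "AssayResults";
# the server role decides which physical database that resolves to.
DATA_SOURCES = {
    "dev":  {"AssayResults": "oracle://devdb:1521/assay_test"},
    "prod": {"AssayResults": "oracle://proddb:1521/assay"},
}

def resolve(server_role: str, name: str) -> str:
    """Return the connection string registered for `name` on this server."""
    try:
        return DATA_SOURCES[server_role][name]
    except KeyError:
        raise LookupError(f"data source {name!r} not defined for {server_role!r}")
```

The same protocol, unmodified, hits test data on the dev server and live data in production.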

There’s also support for creating Open Database Connectivity (ODBC) connections without having to be a local server administrator.  Additionally, we now support Java Database Connectivity (JDBC), which enables access to a number of JDBC drivers that can connect directly to databases from a number of vendors.

Again, there’s a lot more to cover in future blog posts (e.g. Role-based Security), but this should give you an idea of the investments we’ve made in Pipeline Pilot 8.0.

Finally, I should mention that during the Accelrys User Group Meeting 2010 in Boston this week, I had a few talks about Cloud Computing.  There was lots of excitement from our customers and partners, and we’re eager to share more details soon via our blog and forums.  More on that later.  If you’re interested in more about IT related topics for PP8 or Cloud Computing, feel free to shoot me an email.

- Conrad Agramont, Senior Product Manager, Accelrys
816 Views 0 References Permalink Categories: Lab Operations & Workflows, The IT Perspective Tags: protocols, pipeline-pilot, data-management, cloud-computing
I’ve been in the Enterprise software business for 15 years working in various roles within IT including server operations, application development, hosting, etc.  Over the years I’ve dealt with many “Enterprise” applications, including those that were making the leap from a workgroup server to an Enterprise class product.  One such product was Microsoft Windows NT and its focus to be an Enterprise Network Operating System (NOS) with file server, application platform, and directory services.  This was back in the day when Novell NDS and Banyan VINES had full control of the market.  So when Microsoft entered into this market, not only did they have to deal with an entrenched market (yet hardly fully saturated), but they also had to answer the “Are you Enterprise Ready” question as well (and yes, Linux also fits into this camp).

Now our situation with Pipeline Pilot isn’t exactly the same, but on occasion we do get that question.  The answer has two parts.  First, YES, we are an Enterprise-ready application platform.  Second, since “enterprise” can mean a few different things to a variety of people, we need to look at the business problem that you’re trying to solve.  It might sound like a bit of an “it depends” answer, but that’s because scope matters: we wouldn’t claim, for instance, that it’s a Financial Enterprise solution.

While Pipeline Pilot is great personal productivity and project-team collaboration software for scientists, its Enterprise capabilities can be overlooked. I’ll spare you the typical sales pitch on the product for now (look here).  That said, Enterprise solutions typically solve different kinds of problems: aggregating the views and challenges of a collection of project teams and individuals, and solving many back-end problems (e.g. scientific data management and searching) that add value for your individual users, groups, and even executives with dashboards.  Also typical in an Enterprise is the existence of a variety of platforms (server, desktop, mobile), data (vendors, formats, and types), processes (human and back-end), and more.  While Corporate IT and Research IT have (always) ongoing projects to consolidate, merge, upgrade, and deploy platforms and applications, rip-and-replace solutions aren’t always the best way to quickly add value to an organization.

Pipeline Pilot certainly has many, if not all, of the pieces to create and support a science informatics platform, but a CIO, Enterprise Architect, or R&D IT developer should look at their own and their users’ scientific business problems and leverage the relevant areas of Pipeline Pilot (e.g. Web Services, data pipelining, scientific data ETL, etc.) to solve those challenges.  You’ll quickly find that by leveraging Pipeline Pilot within your Enterprise projects you’ll reduce your time to delivery, provide high-value results, and increase the productivity of your existing software development and scientific staff.

While Windows NT had its painful years for both Microsoft and its customers, over time Microsoft answered the growing call for more Enterprise-type features.  Pipeline Pilot has been delivering Enterprise solutions for customers for several years now, and we too continue to answer the call for deeper and richer solutions for our customers.  Get ready, because we have LOTS more to share very soon regarding our latest release and the future of Pipeline Pilot.
Categories: Lab Operations & Workflows Tags: pipeline-pilot, data-management, data-pipelining, enterprise-informatics

Pipeline Pilot Test Automation

Posted by cagramont Mar 22, 2010

A common request from administrators of Pipeline Pilot Server is for a way to validate protocols when migrating from one server to another, for example when moving to a new version of Pipeline Pilot or upgrading your hardware.  There are numerous reasons why you’d want to move servers or just a collection of protocols (e.g. isolating protocols for better performance), and the great news is that there IS a tool for this!  Before I give that nugget away, let’s talk about the process of Regression Testing.


The process of Regression Testing enables the designer of a protocol to define the best way to validate the usage of their protocol.  When using Pipeline Pilot to solve some interesting scientific task, it’s easy for “you” to see if it’s working or not.  But once you share it with others in your project team or enterprise, you won’t be there to validate it each time an administrator moves it to another server for better performance, migrates to a new version of Pipeline Pilot, or recovers a server after a failure of some sort (e.g. hardware, building fire, theft, etc.).  Thus, you’ll want a way for the administrator to validate your protocol without “you” having to be there.  And even if it’s just “you” doing the protocol design, do you really want to manually validate possibly hundreds of protocols?


So you can probably tell I’m saying that building a regression test is critical for all protocols that are “complete,” regardless of whether a protocol is just for you or shared with others.  If nothing else, it will keep your Pipeline Pilot administrator sane and reduce the time you spend maintaining the protocol in the future.


Want to learn more about Regression Testing (please say yes!)?  Then look no further than the existing documentation!  There’s a PDF in the Pipeline Pilot portal called “Regression Test Guide” (regression_test.pdf).  The actual application is called “regress.exe” and it can be found in the <installation folder, typically C:\Program Files\Accelrys>\PPS\bin folder.  It’s all command-line based, so executing tests in batch is pretty straightforward.
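As a rough sketch of what batch execution could look like, here is a small Python wrapper that runs a command-line tool once per protocol and collects pass/fail results.  The install path and the way protocols are passed to regress.exe are assumptions for illustration; check the Regression Test Guide for the actual command-line syntax.

```python
import subprocess
from pathlib import Path

# Assumed default install location of the regression tool; adjust to your setup.
REGRESS = Path(r"C:\Program Files\Accelrys\PPS\bin\regress.exe")

def run_regression(tool, protocols):
    """Run a command-line test tool once per protocol and collect pass/fail.

    `tool` is a list forming the base command (so a stub can be substituted
    when experimenting); each protocol name is appended as the final argument,
    and a zero exit code is treated as a pass.
    """
    results = {}
    for protocol in protocols:
        proc = subprocess.run([*tool, protocol], capture_output=True, text=True)
        results[protocol] = (proc.returncode == 0)
    return results

if __name__ == "__main__" and REGRESS.exists():
    # Hypothetical protocol paths, for illustration only.
    protocols = ["Protocols/MyGroup/Standardize", "Protocols/MyGroup/Report"]
    for name, passed in run_regression([str(REGRESS)], protocols).items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
```

A loop like this makes it easy to schedule validation of a whole protocol collection after a server migration, rather than clicking through each protocol by hand.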


In future posts, we’ll discuss some best practices for using the tool and embedding it within your development process and IT operations.  If you’ve used the tool before, please share how useful (or not) it is and what you’d like to see improved, either in the blog comments or in our forums.

Categories: The IT Perspective Tags: pipeline-pilot, regression-testing

In the first part of this series, we discussed the basic collection of cloud offerings and what type of value they provide to IT, Developers, and Customers.  The second part explained some of the business issues when leveraging the cloud from within your Enterprise environment.  In this post, we’ll focus more on the various services models that are associated with the Cloud.


Even within the Application Service Provider (ASP) model, there will be a range of providers.  Let’s take Accelrys Pipeline Pilot (PP) for example.  It’s a product that provides rich data-flow capabilities and specializes in scientific computing.   Today, most customers deploy PP on-premise, either through the Research & Development (R&D) Information Technology (IT) department or through a group of scientists.  PP makes using and managing the platform in either of these scenarios extremely easy, yet powerfully scalable.  Regardless of how easy PP is to manage, there are other concerns when managing any platform or application, including maintenance, backup and recovery, security, data management, etc., not to mention supporting an ever-growing user base also looking to leverage Pipeline Pilot.  This can take time away from your main business driver: Science!


Taking the step of moving your basic deployment into the “Cloud,” such as Amazon Web Services (AWS), is an interesting first step.  Now you don’t have to worry about the Operating System and everything underneath it (e.g. hardware, cooling, power, etc.), but you’re still left with everything else.  This is where Application Service Providers (ASPs) come into play.  An ASP can come in different packages.  For one, the ASP could actually be a group internal to your business. OK, so they’re not “really” an ASP, but they could function as one if they provide the service for a given cost and aren’t directly tied to your organization.  Hey, could this be Corporate IT?  Sure, or perhaps another scientific group within your business offering their investment to another team and cross-charging to offset the costs, and, by the way, doing this in the cloud to remove the burden and cost of managing it from IT.  Perhaps this scenario has too many moving parts for your fancy.  I’ll move on.


A more traditional ASP manages the application and perhaps even provides application-level support.  Taking Pipeline Pilot as an example again, application support really comes in two flavors.  The first is supporting the application platform and tools themselves; for instance, when you’re writing a protocol (a set of tasks in a data pipeline) or running an application built on PP.  The other is more focused on the science itself and relating it to the product.  While many could help with the PP platform, infrastructure, and even the tools, it’s a big leap to also support the science.  The key for you: when shopping for a cloud vendor or ASP, take a look at the breadth of services you’ll get from them and anticipate your need for science, application, and infrastructure support, not to mention the differences in cloud infrastructure when a Message Passing Interface (MPI) setup is required (more on that later).


If you’re in any stage of interest, planning, evaluating, or deploying Accelrys products or other scientific applications in the Cloud, we’d love to hear from you!  As the leading provider of Scientific Informatics Solutions, we’re interested in supporting our customers no matter where their environment is, at home or in the cloud.  Visit our forums to continue the discussion.


To view all Conrad’s Cloud Series posts, please click here.

Categories: The IT Perspective Tags: pipeline-pilot, cloud-computing, application-service-providers

In the first part of this series, we discussed the basic collection of cloud offerings and what type of value they provide to IT, Developers, and Customers.  In this post, we’ll focus more on the business issues when leveraging the cloud.


One of the biggest hurdles to leveraging Cloud Services is securing and transporting the data.  There’s no single answer or solution to these issues, and there is no shortage of webinars, papers, conferences, etc. that focus on them, so I don’t think I need to dig into that (just yet).  But what’s important to recognize is that all of the Cloud vendors, security experts, and network providers are working both to provide an answer that meets your business and technical requirements and to earn your trust.  The best way to get over that hump is to learn more and try it out.


First, try the cloud on non-critical but impactful tasks.  Then start to increase your usage of critical data, connect directly to internal data, and perform tasks that provide real business value.  This isn’t an original approach, since it’s pretty much the typical evaluation or Proof of Concept (POC), but that’s exactly the point!  Driving a project like this involves more than just technology, as you’ll most likely involve many people within your organization, such as Legal, Finance, IT, and Security, in order to plan and complete the project.  There will be lots of concerns from these various groups, many reasonable and some that just require lots of education.  So make sure you invest in educating them on the basics of the Cloud first.  This will make the rest of the process much smoother, though not easier.


Second, you’ll need to consider network bandwidth usage and data storage costs.  All of the cloud vendors charge some sort of fee for uploading, downloading, and storing your data.  At first glance it’s pennies per GB, but with large data volumes and frequent data transactions (e.g. reading and writing across the network) those costs can get pretty high.  So your first thought may be that cloud pricing is extremely high, but what you may not be factoring in is everything the cloud vendor is doing for you beyond just the price of the disk, network, cooling, and power.  The cloud vendors typically offer a high SLA, which includes data replication, de-duplication, resiliency, continuity, and more, not to mention the staff, planning, and operations to make all of that happen.  If you compared that to your own infrastructure and added it to your internal per-GB cost of storage, you’d most likely see that the Cloud is more affordable, though that assumes you’re meeting the same level of SLA and process as the Cloud vendors, which most are not.  That said, some applications and data may not be a good fit for many of the cloud vendors because of the special nature of the application, massive data size with high-volume transactions, high throughput requirements, legal requirements, and more.  But this is starting to be the exception rather than the rule.
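To make the “pennies per GB add up” point concrete, here is a toy back-of-the-envelope cost model.  The per-GB prices are illustrative placeholders only, not any vendor’s actual rates:

```python
def monthly_cloud_cost(stored_gb, transfer_out_gb,
                       storage_price=0.10, transfer_price=0.15):
    """Estimate a monthly bill from per-GB storage and egress prices.

    The default prices are illustrative placeholders, not real vendor rates.
    """
    return stored_gb * storage_price + transfer_out_gb * transfer_price

# 50 TB stored plus 10 TB downloaded per month: the pennies per GB
# quickly become thousands of dollars.
bill = monthly_cloud_cost(stored_gb=50_000, transfer_out_gb=10_000)
print(f"${bill:,.2f}")  # → $6,500.00
```

The same arithmetic run against your internal per-GB cost, with replication, backup, staffing, and facilities folded in, is what makes the comparison fair.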


Many organizations are making the leap of putting their most trusted data into the cloud, and some are doing it without realizing the significance. Email and sales force automation have been leading the charge in hosted applications and Software as a Service (SaaS) deployments.  Now think of it this way: if you can store all of your communications and customer records in the cloud, why can’t you do more?  By taking this leap, businesses start to build trust in external parties maintaining and operating their business-critical services.  A recent report by Goldman Sachs notes that customers see the “shift towards cloud unstoppable.” The trend towards cloud services and applications won’t be a complete rip and replace; businesses will look to the cloud as an extension of their overall enterprise architecture and infrastructure.


When comparing the many Cloud/IaaS vendors in the market today, the market is already moving towards mass commodity price points and common functionality.  And that’s great if you want to take a piece of existing traditional on-premise software and simply deploy it to the cloud.  What you have to look out for are pitfalls in the software license, security, and deployment architectures, and the fact that you’re still responsible for managing that software in the “Cloud.”  So the next layer to look for is a services vendor that can deliver the application to you.  This can come from the vendor directly or through a partner network supported by the vendor.  Each has its own value proposition and differences in how flexibly it can deliver additional custom services.   Again, this application-plus-service model isn’t new, as the Application Service Provider (ASP) model has been around for years.  What’s new is that these ASPs can still provide lots of value and cost reduction to the customer while now leveraging computing and storage provided by a “Cloud” offering (e.g. AWS).


In the next part of this blog series, we’ll focus more on the various services models that will be available to customers based on a cloud version of Pipeline Pilot.


To view all Conrad's Cloud Series posts, please visit:

Categories: The IT Perspective Tags: pipeline-pilot, cloud-computing, software-as-a-service

I last left you with a list of terms and examples surrounding the term "cloud computing;" now it's time for a little context.  Utility Computing, such as Amazon Web Services (AWS) Elastic Compute Cloud (EC2), provides a customer with the ability to spin up new machines on demand.  On the customer side, you don’t care what machine it’s on, but you do get to define the type of resources you want to consume, such as CPU cores and memory.  So far this sounds just like Hosting, right?  Correct!  What’s different is that you don’t have to sign a long-term contract for that resource AND you’re not tied to the actual hardware, since in the background it’s really just a Virtual Machine.  Now this is where it gets interesting.  Hosting has been around for a while, but now that Server Virtualization technologies such as Microsoft Hyper-V and VMware vSphere have become mature, they enable the flexibility and architectures of Cloud Computing.  And since this Server Virtualization is available to Enterprises, this is where you hear the term “Private Cloud” being added to the Enterprise mix.
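As a minimal sketch of “defining the resources you want” on a utility cloud, here is what an EC2 launch request could look like.  The AMI ID and instance type are placeholders, and the commented-out boto3 call is one possible way to issue the request, shown under those assumptions:

```python
def ec2_request(image_id, instance_type, count=1):
    """Build the keyword arguments for an EC2 RunInstances call.

    On a utility cloud you pick a machine size (a CPU/memory bundle)
    rather than specific hardware; the AMI ID is a placeholder for a
    real machine image.
    """
    return {
        "ImageId": image_id,
        "InstanceType": instance_type,  # a named CPU/memory bundle
        "MinCount": count,
        "MaxCount": count,
    }

params = ec2_request("ami-0123456789abcdef0", "m1.large", count=2)
# With credentials configured, this would launch the instances:
# import boto3
# boto3.client("ec2").run_instances(**params)
print(params["InstanceType"], params["MaxCount"])
```

The point is that the whole request names only logical resources; nothing in it ties you to a particular physical server, which is what makes tearing the machines down again just as cheap.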


Now let me quickly tackle a common question.  “What’s the difference between Amazon Web Services, Microsoft Azure, and Salesforce?  Aren’t they all the same?”  First off, this is a great question, but it’s really comparing apples, oranges, and tomatoes.  Yes, those are all fruits, but each provides something very different to the consumer.  Where Clouds differ from fruit is that you can layer some of the clouds to deliver a service.  Remember that AWS is a Utility.  Microsoft Azure is a resource targeted towards developers.  Developers are different from IT and therefore have different requirements.  They like to write applications that typically consume some data and provide a User Interface.  They don’t want to be bothered with patch management, monitoring systems, deployment of servers, etc.  Microsoft Azure abstracts this away from the developer.  They instead write to the “Fabric” of the Cloud Computing platform that Microsoft manages, which allows the developer to focus on what they do best.  Finally, with Salesforce it’s even further abstracted.  You still have developers that can write applications on its platform, but the developer is given even more constraints on what they can develop and how it can be implemented.


OK, enough of the Cloud Tutorial; hopefully you now have an understanding that there are many different types of clouds and how they can be used.  Are there challenges to adoption? You bet!  But there are always challenges when adopting technology.  While the above was about the technology, there are a number of business issues, concerns, and questions that need to be addressed as well.  For many organizations, one of the biggest hurdles is securing and transporting the data.


In the coming weeks, we’ll provide an update on our roadmap for leveraging, supporting, and providing guidance on using Cloud Computing and Virtualization technologies.  Accelrys has already been moving to partner with a number of Cloud vendors, Service Providers, and third-party software vendors to ensure our customers have the power of choice, delivery models, and a clear path to leverage Accelrys products in the cloud.


If you’re in any stage of interest, planning, evaluating, or deploying Accelrys products or other scientific applications in the Cloud, we’d love to hear from you!  As the leading provider of Scientific Informatics Solutions, we’re interested in supporting our customers no matter where their environment is, at home or in the cloud.  Visit our forums to continue the discussion.


In the next part of this blog series, I’ll focus on the business issues of leveraging the cloud.


To view all Conrad's Cloud Series posts, please visit:

Categories: The IT Perspective Tags: pipeline-pilot, materials-studio, discovery-studio, cloud-computing, virtualization

The scientific community is seeing an explosion of outsourcing, collaboration, massive data production and consumption, and financial pressures.  Driven by these challenges, Research & Development Information Technology (R&D IT) and even the scientists themselves are looking to the potential of Cloud Computing to enable an increase in science innovation and allow R&D IT to provide higher-value services along with reduced costs.  Cloud Computing isn’t a “silver bullet” for these challenges, but it does provide the tools to address many of these key business drivers.

I’m sure many of you have seen the benefits of the cloud, such as cost reduction, cost management, on-demand capacity, and scalability.  But what does this mean in the context of a scientist using a product such as Pipeline Pilot?  Before we can get into the specifics of how Cloud Computing will provide value to a science organization, let’s first get the terminology straight.  This won’t be a deep dive into each area, just a quick primer.

First off, let’s just all agree that “Cloud Computing” is a pretty generic term and it actually comes in many different forms.  Here are some terms loosely used and thrown around, with common examples:

  • Platform Virtualization - Virtualization of computers or operating systems. It hides the physical characteristics of a computing platform from users, instead showing another abstract computing platform.
    • VMware vSphere, Microsoft Hyper-V, Citrix XenServer

  • Grid Computing - Combination of computer resources from multiple administrative domains applied to a common task, usually to a scientific, technical or business problem that requires a great number of computer processing cycles or the need to process large amounts of data.
    • Microsoft HPC, Sun Grid

  • Managed Hosting - A dedicated hosting service, dedicated server, or managed hosting service is a type of Internet hosting in which the client leases an entire server not shared with anyone.

  • Utility Computing (Cloud) - packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility (aka Infrastructure as a Service)
    • Amazon EC2, Rackspace Cloud, GoGrid

  • Platform as a Service (PaaS) - a computing platform and/or solution stack as a service, generally consuming cloud infrastructure and supporting cloud applications.
    • Microsoft Azure Services, Google App Engine, Rackspace Cloud Apps

  • Software as a Service (SaaS) - model of software deployment whereby an Application Service Provider (ASP)  licenses an application to customers for use as a service on demand

  • Software plus Services (S+S) - combining hosted services with capabilities that are best achieved with locally running software.
    • Microsoft Exchange Hosted Services, Google Message Labs

That’s a pretty quick and dirty listing of terms, so I'll add a little context next time...


To view all Conrad's Cloud Series posts, please visit:

Categories: The IT Perspective Tags: pipeline-pilot, hosted-informatics, cloud-computing, grid-computing, platform-as-a-service, software-as-a-service, virtualization