Materials Studio 7.0: More Science, More Applications…
And so another year of development comes to a close and we have released Materials Studio 7.0! From my perspective, this is a pretty exciting release because, after Materials Studio 6.1 - which was mostly focused on re-architecting Materials Visualizer - the focus for this release is back on the science. We also used milestones in this release to gather very early feedback from a small set of customers, which worked really well (a big thanks to those who contributed their time!).
So what did we deliver in this release? Well, a year is a long time and there is lots of interesting new functionality as well as performance and usability enhancements.
Way back in late 2012, we had a Materials Science and Technology Forum in India. This was the first meeting of its kind in India and was a great opportunity to meet face to face with our customers. One message that came up again and again was around performance. The main focus was on DMol3, with customers saying “it’s fast on one core but just doesn’t scale well enough”. Modern computer architectures have changed a lot since DMol3 was last parallelized, so it was time for a re-think. Our DMol3 expert already had many ideas about how to improve things, and he succeeded spectacularly: we now have good scaling up to 64 cores. So check out the scaling, throw more cores at your calculations and get the results faster. It might go higher than 64 cores, but that is all we have in house!
There was a similar message on the performance of Forcite Plus. Whilst we could handle charge-neutral molecules quickly using charge groups, when it came to charged systems like surfactants and ionic liquids, you had to use Ewald summation, which scaled well but was slow. After quite a bit of deliberation, we have added particle-particle particle-mesh Ewald (P3M). This is much faster than Ewald for larger systems and gives access to calculations on hundreds of thousands of atoms.
Also, in 2012, we sent out a large survey which many of you completed (in fact, over 400 people filled it out, which was brilliant!). We asked you to rate different properties by their level of importance to you. From this, solubility came out as the number one property of interest, linked closely with free energy calculations. In response, Materials Studio 7.0 adds a new Solvation Free Energy task to Forcite Plus, enabling you to use Thermodynamic Integration or the Bennett Acceptance Ratio to calculate the free energy of solvation.
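For readers less familiar with thermodynamic integration, the idea is to sample the ensemble average of dU/dλ at several coupling values λ and integrate numerically. Here is a minimal, generic Python sketch (not MaterialsScript); the λ windows and derivative values are invented placeholders, not output from any real simulation.

```python
import numpy as np

# Thermodynamic integration: delta_G = integral over lambda of <dU/dlambda>.
# The lambda windows and ensemble averages below are illustrative
# placeholders, not results from Forcite Plus or any real run.
lambdas = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
dU_dlambda = np.array([-42.1, -35.6, -27.8, -18.9, -9.4])  # e.g. kcal/mol

# Trapezoidal integration over the coupling parameter gives the
# solvation free energy estimate.
delta_G = float(np.sum((lambdas[1:] - lambdas[:-1])
                       * (dU_dlambda[1:] + dU_dlambda[:-1]) / 2.0))
print(f"Estimated solvation free energy: {delta_G:.2f} kcal/mol")
```

In practice you would also check convergence of each window and compare against the Bennett Acceptance Ratio estimate, which typically has lower variance for the same sampling effort.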
Another important area from the survey was improving the prediction of properties for electronic materials. In collaboration with TiberLab and the University of Bremen, we have added non-equilibrium Green's functions to DFTB+, enabling the calculation of electron transport. This addition has also meant extending Materials Visualizer with a set of tools for building electrodes and transport devices. Coupled with the new DFTB+ task, you can calculate transmission and current-voltage curves for a range of molecular electronic devices.
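The connection between a transmission function and a current-voltage curve is the Landauer formula, I = (2e/h) ∫ T(E) [f_L(E) − f_R(E)] dE. The sketch below is a toy single-level model in plain Python, not output from DFTB+ or any real NEGF calculation; the resonance position and width are made up for illustration.

```python
import numpy as np

# Toy Landauer current: I = (2e/h) * integral T(E) [f_L - f_R] dE.
# The Lorentzian transmission is a hypothetical single resonant level,
# NOT a DFTB+ result.
E_CHARGE = 1.602176634e-19   # C
H_PLANCK = 6.62607015e-34    # J s
KT = 0.025                   # eV, roughly room temperature

def fermi(E, mu):
    """Fermi-Dirac occupation at chemical potential mu (eV)."""
    x = np.clip((E - mu) / KT, -60.0, 60.0)  # clip to avoid overflow
    return 1.0 / (1.0 + np.exp(x))

def transmission(E, E0=0.3, gamma=0.05):
    """Toy transmission: one resonant level at E0 with width gamma (eV)."""
    return gamma**2 / ((E - E0)**2 + gamma**2)

def current(bias_V, n=4001):
    """Current in amps for a symmetrically applied bias (volts)."""
    mu_L, mu_R = bias_V / 2.0, -bias_V / 2.0
    E = np.linspace(-2.0, 2.0, n)  # energy grid, eV
    y = transmission(E) * (fermi(E, mu_L) - fermi(E, mu_R))
    integral_eV = float(np.sum((E[1:] - E[:-1]) * (y[1:] + y[:-1]) / 2.0))
    # The extra factor of E_CHARGE converts the eV-space integral to joules.
    return (2.0 * E_CHARGE / H_PLANCK) * integral_eV * E_CHARGE

for V in (0.2, 0.5, 1.0):
    print(f"V = {V:.1f} V  ->  I = {current(V):.3e} A")
```

The current rises sharply once the bias window opens past the resonance, which is the qualitative shape you see in molecular-device I-V curves.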
These are just some of the highlights of the release. I don’t even have space to mention new forcefields, barostats, mechanical properties from DMol3, optimization of excited states in CASTEP or the uber-usability enhancement of “copy script”. I guess you will just have to listen to the webinar and find out more about how Materials Studio 7.0 can accelerate your modeling and impact your work.
Ever wondered what molecular vibrations would sound like? JC Jones and Andy Shipway in Israel pondered this question and decided to use molecular dynamics to investigate. You can find more information about their project here and listen to the ethereal sounds of a journey along the myelin protein.
I always like to hear about people taking software like Materials Studio and using it to create something completely outside of the original intent. When we introduced MaterialsScript to control parts of Materials Studio, I played my own games around the theme of Winter resulting in the Season's Greetings set of scripts. However, the sort of work linked above takes abstraction to a totally new level!
The question is, what else do you use Materials Studio for that isn't in our user stories?
As a Product Manager, I am always interested in how we can use new technologies to improve the experience of working with Materials Studio. After the Nintendo Wii was released, I pondered whether we could use the Wii Remote to do things like rotate and zoom into the structure. However, there were several reasons why I didn't pursue this – mainly that Nintendo does not really support using the Wii Remote in this way. Also, I thought it would be fairly limited in terms of what it could do.
However, recently I saw that Microsoft had released the Kinect for Windows SDK for download. Although only in beta at the moment, and limited to non-commercial use, it does provide the capability to interface a Kinect with Windows to control applications. With gestures and voice, this broadens the capacity for interaction with the software. You could still do simple tasks like zoom, pan and rotate structures – that would be pretty cool and some smart people have already worked with Jmol to enable this using an open source implementation. But how about taking it further and using a gesture like touch to select different atoms, swipe to delete objects, create bonds by dragging between two atoms, etc.? What sort of tasks do you see being useful in a gesture-based world?
Of course, I can always get carried away with cool new ideas but back in the real world, I wonder how useful this would actually be. Do you think there is a future in gesture based control for molecular modeling applications?
I was recently chatting with a customer about studying phase morphology of fuel cell membranes between enclosed surfaces. He pointed me to this paper where Dorenbos et al. used mesoscale methods to model a surface interacting with a polymer. They varied the interaction potential between a coarse-grained representation of a Nafion® 1200 polymer membrane and the surface to mimic different levels of hydrophobicity. However, it seems that the surface of the carbon catalyst support would not necessarily have a single homogeneous interaction.
I started to think about whether it would be easy to build a mesoscale model where I had some idealized surfaces but with different interactions between a diblock polymer and the surface and whether this would affect the phase morphology.
In a simple example, I used a diblock copolymer with a fairly strong repulsive interaction between the two blocks. Initially, I performed Mesocite calculations in the bulk phase to check that I would get the expected formation of a lamellar phase (Fig 1a). I then introduced a surface which had no net interaction with the diblock copolymer (Fig 1b) and one where one of the blocks has a repulsive interaction with the surface (Fig 1c). As expected, the introduction of the surface disrupts the lamellar structure and it forms a perforated lamellar phase. When the surface is modified so that one block repels the surface, the lamellar phase forms again but this time aligned along the surface.
Figure 1: Bulk (a), non-interacting surface (b), and surface with repulsion of one block (c). Only one of the blocks from the diblock is displayed, with the other hidden to see the shape of the phase.
I also introduced two different interactions on the surface. In this case, one of the diblock components is repelled by the blue parts of the surface and the other block by the green parts of the surface. When the surface is split into two strips, a perforated lamellar phase forms with the blocks in the diblock aligning with the surfaces (Fig 2a). However, when the surface is patterned with four strips, the diblock still forms a lamellar phase but this time the blocks align perpendicular to the strips (Fig 2b). Finally, when a circular interaction template is used, another perforated lamellar phase forms but this time the blocks try to align with the circles (Fig 2c).
Figure 2: Two strips (a), four strips (b), and a single circular interaction site (c). Only one of the blocks from the diblock is displayed, with the other hidden to see the shape of the phase.
To attempt to characterize the effects of the surface, I calculated the order parameter for each system using a script from the Community. I found that as the surfaces were introduced, the order dropped from 0.19 to between 0.16 and 0.18. Although the change is relatively small, it does indicate the surfaces are having an effect on the order of the system. You can see the evolution with time of the different morphologies in the video below.
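For readers who have not met orientational order parameters, a common generic form is S = ⟨(3 cos²θ − 1)/2⟩ taken over bond vectors relative to a director: S is 1 for perfect alignment and about 0 for an isotropic melt. The Community script may define its order parameter differently; this is just an illustrative Python sketch with random made-up vectors.

```python
import numpy as np

# Generic orientational order parameter for bead-bead bond vectors:
# S = <(3 cos^2(theta) - 1) / 2> relative to a chosen director.
# The random vectors below are an isotropic test set, not data from
# any Mesocite run.
rng = np.random.default_rng(0)
bonds = rng.normal(size=(1000, 3))
bonds /= np.linalg.norm(bonds, axis=1, keepdims=True)

director = np.array([0.0, 0.0, 1.0])
cos_theta = bonds @ director
S = float(np.mean(1.5 * cos_theta**2 - 0.5))
print(f"Order parameter S = {S:.3f}")  # close to 0 for an isotropic set
```

A lamellar phase with well-aligned blocks would give S well above zero along the lamellar normal, which is why even the modest drop from 0.19 is worth noting.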
If you want to try calculations like this, I have posted more technical detail on how to generate the input structures, including templates for the structures used above, into the Materials Studio Community.
This rather superficial study demonstrates that the effect of surface templating or lithography can be simulated using mesoscale structures. Recent changes in the mesostructure building tools in Materials Studio have made it even simpler to construct complex patterned structures. This is a very simple study but what sort of surfaces and systems would you like to simulate?
As the winter vacation looms nearer, I have been thinking about how I am going to get to Scotland for Christmas. At the same time, a colleague asked if I was going to write a seasonal script again this year. Merging those two streams of thought, I came up with a new season's greetings script which you can download, or see a movie of, from here. Note the script requires Materials Studio 5.5 to run.
I wondered initially about how to make snow fall down the screen. I started off using shearing in DPD in Mesocite but that didn’t really give me the flexibility I needed. One of my colleagues, Neil Spenley, recommended trying out the new electric fields that we added in Materials Studio 5.5. This was a great idea as it enabled me to switch the field directions with a lot of flexibility – hence the “wind” blows across the screen.
The next issue was how to keep the snow falling. As I only have a fixed number of snow beads in my structure, I had to make them fall from the sky repeatedly. To do this, I used a soft, short-range potential which, due to the magic of periodic boundary conditions, enables the beads to fall through the ground and back in through the top of the screen. In the last part of the script, I used close contacts to dock the snow onto the ground.
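The "fall through the floor, reappear at the top" trick is nothing more exotic than periodic wrapping of coordinates. A minimal Python sketch (arbitrary illustrative units, not MaterialsScript):

```python
# Periodic-boundary trick: a bead that drifts below the bottom of the
# cell re-enters at the top. Units and values are arbitrary.
def wrap(z, box_height):
    """Map a coordinate back into the periodic cell [0, box_height)."""
    return z % box_height

box = 100.0
z = 5.0
velocity = -2.0  # the "field" pulls the snow bead downwards

positions = []
for step in range(10):
    z = wrap(z + velocity, box)
    positions.append(z)

print(positions[:5])  # the bead crosses z = 0 and reappears near the top
```

Python's `%` operator already returns a non-negative result for a positive modulus, which is exactly the wrapping behaviour needed here.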
The final challenge was to make the greeting appear. Initially, I fixed the bead positions but this makes the beads appear all the time which was a bit of an issue! Finally, I realised that I could use the restraints energy, also introduced in Materials Studio 5.5, to move the beads to anchor positions that I created from a pre-defined set of coordinates.
After this, it was just a matter of tuning the parameters. I experimented with a lower electric field strength which allowed the beads to move side to side a little. However, when they hit the Lennard-Jones potential on the roof beads, they bounced back up into space – not realistic!
After much playing, I finally got something I was happy with – so Seasons Greetings everyone! Now I have to think about next year...
As I touched on when I was musing about copy and paste, it never ceases to amaze me how much compute power has increased since I started doing modeling 15 years ago. Back then, I could minimize a few hundred atoms with classical mechanics and watch the geometry update every few seconds after each iteration. Now, I would do a few thousand molecules of the same size in a few seconds.
Of course, as computers get faster, the sorts of calculations that customers want to run also get larger and more “realistic.” The explosive growth of multi-core architectures has led to an increased focus on parallelization strategies to make the most efficient use of the CPU power. Traditionally, the main focus with high performance codes has been on getting them to run in fine-grained parallel as efficiently as possible, scaling a single job over hundreds of CPUs. The issue here is that, if you are only studying a medium-sized system, you will quickly get to a point where communication between nodes costs far more than the time in the CPU, leading to poor performance.
The solution, if you are doing multiple calculations, is to run them in a coarse-grained, or embarrassingly parallel, approach. In this mode, you run each individual calculation on a single CPU but occupy all your CPUs with multiple calculations. Materials Studio can do this but it can be monotonous, although queuing systems get you part of the way. The other issue is that when you start doing many calculations, you also get lots of data back to analyze. Now you not only need a way to submit and control all the jobs, but also to automatically extract the results you want.
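The coarse-grained pattern itself is simple to express. Here is a hedged Python sketch using a process pool, where `run_job` is a hypothetical stand-in for launching one calculation and parsing its result (Pipeline Pilot does this with its own protocol machinery, not this code):

```python
from concurrent.futures import ProcessPoolExecutor

def run_job(structure_id):
    # Hypothetical placeholder: in reality this would submit one
    # simulation and parse the property of interest from its output.
    return structure_id, structure_id ** 2  # fake "result"

if __name__ == "__main__":
    jobs = range(8)
    # One independent job per worker; all cores stay busy with no
    # inter-process communication, hence "embarrassingly parallel".
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = dict(pool.map(run_job, jobs))
    print(results)
```

Because each job is independent, scaling is essentially linear in the number of cores, which is exactly the regime where fine-grained parallelism of a single medium-sized job breaks down.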
Luckily, the Materials Studio Collection in Pipeline Pilot has both of these functionalities. You can use the internal architecture to submit jobs in both fine and coarse grained parallel (or a mixture of the two if you have a nice cluster!). You can also use the reporting tools to extract the data you want and display it in a report.
A great example of this is working with classical simulations of polymers for calculation of solubility parameters. Here, you need to build several different cells to sample phase space and then run identical calculations on each cell. You can quickly develop the workflow and then use the coarse-grained parallel options to submit a calculation to each CPU. When the calculations are finished, a report of the solubility parameter can be easily generated automatically.
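The aggregation step at the end of such a workflow is essentially the Hildebrand relation, δ = √(CED), where the cohesive energy density is cohesive energy per unit volume, averaged over the independently built cells. A sketch with invented per-cell values (not real Forcite output):

```python
import math

# Hildebrand solubility parameter: delta = sqrt(CED).
# The per-cell cohesive energy densities below (J/cm^3 == MPa) are
# invented placeholders for several independent amorphous cells.
ced_mpa = [320.0, 305.0, 298.0, 312.0, 309.0]

mean_ced = sum(ced_mpa) / len(ced_mpa)
delta = math.sqrt(mean_ced)  # units of MPa**0.5
print(f"Hildebrand solubility parameter: {delta:.2f} MPa^0.5")
```

The spread across cells is also worth reporting, since it indicates how well the handful of amorphous cells has sampled phase space.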
So, whilst there is a definite need for an expert modeling client like Materials Studio, the Materials Studio Collection really does free you up to focus on problem solving (using Materials Studio!), not waiting for calculations to finish and then having to monotonously create reports!
As you may have guessed from my previous post, I am feeling in a bit of a philosophical mood at the moment. I guess this has to do with seeing Materials Studio reach its 10-year anniversary. Yes, it was back in June 2000 that Materials Studio 1.0 was first released!
I still remember those early days very clearly. Materials Studio 1.0 included the crystal builder and polymer building tools, and Discover, Amorphous Cell and Reflex modules. There was no quantum mechanics (2.0), no mesoscale (2.1 and 2.2), and no QSAR (3.0). However, even then, it was a pretty special product. I remember one of my colleagues and myself doing early testing of Discover and thinking that the calculation had failed – it was actually running so fast that we didn’t realize it had completed!
Back then, Materials Studio 1.0 ran on Windows 95, Windows 98, Windows NT, Red Hat Linux 6.0, and SGI IRIX 6.2. I am surprised we didn’t support Windows 3.1 as well! This rapidly changed as Windows 2000 was released and we had a widely adopted Windows platform. And it has been onwards and upwards since then with support for 64-bit Linux and current development of 64-bit Windows server executables.
And now we have the Materials Studio Collection as well as Materials Studio. This gives the opportunity to deliver the vision that first started with the release of Materials Studio – enabling non-experts to use modeling and simulation tools. Through the capabilities of Pipeline Pilot for web and Microsoft SharePoint deployment of protocols, we have the ability to offer simplified interfaces for property calculations but which still use the expert tools behind the scenes.
Over the years, I have been constantly amazed by what our customers and their imagination achieve with Materials Studio. I fully expect this to continue with the Materials Studio Collection and future versions of Materials Studio.
As Timbuk 3 once sang, “The future’s so bright, I gotta wear shades”. I’ll be getting my new sunglasses next week.
I was just creating a slide the other day for the new Materials Studio Collection. As I wrote about being able to automatically generate meaningful reports directly from within Pipeline Pilot, I added a bullet point about “not having to copy and paste” anymore. I re-read this line a few times as something about it was ringing some bells.
I remembered that, when we launched Materials Studio 10 years ago, one of the big advantages was that you could copy and paste charts, structures and values directly into a report. Our previous software packages had always been on Silicon Graphics IRIX machines which had no interaction with Windows machines. Even getting a simple screenshot of a molecule into your favorite Windows word processing package required you to capture the screen on the SGI, transfer the image over to your Windows machine using FTP, and finally insert it into your report.
It seems funny that we used to go to all these lengths and spend so much time doing something that can now be totally automated using the Materials Studio Collection. Want to change the look of all the tables in your report? Modify the stylesheet. Need to plot the cell volume vs pressure to generate phase diagrams for multiple structures? Add a property reader and plot the charts automatically.
As a Product Manager, it is also interesting to think about how user requirements have changed. Back in the days of SGI, creating a report didn’t take much time compared with the actual computational time, so it was easy to justify the transferring of documents and so on. As compute times decreased, and the number of calculations being run increased, copy and paste made it simpler to create reports. Now that compute times are fast and people are running high-throughput in-silico calculations with thousands of structures or multiple calculations, reports need to be generated automatically, otherwise you would spend far longer creating the reports than running calculations.
Has the requirement really changed? I don’t think so. As a scientist, what I really want to do is spend as little time as possible generating a report so I can focus on the interesting parts – understanding and improving the system.
Want to know more about the Materials Studio Collection? I will be giving a webinar on Wednesday 23rd June outlining the new features and benefits.
In the last of the series of my blogs on Materials Studio 5.0 functionality, I will be writing about new functionality in the mesoscale area. Back in Materials Studio 4.4, we developed a new module called Mesocite. Mesocite is a module for doing coarse-grained molecular dynamics where the elementary particle is a bead. In coarse-grained molecular dynamics, a bead typically represents several heavy atoms. This has advantages over classical molecular dynamics such as Forcite Plus as you can access longer time and length scales.
In Materials Studio 5.0, we added the capability to do Dissipative Particle Dynamics (DPD) to Mesocite. DPD is a very coarse-grained dynamics approach where the bead can represent many atoms or even molecules. We already have a module which can do DPD in Materials Studio but this has limited ability to be extended. By developing the new DPD in Mesocite, we could take advantage of the underlying MatServer environment to easily extend DPD to run in parallel and work with MaterialsScript amongst other things.
One issue we faced is that the legacy DPD tool works in reduced units whereas MatServer requires physical units. The use of reduced units is fairly standard in DPD; however, it makes it more difficult to relate the results back to experimentalists. Therefore, we thought that switching to physical units would be a good idea. However, there were still questions as to how customers would work with DPD in physical units. We asked a small focus group of customers very early in the release how they would like to parameterize DPD calculations. All agreed that getting the results in physical units was preferable, but they still wanted to set up the calculations in reduced units as they have lots of historical data they want to re-use. So, we have a new user interface which allows setup in either reduced or physical units but then converts to physical units for the calculation!
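The conversion itself follows the usual DPD conventions: the energy unit is k_B·T, the length unit is the cutoff radius r_c set by the bead volume and reduced density, and the intrinsic time unit is τ = r_c·√(m/k_B·T). A hedged Python sketch for a water-like mapping (the coarse-graining degree, bead density and water volume are illustrative assumptions, and published mappings often rescale time further to match diffusion):

```python
import math

# Sketch of converting DPD reduced units to physical units for a
# water-like system. N_m (water molecules per bead), rho (reduced
# density) and v_water are illustrative assumptions.
k_B = 1.380649e-23        # J/K
T = 298.0                 # K
N_m = 3                   # water molecules per bead (assumption)
v_water = 30e-30          # m^3, approximate volume of one water molecule
rho = 3.0                 # reduced bead number density (common choice)

energy = k_B * T                              # DPD energy unit, J
length = (rho * N_m * v_water) ** (1.0 / 3.0) # cutoff radius r_c, m
mass = N_m * 18.0 * 1.66054e-27               # bead mass, kg
tau = length * math.sqrt(mass / energy)       # intrinsic time unit, s

print(f"r_c = {length * 1e9:.3f} nm")
print(f"tau = {tau * 1e12:.3f} ps")
```

With these assumptions r_c comes out around 0.65 nm and τ around 3 ps, which is the kind of translation the new interface does behind the scenes so experimentalists see nanometres and picoseconds rather than reduced units.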
When a new piece of functionality is added to Materials Studio, I like to add a tutorial on how to apply the software, what sort of values customers should use, and how to get the most out of it. For the new DPD functionality, I looked at several papers before settling on an application by Groot and Rabone looking at the effect of non-ionic surfactants on membrane properties. This interested me as it demonstrates the strength of mesoscale methods in looking at varying concentrations of different components and seeing the effect on morphology. I also realized that I could use some of the Mesocite analysis tools to probe the system for properties such as concentration profiles and the diffusivity of the beads across the membrane. These mapped really well onto the original results and produced what I hope is an interesting tutorial.
There was another reason I chose this paper too - Groot and Rabone also looked at the effect of strain on the membrane. This wasn’t possible with the old DPD module but, using some MaterialsScript, I could strain the system and then calculate the surface tension. As I like to dabble with MaterialsScript, the lure of this was irresistible and the script I made is available from the Accelrys Community website.
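For a membrane lying in the xy plane, the surface tension is conventionally obtained from the anisotropy of the pressure tensor, γ = (L_z/2)·(P_zz − (P_xx + P_yy)/2), with the factor of two accounting for the two interfaces in a periodic cell. A sketch with invented tensor components (this is the generic formula, not the script from the Community website):

```python
# Surface tension from the pressure tensor for an interface normal to z:
#   gamma = (L_z / 2) * (P_zz - (P_xx + P_yy) / 2)
# The box height and tensor components below are invented placeholders.
L_z = 12.0e-9                              # box height, m
P_xx, P_yy, P_zz = 9.8e6, 9.6e6, 10.4e6    # time-averaged pressures, Pa

gamma = 0.5 * L_z * (P_zz - 0.5 * (P_xx + P_yy))
print(f"Surface tension: {gamma * 1e3:.3f} mN/m")
```

Repeating this at several applied strains is what gives the surface tension versus area curve that Groot and Rabone reported.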
One minor issue was that the system sizes and relative amounts were not clear from the paper. Luckily there are several ex-Unilever people at Accelrys, so one of my colleagues, Dr Neil Spenley, contacted one of the authors, Dr Robert D. Groot. I was also lucky that Dr Groot obviously hoards his old research work and, a day later, I had the original DPD input file in my hands!
The results from the strain calculations also show the same trends as those reported by Groot and Rabone so I was pretty happy with this work.
So that wraps up my blogs on Materials Studio 5.0. I hope to have given you an insight into some of the processes that go into making a new version of Materials Studio.
As I mentioned in my last blog on Materials Studio 5.0, as with all releases of Materials Studio, there is always a wide range of functionality added in a single release. Last time I focused on some of the functionality in the classical simulations area, but this time I want to focus on new developments in the quantum mechanics area.
One of the main projects is the prediction of Raman spectra using CASTEP. This is highly anticipated and requested functionality and really makes CASTEP a unique tool for predicting a wide range of spectra, from simple infra-red to core level spectroscopy (more on that later!).
CASTEP is an interesting code as it is not developed solely in house but in collaboration with a highly motivated and scientifically brilliant team who call themselves the CASTEP Developers Group (CDG). We have had a long and very successful relationship with these guys and I would consider them a top-rated example of how academic and commercial collaboration can produce high quality results which benefit all users.
Anyway, back to the Raman functionality. We were very fortunate that this was added into the release very early on, so we managed to get some alpha feedback as early as March. At that time the functionality wasn’t running in parallel, so the customer actually ran the calculation on a laptop and just left it going for several days (or more)! Also, one of our quantum mechanics supremos, Victor Milman, has already produced a paper (which has just been accepted). By the time we got around to the beta testing, we had shown some results at our Korean and Japanese User Group Meetings and had customers salivating at the prospect of trying it. This has been one of the most pleasant beta tests I have been involved with, as nearly all the customers have had real success in predicting the Raman spectra of in-house materials. Although one comment from nearly all has been that the calculations are pretty computationally expensive – time to use those cores!
Talking of cores, we also released core level spectroscopy functionality in Materials Studio 4.4. This allows you to simulate EELS or ELNES spectra. As often happens, we didn’t have time to fit all the bells and whistles we wanted into that release so we extended the tools in 5.0 to include smearing for the spectra which improves agreement with experimental spectra. Also, one of my colleagues took advantage of another piece of new functionality, exposure of CASTEP through MaterialsScript, to create an extensive script which automates several core level spectra calculations to improve the overall agreement with experiment – neat stuff!
Of course, there are many other enhancements in the quantum mechanics tools including major performance improvements for ONETEP, new elements in AM1* for VAMP, and extensions to QMERA that have been delivered from the Nanotechnology Consortium. If you want to read more about these, check out the "What’s new in Materials Studio 5.0" on our website.
Next time, I want to jump a few size scales up to the mesoscale.
We are now rolling into the last few days of the release of Materials Studio 5.0. It has been a pretty long but very exciting and challenging release as we really started to use sprint releases and agile techniques. From a customer perspective this meant that, from February onwards, early versions of the software have been going to some customers to get feedback.
During the 5.0 release cycle, we have been focusing on several large projects covering the range of technologies from quantum mechanics through classical simulations to mesoscale modelling. There is far too much to blog about in one post so I will break this up into a few posts.
First, let’s turn back the clock to the start of the previous release! During the development of Materials Studio 4.4, we began work on the parallelization of Forcite Plus. There was no intention to release this functionality in 4.4 but we got tantalisingly close! However, we didn’t have time to include core functionality like COMPASS support, and more validation was needed. During the 5.0 release we finished off the functionality and worked closely with HP to test Forcite Plus scaling, finding some interesting “features”. One of these was that the Ewald summation method did not scale well above 16 CPUs, which at first we couldn’t understand. After some head scratching, the classical simulations guys worked out that we had optimized Ewald so much for scalar calculations that this had a negative effect on scaling in parallel. We didn’t want to lose the scalar performance, so we added a switch that changes the algorithm to get good parallel scaling. One of our beta test customers has now seen pretty linear scaling for Ewald sums up to 128 CPUs!
Another big project in 5.0 has been the development of a new Amorphous Cell. Amorphous Cell is a core tool for polymer simulations enabling the generation of bulk amorphous structures and has been available in Materials Studio since 1.0. However, it needed a revamp as it had some serious limitations, including failing to build some pretty basic structures because it was tied to a specific forcefield. In fact, a few years back a customer memorably asked me to build a box of HCl and, unknowingly, I fell into the trap. It failed as the system was not parameterized, which they obviously knew!
Before starting the development, we had long discussions about the best approach to creating this essential tool and decided to re-use the packing technology from Sorption. This has some great advantages as we could do really interesting things like pack into isosurfaces. It also gave the most requested enhancement – removing the dependency on the CVFF forcefield so now you can build any amorphous material. Again, early feedback suggested that having weight percent displayed for each component was really useful so this was added in. Our beta testers were equally happy as they could use the new packing task to build materials that were previously impossible and others could pack polymers with side chains at real densities rather than having to build at a low density and compress. Of course, the first structure I built was a box of HCl – successfully this time!
Enough for now, later I’ll blog on some of the fantastic new quantum mechanics functionality.