Computing resources

The HEP community's demand for computing resources varies throughout any given year, with cycles of peaks and valleys driven by holiday schedules, conference dates and other factors. The classical method of provisioning resources at the facilities that supply them has drawbacks, such as potential overprovisioning. Grid federations like the Open Science Grid offer opportunistic access to the excess capacity so that no cycle goes unused. However, as the appetite for computing grows, so does the need to maximize cost efficiency by developing a model that dynamically provisions resources only when they are needed.

Computing resources are delivered by a variety of systems: local batch farms, grid sites, private and commercial clouds, and supercomputing centers. Historically, expert knowledge was required to access all of these resources and use them concurrently and efficiently.
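
To make that concrete, here is a minimal Python sketch of the kind of uniform interface that hides provider differences from the user. All class and function names are hypothetical illustrations, not HEPCloud's actual API.

    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        cores: int

    class Provisioner:
        """Common interface; each backend hides its own access mechanics."""
        def submit(self, job):
            raise NotImplementedError

    class LocalBatch(Provisioner):
        def submit(self, job):
            # In reality this might wrap a local batch submission command.
            return f"local-batch accepted {job.name} ({job.cores} cores)"

    class CommercialCloud(Provisioner):
        def submit(self, job):
            # In reality this might request virtual machines from a cloud API.
            return f"cloud accepted {job.name} ({job.cores} cores)"

    def route(job, backends):
        # Trivial illustrative policy: hand the job to the first backend.
        return backends[0].submit(job)

    print(route(Job("nova-reco", cores=8), [LocalBatch(), CommercialCloud()]))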

For cost effectiveness, it is imperative that HEP computing facilities be able to rapidly expand and contract the resources they provision (a worked cost sketch follows the list below):

–elasticity, by utilizing "rental" resources

–paying only for what is used, when it is used
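
As a back-of-the-envelope illustration of why elasticity pays off, the sketch below compares owning enough cores for peak demand year-round with owning only the steady base and renting the burst. All capacities and prices are made-up placeholders, not measured values.

    PEAK_CORES = 20_000          # assumed peak demand (placeholder)
    BASE_CORES = 8_000           # assumed steady-state demand (placeholder)
    PEAK_WEEKS = 10              # assumed weeks per year spent at peak
    OWNED_CORE_YEAR = 100.0      # assumed cost of one owned core-year, USD
    RENTED_CORE_HOUR = 0.02      # assumed rental price per core-hour, USD

    HOURS_PER_WEEK = 168

    # Option 1: provision locally for the peak, all year round.
    static_cost = PEAK_CORES * OWNED_CORE_YEAR

    # Option 2: own the base, rent the difference only during peak weeks.
    elastic_cost = (BASE_CORES * OWNED_CORE_YEAR
                    + (PEAK_CORES - BASE_CORES)
                      * PEAK_WEEKS * HOURS_PER_WEEK * RENTED_CORE_HOUR)

    print(f"static: ${static_cost:,.0f}   elastic: ${elastic_cost:,.0f}")

With these placeholder numbers, provisioning for the peak costs $2,000,000 per year while the elastic option costs about $1,203,200; the gap widens as the peak grows sharper relative to the base.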

[Figure: NOvA jobs in the queue at FNAL]

As the figure above shows, usage is not steady-state. Computing schedules are driven by real-world considerations (detector, accelerator, …) but also by ingenuity: this is research and development of cutting-edge science.

Cloud provisioning is becoming a real contender for meeting future computing needs. The CMS Amazon Web Services Investigation demonstrated that the elasticity of cloud provisioning is sufficient to address the bursting needs of HEP computing, and that the approach is compatible with the operational procedures of a number of diverse experiment workflows. HEPCloud will provide important on-demand capabilities and become a factor in experiment resource planning.
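
For flavor, here is a minimal sketch of what requesting burst capacity on the AWS Spot market can look like using the boto3 library. The bid price, instance count, image ID and instance type below are placeholders, not values from the CMS investigation.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_instances(
        SpotPrice="0.10",                  # max bid per instance-hour (placeholder)
        InstanceCount=100,                 # size of the burst (placeholder)
        Type="one-time",
        LaunchSpecification={
            "ImageId": "ami-00000000",     # placeholder worker-node image
            "InstanceType": "c4.2xlarge",  # placeholder instance type
        },
    )

    for request in response["SpotInstanceRequests"]:
        print(request["SpotInstanceRequestId"], request["State"])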

Commercial cloud computing now offers more value at lower cost than in the past.

[Figure: Price of one core-year on Commercial Cloud]
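
Converting the prices quoted in such figures is simple arithmetic: one core-year is the hourly core price times the hours in a year. The sketch below uses a placeholder price, not a quoted one.

    HOURS_PER_YEAR = 24 * 365              # 8760 hours
    price_per_core_hour = 0.012            # placeholder USD per core-hour

    price_per_core_year = price_per_core_hour * HOURS_PER_YEAR
    print(f"one core-year costs about ${price_per_core_year:,.2f}")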

High-performance computing (HPC) centers are also becoming more open to the types of data-intensive computing performed in high-energy physics. The National Strategic Computing Initiative targets exascale computing as the next computing frontier, enabled by HPC centers. Our first integration goal is the National Energy Research Scientific Computing Center (NERSC), the primary scientific computing facility for the Office of Science in the U.S. Department of Energy.
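
NERSC schedules work with Slurm, so steering jobs there ultimately means submitting batch scripts. The sketch below, with placeholder resource requests and a placeholder executable name, shows one way that can be driven from Python; it is an illustration, not HEPCloud's actual integration path.

    import subprocess
    import textwrap

    # Placeholder batch script; node count, walltime and queue are assumptions.
    batch_script = textwrap.dedent("""\
        #!/bin/bash
        #SBATCH --nodes=4
        #SBATCH --time=02:00:00
        #SBATCH --qos=regular
        srun ./hep_workflow    # placeholder executable
    """)

    with open("job.sbatch", "w") as f:
        f.write(batch_script)

    # Slurm's sbatch prints "Submitted batch job <id>" on success.
    subprocess.run(["sbatch", "job.sbatch"], check=True)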