The ScienceCloud is a multi-purpose computing and storage infrastructure of the University of Zurich.

It is an Infrastructure-as-a-Service (IaaS) solution specifically targeted at large-scale computational research; it is based on the OpenStack cloud management software and on Ceph for the underlying storage infrastructure.

As a convenience for researchers who prefer to run their code on a pre-configured cluster, S3IT centrally manages multiple CPU and GPU partitions as part of ScienceCloud.

Why would you need to run on ScienceCloud?

Researchers need access to computing and data infrastructure to solve a wide variety of problems:

  • Data analysis
  • Statistical analysis
  • Simulations and model building
  • Parameter studies
  • Visualization
  • Image processing
  • Other emerging research services

ScienceCloud empowers you to provision a dedicated and customized research infrastructure to store data and run large-scale analysis.

The driving principle behind the platform is to allow you to easily adapt your infrastructure to the changing requirements of your research.

Moreover, access to ScienceCloud is accompanied by service and support from the Research IT specialists of S3IT, not only removing the overhead of running local resources but also providing you with access to new skills and expertise.
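Since ScienceCloud is based on OpenStack, access to its APIs is typically configured through a standard clouds.yaml file read by the OpenStack client tools. The sketch below shows the general shape of such a file; every value in it is a placeholder, not an actual ScienceCloud endpoint or credential — use the details provided by S3IT for your project.

```yaml
# ~/.config/openstack/clouds.yaml -- standard OpenStack client configuration.
# All values below are placeholders supplied for illustration only.
clouds:
  sciencecloud:
    auth:
      auth_url: https://example.cloud.uzh.ch:5000/v3   # placeholder endpoint
      username: your-username
      password: your-password
      project_name: your-project
      user_domain_name: Default
      project_domain_name: Default
```

With such a file in place, the standard OpenStack CLI can address the cloud by name, e.g. `openstack --os-cloud sciencecloud server list`.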

Usage Policy

The Regulations of the Use of IT-Resources at UZH are binding for all users; acceptance of these policies is implicit in the use of the system.

How to get access and cost contributions

Access to the ScienceCloud is open to all researchers at the University of Zurich.

Usage of ScienceCloud is subject to a cost contribution. Because ScienceCloud is largely subsidized by UZH, these contributions are very affordable. Contact S3IT to find out how the cost contribution model matches your specific use case.

ScienceCloud Resources

Current grand totals are shown in the tables below:


nodes    virtual CPUs    total RAM
434      18'760          101 TB


The GPU nodes are equipped with NVIDIA Tesla P4 GPUs. Currently only a small number of nodes are available for testing.

nodes    GPUs    virtual CPUs    total RAM
2        4       96              0.5 TB


Block storage served by Cinder and object storage served by Swift:

type             raw capacity    usable capacity
Block storage    5.2 PB          1.7 PB
Object storage   1.7 PB          0.8 PB with replica-2 (or 1.2 PB with ec104)
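The gap between raw and usable capacity follows from the data-protection scheme: N-way replication divides raw capacity by N, while erasure coding with k data and m parity chunks keeps a k/(k+m) fraction. A minimal sketch of this arithmetic, assuming "ec104" denotes a 10+4 erasure-coding profile (which matches the rounded figures in the table above):

```python
def usable_replica(raw_pb: float, replicas: int) -> float:
    """Usable capacity under N-way replication: each byte is stored N times."""
    return raw_pb / replicas

def usable_ec(raw_pb: float, data_chunks: int, parity_chunks: int) -> float:
    """Usable capacity under erasure coding: k data chunks out of k+m total."""
    return raw_pb * data_chunks / (data_chunks + parity_chunks)

# Object storage pool: 1.7 PB raw (assumed ec104 = 10 data + 4 parity chunks).
print(round(usable_replica(1.7, 2), 2))   # replica-2 -> 0.85
print(round(usable_ec(1.7, 10, 4), 2))    # ec104     -> 1.21
```

The computed values (0.85 PB and 1.21 PB) agree with the table's rounded 0.8 PB and 1.2 PB figures.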


Every compute node has a non-blocking, redundant, 10 Gbps link to the internal network. This network is used to access the underlying storage infrastructure and to provide network connectivity to the virtual machines.

The uplink to the University network is a redundant 20 Gbps link.

Want to know more?

Do not hesitate to contact us by sending an email to