sciCORE currently operates a high-performance computing infrastructure divided into three environments, each tailored to specific scientific needs. The infrastructure comprises more than 200 nodes interconnected via InfiniBand and 100G Ethernet, more than 13,500 CPU cores providing over 70 TB of distributed memory, and a high-performance parallel cluster file system (GPFS) with a disk-storage capacity of more than 11 PB. The technical details are provided in the tables below.
The sciCORE cluster is upgraded on a regular basis to match the growing needs of the life sciences and of demanding parallel applications. Today, almost 30 million CPU hours are consumed per year by our roughly 800 users, amounting to more than 14 million jobs run per year.
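To put these usage figures in perspective, the minimal Python sketch below derives average per-job and per-user values from the numbers quoted in the paragraph above. The averages are back-of-the-envelope illustrations only, not published sciCORE statistics.

```python
# Usage figures as quoted above (approximate).
cpu_hours_per_year = 30_000_000  # almost 30 million CPU hours per year
jobs_per_year = 14_000_000       # more than 14 million jobs per year
users = 800                      # roughly 800 users

# Derived averages (illustrative only, not official statistics).
print(f"CPU hours per job:  {cpu_hours_per_year / jobs_per_year:.2f}")  # ~2.14
print(f"CPU hours per user: {cpu_hours_per_year / users:,.0f}")         # 37,500
print(f"Jobs per user:      {jobs_per_year / users:,.0f}")              # 17,500
```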
Cluster | Total nodes | Total cores | Total RAM | Total GPUs | Inter-connect | Total Disk |
sciCORE | 223 | 14528 | 79 TB | 128 | Eth 100G, InfiniBand | 16 PB |
Cluster | Total nodes | Total cores | Total RAM | Inter-connect | Total Disk |
sciCORE | 12 | 304 | 4.4 TB | InfiniBand | 55 TB |
Cluster | Total nodes | Total cores | Total RAM | Total GPUs | Inter-connect | Total Disk |
sciCORE+ | 16 | 1104 | 6.8 TB | 8 | Eth 100G | 1.8 PB |
Protocol | Total Disk |
NFS, SMB | 3.9 PB |
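For readers who want the combined picture, the short Python sketch below sums the per-environment figures from the tables above. The environment labels in the code are placeholders (the tables do not name the environments distinctly), the second environment is assumed to have no GPUs since its table lists no GPU column, and the derived totals and ratios are illustrative rather than official sciCORE numbers.

```python
# Per-environment figures copied from the tables above.
# Fields: (label, nodes, cores, ram_tb, gpus, disk_pb).
# Labels are placeholders; the tables do not name the environments distinctly.
environments = [
    ("sciCORE (main)", 223, 14528, 79.0, 128, 16.0),
    ("sciCORE (second table)", 12, 304, 4.4, 0, 0.055),  # 55 TB; 0 GPUs is an assumption
    ("sciCORE+", 16, 1104, 6.8, 8, 1.8),
]
nas_disk_pb = 3.9  # NFS/SMB storage from the last table

nodes = sum(e[1] for e in environments)
cores = sum(e[2] for e in environments)
ram_tb = sum(e[3] for e in environments)
gpus = sum(e[4] for e in environments)
disk_pb = sum(e[5] for e in environments) + nas_disk_pb

print(f"Total nodes: {nodes}")                     # 251
print(f"Total cores: {cores}")                     # 15936
print(f"Total RAM:   {ram_tb:.1f} TB")             # 90.2 TB
print(f"Total GPUs:  {gpus}")                      # 136
print(f"Total disk:  {disk_pb:.2f} PB")            # ~21.8 PB including NAS
print(f"Avg cores per node: {cores / nodes:.1f}")  # ~63.5
```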