High Performance Computing
Even though the computing power of single processors continues to grow, the physical limits of current integrated circuit technology have practically been reached, so further improvements will have to come from radically new technologies that are still in the research phase. At the same time, the size of the problems to be solved keeps growing. Collecting and analyzing social network data, designing fluid dynamics structures, analyzing geological surveys, simulating weather conditions and climate models, and analyzing financial markets are all examples of continually growing problems that must be solved faster and faster. The solution? Use thousands of processors in parallel, breaking a large problem into many smaller instances that can each be solved by a single processor. This area of computer science is called “High Performance Computing” (HPC).
Epigenesys has ten years of experience in HPC. We work with public institutions and private companies that want to optimize the use of their computer clusters, and we assist them in selecting hardware and software components, configuring operating systems and networking, and developing applications and tracking systems.
We propose solutions based on the most recent open source technologies, whose reliability is demonstrated by the TOP500 list of the most powerful HPC clusters in the world. We recommend the best available job managers, such as Slurm and HTCondor, and offer training and support for them. We have experience with the most widespread frameworks for “big data” computing, such as Apache Hadoop and Apache Spark, and with the most reliable high-capacity, high-throughput cloud storage systems, such as MongoDB, CephFS, and GlusterFS. We develop and optimize HPC applications for our customers. We are not afraid of oversized problems!
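As an illustration of how work is submitted through a job manager such as Slurm, a minimal batch script might look like the sketch below (the job name, resource figures, and application binary are hypothetical placeholders, not a real customer configuration):

```shell
#!/bin/bash
#SBATCH --job-name=cfd-sim         # hypothetical job name
#SBATCH --nodes=4                  # request 4 compute nodes
#SBATCH --ntasks-per-node=32       # 32 tasks per node
#SBATCH --time=02:00:00            # wall-clock time limit
#SBATCH --output=%x-%j.out         # log file named jobname-jobid.out

# Launch the (hypothetical) application across all allocated tasks.
srun ./simulate --input mesh.dat
```

Submitted with `sbatch`, such a script lets the scheduler queue the job, allocate the requested nodes, and start the application on them, so hundreds of users can share a cluster fairly.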