In-Situ Analysis Library (Avido project)
- Open position: 36-month contract for developing an in-situ analysis library for simulations on large-scale computers.
- Computer science or computer engineering candidates with a Master's, Engineering, or PhD degree.
- Contacts:
- Work location: INRIA Grenoble, in Grenoble, France, a mid-size city ideally located in the heart of the Alps, offering a high quality of living.
INRIA Grenoble Research Center (France), one of the leading research institutes in computer science in Europe, is looking for computer science or computer engineering candidates with a Master's, Engineering, or PhD degree. The position is available now (November 2015); the starting date can be adjusted to the candidate's constraints. Income will depend on qualification and experience level, according to INRIA rates.

Next-generation supercomputers will enable very large-scale simulations that produce huge data sets. Analysing these data is very challenging, I/O being a serious performance bottleneck on supercomputers. Instead of processing the data once they are saved on disk (post-processing), the so-called in-situ processing paradigm starts processing data directly on the nodes where the application runs, while the data are still in memory.

With industrial and academic partners, we are developing a new open source library for processing in situ the results of multi-parametric simulations. A multi-parametric simulation consists of executing the same simulation several times with different sets of parameters. The goal is to compute statistics in situ while the various instances of the simulation are running.

The candidate will work on the design, development, and testing of this library, with a focus on the data model, programming environment, and runtime aspects; performance is a primary goal. The applicant should hold a PhD or Master's degree in computer science or computer engineering, with skills in C, C++, and Java programming, some experience with parallel programming, a good knowledge of Linux, and a good level of scientific English. Experience with Big Data tools such as Hadoop or Spark, or with GPU programming, is a plus.
The candidate will conduct experiments on large-scale supercomputers, participate in the scientific publications related to this project, collaborate with the other partners, and be involved in building a community around the open source library to be developed.