Data Analytics Infrastructure for Particle Data (project Velassco)
- Open position: a 12-month contract to develop a big data infrastructure specialised for analysing particle data produced by large-scale simulations.
- Candidates should hold a Master's, Engineering or PhD degree in computer science or computer engineering.
- Contacts:
- bruno.raffin@inria.fr
- pierre-francois.dutot@inria.fr
- Work location: INRIA Grenoble, near Grenoble, France, a mid-size city ideally located in the heart of the Alps and offering a high quality of life.
The INRIA Grenoble Research Center (France), one of the leading research institutes in computer science in Europe, is looking for computer science or computer engineering candidates with a Master's, Engineering or PhD degree. The position is available now (November 2015); the starting date can be adjusted to the candidate's constraints. Salary will depend on qualifications and experience, according to INRIA rates.

Large-scale simulations produce huge amounts of data that are cumbersome and time-consuming to analyse. The goal of this project is to design an infrastructure based on Big Data tools (Hadoop, Flume, HDFS, Hive) that stores the simulation results in a parallel file system (one simulation consists of several time steps, each providing the 3D position of every particle along with additional data such as velocity or temperature) and relies on map/reduce-like queries running on a cluster to efficiently extract the required data (a sketch of such a query is given at the end of this announcement). The scientist will drive the analysis from a visualisation tool that generates queries to be processed by the cluster and computes a 3D visualisation from the returned results.

The work includes the development, deployment and testing of the prototype and production platforms. It will focus on the efficient manipulation and storage of the data using extensions of the Big Data framework, and on its interface with HPC infrastructures and GPU visualisation tools.

Applicants should hold a PhD or Master's degree in computer science or computer engineering, with skills in C, C++ and Java programming, some experience with parallel programming, a good knowledge of Linux, and fluency in English. Expertise with Big Data tools like Hadoop or Spark will be considered a plus. Team work and frequent participation in on-site and audio/video meetings with partners abroad (Spain, Great Britain, Germany and Norway) are also required.
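To give a concrete flavour of the map/reduce-style queries mentioned above, here is a minimal, purely illustrative sketch of a map-only Hadoop job that extracts, for one time step, the particles whose x coordinate falls in a given range. The CSV record layout (timeStep, particleId, x, y, z, velocity, temperature) and the configuration keys (query.timestep, query.xmin, query.xmax) are assumptions for illustration, not project specifications.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ParticleRegionQuery {

        // Hypothetical record layout, one particle per line:
        // timeStep,particleId,x,y,z,velocity,temperature
        public static class RegionMapper
                extends Mapper<LongWritable, Text, LongWritable, Text> {

            private int targetStep;
            private double xMin, xMax;

            @Override
            protected void setup(Context context) {
                Configuration conf = context.getConfiguration();
                targetStep = conf.getInt("query.timestep", 0);
                xMin = conf.getDouble("query.xmin", Double.NEGATIVE_INFINITY);
                xMax = conf.getDouble("query.xmax", Double.POSITIVE_INFINITY);
            }

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] f = value.toString().split(",");
                int step = Integer.parseInt(f[0]);
                double x = Double.parseDouble(f[2]);
                // Emit only particles of the requested time step inside the
                // x range; y/z filters for a full bounding box are analogous.
                if (step == targetStep && x >= xMin && x <= xMax) {
                    context.write(new LongWritable(Long.parseLong(f[1])), value);
                }
            }
        }

        // Usage: ParticleRegionQuery <input> <output> <timestep>
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.setInt("query.timestep", Integer.parseInt(args[2]));
            Job job = Job.getInstance(conf, "particle region query");
            job.setJarByClass(ParticleRegionQuery.class);
            job.setMapperClass(RegionMapper.class);
            job.setNumReduceTasks(0); // pure filter: a map-only job suffices
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

In the actual project the same selection could equally be expressed as a Hive query over an HDFS-backed table; the sketch above only illustrates the kind of parallel extraction the visualisation tool would trigger.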