Job description
Conexus is partnered with a global consultancy that is supporting a major financial services client. We are therefore searching for PySpark Engineers on a freelance basis to support their long-term project in Zurich, Switzerland.
To be considered, candidates should demonstrate the following:
Key Qualifications:
- Demonstrated proficiency in Spark programming, with a strong emphasis on PySpark.
- Proven experience in designing and building complex data pipelines using Spark, particularly on the Cloudera distribution.
- Expertise in tuning the performance of Spark applications.
- Hands-on experience with scheduling and running Spark applications effectively.
- Solid experience with essential Hadoop tools and technologies.
Preferred Skills:
- Experience as a Big Data Engineer on the Palantir platform, or a solid understanding of its capabilities, is a significant advantage.
- Ability to conceptualise and design robust Big Data systems using Hadoop, PySpark, and Hive.
- Experience in designing both functional and technical architectures.
- Proficiency in developing applications on Big Data platforms using open-source programming languages.
- Ability to work closely with Administrators, Architects, and Developers to deliver high-quality solutions.
- Knowledge of cluster management and storage mechanisms within Big Data Cloud environments is a plus.
If this role is of interest, please respond with your latest CV for consideration.