We're looking for a Senior Data Engineer, based in Chicago, to join us in building our next-generation data processing platform.
As part of the platform team, you will work across multiple areas, including campaign performance measurement and reporting, ad fraud, and botnet detection.
Are you someone who likes to have fun at work? Are you passionate about picking problems apart, driving results, and making an impact?
Do you enjoy working in a fast-paced, dynamic environment where no two days are the same? If so, we want to speak with you!

About our team
Quite simply, our platform is the engine that powers the verification, optimization, and analytics solutions we provide.
It handles hundreds of thousands of transactions per second, collects tens of billions of events each day, and evaluates thousands of data points in real time, all while responding within a few milliseconds.
What you'll do
- Lead the design, coding, and maintenance of a highly scalable, high-throughput backend data processing platform built on Big Data technologies
- Design data models for MPP columnar databases that serve high query volumes with sub-second response times
- Lead the entire software lifecycle, including hands-on development, code reviews, testing, continuous integration, continuous deployment, and documentation, using modern programming languages (such as Java, Scala, and Python)
- Tune systems for optimal performance
- Mentor junior team members

You should apply if you have most of this
- 5 years of recent hands-on experience in one or more modern programming languages (Java, Scala, Python)
- A good understanding of collections, multi-threading, the JVM memory model, algorithms, scalability, and the various trade-offs in a Big Data setting
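To give a flavor of the high-throughput aggregation work described above, here is a minimal, illustrative sketch (not our actual codebase; all field names are hypothetical) of fanning event batches across threads and merging the partial results:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_events(batch):
    """Aggregate one batch of raw events into per-campaign counts."""
    counts = Counter()
    for event in batch:
        counts[event["campaign_id"]] += 1  # "campaign_id" is a hypothetical field
    return counts

def aggregate(batches, workers=4):
    """Fan batches out across worker threads, then merge the partial counts."""
    total = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(count_events, batches):
            total.update(partial)
    return total
```

A production pipeline would of course run this kind of map-then-merge pattern on a distributed engine rather than in-process threads, but the shape of the problem is the same.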
- Experience developing and maintaining ETL applications and data pipelines using Big Data technologies
- Strong SQL (OLAP) knowledge and experience working with MPP columnar databases (Vertica, Snowflake, etc.)
- Excellent interpersonal and communication skills
- An understanding of the full software development life cycle, agile development, and continuous integration

What puts you over the top
- Data warehouse experience in Snowflake, including writing ETL pipelines in Snowflake
- Experience working with AWS technologies such as EMR, Step Functions, Data Pipeline, CloudFormation, etc.
- Experience working with Hadoop MapReduce, Spark, Pig, Hive, etc.
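For context on the MapReduce model mentioned above, a toy in-process sketch of its map, shuffle, and reduce phases (real jobs run on Hadoop or Spark; this is purely illustrative):

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit a (word, 1) pair for each word in each record."""
    for record in records:
        for word in record.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group emitted values by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the grouped values for each key."""
    return {key: sum(values) for key, values in grouped.items()}

def word_count(records):
    return reduce_phase(shuffle(map_phase(records)))
```

Frameworks like Hadoop and Spark distribute exactly these three phases across a cluster, which is what makes the tens-of-billions-of-events scale described above tractable.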