Integral Ad Science (IAS) is a global technology and data company that builds verification, optimization, and analytics solutions for the advertising industry, and we're looking for a Staff Big Data Engineer to join our Data Engineering team.
If you are excited to evaluate thousands of data points in real time while responding in just a few milliseconds, then IAS is the place for you! As a Staff Big Data Engineer, you will lead, design, implement, and maintain big data pipelines responsible for aggregating tens of billions of daily transactions.
You will lead the entire software development lifecycle, including hands-on development, code reviews, testing, and deployment for streaming pipelines, batch ETL, and RESTful APIs.
You will be using cutting-edge Big Data technologies on cloud platforms. As a senior contributor, you will help guide, mentor, and provide technical leadership to junior team members.
What you'll get to do
- Plan delivery of data pipelines and API endpoints for IAS product offerings
- Architect and build data pipelines and data stores specialized for data science needs
- Design architecture to streamline the data science SDLC
- Design and implement systems for large-scale data analysis and machine learning
- Build tools and frameworks for use across the data science and data engineering teams
- Define and implement best practices and development processes
- Partner with engineering teams on deployments of data-science-driven solutions
- Participate in training and mentoring of more junior data engineers

The ideal candidate is naturally curious, dedicated, and detail-oriented, with a strong desire to work with awesome people in a highly collaborative environment.
You should apply if you have most of this experience
- Bachelor's or Master's in Computer Engineering, Computer Science, Electronics Engineering, or related fields
- 10 years of experience designing and building data-intensive applications
- Hands-on coding experience in an object-oriented language, preferably Java, Scala, or Python
- In-depth understanding of algorithms, scalability, and various tradeoffs in a Big Data setting
- Expertise in using Big Data frameworks (e.g., Hadoop, Spark) and MPP databases (e.g., Redshift, Snowflake) for complex data assembly and transformation
- Experience building production-level systems in a cloud environment (AWS, Azure, or GCP)
- Experience orchestrating data pipelines using tools such as Airflow
- Knowledge of database technology, SQL, schema design, and query optimization techniques
- Understanding of the full software development lifecycle, agile development, and continuous integration
- Excellent interpersonal and communication skills

What puts you over the top
- Digital advertising or web technology experience
- Experience with Spark Streaming or Flink
- Experience implementing machine learning algorithms
- Experience with MLOps tools