Integral Ad Science (IAS) is a global technology and data company that builds verification, optimization, and analytics solutions for the advertising industry.
Our technology handles hundreds of thousands of transactions per second, collects tens of billions of events each day, and evaluates thousands of data points in real time, all while responding in just a few milliseconds.
If this sounds exciting, then IAS is the place for you! As a Senior Staff Big Data Engineer, you will lead the design, implementation, and maintenance of big data pipelines responsible for aggregating tens of billions of daily transactions.
You will lead the entire software lifecycle, including hands-on development, code reviews, testing, and deployment, for streaming jobs, batch ETL, and RESTful APIs.
You will be using cutting-edge Big Data technologies on cloud platforms. As a senior contributor, you will guide and mentor junior team members and provide technical leadership to the team.
You should apply if you have most of this:
- 5 years' experience designing and building data-intensive applications
- 5 years' experience developing with object-oriented languages (Java, Scala, Python)
- Experience with Big Data technologies: Hadoop MapReduce, Spark, Pig, Hive, etc.
- Experience building, deploying, and supporting production-level SaaS solutions
- Regular use of collections, multi-threading, the JVM memory model, etc.
- In-depth understanding of algorithms, performance, scalability, and reliability in a Big Data setting
- Solid understanding of OLTP and OLAP systems and database fundamentals, with good knowledge of SQL
- Experience adding observability/monitoring capabilities to data pipelines using tools like Elastic and Grafana
- Experience building CI/CD pipelines using Jenkins
- Experience with the full software development life cycle and agile development
- Excellent interpersonal and communication skills

What puts you over the top:
- Experience building production-level systems in a cloud environment (AWS, Azure, or GCP)
- Building event-driven microservices applications using Spring Boot or gRPC
- Orchestrating data pipelines using tools such as Airflow
- Familiarity with messaging frameworks like Kafka or RabbitMQ
- Experience with Spark Streaming or Flink