Data Engineer

Engineering Hyderabad, India


As a Data Engineer, you should be an expert in data warehousing technical components (e.g., ETL, ELT, cloud databases, and reporting), infrastructure (e.g., hardware and software), and their integration. You should have a deep understanding of the architecture for enterprise-level data lake solutions using multiple platforms (RDBMS, AWS, Cloud). You should be an expert in the design, creation, management, and business use of extremely large datasets. You should have excellent business and communication skills, enabling you to work with business owners to develop and define key business questions and to build data sets that answer those questions. You should be able to build efficient, flexible, extensible, and scalable ETL and reporting solutions. 

What you'll do: 

  • As a Data Engineer, you will be responsible for engineering data pipelines for Splunk’s enterprise data platform, democratizing datasets, enabling advanced analytics capabilities, and integrating data from various systems and applications. You will work as part of an evolving Enterprise Data Management (EDM) - Data Engineering Team to rapidly design, secure, build, test, and release new data enablement capabilities. You will collaborate closely with other specialists, product managers, and key stakeholders across the company. 
  • Build large-scale batch and real-time data pipelines using cloud data technologies such as Snowflake, Matillion, Kubernetes, Python, Apache Airflow, and Apache Kafka. 
  • Serve as a resource for data management implementations on other technology teams and collaborate with data owners, business owners, and leaders. 
  • Support the design and development of framework-based data integration and interoperability across multiple Splunk business applications. 
  • Apply advanced-level skills in Python, SQL, data integration, data modeling, and data architecture. 

Requirements: 

  • A minimum of 5 years of related experience. 
  • 3+ years of experience as a Data Warehouse Architect or Data Engineer. 
  • 2+ years of experience driving adoption and building automation of data management services and tools. 
  • 2+ years of experience with API-based ELT automation frameworks, data management, or interface design, development, and maintenance. 
  • Large-scale design, implementation, and operation of cloud data technologies such as Amazon Redshift, Snowflake, Kubernetes, etc. 
  • 3+ years of experience with programming, scripting, and data science languages such as Python, SQL, etc. 
  • Experience in building data models, including conceptual, logical, and physical models, for enterprise relational and dimensional databases. 
  • Advanced knowledge of Big Data concepts for organizing both structured and unstructured data. 

Preferred knowledge and experience: 

  • Knowledge of Splunk products. 
  • Knowledge of dbt. 
  • Experience with Sales Operations, Partner Operations, and Customer Success business processes and applications. 

Education: 

  • Bachelor’s degree, preferably in Computer Science, Information Technology, or Management Information Systems, or equivalent years of industry experience. 


We value diversity, equity, and inclusion at Splunk and are an equal employment opportunity employer. Qualified applicants receive consideration for employment without regard to race, religion, color, national origin, ancestry, sex, gender, gender identity, gender expression, sexual orientation, marital status, age, physical or mental disability or medical condition, genetic information, veteran status, or any other consideration made unlawful by federal, state, or local laws. We consider qualified applicants with criminal histories, consistent with legal requirements. 
