
SDET

Unravel Data

Software Engineering
Pune, Maharashtra, India
Posted 6+ months ago

SDET

Pune, Operations, Full Time
About Unravel Data
Data is the new oil! Every company on the planet uses data in some form to make better business decisions, build machine learning models, apply data science to discover unique differentiators, and much more. But 85% of these data initiatives fail, and the reason is the complexity of Data Operations (DataOps) in the real world with petabytes of data. Unravel is a self-service DataOps solution for data engineers and users. It's an expert-in-a-box that uses innovative, patented algorithms and ML to monitor, analyze, recommend, and automatically tune data applications running on-prem, hybrid, or in the cloud. Unravel uses advanced data science within its platform to give data users insights into application performance, resource utilization, data layout, cost, cloud migration, workflow analysis, and more. Unravel has been actively used and deployed by several brand-name Fortune 500 companies for years. Top VCs such as Menlo Ventures, GGV Capital, and Microsoft Ventures back our company.

About the Role

  • Hands-on experience in implementing Jenkins / CI-CD pipelines
  • Good knowledge of Big Data technologies like BigQuery, Kafka, Impala, Presto, Oozie, Airflow, Elasticsearch, HBase, and Snowflake
  • Knowledge of cloud data technologies and platforms like EMR, Databricks, GCP, HDInsight, Kubernetes, etc.

About You

  • Hands-on with any programming language, preferably Python
  • Big Data exposure
  • Good knowledge of Big Data technologies, preferably Hadoop, Spark, and Hive
  • Understanding of databases and SQL
  • Hands-on experience in implementing end-to-end solutions for test automation
  • Knowledge of on-premise platforms like Cloudera, Hortonworks, etc.
  • Test framework development experience
  • Operational instincts and keen attention to detail to think beyond the obvious and come up with solutions
  • Good knowledge of Linux operating systems
  • Experience writing simple to complex SQL queries

The exciting part of the role:

  • Identify customer-driven use cases for different Big Data services and platforms.
  • Write scalable data processing applications/scripts to simulate customer-driven use cases for different Big Data services.
  • Get involved in test automation of modern Big Data systems, both on-premise and in the cloud (AWS, GCP & Azure), and in CI/CD for the software development lifecycle.

The challenging part of the role: