Experience: 3-5 Years
CTC: No Bar
As part of the Data Infrastructure team, you will play a key role in managing the Hadoop infrastructure and associated support systems. The environment is currently at a nascent phase, scaling to tens of petabytes with Hadoop clusters of roughly 100-150 nodes across multiple data centers. Demands on the system are growing steadily as we increase data ingestion from retailers and add more functionality and products around the data. You will be responsible for performance improvement and optimization of our Hadoop ecosystem components, working closely with the Engineering and Product teams, and you will take on interesting and complex challenges in scaling the current infrastructure for production, staging, and disaster-recovery clusters. A successful candidate will be self-motivated, with a get-things-done attitude, able to see the big picture as well as deep-dive into the details of day-to-day operational problems.
Candidates must have strong analytical, troubleshooting and problem resolution skills and be able to effectively communicate with customers and other stakeholders.
Own and maintain operational best practices for the smooth operation of large Hadoop clusters.
Perform in-depth analysis of Hadoop-based workloads and project-based work, devise solutions, and evaluate their effectiveness.
Optimize and tune the Hadoop environment(s) to meet performance requirements. Partner with Hadoop developers in building best practices for warehouse and analytics environments.
Investigate emerging technologies in the Hadoop ecosystem that relate to our needs and implement those technologies.
Hands-on experience in deploying and administering Hadoop clusters (preferably Cloudera variants).
Strong problem solving and troubleshooting skills.
Good Linux administration and troubleshooting skills.
Good understanding of Hadoop design principles and the factors that affect distributed system performance.
Experience troubleshooting big-data technologies such as HDFS, Impala, Sentry, Hive, Spark, Oozie, Kafka, and ZooKeeper.
Good scripting experience with at least two of the following: Shell, Python, Ruby, or Perl.
Knowledge of metric collection for monitoring and alerting.
Good-to-have skills:
Experience with cloud platforms, auto-scaling, and cloud migrations.
Knowledge of ETL processes and conventional data-warehouse systems.
Know-how with BI/analytics platforms.
Bachelor’s/Master’s degree in Computer Science or a related field.
3+ years of progressive technology experience, including the following:
Ability to work effectively with cross-functional and cross-cultural teams.
Ability to investigate complex issues spanning multiple technologies.
Want to be part of a company that has pioneered digital promotions and advertising solutions driven by data? We innovate every day to connect shoppers, retailers, and thousands of brands, providing a world-class shopping experience. Quotient Technology Inc. is a leading data-driven digital promotions and media company that connects brands, retailers, and shoppers.