Big Data Engineer
Bengaluru
Job ID: 1015
Who We Are
At VuNet, we are building next-generation products that apply big data and machine learning across a full-stack platform in innovative ways to monitor customer journeys and improve user experience. Our systems help the largest financial institutions improve their digital payment experience, driving greater financial inclusion across the country.
We empower our teams to solve hard problems – customer and business problems – in ways that our customers love. Great ideas are turned into extraordinary products that reach customers in the shortest possible time. Our teams are cross-functional, immerse themselves in the details, engage in collaborative debate, and work with a shared purpose of creating a world-class product company.
Imagine what you could do here.
We are looking for a savvy Data Engineer to join our team. The new hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. You’ll wear many hats in this role, but much of your focus will be on building and maintaining the streaming data platform. Apart from technical skills, the candidate must have strong communication skills. We’re looking for someone willing to jump right in and help the company get the most out of our data.
Roles & Responsibilities
- Create and maintain optimal data pipeline architecture. Develop new plugins for the data pipelines (a sketch of what such a plugin might look like follows this list).
- Build scalable systems that store and process large volumes of data effectively.
- Monitor, diagnose and maintain deployments.
- Own product performance, robustness, and reliability.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work with cross-functional partners (product, infra, analysts) to power data-driven products.
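To make the plugin work above concrete, here is a minimal sketch of what a pipeline plugin contract could look like in Java. The interface, method names, and event representation are illustrative assumptions for this posting, not VuNet's actual API.

    import java.util.Map;

    // Hypothetical plugin contract for a streaming data pipeline; the names
    // and shapes here are assumptions for illustration, not VuNet's real interface.
    public interface PipelinePlugin {

        // Called once with plugin-specific configuration before any events flow.
        void init(Map<String, String> config);

        // Transforms a single event; returning null drops the event from the stream.
        Map<String, Object> process(Map<String, Object> event);

        // Flushes buffers and releases resources during shutdown.
        void close();
    }

In a design like this, each stage of the pipeline can be developed, configured, and tested independently, which is what makes it practical to "develop new plugins" without touching the core platform.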
Skills & Experience
- Candidates must have 3+ years of experience as a Data Engineer.
- Hands-on experience designing, developing, and fine-tuning complex stream processing for high-volume streams in an auto-scaling environment.
- Experience working with streaming platforms like Kafka and Redis is required.
- Experience with other big data tools such as Hadoop and Spark is required.
- Experience with real-time data processing frameworks like Kafka Streams or Spark Streaming is required (a sketch follows this list).
- Experience working with SQL and NoSQL databases like PostgreSQL, Elasticsearch, MongoDB, and Cassandra is required.
- Experience with Java, Python, Go, and Linux shell scripting is a must.
- Candidates must be familiar with Git.
- Candidates should be familiar with data processing and data storage technologies in the cloud (Azure, AWS, etc.).
- Experience working with data lakes and data warehouses like Snowflake, Amazon Redshift, Azure Storage, and S3 is good to have.
- Experience with Docker / Kubernetes is a plus.
- Experience with data pipeline and workflow management tools (Azkaban, Luigi, Airflow, etc.) is a plus.
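For a flavor of the stream-processing work described above, here is a minimal Kafka Streams sketch in Java that reads transaction events from one topic and routes failed ones to an alerts topic. The broker address, topic names, and payload check are illustrative assumptions, not a description of VuNet's production pipelines.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class FailedTxnRouter {
        public static void main(String[] args) {
            // Basic Streams configuration; the broker address is an assumption.
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "failed-txn-router");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            // "transactions" and "failed-transactions" are hypothetical topic names.
            KStream<String, String> txns = builder.stream("transactions");
            txns.filter((key, value) -> value != null && value.contains("\"status\":\"FAILED\""))
                .to("failed-transactions");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }

Day-to-day work on the role would involve topologies considerably richer than this filter-and-route example (windowed aggregations, joins, stateful processing), but the building blocks are the same.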
How To Apply
If you are interested, fill out the form below or email us at jobs@vunetsystems.com with your resume and an explanation of why you would be a good fit. We look forward to hearing from you.