Required Experience & Technical Stack
· Strong expertise in Java for building scalable data pipelines and microservices, with additional proficiency in Python or Scala as needed for data processing use cases.
· Extensive experience with Java-based big data frameworks, including Apache Spark (Java APIs) and Apache Kafka (Java producers/consumers), plus Apache Airflow for orchestrating pipelines that invoke Java back-end services.
· Strong knowledge of SQL and JDBC for querying structured data and integrating Java applications with relational databases.
· Hands-on experience building microservices in Java, using frameworks such as Spring Boot and Micronaut, including RESTful APIs, service orchestration, and containerization.
· Proficient in DevOps practices for Java services, using tools such as Jenkins, Git, Docker, and Kubernetes to build scalable CI/CD pipelines.
· In-depth expertise with big data storage technologies, integrating Hive, HBase, and Parquet into Java-based data ingestion and processing solutions.
· Skilled in data transformation and performance tuning using Java multithreading, memory management, and distributed system design patterns.
· Familiarity with integrating Java-based microservices with machine learning pipelines, enabling real-time inference and batch processing workflows.
· Experience architecting and scaling high-volume Java microservices, ensuring high availability, resilience, and throughput for enterprise-scale data platforms.
· Strong troubleshooting and debugging capabilities in Java, especially in distributed systems involving asynchronous messaging and parallel computation.
· Experience mentoring engineers in Java best practices, clean code, object-oriented design, and performance optimization for enterprise data solutions.
Candidate Should Have
· Outstanding written and verbal communication skills.