• More than 4 years of hands-on work experience across varied skill sets and domains.
• 3+ years of expertise in Big Data application development using Hadoop/YARN, Spark, Hive, Sqoop, and Oozie.
• Programming expertise in Java, Scala, Shell, Pig Latin, and SQL.
• Worked with varied data formats, including JSON, XML, Avro, Parquet, RC, and ORC.
• Handled large scale structured, semi-structured and unstructured data sets using Hadoop & Spark.
• Extensive use of Sqoop, orchestrated with Oozie, for data ingestion from RDBMS sources into HDFS and the Hive warehouse.
• Experience writing complex reporting queries using Impala, Hive, and Spark SQL.
• Implemented custom optimizations such as Combiners and Partitioners to enhance MapReduce performance.
• Used compression codecs such as Snappy and Bzip2 to improve storage efficiency.
• Experience developing applications against the HBase NoSQL database.
• Experience migrating MapReduce jobs to Spark jobs.
• Expertise working with AWS (EC2 and EMR) and Google Cloud Platform.