Vyankatesh Pentayya Dasari
Business and Finance
About Me
2 years of experience in designing, developing, and maintaining large business applications covering data migration, integration, conversion, and testing.
Experience in designing and developing applications with Hadoop ecosystem technologies (HDFS, Hive, Sqoop, Apache Spark) and AWS.
Hands-on experience with Hadoop/Big Data technologies for the storage, querying, processing, and analysis of data.
Understand the complex data processing needs of big data and have experience developing code and modules to address those needs.
Capable of processing large sets of structured and semi-structured data.
Worked on AWS components such as RDS, S3, EMR, and EC2.
Worked with different file formats, including Parquet, JSON, XML, Avro, and plain text files (see the first sketch after this list).
Worked extensively on a Hadoop migration project and POCs.
Experience in data architecture, including data ingestion and pipeline design.
Expertise in writing Hadoop jobs for analyzing data using Hive.
Experience in importing and exporting data between RDBMS and HDFS using Sqoop.
Knowledge of installing, configuring, and using Hadoop ecosystem components such as HDFS, Hive, Sqoop, and Spark.
Drove simplification and optimization initiatives to improve application efficiency.
Held versatile roles across diverse applications as a Data Engineer and Developer.
Able to assess business rules, collaborate with stakeholders, and perform source-to-target data mapping, design, and review.
Hands-on experience deploying Spark jobs on an EMR cluster as step executions (see the second sketch after this list).
Used Agile methodology to work with IT and business teams toward efficient system development.
Database experience with SQL Server and MySQL.
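As a brief illustration of the file-format and Hive work listed above, the sketch below reads Parquet, JSON, and Avro inputs with PySpark and writes a Hive table. The paths, table name, and application name are hypothetical placeholders, and the Avro reader assumes the spark-avro package is available on the cluster; this is a minimal sketch under those assumptions, not code from the projects described.

from pyspark.sql import SparkSession

# Hive support lets saveAsTable create a managed Hive table on the cluster.
spark = (
    SparkSession.builder
    .appName("format-demo")  # hypothetical application name
    .enableHiveSupport()
    .getOrCreate()
)

# Read the same logical dataset from three of the formats mentioned above.
# All paths are placeholders.
parquet_df = spark.read.parquet("s3://example-bucket/input/orders_parquet/")
json_df = spark.read.json("s3://example-bucket/input/orders_json/")
avro_df = spark.read.format("avro").load("s3://example-bucket/input/orders_avro/")  # assumes spark-avro is on the cluster

# Combine the inputs and write them as a Hive table for downstream Hive queries.
combined = parquet_df.unionByName(json_df).unionByName(avro_df)
combined.write.mode("overwrite").saveAsTable("analytics.orders_combined")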
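Likewise, deploying a Spark job on an EMR cluster as a step execution can be sketched with boto3's add_job_flow_steps. The cluster ID, region, script location, and arguments below are hypothetical; running spark-submit through command-runner.jar is one common way to submit such a step, shown here only as an illustration.

import boto3

emr = boto3.client("emr", region_name="us-east-1")  # region is an assumption

# Submit a spark-submit command as a step on a running EMR cluster.
# The cluster ID and S3 path are placeholders.
response = emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",
    Steps=[
        {
            "Name": "orders-batch-job",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    "--deploy-mode", "cluster",
                    "s3://example-bucket/jobs/orders_job.py",
                ],
            },
        }
    ],
)
print("Submitted step:", response["StepIds"][0])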
Education
Dr. V. V. Patil College of Engineering 2021
BE in Mechanical Engineering
Government Polytechnic 2018
Diploma in Mechanical Engineering
Work & Experience
CG Power & Industrial Solutions Ltd. 03/01/2021 - 01/02/2023
Spark Developer
Designed, developed, and maintained large business applications covering data migration, integration, conversion, and testing on the Hadoop ecosystem (HDFS, Hive, Sqoop, Apache Spark) and AWS (RDS, S3, EMR, EC2). Imported and exported data between RDBMS and HDFS using Sqoop, processed structured and semi-structured data in formats such as Parquet, JSON, XML, and Avro, wrote Hive jobs for data analysis, and deployed Spark jobs on EMR as step executions. Collaborated with IT and business teams under Agile, performed source-to-target data mapping, design, and review, and drove simplification and optimization initiatives to improve application efficiency.