Shashwat Shriparv - ebooks

Shashwat Shriparv was born in Muzaffarpur, Bihar. He did his schooling in Muzaffarpur and in Shillong, Meghalaya. He received his BCA degree from IGNOU, Delhi, and his MCA degree from Cochin University of Science and Technology, Kerala (C-DAC Trivandrum).

He was introduced to Big Data technologies in early 2010, when he was asked to perform a proof of concept (POC) on using Big Data technologies to store and process logs. He was also given another project that required storing huge binary files with variable headers and processing them. At this time, he started configuring, setting up, and testing Hadoop and HBase clusters and writing sample code for them. After a successful POC, he began serious development using Java REST and SOAP web services, building a system that stored and processed logs in Hadoop via web services, stored those logs in HBase using a custom schema, and read the data back using HBase APIs and HBase-Hive mapped queries. Shashwat successfully implemented the project and then moved on to work on huge binary files of 1 to 3 TB, processing the headers, storing the metadata in HBase, and keeping the files on HDFS.

Shashwat started his career as a software developer at C-DAC Cyber Forensics, Trivandrum, building mobile-related software for forensic analysis. He then moved to Genilok Computer Solutions, where he worked on cluster computing, HPC technologies, and web technologies. After this, he moved from Trivandrum to Bangalore and joined PointCross, where he started working with Big Data technologies, developing software using Java and web services on Big Data platforms. At PointCross, he worked on many projects revolving around Big Data technologies such as Hadoop, HBase, Hive, Pig, Sqoop, and Flume. From there, he moved to HCL Infosystems Ltd. to work on the UIDAI project, one of the most prestigious projects in India, which provides a unique identification number to every resident of India. There, he worked with technologies such as HBase, Hive, Hadoop, Pig, and Linux, managing HBase and Hadoop clusters, writing scripts, automating tasks and processes, and building dashboards for monitoring clusters. Currently, he is working with Cognilytics, Inc. on Big Data technologies, HANA, and other high-performance technologies.

You can find out more about him at https://github.com/shriparv and https://helpmetocode.blogspot.com. You can connect with him on LinkedIn at https://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9 or e-mail him at dwivedishashwat@gmail.com.

Shashwat has worked as a reviewer on the book Pig Design Patterns by Pradeep Pasupuleti, Packt Publishing. He also contributed to his college magazine, InfinityTech, as an editor.
