
Online or onsite, instructor-led live Apache Hadoop training courses demonstrate through interactive hands-on practice the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems.
Hadoop training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live Hadoop training can be carried out locally on customer premises in Turkey or in NobleProg corporate training centers in Turkey.
NobleProg -- Your Local Training Provider
Testimonials
The fact that all the data and software were ready to use on an already prepared VM, provided by the trainer on external disks.
vyzVoice
Course: Hadoop for Developers and Administrators
I mostly liked the trainer giving real-life examples.
Simon Hahn
Course: Administrator Training for Apache Hadoop
I genuinely enjoyed the trainer's great competence.
Grzegorz Gorski
Course: Administrator Training for Apache Hadoop
I genuinely enjoyed the many hands-on sessions.
Jacek Pieczątka
Course: Administrator Training for Apache Hadoop
I very much liked the interactive way of learning.
Luigi Loiacono
Course: Data Analysis with Hive/HiveQL
It was a very practical training, I liked the hands-on exercises.
Proximus
Course: Data Analysis with Hive/HiveQL
I benefited from the good overview and the good balance between theory and exercises.
Proximus
Course: Data Analysis with Hive/HiveQL
I enjoyed the dynamic interaction and the “hands-on” approach to the subject; thanks to the virtual machine, it was very stimulating!
Philippe Job
Course: Data Analysis with Hive/HiveQL
I benefited from the competence and knowledge of the trainer.
Jonathan Puvilland
Course: Data Analysis with Hive/HiveQL
It was very hands-on; we spent half the time actually doing things in Cloudera/Hadoop, running different commands, checking the system, and so on. The extra materials (books, websites, etc.) were really appreciated, as we will have to continue to learn. The installations were quite fun and very handy, and the cluster setup from scratch was really good.
Ericsson
Course: Administrator Training for Apache Hadoop
The trainer was fantastic and really knew his stuff. I learned a lot about the software I didn't know previously which will help a lot at my job!
Steve McPhail - Alberta Health Services - Information Technology
Course: Data Analysis with Hive/HiveQL
The high-level principles of Hive, HDFS, etc.
Geert Suys - Proximus Group
Course: Data Analysis with Hive/HiveQL
The hands-on work. The mix of practice and theory.
Proximus Group
Course: Data Analysis with Hive/HiveQL
Fulvio was able to grasp our company's business case and correlate it with the course material almost instantly.
Samuel Peeters - Proximus Group
Course: Data Analysis with Hive/HiveQL
Lot of hands-on exercises.
Ericsson
Course: Administrator Training for Apache Hadoop
The Ambari management tool. The ability to discuss practical Hadoop experiences from business cases other than telecom.
Ericsson
Course: Administrator Training for Apache Hadoop
I thought he did a great job of tailoring the experience to the audience. This class is mostly designed to cover data analysis with Hive, but my co-worker and I are doing Hive administration with no real data analytics responsibilities.
Ian Reif - Franchise Tax Board
Course: Data Analysis with Hive/HiveQL
I liked the VM very much. The teacher was very knowledgeable regarding the topic as well as other topics, and he was very nice and friendly. I liked the facility in Dubai.
Safar Alqahtani - Elm Information Security
Course: Big Data Analytics in Health
The practical, hands-on work; the theory was also presented well by Ajay.
Dominik Mazur - Capgemini Polska Sp. z o.o.
Course: Hadoop Administration on MapR
Exercises
Capgemini Polska Sp. z o.o.
Course: Hadoop Administration on MapR
I found the training good and very informative, but it could have been spread over 4 or 5 days, allowing us to go into more detail on different aspects.
Veterans Affairs Canada
Course: Hadoop Administration
I found this course gave a great overview and quickly touched some areas I wasn't even considering.
Veterans Affairs Canada
Course: Hadoop Administration
I genuinely liked the hands-on exercises with the cluster, seeing the performance of the nodes across the cluster and the extended functionality.
CACI Ltd
Course: Apache NiFi for Developers
The trainer's in-depth knowledge of the subject.
CACI Ltd
Course: Apache NiFi for Administrators
Ajay was a very experienced consultant and was able to answer all our questions; he even made suggestions on best practices for the project we are currently engaged in.
CACI Ltd
Course: Apache NiFi for Administrators
That I had it in the first place.
Peter Scales - CACI Ltd
Course: Apache NiFi for Developers
The NiFi workflow exercises.
Politiets Sikkerhetstjeneste
Course: Apache NiFi for Administrators
Answers to our specific questions.
MOD BELGIUM
Course: Apache NiFi for Administrators
Apache Hadoop Course Outlines in Turkey
In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio.
By the end of this training, participants will be able to:
- Develop an application with Alluxio
- Connect big data systems and applications while preserving one namespace
- Efficiently extract value from big data in any storage format
- Improve workload performance
- Deploy and manage Alluxio standalone or clustered
Audience
- Data scientist
- Developer
- System administrator
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
This course is intended to demystify big data/Hadoop technology and to show that it is not difficult to understand.
In this instructor-led, live training, participants will learn how to use Sqoop to import data from a traditional relational database into Hadoop storage such as HDFS or Hive, and vice versa; a minimal import sketch follows this outline.
By the end of this training, participants will be able to:
- Install and configure Sqoop
- Import data from MySQL to HDFS and Hive
- Import data from HDFS and Hive to MySQL
Audience
- System administrators
- Data engineers
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- To request a customized training for this course, please contact us to arrange.
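As a rough illustration of the kind of import covered in this course, the following Python sketch invokes the Sqoop command line via subprocess; the JDBC URL, credentials, table name, and HDFS paths are hypothetical placeholders, not values from this course.

```python
# A minimal sketch of a Sqoop import driven from Python. All connection
# details below are hypothetical placeholders; adjust them for your own
# MySQL instance and HDFS layout.
import subprocess

import_cmd = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://dbhost:3306/sales",   # hypothetical database
    "--username", "etl_user",
    "--password-file", "/user/etl_user/.sqoop.pwd",  # avoid plain-text passwords
    "--table", "customers",                          # hypothetical table
    "--target-dir", "/user/etl_user/customers",      # HDFS output directory
    "--num-mappers", "4",
]

# Run the import and fail loudly if Sqoop returns a non-zero exit code.
subprocess.run(import_cmd, check=True)
```

Adding --hive-import would load the same table into Hive rather than a plain HDFS directory, and sqoop export with --export-dir reverses the direction, pushing HDFS data back into MySQL.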
This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.
By the end of this training, participants will be able to:
- Use Samza to simplify the code needed to produce and consume messages.
- Decouple the handling of messages from an application.
- Use Samza to implement near-realtime asynchronous computation.
- Use stream processing to provide a higher level of abstraction over messaging systems.
Audience
- Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
By the end of this training, participants will be able to:
- Install and configure Apache NiFi.
- Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes.
- Automate dataflows.
- Enable streaming analytics.
- Apply various approaches for data ingestion.
- Transform big data into business insights.
- Developers
Format of the Course
- Lectures, hands-on practice, small tests along the way to gauge understanding
Impala enables users to issue low-latency SQL queries to data stored in the Hadoop Distributed File System and Apache HBase without requiring data movement or transformation; a brief query sketch follows this outline.
Audience
This course is aimed at analysts and data scientists performing analysis on data stored in Hadoop via Business Intelligence or SQL tools.
After this course, delegates will be able to:
- Extract meaningful information from Hadoop clusters with Impala.
- Write specific programs to facilitate Business Intelligence in Impala SQL Dialect.
- Troubleshoot Impala.
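For orientation, here is a minimal example of issuing such a query from Python with the impyla package; the host name and table are hypothetical assumptions, not part of the course material.

```python
# A minimal sketch of querying Impala from Python using the impyla package
# (pip install impyla). Host and table names are hypothetical placeholders.
from impala.dbapi import connect

conn = connect(host="impala-daemon.example.com", port=21050)  # hypothetical host
cursor = conn.cursor()

# Low-latency aggregation over data already sitting in HDFS/HBase-backed tables.
cursor.execute(
    "SELECT region, COUNT(*) AS orders "
    "FROM web_orders "          # hypothetical table
    "GROUP BY region "
    "ORDER BY orders DESC"
)
for region, orders in cursor.fetchall():
    print(region, orders)
```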
By the end of this training, participants will be able to:
- Use Hortonworks to reliably run Hadoop at a large scale.
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
- Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project.
- Process different types of data, including structured, unstructured, in-motion, and at-rest.
This course walks developers through HBase architecture, data modelling, and application development on HBase. It also discusses using MapReduce with HBase and some administration topics related to performance optimization. The course is very hands-on, with lots of lab exercises; a small Python client sketch follows the course details below.
Duration : 3 days
Audience : Developers & Administrators
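As a taste of the application-development side, the following sketch uses the happybase Python package against the HBase Thrift gateway; the host, table, and column family names are hypothetical, and the table is assumed to already exist with a column family called "info".

```python
# A minimal sketch of basic HBase reads and writes from Python via happybase
# (pip install happybase), which talks to the HBase Thrift gateway.
import happybase

connection = happybase.Connection("hbase-thrift.example.com")  # hypothetical Thrift host, default port 9090
table = connection.table("users")                              # hypothetical, pre-existing table

# Row keys and column qualifiers are plain bytes in the HBase data model.
table.put(b"user-0001", {b"info:name": b"Alice", b"info:city": b"Ankara"})

row = table.row(b"user-0001")
print(row[b"info:name"])  # b'Alice'
```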
In this instructor-led, live training, participants will learn how to work with Hadoop, MapReduce, Pig, and Spark using Python as they step through multiple examples and use cases; a minimal mrjob sketch follows this outline.
By the end of this training, participants will be able to:
- Understand the basic concepts behind Hadoop, MapReduce, Pig, and Spark
- Use Python with Hadoop Distributed File System (HDFS), MapReduce, Pig, and Spark
- Use Snakebite to programmatically access HDFS within Python
- Use mrjob to write MapReduce jobs in Python
- Write Spark programs with Python
- Extend the functionality of Pig using Python UDFs
- Manage MapReduce jobs and Pig scripts using Luigi
Audience
- Developers
- IT Professionals
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
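To illustrate the mrjob portion of the course, here is a minimal word-count job; the input file and runner options are whatever you supply on the command line, and the script name is a placeholder.

```python
# A minimal mrjob word-count sketch (pip install mrjob). Run it locally with
# "python word_count.py input.txt", or against a Hadoop cluster by adding
# "-r hadoop".
from mrjob.job import MRJob


class MRWordCount(MRJob):
    def mapper(self, _, line):
        # Emit (word, 1) for every whitespace-separated token in the line.
        for word in line.split():
            yield word.lower(), 1

    def reducer(self, word, counts):
        # Sum the partial counts produced by the mappers.
        yield word, sum(counts)


if __name__ == "__main__":
    MRWordCount.run()
```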
This course introduces Project Managers to the most popular Big Data processing framework: Hadoop.
In this instructor-led training, participants will learn the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. In learning these foundations, participants will also improve their ability to communicate with the developers and implementers of these systems as well as the data scientists and analysts that many IT projects involve.
Audience
- Project Managers wishing to implement Hadoop into their existing development or IT infrastructure
- Project Managers needing to communicate with cross-functional teams that include big data engineers, data scientists and business analysts
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
In this instructor-led, live training, participants will learn the management tools and practices provided by Ambari to successfully manage Hadoop clusters; a small REST API sketch follows this outline.
By the end of this training, participants will be able to:
- Set up a live Big Data cluster using Ambari
- Apply Ambari's advanced features and functionalities to various use cases
- Seamlessly add and remove nodes as needed
- Improve a Hadoop cluster's performance through tuning and tweaking
Audience
- DevOps
- System Administrators
- DBAs
- Hadoop testing professionals
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
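As a small illustration of automating against Ambari, the sketch below reads cluster information from its REST API with the Python requests package; the server address, credentials, and cluster name are hypothetical placeholders.

```python
# A minimal sketch of reading cluster information from the Ambari REST API.
# Host, credentials, and cluster name are hypothetical; adapt them to your
# own Ambari server.
import requests

AMBARI = "http://ambari.example.com:8080/api/v1"
AUTH = ("admin", "admin")                  # hypothetical credentials
HEADERS = {"X-Requested-By": "ambari"}     # required by Ambari for write calls, harmless on reads

# List the clusters this Ambari server manages.
clusters = requests.get(f"{AMBARI}/clusters", auth=AUTH, headers=HEADERS).json()
for item in clusters["items"]:
    print(item["Clusters"]["cluster_name"])

# Inspect the hosts registered in one (hypothetical) cluster.
hosts = requests.get(f"{AMBARI}/clusters/MyCluster/hosts", auth=AUTH, headers=HEADERS).json()
print(len(hosts["items"]), "hosts registered")
```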
Audience: developers
Duration: three days
Format: lectures (50%) and hands-on labs (50%).
Audience
Business Analysts
Duration
three days
Format
Lectures and hands-on labs.
“…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Hadoop administrators
Format
Lectures and hands-on labs, with an approximate balance of 60% lectures and 40% labs.
Course goal:
Gaining knowledge of Hadoop cluster administration
In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.
By the end of this training, participants will be able to:
- Create, curate, and interactively explore an enterprise data lake
- Access business intelligence data warehouses, transactional databases and other analytic stores
- Use a spreadsheet user interface to design end-to-end data processing pipelines
- Access pre-built functions to explore complex data relationships
- Use drag-and-drop wizards to visualize data and create dashboards
- Use tables, charts, graphs, and maps to analyze query results
Audience
- Data analysts
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
The health industry has massive amounts of complex, heterogeneous medical and clinical data. Applying big data analytics to health data offers huge potential for deriving insights that improve the delivery of healthcare. However, the enormity of these datasets poses great challenges for analysis and for practical application in a clinical environment.
In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises; a minimal PySpark sketch follows this outline.
By the end of this training, participants will be able to:
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to deal with medical data
- Study big data systems and algorithms in the context of health applications
Audience
- Developers
- Data Scientists
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice.
Note
- To request a customized training for this course, please contact us to arrange.
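To give a flavour of the tooling, the following PySpark sketch aggregates a hypothetical CSV of patient encounters stored on HDFS; the file path and column names are placeholder assumptions, not a real dataset.

```python
# A minimal PySpark sketch in the spirit of the course: aggregating a
# hypothetical CSV of patient encounters stored on HDFS.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("health-analytics-sketch").getOrCreate()

encounters = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("hdfs:///data/encounters.csv")   # hypothetical HDFS path
)

# Average length of stay per diagnosis code, for diagnoses seen at least 100 times.
summary = (
    encounters.groupBy("diagnosis_code")
    .agg(F.count("*").alias("cases"), F.avg("length_of_stay").alias("avg_los"))
    .filter(F.col("cases") >= 100)
    .orderBy(F.col("avg_los").desc())
)
summary.show(20)
```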
The major focus of the course is data manipulation and transformation.
Among the tools in the Hadoop ecosystem, this course covers the use of Pig and Hive, both of which are heavily used for data transformation and manipulation; a short Hive sketch follows this description.
This training also addresses performance metrics and performance optimisation.
The course is entirely hands on and is punctuated by presentations of the theoretical aspects.
The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment.
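As a brief illustration of driving a Hive transformation from Python, the sketch below uses the PyHive package; the HiveServer2 host and the source and target table names are hypothetical.

```python
# A minimal sketch of running a HiveQL transformation from Python with the
# PyHive package (pip install pyhive). Host and table names are hypothetical.
from pyhive import hive

conn = hive.connect(host="hiveserver2.example.com", port=10000, username="etl_user")
cursor = conn.cursor()

# A typical transformation step: build a cleaned, aggregated table from raw
# click data (hypothetical source table "raw_clicks").
cursor.execute("""
    CREATE TABLE IF NOT EXISTS clicks_by_day AS
    SELECT to_date(event_time) AS event_date,
           page,
           COUNT(*)            AS clicks
    FROM raw_clicks
    GROUP BY to_date(event_time), page
""")
```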
Goal:
Deep knowledge of Hadoop cluster administration.
This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.
By the end of this training, participants will be able to:
- Create powerful, stream processing applications for handling large volumes of data
- Process stream sources such as Twitter and web server logs
- Use Tigon for rapid joining, filtering, and aggregating of streams
Audience
- Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice