Big Data on AWS introduces you to cloud-based big data solutions and Amazon Elastic MapReduce (EMR), the AWS big data platform. In this course, we show you how to use Amazon EMR to process data with the broad ecosystem of Hadoop tools such as Pig and Hive. We also teach you how to create big data environments; work with Amazon DynamoDB, Amazon Redshift, and Amazon Kinesis; and apply best practices to design those environments for security and cost-effectiveness.
Course Objectives
This course is designed to teach you how to:
· Understand Apache Hadoop in the context of Amazon EMR.
· Understand the architecture of an Amazon EMR cluster.
· Launch an Amazon EMR cluster using an appropriate Amazon Machine Image and Amazon EC2 instance types (see the sketch after this list).
· Choose appropriate AWS data storage options for use with Amazon EMR.
· Know your options for ingesting, transferring, and compressing data for use with Amazon EMR.
· Use common programming frameworks available for Amazon EMR, including Hive, Pig, and Streaming.
· Work with Amazon Redshift to implement a big data solution.
· Leverage big data visualization software.
· Choose appropriate security options for Amazon EMR and your data.
· Perform in-memory data analysis with Spark and Shark on Amazon EMR.
· Choose appropriate options to manage your Amazon EMR environment cost-effectively.
· Understand the benefits of using Amazon Kinesis for big data.
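To give a feel for what launching a cluster involves, here is a minimal sketch using the AWS SDK for Python (boto3). The cluster name, release label, instance types, S3 bucket, and IAM role names are illustrative assumptions, not values from the course; note that newer EMR versions are selected with a release label rather than the Amazon Machine Image versions the course discusses.

import boto3

# Create an EMR client (the region is an illustrative assumption)
emr = boto3.client("emr", region_name="us-east-1")

# Launch a small cluster with Hive, Pig, and Spark installed
response = emr.run_job_flow(
    Name="bigdata-course-demo",              # hypothetical cluster name
    ReleaseLabel="emr-5.36.0",               # assumed release; replaces older AMI versions
    Applications=[{"Name": "Hive"}, {"Name": "Pig"}, {"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",   # assumed EC2 instance types
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,                  # one master node, two core nodes
        "KeepJobFlowAliveWhenNoSteps": True, # keep the cluster alive for interactive work
    },
    JobFlowRole="EMR_EC2_DefaultRole",       # assumes the default EMR roles already exist
    ServiceRole="EMR_DefaultRole",
    LogUri="s3://your-bucket/emr-logs/",     # hypothetical S3 bucket for cluster logs
    VisibleToAllUsers=True,
)

print("Started cluster:", response["JobFlowId"])

Treat this only as an illustration of the pieces an EMR launch brings together: a release, instance types and count, the applications to install, IAM roles, and an Amazon S3 location for logs.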
Intended Audience
This course is intended for:
· Individuals responsible for designing and implementing big data solutions, namely Solutions Architects and SysOps Administrators.
· Data Scientists and Data Analysts interested in learning about big data solutions on AWS.
Delivery Method
This course will be delivered through a mix of:
· Instructor-Led Training (ILT).
· Hands-on Labs.
Hands-On Activity
This course allows you to test new skills and apply knowledge to your working environment through a variety of practical exercises.