AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy to prepare and load your data for analytics. In the fourth post of the series, we discussed optimizing memory management. In this post, we focus on writing ETL scripts for AWS Glue jobs locally. AWS Glue is built on top of Apache Spark and therefore inherits the strengths of that open-source technology. AWS Glue adds many improvements on top of Apache Spark and has its own ETL libraries that can fast-track the development process and reduce boilerplate code.
The AWS Glue team released the AWS Glue binaries, which let you set up an environment on your desktop to test your code. We have used these libraries to create an image with all the right dependencies packaged together. The image has AWS Glue 1.0, Apache Spark, OpenJDK, Maven, Python3, the AWS Command Line Interface (AWS CLI), and boto3. We have also bundled Jupyter and Zeppelin notebook servers in the image, so you don’t have to configure an IDE and can start developing AWS Glue code right away.
The AWS Glue team will release new images for various AWS Glue updates. The tags of the new images follow the convention `glue_libs_<glue-version>_image_<image-version>`. For example, in `glue_libs_1.0.0_image_01`, `1.0` is the AWS Glue major version, `.0` is the patch version, and `01` is the image version. The patch version is incremented for updates to the AWS Glue libraries of a major release. The image version is incremented for the release of a new image of a major AWS Glue release. Both these increments reset with every major AWS Glue release, so the first image released for AWS Glue 2.0 will be `glue_libs_2.0.0_image_01`.
We recommend pulling the highest image version for an AWS Glue major version to get the latest updates.
Prerequisites
Before you start, make sure that Docker is installed and the Docker daemon is running. For installation instructions, see the Docker documentation for Mac, Windows, or Linux. The machine running Docker hosts the AWS Glue container. Also make sure that you have at least 7 GB of disk space for the image on the host running Docker.
For more information about restrictions when developing AWS Glue code locally, see Local Development Restrictions.
Solution overview
In this post, we use `amazon/aws-glue-libs:glue_libs_1.0.0_image_01` from Docker Hub. This image has only been tested for an AWS Glue 1.0 Spark shell (both for PySpark and Scala). It hasn’t been tested for an AWS Glue 1.0 Python shell.
We organize this post into the following three sections. You only have to complete one of the three sections (not all three), depending on your requirements:
- Setting up the container to use Jupyter or Zeppelin notebooks
- Setting up the Docker image with PyCharm Professional
- Running against the CLI interpreter
This post uses the following two terms frequently:
- Client – The system from which you access the notebook. You open a web browser on this system and enter the notebook URL.
- Host – The system that hosts the Docker daemon. The container runs on this system.
Sometimes, your client and host can be the same system.
Setting up the container to use Jupyter or Zeppelin notebooks
Setting up the container to run PySpark code in a notebook includes three high-level steps:
- Pulling the image from Docker Hub.
- Running the container.
- Opening the notebook.
Pulling the image from Docker Hub
If you’re running Docker on Windows, choose the Docker icon (right-click) and choose Switch to Linux containers… before pulling the image.
Open `cmd` on Windows or `terminal` on Mac and run the following command:
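```
docker pull amazon/aws-glue-libs:glue_libs_1.0.0_image_01
```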
Running the container
We pulled the image from Docker Hub in the previous step. We now run a container using this image.
The general format of the `run` command is as follows:
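```
docker run -itd -p <port_on_host>:<port_on_container_either_8888_or_8080> -p 4040:4040 <credential_setup_to_access_AWS_resources> --name <container_name> amazon/aws-glue-libs:glue_libs_1.0.0_image_01 <command_to_start_notebook_server>
```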
The code includes the following information:
- `<port_on_host>` – The local port of your host that is mapped to the port of the container. For our use case, the container port is either `8888` (for a Jupyter notebook) or `8080` (for a Zeppelin notebook). To keep things simple, we use the same port numbers as the notebook server ports on the container in the following examples.
- `<port_on_container_either_8888_or_8080>` – The port of the notebook server on the container. The default port of Jupyter is `8888`; the default port of Zeppelin is `8080`.
- `4040:4040` – This is required for the Spark UI. `4040` is the default port for the Spark UI. For more information, see Web Interfaces.
- `<credential_setup_to_access_AWS_resources>` – In this section, we go with the typical case of mounting the host’s directory containing the credentials. We assume that your host has the credentials configured using `aws configure`. The flow chart in the Appendix section explains various ways to set the credentials if the assumption doesn’t hold for your environment.
- `<container_name>` – The name of the container. You can use any text here.
- `amazon/aws-glue-libs:glue_libs_1.0.0_image_01` – The name of the image that we pulled in the previous step.
- `<command_to_start_notebook_server>` – We run `/home/zeppelin/bin/zeppelin.sh` for a Zeppelin notebook and `/home/jupyter/jupyter_start.sh` for a Jupyter notebook. If you want to run your code against the CLI interpreter, you don’t need a notebook server and can leave this argument blank.
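For example, the following command starts a Jupyter notebook server and passes read-only credentials from a Mac or Linux host (`/root/.aws` is the container path used for credentials throughout this post):

```
docker run -itd -p 8888:8888 -p 4040:4040 -v ~/.aws:/root/.aws:ro --name glue_jupyter amazon/aws-glue-libs:glue_libs_1.0.0_image_01 /home/jupyter/jupyter_start.sh
```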
To run a Zeppelin notebook, replace `8888:8888` with `8080:8080`, `glue_jupyter` with `glue_zeppelin`, and `/home/jupyter/jupyter_start.sh` with `/home/zeppelin/bin/zeppelin.sh`. For example, the following command starts a Zeppelin notebook server and passes read-only credentials from a Mac or Linux host:
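```
docker run -itd -p 8080:8080 -p 4040:4040 -v ~/.aws:/root/.aws:ro --name glue_zeppelin amazon/aws-glue-libs:glue_libs_1.0.0_image_01 /home/zeppelin/bin/zeppelin.sh
```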
You can now run the following command to make sure that the container is running:
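```
docker ps
```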
The Jupyter notebook is configured to allow connections from all IP addresses without authentication, and the Zeppelin notebook is configured to use anonymous access. This configuration makes sure that you can start working on your local machine with just two commands (`docker pull` and `docker run`). If your scenario mandates a different configuration, run the container without running the notebook startup script (`/home/jupyter/jupyter_start.sh` or `/home/zeppelin/bin/zeppelin.sh`). This starts the container but not the notebook server. You can then run the bash shell on the container using the following command, edit the required notebook configurations, and start the notebook server:
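```
docker exec -it <container_name> bash
```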
For example, for a container named `glue_jupyter`:
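```
docker exec -it glue_jupyter bash
```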
The following example code is the `docker run` command without the notebook server startup:
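```
# No startup script argument at the end: the container starts, but no notebook server runs
docker run -itd -p 8888:8888 -p 8080:8080 -p 4040:4040 -v ~/.aws:/root/.aws:ro --name glue_jupyter amazon/aws-glue-libs:glue_libs_1.0.0_image_01
```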
If you’re running the container on an Amazon Elastic Compute Cloud (Amazon EC2) instance, you have to set up inbound rules in the security group to allow communication on the ports used by the notebook server. A broad inbound rule can create security risks. For more information, see AWS Security Best Practices.
Opening the notebook
If your client and host are the same machine, enter the following URL for Jupyter: `http://localhost:8888`.
You can write PySpark code in the notebook as shown here. You can also use SQL magic (`%%sql`) to directly write SQL against the tables in the AWS Glue Data Catalog. If your catalog table is on top of JSON data, you have to place `json-serde.jar` in the `/home/spark-2.4.3-bin-spark-2.4.3-bin-hadoop2.8/jars` directory of the container and restart the kernel in your Jupyter notebook. You can place the jar in this directory by first running the bash shell on the container using the following command:
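```
docker exec -it glue_jupyter bash
```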
If you have a local directory that holds your notebooks, you can mount it to `/home/jupyter/jupyter_default_dir` using the `-v` option. These notebooks are available to you when you open the Jupyter notebook URL. For example, see the following code:
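```
# ~/notebooks is an example host directory; replace it with your own
docker run -itd -p 8888:8888 -p 4040:4040 -v ~/.aws:/root/.aws:ro -v ~/notebooks:/home/jupyter/jupyter_default_dir --name glue_jupyter amazon/aws-glue-libs:glue_libs_1.0.0_image_01 /home/jupyter/jupyter_start.sh
```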
The URL for Zeppelin is `http://localhost:8080`.
For Zeppelin notebooks, include `%spark.pyspark` at the top to run PySpark code.
If your host is Amazon EC2 and your client is your laptop, replace `localhost` in the preceding URLs with your host’s public IP address.
Depending on your network or if you’re on a VPN, you might have to set up an SSH tunnel. The general format of the tunnel command is as follows:
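```
# -N: don't run a remote command; -L: forward <local_port> on the client
# to <container_IP>:<notebook_port> through the EC2 host.
# ec2-user is the default user on Amazon Linux; adjust it for your AMI.
ssh -i <path_to_private_key> -N -L <local_port>:<container_IP>:<notebook_port> ec2-user@<EC2_public_IP>
```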
Your security group controlling the EC2 instance should allow inbound on port 22 from the client. A broad inbound rule can create security risks. For more information, see AWS Security Best Practices.
You can get `<container_IP>` from the `IPAddress` field when you run `docker inspect`. For example: `docker inspect glue_jupyter`.
If you set up the tunnel, the URL to access the notebook is `http://localhost:<local_port>`. Use `8888` or `8080` for `<local_port>`, depending on whether you’re running a Jupyter or Zeppelin notebook.
You can now use the following sample code to test your notebook:
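```
from pyspark import SparkContext
from awsglue.context import GlueContext

# Create a GlueContext on top of the local SparkContext
glueContext = GlueContext(SparkContext.getOrCreate())

# Read the public AWS Glue sample dataset (US legislators) into a DynamicFrame;
# the S3 path below is the standard sample-data path and can be swapped for your own
inputDF = glueContext.create_dynamic_frame_from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://awsglue-datasets/examples/us-legislators/all/persons.json"]},
    format="json",
)
inputDF.toDF().show()
```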
Although `awsglue-datasets` is a public bucket, you need at least the following permissions, attached to the AWS Identity and Access Management (IAM) user used for your container, to view the data:
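The following is a minimal example policy that grants this read access; scope it down further if your environment requires:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::awsglue-datasets",
                "arn:aws:s3:::awsglue-datasets/*"
            ]
        }
    ]
}
```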
You can also see the databases in your AWS Glue Data Catalog using the following code:
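```
from pyspark.sql import SparkSession

# getOrCreate() reuses the Spark session the notebook kernel already provides,
# assuming the image's Spark is configured against the AWS Glue Data Catalog
spark = SparkSession.builder.getOrCreate()
spark.sql("show databases").show()
```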
You need AWS Glue permissions to run the preceding command. The following are the minimum permissions required to run the code. Replace `<account_number>` with your account number and `<region>` with your Region:
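The following is a minimal example policy for listing databases; add actions such as `glue:GetTables` if you also query tables:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "glue:GetDatabases"
            ],
            "Resource": [
                "arn:aws:glue:<region>:<account_number>:catalog",
                "arn:aws:glue:<region>:<account_number>:database/*"
            ]
        }
    ]
}
```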
Similarly, you can query the AWS Glue Data Catalog tables too. If your host is an Amazon EC2 instance, you see the catalog of the Region of your EC2 instance. If your host is local, you see the catalog of the Region set in your `aws configure` or your `AWS_REGION` variable.
You can stop here if you want to develop AWS Glue code locally using only notebooks.
Setting up the Docker image with PyCharm Professional
This section describes setting up PyCharm Professional to use the image. For this post, we use Windows; there may be a few differences when using PyCharm on a Mac.
- Open `cmd` (or `terminal` for Mac) and pull `amazon/aws-glue-libs:glue_libs_1.0.0_image_01` using the following command. If you’re running Docker on Windows, choose the Docker icon (right-click) and choose Switch to Linux containers… before pulling the image:
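```
docker pull amazon/aws-glue-libs:glue_libs_1.0.0_image_01
```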
- Choose the Docker icon (right-click) and choose Settings (this step isn’t required for Mac or Linux).
- In the General section, select Expose daemon on tcp://localhost:2375 without TLS (this step isn’t required for Mac or Linux). Note the warning listed under the checkbox. This step is based on PyCharm documentation.
- Choose Apply & Restart (this step isn’t required for Mac or Linux).
- Choose the Docker icon (right-click) and choose Restart… if Docker doesn’t restart automatically (this step isn’t required for Mac or Linux).
- Open PyCharm and create a Pure Python project (if you don’t have one).
- Under File, choose Settings… (for Mac, under PyCharm, choose Preferences).
- Under Settings, choose Project Interpreter. In the following screenshot, GlueProject is the name of my project. Your project name might be different.
- Choose Show All… from the drop-down menu.
- Choose the + icon.
- Choose Docker.
- Choose New.
- For Name, enter a name (for example, `Docker-Glue`).
- Keep other settings at their default.
- If running on Windows, for Connect to Docker daemon with, select TCP socket and enter the Engine API URL. For this post, we enter `tcp://localhost:2375` because Docker and PyCharm are on the same Windows machine. If running on a Mac, select Docker for Mac; no API URL is required.
- Make sure you see the message `Connection successful`.
For Windows, if you don’t see this message, Docker may not have restarted after you changed the settings in Step 4. Restart Docker and repeat these steps. For more information about connection settings, see the PyCharm documentation.
The following screenshots show steps 13-16 in Windows and Mac.
- Choose OK.
You should now see the image listed in the drop-down menu.
- Choose the image that you pulled from Docker Hub (`amazon/aws-glue-libs:glue_libs_1.0.0_image_01`).
- Choose OK.
You now see the interpreter listed.
- Choose OK.
This lists all the packages in the image.
- Choose OK.
Steps 22-27 help you get AWS Glue-related code completion suggestions from PyCharm.
- Download the following file: https://s3.amazonaws.com/aws-glue-jes-prod-us-east-1-assets/etl-1.0/python/PyGlue.zip.
- Under File, choose Settings (for Mac, under PyCharm, choose Preferences).
- Under Project: <your_project_name>, choose Project Structure.
- Choose Add Content Root.
- Choose the newly downloaded `PyGlue.zip` file.
- In the Settings window, choose OK.
- Choose the project (right-click) and choose New, Python File.
- Enter a name for the Python file and press Enter.
- Enter the following code in the file and save it. For more information about the minimum permissions required to run this code, see this section.
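```
from pyspark import SparkContext
from awsglue.context import GlueContext

# Same sample as the notebook test earlier in this post: read the public
# us-legislators dataset from Amazon S3 and show it as a DataFrame
glueContext = GlueContext(SparkContext.getOrCreate())
inputDF = glueContext.create_dynamic_frame_from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://awsglue-datasets/examples/us-legislators/all/persons.json"]},
    format="json",
)
inputDF.toDF().show()
```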
- Choose Add Configuration.
- Choose the + icon.
- Under Add New Configuration, choose Python.
- For Name, enter a name.
- For Environment variables, enter the following:
- For Script path, select the newly created script in Step 29.
- For Python interpreter, choose the newly created interpreter.
- Choose Docker Container Settings.
- Under Volume bindings, choose the + icon.
- For Host path, add the absolute path to the `.aws` folder that holds the `credentials` and the `config` files.
- For Container path, add `/root/.aws`.
- Choose OK.
- For Run/Debug Configurations, choose OK.
- Run the code by choosing the green button on the top right.
You can also see the databases in your AWS Glue Data Catalog using the following code. For more information about the minimum permissions required to run this code, see this section.
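```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("show databases").show()
```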
Similarly, you can also query the catalog tables. If your host is an Amazon EC2 instance, you see the catalog of the Region of your EC2 instance. If your host is local, you see the catalog of the Region set in your `aws configure` or your `AWS_REGION` variable.
PyCharm gives code completion suggestions for AWS Glue (see the following screenshot). This is possible because of the steps you completed earlier.
Running against the CLI interpreter
You can always run the bash shell on the container and run your PySpark code directly against the CLI interpreter in the container.
- Complete the Pulling the image from Docker Hub and Running the container steps in the section Setting up the container to use Jupyter or Zeppelin notebooks.
- Run the bash shell on the container by entering the following code. Replace `<container_name>` with the name (`--name` argument) you used earlier:
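```
docker exec -it <container_name> bash
```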
- Run one of the following commands:
- For PySpark, enter the following code:
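The script path below assumes the image places aws-glue-libs under `/home/aws-glue-libs`; verify the location in your image:

```
/home/aws-glue-libs/bin/gluepyspark
```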
- For Scala, enter the following code:
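The path below is an assumption based on the Spark installation directory mentioned earlier in this post; adjust it to match your image:

```
/home/spark-2.4.3-bin-spark-2.4.3-bin-hadoop2.8/bin/spark-shell
```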
Conclusion
In this post, we learned about a three-step process to get started with AWS Glue using Jupyter or Zeppelin notebooks. Although notebooks are a great way to get started and a great asset to data scientists and data wranglers, data engineers generally have a source control repository, an IDE, and a well-defined CI/CD process. Because PyCharm is a widely used IDE for PySpark development, we showed how to use the image with PyCharm Professional. You can develop your code locally in your IDE and test it locally using the container, and your CI/CD process can run as it does with any other IDE and source control tool in your organization. Although we showed integration with PyCharm, you can similarly integrate the container with any IDE that you use to complete your CI/CD story with AWS Glue.
Appendix
The following section discusses various ways to set the credentials to access AWS resources (such as Amazon Simple Storage Service (Amazon S3), AWS Step Functions, and more) from the container.
You need to provide your AWS credentials to connect to an AWS service from the container. The AWS SDKs and CLIs use provider chains to look for AWS credentials in several different places, including system or user environment variables and in local AWS configuration files. For more information about how to set up credentials, see https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/credentials.html. To generate the credentials using the AWS Management Console, see Managing Access Keys (Console). For instructions on generating credentials with the AWS CLI, see create-access-key. For more information about generating credentials with an API, see CreateAccessKey.
The following flow chart shows the various ways to set up AWS credentials for the container. Most of these mechanisms don’t work with PyCharm because we use the image there and not the container. You can use the container as an SSH interpreter in PyCharm and then use one of the credential setting mechanisms listed here. However, that discussion is out of the scope of this post.
Note that the numbers, in brackets, match the code snippets that follow the chart.
(1) For more information about the syntax of setting up the tunnel, see the SSH tunnel discussion earlier in this post.
(2) To set credentials using the `docker cp` command to copy credentials from the Windows host to the container, enter the following code (this example uses the container name `glue_jupyter`):
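```
REM Copies the Windows user's .aws folder into the container as /root/.aws
docker cp %USERPROFILE%\.aws glue_jupyter:/root/
```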
(3) To mount the host’s .aws directory on the container with the rw option, use `-v <host_path_to_.aws>:/root/.aws:rw` in the `docker run` command.
(4) To mount the host’s .aws directory on the container with the ro option, use `-v <host_path_to_.aws>:/root/.aws:ro` in the `docker run` command, as shown in the examples earlier in this post.
(5) To set the credentials in a file, enter the following code:
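```
# The --env-file flag loads environment variables from a file on the host
docker run -itd -p 8888:8888 -p 4040:4040 --env-file /datalab_pocs/glue_local/env_variables.txt --name glue_jupyter amazon/aws-glue-libs:glue_libs_1.0.0_image_01 /home/jupyter/jupyter_start.sh
```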
`/datalab_pocs/glue_local/env_variables.txt` is the absolute path of the file holding the environment variables. The file should have the following variables:
- `AWS_ACCESS_KEY_ID=<your_access_key_ID>`
- `AWS_SECRET_ACCESS_KEY=<your_secret_access_key>`
- `AWS_REGION=<your_Region>`
For more information about Regions, see Regions, Availability Zones, and Local Zones.
(6) To set the credentials in the `docker run` command, enter the following code:
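```
docker run -itd -p 8888:8888 -p 4040:4040 -e AWS_ACCESS_KEY_ID=<your_access_key_ID> -e AWS_SECRET_ACCESS_KEY=<your_secret_access_key> -e AWS_REGION=<your_Region> --name glue_jupyter amazon/aws-glue-libs:glue_libs_1.0.0_image_01 /home/jupyter/jupyter_start.sh
```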
(7) To set credentials using `aws configure` on the container, enter the following code:
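```
# Open a shell on the running container, then configure credentials inside it
docker exec -it glue_jupyter bash
aws configure
```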