Pangeo

About Pangeo’s Binder

Much like mybinder.org, Pangeo’s BinderHub deployment (binder.pangeo.io) allows users to create and share custom computing environments. The main distinction between the two BinderHubs is that Pangeo’s deployment lets users perform scalable computations using Dask Gateway.

For more information on the Pangeo project, check out the online documentation.

Using Pangeo’s Binder

Preparing a repository for use with a BinderHub is quite simple. The best place to start is the BinderHub documentation. The sections below outline some common configurations used on Pangeo’s BinderHub deployment. Specifically, we’ll provide examples of the .dask/config.yaml configuration file and the binder/start script.

Using the Pangeo-Binder Cookiecutter

We have put together a cookiecutter repository to help set up Binder repositories that can take advantage of Pangeo. It automates some of the configuration described in detail below. To use it:

pip install -U cookiecutter
cookiecutter https://github.com/pangeo-data/cookiecutter-pangeo-binder.git

After running the cookiecutter command, follow the command-line prompts to complete setting up your repository. Then add some Jupyter notebooks, configure your environment, and push the whole thing to GitHub.

Configuring Dask

The Pangeo Binder is configured to include a Dask Gateway server, which allows users to create Dask clusters for distributed computation. To use it, we recommend depending on the pangeo-notebook metapackage, which brings in dask-gateway, dask-labextension, and several other dependencies.

# binder/environment.yml
channels:
  - conda-forge
dependencies:
  - pangeo-notebook
  # Additional packages for your analysis...

The dask-gateway version in your environment.yml must match the Dask Gateway server version pre-configured on the Binder. That’s currently dask-gateway==0.6.1.

With Dask Gateway installed, your notebooks can create clusters:

from dask_gateway import Gateway
from dask.distributed import Client

# Connect to the Dask Gateway server bundled with the Binder
gateway = Gateway()
# Request a new Dask cluster from the gateway
cluster = gateway.new_cluster()

# Connect a client so subsequent Dask computations run on the cluster
client = Client(cluster)

You can use dask_gateway.GatewayCluster.scale() to scale the number of workers manually, or set the cluster to adaptive mode with dask_gateway.GatewayCluster.adapt() to scale up and down based on computational load.
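The difference between manual and adaptive scaling can be sketched as follows. GatewayCluster exposes the same scale() and adapt() methods used here; a LocalCluster stands in so the example runs without a Dask Gateway server:

```python
# A runnable sketch of manual vs. adaptive scaling. GatewayCluster exposes
# the same scale()/adapt() methods; LocalCluster stands in here so the
# example works without a Dask Gateway server.
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=1, threads_per_worker=1, processes=False)
client = Client(cluster)

# Manually request two workers and wait until they are available.
cluster.scale(2)
client.wait_for_workers(2)
n_workers = len(cluster.workers)

# Alternatively, let the cluster grow and shrink with computational load.
cluster.adapt(minimum=1, maximum=4)

client.close()
cluster.close()
```

On Pangeo’s Binder you would replace the LocalCluster with a cluster obtained from gateway.new_cluster() as shown above; the scaling calls are identical.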

start script

The start script (e.g. binder/start) provides a mechanism to update the user environment at run time. The start script should look roughly like the example below. A few key points about using the start script:

  • The start script must end with the exec "$@" line.
  • The start script should not do any major work (e.g. don’t download a large dataset in this script).

#!/bin/bash

# Replace DASK_DASHBOARD_URL with the proxy location
sed -i -e "s|DASK_DASHBOARD_URL|/user/${JUPYTERHUB_USER}/proxy/8787|g" binder/jupyterlab-workspace.json
# Get the right workspace ID
sed -i -e "s|WORKSPACE_ID|/user/${JUPYTERHUB_USER}/lab|g" binder/jupyterlab-workspace.json

# Import the workspace into JupyterLab
jupyter lab workspaces import binder/jupyterlab-workspace.json \
  --NotebookApp.base_url=user/${JUPYTERHUB_USER}

exec "$@"
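The .dask/config.yaml file mentioned earlier typically tells Dask how to render dashboard links, using the same proxy path that the start script substitutes in. A minimal sketch is below; the keys are standard Dask distributed settings, but the exact file your repository needs may differ:

```yaml
# binder/.dask/config.yaml  (illustrative sketch)
distributed:
  dashboard:
    # Render dashboard links through the JupyterHub proxy, matching the
    # /user/<username>/proxy/8787 path used in the start script above.
    # {JUPYTERHUB_USER} and {port} are filled in by Dask at runtime.
    link: "/user/{JUPYTERHUB_USER}/proxy/{port}/status"
```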

Examples using Pangeo’s Binder