##
# Generic dockerfile for dbt image building.
# See README for operational details
##
# Top level build args
ARG build_for=linux/amd64
##
# base image (abstract)
##
FROM --platform=$build_for python:3.9.9-slim-bullseye as base
# N.B. The refs are updated automagically every release via bumpversion
# N.B. dbt-postgres is currently found in the core codebase so a value of dbt-core@<some_version> is correct
ARG dbt_core_ref=dbt-core@v1.0.1
ARG dbt_postgres_ref=dbt-core@v1.0.1
ARG dbt_redshift_ref=dbt-redshift@v1.0.0
ARG dbt_bigquery_ref=dbt-bigquery@v1.0.0
ARG dbt_snowflake_ref=dbt-snowflake@v1.0.0
ARG dbt_spark_ref=dbt-spark@v1.0.0
# special case args
ARG dbt_spark_version=all
ARG dbt_third_party
# System setup
RUN apt-get update \
&& apt-get dist-upgrade -y \
&& apt-get install -y --no-install-recommends \
git \
ssh-client \
software-properties-common \
make \
build-essential \
ca-certificates \
libpq-dev \
&& apt-get clean \
&& rm -rf \
/var/lib/apt/lists/* \
/tmp/* \
/var/tmp/*
# Env vars
ENV PYTHONIOENCODING=utf-8
ENV LANG=C.UTF-8
# Update python
RUN python -m pip install --upgrade pip setuptools wheel --no-cache-dir
# Set docker basics
WORKDIR /usr/app/dbt/
VOLUME /usr/app
ENTRYPOINT ["dbt"]
##
# dbt-core
##
FROM base as dbt-core
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_core_ref}#egg=dbt-core&subdirectory=core"
##
# dbt-postgres
##
FROM base as dbt-postgres
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_postgres_ref}#egg=dbt-postgres&subdirectory=plugins/postgres"
##
# dbt-redshift
##
FROM base as dbt-redshift
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_redshift_ref}#egg=dbt-redshift"
##
# dbt-bigquery
##
FROM base as dbt-bigquery
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_bigquery_ref}#egg=dbt-bigquery"
##
# dbt-snowflake
##
FROM base as dbt-snowflake
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_snowflake_ref}#egg=dbt-snowflake"
##
# dbt-spark
##
FROM base as dbt-spark
RUN apt-get update \
&& apt-get dist-upgrade -y \
&& apt-get install -y --no-install-recommends \
python-dev \
libsasl2-dev \
gcc \
unixodbc-dev \
&& apt-get clean \
&& rm -rf \
/var/lib/apt/lists/* \
/tmp/* \
/var/tmp/*
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_spark_ref}#egg=dbt-spark[${dbt_spark_version}]"
##
# dbt-third-party
##
FROM dbt-core as dbt-third-party
RUN python -m pip install --no-cache-dir "${dbt_third_party}"
##
# dbt-all
##
FROM base as dbt-all
RUN apt-get update \
&& apt-get dist-upgrade -y \
&& apt-get install -y --no-install-recommends \
python-dev \
libsasl2-dev \
gcc \
unixodbc-dev \
&& apt-get clean \
&& rm -rf \
/var/lib/apt/lists/* \
/tmp/* \
/var/tmp/*
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_redshift_ref}#egg=dbt-redshift"
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_bigquery_ref}#egg=dbt-bigquery"
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_snowflake_ref}#egg=dbt-snowflake"
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_spark_ref}#egg=dbt-spark[${dbt_spark_version}]"
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_postgres_ref}#egg=dbt-postgres&subdirectory=plugins/postgres"
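##
# Example usage (a sketch; the image tags below are arbitrary names chosen here):
#   docker build --target dbt-postgres --tag my-dbt-postgres .
#   docker build --target dbt-bigquery \
#     --build-arg dbt_bigquery_ref=dbt-bigquery@v1.0.0 \
#     --tag my-dbt-bigquery .
# Since the entrypoint is `dbt`, the result can be run as, e.g.:
#   docker run my-dbt-postgres --version
##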
Import-Module posh-git
Import-Module PSFzf -ArgumentList 'Ctrl+t', 'Ctrl+r'
Import-Module z
Import-Module Terminal-Icons
Set-PSReadlineKeyHandler -Key Tab -Function MenuComplete
$env:POSH_GIT_ENABLED=$true
oh-my-posh init pwsh --config $env:POSH_THEME | Invoke-Expression
# NOTE: You can override the above env var from the devcontainer.json "args" under the "build" key.
# Aliases
Set-Alias -Name ac -Value Add-Content
## Course Project
### Objective
The goal of this project is to apply everything we have learned
in this course to build an end-to-end data pipeline.
### Problem statement
Develop a dashboard with two tiles by:
* Selecting a dataset of interest (see [Datasets](#datasets))
* Creating a pipeline for processing this dataset and putting it into a data lake
* Creating a pipeline for moving the data from the lake to a data warehouse
* Transforming the data in the data warehouse: preparing it for the dashboard
* Building a dashboard to visualize the data
## Data Pipeline
The pipeline could be **stream** or **batch**: this is the first thing you'll need to decide
* **Stream**: if you want to consume data in real time and put it into the data lake
* **Batch**: if you want to run things periodically (e.g. hourly/daily)
## Technologies
You don't have to limit yourself to technologies covered in the course. You can use alternatives as well:
* **Cloud**: AWS, GCP, Azure, ...
* **Infrastructure as code (IaC)**: Terraform, Pulumi, Cloud Formation, ...
* **Workflow orchestration**: Airflow, Prefect, Luigi, ...
* **Data Warehouse**: BigQuery, Snowflake, Redshift, ...
* **Batch processing**: Spark, Flink, AWS Batch, ...
* **Stream processing**: Kafka, Pulsar, Kinesis, ...
If you use a tool that wasn't covered in the course, be sure to explain what that tool does.
If you're not certain about some tools, ask in Slack.
## Dashboard
You can use any of the tools shown in the course (Data Studio or Metabase) or any other BI tool of your choice to build a dashboard. If you do use another tool, please specify and make sure that the dashboard is somehow accessible to your peers.
Your dashboard should contain at least two tiles; we suggest you include:
- 1 graph that shows the distribution of some categorical data
- 1 graph that shows the distribution of the data across a temporal line
Ensure that your graph is easy to understand by adding references and titles.
Example dashboard: ![image](https://user-images.githubusercontent.com/4315804/159771458-b924d0c1-91d5-4a8a-8c34-f36c25c31a3c.png)
## Peer reviewing
> [!IMPORTANT]
> To evaluate the projects, we'll use peer reviewing. This is a great opportunity for you to learn from each other.
> * To get points for your project, you need to evaluate 3 projects of your peers
> * You get 3 extra points for each evaluation
## Evaluation Criteria
* Problem description
    * 0 points: Problem is not described
    * 1 point: Problem is described, but briefly or unclearly
    * 2 points: Problem is well described and it's clear what problem the project solves
* Cloud
    * 0 points: Cloud is not used, things run only locally
    * 2 points: The project is developed in the cloud
    * 4 points: The project is developed in the cloud and IaC tools are used
* Data ingestion (choose either batch or stream)
    * Batch / Workflow orchestration
        * 0 points: No workflow orchestration
        * 2 points: Partial workflow orchestration: some steps are orchestrated, some run manually
        * 4 points: End-to-end pipeline: multiple steps in the DAG, uploading data to the data lake
    * Stream
        * 0 points: No streaming system (like Kafka, Pulsar, etc.)
        * 2 points: A simple pipeline with one consumer and one producer
        * 4 points: Using consumers/producers and streaming technologies (like Kafka Streams, Spark Streaming, Flink, etc.)
* Data warehouse
    * 0 points: No DWH is used
    * 2 points: Tables are created in the DWH, but not optimized
    * 4 points: Tables are partitioned and clustered in a way that makes sense for the upstream queries (with explanation)
* Transformations (dbt, Spark, etc.)
    * 0 points: No transformations
    * 2 points: Simple SQL transformation (no dbt or similar tools)
    * 4 points: Transformations are defined with dbt, Spark or similar technologies
* Dashboard
    * 0 points: No dashboard
    * 2 points: A dashboard with 1 tile
    * 4 points: A dashboard with 2 tiles
* Reproducibility
    * 0 points: No instructions on how to run the code at all
    * 2 points: Some instructions are there, but they are not complete
    * 4 points: Instructions are clear, it's easy to run the code, and the code works
> [!NOTE]
> It's highly recommended to create a new repository for your project (not inside an existing repo) with a meaningful title, such as
> "Quake Analytics Dashboard" or "Bike Data Insights" and include as many details as possible in the README file. ChatGPT can assist you with this. Doing so will not only make it easier to showcase your project for potential job opportunities but also have it featured on the [Projects Gallery App](#projects-gallery).
> If you leave the README file empty or with minimal details, there may be point deductions as per the [Evaluation Criteria](#evaluation-criteria).
## Going the extra mile (Optional)
> [!NOTE]
> The following things are not covered in the course, are entirely optional and they will not be graded.
> However, implementing these could significantly enhance the quality of your project:
* Add tests
* Use make
* Add CI/CD pipeline
If you intend to include this project in your portfolio, adding these additional features will definitely help you to stand out from others.
## Resources
### Datasets
Refer to the provided [datasets](datasets.md) for possible selection.
### Helpful Links
* [Unit Tests + CI for Airflow](https://www.astronomer.io/events/recaps/testing-airflow-to-bulletproof-your-code/)
* [CI/CD for Airflow (with Gitlab & GCP state file)](https://engineering.ripple.com/building-ci-cd-with-airflow-gitlab-and-terraform-in-gcp)
* [CI/CD for Airflow (with GitHub and S3 state file)](https://programmaticponderings.com/2021/12/14/devops-for-dataops-building-a-ci-cd-pipeline-for-apache-airflow-dags/)
* [CD for Terraform](https://towardsdatascience.com/git-actions-terraform-for-data-engineers-scientists-gcp-aws-azure-448dc7c60fcc)
* [Spark + Airflow](https://medium.com/doubtnut/github-actions-airflow-for-automating-your-spark-pipeline-c9dff32686b)
### Projects Gallery
Explore a collection of projects completed by members of our community. The projects cover a wide range of topics and utilize different tools and techniques. Feel free to delve into any project and see how others have tackled real-world problems with data, structured their code, and presented their findings. It's a great resource to learn and get ideas for your own projects.
[![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://datatalksclub-projects.streamlit.app/)
### DE Zoomcamp 2023
* [2023 Projects](../cohorts/2023/project.md)
### DE Zoomcamp 2022
* [2022 Projects](../cohorts/2022/project.md)
# Activate oh-my-posh prompt:
oh-my-posh init fish --config $POSH_THEME | source
# NOTE: You can override the above env vars from the devcontainer.json "args" under the "build" key.
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.177.0/containers/go
{
"name": "oh-my-posh",
"build": {
"dockerfile": "Dockerfile",
"args": {
// Update the VARIANT arg to pick a version of Go: 1, 1.16, 1.17
// Append -bullseye or -buster to pin to an OS version.
// Use -bullseye variants on local arm64/Apple Silicon.
"VARIANT": "1.19-bullseye",
// Options:
"POSH_THEME": "https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/main/themes/clean-detailed.omp.json",
// Override me with your own timezone:
"TZ": "America/Moncton",
// Use one of the "TZ database name" entries from:
// https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
"NODE_VERSION": "lts/*",
      // PowerShell version
"PS_VERSION": "7.2.7"
}
},
"runArgs": ["--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined"],
"features": {
"ghcr.io/devcontainers/features/azure-cli:1": {
"version": "latest"
},
"ghcr.io/devcontainers/features/python:1": {
"version": "3.9"
},
"ghcr.io/devcontainers-contrib/features/curl-apt-get:1": {},
"ghcr.io/devcontainers-contrib/features/terraform-asdf:2": {},
"ghcr.io/devcontainers-contrib/features/yamllint:2": {},
"ghcr.io/devcontainers/features/docker-in-docker:2": {},
"ghcr.io/devcontainers/features/docker-outside-of-docker:1": {},
"ghcr.io/devcontainers/features/github-cli:1": {},
"ghcr.io/devcontainers-contrib/features/spark-sdkman:2": {
"jdkVersion": "11"
},
"ghcr.io/dhoeric/features/google-cloud-cli:1": {
"version": "latest"
}
},
// Set *default* container specific settings.json values on container create.
"customizations": {
"vscode": {
"settings": {
"go.toolsManagement.checkForUpdates": "local",
"go.useLanguageServer": true,
"go.gopath": "/go",
"go.goroot": "/usr/local/go",
"terminal.integrated.profiles.linux": {
"bash": {
"path": "bash"
},
"zsh": {
"path": "zsh"
},
"fish": {
"path": "fish"
},
"tmux": {
"path": "tmux",
"icon": "terminal-tmux"
},
"pwsh": {
"path": "pwsh",
"icon": "terminal-powershell"
}
},
"terminal.integrated.defaultProfile.linux": "pwsh",
"terminal.integrated.defaultProfile.windows": "pwsh",
"terminal.integrated.defaultProfile.osx": "pwsh",
"tasks.statusbar.default.hide": true,
"terminal.integrated.tabs.defaultIcon": "terminal-powershell",
"terminal.integrated.tabs.defaultColor": "terminal.ansiBlue",
"workbench.colorTheme": "GitHub Dark Dimmed",
"workbench.iconTheme": "material-icon-theme"
},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"actboy168.tasks",
"eamodio.gitlens",
"davidanson.vscode-markdownlint",
"editorconfig.editorconfig",
"esbenp.prettier-vscode",
"github.vscode-pull-request-github",
"golang.go",
"ms-vscode.powershell",
"redhat.vscode-yaml",
"yzhang.markdown-all-in-one",
"ms-python.python",
"ms-python.vscode-pylance",
"ms-toolsai.jupyter",
"akamud.vscode-theme-onedark",
"ms-vscode-remote.remote-containers",
"PKief.material-icon-theme",
"GitHub.github-vscode-theme"
]
}
},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [3000],
// Use 'postCreateCommand' to run commands after the container is created.
"postCreateCommand": "pip3 install --user -r .devcontainer/requirements.txt --use-pep517",
  // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "vscode"
}
kafka-python==1.4.6
confluent_kafka
requests
avro
faust
fastavro
.gradle
bin
!src/main/resources/rides.csv
build/classes
build/generated
build/libs
build/reports
build/resources
build/test-results
build/tmp
## Thank you!
Thanks for signing up for the course.
The process of adding you to the mailing list is not automated yet,
but you will hear from us closer to the course start.
To make sure you don't miss any announcements:
- Register in [DataTalks.Club's Slack](https://datatalks.club/slack.html) and
join the [`#course-data-engineering`](https://app.slack.com/client/T01ATQK62F8/C01FABYF2RG) channel
- Join the [course Telegram channel with announcements](https://t.me/dezoomcamp)
- Subscribe to [DataTalks.Club's YouTube channel](https://www.youtube.com/c/DataTalksClub) and check
[the course playlist](https://www.youtube.com/playlist?list=PL3MmuxUbc_hJed7dXYoJw8DoCuVHhGEQb)
- Subscribe to our [public Google Calendar](https://calendar.google.com/calendar/?cid=ZXIxcjA1M3ZlYjJpcXU0dTFmaG02MzVxMG9AZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ) (it works from Desktop only)
See you in January!
![](images/architecture/arch_1.jpg)
## Asking questions
If you have any questions, ask them
in the [`#course-data-engineering`](https://app.slack.com/client/T01ATQK62F8/C01FABYF2RG) channel in [DataTalks.Club](https://datatalks.club) slack.
To keep our discussion in Slack more organized, we ask you to follow these suggestions:
* First, review the "How to troubleshoot issues" section below.
* Before asking a question, check the [FAQ](https://docs.google.com/document/d/19bnYs80DwuUimHM65UV3sylsCn2j1vziPOwzBwQrebw/edit).
* Before asking a question, review the [Slack Guidelines](#Ask-in-Slack).
* If somebody helped you with your problem and it's not in [FAQ](https://docs.google.com/document/d/19bnYs80DwuUimHM65UV3sylsCn2j1vziPOwzBwQrebw/edit), please add it there.
It'll help other students.
### How to troubleshoot issues
The first step is to try to solve the issue on your own; get used to solving problems. This is a real-life skill you will need when employed.
1. What does the error say? There will often be a description of the error or instructions on what is needed; sometimes there is even a link to the solution. Does it reference a specific line of your code?
2. Restart the application or server/PC.
3. Google it. It is rare that you are the first to have the problem; someone out there has likely posted both the issue and the solution. Search using: **technology** **problem statement**. Example: `pgcli error column c.relhasoids does not exist`.
    * There are often different solutions for the same problem due to variation in environments.
4. Check the tech's documentation. Use its search if available, or use the browser's search function.
5. Try uninstalling (this may remove the bad actor) and reinstalling the application, or redoing the action. Don't forget to restart the server/PC for reinstalls.
    * Sometimes reinstalling fails to resolve the issue but works if you uninstall first.
6. Post your question to Stack Overflow. Be sure to read the Stack Overflow guide on posting good questions.
    * [Stack Overflow How To Ask Guide](https://stackoverflow.com/help/how-to-ask).
    * This will be your real-life "ask an expert" in the future (in addition to coworkers).
7. ##### Ask in Slack
    * Before asking a question, check the [FAQ](https://docs.google.com/document/d/19bnYs80DwuUimHM65UV3sylsCn2j1vziPOwzBwQrebw/edit).
    * DO NOT use screenshots; especially, don't take pictures of your screen with a phone.
    * DO NOT tag instructors; it may discourage others from helping you.
    * Copy and paste errors; if it's long, just post it in a reply to your thread.
    * Use ``` for formatting your code.
    * Use the same thread for the conversation (that means reply to your own thread).
    * DO NOT create multiple posts to discuss the issue.
    * You may create a new post if the issue re-emerges down the road. Be sure to describe what has changed in the environment.
    * Provide additional information in the same thread about the steps you have taken toward resolution.
8. Take a break and come back to it later. You will be amazed at how often you figure out the solution after letting your brain rest. Get some fresh air, workout, play a video game, watch a tv show, whatever allows your brain to not think about it for a little while or even until the next day.
9. Remember: technology issues in real life sometimes take days or even weeks to resolve.
## Getting your certificate
Congratulations on finishing the course!
Here's how you can get your certificate.
First, get your certificate id using the `compute_certificate_id` function:
```python
from hashlib import sha1
def compute_hash(email):
return sha1(email.encode('utf-8')).hexdigest()
def compute_certificate_id(email):
email_clean = email.lower().strip()
return compute_hash(email_clean + '_')
```
> **Note** that this is not the same hash as you have on the leaderboard.
> There's an extra "_" added to your email, so the hash is different.
Then use this hash to get the URL:
```python
cohort = 2023
course = 'dezoomcamp'
your_id = compute_certificate_id('never.give.up@gmail.com')
url = f"https://certificate.datatalks.club/{course}/{cohort}/{your_id}.pdf"
print(url)
```
Example: https://certificate.datatalks.club/dezoomcamp/2023/fe629854d45c559e9c10b3b8458ea392fdeb68a9.pdf
## Adding to LinkedIn
You can add your certificate to LinkedIn:
* Log in to your LinkedIn account, then go to your profile.
* On the right, in the "Add profile" section dropdown, choose "Background" and then select the drop-down triangle next to "Licenses & Certifications".
* In "Name", enter "Data Engineering Zoomcamp".
* In "Issuing Organization", enter "DataTalksClub".
* (Optional) In "Issue Date", enter the time when the certificate was created.
* (Optional) Select the checkbox "This certification does not expire".
* In "Credential ID", enter your certificate ID.
* In "Certification URL", enter the URL for your certificate.
[Adapted from here](https://support.edx.org/hc/en-us/articles/206501938-How-can-I-add-my-certificate-to-my-LinkedIn-profile-)
## Course Project
The goal of this project is to apply everything we learned
in this course and build an end-to-end data pipeline.
You will have two attempts to submit your project. If you don't have
time to submit your project by the end of attempt #1 (you started the
course late, you have vacation plans, life/work got in the way, etc.)
or you fail your first attempt,
then you will have a second chance to submit your project as attempt
#2.
There are only two attempts.
Remember that to pass the project, you must evaluate 3 peers. If you don't do that,
your project can't be considered complete.
To find the projects assigned to you, use the peer review assignments link
and find your hash in the first column. You will see three rows: you need to evaluate
each of these projects. For each project, you need to submit the form once,
so in total, you will make three submissions.
### Submitting
#### Project Attempt #1
Project:
* Form: TBA
* Deadline: TBA
Peer reviewing:
* Peer review assignments: TBA ("project-01" sheet)
* Form: TBA
* Deadline: TBA
Project feedback: TBA ("project-01" sheet)
#### Project Attempt #2
Project:
* Form: TBA
* Deadline: TBA
Peer reviewing:
* Peer review assignments: TBA ("project-02" sheet)
* Form: TBA
* Deadline: TBA
Project feedback: TBA ("project-02" sheet)
### Evaluation criteria
See [here](../../week_7_project/README.md)
### Misc
To get the hash for your project, use this function to hash your email:
```python
from hashlib import sha1
def compute_hash(email):
return sha1(email.lower().encode('utf-8')).hexdigest()
```
Or use [this website](http://www.sha1-online.com/).
## Week 6 Homework
In this homework, there are two sections: the first section focuses on theoretical questions related to Kafka
and streaming concepts, and the second section asks you to create a small streaming application using your preferred
programming language (Python or Java).
### Question 1:
**Please select the statements that are correct**
- Kafka Node is responsible for storing topics [x]
- Zookeeper is removed from the Kafka cluster starting from version 4.0 [x]
- Retention configuration ensures that messages do not get lost over a specific period of time [x]
- Group-Id ensures the messages are distributed to associated consumers [x]
### Question 2:
**Please select the Kafka concepts that support reliability and availability**
- Topic Replication [x]
- Topic Partitioning
- Consumer Group Id
- Ack All [x]
### Question 3:
**Please select the Kafka concepts that support scaling**
- Topic Replication
- Topic Partitioning [x]
- Consumer Group Id [x]
- Ack All
### Question 4:
**Please select the attributes that are good candidates for a partitioning key.
Consider the cardinality of the field you have selected and the scaling aspects of your application**
- payment_type [x]
- vendor_id [x]
- passenger_count
- total_amount
- tpep_pickup_datetime
- tpep_dropoff_datetime
### Question 5:
**Which configurations below should be provided for the Kafka Consumer but are not needed for the Kafka Producer?**
- Deserializer Configuration [x]
- Topics Subscription [x]
- Bootstrap Server
- Group-Id [x]
- Offset [x]
- Cluster Key and Cluster-Secret
### Question 6:
Please implement a streaming application for finding the popularity of PUlocationID across the green and fhv trip datasets.
Please use the datasets [fhv_tripdata_2019-01.csv.gz](https://github.com/DataTalksClub/nyc-tlc-data/releases/tag/fhv)
and [green_tripdata_2019-01.csv.gz](https://github.com/DataTalksClub/nyc-tlc-data/releases/tag/green)
PS: If you encounter memory-related issues, you can use a smaller portion of these two datasets;
it is not necessary to find the exact numbers for this question.
Your code should include the following (a producer sketch follows this list):
1. A producer that reads the csv files and publishes rides to the corresponding Kafka topics (such as rides_green, rides_fhv)
2. A PySpark streaming application that reads the two Kafka topics,
writes both of them to the topic rides_all, and applies aggregations to find the most popular pickup location.
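To get you started on part 1, here is a minimal producer sketch using the `confluent_kafka` client listed in the course requirements. The broker address, file path and the `PULocationID` key column are assumptions for local testing (note the fhv file spells the column `PUlocationID`); for Confluent Cloud, use your `client.properties` settings instead:
```python
import csv
import json

from confluent_kafka import Producer

# Assumption: a local broker; for Confluent Cloud, pass your client.properties
# settings (bootstrap servers, API key/secret) in this config dict instead.
producer = Producer({"bootstrap.servers": "localhost:9092"})


def publish_rides(csv_path: str, topic: str, key_column: str = "PULocationID") -> None:
    """Read a trip CSV and publish one message per ride, keyed by pickup location."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            producer.produce(topic, key=row[key_column], value=json.dumps(row))
            producer.poll(0)  # serve delivery callbacks and free queue space
    producer.flush()  # block until all buffered messages are delivered


publish_rides("resources/green_tripdata/green_tripdata_2019-01.csv", "rides_green")
```
Keying the messages by pickup location means all rides for the same location land in the same partition, which keeps the downstream aggregation consistent.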
## Submitting the solutions
* Form for submitting: https://forms.gle/rK7268U92mHJBpmW7
* You can submit your homework multiple times. In this case, only the last submission will be used.
Deadline: 13 March (Monday), 22:00 CET
## Solution
We will publish the solution here after the deadline.
For Question 6, ensure that you:
1) Download fhv_tripdata_2019-01.csv and green_tripdata_2019-01.csv under resources/fhv_tripdata
and resources/green_tripdata respectively. PS: you need to unzip the compressed files.
2) Update the client.properties settings using your Confluent Cloud API keys and cluster.
3) Create the topics (all_rides, fhv_taxi_rides, green_taxi_rides) in the Confluent Cloud UI.
4) Run the producers for the two datasets:
```
python3 producer_confluent.py --type green
python3 producer_confluent.py --type fhv
```
5) Run the PySpark streaming application:
```
./spark-submit.sh streaming_confluent.py
```
# Custom
COMPOSE_PROJECT_NAME=dtc-de
GOOGLE_APPLICATION_CREDENTIALS=/.google/credentials/google_credentials.json
AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT=google-cloud-platform://?extra__google_cloud_platform__key_path=/.google/credentials/google_credentials.json
# AIRFLOW_UID=
GCP_PROJECT_ID=
GCP_GCS_BUCKET=
# Postgres
POSTGRES_USER=airflow
POSTGRES_PASSWORD=airflow
POSTGRES_DB=airflow
# Airflow
AIRFLOW__CORE__EXECUTOR=LocalExecutor
AIRFLOW__SCHEDULER__SCHEDULER_HEARTBEAT_SEC=10
AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
AIRFLOW_CONN_METADATA_DB=postgres+psycopg2://airflow:airflow@postgres:5432/airflow
AIRFLOW_VAR__METADATA_DB_SCHEMA=airflow
_AIRFLOW_WWW_USER_CREATE=True
_AIRFLOW_WWW_USER_USERNAME=${_AIRFLOW_WWW_USER_USERNAME:-airflow}
_AIRFLOW_WWW_USER_PASSWORD=${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION=True
AIRFLOW__CORE__LOAD_EXAMPLES=False
## Setup (Official)
### Pre-Reqs
1. For the sake of standardization across this workshop's config,
rename your gcp-service-accounts-credentials file to `google_credentials.json` & store it in your `$HOME` directory
``` bash
cd ~ && mkdir -p ~/.google/credentials/
mv <path/to/your/service-account-authkeys>.json ~/.google/credentials/google_credentials.json
```
2. You may need to upgrade your docker-compose version to v2.x+, and set the memory for your Docker Engine to a minimum of 5GB
(ideally 8GB). If enough memory is not allocated, the airflow-webserver might keep restarting.
3. Python version: 3.7+
### Airflow Setup
1. Create a new sub-directory called `airflow` in your `project` dir (such as the one we're currently in)
2. **Set the Airflow user**:
On Linux, the quick-start needs to know your host user id and needs the group id set to 0.
Otherwise the files created in `dags`, `logs` and `plugins` will be owned by root.
Make sure to configure them for docker-compose:
```bash
mkdir -p ./dags ./logs ./plugins
echo -e "AIRFLOW_UID=$(id -u)" > .env
```
On Windows you will probably also need it. If you use MINGW/GitBash, execute the same command.
To get rid of the warning ("AIRFLOW_UID is not set"), you can create `.env` file with
this content:
```
AIRFLOW_UID=50000
```
3. **Import the official docker setup file** from the latest Airflow version:
```shell
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/stable/docker-compose.yaml'
```
4. It could be overwhelming to see so many services here.
But this is only a quick-start template, and as you proceed you'll figure out which unused services can be removed.
E.g., [here's](docker-compose-nofrills.yml) a no-frills version of that template.
5. **Docker Build**:
When you want to run Airflow locally, you might want to use an extended image
containing some additional dependencies - for example, you might add new Python packages
or upgrade airflow providers to a later version.
Create a `Dockerfile` pointing to the Airflow version you've just downloaded,
such as `apache/airflow:2.2.3`, as the base image,
and customize this `Dockerfile` by (a sketch follows this list):
* Adding your custom packages to be installed. The one we'll need the most is `gcloud` to connect with the GCS bucket/Data Lake.
* Also integrating `requirements.txt` to install libraries via `pip install`
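As a rough illustration, here is a minimal sketch of such an extended image. The SDK version, install method and package choices are assumptions made for this example; the linked [Dockerfile](./Dockerfile) in step 7 is the authoritative version:
```dockerfile
FROM apache/airflow:2.2.3

# System packages plus the Google Cloud SDK, so tasks can reach the GCS bucket/Data Lake
# (SDK version and install method are assumptions; see the linked Dockerfile for the real setup)
USER root
RUN apt-get update -qq && apt-get install -y -qq curl \
    && curl -sSL "https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-367.0.0-linux-x86_64.tar.gz" \
       | tar -xz -C /opt
ENV PATH="/opt/google-cloud-sdk/bin:${PATH}"

# Python libraries used by the DAGs, installed as the airflow user
COPY requirements.txt .
USER airflow
RUN pip install --no-cache-dir -r requirements.txt
```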
6. **Docker Compose**:
Back in your `docker-compose.yaml`:
* In `x-airflow-common`:
* Remove the `image` tag, to replace it with your `build` from your Dockerfile, as shown
* Mount your `google_credentials` in `volumes` section as read-only
* Set environment variables: `GCP_PROJECT_ID`, `GCP_GCS_BUCKET`, `GOOGLE_APPLICATION_CREDENTIALS` & `AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT`, as per your config.
* Change `AIRFLOW__CORE__LOAD_EXAMPLES` to `false` (optional)
7. Here's how the final versions of your [Dockerfile](./Dockerfile) and [docker-compose.yml](./docker-compose.yaml) should look.
## Problems
### `File /.google/credentials/google_credentials.json was not found`
First, make sure you have your credentials in `$HOME/.google/credentials`.
Maybe you missed that step and didn't copy your JSON with the credentials there?
Also, make sure the file name is `google_credentials.json`.
Second, check that docker-compose can correctly map this directory to airflow worker.
Execute `docker ps` to see the list of docker containers running on your host machine and find the ID of the airflow worker.
Then execute `bash` on this container:
```bash
docker exec -it <container-ID> bash
```
Now check if the file with credentials is actually there:
```bash
ls -lh /.google/credentials/
```
If it's empty, docker-compose couldn't map the folder with credentials.
In this case, try changing it to the absolute path to this folder:
```yaml
volumes:
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./plugins:/opt/airflow/plugins
# here: ----------------------------
- c:/Users/alexe/.google/credentials/:/.google/credentials:ro
# -----------------------------------
```
## Setup (No-frills)
### Pre-Reqs
1. For the sake of standardization across this workshop's config,
rename your gcp-service-accounts-credentials file to `google_credentials.json` & store it in your `$HOME` directory
``` bash
cd ~ && mkdir -p ~/.google/credentials/
mv <path/to/your/service-account-authkeys>.json ~/.google/credentials/google_credentials.json
```
2. You may need to upgrade your docker-compose version to v2.x+, and set the memory for your Docker Engine to a minimum of 4GB
(ideally 8GB). If enough memory is not allocated, the airflow-webserver might keep restarting.
3. Python version: 3.7+
### Airflow Setup
1. Create a new sub-directory called `airflow` in your `project` dir (such as the one we're currently in)
2. **Set the Airflow user**:
On Linux, the quick-start needs to know your host user id and needs the group id set to 0.
Otherwise the files created in `dags`, `logs` and `plugins` will be owned by root.
Make sure to configure them for docker-compose:
```bash
mkdir -p ./dags ./logs ./plugins
echo -e "AIRFLOW_UID=$(id -u)" >> .env
```
On Windows you will probably also need it. If you use MINGW/GitBash, execute the same command.
To get rid of the warning ("AIRFLOW_UID is not set"), you can create `.env` file with
this content:
```
AIRFLOW_UID=50000
```
3. **Docker Build**:
When you want to run Airflow locally, you might want to use an extended image
containing some additional dependencies - for example, you might add new Python packages
or upgrade airflow providers to a later version.
Create a `Dockerfile` pointing to the latest Airflow version, such as `apache/airflow:2.2.3`, as the base image,
and customize this `Dockerfile` by:
* Adding your custom packages to be installed. The one we'll need the most is `gcloud` to connect with the GCS bucket (Data Lake).
* Also integrating `requirements.txt` to install libraries via `pip install`
4. Copy [docker-compose-nofrills.yml](docker-compose-nofrills.yml), [.env_example](.env_example) & [entrypoint.sh](scripts/entrypoint.sh) from this repo.
The changes from the official setup are:
* Removal of the `redis` queue, `worker`, `triggerer`, `flower` & `airflow-init` services,
and a switch from `CeleryExecutor` (multi-node) mode to `LocalExecutor` (single-node) mode
* Inclusion of `.env` for better parametrization & flexibility
* Inclusion of a simple `entrypoint.sh` in the `webserver` container, responsible for initializing the database and creating the login user (admin)
* An updated `Dockerfile` to grant permissions for executing `scripts/entrypoint.sh`
5. `.env`:
* Rebuild your `.env` file by renaming `.env_example` to `.env` (but make sure your `AIRFLOW_UID` remains):
```shell
mv .env_example .env
```
* Set environment variables `AIRFLOW_UID`, `GCP_PROJECT_ID` & `GCP_GCS_BUCKET`, as per your config.
* Optionally, if your `google_credentials.json` is stored somewhere else, such as a path like `$HOME/.gc`,
modify the env-vars (`GOOGLE_APPLICATION_CREDENTIALS`, `AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT`) and the `volumes` path in `docker-compose-nofrills.yml`
6. Here's how the final versions of your [Dockerfile](./Dockerfile) and [docker-compose-nofrills](./docker-compose-nofrills.yml) should look.
## Problems
### No-frills setup does not work for me (WSL/Windows user)
If you are running Docker in Windows/WSL/WSL2 and you have encountered some `ModuleNotFoundError` or low performance issues,
take a look at this [Airflow & WSL2 gist](https://gist.github.com/nervuzz/d1afe81116cbfa3c834634ebce7f11c5) focused entirely on troubleshooting possible problems.
### `File /.google/credentials/google_credentials.json was not found`
First, make sure you have your credentials in `$HOME/.google/credentials`.
Maybe you missed that step and didn't copy your JSON with the credentials there?
Also, make sure the file name is `google_credentials.json`.
Second, check that docker-compose can correctly map this directory to airflow worker.
Execute `docker ps` to see the list of docker containers running on your host machine and find the ID of the airflow worker.
Then execute `bash` on this container:
```bash
docker exec -it <container-ID> bash
```
Now check if the file with credentials is actually there:
```bash
ls -lh /.google/credentials/
```
If it's empty, docker-compose couldn't map the folder with credentials.
In this case, try changing it to the absolute path to this folder:
```yaml
volumes:
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./plugins:/opt/airflow/plugins
# here: ----------------------------
- c:/Users/alexe/.google/credentials/:/.google/credentials:ro
# -----------------------------------
```
version: '3'
services:
postgres:
image: postgres:13
env_file:
- .env
volumes:
- postgres-db-volume:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "airflow"]
interval: 5s
retries: 5
restart: always
scheduler:
build: .
command: scheduler
restart: on-failure
depends_on:
- postgres
env_file:
- .env
volumes:
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./plugins:/opt/airflow/plugins
- ./scripts:/opt/airflow/scripts
- ~/.google/credentials/:/.google/credentials:ro
webserver:
build: .
entrypoint: ./scripts/entrypoint.sh
restart: on-failure
depends_on:
- postgres
- scheduler
env_file:
- .env
volumes:
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./plugins:/opt/airflow/plugins
- ~/.google/credentials/:/.google/credentials:ro
- ./scripts:/opt/airflow/scripts
user: "${AIRFLOW_UID:-50000}:0"
ports:
- "8080:8080"
healthcheck:
test: [ "CMD-SHELL", "[ -f /home/airflow/airflow-webserver.pid ]" ]
interval: 30s
timeout: 30s
retries: 3
volumes:
  postgres-db-volume:
version: '3'
services:
dbt-bq-dtc:
build:
context: .
target: dbt-bigquery
image: dbt/bigquery
volumes:
- .:/usr/app
- ~/.dbt/:/root/.dbt/
- ~/.google/credentials/google_credentials.json:/.google/credentials/google_credentials.json
    network_mode: host
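# Example usage (a sketch, assuming this file is saved as docker-compose.yaml
# next to the generic dbt Dockerfile above and your dbt project; because the
# image's entrypoint is `dbt`, arguments after the service name become the
# dbt subcommand):
#   docker compose build
#   docker compose run --rm dbt-bq-dtc debug
#   docker compose run --rm dbt-bq-dtc run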
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Basic Airflow cluster configuration for CeleryExecutor with Redis and PostgreSQL.
#
# WARNING: This configuration is for local development. Do not use it in a production deployment.
#
# This configuration supports basic configuration using environment variables or an .env file
# The following variables are supported:
#
# AIRFLOW_IMAGE_NAME - Docker image name used to run Airflow.
# Default: apache/airflow:2.2.3
# AIRFLOW_UID - User ID in Airflow containers
# Default: 50000
# Those configurations are useful mostly in case of standalone testing/running Airflow in test/try-out mode
#
# _AIRFLOW_WWW_USER_USERNAME - Username for the administrator account (if requested).
# Default: airflow
# _AIRFLOW_WWW_USER_PASSWORD - Password for the administrator account (if requested).
# Default: airflow
# _PIP_ADDITIONAL_REQUIREMENTS - Additional PIP requirements to add when starting all containers.
# Default: ''
#
# Feel free to modify this file to suit your needs.
---
version: '3'
x-airflow-common:
&airflow-common
# In order to add custom dependencies or upgrade provider packages you can use your extended image.
# Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
# and uncomment the "build" line below, then run `docker-compose build` to build the images.
build:
context: .
dockerfile: ./Dockerfile
environment:
&airflow-common-env
AIRFLOW__CORE__EXECUTOR: LocalExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
AIRFLOW__CORE__FERNET_KEY: ''
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
_PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
GOOGLE_APPLICATION_CREDENTIALS: /.google/credentials/google_credentials.json
AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT: 'google-cloud-platform://?extra__google_cloud_platform__key_path=/.google/credentials/google_credentials.json'
GCP_PROJECT_ID: 'abc'
GCP_GCS_BUCKET: "abc"
volumes:
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./plugins:/opt/airflow/plugins
- ~/.google/credentials/:/.google/credentials:ro
user: "${AIRFLOW_UID:-50000}:0"
depends_on:
&airflow-common-depends-on
postgres:
condition: service_healthy
services:
postgres:
image: postgres:13
environment:
POSTGRES_USER: airflow
POSTGRES_PASSWORD: airflow
POSTGRES_DB: airflow
volumes:
- postgres-db-volume:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "airflow"]
interval: 5s
retries: 5
restart: always
airflow-webserver:
<<: *airflow-common
command: webserver
ports:
- 8080:8080
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
interval: 10s
timeout: 10s
retries: 5
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
airflow-scheduler:
<<: *airflow-common
command: scheduler
healthcheck:
test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
interval: 10s
timeout: 10s
retries: 5
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
airflow-init:
<<: *airflow-common
entrypoint: /bin/bash
# yamllint disable rule:line-length
command:
- -c
- |
function ver() {
printf "%04d%04d%04d%04d" $${1//./ }
}
airflow_version=$$(gosu airflow airflow version)
airflow_version_comparable=$$(ver $${airflow_version})
min_airflow_version=2.2.0
min_airflow_version_comparable=$$(ver $${min_airflow_version})
if (( airflow_version_comparable < min_airflow_version_comparable )); then
echo
echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
echo
exit 1
fi
if [[ -z "${AIRFLOW_UID}" ]]; then
echo
echo -e "\033[1;33mWARNING!!!: AIRFLOW_UID not set!\e[0m"
echo "If you are on Linux, you SHOULD follow the instructions below to set "
echo "AIRFLOW_UID environment variable, otherwise files will be owned by root."
echo "For other operating systems you can get rid of the warning with manually created .env file:"
echo " See: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#setting-the-right-airflow-user"
echo
fi
one_meg=1048576
mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
disk_available=$$(df / | tail -1 | awk '{print $$4}')
warning_resources="false"
if (( mem_available < 4000 )) ; then
echo
echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
echo
warning_resources="true"
fi
if (( cpus_available < 2 )); then
echo
echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
echo "At least 2 CPUs recommended. You have $${cpus_available}"
echo
warning_resources="true"
fi
if (( disk_available < one_meg * 10 )); then
echo
echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
echo
warning_resources="true"
fi
if [[ $${warning_resources} == "true" ]]; then
echo
echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
echo "Please follow the instructions to increase amount of resources available:"
echo " https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#before-you-begin"
echo
fi
mkdir -p /sources/logs /sources/dags /sources/plugins
chown -R "${AIRFLOW_UID}:0" /sources/{logs,dags,plugins}
exec /entrypoint airflow version
# yamllint enable rule:line-length
environment:
<<: *airflow-common-env
_AIRFLOW_DB_UPGRADE: 'true'
_AIRFLOW_WWW_USER_CREATE: 'true'
_AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
_AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
user: "0:0"
volumes:
- .:/sources
airflow-cli:
<<: *airflow-common
profiles:
- debug
environment:
<<: *airflow-common-env
CONNECTION_CHECK_MAX_COUNT: "0"
# Workaround for entrypoint issue. See: https://github.com/apache/airflow/issues/16252
command:
- bash
- -c
- airflow
volumes:
postgres-db-volume:
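# Typical usage for this file (a sketch): build the extended image, run the
# one-off airflow-init service (migrates the metadata DB and creates the admin
# user), then start the remaining services in the background:
#   docker-compose build
#   docker-compose up airflow-init
#   docker-compose up -d
# The web UI is then available at http://localhost:8080 (default login: airflow / airflow).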
## Airflow concepts
### Airflow architecture
![](arch-diag-airflow.png)
Ref: https://airflow.apache.org/docs/apache-airflow/stable/concepts/overview.html
* **Web server**:
GUI to inspect, trigger and debug the behaviour of DAGs and tasks.
Available at http://localhost:8080.
* **Scheduler**:
Responsible for scheduling jobs. Handles both triggered and scheduled workflows, submits tasks to the executor to run, monitors all tasks and DAGs, and
then triggers task instances once their dependencies are complete.
* **Worker**:
This component executes the tasks given by the scheduler.
* **Metadata database (postgres)**:
Backend to the Airflow environment. Used by the scheduler, executor and webserver to store state.
* **Other components** (seen in docker-compose services):
* `redis`: Message broker that forwards messages from scheduler to worker.
* `flower`: The flower app for monitoring the environment. It is available at http://localhost:5555.
* `airflow-init`: initialization service (customized as per this design)
All these services allow you to run Airflow with CeleryExecutor.
For more information, see [Architecture Overview](https://airflow.apache.org/docs/apache-airflow/stable/concepts/overview.html).
### Project Structure:
* `./dags` - `DAG_FOLDER` for DAG files (use `./dags_local` for the local ingestion DAG)
* `./logs` - contains logs from task execution and scheduler.
* `./plugins` - for custom plugins
### Workflow components
* `DAG`: Directed acyclic graph, specifies the dependencies between a set of tasks with explicit execution order, and has a beginning as well as an end. (Hence, “acyclic”)
* `DAG Structure`: DAG Definition, Tasks (e.g. Operators), Task Dependencies (control flow: `>>` or `<<`); a minimal example follows this list
* `Task`: a defined unit of work (aka, operators in Airflow). The Tasks themselves describe what to do, be it fetching data, running analysis, triggering other systems, or more.
* Common Types: Operators (used in this workshop), Sensors, TaskFlow decorators
* Sub-classes of Airflow's BaseOperator
* `DAG Run`: individual execution/run of a DAG
* scheduled or triggered
* `Task Instance`: an individual run of a single task. Task instances also have an indicative state, which could be “running”, “success”, “failed”, “skipped”, “up for retry”, etc.
* Ideally, a task should flow from `none`, to `scheduled`, to `queued`, to `running`, and finally to `success`.
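To make these concepts concrete, here is a minimal sketch of a DAG with two dependent tasks; the DAG id, schedule and bash commands are placeholders invented for this example:
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# DAG definition: a daily pipeline with an explicit start date
with DAG(
    dag_id="example_ingestion",  # placeholder name
    schedule_interval="@daily",
    start_date=datetime(2022, 1, 1),
    catchup=False,
) as dag:
    # Tasks: each operator instance is one unit of work
    download = BashOperator(task_id="download", bash_command="echo 'downloading...'")
    upload = BashOperator(task_id="upload", bash_command="echo 'uploading...'")

    # Task dependency (control flow): download must succeed before upload runs
    download >> upload
```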
### References
https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html
https://airflow.apache.org/docs/apache-airflow/stable/concepts/tasks.html
���Taǎ�Ν;GC�mݺ���$%�@u���F}�*�Җ-[F}'�0d:0u�S� �J@��XW��� �@��Us�:�*�� 0t:P�qa\ : C'�U�Vs��:05To`�@Մr f��T/U[J C'��K0W���Ё� � ̒
���Sfǎ��Ν;�W�Q����B� VB@�)�p.�O��[�
� ,��Sh�Ͷm����t�6mj��G.��Y���r �0������W����%u�X7����WJ�UG`�t�2)�-�&���G ���SL�f *"� @E��S&��K�۷oo_�<��^�Z��%�00�F�y��M���E �S�Sf��ΦӊH}<����0e�T6����V>+0�O ��~��\U$ �t��Ɩj�O�# "�� �� �K@ ��� P����Y����wy�Q�J�$0@ :�0�[�ɷ4���R�Sf����GwZJg�iE�>`@�0e&�g|�'�y�Q��g�F �K�!_�J���H �K@��Kil:���G �0C�i�� � P *"� @Et ��� � �": TD@ ��� P *"� @Et ��� � �": TD@ ��� P *"� @Et`bn����o�}4ei_�җ�[��}�k�g>������� ��{����Λ��<����On�?���+�h������|�w����{�����x�h�xo�ۛ�O?�ټys�W��y�����m~�w����0� �%�����ͻ����5�yM�G=��ꪫ�_��_\���ꫯn^��6���g�/~z��^�}�kN;���5�I��E@���\��}n47 �I@��w�w7�~���R�z�s�9�m�������h��|�Cj���6/~<�)�����є�$��#i^���7�|�+��?����N8�~�[������ls�7���V:P�TyI yBv_J�Se%^�4�\sM�ߵ}��fӦM��_��јe�)a�v �f:P�g=�Y�W�����_��hL��H��=O:�v�E/zQ�o}�ٵkW;\�&�g>�! �^:P���G�f�;v4��=���������>��w�G���:�� j&�ո�;��c�9�}�������y��t�gL5�O|��,�p�-y/r�i����[GS �N:P��$�j*�{����~���kn �6���7����`����{4����R�܈
�Ё*�����_��/�i��;w6O}�S٥;�3�����ɟl>��O5���Fc `: �����G?ڼ�
oh���777nl^�6���������EП���5]tQs���>��Kƥ-�'>�1 P'��O<q�������/��/�SO=����+�G?���<�zK�.�裏n��gP��MozSۜ�[���e/{Y�|�{ƽ�կn/ �f��7���{�m۶��i��X����<����'��ڥ�y�y��=�����AC)�K+/O~���HS�ybh�?��on����Z��y�>�������lb?8?�IO����2O7�GBƧ�LJ�Ӿ�4�s *"� @Et ��� � �": TD@V䮻�jn����.Ë9��x$? �8X�n��9��s��N;�}"h�x��M�YhypѸ�!�4Ё��{���K/��9ꨣ��/��
����h��}|�? W_}����ЁCr��G7'�tR�/��6 '��/�9�a2��k���������P_�u��+��ϓ�T��o?�zӥ�{1n�R%'�}Y.��~���1 ��t`դ�|���Tu��ᄏ������{�����:�v�a���$��K.i����ڠ\��2E�w��d8����+���2���>eZ�����̛_ `����:��ڀ�P x$''�F^KxN����'T'����SrBz�ɼ)�N�N�>|�O)x�/ˤ�62�_B�}.�T���t�M�k��SN9�9���Gc `� ���:�F}��߄�'%�%$����o�=��v8�S>�l�ɴ�2�F�eZW����5�������\$�g> XK:��J�y7l����(UMRr���n���2�`�}�s�=c�{��w���ݧ�C ��\0� �5XU ��G�-�^L �)�N��~W����{��%�we\�-����6�K)? �5X5)m>� ���P��J�OI�<������dž�HO)|_�[�g�qF�~JIz." `� ��!K�M����@]n�\������R��$���G����ϮnX�L���r."2O֗�c�:�Bu�`5 ��A�D�SW<� � ����X��q��9昶��/���J�/%�)MO��63-�i�%m����LZg����,���z��� �Yo�O�zu���
����K;�۶mk��o��J���r��6� =.�'ܦ:IJ�S��ؼ�&��λ�|i�%�tK���Rg<�l#������\Pt�>���M.����� �>:L���$�=��)q�W�Y+>/ Tq�ɯ���}�ݷ�:� p(t����k��K����� kE@�I0O��/�X�- L�� � �": TD@ ���0��f6 0,:L��7�����6;v�
C ���&KP߹s�� "�Ôںu�� $��KH߲eKۯ>: ��Snnnn>� �O@��`8t�M�6�� �i&� @Et 7����a <� �A@�� # �~:��t�a�<� ����mt �N:�� �G@��� # �.:�~��P�
������vѣ���,+-ۤ���t�V�.k���~^i6��#��8��G}M['���Y��P����*�� �0#�J5�Y� ��`^n�]����={ڿ[���%��o�ZA]@@@��
� ���2d�`��\��(����W+�� �0c��u�Bem*1_�_
��a�P��: :̠~��ͣ%,w�$��fP��aFu�`L{���E�z����: :̰~��Ɛ>.��z��qA}�[ ha �[5c���<j�����쟀�����7��V^�!��u�y� ��/�Z�o�+��Yj�]���n��Ё��ɥJ}�Z�)�wu�w�{: :�����_�a�%��_)�~�U�t t`Q � �%P�Z�`��yLk��B��{�#� ��2.�wK�����'y1���"y��6m�f���ȸ��8>.�5�w�/|J)z� �I@J?Xv�zN�pߞ={Ɩ"Lȟf�z酀0�t��-�WbVJ���A@�M:��JHO�xtC�� ���t�̻�7R`� � �IHOiz�Or`� � P�G�^ �
� P *"� @Et ��� � �": TD@ ��� P *"� @Et ��� � �": TD@ ��� P *"� @Et ��� � �": TD@ ��� P����J"F)\��3 IEND�B`� | 809.105263 | 4,002 | 0.262491 |