Creating Tables and a Stored Procedure When Building a MySQL Container

  1. Required files
  2. Contents of init.sql
  3. Contents of .env
  4. Contents of the Dockerfile
  5. Creating the container
  6. Checking in MySQL

1. Required files

We need three files: init.sql, .env, and a Dockerfile.

Place all three in the same directory.

ls -a
.                       ..                      .env                    Dockerfile              init.sql

2. Contents of init.sql

-- init.sql

CREATE DATABASE IF NOT EXISTS buzzing;
USE buzzing;

CREATE TABLE IF NOT EXISTS `yt_mst_cnl` (
  `channel_id` varchar(40) NOT NULL,
  `channel_name` tinytext,
  `description` text,
  `thumbnail` text,
  `uploads_list` varchar(40) DEFAULT NULL,
  `published_at` date DEFAULT NULL,
  `data_update_date` date DEFAULT NULL,
  PRIMARY KEY (`channel_id`),
  KEY `idx_published_at` (`published_at`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

CREATE TABLE IF NOT EXISTS `yt_mst_vid` (
  `video_id` varchar(20) NOT NULL,
  `video_name` text,
  `description` text,
  `thumbnail` text,
  `channel_id` varchar(40) DEFAULT NULL,
  `published_at` varchar(8) DEFAULT NULL,
  `data_update_date` varchar(8) DEFAULT NULL,
  PRIMARY KEY (`video_id`),
  KEY `fk_channel` (`channel_id`),
  KEY `idx_published_at` (`published_at`),
  CONSTRAINT `fk_channel` FOREIGN KEY (`channel_id`) REFERENCES `yt_mst_cnl` (`channel_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

CREATE TABLE IF NOT EXISTS `yt_pfm_cnl` (
  `channel_id` varchar(40) DEFAULT NULL,
  `subscriber_count` bigint DEFAULT NULL,
  `hidden_subscriber_count` varchar(1) DEFAULT NULL,
  `view_count` bigint DEFAULT NULL,
  `video_count` int DEFAULT NULL,
  `data_date` varchar(8) DEFAULT NULL,
  KEY `fk_channel_pfm` (`channel_id`),
  CONSTRAINT `fk_channel_pfm` FOREIGN KEY (`channel_id`) REFERENCES `yt_mst_cnl` (`channel_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

CREATE TABLE IF NOT EXISTS `yt_pfm_vid` (
  `video_id` varchar(20) DEFAULT NULL,
  `view_count` bigint DEFAULT NULL,
  `like_count` int DEFAULT NULL,
  `dislike_count` int DEFAULT NULL,
  `favorite_count` int DEFAULT NULL,
  `comment_count` int DEFAULT NULL,
  `most_used_words` text,
  `data_date` varchar(8) DEFAULT NULL,
  KEY `fk_video_pfm` (`video_id`),
  CONSTRAINT `fk_video_pfm` FOREIGN KEY (`video_id`) REFERENCES `yt_mst_vid` (`video_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

CREATE TABLE IF NOT EXISTS `yt_analysis_07` (
  `channel_id` varchar(40) DEFAULT NULL,
  `channel_name` tinytext,
  `view_count` bigint DEFAULT NULL,
  `like_count` int DEFAULT NULL,
  `dislike_count` int DEFAULT NULL,
  `favorite_count` int DEFAULT NULL,
  `comment_count` int DEFAULT NULL,
  `video_count` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

DROP PROCEDURE IF EXISTS `buzzing`.`yt_analysis_07`;
DELIMITER //
CREATE PROCEDURE `buzzing`.`yt_analysis_07`(IN published_after VARCHAR(8))
BEGIN
    -- truncate the yt_analysis_07 table
    TRUNCATE TABLE yt_analysis_07;

    -- insert data into the yt_analysis_07 table
    INSERT INTO yt_analysis_07
    SELECT 
        channel_id,
        channel_name,
        view_count,
        like_count,
        dislike_count,
        favorite_count,
        comment_count,
        video_count
    FROM (
        SELECT 
            B.channel_id AS channel_id, 
            MAX(C.channel_name) AS channel_name, 
            SUM(A.view_count) AS view_count, 
            SUM(A.like_count) AS like_count, 
            SUM(A.dislike_count) AS dislike_count, 
            SUM(A.favorite_count) AS favorite_count, 
            SUM(A.comment_count) AS comment_count, 
            COUNT(*) AS video_count
        FROM yt_pfm_vid A
        LEFT JOIN yt_mst_vid B ON A.video_id = B.video_id
        LEFT JOIN (
            SELECT channel_id, MAX(channel_name) AS channel_name FROM yt_mst_cnl GROUP BY channel_id
        ) C ON B.channel_id = C.channel_id
        WHERE B.published_at >= published_after
        GROUP BY channel_id
    ) T1
    ORDER BY view_count DESC;
    COMMIT;
END //
DELIMITER ;

3. Contents of .env

MYSQL_ROOT_PASSWORD=rootpassword
MYSQL_USER=admin
MYSQL_PASSWORD=password
MYSQL_DATABASE=buzzing

4. Contents of the Dockerfile

# Dockerfile

FROM mysql

ADD init.sql /docker-entrypoint-initdb.d

Scripts placed in /docker-entrypoint-initdb.d run automatically the first time the container initializes an empty data directory, so init.sql is executed on first startup.

5. Creating the container

% docker build -t docker_mysql:1.0 .
% docker run --env-file .env --name docker_mysql -p 13306:3306 -it -d docker_mysql:1.0
% docker exec -it docker_mysql bash                               
bash-4.4# 

6. Checking in MySQL

bash-4.4# mysql -u admin -p
Enter password:
mysql> 

Checking the tables

mysql> use buzzing;
mysql> show tables;
+-------------------+
| Tables_in_buzzing |
+-------------------+
| yt_analysis_07    |
| yt_mst_cnl        |
| yt_mst_vid        |
| yt_pfm_cnl        |
| yt_pfm_vid        |
+-------------------+
5 rows in set (0.00 sec)

Checking the procedure

mysql> call buzzing.yt_analysis_07('20230101');
Query OK, 0 rows affected (0.03 sec)

Incidentally, I could also connect through port 13306 with DBeaver.
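
As a quick check from code as well, a minimal sketch like the following should work, assuming the mysqlclient package is installed on the host (this is my addition; the credentials come from the .env above):

import MySQLdb

# Connect to the container through the published port (13306 -> 3306)
conn = MySQLdb.connect(
    host='127.0.0.1',
    port=13306,
    user='admin',       # MYSQL_USER in .env
    passwd='password',  # MYSQL_PASSWORD in .env
    db='buzzing',
)
cur = conn.cursor()

# Run the stored procedure created by init.sql, then read the result table back
cur.execute("CALL yt_analysis_07('20230101')")
conn.commit()
cur.execute('SELECT COUNT(*) FROM yt_analysis_07')
print(cur.fetchone())
conn.close()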

Running Apache Airflow on MySQL with Docker Compose

The docker-compose.yaml available from the official Apache Airflow site uses PostgreSQL for the metadata database, so I modified it to run on MySQL instead.

This is my memo from doing that.

  1. Get the YAML file from the official site
  2. Comment out the unneeded parts
  3. Change the PostgreSQL parts to MySQL
  4. Delete the commented-out parts

1. Get the YAML file from the official site

Download the YAML file from the link on the official site.

% mkdir docker_airflow
% cd docker_airflow
% mkdir -p ./dags ./logs ./plugins ./config

% curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.6.1/docker-compose.yaml'

Here is what it contains (as of May 2023):

version: '3.8'
x-airflow-common:
  &airflow-common
  # In order to add custom dependencies or upgrade provider packages you can use your extended image.
  # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
  # and uncomment the "build" line below, Then run `docker-compose build` to build the images.
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.6.1}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    # For backward compatibility, with Airflow <2.3
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth,airflow.api.auth.backend.session'
    # yamllint disable rule:line-length
    # Use simple http server on scheduler for health checks
    # See https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/check-health.html#scheduler-health-check-server
    # yamllint enable rule:line-length
    AIRFLOW__SCHEDULER__ENABLE_HEALTH_CHECK: 'true'
    # WARNING: Use _PIP_ADDITIONAL_REQUIREMENTS option ONLY for a quick checks
    # for other purpose (development, test and especially production usage) build/extend Airflow image.
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ${AIRFLOW_PROJ_DIR:-.}/dags:/opt/airflow/dags
    - ${AIRFLOW_PROJ_DIR:-.}/logs:/opt/airflow/logs
    - ${AIRFLOW_PROJ_DIR:-.}/config:/opt/airflow/config
    - ${AIRFLOW_PROJ_DIR:-.}/plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:0"
  depends_on:
    &airflow-common-depends-on
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 10s
      retries: 5
      start_period: 5s
    restart: always

  redis:
    image: redis:latest
    expose:
      - 6379
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 30s
      retries: 50
      start_period: 30s
    restart: always

  airflow-webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8974/health"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-worker:
    <<: *airflow-common
    command: celery worker
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    environment:
      <<: *airflow-common-env
      # Required to handle warm shutdown of the celery workers properly
      # See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation
      DUMB_INIT_SETSID: "0"
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-triggerer:
    <<: *airflow-common
    command: triggerer
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"']
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-init:
    <<: *airflow-common
    entrypoint: /bin/bash
    # yamllint disable rule:line-length
    command:
      - -c
      - |
        function ver() {
          printf "%04d%04d%04d%04d" $${1//./ }
        }
        airflow_version=$$(AIRFLOW__LOGGING__LOGGING_LEVEL=INFO && gosu airflow airflow version)
        airflow_version_comparable=$$(ver $${airflow_version})
        min_airflow_version=2.2.0
        min_airflow_version_comparable=$$(ver $${min_airflow_version})
        if (( airflow_version_comparable < min_airflow_version_comparable )); then
          echo
          echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
          echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
          echo
          exit 1
        fi
        if [[ -z "${AIRFLOW_UID}" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: AIRFLOW_UID not set!\e[0m"
          echo "If you are on Linux, you SHOULD follow the instructions below to set "
          echo "AIRFLOW_UID environment variable, otherwise files will be owned by root."
          echo "For other operating systems you can get rid of the warning with manually created .env file:"
          echo "    See: https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#setting-the-right-airflow-user"
          echo
        fi
        one_meg=1048576
        mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
        cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
        disk_available=$$(df / | tail -1 | awk '{print $$4}')
        warning_resources="false"
        if (( mem_available < 4000 )) ; then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
          echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
          echo
          warning_resources="true"
        fi
        if (( cpus_available < 2 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
          echo "At least 2 CPUs recommended. You have $${cpus_available}"
          echo
          warning_resources="true"
        fi
        if (( disk_available < one_meg * 10 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
          echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
          echo
          warning_resources="true"
        fi
        if [[ $${warning_resources} == "true" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
          echo "Please follow the instructions to increase amount of resources available:"
          echo "   https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#before-you-begin"
          echo
        fi
        mkdir -p /sources/logs /sources/dags /sources/plugins
        chown -R "${AIRFLOW_UID}:0" /sources/{logs,dags,plugins}
        exec /entrypoint airflow version
    # yamllint enable rule:line-length
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
      _PIP_ADDITIONAL_REQUIREMENTS: ''
    user: "0:0"
    volumes:
      - ${AIRFLOW_PROJ_DIR:-.}:/sources

  airflow-cli:
    <<: *airflow-common
    profiles:
      - debug
    environment:
      <<: *airflow-common-env
      CONNECTION_CHECK_MAX_COUNT: "0"
    # Workaround for entrypoint issue. See: https://github.com/apache/airflow/issues/16252
    command:
      - bash
      - -c
      - airflow

  # You can enable flower by adding "--profile flower" option e.g. docker-compose --profile flower up
  # or by explicitly targeted on the command line e.g. docker-compose up flower.
  # See: https://docs.docker.com/compose/profiles/
  flower:
    <<: *airflow-common
    command: celery flower
    profiles:
      - flower
    ports:
      - "5555:5555"
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

volumes:
  postgres-db-volume:

2. Comment out the unneeded parts

Here it is with the unneeded parts commented out: everything related to Celery and Redis. The executor is also switched from CeleryExecutor to LocalExecutor.

version: '3.8'
x-airflow-common:
  &airflow-common
  # In order to add custom dependencies or upgrade provider packages you can use your extended image.
  # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
  # and uncomment the "build" line below, Then run `docker-compose build` to build the images.
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.6.1}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: LocalExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    # For backward compatibility, with Airflow <2.3
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    # AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    # AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth,airflow.api.auth.backend.session'
    # yamllint disable rule:line-length
    # Use simple http server on scheduler for health checks
    # See https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/check-health.html#scheduler-health-check-server
    # yamllint enable rule:line-length
    AIRFLOW__SCHEDULER__ENABLE_HEALTH_CHECK: 'true'
    # WARNING: Use _PIP_ADDITIONAL_REQUIREMENTS option ONLY for a quick checks
    # for other purpose (development, test and especially production usage) build/extend Airflow image.
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ${AIRFLOW_PROJ_DIR:-.}/dags:/opt/airflow/dags
    - ${AIRFLOW_PROJ_DIR:-.}/logs:/opt/airflow/logs
    - ${AIRFLOW_PROJ_DIR:-.}/config:/opt/airflow/config
    - ${AIRFLOW_PROJ_DIR:-.}/plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:0"
  depends_on:
    &airflow-common-depends-on
    # redis:
    #   condition: service_healthy
    postgres:
      condition: service_healthy

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 10s
      retries: 5
      start_period: 5s
    restart: always

  # redis:
  #   image: redis:latest
  #   expose:
  #     - 6379
  #   healthcheck:
  #     test: ["CMD", "redis-cli", "ping"]
  #     interval: 10s
  #     timeout: 30s
  #     retries: 50
  #     start_period: 30s
  #   restart: always

  airflow-webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8974/health"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  # airflow-worker:
  #   <<: *airflow-common
  #   command: celery worker
  #   healthcheck:
  #     test:
  #       - "CMD-SHELL"
  #       - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
  #     interval: 30s
  #     timeout: 10s
  #     retries: 5
  #     start_period: 30s
  #   environment:
  #     <<: *airflow-common-env
  #     # Required to handle warm shutdown of the celery workers properly
  #     # See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation
  #     DUMB_INIT_SETSID: "0"
  #   restart: always
  #   depends_on:
  #     <<: *airflow-common-depends-on
  #     airflow-init:
  #       condition: service_completed_successfully

  airflow-triggerer:
    <<: *airflow-common
    command: triggerer
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"']
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-init:
    <<: *airflow-common
    entrypoint: /bin/bash
    # yamllint disable rule:line-length
    command:
      - -c
      - |
        function ver() {
          printf "%04d%04d%04d%04d" $${1//./ }
        }
        airflow_version=$$(AIRFLOW__LOGGING__LOGGING_LEVEL=INFO && gosu airflow airflow version)
        airflow_version_comparable=$$(ver $${airflow_version})
        min_airflow_version=2.2.0
        min_airflow_version_comparable=$$(ver $${min_airflow_version})
        if (( airflow_version_comparable < min_airflow_version_comparable )); then
          echo
          echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
          echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
          echo
          exit 1
        fi
        if [[ -z "${AIRFLOW_UID}" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: AIRFLOW_UID not set!\e[0m"
          echo "If you are on Linux, you SHOULD follow the instructions below to set "
          echo "AIRFLOW_UID environment variable, otherwise files will be owned by root."
          echo "For other operating systems you can get rid of the warning with manually created .env file:"
          echo "    See: https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#setting-the-right-airflow-user"
          echo
        fi
        one_meg=1048576
        mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
        cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
        disk_available=$$(df / | tail -1 | awk '{print $$4}')
        warning_resources="false"
        if (( mem_available < 4000 )) ; then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
          echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
          echo
          warning_resources="true"
        fi
        if (( cpus_available < 2 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
          echo "At least 2 CPUs recommended. You have $${cpus_available}"
          echo
          warning_resources="true"
        fi
        if (( disk_available < one_meg * 10 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
          echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
          echo
          warning_resources="true"
        fi
        if [[ $${warning_resources} == "true" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
          echo "Please follow the instructions to increase amount of resources available:"
          echo "   https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#before-you-begin"
          echo
        fi
        mkdir -p /sources/logs /sources/dags /sources/plugins
        chown -R "${AIRFLOW_UID}:0" /sources/{logs,dags,plugins}
        exec /entrypoint airflow version
    # yamllint enable rule:line-length
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
      _PIP_ADDITIONAL_REQUIREMENTS: ''
    user: "0:0"
    volumes:
      - ${AIRFLOW_PROJ_DIR:-.}:/sources

  airflow-cli:
    <<: *airflow-common
    profiles:
      - debug
    environment:
      <<: *airflow-common-env
      CONNECTION_CHECK_MAX_COUNT: "0"
    # Workaround for entrypoint issue. See: https://github.com/apache/airflow/issues/16252
    command:
      - bash
      - -c
      - airflow

  # You can enable flower by adding "--profile flower" option e.g. docker-compose --profile flower up
  # or by explicitly targeted on the command line e.g. docker-compose up flower.
  # See: https://docs.docker.com/compose/profiles/
  # flower:
  #   <<: *airflow-common
  #   command: celery flower
  #   profiles:
  #     - flower
  #   ports:
  #     - "5555:5555"
  #   healthcheck:
  #     test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
  #     interval: 30s
  #     timeout: 10s
  #     retries: 5
  #     start_period: 30s
  #   restart: always
  #   depends_on:
  #     <<: *airflow-common-depends-on
  #     airflow-init:
  #       condition: service_completed_successfully

volumes:
  postgres-db-volume:

Initialize and start it to confirm it still comes up:

% docker-compose up airflow-init
% docker-compose up -d

3. Change the PostgreSQL parts to MySQL

version: '3.8'
x-airflow-common:
  &airflow-common
  # In order to add custom dependencies or upgrade provider packages you can use your extended image.
  # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
  # and uncomment the "build" line below, Then run `docker-compose build` to build the images.
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.6.1}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: LocalExecutor
    # AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: mysql://airflow:airflowpassword@mysql/airflow
    # For backward compatibility, with Airflow <2.3
    # AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: mysql://airflow:airflowpassword@mysql/airflow
    # AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    # AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth,airflow.api.auth.backend.session'
    # yamllint disable rule:line-length
    # Use simple http server on scheduler for health checks
    # See https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/check-health.html#scheduler-health-check-server
    # yamllint enable rule:line-length
    AIRFLOW__SCHEDULER__ENABLE_HEALTH_CHECK: 'true'
    # WARNING: Use _PIP_ADDITIONAL_REQUIREMENTS option ONLY for a quick checks
    # for other purpose (development, test and especially production usage) build/extend Airflow image.
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ${AIRFLOW_PROJ_DIR:-.}/dags:/opt/airflow/dags
    - ${AIRFLOW_PROJ_DIR:-.}/logs:/opt/airflow/logs
    - ${AIRFLOW_PROJ_DIR:-.}/config:/opt/airflow/config
    - ${AIRFLOW_PROJ_DIR:-.}/plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:0"
  depends_on:
    &airflow-common-depends-on
    # redis:
    #   condition: service_healthy
    # postgres:
    #   condition: service_healthy
    mysql:
      condition: service_healthy

services:
  # postgres:
  #   image: postgres:13
  #   environment:
  #     POSTGRES_USER: airflow
  #     POSTGRES_PASSWORD: airflow
  #     POSTGRES_DB: airflow
  #   volumes:
  #     - postgres-db-volume:/var/lib/postgresql/data
  #   healthcheck:
  #     test: ["CMD", "pg_isready", "-U", "airflow"]
  #     interval: 10s
  #     retries: 5
  #     start_period: 5s
  #   restart: always
  mysql:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: airflow
      MYSQL_USER: airflow
      MYSQL_PASSWORD: airflowpassword
    volumes:
      - mysql-db-volume:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"]
      interval: 20s
      timeout: 10s
      retries: 5
    ports:
      - "13306:3306"

  # redis:
  #   image: redis:latest
  #   expose:
  #     - 6379
  #   healthcheck:
  #     test: ["CMD", "redis-cli", "ping"]
  #     interval: 10s
  #     timeout: 30s
  #     retries: 50
  #     start_period: 30s
  #   restart: always

  airflow-webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8974/health"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  # airflow-worker:
  #   <<: *airflow-common
  #   command: celery worker
  #   healthcheck:
  #     test:
  #       - "CMD-SHELL"
  #       - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
  #     interval: 30s
  #     timeout: 10s
  #     retries: 5
  #     start_period: 30s
  #   environment:
  #     <<: *airflow-common-env
  #     # Required to handle warm shutdown of the celery workers properly
  #     # See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation
  #     DUMB_INIT_SETSID: "0"
  #   restart: always
  #   depends_on:
  #     <<: *airflow-common-depends-on
  #     airflow-init:
  #       condition: service_completed_successfully

  airflow-triggerer:
    <<: *airflow-common
    command: triggerer
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"']
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-init:
    <<: *airflow-common
    entrypoint: /bin/bash
    # yamllint disable rule:line-length
    command:
      - -c
      - |
        function ver() {
          printf "%04d%04d%04d%04d" $${1//./ }
        }
        airflow_version=$$(AIRFLOW__LOGGING__LOGGING_LEVEL=INFO && gosu airflow airflow version)
        airflow_version_comparable=$$(ver $${airflow_version})
        min_airflow_version=2.2.0
        min_airflow_version_comparable=$$(ver $${min_airflow_version})
        if (( airflow_version_comparable < min_airflow_version_comparable )); then
          echo
          echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
          echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
          echo
          exit 1
        fi
        if [[ -z "${AIRFLOW_UID}" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: AIRFLOW_UID not set!\e[0m"
          echo "If you are on Linux, you SHOULD follow the instructions below to set "
          echo "AIRFLOW_UID environment variable, otherwise files will be owned by root."
          echo "For other operating systems you can get rid of the warning with manually created .env file:"
          echo "    See: https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#setting-the-right-airflow-user"
          echo
        fi
        one_meg=1048576
        mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
        cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
        disk_available=$$(df / | tail -1 | awk '{print $$4}')
        warning_resources="false"
        if (( mem_available < 4000 )) ; then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
          echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
          echo
          warning_resources="true"
        fi
        if (( cpus_available < 2 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
          echo "At least 2 CPUs recommended. You have $${cpus_available}"
          echo
          warning_resources="true"
        fi
        if (( disk_available < one_meg * 10 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
          echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
          echo
          warning_resources="true"
        fi
        if [[ $${warning_resources} == "true" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
          echo "Please follow the instructions to increase amount of resources available:"
          echo "   https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#before-you-begin"
          echo
        fi
        mkdir -p /sources/logs /sources/dags /sources/plugins
        chown -R "${AIRFLOW_UID}:0" /sources/{logs,dags,plugins}
        exec /entrypoint airflow version
    # yamllint enable rule:line-length
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
      _PIP_ADDITIONAL_REQUIREMENTS: ''
    user: "0:0"
    volumes:
      - ${AIRFLOW_PROJ_DIR:-.}:/sources

  airflow-cli:
    <<: *airflow-common
    profiles:
      - debug
    environment:
      <<: *airflow-common-env
      CONNECTION_CHECK_MAX_COUNT: "0"
    # Workaround for entrypoint issue. See: https://github.com/apache/airflow/issues/16252
    command:
      - bash
      - -c
      - airflow

  # You can enable flower by adding "--profile flower" option e.g. docker-compose --profile flower up
  # or by explicitly targeted on the command line e.g. docker-compose up flower.
  # See: https://docs.docker.com/compose/profiles/
  # flower:
  #   <<: *airflow-common
  #   command: celery flower
  #   profiles:
  #     - flower
  #   ports:
  #     - "5555:5555"
  #   healthcheck:
  #     test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
  #     interval: 30s
  #     timeout: 10s
  #     retries: 5
  #     start_period: 30s
  #   restart: always
  #   depends_on:
  #     <<: *airflow-common-depends-on
  #     airflow-init:
  #       condition: service_completed_successfully

volumes:
  # postgres-db-volume:
  mysql-db-volume:

4. Delete the commented-out parts

And here it is with the commented-out parts removed.

version: '3.8'
x-airflow-common:
  &airflow-common
  # In order to add custom dependencies or upgrade provider packages you can use your extended image.
  # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
  # and uncomment the "build" line below, Then run `docker-compose build` to build the images.
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.6.1}
  # build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: LocalExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: mysql://airflow:airflowpassword@mysql/airflow
    # For backward compatibility, with Airflow <2.3
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: mysql://airflow:airflowpassword@mysql/airflow
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth,airflow.api.auth.backend.session'
    # yamllint disable rule:line-length
    # Use simple http server on scheduler for health checks
    # See https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/check-health.html#scheduler-health-check-server
    # yamllint enable rule:line-length
    AIRFLOW__SCHEDULER__ENABLE_HEALTH_CHECK: 'true'
    # WARNING: Use _PIP_ADDITIONAL_REQUIREMENTS option ONLY for a quick checks
    # for other purpose (development, test and especially production usage) build/extend Airflow image.
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ${AIRFLOW_PROJ_DIR:-.}/dags:/opt/airflow/dags
    - ${AIRFLOW_PROJ_DIR:-.}/logs:/opt/airflow/logs
    - ${AIRFLOW_PROJ_DIR:-.}/config:/opt/airflow/config
    - ${AIRFLOW_PROJ_DIR:-.}/plugins:/opt/airflow/plugins
  user: "${AIRFLOW_UID:-50000}:0"
  depends_on:
    &airflow-common-depends-on
    mysql:
      condition: service_healthy

services:
  mysql:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: airflowpassword
      MYSQL_DATABASE: airflow
      MYSQL_USER: airflow
      MYSQL_PASSWORD: airflowpassword
    volumes:
      - mysql-db-volume:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"]
      interval: 20s
      timeout: 10s
      retries: 5
    ports:
      - "13306:3306"

  airflow-webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8974/health"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-triggerer:
    <<: *airflow-common
    command: triggerer
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"']
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    restart: always
    depends_on:
      <<: *airflow-common-depends-on
      airflow-init:
        condition: service_completed_successfully

  airflow-init:
    <<: *airflow-common
    entrypoint: /bin/bash
    # yamllint disable rule:line-length
    command:
      - -c
      - |
        function ver() {
          printf "%04d%04d%04d%04d" $${1//./ }
        }
        airflow_version=$$(AIRFLOW__LOGGING__LOGGING_LEVEL=INFO && gosu airflow airflow version)
        airflow_version_comparable=$$(ver $${airflow_version})
        min_airflow_version=2.2.0
        min_airflow_version_comparable=$$(ver $${min_airflow_version})
        if (( airflow_version_comparable < min_airflow_version_comparable )); then
          echo
          echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
          echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
          echo
          exit 1
        fi
        if [[ -z "${AIRFLOW_UID}" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: AIRFLOW_UID not set!\e[0m"
          echo "If you are on Linux, you SHOULD follow the instructions below to set "
          echo "AIRFLOW_UID environment variable, otherwise files will be owned by root."
          echo "For other operating systems you can get rid of the warning with manually created .env file:"
          echo "    See: https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#setting-the-right-airflow-user"
          echo
        fi
        one_meg=1048576
        mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
        cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
        disk_available=$$(df / | tail -1 | awk '{print $$4}')
        warning_resources="false"
        if (( mem_available < 4000 )) ; then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
          echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
          echo
          warning_resources="true"
        fi
        if (( cpus_available < 2 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
          echo "At least 2 CPUs recommended. You have $${cpus_available}"
          echo
          warning_resources="true"
        fi
        if (( disk_available < one_meg * 10 )); then
          echo
          echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
          echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
          echo
          warning_resources="true"
        fi
        if [[ $${warning_resources} == "true" ]]; then
          echo
          echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
          echo "Please follow the instructions to increase amount of resources available:"
          echo "   https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#before-you-begin"
          echo
        fi
        mkdir -p /sources/logs /sources/dags /sources/plugins
        chown -R "${AIRFLOW_UID}:0" /sources/{logs,dags,plugins}
        exec /entrypoint airflow version
    # yamllint enable rule:line-length
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
      _PIP_ADDITIONAL_REQUIREMENTS: ''
    user: "0:0"
    volumes:
      - ${AIRFLOW_PROJ_DIR:-.}:/sources

  airflow-cli:
    <<: *airflow-common
    profiles:
      - debug
    environment:
      <<: *airflow-common-env
      CONNECTION_CHECK_MAX_COUNT: "0"
    # Workaround for entrypoint issue. See: https://github.com/apache/airflow/issues/16252
    command:
      - bash
      - -c
      - airflow

volumes:
  mysql-db-volume:

Then initialize and start:

% docker-compose up airflow-init
% docker-compose up -d
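
As a rough check (my addition, not part of the original memo), you can connect to the compose-managed MySQL through the published port and confirm that airflow-init created the metadata tables:

import MySQLdb

# Connect through the published port (13306 -> 3306); credentials from the compose file
conn = MySQLdb.connect(
    host='127.0.0.1',
    port=13306,
    user='airflow',
    passwd='airflowpassword',
    db='airflow',
)
cur = conn.cursor()
cur.execute('SHOW TABLES')
# After a successful airflow-init this should list the Airflow metadata tables
for (table,) in cur.fetchall():
    print(table)
conn.close()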

Connecting to MySQL on a Remote Server from DBeaver

I created a connection from DBeaver to MySQL on a remote server, so I'm leaving this as a memo.

This time I used an SSH tunnel: connect to the server over SSH, then access MySQL as localhost.

  1. New connection
  2. SSH Tunnel settings
  3. allowPublicKeyRetrieval
  4. MySQL login settings

New connection

First, click the Database tab in DBeaver, then click "New Database Connection".

A dialog for choosing the database type appears; in this case, select "MySQL".

SSH Tunnel settings

Since the connection goes through an SSH tunnel, first click the "SSH" tab and enter the SSH connection details for the server hosting MySQL.

Note that this is not the MySQL login information.

Test the SSH part of the connection.

It succeeded.

allowPublicKeyRetrieval

On the "Driver properties" tab, set "allowPublicKeyRetrieval" to TRUE.

MySQL login settings

Finally, enter the MySQL login information: database name, user name, password, and so on.

Test this as well.

Success. Click "Finish" and the connection is complete.
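
Doing the equivalent from Python looks roughly like this; a minimal sketch assuming the sshtunnel and mysqlclient packages, with placeholder hosts and credentials (none of these values are from the actual setup):

from sshtunnel import SSHTunnelForwarder
import MySQLdb

# Hypothetical SSH endpoint and credentials -- substitute your own
tunnel = SSHTunnelForwarder(
    ('remote.example.com', 22),
    ssh_username='ssh_user',
    ssh_password='ssh_password',
    remote_bind_address=('127.0.0.1', 3306),  # MySQL as seen from the server
)
tunnel.start()

# Connect through the local end of the tunnel, i.e. MySQL "as localhost"
conn = MySQLdb.connect(
    host='127.0.0.1',
    port=tunnel.local_bind_port,
    user='mysql_user',
    passwd='mysql_password',
    db='mydb',
)
cur = conn.cursor()
cur.execute('SELECT VERSION()')
print(cur.fetchone())
conn.close()
tunnel.stop()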

【Mac】Creating a Django App and Running the First migrate Against MySQL

  1. Create a Python virtual environment
  2. Create the Django project and application
  3. First migrate against MySQL
  4. Create the admin user

Create a Python virtual environment

Create a Python virtual environment anywhere you like and activate it.

% python3 -m venv justjam
% cd justjam
% source bin/activate

Create the Django project and application

Inside the virtual environment, install Django, then create the project and the application.

% pip install --upgrade pip
% pip install django
% django-admin startproject justjam_proj
% cd justjam_proj
% python manage.py startapp api

Register the new app in settings.py:

# settings.py

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'api.apps.ApiConfig', # added
]

Run runserver and check it in the browser.

% python manage.py runserver

First migrate against MySQL

Note: if you use a custom user model, finish setting it up before moving on to the steps below.

Change DATABASES in settings.py to point at MySQL.

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'justjam',
    }
}
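
The minimal settings above rely on passwordless access over the local socket; if your MySQL needs credentials or runs elsewhere, the usual keys look like this (a sketch with placeholder values, not the original setup):

# settings.py -- explicit connection settings (placeholder values)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'justjam',
        'USER': 'justjam_user',      # hypothetical
        'PASSWORD': 'justjam_pass',  # hypothetical
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}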

Create the database in MySQL, then run the migrate.

% mysql
mysql> create database justjam;
mysql> exit
% pip install mysql
% python manage.py migrate

Confirm that the Django tables have been created.

% mysql
mysql> show tables
    -> ;
+----------------------------+
| Tables_in_justjam          |
+----------------------------+
| auth_group                 |
| auth_group_permissions     |
| auth_permission            |
| auth_user                  |
| auth_user_groups           |
| auth_user_user_permissions |
| django_admin_log           |
| django_content_type        |
| django_migrations          |
| django_session             |
+----------------------------+
10 rows in set (0.00 sec)

mysql> exit;

Create the admin user

% python manage.py createsuperuser
Username (leave blank to use 'username'): admin
Email address: admin@justjam.com
Password: 
Password (again): 
Superuser created successfully.

【Django】My Custom ID Was Not Auto-Incrementing

The problem

When working with Django models I wanted to manage records with my own primary key instead of the default id, so based on a quick skim of the internet I had set 「models.AutoField(primary_key=True)」.

However, the setting had not been applied correctly on the database side, and auto-increment was not working.

As a result, when registering data from the Django admin screen, an id value that already existed was being assigned.

Incidentally, even though models.py said 「models.AutoField(primary_key=True)」, the column was not actually a primary key in the table, so no error was raised either.
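
For reference, the model in question looked roughly like this (a reconstruction based on the table below, not the original code):

# models.py -- sketch of the intended model (reconstructed)
from django.db import models

class IgMstProduct(models.Model):
    # The intent: a custom auto-incrementing primary key instead of Django's default id
    id = models.AutoField(primary_key=True)
    product_nm = models.CharField(max_length=60, null=True)
    product_category = models.CharField(max_length=20, null=True)

    class Meta:
        db_table = 'ig_mst_product'

For AutoField to behave, the underlying column really must be a PRIMARY KEY with AUTO_INCREMENT, which is exactly what was missing here.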

Fixing it

The table at this point looked like the following. I will fix the 「id」 column.

mysql> describe ig_mst_product;
+-------------------+--------------------------+------+-----+---------+-------+
| Field             | Type                     | Null | Key | Default | Extra |
+-------------------+--------------------------+------+-----+---------+-------+
| id                | int(6) unsigned zerofill | NO   |     | 000000  |       |
| product_nm        | varchar(60)              | YES  |     | NULL    |       |
| product_category  | varchar(20)              | YES  |     | NULL    |       |
| brand_cd          | int(6) unsigned zerofill | YES  |     | NULL    |       |
| product_url       | text                     | YES  |     | NULL    |       |
| product_image_url | text                     | YES  |     | NULL    |       |
+-------------------+--------------------------+------+-----+---------+-------+

Set it as the primary key

Since reusing existing values is clearly wrong, set the primary key directly in MySQL.

mysql> alter table ig_mst_product add primary key (id);

Resolving the duplicate values

Adding a record from the admin screen now failed with a duplicate-value error: 「1062, "Duplicate entry '000000' for key 'ig_mst_product.PRIMARY'"」.

It is trying to use 「000000」 as the id, so the value is presumably still not being auto-incremented.

Removing the default value

The column had 「000000」 set as its default value, so I removed that first.

mysql> alter table ig_mst_product alter id drop default;

Setting auto_increment

On top of that, I tried to set auto_increment on the column in MySQL, and was rejected with another duplicate-value error.

mysql> alter table table_name modify column_name int(6) unsigned zerofill not null auto_increment;
ERROR 1062 (23000): ALTER TABLE causes auto_increment resequencing, resulting in duplicate entry '000001' for key 'table_name.PRIMARY'

At this point the id column held values 「000000」 through 「000069」, yet for some reason it tried to use 「000001」 and came back with a duplicate error.

Looking into it, this error occurs when the target column's existing data contains 0 or a negative value. The existing data included 「000000」, so I changed it to a different value, reran the statement, and it succeeded.

mysql> alter table ig_mst_product modify id int(6) unsigned zerofill not null auto_increment;
Query OK, 69 rows affected, 2 warnings (0.04 sec)
Records: 69  Duplicates: 0  Warnings: 2

To summarize the correct setup…

The column used as the id needs to satisfy the following on the database side:

  • No default value
  • Set as the primary key
  • AUTO_INCREMENT enabled
    • Confirm the existing data contains no values of 0 or below

For reference, here are Before / After as well. Key, Default, and Extra have changed.

Before

mysql> describe ig_mst_product;
+-------------------+--------------------------+------+-----+---------+-------+
| Field             | Type                     | Null | Key | Default | Extra |
+-------------------+--------------------------+------+-----+---------+-------+
| id                | int(6) unsigned zerofill | NO   |     | 000000  |       |
| product_nm        | varchar(60)              | YES  |     | NULL    |       |
| product_category  | varchar(20)              | YES  |     | NULL    |       |
| brand_cd          | int(6) unsigned zerofill | YES  |     | NULL    |       |
| product_url       | text                     | YES  |     | NULL    |       |
| product_image_url | text                     | YES  |     | NULL    |       |
+-------------------+--------------------------+------+-----+---------+-------+

After

mysql> describe ig_mst_product;
+-------------------+--------------------------+------+-----+---------+----------------+
| Field             | Type                     | Null | Key | Default | Extra          |
+-------------------+--------------------------+------+-----+---------+----------------+
| id                | int(6) unsigned zerofill | NO   | PRI | NULL    | auto_increment |
| product_nm        | varchar(60)              | YES  |     | NULL    |                |
| product_category  | varchar(20)              | YES  |     | NULL    |                |
| brand_cd          | int(6) unsigned zerofill | YES  |     | NULL    |                |
| product_url       | text                     | YES  |     | NULL    |                |
| product_image_url | text                     | YES  |     | NULL    |                |
+-------------------+--------------------------+------+-----+---------+----------------+

Mac: brew install mysqlclient Fails, pip install mysql Succeeds

Environment

OS: macOS Big Sur version 11.2.2
Python: Python 3.8.3

brew install mysqlclient fails

I ran 「brew install mysqlclient」 in a Python virtual environment and got the error below. Spoiler: I later realized the command should have been 「pip install mysqlclient」.

But before noticing that, I ran 「pip install mysql」 and it solved the problem, so here it is.

Error log

==> Searching for similarly named formulae...
These similarly named formulae were found:
mysql-client                                                                                  mysql-client@5.7
To install one of them, run (for example):
  brew install mysql-client
Error: No available formula or cask with the name "mysqlclient".
==> Searching for a previously deleted formula (in the last month)...
Error: No previously deleted formula found.
==> Searching taps on GitHub...
Error: No formulae found in taps.

The first error, 「Error: No available formula or cask with the name "mysqlclient".」, is the one that stands out.

Searching around, I found a post saying to run 「pip install mysql」 and then 「pip install mysqlclient」.

In practice, though, 「pip install mysql」 alone did the trick.

pip install mysql succeeds

Run 「pip install mysql」 inside the virtual environment.

(venv) % pip install mysql
Collecting mysql
  Downloading https://files.pythonhosted.org/packages/bf/5f/b574ac9f70811df0540e403309f349a8b9fa1a25d3653824c32e52cc1f28/mysql-0.0.2.tar.gz
Collecting mysqlclient (from mysql)
  Downloading https://files.pythonhosted.org/packages/3c/df/59cd2fa5e48d0804d213bdcb1acb4d08c403b61c7ff7ed4dd4a6a2deb3f7/mysqlclient-2.0.3.tar.gz (88kB)
     |████████████████████████████████| 92kB 5.7MB/s 
Installing collected packages: mysqlclient, mysql
  Running setup.py install for mysqlclient ... done
  Running setup.py install for mysql ... done
Successfully installed mysql-0.0.2 mysqlclient-2.0.3
WARNING: You are using pip version 19.2.3, however version 21.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
(venv) %

Does 「pip install mysql」 also cover mysqlclient?

Looking at the log, you can see 「mysqlclient」 mentioned here and there.

And there is the line 「Successfully installed mysql-0.0.2 mysqlclient-2.0.3」. Taken at face value, that means both mysql and mysqlclient were installed successfully, right?

So I started python, ran 「import MySQLdb」, and no error appeared.
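
The check itself was just the import succeeding; printing the version string is my addition (mysqlclient exposes it as MySQLdb.__version__, as far as I know):

% python
>>> import MySQLdb
>>> MySQLdb.__version__
'2.0.3'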

Running 「pip install mysqlclient」 changes nothing

Only at this point did I realize I should have been running 「pip install 〜」 rather than 「brew install 〜」, so I tried 「pip install mysqlclient」, but the requirement was already satisfied and nothing changed.

(venv) % pip install mysqlclient
Requirement already satisfied: mysqlclient in /Users/ユーザー名/PythonProjects/sample_venv/lib/python3.8/site-packages (2.0.3)
(venv) %

Using MySQL Installed Locally on a Mac

  1. Start MySQL
  2. Connect to MySQL
  3. Disconnect from MySQL
  4. Stop MySQL

1. Start MySQL

Start the local MySQL with 「brew services start mysql」.

% brew services start mysql
==> Successfully started `mysql` (label: homebrew.mxcl.mysql)

2. Connect to MySQL

Connect to MySQL with 「mysql -u root」.

% mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.23 Homebrew

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

3. Disconnect from MySQL

Leave the MySQL prompt with 「exit」.

mysql> exit
Bye

4. Stop MySQL

Stop it with 「brew services stop mysql」.

% brew services stop mysql
==> Successfully stopped `mysql` (label: homebrew.mxcl.mysql)

Checking That Queries Actually Run with import MySQLdb

Now that mysqlclient is installed, let's quickly confirm that queries actually run.

This time I selected the titles of the currently published posts from a table auto-created by WordPress.

% python3
Python 3.9.0 (default, Jan  3 2021, 10:29:59) 
[GCC 4.2.1 20070831 patched [FreeBSD]] on freebsd9
Type "help", "copyright", "credits" or "license" for more information.

>>> import MySQLdb
>>> conn = MySQLdb.connect(host='mysqlXXX.db.sakura.ne.jp', db='abcabc_notemite',user='abcabc',passwd='xxxxxxxxxxxx',charset='utf8mb4')
>>> c = conn.cursor()
>>> c.execute('select post_title from nm_posts where post_status = "publish"')
20
>>> c.fetchall()
(('サンプルページ',), ('【Wordpress】ショートコードを別ファイル管理でリスク低減',), ('【WordPress】ショートコードの表示・非表示を一発で切り替えたい',), ('【ターミナル】Mac のログインシェルを zsh に変える',), ('【Python】Macbook に Python3 をインストール',), ('【Python】「pip install requests」でエラー「pip3 install requests」で成功',), ('【Python】Mac に Beautifulsoup4 をインストール',), ('Homebrew を Macbook Pro へインストール',), ('Homebrew を使って Macbook に Wget をインストール',), ('【Wordpress】確かに「--」と書いたのに「–」と表示される件',), ('【8ステップでがんばる】さくらレンタルサーバーでWordPressサイト立ち上げ',), ('【ターミナル】さくらのレンタルサーバーにSSHでログインする方法',), ('さくらレンタルサーバに Python3 をインストールする方法',), ('【Python】さくらレンタルサーバーに requests モジュールをインストール',), ('【Python】Mac に MySQL モジュールをインストールする方法',), ('【2020年10月〜12月】YouTube 急上昇入りランキング',), ('さくらレンタルサーバーに libffi をインストールする',), ('【2020年1月〜】YouTube 急上昇入りランキング',), ('さくらレンタルサーバーで Python3 を再インストール',), ('【Python3】mysqlclient のために _ctypes と格闘した記録',))
>>> 

The return value of execute() (the 「20」 above) is the number of records the query returned. You don't see the actual query results until you call fetchall().
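
One small follow-up (my addition): when the query takes values from outside, it's safer to let the driver do the escaping with a parameterized query. A minimal sketch, reusing the cursor above:

# Parameterized version of the same query -- the driver escapes the value
c.execute(
    'select post_title from nm_posts where post_status = %s',
    ('publish',),
)
print(c.fetchall()[:3])  # peek at the first few titles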

【Python3】A Record of My Struggle with Installing _ctypes and libffi

Background

It started when I wanted to use Python to work with the MySQL database available on my Sakura rental server.

mysqlclient seemed to be the way to do it, so I ran 「pip3 install mysqlclient」, but it kept failing with errors.

The process of finally getting it installed was quite convoluted (at least for a humanities type like me), so here is the record.

  1. Problem 1 (No module named '_ctypes')
  2. Investigation and cause 1 (libffi is missing)
  3. Installing libffi (sudo is not available)
  4. Problem 2 (INFO: Could not locate ffi libs and/or headers)
  5. Investigation and cause 2 (fixing the ./configure line)
  6. Resolution (mysqlclient installed successfully)

Problem 1 (No module named '_ctypes')

As mentioned above, running 「pip3 install mysqlclient」 failed with the error below.

ModuleNotFoundError: No module named '_ctypes'

Literally: "there is no module named _ctypes".

What is this _ctypes module, and where would it even be?

Investigation and cause 1 (libffi is missing)

Starting from zero knowledge, all I could do was throw myself into the sea of information and flounder until something made sense. The fragments I managed to piece together:

  • _ctypes and libffi
    • _ctypes is the module that lets Python use libraries written in C, and it depends on libffi.
    • When building Python from source, if libffi cannot be found, the _ctypes build is skipped.
    • Trying to use _ctypes from a Python built without it raises this very error (ModuleNotFoundError: No module named '_ctypes').
  • Cause
    • In other words, Python had been installed while libffi was absent, so _ctypes was never built.
  • Fix
    • To resolve the error, install libffi and then rebuild and reinstall Python.

Installing libffi (sudo is not available)

The usual advice is 「sudo apt-get install -y libffi-dev」, but the Sakura rental server (Standard plan) grants no administrator privileges, so sudo cannot be used.

% sudo apt-get install -y libffi-dev
/usr/local/bin/sudo: Permission denied.

As you can see, it gets rejected. After a lot of digging, I managed to install libffi without sudo.

Problem 2 (INFO: Could not locate ffi libs and/or headers)

With libffi happily installed, I reinstalled Python. Checking the make log just in case, though, I found this:

INFO: Could not locate ffi libs and/or headers

Failed to build these modules:
_ctypes               _zoneinfo 

Roughly: "the ffi libs and/or headers could not be located, so the _ctypes and _zoneinfo modules could not be built." Ouch.

I went ahead with make install anyway, but of course, with no _ctypes module, 「pip3 install mysqlclient」 failed with exactly the same error as before.

ModuleNotFoundError: No module named '_ctypes'

Investigation and cause 2 (fixing the ./configure line)

Thrown back out to sea. This time I searched on the message 「INFO: Could not locate ffi libs and/or headers」.

According to a page I found (in English), CPPFLAGS, LDFLAGS, and PKG_CONFIG_PATH apparently need to be set. PKG_CONFIG_PATH had already been set while installing libffi, so I tried just CPPFLAGS and LDFLAGS.

Keeping python-3.9.0.tgz, I deleted everything else and retried the Python install with the ./configure line below.

./configure --prefix=$HOME/local/python/ --with-system-ffi LDFLAGS="-L $HOME/local/lib/" CPPFLAGS="-I $HOME/local/include/"

Rough notes on each option:

  • --prefix: where to install Python 3
  • --with-system-ffi: switch that enables linking against the system's libffi
  • CPPFLAGS: options for the C preprocessor (seems to be where the include path goes?)
  • LDFLAGS: directories to search at link time (seems to be where the lib folder goes?)

Resolution (mysqlclient installed successfully)

I ran make after the ./configure line above, and this time the log showed that the _ctypes failure was gone. _zoneinfo still fails, but I don't know what it is and don't need it this time, so I'm ignoring it.

Failed to build these modules:
_zoneinfo

This time 「pip3 install mysqlclient」 succeeded as well.

% pip3 install mysqlclient
Collecting mysqlclient
  Using cached mysqlclient-2.0.3.tar.gz (88 kB)
Using legacy 'setup.py install' for mysqlclient, since package 'wheel' is not installed.
Installing collected packages: mysqlclient
    Running setup.py install for mysqlclient ... done
Successfully installed mysqlclient-2.0.3
WARNING: You are using pip version 20.2.3; however, version 20.3.3 is available.
You should consider upgrading via the '/home/xxxxxx/local/python/bin/python3.9 -m pip install --upgrade pip' command.
% 
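
As one extra check (my addition, not part of the original flow), importing ctypes directly confirms the rebuilt interpreter really has _ctypes:

# Raises "ModuleNotFoundError: No module named '_ctypes'" on a bad build
import ctypes  # ctypes is the public wrapper around the _ctypes extension module
print(ctypes.__name__, 'OK')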

【Python】Mac に MySQL モジュールをインストールする方法

環境

OS: macOS Catalina バージョン 10.15.4
Python: Python 3.8.3

やりたいこと

Python で MySQLdb モジュールを使える様にしたい。

Installing mysql

First, MySQL itself has to be installed with Homebrew, so run "brew install mysql".

Without this step, the later pip install of "mysqlclient" will fail.

(scraping) % brew install mysql
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
==> New Formulae
acl2                       heksa                      nfpm
act                        httpx                      ngs
airshare                   hy                         notmuch-mutt
alsa-lib                   i686-elf-binutils          numcpp
amp                        i686-elf-gcc               oci-cli
apidoc                     idris2                     omake
arb                        infracost                  openfst
argo                       inja                       oq
argocd                     inko                       ormolu
arrayfire                  ioctl                      ory-hydra
asimov                     jerryscript                osi
asroute                    jimtcl                     osm
athenacli                  jinx                       packetbeat
austin                     jobber                     packr
awsweeper                  jsonnet-bundler            pandoc-include-code
blogc                      k9s                        pandocomatic
bombadillo                 kde-karchive               parallel-hashmap
bond                       kde-kdoctools              periscope
borgbackup                 kde-ki18n                  pipgrip
box2d                      kde-threadweaver           po4a
buildozer                  kona                       podman
c7n                        kondo                      postgresql@12
cadence                    ksync                      prometheus-cpp
carton                     kubie                      promtail
cassowary                  ladspa-sdk                 protoc-gen-go-grpc
castget                    latexindent                protoc-gen-gogo
cbc                        ldpl                       protoc-gen-gogofaster
cdktf                      leaf                       pwncat
cgl                        libaio                     python@3.7
chalk-cli                  libcouchbase@2             python@3.9
charge                     libdmx                     reg
chart-testing              libdrm                     rgf
choose-rust                libfontenc                 rqlite
chrony                     libfs                      rust-analyzer
clair                      libgccjit                  rustscan
clang-format@8             libgnt                     s2n
cli11                      libhandy                   saltwater
cloud-nuke                 libice                     scw@1
cloudformation-cli         libirecovery               sdns
cloudformation-guard       libmnl                     server-go
coconut                    libnetfilter-queue         shallow-backup
colfer                     libnetworkit               sheldon
commitizen                 libnfnetlink               shtools
copilot                    liboqs                     silicon
coredns                    libpciaccess               simdjson
cortex                     libpthread-stubs           skylighting
cpio                       libseccomp                 sleef
cpm                        libsm                      smlpkg
cpr                        libx11                     snap
croaring                   libxau                     so
croc                       libxaw                     solidity
cubejs-cli                 libxaw3d                   sonic
cvs-fast-export            libxcb                     sponge
datasette                  libxcomposite              sqlite-utils
dbdeployer                 libxcursor                 standardese
diskonaut                  libxdamage                 staticcheck
dnsprobe                   libxdmcp                   structurizr-cli
doctest                    libxext                    subfinder
dosbox-staging             libxfixes                  termcolor
dotnet                     libxfont                   terraform-ls
duckscript                 libxft                     terraform@0.12
eksctl                     libxi                      terrascan
eleventy                   libxinerama                tfsec
empty                      libxkbfile                 thanos
envoy                      libxmu                     toot
fava                       libxpm                     torchvision
fblog                      libxrandr                  tre-command
fennel                     libxrender                 trunk
fetch                      libxres                    ugrep
flank                      libxscrnsaver              uptoc
fleet-cli                  libxshmfence               usb.ids
flit                       libxt                      util-macros
folderify                  libxtst                    vapor
font-util                  libxv                      vcpkg
foreman                    libxvmc                    vgrep
fpart                      libxxf86dga                vivid
fpdns                      libxxf86vm                 vlang
functionalplus             litecli                    volk
gateway-go                 localstack                 vsearch
gcc@9                      logcli                     vtk@8.2
git-hooks-go               loki                       webify
git-hound                  lunchy                     wgcf
gitql                      lunchy-go                  wownero
gitui                      macos-trash                wren
gluon                      mandown                    wren-cli
go@1.14                    mariadb@10.4               x86_64-elf-gdb
gofish                     marked                     xcb-proto
golangci-lint              mask                       xdpyinfo
gostatic                   matplotplusplus            xorgproto
gradle-profiler            mhonarc                    xtrans
gravity                    microplane                 yj
grpcui                     naabu                      z.lua
gulp-cli                   nanorc                     zoxide
halide                     nest                       zsh-you-should-use
hashlink                   networkit
hasura-cli                 never
==> Updated Formulae
Updated 4645 formulae.
==> Renamed Formulae
elasticsearch@6.8 -> elasticsearch@6
gst-validate -> gst-devtools
interactive-rebase-tool -> git-interactive-rebase-tool
jfrog-cli-go -> jfrog-cli
kibana@6.8 -> kibana@6
mkl-dnn -> onednn
==> Deleted Formulae
baidupcs-go         elasticsearch@5.6   marathon-swift      unravel
biogeme             highlighting-kate   python              urbit
cargo-completion    i386-elf-grub       residualvm          wpscan
cryptopp            kibana@5.6          sflowtool           xu4
elasticsearch@2.4   lumo                tomee-jax-rs

==> Downloading https://homebrew.bintray.com/bottles/openssl%401.1-1.1.1h.catali
==> Downloading from https://d29vzk4ow07wi7.cloudfront.net/4e5357c0cfd55cfa4ef0b
######################################################################## 100.0%
==> Downloading https://homebrew.bintray.com/bottles/protobuf-3.13.0.catalina.bo
==> Downloading from https://d29vzk4ow07wi7.cloudfront.net/af2990ba0a6e5588ef2af
######################################################################## 100.0%
==> Downloading https://homebrew.bintray.com/bottles/mysql-8.0.21_1.catalina.bot
==> Downloading from https://d29vzk4ow07wi7.cloudfront.net/81e92d0df39edaff415e7
######################################################################## 100.0%
==> Installing dependencies for mysql: openssl@1.1 and protobuf
==> Installing mysql dependency: openssl@1.1
==> Pouring openssl@1.1-1.1.1h.catalina.bottle.tar.gz
==> Caveats
A CA file has been bootstrapped using certificates from the system
keychain. To add additional certificates, place .pem files in
  /usr/local/etc/openssl@1.1/certs

and run
  /usr/local/opt/openssl@1.1/bin/c_rehash

openssl@1.1 is keg-only, which means it was not symlinked into /usr/local,
because macOS provides LibreSSL.

If you need to have openssl@1.1 first in your PATH run:
  echo 'export PATH="/usr/local/opt/openssl@1.1/bin:$PATH"' >> ~/.zshrc

For compilers to find openssl@1.1 you may need to set:
  export LDFLAGS="-L/usr/local/opt/openssl@1.1/lib"
  export CPPFLAGS="-I/usr/local/opt/openssl@1.1/include"

==> Summary
🍺  /usr/local/Cellar/openssl@1.1/1.1.1h: 8,067 files, 18.5MB
==> Installing mysql dependency: protobuf
==> Pouring protobuf-3.13.0.catalina.bottle.tar.gz
==> Caveats
Emacs Lisp files have been installed to:
  /usr/local/share/emacs/site-lisp/protobuf
==> Summary
🍺  /usr/local/Cellar/protobuf/3.13.0: 266 files, 19.8MB
==> Installing mysql
==> Pouring mysql-8.0.21_1.catalina.bottle.tar.gz
==> /usr/local/Cellar/mysql/8.0.21_1/bin/mysqld --initialize-insecure --user=ken
==> Caveats
We've installed your MySQL database without a root password. To secure it run:
    mysql_secure_installation

MySQL is configured to only allow connections from localhost by default

To connect run:
    mysql -uroot

To have launchd start mysql now and restart at login:
  brew services start mysql
Or, if you don't want/need a background service you can just run:
  mysql.server start
==> Summary
🍺  /usr/local/Cellar/mysql/8.0.21_1: 290 files, 291.2MB
==> `brew cleanup` has not been run in 30 days, running now...
Removing: /Users/xxxxxxxx/Library/Caches/Homebrew/gettext--0.20.2_1.catalina.bottle.tar.gz... (8.4MB)
Removing: /Users/xxxxxxxx/Library/Caches/Homebrew/libidn2--2.3.0.catalina.bottle.tar.gz... (233.6KB)
Removing: /Users/xxxxxxxx/Library/Caches/Homebrew/libunistring--0.9.10.catalina.bottle.tar.gz... (1.4MB)
Removing: /usr/local/Cellar/openssl@1.1/1.1.1g... (8,059 files, 18MB)
Removing: /Users/xxxxxxxx/Library/Caches/Homebrew/openssl@1.1--1.1.1g.catalina.bottle.tar.gz... (5.3MB)
Removing: /Users/xxxxxxxx/Library/Caches/Homebrew/tree--1.8.0.catalina.bottle.tar.gz... (49.5KB)
Removing: /Users/xxxxxxxx/Library/Caches/Homebrew/wget--1.20.3_2.catalina.bottle.tar.gz... (1.4MB)
Removing: /Users/xxxxxxxx/Library/Logs/Homebrew/tree... (64B)
Removing: /Users/xxxxxxxx/Library/Logs/Homebrew/wget... (64B)
Removing: /Users/xxxxxxxx/Library/Logs/Homebrew/libidn2... (64B)
Removing: /Users/xxxxxxxx/Library/Logs/Homebrew/libunistring... (64B)
Removing: /Users/xxxxxxxx/Library/Logs/Homebrew/gettext... (64B)
Removing: /Users/xxxxxxxx/Library/Logs/Homebrew/openssl@1.1... (64B)
Pruned 1 symbolic links and 2 directories from /usr/local
==> Caveats
==> openssl@1.1
A CA file has been bootstrapped using certificates from the system
keychain. To add additional certificates, place .pem files in
  /usr/local/etc/openssl@1.1/certs

and run
  /usr/local/opt/openssl@1.1/bin/c_rehash

openssl@1.1 is keg-only, which means it was not symlinked into /usr/local,
because macOS provides LibreSSL.

If you need to have openssl@1.1 first in your PATH run:
  echo 'export PATH="/usr/local/opt/openssl@1.1/bin:$PATH"' >> ~/.zshrc

For compilers to find openssl@1.1 you may need to set:
  export LDFLAGS="-L/usr/local/opt/openssl@1.1/lib"
  export CPPFLAGS="-I/usr/local/opt/openssl@1.1/include"

==> protobuf
Emacs Lisp files have been installed to:
  /usr/local/share/emacs/site-lisp/protobuf
==> mysql
We've installed your MySQL database without a root password. To secure it run:
    mysql_secure_installation

MySQL is configured to only allow connections from localhost by default

To connect run:
    mysql -uroot

To have launchd start mysql now and restart at login:
  brew services start mysql
Or, if you don't want/need a background service you can just run:
  mysql.server start

A long log scrolls by; once you are back at the prompt, use the which command to confirm that mysql was installed.

(scraping) % which mysql
/usr/local/bin/mysql

It is installed at /usr/local/bin/mysql.
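
Incidentally, the reason Homebrew's MySQL is required is that mysqlclient's build step locates MySQL via the bundled mysql_config tool. A quick way to confirm it is visible, sketched from Python (the path shown is simply where Homebrew typically puts it):

>>> import shutil
>>> shutil.which("mysql_config")   # mysqlclient's setup.py runs this tool to get build flags
'/usr/local/bin/mysql_config'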

Checking that MySQL works

Starting MySQL

Type "mysql.server start" to try starting MySQL.

% mysql.server start
Starting MySQL
.. SUCCESS! 
% 

It started without a problem.

Shutting down MySQL

You can shut it down with "mysql.server stop".

% mysql.server stop
Shutting down MySQL
.. SUCCESS!
%

Installing mysqlclient

Install mysqlclient with pip.

(scraping) % pip install mysqlclient
Collecting mysqlclient
  Using cached https://files.pythonhosted.org/packages/a5/e1/e5f2b231c05dc51d9d87fa5066f90d1405345c54b14b0b11a1c859020f21/mysqlclient-2.0.1.tar.gz
Installing collected packages: mysqlclient
  Running setup.py install for mysqlclient ... done
Successfully installed mysqlclient-2.0.1
WARNING: You are using pip version 19.2.3, however version 20.2.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
(scraping) % 

Checking that the MySQLdb module works

Confirm that the MySQLdb module can be imported in Python, and you are done.

>>> import MySQLdb
>>>
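
From here, a minimal connection test might look like the following. This is a sketch that assumes the local server started above and the default root user with no password (as noted in the brew caveats); adjust the arguments to your own setup:

>>> import MySQLdb
>>> conn = MySQLdb.connect(host="localhost", user="root", passwd="")  # no database selected yet
>>> cur = conn.cursor()
>>> cur.execute("SELECT VERSION()")   # returns the row count, as seen earlier
1
>>> cur.fetchall()                    # e.g. the version installed above
(('8.0.21',),)
>>> conn.close()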