Go Project in Practice: Deploying a Docker Development Environment

Preface

The previous article walked you through Go-Zero in practice: defining the API. Today's article continues the Juejin signed-author series on business development with go-zero: setting up the development environment with docker-compose.

By following this tutorial, you will be able to set up a complete development environment and have the scaffolding ready for future business development.

Overview

For the development environment of a go-zero project, I recommend docker-compose with direct connections (direct RPC endpoints), which avoids the hassle of running service registration and discovery middleware (etcd, Nacos, Consul, etc.) during development.

Preparation

Following the earlier articles, we have created a usercenter service under the go-zero-testProject directory, with a corresponding api service and rpc service.

The project directory structure is as follows:

Generate the go.mod file:

go mod init testProject

Modify the api layer's usercenter.yaml to the following:

Name: usercenter-api
Host: 0.0.0.0
Port: 1004
Mode: dev

# Monitoring
Prometheus:
  Host: 0.0.0.0
  Port: 4008
  Path: /metrics

# Tracing
Telemetry:
  Name: usercenter-api
  Endpoint: http://jaeger:14268/api/traces
  Sampler: 1.0
  Batcher: jaeger

Log:
  ServiceName: usercenter-api
  Mode: console
  Level: error
  Encoding: plain

#rpc service
UsercenterRpcConf:
  Endpoints:
    - 127.0.0.1:2004
  NonBlock: true
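
For reference, the UsercenterRpcConf block above maps onto go-zero's zrpc.RpcClientConf, which supports direct Endpoints as well as etcd-based discovery. Below is a minimal sketch of what the api service's internal/config/config.go could look like; the struct layout is an assumption based on the YAML above, and your generated config.go may differ:

package config

import (
    "github.com/zeromicro/go-zero/rest"
    "github.com/zeromicro/go-zero/zrpc"
)

type Config struct {
    rest.RestConf                        // Name, Host, Port, Mode, Log, Prometheus, Telemetry ...
    UsercenterRpcConf zrpc.RpcClientConf // direct Endpoints; no etcd/nacos/consul needed in dev
}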

The rpc configuration file is as follows:

Name: usercenter-rpc
ListenOn: 0.0.0.0:2004
Mode: dev

#Monitoring
Prometheus:
  Host: 0.0.0.0
  Port: 4009
  Path: /metrics

#Tracing
Telemetry:
  Name: usercenter-rpc
  Endpoint: http://jaeger:14268/api/traces
  Sampler: 1.0
  Batcher: jaeger

Log:
  ServiceName: usercenter-rpc
  Level: error

Redis:
  Host: redis:6379
  Type: node
  Pass: xxxxxxx
  Key: usercenter-rpc
DB:
  DataSource: root:xxxxxx@tcp(mysql:3306)/xxxxxx?charset=utf8mb4&parseTime=true&loc=Asia%2FShanghai
Cache:
  - Host: redis:6379
    Pass: xxxxxx
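
Likewise, the Redis, DB, and Cache sections above correspond to fields in the rpc service's config struct. A minimal sketch, assuming go-zero's stock config types (your generated internal/config/config.go may differ slightly):

package config

import (
    "github.com/zeromicro/go-zero/core/stores/cache"
    "github.com/zeromicro/go-zero/core/stores/redis"
    "github.com/zeromicro/go-zero/zrpc"
)

type Config struct {
    zrpc.RpcServerConf       // ListenOn, Mode, Log, Prometheus, Telemetry ...
    Redis redis.RedisKeyConf // Host, Type, Pass, Key
    DB    struct {
        DataSource string
    }
    Cache cache.CacheConf // cache nodes used by the cached model layer
}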

The docker-compose.yml contents are as follows:

Note: this article uses lyumikael's open-source Docker image lyumikael/gomodd:v1.20.3, so the passwords and other settings in the config files below are the ones built into it, which makes local deployment much faster.

version: '3'

######## api + rpc under app. Before starting this project, start the environment it depends on via docker-compose-env.yml #######

services:
  nginx-gateway:
    image: nginx:1.21.5
    container_name: nginx-gateway
    restart: always
    privileged: true
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 8888:8081
    volumes:
      - ./deploy/nginx/conf.d:/etc/nginx/conf.d
      - ./data/nginx/log:/var/log/nginx
    networks:
      - testProject_net
    depends_on:
      - testProject

  #前端api + 业务rpc - Front-end API + business RPC
  testProject:
    image: lyumikael/gomodd:v1.20.3
    container_name: testProject
    environment:
      # 时区上海 - Timezone Shanghai
      TZ: Asia/Shanghai
      GOPROXY: https://goproxy.cn,direct
    working_dir: /go/testProject
    volumes:
      - .:/go/testProject
    privileged: true
    restart: always
    networks:
      - testProject_net
    ports:
      - 2004:2004

networks:
  testProject_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.1.0/24

The docker-compose-env.yml contents are as follows:

version: '3'

services:
  #jaeger链路追踪 — Jaeger for tracing
  jaeger:
    image: jaegertracing/all-in-one:1.42.0
    container_name: jaeger
    restart: always
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "9411:9411"
    environment:
      - SPAN_STORAGE_TYPE=elasticsearch
      - ES_SERVER_URLS=http://elasticsearch:9200
      - LOG_LEVEL=debug
    networks:
      - testProject_net

  #prometheus监控 — Prometheus for monitoring
  prometheus:
    image: prom/prometheus:v2.28.1
    container_name: prometheus
    environment:
      # 时区上海 - Time zone Shanghai (Change if needed)
      TZ: Asia/Shanghai
    volumes:
      - ./deploy/prometheus/server/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./data/prometheus/data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
    restart: always
    user: root
    ports:
      - 9090:9090
    networks:
      - testProject_net

  #查看prometheus监控数据 - Grafana to view Prometheus monitoring data
  grafana:
    image: grafana/grafana:8.0.6
    container_name: grafana
    hostname: grafana
    user: root
    environment:
      # 时区上海 - Time zone Shanghai (Change if needed)
      TZ: Asia/Shanghai
    restart: always
    volumes:
        - ./data/grafana/data:/var/lib/grafana
    ports:
        - "3001:3000"
    networks:
        - testProject_net

  #搜集kafka业务日志、存储prometheus监控数据 - Kafka for collecting business logs and storing Prometheus monitoring data
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
    container_name: elasticsearch
    user: root
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - TZ=Asia/Shanghai
    volumes:
      - ./data/elasticsearch/data:/usr/share/elasticsearch/data
    restart: always
    ports:
    - 9200:9200
    - 9300:9300
    networks:
      - testProject_net

  #查看elasticsearch数据 - Kibana to view Elasticsearch data
  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.4
    container_name: kibana
    environment:
      - elasticsearch.hosts=http://elasticsearch:9200
      - TZ=Asia/Shanghai
    restart: always
    networks:
      - testProject_net
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

  #消费kafka中filebeat收集的数据输出到es - The data output collected by FileBeat in Kafka is output to ES
  go-stash:
    image: kevinwan/go-stash:1.0 # if you "macOs intel" or "linux amd"
#    image: kevinwan/go-stash:1.0-arm64 #  if you "macOs m1" or "linux arm"
    container_name: go-stash
    environment:
      # 时区上海 - Time zone Shanghai (Change if needed)
      TZ: Asia/Shanghai
    user: root
    restart: always
    volumes:
      - ./deploy/go-stash/etc:/app/etc
    networks:
      - testProject_net
    depends_on:
      - elasticsearch
      - kafka

  #收集业务数据 - Collect business data
  filebeat:
    image: elastic/filebeat:7.13.4
    container_name: filebeat
    environment:
      # 时区上海 - Time zone Shanghai (Change if needed)
      TZ: Asia/Shanghai
    user: root
    restart: always
    entrypoint: "filebeat -e -strict.perms=false"  #解决配置文件权限问题 - Solving the configuration file permissions
    volumes:
      - ./deploy/filebeat/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml
      # 此处需指定docker的containers目录,取决于你docker的配置 - The containers directory of docker needs to be specified here, depending on your docker configuration
      # 如snap安装的docker,则为/var/snap/docker/common/var-lib-docker/containers - Example if docker is installed by Snap /var/snap/docker/common/var-lib-docker/containers
      # - /var/snap/docker/common/var-lib-docker/containers:/var/lib/docker/containers
      - /var/lib/docker/containers:/var/lib/docker/containers
    networks:
      - testProject_net
    depends_on:
      - kafka


  #zookeeper是kafka的依赖 - Zookeeper is the dependencies of Kafka
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    environment:
      # 时区上海 - Time zone Shanghai (Change if needed)
      TZ: Asia/Shanghai
    restart: always
    ports:
      - 2181:2181
    networks:
      - testProject_net

  #消息队列 - Message queue
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - 9092:9092
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=false
      - TZ=Asia/Shanghai
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - testProject_net
    depends_on:
      - zookeeper

  #asynqmon asynq延迟队列、定时队列的webui - Asynqmon asynq delay queue, timing queue's webUI
  asynqmon:
    image: hibiken/asynqmon:latest
    container_name: asynqmon
    ports:
      - 8980:8080
    command:
      - '--redis-addr=redis:6379'
      - '--redis-password=G62m50oigInC30sf'
    restart: always
    networks:
      - testProject_net
    depends_on:
      - redis

  mysql:
    image: mysql/mysql-server:8.0.28
    container_name: mysql
    environment:
      # 时区上海 - Time zone Shanghai (Change if needed)
      TZ: Asia/Shanghai
      # root 密码 - root password
      MYSQL_ROOT_PASSWORD: PXDN93VRKUm8TeE7
    ports:
      - 33069:3306
    volumes:
      # 数据挂载 - Data mounting
      - ./data/mysql/data:/var/lib/mysql
      # Logs
    command:
      # 将mysql8.0默认密码策略 修改为 原先 策略 (mysql8.0对其默认策略做了更改 会导致密码无法匹配) 
      # Modify the Mysql 8.0 default password strategy to the original strategy (MySQL8.0 to change its default strategy will cause the password to be unable to match)
      --default-authentication-plugin=mysql_native_password
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
    privileged: true
    restart: always
    networks:
      - testProject_net

  #redis容器 - Redis container
  redis:
    image: redis:6.2.5
    container_name: redis
    ports:
      - 36379:6379
    environment:
      # 时区上海 - Time zone Shanghai (Change if needed)
      TZ: Asia/Shanghai
    volumes:
      # 数据文件 - data files
      - ./data/redis/data:/data:rw
    command: "redis-server --requirepass G62m50oigInC30sf  --appendonly yes"
    privileged: true
    restart: always
    networks:
      - testProject_net


networks:
  testProject_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.1.0/24

The modd.conf contents are as follows:

#usercenter
app/usercenter/cmd/rpc/**/*.go {
    prep: go build -o data/server/usercenter-rpc  -v app/usercenter/cmd/rpc/usercenter.go
    daemon +sigkill: ./data/server/usercenter-rpc -f app/usercenter/cmd/rpc/etc/usercenter.yaml
}
app/usercenter/cmd/api/**/*.go {
    prep: go build -o data/server/usercenter-api  -v app/usercenter/cmd/api/usercenter.go
    daemon +sigkill: ./data/server/usercenter-api -f app/usercenter/cmd/api/etc/usercenter.yaml
}

If anything in the configuration files above is unclear, we will publish follow-up tutorial articles covering them in detail (remember to follow me!).

The directory structure under the deploy folder is as follows:

filebeat: Filebeat configuration for the Docker deployment

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/lib/docker/containers/*/*-json.log

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~

output.kafka:
  enabled: true
  hosts: ["kafka:9092"]
  # the topic needs to be created in advance
  topic: "testProject-log"
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1

go-stash: go-stash configuration

Clusters:
  - Input:
      Kafka:
        Name: gostash
        Brokers:
          - "kafka:9092"
        Topics:
          - testProject-log
        Group: pro
        Consumers: 16
    Filters:
      - Action: drop
        Conditions:
          - Key: k8s_container_name
            Value: "-rpc"
            Type: contains
          - Key: level
            Value: info
            Type: match
            Op: and
      - Action: remove_field
        Fields:
          # - message
          - _source
          - _type
          - _score
          - _id
          - "@version"
          - topic
          - index
          - beat
          - docker_container
          - offset
          - prospector
          - source
          - stream
          - "@metadata"
      - Action: transfer
        Field: message
        Target: data
    Output:
      ElasticSearch:
        Hosts:
          - "http://elasticsearch:9200"
        Index: "testProject-{{yyyy-MM-dd}}"

nginx: nginx gateway configuration

server{
      listen 8081;
      access_log /var/log/nginx/testProject.com_access.log;
      error_log /var/log/nginx/testProject.com_error.log;

      location ~ /usercenter/ {
         proxy_set_header Host $http_host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header REMOTE-HOST $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_pass http://testProject:1004;
      }
}
prometheus: Prometheus configuration

global:
  scrape_interval:
  external_labels:
    monitor: 'codelab-monitor'

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s  # scrape interval for this job
    static_configs:
      - targets: [ '127.0.0.1:9090' ]
  - job_name: 'usercenter-api'
    static_configs:
      - targets: [ 'testProject:4008' ]
        labels:
          job: usercenter-api
          app: usercenter-api
          env: dev
  - job_name: 'usercenter-rpc'
    static_configs:
      - targets: [ 'testProject:4009' ]
        labels:
          job: usercenter-rpc
          app: usercenter-rpc
          env: dev
  - job_name: 'mqueue-job'
    static_configs:
      - targets: [ 'testProject:4010' ]
        labels:
          job: mqueue-job
          app: mqueue-job
          env: dev
  - job_name: 'mqueue-scheduler'
    static_configs:
      - targets: [ 'testProject:4011' ]
        labels:
          job: mqueue-scheduler
          app: mqueue-scheduler
          env: dev

script:

  • gencode: commands for generating the api and rpc code and creating the Kafka topic, for copy-and-paste use
  • mysql: a shell script for generating model code

Tips: if many of the technologies in this stack are unfamiliar, don't be intimidated. As long as you know MySQL and Redis, you can start just those two middleware services first and get the project running; the rest can be learned gradually. The other components will be covered in later articles, so remember to follow me!

Writing the API Layer

syntax = "v1"

info(
    title: "user service"
    desc: "user service"
    author: "王中阳"
    email: "425772719@qq.com"
    version: "v1"
)

type User {
    Id       int64  `json:"id"`
    Mobile   string `json:"mobile"`
    Nickname string `json:"nickname"`
    Sex      int64  `json:"sex"`
    Avatar   string `json:"avatar"`
    Info     string `json:"info"`
    IsAdmin  int64  `json:"isAdmin"`
    Signature string `json:"signature"`
    Longitude float64 `json:"longitude"`
    Latitude float64 `json:"latitude"`
    ParticipationCount      int64  `json:"participation_count"`
    CreatedCount      int64  `json:"created_count"`
    WonCount      int64  `json:"won_count"`
    Integral      int64  `json:"integral"`

}

type (
    RegisterReq {
        Mobile   string `json:"mobile"`
        Password string `json:"password"`
    }
    RegisterResp {
        AccessToken  string `json:"accessToken"`
        AccessExpire int64  `json:"accessExpire"`
        RefreshAfter int64  `json:"refreshAfter"`
    }
)

type (
    LoginReq {
        Mobile   string `json:"mobile"`
        Password string `json:"password"`
    }
    LoginResp {
        AccessToken  string `json:"accessToken"`
        AccessExpire int64  `json:"accessExpire"`
        RefreshAfter int64  `json:"refreshAfter"`
    }
)
The service definition is as follows:

syntax = "v1"

info (
    title: "user service"
    desc: "user service"
    author: "王中阳"
    email: "425772719@qq.com"
    version: "v1"
)

import (
    "lottery/lottery.api"
)

//=====================================> usercenter v1 <=================================
//no need login
@server (
    prefix: usercenter/v1
    group: user
)
service usercenter {
    @doc "register"
    @handler register
    post /user/register (RegisterReq) returns (RegisterResp)

    @doc "login"
    @handler login
    post /user/login (LoginReq) returns (LoginResp)
}

Then generate the Swagger documentation.
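
If you have not generated the api layer code yet, it is done with goctl, and the Swagger file comes from the goctl-swagger plugin. Roughly like this, assuming the api file is named usercenter.api and you run the commands from the api directory (the plugin must be installed, and flags may vary slightly with your goctl version):

$ goctl api go -api usercenter.api -dir .
$ goctl api plugin -plugin goctl-swagger="swagger -filename usercenter.json" -api usercenter.api -dir .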

Writing the RPC Layer


syntax = "proto3";

option go_package = "./pb";

package pb;

// ------------------------------------
// Messages
// ------------------------------------

//-------------------------------- user --------------------------------
message RegisterReq {
    string mobile = 1;
    string nickname = 2;
    string password = 3;
    string authKey = 4;
    string authType = 5;
}
message RegisterResp {
    string accessToken = 1;
    int64 accessExpire = 2;
    int64 refreshAfter = 3;
}

message LoginReq {
    string authType = 1;
    string authKey = 2;
    string password = 3;
}
message LoginResp {
    string accessToken = 1;
    int64 accessExpire = 2;
    int64 refreshAfter = 3;
}


// ------------------------------------
// Rpc Func
// ------------------------------------
service usercenter {
    // custom service methods
    rpc login(LoginReq) returns(LoginResp);
    rpc register(RegisterReq) returns(RegisterResp);
}
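
The rpc skeleton is generated from this proto with goctl as well. A sketch, assuming the file is named usercenter.proto and you run it from the rpc directory (with the go_package of "./pb" above, the generated protobuf code lands under ./pb):

$ goctl rpc protoc usercenter.proto --go_out=. --go-grpc_out=. --zrpc_out=.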

We won't go into detail here on how to write the business code for the api and rpc layers; if you're unsure, check out my earlier articles.

Hands-on

Update dependencies:

go mod tidy

Start the environment the project depends on:

$ docker-compose -f docker-compose-env.yml up -d

Import data

Enter the container:

$ docker exec -it kafka /bin/sh
$ cd /opt/kafka/bin/

Create one topic, testProject-log, which is used for log collection:

$ ./kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic testProject-log
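
Optionally, verify the topic exists before moving on (in the same container; this image's Kafka version still accepts the --zookeeper flag used above):

$ ./kafka-topics.sh --list --zookeeper zookeeper:2181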

Import MySQL data

If you want to connect to MySQL with a local client tool, first enter the container and grant root remote connection privileges:

$ docker exec -it mysql mysql -uroot -p
## enter the password: PXDN93VRKUm8TeE7
$ use mysql;
$ update user set host='%' where user='root';
$ FLUSH PRIVILEGES;

Connect to the database with Navicat.

Import all the SQL files under deploy/sql/ into your own database.

Then import the usercenter.sql below into that database:

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for user
-- ----------------------------
DROP TABLE IF EXISTS `user`;
CREATE TABLE `user`  (
  `id` bigint(0) NOT NULL AUTO_INCREMENT,
  `password` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL DEFAULT '',
  `nickname` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL DEFAULT '',
  `create_time` datetime(0) NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` datetime(0) NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP(0),
  `delete_time` datetime(0) DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 9 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'user table' ROW_FORMAT = Dynamic;
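
With the table created, the model code used by the rpc service can be generated with goctl (this is essentially what the mysql script under the script folder wraps). A sketch, assuming the database is named usercenter and you run it from the usercenter model directory:

$ goctl model mysql datasource -url="root:PXDN93VRKUm8TeE7@tcp(127.0.0.1:33069)/usercenter" -table="user" -dir . -c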

Check the environment services

Elasticsearch: http://127.0.0.1:9200/ (⚠️ this one takes a while to start)

Jaeger: http://127.0.0.1:16686/search (⚠️ if this fails: it depends on ES, and because ES is slow to start Jaeger may time out; restart it once ES is up)

Asynq (delayed tasks, scheduled tasks, message queue): http://127.0.0.1:8980/

kibana : http://127.0.0.1:5601/

Prometheus: http://127.0.0.1:9090/

Grafana: http://127.0.0.1:3001/ , default username and password are both admin

MySQL: view with your own client tool (Navicat, Sequel Pro)

  • host : 127.0.0.1
  • port : 33069
  • username : root
  • pwd : PXDN93VRKUm8TeE7

Redis: view with your own tool (e.g. RedisManager)

  • host : 127.0.0.1
  • port : 36379
  • pwd : G62m50oigInC30sf

Kafka (pub/sub): view with your own client tool

  • host : 127.0.0.1
  • port : 9092

Note: the passwords above are the defaults built into the Docker setup; once the whole flow works, you can change them to your own.

Start the services

Pull the runtime image:

$ docker pull lyumikael/gomodd:v1.20.3 

# This image is used to run all of the api + rpc services under app.
# If you are on an Apple Silicon ("mac m1") machine, use lyumikael/go-modd-env:v1.0.0 instead.

Start the project:

$ docker-compose up -d 

[Note] This uses the docker-compose.yml configuration in the project root directory.

Check that the project is running

Click testProject to enter the container and check how the services are running.

Seeing the project running on port 1004 means it has started successfully.
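
If you prefer the command line to Docker Desktop, the same check can be done from the host, for example:

$ docker logs -f testProject   # modd output: builds and restarts of usercenter-rpc / usercenter-api
$ docker exec -it testProject sh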

Access the project

Since we use nginx as the gateway and the nginx gateway is configured in docker-compose, with port 8888 exposed externally, we access the project through port 8888.

Call an endpoint and get a response to confirm the service started successfully.
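
For example, you can call the register endpoint through the gateway with curl (the body fields follow the RegisterReq definition above; the mobile number and password here are just placeholders):

$ curl -X POST http://127.0.0.1:8888/usercenter/v1/user/register \
    -H "Content-Type: application/json" \
    -d '{"mobile":"13012345678","password":"123456"}'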

Summary

At this point we have our development environment up and running on Docker. Many of the topics involved can safely be set aside during day-to-day development; the other middleware, log collection, and so on will be covered in detail in later articles, so stay tuned!

I will keep updating the Go-Zero series. If you are interested in Go or microservices, follow me, or feel free to message me directly.

go-zero & Microservices Discussion Group

If you are interested in microservices or go-zero, feel free to add me on WeChat: wangzhongyang1993, note: microservices.



