Using Nginx and node-http-proxy to mask IP addresses - PullRequest
0 votes
/ April 27, 2018

First of all, I'd like to apologize for the long post!

I'm almost there! I want to use node-http-proxy to mask a number of dynamic IP addresses that I pull from a MySQL database. I do this by routing subdomains to node-http-proxy and resolving the target from there. I was able to get this working locally without any problems.

Remotely, it sits behind an Nginx web server with HTTPS enabled (I have a wildcard certificate issued through Let's Encrypt, plus a Comodo SSL certificate for the domain). I managed to configure Nginx so that it passes requests through to node-http-proxy without problems. The only issue is that the latter gives me:

 The error is { Error: connect ECONNREFUSED 127.0.0.1:80
     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
   errno: 'ECONNREFUSED',
   code: 'ECONNREFUSED',
   syscall: 'connect',
   address: '127.0.0.1',
   port: 80 }

whenever I call:

proxy.web(req, res, { target, ws: true })

I don't know whether the problem is with the remote address (very unlikely, since I can reach it from another device), or whether I misconfigured nginx (quite likely). There's also a chance that it conflicts with Nginx, which is listening on port 80. But I don't understand why node-http-proxy would try to connect through port 80 at all.

Some additional information: there is a Ruby on Rails application running alongside it. node-http-proxy, nginx, and Ruby on Rails each run in their own Docker container. I don't think the problem comes from Docker, since I was able to test all of this locally without any issues.
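For context, the proxy side looks roughly like this. This is a simplified sketch rather than my exact code: lookupTargetBySubdomain stands in for the MySQL lookup, and the listen port is just an example.

const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});

// Log proxy errors instead of letting them crash the process.
proxy.on('error', (err, req, res) => {
  console.log('The error is', err);
  res.writeHead(502);
  res.end('Bad gateway');
});

http.createServer(async (req, res) => {
  // e.g. "123.example.co" -> "123"
  const subdomain = (req.headers.host || '').split('.')[0];
  // lookupTargetBySubdomain is a placeholder for the MySQL query.
  const target = await lookupTargetBySubdomain(subdomain);
  proxy.web(req, res, { target, ws: true });
}).listen(5050);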

Here is my current nginx.conf (for security reasons I replaced my domain name with example.com):

In server_name "~^\d+\.example\.co$"; I want requests to be forwarded to node-http-proxy, while example.com serves the Ruby on Rails application.
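For reference, ipmask_docker_app used in the location / block below is an nginx upstream pointing at the node-http-proxy container, roughly along these lines (the service name comes from docker-compose, and the port here is illustrative rather than copied from my actual config):

upstream ipmask_docker_app {
  server ipmask:5050;
}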

# https://codepany.com/blog/rails-5-and-docker-puma-nginx/
# This is the port the app is currently exposing.
# Please, check this: https://gist.github.com/bradmontgomery/6487319#gistcomment-1559180  

upstream puma_example_docker_app {
  server app:5000;
}


server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    # Enable once you solve wildcard subdomain issue.
    return 301 https://$host$request_uri;
}

server {

  server_name "~^\d+\.example\.co$";

  # listen 80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # Created by Certbot
  ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
  # include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; 

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
  # ssl_certificate_key /etc/ssl/private/example.co.key;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_session_tickets off;

  # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
  # This is generated by ourselves. 
  # ssl_dhparam /etc/ssl/certs/dhparam.pem;

  # intermediate configuration. tweak to your needs.
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
  ssl_prefer_server_ciphers on;

  # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
  add_header Strict-Transport-Security max-age=15768000;

  # OCSP Stapling ---
  # fetch OCSP records from URL in ssl_certificate and cache them
  ssl_stapling on;
  ssl_stapling_verify on;

  ## verify chain of trust of OCSP response using Root CA and Intermediate certs
  ssl_trusted_certificate /etc/ssl/certs/trusted.crt;




  location / {
    # https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    proxy_pass http://ipmask_docker_app;
    # limit_req zone=one;
    access_log /var/www/example/log/nginx.access.log;
    error_log /var/www/example/log/nginx.error.log;
  }
}





# SSL configuration was obtained through Mozilla's 
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
server {

  server_name localhost example.co www.example.co; #puma_example_docker_app;

  # listen 80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # Created by Certbot
  # ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
  #ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
  # include /etc/letsencrypt/options-ssl-nginx.conf;
  # ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; 

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
  ssl_certificate_key /etc/ssl/private/example.co.key;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_session_tickets off;

  # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
  # This is generated by ourselves. 
  ssl_dhparam /etc/ssl/certs/dhparam.pem;

  # intermediate configuration. tweak to your needs.
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
  ssl_prefer_server_ciphers on;

  # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
  add_header Strict-Transport-Security max-age=15768000;

  # OCSP Stapling ---
  # fetch OCSP records from URL in ssl_certificate and cache them
  ssl_stapling on;
  ssl_stapling_verify on;

  ## verify chain of trust of OCSP response using Root CA and Intermediate certs
  ssl_trusted_certificate /etc/ssl/certs/trusted.crt;

  # resolver 127.0.0.1;
  # https://support.comodo.com/index.php?/Knowledgebase/Article/View/1091/37/certificate-installation--nginx

  # The above was generated through Mozilla's SSL Config Generator
  # https://mozilla.github.io/server-side-tls/ssl-config-generator/

  # This is important for Rails to accept the headers, otherwise it won't work:
  # AKA. => HTTP_AUTHORIZATION_HEADER Will not work!
  underscores_in_headers on; 

  client_max_body_size 4G;
  keepalive_timeout 10;

  error_page 500 502 504 /500.html;
  error_page 503 @503;


  root /var/www/example/public;
  try_files $uri/index.html $uri @puma_example_docker_app;

  # This is a new configuration and needs to be tested.
  # Final slashes are critical
  # https://stackoverflow.com/a/47658830/1057052
  location /kibana/ {
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/.htpasswd;
      #rewrite ^/kibanalogs/(.*)$ /$1 break;
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;

      proxy_pass http://kibana:5601/;

  }


  location @puma_example_docker_app {
    # https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    proxy_pass http://puma_example_docker_app;
    # limit_req zone=one;
    access_log /var/www/example/log/nginx.access.log;
    error_log /var/www/example/log/nginx.error.log;
  }

  location ~ ^/(assets|images|javascripts|stylesheets)/   {    
      try_files $uri @rails;     
      access_log off;    
      gzip_static on; 

      # to serve pre-gzipped version     
      expires max;    
      add_header Cache-Control public;     

      add_header Last-Modified "";    
      add_header ETag "";    
      break;  
   } 

  location = /50x.html {
    root html;
  }

  location = /404.html {
    root html;
  }

  location @503 {
    error_page 405 = /system/maintenance.html;
    if (-f $document_root/system/maintenance.html) {
      rewrite ^(.*)$ /system/maintenance.html break;
    }
    rewrite ^(.*)$ /503.html break;
  }

  if ($request_method !~ ^(GET|HEAD|PUT|PATCH|POST|DELETE|OPTIONS)$ ){
    return 405;
  }

  if (-f $document_root/system/maintenance.html) {
    return 503;
  }

  location ~ \.(php|html)$ {
    return 405;
  }
}

Here is the current docker-compose file:

# This is a docker compose file that will pull from the private
# repo and will use all the images. 
# This will be an equivalent for production.

version: '3.2'
services:
  # No need for the database in production, since it will be connecting to one
  # Use this while you solve Database problems
  app:
    image: myrepo/rails:latest
    restart: always
    environment:
      RAILS_ENV: production
      # What this is going to do is that all the logging is going to be printed into the console. 
      # Use this with caution as it can become very verbose and hard to read.
      # This can then be read by using docker-compose logs app.
      RAILS_LOG_TO_STDOUT: 'true'
      # RAILS_SERVE_STATIC_FILES: 'true'
    # The first command, the remove part, what it does is that it eliminates a file that 
    # tells rails and puma that an instance is running. This was causing issues, 
    # https://github.com/docker/compose/issues/1393
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -e production -p 5000 -b '0.0.0.0'"
    # volumes:
    #   - /var/www/cprint
    ports:
      - "5000:5000"
    expose:
      - "5000"
    networks:
      - elk
    links:
      - logstash
  # Uses Nginx as a web server (Access everything through http://localhost)
  # https://stackoverflow.com/questions/30652299/having-docker-access-external-files
  # 
  web:
    image: myrepo/nginx:latest
    depends_on:
      - elasticsearch
      - kibana
      - app
      - ipmask
    restart: always
    volumes:
      # https://stackoverflow.com/a/48800695/1057052
      # - "/etc/ssl/:/etc/ssl/"
      - type: bind
        source: /etc/ssl/certs
        target: /etc/ssl/certs
      - type: bind
        source: /etc/ssl/private/
        target: /etc/ssl/private
      - type: bind
        source: /etc/nginx/.htpasswd
        target: /etc/nginx/.htpasswd
      - type: bind
        source: /etc/letsencrypt/
        target: /etc/letsencrypt/
    ports:
      - "80:80"
      - "443:443"
    networks:
      - elk
      - nginx
    links:
      - elasticsearch
      - kibana
  # Defining the ELK Stack! 
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    container_name: elasticsearch
    networks:
      - elk
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
      # - ./elk/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.3
    container_name: logstash
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      # This is the most important part of the configuration
      # This will allow Rails to connect to it. 
      # See application.rb for the configuration!
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  ipmask:
    image: myrepo/proxy:latest
    command: "npm start"
    restart: always
    environment:
      - "NODE_ENV=production"
    expose:
      - "5050"
    ports:
      - "4430:80"
    links:
      - app
    networks:
      - nginx


# # Volumes are the recommended storage mechanism of Docker. 
volumes:
  elasticsearch:
    driver: local
  rails:
    driver: local

networks:
    elk:
      driver: bridge
    nginx:
      driver: bridge

Thank you very much!

1 Answer

0 votes
/ April 27, 2018

Waaaaaaitttt. There was NO problem with the code!

The problem was that I was passing a bare IP address as the target without prepending http to it! Once I added http, everything works!! Apparently, without a scheme node-http-proxy cannot parse a host and port out of the target string, so the request falls back to 127.0.0.1:80, which is exactly the ECONNREFUSED shown above.

Example:

I was doing:

proxy.web(req, res, { target: '128.29.41.1', ws: true })

When in fact this was the answer:

proxy.web(req, res, { target: 'http://128.29.41.1', ws: true })
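
So if, like me, you pull the address from a database, it's worth normalizing it before handing it to proxy.web. A quick sketch (rawAddress stands for whatever the MySQL lookup returns):

// Prepend a scheme if the stored address doesn't already have one.
const target = rawAddress.startsWith('http') ? rawAddress : `http://${rawAddress}`;

proxy.web(req, res, { target, ws: true });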