I am very new to using Docker. I have been used to using dedicated VMs and hosting the applications within the server's OS.

When hosting multiple applications/services that require the same port, is it best practice to spin up a whole new Docker host, or how should I go about resolving the conflicts?

I.e., hosting multiple web applications that all use port 443.

Thank you!

  • Haui@discuss.tchncs.de · 1 year ago

    Hi there,

    That's an interesting question. I suppose it depends on what you need to do.

    If you can, remap the ports in the run command with -p 4430:443, or in the compose file with:

    ports:
      - "4430:443"

    Then you tell the apps that need this port to use the new one instead.

    That's the easiest solution I know of.
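
    For example, a compose sketch along these lines (the service names and images are just stand-ins) lets two apps both listen on 443 inside their containers while the host exposes them on different ports:

    services:
      app1:
        image: nginx:alpine      # stand-in for your first web app
        ports:
          - "4430:443"           # host port 4430 -> container port 443
      app2:
        image: nginx:alpine      # stand-in for your second web app
        ports:
          - "4431:443"           # host port 4431 -> container port 443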

    If you want a more elegant solution, you can use custom domains with a reverse proxy like NPM (Nginx Proxy Manager).

    You spin up NPM, start defining hosts like cloud.yourhomedomain.com, and point those at it via your DNS if possible (your router, or in my case Pi-hole).
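
    A minimal compose sketch for NPM might look something like this (the image and ports follow its standard setup; the volume paths are just placeholders):

    services:
      npm:
        image: jc21/nginx-proxy-manager:latest
        restart: unless-stopped
        ports:
          - "80:80"      # HTTP
          - "443:443"    # HTTPS
          - "81:81"      # admin web UI
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt

    You then create the proxy hosts in the web UI on port 81 and point each domain at the right container and port.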

    Docker is a universe unto itself and you can invest hundreds of hours and still feel like a noob (good game mechanic btw, easy to get into, hard to master).

    Hit me up if you need more info. Get familiar with Stack Overflow and the like, because you will need them. :)

    Good luck

    • EliteCow@lemmy.dbzer0.com (OP) · 1 year ago

      Thanks a ton! I did not realize you could have a different listening port vs. the internally used port.

      I have done what you mentioned and used a random port internally while keeping 443 as the listening port. I am then using Caddy to reverse proxy the traffic.
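
      Roughly something like this in the Caddyfile (the hostnames and ports here are just placeholders):

      cloud.example.com {
          reverse_proxy 127.0.0.1:4430
      }

      wiki.example.com {
          reverse_proxy 127.0.0.1:4431
      }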

      Thanks again!

      • PupBiru@kbin.social · 1 year ago

        if you’re only going to be using those services through the proxy, it can also be a useful security upgrade to not forward their ports at all, and run caddy inside docker to connect to them directly!

        if you forward the ports (without firewalling them), people can connect to them directly which can be a security risk (for example, many services require a proxy to add the x-forwarded-for header to show which IP address originally made the request… if users can access the service directly, they can add this header themselves and make it appear as though they came from anywhere! even 127.0.0.1, which can sometimes bypass things like admin authentication)
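
        for example, someone hitting an exposed backend port directly could spoof that header themselves (the hostname and port here are made up):

        # pretend the request was forwarded from localhost
        curl -H "X-Forwarded-For: 127.0.0.1" http://yourserver:8443/admin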

        • EliteCow@lemmy.dbzer0.com (OP) · 1 year ago

          Thank you! Just to clarify - I should only forward 443 & 80 for Caddy. Then in the Caddy config define the ports within the reverse proxy. Is that correct?

          How safe/secure is it to host a public website or services like a Lemmy instance doing this?

          For services I don't need to be available outside of my network, I am not adding them to Caddy and am accessing them directly via internal IP.

          • PupBiru@kbin.social · 1 year ago

            so what you ideally want is people to ONLY be able to access your backend service through caddy, so caddy should be the only one with ports publicly accessible, yes

            caddy running in the same docker network as your services can talk to those services on their original ports; they don't need to even be mapped to the host! in this case, you have 3 containers: caddy, service 1, service 2… caddy is the only one that needs to have ports forwarded, and you can just forward caddy:443 and no need to worry! then caddy can talk directly to service1:80 or service2:443 (docker containers show up to other docker containers by their container name! so if you run e.g. docker run … --name lemmy, then caddy in the same docker network would be able to connect to http://lemmy:80!)
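
            as a rough compose sketch of that setup (the images and service names are just stand-ins):

            services:
              caddy:
                image: caddy:2
                ports:
                  - "80:80"
                  - "443:443"        # caddy is the only container published to the host
                volumes:
                  - ./Caddyfile:/etc/caddy/Caddyfile
              service1:
                image: nginx:alpine  # stand-in for your first app; note: no ports: section at all
              service2:
                image: nginx:alpine  # stand-in for your second app

            since they share the default compose network, the Caddyfile can then just say reverse_proxy service1:80 and so on.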

            … but if you forward say service 1 and 2 on :8443 and :9443 (without a firewall, and even with one it makes me uncomfortable - that's 1 step away from a subtle security problem), someone would be able to access <yourserver>:8443, right? so they don't have to go through caddy to get to the backend service… for some services, that can be a big deal in ways that are difficult to understand, so it's best to just not allow it if possible

            an alternative is to make sure your services are firewalled so that nobody from the internet can hit them, but caddy still can… but i like this less, because it’s less explicit what’s happening so it’s easier to forget about

            • EliteCow@lemmy.dbzer0.com (OP) · 1 year ago

              Thank you for all of this info. 443 is now my only open port and it directs to my Caddy server. For extra security, I'm going to look into implementing an authentication portal in front of each backend service that isn't meant to be public to everyone.
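
              One simple option for that seems to be Caddy's built-in basicauth directive (the hostname and hash below are placeholders; the hash would come from caddy hash-password):

              admin.example.com {
                  basicauth {
                      # username and bcrypt hash generated with: caddy hash-password
                      alice <bcrypt-hash-here>
                  }
                  reverse_proxy service1:80
              }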

    • earthling@kbin.social · 1 year ago

      This is the correct answer.

      I run several containers that serve HTTP/S and they obviously can't all use 80/443. Just adjust the left (host) side of that port setting and you're good.
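
      For example (nginx:alpine is just a stand-in image; only the left-hand host port changes per container):

      docker run -d -p 8080:80 nginx:alpine   # host 8080 -> container 80
      docker run -d -p 8081:80 nginx:alpine   # host 8081 -> container 80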

      That plus a reverse proxy for offering these services up over the public internet, if you choose to do so, is a killer pair.

      • Haui@discuss.tchncs.de · 1 year ago

        One addition to this: I actually run a reverse proxy in my private setup as well, since I have highly sensitive data on there. Even if you're not opening services up to the internet, a reverse proxy works wonders. :)