I am currently setting up a Proxmox box that has the usual self-hosted stuff (Nextcloud, Jellyfin, etc.), and I want each of these services in its own container/VM. I am planning to start sharing this with family/friends who are not tech-savvy, so I want excellent security.

I was thinking of restricting certain services to certain VLANs, and only plugging those VLANs into the CT/VMs that need them.

Currently, each CT/VM has a network interface (for example eth0) that gives it internet access (for updates and whatnot) and an interface that I use for SSH and management (for example eth1). These interfaces are on different VLANs, and I must use WireGuard to get onto the management network.

I am thinking of adding another interface just for “consumption”, which my users would reach via a separate WireGuard server and use to actually access the services.
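
Roughly what I have in mind for that user-facing WireGuard instance, as a sketch only (the file name, port, ULA prefix, and addresses are placeholders, and keys are elided):

    # /etc/wireguard/wg-users.conf on the "consumption" VPN gateway (sketch)
    [Interface]
    Address = fd00:30::1/64           # placeholder ULA on the consumption VLAN
    ListenPort = 51821                # separate port from the management server
    PrivateKey = <server-private-key>

    [Peer]
    # one peer block per family member
    PublicKey = <user-public-key>
    AllowedIPs = fd00:30::10/128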

I could also add another network just to connect to an internal NFS server to share files between CT/VMs, and this would have its own VLAN and require an additional interface per host that connects to it.
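
For example, the export on the NFS server would be scoped to just that storage VLAN's subnet (the path and ULA prefix are placeholders):

    # /etc/exports on the internal NFS server (sketch)
    # only reachable by hosts with an interface on the storage VLAN
    /srv/shared fd00:40::/64(rw,sync,no_subtree_check)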

I have lots of other ideas for networks which would require additional interfaces per CT/VM that uses them.

In my experience, using a “VLAN-aware” bridge and assigning VLANs per interface via the GUI is best practice. However, Proxmox does not support multiple VLANs per interface using this method.
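
For reference, this is the setup I mean: a single VLAN-aware bridge on the host, with exactly one tag per virtual NIC in each guest config (interface names, MACs, and VLAN IDs are just examples):

    # /etc/network/interfaces on the Proxmox host
    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

    # /etc/pve/qemu-server/<vmid>.conf -- one tag per interface
    net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0,tag=10   # internet VLAN
    net1: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr0,tag=20   # management VLAN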

I have an IPv6-only network, so I could theoretically assign multiple IPs per interface and then tag VLANs from within the guest OS using Linux VLAN interfaces. However, this is a huge pain and I do not want to do it. It is also less secure, because a compromised VM/CT could change its own VLAN tag.
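
To illustrate, the guest-side tagging I want to avoid would look something like this inside each VM/CT (the VLAN ID and address are placeholders):

    # inside the guest, on a trunked interface: the guest picks its own tag
    ip link add link eth0 name eth0.20 type vlan id 20
    ip -6 addr add fd00:20::5/64 dev eth0.20
    ip link set eth0.20 up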

I am asking whether adding many virtual interfaces per CT/VM is good practice, or whether there is a better way to separate internal networks. Or maybe I should rethink the whole thing and not use one network per use case.

I am especially curious about the performance impact of multiple interfaces.

  • Pyrosis@lemmy.world · 6 months ago

    I use Docker networks, but that's me. One is created for every service, and it's easy to target the gateway. Just make sure DNS is correct for your hostnames.
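
    As a sketch, a per-service network looks something like this (the names and subnet are just examples):

    # one isolated network per service; the gateway address is predictable
    docker network create --subnet 172.30.1.0/24 jellyfin-net
    docker run -d --name jellyfin --network jellyfin-net jellyfin/jellyfin
    # containers on jellyfin-net resolve each other by name via Docker's embedded DNS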

    Lately I’ve been optimizing remote services for reverse proxy passthrough. Did you know that it can momentarily break streams and make your proxy work a little harder if your hostnames don’t match outside and inside?

    In other words, if you want full passthrough of a TCP or UDP stream to your server, without the proxy breaking it and opening a new stream, you have to make sure the internal and external networks use the same FQDN for the service you are targeting.

    Passthrough via SNI can actually break if the two sides don’t use the same hostname, causing a slight delay. That kind of matters for things like streaming video, especially if you are using a reverse proxy and the service supports QUIC or HTTP/2.

    So a reverse proxy entry that simply passes the stream through, without breaking it and resending it, might look like…

    Obviously, in this example you would need to get the HTTPS port working on Jellyfin and have IPv6 working with internal DNS.

    server {
        listen 443 ssl;
        listen [::]:443 ssl;  # Listen on IPv6 address
    
        server_name jellyfin.example.net;
    
        ssl_certificate /path/to/ssl_certificate.crt;
        ssl_certificate_key /path/to/ssl_certificate.key;
    
        location / {
        proxy_pass https://jellyfin.example.net:8920;  # same FQDN inside and out
            ...
        }
    }
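
    And if you want the proxy to never terminate TLS at all, the nginx stream module can route on SNI without decrypting. A rough sketch of that alternative (it would replace, not sit beside, an http listener on 443; the resolver address and upstream/fallback ports are examples):

    stream {
        # route by SNI without terminating TLS (ngx_stream_ssl_preread_module)
        map $ssl_preread_server_name $backend {
            jellyfin.example.net  jellyfin.example.net:8920;
            default               127.0.0.1:8443;
        }

        server {
            listen 443;
            listen [::]:443;
            ssl_preread on;
            resolver [fd00:10::1];   # needed because proxy_pass uses a variable
            proxy_pass $backend;
        }
    }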
    
    • raldone01@lemmy.world · 4 months ago

      Full passthrough has no advantage when my reverse proxy terminates SSL and the internal services are HTTP-only, right?

      Regardless of the FQDN, nginx has to decrypt and re-stream anyway.

      • Pyrosis@lemmy.world · 4 months ago

        That would be correct if you are terminating SSL at the proxy and it’s just passing plain HTTP to the service. However, if you can enable SSL on the service itself, it’s possible to take advantage of full passthrough, if you care about such things.

        • raldone01@lemmy.world · 4 months ago

          Ah nice, good to know. For my use case I’d rather not distribute the certificates to all my services.

          • Pyrosis@lemmy.world · 4 months ago

            When I was experimenting with this, it didn’t seem like you had to distribute the cert to the service itself, as long as the internal service was listening on an HTTPS port. The certificate management was still happening on the proxy.

            The trick was more about getting the hostnames right and targeting the proxy for hostname resolution.

            Either way, IP addresses are much easier, but it is nice to observe a stream being passed straight through. I’m sure it takes a load off the proxy and stabilizes connections.
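
            For the hostname part, it was just internal DNS pointing the same FQDN at the proxy; with dnsmasq, for example (the address is a placeholder):

            # internal override: jellyfin.example.net resolves to the proxy inside
            address=/jellyfin.example.net/fd00:10::2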

  • Decronym@lemmy.decronym.xyz (bot) · edited · 4 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    DNS            Domain Name Service/System
    HTTP           Hypertext Transfer Protocol, the Web
    IP             Internet Protocol
    NFS            Network File System, a Unix-based file-sharing protocol known for performance and efficiency
    SSH            Secure Shell for remote terminal access
    SSL            Secure Sockets Layer, for transparent encryption
    nginx          Popular HTTP server

    6 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.

    [Thread #748 for this sub, first seen 15th May 2024, 03:15]