I see Docker mentioned in every other thread and was wondering how useful it is for non-development things, and if so, what those are.

  • @umbrella@lemmy.ml · 12 · 1 year ago (edited)

    It's a container system that saves you from dealing with interactions between server software and config files scattered everywhere, and it's also more secure and more portable.

    It lets you run many services on one server without conflicts, and redeploy any given service whenever needed.

    It's a bit counterintuitive to learn, but if you set things up right, it makes running a server plainly easier and almost maintenance-free.

  • @StrawberryPigtails@lemmy.sdf.org · 13 · 1 year ago

    For me, the advantage of Docker is that a random update to my system is unlikely to crash my self-hosted services. It simplifies setting up the services as well, but the biggest advantage is that it's generally more stable.

  • @JVT038@feddit.nl · 23 · 1 year ago

    Docker is a container manager, but that doesn’t say anything if you don’t know what containers are.

    Containers are basically isolated apps. For example, take something like Nextcloud. Nextcloud can run in a Docker container, which means that it runs in an isolated environment completely separated from the user’s system. If Nextcloud breaks, the user’s server won’t be affected at all, because it’s running isolated.

    Why is this useful? Well, because dependencies ship with the app and update along with it. Nextcloud, for example, depends on PHP, and if you install Nextcloud directly on your server, you'll need to ensure that PHP 8 is installed and set up properly. If PHP (or the required PHP extensions) isn't properly installed, Nextcloud won't work. And if a future Nextcloud update requires a newer version of PHP (PHP 9 or 10), you'll have to manually update PHP to that version.

    All that dependency management is completely gone with containers. The container image ships with a proper environment for the app already set up. So in the case of Nextcloud, the PHP binaries, extensions, and all the other stuff are included automatically, without you having to do anything at all. Just run one command and your entire Nextcloud instance is updated.
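    For instance, a minimal docker-compose.yaml for the official Nextcloud image might look like this (the tag, host port, and volume name are just illustrative choices):

    ```yaml
    # Sketch of a compose file; tag and port are examples, not recommendations.
    services:
      nextcloud:
        image: nextcloud:28          # PHP and every required extension ship inside this image
        ports:
          - "8080:80"                # reach Nextcloud on host port 8080
        volumes:
          - nextcloud_data:/var/www/html

    volumes:
      nextcloud_data:
    ```

    Updating then really is one step: bump the image tag and run `docker compose pull && docker compose up -d`.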

    • Clay_pidgin · 1 · 1 year ago

      How does the container know what’s safe to update? Nextcloud (in this example) may need to stay on a specific version of some package and updating everything would break it.

      • @atzanteol@sh.itjust.works · 7 · 1 year ago

        The Dockerfile used to build the container controls what is in the container. It’s “infrastructure as code”-like. You create a script that builds the environment the application needs.

        If you need a newer version of PHP you update the Dockerfile to include the new version. Then you publish the new container.
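        A toy example (the base image and extension are real ones from Docker Hub; the app itself is hypothetical):

        ```dockerfile
        # Hypothetical Dockerfile for a PHP app.
        # Upgrading PHP for every deployment = bump this one tag and rebuild.
        FROM php:8.2-apache

        # Extensions are baked into the image, not installed on the host.
        RUN docker-php-ext-install pdo_mysql

        COPY . /var/www/html
        ```

        `docker build -t myapp .` then produces the image that gets published.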

      • @brewery@lemmy.world · 3 · 1 year ago

        I only use Docker images supplied by the devs themselves, or community-maintained ones (e.g. linuxserver.io), so they essentially tell Docker what needs to be installed in the container, not me. It takes the hassle out of figuring out what I need to do to get the service running. If they update their app, they'll probably know best what else needs to be updated, and will do that in the image. I guess you're relying on them to keep everything updated, but they're way more knowledgeable than me, and if there is a vulnerability, it's confined to that container and doesn't touch your other services.

    • @tal@lemmy.today · 3 · 1 year ago

      Also, if server software running in a container gets compromised, hopefully the container can contain the compromise from spreading to the rest of the system.

      • @JVT038@feddit.nl · 1 · 1 year ago

        Depends.

        If there are no external volumes and the container is in its own network without any other containers, then any malware in the container shouldn’t be able to reach / affect the host server, because it’s isolated.

        • @evranch@lemmy.ca · 1 · 1 year ago

          Even with external volumes, is there any mechanism by which a container can escape a bind mount and affect the rest of the host fs? I don't think so. I use bind mounts all the time, far more than Docker volumes.
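          For illustration, a bind mount in compose form (the host path is made up) hands the container exactly one path and nothing else:

          ```yaml
          services:
            web:
              image: nginx:alpine
              volumes:
                # Bind mount: only /srv/www is visible inside the container,
                # and ':ro' makes even that read-only.
                - /srv/www:/usr/share/nginx/html:ro
          ```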

  • JoeCoT · 9 · 1 year ago

    So it’s always going to be used for technical things, but not necessarily development things. I use it for both.

    For my home server setup I have docker setup like this:

    1. A VPN docker container
    2. A transmission (bittorrent client) container, using the VPN’s network
    3. An nginx (web server) container, which provides access to the transmission container
    4. A 3proxy socks proxy container, using the VPN’s network
    5. A tor client container
    6. A 3proxy socks proxy container, using the tor container’s network

    Usually it's pretty hard to say "these specific programs, and only these, should run over my VPN". Docker makes that easy: I can just attach containers to the same network as my VPN container, and their traffic will all go over the VPN.

    Then, with my SOCKS proxies, I can selectively put my browser traffic over either the VPN or Tor, using extensions like FoxyProxy. I watch wrestling through my VPN because it's cheaper overseas and has better streaming options, so I have those specific sites set to route through my VPN SOCKS proxy. And I have all onion links set to go through my Tor proxy.
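    In compose terms, "attach to the VPN container's network" looks roughly like this (gluetun is just one popular VPN client image; provider settings and credentials are omitted):

    ```yaml
    services:
      vpn:
        image: qmcgaw/gluetun          # example VPN client container
        cap_add:
          - NET_ADMIN
        # (provider settings / credentials would go here)
      transmission:
        image: lscr.io/linuxserver/transmission
        network_mode: "service:vpn"    # shares the vpn container's network stack,
                                       # so all of transmission's traffic exits via the VPN
    ```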

    • @Amongussussyballs100@sh.itjust.works (OP) · 3 · 1 year ago

      This looks like an interesting project. Can the VPN container only route traffic from other containers, or can regular applications get their traffic routed through it too?

      • calm.like.a.bomb · 1 · 1 year ago (edited)

        The answer is yes in both cases.

        1. Docker has an internal networking setup. You can create a "network", and all containers in that network communicate with each other, but not with containers in other networks. So you can set up a VPN container in a network, and all containers in that network can route their traffic through the VPN.
        2. You can configure your VPN container to expose some ports that it uses to communicate, and then "regular applications" can use those ports to connect through the VPN.
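        Both points in one compose sketch (image names, network name, and port are illustrative):

        ```yaml
        services:
          vpn:
            image: qmcgaw/gluetun     # example VPN client container
            networks: [vpn_net]
            ports:
              - "1080:1080"           # point 2: a SOCKS port published to the host
                                      # so regular (non-container) apps can use it
          torrent:
            image: lscr.io/linuxserver/transmission
            networks: [vpn_net]       # point 1: shares a network with the vpn container

        networks:
          vpn_net: {}
        ```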
      • JoeCoT · 3 · 1 year ago

        I don't know of a good way to route other applications' traffic through the VPN container without them being in Docker containers themselves, unless you use some intermediary setup. That's why I have SOCKS proxies routed through the VPN, so I can selectively put traffic through it. If the app supports a SOCKS proxy you could do it that way. At the least you could use proxychains, if the program does TCP networking.

  • rentar42 · 8 · 1 year ago

    https://lemmy.world/post/12995686 was a recent question and most of the answers will basically be duplicates of that.

    One slight addition: "Docker" is just one implementation of OCI containers. It's the one that initially broke through with the hype, but you can just as easily use any other (Podman being a popular one), and basically all of the benefits that people ascribe to "Docker" apply to them as well.

    So you might (as I do) have some dislike for docker (the product) and still enjoy running containers.

  • frozen · 20 · 1 year ago (edited)

    I could go in-depth, but really, the best way I can describe my docker usage is as a simple and agnostic service manager. Let me explain.

    Docker is a container system. A container is essentially an operating system installation in a box. It’s not really a full installation, but it’s close enough that understanding it like that is fine.

    So what the service devs do is build a container (operating system image) with their service and all the required dependencies - and essentially nothing else (in order to keep the image as small as possible). A user can then use Docker to run this image on their system and have a running service in just a few terminal commands. It works the same across all distributions. So I can install whatever distro I need on the server for whatever purpose and not have to worry that it won’t run my Docker services. This also means I can test services locally on my desktop without messing with my server environment. If it works on my local Docker, it will work on my server Docker.

    There are a lot of other uses for it, like isolated development environments and testing applications using other Linux distro libraries, to name a couple, but again, I personally mostly just use it as a simple service manager.

    tldr + eli5 - App devs said “works on my machine”, so Docker lets them ship their machine.

    • Norah (pup/it/she) · 6 · 1 year ago (edited)

      So I can install whatever distro I need on the server for whatever purpose and not have to worry that it won’t run my Docker services.

      The one caveat to that is switching between something ARM-based like a Pi and an x86 server. Many popular services have ARM versions but not all do.

      Edit: In saying that, building your own image from source isn’t too complicated most of the time.

  • Kevin · 47 · 1 year ago

    Containers, the concept that Docker implements, lets app developers give a self-contained environment for distribution. For devs that means consistency in deployments across environments, which in turn means sysadmins can deploy each of these apps as fully isolated units.

    With that, you get really clean installs/updates/uninstalls, and your deployments get done with a well-defined, declarative definition file which can also handle multi service dependencies (a la Docker Compose/K8s)
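    For example, a two-service deployment declared in Docker Compose (the app image is hypothetical; postgres is the official image):

    ```yaml
    services:
      app:
        image: ghcr.io/example/app:1.4   # hypothetical application image
        depends_on:
          - db                           # multi-service dependency, declared rather than scripted
        ports:
          - "8000:8000"
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: changeme    # example only
        volumes:
          - db_data:/var/lib/postgresql/data

    volumes:
      db_data:
    ```

    Uninstalling cleanly is then `docker compose down`, which is what makes installs/updates/uninstalls so tidy.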

  • Cyber PingU · 1 · 1 year ago (edited)

    I don't get the question… Docker is awesome for developing, but for putting things into production too. It just saves you the hassle of configuring a virtual machine or server from scratch, since you can use prebuilt minimal images of the software you need. If you get in trouble, you can restore things more easily than on a whole compromised system. In the vast majority of cases, an update just consists of changing a tag inside a docker-compose.yaml file. You get resource optimisation versus virtual machines, and so on. I don't use Docker to develop at all; I use it for production. And when you don't need a service anymore, you can just delete it and the system stays clean, without orphan files.
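    The "update = change a tag" part, concretely (service name and tags are examples):

    ```yaml
    services:
      nextcloud:
        image: nextcloud:28   # was nextcloud:27 — bump the tag,
                              # then run `docker compose pull && docker compose up -d`
    ```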

  • Lemmy · 4 · 1 year ago

    Wondering too, since Docker has a non-root mode, is there a reason to use Podman?

    • Domi · 3 · 1 year ago

      They have a different architecture so it comes down to preference.

      Docker runs a daemon that you talk to in order to deploy your services. Podman has no daemon; you either use the podman command directly to deploy services, or use systemd to integrate them into your system.
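      With recent Podman, the systemd integration is a plain unit file (Quadlet, Podman 4.4+). A sketch, using an example nginx image and made-up names:

      ```ini
      # ~/.config/containers/systemd/web.container
      [Unit]
      Description=Example web container

      [Container]
      Image=docker.io/library/nginx:alpine
      PublishPort=8080:80

      [Install]
      WantedBy=default.target
      ```

      `systemctl --user daemon-reload && systemctl --user start web` then runs it, with no long-lived container daemon involved.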

  • @CbtB@lemmynsfw.com · 2 · 1 year ago

    In the context of self-hosting, it means easier, cleaner installs, and it keeps poorly packaged projects from interfering with one another.

    • Muddybulldog · 11 · 1 year ago

      “The thing with Docker is that people don’t want to learn how to use Linux and are buying into an overhyped solution”

      I stopped there. Thirty years of Linux experience here. You're a fool.

      • @TCB13@lemmy.world · -7 · 1 year ago

        Just look at the landscape around here and other "selfhosting" boards and you'll see what I'm saying.

          • @TCB13@lemmy.world · -7 · 1 year ago

            Your choice. You're the one believing that 100% of the people using Docker are as proficient as you and me and use it for the right reasons. Guess what: they don't.

            • @atzanteol@sh.itjust.works · 1 · 1 year ago

              Your choice, you’re the one believing that 100% of the people… Blah blah blah

              Don't be shitty. Telling somebody what they believe is shitty. Telling them they believe "100% of people do (anything)" is super shitty. And this whole shitty argument has nothing to do with Docker.

              Go be shitty elsewhere.

            • bjorney · 7 · 1 year ago

              “how dare they use the right tool for the job without taking the time to learn how to do it sub optimally first”

            • Muddybulldog · 5 · 1 year ago

              Are you clairvoyant? I’m curious as to how you are aware of what I believe, beyond what I stated; that you’re a fool.

  • @jws_shadotak@sh.itjust.works · 1 · 1 year ago

    Aside from the technical explanation that others have given, here’s how I use Docker:

    MeTube to rip videos and stuff easily. Just plug in a link and most times it’ll work. Here’s a list of all the supported sites.

    I use Sonarr/Radarr and qBittorrent with gluetun to search for and download TV and movies that I watch on Plex.

    I host my own Immich server that will automatically back up my photos from my phone just like Google Photos, except I own it all and it’s all kept private. It has its own machine learning and facial recognition, so I can search for “dog” and get all the pictures of my dogs, or I can search by person.

    I use Docker for all this because the images come in little prepackaged containers. It’s super easy to get into once you figure out some of the basics.

    Another great benefit of these containers is that you can transfer it to another system if needed. Just copy the config and data over to the new system and point the container in the right direction and it’ll pick up where it left off.

  • @sabreW4K3@lazysoci.al · 6 · 1 year ago

    The thing with self-hosting is that, in most cases, you want to set and forget, and that means you want as little going wrong as possible. To ensure that, you need a way to make sure other things can't fuck with what you're hosting. That's what a container gives you. The trade-off is disk space, but that's okay because it's a server, unlike on a desktop (but let me not start my rant about the stupidity of Snap and Flatpak). Anyway: thanks to containers, you don't have any external factors and basically everything runs in its own world, which means you can always delete, restore and edit without anything else being affected.