• @xantoxis@lemmy.world
    132 points · 6 months ago

    Folks, the docker runtime is open source, and not even the only one of its kind. They won’t charge for that. If they tried to make it closed source, everyone would just laugh and switch to one of several completely free alternatives. They charge for hosting images, build time on their build servers, and various “premium” developer tools you don’t need. In fact, you need none of this, you can do all of it yourself on whatever hardware you deem to be good enough. There are also many other hosted alternatives out there.

    Docker thinks they have a monopoly, for some reason. If you use the technology, you are probably already aware that they don’t.

      • @cheet@infosec.pub
        7 points · 6 months ago

        The Windows container runtime is free as well: simply install the Docker runtime from Chocolatey or winget, along with the Windows Containers and Hyper-V Windows features. This is what we do on some build machines for CI.

        There’s no reason to use Docker Desktop other than “ease of use”.
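
        A rough sketch of that setup, from memory (package and feature names may need checking against your environment):

        ```powershell
        # Enable the Windows container features (reboot required)
        Enable-WindowsOptionalFeature -Online -FeatureName Containers -All
        Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

        # Install the engine and CLI from a package manager
        choco install docker-engine docker-cli
        # or: winget install Docker.DockerCLI
        ```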

        • TrumpetX
          2 points · 6 months ago

          There are some reasons. Networking can get messed up, so Docker Desktop “fixed that” for you, but the dirty secret is it’s basically a Linux VM with Docker CE and some convenience network routes.

          • @cheet@infosec.pub
            3 points · 6 months ago

            You’re talking about Linux containers on Windows. I think the commenter above was referring to Windows containers on Windows, which is its own special hell for lucky folks like me.

            Otherwise I totally agree. I’ve done both setups without Docker Desktop.

    • @Pieisawesome@lemmy.world
      2 points · 6 months ago

      One of the previous places I worked at had about a dozen outbound IP addresses (company VPN).

      We also had 10k developers who all used docker.

      We exhausted the rate limit constantly. They paid for an unlimited account, and we would just queue an automation that pulled the image and mirrored it into the local artifact repo.
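
      The mirroring job itself is simple; roughly this (the internal registry hostname here is made up):

      ```shell
      # Pull once from Docker Hub, then push into the internal artifact repo
      docker pull python:3.12-slim
      docker tag python:3.12-slim artifacts.internal.example/mirror/python:3.12-slim
      docker push artifacts.internal.example/mirror/python:3.12-slim
      ```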

    • @pop@lemmy.ml
      14 points · 6 months ago

      On Lemmy, it’s a sin to make money off your work, especially if it is opensource core projects providing paid infrastructure/support. You can only ask for donations and/or quit. No in-between.

    • @gencha@lemm.ee
      9 points · 6 months ago

      A single malfunctioning service that restarts in a loop can exhaust the limit near instantly. And now you can’t bring up any of your services, because you’re blocked.

      I’ve been there plenty of times. If you have to rely on docker.io, you better pay up. Running your own NexusRM or Harbor to proxy it can drastically improve your situation though.
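
      Pointing the daemon at such a pull-through proxy is a one-line change in /etc/docker/daemon.json (the mirror URL is a placeholder):

      ```json
      {
        "registry-mirrors": ["https://harbor.internal.example"]
      }
      ```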

      Docker is a pile of shit. Steer clear entirely of any of their offerings if possible.

      • @beerclue@lemmy.world
        5 points · 6 months ago

        I use docker at home and at work, and Nexus at work too. I really don’t understand… even a malfunctioning service should not pull the image over and over; there should be a cache… It could be some fringe case, but I have never experienced it.

        • @gencha@lemm.ee
          -1 points · 6 months ago

          Ultimately, it doesn’t matter what caused you to be blocked from Docker Hub due to rate-limiting. When you’re in that scenario, it’s most cost efficient to buy your way out.

          If you can’t even imagine what would lead up to such a situation, congratulations, because it really sucks.

          Yes, there should be a cache. But sometimes people force pull images on service start, to ensure they get the latest “latest” tag. Every tag floats, not just “latest”. Lots of people don’t pin digests in their OCI references. This almost implies wanting to refresh cached tags regularly. Especially when you start critical services, you might pull their tag in case it drifted.
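
          For illustration, the difference in a compose file (image and digest are just examples):

          ```yaml
          services:
            pihole:
              # floating tag: re-resolved on every pull, may drift
              # image: pihole/pihole:latest
              # pinned digest: immutable, no surprise refreshes
              image: pihole/pihole@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
          ```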

          Consider you have multiple hosts in your home lab, all running a good couple of services. You roll out that new container runtime upgrade to your network; it resets all caches and restarts all services. Some pulls fail. Some of them are for DNS and other critical services. Suddenly your entire network is down, and you can’t even get on the Internet, because your pihole doesn’t start. You can’t recover, because you’re rate-limited.

          I’ve been there a couple of times until I worked on better resilience, but relying on docker.io is still a problem in general. I did pay them for quite some time.

          This is only one scenario where their service bit me. As a developer, it gets even more unpleasant, and I’m not talking commercial.

      • Sir Aramis
        18 points · 6 months ago

        I second Podman. I’ve been using it recently and find it to be pretty good!

      • mosiacmango
        18 points · edited · 6 months ago

        Rancher is owned by SUSE, which is generally a solid steward in the community.

        They also have a Kubernetes-based platform called Harvester. It can run VMs directly, which is nice.

        • Scribbd
          5 points · 6 months ago

          Well, there is this one thing: they asked openSUSE to drop the SUSE branding…

          • bizarroland
            17 points · 6 months ago

            Which is fair. Fedora never called itself Red Hat. CentOS never called itself Red Hat.

            SUSE is a pretty good company and deserves the rights to its intellectual property and trademarks. openSUSE shouldn’t make a big deal out of simply changing its name.

            They could rename themselves to OpenSusame and keep rolling without any issues whatsoever.

            • @Petter1@lemm.ee
              5 points · 6 months ago

              Of course, but I still think it’s not very smart of SUSE, since I bet many companies went with SUSE because coworkers had very good experiences with openSUSE.

              If my company ever needed corporate Linux, I would recommend SUSE for exactly that reason.

    • @Nithanim@programming.dev
      1 point · 6 months ago

      I expose Docker via TCP in WSL and set the env var on the host to point to it. A bit more manual, but if you don’t need anything special, it works too.
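
      Roughly like this, if anyone wants to try it (the port and bind address are whatever you choose):

      ```shell
      # In WSL: expose the daemon over TCP, localhost only
      sudo dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375 &

      # On the Windows host (cmd): point the CLI at it
      set DOCKER_HOST=tcp://127.0.0.1:2375
      docker ps
      ```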

    • @treadful@lemmy.zip
      2 points · 6 months ago

      So does this set up something like a one-node Kubernetes cluster on your local machine? I didn’t know that was possible.

      • chameleon
        3 points · 6 months ago

        Basically yes. Rancher Desktop sets up K3s in a VM and gives you a kubectl, docker and a few other binaries preconfigured to talk to that VM. K3s is just a lightweight all-in-one Kubernetes distro that’s relatively easy to set up (of course, you still have to learn Kubernetes so it’s not really easy, just skips the cluster setup).

    • withtheband
      8 points · 6 months ago

      How is the transition from Docker to Podman? I’m using two compose scripts with like 10 containers each, and Portainer to comfortably restart stuff on the fly.

      • @Telodzrum@lemmy.world
        7 points · 6 months ago

        I can only provide my experience; it was a drop-in replacement. I have 7 services running and 3 db containers. I was able to migrate using the Podman official instructions without issue.

      • @Grass@sh.itjust.works
        4 points · 6 months ago

        From what I can gather, it’s currently recommended to use quadlets to generate systemd units to achieve what compose was doing. podman-compose is a thing, but IIRC it wasn’t a straight drop-in: I had to change the syntax or formatting a bit for it to work. From the brief testing I’ve put in, quadlets seem like less hassle. If you use a non-systemd distro, though, I don’t know.
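
        For anyone curious, a quadlet is just an INI file dropped into ~/.config/containers/systemd/; a minimal sketch (image and port are examples):

        ```ini
        # whoami.container -> becomes whoami.service after a daemon-reload
        [Container]
        Image=docker.io/traefik/whoami:latest
        PublishPort=8080:80

        [Install]
        WantedBy=default.target
        ```

        Then `systemctl --user daemon-reload` and `systemctl --user start whoami`.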

      • @mlg@lemmy.world
        2 points · edited · 6 months ago

        I’d say about 99% is the same.

        Two notable things that were different were:

        • Podman’s config file is different; I needed to edit where containers are stored, since I have a dedicated location I want to use.
        • The preferred method for running Nvidia GPUs in containers is CDI, which IMO is much more concise than Docker’s Nvidia GPU device setup.

        The second one is also documented on the NVIDIA Container Toolkit site, and it’s very easy to edit a compose file to use CDI instead.
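
        The CDI flow is roughly two steps (as documented by the toolkit; the image tag here is an example):

        ```shell
        # Generate the CDI spec once (and again after driver updates)
        sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

        # Request GPUs by CDI device name at run time
        podman run --rm --device nvidia.com/gpu=all \
          nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi
        ```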

        There’s also some small differences here and there like podman asking for a preferred remote source instead of defaulting to dockerhub.

    • @jim3692@discuss.online
      2 points · 6 months ago

      Docker is not only about dependency management. It also offers service “composing”, via docker compose, and network isolation for each service.

      Although I personally love Nix, and I run NixOS on some of my servers, I do not believe it can replace Docker/Podman. Unless you go the NixOS Containers route.
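
      To illustrate the isolation point, a compose sketch where only the proxy can reach both networks (names are made up):

      ```yaml
      services:
        proxy:
          image: nginx:alpine
          networks: [frontend, backend]
        db:
          image: postgres:16
          networks: [backend]   # not reachable from frontend

      networks:
        frontend:
        backend:
      ```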

      • @Wooki@lemmy.world
        0 points · 6 months ago

        Interfaces, VLANs, and a capable gateway. Except instead of the vendor lock-in, you have access to the gold standards, all of which scale out.

        • @jim3692@discuss.online
          1 point · 6 months ago

          I am trying to understand.

          Docker, which uses OCI containers that are supported by Docker, Podman, Containerd, systemd-nspawn, etc, is lock-in.

          But Nix Shells, which require Nix, are not lock-in.

          Also, how are you going to run Nix shells in VLANs? They run on the host’s network namespace.

  • @ipkpjersi@lemmy.ml
    10 points · 6 months ago

    Enshittification is a very, very real thing. GitLab did something similar a few years back, raising prices by 5x.

  • arthurpizza
    32 points · 6 months ago

    Hot take: Good for them.

    This will have zero impact on 99% of independent developers. Most small companies can move to an alternative or roll their own infrastructure. This will only really impact large corporations. I’m all for corporation-on-corporation violence. Let them fight.

    • @corsicanguppy@lemmy.ca
      7 points · 6 months ago

      This is a different take on the VMscare Broadcom purchase.

      The real losers here are SOHOs, where it’s too pricey to migrate and also too pricey not to. I don’t know whether that’s in your 1% or 99%, but:

      • devs don’t develop for infrastructure their customers don’t use. It’s as dead as LKC, then.
      • big customers have deprecated their VMware infra and are only spending on replacement products; if they do the same for Docker, the company will suffer within a year.

      If Docker doesn’t have the gov/mil revenue, are we prepared for the company shedding projects and people as it shrinks?

      Remember: when tech elephants fight, it’s we the grass who suffer.

  • @randon31415@lemmy.world
    10 points · 6 months ago

    Is this the program that open source people use to install all the random dependencies that their program needs to work? The one that people tell me to use when I complain about git bash pico sudo pytorch install commands?

    Or did another company copy their name?

    • @gsfraley@lemmy.world
      29 points · edited · 6 months ago

      I mean, they’re one implementor of about 10 that use the same container standards. It sucks that they were first so their name is now synonymous with containers a la Kleenex, but the technology itself is standard, very open and ubiquitous, and a huge step forward in simplifying deployments and development lifecycles that would otherwise be too complex to reasonably handle.

      • @sugar_in_your_tea@sh.itjust.works
        9 points · 6 months ago

        But it does in a lot of cases. At work, we use Docker images to bundle our dependencies for each microservice, and at home, I use Docker images for the same reason on my self-hosted repos. It’s fantastic for running servers in a sandbox so you don’t have to worry about what dependencies the host has.

        But perhaps OP is talking about flatpaks instead.
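
        e.g. the bundling amounts to a few lines of Dockerfile (the app layout here is made up):

        ```dockerfile
        FROM python:3.12-slim
        WORKDIR /app
        # dependencies live in the image, not on the host
        COPY requirements.txt .
        RUN pip install --no-cache-dir -r requirements.txt
        COPY . .
        CMD ["python", "-m", "myservice"]
        ```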

    • @gencha@lemm.ee
      3 points · 6 months ago

      Not having to install dependencies is a benefit of containers and their images. That’s a pretty big thing to miss. Maybe give it a closer look.

    • Kushan
      19 points · 6 months ago

      Our 200 developers all switched from docker desktop to rancher after Docker tried to jack up the price about a year and a half ago, along with a bunch of legal threats. Their attitude was so piss poor, we went from debating paying the higher fees to just fucking them off entirely.

      I will pay $4 per user to NOT use Docker.

      • @sugar_in_your_tea@sh.itjust.works
        3 points · 6 months ago

        Hmm, I might have to do that at work. We pay for Docker, but we don’t actually use any of the features from Docker, the service. We build our images locally, and production pulls from AWS ECR, yet we all have Docker Hub licenses because my boss felt like we should be paying for it.

        Docker works fine, but honestly, we don’t need it, and I have been considering eliminating Docker on my self-hosted stuff.
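
        For what it’s worth, the whole build-and-push flow needs nothing from Docker the service (account ID, region, and repo name are placeholders):

        ```shell
        aws ecr get-login-password --region us-east-1 \
          | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

        docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myservice:v1 .
        docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myservice:v1
        ```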

        • Kushan
          2 points · 6 months ago

          I love docker images, hate docker Inc.

      • @KellysNokia@lemmy.world
        3 points · 6 months ago

        That gives me an idea - managers can ask staff to learn the CLI and give them gift cards for what it would have cost to license the Docker Desktop client 🧠

  • katy ✨
    15 points · 6 months ago

    you didn’t need anything like docker with web 1.0; you just needed cuteftp and a text editor.