About a year ago I switched to ZFS for Proxmox so that I wouldn’t be running a technology preview.

Btrfs gave me no issues for years and I even replaced a dying disk without any problems. I use RAID 1 for my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can’t downgrade the kernel, and the performance on my hardware is abysmal: I get only around 50–100 MB/s versus the several hundred I would get with btrfs.

Any reason I shouldn’t go back to btrfs? There seems to be a community fear of btrfs eating data or throwing unexplainable errors. That is sad to hear, as btrfs has had lots of time to mature over the last 8 years. I would never have considered it 5–6 years ago, but now it seems like a solid choice.

Anyone else pondering the switch, or already using btrfs?

  • @ikidd@lemmy.world · 0 points · 4 months ago

    The btrfs RAID subsystem hasn’t been fixed and is still buggy, and it does weird shit on scrubs. But fill your boots, it’s your data.

    • Possibly linux (OP) · -1 point · 4 months ago

      Why?

      I already take backups but I’m curious if you have had any serious issues

      • @horse_battery_staple@lemmy.world · 2 points · 4 months ago

        Are you backing up files from the FS or are you backing up the snapshots? I had a corrupted journal from a power outage that borked my install, and I could not get to the snapshots on boot. I booted into a live disk and recovered the snapshot that way. It would have taken hours to restore from a standard backup; restoring the snapshot took minutes.

        If you’re not backing up BTRFS snapshots and just backing up files you’re better off just using ext4.

        https://github.com/digint/btrbk
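
        In case it’s useful, what btrbk automates under the hood is basically read-only snapshots plus incremental send/receive. A rough manual sketch (paths and dates are just examples):

          # take a read-only snapshot of the subvolume you care about
          btrfs subvolume snapshot -r / /.snapshots/root-2024-01-01

          # first backup: full send to another btrfs filesystem
          btrfs send /.snapshots/root-2024-01-01 | btrfs receive /mnt/backup

          # later backups: incremental send against the previous snapshot
          btrfs send -p /.snapshots/root-2024-01-01 /.snapshots/root-2024-01-08 \
            | btrfs receive /mnt/backup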

  • @cmnybo@discuss.tchncs.de · 60 points · 4 months ago

    Don’t use btrfs if you need RAID 5 or 6.

    The RAID56 feature provides striping and parity over several devices, same as the traditional RAID5/6. There are some implementation and design deficiencies that make it unreliable for some corner cases and the feature should not be used in production, only for evaluation or testing. The power failure safety for metadata with RAID56 is not 100%.

    https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices
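
    If you’re not sure which profiles an existing filesystem is using, something like this should show it (mountpoint is just an example):

      # shows the data/metadata/system profiles in use (single, RAID1, RAID5, ...)
      btrfs filesystem df /mnt/pool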

    • @Anonymouse@lemmy.world · 2 points · 4 months ago

      I’ve got RAID 6 at the base level, LVM for partitioning, and an ext4 filesystem on top for a k8s setup. Based on this, btrfs doesn’t provide me with any advantages that I don’t already have at a lower level.

      Additionally, for my system, btrfs uses more bits per file or something, such that I was running out of disk space vs ext4. Yeah, I can go buy more disks, but I like to think that I’m running at peak efficiency, using all the bits, with no waste.

      • @sugar_in_your_tea@sh.itjust.works · 4 points · 4 months ago

        btrfs doesn’t provide me with any advantages that I don’t already have at a lower level.

        Well yeah, because it’s supposed to replace those lower levels.

        Also, BTRFS does provide advantages over ext4, such as snapshots, which I think are fantastic since I can recover if things go sideways. I don’t know what your use-case is, so I don’t know if the features BTRFS provides would be valuable to you.

        • @Anonymouse@lemmy.world · 1 point · 4 months ago

          Generally, if a lower level can do a thing, I prefer to have the lower level do it. It’s not really a reason, just a rule of thumb. I like to think that the lower level is more efficient at doing the thing.

          I use LVM snapshots to do my backups. I don’t have any other reason for it.
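
          In case it helps, the flow is roughly this (volume group, names, and sizes are just placeholders):

            # temporary snapshot of the logical volume
            lvcreate --snapshot --name backup-snap --size 10G vg0/root

            # mount it read-only and copy the data off
            mount -o ro /dev/vg0/backup-snap /mnt/snap
            rsync -a /mnt/snap/ /mnt/backup/root/

            # clean up
            umount /mnt/snap
            lvremove -y vg0/backup-snap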

          That all being said, I’m using btrfs on one system and if I really like it, I may migrate to it. It does seem a whole lot simpler to have one thing to learn than all the layers.

          • @jj4211@lemmy.world · 1 point · 4 months ago

            Actually, the lower level may likely be less efficient, due to being oblivious about the nature of the data.

            For example, a traditional RAID1 mirror on creation immediately starts a rebuild across all the potential data capacity of the storage, without a single byte of actual data written. So you spend an entire drive wipe making “don’t care” bytes redundant.

            Similarly, for snapshotting, it can only track dirty blocks. So when you replace uninitialized data that means nothing with actual data, the snapshot layer is compelled to back up that uninitialized data, because it has no idea whether the blocks being replaced were uninitialized junk or real stuff.

            There are some mechanisms, in theory and in practice, to convey a bit of context to the block layer, but broadly speaking, by virtue of being a mostly oblivious block layer, you have to resort to the most naive and often inefficient approaches.

            That said, block capacity is cheap, and doing things at the block level can be done in a ‘dumb’ way, which may be easier for an implementation to get right, versus a more clever approach with a bigger surface for mistakes.

            • @Anonymouse@lemmy.world · 1 point · 4 months ago

              Those are some good points. I guess I was thinking about the hardware. At least where I do RAID, it’s on the controller, so that offloads much of the parity checking and such to the controller and not the CPU. It’s all probably negligible for the apps that I run, but my hardware is quite old, so maybe trying to squeeze all the performance I can is a worthwhile activity.

          • @sugar_in_your_tea@sh.itjust.works · 1 point · 4 months ago

            Yup, I used to use LVM, but the two big NAS filesystems have a ton of nice features and they expect to control the disk management. I looked into BTRFS and ZFS, and since BTRFS is native to Linux (some of my SW doesn’t support BSD) and I don’t need anything other than RAID mirror, that’s what I picked.

            I used LVM at work for simple RAID 0 systems where long-term uptime was crucial, hardware swaps wouldn’t likely happen (these were treated like IoT devices), and snapshots weren’t important. It works well. But if you want extra features (file-level snapshots, compression, volume quotas, etc.), BTRFS and ZFS make that way easier.
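
            For what it’s worth, a two-disk BTRFS mirror is about as simple as it gets; a sketch, with example device names:

              # mirror both data and metadata across two disks
              mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
              mount /dev/sda /mnt/pool

              # periodic scrub to detect (and repair from the mirror) silent corruption
              btrfs scrub start /mnt/pool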

            • @Anonymouse@lemmy.world · 2 points · 4 months ago

              I am interested in compression. I may give it a try when I swap out my desktop system. I did try btrfs in its early, post-alpha stage, but found that the support was not ready yet. I think I had a VM system that complained. It is older and more mature now, and maybe it’s worth another look.
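
              If I do try it, my understanding is that compression is just a mount option plus an optional recompress of existing data, something like this (UUID and mountpoint are placeholders, and the zstd level is a guess at a sane default):

                # /etc/fstab entry: transparent zstd compression for new writes
                UUID=<fs-uuid>  /data  btrfs  compress=zstd:3,noatime  0  0

                # recompress existing files in place
                btrfs filesystem defragment -r -czstd /data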

      • @dogma11@lemmy.world · 1 point · 4 months ago

        I’ve been running a btrfs storage array with data on RAID 5 and metadata on (I believe) RAID 1 for the last 5 or so years and have yet to have a problem because of it. I did unfortunately learn not to fully trust the Windows btrfs driver, but I was able to restore from backups and by redownloading.

        I wouldn’t hesitate to set it up again for myself or anybody else, and adding a UPS is icing on the cake. (I added a UPS to my setup this last summer.)
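
        For anyone wanting the same layout, the data/metadata profile split is set at mkfs time or with a balance; roughly like this (devices and mountpoint are examples):

          # new array: parity for data, mirrored metadata
          mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd

          # or convert an existing filesystem's profiles in place
          btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/array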

  • exu · 16 points · 4 months ago

    Did you set the correct block size for your disk? Modern SSDs especially like to pretend they have 512B sectors for compatibility reasons, while the hardware can only do 4k sectors. Make sure to set ashift=12.
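
    You can check what the drive reports versus what the pool uses; ashift is fixed when a vdev is created, so it has to be set at pool creation time (pool and device names below are examples):

      # physical vs. logical sector size as reported by the drive
      lsblk -o NAME,PHY-SEC,LOG-SEC

      # what an existing pool was created with
      zdb -C rpool | grep ashift

      # creating a new pool with 4k sectors forced
      zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb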

    Proxmox also uses a very small volblocksize by default. This mostly applies to RAIDz, but try using a higher value like 64k. (Default on Proxmox is 8k or 16k on newer versions)

    https://discourse.practicalzfs.com/t/psa-raidz2-proxmox-efficiency-performance/1694
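
    On Proxmox the zvol block size comes from the storage definition and only affects newly created disks; if I remember right it’s the blocksize option of the zfspool storage, something like:

      # /etc/pve/storage.cfg (existing zvols keep their old volblocksize)
      zfspool: local-zfs
              pool rpool/data
              content images,rootdir
              blocksize 64k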

    • @randombullet@programming.dev · 3 points · 4 months ago

      I’m thinking of bumping mine up to 128k since I do mostly photography and videography, but I’ve heard that 1M can increase write speeds but decrease read speeds?

      I’ll have a RAIDZ1 and a RAIDZ2 pool for hot storage and warm storage.
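
      If those are normal datasets rather than zvols, my understanding is the knob is recordsize, and it only applies to files written after the change (dataset names are examples):

        # large records for big sequential media files
        zfs set recordsize=1M tank/photos

        # check the current value
        zfs get recordsize tank/photos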

  • @tripflag@lemmy.world · 3 points · 4 months ago

    Not Proxmox-specific, but I’ve been using btrfs on my servers and laptops for the past 6 years with zero issues. The only times it has bugged out were due to bad hardware, and having the filesystem shout at me to make me aware of that was fantastic.

    The only place I use ZFS is my NAS data drives (since I want raidz2, and btrfs raid5 is hella shady), but the NAS rootfs is btrfs.

  • poVoq · 1 point · 4 months ago

    I have been using btrfs in RAID 1 for a few years now with no major issues.

    It’s a bit annoying that a system with a degraded RAID doesn’t boot up without manual intervention, though.
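
    The manual intervention boils down to allowing a degraded mount from a rescue shell (or temporarily from the bootloader), e.g.:

      # one-off mount of a degraded RAID1 (device name is an example)
      mount -o degraded /dev/sda2 /mnt

      # or add rootflags=degraded to the kernel command line for a single boot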

    Also, not sure why, but I recently broke a system installation on btrfs by taking out the drive and accessing it (and writing to it) from another PC via a USB adapter. But I guess that is not a common scenario.

    • @blackstrat@lemmy.fwgx.uk · 1 point · 4 months ago

      The whole point of RAID redundancy is uptime. The fact that btrfs doesn’t boot with a degraded array is utterly ridiculous and speaks volumes about the developers.

  • @SendMePhotos@lemmy.world · 2 points · 4 months ago

    I run it now because I wanted to try it. I haven’t had any issues. A friend recommended it as a stable option.

    • Possibly linux (OP) · 0 points · 4 months ago

      Btrfs RAID 5 and RAID 6 are unstable and dangerous.

      Bcachefs is cool, but it is way too new and isn’t even part of the kernel as of yet.

        • Possibly linux (OP) · 0 points · 4 months ago

          I thought it was then removed later, as there was a disagreement between Linus and the bcachefs dev.

          • bruhduh · 1 point · 4 months ago

            Yeah, I remember something like that. I don’t remember exactly which kernel version it was when they removed it.

  • Brownian Motion · 5 points · 4 months ago

    My setup is different from yours, but not totally different. I run ESXi 8, and I started to use BTRFS on some of my VMs.

    I had a power failure that lasted longer than the UPS could handle. Most of the systems shut down safely; a few VMs did not. All of the ext4 VMs were easily recovered (as was another one that was XFS). Two of the BTRFS systems crashed into a non-recoverable state.

    Nothing I could do would fix them; they were just toast. I had no choice but to recover using backups. This made me highly aware that BTRFS is still not a reliable FS.

    I am migrating everything from BTRFS to something more stable and reliable like EXT4. It’s simply not worth the headache.

      • Brownian Motion · 2 points · 4 months ago

        It was only a few weeks ago (maybe 4). Systems are all kept up to date with Ansible. Most are Debian, but there are a few Ubuntu. The two that failed were both Debian.

        Granted, both that failed had high [virtual] disk usage compared to the other VMs. I can’t remember the exact failure now, but lots of searching confirmed that it was likely unrecoverable (they could boot, but only into read-only mode). None of the btrfs check “dangerous” commands could recover it; they spat out tons of errors about mismatched somethings (again, I’ve forgotten the error).
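
        For the record, the usual recovery sequence I could find looked roughly like this (device and paths are examples), and none of it got those two VMs back to read-write:

          # try mounting read-only with the backup tree roots
          mount -o ro,rescue=usebackuproot /dev/sdb1 /mnt

          # copy whatever is still readable off the broken filesystem
          btrfs restore /dev/sdb1 /mnt/rescued/

          # last resort only; can make things worse
          btrfs check --repair /dev/sdb1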

  • Domi · 16 points · 4 months ago

    btrfs has been the default file system for Fedora Workstation since Fedora 33, so there’s not much reason not to use it.

  • Suzune · 7 points · 4 months ago

    The question is: how do you get such bad performance with ZFS?

    I just tried to read a large file and it gave me uncached 280 MB/s from two mirrored HDDs.

    The fourth run (obviously cached) gave me over 3.8 GB/s.
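
    For anyone who wants to compare: it’s just a sequential read of a big file with dd; the first run after a reboot (or pool export/import) comes from the disks, repeat runs come from the ARC (file path is an example):

      # uncached on the first run, served from ARC on repeats
      dd if=/tank/media/bigfile.mkv of=/dev/null bs=1M status=progress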

    • Possibly linux (OP) · -2 points · edited · 4 months ago

      I have never heard of anyone getting those speeds without dedicated high-end hardware.

      Also, writes will always be your bottleneck.

        • Possibly linux (OP) · 1 point · 4 months ago

          How much RAM, and what is the drive size?

          I suspect this could also be an issue with SSDs. I have seen a lot of posts describing similar performance on SSDs.

                • Possibly linux (OP) · 2 points · edited · 4 months ago

                  From the Proxmox documentation:

                  As a general rule of thumb, allocate at least 2 GiB Base + 1 GiB/TiB-Storage. For example, if you have a pool with 8 TiB of available storage space then you should use 10 GiB of memory for the ARC.

                  I changed the ARC size on all my machines to 4 GB and it runs a bit better; I am getting much better performance. I thought I had changed it earlier, but I hadn’t regenerated the initramfs so it didn’t apply. I am still having issues with VM transfers locking up the cluster, but that might be fixable by tweaking some settings.
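
                  For reference, the change is just the zfs_arc_max module parameter, and on a ZFS root it only takes effect after the initramfs is rebuilt (the 4 GiB value below matches what I set):

                    # /etc/modprobe.d/zfs.conf - cap the ARC at 4 GiB (value in bytes)
                    options zfs zfs_arc_max=4294967296

                    # rebuild the initramfs and reboot so it actually applies
                    update-initramfs -u -k all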

                  16GB might be overkill or underkill depending on what you are doing.

      • @stuner@lemmy.world · 2 points · edited · 4 months ago

        I’m seeing very similar speeds on my two-HDD RAID1. The computer has an AMD 8500G CPU but the load from ZFS is minimal. Reading / writing a 50GB /dev/urandom file (larger than the cache) gives me:

        • 169 MB/s write
        • 254 MB/s read
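
        That was roughly this test, if you want to compare numbers directly (the path is a placeholder; the size matches the 50 GB above):

          # write: 50 GiB of incompressible data, flushed to disk at the end
          dd if=/dev/urandom of=/tank/test.bin bs=1M count=51200 conv=fdatasync status=progress

          # read it back after a reboot so the ARC is cold
          dd if=/tank/test.bin of=/dev/null bs=1M status=progress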

        What’s your setup?

        • Possibly linux (OP) · 1 point · 4 months ago

          Maybe I am CPU-bottlenecked. I have a mix of i5-8500 and i7-6700K machines.

          The drives are a mix, but I get almost the same performance across machines.

          • @stuner@lemmy.world · 2 points · 4 months ago

            It’s possible, but you should be able to see it quite easily. In my case, the CPU utilization was very low, so the same test should also not be CPU-bottlenecked on your system.

            • Possibly linux (OP) · 0 points · 4 months ago

              Is your machine part of a cluster by chance? If so, what performance do you see when you do a VM transfer?

      • Suzune · 4 points · edited · 4 months ago

        This is an old PC (Intel i7-3770K) with 2 HDDs (16 TB) attached to the onboard SATA3 controller, 16 GB RAM, and 1 SSD (120 GB). Nothing special. And it’s quite busy, because it’s my home server with a VM and containers.