Hello,

I’m currently using MinIO as an easy database for serving my images. To keep things simple, everything is set to public, so you can access any image directly just with its URL. While that works great for my website, making everything public means anyone can easily see ALL the images. So my question is: what is the best way to set up my Node.js app as a proxy? Does it mean going through the full S3 protocol hell mess, or is there a simpler solution?

PS: I have a lot of images, so storing them all inside the Node app is not possible.

    • Scrubbles@poptalk.scrubbles.tech · 6 months ago

      Very cool! Thanks for posting this. Minio was great, but they started tailoring to enterprise clients, and it’s become more and more annoying to keep it running in a homelab. (Security is 100% a great thing, but forcing high levels of security on me when I’m running 2 containers in a compose stack, where the minio container will never have exterior access… eh, I just gave up.) So I’m happy there’s one tailored a bit more towards self-hosters.

    • corsicanguppy@lemmy.ca · 6 months ago

      We ship a single dependency-free binary that runs on all Linux distributions

      It’s like 20 years of security awareness vanished in an instant.

        • julianwgs@discuss.tchncs.de · 6 months ago

          Dependency-free doesn’t mean they don’t have dependencies; it just means they bundle them all into the executable. When there is a security vulnerability in a library on your Linux system, the vendor of your distribution (Canonical, Red Hat, SUSE) takes care of fixing it, and all dependent software and libraries are then fixed as well. All, did I say? Not the ones that have been bundled into an executable. First someone needs to find out that you are affected, and then the maintainer has to update the dependency manually. Often they can only do this after a coordinated release of the fix by the major distributors, which can leave you vulnerable no matter how fast the maintainer is. This is how it works on Windows. (This was a short summary.)

            • julianwgs@discuss.tchncs.de · 5 months ago

              Yes, in the sense that you are responsible for updating the Docker container, and often this leads to vulnerable containers. No, in the sense that it is much easier to scan for dependencies inside a Docker container and identify vulnerabilities. Also, most containers are based on a Linux distribution, so those distribute the security fixes for specific libraries; all you have to do is update the base image.