DefederateLemmyMl

  • Gen𝕏
  • Engineer ⚙
  • Techie 💻
  • Self hoster 🖧
  • Linux user 🐧
  • Ukraine supporter 🇺🇦
  • Pro science 💉
  • Dutch speaker
  • 1 Post
  • 350 Comments
Joined 2 years ago
Cake day: August 8th, 2023

  • Yes, I get your point, and I paused for a second to consider whether I should really use the word “guarantee”, because sometimes development just stops on software, regardless of license.

    The thing is, if development stops on proprietary software, the project is truly dead. With FOSS it can always be revived by someone with enough interest in the software because the codebase is freely available. So instead of being dead, it’s merely “in hibernation”.

    A good example is the Amarok music player that I used in KDE 15 years ago. It stopped being maintained around 2011 and fell into disuse, until last year some people picked up the code, cleaned it up and ported it to Qt5, and now it’s being actively maintained again.




  • In many cases there’s no extra wear

    You can’t change physics. More HP = more torque = more wear on the whole drivetrain. Also, more boost = more stress on the turbo = it will fail sooner.

    Also, back then, cars with the higher-specced variant of the “same” engine almost always had mechanical upgrades compared to the lower-specced variant: usually bigger brakes, a stronger clutch, and various other drivetrain components.

    So while in many cases you could chip your car without much immediate harm, you were definitely cutting into various safety margins determined by automotive engineers who know much better than you and me.




  • I think the problem stems from how LLMs are marketed to, and perceived by, the public. They are not marketed as: this is a specific application of this-or-that AI or ML technology. They are marketed as “WE HAVE AI NOW!”, and the general public, which is not familiar with AI/ML technologies, equates this to AGI, because that’s what they know from the movies. The promotional imagery that some of these companies put out, with humanoid robots that look like they came straight out of Ex Machina, doesn’t help either.

    And sure enough, upon first contact, an LLM looks like a duck and quacks like a duck … so people assume it is a duck, but they don’t realize that it’s a cardboard model of a duck with a tape recorder inside that plays back quacking sounds.


  • LLMs are decent with coding tasks if you know what you’re doing

    Only if the thing you are trying to do is commonly used and well documented, but in that case you could just read the documentation instead and learn a thing yourself, right?

    The other day I tried to get some instructions on how to do something specific in a rather obscure and opaquely documented CLI tool that I need for work. I couldn’t quite make sense of the documentation, and I found the program’s behavior a bit erratic, so I turned to AI. It cheerfully and confidently told me (I’m paraphrasing): oh, to do “this specific thing” you have to use the --something-specific switch, and then it gave some command-line examples using that switch that looked like they made complete sense.

    So I thought: oh, did I overlook that switch? Could it be that easy? So I looked in the documentation and sure enough… the AI had been bullshitting me and that switch didn’t exist.

    Then there was the time when I asked it to generate an ARM template (again, poorly documented bullshit) to create some service in Azure with some specific parameters. It gave me something that looked like an ARM template, but sure as hell wasn’t a valid one. This one wasn’t completely useless though: at least I was able to cross-reference it with an existing template and, with some trial and error, copy over some of the elements that I needed.
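    Not a fix for the underlying problem, but a habit that catches this class of hallucination quickly: grep the tool’s own help output for the suggested switch before building a command line around it. A minimal sketch (`mytool` and `--something-specific` are placeholders standing in for the unnamed tool and switch from the anecdote, not real names):

```shell
# Placeholder sketch: check whether an AI-suggested switch is actually
# documented by the tool itself before trusting it.
# "mytool" and "--something-specific" are hypothetical stand-ins.
if mytool --help 2>/dev/null | grep -q -- '--something-specific'; then
    echo "the switch is documented"
else
    echo "no such switch: the model may have invented it"
fi
```

    The `--` before the pattern stops grep from treating the leading dashes of the switch as its own options.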


  • That’s another option, but it’s a bit more cumbersome having to cherry-pick exactly which backports you need for your specific hardware. Also, if you then for some reason don’t upgrade to the next stable release when it comes out, backports get abandoned after 1 year, instead of the customary 3 years for the rest of the oldstable release.

    From my experience running trixie/testing for the past year or so on a mini PC with hardware that was a bit too recent for bookworm, I can say that the cadence of security patches has been about the same between bookworm and testing.

    And let’s be honest: on a desktop system your main attack surface is going to be the software you go online with, i.e. the browser. So if you make sure that is kept up to date (flatpak, vendor repo, …), that already goes a long way.


  • the ctrl-super-alt is completely different

    It’s not “completely different” … and that’s the problem. Completely different I can handle. I can manage knowing vim keybindings, readline keybindings and standard Windows keybindings at the same time. What I can’t handle is having to use Command+C on a Mac and Control+C on Windows to copy, but then in some cases you do use Control on both OSes, and sometimes Control and Alt are switched … It’s because they are similar but different that it’s such a mess trying to get proficient in both at the same time.



  • The correct way with a new computer with recent hardware is to install Debian testing to get a recent kernel, firmware, Mesa and so on, but to put the codename of the next release into your apt config instead of “testing”. Then, when the next version is released, you can just stay on that, now stable, version.

    Trixie just got released today though, so for the time being you can probably get away with using that.
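    As a rough illustration of that apt config (a minimal sketch using the standard Debian mirrors and current suite naming; components and mirror URLs may differ on your setup):

```
# /etc/apt/sources.list — name the codename "trixie" explicitly instead of
# "testing", so the system stays on this release once it becomes stable
deb http://deb.debian.org/debian trixie main contrib non-free-firmware
deb http://deb.debian.org/debian trixie-updates main contrib non-free-firmware
deb http://security.debian.org/debian-security trixie-security main contrib non-free-firmware
```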


  • every service will get your ID or photo

    To be fair, that’s not how it will work. The site and the identity verifier will be two separate parties: the verifier only attests that you are not underage, and the site doesn’t get your identity.

    Still harmful though, because you can be sure there will be scam sites redirecting people to fake but real-looking verifiers for blackmail and identity-theft purposes.

    I for one will never put my ID or photo into any age verifier ever.