• 0 Posts
  • 97 Comments
Joined 2 years ago
Cake day: June 16th, 2024


  • There’s a difference between ‘language’ and ‘intelligence’, which is why so many people think that LLMs are intelligent despite not being so.

    The thing is, you can’t train an LLM on math textbooks and expect it to understand math, because it isn’t reading or comprehending anything. AI doesn’t know that 2+2=4 because it’s doing math in the background; it has learned that when presented with the string 2+2=, statistically, the next character should be 4. It can construct a paragraph around that equation that does a decent job of imitating a math textbook’s explanation of the concept, but only through a statistical analysis of sentence structure and vocabulary choice.
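    As a toy illustration of that “statistically, the next character should be 4” point, here’s a sketch of a character-level frequency model (the training text and context length are made up for illustration, this is nothing like a real LLM’s scale):

```python
from collections import Counter, defaultdict

# Toy "LLM": count which character follows each 4-character context in the
# training text. No arithmetic happens anywhere -- it's pure pattern counting.
training_text = "2+2=4 2+2=4 2+2=4 1+1=2 3+3=6"

def train(text, context_len=4):
    counts = defaultdict(Counter)
    for i in range(len(text) - context_len):
        context = text[i:i + context_len]
        counts[context][text[i + context_len]] += 1
    return counts

def predict(counts, prompt, context_len=4):
    context = prompt[-context_len:]
    if context not in counts:
        return None  # never seen this string: nothing to "know"
    return counts[context].most_common(1)[0][0]

model = train(training_text)
print(predict(model, "2+2="))   # "4" -- that continuation dominates the data
print(predict(model, "17+9="))  # None -- not in the data, so no answer at all
```

    The model gets 2+2= “right” and is helpless on 17+9=, because it never did any math in the first place.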

    It’s why LLMs are so downright awful at legal work.

    If ‘AI’ was actually intelligent, you should be able to feed it a few series of textbooks and all the case law since the US was founded, and it should be able to talk about legal precedent. But LLMs constantly hallucinate when trying to cite cases, because the LLM doesn’t actually understand the information it’s trained on. It just builds a statistical database of what legal writing looks like, and tries to mimic it. Same for code.

    People think they’re ‘intelligent’ because they seem like they’re talking to us, and we’ve equated ‘ability to talk’ with ‘ability to understand’. And until now, that’s been a safe thing to assume.





  • You don’t have to click an ad for it to be a security threat.

    It is possible to abuse the mechanics of a web browser to serve a fullscreen ad that resists the usual ways of closing an app, scaring a normal user into clicking to install something malicious.

    The weakest link is always the user, and advertisements are literally meant to target users. Exactly how hard do you think it is for an ad network to target the kinds of people most likely to get scared and just click the [Fix] button that downloads the malware?

    Your average user gets infected and they take a computer to a repair shop to get it fixed, which costs money.

    If the ad network would accept liability for damages caused by malware ads their ad networks delivered to people, I could be more sympathetic to the position that blocking ads is unfair to the content creators paid by ad views. But if I’m financially responsible for fixing damage caused by ads, then I reserve the right to block them.

    Full stop.





  • Unfortunately, there aren’t many options in the 2025 internet browser market.

    Unless something has changed, Gecko, the engine Firefox uses, is the only major engine that’s distinctly different from Chrome’s Blink (even Safari’s WebKit is Blink’s direct ancestor), and I don’t think writing a browser engine from scratch is easy. So if the solution is to hard pivot away from Firefox entirely, I don’t know how you avoid ending up on some Chrome-based browser.

    At least Mozilla hasn’t tried to kill adblockers like Google clearly is trying to.

    Forking the codebase and stripping out any AI code is much easier than trying to invent another wheel.





  • It’s got intern-level intelligence

    The problem is, it’s not “intelligence”. It’s an enormous statistics-based autocorrect.

    AI doesn’t understand math; it just knows that the next character in a string starting “2+2=” is almost universally “4” in all the data it has statistically analyzed. If you try to have it solve an equation that isn’t commonly repeated, it can’t solve it. Even when you try to train it on textbooks, it doesn’t ‘learn’ the math; it analyzes the word patterns in the text of the book and attempts to replicate them. That’s why it ‘hallucinates’, and also why it doesn’t matter how much data you feed it, it won’t become ‘intelligent’.

    It seems intelligent because we associate intelligence with language, and LLMs mimic language in an amazing way. But it’s not ‘thinking’ the way we associate with intelligence. It’s running complex math about what word should come next in a sentence based on the other sentences of that sort it’s seen before.
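    That “what word should come next” machinery can be sketched as a toy word-level Markov chain (the corpus here is invented, and a real LLM is vastly more sophisticated, but the failure mode is the same):

```python
import random
from collections import defaultdict

# Learn which word follows which in a tiny "textbook" corpus, then generate
# fluent-looking text with no understanding behind it.
corpus = ("the sum of two and two is four . "
          "the sum of three and three is six . "
          "the product of two and three is six .").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(0)
words, word = [], "the"
for _ in range(8):
    words.append(word)
    word = random.choice(follows[word])
print(" ".join(words))
```

    The output reads like the training text, but any “math” in it is accidental: the chain is perfectly happy to emit “the sum of two and three is four”, because every word-to-word transition in that sentence appeared somewhere in the corpus. That’s a hallucination in miniature.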


  • No company has to tell me that they are inclusive. I just assume that they hire the best person who applied for any given job. If that person was LGBT, I fully expect them to have given that person the job. If you have to tell me that you are, that means you weren’t.

    Welcome to being gay in society just a short few years ago. We live in a world where Alan Turing was arrested, charged, and convicted of being homosexual and chemically castrated as a result. It didn’t matter that he helped the Allies win WW2 and he wasn’t hurting anyone, it was a crime to be gay. When AIDS was first ravaging the homosexual community, there was talk of just letting it run rampant as it was just killing ‘the gays’ not anyone important.

    I’m happy that we’ve made progress as a society that this isn’t as well known anymore, but that doesn’t change that it did happen.


  • I dunno, these don’t feel the same to me.

    Having LGBTQ representation is a way of trying to attract customers: “Get a Mastercard because we’re LGBTQ friendly” is different than your boss saying “Jim, I know you have a wife and kids to support, and that you’re a valuable member on this team; but we’ve decided it’s more cost effective to have this LLM code our app and have two junior developers clean up the code, so you’re being laid off.”

    The quote I’ve seen and agree with is something along the lines of “The AI push exists to try and give the owners of ‘Capital’ access to ‘Talent’ without giving the talented working class people access to ‘Capital’.” It exists solely to try and make paying workers redundant.

    Having a gay character in a show isn’t anything like that at all IMO, unless you’re the type of person who thinks homosexuality is contagious and/or is scared you might realize you’re gay if you watch two men being romantic with each other.



  • Fire up Wireshark on a different machine and transfer a file between two other machines, you won’t see anything.

    This is true, but only because we’ve replaced Ethernet hubs with switches.

    An Ethernet hub was a dumber, cheaper device that filled the same role as a switch, but with a fundamental difference: all connected devices shared a single collision domain, so every frame was repeated out of every port and any machine could see everyone else’s traffic.

    I don’t know too much about WiFi but it probably does the same, it’s just a bridge to the same network.

    Wireless communication has the same problem as Ethernet hubs, but with no switch-like solution available. Any wireless transmission involves an antenna, and transmitting is like standing in your yard with a bullhorn to talk to your buddy two houses down. Anyone with an antenna can receive the wireless signal you send out. Period.

    So some really smart people found ways to keep the stuff you send private, but anyone can sit nearby and capture data going through the air, it’s just not anything you can use because of the encryption.
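    The hub-vs-switch difference above can be sketched as a toy forwarding simulation (port numbers and MAC strings are made up for illustration):

```python
# Which ports does a frame arriving on in_port get sent out of?

class Hub:
    """A hub repeats every frame out of every port except the ingress one."""
    def __init__(self, n_ports):
        self.n_ports = n_ports

    def forward(self, in_port, dst_mac):
        return [p for p in range(self.n_ports) if p != in_port]

class Switch:
    """A switch learns which MAC lives on which port, then forwards only there."""
    def __init__(self, n_ports):
        self.n_ports = n_ports
        self.mac_table = {}

    def forward(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port       # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]    # unicast to the known port
        # unknown destination: flood, behaving like a hub for this one frame
        return [p for p in range(self.n_ports) if p != in_port]

hub = Hub(4)
print(hub.forward(0, "bb:bb"))           # [1, 2, 3] -- everyone sees the frame

sw = Switch(4)
sw.forward(1, "bb:bb", "aa:aa")          # switch learns bb:bb is on port 1
print(sw.forward(0, "aa:aa", "bb:bb"))   # [1] -- only the real destination
```

    On the hub, a Wireshark box plugged into any port captures the transfer; on the switch it only sees frames flooded before the MAC table is learned (plus broadcasts).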


  • That’s not a problem at all, so long as the first boot device is the Linux drive.

    GRUB has no issue chain-loading the windows bootloader. You can even set GRUB to default to Windows if you want, it’ll just show the menu for a while (whatever you set the timeout to be, I find 3 seconds to be plenty) and if nothing is selected, it will hand off to Windows.

    If you want to boot Linux, just hit the down arrow key when you see the menu to stop the countdown and choose what you want to boot, then hit enter.
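    For reference, the default entry and timeout described above live in /etc/default/grub; the entry index below is just an example, since the position of the Windows entry varies per system:

```shell
# /etc/default/grub -- example values; check your own menu for the right index
GRUB_DEFAULT=2      # boot the 3rd menu entry by default (often Windows)
GRUB_TIMEOUT=3      # show the menu for 3 seconds before booting the default

# Regenerate the menu after editing (path varies by distro):
#   sudo grub-mkconfig -o /boot/grub/grub.cfg
```

    GRUB_DEFAULT=saved with GRUB_SAVEDEFAULT=true is another option if you want it to remember whatever you booted last.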


  • I feel like this is missing a big point of the article.

    The real vulnerability the xz backdoor attempt revealed was the developers themselves. The elephant in the room is that for someone capable of writing and maintaining a program so important to modern technical infrastructure, we’re making sure to hang them out to dry. When they burn out because their ‘hobby’ becomes too emotionally draining (whether from a deliberate campaign to wear them down or just naturally), someone will be waiting to take control. Who can you trust? Here, we saw someone attempt (and nearly pull off) a multi-year effort to establish themselves as a trusted member of the development community while faking it all along. With the advent of LLMs, it’s going to be even harder to tell if someone is trustworthy, or just a long-running LLM deception campaign.

    Maybe, we should treat the people we rely on for these tools a little better for how much they contribute to modern tech infrastructure?

    And I’ll point out that’s less aimed at the individuals who use tech, and more at the multi-billion-dollar multi-national tech companies that make money hand over fist using the work others donate.