AmbitiousProcess (they/them)

  • 1 Post
  • 308 Comments
Joined 6 months ago
Cake day: June 6th, 2025


  • In my experience at least, there has not once been an instance where an LLM was able to find answers on Reddit more reliably than I could, and I’ve been using LLMs since before ChatGPT was even a thing (though granted, most web-search-capable LLMs came later on).

    I think it will probably do better than the average user, since a lot of people simply aren’t very effective with search engines in the first place, but I wouldn’t call the answers “practically impossible to find.”


  • The problem is, it’s not unobtrusive.

    When I right-click and an option that silently sends data to an AI model hosted somewhere has been added to the menu, which I then click by accident out of muscle memory, the fact that I can also disable it doesn’t make it okay. When I start my browser after an update and am immediately presented with an open sidebar asking me to pick an AI model to use, that’s obtrusive and annoying to have to close and disable.

    Mozilla has indicated they do not want to make these features opt-in, but opt-out. The majority of Mozilla users do not want these features by default, so the logical option is to make them solely opt-in. But Mozilla isn’t doing that. Mozilla is enabling features by default, without consent, then only taking them away when you tell them to stop.

    The approach Mozilla is taking is like telling a guy you aren’t interested in dating him, but instead of taking that as a “no,” he takes it as a “try again with a different pickup line in 2 weeks” and never, ever stops no matter what you try. It doesn’t matter that you can tell him to go away now if he’ll just keep coming back.

    Mozilla does not understand consent, and they violate the consent of their users every time they push an update with AI features that are enabled by default.


  • Because Google only pays Mozilla for two things:

    • Maintaining search dominance
    • Preventing anti-monopoly scrutiny

    They don’t want Mozilla to compete in the AI space either way. With how much money gets thrown around, AI already has a ton of competition, so funding Mozilla there buys Google no cover from anti-monopoly scrutiny, and with so many models on the market, there’s no search-style dominance to protect either. They’d much rather Mozilla stay a non-AI browser while they implement AI features themselves and tell shareholders they’re “the most advanced” of them all, or that “nobody else is doing it like we do.”



  • our elected representatives refuse to do their jobs

    This is under a post about elected representatives quite literally doing their jobs and suing corporations over this exact practice.

    They might be couching it in language about it all being because of the “Chinese Communist Party surveilling Americans”, but they’re still trying to stop the practice.

    the onus is on the user to protect themselves.

    It’s good when users can protect themselves, but it’s easy to forget that these companies design their products specifically to push people into setting them up the way the manufacturer wants, and that some of them won’t even work without an internet connection.

    The average person is not technically literate whatsoever. You’re telling people to take personal responsibility for their privacy when they barely know how any technology works, while they’re surrounded by corporations whose budgets go towards finding ever more effective ways to convince them to give up that privacy.


  • Every time I buy any piece of technology, seeing “smart” in the title makes me immediately look for something else.

    I want a label printer. Not something that only works with a mobile app, not something that requires proprietary drivers and doesn’t work on my OS, not something that can only be used with the vendor’s specific software, not one that demands the vendor’s own labels with a special NFC tag, just a label printer that’s as broadly compatible as any regular printer.

    I ended up paying about a 5× premium over the alternatives on the market for that, and I would do it again.
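
    To make “as compatible as any regular printer” concrete: a printer that registers as an ordinary CUPS destination can be driven from any stock print path, no vendor app required. A minimal sketch in Python (the queue name “LabelWriter” is a hypothetical stand-in for whatever your printer registers as):

    ```python
    import subprocess
    import tempfile

    def print_label(text: str, printer: str = "LabelWriter") -> None:
        """Send a plain-text label to a standard CUPS print queue."""
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            f.write(text)
            path = f.name
        # `lp -d <destination>` is the standard CUPS job-submission command;
        # it works the same for a label printer as for any office printer.
        subprocess.run(["lp", "-d", printer, path], check=True)

    print_label("RESISTOR DRAWER: 10k / 0.25 W")
    ```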








  • Almost certainly yes, at least based on historical precedent, though Trump loves ignoring that.

    For example, Watts v. United States, where someone said:

    “They always holler at us to get an education. And now I have already received my draft classification as 1-A and I have got to report for my physical this Monday coming. I am not going. If they ever make me carry a rifle the first man I want to get in my sights is L. B. J.”

    He was originally convicted, but that was then reversed, as the Supreme Court stated:

    “We agree with petitioner that his only offense here was ‘a kind of very crude offensive method of stating a political opposition to the president.’ Taken in context, and regarding the expressly conditional nature of the statement and the reaction of the listeners, we do not see how it could be interpreted otherwise.”

    At the end of the day, the law only really requires that you:

    1. Make a threat
    2. …to take the life of, kidnap, or inflict bodily harm upon the President

    So let’s say I say “I am going to kill the President” not as an example, but as an actual statement. That could be interpreted as an actual threat. However, if I am a 13-year-old kid with just $20 to my name, no access to a gun, and no means of transport to even get near the president, it would be hard for the government to argue it’s anything more than a joke or political hyperbole, as it was in Watts v. United States.

    Given that the law covers threats that you will do something, and not merely wishes that he were dead by some other means, it’s quite likely a court would treat statements like “I hope Donald Trump dies a horrible, agonizing, painful death,” or even “I hope someone else shoots the president,” as AOK: just political hyperbole, statements without any material threat behind them.





    *specifically boomers born between 1946 and 1964, who have actually paid more than they’ll get in benefits.

    The others are still taking more than they contributed. It’s fair to say that some current boomers have paid for their Social Security, but many others have not, and the situation isn’t getting any better.

    To put it simply, there are ever fewer workers paying into the system for each person taking money out, and that ratio only worsens as the population ages.

    This means only about 80% of existing benefit rates are expected to be payable to people retiring later, even as many of those drawing benefits at existing rates are already taking more out of the system than they paid in.

    I don’t think we should universally hate boomers just because the economic situation they were in happened to favor them in some ways; after all, I want my grandma to be able to afford her retirement care for the rest of her life. But it’s also just not true to say that all current boomers have paid for their Social Security in its entirety.

    Only some of them have, and the way things are going, we won’t be any better off as we grow older: benefit rates will have to decline just to keep the fund from draining entirely, while people keep paying the same percentage of their income into the system.
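
    The arithmetic behind a roughly-80% figure is straightforward in a pay-as-you-go system: payable benefits are capped by what current workers contribute. A back-of-the-envelope sketch in Python; every number here is an illustrative assumption, not an SSA projection:

    ```python
    # Pay-as-you-go sketch: outgoing benefits are limited by incoming payroll tax.
    # Every figure below is hypothetical, chosen only to show the mechanism.

    workers_per_beneficiary = 2.3   # assumed future worker-to-beneficiary ratio
    avg_contribution = 7_000        # assumed annual payroll tax per worker, USD
    promised_benefit = 20_000       # assumed annual benefit at current rates, USD

    inflow_per_beneficiary = workers_per_beneficiary * avg_contribution
    payable_fraction = inflow_per_beneficiary / promised_benefit

    print(f"Payable fraction of promised benefits: {payable_fraction:.0%}")
    # 2.3 * 7000 / 20000 = 0.805 -> about 80% with these assumptions
    ```

    As the worker-to-beneficiary ratio falls, the payable fraction falls with it unless contribution rates rise or benefits are cut.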


  • Videos, images, and text can absolutely compel action or credible harm.

    For example, Facebook was aware that Instagram was giving teen girls depression and body-image issues, and still made sure its algorithm would continue to show teen girls content from other girls and women who were more fit or attractive than them.

    the teens who reported the most negative feelings about themselves saw more provocative content more broadly, content Meta classifies as “mature themes,” “Risky behavior,” “Harm & Cruelty” and “Suffering.” Cumulatively, such content accounted for 27% of what those teens saw on the platform, compared with 13.6% among their peers who hadn’t reported negative feelings.

    https://www.congress.gov/117/meeting/house/114054/documents/HHRG-117-IF02-20210922-SD003.pdf

    https://www.reuters.com/business/instagram-shows-more-eating-disorder-adjacent-content-vulnerable-teens-internal-2025-10-20/

    Many girls have committed suicide or engaged in self-harm, at least partly driven by body-image issues stemming from Instagram’s algorithmic choices, even if that content is “just videos and images.”

    They also continued to recommend dangerous content they claimed was blocked by their filters, including sexual and violent content, to children under 13. This type of content is known to have a lasting effect on kids’ wellbeing.

    The researchers found that Instagram was still recommending sexual content, violent content, and self-harm and body-image content to teens, even though those types of posts were supposed to be blocked by Meta’s sensitive-content filters.

    https://time.com/7324544/instagram-teen-accounts-flawed/

    In the instance you’re specifically highlighting, Meta would recommend teen girls to men exhibiting behaviors that could very easily lead to predation. For example, if a man liked sexual content and content featuring teen girls, Instagram would recommend him posts from underage girls who were trying to compensate for their newly created body-image issues by posting sexualized photos of themselves.

    Meta then waited 2 years before implementing a private-by-default policy, under which teen girls’ accounts wouldn’t be recommended to strangers unless the girls explicitly turned that on. Most didn’t. Meta waited that long because internal research showed the change would decrease engagement.

    By 2020, the growth team had determined that a private-by-default setting would result in a loss of 1.5 million monthly active teens a year on Instagram, which became the underlying reason for not protecting minors.

    https://techoversight.org/2025/11/22/meta-unsealed-docs/

    If I filled your social media feed with endless posts algorithmically chosen to make you spend more time on the app while feeling worse about yourself, and exploited every weakness the algorithm could identify about you, I don’t think you’d look at that and call it “catastrophizing over videos, images, text on a screen that can’t compel action or credible harm” when you developed depression, or worse.
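
    To make the mechanism concrete, here is a deliberately simplified sketch of what an engagement-only ranking objective looks like. This is purely illustrative, not Meta’s actual system; the point is structural: nothing in the score penalizes content predicted to harm the viewer, so harmful-but-engaging posts rank exactly as well as benign ones:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Post:
        predicted_watch_time: float   # model's estimate of seconds viewed
        predicted_engagement: float   # probability of like/comment/share
        predicted_distress: float     # hypothetical wellbeing-harm signal, 0..1

    def rank_engagement_only(feed: list[Post]) -> list[Post]:
        # The score rewards time-on-app and interaction only. The distress
        # signal exists on every post but is never consulted, so content
        # that keeps a vulnerable viewer scrolling outranks everything else.
        return sorted(
            feed,
            key=lambda p: p.predicted_watch_time + 10 * p.predicted_engagement,
            reverse=True,
        )
    ```

    A wellbeing-aware ranker would subtract some multiple of the distress signal from the score; the argument above is that engagement-driven design leaves that term out whenever including it would cost engagement.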