• 0 Posts
  • 21 Comments
Joined 18 days ago
Cake day: June 6th, 2025

  • I was thinking this too! Gait recognition can completely bypass facial coverings as a means of identification, but I also don’t think it’ll be much help here.

    Gait recognition can be bypassed by something as simple as putting a rock in your shoe so you walk differently. When you consider how much extra heavy gear, different footwear, and different overall movement patterns ICE agents will possibly be dealing with, it might not hold up well at tracking them down. On top of that, to recognize someone by gait, you'd first need footage in which they're already identifiable, to then train the model on.

    In the case of fucklapd.com, this was easy because they could just get public record data for headshot photos, but there isn’t a comparable database with names directly tied to it for gait. I will say though, a lot of these undercover agents might be easier to track by gait since they’ll still generally be wearing more normal attire, and it might be more possible to associate them with who they are outside of work since it’s easier to slip up when you’re just wearing normal clothes.


  • Communicating on a platform you don’t own and can’t control seems very shortsighted.

    I feel like this would be a much more realistic take if social media more broadly was all federated, and anyone’s independent instance could still communicate with the others, but that’s unfortunately not the case.

    For a politician, which is better for their campaign? Starting an independent platform they entirely own and control, but with no local users to start out with, or having an account on an existing platform with millions and millions of users?

    Obviously, even though in the first example they would have 100% control over their infrastructure, they wouldn’t exactly be spreading their message very far. They could always publish simultaneously on both platforms, but that still doesn’t mean much if the second platform has no users. However, the platform that has many millions of users can instantly grant them reach, which is kind of the point of them being on social media in the first place.

    On your point about a bot, I’m assuming you mean more like a bridge mechanism that cross-posts from one platform to another. You could correct me if I’m wrong, but I believe AOC at least posts a lot of similar messaging on both Twitter and Bluesky, rather than staying isolated to one or the other. It’s not exactly the same thing, but it has a similar effect.

    In an ideal world, everyone could easily host their own Mastodon server and communicate with others without being tied to a platform. Unfortunately, we still live in a world where the network effect keeps people trapped in corporate social media silos, and there's only so much an individual politician can do to change that without harming their own ability to message the public.


  • Nobody left on that platform is going to be convinced of anything anymore.

    I'd beg to differ. Although it's true that the neo-Nazis and generally just far-right freaks now far outnumber everyday people on there, that doesn't mean those people don't exist anymore.

    I always bring this up in conversations about leaving social networks, because if you don’t understand it, it will warp your entire perspective of why people stay on shitty platforms in the first place. The Network Effect is what keeps people hooked on these platforms, even when the owner becomes a literal neo-Nazi.

    The people who have already left are the ones that are capable of and willing to sacrifice the scale, reach, and history that Twitter has, in the hopes that whatever platform they move to will treat them better. Leaving Twitter means deleting your digital history, erasing every connection you’ve made on the platform, and entirely cutting all of your messaging off from anyone who hasn’t yet left.

    AOC is already on alternative platforms like Bluesky, so the people who are willing and able to move, and who would otherwise have stayed solely because she was still on Twitter, have already done so. The people that remain do not remain because of her; they remain because of everybody else.

    Yes, the neo-Nazis outnumber the average person on there, but quite a few average people are still on Twitter. Don't forget that the average person doesn't seem to care when the companies they buy products from exploit child labor, fund wars that keep oil prices low, and suppress the wages of workers in their own communities. The average person simply doesn't have the will to sacrifice what leaving a large platform like Twitter requires, so they remain there.

    If AOC didn’t benefit politically from being on Twitter, then she would have entirely left and deleted her account a while ago.





  • This wouldn't be an issue if Reddit always attached relevant posts, including negative ones even when those were the minority, to actually help people make a more informed judgment about an ad based on community sentiment, but I think we all know that's not the way this will go.

    Posts will inevitably only be linked if they are positive, or at the very least neutral, about the product being advertised, because that's what would allow Reddit to sell advertisers on a higher ROI. The bandwagon effect is a real psychological phenomenon, and Reddit knows it.



  • Fair enough. SEO was definitely one of the many large steps Google has taken toward slowly crippling the open web, but I never truly expected it to get this bad. At least with SEO there was still some incentive left to create quality sites, and it didn't necessarily kill monetizability for them.

    This feels like an exponentially larger threat, and I truly hope I’m proven wrong about its potential effects, because if it does come true, we’ll be in a much worse situation than we already are now.



  • Presearch is not fully decentralized.

    The services that manage advertising, staking/marketplace/rewards functionality, and unnamed "other critical Presearch services" are all "centrally managed by Presearch," according to their own documentation.

    The nodes that actually help scrape and serve content are also reliant on Presearch's centralized servers. Every search must go through Presearch's "Node Gateway Server," which is centrally managed by them and which strips identifying metadata and IP info from requests.

    That central server then determines where your request goes. It could go to open nodes run by volunteers, or to Presearch's own nodes; you cannot verify which, due to how the network is structured.

    Presearch's search index is not decentralized either. It's a frontend for other indexes: it outsources queries to the search engines, databases, and APIs it's configured to use, so it has no index that is independent of those central services. I'll give it a pass for this, since most search engines work this way today, but many of them are developing their own indexes that are much more robust than what Presearch seems to be doing.

    A node then returns results to the gateway, and there doesn't seem to be any way for the gateway to verify that what it received is actually what was available on the open web. For example, a node could send back nothing but affiliate links to services it thinks are vaguely relevant to the query, and the gateway would assume those results are valid.

    For the gateway to verify the results are accurate, it would have to scrape those services itself, which would render the entire purpose of the nodes pointless. The docs claim Presearch can "ensure that each node is only running trusted Presearch software," but it does not control the root of trust, so it runs into the same pitfalls games have hit for years trying to enforce anticheat: it's simply impossible to guarantee unless Presearch could do all the processing inside a TPM it entirely controls, which it doesn't. Not to mention that doing so would cause a number of privacy issues.

    A better model would be one where nodes are used solely for hosting, to take the storage burden of the index off a central server. Chunks sent to nodes would be hashed, with the hash stored on the central server. When the central server needs a chunk of data to answer a query, it requests it from a node, verifies the hash matches, then forwards it to the user. That takes the storage burden off the main server and makes bandwidth the only cost bottleneck, but it's not what Presearch is doing here.
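    Just to make the idea concrete, here's a minimal sketch of that hash-check in Python. This is purely hypothetical (nothing Presearch actually implements), and names like `publish_chunk` and `verify_chunk` are made up for illustration:

    ```python
    import hashlib

    # Central server: stores only the hash of each chunk it offloads to nodes.
    chunk_hashes = {}  # chunk_id -> sha256 hex digest

    def publish_chunk(chunk_id: str, data: bytes) -> None:
        """Record the chunk's hash before handing storage off to a node."""
        chunk_hashes[chunk_id] = hashlib.sha256(data).hexdigest()

    def verify_chunk(chunk_id: str, data: bytes) -> bool:
        """When a node sends a chunk back, check it against the stored hash."""
        return hashlib.sha256(data).hexdigest() == chunk_hashes[chunk_id]

    # The server publishes a chunk; a node stores it and later returns it.
    publish_chunk("chunk-1", b"index shard: example.com -> [doc ids]")
    assert verify_chunk("chunk-1", b"index shard: example.com -> [doc ids]")

    # A tampered chunk (e.g. affiliate links swapped in) fails verification.
    assert not verify_chunk("chunk-1", b"index shard: spam.example -> [ads]")
    ```

    The point is that the server only has to store a 32-byte hash per chunk, not the chunk itself, yet a node can't alter the data without the mismatch being detected.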

    This doesn't make Presearch bad in itself, but it is most definitely not decentralized. All core search functionality relies on their servers alone, and the architecture simply adds extra risk of bad actors manipulating search results.



  • Not to mention that the remaining sites that can still hold on, but have to cut costs, will just start using language models like Google's to generate content, which will only worsen the quality of Google's own answers over time, which will then generate even worse articles, and so on.

    It doesn’t just create a monetization death spiral, it also makes it harder and harder for answers to be sourced reliably, making Google’s own service worse while all the sites hanging on rely on their worse service to exist.


  • This is fundamentally worse than a lot of what we’ve seen already though, is it not?

    AI overviews are parasitic to traffic itself. If AI overviews are where people begin to go for information, websites get zero ad revenue, subscription revenue, or even traffic that can change their ranking in search.

    Previous changes did things like pulling slightly better context previews from sites, which only somewhat decreased traffic, or adding more ads, which just made browsing worse. This eliminates the entire business model of every website if Google keeps pushing down this path.

    It centralizes all actual traffic into Google, yet Google still relies for its information on the very sites whose traffic it's eliminating. Those sites cut costs by replacing human writers with more and more AI models, search quality gets ever worse as articles are sourced from articles that were themselves sourced from nothing, and then most websites, no longer receiving enough traffic to be profitable, collapse.



  • This is one of the best reasons to socially stigmatize wealth hoarding, even if you can’t change the fundamentals of the capitalist system that causes it in the first place.

    If enough people make those who hoard money feel lesser, to the point that having less becomes the preferable alternative, then they're more likely to give away their wealth and become at least a little bit less shitty people.

    This is also, coincidentally, why rich people isolate themselves within bubbles of similarly rich individuals, who won’t look down on them for being so greedy and narcissistic.



  • Which privacy first smartphones would people recommend for US users

    If you want to run GrapheneOS, then you can only use a Google Pixel.

    If you want to run Calyx, you can use any phone on the CalyxOS “Devices” list, which includes Pixels, Fairphone, and some Motorola phones too.

    I personally recommend Pixels because they tend to get the fastest and longest-lasting OEM-provided security patches and Android releases (e.g. the Pixel 8 and later get 7 years of updates from release), and they have a pretty decent selection of self-repair kits available in case you need to do a repair yourself, or want a repair technician to not have to go through a complicated ordering process for spare parts.

    how does it work putting it on a network?

    Make sure to buy one that’s not locked to a carrier, otherwise you’ll be unable to install the custom OS in the first place, since the bootloader will be locked. You can still set it up with any carrier you want once it’s unlocked. (this essentially means you need to buy the phone directly from the manufacturer. Don’t buy through your phone plan, or through a trade-in/upgrade with your carrier)

    Your carrier, once you request it, will either mail you a physical SIM card you can put in your phone, or a digital eSIM you can activate immediately. I prefer eSIMs for convenience, but it’s entirely up to you. (you can check out this list of pros and cons if you’re interested. They’re mostly negligible.)

    Do they go on the regular networks like at&t, sprint, Verizon etc?

    Yes.

    Now, if you're going to install a custom OS, make sure you watch a couple of videos and read the official install guide for the OS you choose. You want to be sure you don't screw it up.

    For example, if you're installing GrapheneOS, you might want to use a Chromium-based browser (Chrome, ungoogled-chromium, Brave, etc.) over something like Firefox, because Firefox sometimes has issues with the WebUSB installer, while Chromium-based browsers don't.

    These little details are something you’ll want to pick up from those resources so you can actually feel confident when you flash the OS to your phone, and make sure you do it correctly. Plus, you get the upside of knowing more about how exactly the OS protects you compared to stock android.
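    If you'd rather skip the browser entirely, there's also a command-line route. This is only a rough sketch of the general fastboot flow from memory; exact file names vary by device and release, so follow the official GrapheneOS CLI install guide for the real steps and signature verification:

    ```shell
    # Enable OEM unlocking in Developer options first, then reboot to the bootloader.
    adb reboot bootloader

    # Unlock the bootloader (this wipes the device).
    fastboot flashing unlock

    # Extract the factory image you downloaded for your device
    # ("device-factory-VERSION.zip" is a placeholder name).
    bsdtar xvf device-factory-VERSION.zip
    cd device-factory-VERSION

    # Flash everything, then re-lock the bootloader so verified boot is enforced.
    ./flash-all.sh
    fastboot flashing lock
    ```

    Re-locking at the end matters: it's what lets the phone cryptographically verify the OS it boots.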

    I personally recommend GrapheneOS if you're okay with using a Pixel, since it seems to have some of the strongest security guarantees on top of its methodology around privacy. (Pixels have very strong hardware security measures that other phones don't always have, and GrapheneOS takes full advantage of them.)