I was browsing System76’s offerings to see what PCs they have and noticed that they have an ARM computer that is apparently faster than the fastest Apple Mac, but cheaper (based). I’m wondering, though: how well do ARM computers game on Linux with Proton? It’s very expensive for me at the moment and I can’t afford it, but in the future I could consider it as my first desktop, since I’ve always used laptops. Gaming obviously isn’t the main priority — scratch that, no: gaming isn’t the main priority, as I’d mainly want a workstation for heavy work such as Blender, and perhaps to put Gentoo on it in the future (if it’s supported). But I would like to game on the side when I’m winding down. So, can it game well?

  • moonpiedumplings@programming.dev
    link
    fedilink
    arrow-up
    2
    ·
    16 hours ago

Should be awful for gaming. It’s possible to run x86 things with emulation, sure, but performance (especially single-thread) takes a hit.

Most modern software (games excluded) is dynamically compiled. This means that it’s not all one “bundle” that runs, but rather a binary that calls reusable pieces of code, “libraries”, that live separately from the binary itself. Wine is dynamically compiled.
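To make the linking part concrete, here’s a minimal Python sketch of a binary resolving a shared library at runtime, using `ctypes` to load the system math library. It assumes a typical Linux-like system where `libm` (or its symbols, via libc) is available:

```python
import ctypes
import ctypes.util

# Dynamic linking in action: locate and load the system math library
# ("libm") at runtime, the same way a dynamically linked binary
# resolves the shared libraries it calls into.
path = ctypes.util.find_library("m")
# Fall back to the main process namespace if libm isn't found as a
# separate file (on recent glibc, math symbols also live in libc).
libm = ctypes.CDLL(path) if path else ctypes.CDLL(None)

libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0, computed by the native library, not by Python
```

The key point: the program ships without its own `cos`; the symbol is resolved against whatever native library the system provides.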

What makes modern x86-to-ARM translators special is that the x86 binary, like an x86 version of Wine, can call upon the ARM versions of the libraries it uses, such as graphics drivers. It’s because of this that the people on r/emulationonandroid managed to play GTA 5 at 30 fps via the PC version. There definitely is overhead, but it’s not that much, and a beefy machine like this could absolutely handle it.
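The forwarding idea can be sketched very loosely in plain Python, with ordinary functions standing in for “guest” x86 code and “host” ARM-native libraries (all names here are made up for illustration):

```python
import math

# Toy sketch of library forwarding ("thunking"): the emulated (guest)
# side never runs an x86 build of the library; calls are intercepted
# by symbol name and handed to the host's native implementation.
host_native = {"cos": math.cos, "sqrt": math.sqrt}  # stand-ins for ARM-native libs

def guest_call(symbol, *args):
    """What a thunk does conceptually: marshal the arguments across
    the guest/host boundary and invoke the native function."""
    fn = host_native[symbol]   # resolve against the host library
    return fn(*args)           # runs at full native speed

print(guest_call("sqrt", 9.0))  # 3.0 -- only the call itself crosses the emulator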

    https://moonpiedumplings.github.io/blog/scale-22/#exhibition-hall

The Facebook/Meta booth had an ARM MacBook that was running Steam, and they were installing games on it.

    • zarenki@lemmy.ml
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      14 hours ago

      “Dynamically compiled” and dynamic linking are very different things, and in turn dynamic linking is completely different from system calls and inter-process communication. I’m no emulation expert but I’m pretty sure you can’t just swap out a dynamically linked library for a different architecture’s build for it at link time and expect the ABI to somehow work out, unless you only do this with a small few manually vetted libraries where you can clean up the ABI. Calling into drivers or communicating with other processes that run as the native architecture is generally fine, at least.
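One concrete piece of what “the ABI” pins down is struct layout. Python’s `struct` module can show how the same two logical fields occupy different sizes under different padding rules (the native size shown assumes a typical 64-bit platform); if the caller and the library were built with different rules, every field access after the first reads garbage:

```python
import struct

# The same logical record {char; double} under two layout rules:
# native alignment inserts padding after the char so the double is
# 8-byte aligned; packed layout does not.
native = struct.calcsize("@cd")  # typically 16: 1 byte + 7 padding + 8
packed = struct.calcsize("=cd")  # 9: no padding at all

print(native, packed)
```

Calling conventions, type sizes, and padding like this are exactly what has to be “cleaned up” per library before cross-architecture forwarding can work.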

I don’t know how much Asahi makes use of the capability (if at all), but Apple’s M series processors add special architecture extensions that let x86 emulation perform much better than on any other ARM system.

      I wouldn’t deny that you can get a lot of things playable enough, but this is very much not hardware you get for the purpose of gaming: getting a CPU and motherboard combo that costs $1440 (64-core 2.2GHz) or $2350 (128-core 2.6GHz) that performs substantially worse at most games than a $300 Ryzen CPU+motherboard combo (and has GPU compatibility quirks to boot) will be very disappointing if that’s what you want it for. Though the same could to a lesser extent be said even about x86 workstations that prioritize core count like Xeon/Epyc/Threadripper. For compiling code, running automated tests, and other highly threaded workloads, this hardware is quite a treat.

      • moonpiedumplings@programming.dev
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        13 hours ago

You’re right, my bad. Dynamic linking and dynamic compilation are different things.

The library interoperation is part of translation layers like FEX-Emu, which is becoming better and better supported in Fedora.

        https://github.com/FEX-Emu/FEX/blob/main/ThunkLibs/README.md

> manually vetted libraries where you can clean up the ABI

Yes, but games are usually run with Wine, which does have a standard set of libraries it uses.