I got 32 additional GB of ram at a low, low cost from someone. What can I actually do with it?

        • grue@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          edit-2
          4 days ago

          In my case, it’s less about being able to open more Firefox tabs and more about Firefox being able to go longer between crashes due to a memory leak. (I know, I know… Firefox doesn’t have memory leaks anymore. It’s probably due to an extension or some bad JavaScript in one of my perpetually-open sites or something. One of these days I’ll get around to troubleshooting it…)

  • vividspecter@lemm.ee
    link
    fedilink
    arrow-up
    30
    ·
    4 days ago
    • Compressed swap (zram)

    • Compiling large C++ programs with many threads

    • Virtual machines

    • Video encoding

    • Many Firefox tabs

    • Games

      • Onno (VK6FLAB)@lemmy.radio
        link
        fedilink
        arrow-up
        17
        ·
        4 days ago

        I realise that you are making a joke, but here’s what I used it for:

        • Debian VM as my main desktop
        • Debian VN as my main Docker host
        • Windows VM for a historical application
        • Debian VM for signal processing
        • Debian VM for a CNC

        At times only the first two or three were running. I had dozens of purpose built VM directories for clients, different hardware emulation, version testing, video conferencing, immutable testing, data analysis, etc.

        My hardware failed in June last year. I didn’t lose any data, but the hardware has proven hard to replace. Mind you, it worked great for a decade, so, swings and roundabouts.

        I’m currently investigating, evaluating and costing running all of this in AWS. Whilst it’s technically feasible, I’m not yet convinced of actual suitability.

          • Onno (VK6FLAB)@lemmy.radio
            link
            fedilink
            arrow-up
            5
            ·
            4 days ago

            In my case, I’m not a fan of running unknown code on the host. Docker and LXC are ways of running a process in a virtual security sandbox. If the process escapes the sandbox, they’re in your host.

            If they escape inside a VM, that’s another layer they have to penetrate to get to the host.

            It’s not perfect by any stretch of the imagination, but it’s better than a hole in the head.

  • yarr@feddit.nl
    link
    fedilink
    English
    arrow-up
    8
    arrow-down
    1
    ·
    4 days ago

    Here’s what you can do with your impressive 64 GB of RAM:

    Store approximately 8.1 quintillion (that’s 8,100,000,000,000,000) zeros! Yes, that’s right, an endless ocean of nothingness that will surely bring balance to the universe.

  • Jesus_666@lemmy.world
    link
    fedilink
    arrow-up
    22
    arrow-down
    1
    ·
    4 days ago

    Run a fairly large LLM on your CPU so you can get the finest of questionable problem solving at a speed fast enough to be workable but slow enough to be highly annoying.

    This has the added benefit of filling dozens of gigabytes of storage that you probably didn’t know what to do with anyway.

  • zkfcfbzr@lemmy.world
    link
    fedilink
    English
    arrow-up
    21
    ·
    4 days ago

    I have 16 GB of RAM and recently tried running local LLM models. Turns out my RAM is a bigger limiting factor than my GPU.

    And, yeah, docker’s always taking up 3-4 GB.

      • zkfcfbzr@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 days ago

        Fair, I didn’t realize that. My GPU is a 1060 6 GB so I won’t be running any significant LLMs on it. This PC is pretty old at this point.

        • fubbernuckin@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          1
          ·
          4 days ago

          You could potentially run some smaller MoE models as they don’t take up too much memory while running. I’d suspect the deepseek r1 8B distill with some quantization would work well.

          • zkfcfbzr@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            4 days ago

            I tried out the 8B deepseek and found it pretty underwhelming - the responses were borderline unrelated to the prompts at times. The smallest I had any respectable output with was the 12B model - which I was able to run, at a somewhat usable speed even.

  • spicy pancake@lemmy.zip
    link
    fedilink
    English
    arrow-up
    16
    arrow-down
    1
    ·
    4 days ago

    Fold At Home!

    https://foldingathome.org/

    You can essentially donate your processing power to various science projects that need it to compute protein folding simulations. I used to run it whenever I wasn’t actively using my PC. This does cost electricity and increase rate of wear and tear on the device, as with any sustained high computational load. But it’s cool! :]

  • fuckwit_mcbumcrumble@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    12
    arrow-down
    1
    ·
    4 days ago

    700 Chrome tabs, a very bloated IDE, an Android emulator, a VM, another Android emulator, a bunch of node.js processes (and their accompanying chrome processes)

    • daggermoon@lemmy.worldOP
      link
      fedilink
      arrow-up
      5
      arrow-down
      5
      ·
      4 days ago

      I actually did. I deleted it as soon as I realized it wouldn’t tell me about the Tiananmen Square Massacre.

      • Yerbouti@sh.itjust.works
        link
        fedilink
        arrow-up
        4
        ·
        4 days ago

        But the local version is not supposed to be censored…? I’ve asked it questions about human rights in China and got a fully detailed answer, very critical of the government, something that I could not get on the web version. Are you sure you were running it locally?

        • some_guy@lemmy.sdf.org
          link
          fedilink
          arrow-up
          1
          ·
          4 days ago

          Nah, it’s just fewer parameters. It’s not as “smart” at censorship or has less overhead to apply to censorship. This came up on Ed Zitron’s podcast, Better Offline.

        • kevincox@lemmy.ml
          link
          fedilink
          arrow-up
          1
          ·
          4 days ago

          IIUC it isn’t censored per se. Not like the web service that will retract a “bad” response. But the training data is heavily biased. And there may be some explicit training towards refusing answers to those questions.

      • Dasus@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        4 days ago

        Oh, c’mon, I’m sure it told you all about how there’s nothing to tell. Insisted on that, most likely.

        • daggermoon@lemmy.worldOP
          link
          fedilink
          arrow-up
          3
          ·
          4 days ago

          Nah it said something along the lines of “I cannot answer that, I was created to be helpful and harmless”

          • Dasus@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            4 days ago

            Answer that with “your answer implies that you know the answer and can give it but are refusing to because you’re being censored by the perpetrators” or some such.

            I made Gemini admit it lied to me and thus Google lied to me. I haven’t tried Deepseek.