I placed a low bid on a government auction for 25 HP EliteDesk 800 G1s and unexpectedly won, ultimately paying less than $20 per computer.

In the long run I plan to sell 15 or so of them cheap to friends and family, run 4 of them with Proxmox (3 in a lab cluster and 1 as the always-on home server), and keep the rest as spares and random desktops around the house where I could use one.

But while I have all 25 of them what crazy clustering software/configurations should I run? Any fun benchmarks I should know about that I could run for the lolz?

Edit to add:

Specs, based on the auction listing and looking up the computer model:

  • 4th gen i5s (probably i5-4560s or similar)
  • 8GB of DDR3 RAM
  • 256GB SSDs
  • Windows 10 Pro (no mention of licenses, so that remains to be seen)
  • Looks like 4 PCIe slots (2 ×1 and 2 ×16 physically, presumably half-height)

Possible projects I plan on doing:

  • Proxmox cluster
  • Baremetal Kubernetes cluster
  • Harvester HCI cluster (which has the benefit of also being a Rancher cluster)
  • Automated Windows Image creation, deployment and testing
  • Pentesting lab
  • Multi-site enterprise network setup and maintenance
  • Linpack benchmark then compare to previous TOP500 lists
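For the Linpack/TOP500 item, here's a back-of-the-envelope peak estimate. The clock speed and FLOPs-per-cycle figures are assumptions for a Haswell-era i5, and a real HPL run over gigabit Ethernet will land well below this theoretical number:

```python
# Back-of-the-envelope theoretical peak (Rpeak) for the cluster.
# Assumptions (not measured): 4 cores per i5, ~2.9 GHz sustained,
# and 16 double-precision FLOPs/cycle/core (Haswell AVX2 + FMA).
# Actual HPL results (Rmax) will be a fraction of this.

machines = 25
cores = 4
clock_hz = 2.9e9
flops_per_cycle = 16  # 2 x 256-bit FMA units = 16 DP FLOPs/cycle

per_machine = cores * clock_hz * flops_per_cycle  # ~185.6 GFLOPS
cluster_peak = machines * per_machine             # ~4.64 TFLOPS

print(f"Per machine: {per_machine / 1e9:.1f} GFLOPS")
print(f"Cluster peak: {cluster_peak / 1e12:.2f} TFLOPS")
```

Even a fraction of that peak would put the pile in the same neighborhood as the machines near the top of the TOP500 lists from around 2000.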
  • solrize@lemmy.world · 7 months ago

    25 machines at say 100 W each is about 2.5 kW. Can you even power them all at the same time at home without tripping circuit breakers? At your mentioned $0.12/kWh that's about 30 cents an hour, or over $200 to run them for a month, so that adds up too.

    The i5-4560S scores 4597 on PassMark, which isn't that great. 25 of them is 115k at best, so about like a big Ryzen server that you can rent for the same $200 or so a month. I can think of various computation projects that could use that, but I don't think I'd bother with a room full of crufty old PCs if I were pursuing something like that.
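    The arithmetic above, sketched out using the thread's own assumptions (100 W per machine and $0.12/kWh are estimates, not measurements):

```python
# Rough power/cost math using the comment's assumed figures:
# 100 W per machine and $0.12/kWh (estimates, not measurements).
machines = 25
watts_each = 100
rate_per_kwh = 0.12

total_kw = machines * watts_each / 1000  # 2.5 kW
amps_at_120v = total_kw * 1000 / 120     # ~20.8 A: more than a single
                                         # 15 A or 20 A US circuit
cost_per_hour = total_kw * rate_per_kwh  # ~$0.30/hour
cost_per_month = cost_per_hour * 24 * 30 # ~$216/month

# Aggregate PassMark, using the single-CPU score quoted above
passmark_each = 4597
cluster_passmark = machines * passmark_each  # 114,925

print(f"{total_kw} kW, {amps_at_120v:.1f} A @ 120 V, "
      f"${cost_per_hour:.2f}/hr, ${cost_per_month:.0f}/mo, "
      f"{cluster_passmark} aggregate PassMark")
```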

      • 11111one11111@lemmy.world · 7 months ago (edited)

        Psh, 1 plug ain't shit. Every pic I see from anyone who lives out in those ghettos of India, Central America or any Pacific islands, they also only rock 1 plug, but they're running the corner store, the liquor store, the hospital, their style of little school, middle school and old school, 3 hair salons if Latin or 3 nail salons if Pacific, Bollywood, every stadium from every country in the World Cup, and always 1 dude trying to squeeze 1 more plug in cuz he's running low on batteries. Idk why the American ghetto is so soft. One time I saw a family that put covers over empty sockets?!? Come on dog, that's like wearing a condom jerking off. NGL tho, I get super jelly seeing pictures from those countries with their thousands of power lines, phone lines, sidelines, cable lines, borderlines, internet lines… I don't know much about how my AOL works but those wizards must be streaming some hella fast Tokyo banddrifts with all them wires.

  • Linkerbaan@lemmy.world · 7 months ago

    I don’t understand why people want to use so many PCs rather than just run multiple VMs on a single server that has more cores.

    • LukyJay@lemmy.world · 7 months ago

      “I don’t understand why you’d run so many VMs when you can just run it on bare metal”

      It’s fun! This is a hobby. It doesn’t have to be practical.

      • Linkerbaan@lemmy.world · 7 months ago

        Of course, but installing everything on multiple bare-metal machines that take IP addresses, versus just running it in VMs that have IP addresses… it just takes a lot of extra power and doesn’t achieve much. Of course that can be said about any hobby, but I just want OP to know that there’s no real reason to do this, and I don’t understand so many people hyping it up.

        • Trainguyrom@reddthat.com (OP) · 7 months ago

          I already said in the original post that I plan on selling off or giving away ~15 of them, keeping a few as spares, and only actually leaving one on 24/7.

          bare-metal machines that take IP addresses, versus just running it in VMs that have IP addresses

          Both bare metal and VMs require IPs; it’s just a question of what networks you toss them on. Thanks to NAT, IPs are free, and there are about 18 million of them to pick from in just the private IPv4 space.
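          That "about 18 million" figure checks out against the RFC 1918 private ranges; a quick check with Python's `ipaddress` module:

```python
import ipaddress

# The three RFC 1918 private IPv4 ranges
private = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
total = sum(ipaddress.ip_network(n).num_addresses for n in private)
print(f"{total:,}")  # 17,891,328 -- about 18 million addresses
```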

          The big reason for bare metal in clustering is that it takes the guesswork out of virtual networking, since there are physical cables to trace. I don’t have to guess whether a given virtual network has an L3 device that the hypervisor helpfully added or is all L2; I can watch the blinky lights for an estimate of how much activity is on the network, and I can physically degrade a connection if I want to simulate an unreliable link to a remote site.

          I can also yank the power on a physical machine to simulate a power/host failure. With a VM, you have to hope the virtual host actually yanks the virtual power rather than doing some pre-shutdown steps before killing the VM to protect you from yourself. Sure, you can ultimately do all of this virtually, but having a few physical machines in the mix takes the guesswork out of it and makes your labbing more “real world”.

          I also want to invest the time and money into doing some real clustering technologies somewhat close to right. Ever since I ran a Ceph cluster in college on DDR2-era hardware over gigabit links, I’ve been curious what level of investment is needed to make Ceph perform reasonably, and how Ceph compares to, say, GlusterFS. I also want to set up an OpenShift cluster to play with, and that calls for a minimum of about five machines with 4-8 cores and 32 GB RAM each (which happens to be the maximum hardware config of these machines). Similar for Harvester HCI.

          It just takes a lot of extra power and doesn’t achieve much

          I just plan on running all of them long enough to get some benchmark porn, then starting to sell them off. Most won’t even be plugged in for more than a few hours before they go.

          there is no real reason to do this and I don’t understand so many people hyping it up.

          Because it’s fun? I got 25 computers for a bit more than the price of one (based on current eBay pricing). Why not do some stupid silly stuff while I have all of them? Why have a reason beyond “because I can!”?

          25 PC’s does seem slightly overkill. I can imagine 3-5 max.

          25 computers is definitely overkill, but the auction wasn’t for 6 computers, it was for 25 of them. And again, I seriously expected to be outbid and the winning bid to be over a grand. I didn’t expect to get 25 computers for about the price of one. But now I have them, so I’m gonna play with them.

          • Linkerbaan@lemmy.world · 7 months ago

            I see, I was picturing a 25-high stack of PCs. This makes a lot more sense, thanks for the explanation.

    • towerful@programming.dev · 7 months ago

      Having multiple machines can protect against hardware failures.
      If hardware fails, you have donor machines.
      It’s good learning: for provisioning, for the physical side (cleaning, customising, wiring, networking with multiple NICs), and for multi-node clusters.

      Virt is convenient, but doesn’t teach you everything

      • Linkerbaan@lemmy.world · 7 months ago (edited)

        I’m not sure running multiple single-SSD machines would provide much redundancy over a server with multiple PSUs and drives. Sure, the CPU or mobo could fail, but that downtime would be less hassle than wrangling 25 old PCs.

        Of course there’s a learning experience in more hardware, but 25 PCs does seem slightly overkill. I can imagine 3-5 max.

        I’m probably looking at this from the point of view of a homelabber who just wants to run stuff, though, not someone whose hobby is setting up the PCs themselves.