• 3 Posts
  • 310 Comments
Joined 11 months ago
Cake day: January 3rd, 2024






  • Yeah. Thankfully, Windows Server cleaned up that stupidity starting around 2006 and finishing around 2018.

    Which all sounds fine until we consider the history: basically all other server operating systems have had efficient remote administration solutions since before 1995 (reasonable solutions existed even before SSH).

    Windows was over 20 years late to adopt non-graphical, low-latency (aka sane) options for remote administration.

    I think it’s a big part of the reason Windows doesn’t appear much on this chart.




  • That’s certainly a big part of it. When one needs to buy a metric crap load of CPUs, one tends to shop outside the popular defaults.

    Another big reason, historically, is that supercomputers typically didn’t offer any non-command-line way to interact with them, and Windows needed one.

    Until PowerShell and Windows 8, there were still substantial configuration options in Windows that were 100% managed by graphical packages. They could be changed by direct file edits and registry editing, but that added a lot of risk. All of the “did I make a mistake” tools were graphical, and so unavailable from the command line.

    So any version of Windows stripped down enough to run on any super-computer cluster was going to be missing a lot of features, until around 2006.

    Since Linux and Unix started as command-line operating systems, both already had plenty of fully featured options for supercomputing.



  • Where did you find that azure runs on linux?

    I don’t know of anywhere that Microsoft confirms, officially, that Azure, itself, is largely running on Linux. They share stats about what workloads others are running on it, but not, to my knowledge, about what it is composed of.

    I suppose that would be an oversimplification, anyway.

    But that Azure itself is running mostly on Linux is an open secret among folks who spend time chatting with engineers who have worked on the framework of the Azure cloud.

    When I have chatted with them, Azure cloud engineers have displayed huge amounts of Linux experience, while they sometimes needed to “phone a friend” to answer Windows Server edition questions.

    Given how much longer people have been scaling Linux clusters than Windows servers, this isn’t particularly shocking.

    Edit: To confirm what others have mentioned, what I’ve inferred from chatting with MS staff suggests, more specifically, that Azure, itself, is mostly Linux OS running on a Hyper-V virtualization layer.


  • But, surely Windows is the wrong OS?

    Oh yes! To be clear - trying to put any version of Windows on a super-computer is every bit as insane as you might imagine. By what I heard in the rumor mill, it went every bit as badly as anyone might have guessed.

    But I like to root for an underdog, and it was neat to hear about Microsoft engineers trying to take the Windows kernel somewhere it had no rational excuse to run, perhaps by sheer force of will and hard work.


  • They are not all the same.

    Measure the diameter of the hole at the bottom of the water holding tank. It’s the main difference between older and newer toilets in the US.

    Any US toilet repair kit should list what diameter(s) it supports.

    Depth of the holding tank will vary as well, but most repair kits account for this. Some kits may require using a hand saw to cut some plastic tubes to fit smaller tanks. Other kits have an extendable or collapsible tube.



  • I wonder if the numbers are still this good if you consider more supercomputers.

    Great question. My guess is not terribly different.

    “Top 500 Supercomputers” is arguably a self-referential term. I’ve seen “super-computer” defined as any computer that was among the 500 fastest in the world on the day it went live.

    As new super-computers come online, workloads from older ones tend to migrate to the new ones.

    So there usually aren’t a huge number of currently operating supercomputers outside of the top 500.

    When a super-computer falls toward the bottom of the top 500, there’s a good chance it is getting turned off soon.

    That said, I’m referring here only to the super-computers that spend a lot of time advertising their existence.

    I suspect there’s a decent number out there today that prefer not to be listed. But I have no reason to think those don’t also run Linux.


  • but it did not stick.

    Yeah. It was bad. The job of a Supercomputer is to be really fast and really parallel. Windows for Supercomputing was… not.

    I honestly thought it might make it, considering the engineering talent that Microsoft had.

    But I think time proves that Unix and Linux just had an insurmountable head start. Windows, to the best of my knowledge, never came close to closing the gap.


  • The first thing I do, if I need to get the size down, is swap out GNOME for one of the X11 window managers, usually XFCE.

    I usually do this by starting from the minimal install and building up, as schizo already suggested.

    That said, I guess I would be remiss if I didn’t point out that Linux Mint is an easy way to get Debian’s core with the XFCE window manager.

    Looks like Mint starts at 3GB - 8GB, depending on options chosen?

    Disclaimer: It’s honestly been a while since I really paid attention to my own Linux install size, as long as it’s below 40GB.
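    For the curious, here’s a rough sketch of the build-up approach. It assumes a Debian netinst base with no desktop task selected; package names are from current Debian stable, so treat the exact list as an assumption for your release:

```shell
# Sketch: build up from a minimal Debian install to a lightweight XFCE desktop.
# Assumes a netinst base with no desktop task selected.
sudo apt update

# Dry-run first to see roughly how much disk space the desktop will cost.
sudo apt install --simulate --no-install-recommends xorg xfce4 lightdm

# Then install for real; --no-install-recommends keeps the footprint small.
sudo apt install --no-install-recommends xorg xfce4 lightdm
```

    Skipping the full desktop metapackages this way is what keeps the install well under the size of a stock desktop image.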


  • MajorHavoc@programming.dev to Linux@lemmy.ml · Slim Down Debian Install

    the live disk won’t find my Wifi

    Oof.

    In case it helps: I have solved that problem for myself using a $9.00 USB Wifi dongle.

    For whatever reason (other contributors facing the same issue?), I have found that every cheapo USB Wifi dongle I have tried has worked perfectly with the minimal Linux images.

    I realize I might have just gotten really lucky a bunch of times, but it could be worth a try.
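    If you want to check whether a dongle was picked up before committing to an install, a quick sketch using standard tools found on most live images:

```shell
# Sketch: confirm a USB Wifi dongle is recognized on a live/minimal image.
lsusb                      # the dongle should appear as a new USB device when plugged in
ip -brief link             # kernel-recognized wireless interfaces are usually named wl...
sudo dmesg | tail -n 20    # recent kernel messages often name the driver/firmware loaded
```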