Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 270 Comments
Joined 1 year ago
Cake day: June 25th, 2023


  • The issue DNS solves is the same one the phone book solves. You could memorize everyone’s phone number/IP, but it’s a lot easier to memorize a name, or even guess it. Want the website for Walmart? walmart.com is a very good guess.

    Behind the scenes the computer looks it up using DNS and it finds the IP and connects to it.

    The way it started, people maintained and shared hosts files. When a new system came online, people would take its IP and add it to their hosts file. It quickly became clear that this doesn’t scale: you could easily end up wanting to talk to dozens of computers whose IPs you’d have to track down. So DNS was developed as a central directory service any computer can query, with a hierarchy to distribute the load and delegate authority. And it worked so well that we still use it extensively today. That desire to delegate directory authority is how the TLD system was born; the hosts file didn’t use TLDs, just plain names as far as I know.
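
    To make that concrete, here’s a minimal sketch of the lookup the computer does behind the scenes, using only Python’s standard library (the walmart.com name is just the example from above, and the hosts-file address is a made-up one from a documentation range):

    ```python
    import socket

    # Ask the system resolver (which checks /etc/hosts first, then DNS)
    # to turn a name into an IP address, the same way a browser would.
    print(socket.gethostbyname("walmart.com"))

    # The old hosts-file approach did the same mapping by hand, one line per
    # machine you cared about, e.g. an /etc/hosts entry like:
    #   203.0.113.7    somehost
    ```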


  • There’s definitely been a surge in speculation on domain names. That’s part of the whole dotcom bubble thing. And it’s why I’m glad TLDs are still really hard to obtain, because otherwise they would all be taken.

    Unfortunately there’s just no other good way to deal with it. If there’s a shared namespace, someone will speculate on the good names.

    Different TLDs can help with that a lot by having their own requirements: for .edu, for example, you have to be a real school, and for most ccTLDs you have to be a citizen or have a company operating in the country. If/when it becomes a problem, I expect to see a shift to new TLDs with stronger requirements to prove you’re serious about your plans for the domain.

    It’s just a really hard problem: when millions of people are competing for a decent, globally recognized short name, you’re bound to run out. I’m kind of impressed at how well it’s holding up overall despite the abuse. I feel like it’s still relatively easy to get a reasonable domain name, especially if you avoid the big TLDs like com/net/org/info. You can still get a .xyz for dirt cheap, and sometimes there are even free ones, like .tk and .ml were for a while. There are also several free short-ish ones; I used max-p.fr.nf for a while because it was free and still looks like a real domain, much like a .co.uk or something.


  • Because if they’re not owned, then how do you know who is who? How do we independently conclude that yup, microsoft.com goes to Microsoft, without some central authority managing who’s who?

    It’s first come, first served, which is a bit biased towards early adopters, but I can’t think of a better system where you go to google.com and reliably end up at Google. If everyone had a different idea of where that should send you, it would be a nightmare; we’d be back to passing IP addresses on post-it notes to friends to make sure we all end up on the same youtube.com. When you type an address you expect to end up on the site you asked for, and nothing else. You don’t want to end up on Comcast YouTube because your ISP decided that’s where youtube.com goes; you expect and demand the real one, the same as everyone else.

    And there are still the massive server costs of running a directory for literally the entire Internet to make all of that work.

    A lot of the time, when asking those kinds of questions, it’s useful to think about how you would implement it so that it actually works. It usually answers the question.


  • In case you didn’t know, domain names form a tree. You have the root ., then TLDs like com., then usually the customer’s domain google.com., then subdomains like www.google.com.. Each level typically hands the rest of the lookup over to another server. So in this example, the root servers tell you to go ask .com at this IP, you ask .com where Google is and it gives you the IP of Google’s DNS server, then you query Google’s DNS server directly. Any subdomain under Google only involves Google; the public DNS infrastructure isn’t involved at that point, which significantly reduces load. Your ISP only needs to resolve google.com once, then it knows how to get *.google.com directly from Google.

    You’re not just buying a name that by convention ends with a TLD. You’re buying a spot in that chain of names, the tree that is used to eventually go query your server and everything under it. The fee to get the domain contributes to the cost of running the TLD.
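
    If you want to poke at that chain yourself, here’s a rough sketch using the third-party dnspython library (pip install dnspython). It just asks your normal resolver who is responsible at each level rather than walking the delegation by hand; the google.com names are only the example from above:

    ```python
    import dns.resolver

    # Ask which name servers are responsible for each level of the tree:
    # the root, then the com. TLD, then google.com. itself.
    for zone in [".", "com.", "google.com."]:
        answer = dns.resolver.resolve(zone, "NS")
        print(zone, "->", ", ".join(str(ns) for ns in answer))

    # For records under google.com., only Google's own name servers end up
    # answering; the root and TLD servers are out of the picture by then.
    for record in dns.resolver.resolve("www.google.com.", "A"):
        print("www.google.com. ->", record)
    ```

    If you’d rather watch each referral happen top-down, dig +trace www.google.com does the actual walk.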


  • Mostly because you need to be able to resolve the TLD. The root DNS servers need to know about every TLD, and it would quickly become a nightmare if they had to store hundreds of thousands of records instead of the ~1,500 TLDs that exist now. The root servers are hardcoded; they can’t easily be scaled or moved or anything. Their job is solely to tell you where .com is, where .net is, etc. You’re supposed to query them once and then hold on to your cached reply for 2+ days. Those servers have to serve the entire world, so you want as few queries hitting them as possible.

    Hosting a TLD is a huge commitment, and so it requires a lot of capital and a proper legal entity to contractually commit to its maintenance and compliance with regulations. TLDs get a ton of traffic, and users getting their own TLDs would shift the sum of all that gTLD traffic onto the root servers, which would be way too much.

    With the gTLDs and ccTLDs we have, at least there’s a decent amount of decentralization going on: .ca is managed by Canada, for example, and only Canada has jurisdiction over that domain, just like only China can take away your .cn. If everyone could get a TLD, the namespace would be full already, with all the good names squatted and held to be sold for as much as possible, like already happens with .com and .net.

    There have been attempts at a replacement, but so far they’ve all been crypto scams, or the dotcom bubble all over again with speculation on the cool names to sell to the highest bidder.

    That said, if you run your own DNS server and configure your devices to use it, you can use any domain you want. The problem is going to be getting the public Internet at large to recognize it as real.
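
    As a rough illustration of that last point, here’s a sketch with dnspython again, pointing a resolver at a hypothetical DNS server of your own (the 192.168.1.2 address and the made-up TLD are placeholders, not anything real):

    ```python
    import dns.resolver

    # Build a resolver that ignores the system config and only talks to
    # your own DNS server, which can answer for any made-up domain it likes.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["192.168.1.2"]  # hypothetical address of your own server

    for record in resolver.resolve("anything.mytld.", "A"):
        print("anything.mytld. ->", record)

    # Works fine for your own machines; the rest of the Internet asks the
    # public root servers instead, and those have never heard of .mytld.
    ```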




  • It does need both. Requires= alone only pulls the unit in as a dependency and activates it, but it doesn’t impose any ordering. You need After= to also declare that this unit must only be started after its dependency has finished starting, not merely been activated; otherwise they start in parallel, with the only guarantee being that both units get activated. There’s an even stronger directive, BindsTo=, which ties them together so that if the dependency is stopped, this unit gets deactivated too. If SMB is a hard dependency, that might be preferable: Requires=+After= still allows the mount to fail, but ensures that if it’s mountable it’ll be mounted before Docker, whereas with BindsTo=+After=, a failing SMB mount also shuts down Docker.
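
    For example, a drop-in for docker.service along these lines would express that (the unit names are hypothetical, assuming the share is defined by a mnt-data.mount unit):

    ```ini
    # Hypothetical /etc/systemd/system/docker.service.d/smb-mount.conf
    [Unit]
    # Pull the mount in and only start Docker once it has finished mounting:
    Requires=mnt-data.mount
    After=mnt-data.mount

    # Stricter variant: also stop Docker whenever the mount is stopped or fails.
    # BindsTo=mnt-data.mount
    # After=mnt-data.mount
    ```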



  • Yeah, it’ll depend on how good your coreboot implementation is. AFAIK it’s pretty good on Chromebooks because Google maintains it, whereas a corebooted ThinkPad might have some downsides to it.

    I would attribute the slowdowns to likely bad power management, because ultimately the code runs on the CPU with no involvement from the BIOS unless you explicitly call into it, which should be very rare.

    Looking up the article seems to confirm:

    The main reason, it seems, for the Dasharo firmware offering lower performance at times was that the Core i5 12400 being tested never exceeded a maximum peak frequency of 4.0GHz, while the proprietary BIOS successfully hit the 4.4GHz maximum turbo frequency of the i5-12400. Meanwhile the Dasharo firmware never led to the i5-12400 clocking down to 600MHz on all cores as a minimum frequency during idle; instead the minimum was around ~974MHz.

    I’d expect System76 laptops to have a smaller performance gap, if any, since it’s a first-party implementation and it’s in their interest for that stuff to work properly. But I don’t have any coreboot computers so I can’t verify; that’s all assumptions.

    That said, for a 5% performance loss, I’d say it counts as viable. My games VM takes a similar hit vs native. I’ve been gaming on Linux since well before Proton and Steam, and I’ve taken much larger performance hits before just to avoid closing all my work and rebooting for break-time games.



  • Yes, dual GPU. I set that up like 6 years ago, so its use has changed over time. It used to run Windows but now it’s another Linux VM.

    The reason I still use it is that it serves as a second seat, and it’s very convenient for that. The GPU’s output is connected to the TV, so the TV gets its own dedicated and independent OS, and my wife can use it when I’m not. When the VM isn’t running I use the card for render offload, so games get the full power of the better card as well.

    I also use it for toying with macOS and Windows, because both of those are basically unusable without some form of 3D acceleration. For Windows I use Looking Glass, which makes it feel pretty close to native performance. I don’t play games in it anymore, but I still need to run Visual Studio to build the Windows exes for some projects.

    This week I also used the second card to test out stuff on Bazzite, because one of my friends finally made the switch and I need to be able to test things in it, as I have no fucking clue how uBlue works.


  • The BIOS does a lot less than you’d expect; it doesn’t really have an impact on gaming performance. For what it’s worth, I’ve been gaming in a VM for years with the TianoCore/OVMF/EDK2 firmware, and no issues. Once Linux is booted, the firmware doesn’t really matter all that much. You’re not even allowed to use most firmware services after the OS has booted; they’re only meant for bootloaders or simple applications. As long as all the hardware is initialized and configured properly, it shouldn’t matter.



  • I guarantee there will be questions about the cost of setup, maintenance, and the risks.

    And about the time spent moderating it, especially if they run their own instance. At least with Twitter/Facebook/YouTube, you get a lot of moderation for free, whether you agree with it or not.

    And if they use another instance, there are further liability questions about which particular instance to choose. If it’s going to be an official city account, you’d expect some cybersecurity certifications to be a requirement and all kinds of stuff, even if it’s a free service, plus concerns about the instance admins interfering or possibly steering opinions during city elections, etc.

    Nobody cares about decentralized social networks, the technology, or how terrible the other outlets are. For a municipality, you may want to focus on maintaining multiple channels of communication and ways to reach and engage the most users, and then fold the fediverse into that as one more channel, something they should keep an eye on. They’ll need a way to post the same content to all those channels with the least effort, something easy that a trained intern or clerk can do.

    In this case IMO it might even be better to use something like WordPress with the ActivityPub plugin, or an alternative to it. I imagine a city mostly posts announcements and such, so a blog that serves as the official website and that people can follow and interact with from the comfort of their preferred social service sounds a lot more appealing than just another social network without that many users. You can even use more plugins to post to Facebook and Twitter as well, all from one place. Given the age of the board, they’re also more likely to know and care about Threads and Bluesky compatibility, simply because those have more users and bureaucratic decisions are based on numbers. A nice graph showing that by supporting AP and AT they’d capture all the users fleeing Twitter would go a long way.


  • It’s nicknamed the autohell tools for a reason.

    It’s neat, but most of its functionality is completely useless to most people. The autotools are so old I think they even predate Linux itself, so they’re designed for portability between the UNIXes of the time: they check the compiler’s capabilities and supported features and try to find paths. They also wildly predate package managers, and back then they were the official way to install things, so there was also a need to check for dependencies, find them, and all that stuff. Nowadays you might as well just write a PKGBUILD if you want to install it, or a Dockerfile; there’s just no need to check for 99% of the stuff the autotools check. Everything they check for has probably been a standard compiler feature for at least the last decade, and the package manager can ensure the build dependencies are present.

    Ultimately that whole process just ends up generating a Makefile via M4 macros, and the Makefiles it generates look about as good as any other generated Makefiles from the likes of CMake and Meson. So you might as well go with a hand-written Makefile (see the sketch at the end of this comment), and reach for a better tool when it’s time to generate one.

    (If only C++ build systems caught up to Golang lol)

    At least it’s not node_modules
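
    To be concrete about the hand-written Makefile option, a small project often needs nothing more than something like this (the project layout and names are made up; recipe lines must be indented with a tab):

    ```make
    # Minimal hand-written Makefile for a hypothetical C++ project in src/.
    CXX      ?= g++
    CXXFLAGS ?= -O2 -Wall -std=c++17

    SRCS := $(wildcard src/*.cpp)
    OBJS := $(SRCS:.cpp=.o)

    myapp: $(OBJS)
    	$(CXX) $(CXXFLAGS) -o $@ $^

    clean:
    	rm -f myapp $(OBJS)

    .PHONY: clean
    ```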