Centrist, progressive, radical optimist. Geophysicist, R&D, Planetary Scientist and general nerd in Winnipeg, Canada.

troyunrau.ca (personal)

lithogen.ca (business)

  • 2 Posts
  • 76 Comments
Joined 1 year ago
Cake day: June 12th, 2023




  • Duck typing is the best if fully embraced. But it also means you have to worry a little about clean failures once the project grows. I like this better than type checking relentlessly.

    It also means that your test suite or doctests or whatever should throw some unexpected types around now and again to check how it handles ducks and chickens and such :)
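A minimal sketch of the idea above: a duck-typed function that accepts anything with the right method, fails cleanly with a helpful error when given a chicken, and a test that deliberately throws the wrong type at it. The names (`Duck`, `Chicken`, `make_noise`) are illustrative, not from any real library.

```python
class Duck:
    def quack(self) -> str:
        return "quack"


class Chicken:
    # No quack() method -- the wrong kind of bird for make_noise().
    def cluck(self) -> str:
        return "cluck"


def make_noise(animal) -> str:
    """Duck-typed: anything with a .quack() method is accepted."""
    try:
        return animal.quack()
    except AttributeError as exc:
        # Fail cleanly: name the offending type instead of leaking
        # a raw AttributeError from deep inside the call.
        raise TypeError(
            f"{type(animal).__name__} object cannot quack"
        ) from exc


def test_make_noise_rejects_chickens():
    assert make_noise(Duck()) == "quack"
    try:
        make_noise(Chicken())
    except TypeError as exc:
        assert "Chicken" in str(exc)
    else:
        raise AssertionError("expected a clean TypeError")
```

The point is that the failure message tells you *which* type wandered in, which matters far more once the project grows.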



  • Troy@lemmy.ca to Technology@lemmy.world · Stack Overflow Website Traffic
    8 days ago

    The ideal result? LLMs are just early versions of much better things that come later.

    The unlikely result: we develop a separate human-curated internet somewhere, complete with verification that a human wrote every bit. Basically verifiable digital ID and signing on everything. Maybe.

    The probable result: the internet turns to shit as AIs are trained on content created by AIs.





  • Your assertion that the document is malicious without any evidence is what I’m concerned about.

    At some point you have to decide to trust someone. The comment above gave you reason to trust that the document was in a standard, non-malicious format. But you outright rejected their advice in a hostile tone. You base your hostility on a YouTube video.

    You should read Ken Thompson’s essay “Reflections on Trusting Trust” and then make a decision on whether you are going to participate in digital society or live under a bridge with a tinfoil hat.

    In Canada, and likely elsewhere, insurance companies know everything about you before you even apply. Even if they don’t have personally identifiable information, you’ll be in a data bucket with your neighbours, with risk profiles based on neighbourhood, the items being insured, claim rates for people with similar profiles, etc. Very likely every interaction you have with them has been going into an LLM, even prior to the advent of ChatGPT, and they will have scored those interactions against a model.

    The personally identifiable information has largely been anonymized in these models. In Canada, for example, there are regulatory bodies like OSFI that they have to report to, and get audited by, to ensure the data is being used in compliance with regulations. Each company will have a compliance department tasked with making sure they adhere.

    But what you will end up doing instead is triggering fraudulent-behaviour flags. There’s something called “address fraud”, where people go out of their way to disguise their location because some lower-risk address has better rates or whatever. When you do everything you can to scrub your location, this itself is a signal that you are operating as a highly paranoid individual, and that might put you in a bucket. If you want to be the most invisible to them, you want to act like you’re in the median of all categories, because any outlying behaviours further fingerprint you.

    Source: I have a direct connection to advanced analytics within the insurance industry (one degree of separation).









  • It’s barely even funny at this point.

    Although I’d quibble with Newsweek defining this guy as an oligarch. He was vice president at a company that was disbanded due to the company president’s opposition to Putin. It’s very possible this guy is just a former executive who refused to bend or hand over some dirt or something. It doesn’t appear he fits the definition of oligarch at all.

    Unless we’re just using the word to refer to all Russians above peasant rank.