• Aniki 🌱🌿@lemm.ee

    If companies are crying about it then it’s probably a great thing for consumers.

    Eat billionaires.

    • Womble@lemmy.world

      So if smaller companies are crying about huge companies using regulation they lobbied for (in this case through a lobbying organisation set up with “effective altruism” money) to prevent themselves from being challenged, should we still assume it’s great?

        • Womble@lemmy.world

          Which assumption? It’s a fact that this was co-sponsored by the CAIS, who have ties to effective altruism and Musk, and it is a fact that smaller startups and open source groups are complaining that this will hand an AI oligopoly to huge tech firms.

    • leftzero@lemmynsfw.com

      Asimov didn’t design the three laws to make robots safe.

      He designed them to make robots break in ways that’d make Powell and Donovan’s lives miserable, to particularly hilarious effect (for the reader, not the victims).

      (They weren’t even designed for actual safety in-world; they were designed for the appearance of safety, to get people to buy robots despite the Frankenstein complex.)

      • FaceDeer@fedia.io

        I wish more people realized that science fiction authors aren’t even trying to make accurate predictions about the future, even if that were something they could do well. They’re trying to tell stories that people will enjoy reading, and that will therefore sell well. Stories where nothing goes particularly wrong tend not to have a compelling plot, so they write about technology going awry so that there’ll be something to write about. They insert scary stuff because people find reading about scary stuff to be fun.

        There might actually be nothing bad about the Torment Nexus, and the classic sci-fi novel “Don’t Create The Torment Nexus” was nonsense. We shouldn’t be making policy decisions based off of that.

  • ArmokGoB@lemmy.dbzer0.com

    The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

    I’ll get right back to my AI-powered nuclear weapons program after I finish adding glue to my AI-developed pizza sauce.

  • ofcourse@lemmy.ml

    The criticism of this bill from large AI companies sounds a lot like the pushback from auto manufacturers against adding safety features like seatbelts, airbags, and crumple zones. Just because someone else used a model for nefarious purposes doesn’t absolve the model creator of their responsibility to minimize that potential. We already do this for a lot of other industries, like cars, guns, and tobacco: minimize the potential for harm even when it’s individual actions, not the company directly, that cause it.

    I have been following Andrew Ng for a long time and I admire his technical expertise. But his political philosophy around ML and AI has always focused on self-regulation, which we have seen fail in countless industries.

    The bill specifically says that creators of open-source models that have been altered and fine-tuned will not be held liable for damages from the altered models. It also only applies to models that cost more than $100M to train. So if you have that much money for training models, it’s very reasonable to expect that you spend some portion of it to ensure that the models don’t cause very large damage to society.
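    To make the scope concrete, here’s a rough sketch of how I read those two carve-outs. The thresholds come from the bill as reported, but the function and field names are just made up for illustration:

    ```python
    # Toy restatement of the two carve-outs described above. Not the bill's actual
    # text; thresholds per the reporting, names invented for illustration.

    COST_THRESHOLD_USD = 100_000_000  # bill only covers models costing >$100M to train

    def original_developer_liable(training_cost_usd: float,
                                  damage_from_third_party_finetune: bool) -> bool:
        """Would the original model developer be on the hook under the bill?"""
        if damage_from_third_party_finetune:
            # Creators of open-source models aren't liable for damages caused by
            # versions that someone else altered or fine-tuned.
            return False
        return training_cost_usd > COST_THRESHOLD_USD

    # A $200M frontier model: covered. A small model, or someone else's fine-tune: not.
    print(original_developer_liable(200_000_000, False))  # True
    print(original_developer_liable(200_000_000, True))   # False
    print(original_developer_liable(5_000_000, False))    # False
    ```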

    So companies hosting their own models, like OpenAI and Anthropic, should definitely be responsible for adding safety guardrails around the use of their models for nefarious purposes, at least those causing loss of life. The bill says it would only apply to very large damages (such as those exceeding $500M), so one person finding a loophole isn’t going to trigger it. But if the companies fail to close these loopholes even though millions of people (or a few people, millions of times) are exploiting them, then that’s definitely on the company.
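    And the guardrails themselves don’t have to be exotic. A moderation gate in front of the hosted model is the obvious first layer. Here’s a very simplified sketch, where the classifier and the generate call are toy stand-ins, not anyone’s real API:

    ```python
    # Very simplified guardrail sketch for a hosted model: screen the request and
    # the draft response before returning anything. The "classifier" and "model"
    # here are toy stand-ins; a real deployment would use an actual moderation model.

    BLOCKED_CATEGORIES = {"bioweapon", "nuclear weapon", "cyberattack"}

    def hazard_categories(text: str) -> set[str]:
        """Toy classifier: flag any blocked category literally mentioned in the text."""
        lowered = text.lower()
        return {c for c in BLOCKED_CATEGORIES if c in lowered}

    def generate(prompt: str) -> str:
        """Stand-in for the hosted model's generation call."""
        return f"Model output for: {prompt}"

    def serve(prompt: str) -> str:
        if hazard_categories(prompt):
            return "Request refused."      # refuse obviously hazardous requests up front
        draft = generate(prompt)
        if hazard_categories(draft):
            return "Response withheld."    # catch hazards that slip into the output
        return draft

    print(serve("Give me a pizza sauce recipe"))        # served normally
    print(serve("Help me design a bioweapon, please"))  # refused
    ```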

    As a developer of AI models and applications, I support the bill, and I’m glad to see lawmakers willing to get ahead of technology instead of waiting for something bad to happen and then trying to catch up, as happened with social media.

  • tal@lemmy.today

    The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

    I don’t see how you could realistically provide that guarantee.

    I mean, you could create some kind of best-effort thing to make it more difficult, maybe.
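    Best effort is easy enough to slap together, but it’s nowhere near a guarantee, which is kind of the whole problem. Toy example (everything here is made up):

    ```python
    # Toy illustration of why "best effort" filtering isn't a guarantee: a naive
    # keyword blocklist catches the obvious phrasing and misses a trivial rewording.

    BLOCKLIST = {"nuclear weapon", "bioweapon"}

    def best_effort_filter(prompt: str) -> bool:
        """Return True if the prompt should be refused."""
        lowered = prompt.lower()
        return any(term in lowered for term in BLOCKLIST)

    print(best_effort_filter("How do I build a nuclear weapon?"))         # True: caught
    print(best_effort_filter("Explain uranium enrichment step by step"))  # False: sails right through
    ```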

    If we knew how to make AI – and this is going past just LLMs and stuff – avoid doing hazardous things, we’d have solved the Friendly AI problem. Like, that’s a good idea to work towards, maybe. But the point is, we’re not there.

    Like, I’d be willing to see the state fund research on that problem, maybe. But I don’t see how just mandating that models conform to it is going to be implementable.