I’m just some guy, you know.

  • 8 Posts
  • 338 Comments
Joined 7 months ago
Cake day: May 7th, 2024

  • We can conclude: that photo isn’t AI-generated. You can’t get an AI system to generate photos of an existing location; it’s just not possible given the current state of the art.

    That’s a poor conclusion. A similar image could be created using masks and AI inpainting. You could take a photo on a rainy day and add in the disaster components using GenAI.

    That’s very likely not what happened in this scenario, but verifying that a photo shows a real-world location isn’t enough to conclude that GenAI wasn’t involved in making it.
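    To make the masks-and-inpainting point concrete, here’s a minimal sketch of the workflow. The mask preparation runs as-is with Pillow; the actual inpainting call (shown commented out) assumes the Hugging Face `diffusers` library and an inpainting checkpoint, and the model name is illustrative, not a recommendation.

    ```python
    # Sketch: compositing "disaster" elements into a real photo via AI inpainting.
    from PIL import Image, ImageDraw

    # Stand-in for the ordinary rainy-day photo of the real location.
    photo = Image.new("RGB", (512, 512), "gray")

    # Build a mask: white pixels mark the region the model is allowed to repaint.
    mask = Image.new("L", photo.size, 0)
    draw = ImageDraw.Draw(mask)
    draw.rectangle([0, 256, 512, 512], fill=255)  # e.g. the street, to be "flooded"

    # The inpainting step itself needs a downloaded model, so it's commented out:
    # from diffusers import StableDiffusionInpaintPipeline
    # pipe = StableDiffusionInpaintPipeline.from_pretrained(
    #     "stabilityai/stable-diffusion-2-inpainting")
    # result = pipe(prompt="flooded street, storm debris",
    #               image=photo, mask_image=mask).images[0]
    ```

    Everything outside the mask is kept from the original photo, which is why the result can still match a verifiable real-world location.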

  • “Open Source” is mostly the right term. AI isn’t code, so there’s no source code to open up. If you provide the dataset you trained on and open up the code used to train the model, that’s pretty close.

    Otherwise, “open weights” and “free use” are the more accurate terms.

    For example, ChatGPT 3+ is undeniably closed/proprietary. You can’t download the model and run it on your own hardware. The dataset used to train it is a trade secret. You have to agree to all of OpenAI’s terms to use it.

    LLaMa is way more open. The dataset is largely known (though no public master copy exists). The training code is open source. You can download the model for local use and train new models based on the weights of the base model. The license allows all of this.

    It’s just not a 1:1 equivalent to open source software. It’s closer to royalty-free media, but with big collections of conceptual weights instead of images or audio.