Linux server admin, MySQL/TSQL database admin, Python programmer, Linux gaming enthusiast and a forever GM.
I guess Germans read about the fate of the Eldar & birth of Slaanesh and took that as an instruction guide.
Another Deezer user in the wild! Been a subscriber to it for years now.
Need a dispenser here!
Thanks for the link, it was a very interesting read. While it's disappointing that it's not actually a collective (assuming this blog post is accurate), having a platform run and owned by 6 creators is still better than YouTube's governance structure, and it still has the advantage of having both the capacity and the desire to invest in creators.
An advantage of funding things via a collective like Nebula, as opposed to each individual creator managing their own patrons, is that new creators can start making bigger, more expensive projects sooner. Even established creators benefit: they can take bigger risks on bigger projects with the safety net of a share of the Nebula pie.
I don’t think a project like The Prince would exist without Nebula, for example.
uBlock Origin isn't the only ad blocker out there. If you like uBlock Origin, use uBlock Origin Lite. It's fully Manifest V3 compliant.
LLM’s aren’t real AI
I think that’s mostly a semantics issue. When people talk about AI here on Lemmy, they generally mean AGI. LLMs are not AGIs, as far as I understand it.
anti-anything google
I hear that. I went through the technical reasons for the Manifest V2 deprecation (if this were only meant to target uBlock Origin, why did they implement filter lists into the browser? Why does uBlock Origin Lite work just fine?) and it got more downvotes than upvotes. Haters gonna hate, I guess :))
This video does an ok job of it.
This seems to be the earliest article about Vasile Gorgos, and it's missing a lot of the details in this retelling. I think that means a lot of these details (wearing the same clothes, the same train ticket he left with, the mysterious car speeding off) were added later.
EDIT: In the video, they actually show the train ticket. It’s from 2021, from Ploesti to his home village, and his daughter-in-law says a friend of his picked him up from the train station after recognizing him. Also, unlike this version, he didn’t say he’d been at home, he said he wanted to go home. He does look and sound very visibly senile.
Also, unlike this version, the original does not give him a clean bill of health. It specifically says he has neurological problems and can no longer recognize his son or his son’s wife.
If you'll allow a bit of speculation, my guess is the guy abandoned his family, went off and lived his life, and the police never really took the missing persons case seriously and never really looked for him. Decades later, he starts becoming senile. A befuddled old man, still carrying his unchanged ID card, gets put on a train home by a good Samaritan and gets recognized at the train station.
Exactly what I wanted to say. All that talk of “perfection” makes me imagine them snapping and going full psycho because a train was cancelled and they need to book a different one.
To OP: just stop trying to plan that much. A general plan is good. Just be aware things will change and that’s ok. As long as you two are having a good time, the rest really doesn’t matter as much as you think it does.
If you want a little psychological trick to make the trip more memorable than it otherwise would be, whatever you think is going to be the most impressive, save it for last. Our memories have a very strong recency bias.
So basically the Lemmy version of Subreddit Simulator, but allowing users as well?
Yes, absolutely. That is a concern that I too share, fellow meat being. We should be vigilant against superior, more capable, and really friendly artificial intelligences.
to have this relationship between A and B you have to make a third database
Probably just a mistake here, but you make a third table, not a new database.
Apart from that (and the fact that one-to-many and many-to-one are the same thing), yeah, looks correct.
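If it helps to see the "third table" idea concretely, here's a minimal sketch using Python's built-in sqlite3. The student/course/enrollment names are just made up for illustration, not from your schema:

```python
import sqlite3

# In-memory database purely for the demo; all table and column names are hypothetical.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Table A and table B.
cur.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE course (id INTEGER PRIMARY KEY, title TEXT)")

# The "third table": a junction table holding pairs of foreign keys.
# Each row links one student to one course, which is what gives you many-to-many.
cur.execute("""
    CREATE TABLE enrollment (
        student_id INTEGER REFERENCES student(id),
        course_id  INTEGER REFERENCES course(id),
        PRIMARY KEY (student_id, course_id)
    )
""")

cur.execute("INSERT INTO student (name) VALUES ('Ada'), ('Linus')")
cur.execute("INSERT INTO course (title) VALUES ('Databases'), ('Networks')")
cur.execute("INSERT INTO enrollment VALUES (1, 1), (1, 2), (2, 1)")

# Who's enrolled in what? Join through the junction table.
for name, title in cur.execute("""
    SELECT s.name, c.title
    FROM student s
    JOIN enrollment e ON e.student_id = s.id
    JOIN course c ON e.course_id = c.id
"""):
    print(name, title)
```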
Even the question of “who” is a fascinating deep dive in and of itself. Consciousness as an emergent property implies that your gut microbiome is part of the “who” doing the thinking in the first place :))
So, first of all, thank you for the cogent attempt at responding. We may disagree, but I sincerely respect the effort you put into the comment.
The specific part that I thought seemed like a pretty big claim was that human brains are “simply” more complex neural networks and that the outputs are based strictly on training data.
Is it not well established that animals learn and use reward circuitry like the role of dopamine in neuromodulation?
While true, this is way too reductive to be a one-to-one comparison with LLMs. Humans have genetic instinct and a body-mind connection that isn't cleanly mappable onto a neural network. For example, biologists are only just now scraping the surface of the link between the brain and the gut microbiome, which plays a much larger role in cognition than previously thought.
Another example where the brain = neural network model breaks down is the fact that the two hemispheres are much more separated than previously thought. So much so that some neuroscientists are saying that each person has, in effect, 2 different brains with 2 different personalities that communicate via the corpus callosum.
There are many more examples I could bring up, but my core point is that the analogy of neural network = brain is just that, a simplistic analogy, on the same level as thinking about gravity only as "the force that pulls you downwards".
To say that we fully understand the brain, to the point where we can even make a model of a mosquito's brain (220,000 neurons), is, I think, mistaken. I'm not saying we'll never understand the brain well enough to attempt such a thing; I'm just saying that drawing a casual equivalence between mammalian brains and neural networks is woefully inadequate.
That’s a strong claim. Got an academic paper to back that up?
This is why I strictly refer to these things as LLMs. That’s what they are.
I’m happy with the Oxford definition: “the ability to acquire and apply knowledge and skills”.
LLMs don’t have knowledge as they don’t actually understand anything. They are algorithmic response generators that apply scores to tokens, and spit out the highest scoring token considering all previous tokens.
If asked to answer 10*5, they can't reason through the math. They can only recognize 10, *, and 5 as a sequence of tokens that, in the training data, is usually followed by the 50 token. Thus, 50 is the highest-scoring token, and that's the answer it will choose. Things get more interesting when you ask questions that aren't in the training data. If it has nothing more direct to copy from, it will regurgitate a sequence of tokens that sounds as close as possible to something in the training data: thus, a hallucination.
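To make the "highest-scoring token" idea concrete, here's a toy sketch in Python. The vocabulary and scores are invented purely for illustration; a real LLM computes these scores from billions of learned weights, but the selection step is the same basic idea:

```python
# Toy illustration of greedy next-token selection.
# A real LLM scores every token in its vocabulary given the context so far;
# this fake "model" just hard-codes a few scores.

def toy_scores(context: tuple[str, ...]) -> dict[str, float]:
    # Invented numbers purely for illustration.
    if context == ("10", "*", "5", "="):
        return {"50": 0.92, "15": 0.05, "500": 0.03}
    # Contexts it has never "seen" still get some made-up fallback scores,
    # which is roughly where hallucinations come from.
    return {"plausible-sounding-token": 0.6, "something-else": 0.4}

def next_token(context: tuple[str, ...]) -> str:
    scores = toy_scores(context)
    # Greedy decoding: pick the single highest-scoring token.
    return max(scores, key=scores.get)

print(next_token(("10", "*", "5", "=")))  # -> "50"
```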
Considering they’d just spent the previous few questions discussing the visual-first aspect of touchscreens and accessibility issues for the visually impaired, I think that’s exactly what they were talking about.
The generalizations are about completely different devices. They talk about CT machines & automatic defibrillators later.