No phone can currently run "LLMs", because by definition they are large.
Some Android phones, however, can and do run smaller models locally. Gemini Nano runs on the Pixel 8 and can run on Samsung phones.
It’s not an LLM; it’s a much smaller model (~3B parameters), closer to what Microsoft labels an SLM (Small Language Model, e.g. MS Phi-3 Mini).
https://machinelearning.apple.com/research/introducing-apple-foundation-models
How do you tell whether a piece of work contains AI-generated content or not?
It’s not hard to generate a piece of AI content, put in some hours to smooth out the AI’s signatures / common mistakes, and pass it off as your own. So in practice it’s still easy to benefit from AI systems by masking generated content as largely your own.
My personal observation is that people have been fed up for quite a while, not so much with the Instagram app itself as with Meta’s brand, their untrustworthiness, and the generally vapid and scammy nature of the hordes of influencers and “hustlers”. It’s just that regular folks aren’t aware of decent alternatives, or the alternatives aren’t quite there yet.
AI is a blanket term used to describe many different things, and more recently it has been used as a bogeyman by the media to scare everyone’s pants off.
The “AI” that’s all the hype recently, à la ChatGPT, Bard, etc., is “generative AI” based on Large Language Models. These models seem really good at answering questions, creating content, rewriting text, and so on. The “threat” to humanity at the moment is more about industries being disrupted, jobs being replaced by these technologies, etc. Customer service, copywriting, legal, and creative industries are all impacted. In the longer term, as with all technologies, there is a concern that there will be an imbalance in access to this tech and that, for example, only the rich and powerful will truly be able to harness the power of these tools.
There is also the more doomsday interpretation of “AI”, which in this case really means AGI (Artificial General Intelligence), where the AI actually becomes sentient and can think / reason for itself. I think this is still in the realm of science fiction today, but who knows about the future. The worry here is that if such a sentient being became malevolent for one reason or another, we would be dealing with an AI Overlord kind of scenario, given the superior computing power, access, and knowledge it would have.
I do think it’s a useful distinction, considering open models can exceed 100B parameters nowadays and GPT-4 is rumored to be 1.7T params. Plus, this class of smaller models is far more likely to run on-device.
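To make the size gap concrete, here is a rough back-of-envelope sketch (my own illustration, not from the thread) of the memory needed just to hold the weights at different parameter counts and quantization levels; it ignores activations, KV cache, and runtime overhead, and the model labels are only examples.

    # Rough weight-memory estimate for different model sizes and quantization levels.
    # Back-of-envelope only: ignores activations, KV cache, and runtime overhead.

    def weight_memory_gib(params_billion: float, bits_per_param: float) -> float:
        """Approximate memory (GiB) needed to hold the weights alone."""
        bytes_total = params_billion * 1e9 * bits_per_param / 8
        return bytes_total / 2**30

    models = {
        "~3B SLM (e.g. Gemini Nano / Phi-3 Mini class)": 3,
        "100B open model": 100,
        "1.7T (rumored GPT-4 scale)": 1700,
    }

    for name, size_b in models.items():
        fp16 = weight_memory_gib(size_b, 16)  # half precision
        int4 = weight_memory_gib(size_b, 4)   # aggressive 4-bit quantization
        print(f"{name}: ~{fp16:,.0f} GiB at fp16, ~{int4:,.1f} GiB at 4-bit")

A ~3B model quantized to 4 bits comes out around 1.4 GiB, which plausibly fits in a modern phone’s RAM alongside the OS; a 100B+ model is tens of GiB even heavily quantized, which is why the SLM/LLM distinction matters for on-device use.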