Actually, really liked the Apple Intelligence announcement. It must be a very exciting time at Apple as they layer AI on top of the entire OS. A few of the major themes:
Step 1 Multimodal I/O. Enable text/audio/image/video capability, both read and write. These are the native human APIs, so to speak.
Step 2 Agentic. Allow all parts of the OS and apps to inter-operate via “function calling”; a kernel-process LLM that can schedule and coordinate work across them given user queries (a minimal sketch of this pattern follows the list).
Step 3 Frictionless. Fully integrate these features in a highly frictionless, fast, “always on”, and contextual way. No going around copy-pasting information, engineering prompts, etc. Adapt the UI accordingly.
Step 4 Initiative. Don’t just perform a task given a prompt; anticipate the prompt, suggest, initiate.
Step 5 Delegation hierarchy. Move as much intelligence as you can on device (Apple Silicon is very helpful and well-suited for this), but allow optional dispatch of work to the cloud (a toy routing sketch also follows the list).
Step 6 Modularity. Allow the OS to access and support an entire and growing ecosystem of LLMs (e.g. ChatGPT announcement).
Step 7 Privacy. <3
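To make Step 2 a bit more concrete, here is a minimal, hypothetical sketch of the function-calling pattern: the model emits a structured call (a tool name plus arguments) and a dispatcher routes it to an action an app has exposed. None of the names here (FunctionCall, the registry, the calendar.create_event / messages.send tools) are Apple’s API; they are made up for illustration.

```swift
import Foundation

// The structured call the model is asked to emit: a tool name plus string arguments.
struct FunctionCall: Codable {
    let name: String
    let arguments: [String: String]
}

// Hypothetical registry of actions that apps expose to the system-level LLM.
var registry: [String: ([String: String]) -> String] = [:]
registry["calendar.create_event"] = { args in
    "Created event '\(args["title"] ?? "")' at \(args["time"] ?? "")"
}
registry["messages.send"] = { args in
    "Sent '\(args["body"] ?? "")' to \(args["to"] ?? "")"
}

// Pretend the model produced this JSON for the query "book a dentist visit at 3pm".
let modelOutput = #"{"name": "calendar.create_event", "arguments": {"title": "Dentist", "time": "3pm"}}"#

// The dispatcher parses the call and hands it to the matching handler.
if let data = modelOutput.data(using: .utf8),
   let call = try? JSONDecoder().decode(FunctionCall.self, from: data),
   let handler = registry[call.name] {
    print(handler(call.arguments))  // Created event 'Dentist' at 3pm
}
```

A real system would presumably validate each call against per-app schemas and permission checks before executing anything; the point of the sketch is just the model-to-dispatcher-to-app shape.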
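For Step 5, here is a toy sketch of the delegation idea: keep a request on device whenever the local model can handle it, and escalate to a cloud backend only when necessary. The needsLargeContext flag and the routing rule are placeholders for illustration, not how Apple actually decides.

```swift
import Foundation

// Where a request gets served in this toy model: locally or in the cloud.
enum Destination { case onDevice, cloud }

struct Request {
    let prompt: String
    let needsLargeContext: Bool  // stand-in for "too heavy for the on-device model"
}

// Prefer the on-device model; dispatch to the cloud only when necessary.
func route(_ request: Request) -> Destination {
    return request.needsLargeContext ? .cloud : .onDevice
}

let quick = Request(prompt: "Summarize this note", needsLargeContext: false)
let heavy = Request(prompt: "Cross-reference a year of email", needsLargeContext: true)

print(route(quick))  // onDevice
print(route(heavy))  // cloud
```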
We’re quickly heading into a world where you can open up your phone and just say stuff. It talks back and it knows you. And it just works. Super exciting and as a user, quite looking forward to it.
How? The only way to truly be able to do that to a 100% verifiable degree is if it were open source, and I highly doubt Apple would do that, especially considering its OS-level integration. At best, they’d probably only have a self-report mechanism, which would also likely be proprietary and therefore not verifiable in itself.
They have designed a very extensive solution for this, Private Cloud Compute: https://security.apple.com/blog/private-cloud-compute/
All I have seen from security researchers reviewing this is that it will probably be one of the best solutions of its kind - they basically do almost everything correctly, and extensively so.
The only critique I’ve seen is that they could have provided even more source code and easier ways for third parties to verify their claims, but it is understandable that they didn’t.
As stated above, Private Cloud Compute has nothing to do with the OS-level AI itself. ರ_ರ That’s in the cloud, not on device.
As stated here, it still has the same issue of not being 100% verifiable: they only publish a few code snippets they deem “security-critical”, which doesn’t allow us to verify the handling of user data.
Adding to what’s said here, if the on-device AI is compromised in any way, be it by an attacker or by Apple themselves, then PCC is rendered irrelevant, regardless of whether PCC is open source or not.
Additionally, I’ll raise the issue that this entire blog post is just that, a blog post: nothing stated in it is legally binding, so any claims about how they handle user data are effectively irrelevant and can easily be dismissed as marketing.
Security and privacy in 2024 are unfortunately about trust, not technology, unless you are able to isolate yourself or design and produce all the chips you use yourself.
Yeah, and Apple is completely untrustworthy, like any other corporation; my point exactly. Idk about you, but I’ll stick to what I can verify the security & privacy of for myself, e.g. Ollama, GrapheneOS, Linux, Coreboot, Libreboot/Canoeboot, etc.
Ok, I just don’t see the relevance to this post then. Sure, you’re fine to rant about Apple in any thread you want to; it’s just not particularly relevant to AI, which was the technology in question here.
I hear good things about GrapheneOS but just stay away from it because of all the stranger. I love Olan’s.
We’re discussing Apple’s implementation of an OS-level AI; it’s entirely relevant.
GrapheneOS has technical merit and is completely open source; in fact, many of the security improvements to Android/AOSP come from GrapheneOS.
Who?