Why do you think Trump started peddling Jesus bullshit last time he ran, even though he’s not even remotely Christian or religious? He’ll shill anything as long as he thinks it’ll profit him.
The fact that people bought NFTs just proves that crypto bros buy into hype without understanding the technology.
I knew NFTs were bullshit from the start because I actually took the time to understand how they “worked”.
That goes for both…
We’re discussing Apple’s implementation of an OS level AI, it’s entirely relevant.
GrapheneOS has technical merit and is completely open source; in fact, many of the security improvements to Android/AOSP come from GrapheneOS.
I love Olan’s.
Who?
Yeah, and Apple is completely untrustworthy like any other corporation, which is my point exactly. Idk about you, but I’ll stick to what I can verify the security & privacy of for myself, e.g. Ollama, GrapheneOS, Linux, Coreboot, Libreboot/Canoeboot, etc.
However, to process more sophisticated requests, Apple Intelligence needs to be able to enlist help from larger, more complex models in the cloud. For these cloud requests to live up to the security and privacy guarantees that our users expect from our devices, the traditional cloud service security model isn’t a viable starting point. Instead, we need to bring our industry-leading device security model, for the first time ever, to the cloud.
As stated above, Private Cloud Compute has nothing to do with the OS-level AI itself. ರ_ರ That’s in the cloud, not on device.
While we’re publishing the binary images of every production PCC build, to further aid research we will periodically also publish a subset of the security-critical PCC source code.
As stated here, it still has the same issue of not being 100% verifiable: they only publish the subset of the code they deem “security-critical”, which doesn’t allow us to verify the handling of user data.
- It’s difficult to provide runtime transparency for AI in the cloud.
Cloud AI services are opaque: providers do not typically specify details of the software stack they are using to run their services, and those details are often considered proprietary. Even if a cloud AI service relied only on open source software, which is inspectable by security researchers, there is no widely deployed way for a user device (or browser) to confirm that the service it’s connecting to is running an unmodified version of the software that it purports to run, or to detect that the software running on the service has changed.
Adding to what it says here: if the on-device AI is compromised in any way, be it by an attacker or by Apple themselves, then PCC is rendered irrelevant regardless of whether PCC is open source or not.
Additionally, I’ll raise the issue that this entire blog post is just that: a blog post. Nothing stated here is legally binding, so any claims about how they handle user data can easily be dismissed as marketing.
Their keynotes are irrelevant; their official privacy policies and legal disclosures take precedence over marketing claims or statements made in keynotes or presentations. Apple’s privacy policy states that the company collects data necessary to provide and improve its products and services. The OS-level AI would fall under this category, allowing Apple to collect data processed by the AI to improve its functionality and models. Apple’s keynotes and marketing materials do not carry legal weight when it comes to their data practices. With the AI system operating at the OS level, it likely has access to a wide range of user data, including text inputs, conversations, and potentially other sensitive information.
Apple claimed that their privacy could be independently audited and verified.
How? The only way to truly do that to a 100% verifiable degree is if it were open source, and I highly doubt Apple would do that, especially considering its OS-level integration. At best, they’d probably only have a self-report mechanism, which would likely be proprietary itself and therefore not verifiable either.
you can use it in almost any app
if done right
How are you going to be able to use it in “almost any app” in a way that is secure? How are you going to design it so that the apps don’t abuse the AI to get more information on the user out of it than intended? Seems pretty damn inherently insecure to me.
1. Monopolistic business practices to crush competition (Netscape, Java, web browsers, etc.).
2. Illegal bundling of Internet Explorer with Windows to eliminate browser rivals.
3. Keeping useful Windows APIs secret from third-party developers to disadvantage competitors.
4. Embracing proprietary software and vendor lock-in tactics to prevent users from switching.
5. “Embrace, Extend, Extinguish” strategy against open source software.
6. Privacy violations through excessive data collection, user tracking, and sharing data with third parties.
7. Complicity in enabling government surveillance and spying on user data (PRISM scandal).
8. Deliberately making hardware/software incompatible with open source alternatives.
9. Anti-competitive acquisitions to eliminate rivals or control key technologies (GitHub, LinkedIn, etc.).
10. Unethical contracts providing military technology like HoloLens for warfare applications.
11. Failing to address workplace issues like sexual harassment at acquired companies.
12. Forced automatic Windows updates that override user control and cause system issues.
13. Maintaining monopolistic dominance in productivity software and operating systems.
14. Vague and toothless AI ethics principles while pursuing lucrative military AI contracts.
15. Continued excessive privacy violations and treating users as products with Windows.
16. Restrictive proprietary licensing that stifles open source adoption.
Aptoide sucks.
There are a few candidates; the most prominent are probably:
Greg Kroah-Hartman is speculated to be the most likely candidate, but it also depends on a few factors: if Linus were to die suddenly vs. dying slowly or simply stepping down, there’d be a big difference in the selection process.
Ofc, things may change in the future, and there are many other talented developers who could be considered. Nothing is set in stone.
Linus hasn’t written kernel code in years at this point; however, he is still the final gatekeeper of what gets merged and an active code reviewer, and he manages the entire direction of the project.
As for what will happen when Linus passes, that’s already been decided: the position of project leader will go to his most trusted co-maintainer, and we have a good idea of who that is.
Why should we have the same standard for two fundamentally different languages with distinct design philosophies and features?
Even if the C coding standard were used, it fundamentally would not make Rust more legible to C-only kernel devs. Imposing the C coding standard on Rust would be fundamentally counterproductive, as it would undermine Rust’s safety and productivity features. Rust’s coding guidelines align with its design principles, promoting idiomatic Rust code that leverages language features like ownership, borrowing, and lifetimes.
This ensures that Rust code in the kernel is safe, concurrent, and maintainable, while adhering to the language’s best practices.
While the C coding standard served its purpose well for the procedural C language, it is ill-suited for a modern language like Rust, which has different priorities and language constructs. Having separate coding standards for C and Rust is the sensible approach: it allows each language to shine in its respective domain within the kernel, leveraging its strengths while adhering to its design philosophy.
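To make that concrete, here’s a toy sketch (my own illustration, not actual kernel code) of the kind of idiom a C-oriented style guide has no vocabulary for: the borrow checker enforces at compile time a use-after-move rule that C guidelines can only ask reviewers to catch by hand.

```rust
// Toy illustration of ownership vs. borrowing, not kernel code.

// Borrows the slice read-only; the caller keeps ownership.
fn sum(v: &[u32]) -> u32 {
    v.iter().sum()
}

// Takes ownership; the vector is freed when this function returns.
fn consume(v: Vec<u32>) -> u32 {
    v.into_iter().sum()
}

fn main() {
    let data = vec![1, 2, 3];
    let s = sum(&data);    // fine: only a borrow
    let t = consume(data); // `data` is moved here
    // println!("{:?}", data); // compile error: use after move,
    //                         // caught statically, not in review
    assert_eq!(s, t);
    println!("{}", s);
}
```

A C-style guide can mandate naming and brace placement, but it can’t express “this function consumes its argument”; in Rust that contract lives in the signature itself, which is exactly why the guidelines differ.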
In any case, it’s the temporary file directory so it should be fine to delete them manually.
Just make sure that podman isn’t running while you’re deleting them, assuming it is podman.
That’s the way Vaxry spun it to try to make them look bad while downplaying his own behavior and malpractice. Basically, he just lied and spread misinformation; he really needs to get his act together if he ever wants to work with them again.
That’s not what’s happening here. They are barring him from participating in their project, he can still say whatever the fuck he wants on his own project. They just don’t want to work with him.
If you walk into a job interview and they find tweets of you saying the n-word over and over, and you don’t get the job, that’s a you problem. Nobody wants to work with people who’ll bring negative PR.
Windows 12 will be a massive hit, just like google stadia.
Ofc it’s prone to bullshitting, it can’t even stay consistent; shit will contradict itself while citing the same sources.