
LinkedIn’s AI Training: Your Data, Your Privacy, Your Choice?


LinkedIn recently updated its Privacy Policy, effective September 18, 2024. While changes to the User Agreement were announced just today (September 25, 2024) via an email sent to users, a more pressing issue lies within the Privacy Policy already in effect: the automatic opt-in for AI training using user data.

This policy allows LinkedIn to use member information to train artificial intelligence models, including generative AI systems; users must manually opt out if they wish to exclude their data from this process. While LinkedIn has tried to shed light on its AI training practices to assuage users’ concerns, privacy advocates and users alike have been vocal in their objections (here, here, and here, just to name a few), highlighting the ongoing tension between technological advancement and individual privacy rights.

LinkedIn’s decision to opt for an opt-out model

Users have valid reasons to be apprehensive about their data being used for AI training. The professional nature of information shared on LinkedIn makes it particularly sensitive. Resumes, work histories, and professional connections are valuable data points that users might not want to contribute to AI development without explicit consent.

Moreover, the potential for this data to be used beyond LinkedIn’s immediate services raises additional concerns. Because LinkedIn is a subsidiary of Microsoft, there’s a possibility that user data could be shared with or used by the tech giant, essentially turning LinkedIn profiles into training fodder for a much broader AI ecosystem.

Beyond the privacy concerns, there’s a valid financial argument as well. User content includes articles and other forms of creative, original work. If this material is used to improve LinkedIn’s services to the end users’ benefit, so be it; but if LinkedIn derives monetary benefit from training on such content, then arguably, much as in the copyright infringement cases surrounding LLM training generally, the users should share in the financial gain.

The primary concern with LinkedIn’s decision to adopt an opt-out model is that it places the burden of privacy protection on the user. Many may be unaware of the policy, or may find the process of opting out cumbersome, accessible only via a convoluted tree of account settings menus (Your Profile Picture > Settings & Privacy > Data Privacy > Data for Generative AI Improvement > Toggle switch OFF), leading to unintended participation in AI training.
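
To see concretely why the default matters, here is a minimal, hypothetical Python sketch; the class and function names are invented for illustration and reflect nothing about LinkedIn’s actual systems:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MemberSettings:
    # Hypothetical flag standing in for the
    # "Data for Generative AI Improvement" toggle.
    use_data_for_genai: bool


def eligible_for_training(settings: Optional[MemberSettings]) -> bool:
    """Return True if a member's data may be used for AI training."""
    if settings is None:
        # Opt-out model: a member who never visited the setting
        # (no stored preference) is treated as having consented.
        return True
    return settings.use_data_for_genai


# A member who found the toggle and switched it off is excluded:
assert eligible_for_training(MemberSettings(use_data_for_genai=False)) is False

# A member who never saw the setting is swept in automatically:
assert eligible_for_training(None) is True
```

Under an opt-in model, the None branch would return False instead; silence would mean exclusion rather than consent, which is precisely the inversion privacy advocates are asking for.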

Just another form of product improvement?

It’s instructive, however, to consider the broader context of data usage in online services. Nearly all digital platforms use user data to improve their services in some capacity. From refining search algorithms to enhancing user experience, data-driven improvements are a cornerstone of digital evolution.

AI training, while more advanced, can be seen as an extension of this practice. It has the potential to significantly enhance LinkedIn’s services, offering more personalized job recommendations, improved content curation, and more efficient networking tools. These advancements could provide tangible benefits to users, making their professional networking more effective and tailored to their needs.

Furthermore, LinkedIn’s provision of an opt-out option, while not ideal, does offer more control than many other platforms provide. It represents a step towards transparency and user choice in an era where data usage policies are often opaque and inflexible.

The EU’s anti-AI stance

Interestingly, LinkedIn’s approach to AI training diverges significantly in the European Union: due to the EU’s stringent data-protection regime (most notably the GDPR) and the new EU AI Act, users in EU countries are not subject to automatic opt-in for AI training.

The EU’s approach may serve as a model for future regulations worldwide, potentially forcing companies like LinkedIn to adopt more user-centric data policies globally. Regulatory pressure is also why Apple’s recently released iOS 18 launched without Apple Intelligence across the EU, though Apple has pointed to the Digital Markets Act’s interoperability requirements rather than the AI Act.

OpenAI’s so-called “advanced voice mode,” which began its wide deployment this week, is likewise unavailable in the EU, albeit for a somewhat darker reason: because it can infer users’ emotions from their voices, it runs afoul of the EU AI Act’s prohibition on “AI systems to infer emotions of a natural person in the areas of workplace and education institutions” (EUAIA Article 5(1)(f)).
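
For the technically inclined, the practical consequence of such a prohibition is a regional feature gate. The sketch below is purely illustrative; the country list is abbreviated, and every name in it is an assumption, not any vendor’s real deployment logic:

```python
PROHIBITED_CONTEXTS = {"workplace", "education"}  # per EUAIA Art. 5(1)(f)
EU_COUNTRIES = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL"}  # abbreviated


def emotion_inference_allowed(country_code: str, context: str) -> bool:
    """Decide whether a voice-based emotion-inference feature may run."""
    if country_code in EU_COUNTRIES:
        # Conservative reading: rather than verify the context of every
        # conversation, withhold the capability across the EU entirely,
        # which is effectively what OpenAI has done.
        return False
    # Outside the EU the Act does not apply, but a cautious provider
    # might still honor the spirit of Article 5(1)(f).
    return context not in PROHIBITED_CONTEXTS


print(emotion_inference_allowed("DE", "personal"))   # False: EU-wide block
print(emotion_inference_allowed("US", "workplace"))  # False: prohibited context
print(emotion_inference_allowed("US", "personal"))   # True
```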

Suffice it to say, then, that as awareness grows and regulations evolve, we may see a shift towards opt-in models or more granular control over data usage for AI training.

AI’s future

LinkedIn’s AI training policy exemplifies the complex interplay between technological progress and personal privacy in the digital age. While users have legitimate concerns about the use of their professional data for AI development, the potential benefits of enhanced services cannot be ignored. Perhaps, rather than a merely binary choice between opt-in and opt-out, a compensation model for users could be explored.
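
As a back-of-the-envelope sketch of what such a compensation model might look like (every figure and name below is invented purely for illustration):

```python
def pro_rata_payout(ai_revenue: float, revenue_share: float,
                    member_tokens: int, total_tokens: int) -> float:
    """Pay each member (revenue pool) x (their fraction of the training data)."""
    pool = ai_revenue * revenue_share
    return pool * member_tokens / total_tokens


# Hypothetical numbers: a $100M AI revenue stream, a 5% member pool,
# and a prolific writer contributing 50,000 tokens of a 10B-token corpus:
print(pro_rata_payout(100_000_000, 0.05, 50_000, 10_000_000_000))  # -> 25.0
```

Even under generous assumptions, the individual payout is modest, which suggests any real scheme would need to weight original long-form content far more heavily than routine profile data.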

As we move forward, it’s crucial for users to stay informed about data policies and to exercise their rights when possible. Equally important is the role of policymakers in establishing clear guidelines for AI development and data usage, ensuring that the march of progress doesn’t come at the cost of individual privacy, and that privacy protections don’t fully stifle innovation.

The coming years will likely see continued debate and evolution in this space. For now, LinkedIn users outside the EU must decide whether the benefits of potential service improvements outweigh the privacy concerns of contributing to AI training. Whatever one’s stance, this situation serves as a reminder of the importance of digital literacy and proactive engagement with privacy policies in our increasingly AI-driven world.