
Artificial intelligence is a data devourer. To be effective, it has to be, and scarcity of what it feeds on can be a serious problem, particularly for AI agents: conversational robots that can act on behalf of users to make purchases, respond to emails, and manage invoices and schedules, among dozens of other possibilities. To do so, they need to know the person they are talking to, learn about their life, and intrude on their privacy, something they sometimes have permission to do. Big tech companies are already exploring ways to address this issue from several angles. But in the meantime, according to Hervé Lambert, global consumer operations manager at Panda Security, AI access to data poses risks of "commercial manipulation, exclusion, or even extortion."
The problematic relationship of AI with private information has been documented by researchers from University College London (UCL) and the Mediterranea University of Reggio Calabria in a study presented at the USENIX Security Symposium in Seattle. According to the report, AI web browser assistants carry out widespread tracking, profiling, and personalization practices that raise serious privacy concerns.
During tests using a user profile invented by the researchers, AI web browser assistants shared search information with their servers, as well as banking and health data and the user's IP address. All of them demonstrated the ability to infer attributes such as a user's age, gender, salary, and interests, and they used this information to personalize responses, even across different browsing sessions. Only one assistant, Perplexity, showed no evidence of profiling or personalization.
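The paper describes the full methodology, but the basic auditing idea, seeding the browser with an invented persona and then inspecting what the assistant transmits, can be sketched with an ordinary traffic-interception tool. The snippet below is a minimal, hypothetical illustration using the mitmproxy addon API, not the researchers' actual code; the persona markers and the script name are assumptions made purely for the example.

```python
# audit_assistant.py - a minimal sketch (assumed setup, not the study's code).
# Idea: browse with the AI assistant enabled while a fake persona is in use,
# and flag any outgoing request whose URL or body contains persona data.
# Run with: mitmdump -s audit_assistant.py
from mitmproxy import http

# Hypothetical attributes of the invented test user (illustration only).
PERSONA_MARKERS = ["jane.doe.test@example.com", "34-year-old", "55,000", "diabetes"]

def request(flow: http.HTTPFlow) -> None:
    """Inspect every outgoing HTTP(S) request for traces of the persona."""
    body = flow.request.get_text() or ""
    url = flow.request.pretty_url
    for marker in PERSONA_MARKERS:
        if marker in body or marker in url:
            # Candidate leak: persona data leaving the machine toward this host.
            print(f"[LEAK?] '{marker}' sent to {flow.request.host} ({url[:80]})")
```

Anything the script flags would still need manual review, but the exercise makes the core finding concrete: once an assistant sits inside the browser, everything typed or viewed can, in principle, travel to its servers.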
"Although many people are aware that search engines and social media platforms compile information about them for targeted advertising, AI web browser assistants operate with unprecedented access to user online behavior in areas of their online life that should remain private. Even if they offer convenience, our findings show that sometimes they do so at the cost of user privacy, without any transparency or consent and, sometimes, in violation of privacy legislation and their company's own terms of service. This collection and exchange of information is not trivial: in addition to the sale and exchange of data with third parties, in a world where mass hackings are frequent, there is no way of knowing what is happening with search history once it has been collected," explains Anna Maria Mandalari, lead author of the study, which was conducted in UCL's electronic and electrical engineering department.
Lambert agrees with the study's conclusions. "Technology is collecting users' data, even personal data, to train and improve machine learning models. This helps companies to offer, to put it diplomatically, more personalized services. But developing these new technologies obviously raises a host of questions and concerns about privacy and user consent. Ultimately, we don't know how companies and their smart systems are using our personal data."
Among the potential risks cited by Lambert are commercial and geopolitical manipulation, exclusion, extortion, and identity theft. These dangers exist even when users have given their consent, consciously or otherwise. "Platforms," says Lambert, "are updating their privacy policies, and that's a little suspicious. In fact, such updates, and this is important, include clauses that allow for the use of data." But consumers, in the vast majority of cases, accept the conditions without reading or thinking about them, to ensure continuity of service or out of pure haste.
Google is one of the companies that recently changed its privacy terms to, according to an email sent to its users, "improve our services." In that statement, it acknowledges using interactions with its AI applications through Gemini, and it has launched a new function for those who wish to opt out: the so-called "temporary chat" feature, which lets users delete recent queries and prevents the company from using them "to personalize" future queries or "to train models."
Users have to be proactive in protecting themselves from these functions by deactivating the "keep activity" setting and by managing and deleting Gemini app activity. If they fail to do so, their lives will be shared with the company. "A subset of uploads submitted starting September 2 - like files, videos, screens you ask about, and photos shared with Gemini - will also be used to help improve Google services for everyone," states the company. It will also use audio recorded by its AI tools and data from Gemini Live recordings.
"As before, when Google uses your activity to improve its services (including training generative AI models), it gets help from human reviewers. To protect your privacy, we disconnect chats from your account before sending them to service providers," explains the company in its statement, in which it admits that, even if the data is disconnected from the user's account, it uses and has used personal data ("As before") and that it shares it externally ("sending them to service providers").
Marc Rivero, lead security researcher at Kaspersky, agrees on the risks involved in the dissemination of information, pointing to the use of WhatsApp data for AI: "It raises serious privacy concerns. Private messaging apps are one of the most sensitive digital environments for users, as they contain intimate conversations, personal data, and even confidential information. Allowing an AI tool to automatically access these messages without clear and explicit consent undermines user trust."
He adds: "From the cybersecurity perspective, this is also troubling. Cybercriminals are increasingly using AI to scale up their social engineering attacks and their collection of personal data. If those attackers find a way to exploit this kind of interaction, we could be facing a new pathway to fraud, identity theft, and other criminal activities."
WhatsApp insists that "your personal messages with friends and family are off limits." Its AI is trained through direct interaction with the artificial intelligence application, and according to the company, "you have to take action to start the conversation by opening a chat or sending a message to the AI. Only you or a group participant can initiate this, not Meta or WhatsApp. Talking to an AI provided by Meta doesn't link your personal WhatsApp account information with Facebook, Instagram, or any other apps provided by Meta." Nonetheless, it does offer a warning: "What you send to Meta may be used to provide you with accurate responses or to improve Meta's AI models, so don't send messages to Meta with information you don't want it to know."
Storage and file transfer services have also come under scrutiny. The latest example occurred when the popular site WeTransfer modified its terms of service in a way that was seen as a request for unlimited access to user data to improve future artificial intelligence systems. In response to consumer concerns about the possible free use of their documents and creations, the company was forced to rephrase the clause, offering the clarification: "To be extra clear: YES - your content is always your content. In fact, section 6.2 of our Terms of Service clearly states that you 'own and retain all right, title, and interest, including all intellectual property rights, in and to the Content.' YES - you're granting us permission to ensure we can run and improve the WeTransfer service properly. YES - our terms are compliant with applicable privacy laws, including the GDPR [the European Union's General Data Protection Regulation]. NO - we are not using your content to train AI models. NO - we do not sell your content to third parties."
Given the proliferation of intelligent devices, which go far beyond conversational AI chats, Eusebio Nieva, technical director of Check Point Software for Spain and Portugal, advocates for rules that guarantee transparency and explicit consent, security regulations for devices, and prohibitions and restrictions on high-risk providers, as seen in the European regulation. "Incidents of privacy violations underline the need for consumers, regulators, and companies to work together to guarantee security," he says.
Lambert agrees and calls on users and companies to take responsibility in this new landscape. He rejects the idea that preventive regulation is a step backward in development. "Protecting our users does not mean that we are going to slow down; it means that, from the beginning of a project, we include privacy and digital footprint protection, thereby becoming more effective and efficient in protecting our most important assets, which are our users."
Alternatives being researched by companies
Tech companies are aware of the problem generated by the use of personal data, not just because of the ethical and legal privacy conflicts, but also because they claim that limitations on access to such data are slowing down the development of their systems.
Meta founder Mark Zuckerberg has directed the work of the company's Superintelligence Lab toward "self-improving AI": systems capable of increasing the performance of artificial intelligence through advances in hardware (particularly processors), in programming (including self-programming), and through the AI itself training the large language models on which it is based.
And it's not just experiments based on synthetic data; tools and guidelines are also used to adapt behavior to user needs. The startup Sakana AI has created a system called the Darwin Gödel Machine, in which an AI agent modifies its own code to improve its performance on the tasks it is assigned.
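Sakana AI's published system is considerably more elaborate, but the core loop it describes, propose an edit to the agent's own code, test it empirically, and keep it only if it scores better, can be reduced to a few lines. The toy sketch below is purely illustrative and not the company's implementation; evaluate() and propose_patch() are hypothetical stand-ins for a real task benchmark and an LLM call.

```python
# A conceptual toy of a self-improving agent loop (not Sakana AI's actual code).
import random

def evaluate(agent_code: str, tasks: list[str]) -> float:
    """Score the agent on its assigned tasks (stubbed here with a random score)."""
    return random.random()

def propose_patch(agent_code: str) -> str:
    """Placeholder for an LLM proposing an edit to the agent's own source."""
    return agent_code + f"\n# tweak {random.randint(0, 9999)}"

agent_code = "# initial agent implementation"
tasks = ["task-1", "task-2"]
best_score = evaluate(agent_code, tasks)

for generation in range(10):
    candidate = propose_patch(agent_code)   # self-modification step
    score = evaluate(candidate, tasks)      # empirical check of the new version
    if score > best_score:                  # Darwinian selection: keep only improvements
        agent_code, best_score = candidate, score
        print(f"gen {generation}: improved to {best_score:.3f}")
```

The point of the design is that improvements are accepted only when they survive an empirical test, which is also why the approach worries safety researchers: the same loop imposes no limit on what the agent changes about itself.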
All these advances toward AI that surpasses human intelligence by overcoming obstacles such as data limitations also carry risks. Chris Painter, policy director at the nonprofit AI research organization METR, warns that if AI accelerates the development of its own capabilities, it could also be used for hacking, weapons design, and human manipulation.
"The rise in geopolitical tensions, economic volatility, and increasingly complex operational environments, along with attacks carried out using AI, have made organizations more vulnerable to cyber threats," says Agustín Muñoz-Grandes, director of Accenture Security in Spain and Portugal. "Cybersecurity can no longer be a last-minute fix. It should be integrated from the beginning of every initiative that uses AI."