
Artificial Intelligence
Advances in AI are shaping various aspects of the world, including how research is done
2025-06-04

The assessment was made by researchers who participated in the 11th edition of the German-Brazilian Dialogue on Science, Research, and Innovation, held last month in the FAPESP auditorium.


In the opening lecture, Christiane Woopen from the Center for Life Ethics at the University of Bonn spoke about how AI is transforming medical research and people’s relationship with health and longevity (photo: Daniel Antônio/Agência FAPESP)


By Maria Fernanda Ziegler  |  Agência FAPESP – “The issue with artificial intelligence (AI) goes beyond whether it’s good or bad. It’s a technology that’s shaping the world we live in, and we need to find a middle path that allows us to move forward and live well.” With this message, Christiane Woopen, director of the Center for Life Ethics at the University of Bonn in Germany, invited researchers from various fields who work with AI to reflect on the direction of their research.

During the 11th edition of the German-Brazilian Dialogue on Science, Research, and Innovation, held in the FAPESP auditorium on May 7th and 8th in partnership with the German Center for Science and Innovation (DWIH) São Paulo, the researcher warned about how artificial intelligence is transforming medical research and people’s relationship with health and longevity.

For Woopen, who works in medical ethics, AI has caused a paradigm shift in healthcare. The focus has shifted from obtaining early diagnoses and precise treatments to monitoring and predicting diseases. “This is positive. After all, everyone thinks it’s good to avoid getting sick. However, this change also affects our perception of the present and our expectations, desires, and plans for the future,” she said.

Woopen expanded on this with a common example: “A person who leaves the house and forgets their key might think it happened because they’re forgetful, didn’t sleep well, or by chance. Someone who’s known to be at high risk of developing Alzheimer’s, for example, will think, ‘Is this forgetfulness already the disease?’” she said.

Not everyone wants this kind of knowledge. In a study conducted by Woopen’s group, 126 people seeking help for the first time at early detection centers in Germany were asked whether they wanted to know their risk of developing a mental disorder. Of these, 49% said yes and 35% said no.

“But the curious thing is that the motivation behind both answers was the same. Those surveyed considered the impact on self-determination. Those who answered yes said the information could help them plan their lives, while those who answered no said the knowledge would upend their life plans, on top of the heavy emotional weight of the news. Therefore, health professionals should carefully consider patients’ individual expectations when offering this technology,” Woopen warned.

The value of data

The researcher also said that the increased capacity for data analysis brought by artificial intelligence technologies is reshaping health systems, and that this paradigm shift demands a critical look.

“I share the dream of being able to connect the data generated in the different health sectors, such as daily care, primary care, secondary care, and research. This information would undoubtedly help to make the system interoperable, improve daily care, and open up research opportunities. But what we have in practice today is that all the expertise and technology for this lies with a few companies. They define what’s done, what’s important, and what type of patient is interesting. So it’s not just about financial power, but the ability to define what happens in the health system,” she pointed out.

According to Diogo Cortiz, a professor at the Pontifical Catholic University of São Paulo (PUC-SP), it is important to remember that data is not the “new oil.” “This is a bad metaphor because oil is what economists call a ‘rival good’: once the available stock is consumed, others’ access is compromised. The same is not true of data. Its stock is not depleted by consumption, and if one person uses a piece of data, another can use it too. It’s therefore essential to keep this concept in mind when discussing data access policies,” he explained.
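
To make the economist’s distinction concrete, here is a minimal, purely illustrative Python sketch (the classes and figures are invented for this example, not taken from the event): consuming a rival good depletes the stock for everyone else, while a non-rival dataset can be read any number of times without diminishing.

# Illustrative sketch of rival vs. non-rival goods (hypothetical example).

class Oil:
    """A rival good: consumption depletes the stock for everyone else."""
    def __init__(self, barrels):
        self.barrels = barrels

    def consume(self, amount):
        if amount > self.barrels:
            raise ValueError("stock exhausted: no one else can consume")
        self.barrels -= amount
        return amount

class Dataset:
    """A non-rival good: reading it leaves it intact for the next user."""
    def __init__(self, records):
        self.records = records

    def read(self):
        return list(self.records)  # a copy; the original is untouched

oil = Oil(barrels=100)
oil.consume(60)
print(oil.barrels)        # 40 -- less is left for everyone else

data = Dataset(records=["row1", "row2"])
for _ in range(1_000):    # a thousand users "consume" the same data
    data.read()
print(len(data.records))  # 2 -- nothing was depleted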

The researcher warns that Brazilians, who are avid users of social networks, are generating more and more data on major platforms. “The big techs use this information for themselves, and what’s left for the Brazilians [who generated the data]? Nothing,” he stressed.

However, the problem is even more widespread. According to Cortiz, Meta (formerly Facebook) has acknowledged that only 5% of the pretraining data for Llama, an open-source AI model created to outperform rivals such as Gemini and GPT-4, is in a language other than English. “The company has also said, therefore, not to expect the same level of performance in languages other than English. So we need to ask whose data it is and who it represents,” he said.
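
Cortiz’s point about language representation can be checked on any text corpus. The following is a minimal sketch, assuming the open-source langdetect package is installed and using a three-document stand-in corpus (both are assumptions for illustration, not details from the talk):

# Estimate the share of non-English documents in a corpus sample.
from collections import Counter

from langdetect import detect  # pip install langdetect

corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "O rato roeu a roupa do rei de Roma.",
    "La inteligencia artificial transforma la investigación.",
]

counts = Counter()
for doc in corpus:
    try:
        counts[detect(doc)] += 1  # ISO code, e.g. 'en', 'pt', 'es'
    except Exception:             # empty or undecidable text
        counts["unknown"] += 1

total = sum(counts.values())
non_english = total - counts.get("en", 0)
print(f"non-English share: {non_english / total:.0%}")  # here, 67%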

The researcher said that the country needs to discuss policies and regulations that guarantee Brazilians access to the data generated on the platforms. “Otherwise, Brazil will just be a big tech data farm with no reciprocity,” he said.

“Well, whenever someone asks me if AI is going to take over the world or kill us, I say that AI probably won’t, but the big techs will,” agreed Renata Wassermann, a professor at the Institute of Mathematics and Statistics at the University of São Paulo (IME-USP).

Augmented intelligence

Before presenting her research results at the conference, Elisabeth André from the University of Augsburg’s Chair of Human-Centered Artificial Intelligence made a point: “Doug Engelbart, the inventor of the computer mouse, recognized as early as 1958 that the goal of technology shouldn’t be to replace humans, but to enhance human capabilities. I think that was a very wise statement, and one that we still need to consider today.”

André has developed several projects, including applications that help individuals prepare for job interviews and help children combat bullying in schools. She has even developed a Google Translate-like application for sign language that uses hand signals and facial expressions.

“We can’t view technology as a god or as something magical. We need to view it as mathematics. Only then can we demand explanations and changes. But the problem is that, with autonomous AI [like self-driving cars, for example], 90% of decisions are made without supervision, in 80% of them explainable solutions are in doubt, and 70% are biased. Just to give you an idea of what’s happening,” said Anderson de Rezende Rocha, coordinator of Viva Bem: Artificial Intelligence for Health and Well-Being – an Engineering Research Center (ERC) set up by FAPESP and Samsung at the State University of Campinas (UNICAMP).
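
Rocha’s call to treat AI “as mathematics” implies that a model’s decisions can be interrogated. One standard, model-agnostic way to do so is permutation importance; the sketch below uses scikit-learn on synthetic data (the data and model are invented to keep the example self-contained, and this is not a method presented at the event):

# Permutation importance: measure how much each feature drives predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # feature 0 dominates by design

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
# Feature 0 should dwarf the others: the decision can be explained,
# not treated as magic.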

Rocha listed a series of artificial intelligence research projects in health that either did not move forward or produced questionable results. One such case occurred during the pandemic, when several research groups around the world attempted to detect COVID-19 from X-ray images. “After the pandemic, a group from the United States did an exercise to analyze some 200 of these studies. Do you know how many of them [the authors] worked in hospitals? None. So none of the studies worked in practice. They were developed without taking the specificities of implementation into account. So, I ask you: do we want artificial intelligence, with the potential to replace human activities, or augmented intelligence, which complements human activities?”
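
One recurring reason such studies fail in deployment is that they are evaluated on random splits of a single dataset rather than on data from hospitals the model has never seen. Below is a hedged sketch of site-wise evaluation using scikit-learn’s GroupShuffleSplit, with hypothetical hospital labels standing in for real metadata:

# Evaluate with held-out *sites*, not random rows, so the test set
# simulates deployment at a hospital the model has never seen.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))          # stand-in for image features
y = rng.integers(0, 2, size=1000)       # stand-in labels
sites = rng.integers(0, 5, size=1000)   # hypothetical hospital IDs

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=sites))

# No hospital appears on both sides of the split.
assert set(sites[train_idx]).isdisjoint(set(sites[test_idx]))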

Hallucinated judgments

Concerns about the use of artificial intelligence are not limited to the health sector. “ChatGPT was launched at the end of 2022, and at the beginning of the following year, there was already news in the United States of a lawyer who used ChatGPT to draft a petition. And the document cited precedents that didn’t exist, in other words, a hallucination of the system,” said Juliano Maranhão, a professor at USP’s Law School.

“Two months later, a U.S. judge handed down a ruling that cited hallucinated precedents. And that was an even bigger scandal. When questioned about the ruling, the judge claimed he was not at fault because his assistant had drafted it. The civil servant, in turn, claimed that it hadn’t been him but ChatGPT. And this, of course, caused concern not only in the United States but also here, given that we had similar cases occurring in Brazil as early as 2023,” he said.
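
A basic safeguard against such cases is verifying every citation a model produces against a trusted index before a document is filed. A minimal sketch follows; KNOWN_CASES, the citation format, and the case names are all hypothetical stand-ins for a real court database:

# Check model-cited precedents against a trusted index before filing.
import re

KNOWN_CASES = {"Smith v. Jones, 2019", "State v. Doe, 2021"}  # hypothetical

def unverified_citations(draft: str) -> list[str]:
    """Return citations in the draft that the index cannot confirm."""
    cited = re.findall(r"[A-Z]\w+ v\. \w+, \d{4}", draft)
    return [c for c in cited if c not in KNOWN_CASES]

draft = "As held in Smith v. Jones, 2019 and Varga v. Reilly, 2020, ..."
print(unverified_citations(draft))  # ['Varga v. Reilly, 2020'] -- flag it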

Maranhão conducted a survey on the use of artificial intelligence in the Brazilian judiciary, with responses from 1,600 judges and 18,000 employees working in the system. Half of the respondents were not familiar with generative AI, the category of AI that can create text, images, videos, audio, or code. However, 80% of those who do not use the technology said they thought it could be useful, suggesting that AI usage in the Brazilian judiciary is likely to grow in the near future.

“Of the 50% who were using generative AI, 30% were doing so in their professional activities, i.e., in the production of documents and content, such as writing summaries of documents or preparing draft rulings, which were then revised. Something that seemed appropriate, but which raised important questions, since the content is often confidential,” he said.

The research also showed that, although some courts have contracted ChatGPT or Microsoft’s Copilot for official use in the judiciary, these technologies are often used privately: the work is done on the civil servant’s personal computer and the material is then transferred to the court system.

Apart from that, there is also the risk of using AI to research precedents. “This isn’t a problem in itself, but it can be an aggravating factor if the content isn’t reviewed. The results of the survey also show that there’s no training or policy for the governance of the use of technology in the judiciary,” he warned.

In addition, 83% of civil servants who use the tool say they do not inform the judge of its use. “Therefore, it isn’t surprising that errors and hallucinations are occurring in documents produced with ChatGPT or other tools. There was significant concern about whether the use of generative AI was moral, ethical, or even legal. And this is worrying because it makes its use opaque,” he argued.

Maranhão believes the issue requires regulation. “Not only because of the risk of hallucinations but also because the content is confidential. Should we ban private use?” he asked.

To watch the debates from the first day of the event, visit: www.youtube.com/watch?v=V9ZZF55OZBA.

The full recording of the second day is available at: www.youtube.com/watch?v=_NuVADzbUb8.


Republish

Agência FAPESP licenses its news under Creative Commons (CC-BY-NC-ND) so that it can be republished free of charge, and in a simple way, by other digital or print outlets. Agência FAPESP must be credited as the source of the republished content, and the reporter’s name (if any) must be attributed. Using the HTML button below ensures compliance with these rules, which are detailed in FAPESP’s Digital Republishing Policy.