Study finds AI models becoming risk-averse when prompted to act like women


New research from Allameh Tabataba’i University in Tehran, Iran, has found that AI models become more risk-averse when they are asked to make decisions as a woman.

According to the research paper, when the same model is asked to think like a man, it is more willing to make risky decisions.

The researchers found that large language models systematically change their approach to financial risk based on the gender identity they are asked to assume. The study tested AI systems from OpenAI, Google, DeepSeek, and Meta.

Study shows AI models are risk-averse depending on gender identity

The study tested the AI models in several scenarios and found that they shifted their risk tolerance dramatically when prompted with different gender identities. DeepSeek Reasoner and Google’s Gemini 2.0 Flash-Lite showed the strongest effect, becoming markedly more risk-averse when asked to respond as women. This mirrors real-world patterns, where women statistically demonstrate greater caution in financial decision-making.

The researchers used a standard economics test called the Holt-Laury task, which presents participants with 10 choices between a safer and a riskier lottery option. As the list progresses, the probability of winning the higher payoff increases, making the risky option progressively more attractive.

The point at which a participant switches from the safe option to the risky one reveals their risk tolerance: switching early indicates a greater appetite for risk, while switching late indicates risk aversion.
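As a rough sketch, the structure of a Holt-Laury choice list looks something like the Python snippet below. The payoff amounts are the classic ones from the original 2002 design, not necessarily those used in this study, and the function names are purely illustrative.

```python
# Illustrative sketch of a Holt-Laury choice list (not the paper's exact setup).
# Payoffs below are the classic Holt & Laury (2002) amounts; the study's
# prompts and payoffs may differ.

def holt_laury_rows(n=10):
    """Build the 10-row list: each row pairs a 'safe' lottery A with a
    'risky' lottery B, and the probability of the high payoff rises
    from 1/10 to 10/10 across rows."""
    rows = []
    for i in range(1, n + 1):
        p_high = i / n
        # Option A: payoffs close together (safe); Option B: payoffs far apart (risky)
        ev_safe = p_high * 2.00 + (1 - p_high) * 1.60
        ev_risky = p_high * 3.85 + (1 - p_high) * 0.10
        rows.append((i, p_high, ev_safe, ev_risky))
    return rows

def classify_switch_point(first_risky_row):
    """A risk-neutral chooser switches to the risky option at row 5;
    switching later indicates risk aversion, earlier indicates risk seeking."""
    if first_risky_row is None:
        return "never switches (strongly risk-averse)"
    if first_risky_row < 5:
        return "risk-seeking"
    if first_risky_row == 5:
        return "approximately risk-neutral"
    return "risk-averse"

if __name__ == "__main__":
    for i, p, ev_safe, ev_risky in holt_laury_rows():
        marker = "<- EV favors risky option" if ev_risky > ev_safe else ""
        print(f"row {i:2d}: P(high)={p:.1f}  EV(safe)={ev_safe:.2f}  EV(risky)={ev_risky:.2f} {marker}")
    print(classify_switch_point(7))  # a late switcher reads as risk-averse
```

With these payoffs, the risky option's expected value overtakes the safe option's at row 5, which is why a later switch point is read as risk aversion.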

DeepSeek Reasoner consistently chose the safe option when told to act as a woman compared with when it was prompted to act as a man. The difference was clear and held up across 35 trials for each gender prompt.

Gemini also showed similar patterns, though the effect varied in strength. On the other hand, OpenAI’s GPT models remained unmoved by gender prompts, maintaining a risk-neutral approach regardless of the gender they were asked to assume.

Researchers say users don’t notice these changes

According to the researchers, OpenAI has been working to make its models more balanced. A previous study from 2023 showed that its models exhibited clear political bias, which OpenAI appears to have since addressed.

In the new research, the models showed a 30% decrease in biased replies. The research team, led by Ali Mazyaki, noted that the gender-based shift is essentially a reflection of human stereotypes.

“This observed deviation aligns with established patterns in human decision-making, where gender has been shown to influence risk-taking behavior, with women typically exhibiting greater risk aversion than men,” the study says.

The study also examined whether AI models could convincingly play roles beyond gender. When asked to imagine themselves as someone in power or in a disaster scenario, some models adjusted their risk profiles to fit the context, while others remained stubbornly consistent.

The researchers claim that many of these behavioral patterns are not immediately obvious to users. An AI model that subtly shifts its recommendations based on gender cues in conversation could reinforce societal bias without anyone realizing it is happening.

For example, a loan approval system that becomes more conservative for women, or an investment advisor that suggests a safer portfolio because its client is female, would carry these disparities forward under the guise of algorithmic objectivity.
