>>140785
We know exactly what this is thanks to other LLM services. The prompt entered by the user gets modified behind the scenes to add things like 'strong woman' or 'ambiguous ethnicity'. With some other image models this was exposed by an oversight: asking for a person wearing a blank name tag or holding an empty sign made the model render the injected text right onto it. That trick hasn't worked on Gemini, but the same kind of prompt modification has been confirmed by Google employees, in case it wasn't obvious enough already.
On the subject of nobody at Google pushing back, there was James Damore in 2017, who went public about the ideological issues degrading the company back when all the attention was on gender. What's interesting is that his memo was written from the perspective of a loyal corporate insider who wanted to improve the efficacy of their diversity programs with a new approach, a framing only an autist could think was appropriate. His firing is often cited to explain why nobody at Google spoke up about Gemini being an obvious disaster.
https://en.wikipedia.org/wiki/Google's_Ideological_Echo_Chamber