So the LLM answers what's relevant according to stereotypes instead of what's relevant in reality?
It just means there's a bias in the training data, and that bias probably gets amplified during training.
It answers what’s relevant according to its training.
Please remember what the A in AI stands for.