The only job current AI models have been given is to devalue human labor. They don't even need to be deployed anywhere; their mere existence works as a threat to depress wages. Hopefully this bubble will burst sooner rather than later.
That would be fine if LLMs actually weighed the opinions in their sources and represented the most common one. But from my observation that isn't happening, just as with the PG and Alnico 5 in this case. The LLM surely found more sources stating A2 than A5, but it insists that A5 is the right answer nonetheless.