RanchManSandy
Active member
The only job current AI models have been given is to devalue human labor. It doesn't need to be used anywhere other than as a threat to depress wages. Hopefully this bubble will burst sooner rather than later.
Anything to avoid having to think for themselves.
As somebody that works in technology, AI is scary, but the way that people wholeheartedly just trust it is even scarier.
AI just regurgitates stuff it reads on the internet. The internet is already full of bad opinions. So, good not to take it too seriously.
Fargginheadnoggin
dmflux said: how are you using ai to be productive?
I use it for a full range of areas.
I ask AI about a lot of things I know about just to see what it will say. I have gotten as good or better advice than I get on here sometimes...
Confirmation bias?
That would be fine if LLMs actually weighed the opinions and represented the most common one. But from my observation that isn't happening, just as with the PG and Alnico 5 in this case. The LLM surely found more sources stating A2 than A5, but it imagines that A5 is the right answer nonetheless.
As if there are not people here who would "imagine" that.
I doubt the PG+s have offset coils. The bot will invent information. I actually really like working with it, and even joking with it and doing personal stuff, but you can't treat it like it's infallible. Imo it's better than the peanut gallery on average. You get instantaneous information reasoned out pretty well, and you can refine the outline. I told it to put a rule in its memory: no making up crap or inventing stuff. And I always call it out when it's making stuff up, being a dumbass, or feeding me woke propaganda.
There is a theory that AI is plateauing. There is a test called ARC-AGI that is used to assess an LLM's ability to learn the way a human can. Current models score around 20-30% with a compute cost of less than $5. GPT-5 scores 18% at a cost of around $5. Google's new Gemini 3 can score 45% with a compute cost of $100. An average human can score 65%, and consider how smart the average human is. People interpret this information in many different ways.
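For what it's worth, here is a quick back-of-the-envelope score-per-dollar comparison using only the figures quoted in the post above (a rough sketch of the cost/benefit trade-off, not official ARC-AGI leaderboard data):

# Rough score-per-dollar comparison, using only the numbers quoted above.
# Figures are illustrative, taken from the post, not from the official ARC-AGI leaderboard.
results = [
    ("Typical current model", 25.0, 5.0),    # midpoint of the quoted 20-30% range, ~$5 compute
    ("GPT-5",                 18.0, 5.0),
    ("Gemini 3",              45.0, 100.0),
]

for name, score_pct, cost_usd in results:
    print(f"{name}: {score_pct:.0f}% at ~${cost_usd:.0f} compute "
          f"-> {score_pct / cost_usd:.1f} points per dollar")

# The 65% average-human baseline quoted above has no comparable compute cost.
print("Average human: 65% (no compute cost to compare)")

By that crude measure, the higher Gemini 3 score comes at a much worse points-per-dollar ratio, which is one way (of many) to read those numbers.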
The 7 most profitable companies in the USA are throwing imaginary money back and forth over an idea that will very quickly run into major environmental concerns (beyond what we have now), and a large portion of people have ethical concerns about it as a concept. And I already said a while back how AI quality is degrading due to "inbreeding," with AI ingesting AI-made data into its training sets.
The point being, AI is not likely to be going anywhere any time soon, but it's definitely going to be a huge bubble.