So, talking to AI about pickups...

AI just regurgitates stuff it reads on the internet. The internet is already full of bad opinions. So it's best not to take it too seriously.
 
That would be fine if LLMs actually weighted the opinions and represented the most common one. But from my observation that isn't happening, just as with the PG and Alnico 5 in this case. The LLM surely found more sources stating A2 than A5, but it imagines that A5 is the right answer nonetheless.
 
I find AI EXTREMELY helpful and advantageous. It has limitations and disadvantages, but I'm able to be extremely productive with it: collating info, generating new ideas, and summarizing them quickly.
 
How are you using AI to be productive?
I use it for a full range of areas.

1. School - I'm doing a Masters in IT, and the bot helps me a lot with figuring things out and organizing assignments.
2. Personal - I use it to optimize things like diet and workout plans, and I talk to it in Spanish since I'm becoming fluent in Spanish.
3. Enrichment - I have enrichment projects where I create new original works. One was a scientific discovery: I found that the largest rogue wave ever recorded is not an ocean swell like the Draupner wave but a misclassified clip on YouTube. The bot and I used trig to show the wave is over 200 ft (rough sketch below). I also have original work on intrinsic music properties, like where the animation in playing comes from: it's a wave with different fractals inside.
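
To make the trig concrete, here's a minimal sketch of the kind of angular-size estimate involved. The distance and angle are placeholder assumptions for illustration, not the values from the actual clip:

```python
import math

# Placeholder numbers, NOT the values from the actual video analysis:
camera_distance_m = 800.0   # assumed distance from camera to the wave
wave_angle_deg = 4.5        # assumed angular height of the wave in frame

# Basic trig: height = distance * tan(angular height)
wave_height_m = camera_distance_m * math.tan(math.radians(wave_angle_deg))
wave_height_ft = wave_height_m * 3.28084

print(f"Estimated wave height: {wave_height_m:.0f} m ({wave_height_ft:.0f} ft)")
```

With those assumed inputs this comes out around 63 m (~207 ft); that's the shape of the calculation, not a claim about the clip itself.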
 
Confirmation bias?

No, but that's a great question, and the general agreeableness of model interactions can make it a problem.

I like to see if it really knows what it is talking about. One of the best things people could do is actually learn how to question/probe/work with AI.

This brings us back to human critical thinking skills. The average AI user is like the average Google user: types a half-@$$ed question and goes with the first listing.
 
That would be fine if LLMs actually weighted the opinions and represented the most common one. But from my observation that isn't happening, just as with the PG and Alnico 5 in this case. The LLM surely found more sources stating A2 than A5, but it imagines that A5 is the right answer nonetheless.

As if there are not people here who would "imagine" that.
 
^ Why would you be looking at me, assuming you have the ability to read the thread?
I doubt the PG+s have offset coils. The bot will invent information. I actually really like working with it, even joking with it and doing personal stuff, but you can't treat it like it's infallible. Imo it's better than the peanut gallery on average: you get instantaneous information reasoned out pretty well, and you can refine the outline. I told it to put in its memory not to make up crap or invent stuff, and I always call it out when it's making stuff up, being a dumbass, or feeding me woke propaganda.
 
Can't even.......

Dadgum, man, are you just that triggered?

You quoted the AI as an authority on a different thread some time back

I was reminding you of that

Dang Clint
So sensitive 🥺
 
There is a theory that AI is plateauing. ARC-AGI is a test used to assess an LLM's ability to learn the way a human can. Current models score around 20-30% at a compute cost of less than $5. ChatGPT5 scores 18% at a cost of around $5. Google's new Gemini 3 scores 45% at a compute cost of $100. An average human scores around 65%, and consider how smart the average human is. People read this information many different ways.
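
One way to read those numbers as a plateau argument is points per dollar rather than raw score. A trivial sketch, using the figures above at face value (the 20-30% range collapsed to an assumed midpoint):

```python
# Score-per-dollar comparison of the figures quoted above, taken at face
# value. The "typical current model" row is an assumed midpoint of 20-30%.
results = {
    "typical current model": (25.0, 5.0),
    "ChatGPT5":              (18.0, 5.0),
    "Gemini 3":              (45.0, 100.0),
    "average human":         (65.0, None),   # no compute cost to compare
}

for name, (score, cost) in results.items():
    if cost is None:
        print(f"{name}: {score:.0f}% (no comparable compute cost)")
    else:
        print(f"{name}: {score:.0f}% at ${cost:.0f} -> {score / cost:.2f} points per dollar")
```

By that reading, Gemini 3's higher score comes at a steep efficiency cost, which is the diminishing-returns pattern the plateau theory points at.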

The seven most profitable companies in the USA are throwing imaginary money back and forth over an idea that will very quickly run into major environmental concerns (beyond what we have now), and a large portion of people have ethical concerns over it as a concept. And as I said a while back, AI quality is degrading due to "inbreeding": AI ingesting AI-made data into its training sets.

The point being, AI is not likely to be going anywhere any time soon, but it's definitely going to be a huge bubble.
 
This is very important. There is demand for a pre-LLM version of the Internet, just as there is for pre-nuclear-detonation steel. https://en.wikipedia.org/wiki/Low-background_steel

In the steel case, the demand comes from instruments that measure small doses of radiation (such as medical equipment), which cannot use steel made after the atmospheric nuclear detonations over Japan and the later above-ground testing. One source of such pre-nuke steel is the German WWI fleet of warships that scuttled itself at Scapa Flow after the war.

The same applies to machine-learning input. There is use for a pre-LLM copy of the internet so that you don't absorb LLM slop (including your own). A company like Google that has such a copy is at a tremendous advantage. That's also why AI companies are scraping so aggressively right now, before the data gets any more diluted.
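
As a toy illustration of what a "pre-LLM copy" means in practice, here's a sketch of filtering a corpus by crawl date. The cutoff (ChatGPT's public release) and the record layout are assumptions for the example, not anyone's actual pipeline:

```python
from datetime import date

# Assumed cutoff: ChatGPT's public release, by analogy with the
# pre-detonation date that defines low-background steel.
LLM_ERA_CUTOFF = date(2022, 11, 30)

# Toy corpus records; the fields here are assumptions for the example.
corpus = [
    {"url": "https://example.com/a", "crawled": date(2019, 6, 1)},
    {"url": "https://example.com/b", "crawled": date(2024, 3, 12)},
]

low_background = [doc for doc in corpus if doc["crawled"] < LLM_ERA_CUTOFF]
print(f"{len(low_background)} of {len(corpus)} documents predate the cutoff")
```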
 
Speaking as a Cognitive Psychologist (a guy who knows a crap ton about how humans actually learn): learning the way humans learn is not necessarily a good thing. We are chock full of capacity limitations, biases, heuristics, and all sorts of other semi-accurate pseudo-probabilistic thought processes.
 