So, talking to AI about pickups...

Every time I look at an AI summary for research, I realize how profoundly incorrect a lot of it is. Yet people are taking this obviously incorrect information and treating it like gospel, because some computer puked it out.

We are well and truly F'd if that becomes the norm.
 
Exactly!

I've been messing around with ChatGPT to see how much it knows about what I know to be true.
I've seen the DiMarzio AT-1 and Tone Zone described as bright and bassy at the same time. So, they're scooped? A little further down, they have smooth and increased midrange.
Changing the wording of the question often leads to vastly different descriptions.

The Air Norton is not good for leads because it's not bright. Lol.


Darth Phineas has been getting a lot of references from ChatGPT. He should sue and monetize!

Apparently, AI is now screwing up Thanksgiving recipes. People need to get a freaking clue.
 

Yeah, these LLMs are trained to plagiarize and infringe on IP as a feature. You shouldn't just be able to opt out of having your works used for AI training; you should have to opt in and get paid for the indignity. But now basically every social media platform is adding verbiage to its terms and conditions saying that using the platform entitles them to feed your info and everything you post to these slimy tech grifters. The longer we go without guardrails, the harder it will be to add them without crocodile-tear cries of "killing innovation" or some such rhetorical drivel.
 
Started using the AI du jour to take meeting notes at work, and the result isn't too bad, but some of the content is blatantly incorrect. So I don't yet see the benefit of taking these notes without vetting them thoroughly and correcting the factually incorrect content before storing the result for posterity (or whatever data retention period we have to comply with). Imagine someone who wasn't in that meeting going over those factually incorrect notes, not in two years, but in two months.
 
Exactly.

I've seen my own forum words from 10 years ago pop up.

THEY make the money off US, or off you guys, rather.
Aside from forums, I have a very small social footprint.

I run ad blockers on everything.
 
You can use a robots.txt file to ask crawlers not to scrape your data, and ethical groups will honor it, but not everyone is going to respect it.
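For what it's worth, a compliant crawler is supposed to check those rules before fetching anything; that check is exactly what non-compliant scrapers skip. A minimal sketch using Python's standard `urllib.robotparser` (the robots.txt content and URL here are made up for illustration; "GPTBot" is OpenAI's documented crawler user agent):

```python
from urllib import robotparser

# Hypothetical robots.txt: block one named AI crawler, allow everyone else.
rules = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A crawler that plays by the rules asks before fetching:
print(rp.can_fetch("GPTBot", "https://example.com/forum/thread-123"))       # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/forum/thread-123")) # True
```

The catch, as noted above, is that this is purely voluntary: robots.txt is a request, not an access control, and a scraper that never runs a check like `can_fetch` is unaffected by it.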
 
None of the current AI clowns respect it. Smaller sites are getting hammered into oblivion.
All the big ones do: OpenAI, Google, Claude, etc.

It's the little guys you've got to worry about, although I'm sure hosting sites are working on AI "firewalls" to limit the scrapers that ignore it
 

I'm self-hosted. And I can't block all robots.
 
As somebody who works in technology, AI is scary, but the way that people wholeheartedly just trust it is even scarier.
No scarier than people wholeheartedly trusting in any one source of info.

As a fellow dabbler in tech, I think that the resources that have gone into AI development and operation are what's objectively scary.
 

International AI Safety Report

[2412.04984] Frontier Models are Capable of In-context Scheming https://share.google/0Yq4UXk75ZJ1JUWhc

If you don't have the time to read these papers, basically OpenAI, Anthropic, and an international panel all agree that AI is not only capable of scheming, but is actively generating its own agenda. At this stage in the game, the "AI agenda" is benign. There is no indication that AI will start plotting against us, but as AI capabilities are advancing at a much quicker pace than AI understanding is, this is a risk.

My biggest concern, which hasn't been investigated thoroughly, is AI-human congruence, i.e., people who use AI a lot beginning to share the same thought patterns and ideas as AI. We are already seeing it on a small scale: people starting to say words that AI loves to use, like "delve" and "pivotal", more.
 
AGI is a ways off, from what I have gathered. So while I'd agree that having such a thing would in fact be scary, I think at this point it's more of a philosophical debate.

I am curious, though, as to where this whole thing will end up.
 
Artificial General Intelligence is definitely a bad thing, in my prediction. Kinda like how you don't let the computer that points the gun on a ship pull the trigger.
 
Interesting. I asked AI if I could use an 8 ohm speaker in one of my amps that came with a 16 ohm. I already knew the answer. However, I was curious what it had to say, so I typed it in the evil Google search bar.

Its response was basically, "Yes, here are some examples..."

It literally linked to examples I wrote about using an 8 ohm speaker in the amp. I'm no expert at all. Yet, here AI is citing me as the expert behind its answer. Scary stuff.
 