larryguitar
Active member
There are a ton of complex algorithms that grade the data on hundreds of data points: how many pages link to the information, what kind of pages link to it, what kind of site it is found on (a blog, a .EDU domain, a news site), how old the information is, what language it is in... the list goes on. So if ChatGPT finds information on allergies on the AMA or Mayo Clinic website, it will grade that information higher than the 10-year-old info it finds on a Mommy & Me blog.
So, it is not regurgitating data or simply putting together what it sees as the most common answer.
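To make the idea concrete, here is a minimal sketch of that kind of source scoring. The feature names, weights, and site-type categories are hypothetical illustrations, not anything from ChatGPT's actual ranking:

```python
# Hypothetical sketch of weighted source scoring; weights and features are made up.

SOURCE_WEIGHTS = {"edu": 1.0, "medical_org": 1.0, "news": 0.8, "blog": 0.3}

def score_source(inbound_links, site_type, age_years, language, query_language="en"):
    """Combine a few signals into one quality score between 0 and 1."""
    link_score = min(inbound_links / 1000.0, 1.0)    # more inbound links, capped at 1
    type_score = SOURCE_WEIGHTS.get(site_type, 0.5)  # reputable site types weighted higher
    freshness = 1.0 / (1.0 + age_years)              # older pages decay toward zero
    lang_match = 1.0 if language == query_language else 0.5
    return 0.4 * link_score + 0.3 * type_score + 0.2 * freshness + 0.1 * lang_match

# A 10-year-old blog post scores far below a current medical-organization page:
print(score_source(50, "blog", 10, "en"))          # ~0.23
print(score_source(5000, "medical_org", 1, "en"))  # ~0.90
```

The point is only that each signal gets a weight, and whoever sets those weights decides which sources win.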
I understand the theory, but I also realize that the people creating the rules for the algorithm have a tremendous amount of leverage over which direction it will go. At the very best, it reinforces the orthodox view; at worst (and there have been plenty of examples) it denies reality, creates fake data, or simply lies. That will get better, but it will still be at the mercy of the people drawing the lane markers, and those people are completely anonymous, not interrogatable, and essentially immune from consequences. We've put a faceless group of unelected, unknown people in charge of what is 'true' and what will feed the LLM; that's terrifying, IMHO.
Larry