On the "AI" thing - it's absolutely marketing, but they do it because in 99% of people's heads machine learning (which is what they are actually doing) and "AI" are the same thing, and that's not really their fault. Companies like Google, Apple, Tesla, and many others have all dubbed machine learning "AI" because, basically, AI sells.
So yes, Neural isn't building generalized artificial intelligence, but they have built and trained a very specific and sophisticated ML algorithm to do their capture. What exactly they are doing will obviously never be made public, but they do talk about it hearing "like a human," so I suspect they have invested a lot of time in training a model that is aware of (and hence captures) the psychoacoustic quirks of the human auditory system. Which is really quite interesting.

If you've done music and mixing for any length of time, you know there are just as many auditory "illusions" as there are visual ones - our brains and ears play tricks on us in certain ways. That's what drove the loudness wars for a while: "louder" sounded "better," only it didn't really. It's also the basis for a lot of compression technology. Frequency masking is a thing - the loudest sound at any given moment masks quieter sounds in the same frequency range, so we can preserve the loud thing, discard information about the quieter thing, and our dumb human hearing can't really tell the difference. We're more sensitive around 5-7 kHz (baby cry) because... we just are, cause babies crying; we perceive certain frequency balances differently depending on overall volume, etc, etc.
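To make the "sensitivity varies with frequency" point concrete, here's a minimal sketch - my own illustration, nothing to do with whatever Neural actually does - using the standard A-weighting curve, which is a crude, well-known approximation of how the ear's sensitivity changes across the spectrum:

```python
import math

def a_weight_db(f_hz: float) -> float:
    """A-weighting gain in dB (IEC 61672 formula; 0 dB at 1 kHz by construction).

    A rough stand-in for equal-loudness: large negative values mean the
    ear is much less sensitive at that frequency.
    """
    f2 = f_hz * f_hz
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.0

# Equal sound pressure, very unequal perceived loudness:
print(round(a_weight_db(100), 1))   # ~ -19.1 dB: ear is far less sensitive
print(round(a_weight_db(1000), 1))  # ~ 0.0 dB: the 1 kHz reference
print(round(a_weight_db(3000), 1))  # ~ +1.2 dB: near peak sensitivity
```

The relevant idea: any capture process that weights its error perceptually (rather than treating every Hz of the spectrum as equally important, as a raw sample-by-sample comparison would) is already "hearing like a human" in a small way.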
So, I believe Neural has spent time trying to model THAT. Not taking a flawless, 20 Hz-20 kHz, perfectly flat "sample" and determining the precise deltas between source and response - I think their capture tries to "hear" the response the way a human ear would, with all of its dumb little quirks and illusions. Without that, yes, you can capture a perfect "tonal" profile: you can reverse-engineer an EQ curve, determine how a very specific waveform you fed in gets clipped, and do Fourier analysis to find the harmonics and overtones. But sometimes there's just a weird... swell, or a "bubble," or a grunt, or a thump, or whatever other esoteric term we use to describe the "feel" of an amp. We KNOW it's there; we HEAR it - like a human. And these things happen over time - they bloom or diminish - so you need to sample and "learn" the response of what you're capturing in a certain way to get them. I think Neural - whatever they are doing - has THAT, a little something extra. Something that doesn't just get very close on tone, but also gets more (not all) of the "feel," because it's "hearing" things the way people do. It's not a rigid algorithm; it's adaptive, it's "learning" (so yes, misnamed as "intelligent"), and they can keep training it on new things, making it better, and then update the models on all the QCs. It's the adaptivity, I think, that sets their POV on how to capture apart from the rest.
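For contrast, here's the kind of static analysis I'm saying is NOT enough on its own - a toy sketch (again, entirely my own, hypothetical, not Neural's method) that feeds a sine through a fixed clipping curve and reads the harmonics off a single-bin DFT. It nails the steady-state harmonic fingerprint, but a memoryless model like this by definition can't capture sag, bloom, or anything else that evolves over time:

```python
import math

N = 1024  # analysis window length, in samples

def tone(bin_k: int, n: int = N) -> list:
    """A pure sine landing exactly on DFT bin k (so there's no spectral leakage)."""
    return [math.sin(2 * math.pi * bin_k * i / n) for i in range(n)]

def clip(signal: list, drive: float = 4.0) -> list:
    """A static, memoryless stand-in for an amp's clipping stage."""
    return [math.tanh(drive * s) for s in signal]

def bin_mag(signal: list, k: int) -> float:
    """Amplitude of the sinusoidal component at DFT bin k."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    return math.hypot(re, im) * 2.0 / n

clean = tone(8)       # fundamental at bin 8
driven = clip(clean)  # "amp" output

# Clipping creates harmonics that weren't in the input; tanh is
# odd-symmetric, so only odd harmonics appear (3rd harmonic = bin 24).
print(round(bin_mag(clean, 24), 4))   # ~0.0: no 3rd harmonic in the source
print(round(bin_mag(driven, 24), 4))  # clearly nonzero after clipping
```

This tells you exactly what the nonlinearity does to one frequency at one level, forever. The stuff I'm calling "feel" lives in how that answer changes over time and with playing dynamics, which is why a learned, adaptive model has room to do better than a fixed snapshot like this.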
So yes... "AI" is marketing. But it's not fair to discount their approach to capture as garbage, or as just more of the same, because they use a marketing term they basically MUST use - others before them have abused it so much that it's the only term Joe Public will understand and respond to when referring to machine learning and adaptive algorithms.
And this isn't meant to sway you - I just wanted to clear up the "marketing nonsense" POV. Yes, it's hype, but it's also unique and has a very real basis in some good technology.
Disclosure - I work in technology for a Silicon Valley company and actively build, train, and use ML/AI models for various parts of my day job. And yes, much of what I said is speculation, because Neural isn't going to give up the special sauce - but I at least have some basis for understanding what I think they are trying to do, and I suspect I'm in the ballpark.