flebitz 11 hours ago

> Most models lack a robust understanding of the factive nature of knowledge, that knowledge inherently requires truth.

I’d say LLMs may understand better than we do that belief and fact are tightly interwoven, precisely because they don’t bother with our grandstanding classification of information.

There’s a real tension here: truth can exist while fiction is widely accepted as truth, with humans unable to distinguish which is which, all the while believing that some or most of us can.

I’m not pushing David Hume on you, but I think this is a learning opportunity.

  • scrubs 10 hours ago

Pretty boys talking nonsense on TV (or social media), with all the implied grandstanding, is a problem. But good lord, we have to aim a lot higher than that.

fuzzfactor 13 hours ago

Naturally, language models can leverage fiction more strongly than fact, much as people fluent in ordinary languages have done since the beginning of time.

Sometimes, the more fluent the speaker, the more often the fiction flies under the radar.

For AI, this is most likely when they sound as close to human as possible.

Anything less, and the performance will be judged lower by one standard or another.

mock-possum 8 hours ago

Well, no, of course not: people seemingly can’t, or don’t care to, do that, and LLMs can only generate what they’ve seen people say.

It’s just another round of garbage in garbage out.