Ron Swanson
Illuminator
Well .. AI still thinks these are film cameras that were used to film the entire movie "Top Hat"
So (the law of etc) it is not "intelligent" enough to know it is wrong, or when it is wrong.
I'd say it's technically not lying, as the program is following the same procedure as when it is giving back accurate results.
There is no intentional deception, as the program literally doesn't know right from wrong, since it has no model of reality to refer to.
I think we need both "wrong" and "hallucinations". Hallucinations aren't just the AI being wrong, they're about it making stuff up. It's the difference between a ◊◊◊◊◊◊◊◊◊◊ and someone making a mistake.
I hate the ◊◊◊◊◊◊◊◊ "hallucinations" excuse for AI.
Just say WRONG!!!!
Yes.
Has anybody tried out DeepSeek?
As any programmer, sorry, software engineer, will tell you: that extra capacity will be used.
James O'Malley has a piece on why DeepSeek doesn't mean the end of ChatGPT and friends. A more efficient use of resources will mean the existing computational capacity can be used to do more.
DeepSeek isn't a victory for the AI sceptics: "Am I mad... or is everyone else?" (takes.jamesomalley.co.uk)
Yes, there's clearly some other system watching over it. The model itself is not censored; I've tried the 14b subset on my computer and it can write poems about oppression in China all day long.
From descriptions, its censoring is like ChatGPT used to do, i.e. it starts to give an uncensored answer and then wipes it as you watch.