Pro@programming.dev to Technology@lemmy.world · English · 1 day ago
The Collapse of GPT: Will future artificial intelligence systems perform increasingly poorly due to AI-generated material in their training data? (cacm.acm.org)
doodledup@lemmy.world · English · 15 hours ago
LLM watermarking is economically desirable. Why would it be more profitable to train worse LLMs on LLM outputs? I'm curious to hear any argument.
Also, what do deepfakes have to do with LLMs? The two aren't related at all.
A certificate for "real" content is not feasible. It's much easier to just prevent LLMs from training on LLM output.
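For context on what "watermarking to keep LLM output out of training data" could look like in practice, here is a minimal, purely illustrative sketch of the statistical "green list" watermarking idea (in the style of Kirchenbauer et al.): a generator biases sampling toward a pseudo-random subset of the vocabulary at each step, and a detector checks whether a suspicious fraction of tokens landed in that subset. All function names, parameters, and thresholds below are assumptions for illustration, not any real library's API.

```python
# Toy sketch of "green list" LLM watermark detection.
# Everything here (names, gamma, threshold) is an illustrative assumption.
import hashlib
import math

GREEN_FRACTION = 0.5  # gamma: fraction of the vocabulary marked "green" at each step


def is_green(prev_token_id: int, token_id: int) -> bool:
    """Pseudo-randomly assign token_id to the green list, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token_id}:{token_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GREEN_FRACTION


def watermark_z_score(token_ids: list[int]) -> float:
    """Count green tokens and return a z-score; watermarked text scores far above 0."""
    hits = sum(is_green(prev, cur) for prev, cur in zip(token_ids, token_ids[1:]))
    n = len(token_ids) - 1
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std


# A data-collection pipeline could drop documents whose z-score exceeds some
# threshold (e.g. 4.0) before they ever reach a training set -- which is the
# kind of filtering the comment above is arguing for.
```

A real deployment would hash context with a secret key and apply the green-list bias during generation, but even this toy detector shows the relevant point: watermark detection is a cheap statistical test, which is why filtering LLM output from training data is plausible at scale.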