@ewen crypto, blockchain, NFTs, AI, and now "super-intelligence." Just another grift built upon a potentially somewhat useful technology.
The screenshot is superb ... what an (unwitting?) self-dunk. Plus, who in their right mind would call an LLM "smart"? Comprehensive, perhaps, but smart? And Altman can't claim he doesn't understand what he's dealing with, hence he is speaking to mislead on purpose.
I believe this is a quote from Cory Doctorow…I read it this morning and choked on my shreddies. I spend my days making asbestos removal requests based on sampling reports and photographs…but yeah, one day soon someone will make an economic decision, based on a cost/benefit analysis, as to whether we need humans checking and authorising asbestos removals or whether this could all be done by AI.
@ewen @albertcardona Coming at this topic from another angle, the sense of, or lack of, social responsibility:
Morris, A. 2025. What You’ve Suspected Is True: Billionaires Are Not Like Us. 'Rolling Stone' (available on-line: https://www.rollingstone.com/culture/culture-commentary/billionaires-psychology-tech-politics-1235358129/, accessed 6 July 2025).
From a study of car driver behaviour, the conclusion is: "Wealth tends to make people act like assholes, and the more wealth they have, the more of a jerk they tend to be."
@ewen IMV, AI - if programmed by an unrelated 5 year old - would always be 'smarter' than Sam.
@ewen, it’s not good having everyone agree with you on social media.
I’ve been learning to live with Microsoft 365 Copilot at work over the last 2 years. It saves me hours of time and has given me new capabilities. At a strategic level, then, it’s useful. Maybe not for detailed decisions… but for gathering and summarising evidence it’s brilliant. The important skill is knowing when you can trust it. But that’s the same with humans…
@ewen A particular risk seems to be around health and safety decisions. Also customer services and cyber security.
@ewen It's completely useless for science. If I ask it for a paper by author X, it will absolutely give me a paper by X, whether it exists or not.
An LLM is also bad at summarising nuanced results, and will contradict itself within a single summary.
Does it help with programming? Sure, but read the code. It's as if a foreign writer with a basic understanding of English and a love for puzzles wrote a book: utterly long and way too complicated!