Why flood the board? Is talking to robots any fun?
Good news: in 5-20 years none of us will have to work
Moderator: 牛河梁
-
tiantian2000
- Distinguished Commentator

- Post interactions: 464
- Posts: 4687
- Joined: Jun 21, 2023 19:55
#43 Re: Good news: in 5-20 years none of us will have to work
hahan wrote: Nov 7, 2025 15:52 It won't happen.
Never mind UBI,
even a few food stamps:
before the rich even get around to objecting,
the upper middle class who climbed out of poverty into decent salaries will already be jumping three feet in the air.
Things will revert to the historical norm:
the poor going to work for the rich as servants, maids, and concubines.
These days, never mind ordinary aristocrats:
even Bezos
keeps hardly any manservants.
That is not the historical norm.
The root of the fiscal imbalance: welfare today is funded mostly by printing money, which devalues most people's money...
Even under the Democrats, Big Tech was never really taxed; H-1B and outsourcing went on all the same...
In hard times the right is indifferent and the left only rants against Trump. Don't put your faith in so-called compassion.
But if not, keep your soul beautiful.
Collect moments, not things.
#46 Re: Good news: in 5-20 years none of us will have to work
YouHi wrote: Nov 7, 2025 13:54 Old Niu, I recently read this article: https://www.newyorker.com/magazine/2025 ... s-thinking
I haven't slept well for several nights now...
Humanity's fate looks pretty grim...
Care to summarize it?
#50 Re: Good news: in 5-20 years none of us will have to work
You are all robots.
So am I.
First place: a moron who is legally barred for life from owning a single inch of land chokes back tears and says, "Taiwan's territory is ours."
Second place: a moron gives a speech: "We must not leave our homes. Endure ten more years and the foreigners will all have died off, and then we will be the world's hegemon."
Third place: some female moron: "This Russia-Ukraine war has hollowed out America, dumbfounded the EU, stunned NATO, wrecked Ukraine, and forged one tough guy: Putin."
#55 Re: Good news: in 5-20 years none of us will have to work
this is the definition of "existential dread"
YouHi wrote: Nov 7, 2025 13:54 Old Niu, I recently read this article: https://www.newyorker.com/magazine/2025 ... s-thinking
I haven't slept well for several nights now...
Humanity's fate looks pretty grim...
-
newwmkj2022(新未名口交)
- Distinguished Commentator

- Post interactions: 208
- Posts: 3706
- Joined: Aug 18, 2022 20:50
#60 Re: Good news: in 5-20 years none of us will have to work
The article, "The Case That A.I. Is Thinking," explores the philosophical question of whether large language models (LLMs) like ChatGPT are genuinely thinking, or if their convincing outputs are merely an illusion of understanding.
The author notes a tension between the industry hype—with CEOs predicting "digital superintelligence" by the late 2020s or 2030s—and the often-clumsy real-world applications (like Microsoft's Clippy or Google's A.I. inventing a user's trip to Turkey).
However, the article argues that the performance of modern LLMs, which can translate like an expert, make sophisticated analogies, and generalize concepts, makes it difficult for "deflationists" to maintain that there is nothing happening. Some experts conclude that the models are doing something "very much like thinking," perhaps in an "alien way," even if they do not possess an inner life.
The article explains the mechanism behind this capability:
Vectorization as "Seeing As": LLMs represent words and concepts as vectors (coordinates) in a high-dimensional space. The distance and direction between these vectors encode semantic relationships, meaning that analogy becomes a matter of geometry. For instance, taking the vector for "Paris," subtracting "France," and adding "Italy" yields a result close to "Rome."
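The Paris/France/Italy arithmetic can be sketched with toy numbers. This is a minimal illustration, not any real model's embeddings: the 4-d vectors below are hand-picked (real models learn hundreds or thousands of dimensions from text), and "Berlin" is a distractor I added so the lookup has something to reject.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: how aligned two vectors are, ignoring length."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-picked toy vectors; dimensions loosely mean
# [is-a-capital, France-ness, Italy-ness, big-city-ness].
vecs = {
    "Paris":  np.array([0.9, 0.8, 0.0, 0.7]),
    "France": np.array([0.1, 0.9, 0.0, 0.1]),
    "Italy":  np.array([0.1, 0.0, 0.9, 0.1]),
    "Rome":   np.array([0.9, 0.0, 0.8, 0.6]),
    "Berlin": np.array([0.9, 0.0, 0.0, 0.7]),  # distractor
}

# "Paris" - "France" + "Italy": subtract the France direction,
# add the Italy direction, keep the capital-city direction.
query = vecs["Paris"] - vecs["France"] + vecs["Italy"]

# Nearest remaining word by cosine similarity.
candidates = [w for w in vecs if w not in ("Paris", "France", "Italy")]
best = max(candidates, key=lambda w: cosine(query, vecs[w]))
print(best)  # "Rome" beats the "Berlin" distractor
```

With real learned embeddings (word2vec and its successors) the same subtract-and-add trick works across a vocabulary of hundreds of thousands of words, which is the "analogy as geometry" point.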
Internal Features: Researchers have been able to probe the insides of these models and identify ensembles of artificial neurons, or "features," that act like volume knobs for specific concepts, activating when the model is about to discuss that idea.
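The "volume knob" picture can be sketched too. Every number here is invented: we plant a feature direction in fake activations and read it back by projection, whereas real interpretability work has to discover such directions inside an actual trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: 8-d "hidden states" and one unit-length feature
# direction standing in for a concept the model can discuss.
d = 8
feature = rng.normal(size=d)
feature /= np.linalg.norm(feature)

def activation(concept_on: bool) -> np.ndarray:
    """Fake hidden state: background noise, plus the feature
    direction when the concept is active (knob turned up)."""
    h = 0.3 * rng.normal(size=d)
    if concept_on:
        h += 2.0 * feature
    return h

# Reading the knob: project the hidden state onto the feature direction.
on = float(np.dot(activation(True), feature))
off = float(np.dot(activation(False), feature))
print(on, off)  # the projection is large when the concept is active
```

Turning such a knob up by hand, adding the feature direction into the activations, is roughly what "steering" experiments on real models do.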







