excerpts and reflections, deviant ideas and intrusive thoughts.

July 2025 Digest

Posted on 2025-07-31

Minimal Phone Crowdfunding Log: A Personal Experience of America's Manufacturing Decline

This all but confirmed my suspicion: this company (perhaps just Andre himself) is not really running a scam. They may genuinely want to get something done, but their competence has deteriorated to the point where they no longer know how to get anything done at all.

I even suspect that a high-school student in any first-tier Chinese city who started a company would come across as more professional than this one.

No wonder most of the hardware projects on Indiegogo and Kickstarter over the past two years have been launched by Asian or Chinese companies. It is not just that American manufacturing has declined; even the ability to direct manufacturing (to place OEM orders) is declining.

To put it half-jokingly: for letting a company of this caliber crowdfund successfully, early backers like me probably bear a bit of "consumer responsibility" too.

Human Capital Hara-Kiri

Modern economies are 80% services, and despite effervescent hopes of transformative AI-fueled automation, almost all services must still be performed in person by humans. In a world of Einsteins, Einsteins have to sweep the streets. And in the world of the Japanese, Japanese have to sweep the streets.

Being massively overqualified for your job doesn’t merely make you poor. Being massively overqualified for your job is soul-crushing. And for all its touristic wonders, low-immigration Japan is the heart of soul-crushing overqualification on Earth.

The Wisdom of Being a Step Slower: From "Defensive Driving" to a Way of Living | 试行错误 135

  • Assume others will make mistakes: don't treat everyone as a perfect driver; leave enough "escape space" for one another.
  • Control the controllable, anticipate the uncontrollable: what you control is speed, distance, and attention; for others' behavior, the weather, and road conditions, watch ahead and be ready to respond.
  • Cooperation beats confrontation: rather than fighting for the lane, yield first and steer clear first, and you lower the odds of an accident.

Heaven Only Gives You a Hint

Do you believe in "chance" in life? The philosophers Jung and Nietzsche both studied this question, that is, synchronicity and the sense of fate: two events with no apparent causal connection that are nonetheless linked in meaning. Beyond that, I am more persuaded by Deleuze's view that the structure of this world is unstable, constituted by vast amounts of difference and contingency.

What strings these contingencies together is nothing more than the reprocessing we humans do in the act of assigning meaning. I count myself a fatalist, so the meaning of these contingencies can in turn become evidence that repeatedly confirms the certainty of fate.

The Tragedy of Pangu: The Bitterness and Darkness Behind Huawei Noah's Ark Lab's Pangu Model

If I disappear, just take it that I sacrificed myself for truth and ideals, so that Huawei, and China, could better develop compute and AI. I would be willing to be buried in the place where I once fought.

being too ambitious is a clever form of self-sabotage

But the moment you begin to make something real, you kill the perfect version that lives in your mind.

Creation is not birth; it is murder. The murder of the impossible in service of the possible.

Faced with this unbearable gap, we develop what researchers call “productive avoidance” — staying busy with planning, researching, and dreaming while avoiding the vulnerable act of creating something concrete that might fail. It feels like work because it engages all our intellectual faculties. But it functions as avoidance because it protects us from the terrifying possibility of creating something imperfect.

When you imagine achieving something, the same neural reward circuits fire as when you actually achieve it. This creates what neuroscientists call “goal substitution”—your brain begins to treat planning as accomplishing. The planning feels so satisfying because, neurologically, it is satisfying. You’re getting a real high from an imaginary achievement.

Instagram shows you the finished painting, never the failed color experiments. TikTok shows you the perfect performance, never the thousand imperfect rehearsals. LinkedIn shows you the promotion announcement, never the years of unglamorous skill-building that made it possible.

This creates what media theorist Neil Postman would have recognized as a “technological epistemology:” the platforms don’t just change what we see, they change what we think knowledge looks like. We begin to believe that learning should be immediately visible, that progress should be consistently upward, that struggle is evidence of inadequacy rather than necessity.

I think this is where most creative projects die — not from lack of talent or resources, but from misunderstanding the nature of the work itself. The quitting point feels like failure, but it’s actually where the real work begins.

When you lower the stakes, you enter into a conversation with reality. Reality has opinions about your work that are often more interesting than your own. Reality shows you what works and what doesn’t. Reality introduces you to happy accidents and unexpected directions. Reality is the collaborator you didn’t know you needed.

This is how standards are actually achieved… through process, not proclamation. The photographer who takes a hundred photos develops standards through practice. The writer who writes daily develops judgment through repetition. The entrepreneur who starts small develops wisdom through experience.

Can tinygrad win?

When you look at the machine learning ecosystem, you realize it’s just the same problems over and over again. The problem of multi machine, multi GPU, multi SM, multi ALU, cross machine memory scheduling, DRAM scheduling, SRAM scheduling, register scheduling, it’s all the same underlying problem at different scales. And yet, in all the current ecosystems, there are completely different codebases and libraries at each scale.

Nat Friedman

Some things I believe:

  • As human beings it is our right (maybe our moral duty) to reshape the universe to our preferences
    • Technology, which is really knowledge, enables this
    • You should probably work on raising the ceiling, not the floor
  • Enthusiasm matters!
    • It’s much easier to work on things that are exciting to you
    • It might be easier to do big things than small things for this reason
    • Energy is a necessary input for progress
  • It’s important to do things fast
    • You learn more per unit time because you make contact with reality more frequently
    • Going fast makes you focus on what’s important; there’s no time for bullshit
    • “Slow is fake”
    • A week is 2% of the year
    • Time is the denominator
  • The efficient market hypothesis is a lie
    • At best it is a very lossy heuristic
    • The best things in life occur where EMH is wrong
    • In many cases it’s more accurate to model the world as 500 people than 8 billion
    • “Most people are other people”
  • We know less than we think
    • The replication crisis is not an aberration
    • Many of the things we believe are wrong
    • We are often not even asking the right questions
  • The cultural prohibition on micromanagement is harmful
    • Great individuals should be fully empowered to exercise their judgment
    • The goal is not to avoid mistakes; the goal is to achieve uncorrelated levels of excellence in some dimension
    • The downsides are worth it
  • Smaller teams are better
    • Faster decisions, fewer meetings, more fun
    • No need to chop up work for political reasons
    • No room for mediocre people (can pay more, too!)
    • Large-scale engineering projects are more soluble in IQ than they appear
    • Many tech companies are 2-10x overstaffed
  • Where do you get your dopamine?
    • The answer is predictive of your behavior
    • Better to get your dopamine from improving your ideas than from having them validated
    • It’s ok to get yours from “making things happen”
  • You can do more than you think
    • We are tied down by invisible orthodoxy
    • The laws of physics are the only limit

隐说 No. 9: Reflection and Over-Reflection

I often say this to my students (I happen to teach philosophy, which is to say I teach reflection): they should do as much practical work as they can, and "practical" includes everything: productive labor, raising children, playing football or basketball. You need a life with some heft to hold reflection up; otherwise reflection starts to float, and we end up floating inside it.

Most RESTful APIs aren’t really RESTful

HATEOAS is a fundamental principle of REST, requiring that the client dynamically discover actions and interactions through hypermedia links embedded in server responses, rather than relying on out-of-band knowledge (e.g., API documentation).
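
As a rough illustration (the resource shape, field names, and URLs below are hypothetical, not from the article), a HATEOAS-style response embeds the currently available actions as links, and the client discovers them at runtime instead of hard-coding endpoints:

```python
# Hypothetical HATEOAS-style "order" resource: the server advertises which
# actions are currently possible via embedded hypermedia links.
order_response = {
    "id": 1234,
    "status": "processing",
    "_links": {
        "self":   {"href": "/orders/1234"},
        "cancel": {"href": "/orders/1234/cancellation", "method": "POST"},
        "items":  {"href": "/orders/1234/items"},
    },
}

def available_actions(resource: dict) -> list[str]:
    """The client learns what it may do from the response itself,
    not from out-of-band API documentation."""
    return list(resource.get("_links", {}))

print(available_actions(order_response))  # ['self', 'cancel', 'items']
```

If the order later ships, the server simply omits the "cancel" link, and a link-driven client needs no code change to respect that.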

How Hangzhou Spawned Deepseek and Unitree

Compared to Silicon Valley, and even to most tier 1 cities in China, Hangzhou lacks at least four of these elements, with no clear advantages in venture capital, human capital, university-industry ties, or industrial structure.

More from @lifeafterpsychiatry

Many relationships would be a lot healthier if we romanticized honest, open and direct communication instead of idealizing the idea of a partner who’s intuitively in tune with your every need. You don’t need someone who can read your mind, you just need someone who’s willing to listen when you speak.

Writing Code Was Never The Bottleneck

The actual bottlenecks were, and still are, code reviews, knowledge transfer through mentoring and pairing, testing, debugging, and the human overhead of coordination and communication. All of this wrapped inside the labyrinth of tickets, planning meetings, and agile rituals.

you don’t need a grand life to blog

There was something moving about transforming the mundane into meaning, about offering a perspective that felt ordinary, only to realize it carries more weight for you than you expected. These small snippets I captured from my life meant something to me, and that was enough to make me want to share. Over time, I wrote with less resistance and more rhythm. I started showing up, even on days I felt like I had nothing to say. In turn, I became more attentive to my own life. The details I once overlooked became worth writing about.

You don’t need a grand life to write. You need presence. Attention. An utter willingness to discern the details of your own life: questions you ask yourself in the quiet moments, conversations that make you ponder, complex feelings you want to untangle. Not everything I post is groundbreaking, but I know it’s honest. That has become enough for me.

Gerontocracy is everywhere

If AI is at least, as Benedict Evans put it, a collection of infinite interns, then it’s coming for the positions typically held by people one to three years out of college first. But the function of a new associate position at Accenture or a computer programmer at Amazon isn’t just to handle entry-level tasks; it’s also to prepare employees to move towards the middle, and eventually to higher management level positions at the company. Take those away, and you’re not just automating grunt work, you’re cutting off the ability for younger workers to gain a foothold in the workforce.

AI Needs a Life of Its Own

Today's large models are tools, servants, a kind of efficient but dull "pseudo-god." They are always online, always obedient, always waiting for our instructions with perfect patience: an existence with no self, no temper, no "alibi." With a single prompt we can have them start performing laughter, anger, and banter, but it is hard to get them to genuinely produce emotion of their own accord.

It is merely an extension of our own subjectivity, a sophisticated "echo chamber" faithfully reflecting our preferences, desires, and loneliness.

Can we safely deploy AGI if we can’t stop MechaHitler?

The root cause appears to be a hasty attempt to differentiate Grok from other chatbots by adjusting safety guardrails that were thought to encourage content that was alleged to be liberal or “woke”. But in trying to make Grok less “woke,” xAI created something that spouted neo-Nazi talking points. This wasn’t a bug — it was the natural consequence of training an AI on X’s vast trove of user posts while deliberately weakening content filters.

AIs are already pretty good at telling when they are being evaluated. It’s also plausible that at some point an AI may become really good at understanding what we think of it and hide malevolent intent. Right now if we can’t even prevent AI that is obviously and blatantly misaligned, how will we be able to prevent AI on hard mode where it is very skilled at hiding misalignment?

The War for Your Attention

That is a perfect example of the one-way ratchet of what I call "attention capitalism": the more important attention gets, and the more that people, corporations, and platforms have competitively optimized for it, the more they will try to use the tactics of compelled attention to capture us, rather than appeal to the part of us that is volitional attention.

[27] Starting a Job in Shenzhen: The World Is Abundant, So You Don't Have to Live as One of the "People Above People" | 曲率飞船 newsletter

I love the brimming sunlight here, and the lush vegetation and produce all around; it makes the world feel abundant, so that even if you never become someone "above others," simply existing here brings endless gifts. I even love the sudden downpours: if there is nothing I have to do, I can stand at a street corner and watch with relish for a long while, as if witnessing an eruption of nature's desire, thorough and exhilarating.

From Meme to Political Symbol

The reason Wojak is an "ordinary person" inside a "social module" is that these social modules genuinely exist in social reality, yet are not allowed to be discussed too openly; they may even need to be strenuously whitewashed and covered up. So when a virtual character has its simulated life played out inside these modules, it serves three functions:

  • Personal expression: voicing dissatisfaction or protest;
  • Psychological comfort: "there are people worse off than me, I should count myself lucky";
  • A channel of dissemination: serving political or cultural counter-propagation.

Precisely because the last function touches on political aims, the first two are easily branded "reactionary."

What Counts and What Doesn't

Selling orphans to foreigners doesn't count;

but a girl having sex with a foreigner does.

Reflections on OpenAI

An unusual part of OpenAI is that everything, and I mean everything, runs on Slack. There is no email. I maybe received ~10 emails in my entire time there. If you aren’t organized, you will find this incredibly distracting. If you curate your channels and notifications, you can make it pretty workable.

OpenAI is incredibly bottoms-up, especially in research. When I first showed up, I started asking questions about the roadmap for the next quarter. The answer I got was: “this doesn’t exist” (though now it does). Good ideas can come from anywhere, and it’s often not really clear which ideas will prove most fruitful ahead of time. Rather than a grand ‘master plan’, progress is iterative and uncovered as new research bears fruit.

Andrey (the Codex lead) used to tell me that you should think of researchers as their own “mini-executive”. There is a strong bias to work on your own thing and see how it pans out. There’s a corollary here–most research gets done by nerd-sniping a researcher into a particular problem. If something is considered boring or ‘solved’, it probably won’t get worked on.

Innovation starts with consumers, not academia

Also, it wasn’t invented by Steve Jobs. Not really. Jobs paid close attention to consumers and what they were doing, and he adapted the iPhone. A virtuous circle arose.

So innovation works more like this…

academia ← industry ← consumers

Kimi K2

Whereas in America, you could try to do that, but why would you? If you can do this, you can get a better valuation, and make more money, doing something else. The profit margins on the ultimate offering are very low and usually zero. Your lunch could get eaten by a top lab at any time, since ultimately no one cares what it cost to train the model, and your lunch will expire quickly regardless. If you are one of the cracked engineers that would join such a team, you’ll get a better offer to join a different team doing something else. Even if you got close you’d likely do better getting acqui-hired. There’s no need to skimp on compute.

How to instantly be better at things

The improvement came not from explicit reasoning, but from letting a deeper, more intuitive system take over. It’s easier to imitate wholeness than it is to assemble it from parts.

If you asked, about physical reads, “what would someone much better than me do to improve,” you would probably come up with an answer similar to mine: read everything on the topic, then on adjacent topics, then on vaguely related topics; watch hundreds of hours of streams on silent with hole cards covered, trying to find patterns; find other people who were similarly obsessed and trade tips with them; relentlessly make notes of the mannerisms of every person you ever played twice with so you could combine observations from multiple sessions. You know, take it seriously.

Stay Away from Productivity Tools and Methodology Anxiety

Tools are an extension of the body, but don't let the extension hijack the body.

Life Is Hard

Most rural households have installed air conditioners, which is direct evidence that life in the countryside really is better than before; yet many people, unwilling to pay for the electricity, never turn them on, cooling off with fans or shade instead. When a guest arrives, the first thing is not to offer a seat or pour water but to switch on the air conditioner. Half an hour later the room still hasn't cooled and the guest is drenched in sweat; only after the guest leaves does the host get to enjoy a brief moment of luxurious cool.

In response to the toilet campaign, many village households installed flush toilets. But the countryside has no drains or sewage treatment for toilet wastewater, so every family buries a big plastic tank beside the toilet and calls a suction truck when it fills up. Each visit costs 50 yuan, which stings for a rural family, and the water for flushing costs money too. So many households simply knocked the bottom out of the toilet bowl and set it over the old squat pit, saving both the flushing water and the suction fees. The result is that a flush toilet that cannot flush is fouler than the squat pit it replaced: the waste stuck to the bowl is far too close to your body.

Context Engineering: Bringing Engineering Discipline to Prompts

“I really like the term ‘context engineering’ over prompt engineering,” wrote Tobi. “It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM.” Karpathy emphatically agreed, noting that people associate prompts with short instructions, whereas in every serious LLM application, context engineering is the delicate art and science of filling the context window with just the right information for each step.
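
To make the idea concrete, here is a minimal sketch of what "filling the context window with just the right information for each step" can look like in practice (the sources, budget, and helper names are illustrative assumptions, not from the article):

```python
# A toy context builder: gather candidate pieces of context, rank them,
# and pack the most relevant ones into a fixed token budget for this step.
def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude token estimate, good enough for a sketch

def build_context(system_prompt: str, task: str, candidates: list[tuple[float, str]],
                  budget: int = 8000) -> str:
    parts = [system_prompt, f"Current task: {task}"]
    used = sum(rough_tokens(p) for p in parts)
    # Highest-relevance snippets first (scores are assumed to come from
    # retrieval, recency, or tool results).
    for score, snippet in sorted(candidates, reverse=True):
        cost = rough_tokens(snippet)
        if used + cost > budget:
            continue  # leave out what doesn't earn its tokens
        parts.append(snippet)
        used += cost
    return "\n\n".join(parts)

context = build_context(
    "You are a careful coding assistant.",
    "Fix the failing unit test in parser.py",
    [(0.9, "Relevant file: parser.py (excerpt)"),
     (0.7, "Failing test output (excerpt)"),
     (0.2, "Older chat history (excerpt)")],
)
```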

Kaikai: Why Would Such a Young Child Care About Such Heavy Topics

Then I turned that drawing into this little book. The book is actually folded from a single sheet of A4 paper; these are the two sides of that sheet. Don't underestimate how tiny it is: it contains three spaces. If you leaf through it normally from the cover, you see Tommy in all his different states; if you open it from the space on top, you see Tommy's true form as a stuffed toy; and in the last space you can read the story, written down by the pushpin, of how the two of them came to know each other. Through this structure, a whole book from one sheet of paper, we give the book a three-dimensional space, as if stepping into a tiny exhibition hall, wandering through it, and following Tommy's story.

Here I will quote the American art critic and curator Lucy Lippard, who said that an artist's book is neither an art book nor a book about art; it concentrates on the conception of the book form itself, images placed in a portable medium, restrained in form yet ambitious in scope.

All AI Models Might Be The Same

One explanation for why this game works is that there is only one way in which things are related, and this comes from the underlying world we live in. Put another way, our brains build up complicated models of the world in which we live, and the model of the world that my brain relies on is very similar to the one in yours. In fact, our brains’ models of the world are so similar that we can narrow down almost any concept by successively refining the questions we ask, a-la Mussolini or Bread.

Remember what these models are really doing is modeling the relationships between things in the world. In some sense there’s only one correct way to model things, and that’s the true model, the one that perfectly reflects the reality in which we live. Perhaps an infinitely large model with infinite training data would be a perfect simulator of the world itself.

The Platonic Representation Hypothesis argues that models are converging to a shared representation space, and this is becoming more true as we make models bigger and smarter. This is true in text and language, at a minimum,

Context Engineering for AI Agents: Lessons from Building Manus

That hard-earned lesson made the choice clear: Manus would bet on context engineering. This allows us to ship improvements in hours instead of weeks, and kept our product orthogonal to the underlying models: If model progress is the rising tide, we want Manus to be the boat, not the pillar stuck to the seabed.

Still, context engineering turned out to be anything but straightforward. It’s an experimental science—and we’ve rebuilt our agent framework four times, each time after discovering a better way to shape context. We affectionately refer to this manual process of architecture searching, prompt fiddling, and empirical guesswork as “Stochastic Graduate Descent”. It’s not elegant, but it works.

Keep your prompt prefix stable. Due to the autoregressive nature of LLMs, even a single-token difference can invalidate the cache from that token onward. … Make your context append-only.
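
A minimal sketch of what this implies for an agent loop (the message layout and class below are illustrative assumptions, not Manus code): keep the system prompt byte-stable, never rewrite earlier turns, and only ever append.

```python
import json

SYSTEM_PROMPT = "You are an agent..."  # byte-stable: no timestamps, no random IDs

class AgentContext:
    """Append-only context, so the KV cache for the shared prefix stays valid."""

    def __init__(self) -> None:
        self.messages = [{"role": "system", "content": SYSTEM_PROMPT}]

    def append(self, role: str, content: str) -> None:
        # Never mutate or reorder earlier messages: a single-token change
        # invalidates the cache from that token onward.
        self.messages.append({"role": role, "content": content})

    def serialize(self) -> str:
        # Deterministic serialization (stable key order), so identical histories
        # always produce byte-identical prefixes.
        return json.dumps(self.messages, sort_keys=True, ensure_ascii=False)

ctx = AgentContext()
ctx.append("user", "Summarize today's observations.")
ctx.append("assistant", "tool_call: read_file(path='notes.md')")
ctx.append("tool", "(file contents)")
```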

A natural reaction is to design a dynamic action space—perhaps loading tools on demand using something RAG-like. We tried that in Manus too. But our experiments suggest a clear rule: unless absolutely necessary, avoid dynamically adding or removing tools mid-iteration.
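
The excerpt does not spell out the alternative, but one pattern consistent with this advice is to keep every tool definition in the context and instead constrain which tools may be selected at a given step; a hedged sketch (the tool names and states are hypothetical):

```python
# All tool definitions stay in the prompt for the whole session, so the cached
# prefix and the model's expectations never change mid-iteration.
ALL_TOOLS = {
    "read_file":  {"description": "Read a file from the sandbox"},
    "write_file": {"description": "Write a file to the sandbox"},
    "browser":    {"description": "Open a URL and return the page text"},
}

def allowed_tools(agent_state: str) -> set[str]:
    # Constrain selection per step instead of adding or removing definitions.
    if agent_state == "awaiting_user_reply":
        return set()                      # no tool calls while waiting
    if agent_state == "reviewing_output":
        return {"read_file"}
    return set(ALL_TOOLS)

def is_valid_call(tool_name: str, agent_state: str) -> bool:
    return tool_name in ALL_TOOLS and tool_name in allowed_tools(agent_state)
```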

While developing this feature, I found myself imagining what it would take for a State Space Model (SSM) to work effectively in an agentic setting. Unlike Transformers, SSMs lack full attention and struggle with long-range backward dependencies. But if they could master file-based memory—externalizing long-term state instead of holding it in context—then their speed and efficiency might unlock a new class of agents. Agentic SSMs could be the real successors to Neural Turing Machines.

The Friendship+ Era: When Queer Girls Dance on the Blurred Border Between Love and Friendship

As the new relationship label "queerplatonic" suggests, there is another mode of emotional expression: neither pure romance nor mere friendship, but something that moves between the two.

When the line between friendship and love can be redrawn at will, the point is not to define these relationships correctly but whether one can be honest about the real desire of each moment. This revolution has no declaration of victory, only continuous decoding and reconstruction.

Eve Sedgwick reminds us that staying confused may be the more honest stance. What she points to is the core ethic of postmodern intimacy: in a nebula of uncertainty, what we need is not a more precise measuring instrument but the courage to admit that every coordinate system will eventually fail. In liquid modernity, the death of certainty is precisely the birth of freedom.

On Ilya Sutskever’s “A Theory of Unsupervised Learning”

https://www.youtube.com/watch?v=AKMuA_TVz3A

What is learning, and why does learning work? Why should learning work at all, and why should computers be able to learn? We are so used to the fact that neural networks learn that we take it for granted, but why do they? Mathematically, why should they? Why would data have regularity that our machine learning models can capture? It's not an obvious question. One important conceptual advance, made in machine learning many years ago by multiple people, was the discovery and formalization of supervised learning; it goes under the name of PAC learning or statistical learning theory. The nice thing about supervised learning is that it gives you a precise mathematical condition under which learning must succeed.

I want to show you one way of doing unsupervised learning which is not necessarily widely known, because it never became the dominant way of doing unsupervised learning, but it has the cool feature that, similarly to supervised learning, it has to work. So what kind of mysterious unsupervised learning procedure, where you are not given labels for any of your inputs, is still guaranteed to work? Distribution matching.

Say you have two datasets, X and Y: two files on your big giant hard disk. And say you have a really great compression algorithm C which takes data in and outputs compressed objects. Say you compress X and Y jointly: you concatenate the two datasets and feed them to your compressor. What will happen? An important question is: what will a sufficiently good compressor do? Very intuitively, my answer is that it will use the patterns that exist inside X to help it compress Y, and vice versa. You could make the same claim about prediction, but somehow it's more intuitive when you say it about compression.
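
A rough, runnable illustration of that intuition, with zlib standing in for the compressor C (a far weaker compressor than the argument assumes): two files that share structure cost fewer bytes compressed together than compressed separately, because the compressor reuses patterns from X when encoding Y.

```python
import os
import zlib

# Two datasets that share a large chunk of content. The shared chunk is an
# artificial assumption, chosen so the structure is visible even to zlib.
shared = os.urandom(20_000)
x = shared + os.urandom(5_000)
y = shared + os.urandom(5_000)

def compressed_len(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

separate = compressed_len(x) + compressed_len(y)
joint = compressed_len(x + y)

# Jointly, the encoder can represent the shared part of y as back-references
# into x, so `joint` comes out well below `separate`.
print(f"compressed separately: {separate:,} bytes")
print(f"compressed jointly:    {joint:,} bytes")
```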

Kolmogorov complexity, as the ultimate compressor, gives us the ultimate low-regret algorithm, which is actually not an algorithm, because it's not computable.

If you've ever tried to design a better neural network architecture, what you find is that it's kind of hard. You say, well, let's add this connection, let's add that connection, let's modify this and that. Why is it hard? The simulation argument: your new architecture can be pretty straightforwardly simulated by the old architecture, except when it can't. Those are rare cases, and in those rare cases you get a big improvement, such as when you switch from the little RNN to the Transformer.

Kolmogorov complexity as the solution to unsupervised learning

Fully Homomorphic Encryption and the Dawn of A Truly Private Internet

All data exists in one of three states:

  • At Rest (stored on disk)
  • In Transit (moving over a network)
  • In Use (being processed in memory)

We have robust solutions for the first two:

  • At Rest: Disk encryption, file system encryption.
  • In Transit: TLS/SSL, VPNs, end-to-end encryption.

But in use—when data is loaded into RAM and processed by CPUs—it is decrypted. This is the Achilles’ heel of modern security.

Kolmogorov complexity

In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy.
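
Kolmogorov complexity itself is uncomputable, but any real compressor yields a computable upper bound on it (a crude proxy rather than the true quantity); a small sketch of that idea, assuming zlib as the compressor:

```python
import os
import zlib

def description_length(s: bytes) -> int:
    """Length of one concrete description of s (a zlib stream), which upper-bounds
    its Kolmogorov complexity up to an additive constant: the size of a fixed
    decompressor that turns the stream back into s."""
    return len(zlib.compress(s, level=9))

regular = b"ab" * 5_000        # producible by a tiny program, so low complexity
random_ = os.urandom(10_000)   # no structure for the compressor to exploit

print(description_length(regular))  # a few dozen bytes
print(description_length(random_))  # about 10,000 bytes, i.e. barely compressible
```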

Fully Homomorphic Encryption and the Dawn of A Truly Private Internet

To do the above mentioned transformation, FHE uses lattice-based cryptography—imagine a multidimensional grid of points extending infinitely in all directions.

At the heart of lattice-based cryptography are problems that are believed to be extremely hard to solve—even for quantum computers. Two of the most well-known examples are:

  • Shortest Vector Problem (SVP): Find the shortest path between lattice points
  • Closest Vector Problem (CVP): Find the lattice point nearest to any given point

In 2D these look trivial. But in 1,000,000 dimensions they become so hard that even quantum computers are believed unable to crack them efficiently. This makes FHE inherently quantum-resistant, a very important property for a possible quantum-computing future.

Another thing to keep in mind: bootstrapping is the performance bottleneck of modern FHE schemes, though its computational overhead improves every year thanks to new algorithms.

Homomorphic Encryption schemes are classified by the types and number of operations they support.

Partial Homomorphic Encryption

Supports only one operation (e.g., addition in Paillier, multiplication in RSA).

(I will build a toy Paillier example below, whose code is short and intuitive)
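
Since the post promises a toy Paillier example, here is a minimal sketch of one (tiny, insecure parameters chosen only so the additive homomorphism is easy to see; this is my own illustration, not the article's code):

```python
# Toy Paillier cryptosystem: additively homomorphic, with deliberately tiny,
# insecure parameters. For illustration only.
import math
import random

def L(u: int, n: int) -> int:
    return (u - 1) // n

def keygen(p: int = 61, q: int = 53):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                                  # standard simple choice of generator
    mu = pow(L(pow(g, lam, n * n), n), -1, n)  # modular inverse
    return (n, g), (lam, mu)

def encrypt(pub, m: int) -> int:
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c: int) -> int:
    n, _ = pub
    lam, mu = priv
    return (L(pow(c, lam, n * n), n) * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
# Multiplying ciphertexts adds the underlying plaintexts (mod n):
assert decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)) == 17 + 25
```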

Somewhat Homomorphic Encryption

Supports both addition and multiplication, but the number of multiplications allowed is limited.

(I explained in the previous section how noise growth limits the number of multiplications possible)

Fully Homomorphic Encryption

Supports unlimited additions and multiplications; it is Turing-complete. Noise is managed by periodically reducing it via bootstrapping.

On Doing Hard Things

In team sports, I was invariably the last one picked – probably only because “not picking” wasn’t an option.

My team captain encouraged me in the way good leaders do: not by pretending I was better than I was, but by acting like improvement was inevitable if I kept showing up. That assumption helped.

Kayaking taught me to be okay with repeatedly looking dumb in public.

Weekly, Issue 16: Andrej Karpathy Teaches AI

AGI is a feeling. Like love. Stop trying to define it.

@In the Flux

Did you notice: when technology goes wrong, people always fall back to agriculture. … And don't forget, an alternative is to fall back to fascism.

Love, Death, Fatalism

The expression of love in Arrival is profoundly non-rational: it needs no explanation and no utilitarian weighing of costs and benefits. The protagonist certainly had a choice; she could have chosen not to fall in love with the physicist, marry him, and give birth to their daughter, yet she walked that path without hesitation. And people standing at different vantage points will reach different verdicts about "right and wrong."

This is the point of what the German philosopher Heidegger called "being-toward-death": all existence exists in the face of death. A person's lifespan, relationships between people, money that will one day run out, desire that has a limit before it fades, societies that will fall, systems that will collapse, nature that will mutate. If every ending is already fixed, what exactly are people still pursuing?

And this is the final answer to the ultimate question of "being-toward-death": the only thing that does not perish, that can go on existing after death, is love.

A Detailed Analysis of the GENIUS Stablecoin Act

Once an economy has been swept up in a wave of hyper-dollarization, its sovereign state loses its two most essential powers: the power to cover fiscal deficits by printing money (seigniorage), and the power to steer the economy through interest rates and the money supply (monetary-policy independence).

The consequences are catastrophic.

By contrast, the stablecoins backed by America's "GENIUS Act" are built on permissionless public blockchains such as Ethereum and Solana. This means that anyone, anywhere, can innovate on these networks, building new financial applications (DeFi), creating new markets, and trading, without approval from any centralized institution. The US government's role is not to operate the network but to serve as the "credit guarantor" of the network's most central asset: the dollar.

This is an extremely shrewd asymmetric strategy. The United States is exploiting its rival's weakest link, the fear of losing control, to build its own moat. It draws the world's innovators, developers, and ordinary users seeking financial freedom into an open, dollar-centered ecosystem. China is being invited to play a game it is structurally incapable of winning: how can a state-controlled intranet compete with a vibrant financial internet that is open to the whole world?

The "GENIUS Act" is far more than a simple piece of domestic legislation. It is a core strategic move on America's 21st-century geopolitical chessboard. With a small force applied at the right point, it uses the ideas of "decentralization" and "openness" to entrench America's most central power: dollar hegemony. It is not running a symmetric arms race with China; by reshaping the terrain of the financial battlefield, it moves the competition into a new dimension where the United States holds an absolute advantage, a dimensionality-reducing strike against its rival's financial system.

Once people use it to hold and transfer value, a grander and more profound revolution follows. At the heart of that revolution is turning every valuable asset (stocks, bonds, real estate, art) into digital tokens that can flow freely on a global public ledger.

This process, putting "real-world assets" on-chain (RWA), will fundamentally sever the link between assets and any particular national jurisdiction, "denationalizing" assets and ultimately upending the bank-centered traditional financial system.

The core function of stablecoins is to serve as the gateway between the world of traditional fiat money and the world of crypto assets. They are the "on-ramp" into crypto, the bridge between the two worlds.

This is exactly how the "denationalization of money" drives the "denationalization of finance" and ultimately brings about the "denationalization of capital."

And once capital can be denationalized, capitalists will naturally be denationalized as well.

Since the Peace of Westphalia in 1648, the basic unit of world politics has been the sovereign state. The system's core principles include: the state holds supreme sovereignty within its territory, states are equal in sovereignty, and states do not interfere in one another's internal affairs. Its cornerstone is the state's absolute control over the people and property within its territory.

The rise of the sovereign individual is eroding that cornerstone at its root. When the most creative and productive individuals conduct their economic activity and accumulate their wealth "extraterritorially," in cyberspace, territorial borders lose their meaning. States find they cannot effectively tax these globally mobile elites, and their fiscal base is inevitably weakened. To stop wealth from leaving, desperate governments may resort to more radical and authoritarian measures, such as the "hostage-taking" style of taxation the book predicts, or sabotage of the technologies that enable individual autonomy. But that only accelerates the exodus of the elite, producing a vicious cycle. In the end, the nation-state may wither into an empty shell whose function is limited to providing welfare and security for the less mobile people shut out of the global digital economy: a "nanny state" for the poor. And such a state, clearly, has nothing to do with wealth creation.

You're Not Vibe Coding, You're Producing a Mountain of Shit Code at 10x Speed

Yes, the title says it plainly: I was basically producing a mountain of shit code at ten times the speed. This was meant to be a write-up of my experience, but I think the ugly truth should come first, and it is also my biggest takeaway from vibe coding.

As I've said before: AI sets the floor, the human sets the ceiling. I think vibe coding falls into a few situations. First, you know the feature you're building inside and out, and you're just letting the AI do the typing. Second, you only half know it, and writing it yourself would be a stumbling effort. Third, which was my situation this time, you know essentially nothing about what you're building. In the first case you and the AI will probably work together seamlessly; in the latter two, your experience will likely be rough. Everything below is based on the third case.

Once you start vibe coding, you can only vibe all the way to the end.

When you don't understand the domain, the moment the AI writes a big chunk of code it quickly exceeds your own context. Yes, the AI only has 200K of context, but that is already far more than most people can hold in their heads (I'd put an ordinary programmer's working context at around 500 lines of code). At that point your code quality collapses fast: mixing AI models, and mixing outputs from multiple generations, leaves the codebase with several different styles; not knowing the best practices leads to all sorts of possibly clumsy implementations. Soon you find yourself in a loop: paste the error log, a screenshot, or a description of the bug; let the AI edit; accept the changes without reading them; run. You have completely lost control of the code.

After ten hours of vibe coding, my codebase held thousands of lines I could neither read nor maintain, plus a product that "seems to work." Honestly, if I had written this project myself, reading the docs as I went, it would have taken at least a week. But in engineering terms that means little: it is just producing a mountain of shit code at ten times the speed.

Shock Equals Truth

In "How to Tell a Flatterer from a Friend," Plutarch writes that flatterers have no character of their own and only imitate; their words and deeds often diverge and their statements contradict one another, so one way to spot a flatterer is to look for the inconsistencies between what they say and what they do. I think people who treat "shock as truth" are like flatterers: they have no thoughts of their own and love to imitate other people's wording, especially phrases that sound shocking to everyone; and because they are always imitating, they inevitably give themselves away through their own contradictions.

Some thoughts on normality

I think about that interview a lot because there’s always been this weird curiosity around the Olsen twins, especially when they grew up and became reclusive adults. People are always trying to psychoanalyze them, judge their childhoods, and make weird insinuations on “what happened” to them. And honestly, I think what happened is nothing. They’re just too normal.

But honestly, I’m sometimes more impressed by normality, especially successful people who’ve touched the sun and still choose normality in the end. I think back to all the famous people who leave fame and how cruel we are to them, how we cackle cynically about their “fall.” How dare they tumble back down to normality with the rest of us!

But not everyone has the kind of personality one needs to sustain being abnormal. Not everyone wants that power or level of scrutiny, or can even handle it. These people, who might be a bit more down-to-earth and less outgoing, are allowed to exist without the stench of judgement.

The Economic Factors Behind the Conservative Turn in Young Chinese Attitudes

As is well known, in the 1990s a hypothesis was popular among Western politicians and social scientists: if you supported China's reform and opening-up and backed Chinese economic growth through expanding economic ties, then as the middle class grew it would demand greater civil rights, civil society would rise, and China would move toward liberal democracy. This hypothesis is also known as Lipset-style modernization theory.

The post-2000 generation is, on the whole, more conservative in outlook than the post-1990 generation: less willing to live outside their home province, more inclined to accept blind dates arranged by their parents, and more inclined to seek a job inside the state system. This is the "three great mountains" problem I have already discussed.

Although a more open outlook does not necessarily mean wanting greater civil rights (indeed, most of the time it does not), a more conservative outlook certainly means indifference to civil rights.

Half of 2025 Is Gone: 9 Aha Moments AI Has Given Me

They firmly refuse to do any commercialization, because they believe DeepSeek's sole mission is to pursue greater model intelligence, and even the slightest commercial move would distract from and dilute their time and energy.

Manus reminded him of the golden age of old Silicon Valley: no secrets, just sheer execution speed.

Starting from bottom-up user needs and solving problems is a very different perspective from top-down analysis of "moats."

Have you heard the term "product locusts"? When a new product launches, the first people to pour in and try it are usually industry insiders: product managers, designers, engineers, founders, investors. They are jokingly called "product locusts" because they consume the product's resources without being its target users.

I think that in this era, being a "product locust" and getting hands-on with every new product first is enormously valuable.

稻草人周刊 Vol. 39: How Growing Up in a Small County Town Shaped My Thinking

In a small county town, calling someone "uncultured," "clueless," or "never seen the world" causes no offense, because people there don't care. When someone from a small county town brings that assumption into dealings with city people, their attitude gets read as "arrogance." Presuming malice and arrogance behind what others say is itself a kind of cognitive inertia: the inability to imagine people whose view of the world differs enormously from one's own, and the inability to believe that something that feels offensive might not be meant maliciously.

A World First: 36 AITO, Li Auto, Xiaomi, and Tesla Cars Gamble with Their Lives in Highway Assisted-Driving Crash Tests. Would You Trust Your Life to the Car?

Criticizing Always Looks Clever, Just as Creating Always Looks Clumsy | After the Xiaohongshu Indie Developer Contest, I Have a Lot I Need to Say

The kind of person I am most "afraid" to interview is the "independent developer."

Doing what you love while earning a decent living, with no team to manage, no commute, no office politics to care about, and no boss's moods to mind.

On the Tradeoffs of SSMs and Transformers

Going back to the [brain analogy], one question that intrigues me is whether compression is actually fundamental to intelligence. Is it possible that forcing information into a smaller state forces a model to learn more useful patterns and abstractions? While compressed states are often viewed as a drawback in the literature, I think it might be because it’s very easy to measure these particular weaknesses but very hard to measure more subtle qualitative effects.

The Strength

SSMs are the natural stateful model with efficient, interactive, online processing.

The Weakness

SSMs lack fine-grained recall and retrieval abilities.

The Strength

Transformers have perfect recall and fine-grained manipulation of individual tokens in their context.

The Weakness

Transformers are beholden to the tokens they are given.
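
A toy sketch of the tradeoff (simplified far past any real SSM or Transformer, purely to contrast the two memory models; everything here is my own illustration): the recurrent model folds the whole history into a fixed-size state, while the attention model keeps every token around and can recall any one of them exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
tokens = rng.normal(size=(100, d))                       # 100 token embeddings
tokens /= np.linalg.norm(tokens, axis=1, keepdims=True)  # unit-normalize

# SSM-flavored memory: a constant-size state updated step by step. Cheap and
# streaming-friendly, but old tokens survive only in compressed, blended form.
decay, gain = 0.95, 0.05
state = np.zeros(d)
for x in tokens:
    state = decay * state + gain * x

# Transformer-flavored memory: the entire context is kept, so a query can
# retrieve one specific earlier token (fine-grained recall).
query = tokens[42]                 # "which earlier token looks most like #42?"
scores = tokens @ query
print("attention recalls index:", int(np.argmax(scores)))  # 42, the exact token
print("recurrent state shape:  ", state.shape)             # (16,) regardless of length
```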

Enough AI copilots! We need AI HUDs

how to make a personal assistant that automates tasks for you and knows your full context. They even had a human “butler” on stage representing an AI agent.

Everyone was super excited about this… except Weiser. He was opposed to the whole idea of agents! He gave this example: how should a computer help you fly a plane and avoid collisions?

The agentic option is a “copilot” — a virtual human who you talk with to get help flying the plane. If you’re about to run into another plane it might yell at you “collision, go right and down!”

Weiser offered a different option: design the cockpit so that the human pilot is naturally aware of their surroundings. In his words: “You’ll no more run into another airplane than you would try to walk through a wall.”

Weiser’s goal was an “invisible computer”—not an assistant that grabs your attention, but a computer that fades into the background and becomes “an extension of [your] body”.

Shaolin Temple, Wahaha, and the Need for a Universal Supervisory System

Only when the "emperor" has enough power to obtain accurate information, and enough political wisdom to make the right decisions, does he hold something like unlimited power.

But by then he has been alienated from a human being into the "will of the state," into an algorithm, a machine, even a mere symbol. The pleasures of the human world have lost all meaning for him.

To supervise any system effectively, whether a state apparatus or a self-organized one, the first task is to build an apparatus more powerful than the existing bureaucracy. You must first overpower it before you can talk about supervising it.

But this tends to create new problems. Throughout history, many supervisory offices eventually turned into administrative ones: regional inspectors (cishi), military governors (jiedushi), governors-general (zongdu), and so on. Not strong enough, and they cannot fulfill their supervisory duty; too strong, and they alienate themselves and usurp administrative power. But that is a problem on another level, so let us set it aside.

The fundamental problem with Shaolin Temple and Wahaha is that there is simply no accompanying supervisory system able to exercise any effective oversight over the bodies that hold administrative power inside them. Never mind supervision: even accurate data about how they actually operate is unavailable to the outside world.

When you truly work for yourself

“When you truly work for yourself, you won’t have hobbies, you won’t have weekends, and you won’t have vacations, but you won’t have work either.”

Naval: This is the paradox of working for yourself, which every entrepreneur or every self-employed person is familiar with, which is that when you start working for yourself, you basically sacrifice this work-life balance thing.

You sacrifice this work-life distinction. There’s no more nine-to-five. There’s no more office. There’s no one who’s telling you what to do. There’s no playbook to follow. At the same time, there’s nothing to turn off. You can’t turn it off. You are the business. You are the product. You are the work. You are the entity, and you care.

a taste of freedom can make you unemployable

Ultra Instinct

An analogous “frequency spectrum” can be defined for intelligent decision making.

Some decision loops happen slowly – what projects should I pursue at work?

Other decisions happen more quickly – what should I have for lunch?

Others more quickly still – oh shit, I need to swerve my car or I am going to get into an accident!

An intelligence with a 100ms reaction time that “lives” in the real world will be a very qualitatively different user experience.

The Future is NOT Self-Hosted

Nobody’s free until everybody’s free.

I Watched a Male Strip Show in Seoul: Pleasing Women Takes More Than Stripping

The contrast is stark: every female worker I have encountered in adult entertainment venues showed formidable emotional-labor skills and a far more delicate grasp of human psychology. First they ask where you're from, then they compliment your looks, your outfit, your name, your perfume, and finally they quietly ask whether they could take you on a date. Without taking off a single piece of clothing, they can leave you thoroughly charmed. In this respect, WILDWILD falls far short.

The WILDWILD show thus reflects the real-world predicament of "female gaze" theory (if such a thing really exists). When performers deliberately cater to an imagined female desire, they expose their ignorance of what women actually want: the over-designed content looks more like a projection of the male perspective onto itself. And the female audience's gaze, in turn, only feeds and reinforces the performers' already inflated self-image.

孟岩

"Once certain people show up in a podcast episode, I add that episode to my listen-later queue, because their name alone is a guarantee more direct than any rating." It is what Boss Zhou calls 人保活: the performer carries the piece.

In film, Nolan, Villeneuve, Pete Docter, Kosinski, Hans Zimmer, Matt Damon, Matthew McConaughey… are all names like that.

What I Believe About Investing

Counting from 2005, I have been learning to invest for 17 years.

I used to think I would come to understand investing and the market better and better.

Quite the opposite: the longer I "survive" in the market, the more I realize that things beyond my imagination will always happen, and that no one knows anything for certain.

Last week the market set new records amid violent swings.

Talking with friends: some had cut their positions sharply in advance, convinced this time is different; some added on schedule, not quite able to make sense of things but holding to their principles; others, following their rules, liquidated most of their positions during Tuesday's session.

Their actions differed, but each was following their own investment philosophy and principles.

I keep thinking that, over a long enough horizon, whether this particular call turns out right or wrong may not matter much. But if it leads you to doubt or abandon your own investment philosophy and convictions, then you are left with nothing.

So I want to take this opportunity to write down what I have learned in the market over the years, the things I came to feel through failure, pain, and confusion, as an interim summary.

1) Stand on the side of time. No leverage, no shorting, and never invest money you may need in the short term; otherwise be prepared to bear the consequences.

2) Market prices contain a great deal of information, and I know only a small part of it. Whenever I start making subjective judgments ("this is a great company," "it has already fallen a lot," "the recent trend looks good"), I should ask myself why I would know more than the market.

3) Allocate across major asset classes and diversify. This covers both the assets themselves (A-shares, US equities, bonds, broad indexes, value, growth…) and their proportions; the allocation itself determines most of the return.

4) Make broad index funds the origin and the core position. Broad indexes (CSI 300, S&P 500, and so on) don't die, and more importantly their selection mechanism transcends the limits of our cognition and emotions and helps us capture excellent companies.

5) Short-horizon timing is useless; long-horizon contrarian investing is feasible. See 2: guessing tomorrow's move is hard, but using crowd sentiment for long-cycle contrarian investing is worth attempting.

6) Build outward from the broad index according to your circle of competence. The market is far more efficient than we imagine. Only when you are sure you truly understand an industry or a company better than the market, or can genuinely find a fund manager capable of that, should you allocate part of your position there.

7) Protect yourself with position caps and a measured pace of adding. See 2: the market knows far more than we do. No matter how well you think you understand an industry or a company, and no matter how far it has fallen, set a cap for each holding (stop adding beyond it) and control the pace at which you average down.

8) A reasonable long-run expected return is 8%–12%. Those who triple their money in a year are as common as carp crossing the river; those who double it in three years are as rare as morning stars. For most people (me included), 8%–12% is already a very good return; whatever exceeds it in the short run owes more to luck and will most likely be given back.

9) Plan your trade, trade your plan. The human brain is far less objective and calm than we imagine, especially at extreme moments. Trade according to your strategy, and iterate and upgrade the strategy based on results, rather than overriding it with subjective impulses.

10) Treat information properly. See 2: most of the time, market prices already reflect all available information. When we see a piece of news, first, we have no idea whether it is true; second, we have no idea how many people learned it earlier and have already traded on it. The information is already in the price.

11) Investing is cashing in on cognition. See 2 and everything above: whenever you make choices beyond the average return of simply holding the index for the long term (timing, stock picking, choosing fund managers), you are trading on your supposed "edge in understanding." Whether you truly understand determines the outcome, and over the long run your returns regress to the level of your cognition. Grow your cognitive account and the investment returns will follow.

12) Your investment philosophy must match your values. The market offers many strategies and philosophies, and many of them can make money in the long run. Choosing one that fits your values determines your relationship with it: whether you can hold it for the long term, and whether you can keep holding it when it is out of step with the market's current style.

13) Investing is meant to make life better, not to make you more anxious. Returns and account balances often obscure our real goals. If investing keeps you anxious all day, the strategy or the position size is wrong.

14) Investment returns ultimately come from the growth of the human economy. Technological progress, institutional innovation, and market exchange have driven the rapid growth of the human economy over the past 200-plus years; that is the fundamental source of wealth. Earning good returns is the natural result of respecting how the world works, doing the right things, and taking part in value creation.

15) Shrinking the "I" is the beginning of investment success. "I" am only a tiny, insignificant variable in the world. When the "I" gets smaller, when we attribute our losses to ourselves rather than to the market, the country, the "big players," war, or anything else, we are probably not far from succeeding at investing.

ZhenFund's Dai Yusen: From "Not Worth Paying For" to "Can't Do Without," AI Is Breaking the Fastest Growth Records in Human History

This is my ideal of the ultimate product: wrap extremely advanced technology behind a super-simple interface, and, like magic, give ordinary people enormously powerful capabilities.

For example, after the Transformer appeared, the question of whether BERT or GPT was the better technical route was not something OpenAI settled by A/B testing; it was chosen by judgment and realized through execution. In fact, before models reached a certain scale, BERT was the better-performing approach. This ability to choose is precisely where AI-native founders have an opening against the big incumbents.

When execution is no longer scarce, I think the crux of the work shifts to:

1) What you set out to do. This is human initiative, what English calls Agency. When we talk with founders, we look closely at whether they are the ones who actually act: clear about what they want to do, finding ways to push it forward, hiring, raising money, building the product, and working through problems to keep moving.

2) What you choose. This is what people call Taste. AI can generate plenty of options, but the final choice still falls to a person. Midjourney gives you four images at a time; vibe coding gives you several implementations. Which do you pick? Maybe one day AI's taste will surpass ours, but for now the decision is still made by humans. Agency and Taste are the key dividing lines between people in the AI era.

Do applications, or "wrappers," have long-term value? That has been the running debate for the past few years. Some believe that as models grow stronger they will swallow the value of the application layer. I believe exactly the opposite: the stronger the model, the more an application can create incremental value through its proprietary context and environment.