<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[randallb.com]]></title><description><![CDATA[I’m most focused on three things: AI (post-GPT machine learning), communication (talking with individual people, with computers, and with large groups of people at scale), and mental health.]]></description><link>https://randallb.com</link><image><url>https://substackcdn.com/image/fetch/$s_!D66C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a5b38c9-4c4f-49a2-8d27-633ee518af2c_512x512.png</url><title>randallb.com</title><link>https://randallb.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 06 Apr 2026 20:40:46 GMT</lastBuildDate><atom:link href="https://randallb.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Randall Bennett]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[randallb@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[randallb@substack.com]]></itunes:email><itunes:name><![CDATA[Randall Bennett]]></itunes:name></itunes:owner><itunes:author><![CDATA[Randall Bennett]]></itunes:author><googleplay:owner><![CDATA[randallb@substack.com]]></googleplay:owner><googleplay:email><![CDATA[randallb@substack.com]]></googleplay:email><googleplay:author><![CDATA[Randall Bennett]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The 5 levels of agentic engineering: From vibecoding to production-maxxing]]></title><description><![CDATA[Vibecoding is great, but it's best applied as a communication medium... not building production apps. 
There's a good path to production that doesn't mean switching to traditional software engineering.]]></description><link>https://randallb.com/p/the-5-levels-of-agentic-engineering</link><guid isPermaLink="false">https://randallb.com/p/the-5-levels-of-agentic-engineering</guid><dc:creator><![CDATA[Randall Bennett]]></dc:creator><pubDate>Thu, 26 Feb 2026 18:04:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!D66C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a5b38c9-4c4f-49a2-8d27-633ee518af2c_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s ok to vibecode; it&#8217;s not ok to ship slop to users. I have a mental model I&#8217;m working on to try to balance moving quickly and not breaking things. (Building less, shipping more.)</p><h2>Internal only</h2><p><strong>Goal</strong>: Figure out if you should build anything.</p><p><strong>When to use</strong>: You are the only user and are trying to communicate ideas rather than ship usable software.</p><p><strong>Models to use</strong>: Whatever is fast and good enough (in practice, I find this to be gpt-5.3-codex at medium reasoning effort.)</p><p><strong>What you&#8217;re allowed to ship</strong>: Literally anything. Terrible is fine. <a href="https://blog.codinghorror.com/worse-is-better/">Worse is better</a>.</p><p><strong>Attention to agent effort</strong>: Virtually none. Let it run as long as it wants, ship terrible stuff, expect to throw it away.</p><h2>Alpha</h2><p><strong>Goal</strong>: Figure out if you&#8217;re building something anyone wants.</p><p><strong>When to use</strong>: When you have &lt; 10 users, and you know most of them directly through 1 degree of separation. 
You can talk to all of them, and you kind of expect them to churn, because they&#8217;re being nice to you more than being real users.</p><p><strong>Models to use</strong>: Basically the same: fast and good enough.</p><p><strong>What you&#8217;re allowed to ship</strong>: Things that don&#8217;t have serious security bugs or unusable performance characteristics.</p><p><strong>Attention to agent effort</strong>: Slightly more. Don&#8217;t let it do anything absolutely terrible, but in practice most modern agents are good enough to not make the sloppiest mistakes.</p><h2>Private Beta</h2><p><strong>Goal</strong>: Figure out if you&#8217;re building something anyone wants enough to use frequently.</p><p><strong>When to use</strong>: When you have ~10 users but none of them are within 1 degree of separation. More importantly: Some of them haven&#8217;t churned and are actually getting a <a href="https://news.ycombinator.com/item?id=542768">quantum of utility</a>.</p><p><strong>Models to use</strong>: Start thinking about something that&#8217;s better at reasoning, and slower.</p><p><strong>What you&#8217;re allowed to ship</strong>: Roughly the same as Alpha, but it should actually be useful for someone. 
You should still be embarrassed by how bad it is.</p><p><strong>Attention to agent effort</strong>: I recommend having the agent perform an <a href="https://randallb.com/i/187874597/improve-through-structured-feedback">after-action report</a> style summary where it carefully explains all of the changes it made (in a text file); you should then be able to ask your agent questions to ensure you&#8217;re on the same page.</p><h2>Public Beta</h2><p><strong>Goal</strong>: Figure out if you&#8217;re building something people want to use frequently.</p><p><strong>When to use</strong>: When you have enough users that you don&#8217;t know all of them / can&#8217;t talk to them individually. (<a href="https://en.wikipedia.org/wiki/Dunbar%27s_number">Dunbar&#8217;s number</a> is about 150 and is probably a decent guide for consumer products. For B2B, it&#8217;s some meaningful amount in your target market.)</p><p><strong>Models to use</strong>: Slower and more thoughtful for anything that touches all of your users.</p><p><strong>What you&#8217;re allowed to ship</strong>: Something that mostly works, but has a few rough edges. 
Enough people should be using the product that every minute of effort you spend results in at least 10x saved effort for your users.</p><p><strong>Attention to agent effort</strong>: More thorough code review&#8230; not necessarily by a human, but there should be some process for maintaining code standards beyond YOLO. (Linters, type checkers, actual tests, playwright tests, etc.)</p><h2>Production</h2><p><strong>Goal</strong>: Make something people want.</p><p><strong>When to use</strong>: You have something good enough that it spreads naturally by word of mouth. </p><p><strong>Models to use</strong>: Ones that are consistent and never break. In practice, that means thoroughly vetted and able to be trusted.</p><p><strong>What you&#8217;re allowed to ship</strong>: Something that works in a way that you anticipate will be quality. All of your users should use this part of your software, and every minute you spend should result in 100x saved effort for your users.</p><p><strong>Attention to agent effort</strong>: Systematic and process driven. You should have an audit trail that proves your software does what you expect, and you shouldn&#8217;t have any surprises.</p><h1>Nobody is shipping production code with agents today.</h1><p>By my definition, I think the best teams <em>might</em> be shipping public beta quality code. I&#8217;m unconvinced that anyone has a robust production level pipeline without thorough human intervention.</p><p>It won&#8217;t be that way for long, but as of today I think it&#8217;s that way.</p>]]></content:encoded></item><item><title><![CDATA[Build less, ship more: The three pillars of product command]]></title><description><![CDATA[Improve your one-shot agentic engineering performance by clearly explaining yourself.]]></description><link>https://randallb.com/p/build-less-ship-more-the-three-pillars</link><guid isPermaLink="false">https://randallb.com/p/build-less-ship-more-the-three-pillars</guid><dc:creator><![CDATA[Randall Bennett]]></dc:creator><pubDate>Fri, 13 Feb 2026 16:56:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!D66C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a5b38c9-4c4f-49a2-8d27-633ee518af2c_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><strong>Product command</strong></em> <em>is a coordination model for centralizing intent (the why, success conditions, and constraints), decentralizing execution (letting coworkers decide the less crucial details), and aligning both through explicit verification steps.</em></p><p><em>This contrasts with a command-and-control coordination model, in which a commander is responsible for decision-making, and subordinates are responsible for execution.</em></p><p><em>The end result is outputs (products) that require less effort to achieve better quality.</em></p><p><em>This post focuses on the three pillars of product command.</em></p><div><hr></div><p>Coding with AI feels like a slot machine. 
Every once in a while, you hit the jackpot, but most of the time, you&#8217;re holding onto hope. Instead of building, you&#8217;re babysitting... watching over the shoulder of an AI bound to make mistakes, and it&#8217;s your job as the developer to make sure it doesn&#8217;t.</p><p>That sounds terrible.</p><p>The good news: If you can just explain your intent, AI can fill in the gaps. Then you&#8217;re creating, designing systems, and building... not babysitting.</p><h2>Centralize intent</h2><p>Your goal is to explain what you want and what you don&#8217;t want, with as few words as possible. Based on mission command, I&#8217;ve adopted these headings:</p><ol><li><p>Purpose</p></li><li><p>End State</p></li><li><p>Constraints</p></li><li><p>Tradeoffs</p></li><li><p>Risk tolerance</p></li><li><p>Escalation conditions</p></li><li><p>Verification Steps</p></li><li><p>Activation / Revalidation</p></li><li><p>Appendix</p></li></ol><p>Here&#8217;s my <a href="https://gist.github.com/randallb/ac0fd027276665c846cf1b13c0218604">copy-pasteable template</a>.</p><p>None of these headings tells the AI what to do implementation-wise; they tell it the strategy you&#8217;re going after. You don&#8217;t have to tell it how to do things; you have to give it the why.</p><p>I work with the LLM to come up with the doc... usually by asking it to gather context on the subject first, then build the doc, and 9/10 of the time it gets the majority of it right.</p><p>I then review and ensure I agree with every single word in the document.</p><h2>Distribute execution</h2><p>Once I&#8217;ve built a good enough INTENT.md, I start a new context window, paste in the link, along with this text:</p><pre><code><code>let's implement this. Use your best judgement. Feel free to use subagents if they make your life simpler. Make sure to run through the verification steps thoroughly. 
I'm not going to be around, so prioritize using your best judgement, making frequent commits but don&#8217;t submit them, keep them local. Try to find any runbooks or policies that apply to your work, and make sure you follow them. You can do this! Good luck! -- make sure to build one or more AARs while you're going (if you do a commit, ideally an AAR should accompany it as part of the commit.)</code></code></pre><p>Usually, the AI takes 5-20 minutes to build something, and then at the end, I ask, &#8220;Let&#8217;s build an AAR.&#8221;</p><h2>Improve through structured feedback</h2><p>An AAR is military shorthand for &#8220;After Action Report.&#8221; It too <a href="https://gist.github.com/randallb/ac0fd027276665c846cf1b13c0218604">has a template</a>:</p><ol><li><p>Context</p></li><li><p>Intent</p></li><li><p>What actually happened (facts only)</p></li><li><p>Delta analysis (why it was different)</p></li><li><p>Initiative Assessment (When the AI made its own decisions)</p></li><li><p>Weaknesses in intent (Parts where the intent wasn&#8217;t clear enough)</p></li><li><p>What we will sustain</p></li><li><p>What we will improve</p></li></ol><p>I run this after each session. 
It captures, from the AI&#8217;s current context, its analysis of the process it just ran.</p><p>Next, I manually review / test the changes. Usually, this means I&#8217;m using the product.</p><p>If the product is 80% what I expected, I will ship. If there are a few changes (minor placement issues in UI, for instance) then I will fix them with the LLM and update the AAR.</p><p>If the product is &lt; 80% what I expected, I will explain what happened, identify true weaknesses in the intent, then trash the current work, go back to the intent, and have it run the same process.</p><p>I&#8217;ve only ever had to run this process 3 times to get what I want; normally I just run it once.</p><p>Try it, it will work, I promise.</p><h2>Policy: Long-term memory</h2><p>One quick side note: I save all the things I want my AI to learn in a folder called &#8220;policy&#8221; next to the INTENT.md. I use the AAR to keep it updated, but like in a real organization, updating policy can have knock-on effects, so I do it sparingly.</p>]]></content:encoded></item><item><title><![CDATA[Build less, ship more: Inference engineering means making just-in-time context generation]]></title><description><![CDATA[Inference time optimization leads to better outcomes than model quality improvements now. Just don't call it prompt engineering...]]></description><link>https://randallb.com/p/build-less-ship-more-inference-engineering</link><guid isPermaLink="false">https://randallb.com/p/build-less-ship-more-inference-engineering</guid><dc:creator><![CDATA[Randall Bennett]]></dc:creator><pubDate>Thu, 05 Feb 2026 18:05:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!D66C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a5b38c9-4c4f-49a2-8d27-633ee518af2c_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This is the first of a series of posts explaining <strong>Product Command</strong>. 
</em></p><p><em><strong>Product command</strong> is a coordination model for centralizing intent (the why, success conditions, and constraints), decentralizing execution (letting coworkers decide the less crucial details), and aligning both through explicit verification steps.</em></p><p><em>This contrasts with a command-and-control coordination model, in which a commander makes decisions, and subordinates execute.</em></p><p><em>The result is outputs (products) that require less effort to achieve better quality.</em></p><p><em>This post focuses on building the right environment for AI to find its own context.</em></p><div><hr></div><h2>LLMs are a communication problem, not a computer science problem.</h2><p>I haven&#8217;t written any code in 6 months. In the last month, I&#8217;ve been able to let Codex run unattended for 20-60 minutes and get a feature right the first time.</p><p>The key? Quality communication. I&#8217;m a startup founder and software engineer, but I started my career as a journalist. That means I actually studied <a href="https://www.weber.edu/communication/">Communication in college</a>, not computer science.</p><p>CS and Comm might not be as divergent as you think. Weirdly, these two fields have a shared heritage and patron saint: <a href="https://en.wikipedia.org/wiki/Claude_Shannon">Claude Shannon</a>. </p><p>Shannon laid out a scientific definition of communication in <a href="https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf">A Mathematical Theory of Communication</a>. CS focuses on the math; communication studies focuses on <a href="https://www.communicationtheory.org/shannon-and-weaver-model-of-communication/">the mechanisms</a>.</p><p>Computers are obviously a Computer Science problem. From 1950 to 2015, communication mechanisms were literal. Feedback, signal-to-noise, transmission mechanism, etc., all literally meant wires transmitting electrons. 
Machine learning isn&#8217;t about understanding words; it&#8217;s about <a href="https://en.wikipedia.org/wiki/Bayesian_linear_regression">Bayesian regression</a>. That means math, not communication.</p><p>LLMs are different. Training time is still all math and builds the model's instincts. But the intelligence part of artificial intelligence happens at inference time, not training time.</p><p>And inference time is communicated in words, not numbers.</p><h2>Inference engineering</h2><p>I&#8217;d like to introduce a new term: <strong>Inference engineer</strong>. An inference engineer is someone who manages both sides of Claude Shannon&#8217;s model... both the math and the mechanics. That means, yes, writing great prompts, but more importantly, it means clear communication between the person using the AI and the AI implementing it.</p><p>A few years back, I left Facebook to build a startup with my best friends. <a href="https://boltfoundry.com">Our startup</a> helped take long videos and turn them into short ones using LLMs. 
</p><p>As we started to scale, the quality of our AI implementation didn&#8217;t. Our company&#8217;s biggest hurdle ended up being humans reviewing the quality of clips, not developing our product or marketing. </p><p>If AI is supposed to save time, it has to actually do that. And if it doesn&#8217;t, humans-in-the-loop behind the scenes have to cover for it. Our LLM wasn&#8217;t good enough, and so our business couldn&#8217;t scale without people. That made our team ask more fundamental questions. </p><p>How can we actually know something is reliable? How do we communicate our preferences to the AI? How can we communicate our users&#8217; preferences to the AI?</p><p><a href="https://www.youtube.com/watch?v=8rABwKRsec4">Sean Grove is one of the smartest people in this area</a>, and he turned me on to <a href="https://model-spec.openai.com/2025-12-18.html">Model Spec</a>, <a href="https://www.anthropic.com/research/constitutional-ai-harmlessness-from-ai-feedback">Constitutional AI</a>, and other concepts about how to align and verify model behavior.</p><p>He gave me the words to communicate what I was feeling: Communication with models defines success with AI. Specs are permanent; code is an artifact of a spec.</p><p>Programming will go away; specs will not.</p><h2>Learning -&gt; building</h2><p>We took this idea and tried to figure out a practical application. For the last year, <a href="https://boltfoundry.com">our team</a> has been working as consultants of sorts... circling ideas like <a href="https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)">fine-tuning</a>, <a href="https://github.com/openai/evals">evals</a>, <a href="https://mastra.ai/">execution frameworks</a>, and <a href="https://contexteng.ai/p/context-engineering-101-the-hourglass">communication theories</a> to fill a growing confidence gap between model performance and consistent outputs. 
</p><p>We&#8217;ve worked with some great teams, and we were always able to close their trust gaps in surprising ways. (I&#8217;ll be posting about those ways! Like and subscribe!)</p><p>Finally, we&#8217;ve figured out how to explain it. We call it <strong>product command</strong>. </p><p><strong>Product command is a coordination methodology</strong> designed to create an environment for an agent <strong>to discover the right information at precisely the right moment</strong>. </p><p>That means agents, rather than humans, tell agents precisely what to do. Humans instead describe their intent and leave the AI to build its own context windows sufficiently to do a task.</p><p>People (or other parent agents) are responsible for specifying the goal of the next interaction, and the agent is responsible for execution. People shouldn&#8217;t be digging in to see which tool calls were made; they should be specifying an intent and then ensuring that intent was met.</p><p>The most basic implementation is creating artifacts that contain precisely the right amount of information to execute a plan, letting agents execute that plan mostly unattended, and then learning from the execution to improve the loop. </p><p>We&#8217;ll have more tactics and info as we go. In the next post, I&#8217;ll explain why this works and why human institutions don&#8217;t fundamentally get more effective as they scale, but AI can. </p><div><hr></div><p><em>Next post: The Three Pillars of product command. I&#8217;ll give practical definitions and specs for centralizing intent, distributing execution, and standardizing verification and learning.</em></p>]]></content:encoded></item><item><title><![CDATA[Written by me.]]></title><description><![CDATA[Anything on this site will have been typed by my fingers. AI will not generate it.]]></description><link>https://randallb.com/p/written-by-me</link><guid isPermaLink="false">https://randallb.com/p/written-by-me</guid><dc:creator><![CDATA[Randall Bennett]]></dc:creator><pubDate>Sat, 31 Jan 2026 00:09:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!D66C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a5b38c9-4c4f-49a2-8d27-633ee518af2c_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Writing is thinking, my words are my thoughts.</h2><p>I&#8217;m pretty forward-looking, and a lot of my friends have asked why I&#8217;m ok with AI writing my code but not writing my words.</p><p>To me, modern generative AI represents the p50 of human experience. 
In a lot of places, I have higher than p50 human experience, and anywhere I have less than p50, I&#8217;m hoping to expand and grow using my own skull.</p><p>Anything on this site, or anything on my social media, with my byline, will be written by me. No exceptions.</p><p>Anything that doesn&#8217;t have my byline but is on something I work with (i.e. <a href="http://boltfoundry.com">boltfoundry.com</a> etc.), you can probably safely assume AI wrote it.</p><h3>Why is this your first post? </h3><p>I want to share my journey from eager 15-year-old would-be journalist at dreamcast.net, to minor tech influencer, to startup founder, big tech employee, and now an Inference Engineer.</p><p>This is my story, and I think it&#8217;s important for you, dear reader, to know that I&#8217;m planning to spend at least 10x more time writing a post than it takes you to read it.</p><p>You are important to me, and even though I don&#8217;t yet know your name, I&#8217;m writing for you.</p><h3>I care.</h3><p> I&#8217;m building this for you because I think I&#8217;ve had a broad range of experience, from a traumatic upbringing to professional success&#8230; from the depths of depression and suicidal behavior to the joys of parenthood, self-care and emotional management.</p><p>I want to share this so you can hopefully speedrun your own trauma, and if you can learn something from my life experiences and perspectives, I want them to be as lossless as possible.</p><p>I don&#8217;t want to diffuse my experience through the p50 of humanity; I want to present the p100 of Randall Bennett&#8217;s life.</p><h3>I&#8217;m mostly going to talk about AI, communication, and mental health. 
</h3><p>For me, the three areas of my life I&#8217;m most focused on are AI (post-GPT machine learning), communication (talking with individual people, with computers, and with large groups of people at scale), and mental health (learning from traumatic experiences that no one chooses, challenging genetic traits, and less-than-ideal neurochemistry).</p><h3>I am not just those things; I just know how to write about them.</h3><p>I care about a lot of other topics. Feel free to ask! (If you&#8217;re a paid subscriber, you can access my chats.) The main things you can ask about: What it&#8217;s like to be a parent of 3 autistic boys, and 4 kids in general&#8230; moving to a place I&#8217;d never been before&#8230; why having blue hair was fun and why I don&#8217;t have it now (I might write about this though at some point)&#8230; how religion is basically the anchor of my entire existence (please ask! I don&#8217;t preach, it&#8217;s broadly applicable).</p><p>I&#8217;m not going to write about those topics generally just because I don&#8217;t really know how to approach them on a generalized basis yet. They&#8217;re things that are kind of unique to me, and require a more custom chat where I understand a person&#8217;s needs to tailor the message.</p><p>The religion stuff is especially important to me, but I don&#8217;t want people to think I&#8217;m prescribing it as something that is universally important to believe the way I do, or that I should convince you of its truth or falseness. 
Or that I&#8217;m somehow a good illustration of my religious habits / heritage.</p><p>I just want to focus on things that I&#8217;m able to explain in a freer manner, that I can generally apply in a way that doesn&#8217;t cost me a lot of mental energy to say.</p><h3>Thanks for coming, tell your friends, like and subscribe!</h3><p>I&#8217;m building this for me as much as you, but I&#8217;m really hoping that I&#8217;m actually a decent communicator and my perspective will resonate with someone. If that&#8217;s you, then welcome! Hopefully you know others like you who I can help.</p><p>If that&#8217;s not you, then thanks for reading! You should still share it with someone else anyway. </p><div><hr></div><p>A final note: I will use generative AI to generate ideas, and possibly as an editor, but I&#8217;m literally not planning to use it to insert text on this site. Anything you read on randallb.com will have come from my fingers, videos, etc.</p>]]></content:encoded></item><item><title><![CDATA[High growth startups are hard, here's some help for founders and employees.]]></title><description><![CDATA[Welcome to randallb.com by me, Randall Bennett.]]></description><link>https://randallb.com/p/coming-soon</link><guid isPermaLink="false">https://randallb.com/p/coming-soon</guid><dc:creator><![CDATA[Randall Bennett]]></dc:creator><pubDate>Fri, 08 Nov 2019 11:46:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!D66C!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a5b38c9-4c4f-49a2-8d27-633ee518af2c_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to randallb.com by me, Randall Bennett: founder of @vidpresso (YC W14, acq&#39;d by @Facebook), making video editable after it&#39;s created, more like HTML. 
Allegedly a nice guy.</p><p>Sign up now so you don&#8217;t miss the first issue.</p><p>In the meantime, <a href="https://randallb.com/p/coming-soon?utm_source=substack&utm_medium=email&utm_content=share&action=share">tell your friends</a>!</p>]]></content:encoded></item></channel></rss>