<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Long on Condor Writing</title>
    <link>/tags/long/</link>
    <description>Recent content in Long on Condor Writing</description>
    <generator>Hugo</generator>
    <language>en</language>
    <lastBuildDate>Tue, 07 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="/tags/long/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>LLM Development at Enterprise Scale</title>
      <link>/posts/llmdev2026/</link>
      <pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate>
      <guid>/posts/llmdev2026/</guid>
      <description>&lt;div class=&#34;ox-hugo-toc toc&#34;&gt;&#xA;&lt;div class=&#34;heading&#34;&gt;Table of Contents&lt;/div&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#introduction&#34;&gt;Introduction&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#microservices&#34;&gt;Microservices&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#resources-and-constraints&#34;&gt;Resources and Constraints&lt;/a&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#latency-hierarchy&#34;&gt;Latency Hierarchy&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#device&#34;&gt;Device&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#network&#34;&gt;Network&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#data-and-storage&#34;&gt;Data &amp;amp; Storage&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#testing&#34;&gt;Testing&lt;/a&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#testing-layers&#34;&gt;Testing Layers&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#iteration-speed-is-development-output&#34;&gt;Iteration Speed is Development Output&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#effective-docker-design&#34;&gt;Effective Docker Design&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#involving-llms&#34;&gt;Involving LLMs&lt;/a&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#working-with-models-via-ooda-loops&#34;&gt;Working with Models via OODA Loops&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#model-selection&#34;&gt;Model Selection&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#claude&#34;&gt;Claude&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a 
href=&#34;/posts/llmdev2026/#gpt&#34;&gt;GPT&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#open-models&#34;&gt;Open Models&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#model-collaboration&#34;&gt;Model Collaboration&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#effective-collaboration-tools&#34;&gt;Effective Collaboration Tools&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#the-human-in-the-machine&#34;&gt;The Human in the Machine&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/llmdev2026/#wrap-up&#34;&gt;Wrap Up&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/div&gt;&#xA;&lt;!--endtoc--&gt;&#xA;&lt;p&gt;This is a talk I gave at a company on-site, turned into an article.  Expect it to be lengthy and, after the introduction, less suitable for a non-technical audience.  The article is intended for new developer onboarding, covering a range of infrastructure considerations and serving as a guide to working with LLMs for enterprise development.&lt;/p&gt;</description>
    </item>
    <item>
      <title>GTC 2026</title>
      <link>/posts/gtc2026/</link>
      <pubDate>Thu, 19 Mar 2026 00:00:00 +0000</pubDate>
      <guid>/posts/gtc2026/</guid>
      <description>&lt;div class=&#34;ox-hugo-toc toc&#34;&gt;&#xA;&lt;div class=&#34;heading&#34;&gt;Table of Contents&lt;/div&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#gtc-highlights&#34;&gt;GTC Highlights&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#talks&#34;&gt;Talks&lt;/a&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#scaling-out-and-across-networking-innovations-for-giga-scale-ai-systems-s81561&#34;&gt;Scaling Out and Across: Networking Innovations for Giga-Scale AI Systems [S81561]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#build-a-high-performance-research-cluster-s81731&#34;&gt;Build a High-Performance Research Cluster [S81731]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#inside-nvidia-dgx-ai-factory-accelerating-networking-for-ai-across-cloud-core-and-edge-s81856&#34;&gt;Inside NVIDIA DGX AI Factory: Accelerating Networking for AI Across Cloud, Core, and Edge [S81856]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#achieve-truly-serverless-gpus-with-libfuse-criu-and-cuda-checkpoint-s81424&#34;&gt;Achieve Truly Serverless GPUs With libfuse, CRIU, and CUDA-Checkpoint [S81424]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#accelerate-cloud-platforms-for-the-next-era-of-ai-s81788&#34;&gt;Accelerate Cloud Platforms for the Next Era of AI [S81788]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#how-we-scaled-kimi-k2-dot-5-s81695&#34;&gt;How We Scaled Kimi K2.5 [S81695]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#vllm-in-2026-architectural-challenges-and-performance-optimizations-s82059&#34;&gt;vLLM in 2026: Architectural Challenges and Performance Optimizations [S82059]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#general-networking&#34;&gt;General Networking&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a 
href=&#34;/posts/gtc2026/#building-measuring-and-using-ai-scientists-s81694&#34;&gt;Building, Measuring, and Using AI Scientists [S81694]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#an-ai-driven-autonomous-lab-of-the-future-for-chemistry-and-materials-science-s81790&#34;&gt;An AI-Driven Autonomous Lab of the Future for Chemistry and Materials Science [S81790]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#the-state-of-open-source-ai-s81791&#34;&gt;The State of Open Source AI [S81791]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#a-new-paradigm-verifiable-ai-s81489&#34;&gt;A New Paradigm: Verifiable AI [S81489]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#optimize-kv-caches-for-llm-inference-dynamo-kvbm-flexkv-lmcache-s82033&#34;&gt;Optimize KV Caches for LLM Inference: Dynamo KVBM, FlexKV, LMCache [S82033]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#mlops-202-from-models-to-production-ai-systems-at-scale-s81662&#34;&gt;MLOps 202: From Models to Production AI Systems at Scale [S81662]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#science-and-engineering-with-ai-physics-and-kit-cae-s81781&#34;&gt;Science and Engineering With AI Physics and Kit-CAE [S81781]&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;/posts/gtc2026/#conclusion&#34;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/div&gt;&#xA;&lt;!--endtoc--&gt;&#xA;&lt;h2 id=&#34;gtc-highlights&#34;&gt;GTC Highlights&lt;/h2&gt;&#xA;&lt;p&gt;GTC 2026 is in the books.  I did my best to make the rounds.  Below is a lengthy list of talks and conversations I had on a wide range of relevant subject material.  I figured I&amp;rsquo;d give a quick summary for those less interested in the details.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
