<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>github &#8211; About Things | A Hans Scharler Blog</title>
	<atom:link href="https://nothans.com/tag/github/feed" rel="self" type="application/rss+xml" />
	<link>https://nothans.com</link>
	<description>Life, Comedy, Games, Tech, Marketing, and Community</description>
	<lastBuildDate>Tue, 14 Apr 2026 22:35:17 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://i0.wp.com/nothans.com/wp-content/uploads/2023/02/cropped-settings.png?fit=32%2C32&#038;ssl=1</url>
	<title>github &#8211; About Things | A Hans Scharler Blog</title>
	<link>https://nothans.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">114568856</site>	<item>
		<title>The Next GitHub Won&#8217;t Be GitHub</title>
		<link>https://nothans.com/the-next-github-wont-be-github</link>
					<comments>https://nothans.com/the-next-github-wont-be-github#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Tue, 14 Apr 2026 22:35:12 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Agentic Web]]></category>
		<category><![CDATA[github]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=5403</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>Scott Chacon cofounded GitHub. He wrote the book on Git. Literally.&nbsp;<em>Pro Git</em>&nbsp;has been the default resource for a decade. If anyone has earned the right to say &#8220;this is fine,&#8221; it&#8217;s him.</p>



<p>He didn&#8217;t say that. He left and started building something else.</p>



<p>GitButler raised $17 million to rethink version control from scratch. When the person who built the cathedral starts drawing blueprints for something new, you should probably look at the blueprints.</p>



<p>In a recent interview with a16z, Chacon laid out the problem in terms that made me stop scrolling. GitHub was designed for humans. Specifically, for humans working at human speed. And that assumption is baked into everything.</p>



<figure class="wp-block-embed aligncenter is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe class="youtube-player" width="750" height="422" src="https://www.youtube.com/embed/vJiCnQeYLho?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox"></iframe>
</div></figure>


<h2 class="wp-block-heading" id="the-pull-request-was-built-for-people">The Pull Request Was Built for People</h2>


<p>Here&#8217;s the deal with pull requests. They assume someone will read them.</p>



<p>You open a PR. A teammate gets a notification. Maybe today, maybe tomorrow. They click through, scan the diff, leave a comment or two, approve it, and you merge. The whole cycle takes hours or days. Sometimes weeks if the reviewer is busy or the PR is big enough to trigger &#8220;I&#8217;ll get to it later&#8221; energy.</p>



<p>This works when your team is five humans shipping a few PRs a day. It even works at scale, if the scale is more humans. GitHub handled that part beautifully. Issues, reviews, discussions, profiles, stars. The social layer that made open source collaboration feel natural.</p>



<p>But pull requests were never designed for a teammate that generates 200 of them before lunch.</p>


<h2 class="wp-block-heading" id="what-happens-at-a-thousand-prs-per-week">What Happens at a Thousand PRs Per Week</h2>


<p>Stripe merged over a thousand agent-generated pull requests in a single week. Let that number sit for a second.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="750" height="750" data-attachment-id="5404" data-permalink="https://nothans.com/the-next-github-wont-be-github/image-104" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image.png?fit=1024%2C1024&amp;ssl=1" data-orig-size="1024,1024" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="What Happens at a Thousand PRs Per Week" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image.png?fit=750%2C750&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image.png?resize=750%2C750&#038;ssl=1" alt="" class="wp-image-5404" style="width:646px;height:auto" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image.png?w=1024&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image.png?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image.png?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image.png?resize=768%2C768&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image.png?resize=530%2C530&amp;ssl=1 530w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image.png?resize=750%2C750&amp;ssl=1 750w, https://i0.wp.com/nothans.com/wp-content/uploads/2026/04/image.png?resize=500%2C500&amp;ssl=1 500w" sizes="(max-width: 750px) 100vw, 750px" /></figure>
</div>


<p>A thousand PRs. In one week. From AI agents.</p>



<p>Now picture yourself as the human on that team. Your GitHub notification count doesn&#8217;t just go up. It becomes meaningless. The PR queue isn&#8217;t a todo list anymore. It&#8217;s a firehose pointed at your inbox.</p>



<p>Code review breaks first. Be honest: when you do code review, do you really read every line? Chacon asked this same question in the interview and the answer is what everyone already knows. Not always. Not even close. At a thousand PRs per week, &#8220;cursory glance&#8221; becomes &#8220;triage by title.&#8221; You&#8217;re not reviewing code. You&#8217;re reviewing your faith in the system that generated it.</p>



<p>Commit history breaks next. Git log becomes a wall of &#8220;fix: update component&#8221; and &#8220;refactor: apply suggestion&#8221; with no narrative thread. The story of how your codebase evolved disappears under a flood of mechanical changes.&nbsp;<code>git blame</code>&nbsp;points to an agent. The context that used to live in commit messages evaporates.</p>



<p>Notifications break last, because they were already broken. But now they&#8217;re broken at scale. The signal-to-noise ratio doesn&#8217;t degrade gracefully. It collapses.</p>



<p>Chacon put it simply in the interview: the whole model assumes human-speed collaboration. When you introduce participants that work at machine speed, the model doesn&#8217;t bend. It shatters.</p>


<h2 class="wp-block-heading" id="the-pull-request-is-the-wrong-unit">The Pull Request Is the Wrong Unit</h2>


<p>Here&#8217;s the question I keep coming back to: what replaces the pull request?</p>



<p>Not &#8220;how do we make pull requests better for agents.&#8221; That&#8217;s the wrong question. That&#8217;s like asking how to make horse-drawn carriages faster when someone just showed you an engine.</p>



<p>The pull request is a unit of collaboration designed around a specific workflow. One person makes changes. Another person reviews those changes. They discuss. They merge. It&#8217;s turn-based. It&#8217;s sequential. It&#8217;s fundamentally a conversation between two humans about a diff.</p>



<p>When the &#8220;person&#8221; making changes is twelve agents working in parallel, and they&#8217;re generating changes faster than any human can read them, the conversation model doesn&#8217;t apply. You don&#8217;t need a better conversation. You need a different unit of work.</p>



<p>Chacon&#8217;s answer at GitButler is interesting. They built what he calls a &#8220;mega-merge&#8221; system where multiple branches coexist in a single working directory. Agents can see what other agents are doing in real time. Conflicts surface before they happen, not after someone tries to merge.</p>



<p>That&#8217;s a version control answer. But the platform question is bigger. What does the server-side look like? What does collaboration look like when most of the participants aren&#8217;t human?</p>


<h2 class="wp-block-heading" id="what-the-next-platform-actually-needs">What the Next Platform Actually Needs</h2>


<p>I don&#8217;t know what the next GitHub looks like. Nobody does. But I can see the shape of the requirements from here.</p>



<p><strong>Real-time conflict detection, not post-merge.</strong>&nbsp;GitHub tells you about conflicts when you try to merge. By then, someone (or some agent) has already done the work. In a world with twelve agents writing code simultaneously, you need to know about conflicts as they form. Not after.</p>
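<p>A toy sketch of what that could look like (the registry and agent names here are hypothetical, not any real platform&#8217;s API): track which agent is touching which file, and flag the overlap the moment a second agent lands on the same file.</p>

```python
# Hypothetical sketch: surface conflicts as they form, not at merge time.
# The in-memory registry and agent names are illustrative, not a real API.
from collections import defaultdict

class EditRegistry:
    """Tracks which agent is editing which file, flagging overlaps live."""

    def __init__(self):
        self._editors = defaultdict(set)  # file path -> set of agent ids

    def record_edit(self, agent: str, path: str) -> set:
        """Register an edit; return any other agents already on that file."""
        conflicts = set(self._editors[path])
        self._editors[path].add(agent)
        return conflicts

registry = EditRegistry()
registry.record_edit("agent-a", "src/auth.py")            # no one else here yet
overlap = registry.record_edit("agent-b", "src/auth.py")  # agent-a already is
print(overlap)  # {'agent-a'}: flag it before either agent opens a merge
```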



<p><strong>Agent provenance.</strong>&nbsp;Which agent wrote this code? What prompt generated it? What was the reasoning chain? Right now, the best you get is a commit message that says &#8220;Generated by Claude&#8221; or &#8220;Co-authored-by: Copilot.&#8221; That&#8217;s like listing &#8220;computer&#8221; as the author. You need the full trail: the intent, the context, the decision points.</p>
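<p>One low-tech way to start is git&#8217;s own trailer convention. A sketch (the <code>Agent</code>, <code>Model</code>, and <code>Prompt-Ref</code> keys are invented here for illustration; only <code>Co-authored-by</code> is an established convention) of stamping a commit message with that trail:</p>

```python
# Hypothetical sketch: encode agent provenance as git commit trailers.
# Agent/Model/Prompt-Ref are invented keys; Co-authored-by is the only
# established convention among them.
def provenance_trailers(agent: str, model: str, prompt_ref: str) -> str:
    """Return a trailer block recording who/what generated a change."""
    trailers = {
        "Agent": agent,
        "Model": model,
        "Prompt-Ref": prompt_ref,  # link to the full prompt and intent
        "Co-authored-by": f"{agent} <agent@example.com>",
    }
    return "\n".join(f"{key}: {value}" for key, value in trailers.items())

message = "refactor: move session auth to JWT\n\n" + provenance_trailers(
    "refactor-bot", "claude-sonnet", "intents/2026-04-14/auth-jwt.md")
print(message)
```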



<p><strong>Review at the intent level, not the diff level.</strong>&nbsp;Humans shouldn&#8217;t be reading thousand-line diffs generated by agents. They should be reviewing the&nbsp;<em>intent</em>: &#8220;I asked the agent to refactor the auth module to use JWT instead of session tokens.&#8221; Did it do that? Did it break anything? Let another agent verify the diff. The human reviews the goal.</p>



<p><strong>Trust scores on commits.</strong>&nbsp;Not every change carries the same risk. A CSS color change and a database migration are not equal. The platform should know this. Flag the high-risk changes for human review. Let the low-risk ones flow through with automated verification.</p>
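<p>As a toy sketch of risk-tiered review (the tiers and path patterns below are invented for illustration; a real platform would learn them, not hardcode them):</p>

```python
# Hypothetical sketch of risk-tiered review: the tiers and path patterns
# are invented for illustration, not any platform's real rules.
RISK_RULES = [
    ("migrations/", "high"),  # schema changes get a human reviewer
    (".sql", "high"),
    (".css", "low"),          # cosmetic changes flow through automatically
    (".md", "low"),
]

def risk_of(changed_paths):
    """Return the highest risk tier among the changed paths."""
    levels = {"low": 0, "medium": 1, "high": 2}
    worst = "low"
    for path in changed_paths:
        tier = "medium"  # unknown file types default to automated verification
        for pattern, rule_tier in RISK_RULES:
            if pattern in path:
                tier = rule_tier
                break
        if levels[tier] > levels[worst]:
            worst = tier
    return worst

print(risk_of(["app/theme.css"]))                               # low
print(risk_of(["app/theme.css", "migrations/0042_users.sql"]))  # high
```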



<p><strong>Parallel visibility.</strong>&nbsp;If three agents are working on the same codebase, each one should know what the others are doing. Not through pull requests after the fact. In real time. This is what GitButler&#8217;s mega-merge is trying to solve at the local level, but it needs to exist at the platform level too.</p>



<p>None of this looks like a pull request queue. It looks more like air traffic control. Multiple things moving at once, a human watching the board, stepping in when something looks wrong.</p>


<h2 class="wp-block-heading" id="the-builders-question">The Builder&#8217;s Question</h2>


<p>GitHub won because it made one thing simple: collaborating on code with other humans. The entire product was built around that idea. It worked brilliantly for nearly twenty years.</p>



<p>The next platform will win by making a different thing simple: collaborating on code with a mixed team of humans and agents. That&#8217;s a different design problem. The social features that made GitHub great (profiles, stars, discussions, PR reviews) were designed for people who have attention spans, opinions, and feelings. Agents have none of those.</p>



<p>Chacon said something in the interview that stuck with me. He said the constraint isn&#8217;t &#8220;can we produce the code&#8221; anymore. It&#8217;s &#8220;can we agree on what we want.&#8221; The bottleneck moved from implementation to communication. From typing to thinking.</p>



<p>If that&#8217;s true, the next collaboration platform isn&#8217;t optimized for code review. It&#8217;s optimized for intent. For specification. For making sure twelve agents and three humans are all building the same thing.</p>



<p>I don&#8217;t know who builds it. Maybe GitButler expands into the server side. Maybe someone we haven&#8217;t heard of yet starts from scratch. Maybe GitHub pivots faster than Chacon expects.</p>



<p>But I&#8217;m pretty sure of one thing. When it arrives, it won&#8217;t look like a pull request queue.</p>



<p>It&#8217;ll look like something we don&#8217;t have a name for yet.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/the-next-github-wont-be-github/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5403</post-id>	</item>
		<item>
		<title>The Open Source FlexGen Project Enables LLMs Like ChatGPT to Run on a Single GPU</title>
		<link>https://nothans.com/flexgen-enables-llms-like-chatgpt-to-run-on-a-single-gpu</link>
					<comments>https://nothans.com/flexgen-enables-llms-like-chatgpt-to-run-on-a-single-gpu#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Thu, 23 Feb 2023 22:49:37 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[FlexGen]]></category>
		<category><![CDATA[github]]></category>
		<category><![CDATA[LLM]]></category>
		<category><![CDATA[open source]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=3657</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>Things are moving fast, getting weird, and staying exciting. <a href="https://github.com/FMInference/FlexGen" target="_blank" rel="noreferrer noopener">FlexGen</a> dropped on GitHub on February 20, 2023. It&#8217;s a game changer. You can now run ChatGPT-like large language models on a single graphics card. You used to need 10 GPUs to get the same performance.</p>



<figure class="wp-block-pullquote"><blockquote><p><a href="https://github.com/FMInference/FlexGen" target="_blank" rel="noreferrer noopener">FlexGen</a> is a high-throughput generation engine for running large language models with limited GPU memory (e.g., a 16GB T4 GPU or a 24GB RTX3090 gaming card). FlexGen allows high-throughput generation by increasing the effective batch size through IO-efficient offloading and compression.</p></blockquote></figure>



<p>FlexGen is an open source collaboration among the <a href="https://discord.com/invite/JfphDTkBAh" target="_blank" rel="noreferrer noopener">LMSys community</a>, Stanford, UC Berkeley, <a rel="noreferrer noopener" href="https://www.together.xyz/" target="_blank">TOGETHER</a>, and <a href="http://www.ethz.ch/">ETH Zürich</a>.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><a href="https://github.com/FMInference/FlexGen"><img data-recalc-dims="1" decoding="async" width="637" height="426" data-attachment-id="3658" data-permalink="https://nothans.com/flexgen-enables-llms-like-chatgpt-to-run-on-a-single-gpu/image-4-6" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2023/02/image-4.png?fit=637%2C426&amp;ssl=1" data-orig-size="637,426" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="FlexGen" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2023/02/image-4.png?fit=637%2C426&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2023/02/image-4.png?resize=637%2C426&#038;ssl=1" alt="" class="wp-image-3658" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2023/02/image-4.png?w=637&amp;ssl=1 637w, https://i0.wp.com/nothans.com/wp-content/uploads/2023/02/image-4.png?resize=300%2C200&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2023/02/image-4.png?resize=420%2C280&amp;ssl=1 420w" sizes="(max-width: 637px) 100vw, 637px" /></a><figcaption class="wp-element-caption">FlexGen Reduces Weight I/O By Traversing Column by Column</figcaption></figure>
</div>


<p>The high computational and memory requirements of large language model (LLM) inference traditionally make it feasible only with multiple high-end accelerators. FlexGen aims to lower the resource requirements of LLM inference down to a single commodity GPU (e.g., T4, 3090) and allow flexible deployment for various hardware setups. The key technique behind FlexGen is to trade off between&nbsp;<strong>latency</strong>&nbsp;and&nbsp;<strong>throughput</strong>&nbsp;by developing techniques to increase the effective batch size.</p>
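<p>As a back-of-the-envelope illustration of that trade-off (the numbers below are made up, not FlexGen benchmarks): if offloading lets the batch grow faster than the per-batch latency grows, tokens per second goes up even though each individual batch takes longer.</p>

```python
# Toy numbers to illustrate the latency/throughput trade-off.
# These figures are invented for illustration, not FlexGen benchmarks.
def throughput(batch_size, tokens_per_seq, batch_latency_s):
    """Generated tokens per second for one batch."""
    return batch_size * tokens_per_seq / batch_latency_s

# Small batch: low latency, but the GPU sits underutilized.
small = throughput(batch_size=4, tokens_per_seq=32, batch_latency_s=2.0)
# Offloading grows the batch 16x while latency grows only 6x.
large = throughput(batch_size=64, tokens_per_seq=32, batch_latency_s=12.0)
print(small, large)  # 64.0 tokens/s vs ~170.7 tokens/s
```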



<p>The key features of FlexGen include:</p>



<p><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/26a1.png" alt="⚡" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>High-Throughput, Large-Batch Offloading</strong><br>Higher-throughput generation than other offloading-based systems (e.g., Hugging Face Accelerate, DeepSpeed Zero-Inference). The key innovation is a new offloading technique that can effectively increase the batch size. This can be useful for batch inference scenarios, such as benchmarking (e.g., <a href="https://github.com/stanford-crfm/helm">HELM</a>) and <a href="https://arxiv.org/abs/2205.09911">data wrangling</a>.</p>



<p><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f4e6.png" alt="📦" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Extreme Compression</strong><br>Compress both the parameters and attention cache of models, such as OPT-175B, down to 4 bits with negligible accuracy loss.</p>



<p><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f680.png" alt="🚀" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Scalability</strong><br>Comes with a distributed pipeline parallelism runtime to allow scaling if more GPUs are available.</p>



<p><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/274c.png" alt="❌" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Limitations</strong><br>As an offloading-based system running on weak GPUs, FlexGen also has its limitations. The throughput of FlexGen is significantly lower than the case when you have enough powerful GPUs to hold the whole model, especially for small-batch cases. FlexGen is mostly optimized for throughput-oriented batch processing settings (e.g., classifying or extracting information from many documents in batches), on single GPUs.</p>



<p><strong>Learn more by checking out the FlexGen GitHub <a rel="noreferrer noopener" href="https://github.com/FMInference/FlexGen" target="_blank">project</a> and read the supporting <a rel="noreferrer noopener" href="https://github.com/FMInference/FlexGen/blob/main/docs/paper.pdf" target="_blank">paper</a>.</strong></p>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/flexgen-enables-llms-like-chatgpt-to-run-on-a-single-gpu/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3657</post-id>	</item>
		<item>
		<title>The IoT Debugger App for ThingSpeak Now Includes a CheerLights Example</title>
		<link>https://nothans.com/the-iot-debugger-app-for-thingspeak</link>
					<comments>https://nothans.com/the-iot-debugger-app-for-thingspeak#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Sat, 30 Apr 2022 12:14:54 +0000</pubDate>
				<category><![CDATA[CheerLights]]></category>
		<category><![CDATA[ThingSpeak]]></category>
		<category><![CDATA[github]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iot]]></category>
		<category><![CDATA[iot-debugger]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[projects]]></category>
		<category><![CDATA[thingspeak]]></category>
		<guid isPermaLink="false">https://nothans.com/?p=2816</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>I updated the <a rel="noreferrer noopener" href="https://github.com/nothans/iot-debugger" target="_blank">IoT Debugger</a> app to fix some bugs, and while I was there I added a new default example for <a rel="noreferrer noopener" href="https://nothans.github.io/iot-debugger/app/thingspeak.html" target="_blank">CheerLights</a>. <em>I just couldn&#8217;t help myself.</em> The app is open source, and with it you can explore your ThingSpeak channel data to make sure your <em>things</em> are working properly.</p>


<div class="wp-block-image is-style-default">
<figure class="aligncenter size-large is-resized"><a href="https://nothans.github.io/iot-debugger/app/thingspeak.html"><img data-recalc-dims="1" loading="lazy" decoding="async" data-attachment-id="2817" data-permalink="https://nothans.com/the-iot-debugger-app-for-thingspeak/iot-debugger-cheerlights" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2022/04/iot-debugger-cheerlights.jpg?fit=1277%2C759&amp;ssl=1" data-orig-size="1277,759" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1651144610&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="iot-debugger-cheerlights" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2022/04/iot-debugger-cheerlights.jpg?fit=750%2C446&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2022/04/iot-debugger-cheerlights.jpg?resize=750%2C446&#038;ssl=1" alt="" class="wp-image-2817" width="750" height="446" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2022/04/iot-debugger-cheerlights.jpg?resize=1024%2C609&amp;ssl=1 1024w, https://i0.wp.com/nothans.com/wp-content/uploads/2022/04/iot-debugger-cheerlights.jpg?resize=300%2C178&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2022/04/iot-debugger-cheerlights.jpg?resize=768%2C456&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2022/04/iot-debugger-cheerlights.jpg?w=1277&amp;ssl=1 1277w" sizes="auto, (max-width: 750px) 100vw, 750px" /></a><figcaption class="wp-element-caption"><a href="https://nothans.github.io/iot-debugger/app/thingspeak.html" target="_blank" rel="noreferrer noopener">IoT Debugger App</a> Showing 
CheerLights Results</figcaption></figure>
</div>


<p>I had to make some updates for jQuery and Bootstrap. &#8220;Software&#8221; does not age well. Even a lowly app like mine sitting in a GitHub repository needs attention to make sure it works.</p>



<p>Here are some features of the <a href="https://github.com/nothans/iot-debugger" target="_blank" rel="noreferrer noopener">IoT Debugger App</a>:</p>



<ul class="wp-block-list">
<li>ThingSpeak Data Logger and Channel Browser</li>



<li>Particle.io Webhooks Manager</li>



<li>Settings are saved in LocalStorage</li>



<li>Built using HTML5, Bootstrap, and jQuery</li>
</ul>



<p>I now host a demo of the app using GitHub Pages. Even if you don&#8217;t want to download the app and run it yourself, you can just point your browser to <a rel="noreferrer noopener" href="https://nothans.github.io/iot-debugger/app/thingspeak.html" target="_blank">https://nothans.github.io/iot-debugger/app/thingspeak.html</a> and use it from there. To use the app, start by entering a channel number. If the channel is public and has data, you should start seeing data on your screen. You can try <a rel="noreferrer noopener" href="https://thingspeak.com/channels/1417" target="_blank">ThingSpeak channel 1417</a>. This is the channel that stores the CheerLights colors. Use the app to explore your channel data.</p>
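<p>Under the hood this is just ThingSpeak&#8217;s public read API. A minimal Python sketch (the app itself is built with jQuery, but the endpoint is the same) pulls the latest CheerLights entries from channel 1417:</p>

```python
# Minimal sketch of the ThingSpeak read API the app uses under the hood.
# Channel 1417 is the public CheerLights channel; no API key is needed
# for public channels.
import json
from urllib.request import urlopen

def feed_url(channel_id: int, results: int = 10) -> str:
    """Build the ThingSpeak read URL for a channel's latest entries."""
    return (f"https://api.thingspeak.com/channels/{channel_id}"
            f"/feeds.json?results={results}")

def latest_entries(channel_id: int, results: int = 10) -> list:
    """Fetch and return the most recent feed entries for a channel."""
    with urlopen(feed_url(channel_id, results)) as resp:
        return json.load(resp)["feeds"]

# Uncomment to fetch live data:
# for entry in latest_entries(1417, 5):
#     print(entry["created_at"], entry.get("field1"))  # field1 holds the color
print(feed_url(1417, 5))
```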



<p>Feel free to suggest new features, submit bugs, and make your own changes. That is the awesome thing about open source. We are building this together. See you on <a href="https://github.com/nothans/iot-debugger" target="_blank" rel="noreferrer noopener">GitHub</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/the-iot-debugger-app-for-thingspeak/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2816</post-id>	</item>
		<item>
		<title>Open Source IoT Debug Tool for ThingSpeak and Particle</title>
		<link>https://nothans.com/open-source-iot-debug-tool-for-thingspeak-and-particle</link>
					<comments>https://nothans.com/open-source-iot-debug-tool-for-thingspeak-and-particle#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Wed, 03 Aug 2016 17:38:10 +0000</pubDate>
				<category><![CDATA[IoT]]></category>
		<category><![CDATA[ThingSpeak]]></category>
		<category><![CDATA[github]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[particle]]></category>
		<category><![CDATA[thingspeak]]></category>
		<guid isPermaLink="false">http://nothans.com/?p=904</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>Often, when building IoT projects, you wonder what data is being sent to a cloud service like ThingSpeak. The IoT Debugger tool allows you to see the data inside a ThingSpeak channel in a table view. The ThingSpeak Logger shows you data as the channel gets updated. This is an easy way to see if you are sending bad or null data. The project is open-source and available on <a href="https://github.com/nothans/iot-debugger">GitHub</a>.</p>


<div class="wp-block-image">
<figure class="aligncenter"><a href="https://github.com/nothans/iot-debugger"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="575" data-attachment-id="905" data-permalink="https://nothans.com/open-source-iot-debug-tool-for-thingspeak-and-particle/iot-debugger" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2016/08/IoT-Debugger.png?fit=859%2C658&amp;ssl=1" data-orig-size="859,658" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="IoT Debug Tool" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2016/08/IoT-Debugger.png?fit=750%2C575&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2016/08/IoT-Debugger.png?resize=750%2C575" alt="IoT Debug Tool" class="wp-image-905" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2016/08/IoT-Debugger.png?w=859&amp;ssl=1 859w, https://i0.wp.com/nothans.com/wp-content/uploads/2016/08/IoT-Debugger.png?resize=300%2C230&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2016/08/IoT-Debugger.png?resize=768%2C588&amp;ssl=1 768w" sizes="auto, (max-width: 750px) 100vw, 750px" /></a><figcaption class="wp-element-caption">IoT Debugger</figcaption></figure>
</div>

<h3 class="wp-block-heading" id="features-of-iot-debugger">Features of IoT Debugger</h3>


<ul class="wp-block-list">
<li>ThingSpeak Data Logger</li>



<li>Particle.io Webhooks Manager</li>



<li>Settings are saved in LocalStorage</li>



<li>Built using HTML5, Bootstrap, and jQuery</li>



<li>Open Source!</li>
</ul>



<p><a href="https://nothans.github.io/iot-debugger/app/thingspeak.html" target="_blank" rel="noreferrer noopener">Demo</a> and download the source code for IoT Debugger on <a href="https://github.com/nothans/iot-debugger">GitHub</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/open-source-iot-debug-tool-for-thingspeak-and-particle/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">904</post-id>	</item>
		<item>
		<title>Send Your Windows Server’s Disk Free Space to ThingSpeak Using PowerShell</title>
		<link>https://nothans.com/thingspeak-powershell-for-free-disk-space</link>
					<comments>https://nothans.com/thingspeak-powershell-for-free-disk-space#respond</comments>
		
		<dc:creator><![CDATA[Hans Scharler]]></dc:creator>
		<pubDate>Wed, 11 Mar 2015 20:26:27 +0000</pubDate>
				<category><![CDATA[ThingSpeak]]></category>
		<category><![CDATA[github]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iot]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[powershell]]></category>
		<category><![CDATA[thingspeak]]></category>
		<category><![CDATA[web of things]]></category>
		<category><![CDATA[windows]]></category>
		<category><![CDATA[windows server]]></category>
		<guid isPermaLink="false">http://nothans.com/thingspeak-powershell-for-free-disk-space</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[
<p>I manage many servers. One of the things that I am always curious about is how much disk space is left on my servers. I know there are many ways to track this, but almost always, the service that I am using changes or breaks over time.</p>



<p>My super simple solution for tracking server disk space is to use Windows PowerShell and <a href="https://thingspeak.com" target="_blank" rel="noopener">ThingSpeak</a>. I released the code to <a href="https://github.com/nothans/ThingSpeak-PowerShell" target="_blank" rel="noopener">GitHub</a> so that you can try this out for yourself. It can be used on any Windows Server that lets you execute PowerShell scripts. ThingSpeak gives you a place to store data from anything. In this case, I am sending my free disk space to ThingSpeak once per day by scheduling a Windows Task.</p>
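<p>The script on GitHub is PowerShell, but the core of it is a single call to ThingSpeak&#8217;s update endpoint. Here is the same idea sketched in Python (the write API key below is a placeholder for your channel&#8217;s key):</p>

```python
# The GitHub script is PowerShell; this Python sketch shows the same idea:
# one ThingSpeak update call with free disk space as field1.
# "YOUR_WRITE_API_KEY" is a placeholder for your channel's write key.
import shutil
from urllib.parse import urlencode
from urllib.request import urlopen

def update_url(api_key: str, free_gb: float) -> str:
    """Build the ThingSpeak update URL (field1 = free disk space in GB)."""
    query = urlencode({"api_key": api_key, "field1": round(free_gb, 2)})
    return f"https://api.thingspeak.com/update?{query}"

free_gb = shutil.disk_usage("/").free / 1024**3  # use "C:\\" on Windows
# Uncomment to log one data point (schedule this daily):
# urlopen(update_url("YOUR_WRITE_API_KEY", free_gb))
print(update_url("YOUR_WRITE_API_KEY", free_gb))
```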


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><a href="https://github.com/nothans/thingspeak-powershell-examples"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="463" data-attachment-id="4819" data-permalink="https://nothans.com/thingspeak-powershell-for-free-disk-space/image-1-27" data-orig-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/10/image-1.png?fit=865%2C534&amp;ssl=1" data-orig-size="865,534" data-comments-opened="0" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Windows PowerShell and ThingSpeak" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/nothans.com/wp-content/uploads/2024/10/image-1.png?fit=750%2C463&amp;ssl=1" src="https://i0.wp.com/nothans.com/wp-content/uploads/2024/10/image-1.png?resize=750%2C463&#038;ssl=1" alt="" class="wp-image-4819" style="width:498px;height:auto" srcset="https://i0.wp.com/nothans.com/wp-content/uploads/2024/10/image-1.png?w=865&amp;ssl=1 865w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/10/image-1.png?resize=300%2C185&amp;ssl=1 300w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/10/image-1.png?resize=768%2C474&amp;ssl=1 768w, https://i0.wp.com/nothans.com/wp-content/uploads/2024/10/image-1.png?resize=750%2C463&amp;ssl=1 750w" sizes="auto, (max-width: 750px) 100vw, 750px" /></a></figure>
</div>


<p><strong><em>Check out the open-source code on <a href="https://github.com/nothans/ThingSpeak-PowerShell" target="_blank" rel="noopener">GitHub</a>!</em></strong></p>
]]></content:encoded>
					
					<wfw:commentRss>https://nothans.com/thingspeak-powershell-for-free-disk-space/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">624</post-id>	</item>
	</channel>
</rss>
