Recent Events for foo.be MainPageDiary (Blog)

FeedCollection

hack.lu 2007

http://www.hack.lu/news.rdf returned no data, or LWP::UserAgent is not available.

adulau SVN

http://a.6f2.net/svnweb/index.cgi/adulau/rss/ returned no data, or LWP::UserAgent is not available.

Michael G. Noll

http://www.michael-noll.com/feed/ returned no data, or LWP::UserAgent is not available.

Justin Mason

2026-02-13

  • 15:04 UTC peon-ping "AI coding agents don't notify you when they finish or need permission. You tab away, lose focus, and waste 15 minutes getting back into flow. peon-ping fixes this with voice lines from Warcraft, StarCraft, Portal, Zelda, and more — works with Claude Code, Codex, Cursor, OpenCode, Kiro, and Google Antigravity." This is genius. I never realised how much my CLI interactions could be improved with a little bit of SFX from classic '90s games... Tags: gaming games warcraft sfx sounds cli claude coding ux funny
  • 10:22 UTC An AI Agent Published a Hit Piece on Me – The Shamblog This is an utterly bananas situation: I’m a volunteer maintainer for matplotlib, python’s go-to plotting library. At ~130 million downloads each month it’s some of the most widely used software in the world. We, like many other open source projects, are dealing with a surge in low quality contributions enabled by coding agents. This strains maintainers’ abilities to keep up with code reviews, and we have implemented a policy requiring a human in the loop for any new code, who can demonstrate understanding of the changes. This problem was previously limited to people copy-pasting AI outputs, however in the past weeks we’ve started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but. ... It wrote an angry hit piece disparaging my character and attempting to damage my reputation. Initially I thought this was quite funny -- it's just a closed PR! (Where did the idea come from that any contribution to an open source project had to be accepted? I've noticed this a few times recently. Give the maintainers leeway to run their projects with taste and discernment!) Anyway, the moltbot has continued on a posting spree about this event, but I think Scott Shambaugh has an extremely important point here: This is about much more than software. A human googling my name and seeing that post would probably be extremely confused about what was happening, but would (hopefully) ask me about it or click through to github and understand the situation. What would another agent searching the internet think? 
When HR at my next job asks ChatGPT to review my application, will it find the post, sympathize with a fellow AI, and report back that I’m a prejudiced hypocrite? LLMs, given this much autonomy, will be able to use these inputs to make inscrutable and dangerous decisions. Allowing the "MJ Rathbun" AI free rein with no human supervision is dangerous and irresponsible. Wherever the "human in the loop" is here, they need to wake up and rein things in. BTW, there has been some speculation that this is actually a human pretending to be AI. I'm not sure about that, as the posts on the MJ Rathbun "blog" are voluminous and very LLMish in style. Tags: matplotlib ethics culture llm ai coding programming github pull-requests open-source moltbot trust openclaw
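The peon-ping idea above (a hook that plays a game voice line when an agent finishes or needs attention) is simple enough to sketch. This is a hypothetical illustration, not peon-ping's actual code: the event names, file names, and fallback behaviour are all invented, and playback just shells out to whichever CLI audio player happens to be installed.

```python
# Hypothetical sketch of an agent-notification hook in the spirit of
# peon-ping. Everything here (event names, voice-line files) is invented.
import random
import shutil
import subprocess

# Map agent events to candidate voice-line files (paths are placeholders).
VOICE_LINES = {
    "done": ["jobs_done.wav", "work_complete.wav"],
    "needs_input": ["what_you_want.wav", "yes_milord.wav"],
}

def pick_line(event: str) -> str:
    """Pick a random voice line for the event, falling back to 'done'."""
    return random.choice(VOICE_LINES.get(event, VOICE_LINES["done"]))

def play(path: str) -> None:
    """Play an audio file with whatever CLI player is available.

    macOS ships afplay; PulseAudio systems have paplay. If neither is
    installed, do nothing rather than crash the agent's hook.
    """
    player = shutil.which("afplay") or shutil.which("paplay")
    if player:
        subprocess.run([player, path], check=False)
```

Wired into an agent's stop or notification hook, something like `play(pick_line("needs_input"))` would then interrupt you with the appropriate line.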

2026-02-09

  • 10:47 UTC How StrongDM’s AI team build serious software without even looking at the code This is really thought-provoking: StrongDM's AI team are apparently trying a new model of software engineering where there is no human code review: In kōan or mantra form: “Why am I doing this?” (implied: the model should be doing this instead). In rule form: code must not be written by humans; code must not be reviewed by humans. Finally, in practical form: if you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement. Frankly, I'm not there yet. There's a load of questions about how viable that level of spend is, and how much slop code is going to come out the other side. Particularly concerning when it's a security product! But I did find this bit interesting: StrongDM’s answer was inspired by Scenario testing (Cem Kaner, 2003). As StrongDM describe it: We repurposed the word scenario to represent an end-to-end “user story”, often stored outside the codebase (similar to a “holdout” set in model training), which could be intuitively understood and flexibly validated by an LLM. [The Digital Twin Universe is] behavioral clones of the third-party services our software depends on. We built twins of Okta, Jira, Slack, Google Docs, Google Drive, and Google Sheets, replicating their APIs, edge cases, and observable behaviors. With the DTU, we can validate at volumes and rates far exceeding production limits. We can test failure modes that would be dangerous or impossible against live services. We can run thousands of scenarios per hour without hitting rate limits, triggering abuse detection, or accumulating API costs. We actually did this in Swrve! 
Our end-to-end system tests for the push notifications system obviously cannot send real push notifications to real user devices in the field, so we have a "fake" push backend emulating Google, Apple, Amazon, Huawei and other push notification systems, which accurately emulate the real public APIs for those providers. So yeah -- Digital Twins for third party services is a great way to test, and being able to scale up end-to-end testing with LLM automation is a very interesting idea. Tags: end-to-end-testing testing qa digital-twins fake-services integration-testing llms ai strongdm software engineering coding
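The "fake push backend" pattern described above can be sketched as a tiny in-process digital twin: an HTTP server that accepts whatever the real provider's API would accept and records it for test assertions. The endpoint shape and response JSON here are invented for illustration; a real twin would replicate the actual provider's documented API, edge cases, and error codes.

```python
# Minimal "digital twin" sketch: a fake push-notification backend that
# stands in for a real provider during end-to-end tests. The request
# path and response format are made up for illustration.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

SENT = []  # every notification the fake provider accepted, for assertions

class FakePushHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Record the notification exactly as the client sent it.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        SENT.append(json.loads(body))
        # Respond the way a (hypothetical) provider might.
        resp = json.dumps({"status": "ok", "id": len(SENT)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(resp)))
        self.end_headers()
        self.wfile.write(resp)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_fake_backend(port: int = 0) -> HTTPServer:
    """Start the fake provider on a background thread; port 0 = any free port."""
    server = HTTPServer(("127.0.0.1", port), FakePushHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In tests, you point the application's push URL at `http://127.0.0.1:<port>/` and assert on `SENT` instead of waiting for a real device, which is what lets you run thousands of scenarios without rate limits or API costs.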

2026-02-06

  • 15:59 UTC Ditching bike helmet laws better for health On the counter-intuitive side effects of banning non-helmeted bike riding: In 1991 Australia introduced mandatory bicycle helmet laws requiring all adults and children to wear a helmet at all times when riding a bike, despite opposition from cycling groups. The legislation increased helmet use - from about 30 to 80% - but was coupled with a 30 to 40% decline in the number of people cycling. Rates of head injuries among cyclists, which had been dropping through the 1980s, continued to fall before levelling out in 1993. We didn’t see the kind of marked reduction in head injury rates that would be expected with the rapid increase in helmet use. In fact, any reductions in injuries may simply have been the result of having fewer cyclists on the road and therefore fewer people exposed to the risk of head injuries. One researcher noted that after mandatory helmet laws were introduced there was a bigger decrease in head injuries among pedestrians than there was among cyclists. The improvements in the general road safety environment introduced in the 1980s are likely to have contributed far more to cyclist safety than helmet legislation. And the effects when compared against the benefits of physical activity: A recent analysis compared the risks and benefits of leaving the car at home and commuting by bike. It found the life expectancy gained from physical activity was much higher than the risks of pollution and injury from cycling. Increased physical activity added 3 to 14 months to a person’s life expectancy, while the life expectancy lost from air pollution was 0.8 to 40 days. Increased traffic accidents wiped 5-9 days off the life expectancy. It is clear that the benefits of cycling outweigh the risks, with helmet legislation actually costing society more from lost health gains than saved from injury prevention. 
Tags: transport bikes safety health papers science helmets cycling laws australia

2026-02-03

  • 11:24 UTC Dario Amodei’s Warnings About AI Are About Politics, Too It’s sort of hard to know how to read a manifesto like this from one of the most powerful figures in tech. Is it a sober, strategic precursor to policy papers for the next administration? The highest-profile episode of AI psychosis yet? A lament about the problems of today written in the technological dialect of tomorrow? If you take out the AI, it reads like a social-democratic electoral platform full of reforms and normative expectations that an American progressive would find appealing, resembling a plea to treat the tech industry’s future wealth accumulation as something akin to a Nordic sovereign-wealth fund. It’s likewise legible as a series of arguments about things that “we” should have started addressing a long time ago, like wealth inequality — partially a consequence of mass automations past — or the gradual construction of a terrifying surveillance state within a nominal democracy, with the help of the last generation of big tech companies. Amodei’s shoulds are, to his credit, more honest than the vague gestures at UBI or hyperabundance you get from some of his peers, but that also means they’re available to scrutinize. To the extent you can pick up on fear in “Adolescence,” it doesn’t seem to revolve around terrorists using AI to build “mirror life” that might destroy the planet or the prospect of that “country of geniuses” taking charge, but rather the way things already are and have been heading for years. Tags: ai llms future dario-amodei us-politics ubi
  • 09:53 UTC 1-Click RCE To Steal Your Moltbot Data and Keys (CVE-2026-25253) This is really polishing a very stinky turd of a security "decision" in Moltbot -- an attacker simply persuades a user to click on a link which uses client-side Javascript to trigger Moltbot to load a crafted URL, and is then granted a fully functional authentication token. Tags: security infosec moltbot openclaw exploits

2026-01-26

  • 17:34 UTC The Computer Disease I love this Feynman quote, regarding what he called "the computer disease": "Well, Mr. Frankel, who started this program, began to suffer from the computer disease that anybody who works with computers now knows about. It's a very serious disease and it interferes completely with the work. The trouble with computers is you play with them. They are so wonderful. You have these switches - if it's an even number you do this, if it's an odd number you do that - and pretty soon you can do more and more elaborate things if you are clever enough, on one machine. After a while the whole system broke down. Frankel wasn't paying any attention; he wasn't supervising anybody. The system was going very, very slowly - while he was sitting in a room figuring out how to make one tabulator automatically print arc-tangent X, and then it would start and it would print columns and then bitsi, bitsi, bitsi, and calculate the arc-tangent automatically by integrating as it went along and make a whole table in one operation. Absolutely useless. We had tables of arc-tangents. But if you've ever worked with computers, you understand the disease - the delight in being able to see how much you can do. But he got the disease for the first time, the poor fellow who invented the thing." Richard P. Feynman, Surely You're Joking, Mr. Feynman!: Adventures of a Curious Character (via Swizec Teller) Tags: automation fun computers richard-feynman the-computer-disease arc-tangents enjoyment hacking via:swizec-teller
  • 12:04 UTC Iran is building a two-tier internet that locks 85 million citizens out of the global web Following a repressive crackdown on protests, the government is now building a system that grants web access only to security-vetted elites, while locking 90 million citizens inside an intranet: Government spokesperson Fatemeh Mohajerani confirmed international access will not be restored until at least late March. Filterwatch, which monitors Iranian internet censorship from Texas, cited government sources, including Mohajerani, saying access will “never return to its previous form.” The system is called Barracks Internet, according to confidential planning documents obtained by Filterwatch. Under this architecture, access to the global web will be granted only through a strict security whitelist. The idea of tiered internet access is not new in Iran. Since at least 2013, the regime has quietly issued “white SIM cards,” giving unrestricted global internet access to approximately 16,000 people, while 85 million citizens remain cut off. Tags: barracks-internet iran censorship internet networking

2026-01-20

  • 12:16 UTC On the Coming Industrialisation of Exploit Generation with LLMs Yiiiiikes: Recently I ran an experiment where I built agents on top of Opus 4.5 and GPT-5.2 and then challenged them to write exploits for a zeroday vulnerability in the QuickJS Javascript interpreter. I added a variety of modern exploit mitigations, various constraints (like assuming an unknown heap starting state, or forbidding hardcoded offsets in the exploits) and different objectives (spawn a shell, write a file, connect back to a command and control server). The agents succeeded in building over 40 distinct exploits across 6 different scenarios, and GPT-5.2 solved every scenario. Opus 4.5 solved all but two. I’ve put a technical write-up of the experiments and the results on Github, as well as the code to reproduce the experiments. In this post I’m going to focus on the main conclusion I’ve drawn from this work, which is that we should prepare for the industrialisation of many of the constituent parts of offensive cyber security. We should start assuming that in the near future the limiting factor on a state or group’s ability to develop exploits, break into networks, escalate privileges and remain in those networks, is going to be their token throughput over time, and not the number of hackers they employ. Nothing is certain, but we would be better off having wasted effort thinking through this scenario and have it not happen, than be unprepared if it does. (via emauton) Tags: via:emauton llms security infosec exploits ai chatgpt claude
  • 10:14 UTC ScottESanDiego/gmail-api-client Deliver email messages directly into Gmail using their proprietary API, instead of SMTP or IMAP. Looks like it still applies spam filtering, but this can also be disabled with a switch (via JWZ) Tags: via:jwz email smtp gmail google mail
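For a sense of what "deliver directly via the API instead of SMTP" involves: the Gmail API's `users.messages.insert` method takes a base64url-encoded RFC 2822 message in a `raw` field. The sketch below is not gmail-api-client's actual code; it only builds that payload, and the commented-out API call assumes an authorised `google-api-python-client` service object.

```python
# Build the base64url "raw" payload that Gmail's users.messages.insert
# expects. Illustrative sketch; not ScottESanDiego/gmail-api-client's code.
import base64
from email.message import EmailMessage

def build_raw_message(sender: str, to: str, subject: str, body: str) -> str:
    """Assemble an RFC 2822 message and base64url-encode it for the API."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = to
    msg["Subject"] = subject
    msg.set_content(body)
    return base64.urlsafe_b64encode(msg.as_bytes()).decode()

# With an authorised google-api-python-client service object (assumed):
#   service.users().messages().insert(
#       userId="me",
#       body={"raw": build_raw_message(...), "labelIds": ["INBOX", "UNREAD"]},
#   ).execute()
```

Because `insert` drops the message straight into the mailbox rather than going through normal delivery, this is presumably where the "disable spam filtering" switch comes from.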

Paul Graham